WorldWideScience

Sample records for chl improve predictions

  1. Prediction of Chl-a concentrations in an eutrophic lake using ANN models with hybrid inputs

    Science.gov (United States)

    Aksoy, A.; Yuzugullu, O.

    2017-12-01

    Chlorophyll-a (Chl-a) concentrations in water bodies exhibit both spatial and temporal variations. As a result, frequent sampling with a large number of samples is required, which motivates the use of remote sensing as a monitoring tool. Yet the prediction performance of models that convert radiance values into Chl-a concentrations can be poor in shallow lakes. In this study, Chl-a concentrations in Lake Eymir, a shallow eutrophic lake in Ankara (Turkey), are determined using artificial neural network (ANN) models that use hybrid inputs composed of water quality and meteorological data as well as remotely sensed radiance values to improve prediction performance. Following a screening based on multi-collinearity and principal component analysis (PCA), dissolved-oxygen concentration (DO), pH, turbidity, and humidity were selected among several parameters as the constituents of the hybrid input dataset. Radiance values were obtained from the QuickBird-2 satellite. Conversion of the hybrid input into Chl-a concentrations was studied for two different periods in the lake. ANN models were successful in predicting Chl-a concentrations, although prediction performance declined for low Chl-a concentrations in the lake. In general, models with hybrid inputs were superior to those that solely used remotely sensed data.
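
    The hybrid-input idea can be sketched in a few lines: combine water-quality, meteorological, and radiance features into one input vector and fit a regressor against Chl-a. The sketch below uses invented data and a simple linear model trained by gradient descent as a stand-in for the paper's ANN; all variable ranges and the generating relationship are assumptions for illustration only.

```python
import random

random.seed(42)

# Toy "hybrid input" per sample: water quality (DO, pH, turbidity),
# meteorology (humidity), and satellite radiance. The generating
# relationship below is invented purely for illustration.
def make_sample():
    do   = random.uniform(4, 12)       # dissolved oxygen, mg/L
    ph   = random.uniform(7, 9)
    turb = random.uniform(1, 20)       # turbidity, NTU
    hum  = random.uniform(30, 90)      # relative humidity, %
    rad  = random.uniform(0.01, 0.20)  # radiance, arbitrary units
    chl  = 200 * rad + 0.5 * turb + 0.3 * (do - 8) + random.gauss(0, 1)
    return [do, ph, turb, hum, rad], chl

data = [make_sample() for _ in range(300)]
X = [x for x, _ in data]
y = [t for _, t in data]

# Standardize features so a single learning rate suits all of them.
n = len(X[0])
means = [sum(row[j] for row in X) / len(X) for j in range(n)]
stds  = [(sum((row[j] - means[j]) ** 2 for row in X) / len(X)) ** 0.5 for j in range(n)]
Xs = [[(row[j] - means[j]) / stds[j] for j in range(n)] for row in X]

# A linear model stands in for the ANN, trained by batch gradient descent.
w, b, lr = [0.0] * n, 0.0, 0.1
for _ in range(500):
    grad_w, grad_b = [0.0] * n, 0.0
    for xs, t in zip(Xs, y):
        err = sum(wj * xj for wj, xj in zip(w, xs)) + b - t
        for j in range(n):
            grad_w[j] += err * xs[j]
        grad_b += err
    m = len(Xs)
    w = [wj - lr * gj / m for wj, gj in zip(w, grad_w)]
    b -= lr * grad_b / m

def rmse(pred, truth):
    return (sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(truth)) ** 0.5

preds = [sum(wj * xj for wj, xj in zip(w, xs)) + b for xs in Xs]
baseline = [sum(y) / len(y)] * len(y)
print("model RMSE:", round(rmse(preds, y), 2), "baseline RMSE:", round(rmse(baseline, y), 2))
```

In the study's setting, dropping the water-quality and meteorological columns and keeping only radiance would mimic the remote-sensing-only models that the hybrid models outperformed.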

  2. Can a Satellite-Derived Estimate of the Fraction of PAR Absorbed by Chlorophyll (FAPAR(sub chl)) Improve Predictions of Light-Use Efficiency and Ecosystem Photosynthesis for a Boreal Aspen Forest?

    Science.gov (United States)

    Zhang, Qingyuan; Middleton, Elizabeth M.; Margolis, Hank A.; Drolet, Guillaume G.; Barr, Alan A.; Black, T. Andrew

    2009-01-01

    Gross primary production (GPP) is a key terrestrial ecophysiological process that links atmospheric composition and vegetation processes. Study of GPP is important to understanding global carbon cycles and global warming. One of the most important of these processes, plant photosynthesis, requires solar radiation in the 0.4-0.7 micron range (also known as photosynthetically active radiation, or PAR), water, carbon dioxide (CO2), and nutrients. A vegetation canopy is composed primarily of photosynthetically active vegetation (PAV) and non-photosynthetic vegetation (NPV; e.g., senescent foliage, branches, and stems). A green leaf is composed of chlorophyll and various proportions of nonphotosynthetic components (e.g., other pigments in the leaf, primary/secondary/tertiary veins, and cell walls). The fraction of PAR absorbed by the whole vegetation canopy (FAPAR(sub canopy)) has been widely used in satellite-based Production Efficiency Models to estimate GPP (as the product FAPAR(sub canopy) x PAR x LUE(sub canopy), where LUE(sub canopy) is light use efficiency at the canopy level). However, only the PAR absorbed by chlorophyll (the product FAPAR(sub chl) x PAR) is used for photosynthesis. Therefore, remote sensing driven biogeochemical models that use FAPAR(sub chl) in estimating GPP (as the product FAPAR(sub chl) x PAR x LUE(sub chl)) are more likely to be consistent with plant photosynthesis processes.
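
    The two formulations above differ only in which FAPAR and LUE enter the product GPP = FAPAR x PAR x LUE. A minimal numerical sketch (all values invented for illustration, not taken from the study):

```python
# GPP as the product FAPAR * PAR * LUE, at canopy vs chlorophyll level.
# All numbers below are illustrative, not from the study.
def gpp(fapar, par, lue):
    """GPP (gC m-2 d-1) from absorbed-PAR fraction, incident PAR
    (MJ m-2 d-1), and light-use efficiency (gC MJ-1)."""
    return fapar * par * lue

par = 8.0                           # MJ m-2 d-1, illustrative
gpp_canopy = gpp(0.85, par, 1.10)   # whole-canopy formulation
gpp_chl    = gpp(0.60, par, 1.56)   # chlorophyll-only: lower FAPAR, higher LUE
print(round(gpp_canopy, 2), round(gpp_chl, 2))
```

The chlorophyll-level formulation excludes PAR absorbed by non-photosynthetic material, so FAPAR(sub chl) is smaller than FAPAR(sub canopy) while the corresponding LUE is larger.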

  3. Improvement of the Uranium Sequestration Ability of a Chlamydomonas sp. (ChlSP Strain) Isolated From Extreme Uranium Mine Tailings Through Selection for Potential Bioremediation Application

    Science.gov (United States)

    Baselga-Cervera, Beatriz; Romero-López, Julia; García-Balboa, Camino; Costas, Eduardo; López-Rodas, Victoria

    2018-01-01

    The extraction and processing of uranium (U) have polluted large areas worldwide, rendering anthropogenic extreme environments inhospitable to most species. Noticeably, these sites are of great interest for taxonomical and applied bioprospection of extremotolerant species successfully adapted to U tailings contamination. As an example, in this work we have studied a microalgae species that inhabits extreme U tailings ponds at the Saelices mining site (Salamanca, Spain), characterized as acidic (pH between 3 and 4), radioactive (around 4 μSv h−1) and contaminated with metals, mainly U (from 25 to 48 mg L−1) and zinc (from 17 to 87 mg L−1). After isolation of the extremotolerant ChlSP strain, morphological characterization and internal transcribed spacer (ITS)-5.8S gene sequences placed it in the Chlamydomonadaceae, but BLAST analyses identity values, against the nucleotide datasets at the NCBI database, were very low (microalgae growth curve; ChlSG cells removed close to 4 mg L−1 of U in 24 days. These findings open up promising prospects for sustainable management of U tailings waters based on newly evolved extremotolerants and outline the potential of artificial selection in the improvement of desired features in microalgae by experimental adaptation and selection. PMID:29662476

  5. Dualities in CHL-models

    Science.gov (United States)

    Persson, Daniel; Volpato, Roberto

    2018-04-01

    We define a very general class of CHL-models associated with any string theory S (bosonic or supersymmetric) compactified on an internal CFT C × T^d. We take the orbifold by a pair (g, δ), where g is a (possibly non-geometric) symmetry of C and δ is a translation along T^n. We analyze the T-dualities of these models and show that in general they contain Atkin–Lehner type symmetries. This generalizes our previous work on N=4 CHL-models based on heterotic string theory on T^6 or type II on K3 × T^2, as well as the ‘monstrous’ CHL-models based on a compactification of heterotic string theory on the Frenkel–Lepowsky–Meurman CFT V♮.

  6. Light Quality Affects Chloroplast Electron Transport Rates Estimated from Chl Fluorescence Measurements.

    Science.gov (United States)

    Evans, John R; Morgan, Patrick B; von Caemmerer, Susanne

    2017-10-01

    Chl fluorescence has been used widely to calculate photosynthetic electron transport rates. Portable photosynthesis instruments allow for combined measurements of gas exchange and Chl fluorescence. We analyzed the influence of spectral quality of actinic light on Chl fluorescence and the calculated electron transport rate, and compared this with photosynthetic rates measured by gas exchange in the absence of photorespiration. In blue actinic light, the electron transport rate calculated from Chl fluorescence overestimated the true rate by nearly a factor of two, whereas there was closer agreement under red light. This was consistent with the prediction made with a multilayer leaf model using profiles of light absorption and photosynthetic capacity. Caution is needed when interpreting combined measurements of Chl fluorescence and gas exchange, such as the calculation of CO2 partial pressure in leaf chloroplasts. © Crown copyright 2017.

  7. The CEBAF control system for the CHL

    International Nuclear Information System (INIS)

    Keesee, M.S.; Bevins, B.S.

    1996-01-01

    The CEBAF Central Helium Liquefier (CHL) control system consists of independent safety controls located at each subsystem, CAMAC computer interface hardware, and CEBAF-designed control software called Thaumaturgic Automated Control Logic (TACL). The paper describes how the control software was interfaced with the subsystems of the CHL. Topics of configuration, editing, operator interface, datalogging, and internal logic functions are presented as they relate to the operational needs of the helium plant. The paper also describes the effort underway to convert from TACL to the Experimental Physics and Industrial Control System (EPICS), the new control system for the CEBAF accelerator. This software change will require customizing EPICS software for cryogenic process control.

  8. CHL1 is involved in human breast tumorigenesis and progression

    Energy Technology Data Exchange (ETDEWEB)

    He, Li-Hong [Medical Department of Breast Oncology, Tianjin Medical University Cancer Institute and Hospital, Tianjin (China); Key Laboratory of Breast Cancer Prevention and Treatment of the Ministry of Education, Tianjin Medical University Cancer Institute and Hospital, Tianjin (China); Ma, Qin [Department of Oncology, The General Hospital of Tianjin Medical University, Tianjin (China); Shi, Ye-Hui [Medical Department of Breast Oncology, Tianjin Medical University Cancer Institute and Hospital, Tianjin (China); Key Laboratory of Breast Cancer Prevention and Treatment of the Ministry of Education, Tianjin Medical University Cancer Institute and Hospital, Tianjin (China); Ge, Jie; Zhao, Hong-Meng [Key Laboratory of Breast Cancer Prevention and Treatment of the Ministry of Education, Tianjin Medical University Cancer Institute and Hospital, Tianjin (China); Breast Surgery, Tianjin Medical University Cancer Institute and Hospital, Tianjin (China); Li, Shu-Fen [Medical Department of Breast Oncology, Tianjin Medical University Cancer Institute and Hospital, Tianjin (China); Key Laboratory of Breast Cancer Prevention and Treatment of the Ministry of Education, Tianjin Medical University Cancer Institute and Hospital, Tianjin (China); Tong, Zhong-Sheng, E-mail: 83352162@qq.com [Medical Department of Breast Oncology, Tianjin Medical University Cancer Institute and Hospital, Tianjin (China); Key Laboratory of Breast Cancer Prevention and Treatment of the Ministry of Education, Tianjin Medical University Cancer Institute and Hospital, Tianjin (China)

    2013-08-23

    Highlights: •CHL1 is down-regulated in breast cancer tissues. •Down-regulation of CHL1 is related to high grade. •Overexpression of CHL1 inhibits breast cancer cell proliferation and invasion in vitro. •CHL1 deficiency induces breast cancer cell proliferation and invasion both in vitro and in vivo. -- Abstract: Neural cell adhesion molecules (CAMs) play important roles in the development and regeneration of the nervous system. The L1 family of CAMs comprises L1, Close Homolog of L1 (CHL1, L1CAM2), NrCAM, and Neurofascin, which are structurally related trans-membrane proteins in vertebrates. Although L1CAM has been demonstrated to play an important role in carcinogenesis and progression, little is known about the function of CHL1 in human breast cancer. Here, we found that CHL1 is down-regulated in human breast cancer and related to lower grade. Furthermore, overexpression of CHL1 suppresses proliferation and invasion in MDA-MB-231 cells, and knockdown of CHL1 expression results in increased proliferation and invasion in MCF7 cells in vitro. Finally, CHL1 deficiency promotes tumor formation in vivo. Our results may provide a strategy for blocking breast carcinogenesis and progression.

  10. Metabolic engineering of the Chl d-dominated cyanobacterium Acaryochloris marina: production of a novel Chl species by the introduction of the chlorophyllide a oxygenase gene.

    Science.gov (United States)

    Tsuchiya, Tohru; Mizoguchi, Tadashi; Akimoto, Seiji; Tomo, Tatsuya; Tamiaki, Hitoshi; Mimuro, Mamoru

    2012-03-01

    In oxygenic photosynthetic organisms, the properties of photosynthetic reaction systems primarily depend on the Chl species used. Acquisition of new Chl species with unique optical properties may have enabled photosynthetic organisms to adapt to various light environments. The artificial production of a new Chl species in an existing photosynthetic organism by metabolic engineering provides a model system to investigate how an organism responds to a newly acquired pigment. In the current study, we established a transformation system for a Chl d-dominated cyanobacterium, Acaryochloris marina, for the first time. The expression vector (constructed from a broad-host-range plasmid) was introduced into A. marina by conjugal gene transfer. The introduction of a gene for chlorophyllide a oxygenase, which is responsible for Chl b biosynthesis, into A. marina resulted in a transformant that synthesized a novel Chl species instead of Chl b. The content of the novel Chl in the transformant was approximately 10% of the total Chl, but the level of Chl a, another Chl in A. marina, did not change. The chemical structure of the novel Chl was determined to be [7-formyl]-Chl d(P) by mass spectrometry and nuclear magnetic resonance spectroscopy. [7-Formyl]-Chl d(P) is hypothesized to be produced by the combined action of chlorophyllide a oxygenase and enzyme(s) involved in Chl d biosynthesis. These results demonstrate the flexibility of the Chl biosynthetic pathway for the production of novel Chl species, indicating that a new organism with a novel Chl might be discovered in the future.

  11. Audiovisual biofeedback improves motion prediction accuracy.

    Science.gov (United States)

    Pollock, Sean; Lee, Danny; Keall, Paul; Kim, Taeho

    2013-04-01

    The accuracy of motion prediction, utilized to overcome the system latency of motion management radiotherapy systems, is hampered by irregularities present in patients' respiratory patterns. Audiovisual (AV) biofeedback has been shown to reduce respiratory irregularities. The aim of this study was to test the hypothesis that AV biofeedback improves the accuracy of motion prediction. An AV biofeedback system combined with real-time respiratory data acquisition and MR imaging was implemented in this project. One-dimensional respiratory data from (1) the abdominal wall (30 Hz) and (2) the thoracic diaphragm (5 Hz) were obtained from 15 healthy human subjects across 30 studies. The subjects were required to breathe with and without the guidance of AV biofeedback during each study. The obtained respiratory signals were then used in a kernel density estimation prediction algorithm. For each of the 30 studies, five different prediction times ranging from 50 to 1400 ms were tested (150 predictions performed). Prediction error was quantified as the root mean square error (RMSE), calculated from the difference between the real and predicted respiratory data. The statistical significance of the prediction results was determined by Student's t-test. Prediction accuracy was considerably improved by the implementation of AV biofeedback. Of the 150 respiratory predictions performed, prediction accuracy was improved 69% (103/150) of the time for abdominal wall data, and 78% (117/150) of the time for diaphragm data. The average reduction in RMSE due to AV biofeedback over unguided respiration was 26%. AV biofeedback improves prediction accuracy, which would result in increased efficiency of motion management techniques affected by system latencies used in radiotherapy.
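
    The RMSE metric used above to score each prediction is straightforward to compute; the displacement values below are invented for illustration, not taken from the study.

```python
import math

# Root-mean-square error between real and predicted respiratory positions,
# the error metric used in the abstract. Values are illustrative only.
def rmse(real, predicted):
    assert len(real) == len(predicted)
    return math.sqrt(sum((r - p) ** 2 for r, p in zip(real, predicted)) / len(real))

real      = [0.0, 1.2, 2.1, 1.0, 0.1]   # e.g. abdominal-wall displacement, mm
predicted = [0.1, 1.0, 2.3, 1.1, 0.0]
print(round(rmse(real, predicted), 3))
```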

  12. Lipoprotein metabolism indicators improve cardiovascular risk prediction.

    Directory of Open Access Journals (Sweden)

    Daniël B van Schalkwijk

    Full Text Available BACKGROUND: Cardiovascular disease risk increases when lipoprotein metabolism is dysfunctional. We have developed a computational model able to derive indicators of lipoprotein production, lipolysis, and uptake processes from a single lipoprotein profile measurement. This is the first study to investigate whether lipoprotein metabolism indicators can improve cardiovascular risk prediction and therapy management. METHODS AND RESULTS: We calculated lipoprotein metabolism indicators for 1981 subjects (145 cases, 1836 controls) from the Framingham Heart Study offspring cohort, in which NMR lipoprotein profiles were measured. We applied a statistical learning algorithm using a support vector machine to select conventional risk factors and lipoprotein metabolism indicators that contributed to predicting risk for general cardiovascular disease. Risk prediction was quantified by the change in the Area Under the ROC Curve (ΔAUC) and by risk reclassification (Net Reclassification Improvement (NRI) and Integrated Discrimination Improvement (IDI)). Two VLDL lipoprotein metabolism indicators (VLDLE and VLDLH) improved cardiovascular risk prediction. We added these indicators to a multivariate model with the best performing conventional risk markers. Our method significantly improved both CVD prediction and risk reclassification. CONCLUSIONS: Two calculated VLDL metabolism indicators significantly improved cardiovascular risk prediction. These indicators may help to reduce prescription of unnecessary cholesterol-lowering medication, reducing costs and possible side-effects. For clinical application, further validation is required.
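
    The ΔAUC criterion used above compares the area under the ROC curve before and after adding an indicator. AUC has a simple rank-based (Mann-Whitney) form; the risk scores below are invented for illustration.

```python
# Area under the ROC curve via the rank/Mann-Whitney formulation: the
# probability that a randomly chosen case scores above a randomly chosen
# control, with ties counted as half. Scores are invented for illustration.
def auc(scores_cases, scores_controls):
    wins = 0.0
    for c in scores_cases:
        for k in scores_controls:
            if c > k:
                wins += 1.0
            elif c == k:
                wins += 0.5
    return wins / (len(scores_cases) * len(scores_controls))

baseline_model = auc([0.70, 0.60, 0.80], [0.50, 0.65, 0.40, 0.30])
extended_model = auc([0.80, 0.75, 0.90], [0.50, 0.60, 0.40, 0.30])
print(round(extended_model - baseline_model, 3))  # delta-AUC
```

A positive ΔAUC, as for the hypothetical extended model here, indicates that the added indicators improve discrimination between cases and controls.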

  13. Procurement and commissioning of the CHL refrigerator at CEBAF

    International Nuclear Information System (INIS)

    Chronis, W.C.; Arenius, D.M.; Bevins, B.S.; Ganni, V.; Kashy, D.H.; Keesee, M.M.; Reid, T.R.; Wilson, J.D.

    1996-01-01

    The CEBAF Central Helium Liquefier (CHL) provides 2 K refrigeration to the 338 superconducting niobium cavities in two 400 MeV linacs and one 45 MeV injector. The CHL consists of three first-stage and three second-stage compressors, a 4.5 K cold box, a 2 K cold box, liquid and gaseous helium storage, liquid nitrogen storage, and transfer lines. Figure 1 presents a block diagram of the CHL refrigerator. The system was designed to provide 4.8 kW of primary refrigeration at 2 K, 12 kW of shield refrigeration at 45 K for the linac cryomodules, and 10 g/s of liquid flow for the end stations. In April 1994, stable 2 K operation of the previously uncommissioned cold compressors was achieved. The cold compressors are cold vacuum pumps with an inlet temperature of circa 3.0 K. These compressors operate on magnetic bearings and therefore eliminate the possibility of contamination due to any air leaks into the system. Operational data and commissioning experience as they relate to the warm gaseous helium compressors, turbines, instrumentation and control, and the cold compressors are presented.

  14. Improving Flash Flood Prediction in Multiple Environments

    Science.gov (United States)

    Broxton, P. D.; Troch, P. A.; Schaffner, M.; Unkrich, C.; Goodrich, D.; Wagener, T.; Yatheendradas, S.

    2009-12-01

    Flash flooding is a major concern in many fast-responding headwater catchments. There are many efforts to model and predict these flood events, though it is not currently possible to adequately predict the nature of flash flood events with a single model; furthermore, many of these efforts do not even consider snow, which can, by itself or in combination with rainfall events, cause destructive floods. The current research is aimed at broadening the applicability of flash flood modeling. Specifically, we will take a state-of-the-art flash flood model that is designed to work with warm-season precipitation in arid environments, the KINematic runoff and EROSion model (KINEROS2), and combine it with a continuous subsurface flow model and an energy balance snow model. This should improve its predictive capacity in humid environments where lateral subsurface flow significantly contributes to streamflow, and it will make possible the prediction of flooding events that involve rain-on-snow or rapid snowmelt. By modeling changes in the hydrologic state of a catchment before a flood begins, we can also better understand the factors, or combinations of factors, that are necessary to produce large floods. Broadening the applicability of an already state-of-the-art flash flood model such as KINEROS2 is logical because flash floods can occur in all types of environments, and it may lead to better predictions, which are necessary to preserve life and property.

  15. Improving Earth/Prediction Models to Improve Network Processing

    Science.gov (United States)

    Wagner, G. S.

    2017-12-01

    The United States Atomic Energy Detection System (USAEDS) primary seismic network consists of a relatively small number of arrays and three-component stations. The relatively small number of stations in the USAEDS primary network makes it both necessary and feasible to optimize both station and network processing. Station processing improvements include detector tuning efforts that use Receiver Operator Characteristic (ROC) curves to help judiciously set acceptable Type 1 (false) vs. Type 2 (miss) error rates. Other station processing improvements include the use of empirical/historical observations and continuous background noise measurements to compute time-varying, maximum likelihood probability of detection thresholds. The USAEDS network processing software makes extensive use of the azimuth and slowness information provided by frequency-wavenumber analysis at array sites, and polarization analysis at three-component sites. Most of the improvements in USAEDS network processing are due to improvements in the models used to predict azimuth, slowness, and probability of detection. Kriged travel-time, azimuth, and slowness corrections, and their associated uncertainties, are computed using a ground truth database. Improvements in station processing and the use of improved models for azimuth, slowness, and probability of detection have led to significant improvements in USAEDS network processing.

  16. Improving contact prediction along three dimensions.

    Directory of Open Access Journals (Sweden)

    Christoph Feinauer

    2014-10-01

    Full Text Available Correlation patterns in multiple sequence alignments of homologous proteins can be exploited to infer information on the three-dimensional structure of their members. The typical pipeline to address this task, which we in this paper refer to as the three dimensions of contact prediction, is to (i) filter and align the raw sequence data representing the evolutionarily related proteins; (ii) choose a predictive model to describe a sequence alignment; and (iii) infer the model parameters and interpret them in terms of structural properties, such as an accurate contact map. We show here that all three dimensions are important for overall prediction success. In particular, we show that it is possible to improve significantly along the second dimension by going beyond the pair-wise Potts models from statistical physics, which have hitherto been the focus of the field. These (simple) extensions are motivated by multiple sequence alignments often containing long stretches of gaps which, as a data feature, would be rather untypical for independent samples drawn from a Potts model. Using a large test set of proteins, we show that the combined improvements along the three dimensions are as large as any reported to date.
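
    The covariation signal that Potts-model contact prediction refines can be illustrated with the simplest correlation measure between alignment columns, mutual information. The toy alignment below is invented; real pipelines work on thousands of sequences and correct for phylogenetic bias.

```python
import math
from collections import Counter

# Mutual information between two columns of a toy multiple sequence
# alignment: the basic covariation signal that Potts-model methods refine.
msa = ["ACDA", "ACDA", "AGHA", "AGHA", "ACDA", "TGHC"]  # invented alignment

def column(i):
    return [seq[i] for seq in msa]

def mutual_information(i, j):
    n = len(msa)
    ci, cj = Counter(column(i)), Counter(column(j))
    cij = Counter(zip(column(i), column(j)))
    mi = 0.0
    for (a, b), nab in cij.items():
        pab = nab / n
        # MI = sum p(a,b) * log( p(a,b) / (p(a) * p(b)) )
        mi += pab * math.log(pab * n * n / (ci[a] * cj[b]))
    return mi

print(round(mutual_information(1, 2), 3))  # columns 1 and 2 co-vary perfectly
```

In this toy alignment, columns 1 and 2 mutate together (C pairs with D, G with H), giving MI = ln 2; such co-varying column pairs are the candidates for spatial contacts.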

  17. Connecting active to passive fluorescence with photosynthesis: a method for evaluating remote sensing measurements of Chl fluorescence.

    Science.gov (United States)

    Magney, Troy S; Frankenberg, Christian; Fisher, Joshua B; Sun, Ying; North, Gretchen B; Davis, Thomas S; Kornfeld, Ari; Siebke, Katharina

    2017-09-01

    Recent advances in the retrieval of Chl fluorescence from space using passive methods (solar-induced Chl fluorescence, SIF) promise improved mapping of plant photosynthesis globally. However, unresolved issues related to the spatial, spectral, and temporal dynamics of vegetation fluorescence complicate our ability to interpret SIF measurements. We developed an instrument to measure leaf-level gas exchange simultaneously with pulse-amplitude modulation (PAM) and spectrally resolved fluorescence over the same field of view - allowing us to investigate the relationships between active and passive fluorescence with photosynthesis. Strongly correlated, slope-dependent relationships were observed between measured spectra across all wavelengths (Fλ, 670-850 nm) and PAM fluorescence parameters under a range of actinic light intensities (steady-state fluorescence yields, Ft) and saturation pulses (maximal fluorescence yields, Fm). Our results suggest that this method can accurately reproduce the full Chl emission spectra - capturing the spectral dynamics associated with changes in the yields of fluorescence, photochemical (ΦPSII), and nonphotochemical quenching (NPQ). We discuss how this method may establish a link between photosynthetic capacity and the mechanistic drivers of wavelength-specific fluorescence emission during changes in environmental conditions (light, temperature, humidity). Our emphasis is on future research directions linking spectral fluorescence to photosynthesis, ΦPSII, and NPQ. © 2017 The Authors. New Phytologist © 2017 New Phytologist Trust.

  18. Remote detection of water stress conditions via a diurnal photochemical reflectance index (PRI) improves yield prediction in rainfed wheat

    Science.gov (United States)

    Magney, T. S.; Vierling, L. A.; Eitel, J.

    2014-12-01

    Employing remotely sensed techniques to quantify the existence and magnitude of midday photosynthetic downregulation using the photochemical reflectance index (PRI) may reveal new information about plant responses to abiotic stressors in space and time. However, the interpretation and application of the PRI can be confounded by its sensitivity to several variables changing at the diurnal (e.g., irradiation, shadow fraction) and seasonal (e.g., leaf area, chlorophyll and carotene pigment concentrations, irradiation) time scales. We explored different techniques to correct the PRI for variations in canopy structure and relative chlorophyll content (ChlR) using highly temporally resolved (frequency = five minutes) in-situ radiometric measurements of PRI and the Normalized Difference Vegetation Index (NDVI) over eight soft white spring wheat (Triticum aestivum L.) field plots under varying nitrogen and soil water conditions over two seasons. Our results suggest that the influence of seasonal variation in canopy ChlR and LAI on the diurnally measured PRI (PRIdiurnal) can be minimized using simple correction techniques, thereby improving the strength of PRI as a tool to quantify abiotic stressors such as daily changes in soil volumetric water content (SVWC) and vapor pressure deficit (VPD). PRIdiurnal responded strongly to available nitrogen and linearly tracked seasonal changes in SVWC, VPD, and stomatal conductance (gc). Utilizing the PRI as an indicator of stress improved yield predictions significantly over greenness indices such as the NDVI. This study provides insight into the future interpretation and scaling of PRI to quantify rapid changes in photosynthesis, and as an indicator of plant stress.
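
    The PRI itself is a simple normalized-difference index of narrow-band reflectance at 531 and 570 nm (its standard formulation); the reflectance values below are invented for illustration.

```python
# Photochemical reflectance index from narrow-band reflectance at 531 nm
# (xanthophyll-sensitive band) and 570 nm (reference band).
# Reflectance values are invented for illustration.
def pri(r531, r570):
    return (r531 - r570) / (r531 + r570)

print(round(pri(0.048, 0.052), 3))  # slightly negative under midday downregulation
```

NDVI, the greenness index the study compares against, has the same normalized-difference form but uses near-infrared and red bands, which is why it tracks canopy structure rather than rapid photosynthetic downregulation.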

  19. Induction of malignant transformation in CHL-1 cells by exposure to tritiated water

    International Nuclear Information System (INIS)

    Zou Shu'ai; Wang Hui

    1992-01-01

    The induction of neoplastic transformation in CHL-1 cells by low-dose-rate exposure to tritiated water is reported. CHL-1 cells were exposed to tritiated water (9.25 × 10⁵ - 3.7 × 10⁶ Bq/mL) for 24-96 hours, and the accumulated doses were estimated to be 0.055-0.88 Gy, respectively. Neoplastic transformation was found in all exposed cell groups. Morphological study and a transplantation test were carried out to demonstrate the malignancy of the transformed cells; the results show that they have the morphology and behaviour of malignant tumour cells. For CHL-1 cells exposed to various doses of tritiated water, transformation rates were found to range from 3.28% to 13.0% at doses of 0.055-0.88 Gy. To estimate the RBE of tritium for malignant transformation in CHL-1 cells, malignant transformation was also induced by exposure to ¹³⁷Cs gamma-rays at a dose rate of 0.359 Gy/24 h, for which transformation rates were found to range from 2.59% to 13.4%. Based on these data, the RBE of tritium for malignant transformation in CHL-1 cells was estimated to be 1.6.

  20. An Improved Algorithm for Predicting Free Recalls

    Science.gov (United States)

    Laming, Donald

    2008-01-01

    Laming [Laming, D. (2006). "Predicting free recalls." "Journal of Experimental Psychology: Learning, Memory, and Cognition," 32, 1146-1163] has shown that, in a free-recall experiment in which the participants rehearsed out loud, entire sequences of recalls could be predicted, to a useful degree of precision, from the prior sequences of stimuli…

  1. Chl1 DNA helicase regulates Scc2 deposition specifically during DNA-replication in Saccharomyces cerevisiae.

    Directory of Open Access Journals (Sweden)

    Soumya Rudra

    Full Text Available The conserved family of cohesin proteins that mediate sister chromatid cohesion requires Scc2, Scc4 for chromatin association and Eco1/Ctf7 for conversion to a tethering-competent state. A popular model, based on the notion that cohesins form huge ring-like structures, is that Scc2, Scc4 function is essential only during G1, such that sister chromatid cohesion results simply from DNA replisome passage through pre-loaded cohesin rings. In such a scenario, cohesin deposition during G1 is temporally uncoupled from Eco1-dependent establishment reactions that occur during S-phase. Chl1 DNA helicase (a homolog of the human ChlR1/DDX11 and BACH1/BRIP1/FANCJ helicases implicated in Fanconi anemia, breast and ovarian cancer, and Warsaw Breakage Syndrome) plays a critical role in sister chromatid cohesion; however, the mechanism through which Chl1 promotes cohesion remains poorly understood. Here, we report that Chl1 promotes Scc2 loading onto DNA, such that both Scc2 and cohesin enrichment on chromatin are defective in chl1 mutant cells. The results further show that both Chl1 expression and chromatin recruitment are tightly regulated through the cell cycle, peaking during S-phase. Importantly, kinetic ChIP studies reveal that Chl1 is required for Scc2 chromatin association specifically during S-phase, but not during G1. Despite normal chromatin enrichment of both Scc2 and cohesin during G1, chl1 mutant cells exhibit severe chromosome segregation and cohesion defects--revealing that G1-loaded cohesin is insufficient to promote cohesion. Based on these findings, we propose a new model wherein S-phase cohesin loading occurs during DNA replication and in concert with both cohesion establishment and chromatin assembly reactions--challenging the notion that the DNA replication fork navigates through or around pre-loaded cohesin rings.

  2. Mammalian ChlR1 has a role in heterochromatin organization

    International Nuclear Information System (INIS)

    Inoue, Akira; Hyle, Judith; Lechner, Mark S.; Lahti, Jill M.

    2011-01-01

The ChlR1 DNA helicase, encoded by the DDX11 gene, which is responsible for Warsaw breakage syndrome (WABS), has a role in sister-chromatid cohesion. In this study, we show that ChlR1-deficient human cells exhibit abnormal heterochromatin organization. While constitutive heterochromatin is discretely localized at perinuclear and perinucleolar regions in control HeLa cells, ChlR1-depleted cells showed dispersed localization of constitutive heterochromatin accompanied by disrupted centromere clustering. Cells isolated from Ddx11-/- embryos also exhibited diffuse localization of centromeres and heterochromatin foci. Similar abnormalities were found in HeLa cells depleted of combinations of HP1α and HP1β. Immunofluorescence and chromatin immunoprecipitation showed a decreased level of HP1α at pericentric regions in ChlR1-depleted cells. Trimethyl-histone H3 at lysine 9 (H3K9-me3) was also modestly decreased at pericentric sequences. The abnormality in pericentric heterochromatin was further supported by decreased DNA methylation within major satellite repeats of Ddx11-/- embryos. Furthermore, a micrococcal nuclease (MNase) assay revealed a decreased chromatin density at the telomeres. These data suggest that in addition to a role in sister-chromatid cohesion, ChlR1 is also involved in the proper formation of heterochromatin, which in turn contributes to global nuclear organization and pleiotropic effects. -- Highlights: → New role for ChlR1 (DDX11), a cohesinopathy gene, in heterochromatin organization. → Loss of ChlR1 altered heterochromatin localization and centromere clustering. → Reduced ChlR1 levels also reduced HP1α and H3K9-me3 binding to pericentric DNA. → Decreased DNA methylation was found in pericentric repeats of Ddx11-/- embryos. → These findings will aid in understanding the pathogenesis of Warsaw breakage syndrome.

  3. Improvements in disruption prediction at ASDEX Upgrade

    Energy Technology Data Exchange (ETDEWEB)

    Aledda, R., E-mail: raffaele.aledda@diee.unica.it; Cannas, B., E-mail: cannas@diee.unica.it; Fanni, A., E-mail: fanni@diee.unica.it; Pau, A., E-mail: alessandro.pau@diee.unica.it; Sias, G., E-mail: giuliana.sias@diee.unica.it

    2015-10-15

Highlights: • A disruption prediction system for AUG, based on a logistic model, is designed. • The length of the disruptive phase is set for each disruption in the training set. • The model is tested on a dataset different from that used during the training phase. • The generalization capability and the aging of the model have been tested. • The predictor performance is compared with the locked mode detector. - Abstract: In large-scale tokamaks disruptions have the potential to cause serious damage to the facility. Hence disruptions must be avoided; but, when a disruption is unavoidable, minimizing its severity is mandatory. Reliable detection of a disruptive event is required to trigger proper mitigation actions. To this end, machine learning methods have been widely studied to design disruption prediction systems at ASDEX Upgrade. The training phase of the proposed approaches is based on the availability of disrupted and non-disrupted discharges. In the literature, disruptive configurations were assumed to appear within the last 45 ms of each disruption. Although the results achieved in terms of correct predictions were good, the choice of such a fixed temporal window may have limited the prediction performance: it generates confusing information for disruptions whose disruptive phase differs from 45 ms. Assessing a specific disruptive phase for each disruptive discharge is therefore a relevant issue in understanding disruptive events. In this paper, the Mahalanobis distance is applied to define a specific disruptive phase for each disruption, and a logistic regressor has been trained as disruption predictor. The results show that improvements in disruption prediction performance are possible by defining a specific disruptive phase for each disruption.
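The first step of the scheme described above — labeling a discharge-specific disruptive phase via the Mahalanobis distance before training the classifier — can be sketched as follows. The diagnostic signals, reference statistics, drift direction, and threshold are all invented for illustration; they are not values from the ASDEX Upgrade study.

```python
import numpy as np
from numpy.linalg import inv

def mahalanobis_distance(x, mean, cov_inv):
    """Distance of one diagnostic sample from the non-disruptive reference state."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

# Hypothetical reference: samples of 3 diagnostic signals from safe phases
rng = np.random.default_rng(0)
safe = rng.normal(0.0, 1.0, size=(500, 3))
mean = safe.mean(axis=0)
cov_inv = inv(np.cov(safe, rowvar=False))

# A disruptive discharge drifts away from the reference; the discharge-specific
# disruptive phase starts where the distance first exceeds a chosen threshold.
threshold = 4.0
drift = np.array([0.5, -0.3, 0.8])                 # invented drift direction
trajectory = [mean + t * drift for t in range(20)]
onset = next(i for i, x in enumerate(trajectory)
             if mahalanobis_distance(x, mean, cov_inv) > threshold)
```

Samples at indices at or beyond `onset` would then be labeled disruptive when training the logistic regressor.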

  4. Improvements in disruption prediction at ASDEX Upgrade

    International Nuclear Information System (INIS)

    Aledda, R.; Cannas, B.; Fanni, A.; Pau, A.; Sias, G.

    2015-01-01

Highlights: • A disruption prediction system for AUG, based on a logistic model, is designed. • The length of the disruptive phase is set for each disruption in the training set. • The model is tested on a dataset different from that used during the training phase. • The generalization capability and the aging of the model have been tested. • The predictor performance is compared with the locked mode detector. - Abstract: In large-scale tokamaks disruptions have the potential to cause serious damage to the facility. Hence disruptions must be avoided; but, when a disruption is unavoidable, minimizing its severity is mandatory. Reliable detection of a disruptive event is required to trigger proper mitigation actions. To this end, machine learning methods have been widely studied to design disruption prediction systems at ASDEX Upgrade. The training phase of the proposed approaches is based on the availability of disrupted and non-disrupted discharges. In the literature, disruptive configurations were assumed to appear within the last 45 ms of each disruption. Although the results achieved in terms of correct predictions were good, the choice of such a fixed temporal window may have limited the prediction performance: it generates confusing information for disruptions whose disruptive phase differs from 45 ms. Assessing a specific disruptive phase for each disruptive discharge is therefore a relevant issue in understanding disruptive events. In this paper, the Mahalanobis distance is applied to define a specific disruptive phase for each disruption, and a logistic regressor has been trained as disruption predictor. The results show that improvements in disruption prediction performance are possible by defining a specific disruptive phase for each disruption.

  5. Recent Improvements in IERS Rapid Service/Prediction Center Products

    National Research Council Canada - National Science Library

    Stamatakos, N; Luzum, B; Wooden, W

    2007-01-01

...) at USNO has made several improvements to its combination and prediction products. These improvements are due to the inclusion of new input data sources as well as modifications to the combination and prediction algorithms...

  6. Improved techniques for predicting spacecraft power

    International Nuclear Information System (INIS)

    Chmielewski, A.B.

    1987-01-01

Radioisotope Thermoelectric Generators (RTGs) are going to supply power for the NASA Galileo and Ulysses spacecraft, now scheduled to be launched in 1989 and 1990. The duration of the Galileo mission is expected to be over 8 years, bringing the total RTG lifetime to 13 years. In 13 years, the RTG power drops more than 20 percent, leaving a very small power margin over what is consumed by the spacecraft. It is therefore very important to accurately predict RTG performance and to be able to assess the magnitude of the errors involved. The paper lists all the error sources involved in RTG power predictions and describes a statistical method for calculating the tolerance
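A statistical tolerance calculation of the kind described can be illustrated with a toy Monte Carlo: propagate uncertain beginning-of-life power and an uncertain degradation rate through a simple decay model and read off percentiles. The model form and every number below are illustrative assumptions, not the flight prediction method.

```python
import numpy as np

HALF_LIFE_PU238 = 87.7                  # years, Pu-238 fuel half-life
DECAY = np.log(2) / HALF_LIFE_PU238

def rtg_power(t_years, p0, k):
    """Toy RTG model: radioactive fuel decay times an extra converter-degradation term."""
    return p0 * np.exp(-DECAY * t_years) * np.exp(-k * t_years)

# Monte Carlo tolerance: sample the uncertain inputs, collect the output spread
rng = np.random.default_rng(42)
p0 = rng.normal(285.0, 2.0, size=10_000)    # W, hypothetical beginning-of-life power
k = rng.normal(0.008, 0.001, size=10_000)   # 1/yr, hypothetical degradation rate
p13 = rtg_power(13.0, p0, k)                # predicted power at the 13-year mark

lo, hi = np.percentile(p13, [2.5, 97.5])    # 95% tolerance band on the prediction
```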

  7. Improving LMA predictions with non standard interactions

    CERN Document Server

    Das, C R

    2010-01-01

It has been known for some time that the well-established LMA solution to the observed solar neutrino deficit fails to predict a flat energy spectrum for SuperKamiokande, as opposed to what the data indicate. It also leads to a Chlorine rate which appears to be too high as compared to the data. We investigate the possible solution to these inconsistencies with non-standard neutrino interactions, assuming that they come as extra contributions to the $\

  8. Evolution of light-harvesting complex proteins from Chl c-containing algae

    Directory of Open Access Journals (Sweden)

    Puerta M Virginia

    2011-04-01

Full Text Available Abstract Background Light harvesting complex (LHC) proteins function in photosynthesis by binding chlorophyll (Chl) and carotenoid molecules that absorb light and transfer the energy to the reaction center Chl of the photosystem. Most research has focused on LHCs of plants and chlorophytes that bind Chl a and b, and extensive work on these proteins has uncovered a diversity of biochemical functions, expression patterns and amino acid sequences. We focus here on a less-studied family of LHCs that typically bind Chl a and c, and that are widely distributed in Chl c-containing and other algae. Previous phylogenetic analyses of these proteins suggested that individual algal lineages possess proteins from one or two subfamilies, and that most subfamilies are characteristic of a particular algal lineage, but genome-scale datasets had revealed that some species have multiple different forms of the gene. Such observations also suggested that there might have been an important influence of endosymbiosis in the evolution of LHCs. Results We reconstruct a phylogeny of LHCs from Chl c-containing algae and related lineages using data from recent sequencing projects to give ~10-fold larger taxon sampling than previous studies. The phylogeny indicates that individual taxa possess proteins from multiple LHC subfamilies and that several LHC subfamilies are found in distantly related algal lineages. This phylogenetic pattern implies functional differentiation of the gene families, a hypothesis that is consistent with data on gene expression, carotenoid binding and physical associations with other LHCs. In all probability LHCs have undergone a complex history of evolution of function, gene transfer, and lineage-specific diversification. Conclusion The analysis provides a strikingly different picture of LHC diversity than previous analyses of LHC evolution. Individual algal lineages possess proteins from multiple LHC subfamilies. Evolutionary relationships showed

  9. Network information improves cancer outcome prediction.

    Science.gov (United States)

    Roy, Janine; Winter, Christof; Isik, Zerrin; Schroeder, Michael

    2014-07-01

Disease progression in cancer can vary substantially between patients. Yet, patients often receive the same treatment. Recently, there has been much work on predicting disease progression and patient outcome variables from gene expression in order to personalize treatment options. Despite the first diagnostic kits on the market, there are open problems such as the choice of random gene signatures or noisy expression data. One approach to deal with these two problems employs protein-protein interaction networks and ranks genes using the random surfer model of Google's PageRank algorithm. In this work, we created a benchmark dataset collection comprising 25 cancer outcome prediction datasets from the literature and systematically evaluated the use of networks and a PageRank derivative, NetRank, for signature identification. We show that NetRank performs significantly better than classical methods such as fold change or the t-test. Despite an order-of-magnitude difference in network size, a regulatory and a protein-protein interaction network perform equally well. Experimental evaluation on cancer outcome prediction in all of the 25 underlying datasets suggests that the network-based methodology identifies highly overlapping signatures over all cancer types, in contrast to classical methods that fail to identify highly common gene sets across the same cancer types. Integration of network information into gene expression analysis allows the identification of more reliable and accurate biomarkers and provides a deeper understanding of processes occurring in cancer development and progression.
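The random-surfer idea behind this approach can be sketched with a small power iteration: each gene's rank mixes its own expression-derived score with the ranks of its network neighbors. The network, scores, and parameters below are invented for illustration and are a sketch in the spirit of NetRank, not the published implementation.

```python
import numpy as np

def netrank(adj, gene_scores, damping=0.85, iters=100):
    """PageRank-style gene ranking: personalized power iteration over
    an interaction network, with per-gene scores as the teleport vector."""
    col_sums = adj.sum(axis=0)
    col_sums[col_sums == 0] = 1.0
    M = adj / col_sums                      # column-stochastic transition matrix
    s = gene_scores / gene_scores.sum()     # personalization from expression data
    r = np.full(adj.shape[0], 1.0 / adj.shape[0])
    for _ in range(iters):
        r = (1 - damping) * s + damping * (M @ r)
    return r

# Hypothetical 4-gene interaction network in which gene 0 is a hub
adj = np.array([[0, 1, 1, 1],
                [1, 0, 0, 0],
                [1, 0, 0, 0],
                [1, 0, 0, 0]], dtype=float)
scores = np.array([0.1, 0.4, 0.4, 0.1])    # e.g. per-gene t-test statistics
rank = netrank(adj, scores)                # the hub inherits its neighbors' relevance
```

Even though gene 0 has a low expression-based score, it is ranked highest because relevance flows to it from its three neighbors — the effect that lets network-based methods recover consistent signatures.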

  10. Drivers and variability of the Chl fluorescence emission spectrum from the leaf through the canopy

    Science.gov (United States)

    Magney, T. S.; Frankenberg, C.; Grossman, K.; Koehler, P.; North, G.; Porcar-Castell, A.; Stutz, J.; Fisher, J.

    2017-12-01

Recent advances in the retrieval of solar induced chlorophyll fluorescence (SIF) from remote sensing platforms provide a significant step towards mapping instantaneous plant photosynthesis across space and time. However, our current understanding of the variability and controls on the shape of the chlorophyll fluorescence (ChlF) spectrum is limited. To address these uncertainties, we have developed instrumentation to make highly resolved spectral measurements of SIF at both leaf and canopy scales. At the leaf scale, we simultaneously collected active (PAM) and passive (675-850 nm) fluorescence with photosynthesis across a range of species and conditions; at the canopy scale, diurnal and seasonal Fraunhofer-based SIF retrievals across the red and far-red spectrum are made at four different flux tower sites (Costa Rica, Iowa (2), and Colorado). From both of these scales we are able to determine (1) the variability in steady-state spectra across species and individuals; and (2) the environmental, functional, and structural controls on SIF. Here we report on the sensitivity of SIF spectra based on a singular value decomposition analysis, and present the mechanisms - pigment concentration, species, non-photochemical and photochemical quenching, and environmental conditions - controlling SIF variability. Further, we will discuss how an improved understanding of leaf-level variability can inform canopy-level SIF, and ultimately how such information may enable proper interpretation of satellite retrievals.
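A singular value decomposition of a stack of spectra, as mentioned above, identifies how many independent modes drive spectral variability. The sketch below builds a synthetic spectra matrix over the 675-850 nm window from two invented Gaussian emission modes (red and far-red ChlF peaks) with random mixing weights; all shapes and numbers are assumptions for illustration, not the authors' data.

```python
import numpy as np

# Synthetic stack of fluorescence spectra: rows = observations,
# columns = wavelengths over the 675-850 nm retrieval window
rng = np.random.default_rng(1)
wl = np.linspace(675.0, 850.0, 120)
red = np.exp(-0.5 * ((wl - 685.0) / 10.0) ** 2)      # invented red ChlF peak
far_red = np.exp(-0.5 * ((wl - 740.0) / 15.0) ** 2)  # invented far-red ChlF peak
weights = rng.uniform(0.2, 1.0, size=(50, 2))        # per-observation mixing
spectra = weights @ np.vstack([red, far_red])
spectra += rng.normal(0.0, 0.01, spectra.shape)      # measurement noise

# SVD of the mean-centered spectra: fraction of variance per spectral mode
U, s, Vt = np.linalg.svd(spectra - spectra.mean(axis=0), full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)
```

With two underlying modes, the first two singular vectors capture nearly all of the spectral variance; the rows of `Vt` are the dominant spectral shapes.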

  11. Improving plant availability by predicting reactor trips

    International Nuclear Information System (INIS)

    Frank, M.V.; Epstein, S.A.

    1986-01-01

Management Analysis Company (MAC) has developed and applied two complementary software packages called RiTSE and RAMSES. Together they provide a mini-computer workstation for maintenance and operations personnel to dramatically reduce inadvertent reactor trips. They are intended to be used by those responsible at the plant for authorizing work during operation (such as a clearance coordinator or shift foreman in U.S. plants). They discover and represent all components, processes, and their interactions that could cause a trip. They predict whether future activities at the plant would cause a reactor trip, provide a reactor trip warning system, and aid in post-trip cause analysis. RAMSES is a general reliability engineering software package that uses concepts of artificial intelligence to provide unique capabilities on personal and mini-computers

  12. Decadal climate predictions improved by ocean ensemble dispersion filtering

    Science.gov (United States)

    Kadow, C.; Illing, S.; Kröner, I.; Ulbrich, U.; Cubasch, U.

    2017-06-01

Decadal predictions by Earth system models aim to capture the state and phase of the climate several years in advance. Atmosphere-ocean interaction plays an important role for such climate forecasts. While short-term weather forecasts represent an initial value problem and long-term climate projections represent a boundary condition problem, decadal climate prediction falls in between these two time scales. In recent years, more precise initialization techniques of coupled Earth system models and increased ensemble sizes have improved decadal predictions. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Here we show that the climate prediction skill of an Earth system model can be improved by a shift of the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure, called the ensemble dispersion filter, yields more accurate results than the standard decadal prediction. Global mean and regional temperature, precipitation, and winter cyclone predictions show an increased skill up to 5 years ahead. Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution. Our results demonstrate how decadal climate predictions benefit from ocean ensemble dispersion filtering toward the ensemble mean. Plain Language Summary: Decadal predictions aim to predict the climate several years in advance. Atmosphere-ocean interaction plays an important role for such climate forecasts. The ocean's memory, due to its heat capacity, holds large potential skill. In recent years, more precise initialization techniques of coupled Earth system models (incl. atmosphere and ocean) have improved decadal predictions. Ensembles are another important aspect: applying slightly perturbed predictions to trigger the famous butterfly effect results in an ensemble. Instead of evaluating one prediction, but the whole ensemble with its
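The core operation of the dispersion filter — shifting each member's ocean state toward the ensemble mean — is simple to sketch. The nudging strength `alpha`, the toy "truth" field, and the ensemble size below are assumptions for illustration, not values from the study.

```python
import numpy as np

def dispersion_filter(ensemble, alpha=0.5):
    """Nudge every member part-way toward the ensemble mean (the paper applies
    such a shift to the ocean state at seasonal intervals; alpha is a
    hypothetical nudging strength)."""
    mean = ensemble.mean(axis=0)
    return ensemble + alpha * (mean - ensemble)

# Toy ensemble of ocean-state fields: 5 members over 10 grid cells
rng = np.random.default_rng(3)
truth = np.sin(np.linspace(0.0, np.pi, 10))
members = truth + rng.normal(0.0, 0.5, size=(5, 10))

filtered = dispersion_filter(members)
spread_before = float(members.std(axis=0).mean())
spread_after = float(filtered.std(axis=0).mean())   # spread shrinks ...
# ... while the ensemble mean itself is left untouched
```

The filter damps member-to-member dispersion without altering the ensemble-mean state, which is why it can retain the initialized signal longer.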

  13. MicroRNA-Mediated Regulation of ITGB3 and CHL1 Is Implicated in SSRI Action

    Directory of Open Access Journals (Sweden)

    Keren Oved

    2017-11-01

Full Text Available Background: Selective serotonin reuptake inhibitor (SSRI) antidepressant drugs are the first-line treatment for major depressive disorder (MDD) but are effective in <70% of patients. Our earlier genome-wide studies indicated that two genes encoding cell adhesion proteins, close homolog of L1 (CHL1) and integrin beta-3 (ITGB3), and the microRNAs miR-151a-3p and miR-221/222 are implicated in the variable sensitivity and response of human lymphoblastoid cell lines (LCLs) from unrelated individuals to SSRI drugs. Methods: The microRNAs miR-221, miR-222, and miR-151a-3p, along with their target gene binding sites, were explored in silico using miRBase, TargetScan, microRNAviewer, and the UCSC Genome Browser. Luciferase reporter assays were conducted to demonstrate the direct functional regulation of ITGB3 and CHL1 expression by miR-221/222 and miR-151a-3p, respectively. A human LCL exhibiting low sensitivity to paroxetine was utilized for studying the phenotypic effect of CHL1 regulation by miR-151a-3p on SSRI response. Results: By showing direct regulation of CHL1 and ITGB3 by miR-151a-3p and miR-221/222, respectively, we link these microRNAs and genes with cellular SSRI sensitivity phenotypes. We report that miR-151a-3p increases cell sensitivity to paroxetine via down-regulating CHL1 expression. Conclusions: miR-151a-3p, miR-221/222 and their (here confirmed) respective target genes, CHL1 and ITGB3, are implicated in SSRI responsiveness, and possibly in the clinical response to antidepressant drugs.

  14. Text mining improves prediction of protein functional sites.

    Directory of Open Access Journals (Sweden)

    Karin M Verspoor

Full Text Available We present an approach that integrates protein structure analysis and text mining for protein functional site prediction, called LEAP-FS (Literature Enhanced Automated Prediction of Functional Sites). The structure analysis was carried out using Dynamics Perturbation Analysis (DPA), which predicts functional sites at control points where interactions greatly perturb protein vibrations. The text mining extracts mentions of residues in the literature, and predicts that residues mentioned are functionally important. We assessed the significance of each of these methods by analyzing their performance in finding known functional sites (specifically, small-molecule binding sites and catalytic sites) in about 100,000 publicly available protein structures. The DPA predictions recapitulated many of the functional site annotations and preferentially recovered binding sites annotated as biologically relevant vs. those annotated as potentially spurious. The text-based predictions were also substantially supported by the functional site annotations: compared to other residues, residues mentioned in text were roughly six times more likely to be found in a functional site. The overlap of predictions with annotations improved when the text-based and structure-based methods agreed. Our analysis also yielded new high-quality predictions of many functional site residues that were not catalogued in the curated data sources we inspected. We conclude that both DPA and text mining independently provide valuable high-throughput protein functional site predictions, and that integrating the two methods using LEAP-FS further improves the quality of these predictions.

  15. Text Mining Improves Prediction of Protein Functional Sites

    Science.gov (United States)

    Cohn, Judith D.; Ravikumar, Komandur E.

    2012-01-01

    We present an approach that integrates protein structure analysis and text mining for protein functional site prediction, called LEAP-FS (Literature Enhanced Automated Prediction of Functional Sites). The structure analysis was carried out using Dynamics Perturbation Analysis (DPA), which predicts functional sites at control points where interactions greatly perturb protein vibrations. The text mining extracts mentions of residues in the literature, and predicts that residues mentioned are functionally important. We assessed the significance of each of these methods by analyzing their performance in finding known functional sites (specifically, small-molecule binding sites and catalytic sites) in about 100,000 publicly available protein structures. The DPA predictions recapitulated many of the functional site annotations and preferentially recovered binding sites annotated as biologically relevant vs. those annotated as potentially spurious. The text-based predictions were also substantially supported by the functional site annotations: compared to other residues, residues mentioned in text were roughly six times more likely to be found in a functional site. The overlap of predictions with annotations improved when the text-based and structure-based methods agreed. Our analysis also yielded new high-quality predictions of many functional site residues that were not catalogued in the curated data sources we inspected. We conclude that both DPA and text mining independently provide valuable high-throughput protein functional site predictions, and that integrating the two methods using LEAP-FS further improves the quality of these predictions. PMID:22393388

  16. The Q Motif Is Involved in DNA Binding but Not ATP Binding in ChlR1 Helicase.

    Directory of Open Access Journals (Sweden)

    Hao Ding

Full Text Available Helicases are molecular motors that couple the energy of ATP hydrolysis to the unwinding of structured DNA or RNA and to chromatin remodeling. The conversion of energy derived from ATP hydrolysis into unwinding and remodeling is coordinated by seven sequence motifs (I, Ia, II, III, IV, V, and VI). The Q motif, consisting of nine amino acids (GFXXPXPIQ) with an invariant glutamine (Q) residue, has been identified in some, but not all, helicases. Compared to the seven well-recognized conserved helicase motifs, the role of the Q motif is less acknowledged. Mutations in the human ChlR1 (DDX11) gene are associated with a unique genetic disorder known as Warsaw Breakage Syndrome, which is characterized by cellular defects in genome maintenance. To examine the roles of the Q motif in ChlR1 helicase, we performed site-directed mutagenesis of glutamine to alanine at residue 23 in the Q motif of ChlR1. ChlR1 recombinant protein was overexpressed and purified from HEK293T cells. The ChlR1-Q23A mutant abolished the helicase activity of ChlR1 and displayed reduced DNA binding ability. The mutant showed impaired ATPase activity but normal ATP binding. A thermal shift assay revealed that ChlR1-Q23A has a melting point value similar to that of ChlR1-WT. Partial proteolysis mapping demonstrated that ChlR1-WT and Q23A have a similar globular structure, although some subtle conformational differences in these two proteins are evident. Finally, we found ChlR1 exists and functions as a monomer in solution, which is different from FANCJ, in which the Q motif is involved in protein dimerization. Taken together, our results suggest that the Q motif is involved in DNA binding but not ATP binding in ChlR1 helicase.
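The Q-motif consensus given in the abstract (GFXXPXPIQ, with X any residue) maps directly onto a regular-expression scan of a protein sequence. The sequence below is a made-up fragment used only to exercise the pattern.

```python
import re

# Q-motif consensus from the abstract: GFXXPXPIQ, where X is any residue
# and the terminal glutamine (Q) is invariant.
Q_MOTIF = re.compile(r"GF..P.PIQ")

def find_q_motif(sequence):
    """Return (start, matched) pairs for Q-motif hits in a protein sequence."""
    return [(m.start(), m.group()) for m in Q_MOTIF.finditer(sequence)]

# Hypothetical sequence fragment carrying one copy of the motif
seq = "MSTAVLGFKLPAPIQRNDEQ"
hits = find_q_motif(seq)
```

A scan like this locates the motif (and hence the conserved Q residue, the analog of Q23 in ChlR1) as a starting point for mutagenesis target selection.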

  17. CHARACTERIZATION OF PB2+ UPTAKE AND SEQUESTRATION IN PSEUDOMONAS AERUGINOSA CHL004

    Science.gov (United States)

In laboratory studies, the soil isolate Pseudomonas aeruginosa CHL004 (Vesper et al. 1996) has been found to concentrate Pb2+ in the cytoplasm by formation of particles that contain Pb2+ and phosphorus. Upon examination of the washed lyophilized cells grown in the presence of lea...

  18. Global Lakes Sentinel Services: Evaluation of Chl-a Trends in Deep Clear Lakes

    Science.gov (United States)

    Cazzaniga, Ilaria; Giardino, Claudia; Bresciani, Mariano; Poser, Kathrin; Peters, Steef; Hommersom, Annelies; Schenk, Karin; Heege, Thomas; Philipson, Petra; Ruescas, Ana; Bottcher, Martin; Stelzer, Kerstin

    2016-08-01

The aim of this study is the analysis of trends in the trophic level evolution of clear deep lakes which, being characterised by a good quality state, are important socio-economic resources for their regions. The selected lakes are situated in Europe (Garda, Maggiore, Constance and Vättern), North America (Michigan) and Africa (Malawi and Tanganyika) and cover a range of eco-regions (continental, perialpine, boreal, rift valley) distributed globally. To evaluate the trophic level tendency we mainly focused on chlorophyll-a concentration (chl-a), which is a direct proxy of trophic status. The chl-a concentrations were obtained from 5216 cloud-free MERIS images from 2002 to 2012. The 'GLaSS RoIStats tool' available within the GLaSS project was used to extract chl-a in a number of regions of interest (ROIs) located in pelagic waters, as well as some few other stations depending on lake morphology. For producing the time-series trend, these extracted data were analysed with the Seasonal Kendall test. The results overall show almost stable conditions, with a slight increase in concentration for lakes Maggiore and Constance and the Green Bay of Lake Michigan; a slight decrease for lakes Garda and Tanganyika; and absolutely stable conditions for lakes Vättern and Malawi. The results presented in this work show the great capability of MERIS to perform trend analysis of trophic status with a focus on chl-a concentration. Since chl-a is also a key parameter in water quality monitoring plans, this study also supports the management practices implemented worldwide for using the water of the lakes.
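The Seasonal Kendall test used above can be sketched in a few lines: compute the Mann-Kendall S statistic (number of increasing minus decreasing pairs) separately for each season, then sum across seasons so the seasonal cycle does not masquerade as a trend. The chl-a values below are invented to mimic a slight decline, not data from the study.

```python
def mann_kendall_s(values):
    """Mann-Kendall S statistic: #increasing minus #decreasing pairs."""
    s = 0
    for i in range(len(values)):
        for j in range(i + 1, len(values)):
            diff = values[j] - values[i]
            s += (diff > 0) - (diff < 0)
    return s

def seasonal_kendall_s(series_by_season):
    """Seasonal Kendall: sum the per-season S statistics, comparing only
    like seasons across years."""
    return sum(mann_kendall_s(season) for season in series_by_season)

# Hypothetical seasonal chl-a (mg/m3) over 6 years, drifting slightly down
chl_spring = [2.1, 2.0, 1.9, 1.9, 1.8, 1.7]
chl_summer = [3.0, 2.9, 2.9, 2.8, 2.6, 2.5]
total_s = seasonal_kendall_s([chl_spring, chl_summer])
# A negative total S indicates a downward tendency in trophic status
```

In practice the S statistic is also normalized by its variance to obtain a significance level; the sign alone already gives the trend direction reported per lake.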

  19. Contrasting Chl-a responses to the tropical cyclones Thane and Phailin in the Bay of Bengal

    Digital Repository Service at National Institute of Oceanography (India)

    Vidya, P.J.; Das, S.; ManiMurali, R.

    cyclone (8-14 October 2013), and both occurred during the post-monsoon season. The present study examined the effect of cyclone intensity difference on the chlorophyll a (Chl-a) production in the BoB. Two and seven times Chl-a enhancement was observed...

  20. CNNcon: improved protein contact maps prediction using cascaded neural networks.

    Directory of Open Access Journals (Sweden)

    Wang Ding

Full Text Available BACKGROUND: Despite continuing progress in X-ray crystallography and high-field NMR spectroscopy for the determination of three-dimensional protein structures, the number of unsolved and newly discovered sequences grows much faster than that of determined structures. Protein modeling methods can possibly bridge this huge sequence-structure gap with the development of computational science. A grand challenge is to predict three-dimensional protein structure from its primary structure (residue sequence) alone. However, predicting residue contact maps is a crucial and promising intermediate step towards final three-dimensional structure prediction. Better predictions of local and non-local contacts between residues can transform protein sequence alignment into structure alignment, which can finally improve template-based three-dimensional protein structure predictors greatly. METHODS: CNNcon, an improved multiple-neural-network-based contact map predictor using six sub-networks and one final cascade-network, was developed in this paper. Both the sub-networks and the final cascade-network were trained and tested with their corresponding data sets. For testing, the target protein was first coded and then input to its corresponding sub-networks for prediction. After that, the intermediate results were input to the cascade-network to finish the final prediction. RESULTS: CNNcon can accurately predict on average 58.86% of contacts at a distance cutoff of 8 Å for proteins with lengths ranging from 51 to 450. The comparison results show that the present method performs better than state-of-the-art predictors. In particular, the prediction accuracy remains steady with increasing protein sequence length, indicating that CNNcon overcomes the thin density problem with which other current predictors have trouble. This advantage makes the method valuable for the prediction of long proteins. As a result, the effective
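The prediction target here — a residue contact map at an 8 Å distance cutoff — is easy to make concrete: given residue coordinates, two residues are "in contact" when their distance is below the cutoff. The straight-line toy structure below is an assumption purely for illustration.

```python
import numpy as np

def contact_map(coords, cutoff=8.0):
    """Binary residue-residue contact map: True where the distance between
    residue coordinates (e.g. representative atoms) is below the cutoff in Å."""
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    return dist < cutoff

# Toy "structure": 6 residues along a line, 4 Å apart
coords = np.array([[4.0 * i, 0.0, 0.0] for i in range(6)])
cmap = contact_map(coords)
```

For this toy chain only residues within one position of each other fall under 8 Å, so the map is tri-diagonal; a predictor like CNNcon tries to reproduce such a matrix from sequence alone.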

  1. Improved Modeling and Prediction of Surface Wave Amplitudes

    Science.gov (United States)

    2017-05-31

AFRL-RV-PS-TR-2017-0162, Improved Modeling and Prediction of Surface Wave Amplitudes, Jeffry L. Stevens, et al., Leidos; Contract FA9453-14-C-0225.

  2. Combining gene prediction methods to improve metagenomic gene annotation

    Directory of Open Access Journals (Sweden)

    Rosen Gail L

    2011-01-01

Full Text Available Abstract Background Traditional gene annotation methods rely on characteristics that may not be available in short reads generated from next generation technology, resulting in suboptimal performance for metagenomic (environmental) samples. Therefore, in recent years, new programs have been developed that optimize performance on short reads. In this work, we benchmark three metagenomic gene prediction programs and combine their predictions to improve metagenomic read gene annotation. Results We not only analyze the programs' performance at different read lengths, as in similar studies, but also separate different types of reads, including intra- and intergenic regions, for analysis. The main deficiencies are in the algorithms' ability to predict non-coding regions and gene edges, resulting in more false positives and false negatives than desired. In fact, the specificities of the algorithms are notably worse than the sensitivities. By combining the programs' predictions, we show significant improvement in specificity at minimal cost to sensitivity, resulting in a 4% improvement in accuracy for 100 bp reads and a ~1% improvement in accuracy for reads of 200 bp and above. To correctly annotate the start and stop of the genes, we find that a consensus of all the predictors performs best for shorter read lengths while unanimous agreement is better for longer read lengths, boosting annotation accuracy by 1-8%. We also demonstrate use of the classifier combinations on a real dataset. Conclusions To optimize the performance for both prediction and annotation accuracies, we conclude that the consensus of all methods (or a majority vote) is best for reads of 400 bp and shorter, while using the intersection of GeneMark and Orphelia predictions is best for reads of 500 bp and longer. We demonstrate that most methods predict over 80% coding (including partially coding) reads on a real human gut sample sequenced by Illumina technology.
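The two combination rules the study recommends — majority vote for shorter reads, intersection (unanimous agreement) for longer reads — reduce to a few lines. The per-read calls below are invented; in the study the intersection was specifically of GeneMark and Orphelia, while this sketch just takes the intersection of whatever calls are supplied.

```python
def combine_predictions(read_calls, mode="majority"):
    """Combine per-tool coding/non-coding calls for one read.
    'majority'     = consensus vote (recommended for reads <= 400 bp);
    'intersection' = all tools must call the read coding (>= 500 bp)."""
    votes = sum(read_calls)
    if mode == "majority":
        return votes * 2 > len(read_calls)
    if mode == "intersection":
        return votes == len(read_calls)
    raise ValueError(f"unknown mode: {mode}")

# Hypothetical calls from three gene predictors for one read
calls = [True, True, False]
short_read_coding = combine_predictions(calls, "majority")      # 2 of 3 agree
long_read_coding = combine_predictions(calls, "intersection")   # not unanimous
```

The intersection rule trades sensitivity for specificity, which pays off on longer reads where each individual predictor is already sensitive.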

  3. Plant water potential improves prediction of empirical stomatal models.

    Directory of Open Access Journals (Sweden)

    William R L Anderegg

    Full Text Available. Climate change is expected to lead to increases in drought frequency and severity, with deleterious effects on many ecosystems. Stomatal responses to changing environmental conditions form the backbone of all ecosystem models, but are based on empirical relationships and are not well tested under drought conditions. Here, we use a dataset of 34 woody plant species spanning global forest biomes to examine the effect of leaf water potential on stomatal conductance and to test the predictive accuracy of three major stomatal models and a recently proposed model. We find that current leaf-level empirical models consistently over-predict stomatal conductance during dry conditions, particularly at low soil water potentials. Furthermore, the recently proposed stomatal conductance model yields increased predictive capability compared to current models, with particular improvement during drought conditions. Our results reveal that including stomatal sensitivity to declining water potential and the consequent impairment of plant water transport will improve predictions during drought conditions, and show that many biomes contain a diversity of plant stomatal strategies that range from risky to conservative stomatal regulation during water stress. Such improvements in stomatal simulation are greatly needed to help unravel and predict the response of ecosystems to future climate extremes.
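One common way to add water-potential sensitivity to an empirical conductance model is a multiplicative down-regulation term. The sketch below uses a Ball-Berry-type form with a sigmoidal limitation factor; the functional form and all parameter values are generic assumptions for illustration, not the specific model tested in the paper.

```python
# Illustrative only: Ball-Berry-style empirical stomatal conductance,
# down-regulated by leaf water potential. Parameters are assumed values.

def stomatal_conductance(A, rh, ca, psi_leaf, g0=0.01, g1=9.0,
                         psi50=-2.0, shape=3.0):
    """mol m-2 s-1. A: net assimilation, rh: relative humidity (0-1),
    ca: CO2 at the leaf surface (ppm), psi_leaf: water potential (MPa, <= 0)."""
    gs_wet = g0 + g1 * A * rh / ca          # classic empirical response
    # sigmoidal loss of conductance as water potential declines
    f_psi = 1.0 / (1.0 + (psi_leaf / psi50) ** shape)
    return gs_wet * f_psi

# Conductance drops sharply between mild and severe water stress.
print(stomatal_conductance(10, 0.6, 400, -0.5))   # well-watered
print(stomatal_conductance(10, 0.6, 400, -3.0))   # droughted
```

Without the `f_psi` term the model would return the same conductance in both cases, which is exactly the over-prediction bias the abstract describes.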

  4. Integrating geophysics and hydrology for reducing the uncertainty of groundwater model predictions and improved prediction performance

    DEFF Research Database (Denmark)

    Christensen, Nikolaj Kruse; Christensen, Steen; Ferre, Ty

    A major purpose of groundwater modeling is to help decision-makers in efforts to manage the natural environment. Increasingly, it is recognized that both the predictions of interest and their associated uncertainties should be quantified to support robust decision making. In particular, decision [...] the integration of geophysical data in the construction of a groundwater model increases the prediction performance. We suggest that modelers should perform a hydrogeophysical “test-bench” analysis of the likely value of geophysics data for improving groundwater model prediction performance before actually [...] and the resulting predictions can be compared with predictions from the ‘true’ model. By performing this analysis we expect to give the modeler insight into how the uncertainty of model-based prediction can be reduced.

  5. Improving orbit prediction accuracy through supervised machine learning

    Science.gov (United States)

    Peng, Hao; Bai, Xiaoli

    2018-05-01

    Due to the lack of information such as the space environment condition and resident space objects' (RSOs') body characteristics, current orbit predictions that are grounded solely on physics-based models may fail to achieve the accuracy required for collision avoidance, and have already led to satellite collisions. This paper presents a methodology to predict RSOs' trajectories with higher accuracy than current methods. Inspired by machine learning (ML) theory, in which models are learned from large amounts of observed data and prediction is conducted without explicitly modeling space objects and the space environment, the proposed ML approach integrates physics-based orbit prediction algorithms with a learning-based process that focuses on reducing the prediction errors. Using a simulation-based space catalog environment as the test bed, the paper demonstrates three types of generalization capability for the proposed ML approach: (1) the ML model can be used to improve the same RSO's orbit information that is not available during the learning process but shares the same time interval as the training data; (2) the ML model can be used to improve predictions of the same RSO at future epochs; and (3) the ML model based on one RSO can be applied to other RSOs that share some common features.
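The core idea — learn a correction for the physics-based model's errors rather than replace the physics — can be sketched with a toy example. The "physics model" output, the error data, and the least-squares correction below are all stand-ins, not the paper's actual ML pipeline.

```python
# Minimal sketch of ML-corrected orbit prediction: fit past prediction
# errors of a physics-based propagator, then subtract the predicted error.
# Ordinary least squares on (epoch, error) pairs stands in for the learner.

def fit_linear(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Historical along-track errors (km) of the physics model at epochs 1..5,
# growing roughly linearly -- fabricated toy numbers.
epochs = [1, 2, 3, 4, 5]
errors = [0.11, 0.19, 0.32, 0.41, 0.52]
slope, intercept = fit_linear(epochs, errors)

def corrected_prediction(physics_pred, epoch):
    """Physics-based position estimate minus the learned error model."""
    return physics_pred - (slope * epoch + intercept)

# Correct a physics-based position estimate (km) at the future epoch 6.
print(corrected_prediction(7000.0, 6))
```

Generalization cases (2) and (3) in the abstract correspond to applying such a learned error model at later epochs or to other objects with similar features.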

  6. Innovative predictive maintenance concepts to improve life cycle management

    NARCIS (Netherlands)

    Tinga, Tiedo

    2014-01-01

    For naval systems with typically long service lives, high sustainment costs and strict availability requirements, an effective and efficient life cycle management process is very important. In this paper four approaches are discussed to improve that process: physics of failure based predictive

  7. Kinetics of gas to particle conversion in the NH3-HCl system

    Energy Technology Data Exchange (ETDEWEB)

    Luria, M; Cohen, B

    1980-01-01

    Particle formation in the reaction of NH3 and HCl under 1 atm of N2 at 25 °C was studied in a flow reactor. The critical concentration below which no particles can form was found to be 3.5 x 10^14 molecule/cm^3 for (NH3) = (HCl). Above this concentration, the gas-particle conversion percentage increases rapidly, approaching 100%.

  8. Improving urban wind flow predictions through data assimilation

    Science.gov (United States)

    Sousa, Jorge; Gorle, Catherine

    2017-11-01

    Computational fluid dynamics is fundamentally important to several aspects of the design of sustainable and resilient urban environments. The predicted flow pattern, for example, can help determine pedestrian wind comfort, air quality, optimal building ventilation strategies, and wind loading on buildings. However, the significant variability and uncertainty in the boundary conditions pose a challenge when interpreting results as a basis for design decisions. To improve our understanding of the uncertainties in the models and to develop better predictive tools, we started a pilot field measurement campaign on Stanford University's campus combined with a detailed numerical prediction of the wind flow. The experimental data is being used to investigate the potential use of data assimilation and inverse techniques to better characterize the uncertainty in the results and improve the confidence in current wind flow predictions. We consider the incoming wind direction and magnitude as unknown parameters and perform a set of Reynolds-averaged Navier-Stokes simulations to build a polynomial chaos expansion response surface at each sensor location. We subsequently use an inverse ensemble Kalman filter to retrieve an estimate of the probability density function of the inflow parameters. Once these distributions are obtained, the forward analysis is repeated to obtain predictions for the flow field in the entire urban canopy, and the results are compared with the experimental data. We would like to acknowledge high-performance computing support from Yellowstone (ark:/85065/d7wd3xhc) provided by NCAR.
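The inverse step — pulling an unknown inflow parameter toward agreement with a sensor reading — can be illustrated with a highly simplified one-dimensional ensemble Kalman update. The linear "forward model", the prior, and the observation below are toy stand-ins for the RANS response surface and field data; none of the numbers come from the study.

```python
import random

# Toy sketch of estimating an unknown inflow wind speed from one noisy
# sensor via a 1-D perturbed-observation ensemble Kalman update.

random.seed(0)

def forward_model(inflow_speed):
    # stand-in for the CFD response at the sensor location
    return 0.8 * inflow_speed

prior = [random.gauss(5.0, 1.0) for _ in range(200)]    # prior ensemble (m/s)
obs, obs_var = 3.6, 0.1 ** 2                            # sensor reading

preds = [forward_model(x) for x in prior]
mean_x = sum(prior) / len(prior)
mean_y = sum(preds) / len(preds)
cov_xy = sum((x - mean_x) * (y - mean_y)
             for x, y in zip(prior, preds)) / (len(prior) - 1)
var_y = sum((y - mean_y) ** 2 for y in preds) / (len(preds) - 1)
gain = cov_xy / (var_y + obs_var)

# each member assimilates a perturbed copy of the observation
posterior = [x + gain * (obs + random.gauss(0, 0.1) - forward_model(x))
             for x in prior]
print(sum(posterior) / len(posterior))   # pulled toward obs/0.8 = 4.5 m/s
```

In the study's setting the scalar forward model is replaced by the polynomial chaos response surface at each sensor, and the state includes wind direction as well as magnitude.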

  9. Improved Wind Speed Prediction Using Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    ZHANG, Y.

    2018-05-01

    Full Text Available. The wind power industry plays an important role in promoting the development of the low-carbon economy and energy transformation worldwide. However, the randomness and volatility of wind speed series restrict the healthy development of the wind power industry. Accurate wind speed prediction is the key to realizing stable wind power integration and to guaranteeing the safe operation of the power system. In this paper, combining Empirical Mode Decomposition (EMD), the Radial Basis Function neural network (RBF), and the Least Squares Support Vector Machine (LS-SVM), an improved wind speed prediction model (EMD-RBF-LS-SVM) is proposed. The prediction results indicate that, compared with the traditional prediction models (RBF, LS-SVM), the EMD-RBF-LS-SVM model can weaken random fluctuations to a certain extent and significantly improve short-term wind speed prediction accuracy. In short, this research will significantly reduce the impact of wind power instability on the power grid, help balance grid supply and demand, reduce operating costs in grid-connected systems, and enhance the market competitiveness of wind power.
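The decompose-forecast-recombine pattern behind such EMD hybrids can be sketched as below. A true EMD would sift the series into intrinsic mode functions (e.g. via the PyEMD package); here a simple moving average stands in for the slow component so the example stays dependency-free, and deliberately naive forecasters replace the RBF and LS-SVM learners.

```python
# Sketch of decompose-then-forecast: split the series into a slow and a
# fast component, forecast each separately, and recombine. All choices
# below are illustrative simplifications of the EMD-RBF-LS-SVM pipeline.

def moving_average(x, w=3):
    return [sum(x[max(0, i - w + 1):i + 1]) / len(x[max(0, i - w + 1):i + 1])
            for i in range(len(x))]

def forecast_next(series):
    trend = moving_average(series)                       # slow component
    detail = [a - b for a, b in zip(series, trend)]      # fast component
    trend_next = trend[-1] + (trend[-1] - trend[-2])     # linear extrapolation
    detail_next = detail[-1]                             # persistence
    return trend_next + detail_next                      # recombine

wind_speed = [5.1, 5.4, 6.0, 5.8, 6.3, 6.1, 6.6]         # m/s, toy data
print(forecast_next(wind_speed))
```

The benefit claimed in the abstract comes from the decomposition: each component is smoother than the raw series, so each per-component model has an easier forecasting task.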

  10. Solar radio proxies for improved satellite orbit prediction

    Science.gov (United States)

    Yaya, Philippe; Hecker, Louis; Dudok de Wit, Thierry; Fèvre, Clémence Le; Bruinsma, Sean

    2017-12-01

    Specification and forecasting of solar drivers to thermosphere density models is critical for satellite orbit prediction and debris avoidance. Satellite operators routinely forecast orbits up to 30 days into the future. This requires forecasts of the drivers of these orbit prediction models, such as the solar extreme-UV (EUV) flux and geomagnetic activity. Most density models use the 10.7 cm radio flux (F10.7 index) as a proxy for solar EUV. However, daily measurements at other centimetric wavelengths have also been performed by the Nobeyama Radio Observatory (Japan) since the 1950s, thereby offering prospects for improving orbit modeling. Here we present a pre-operational service at the Collecte Localisation Satellites company that collects these different observations into one single homogeneous dataset and provides a 30-day forecast on a daily basis. Interpolation and preprocessing algorithms were developed to fill in missing data and remove anomalous values. We compared various empirical time-series prediction techniques and selected a multi-wavelength non-recursive analogue neural network. The prediction of the 30 cm flux, and to a lesser extent that of the 10.7 cm flux, performs better than NOAA's present prediction of the 10.7 cm flux, especially during periods of high solar activity. In addition, we find that the DTM-2013 density model (Drag Temperature Model) performs better with (past and predicted) values of the 30 cm radio flux than with the 10.7 cm flux.

  11. Improved hybrid optimization algorithm for 3D protein structure prediction.

    Science.gov (United States)

    Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang

    2014-07-01

    A new improved hybrid optimization algorithm, PGATS, based on the toy off-lattice model, is presented for three-dimensional protein structure prediction. The algorithm combines particle swarm optimization (PSO), a genetic algorithm (GA), and tabu search (TS), together with several improvement strategies: a stochastic disturbance factor is added to the particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are changed to a random linear method; and the tabu search algorithm is improved by appending a mutation operator. Through this combination of strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be cast as a global optimization problem with multiple extrema and multiple parameters; this is the theoretical principle of the hybrid optimization algorithm proposed in this paper. The algorithm combines local search and global search, which overcomes the shortcomings of a single algorithm and gives full play to the advantages of each. The method is validated on the widely used standard Fibonacci sequences and on real protein sequences. Experiments show that the proposed method outperforms single algorithms in the accuracy of the computed protein sequence energy value, proving it an effective way to predict protein structure.
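The PSO component with a stochastic disturbance term can be sketched as follows. A smooth 2-D test function stands in for the off-lattice protein energy, and all constants (inertia, acceleration, kick size) are illustrative assumptions, not the paper's settings.

```python
import random

# Toy sketch of PSO with a "stochastic disturbance": a small random kick
# added to each velocity update to help escape local minima. Minimizing a
# quadratic stands in for the off-lattice protein energy.

random.seed(1)

def energy(p):                     # stand-in for the off-lattice energy
    return (p[0] - 1) ** 2 + (p[1] + 2) ** 2

def pso(n=20, iters=200, w=0.7, c1=1.5, c2=1.5, kick=0.05):
    pos = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    best = [p[:] for p in pos]                 # personal bests
    gbest = min(best, key=energy)[:]           # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(2):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (best[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d])
                             + random.gauss(0, kick))   # stochastic disturbance
                pos[i][d] += vel[i][d]
            if energy(pos[i]) < energy(best[i]):
                best[i] = pos[i][:]
                if energy(best[i]) < energy(gbest):
                    gbest = best[i][:]
    return gbest

print(pso())   # converges near the minimum at (1, -2)
```

In PGATS this global-search stage would be interleaved with the GA's random-linear crossover/mutation and a mutation-augmented tabu search for local refinement.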

  12. Improving Saliency Models by Predicting Human Fixation Patches

    KAUST Repository

    Dubey, Rachit

    2015-04-16

    There is growing interest in studying the Human Visual System (HVS) to supplement and improve the performance of computer vision tasks. A major challenge for current visual saliency models is predicting saliency in cluttered scenes (i.e., a high false positive rate). In this paper, we propose a fixation patch detector that predicts image patches that contain human fixations with high probability. Our proposed model detects sparse fixation patches with an accuracy of 84% and eliminates non-fixation patches with an accuracy of 84%, demonstrating that low-level image features can indeed be used to short-list and identify human fixation patches. We then show how these detected fixation patches can be used as saliency priors for popular saliency models, thus reducing false positives while maintaining true positives. Extensive experimental results show that our proposed approach allows state-of-the-art saliency methods to achieve better prediction performance on benchmark datasets.

  13. Improving Saliency Models by Predicting Human Fixation Patches

    KAUST Repository

    Dubey, Rachit; Dave, Akshat; Ghanem, Bernard

    2015-01-01

    There is growing interest in studying the Human Visual System (HVS) to supplement and improve the performance of computer vision tasks. A major challenge for current visual saliency models is predicting saliency in cluttered scenes (i.e., a high false positive rate). In this paper, we propose a fixation patch detector that predicts image patches that contain human fixations with high probability. Our proposed model detects sparse fixation patches with an accuracy of 84% and eliminates non-fixation patches with an accuracy of 84%, demonstrating that low-level image features can indeed be used to short-list and identify human fixation patches. We then show how these detected fixation patches can be used as saliency priors for popular saliency models, thus reducing false positives while maintaining true positives. Extensive experimental results show that our proposed approach allows state-of-the-art saliency methods to achieve better prediction performance on benchmark datasets.

  14. Machine Learning Principles Can Improve Hip Fracture Prediction

    DEFF Research Database (Denmark)

    Kruse, Christian; Eiken, Pia; Vestergaard, Peter

    2017-01-01

    Apply machine learning principles to predict hip fractures and estimate predictor importance in dual-energy X-ray absorptiometry (DXA)-scanned men and women. Dual-energy X-ray absorptiometry data from two Danish regions between 1996 and 2006 were combined with national Danish patient data. [...] 0.89 [0.82; 0.95], but with poor calibration at higher probabilities. A ten-predictor subset (BMD, biochemical cholesterol and liver function tests, penicillin use and osteoarthritis diagnoses) achieved a test AUC of 0.86 [0.78; 0.94] using an “xgbTree” model. Machine learning can improve hip fracture prediction beyond logistic regression using ensemble models. Compiling data from international cohorts with longer follow-up and performing similar machine learning procedures has the potential to further improve discrimination and calibration.

  15. Predictive Maintenance: One key to improved power plant availability

    International Nuclear Information System (INIS)

    Mobley; Allen, J.W.

    1986-01-01

    Recent developments in microprocessor technology have provided the ability to routinely monitor the actual mechanical condition of all rotating and reciprocating machinery, and the process variables (i.e., pressure, temperature, flow, etc.) of other process equipment, within an operating electric power generating plant. This direct correlation between frequency-domain vibration and the actual mechanical condition of machinery, together with trending of process variables for non-rotating equipment, can provide the 'key' to improving availability, reliability, and thermal efficiency, and provide the baseline information necessary for developing a realistic plan for extending the useful life of power plants. The premise of utilizing microprocessor-based Predictive Maintenance to improve power plant operation has been proven by a number of utilities. This paper provides a comprehensive discussion of the TEC approach to Predictive Maintenance and examples of successful programs.

  16. Improved prediction of breast cancer outcome by identifying heterogeneous biomarkers.

    Science.gov (United States)

    Choi, Jonghwan; Park, Sanghyun; Yoon, Youngmi; Ahn, Jaegyoon

    2017-11-15

    Identification of genes that can be used to predict prognosis in patients with cancer is important in that it can lead to improved therapy and can also promote our understanding of tumor progression at the molecular level. One of the common but fundamental problems that make identification of prognostic genes and prediction of cancer outcomes difficult is the heterogeneity of patient samples. To reduce the effect of sample heterogeneity, we clustered data samples using the K-means algorithm and applied a modified PageRank to functional interaction (FI) networks weighted using the gene expression values of the samples in each cluster. Hub genes among the resulting prioritized genes were selected as biomarkers to predict the prognosis of samples. This process outperformed traditional feature selection methods as well as several network-based prognostic gene selection methods when applied to Random Forest. We were able to find many cluster-specific prognostic genes for each dataset. Functional study showed that distinct biological processes were enriched in each cluster, which seems to reflect different aspects of tumor progression or oncogenesis among distinct patient groups. Taken together, these results support the hypothesis that our approach can effectively identify heterogeneous prognostic genes that are complementary to each other, improving prediction accuracy. Availability: https://github.com/mathcom/CPR. Contact: jgahn@inu.ac.kr. Supplementary data are available at Bioinformatics online.
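Prioritizing genes by PageRank over an expression-weighted interaction network can be sketched on a tiny toy graph. The graph, weights, and damping factor below are illustrative, and the paper's actual modification of PageRank may differ from this plain weighted variant.

```python
# Minimal weighted PageRank over a toy functional-interaction network.
# Edge weights stand in for cluster-specific expression weighting.

def pagerank(edges, damping=0.85, iters=100):
    nodes = sorted({n for e in edges for n in e[:2]})
    rank = {n: 1.0 / len(nodes) for n in nodes}
    out_w = {n: sum(w for a, b, w in edges if a == n) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for a, b, w in edges:
            new[b] += damping * rank[a] * w / out_w[a]   # weight-proportional
        rank = new
    return rank

# directed, expression-weighted interactions (fabricated example genes)
edges = [("TP53", "MDM2", 0.9), ("MDM2", "TP53", 0.4),
         ("TP53", "BRCA1", 0.6), ("BRCA1", "TP53", 0.8),
         ("EGFR", "TP53", 0.3)]
ranks = pagerank(edges)
print(max(ranks, key=ranks.get))   # the hub gene
```

In the paper's pipeline the top-ranked hub genes per patient cluster then become the features fed to the Random Forest classifier.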

  17. Identification of the chlE gene encoding oxygen-independent Mg-protoporphyrin IX monomethyl ester cyclase in cyanobacteria.

    Science.gov (United States)

    Yamanashi, Kaori; Minamizaki, Kei; Fujita, Yuichi

    2015-08-07

    The fifth ring (E-ring) of chlorophyll (Chl) a is produced by Mg-protoporphyrin IX monomethyl ester (MPE) cyclase. There are two evolutionarily unrelated MPE cyclases: the oxygen-independent (BchE) and the oxygen-dependent (ChlA/AcsF) enzymes. Although ChlA is the sole MPE cyclase in Synechocystis PCC 6803, it remains unclear whether BchE exists in cyanobacteria. A BLAST search suggests that only a few cyanobacteria possess bchE. Here, we report that two bchE candidate genes from Cyanothece strains PCC 7425 and PCC 7822 restore photosynthetic growth and bacteriochlorophyll production in a bchE-lacking mutant of Rhodobacter capsulatus. We termed these cyanobacterial bchE orthologs "chlE."

  18. Adding propensity scores to pure prediction models fails to improve predictive performance

    Directory of Open Access Journals (Sweden)

    Amy S. Nowacki

    2013-08-01

    Full Text Available. Background. Propensity score usage seems to be growing in popularity, leading researchers to question the possible role of propensity scores in prediction modeling, despite the lack of a theoretical rationale. It is suspected that such requests are due to a lack of differentiation between the goals of predictive modeling and causal inference modeling. Therefore, the purpose of this study is to formally examine the effect of propensity scores on predictive performance. Our hypothesis is that a multivariable regression model that adjusts for all covariates will perform as well as or better than models utilizing propensity scores with respect to model discrimination and calibration. Methods. The most commonly encountered statistical scenarios for medical prediction (logistic and proportional hazards regression) were used to investigate this research question. Random cross-validation was performed 500 times to correct for optimism. The multivariable regression models adjusting for all covariates were compared with models that included adjustment for, or weighting with, the propensity scores. The methods were compared based on three predictive performance measures: (1) concordance indices; (2) Brier scores; and (3) calibration curves. Results. Multivariable models adjusting for all covariates had the highest average concordance index, the lowest average Brier score, and the best calibration. Propensity score adjustment and inverse probability weighting models without adjustment for all covariates performed worse than full models and failed to improve predictive performance over full covariate adjustment. Conclusion. Propensity score techniques did not improve prediction performance measures beyond multivariable adjustment. Propensity scores are not recommended if the analytical goal is pure prediction modeling.
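One of the three comparison measures, the Brier score, is simple enough to show directly: the mean squared difference between predicted probability and the 0/1 outcome, with lower values better. The probabilities below are fabricated purely to illustrate a full-covariate model scoring better than a propensity-score-based one.

```python
# Brier score: mean squared error of probabilistic predictions (lower = better).

def brier(probs, outcomes):
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)

outcomes    = [1, 0, 1, 1, 0, 0]
full_model  = [0.8, 0.2, 0.7, 0.9, 0.1, 0.3]   # adjusts for all covariates
ps_adjusted = [0.6, 0.4, 0.5, 0.7, 0.3, 0.5]   # propensity-score model (toy)

print(brier(full_model, outcomes))    # lower (better)
print(brier(ps_adjusted, outcomes))
```

The study's actual comparison averaged such scores over 500 random cross-validation splits to correct for optimism.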

  19. Developing Predictive Maintenance Expertise to Improve Plant Equipment Reliability

    International Nuclear Information System (INIS)

    Wurzbach, Richard N.

    2002-01-01

    On-line equipment condition monitoring is a critical component of the world-class production and safety histories of many successful nuclear plant operators. From addressing availability and operability concerns of nuclear safety-related equipment to increasing profitability through support-system reliability and reduced maintenance costs, Predictive Maintenance programs have increasingly become a vital contribution to the maintenance and operation decisions of nuclear facilities. In recent years, significant advancements have been made in the quality and portability of many of the instruments being used, and software improvements have been made as well. However, the single most influential component of the success of these programs is the impact of a trained and experienced team of personnel putting this technology to work. Changes in the nature of the power generation industry brought on by competition, mergers, and acquisitions have taken the historically stable personnel environment of power generation and created a very dynamic situation. As a result, many facilities have seen significant turnover in key positions, including predictive maintenance personnel. It has become a challenge for many nuclear operators to maintain the consistent contribution of quality data and information from predictive maintenance that has become important in the overall equipment decision process. These challenges can be met through the implementation of quality training for predictive maintenance personnel and regular updating and re-certification of key technology holders. The use of data management tools and services aids in the sharing of information across sites within an operating company, and with experts who can contribute value-added data management and analysis. The overall effectiveness of predictive maintenance programs can be improved through the incorporation of newly developed comprehensive technology training courses.
These courses address the use of

  20. Solar radio proxies for improved satellite orbit prediction

    Directory of Open Access Journals (Sweden)

    Yaya Philippe

    2017-01-01

    Full Text Available. Specification and forecasting of solar drivers to thermosphere density models is critical for satellite orbit prediction and debris avoidance. Satellite operators routinely forecast orbits up to 30 days into the future. This requires forecasts of the drivers of these orbit prediction models, such as the solar extreme-UV (EUV) flux and geomagnetic activity. Most density models use the 10.7 cm radio flux (F10.7 index) as a proxy for solar EUV. However, daily measurements at other centimetric wavelengths have also been performed by the Nobeyama Radio Observatory (Japan) since the 1950s, thereby offering prospects for improving orbit modeling. Here we present a pre-operational service at the Collecte Localisation Satellites company that collects these different observations into one single homogeneous dataset and provides a 30-day forecast on a daily basis. Interpolation and preprocessing algorithms were developed to fill in missing data and remove anomalous values. We compared various empirical time-series prediction techniques and selected a multi-wavelength non-recursive analogue neural network. The prediction of the 30 cm flux, and to a lesser extent that of the 10.7 cm flux, performs better than NOAA's present prediction of the 10.7 cm flux, especially during periods of high solar activity. In addition, we find that the DTM-2013 density model (Drag Temperature Model) performs better with (past and predicted) values of the 30 cm radio flux than with the 10.7 cm flux.

  1. Benthic Light Availability Improves Predictions of Riverine Primary Production

    Science.gov (United States)

    Kirk, L.; Cohen, M. J.

    2017-12-01

    Light is a fundamental control on photosynthesis, and often the only control strongly correlated with gross primary production (GPP) in streams and rivers; yet it has received far less attention than nutrients. Because benthic light is difficult to measure in situ, surrogates such as open sky irradiance are often used. Several studies have now refined methods to quantify canopy and water column attenuation of open sky light in order to estimate the amount of light that actually reaches the benthos. Given the additional effort that measuring benthic light requires, we should ask if benthic light always improves our predictions of GPP compared to just open sky irradiance. We use long-term, high-resolution dissolved oxygen, turbidity, dissolved organic matter (fDOM), and irradiance data from streams and rivers in north-central Florida, US across gradients of size and color to build statistical models of benthic light that predict GPP. Preliminary results on a large, clear river show only modest model improvements over open sky irradiance, even in heavily canopied reaches with pulses of tannic water. However, in another spring-fed river with greater connectivity to adjacent wetlands - and hence larger, more frequent pulses of tannic water - the model improved dramatically with the inclusion of fDOM (model R2 improved from 0.28 to 0.68). River shade modeling efforts also suggest that knowing benthic light will greatly enhance our ability to predict GPP in narrower, forested streams flowing in particular directions. Our objective is to outline conditions where an assessment of benthic light conditions would be necessary for riverine metabolism studies or management strategies.
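The quantity at issue — light actually reaching the benthos rather than open-sky irradiance — is commonly estimated by attenuating surface light for canopy shading and by exponential (Beer-Lambert) decay through the water column, with the attenuation coefficient increased by turbidity and fDOM. The functional form and all coefficients below are generic illustrative assumptions, not this study's fitted model.

```python
import math

# Sketch of benthic PAR estimation: canopy shading plus Beer-Lambert
# attenuation, with turbidity and fDOM raising the attenuation coefficient.

def benthic_par(open_sky_par, canopy_frac, depth_m, turbidity, fdom,
                kd0=0.2, k_turb=0.05, k_fdom=0.1):
    """Benthic PAR (same units as open_sky_par); coefficients are assumed."""
    kd = kd0 + k_turb * turbidity + k_fdom * fdom   # diffuse attenuation, 1/m
    surface_par = open_sky_par * (1.0 - canopy_frac)
    return surface_par * math.exp(-kd * depth_m)

# A tannic-water pulse (high fDOM) sharply reduces light reaching the benthos.
print(benthic_par(2000, 0.3, 2.0, 1.0, 0.5))   # clear conditions
print(benthic_par(2000, 0.3, 2.0, 1.0, 5.0))   # during a tannin pulse
```

This is why including fDOM improved the GPP model so strongly on the wetland-connected river: open-sky irradiance alone cannot see the tannin pulses.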

  2. Combining Gene Signatures Improves Prediction of Breast Cancer Survival

    Science.gov (United States)

    Zhao, Xi; Naume, Bjørn; Langerød, Anita; Frigessi, Arnoldo; Kristensen, Vessela N.; Børresen-Dale, Anne-Lise; Lingjærde, Ole Christian

    2011-01-01

    Background: Several gene sets for prediction of breast cancer survival have been derived from whole-genome mRNA expression profiles. Here, we develop a statistical framework to explore whether combination of the information from such sets may improve prediction of recurrence and breast cancer specific death in early-stage breast cancers. Microarray data from two clinically similar cohorts of breast cancer patients are used as training (n = 123) and test set (n = 81), respectively. Gene sets from eleven previously published gene signatures are included in the study. Principal Findings: To investigate the relationship between breast cancer survival and gene expression on a particular gene set, a Cox proportional hazards model is applied using partial likelihood regression with an L2 penalty to avoid overfitting and using cross-validation to determine the penalty weight. The fitted models are applied to an independent test set to obtain a predicted risk for each individual and each gene set. Hierarchical clustering of the test individuals on the basis of the vector of predicted risks results in two clusters with distinct clinical characteristics in terms of the distribution of molecular subtypes, ER, PR status, TP53 mutation status and histological grade category, and associated with significantly different survival probabilities (recurrence: p = 0.005; breast cancer death: p = 0.014). Finally, principal components analysis of the gene signatures is used to derive combined predictors used to fit a new Cox model. This model classifies test individuals into two risk groups with distinct survival characteristics (recurrence: p = 0.003; breast cancer death: p = 0.001). The latter classifier outperforms all the individual gene signatures, as well as Cox models based on traditional clinical parameters and the Adjuvant! Online for survival prediction. Conclusion: Combining the predictive strength of multiple gene signatures improves prediction of breast

  3. Combining gene signatures improves prediction of breast cancer survival.

    Directory of Open Access Journals (Sweden)

    Xi Zhao

    Full Text Available. BACKGROUND: Several gene sets for prediction of breast cancer survival have been derived from whole-genome mRNA expression profiles. Here, we develop a statistical framework to explore whether combination of the information from such sets may improve prediction of recurrence and breast cancer specific death in early-stage breast cancers. Microarray data from two clinically similar cohorts of breast cancer patients are used as training (n = 123) and test set (n = 81), respectively. Gene sets from eleven previously published gene signatures are included in the study. PRINCIPAL FINDINGS: To investigate the relationship between breast cancer survival and gene expression on a particular gene set, a Cox proportional hazards model is applied using partial likelihood regression with an L2 penalty to avoid overfitting and using cross-validation to determine the penalty weight. The fitted models are applied to an independent test set to obtain a predicted risk for each individual and each gene set. Hierarchical clustering of the test individuals on the basis of the vector of predicted risks results in two clusters with distinct clinical characteristics in terms of the distribution of molecular subtypes, ER, PR status, TP53 mutation status and histological grade category, and associated with significantly different survival probabilities (recurrence: p = 0.005; breast cancer death: p = 0.014). Finally, principal components analysis of the gene signatures is used to derive combined predictors used to fit a new Cox model. This model classifies test individuals into two risk groups with distinct survival characteristics (recurrence: p = 0.003; breast cancer death: p = 0.001). The latter classifier outperforms all the individual gene signatures, as well as Cox models based on traditional clinical parameters and Adjuvant! Online, for survival prediction. CONCLUSION: Combining the predictive strength of multiple gene signatures improves

  4. Improvement of energy expenditure prediction from heart rate during running

    International Nuclear Information System (INIS)

    Charlot, Keyne; Borne, Rachel; Richalet, Jean-Paul; Chapelot, Didier; Pichon, Aurélien; Cornolo, Jérémy; Brugniaux, Julien Vincent

    2014-01-01

    We aimed to develop new equations that predict exercise-induced energy expenditure (EE) during running more accurately than previous ones, by including new parameters such as fitness level, body composition and/or running intensity in addition to heart rate (HR). Original equations predicting EE were created from data obtained during three running intensities (25%, 50% and 70% of HR reserve) performed by 50 subjects. Five equations were retained according to their accuracy, assessed from error rates, interchangeability and correlation analyses: one containing only basic parameters, two containing VO2max or speed at VO2max, and two including running speed with or without HR. Equation accuracy was further tested in an independent sample during a 40 min validation test at 50% of HR reserve. It appeared that: (1) the new basic equation was more accurate than pre-existing equations (R^2 = 0.809 versus 0.737, respectively); (2) the prediction of EE was more accurate with the addition of VO2max (R^2 = 0.879); and (3) the equations containing running speed were the most accurate and were considered to have good agreement with indirect calorimetry. In conclusion, EE estimation during running might be significantly improved by including running speed in the predictive models, a parameter readily available with treadmills or GPS.
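The shape of such a predictive equation — a linear combination of heart rate plus the newly added running speed — can be illustrated as below. The coefficients are made up for demonstration; the paper's fitted equations are not reproduced in the abstract.

```python
# Illustrative linear EE predictor in the spirit of the study's equations.
# All coefficients are hypothetical, chosen only to show the model form.

def predict_ee(hr_bpm, speed_kmh, mass_kg):
    """Rough energy expenditure estimate in kcal/min (hypothetical fit)."""
    return -8.0 + 0.05 * hr_bpm + 0.4 * speed_kmh + 0.05 * mass_kg

# At the same heart rate, the speed term separates a faster, more economical
# runner from a slower one -- the information HR alone cannot provide.
print(predict_ee(150, 10.0, 70))   # kcal/min
```

A real fit would estimate the coefficients by regression against indirect calorimetry, as the study did across the three running intensities.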

  5. Healthy, wealthy, and wise: retirement planning predicts employee health improvements.

    Science.gov (United States)

    Gubler, Timothy; Pierce, Lamar

    2014-09-01

    Are poor physical and financial health driven by the same underlying psychological factors? We found that the decision to contribute to a 401(k) retirement plan predicted whether an individual acted to correct poor physical-health indicators revealed during an employer-sponsored health examination. Using this examination as a quasi-exogenous shock to employees' personal-health knowledge, we examined which employees were more likely to improve their health, controlling for differences in initial health, demographics, job type, and income. We found that existing retirement-contribution patterns and future health improvements were highly correlated. Employees who saved for the future by contributing to a 401(k) showed improvements in their abnormal blood-test results and health behaviors approximately 27% more often than noncontributors did. These findings are consistent with an underlying individual time-discounting trait that is both difficult to change and domain interdependent, and that predicts long-term individual behaviors in multiple dimensions. © The Author(s) 2014.

  6. Improving Permafrost Hydrology Prediction Through Data-Model Integration

    Science.gov (United States)

    Wilson, C. J.; Andresen, C. G.; Atchley, A. L.; Bolton, W. R.; Busey, R.; Coon, E.; Charsley-Groffman, L.

    2017-12-01

    The CMIP5 Earth System Models were unable to adequately predict the fate of the 16 GT of permafrost carbon in a warming climate due to poor representation of Arctic ecosystem processes. The DOE Office of Science Next Generation Ecosystem Experiment (NGEE-Arctic) project aims to reduce uncertainty in the Arctic carbon cycle and its impact on the Earth's climate system through improved representation of the coupled physical, chemical and biological processes that drive how much buried carbon will be converted to CO2 and CH4, how fast this will happen, which form will dominate, and the degree to which increased plant productivity will offset increased soil carbon emissions. These processes fundamentally depend on the permafrost thaw rate and its influence on surface and subsurface hydrology through thermal erosion, land subsidence and changes to groundwater flow pathways as soil, bedrock and alluvial pore ice and massive ground ice melt. LANL and its NGEE colleagues are co-developing data and models to better understand controls on permafrost degradation and improve prediction of the evolution of permafrost and its impact on Arctic hydrology. The LANL Advanced Terrestrial Simulator (ATS) was built using a state-of-the-art HPC software framework to enable the first fully coupled 3-dimensional surface-subsurface thermal-hydrology and land surface deformation simulations of the evolution of the physical Arctic environment. Here we show how field data, including hydrology, snow, vegetation, geochemistry and soil properties, are informing the development and application of the ATS to improve understanding of controls on permafrost stability and permafrost hydrology. The ATS is being used to inform parameterizations of complex coupled physical, ecological and biogeochemical processes for implementation in the DOE ACME land model, to better predict the role of changing Arctic hydrology in the global climate system. LA-UR-17-26566.

  7. Improving consensus contact prediction via server correlation reduction.

    Science.gov (United States)

    Gao, Xin; Bu, Dongbo; Xu, Jinbo; Li, Ming

    2009-05-06

    Protein inter-residue contacts play a crucial role in the determination and prediction of protein structures. Previous studies on contact prediction indicate that although template-based consensus methods outperform sequence-based methods on targets with typical templates, such consensus methods perform poorly on new fold targets. However, we find that even for new fold targets, the models generated by threading programs can contain many true contacts. The challenge is how to identify them. In this paper, we develop an integer linear programming model for consensus contact prediction. In contrast to the simple majority voting method assuming that all the individual servers are equally important and independent, the newly developed method evaluates their correlation by using maximum likelihood estimation and extracts independent latent servers from them by using principal component analysis. An integer linear programming method is then applied to assign a weight to each latent server to maximize the difference between true contacts and false ones. The proposed method is tested on the CASP7 data set. If the top L/5 predicted contacts are evaluated, where L is the protein size, the average accuracy is 73%, which is much higher than that of any previously reported study. Moreover, if only the 15 new fold CASP7 targets are considered, our method achieves an average accuracy of 37%, which is much better than that of the majority voting method, SVM-LOMETS, SVM-SEQ, and SAM-T06. These methods demonstrate an average accuracy of 13.0%, 10.8%, 25.8% and 21.2%, respectively. Reducing server correlation and optimally combining independent latent servers show a significant improvement over the traditional consensus methods. This approach can hopefully provide a powerful tool for protein structure refinement and prediction.
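The intuition behind correlation reduction can be shown with a toy example: two near-duplicate servers outvote an independent, correct one under majority voting, while down-weighting the correlated pair recovers more true contacts. The clique-based weighting below is a crude stand-in for the paper's PCA-plus-integer-programming machinery, and all predictions are invented.

```python
# Toy example: three servers score six residue pairs (1 = predicted contact).
truth    = [1, 1, 0, 0, 1, 0]
server_a = [1, 0, 1, 0, 1, 0]   # wrong on the 2nd and 3rd pairs
server_b = [1, 0, 1, 0, 1, 1]   # near-duplicate of A (same pipeline)
server_c = [1, 1, 0, 0, 1, 0]   # independent server, here fully correct
servers = [server_a, server_b, server_c]

def agreement(x, y):
    return sum(a == b for a, b in zip(x, y)) / len(x)

# Crude correlation reduction: down-weight each server by the size of its
# clique of highly agreeing servers (a stand-in for extracting independent
# latent servers with PCA and weighting them by integer programming).
weights = [1.0 / sum(agreement(s, t) >= 0.8 for t in servers) for s in servers]

def vote(w):
    half = sum(w) / 2
    return [1 if sum(wi * s[i] for s, wi in zip(servers, w)) > half else 0
            for i in range(len(truth))]

maj_acc = agreement(vote([1.0] * 3), truth)   # plain majority voting
wt_acc  = agreement(vote(weights), truth)     # correlation-reduced voting
print(maj_acc, wt_acc)
```

Here servers A and B form a clique (agreement ≥ 0.8) and each receives half weight, so the independent server C is no longer outvoted wherever the clique is wrong.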

  8. Improving consensus contact prediction via server correlation reduction

    Directory of Open Access Journals (Sweden)

    Xu Jinbo

    2009-05-01

    Full Text Available Abstract Background Protein inter-residue contacts play a crucial role in the determination and prediction of protein structures. Previous studies on contact prediction indicate that although template-based consensus methods outperform sequence-based methods on targets with typical templates, such consensus methods perform poorly on new fold targets. However, we find that even for new fold targets, the models generated by threading programs can contain many true contacts. The challenge is how to identify them. Results In this paper, we develop an integer linear programming model for consensus contact prediction. In contrast to the simple majority voting method assuming that all the individual servers are equally important and independent, the newly developed method evaluates their correlation by using maximum likelihood estimation and extracts independent latent servers from them by using principal component analysis. An integer linear programming method is then applied to assign a weight to each latent server to maximize the difference between true contacts and false ones. The proposed method is tested on the CASP7 data set. If the top L/5 predicted contacts are evaluated, where L is the protein size, the average accuracy is 73%, which is much higher than that of any previously reported study. Moreover, if only the 15 new fold CASP7 targets are considered, our method achieves an average accuracy of 37%, which is much better than that of the majority voting method, SVM-LOMETS, SVM-SEQ, and SAM-T06. These methods demonstrate an average accuracy of 13.0%, 10.8%, 25.8% and 21.2%, respectively. Conclusion Reducing server correlation and optimally combining independent latent servers show a significant improvement over the traditional consensus methods. This approach can hopefully provide a powerful tool for protein structure refinement and prediction.

  9. Combining specificity determining and conserved residues improves functional site prediction

    Directory of Open Access Journals (Sweden)

    Gelfand Mikhail S

    2009-06-01

    Full Text Available Abstract Background Predicting the location of functionally important sites from protein sequence and/or structure is a long-standing problem in computational biology. Most current approaches make use of sequence conservation, assuming that amino acid residues conserved within a protein family are most likely to be functionally important. Most often these approaches do not consider many residues that act to define specific sub-functions within a family, or they make no distinction between residues important for function and those more relevant for maintaining structure (e.g., in the hydrophobic core). Many protein families bind and/or act on a variety of ligands, meaning that conserved residues often only bind a common ligand sub-structure or perform general catalytic activities. Results Here we present a novel method for functional site prediction based on identification of conserved positions, as well as those responsible for determining ligand specificity. We define Specificity-Determining Positions (SDPs) as those occupied by conserved residues within sub-groups of proteins in a family having a common specificity, but that differ between groups, and are thus likely to account for specific recognition events. We benchmark the approach on enzyme families of known 3D structure with bound substrates, and find that in nearly all families residues predicted by SDPsite are in contact with the bound substrate, and that the addition of SDPs significantly improves functional site prediction accuracy. We apply SDPsite to various families of proteins containing known three-dimensional structures, but lacking clear functional annotations, and discuss several illustrative examples. Conclusion The results suggest a better means to predict functional details for the thousands of protein structures determined prior to a clear understanding of molecular function.
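The core SDP definition — conserved within each specificity sub-group, different between groups — can be sketched on a toy alignment. The sequences and groupings below are invented, and real SDP detection additionally handles gaps, sequence weighting and statistical significance.

```python
# Hypothetical mini-alignment: two specificity sub-groups of one family.
# Columns conserved everywhere are "conserved"; columns conserved within
# each group but differing between groups are specificity-determining (SDPs).
group1 = ["GKSAL",
          "GKSAI",
          "GKSAL"]
group2 = ["GKTGL",
          "GKTGI",
          "GKTGL"]

def column(seqs, i):
    return [s[i] for s in seqs]

conserved, sdps = [], []
for i in range(len(group1[0])):
    c1, c2 = set(column(group1, i)), set(column(group2, i))
    if len(c1) == 1 and len(c2) == 1:
        if c1 == c2:
            conserved.append(i)      # same residue across the whole family
        else:
            sdps.append(i)           # group-specific residue: an SDP
print(conserved, sdps)
```

Column 4 (L/I mixed within both groups) is reported as neither conserved nor an SDP, which is exactly the distinction the method exploits.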

  10. Introducing Model Predictive Control for Improving Power Plant Portfolio Performance

    DEFF Research Database (Denmark)

    Edlund, Kristian Skjoldborg; Bendtsen, Jan Dimon; Børresen, Simon

    2008-01-01

    This paper introduces a model predictive control (MPC) approach for construction of a controller for balancing the power generation against consumption in a power system. The objective of the controller is to coordinate a portfolio consisting of multiple power plant units in an effort to perform reference tracking and disturbance rejection in an economically optimal way. The performance function is chosen as a mixture of the ℓ1-norm and a linear weighting to model the economics of the system. Simulations show a significant improvement of the performance of the MPC compared to the current...
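A receding-horizon controller of this general shape can be sketched as follows, with strong simplifying assumptions: a scalar plant, an unconstrained quadratic tracking cost in place of the paper's ℓ1-norm-plus-linear economic objective, and invented plant parameters.

```python
import numpy as np

# Assumed scalar plant x+ = a*x + b*u (not the paper's portfolio model).
a, b = 0.9, 0.5
horizon = 10

def mpc_step(x0, reference):
    # Stack the horizon dynamics: x_k = a^k x0 + sum_j a^(k-1-j) b u_j,
    # then pick the input sequence minimizing ||x - r||^2 + rho*||u||^2.
    n = horizon
    G = np.zeros((n, n))
    for k in range(n):
        for j in range(k + 1):
            G[k, j] = a ** (k - j) * b
    free = np.array([a ** (k + 1) * x0 for k in range(n)])
    rho = 1e-3                      # small input penalty for regularization
    A = np.vstack([G, np.sqrt(rho) * np.eye(n)])
    rhs = np.concatenate([reference - free, np.zeros(n)])
    u, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return u[0]                     # receding horizon: apply only move one

# Track a unit step in demanded generation.
x, traj = 0.0, []
ref = np.ones(horizon)
for _ in range(30):
    u = mpc_step(x, ref)
    x = a * x + b * u
    traj.append(x)
print(round(traj[-1], 3))
```

Replacing the least-squares objective with the paper's ℓ1 tracking term would turn each step into a linear program; the receding-horizon structure (optimize over the horizon, apply only the first input, repeat) is unchanged.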

  11. Improved prediction of aerodynamic noise from wind turbines

    Energy Technology Data Exchange (ETDEWEB)

    Guidati, G.; Bareiss, R.; Wagner, S. [Univ. of Stuttgart, Inst. of Aerodynamics and Gasdynamics, Stuttgart (Germany)

    1997-12-31

    This paper focuses on an improved prediction model for inflow-turbulence noise which takes the true airfoil shape into account. Predictions are compared to the results of acoustic measurements on three 2D models of 0.25 m chord. Two of the models have NACA-636xx airfoils of 12% and 18% relative thickness. The third airfoil was acoustically optimized by using the new prediction model. In the experiments the turbulence intensity of the flow was strongly increased by mounting a grid with 60 mm wide meshes and 12 mm thick rods onto the tunnel exhaust nozzle. The sound radiated from the airfoil was distinguished from the tunnel background noise by using an acoustic antenna consisting of a cross array of 36 microphones in total. Application of a standard beam-forming algorithm makes it possible to determine how much noise is radiated from different parts of the models. This procedure normally results in peaks at the leading and trailing edges of the airfoil. The strength of the leading-edge peak is taken as the source strength for inflow-turbulence noise. (LN) 14 refs.

  12. Can biomechanical variables predict improvement in crouch gait?

    Science.gov (United States)

    Hicks, Jennifer L.; Delp, Scott L.; Schwartz, Michael H.

    2011-01-01

    Many patients respond positively to treatments for crouch gait, yet surgical outcomes are inconsistent and unpredictable. In this study, we developed a multivariable regression model to determine if biomechanical variables and other subject characteristics measured during a physical exam and gait analysis can predict which subjects with crouch gait will demonstrate improved knee kinematics on a follow-up gait analysis. We formulated the model and tested its performance by retrospectively analyzing 353 limbs of subjects who walked with crouch gait. The regression model was able to predict which subjects would demonstrate ‘improved’ and ‘unimproved’ knee kinematics with over 70% accuracy, and was able to explain approximately 49% of the variance in subjects’ change in knee flexion between gait analyses. We found that improvement in stance phase knee flexion was positively associated with three variables that were drawn from knowledge about the biomechanical contributors to crouch gait: i) adequate hamstrings lengths and velocities, possibly achieved via hamstrings lengthening surgery, ii) normal tibial torsion, possibly achieved via tibial derotation osteotomy, and iii) sufficient muscle strength. PMID:21616666

  13. Improved nucleic acid descriptors for siRNA efficacy prediction.

    Science.gov (United States)

    Sciabola, Simone; Cao, Qing; Orozco, Modesto; Faustino, Ignacio; Stanton, Robert V

    2013-02-01

    Although considerable progress has been made recently in understanding how gene silencing is mediated by the RNAi pathway, the rational design of effective sequences is still a challenging task. In this article, we demonstrate that including three-dimensional descriptors improved the discrimination between active and inactive small interfering RNAs (siRNAs) in a statistical model. Five descriptor types were used: (i) nucleotide position along the siRNA sequence, (ii) nucleotide composition in terms of presence/absence of specific combinations of di- and trinucleotides, (iii) nucleotide interactions by means of a modified auto- and cross-covariance function, (iv) nucleotide thermodynamic stability derived by the nearest neighbor model representation and (v) nucleic acid structure flexibility. The duplex flexibility descriptors are derived from extended molecular dynamics simulations, which are able to describe the sequence-dependent elastic properties of RNA duplexes, even for non-standard oligonucleotides. The matrix of descriptors was analysed using three statistical packages in R (partial least squares, random forest, and support vector machine), and the most predictive model was implemented in a modeling tool we have made publicly available through SourceForge. Our implementation of new RNA descriptors coupled with appropriate statistical algorithms resulted in improved model performance for the selection of siRNA candidates when compared with publicly available siRNA prediction tools and previously published test sets. Additional validation studies based on in-house RNA interference projects confirmed the robustness of the scoring procedure in prospective studies.
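Two of the simpler descriptor families, positional nucleotide identity and dinucleotide presence/absence, can be sketched as follows. The feature naming scheme is invented, and the thermodynamic and flexibility descriptors are omitted because they require nearest-neighbour tables and molecular dynamics data.

```python
from itertools import product

# Descriptor sketch for an siRNA guide strand: (i) nucleotide identity at
# each position and (ii) presence/absence of each dinucleotide.
BASES = "ACGU"
DINUCS = ["".join(p) for p in product(BASES, repeat=2)]

def descriptors(sirna):
    feats = {}
    for i, nt in enumerate(sirna):                 # positional descriptors
        for b in BASES:
            feats[f"pos{i+1}_{b}"] = int(nt == b)
    pairs = {sirna[i:i+2] for i in range(len(sirna) - 1)}
    for d in DINUCS:                               # composition descriptors
        feats[f"has_{d}"] = int(d in pairs)
    return feats

f = descriptors("AUGGCUACGAUCGAUCGAU")             # a 19-nt example sequence
print(f["pos1_A"], f["has_AU"], f["has_CC"])
```

The resulting binary feature vector (76 positional plus 16 dinucleotide features for a 19-mer) is the kind of matrix the paper feeds to partial least squares, random forest, or support vector machine models.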

  14. Improving student success using predictive models and data visualisations

    Directory of Open Access Journals (Sweden)

    Hanan Ayad

    2012-08-01

    Full Text Available The need to educate a competitive workforce is a global problem. In the US, for example, despite billions of dollars spent to improve the educational system, approximately 35% of students never finish high school. The dropout rate among some demographic groups is as high as 50–60%. At the college level in the US, only 30% of students graduate from 2-year colleges in 3 years or less, and approximately 50% graduate from 4-year colleges in 5 years or less. A basic challenge in delivering global education, therefore, is improving student success. By student success we mean improving retention, completion and graduation rates. In this paper we describe a Student Success System (S3) that provides a holistic, analytical view of student academic progress. The core of S3 is a flexible predictive modelling engine that uses machine intelligence and statistical techniques to identify at-risk students pre-emptively. S3 also provides a set of advanced data visualisations for reaching diagnostic insights and a case management tool for managing interventions. S3's open modular architecture will also allow integration and plug-ins with both open and proprietary software. Powered by learning analytics, S3 is intended as an end-to-end solution for identifying at-risk students, understanding why they are at risk, designing interventions to mitigate that risk and, finally, closing the feedback loop by tracking the efficacy of the applied intervention.

  15. Predictive power of theoretical modelling of the nuclear mean field: examples of improving predictive capacities

    Science.gov (United States)

    Dedes, I.; Dudek, J.

    2018-03-01

    We examine the effects of parametric correlations on the predictive capacities of theoretical modelling, keeping in mind nuclear structure applications. The main purpose of this work is to illustrate the method of establishing the presence and determining the form of parametric correlations within a model, as well as an algorithm of elimination by substitution (see text) of parametric correlations. We further examine the effects of eliminating the parametric correlations on the stabilisation of model predictions farther and farther away from the fitting zone. It follows that the choice of the physics case and the selection of the associated model are of secondary importance in this case. Under these circumstances we give priority to the relative simplicity of the underlying mathematical algorithm, provided the model is realistic. Following such criteria, we focus specifically on an important but relatively simple case of doubly magic spherical nuclei. To profit from the algorithmic simplicity, we chose to work with the phenomenological spherically symmetric Woods–Saxon mean field. We employ two variants of the underlying Hamiltonian: the traditional one involving both the central and the spin–orbit potential in the Woods–Saxon form, and the more advanced version with the self-consistent density-dependent spin–orbit interaction. We compare the effects of eliminating various types of correlations and discuss the improvement of the quality of predictions (‘predictive power’) under realistic parameter adjustment conditions.

  16. Improved prediction and tracking of volcanic ash clouds

    Science.gov (United States)

    Mastin, Larry G.; Webley, Peter

    2009-01-01

    During the past 30 years, more than 100 airplanes have inadvertently flown through clouds of volcanic ash from erupting volcanoes. Such encounters have caused millions of dollars in damage to the aircraft and have endangered the lives of tens of thousands of passengers. In a few severe cases, total engine failure resulted when ash was ingested into turbines and coated turbine blades. These incidents have prompted the establishment of cooperative efforts by the International Civil Aviation Organization and the volcanological community to provide rapid notification of eruptive activity, and to monitor and forecast the trajectories of ash clouds so that they can be avoided by air traffic. Ash-cloud properties such as plume height, ash concentration, and three-dimensional ash distribution have been monitored through non-conventional remote sensing techniques that are under active development. Forecasting the trajectories of ash clouds has required the development of volcanic ash transport and dispersion models that can calculate the path of an ash cloud over the scale of a continent or a hemisphere. Volcanological inputs to these models, such as plume height, mass eruption rate, eruption duration, ash distribution with altitude, and grain-size distribution, must be assigned in real time during an event, often with limited observations. Databases and protocols are currently being developed that allow for rapid assignment of such source parameters. In this paper, we summarize how an interdisciplinary working group on eruption source parameters has been instigating research to improve upon the current understanding of volcanic ash cloud characterization and predictions. Improved predictions of ash cloud movement and ash fall will aid in making better hazard assessments for aviation and for public health and air quality. © 2008 Elsevier B.V.

  17. Photosystem Trap Energies and Spectrally-Dependent Energy-Storage Efficiencies in the Chl d-Utilizing Cyanobacterium, Acaryochloris marina

    Science.gov (United States)

    Mielke, Steven P.; Kiang, Nancy Y.; Blankenship, Robert E.; Mauzerall, David

    2012-01-01

    Acaryochloris marina is the only species known to utilize chlorophyll (Chl) d as a principal photopigment. The peak absorption wavelength of Chl d is redshifted approx. 40 nm in vivo relative to Chl a, enabling this cyanobacterium to perform oxygenic phototrophy in niche environments enhanced in far-red light. We present measurements of the in vivo energy-storage (E-S) efficiency of photosynthesis in A. marina, obtained using pulsed photoacoustics (PA) over a 90-nm range of excitation wavelengths in the red and far-red. Together with modeling results, these measurements provide the first direct observation of the trap energies of PSI and PSII, and also the photosystem-specific contributions to the total E-S efficiency. We find the maximum observed efficiency in A. marina (40 ± 1% at 735 nm) is higher than in the Chl a cyanobacterium Synechococcus leopoliensis (35 ± 1% at 690 nm). The efficiency at the peak absorption wavelength is also higher in A. marina (36 ± 1% at 710 nm vs. 31 ± 1% at 670 nm). In both species, the trap efficiencies are approx. 40% (PSI) and approx. 30% (PSII). The PSI trap in A. marina is found to lie at 740 ± 5 nm, in agreement with the value inferred from spectroscopic methods. The best fit of the model to the PA data identifies the PSII trap at 723 ± 3 nm, supporting the view that the primary electron donor is Chl d, probably at the accessory (ChlD1) site. A decrease in efficiency beyond the trap wavelength, consistent with uphill energy transfer, is clearly observed and fit by the model. These results demonstrate that the E-S efficiency in A. marina is not thermodynamically limited, suggesting that oxygenic photosynthesis is viable in even redder light environments.

  18. Climatic extremes improve predictions of spatial patterns of tree species

    Science.gov (United States)

    Zimmermann, N.E.; Yoccoz, N.G.; Edwards, T.C.; Meier, E.S.; Thuiller, W.; Guisan, Antoine; Schmatz, D.R.; Pearman, P.B.

    2009-01-01

    Understanding niche evolution, dynamics, and the response of species to climate change requires knowledge of the determinants of the environmental niche and species range limits. Mean values of climatic variables are often used in such analyses. In contrast, the increasing frequency of climate extremes suggests the importance of understanding their additional influence on range limits. Here, we assess how measures representing climate extremes (i.e., interannual variability in climate parameters) explain and predict spatial patterns of 11 tree species in Switzerland. We find clear, although comparatively small, improvement (+20% in adjusted D², +8% and +3% in cross-validated True Skill Statistic and area under the receiver operating characteristic curve values) in models that use measures of extremes in addition to means. The primary effect of including information on climate extremes is a correction of local overprediction and underprediction. Our results demonstrate that measures of climate extremes are important for understanding the climatic limits of tree species and assessing species niche characteristics. The inclusion of climate variability likely will improve models of species range limits under future conditions, where changes in mean climate and increased variability are expected.

  19. Improved apparatus for predictive diagnosis of rotator cuff disease

    Science.gov (United States)

    Pillai, Anup; Hall, Brittany N.; Thigpen, Charles A.; Kwartowitz, David M.

    2014-03-01

    Rotator cuff disease impacts over 50% of the population over 60, with reports of incidence as high as 90% within this population, causing pain and possible loss of function. The rotator cuff is composed of muscles and tendons that work in tandem to support the shoulder. Heavy use of these muscles can lead to rotator cuff tears, the most common causes being age-related degeneration and sports injuries, both a function of overuse. Tears range in severity from partial-thickness tears to total rupture. Diagnostic techniques are based on physical assessment, detailed patient history, and medical imaging; X-ray, MRI and ultrasonography are the primary modalities chosen for assessment. The final treatment technique and imaging modality, however, are chosen at the clinician's discretion. Ultrasound has been shown to have good accuracy for identification and measurement of full-thickness and partial-thickness rotator cuff tears. In this study, we report on the progress and improvement of our method of transduction and analysis for in situ measurement of rotator cuff biomechanics. We have improved the ability of the clinician to apply a uniform force to the underlying musculotendinous tissues while simultaneously obtaining the ultrasound image. This measurement protocol, combined with region of interest (ROI) based image processing, will help in developing a predictive diagnostic model for treatment of rotator cuff disease and help clinicians choose the best treatment technique.

  20. Improving predictive capabilities of environmental change with GLOBE data

    Science.gov (United States)

    Robin, Jessica Hill

    This dissertation addresses two applications of the Normalized Difference Vegetation Index (NDVI) essential for predicting environmental changes. The first study focuses on whether NDVI can improve model simulations of evapotranspiration for temperate Northern (>35°) regions. The second study focuses on whether NDVI can detect phenological changes in start of season (SOS) for high Northern (>60°) environments. The overall objectives of this research were to (1) develop a methodology for utilizing GLOBE data in NDVI research; and (2) provide a critical analysis of NDVI as a long-term monitoring tool for environmental change. GLOBE is an international partnership network of K-12 students, teachers, and scientists working together to study and understand the global environment. The first study utilized data collected by one GLOBE school in Greenville, Pennsylvania, and the second utilized phenology observations made by GLOBE students in Alaska. Results from the first study showed NDVI could predict transpiration periods for environments like Greenville, Pennsylvania. In phenological terms, these environments have three distinct periods (QI, QII, and QIII). QI reflects onset of the growing season (mid-March to mid-May) when vegetation is greening up (NDVI 0.60). Results from the second study showed that a climate threshold of 153 ± 22 growing degree days was a better predictor of SOS for Fairbanks than an NDVI threshold applied to temporal AVHRR and MODIS datasets. Accumulated growing degree days captured the interannual variability of SOS better than the NDVI threshold and most closely resembled actual SOS observations made by GLOBE students. Overall, biweekly composites and the effects of clouds, snow, and conifers limit the ability of NDVI to monitor phenological changes in Alaska. Both studies showed that GLOBE data provide an important source of input and validation information for NDVI research.
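The growing-degree-day threshold idea can be sketched as follows. The 153 GDD threshold comes from the abstract, while the 0 °C base temperature, the synthetic temperature series, and the simple averaging degree-day formula are illustrative assumptions.

```python
# Accumulate growing degree days (GDD) from daily min/max air temperature
# and report the day of year on which the threshold is crossed (predicted
# start of season, SOS).
THRESHOLD = 153.0   # GDD threshold reported in the abstract
T_BASE = 0.0        # assumed base temperature, degC

def start_of_season(tmin, tmax, threshold=THRESHOLD, t_base=T_BASE):
    gdd = 0.0
    for day, (lo, hi) in enumerate(zip(tmin, tmax), start=1):
        gdd += max(0.0, (lo + hi) / 2.0 - t_base)   # daily GDD contribution
        if gdd >= threshold:
            return day          # day of year when SOS is predicted
    return None                 # threshold never reached

# Synthetic spring warm-up: mean daily temperature ramps from -5 to +19 degC.
tmin = [-7 + 0.2 * d for d in range(120)]
tmax = [-3 + 0.2 * d for d in range(120)]
print(start_of_season(tmin, tmax))
```

Because the accumulation restarts from observed temperatures each year, the predicted SOS shifts with interannual variability, which is exactly the property the study found the fixed NDVI threshold to lack.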

  1. The Urgent Need for Improved Climate Models and Predictions

    Science.gov (United States)

    Goddard, Lisa; Baethgen, Walter; Kirtman, Ben; Meehl, Gerald

    2009-09-01

    An investment over the next 10 years of the order of US$2 billion for developing improved climate models was recommended in a report (http://wcrp.wmo.int/documents/WCRP_WorldModellingSummit_Jan2009.pdf) from the May 2008 World Modelling Summit for Climate Prediction, held in Reading, United Kingdom, and presented by the World Climate Research Programme. The report indicated that “climate models will, as in the past, play an important, and perhaps central, role in guiding the trillion dollar decisions that the peoples, governments and industries of the world will be making to cope with the consequences of changing climate.” If trillions of dollars are going to be invested in making decisions related to climate impacts, an investment of $2 billion, which is less than 0.1% of that amount, to provide better climate information seems prudent. One example of investment in adaptation is the World Bank's Climate Investment Fund, which has drawn contributions of more than $6 billion for work on clean technologies and adaptation efforts in nine pilot countries and two pilot regions. This is just the beginning of expenditures on adaptation efforts by the World Bank and other mechanisms, focusing on only a small fraction of the nations of the world and primarily aimed at anticipated anthropogenic climate change. Moreover, decisions are being made now, all around the world—by individuals, companies, and governments—that affect people and their livelihoods today, not just 50 or more years in the future. Climate risk management, whether related to projects of the scope of the World Bank's or to the planning and decisions of municipalities, will be best guided by meaningful climate information derived from observations of the past and model predictions of the future.

  2. Improving Flood Predictions in Data-Scarce Basins

    Science.gov (United States)

    Vimal, Solomon; Zanardo, Stefano; Rafique, Farhat; Hilberts, Arno

    2017-04-01

    Flood modeling methodology at Risk Management Solutions Ltd. has evolved over several years with the development of continental-scale flood risk models spanning most of Europe, the United States and Japan. Pluvial (rain-fed) and fluvial (river-fed) flood maps represent the basis for the assessment of regional flood risk. These maps are derived by solving the 1D energy balance equation for river routing and the 2D shallow water equation (SWE) for overland flow. The models are run with high performance computing and GPU-based solvers, as the time taken for simulation is large in such continental-scale modeling. The results are validated with data from authorities and business partners, and have been used in the insurance industry for many years. While this methodology has proven extremely effective in regions where the quality and availability of data are high, its application is very challenging in other regions where data are scarce. This is generally the case for low- and middle-income countries, where simpler approaches are needed for flood risk modeling and assessment. In this study we explore new methods to make use of modeling results obtained in data-rich contexts to improve predictive ability in data-scarce contexts. As an example, based on our modeled flood maps in data-rich countries, we identify statistical relationships between flood characteristics and topographic and climatic indicators, and test their generalization across physical domains. Moreover, we apply the Height Above Nearest Drainage (HAND) approach to estimate "probable" saturated areas for different return period flood events as functions of basin characteristics. This work falls into the well-established research field of Predictions in Ungauged Basins.

  3. Improving Clinical Prediction of Bipolar Spectrum Disorders in Youth

    Directory of Open Access Journals (Sweden)

    Thomas W. Frazier

    2014-03-01

    Full Text Available This report evaluates whether classification tree algorithms (CTA) may improve the identification of individuals at risk for bipolar spectrum disorders (BPSD). Analyses used the Longitudinal Assessment of Manic Symptoms (LAMS) cohort (629 youth, 148 with BPSD and 481 without BPSD). Parent ratings of mania symptoms, stressful life events, parenting stress, and parental history of mania were included as risk factors. Comparable overall accuracy was observed for CTA (75.4%) relative to logistic regression (77.6%). However, CTA showed increased sensitivity (0.28 vs. 0.18) at the expense of slightly decreased specificity and positive predictive power. The advantage of CTA algorithms for clinical decision making is demonstrated by the combinations of predictors most useful for altering the probability of BPSD. The 24% sample probability of BPSD was substantially decreased in youth with low screening and baseline parent ratings of mania, negative parental history of mania, and low levels of stressful life events (2%). High screening plus high baseline parent-rated mania nearly doubled the BPSD probability (46%). Future work will benefit from examining additional, powerful predictors, such as alternative data sources (e.g., clinician ratings, neurocognitive test data); these may increase the clinical utility of CTA models further.
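The clinical appeal of a tree-based readout is that combinations of predictors map to concrete probabilities. The sketch below hard-codes the probabilities reported in the abstract (2%, 46%, and the 24% base rate) onto hypothetical rule paths; it illustrates the readout, not the fitted CTA model.

```python
# Each rule is a partial predictor profile and the BPSD probability the
# tree would report for youths matching it. None = "don't care".
rules = [
    # (parent_mania_high, parental_history, stressful_events_high) -> P(BPSD)
    ((False, False, False), 0.02),   # all protective factors present
    ((True,  None,  None),  0.46),   # high screening/baseline parent mania
]
BASE_RATE = 0.24                     # sample probability of BPSD

def predict(parent_mania_high, parental_history, stressful_events_high):
    profile = (parent_mania_high, parental_history, stressful_events_high)
    for pattern, p in rules:
        if all(q is None or q == v for q, v in zip(pattern, profile)):
            return p
    return BASE_RATE                 # fall back to the sample base rate

print(predict(False, False, False), predict(True, True, True))
```

A logistic regression with comparable overall accuracy would output a continuous score per patient; the tree's rule paths give the clinician the same information as explicit if-then statements.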

  4. Improving default risk prediction using Bayesian model uncertainty techniques.

    Science.gov (United States)

    Kazemi, Reza; Mosleh, Ali

    2012-11-01

    Credit risk is the potential exposure of a creditor to an obligor's failure or refusal to repay the debt in principal or interest. The potential of exposure is measured in terms of probability of default. Many models have been developed to estimate credit risk, with rating agencies dating back to the 19th century. They provide their assessment of probability of default and transition probabilities of various firms in their annual reports. Regulatory capital requirements for credit risk outlined by the Basel Committee on Banking Supervision have made it essential for banks and financial institutions to develop sophisticated models in an attempt to measure credit risk with higher accuracy. The Bayesian framework proposed in this article uses the techniques developed in physical sciences and engineering for dealing with model uncertainty and expert accuracy to obtain improved estimates of credit risk and associated uncertainties. The approach uses estimates from one or more rating agencies and incorporates their historical accuracy (past performance data) in estimating future default risk and transition probabilities. Several examples demonstrate that the proposed methodology can assess default probability with accuracy exceeding the estimations of all the individual models. Moreover, the methodology accounts for potentially significant departures from "nominal predictions" due to "upsetting events" such as the 2008 global banking crisis. © 2012 Society for Risk Analysis.
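    The flavor of such a combination can be sketched as a toy precision-weighted pooling of agency estimates on the logit scale (this is not the article's full Bayesian model; the probabilities and variances below are invented):

    ```python
    import math

    def combine_estimates(estimates):
        """Pool default-probability estimates on the logit scale.

        estimates: list of (prob_default, historical_variance) pairs, where
        historical_variance is a stand-in for an agency's past accuracy --
        a historically more accurate agency gets a larger weight.
        """
        num = den = 0.0
        for p, var in estimates:
            logit = math.log(p / (1 - p))
            w = 1.0 / var
            num += w * logit
            den += w
        combined = num / den
        return 1 / (1 + math.exp(-combined))

    # Two hypothetical agencies: one historically tight (var 0.1), one loose (var 0.5).
    p = combine_estimates([(0.02, 0.1), (0.08, 0.5)])
    ```

    The pooled estimate falls between the two inputs but closer to the historically more accurate agency, which is the qualitative behavior the article's methodology aims for.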

  5. Microbial community dynamics during the bioremediation process of chlorimuron-ethyl-contaminated soil by Hansschlegelia sp. strain CHL1.

    Directory of Open Access Journals (Sweden)

    Liqiang Yang

    Full Text Available Long-term and excessive application of chlorimuron-ethyl has led to a series of environmental problems. Strain Hansschlegelia sp. CHL1, a highly efficient chlorimuron-ethyl-degrading bacterium isolated in our previous study, was employed in the current soil bioremediation study. The residues of chlorimuron-ethyl in soils were detected, and the changes in soil microbial communities were investigated by phospholipid fatty acid (PLFA) analysis. The results showed that strain CHL1 exhibited significant chlorimuron-ethyl degradation ability over a wide range of concentrations, between 10 μg kg⁻¹ and 1000 μg kg⁻¹. High concentrations of chlorimuron-ethyl significantly decreased the total concentration of PLFAs and the Shannon-Wiener indices and increased the stress level of microbes in soils. The inoculation with strain CHL1, however, reduced the inhibition of soil microbes caused by chlorimuron-ethyl. The results demonstrated that strain CHL1 is effective in the remediation of chlorimuron-ethyl-contaminated soil, and has the potential to remediate chlorimuron-ethyl-contaminated soils in situ.

  7. A predictive maintenance approach for improved nuclear plant availability

    International Nuclear Information System (INIS)

    Verma, R.M.P.; Pandya, M.B.; Kini, M.P.

    1979-01-01

    A predictive maintenance programme, as against a preventive maintenance programme, aims at diagnosing, inspecting, monitoring, and objective condition-checking of equipment. It helps in forecasting failures and in scheduling optimal frequencies for overhauls, replacements, lubrication, etc. It also helps in establishing work load, manpower and resource planning, and inventory control. Various stages of a predictive maintenance programme for a nuclear power plant are outlined. A partial list of instruments for predictive maintenance is given. (M.G.B.)

  8. Wideband aural acoustic absorbance predicts conductive hearing loss in children.

    Science.gov (United States)

    Keefe, Douglas H; Sanford, Chris A; Ellison, John C; Fitzpatrick, Denis F; Gorga, Michael P

    2012-12-01

    This study tested the hypothesis that wideband aural absorbance predicts conductive hearing loss (CHL) in children medically classified as having otitis media with effusion. Absorbance was measured in the ear canal over frequencies from 0.25 to 8 kHz at ambient pressure or as a swept tympanogram. CHL was defined using criterion air-bone gaps of 20, 25, and 30 dB at octaves from 0.25 to 4 kHz. A likelihood-ratio predictor of CHL was constructed across frequency for ambient absorbance, and across frequency and pressure for absorbance tympanometry. Performance was evaluated at individual frequencies and for any frequency at which a CHL was present. Absorbance and conventional 0.226-kHz tympanograms were measured in children of age three to eight years with CHL and with normal hearing. Absorbance was smaller at frequencies above 0.7 kHz in the CHL group than the control group. Based on the area under the receiver operating characteristic curve, wideband absorbance in ambient and tympanometric tests were significantly better predictors of CHL than tympanometric width, the best 0.226-kHz predictor. Accuracies of ambient and tympanometric wideband absorbance did not differ. Absorbance accurately predicted CHL in children and was more accurate than conventional 0.226-kHz tympanometry.
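    The area under the receiver operating characteristic curve used above can be computed with the Mann-Whitney rank identity; the likelihood-ratio scores below are invented for illustration, not the study's data:

    ```python
    def auc(scores_pos, scores_neg):
        """AUC via the rank-sum identity: the probability that a randomly
        chosen positive (CHL ear) outranks a randomly chosen negative."""
        wins = ties = 0
        for sp in scores_pos:
            for sn in scores_neg:
                if sp > sn:
                    wins += 1
                elif sp == sn:
                    ties += 1
        return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))

    # Hypothetical predictor scores for CHL ears vs. normal-hearing ears.
    wideband = auc([0.9, 0.8, 0.7, 0.6], [0.5, 0.3, 0.65, 0.2])
    width_226 = auc([0.7, 0.5, 0.6, 0.4], [0.55, 0.45, 0.6, 0.35])
    ```

    Comparing AUCs in this way is how the study concludes that wideband absorbance outperforms tympanometric width as a CHL predictor.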

  9. Disruption of the ndhF1 gene affects Chl fluorescence through state transition in the Cyanobacterium Synechocystis sp. PCC 6803, resulting in apparent high efficiency of photosynthesis.

    Science.gov (United States)

    Ogawa, Takako; Harada, Tetsuyuki; Ozaki, Hiroshi; Sonoike, Kintake

    2013-07-01

    In Synechocystis sp. PCC 6803, the disruption of the ndhF1 gene (slr0844), which encodes a subunit of one of the NDH-1 complexes (the NDH-1L complex) serving respiratory electron transfer, causes the largest change in Chl fluorescence induction kinetics among the 750 disruptants surveyed in the Fluorome, the cyanobacterial Chl fluorescence database. The cause of the explicit phenotype of the ndhF1 disruptant was examined by measurements of the photosynthetic rate, Chl fluorescence and state transitions. The results demonstrate that defects in respiratory electron transfer clearly have a great impact on Chl fluorescence in cyanobacteria. The inactivation of NDH-1L complexes, which mediate electron transfer from NDH-1 to plastoquinone (PQ), would result in the oxidation of the PQ pool, leading to the transition to State 1, where the yield of Chl fluorescence is high. Apparently, respiration, although its rate is far lower than that of photosynthesis, can affect Chl fluorescence through the state transition, acting as leverage. The disruption of the ndhF1 gene caused lower oxygen-evolving activity, but the electron transport rate estimated from Chl fluorescence measurements was faster in the mutant than in wild-type cells. The discrepancy could be ascribed to the decreased level of non-photochemical quenching due to the state transition. One must be cautious when using Chl fluorescence parameters to estimate photosynthesis in mutants defective in state transitions.

  10. Prediction of earth rotation parameters based on improved weighted least squares and autoregressive model

    Directory of Open Access Journals (Sweden)

    Sun Zhangzhen

    2012-08-01

    Full Text Available In this paper, an improved weighted least squares (WLS) method, together with an autoregressive (AR) model, is proposed to improve the prediction accuracy of earth rotation parameters (ERP). Four weighting schemes are developed and the optimal power e for determination of the weight elements is studied. The results show that the improved WLS-AR model can improve ERP prediction accuracy effectively, and that different weighting schemes should be chosen for different ERP prediction intervals.
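    A minimal sketch of the WLS + AR combination (illustrative only; the paper's actual weight schemes, AR order and ERP series are not reproduced here): fit a linear trend by weighted least squares with weights growing toward recent epochs as a power e, then model the residual with AR(1) and add its one-step forecast.

    ```python
    import numpy as np

    def wls_ar_predict(y, e=2.0):
        """One-step-ahead prediction: WLS linear trend + AR(1) residual."""
        t = np.arange(len(y), dtype=float)
        w = (t + 1) ** e                       # one plausible weight scheme: recent epochs count more
        A = np.stack([np.ones_like(t), t], axis=1)
        sw = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
        resid = y - A @ coef
        # AR(1) coefficient from lag-1 residuals
        phi = (resid[:-1] @ resid[1:]) / (resid[:-1] @ resid[:-1])
        t_next = float(len(y))
        return coef[0] + coef[1] * t_next + phi * resid[-1]

    # A short, roughly linear synthetic series standing in for an ERP component.
    y = np.array([0.1, 0.12, 0.15, 0.16, 0.19, 0.21])
    pred = wls_ar_predict(y)
    ```

    Changing the power e reweights how strongly the fit follows recent epochs, which is the tuning question the paper studies across prediction intervals.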

  11. Using road topology to improve cyclist path prediction

    NARCIS (Netherlands)

    Pool, E.A.I.; Kooij, J.F.P.; Gavrila, D.; Ioannou, Petros; Zhang, Wei-Bin; Lu, Meng

    2017-01-01

    We learn motion models for cyclist path prediction on real-world tracks obtained from a moving vehicle, and propose to exploit the local road topology to obtain better predictive distributions. The tracks are extracted from the Tsinghua-Daimler Cyclist Benchmark for cyclist detection, and corrected

  12. Genomic selection: genome-wide prediction in plant improvement.

    Science.gov (United States)

    Desta, Zeratsion Abera; Ortiz, Rodomiro

    2014-09-01

    Association analysis is used to measure relations between markers and quantitative trait loci (QTL). Such estimation ignores genes with small effects that underpin quantitative traits. By contrast, genome-wide selection estimates marker effects across the whole genome of the target population based on a prediction model developed in the training population (TP). Whole-genome prediction models estimate the effects of all markers at all loci and thereby capture small QTL effects. Here, we review several genomic selection (GS) models with respect to both prediction accuracy and genetic gain from selection. Phenotypic selection or marker-assisted breeding protocols can be replaced by selection based on whole-genome predictions, in which phenotyping updates the model to build up prediction accuracy. Copyright © 2014 Elsevier Ltd. All rights reserved.
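    A minimal sketch of whole-genome prediction in the ridge-regression (RR-BLUP-style) spirit, on simulated genotypes; the marker counts, effect sizes and shrinkage parameter below are invented, and real GS pipelines estimate the shrinkage from variance components:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_train, n_cand, n_markers = 200, 50, 500
    true_effects = rng.normal(0, 0.05, n_markers)       # many small QTL effects

    def genotypes(n):
        # 0/1/2 allele counts at each marker
        return rng.integers(0, 3, size=(n, n_markers)).astype(float)

    X_train = genotypes(n_train)                         # training population (phenotyped)
    y_train = X_train @ true_effects + rng.normal(0, 0.5, n_train)

    lam = 10.0                                           # ridge shrinkage parameter
    G = X_train.T @ X_train + lam * np.eye(n_markers)
    beta = np.linalg.solve(G, X_train.T @ y_train)       # all marker effects jointly

    X_cand = genotypes(n_cand)                           # unphenotyped candidates
    gebv = X_cand @ beta                                 # genomic estimated breeding values
    top = np.argsort(gebv)[::-1][:5]                     # select the best candidates
    ```

    Because all marker effects are shrunk jointly rather than tested one at a time, small effects contribute to the prediction instead of being discarded, which is the contrast with association analysis drawn above.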

  13. Dynamic Filtering Improves Attentional State Prediction with fNIRS

    Science.gov (United States)

    Harrivel, Angela R.; Weissman, Daniel H.; Noll, Douglas C.; Huppert, Theodore; Peltier, Scott J.

    2016-01-01

    Brain activity can predict a person's level of engagement in an attentional task. However, estimates of brain activity are often confounded by measurement artifacts and systemic physiological noise. The optimal method for filtering this noise, and thereby increasing state prediction accuracy, remains unclear. To investigate this, we asked study participants to perform an attentional task while we monitored their brain activity with functional near infrared spectroscopy (fNIRS). We observed higher state prediction accuracy when noise in the fNIRS hemoglobin [Hb] signals was filtered with a non-stationary (adaptive) model as compared to static regression (84% ± 6% versus 72% ± 15%).
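    The static-versus-adaptive contrast can be sketched with recursive least squares, one plausible adaptive filter (the study's actual filter and signals are not reproduced here; the drifting nuisance coupling below is simulated):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    T = 2000
    nuisance = rng.normal(size=T)                 # e.g., a physiological reference signal
    gain = np.linspace(0.5, 1.5, T)               # coupling drifts slowly over the session
    neural = 0.3 * np.sin(np.linspace(0, 20, T))  # the signal of interest
    hb = neural + gain * nuisance                 # simulated [Hb] measurement

    # Static regression: one global coefficient for the whole run.
    b_static = (nuisance @ hb) / (nuisance @ nuisance)
    clean_static = hb - b_static * nuisance

    # Recursive least squares with a forgetting factor: the coefficient adapts.
    lam, P, b = 0.995, 1e3, 0.0
    clean_rls = np.empty(T)
    for t in range(T):
        x = nuisance[t]
        k = P * x / (lam + x * P * x)             # gain for this sample
        err = hb[t] - b * x                       # a-priori residual = cleaned sample
        b += k * err                              # track the drifting coupling
        P = (P - k * x * P) / lam
        clean_rls[t] = err

    mse_static = np.mean((clean_static - neural) ** 2)
    mse_rls = np.mean((clean_rls - neural) ** 2)
    ```

    Because the true coupling drifts, a single global coefficient is wrong most of the time, while the adaptive estimate tracks it, mirroring the accuracy gap reported above.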

  14. Interpreting Disruption Prediction Models to Improve Plasma Control

    Science.gov (United States)

    Parsons, Matthew

    2017-10-01

    In order for the tokamak to be a feasible design for a fusion reactor, it is necessary to minimize damage to the machine caused by plasma disruptions. Accurately predicting disruptions is a critical capability for triggering any mitigative actions, and a modest amount of attention has been given to efforts that employ machine learning techniques to make these predictions. By monitoring diagnostic signals during a discharge, such predictive models look for signs that the plasma is about to disrupt. Typically these predictive models are interpreted simply to give a `yes' or `no' response as to whether a disruption is approaching. However, it is possible to extract further information from these models to indicate which input signals are more strongly correlated with the plasma approaching a disruption. If highly accurate predictive models can be developed, this information could be used in plasma control schemes to make better decisions about disruption avoidance. This work was supported by a Grant from the 2016-2017 Fulbright U.S. Student Program, administered by the Franco-American Fulbright Commission in France.

  15. Improving acute kidney injury diagnostics using predictive analytics.

    Science.gov (United States)

    Basu, Rajit K; Gist, Katja; Wheeler, Derek S

    2015-12-01

    Acute kidney injury (AKI) is a multifactorial syndrome affecting an alarming proportion of hospitalized patients. Although early recognition may expedite management, the ability to identify patients at risk and those suffering real-time injury is inconsistent. This review summarizes recent reports describing advancements in the area of AKI epidemiology, specifically focusing on risk scoring and predictive analytics. In the critical care population, the primary factors limiting prediction models are an inability to properly account for patient heterogeneity and underperforming metrics used to assess kidney function. Severity-of-illness scores demonstrate limited AKI predictive performance. Recent evidence suggests traditional methods for detecting AKI may be leveraged and ultimately replaced by newer, more sophisticated analytical tools capable of prediction and identification: risk stratification, novel AKI biomarkers, and clinical information systems. Additionally, the utility of novel biomarkers may be optimized through targeting using patient context, and may provide more granular information about the injury phenotype. Finally, manipulation of the electronic health record allows for real-time recognition of injury. Integrating a high-functioning clinical information system with risk stratification methodology and novel biomarkers yields a predictive analytic model for AKI diagnostics.

  16. Improved fuzzy PID controller design using predictive functional control structure.

    Science.gov (United States)

    Wang, Yuzhong; Jin, Qibing; Zhang, Ridong

    2017-11-01

    In conventional PID scheme, the ensemble control performance may be unsatisfactory due to limited degrees of freedom under various kinds of uncertainty. To overcome this disadvantage, a novel PID control method that inherits the advantages of fuzzy PID control and the predictive functional control (PFC) is presented and further verified on the temperature model of a coke furnace. Based on the framework of PFC, the prediction of the future process behavior is first obtained using the current process input signal. Then, the fuzzy PID control based on the multi-step prediction is introduced to acquire the optimal control law. Finally, the case study on a temperature model of a coke furnace shows the effectiveness of the fuzzy PID control scheme when compared with conventional PID control and fuzzy self-adaptive PID control. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  17. Verification and improvement of a predictive model for radionuclide migration

    International Nuclear Information System (INIS)

    Miller, C.W.; Benson, L.V.; Carnahan, C.L.

    1982-01-01

    Prediction of the rates of migration of contaminant chemical species in groundwater flowing through toxic waste repositories is essential to the assessment of a repository's capability of meeting standards for release rates. A large number of chemical transport models, of varying degrees of complexity, have been devised for the purpose of providing this predictive capability. In general, the transport of dissolved chemical species through a water-saturated porous medium is influenced by convection, diffusion/dispersion, sorption, formation of complexes in the aqueous phase, and chemical precipitation. The reliability of predictions made with the models which omit certain of these processes is difficult to assess. A numerical model, CHEMTRN, has been developed to determine which chemical processes govern radionuclide migration. CHEMTRN builds on a model called MCCTM developed previously by Lichtner and Benson

  18. Genomic instability induced by 137Cs γ-ray irradiation in CHL surviving cells

    International Nuclear Information System (INIS)

    Yue Jingyin; Liu Bingchen; Wu Hongying; Zhou Jiwen; Mu Chuanjie

    1999-01-01

    Objective: To study in parallel several possible manifestations of instability in surviving CHL cells after irradiation, namely the frequencies of mutation at the HGPRT locus, micronuclei and apoptosis. Methods: The frequencies of mutation at the HGPRT locus, micronuclei and apoptosis were assayed at various times in surviving cells irradiated with γ-rays. Results: The surviving cells showed a persistently increased frequency of mutation at the HGPRT locus after irradiation, persisting until day 53. Mutant fractions as high as 10⁻⁴ were scored, tens of times higher than those assayed in control cells studied in parallel. The frequency of binucleated cells with micronuclei determined within 24 hours after irradiation increased with dose and reached a peak value of (26.58 ± 2.48)% at 3 Gy, decreasing at higher doses to a plateau around 20%. The micronucleus frequency decreased steeply to about (14.47 ± 2.39)% within the first 3 days post-irradiation, and fluctuated around 10% up to 56 days post-irradiation. The delayed efficiency of irradiated cells was significantly decreased. The frequency of apoptosis peaked at about (24.90 ± 4.72)% at 10 Gy, 48 h post-irradiation (for γ-ray doses between 3 and 10 Gy), and then decreased to about 12% within 3 days. It remained significantly higher than in control cells until 14 days. Conclusions: Genomic instability induced by radiation can be transmitted to the progeny of surviving cells and may take many forms of expression, such as lethal mutation, chromosome aberrations, gene mutation, etc.

  19. Improving Marital Prediction: A Model and a Pilot Study.

    Science.gov (United States)

    Dean, Dwight G.; Lucas, Wayne L.

    A model for the prediction of marital adjustment is proposed which presents selected social background factors (e.g., education) and interactive factors (e.g., Bienvenu's Communication scale, Hurvitz' Role Inventory, Dean's Emotional Maturity and Commitment scales, Rosenberg's Self-Esteem scale) in order to account for as much of the variance in…

  20. How Predictive Analytics and Choice Architecture Can Improve Student Success

    Science.gov (United States)

    Denley, Tristan

    2014-01-01

    This article explores the challenges that students face in navigating the curricular structure of post-secondary degree programs, and how predictive analytics and choice architecture can play a role. It examines Degree Compass, a course recommendation system that successfully pairs current students with the courses that best fit their talents and…

  1. An improved technique for the prediction of optimal image resolution ...

    African Journals Online (AJOL)

    2010-10-04

    Oct 4, 2010. Available online at http://www.academicjournals.org/AJEST. A robust technique for predicting optimal image resolution for the mapping of savannah ecosystems was developed.

  2. An improved technique for the prediction of optimal image resolution ...

    African Journals Online (AJOL)

    Past studies to predict optimal image resolution required for generating spatial information for savannah ecosystems have yielded different outcomes, hence providing a knowledge gap that was investigated in the present study. The postulation, for the present study, was that by graphically solving two simultaneous ...

  3. Combining disparate data sources for improved poverty prediction and mapping.

    Science.gov (United States)

    Pokhriyal, Neeti; Jacques, Damien Christophe

    2017-11-14

    More than 330 million people are still living in extreme poverty in Africa. Timely, accurate, and spatially fine-grained baseline data are essential to determining policy in favor of reducing poverty. The potential of "Big Data" to estimate socioeconomic factors in Africa has been proven. However, most current studies are limited to using a single data source. We propose a computational framework to accurately predict the Global Multidimensional Poverty Index (MPI) at the finest spatial granularity, with coverage of 552 communes in Senegal, using environmental data (related to food security, economic activity, and accessibility to facilities) and call data records (capturing individualistic, spatial, and temporal aspects of people). Our framework is based on Gaussian Process regression, a Bayesian learning technique that provides the uncertainty associated with predictions. We perform model selection using elastic net regularization to prevent overfitting. Our results empirically prove the superior accuracy obtained when using disparate data (Pearson correlation of 0.91). Our approach is used to accurately predict important dimensions of poverty: health, education, and standard of living (Pearson correlation of 0.84-0.86). All predictions are validated using deprivations calculated from census. Our approach can be used to generate poverty maps frequently, and its diagnostic nature is likely to assist policy makers in designing better interventions for poverty eradication. Copyright © 2017 the Author(s). Published by PNAS.
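    The two-stage shape of the framework (elastic net to screen covariates, then Gaussian Process regression with per-prediction uncertainty) can be sketched on simulated data; the covariates and coefficients below are invented and this is not the Senegal dataset:

    ```python
    import numpy as np
    from sklearn.linear_model import ElasticNet
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(3)
    n, p = 120, 10                        # "communes" x candidate covariates
    X = rng.normal(size=(n, p))
    # a hypothetical poverty index driven by two of the ten covariates
    mpi = 0.8 * X[:, 0] - 0.5 * X[:, 3] + 0.1 * rng.normal(size=n)

    # Stage 1: elastic net zeroes out uninformative covariates.
    enet = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, mpi)
    keep = np.flatnonzero(enet.coef_)

    # Stage 2: GP regression on the selected covariates, with uncertainty.
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(X[:80, keep], mpi[:80])
    mean, std = gp.predict(X[80:, keep], return_std=True)
    ```

    The per-commune standard deviation `std` is what makes the approach "diagnostic": predictions with wide uncertainty flag where more ground data are needed.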

  4. Trajectory Analysis and Prediction for Improved Pedestrian Safety

    DEFF Research Database (Denmark)

    Møgelmose, Andreas; Trivedi, Mohan M.; Moeslund, Thomas B.

    2015-01-01

    This paper presents a monocular and purely vision based pedestrian trajectory tracking and prediction framework with integrated map-based hazard inference. In Advanced Driver Assistance systems research, a lot of effort has been put into pedestrian detection over the last decade, and several pede...

  5. Introducing Model Predictive Control for Improving Power Plant Portfolio Performance

    DEFF Research Database (Denmark)

    Edlund, Kristian Skjoldborg; Bendtsen, Jan Dimon; Børresen, Simon

    2008-01-01

    This paper introduces a model predictive control (MPC) approach for construction of a controller for balancing the power generation against consumption in a power system. The objective of the controller is to coordinate a portfolio consisting of multiple power plant units in the effort to perform...

  6. Selection procedures in sports: Improving predictions of athletes’ future performance

    NARCIS (Netherlands)

    den Hartigh, Jan Rudolf; Niessen, Anna; Frencken, Wouter; Meijer, Rob R.

    The selection of athletes has been a central topic in sports sciences for decades. Yet, little consideration has been given to the theoretical underpinnings and predictive validity of the procedures. In this paper, we evaluate current selection procedures in sports given what we know from the

  7. Improved part-of-speech prediction in suffix analysis.

    Directory of Open Access Journals (Sweden)

    Mario Fruzangohar

    Full Text Available MOTIVATION: Predicting the part-of-speech (POS) tag of an unknown word in a sentence is a significant challenge. This is particularly difficult in biomedicine, where POS tags serve as an input to training sophisticated literature summarization techniques, such as those based on Hidden Markov Models (HMM). Different approaches have been taken to deal with the POS tagger challenge, but with one exception (the TnT POS tagger), previous publications on POS tagging have omitted details of the suffix analysis used for handling unknown words. The suffix of an English word is a strong predictor of the POS tag for that word. As a pre-requisite for an accurate HMM POS tagger for biomedical publications, we present an efficient suffix prediction method for integration into a POS tagger. RESULTS: We have implemented a fully functional HMM POS tagger using experimentally optimised suffix-based prediction. Our simple suffix analysis method significantly outperformed the probability-interpolation-based TnT method. We have also shown how important suffix analysis can be for probability estimation of a known word (one in the training corpus) with an unseen POS tag, a common scenario with a small training corpus. We then integrated this simple method into our POS tagger and determined an optimised parameter set for both methods, which can help developers to optimise their current algorithms based on our results. We also introduce the concept of counting methods in maximum likelihood estimation for the first time, and show how counting methods can affect the prediction result. Finally, we describe how machine-learning techniques were applied to identify words for which prediction of POS tags was always incorrect, and propose a method to handle words of this type. AVAILABILITY AND IMPLEMENTATION: Java source code, binaries and setup instructions are freely available at http://genomes.sapac.edu.au/text_mining/pos_tagger.zip.
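    A minimal sketch of suffix-based POS prediction for unknown words (longest-suffix backoff over raw counts; the paper's optimised weighting and counting methods are not reproduced, and the tiny corpus is invented):

    ```python
    from collections import Counter, defaultdict

    def train_suffix_model(tagged_words, max_len=3):
        """Count tags observed for every word suffix up to max_len characters."""
        counts = defaultdict(Counter)
        for word, tag in tagged_words:
            for k in range(1, max_len + 1):
                if len(word) >= k:
                    counts[word[-k:]][tag] += 1
        return counts

    def predict_tag(word, counts, max_len=3, default="NN"):
        """Tag an unseen word from its longest suffix seen in training."""
        for k in range(max_len, 0, -1):        # longest matching suffix first
            tags = counts.get(word[-k:])
            if tags:
                return tags.most_common(1)[0][0]
        return default

    corpus = [("phosphorylation", "NN"), ("activation", "NN"),
              ("binding", "VBG"), ("signaling", "VBG"),
              ("quickly", "RB"), ("rapidly", "RB")]
    model = train_suffix_model(corpus)
    tag = predict_tag("methylation", model)    # suffix "ion" seen only with NN
    ```

    Replacing the raw most-common-tag vote with a smoothed probability estimate is where the paper's counting-method choices come in.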

  8. Bergmann glia and the recognition molecule CHL1 organize GABAergic axons and direct innervation of Purkinje cell dendrites.

    Directory of Open Access Journals (Sweden)

    Fabrice Ango

    2008-04-01

    Full Text Available The geometric and subcellular organization of axon arbors distributes and regulates electrical signaling in neurons and networks, but the underlying mechanisms have remained elusive. In rodent cerebellar cortex, stellate interneurons elaborate characteristic axon arbors that selectively innervate Purkinje cell dendrites and likely regulate dendritic integration. We used GFP BAC transgenic reporter mice to examine the cellular processes and molecular mechanisms underlying the development of stellate cell axons and their innervation pattern. We show that stellate axons are organized and guided towards Purkinje cell dendrites by an intermediate scaffold of Bergmann glial (BG) fibers. The L1 family immunoglobulin protein Close Homologue of L1 (CHL1) is localized to apical BG fibers and stellate cells during the development of stellate axon arbors. In the absence of CHL1, stellate axons deviate from BG fibers and show aberrant branching and orientation. Furthermore, synapse formation between aberrant stellate axons and Purkinje dendrites is reduced and cannot be maintained, leading to progressive atrophy of axon terminals. These results establish BG fibers as a guiding scaffold and CHL1 as a molecular signal in the organization of stellate axon arbors and in directing their dendritic innervation.

  9. Effect of 0.4 mT power frequency magnetic field on F-actin assembly of CHL cells

    International Nuclear Information System (INIS)

    Chu Keping; Cai Zhiyin; Zhang Yukun; Xia Nuohong

    2007-01-01

    Objective: To investigate the effect of a 0.4 mT power frequency magnetic field on microfilament (F-actin) assembly in Chinese hamster lung (CHL) cells. Methods: F-actin was labelled immunohistochemically and observed under a confocal microscope. The content of EGFRs in the detergent-insoluble cytoskeleton preparation was measured by Western blotting. Results: The stress fibers of CHL cells decreased after exposure to the 0.4 mT power frequency magnetic field for 30 min, as well as after treatment with 50 nM epidermal growth factor (EGF). Filopodia appeared at the cell periphery after exposure to the magnetic field, as after treatment with EGF. The EGF receptor mass associated with the detergent-insoluble cytoskeleton increased after exposure to the magnetic field, as after treatment with EGF. Conclusion: A 0.4 mT power frequency magnetic field induced assembly of F-actin in CHL cells. The changes induced by the magnetic field may be related to magnetic-field-induced clustering of EGFR and downstream signal transduction. (authors)

  10. Improving protein function prediction methods with integrated literature data

    Directory of Open Access Journals (Sweden)

    Gabow Aaron P

    2008-04-01

    Full Text Available Abstract Background Determining the function of uncharacterized proteins is a major challenge in the post-genomic era due to the problem's complexity and scale. Identifying a protein's function contributes to an understanding of its role in the involved pathways, its suitability as a drug target, and its potential for protein modifications. Several graph-theoretic approaches predict unidentified functions of proteins by using the functional annotations of better-characterized proteins in protein-protein interaction networks. We systematically consider the use of literature co-occurrence data, introduce a new method for quantifying the reliability of co-occurrence and test how performance differs across species. We also quantify changes in performance as the prediction algorithms annotate with increased specificity. Results We find that including information on the co-occurrence of proteins within an abstract greatly boosts performance in the Functional Flow graph-theoretic function prediction algorithm in yeast, fly and worm. This increase in performance is not simply due to the presence of additional edges since supplementing protein-protein interactions with co-occurrence data outperforms supplementing with a comparably-sized genetic interaction dataset. Through the combination of protein-protein interactions and co-occurrence data, the neighborhood around unknown proteins is quickly connected to well-characterized nodes which global prediction algorithms can exploit. Our method for quantifying co-occurrence reliability shows superior performance to the other methods, particularly at threshold values around 10% which yield the best trade off between coverage and accuracy. In contrast, the traditional way of asserting co-occurrence when at least one abstract mentions both proteins proves to be the worst method for generating co-occurrence data, introducing too many false positives. Annotating the functions with greater specificity is harder

  11. Improved predictions of nuclear data: A continued challenge in astrophysics

    International Nuclear Information System (INIS)

    Goriely, S.

    2001-01-01

    Although considerable effort has been devoted in recent decades to measuring reaction cross sections and decay half-lives of interest in astrophysics, most nuclear astrophysics applications still require theoretical predictions to estimate experimentally unknown rates. The nuclear ingredients of the reaction or weak interaction models should preferentially be estimated from microscopic or semi-microscopic global predictions based on sound and reliable nuclear models which, in turn, can compete with more phenomenological, highly parametrized models in the reproduction of experimental data. The latest developments in deriving the nuclear inputs of relevance to astrophysics applications are reviewed. These mainly concern nuclear structure properties (atomic masses, deformations, radii, etc.), nuclear level densities, nucleon and α optical potentials, and γ-ray and Gamow-Teller strength functions.

  12. Improved Methods for Pitch Synchronous Linear Prediction Analysis of Speech

    OpenAIRE

    劉, 麗清

    2015-01-01

    Linear prediction (LP) analysis has been applied to speech system over the last few decades. LP technique is well-suited for speech analysis due to its ability to model speech production process approximately. Hence LP analysis has been widely used for speech enhancement, low-bit-rate speech coding in cellular telephony, speech recognition, characteristic parameter extraction (vocal tract resonances frequencies, fundamental frequency called pitch) and so on. However, the performance of the co...

  13. Improving Transit Predictions of Known Exoplanets with TERMS

    Directory of Open Access Journals (Sweden)

    Mahadevan S.

    2011-02-01

    Full Text Available Transiting planet discoveries have largely been restricted to the short-period or low-periastron distance regimes due to the bias inherent in the geometric transit probability. Through the refinement of planetary orbital parameters, and hence reducing the size of transit windows, long-period planets become feasible targets for photometric follow-up. Here we describe the TERMS project that is monitoring these host stars at predicted transit times.
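
The benefit of refining orbital parameters can be made concrete with standard ephemeris error propagation: the transit-time uncertainty grows with the number of elapsed orbits, so shrinking the period uncertainty shrinks the transit window. A sketch with hypothetical uncertainties:

```python
import math

def transit_window(sigma_t0_days, sigma_p_days, n_epochs, k=1.0):
    """1-sigma half-width (days) of a predicted transit time after
    n_epochs orbits: the period error accumulates linearly with epoch.
    (Standard ephemeris error propagation; k scales to a chosen confidence.)"""
    return k * math.sqrt(sigma_t0_days ** 2 + (n_epochs * sigma_p_days) ** 2)

# Hypothetical long-period planet: epoch known to 0.01 d.
before = transit_window(0.01, 0.001, n_epochs=200)    # stale ephemeris
after = transit_window(0.01, 0.0001, n_epochs=200)    # refined period
print(before, after)  # the refined period shrinks the window markedly
```

After 200 orbits the window is dominated by the accumulated period error, which is why monitoring host stars to refine periods makes photometric follow-up of long-period planets feasible.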

  14. Pan-Arctic sea ice-algal chl a biomass and suitable habitat are largely underestimated for multiyear ice.

    Science.gov (United States)

    Lange, Benjamin A; Flores, Hauke; Michel, Christine; Beckers, Justin F; Bublitz, Anne; Casey, John Alec; Castellani, Giulia; Hatam, Ido; Reppchen, Anke; Rudolph, Svenja A; Haas, Christian

    2017-11-01

    There is mounting evidence that multiyear ice (MYI) is a unique component of the Arctic Ocean and may play a more important ecological role than previously assumed. This study improves our understanding of the potential of MYI as a suitable habitat for sea ice algae on a pan-Arctic scale. We sampled sea ice cores from MYI and first-year sea ice (FYI) within the Lincoln Sea during four consecutive spring seasons. This included four MYI hummocks with a mean chl a biomass of 2.0 mg/m², a value significantly higher than FYI and MYI refrozen ponds. Our results support the hypothesis that MYI hummocks can host substantial ice-algal biomass and represent a reliable ice-algal habitat due to the (quasi-) permanent low-snow surface of these features. We identified an ice-algal habitat threshold value for calculated light transmittance of 0.014%. Ice classes and coverage of suitable ice-algal habitat were determined from snow and ice surveys. These ice classes and associated coverage of suitable habitat were applied to pan-Arctic CryoSat-2 snow and ice thickness data products. This habitat classification accounted for the variability of the snow and ice properties and showed an areal coverage of suitable ice-algal habitat within the MYI-covered region of 0.54 million km² (8.5% of total ice area). This is 27 times greater than the areal coverage of 0.02 million km² (0.3% of total ice area) determined using the conventional block-model classification, which assigns single-parameter values to each grid cell and does not account for subgrid cell variability. This emphasizes the importance of accounting for variable snow and ice conditions in all sea ice studies. Furthermore, our results indicate the loss of MYI will also mean the loss of reliable ice-algal habitat during spring when food is sparse and many organisms depend on ice-algae. © 2017 The Authors. Global Change Biology Published by John Wiley & Sons Ltd.
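
The record's threshold classification reduces to flagging cells whose calculated light transmittance meets the 0.014% habitat threshold and summing their areas. A minimal sketch with hypothetical grid cells:

```python
def suitable_habitat_area(cells, threshold_pct=0.014):
    """Sum the area of grid cells whose light transmittance (in percent)
    meets the ice-algal habitat threshold reported in the record (0.014%).
    `cells` is a list of (transmittance_pct, area_km2) tuples; the values
    below are hypothetical."""
    return sum(area for t, area in cells if t >= threshold_pct)

# Hypothetical grid cells: (transmittance %, area km²)
cells = [(0.5, 100.0), (0.02, 250.0), (0.001, 400.0), (0.014, 50.0)]
print(suitable_habitat_area(cells))  # 400.0
```

The study's point is that transmittance should reflect subgrid snow/ice variability; assigning one value per cell (the block model) misclassifies cells like the third one, which may contain suitable low-snow patches.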

  15. Improving 3D structure prediction from chemical shift data

    Energy Technology Data Exchange (ETDEWEB)

    Schot, Gijs van der [Utrecht University, Computational Structural Biology, Bijvoet Center for Biomolecular Research, Faculty of Science-Chemistry (Netherlands); Zhang, Zaiyong [Technische Universitaet Muenchen, Biomolecular NMR and Munich Center for Integrated Protein Science, Department Chemie (Germany); Vernon, Robert [University of Washington, Department of Biochemistry (United States); Shen, Yang [National Institutes of Health, Laboratory of Chemical Physics, National Institute of Diabetes and Digestive and Kidney Diseases (United States); Vranken, Wim F. [VIB, Department of Structural Biology (Belgium); Baker, David [University of Washington, Department of Biochemistry (United States); Bonvin, Alexandre M. J. J., E-mail: a.m.j.j.bonvin@uu.nl [Utrecht University, Computational Structural Biology, Bijvoet Center for Biomolecular Research, Faculty of Science-Chemistry (Netherlands); Lange, Oliver F., E-mail: oliver.lange@tum.de [Technische Universitaet Muenchen, Biomolecular NMR and Munich Center for Integrated Protein Science, Department Chemie (Germany)

    2013-09-15

    We report advances in the calculation of protein structures from chemical shift nuclear magnetic resonance data alone. Our previously developed method, CS-Rosetta, assembles structures from a library of short protein fragments picked from a large library of protein structures using chemical shifts and sequence information. Here we demonstrate that the combination of a new and improved fragment picker and the iterative sampling algorithm RASREC yields significant improvements in convergence and accuracy. Moreover, we introduce improved criteria for assessing the accuracy of the models produced by the method. The method was tested on 39 proteins in the 50-100 residue size range and yields reliable structures in 70% of the cases. All structures that passed the reliability filter were accurate (<2 Å RMSD from the reference).

  16. Improving Prediction of Large-scale Regime Transitions

    Science.gov (United States)

    Gyakum, J. R.; Roebber, P.; Bosart, L. F.; Honor, A.; Bunker, E.; Low, Y.; Hart, J.; Bliankinshtein, N.; Kolly, A.; Atallah, E.; Huang, Y.

    2017-12-01

    Cool season atmospheric predictability over the CONUS on subseasonal time scales (1-4 weeks) is critically dependent upon the structure, configuration, and evolution of the North Pacific jet stream (NPJ). The NPJ can be perturbed on its tropical side on synoptic time scales by recurving and transitioning tropical cyclones (TCs) and on subseasonal time scales by longitudinally varying convection associated with the Madden-Julian Oscillation (MJO). Likewise, the NPJ can be perturbed on its poleward side on synoptic time scales by midlatitude and polar disturbances that originate over the Asian continent. These midlatitude and polar disturbances can often trigger downstream Rossby wave propagation across the North Pacific, North America, and the North Atlantic. The project team is investigating the following multiscale processes and features: the spatiotemporal distribution of cyclone clustering over the Northern Hemisphere; cyclone clustering as influenced by atmospheric blocking and the phases and amplitudes of the major teleconnection indices, ENSO and the MJO; composite and case study analyses of representative cyclone clustering events to establish the governing dynamics; regime change predictability horizons associated with cyclone clustering events; Arctic air mass generation and modification; life cycles of the MJO; and poleward heat and moisture transports of subtropical air masses. A critical component of the study is weather regime classification. These classifications are defined through: the spatiotemporal clustering of surface cyclogenesis; a general circulation metric combining data at 500-hPa and the dynamic tropopause; Self Organizing Maps (SOM), constructed from dynamic tropopause and 850 hPa equivalent potential temperature data. The resultant lattice of nodes is used to categorize synoptic classes and their predictability, as well as to determine the robustness of the CFSv2 model climate relative to observations. Transition pathways between these

  17. Improving Student Success Using Predictive Models and Data Visualisations

    Science.gov (United States)

    Essa, Alfred; Ayad, Hanan

    2012-01-01

    The need to educate a competitive workforce is a global problem. In the US, for example, despite billions of dollars spent to improve the educational system, approximately 35% of students never finish high school. The dropout rate among some demographic groups is as high as 50-60%. At the college level in the US only 30% of students graduate from…

  18. Improving the TRIGA facility maintenance by predictive maintenance techniques

    International Nuclear Information System (INIS)

    Preda, M.; Sabau, C.; Barbalata, E.

    1997-01-01

    This work deals with the specific operation of equipment in a radioactive environment or in conditions allowing radioactive contamination. The requirements of remote operation ensuring the operators' protection are presented, and the requirements of international standards issued by IAEA-Vienna are reviewed. The organizational drawbacks of the maintenance activities, based on the standards and maintenance and repair directives still in force, are shown. It is emphasized that this type of maintenance was adequate to a given level of technical development, characteristic of pre-computerized industry, but at present it is obsolete and uneconomic both in utilization and maintenance. Such a system already constitutes a burden hindering the efforts to maximize the availability, maintenance and service life of equipment and utilities and, finally, to increase the efficiency of complex installations. Moreover, predictive maintenance techniques are strongly called for by the character of radioactive installations, which precludes direct access to given zones (a potential risk of irradiation or radioactive contamination) during operation. The results obtained by applying predictive maintenance techniques in the operation of the double-circuit irradiation loop used in TRIGA reactors are presented.

  19. Improving, characterizing and predicting the lifetime of organic photovoltaics

    DEFF Research Database (Denmark)

    Gevorgyan, Suren A.; Heckler, Ilona Maria; Bundgaard, Eva

    2017-01-01

    This review summarizes the recent progress in the stability and lifetime of organic photovoltaics (OPVs). In particular, recently proposed solutions to failure mechanisms in different layers of the device stack are discussed comprising both structural and chemical modifications. Upscaling...... characterization reported recently. Lifetime testing and determination is another challenge in the field of organic solar cells and the final sections of this review discuss the testing protocols as well as the generic marker for device lifetime and the methodology for comparing all the lifetime landmarks in one...... common diagram. These tools were used to determine the baselines for OPV lifetime tested under different ageing conditions. Finally, the current status of lifetime for organic solar cells is presented and predictions are made for progress in the near future....

  20. Predicting occurrence of juvenile shark habitat to improve conservation planning.

    Science.gov (United States)

    Oh, Beverly Z L; Sequeira, Ana M M; Meekan, Mark G; Ruppert, Jonathan L W; Meeuwig, Jessica J

    2017-06-01

    Fishing and habitat degradation have increased the extinction risk of sharks, and conservation strategies recognize that survival of juveniles is critical for the effective management of shark populations. Despite the rapid expansion of marine protected areas (MPAs) globally, the paucity of shark-monitoring data on large scales (100s-1000s km) means that the effectiveness of MPAs in halting shark declines remains unclear. Using data collected by baited remote underwater video systems (BRUVS) in northwestern Australia, we developed generalized linear models to elucidate the ecological drivers of habitat suitability for juvenile sharks. We assessed occurrence patterns at the order and species levels. We included all juvenile sharks sampled and the 3 most abundant species sampled separately (grey reef [Carcharhinus amblyrhynchos], sandbar [Carcharhinus plumbeus], and whitetip reef sharks [Triaenodon obesus]). We predicted the occurrence of juvenile sharks across 490,515 km² of coastal waters and quantified the representation of highly suitable habitats within MPAs. Our species-level models had higher accuracy (ĸ ≥ 0.69) and deviance explained (≥48%) than our order-level model (ĸ = 0.36 and deviance explained of 10%). Maps of predicted occurrence revealed different species-specific patterns of highly suitable habitat. These differences likely reflect different physiological or resource requirements between individual species and validate concerns over the utility of conservation targets based on aggregate species groups as opposed to a species-focused approach. Highly suitable habitats were poorly represented in MPAs with the most restrictions on extractive activities. This spatial mismatch possibly indicates a lack of explicit conservation targets and information on species distribution during the planning process. Non-extractive BRUVS provided a useful platform for building the suitability models across large scales to assist conservation planning across
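
A habitat-suitability GLM of this kind is a binomial (logistic) regression scored with Cohen's kappa. A self-contained sketch on synthetic survey data; the predictors, effect sizes, and sample size here are invented, not the study's:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical BRUVS-style survey: depth and relief drive occurrence.
n = 400
depth = rng.uniform(2, 60, n)
relief = rng.uniform(0, 1, n)
logit = 2.0 - 0.08 * depth + 1.5 * relief
occur = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

# Binomial GLM (logistic regression) fit by gradient ascent on the
# log-likelihood, with standardized predictors for stable steps.
Xs = np.column_stack([
    np.ones(n),
    (depth - depth.mean()) / depth.std(),
    (relief - relief.mean()) / relief.std(),
])
w = np.zeros(3)
for _ in range(5000):
    p = 1 / (1 + np.exp(-Xs @ w))
    w += 0.1 * Xs.T @ (occur - p) / n

def cohens_kappa(y_true, y_pred):
    """Observed agreement corrected for chance agreement."""
    po = np.mean(y_true == y_pred)
    pe = (np.mean(y_true) * np.mean(y_pred)
          + np.mean(1 - y_true) * np.mean(1 - y_pred))
    return (po - pe) / (1 - pe)

pred = (1 / (1 + np.exp(-Xs @ w)) > 0.5).astype(float)
kappa = cohens_kappa(occur, pred)
print(round(kappa, 2))
```

Fitting single-species data (as above) gives a cleaner signal than pooling all species into one response, which mirrors the study's finding that species-level models outperform the order-level model.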

  2. Too much food may cause reduced growth of blue mussels (Mytilus edulis) - Test of hypothesis and new 'high Chl a BEG-model'

    Science.gov (United States)

    Larsen, Poul S.; Lüskow, Florian; Riisgård, Hans Ulrik

    2018-04-01

    Growth of the blue mussel (Mytilus edulis) is closely related to the biomass of phytoplankton (expressed as concentration of chlorophyll a, Chl a), but the effect of too much food in eutrophicated areas has so far been overlooked. The hypothesis addressed in the present study suggests that high Chl a concentrations (> about 8 μg Chl a l⁻¹) result in reduced growth because mussels are not evolutionarily adapted to utilize such high phytoplankton concentrations and to physiologically regulate the amount of ingested food in such a way that the growth rate remains high and constant. We first make a comparison of literature values for actually measured weight-specific growth rates (μ, % d⁻¹) of small (20 to 25 mm) M. edulis, either grown in controlled laboratory experiments or in net bags in Danish waters, as a function of Chl a. A linear increase up to about μ = 8.3% d⁻¹ at 8.1 μg Chl a l⁻¹ fits the "standard BEG-model" after which a marked decrease takes place, and this supports the hypothesis. A "high Chl a BEG-model", applicable to newly settled post-metamorphic and small juvenile (non-spawning) mussels in eutrophicated Danish and other temperate waters, is developed and tested, and new data from a case study in which the growth of mussels in net bags was measured along a Chl a gradient are presented. Finally, we discuss the phenomenon of reduced growth of mussels in eutrophicated areas versus a possible impact of low salinity. It is concluded that it is difficult to separate the effect of salinity from the effect of Chl a, but the present study shows that too much food may cause reduced growth of mussels in eutrophicated marine areas regardless of high or moderate salinity above about 10 psu.
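
The piecewise shape described above (a linear rise to μ ≈ 8.3% d⁻¹ at about 8.1 μg Chl a l⁻¹, followed by a marked decrease) can be sketched as a simple function. The slope of the declining limb below is an illustrative assumption, not the paper's fitted value:

```python
def growth_rate(chla, mu_max=8.3, chla_opt=8.1, decline=0.5):
    """Weight-specific growth rate mu (% per day) of small M. edulis as a
    piecewise function of Chl a (ug/l): linear up to the optimum, then a
    linear decline. The rising limb follows the record (mu = 8.3 %/d at
    8.1 ug/l); the decline slope is an illustrative assumption."""
    if chla <= chla_opt:
        return mu_max * chla / chla_opt
    return max(0.0, mu_max - decline * (chla - chla_opt))

print(growth_rate(4.05))  # 4.15, halfway up the rising limb
print(growth_rate(8.1))   # 8.3, the optimum
print(growth_rate(12.0))  # reduced growth above the optimum
```

The key qualitative feature is non-monotonicity: beyond the optimum, more food means less growth, which is the paper's "too much food" effect.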

  3. Herb-drug interactions: challenges and opportunities for improved predictions.

    Science.gov (United States)

    Brantley, Scott J; Argikar, Aneesh A; Lin, Yvonne S; Nagar, Swati; Paine, Mary F

    2014-03-01

    Supported by a usage history that predates written records and the perception that "natural" ensures safety, herbal products have increasingly been incorporated into Western health care. Consumers often self-administer these products concomitantly with conventional medications without informing their health care provider(s). Such herb-drug combinations can produce untoward effects when the herbal product perturbs the activity of drug metabolizing enzymes and/or transporters. Despite increasing recognition of these types of herb-drug interactions, a standard system for interaction prediction and evaluation is nonexistent. Consequently, the mechanisms underlying herb-drug interactions remain an understudied area of pharmacotherapy. Evaluation of herbal product interaction liability is challenging due to variability in herbal product composition, uncertainty of the causative constituents, and often scant knowledge of causative constituent pharmacokinetics. These limitations are confounded further by the varying perspectives concerning herbal product regulation. Systematic evaluation of herbal product drug interaction liability, as is routine for new drugs under development, necessitates identifying individual constituents from herbal products and characterizing the interaction potential of such constituents. Integration of this information into in silico models that estimate the pharmacokinetics of individual constituents should facilitate prospective identification of herb-drug interactions. These concepts are highlighted with the exemplar herbal products milk thistle and resveratrol. Implementation of this methodology should help provide definitive information to both consumers and clinicians about the risk of adding herbal products to conventional pharmacotherapeutic regimens.

  5. Temperature prediction model of asphalt pavement in cold regions based on an improved BP neural network

    International Nuclear Information System (INIS)

    Xu, Bo; Dan, Han-Cheng; Li, Liang

    2017-01-01

    Highlights:
    • Pavement temperature prediction model is presented with improved BP neural network.
    • Dynamic and static methods are presented to predict pavement temperature.
    • Pavement temperature can be excellently predicted in next 3 h.

    Abstract: Ice cover on pavement threatens traffic safety, and pavement temperature is the main factor used to determine whether the wet pavement is icy or not. In this paper, a temperature prediction model of the pavement in winter is established by introducing an improved Back Propagation (BP) neural network model. Before the application of the BP neural network model, many efforts were made to eliminate chaos and determine the regularity of temperature on the pavement surface (e.g., analyze the regularity of diurnal and monthly variations of pavement temperature). New dynamic and static prediction methods are presented by improving the algorithms to intelligently overcome the prediction inaccuracy at the change point of daily temperature. Furthermore, some scenarios have been compared for different dates and road sections to verify the reliability of the prediction model. According to the analysis results, the daily pavement temperatures can be accurately predicted for the next 3 h from the time of prediction by combining the dynamic and static prediction methods. The presented method in this paper can provide technical references for temperature prediction of the pavement and the development of an early-warning system for icy pavements in cold regions.
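
A minimal BP (backpropagation) network for this kind of 3-h-ahead temperature prediction can be sketched in a few lines. The diurnal series, network size, and learning rate here are hypothetical, and the paper's chaos-elimination preprocessing and dynamic/static switching are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy hourly "pavement temperature": a diurnal cycle plus noise (synthetic).
t = np.arange(0.0, 24 * 14)
temp = 5 * np.sin(2 * np.pi * t / 24) - 2 + rng.normal(0, 0.3, t.size)

# Supervised pairs: predict 3 h beyond the last of 6 hourly readings.
lags, horizon = 6, 3
X = np.array([temp[i:i + lags] for i in range(temp.size - lags - horizon + 1)])
y = temp[lags + horizon - 1:]
X = (X - X.mean()) / X.std()          # standardize for stable training
y = (y - y.mean()) / y.std()

# One-hidden-layer BP network: tanh hidden units, linear output.
W1 = rng.normal(0, 0.1, (lags, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.1, (8, 1));    b2 = np.zeros(1)
lr = 0.05

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, (h @ W2 + b2).ravel()

mse0 = float(np.mean((forward(X)[1] - y) ** 2))
for _ in range(2000):
    h, pred = forward(X)
    g_out = (2 * (pred - y) / len(y))[:, None]   # dMSE/dpred
    g_h = (g_out @ W2.T) * (1 - h ** 2)          # backprop through tanh
    W2 -= lr * h.T @ g_out; b2 -= lr * g_out.sum(0)
    W1 -= lr * X.T @ g_h;   b1 -= lr * g_h.sum(0)

mse = float(np.mean((forward(X)[1] - y) ** 2))
print(round(mse0, 3), round(mse, 3))  # training error drops
```

The gradient updates are the plain BP rule; the paper's "improved" variant adds corrections at the daily change points, which this sketch omits.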

  6. Improving the reliability of fishery predictions under climate change

    DEFF Research Database (Denmark)

    Brander, Keith

    2015-01-01

    The increasing number of publications assessing impacts of climate change on marine ecosystems and fisheries attests to rising scientific and public interest. A selection of recent papers, dealing more with biological than social and economic aspects, is reviewed here, with particular attention...... to the reliability of projections of climate impacts on future fishery yields. The 2014 Intergovernmental Panel on Climate Change (IPCC) report expresses high confidence in projections that mid- and high-latitude fish catch potential will increase by 2050 and medium confidence that low-latitude catch potential...... understanding of climate impacts, such as how to improve coupled models from physics to fish and how to strengthen confidence in analysis of time series...

  7. Verification and improvement of predictive algorithms for radionuclide migration

    International Nuclear Information System (INIS)

    Carnahan, C.L.; Miller, C.W.; Remer, J.S.

    1984-01-01

    This research addresses issues relevant to numerical simulation and prediction of migration of radionuclides in the environment of nuclear waste repositories. Specific issues investigated are the adequacy of current numerical codes in simulating geochemical interactions affecting radionuclide migration, the level of complexity required in chemical algorithms of transport models, and the validity of the constant-k_D concept in chemical transport modeling. An initial survey of the literature led to the conclusion that existing numerical codes did not encompass the full range of chemical and physical phenomena influential in radionuclide migration. Studies of chemical algorithms have been conducted within the framework of a one-dimensional numerical code that simulates the transport of chemically reacting solutes in a saturated porous medium. The code treats transport by dispersion/diffusion and advection, and equilibrium-controlled processes of interphase mass transfer, complexation in the aqueous phase, pH variation, and precipitation/dissolution of secondary solids. Irreversible, time-dependent dissolution of solid phases during transport can be treated. Mass action, transport, and sorptive site constraint equations are expressed in differential/algebraic form and are solved simultaneously. Simulations using the code show that use of the constant-k_D concept can produce unreliable results in geochemical transport modeling. Applications to a field test and laboratory analogs of a nuclear waste repository indicate that a thermodynamically based simulator of chemical transport can successfully mimic real processes provided that operative chemical mechanisms and associated data have been correctly identified and measured, and have been incorporated in the simulator. 17 references, 10 figures
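
What the constant-k_D concept assumes can be shown in two lines: linear equilibrium sorption collapses into a constant retardation factor that uniformly slows the solute, which is exactly the simplification the study finds unreliable when the real chemistry varies along the flow path. A sketch with hypothetical values:

```python
def retardation_factor(bulk_density, porosity, k_d):
    """Retardation factor R = 1 + (rho_b / theta) * k_D for linear
    equilibrium sorption -- the assumption behind the constant-k_D
    concept. Units: rho_b in kg/L, k_D in L/kg, theta dimensionless."""
    return 1.0 + (bulk_density / porosity) * k_d

# A sorbing solute travels slower than the groundwater by the factor R.
v_water = 10.0                        # m/yr, hypothetical pore velocity
R = retardation_factor(bulk_density=1.6, porosity=0.3, k_d=5.0)
print(R, v_water / R)
```

A thermodynamically based simulator instead recomputes the sorbed fraction from mass-action equations at each step, so the effective k_D (and hence R) can change with pH, competing ions, and mineral dissolution.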

  8. Improving Radar QPE's in Complex Terrain for Improved Flash Flood Monitoring and Prediction

    Science.gov (United States)

    Cifelli, R.; Streubel, D. P.; Reynolds, D.

    2010-12-01

    Quantitative Precipitation Estimation (QPE) is extremely challenging in regions of complex terrain due to a combination of issues related to sampling. In particular, radar beams are often blocked or scan above the liquid precipitation zone while rain gauge density is often too low to properly characterize the spatial distribution of precipitation. Due to poor radar coverage, rain gauge networks are used by the National Weather Service (NWS) River Forecast Centers as the principal source for QPE across the western U.S. The California Nevada River Forecast Center (CNRFC) uses point rainfall measurements and historical rainfall runoff relationships to derive river stage forecasts. The point measurements are interpolated to a 4 km grid using Parameter-elevation Regressions on Independent Slopes Model (PRISM) data to develop a gridded 6-hour QPE product (hereafter referred to as RFC QPE). Local forecast offices can utilize the Multi-sensor Precipitation Estimator (MPE) software to improve local QPE’s and thus local flash flood monitoring and prediction. MPE uses radar and rain gauge data to develop a combined QPE product at 1-hour intervals. The rain gauge information is used to bias correct the radar precipitation estimates so that, in situations where the rain gauge density and radar coverage are adequate, MPE can take advantage of the spatial coverage of the radar and the “ground truth” of the rain gauges to provide an accurate QPE. The MPE 1-hour QPE analysis should provide better spatial and temporal resolution for short duration hydrologic events as compared to 6-hour analyses. These hourly QPEs are then used to correct radar derived rain rates used by the Flash Flood Monitoring and Prediction (FFMP) software in forecast offices for issuance of flash flood warnings. Although widely used by forecasters across the eastern U.S., MPE is not used extensively by the NWS in the west. Part of the reason for the lack of use of MPE across the west is that there has
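
The gauge-based bias correction at the heart of MPE-style merging can be illustrated with its simplest variant, a mean-field bias factor applied to the gridded radar field. Values are hypothetical, and operational MPE also applies local, spatially varying corrections:

```python
def mean_field_bias(gauges, radar_at_gauges):
    """Gauge/radar mean-field bias factor: the ratio of total gauge
    accumulation to total radar-estimated accumulation at gauge sites.
    (Simplest of the corrections used in multisensor QPE merging.)"""
    g, r = sum(gauges), sum(radar_at_gauges)
    return g / r if r > 0 else 1.0

gauges = [12.0, 8.5, 15.2, 6.1]          # mm over 1 h at gauge sites
radar = [9.0, 7.0, 11.5, 5.0]            # collocated radar estimates
bias = mean_field_bias(gauges, radar)

radar_field = [[4.0, 6.5], [8.0, 2.5]]   # gridded radar QPE (mm)
corrected = [[bias * v for v in row] for row in radar_field]
print(round(bias, 3))
```

Here the radar underestimates (bias > 1), a common situation in complex terrain where beams overshoot the liquid precipitation zone; scaling the grid by the bias restores agreement with the gauge "ground truth" while keeping the radar's spatial detail.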

  9. Using synchronization in multi-model ensembles to improve prediction

    Science.gov (United States)

    Hiemstra, P.; Selten, F.

    2012-04-01

    In recent decades, many climate models have been developed to understand and predict the behavior of the Earth's climate system. Although these models are all based on the same basic physical principles, they still show different behavior. This is for example caused by the choice of how to parametrize sub-grid scale processes. One method to combine these imperfect models is to run a multi-model ensemble. The models are given identical initial conditions and are integrated forward in time. A multi-model estimate can for example be a weighted mean of the ensemble members. We propose to go a step further, and try to obtain synchronization between the imperfect models by connecting the multi-model ensemble, and exchanging information. The combined multi-model ensemble is also known as a supermodel. The supermodel has learned from observations how to optimally exchange information between the ensemble members. In this study we focused on the density and formulation of the connections within the supermodel. The main question was whether we could obtain synchronization between two climate models when connecting only a subset of their state spaces. Limiting the connected subspace has two advantages: 1) it limits the transfer of data (bytes) between the ensemble, which can be a limiting factor in large scale climate models, and 2) learning the optimal connection strategy from observations is easier. To answer the research question, we connected two identical quasi-geostrophic (QG) atmospheric models to each other, where the models have different initial conditions. The QG model is a qualitatively realistic simulation of the winter flow on the Northern hemisphere, has three layers and uses a spectral implementation. We connected the models in the original spherical harmonic state space, and in linear combinations of these spherical harmonics, i.e. Empirical Orthogonal Functions (EOFs). We show that when connecting through spherical harmonics, we only need to connect 28% of
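
The idea of synchronizing models by connecting only a subset of the state space can be illustrated with a toy pair of Lorenz-63 "models" coupled through the x variable alone (a stand-in for the QG models; the coupling strength and step size are illustrative):

```python
import numpy as np

def lorenz_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def run_coupled(k=10.0, dt=0.002, steps=20000):
    """Two identical Lorenz-63 'models' from different initial conditions,
    diffusively coupled through the x variable only -- a toy analogue of
    connecting a subset of the state space. Returns the initial and final
    mismatch in the UNcoupled (y, z) components."""
    s1 = np.array([1.0, 1.0, 1.0])
    s2 = np.array([-5.0, 3.0, 20.0])
    err0 = np.linalg.norm(s1[1:] - s2[1:])
    for _ in range(steps):
        c = k * (s2[0] - s1[0])          # coupling acts on x only
        d1 = lorenz_rhs(s1); d1[0] += c
        d2 = lorenz_rhs(s2); d2[0] -= c
        s1 = s1 + dt * d1
        s2 = s2 + dt * d2
    return err0, float(np.linalg.norm(s1[1:] - s2[1:]))

err0, err1 = run_coupled()
print(err0, err1)  # (y, z) synchronize even though only x is connected
```

Because the (y, z) subsystem driven by a common x is stable, locking x alone pulls the whole states together, which is the mechanism that makes connecting a small fraction of the state space sufficient.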

  10. Improving models to predict phenological responses to global change

    Energy Technology Data Exchange (ETDEWEB)

    Richardson, Andrew D. [Harvard College, Cambridge, MA (United States)

    2015-11-25

    The term phenology describes both the seasonal rhythms of plants and animals, and the study of these rhythms. Plant phenological processes, including, for example, when leaves emerge in the spring and change color in the autumn, are highly responsive to variation in weather (e.g. a warm vs. cold spring) as well as longer-term changes in climate (e.g. warming trends and changes in the timing and amount of rainfall). We conducted a study to investigate the phenological response of northern peatland communities to global change. Field work was conducted at the SPRUCE experiment in northern Minnesota, where we installed 10 digital cameras. Imagery from the cameras is being used to track shifts in plant phenology driven by elevated carbon dioxide and elevated temperature in the different SPRUCE experimental treatments. Camera imagery and derived products (“greenness”) is being posted in near-real time on a publicly available web page (http://phenocam.sr.unh.edu/webcam/gallery/). The images will provide a permanent visual record of the progression of the experiment over the next 10 years. Integrated with other measurements collected as part of the SPRUCE program, this study is providing insight into the degree to which phenology may mediate future shifts in carbon uptake and storage by peatland ecosystems. In the future, these data will be used to develop improved models of vegetation phenology, which will be tested against ground observations collected by a local collaborator.
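
Camera-derived "greenness" is typically the green chromatic coordinate; the record does not name its formula, so GCC is assumed here:

```python
def green_chromatic_coordinate(r, g, b):
    """Green chromatic coordinate GCC = G / (R + G + B), a standard
    'greenness' index computed from mean digital numbers over a region
    of interest in phenocam imagery (assumed here; the record does not
    specify its derived product's formula)."""
    total = r + g + b
    return g / total if total else 0.0

# Mean RGB digital numbers for two hypothetical images of the same plot
print(round(green_chromatic_coordinate(90.0, 120.0, 80.0), 3))  # leafy
print(round(green_chromatic_coordinate(100.0, 95.0, 90.0), 3))  # dormant
```

Tracking GCC through the season yields the green-up and senescence dates that phenology models are then fit against.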

  11. Data Prediction for Public Events in Professional Domains Based on Improved RNN-LSTM

    Science.gov (United States)

    Song, Bonan; Fan, Chunxiao; Wu, Yuexin; Sun, Juanjuan

    2018-02-01

    The traditional data services of prediction for emergency or non-periodic events usually cannot generate satisfying results or fulfill the correct prediction purpose. However, these events are influenced by external causes, which means that certain a priori information about them can generally be collected through the Internet. This paper studied the above problems and proposed an improved model, an LSTM (Long Short-Term Memory) dynamic prediction and a priori information sequence generation model, by combining RNN-LSTM with a priori information on public events. In prediction tasks, the model is capable of determining trends, and its accuracy is validated. This model achieves better performance and prediction results than the previous one. Using a priori information can increase the accuracy of prediction; LSTM can better adapt to changes in a time sequence; and LSTM can be widely applied to the same type of prediction task as well as other prediction tasks related to time sequences.

  12. Explicit Modeling of Ancestry Improves Polygenic Risk Scores and BLUP Prediction.

    Science.gov (United States)

    Chen, Chia-Yen; Han, Jiali; Hunter, David J; Kraft, Peter; Price, Alkes L

    2015-09-01

    Polygenic prediction using genome-wide SNPs can provide high prediction accuracy for complex traits. Here, we investigate the question of how to account for genetic ancestry when conducting polygenic prediction. We show that the accuracy of polygenic prediction in structured populations may be partly due to genetic ancestry. However, we hypothesized that explicitly modeling ancestry could improve polygenic prediction accuracy. We analyzed three GWAS of hair color (HC), tanning ability (TA), and basal cell carcinoma (BCC) in European Americans (sample size from 7,440 to 9,822) and considered two widely used polygenic prediction approaches: polygenic risk scores (PRSs) and best linear unbiased prediction (BLUP). We compared polygenic prediction without correction for ancestry to polygenic prediction with ancestry as a separate component in the model. In 10-fold cross-validation using the PRS approach, the R(2) for HC increased by 66% (0.0456-0.0755; P ancestry, which prevents ancestry effects from entering into each SNP effect and being overweighted. Surprisingly, explicitly modeling ancestry produces a similar improvement when using the BLUP approach, which fits all SNPs simultaneously in a single variance component and causes ancestry to be underweighted. We validate our findings via simulations, which show that the differences in prediction accuracy will increase in magnitude as sample sizes increase. In summary, our results show that explicitly modeling ancestry can be important in both PRS and BLUP prediction. © 2015 WILEY PERIODICALS, INC.
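
The effect of adding ancestry as an explicit model component can be illustrated on synthetic structured data: a toy genetic score plus an ancestry covariate in ordinary least squares. This sketches only the accounting, not the PRS/BLUP pipelines or the reported effect sizes; all data and effect sizes are invented:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic structured cohort: one ancestry axis shifts both allele
# frequencies and the trait, mimicking population stratification.
n, m = 1000, 50
ancestry = rng.normal(size=n)                   # one ancestry component
freqs = 0.5 + 0.05 * rng.normal(size=m)         # base allele frequencies
geno = rng.binomial(2, np.clip(freqs + 0.05 * ancestry[:, None], 0.05, 0.95))
beta = rng.normal(0, 0.1, m)
trait = geno @ beta + 0.8 * ancestry + rng.normal(0, 1.0, n)

prs = geno @ beta                                # toy "risk score"

def r2(y, X):
    """R^2 of an ordinary-least-squares fit of y on the columns of X
    (an intercept is added internally)."""
    X = np.column_stack([np.ones(len(y)), X])
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return 1 - resid.var() / y.var()

r2_prs = r2(trait, prs[:, None])
r2_prs_anc = r2(trait, np.column_stack([prs, ancestry]))
print(round(r2_prs, 3), round(r2_prs_anc, 3))  # ancestry term raises R^2
```

With ancestry as a separate term, its effect no longer has to be absorbed into the per-SNP score, which is the over/underweighting problem the record describes for PRS and BLUP respectively.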

  13. Advanced Materials Test Methods for Improved Life Prediction of Turbine Engine Components

    National Research Council Canada - National Science Library

    Stubbs, Jack

    2000-01-01

    Phase I final report developed under SBIR contract for Topic # AF00-149, "Durability of Turbine Engine Materials/Advanced Material Test Methods for Improved Life Prediction of Turbine Engine Components...

  14. PSORTb 3.0: improved protein subcellular localization prediction with refined localization subcategories and predictive capabilities for all prokaryotes.

    Science.gov (United States)

    Yu, Nancy Y; Wagner, James R; Laird, Matthew R; Melli, Gabor; Rey, Sébastien; Lo, Raymond; Dao, Phuong; Sahinalp, S Cenk; Ester, Martin; Foster, Leonard J; Brinkman, Fiona S L

    2010-07-01

    PSORTb has remained the most precise bacterial protein subcellular localization (SCL) predictor since it was first made available in 2003. However, its recall needs to be improved, and no accurate SCL predictors yet make predictions for archaea or differentiate important localization subcategories, such as proteins targeted to a host cell or to bacterial hyperstructures/organelles. Such improvements should preferably be encompassed in a freely available web-based predictor that can also be used as a standalone program. We developed PSORTb version 3.0 with improved recall, higher proteome-scale prediction coverage, and new refined localization subcategories. It is the first SCL predictor specifically geared for all prokaryotes, including archaea and bacteria with atypical membrane/cell wall topologies. It features an improved standalone program, with a new batch results delivery system complementing its web interface. We evaluated the most accurate SCL predictors using 5-fold cross-validation and performed an independent proteomics analysis, showing that PSORTb 3.0 is the most accurate but can benefit from being complemented by Proteome Analyst predictions. Availability: http://www.psort.org/psortb (download the open source software or use the web interface). Contact: psort-mail@sfu.ca. Supplementary data are available at Bioinformatics online.

  15. Clonal Evaluation of Prostate Cancer by ERG/SPINK1 Status to Improve Prognosis Prediction

    Science.gov (United States)

    2017-12-01

    NIH project: Exploiting drivers of androgen receptor signaling-negative prostate cancer for precision medicine. Goal(s): Identify novel potential drivers... Award number: W81XWH-14-1-0466. Title: Clonal Evaluation of Prostate Cancer by ERG/SPINK1 Status to Improve Prognosis Prediction (report dated Sept 2017).

  16. Research on Improved Depth Belief Network-Based Prediction of Cardiovascular Diseases

    Directory of Open Access Journals (Sweden)

    Peng Lu

    2018-01-01

    Quantitative analysis and prediction can help to reduce the risk of cardiovascular disease. Quantitative prediction based on traditional models has low accuracy, and the variance of predictions from shallow neural networks is larger. In this paper, a cardiovascular disease prediction model based on an improved deep belief network (DBN) is proposed. Using the reconstruction error, the network depth is determined independently, and unsupervised training and supervised optimization are combined, ensuring prediction accuracy while guaranteeing stability. Thirty experiments were performed independently on the Statlog (Heart) and Heart Disease Database data sets in the UCI database. Experimental results showed that the mean prediction accuracy was 91.26% and 89.78%, respectively, and the variance of prediction accuracy was 5.78 and 4.46, respectively.

  17. Improved understanding of physics processes in pedestal structure, leading to improved predictive capability for ITER

    International Nuclear Information System (INIS)

    Groebner, R.J.; Snyder, P.B.; Leonard, A.W.; Chang, C.S.; Maingi, R.; Boyle, D.P.; Diallo, A.; Hughes, J.W.; Davis, E.M.; Ernst, D.R.; Landreman, M.; Xu, X.Q.; Boedo, J.A.; Cziegler, I.; Diamond, P.H.; Eldon, D.P.; Callen, J.D.; Canik, J.M.; Elder, J.D.; Fulton, D.P.

    2013-01-01

    Joint experiment/theory/modelling research has led to increased confidence in predictions of the pedestal height in ITER. This work was performed as part of a US Department of Energy Joint Research Target in FY11 to identify physics processes that control the H-mode pedestal structure. The study included experiments on C-Mod, DIII-D and NSTX as well as interpretation of experimental data with theory-based modelling codes. This work provides increased confidence in the ability of models for peeling–ballooning stability, bootstrap current, pedestal width and pedestal height scaling to make correct predictions, with some areas needing further work also being identified. A model for pedestal pressure height has made good predictions in existing machines for a range in pressure of a factor of 20. This provides a solid basis for predicting the maximum pedestal pressure height in ITER, which is found to be an extrapolation of a factor of 3 beyond the existing data set. Models were studied for a number of processes that are proposed to play a role in the pedestal n_e and T_e profiles. These processes include neoclassical transport, paleoclassical transport, electron temperature gradient turbulence and neutral fuelling. All of these processes may be important, with the importance being dependent on the plasma regime. Studies with several electromagnetic gyrokinetic codes show that the gradients in and on top of the pedestal can drive a number of instabilities. (paper)

  18. Spatial-temporal dynamics of NDVI and Chl-a concentration from 1998 to 2009 in the East coastal zone of China: integrating terrestrial and oceanic components.

    Science.gov (United States)

    Hou, Xiyong; Li, Mingjie; Gao, Meng; Yu, Liangju; Bi, Xiaoli

    2013-01-01

    Annual normalized difference vegetation index (NDVI) and chlorophyll-a (Chl-a) concentration are the most important large-scale indicators of terrestrial and oceanic ecosystem net primary productivity. In this paper, the Sea-viewing Wide Field-of-view Sensor level 3 standard mapped image annual products from 1998 to 2009 are used to study the spatial-temporal characteristics of terrestrial NDVI and oceanic Chl-a concentration on the two sides of the coastline of China, using the mean value (M), the coefficient of variation (CV), the slope of a unary linear regression model (Slope), and the Hurst index (H). Specifically, we analyzed the spatial-temporal dynamics, the longitudinal and latitudinal zonality, and the direction, intensity, and persistency of historical changes. The results showed that: (1) the spatial patterns of M and CV for NDVI and Chl-a concentration from 1998 to 2009 were very different; the dynamic variation of terrestrial NDVI was mild, while the variation of oceanic Chl-a concentration was relatively much larger; (2) distinct longitudinal zonality was found for both Chl-a concentration and NDVI due to their hypersensitivity to the distance to the shoreline, and strong latitudinal zonality existed for Chl-a concentration while terrestrial NDVI had very weak latitudinal zonality; (3) overall, the NDVI showed a slight decreasing trend while the Chl-a concentration showed a significant increasing trend over the past 12 years, and both exhibited strong self-similarity and long-range dependence, which indicates opposite future trends between land and ocean.
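
    The per-pixel statistics named above (M, CV, Slope) are straightforward to compute; a sketch on a hypothetical 12-year annual series (the Hurst index, which needs a longer record and a rescaled-range fit, is omitted):

```python
# Sketch of the per-pixel statistics used above: mean (M), coefficient of
# variation (CV), and the slope of a unary linear regression against year.
# Toy annual NDVI-like series, not the study's data.
from statistics import mean, pstdev

series = [0.42, 0.44, 0.43, 0.47, 0.46, 0.49, 0.48, 0.51, 0.50, 0.53, 0.52, 0.55]
years = list(range(1998, 1998 + len(series)))

m = mean(series)
cv = pstdev(series) / m
xbar = mean(years)
slope = (sum((x - xbar) * (y - m) for x, y in zip(years, series))
         / sum((x - xbar) ** 2 for x in years))

print(round(m, 3), round(cv, 3), round(slope, 4))  # positive slope: increasing trend
```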

  19. Improved prediction of genetic predisposition to psychiatric disorders using genomic feature best linear unbiased prediction models

    DEFF Research Database (Denmark)

    Rohde, Palle Duun; Demontis, Ditte; Børglum, Anders

    …is enriched for causal variants. Here we apply the GFBLUP model to a small schizophrenia case-control study to test the promise of this model on psychiatric disorders, and hypothesize that the performance will be increased when applying the model to a larger ADHD case-control study if the genomic feature… contains the causal variants. Materials and Methods: The schizophrenia study consisted of 882 controls and 888 schizophrenia cases genotyped for 520,000 SNPs. The ADHD study contained 25,954 controls and 16,663 ADHD cases with 8.4 million imputed genotypes. Results: The predictive ability for schizophrenia… 6% for the null model). Conclusion: The improvement in predictive ability for schizophrenia was marginal; however, greater improvement is expected for the larger ADHD data…

  20. Improvement of gas entrainment prediction method. Introduction of surface tension effect

    International Nuclear Information System (INIS)

    Ito, Kei; Sakai, Takaaki; Ohshima, Hiroyuki; Uchibori, Akihiro; Eguchi, Yuzuru; Monji, Hideaki; Xu, Yongze

    2010-01-01

    A gas entrainment (GE) prediction method has been developed to establish design criteria for the large-scale sodium-cooled fast reactor (JSFR) systems. The prototype of the GE prediction method was already confirmed to give reasonable gas core lengths by simple calculation procedures. However, for simplification, the surface tension effects were neglected. In this paper, the evaluation accuracy of gas core lengths is improved by introducing the surface tension effects into the prototype GE prediction method. First, the mechanical balance between gravitational, centrifugal, and surface tension forces is considered. Then, the shape of a gas core tip is approximated by a quadratic function. Finally, using the approximated gas core shape, the authors determine the gas core length satisfying the mechanical balance. This improved GE prediction method is validated by analyzing the gas core lengths observed in simple experiments. Results show that the analytical gas core lengths calculated by the improved GE prediction method become shorter in comparison to the prototype GE prediction method, and are in good agreement with the experimental data. In addition, the experimental data under different temperature and surfactant concentration conditions are reproduced by the improved GE prediction method. (author)

  1. Neurophysiology in preschool improves behavioral prediction of reading ability throughout primary school.

    Science.gov (United States)

    Maurer, Urs; Bucher, Kerstin; Brem, Silvia; Benz, Rosmarie; Kranz, Felicitas; Schulz, Enrico; van der Mark, Sanne; Steinhausen, Hans-Christoph; Brandeis, Daniel

    2009-08-15

    More struggling readers could profit from additional help at the beginning of reading acquisition if dyslexia prediction were more successful. Currently, prediction is based only on behavioral assessment of early phonological processing deficits associated with dyslexia, but it might be improved by adding brain-based measures. In a 5-year longitudinal study of children with (n = 21) and without (n = 23) familial risk for dyslexia, we tested whether neurophysiological measures of automatic phoneme and tone deviance processing obtained in kindergarten would improve prediction of reading over behavioral measures alone. Together, neurophysiological and behavioral measures obtained in kindergarten significantly predicted reading in school. Particularly the late mismatch negativity measure that indicated hemispheric lateralization of automatic phoneme processing improved prediction of reading ability over behavioral measures. It was also the only significant predictor for long-term reading success in fifth grade. Importantly, this result also held for the subgroup of children at familial risk. The results demonstrate that brain-based measures of processing deficits associated with dyslexia improve prediction of reading and thus may be further evaluated to complement clinical practice of dyslexia prediction, especially in targeted populations, such as children with a familial risk.

  2. BAYESIAN FORECASTS COMBINATION TO IMPROVE THE ROMANIAN INFLATION PREDICTIONS BASED ON ECONOMETRIC MODELS

    Directory of Open Access Journals (Sweden)

    Mihaela Simionescu

    2014-12-01

    There are many types of econometric models used in predicting the inflation rate, but in this study we used a Bayesian shrinkage combination approach. This methodology is used in order to improve prediction accuracy by including information that is not captured by the econometric models. Therefore, experts' forecasts are utilized as prior information; for Romania these predictions are provided by the Institute for Economic Forecasting (Dobrescu macromodel), the National Commission for Prognosis and the European Commission. The empirical results for Romanian inflation show the superiority of a fixed effects model compared to other types of econometric models such as VAR, Bayesian VAR, simultaneous equations models, dynamic models, and log-linear models. The Bayesian combinations that used experts' predictions as priors, when the shrinkage parameter tends to infinity, improved the accuracy of all forecasts based on individual models, also outperforming zero-weight and equal-weight predictions and naïve forecasts.
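
    The shrinkage behavior described above can be sketched in a few lines (illustrative forecast values; the actual Bayesian combination weights model and prior by their precisions, which this simple form only caricatures):

```python
# Sketch of a shrinkage combination: the model-based forecast is pulled toward
# an expert prior by a shrinkage parameter lam; as lam grows without bound the
# combination converges to the experts' prediction (illustrative numbers).
def shrink(model_fc, expert_fc, lam):
    return (model_fc + lam * expert_fc) / (1.0 + lam)

model_fc, expert_fc = 4.1, 3.2   # e.g. inflation rate forecasts, in percent
for lam in (0.0, 1.0, 100.0):
    print(lam, round(shrink(model_fc, expert_fc, lam), 3))
```

    With lam = 0 the combination reproduces the econometric model's forecast; as lam grows it converges to the experts' prior, matching the limiting case discussed above.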

  3. TMDIM: an improved algorithm for the structure prediction of transmembrane domains of bitopic dimers

    Science.gov (United States)

    Cao, Han; Ng, Marcus C. K.; Jusoh, Siti Azma; Tai, Hio Kuan; Siu, Shirley W. I.

    2017-09-01

    α-Helical transmembrane proteins are the most important drug targets in rational drug development. However, solving the experimental structures of these proteins remains difficult; therefore, computational methods to accurately and efficiently predict the structures are in great demand. We present an improved structure prediction method, TMDIM, based on Park et al. (Proteins 57:577-585, 2004) for predicting bitopic transmembrane protein dimers. The three major algorithmic improvements are the introduction of packing type classification, multiple-condition decoy filtering, and cluster-based candidate selection. In a test of predicting nine known bitopic dimers, approximately 78% of our predictions achieved a successful fit (RMSD …). The method is available as a web application implemented in PHP, MySQL and Apache, with all major browsers supported.

  4. Parametric Bayesian priors and better choice of negative examples improve protein function prediction.

    Science.gov (United States)

    Youngs, Noah; Penfold-Brown, Duncan; Drew, Kevin; Shasha, Dennis; Bonneau, Richard

    2013-05-01

    Computational biologists have demonstrated the utility of using machine learning methods to predict protein function from an integration of multiple genome-wide data types. Yet, even the best performing function prediction algorithms rely on heuristics for important components of the algorithm, such as choosing negative examples (proteins without a given function) or determining key parameters. The improper choice of negative examples, in particular, can hamper the accuracy of protein function prediction. We present a novel approach for choosing negative examples, using a parameterizable Bayesian prior computed from all observed annotation data, which also generates priors used during function prediction. We incorporate this new method into the GeneMANIA function prediction algorithm and demonstrate improved accuracy of our algorithm over current top-performing function prediction methods on the yeast and mouse proteomes across all metrics tested. Code and Data are available at: http://bonneaulab.bio.nyu.edu/funcprop.html

  5. Predictability of extreme weather events for NE U.S.: improvement of the numerical prediction using a Bayesian regression approach

    Science.gov (United States)

    Yang, J.; Astitha, M.; Anagnostou, E. N.; Hartman, B.; Kallos, G. B.

    2015-12-01

    Weather prediction accuracy has become very important for the Northeast U.S. given the devastating effects of extreme weather events in the recent years. Weather forecasting systems are used towards building strategies to prevent catastrophic losses for human lives and the environment. Concurrently, weather forecast tools and techniques have evolved with improved forecast skill as numerical prediction techniques are strengthened by increased super-computing resources. In this study, we examine the combination of two state-of-the-science atmospheric models (WRF and RAMS/ICLAMS) by utilizing a Bayesian regression approach to improve the prediction of extreme weather events for NE U.S. The basic concept behind the Bayesian regression approach is to take advantage of the strengths of two atmospheric modeling systems and, similar to the multi-model ensemble approach, limit their weaknesses which are related to systematic and random errors in the numerical prediction of physical processes. The first part of this study is focused on retrospective simulations of seventeen storms that affected the region in the period 2004-2013. Optimal variances are estimated by minimizing the root mean square error and are applied to out-of-sample weather events. The applicability and usefulness of this approach are demonstrated by conducting an error analysis based on in-situ observations from meteorological stations of the National Weather Service (NWS) for wind speed and wind direction, and NCEP Stage IV radar data, mosaicked from the regional multi-sensor for precipitation. The preliminary results indicate a significant improvement in the statistical metrics of the modeled-observed pairs for meteorological variables using various combinations of the sixteen events as predictors of the seventeenth. This presentation will illustrate the implemented methodology and the obtained results for wind speed, wind direction and precipitation, as well as set the research steps that will be
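
    The combination step, finding the weight on the two models that minimizes RMSE over retrospective events, can be sketched as follows (hypothetical wind-speed values; a coarse grid search stands in for the study's optimal-variance estimation):

```python
# Sketch of a two-model combination: choose the weight that minimizes RMSE over
# training events, then apply it out of sample. Toy numbers, not the study's data.
import math

obs  = [10.2, 7.5, 12.1, 9.0, 11.3]   # observed wind speeds (m/s), hypothetical
wrf  = [11.0, 8.4, 11.2, 9.9, 12.5]   # model 1 forecasts
rams = [ 9.1, 6.2, 12.8, 8.1, 10.0]   # model 2 forecasts

def rmse(w):
    errs = [(w * a + (1 - w) * b - o) ** 2 for a, b, o in zip(wrf, rams, obs)]
    return math.sqrt(sum(errs) / len(errs))

best_w = min((w / 100 for w in range(101)), key=rmse)  # coarse 1-D search
print(round(best_w, 2), round(rmse(best_w), 3))
```

    Because the search grid includes the pure-WRF and pure-RAMS weights, the combined forecast is never worse than either model on the training events.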

  6. Can decadal climate predictions be improved by ocean ensemble dispersion filtering?

    Science.gov (United States)

    Kadow, C.; Illing, S.; Kröner, I.; Ulbrich, U.; Cubasch, U.

    2017-12-01

    Decadal predictions by Earth system models aim to capture the state and phase of the climate several years in advance. Atmosphere-ocean interaction plays an important role for such climate forecasts. While short-term weather forecasts represent an initial value problem and long-term climate projections represent a boundary condition problem, decadal climate prediction falls in between these two time scales. The ocean memory due to its heat capacity holds big potential skill on the decadal scale. In recent years, more precise initialization techniques of coupled Earth system models (incl. atmosphere and ocean) have improved decadal predictions. Ensembles are another important aspect. Applying slightly perturbed predictions results in an ensemble. Using and evaluating the whole ensemble or its ensemble average, instead of a single prediction, improves a prediction system. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Here we show that the climate prediction skill of an Earth system model can be improved by a shift of the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure, called ensemble dispersion filter, results in more accurate results than the standard decadal prediction. Global mean and regional temperature, precipitation, and winter cyclone predictions show an increased skill up to 5 years ahead. Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution. Our results demonstrate how decadal climate predictions benefit from ocean ensemble dispersion filtering toward the ensemble mean. This study is part of MiKlip (fona-miklip.de), a major project on decadal climate prediction in Germany. We focus on the Max-Planck-Institute Earth System Model in its low-resolution version (MPI-ESM-LR) and MiKlip's basic initialization strategy, as in the decadal climate forecast published in 2017: http
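
    The ensemble dispersion filter itself reduces to nudging each member toward the ensemble mean. A sketch (toy two-variable "ocean states"; the nudging strength alpha is an illustrative assumption):

```python
# Sketch of the ensemble dispersion filter: at fixed intervals each member's
# ocean state is nudged toward the ensemble mean. alpha = 1 would collapse the
# ensemble onto the mean; values here are illustrative, not from the study.
def dispersion_filter(members, alpha):
    m = [sum(col) / len(col) for col in zip(*members)]
    return [[x + alpha * (mu - x) for x, mu in zip(row, m)] for row in members]

ensemble = [[14.1, 3.5], [13.6, 3.9], [14.8, 3.1]]   # e.g. SST, salinity values
nudged = dispersion_filter(ensemble, 0.5)
spread = lambda e: max(r[0] for r in e) - min(r[0] for r in e)
print(round(spread(ensemble), 2), round(spread(nudged), 2))
```

    Nudging with alpha = 0.5 scales every deviation from the mean by one half, so the ensemble spread exactly halves while the ensemble mean is preserved.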

  7. Scale invariance properties of intracerebral EEG improve seizure prediction in mesial temporal lobe epilepsy.

    Directory of Open Access Journals (Sweden)

    Kais Gadhoumi

    Although treatment for epilepsy is available and effective for nearly 70 percent of patients, many remain in need of new therapeutic approaches. Predicting impending seizures in these patients could significantly enhance their quality of life if the prediction performance is clinically practical. In this study, we investigate the improvement of the performance of a seizure prediction algorithm in 17 patients with mesial temporal lobe epilepsy by means of a novel measure. Scale-free dynamics of the intracerebral EEG are quantified through robust estimates of the scaling exponents (the first cumulants) derived from a wavelet leader and bootstrap based multifractal analysis. The cumulants are investigated for the discriminability between preictal and interictal epochs. The performance of our recently published patient-specific seizure prediction algorithm is then out-of-sample tested on long-lasting data using combinations of cumulants and state similarity measures previously introduced. By using the first cumulant in combination with state similarity measures, up to 13 of 17 patients had seizures predicted above chance with clinically practical levels of sensitivity (80.5%) and specificity (25.1% of total time under warning) for prediction horizons above 25 min. These results indicate that the scale-free dynamics of the preictal state are different from those of the interictal state. Quantifiers of these dynamics may carry a predictive power that can be used to improve seizure prediction performance.

  8. Accounting for genetic architecture improves sequence based genomic prediction for a Drosophila fitness trait.

    Science.gov (United States)

    Ober, Ulrike; Huang, Wen; Magwire, Michael; Schlather, Martin; Simianer, Henner; Mackay, Trudy F C

    2015-01-01

    The ability to predict quantitative trait phenotypes from molecular polymorphism data will revolutionize evolutionary biology, medicine and human biology, and animal and plant breeding. Efforts to map quantitative trait loci have yielded novel insights into the biology of quantitative traits, but the combination of individually significant quantitative trait loci typically has low predictive ability. Utilizing all segregating variants can give good predictive ability in plant and animal breeding populations, but gives little insight into trait biology. Here, we used the Drosophila Genetic Reference Panel to perform both a genome wide association analysis and genomic prediction for the fitness-related trait chill coma recovery time. We found substantial total genetic variation for chill coma recovery time, with a genetic architecture that differs between males and females, a small number of molecular variants with large main effects, and evidence for epistasis. Although the top additive variants explained 36% (17%) of the genetic variance among lines in females (males), the predictive ability using genomic best linear unbiased prediction and a relationship matrix using all common segregating variants was very low for females and zero for males. We hypothesized that the low predictive ability was due to the mismatch between the infinitesimal genetic architecture assumed by the genomic best linear unbiased prediction model and the true genetic architecture of chill coma recovery time. Indeed, we found that the predictive ability of the genomic best linear unbiased prediction model is markedly improved when we combine quantitative trait locus mapping with genomic prediction by only including the top variants associated with main and epistatic effects in the relationship matrix. This trait-associated prediction approach has the advantage that it yields biologically interpretable prediction models.
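
    The trait-associated relationship matrix can be sketched as follows: only selected variant columns enter the genomic relationship matrix G = ZZ'/m (toy genotypes; the list of "top" variants is hypothetical):

```python
# Sketch of a trait-associated relationship matrix: instead of all segregating
# variants, only selected variants enter G = ZZ'/m. Toy genotype counts for
# three inbred lines; the "top" variant indices are a hypothetical example.
def grm(geno, keep):
    n, m = len(geno), len(keep)
    # column-center the kept genotype columns
    means = [sum(row[j] for row in geno) / n for j in keep]
    Z = [[row[j] - mu for j, mu in zip(keep, means)] for row in geno]
    return [[sum(Z[i][k] * Z[q][k] for k in range(m)) / m for q in range(n)]
            for i in range(n)]

lines = [[0, 1, 2, 0], [2, 1, 0, 1], [1, 0, 1, 2]]  # rows: lines; cols: variants
G_all = grm(lines, keep=[0, 1, 2, 3])   # all variants (standard GBLUP)
G_top = grm(lines, keep=[0, 2])         # only variants with large main effects
print(G_top[0][0], G_top[0][1])
```

    In the trait-associated approach described above, the restricted matrix G_top replaces the all-variant matrix in the BLUP mixed model, which is what ties the prediction back to the mapped architecture.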

  9. Accounting for genetic architecture improves sequence based genomic prediction for a Drosophila fitness trait.

    Directory of Open Access Journals (Sweden)

    Ulrike Ober

    The ability to predict quantitative trait phenotypes from molecular polymorphism data will revolutionize evolutionary biology, medicine and human biology, and animal and plant breeding. Efforts to map quantitative trait loci have yielded novel insights into the biology of quantitative traits, but the combination of individually significant quantitative trait loci typically has low predictive ability. Utilizing all segregating variants can give good predictive ability in plant and animal breeding populations, but gives little insight into trait biology. Here, we used the Drosophila Genetic Reference Panel to perform both a genome wide association analysis and genomic prediction for the fitness-related trait chill coma recovery time. We found substantial total genetic variation for chill coma recovery time, with a genetic architecture that differs between males and females, a small number of molecular variants with large main effects, and evidence for epistasis. Although the top additive variants explained 36% (17%) of the genetic variance among lines in females (males), the predictive ability using genomic best linear unbiased prediction and a relationship matrix using all common segregating variants was very low for females and zero for males. We hypothesized that the low predictive ability was due to the mismatch between the infinitesimal genetic architecture assumed by the genomic best linear unbiased prediction model and the true genetic architecture of chill coma recovery time. Indeed, we found that the predictive ability of the genomic best linear unbiased prediction model is markedly improved when we combine quantitative trait locus mapping with genomic prediction by only including the top variants associated with main and epistatic effects in the relationship matrix. This trait-associated prediction approach has the advantage that it yields biologically interpretable prediction models.

  10. A GIS Approach to Wind,SST(Sea Surface Temperature) and CHL(Chlorophyll) variations in the Caspian Sea

    Science.gov (United States)

    Mirkhalili, Seyedhamzeh

    2016-07-01

    Chlorophyll is an extremely important bio-molecule, critical in photosynthesis, which allows plants to absorb energy from light. At the base of the ocean food web are single-celled algae and other plant-like organisms known as phytoplankton. Like plants on land, phytoplankton use chlorophyll and other light-harvesting pigments to carry out photosynthesis. Where phytoplankton grow depends on available sunlight, temperature, and nutrient levels. In this research, a GIS approach using ARCGIS software and QuikSCAT satellite data was applied to visualize wind, SST (sea surface temperature) and CHL (chlorophyll) variations in the Caspian Sea. Results indicate that the increase in chlorophyll concentration in coastal areas is primarily driven by terrestrial nutrients and does not imply that warmer SST will lead to an increase in chlorophyll concentration and, consequently, phytoplankton abundance.

  11. Integrating Chlorophyll fapar and Nadir Photochemical Reflectance Index from EO-1/Hyperion to Predict Cornfield Daily Gross Primary Production

    Science.gov (United States)

    Zhang, Qingyuan; Middleton, Elizabeth M.; Cheng, Yen-Ben; Huemmrich, K. Fred; Cook, Bruce D.; Corp, Lawrence A.; Kustas, William P.; Russ, Andrew L.; Prueger, John H.; Yao, Tian

    2016-01-01

    The concept of light use efficiency (Epsilon) and the concept of the fraction of photosynthetically active radiation (PAR) absorbed for vegetation photosynthesis (PSN), i.e., fAPAR(sub PSN), have been widely utilized to estimate vegetation gross primary productivity (GPP). It has been demonstrated that the photochemical reflectance index (PRI) is empirically related to Epsilon. An experimental US Department of Agriculture (USDA) cornfield in Maryland was selected as our study field. We explored the potential of integrating fAPAR(sub chl) (defined as the fraction of PAR absorbed by chlorophyll) and nadir PRI (PRI(sub nadir)) to predict cornfield daily GPP. We acquired nadir or near-nadir EO-1/Hyperion satellite images that covered the cornfield and took nadir in-situ field spectral measurements. Those data were used to derive PRI(sub nadir) and fAPAR(sub chl). The fAPAR(sub chl) is retrieved with the advanced radiative transfer model PROSAIL2 and the Metropolis approach, a type of Markov Chain Monte Carlo (MCMC) estimation procedure. We define chlorophyll light use efficiency Epsilon(sub chl) as the ratio of vegetation GPP as measured by eddy covariance techniques to the PAR absorbed by chlorophyll (Epsilon(sub chl) = GPP/APAR(sub chl)). Daily Epsilon(sub chl) retrieved with the EO-1 Hyperion images was regressed with a linear equation of PRI(sub nadir) (Epsilon(sub chl) = Alpha × PRI(sub nadir) + Beta). The satellite Epsilon(sub chl)-PRI(sub nadir) linear relationship for the cornfield was implemented to develop an integrated daily GPP model [GPP = (Alpha × PRI(sub nadir) + Beta) × fAPAR(sub chl) × PAR], which was evaluated with fAPAR(sub chl) and PRI(sub nadir) retrieved from field measurements. Daily GPP estimated with this fAPAR(sub chl)-PRI(sub nadir) integration model was strongly correlated with the observed tower in-situ daily GPP (R(sup 2) = 0.93), with a root mean square error (RMSE) of 1.71 g C mol(sup -1) PPFD and a coefficient of variation (CV) of 16
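
    The integrated model is simple enough to state directly in code (the alpha and beta below are placeholders, not the fitted coefficients from the study):

```python
# The integrated daily model above, GPP = (alpha*PRI_nadir + beta) * fAPAR_chl * PAR,
# as a one-line function. The coefficient values used below are placeholders,
# not the paper's fitted alpha and beta.
def gpp(pri_nadir, fapar_chl, par, alpha, beta):
    eps_chl = alpha * pri_nadir + beta     # chlorophyll light-use efficiency
    return eps_chl * fapar_chl * par       # APAR_chl = fAPAR_chl * PAR

# Illustrative inputs: PRI of -0.02, 60% of PAR absorbed by chlorophyll,
# 45 mol PPFD m^-2 d^-1 incident PAR, and hypothetical alpha/beta.
print(round(gpp(-0.02, 0.60, 45.0, alpha=10.0, beta=0.5), 2))
```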

  12. Can machine-learning improve cardiovascular risk prediction using routine clinical data?

    Science.gov (United States)

    Kai, Joe; Garibaldi, Jonathan M.; Qureshi, Nadeem

    2017-01-01

    Background: Current approaches to predict cardiovascular risk fail to identify many people who would benefit from preventive treatment, while others receive unnecessary intervention. Machine-learning offers an opportunity to improve accuracy by exploiting complex interactions between risk factors. We assessed whether machine-learning can improve cardiovascular risk prediction. Methods: Prospective cohort study using routine clinical data of 378,256 patients from UK family practices, free from cardiovascular disease at outset. Four machine-learning algorithms (random forest, logistic regression, gradient boosting machines, neural networks) were compared to an established algorithm (American College of Cardiology guidelines) to predict first cardiovascular event over 10 years. Predictive accuracy was assessed by area under the ‘receiver operating curve’ (AUC), and by sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) at a 7.5% cardiovascular risk threshold (the threshold for initiating statins). Findings: 24,970 incident cardiovascular events (6.6%) occurred. Compared to the established risk prediction algorithm (AUC 0.728, 95% CI 0.723–0.735), machine-learning algorithms improved prediction: random forest +1.7% (AUC 0.745, 95% CI 0.739–0.750), logistic regression +3.2% (AUC 0.760, 95% CI 0.755–0.766), gradient boosting +3.3% (AUC 0.761, 95% CI 0.755–0.766), neural networks +3.6% (AUC 0.764, 95% CI 0.759–0.769). The highest-achieving (neural networks) algorithm predicted 4,998/7,404 cases (sensitivity 67.5%, PPV 18.4%) and 53,458/75,585 non-cases (specificity 70.7%, NPV 95.7%), correctly predicting 355 (+7.6%) more patients who developed cardiovascular disease compared to the established algorithm. Conclusions: Machine-learning significantly improves the accuracy of cardiovascular risk prediction, increasing the number of patients identified who could benefit from preventive treatment, while avoiding unnecessary treatment of others.
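
    The threshold-based metrics reported above can be sketched from a confusion table at the 7.5% risk cut-off (toy risks and outcomes, not the study's data):

```python
# Sketch of the threshold evaluation used above: classify patients at a 7.5%
# predicted 10-year risk and derive sensitivity, specificity, PPV and NPV.
# Toy predicted risks and outcomes, not the study's data.
def confusion(risks, outcomes, thr=0.075):
    tp = sum(r >= thr and o for r, o in zip(risks, outcomes))
    fp = sum(r >= thr and not o for r, o in zip(risks, outcomes))
    fn = sum(r < thr and o for r, o in zip(risks, outcomes))
    tn = sum(r < thr and not o for r, o in zip(risks, outcomes))
    return dict(sens=tp / (tp + fn), spec=tn / (tn + fp),
                ppv=tp / (tp + fp), npv=tn / (tn + fn))

risks    = [0.02, 0.09, 0.12, 0.04, 0.30, 0.06, 0.08, 0.01]
outcomes = [False, True, True, False, True, False, False, False]
print(confusion(risks, outcomes))
```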

  13. An Improved Optimal Slip Ratio Prediction considering Tyre Inflation Pressure Changes

    Directory of Open Access Journals (Sweden)

    Guoxing Li

    2015-01-01

    The prediction of the optimal slip ratio is crucial to vehicle control systems. Many studies have verified that tyre pressure changes have a definite impact on the optimal slip ratio; however, existing methods of optimal slip ratio prediction have not taken the influence of tyre pressure changes into account. By introducing a second-order factor, an improved optimal slip ratio prediction considering tyre inflation pressure is proposed in this paper. In order to verify and evaluate the performance of the improved prediction, a co-simulation platform is developed using the MATLAB/Simulink and CarSim software packages, enabling a comprehensive simulation study of vehicle braking performance in cooperation with an ABS controller. The simulation results show that braking distances and braking times under different tyre pressures and initial braking speeds are effectively shortened with the improved prediction of the optimal slip ratio. When the tyre pressure is slightly lower than the nominal pressure, the difference in braking performance between the original and the improved optimal slip ratio is most pronounced.

  14. Skill of Predicting Heavy Rainfall Over India: Improvement in Recent Years Using UKMO Global Model

    Science.gov (United States)

    Sharma, Kuldeep; Ashrit, Raghavendra; Bhatla, R.; Mitra, A. K.; Iyengar, G. R.; Rajagopal, E. N.

    2017-11-01

    The quantitative precipitation forecast (QPF) performance for heavy rains is still a challenge, even for the most advanced state-of-the-art high-resolution Numerical Weather Prediction (NWP) modeling systems. This study aims to evaluate the performance of the UK Met Office Unified Model (UKMO) over India for prediction of high rainfall amounts (>2 and >5 cm/day) during the monsoon period (JJAS) from 2007 to 2015 in short-range forecasts up to Day 3. Among the various modeling upgrades and improvements in the parameterizations during this period, the model horizontal resolution improved from 40 km in 2007 to 17 km in 2015. The skill of short-range rainfall forecasts in the UKMO model has improved in recent years mainly due to increased horizontal and vertical resolution along with improved physics schemes. Categorical verification carried out using four verification metrics, namely, probability of detection (POD), false alarm ratio (FAR), frequency bias (Bias) and Critical Success Index (CSI), indicates that QPF has improved by >29% in POD and >24% in FAR. Additionally, verification scores like EDS (Extreme Dependency Score), EDI (Extremal Dependence Index) and SEDI (Symmetric EDI) are used with special emphasis on verification of extreme and rare rainfall events. These scores also show an improvement of 60% (EDS) and >34% (EDI and SEDI) during the period of study, suggesting an improved skill of predicting heavy rains.
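
    The four categorical verification metrics named above are all derived from a 2x2 contingency table of forecast versus observed rain events. A minimal sketch with hypothetical counts (the function name and numbers are our own, not from the study):

```python
# Standard categorical verification scores from contingency-table counts:
# hits (event forecast and observed), false alarms (forecast, not
# observed) and misses (observed, not forecast). Toy values only.

def categorical_scores(hits, false_alarms, misses):
    pod = hits / (hits + misses)                     # probability of detection
    far = false_alarms / (hits + false_alarms)       # false alarm ratio
    bias = (hits + false_alarms) / (hits + misses)   # frequency bias
    csi = hits / (hits + false_alarms + misses)      # critical success index
    return pod, far, bias, csi

pod, far, bias, csi = categorical_scores(hits=40, false_alarms=20, misses=10)
```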

  15. Model training across multiple breeding cycles significantly improves genomic prediction accuracy in rye (Secale cereale L.).

    Science.gov (United States)

    Auinger, Hans-Jürgen; Schönleben, Manfred; Lehermeier, Christina; Schmidt, Malthe; Korzun, Viktor; Geiger, Hartwig H; Piepho, Hans-Peter; Gordillo, Andres; Wilde, Peer; Bauer, Eva; Schön, Chris-Carolin

    2016-11-01

    Genomic prediction accuracy can be significantly increased by model calibration across multiple breeding cycles as long as selection cycles are connected by common ancestors. In hybrid rye breeding, application of genome-based prediction is expected to increase selection gain because of long selection cycles in population improvement and development of hybrid components. Essentially two prediction scenarios arise: (1) prediction of the genetic value of lines from the same breeding cycle in which model training is performed and (2) prediction of lines from subsequent cycles. It is the latter from which a reduction in cycle length and consequently the strongest impact on selection gain is expected. We empirically investigated genome-based prediction of grain yield, plant height and thousand kernel weight within and across four selection cycles of a hybrid rye breeding program. Prediction performance was assessed using genomic and pedigree-based best linear unbiased prediction (GBLUP and PBLUP). A total of 1040 S2 lines were genotyped with 16k SNPs and each year testcrosses of 260 S2 lines were phenotyped in seven or eight locations. The performance gap between GBLUP and PBLUP increased significantly for all traits when model calibration was performed on aggregated data from several cycles. Prediction accuracies obtained from cross-validation were in the order of 0.70 for all traits when data from all cycles (N_CS = 832) were used for model training and exceeded within-cycle accuracies in all cases. As long as selection cycles are connected by a sufficient number of common ancestors and prediction accuracy has not reached a plateau when increasing sample size, aggregating data from several preceding cycles is recommended for predicting genetic values in subsequent cycles despite decreasing relatedness over time.
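
    Under standard assumptions, GBLUP is equivalent to ridge regression on a centered marker matrix (the RR-BLUP formulation). A minimal sketch on simulated data, not the rye dataset; the sample sizes, shrinkage parameter and marker effects below are illustrative assumptions only:

```python
import numpy as np

# Marker-based genomic prediction in the spirit of GBLUP: ridge
# regression of phenotypes on centered marker codes. All data simulated.
rng = np.random.default_rng(0)
n_train, n_test, n_markers = 200, 50, 500

M = rng.integers(0, 3, size=(n_train + n_test, n_markers)).astype(float)
M -= M.mean(axis=0)                      # center 0/1/2 marker codes
beta = rng.normal(0.0, 0.1, n_markers)   # simulated true marker effects
y = M @ beta + rng.normal(0.0, 1.0, n_train + n_test)

Mtr, Mte = M[:n_train], M[n_train:]
ytr, yte = y[:n_train], y[n_train:]

lam = 1.0                                # shrinkage parameter (illustrative)
# Ridge solution: effects = (M'M + lam*I)^-1 M'y
effects = np.linalg.solve(Mtr.T @ Mtr + lam * np.eye(n_markers), Mtr.T @ ytr)
pred = Mte @ effects
accuracy = np.corrcoef(pred, yte)[0, 1]  # predictive correlation
```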

  16. Detecting determinism with improved sensitivity in time series: rank-based nonlinear predictability score.

    Science.gov (United States)

    Naro, Daniel; Rummel, Christian; Schindler, Kaspar; Andrzejak, Ralph G

    2014-09-01

    The rank-based nonlinear predictability score was recently introduced as a test for determinism in point processes. We here adapt this measure to time series sampled from time-continuous flows. We use noisy Lorenz signals to compare this approach against a classical amplitude-based nonlinear prediction error. Both measures show an almost identical robustness against Gaussian white noise. In contrast, when the amplitude distribution of the noise has a narrower central peak and heavier tails than the normal distribution, the rank-based nonlinear predictability score outperforms the amplitude-based nonlinear prediction error. For this type of noise, the nonlinear predictability score has a higher sensitivity for deterministic structure in noisy signals. It also yields a higher statistical power in a surrogate test of the null hypothesis of linear stochastic correlated signals. We show the high relevance of this improved performance in an application to electroencephalographic (EEG) recordings from epilepsy patients. Here the nonlinear predictability score again appears of higher sensitivity to nonrandomness. Importantly, it yields an improved contrast between signals recorded from brain areas where the first ictal EEG signal changes were detected (focal EEG signals) versus signals recorded from brain areas that were not involved at seizure onset (nonfocal EEG signals).

  17. A novel method for improved accuracy of transcription factor binding site prediction

    KAUST Repository

    Khamis, Abdullah M.; Motwalli, Olaa Amin; Oliva, Romina; Jankovic, Boris R.; Medvedeva, Yulia; Ashoor, Haitham; Essack, Magbubah; Gao, Xin; Bajic, Vladimir B.

    2018-01-01

    Identifying transcription factor (TF) binding sites (TFBSs) is important in the computational inference of gene regulation. Widely used computational methods of TFBS prediction based on position weight matrices (PWMs) usually have high false positive rates. Moreover, computational studies of transcription regulation in eukaryotes frequently require numerous PWM models of TFBSs due to a large number of TFs involved. To overcome these problems we developed DRAF, a novel method for TFBS prediction that requires only 14 prediction models for 232 human TFs, while at the same time significantly improves prediction accuracy. DRAF models use more features than PWM models, as they combine information from TFBS sequences and physicochemical properties of TF DNA-binding domains into machine learning models. Evaluation of DRAF on 98 human ChIP-seq datasets shows on average 1.54-, 1.96- and 5.19-fold reduction of false positives at the same sensitivities compared to models from HOCOMOCO, TRANSFAC and DeepBind, respectively. This observation suggests that one can efficiently replace the PWM models for TFBS prediction by a small number of DRAF models that significantly improve prediction accuracy. The DRAF method is implemented in a web tool and in a stand-alone software freely available at http://cbrc.kaust.edu.sa/DRAF.

  18. Improved Trust Prediction in Business Environments by Adaptive Neuro Fuzzy Inference Systems

    Directory of Open Access Journals (Sweden)

    Ali Azadeh

    2015-06-01

    Full Text Available Trust prediction turns out to be an important challenge when cooperation among intelligent agents with an impression of trust in their mind, is investigated. In other words, predicting trust values for future time slots help partners to identify the probability of continuing a relationship. Another important case to be considered is the context of trust, i.e. the services and business commitments for which a relationship is defined. Hence, intelligent agents should focus on improving trust to provide a stable and confident context. Modelling of trust between collaborating parties seems to be an important component of the business intelligence strategy. In this regard, a set of metrics have been considered by which the value of confidence level for predicted trust values has been estimated. These metrics are maturity, distance and density (MD2. Prediction of trust for future mutual relationships among agents is a problem that is addressed in this study. We introduce a simulation-based model which utilizes linguistic variables to create various scenarios. Then, future trust values among agents are predicted by the concept of adaptive neuro-fuzzy inference system (ANFIS. Mean absolute percentage errors (MAPEs resulted from ANFIS are compared with confidence levels which are determined by applying MD2. Results determine the efficiency of MD2 for forecasting trust values. This is the first study that utilizes the concept of MD2 for improvement of business trust prediction.

  20. Improving local clustering based top-L link prediction methods via asymmetric link clustering information

    Science.gov (United States)

    Wu, Zhihao; Lin, Youfang; Zhao, Yiji; Yan, Hongyan

    2018-02-01

    Networks can represent a wide range of complex systems, such as social, biological and technological systems. Link prediction is one of the most important problems in network analysis, and has attracted much research interest recently. Many link prediction methods have been proposed to solve this problem with various techniques. Clustering information plays an important role in solving the link prediction problem. In the previous literature, the node clustering coefficient appears frequently in link prediction methods. However, the node clustering coefficient is limited in describing the role of a common neighbor in different local networks, because it cannot distinguish a node's different clustering abilities with respect to different node pairs. In this paper, we shift our focus from nodes to links and propose the concept of the asymmetric link clustering (ALC) coefficient. Further, we improve three node-clustering-based link prediction methods via the concept of ALC. The experimental results demonstrate that ALC-based methods outperform node-clustering-based methods, achieving especially remarkable improvements on food web, hamster friendship and Internet networks. Moreover, compared with other methods, the performance of ALC-based methods is very stable in both globalized and personalized top-L link prediction tasks.
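
    For context, a common node-clustering-based score of the kind the paper improves on (similar in spirit to the CCLP index) sums the clustering coefficients of the common neighbors of a candidate pair. The toy graph and helper names below are our own; the paper's ALC coefficient replaces this node-level coefficient with a link-level one:

```python
from itertools import combinations

# Small undirected toy graph as adjacency sets (hypothetical data).
adj = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"a", "c"},
}

def clustering(z):
    """Node clustering coefficient: fraction of neighbor pairs of z
    that are themselves linked."""
    nbrs = adj[z]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for u, v in combinations(nbrs, 2) if v in adj[u])
    return 2 * links / (k * (k - 1))

def cclp_score(x, y):
    """Link-prediction score for the candidate pair (x, y): sum of
    clustering coefficients over their common neighbors."""
    return sum(clustering(z) for z in adj[x] & adj[y])

score_bd = cclp_score("b", "d")   # common neighbors of b and d: {a, c}
```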

  1. DNCON2: improved protein contact prediction using two-level deep convolutional neural networks.

    Science.gov (United States)

    Adhikari, Badri; Hou, Jie; Cheng, Jianlin

    2018-05-01

    Significant improvements in the prediction of protein residue-residue contacts have been observed in recent years. These contacts, predicted using a variety of coevolution-based and machine learning methods, are the key contributors to the recent progress in ab initio protein structure prediction, as demonstrated in the recent CASP experiments. Continuing the development of new methods to reliably predict contact maps is essential to further improve ab initio structure prediction. In this paper we discuss DNCON2, an improved protein contact map predictor based on two-level deep convolutional neural networks. It consists of six convolutional neural networks: the first five predict contacts at 6, 7.5, 8, 8.5 and 10 Å distance thresholds, and the last one uses these five predictions as additional features to predict final contact maps. On the free-modeling datasets in CASP10, 11 and 12 experiments, DNCON2 achieves mean precisions of 35, 50 and 53.4%, respectively, higher than 30.6% by MetaPSICOV on the CASP10 dataset, 34% by MetaPSICOV on the CASP11 dataset and 46.3% by Raptor-X on the CASP12 dataset, when top L/5 long-range contacts are evaluated. We attribute the improved performance of DNCON2 to the inclusion of short- and medium-range contacts into training, the two-level approach to prediction, the use of state-of-the-art optimization and activation functions, and a novel deep learning architecture that allows each filter in a convolutional layer to access all the input features of a protein of arbitrary length. The web server of DNCON2 is at http://sysbio.rnet.missouri.edu/dncon2/ where training and testing datasets as well as the predictions for CASP10, 11 and 12 free-modeling datasets can also be downloaded. Its source code is available at https://github.com/multicom-toolbox/DNCON2/. chengji@missouri.edu. Supplementary data are available at Bioinformatics online.
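
    The evaluation used above, precision of the top L/5 predicted long-range contacts (L being the protein length; long-range conventionally means a sequence separation of at least 24 residues), can be sketched as follows with toy predictions rather than CASP data:

```python
# Illustrative top-L/5 long-range contact precision. The scored pairs
# and the true contact set are made up, not CASP predictions.

def top_l5_precision(scored_pairs, true_contacts, length, min_sep=24):
    """scored_pairs: [(i, j, score), ...]. Keep long-range pairs, take
    the L/5 highest-scoring, and report the fraction that are true."""
    long_range = [(i, j, s) for i, j, s in scored_pairs if abs(i - j) >= min_sep]
    top = sorted(long_range, key=lambda t: t[2], reverse=True)[: length // 5]
    hits = sum(1 for i, j, _ in top if (i, j) in true_contacts)
    return hits / len(top)

pairs = [(1, 30, 0.9), (2, 40, 0.8), (3, 50, 0.7), (4, 10, 0.99), (5, 60, 0.6)]
truth = {(1, 30), (3, 50)}
prec = top_l5_precision(pairs, truth, length=20)  # (4, 10) is short-range
```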

  2. Respiratory sinus arrhythmia reactivity to a sad film predicts depression symptom improvement and symptomatic trajectory.

    Science.gov (United States)

    Panaite, Vanessa; Hindash, Alexandra Cowden; Bylsma, Lauren M; Small, Brent J; Salomon, Kristen; Rottenberg, Jonathan

    2016-01-01

    Respiratory sinus arrhythmia (RSA) reactivity, an index of cardiac vagal tone, has been linked to self-regulation and the severity and course of depression (Rottenberg, 2007). Although initial data support the proposition that RSA withdrawal during a sad film is a specific predictor of depression course (Fraguas, 2007; Rottenberg, 2005), the robustness and specificity of this finding are unclear. To provide a stronger test, RSA reactivity to three emotion films (happy, sad, fear) and to a more robust stressor, a speech task, were examined in currently depressed individuals (n=37), who were assessed for their degree of symptomatic improvement over 30 weeks. Robust RSA reactivity to the sad film uniquely predicted overall symptom improvement over 30 weeks. RSA reactivity to both sad and stressful stimuli predicted the speed and maintenance of symptomatic improvement. The current analyses provide the most robust support to date that RSA withdrawal to sad stimuli (but not stressful) has specificity in predicting overall symptomatic improvement. In contrast, RSA reactivity to negative stimuli (both sad and stressful) predicted the trajectory of depression course. Patients' engagement with sad stimuli may be an important sign to attend to in therapeutic settings. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Adjusting the Stems Regional Forest Growth Model to Improve Local Predictions

    Science.gov (United States)

    W. Brad Smith

    1983-01-01

    A simple procedure using double sampling is described for adjusting growth in the STEMS regional forest growth model to compensate for subregional variations. Predictive accuracy of the STEMS model (a distance-independent, individual tree growth model for Lake States forests) was improved by using this procedure.

  4. Improved model predictive control for high voltage quality in microgrid applications

    DEFF Research Database (Denmark)

    Dragicevic, T.; Al hasheem, Mohamed; Lu, M.

    2017-01-01

    This paper proposes an improvement of the finite control set model predictive control (FCS-MPC) strategy for enhancing the voltage regulation performance of a voltage source converter (VSC) used for standalone microgrid and uninterrupted power supply (UPS) applications. The modification is based...

  5. The contribution of educational class in improving accuracy of cardiovascular risk prediction across European regions

    DEFF Research Database (Denmark)

    Ferrario, Marco M; Veronesi, Giovanni; Chambless, Lloyd E

    2014-01-01

    OBJECTIVE: To assess whether educational class, an index of socioeconomic position, improves the accuracy of the SCORE cardiovascular disease (CVD) risk prediction equation. METHODS: In a pooled analysis of 68 455 40-64-year-old men and women, free from coronary heart disease at baseline, from 47...

  6. NOAA's Strategy to Improve Operational Weather Prediction Outlooks at Subseasonal Time Range

    Science.gov (United States)

    Schneider, T.; Toepfer, F.; Stajner, I.; DeWitt, D.

    2017-12-01

    NOAA is planning to extend operational global numerical weather prediction to sub-seasonal time range under the auspices of its Next Generation Global Prediction System (NGGPS) and Extended Range Outlook Programs. A unification of numerical prediction capabilities for weather and subseasonal to seasonal (S2S) timescales is underway at NOAA using the Finite Volume Cubed Sphere (FV3) dynamical core as the basis for the emerging unified system. This presentation will overview NOAA's strategic planning and current activities to improve prediction at S2S time-scales that are ongoing in response to the Weather Research and Forecasting Innovation Act of 2017, Section 201. Over the short-term, NOAA seeks to improve the operational capability through improvements to its ensemble forecast system to extend its range to 30 days using the new FV3 Global Forecast System model, and by using this system to provide reforecast and re-analyses. In parallel, work is ongoing to improve NOAA's operational product suite for 30 day outlooks for temperature, precipitation and extreme weather phenomena.

  7. Improved prediction of signal peptides: SignalP 3.0

    DEFF Research Database (Denmark)

    Bendtsen, Jannick Dyrløv; Nielsen, Henrik; von Heijne, G.

    2004-01-01

    We describe improvements of the currently most popular method for prediction of classically secreted proteins, SignalP. SignalP consists of two different predictors based on neural network and hidden Markov model algorithms, where both components have been updated. Motivated by the idea that the ...

  8. A two-stage approach for improved prediction of residue contact maps

    Directory of Open Access Journals (Sweden)

    Pollastri Gianluca

    2006-03-01

    Full Text Available Abstract Background Protein topology representations such as residue contact maps are an important intermediate step towards ab initio prediction of protein structure. Although improvements have occurred over recent years, the problem of accurately predicting residue contact maps from primary sequences is still largely unsolved. Among the reasons for this are the unbalanced nature of the problem (with far fewer examples of contacts than non-contacts), the formidable challenge of capturing long-range interactions in the maps, and the intrinsic difficulty of mapping one-dimensional input sequences into two-dimensional output maps. In order to alleviate these problems and achieve improved contact map predictions, in this paper we split the task into two stages: the prediction of a map's principal eigenvector (PE) from the primary sequence; and the reconstruction of the contact map from the PE and primary sequence. Predicting the PE from the primary sequence consists of mapping a vector into a vector. This task is less complex than mapping vectors directly into two-dimensional matrices since the size of the problem is drastically reduced and so is the scale length of interactions that need to be learned. Results We develop architectures composed of ensembles of two-layered bidirectional recurrent neural networks to classify the components of the PE into 2, 3 and 4 classes from protein primary sequence, predicted secondary structure, and hydrophobicity interaction scales. Our predictor, tested on a non-redundant set of 2171 proteins, achieves classification performances of up to 72.6%, 16% above a baseline statistical predictor. We design a system for the prediction of contact maps from the predicted PE. Our results show that predicting maps through the PE yields sizeable gains especially for long-range contacts which are particularly critical for accurate protein 3D reconstruction. The final predictor's accuracy on a non-redundant set of 327 targets is 35
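
    The principal eigenvector (PE) of a symmetric contact map, the intermediate representation of the two-stage approach, can be computed by simple power iteration. A minimal sketch on a toy 4-residue map, not the paper's data or code:

```python
import numpy as np

# Dominant eigenvector of a symmetric 0/1 contact matrix via power
# iteration. The 4x4 map below is a made-up example.

def principal_eigenvector(contact_map, iters=200):
    v = np.ones(contact_map.shape[0])    # positive start vector
    for _ in range(iters):
        v = contact_map @ v              # amplify the dominant direction
        v /= np.linalg.norm(v)           # renormalize each step
    return v

cmap = np.array([[1, 1, 0, 0],
                 [1, 1, 1, 0],
                 [0, 1, 1, 1],
                 [0, 0, 1, 1]], dtype=float)
pe = principal_eigenvector(cmap)
```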

  9. Biomarkers for predicting type 2 diabetes development-Can metabolomics improve on existing biomarkers?

    Directory of Open Access Journals (Sweden)

    Otto Savolainen

    Full Text Available The aim was to determine if metabolomics could be used to build a predictive model for type 2 diabetes (T2D) risk that would improve prediction of T2D over current risk markers. Gas chromatography-tandem mass spectrometry metabolomics was used in a nested case-control study based on a screening sample of 64-year-old Caucasian women (n = 629). Candidate metabolic markers of T2D were identified in plasma obtained at baseline and the power to predict diabetes was tested in 69 incident cases occurring during 5.5 years of follow-up. The metabolomics results were used as a standalone prediction model and in combination with established T2D predictive biomarkers for building eight T2D prediction models that were compared with each other based on their sensitivity and selectivity for predicting T2D. Established markers of T2D (impaired fasting glucose, impaired glucose tolerance, insulin resistance (HOMA), smoking, serum adiponectin) alone, and in combination with metabolomics, had the largest areas under the curve (AUC): 0.794 (95% confidence interval [0.738-0.850]) and 0.808 [0.749-0.867], respectively, with the standalone metabolomics model based on nine fasting plasma markers having a lower predictive power (0.657 [0.577-0.736]). Prediction based on non-blood-based measures was 0.638 [0.565-0.711]. Established measures of T2D risk remain the best predictor of T2D risk in this population. Additional markers detected using metabolomics are likely related to these measures as they did not enhance the overall prediction in a combined model.
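
    The AUC values used to compare the models above have a direct probabilistic reading: the chance that a randomly chosen case receives a higher predicted risk than a randomly chosen non-case (the Mann-Whitney formulation). A minimal sketch with toy scores, not the study's data:

```python
# AUC via the Mann-Whitney formulation: fraction of (case, control)
# pairs where the case scores higher, counting ties as one half.

def auc(scores_cases, scores_controls):
    wins = 0.0
    for c in scores_cases:
        for k in scores_controls:
            if c > k:
                wins += 1.0
            elif c == k:
                wins += 0.5
    return wins / (len(scores_cases) * len(scores_controls))

a = auc([0.9, 0.7, 0.6], [0.5, 0.6, 0.2])  # hypothetical predicted risks
```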

  10. LocARNA-P: Accurate boundary prediction and improved detection of structural RNAs

    DEFF Research Database (Denmark)

    Will, Sebastian; Joshi, Tejal; Hofacker, Ivo L.

    2012-01-01

    Current genomic screens for noncoding RNAs (ncRNAs) predict a large number of genomic regions containing potential structural ncRNAs. The analysis of these data requires highly accurate prediction of ncRNA boundaries and discrimination of promising candidate ncRNAs from weak predictions. Existing methods struggle with these goals because they rely on sequence-based multiple sequence alignments, which regularly misalign RNA structure and therefore do not support identification of structural similarities. To overcome this limitation, we compute columnwise and global reliabilities of alignments based on sequence and structure similarity; we refer to these structure-based alignment reliabilities as STARs. The columnwise STARs of alignments, or STAR profiles, provide a versatile tool for the manual and automatic analysis of ncRNAs. In particular, we improve the boundary prediction of the widely used nc...

  11. Improving Predictive Modeling in Pediatric Drug Development: Pharmacokinetics, Pharmacodynamics, and Mechanistic Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Slikker, William; Young, John F.; Corley, Rick A.; Dorman, David C.; Conolly, Rory B.; Knudsen, Thomas; Erstad, Brian L.; Luecke, Richard H.; Faustman, Elaine M.; Timchalk, Chuck; Mattison, Donald R.

    2005-07-26

    A workshop was conducted on November 18-19, 2004, to address the issue of improving predictive models for drug delivery to developing humans. Although considerable progress has been made for adult humans, large gaps remain for predicting pharmacokinetic/pharmacodynamic (PK/PD) outcome in children because most adult models have not been tested during development. The goals of the meeting included a description of when, during development, infants/children become adultlike in handling drugs. The issue of incorporating the most recent advances into the predictive models was also addressed: both the use of imaging approaches and genomic information were considered. Disease state, as exemplified by obesity, was addressed as a modifier of drug pharmacokinetics and pharmacodynamics during development. Issues addressed in this workshop should be considered in the development of new predictive and mechanistic models of drug kinetics and dynamics in the developing human.

  12. Accuracy of Genomic Prediction in Switchgrass (Panicum virgatum L.) Improved by Accounting for Linkage Disequilibrium

    Directory of Open Access Journals (Sweden)

    Guillaume P. Ramstein

    2016-04-01

    Full Text Available Switchgrass is a relatively high-yielding and environmentally sustainable biomass crop, but further genetic gains in biomass yield must be achieved to make it an economically viable bioenergy feedstock. Genomic selection (GS) is an attractive technology to generate rapid genetic gains in switchgrass, and meet the goals of a substantial displacement of petroleum use with biofuels in the near future. In this study, we empirically assessed prediction procedures for genomic selection in two different populations, consisting of 137 and 110 half-sib families of switchgrass, tested in two locations in the United States for three agronomic traits: dry matter yield, plant height, and heading date. Marker data were produced for the families’ parents by exome capture sequencing, generating up to 141,030 polymorphic markers with available genomic-location and annotation information. We evaluated prediction procedures that varied not only by learning schemes and prediction models, but also by the way the data were preprocessed to account for redundancy in marker information. More complex genomic prediction procedures were generally not significantly more accurate than the simplest procedure, likely due to limited population sizes. Nevertheless, a highly significant gain in prediction accuracy was achieved by transforming the marker data through a marker correlation matrix. Our results suggest that marker-data transformations and, more generally, accounting for linkage disequilibrium among markers, offer valuable opportunities for improving prediction procedures in GS. Some of the achieved prediction accuracies should motivate implementation of GS in switchgrass breeding programs.

  13. Base pair probability estimates improve the prediction accuracy of RNA non-canonical base pairs.

    Directory of Open Access Journals (Sweden)

    Michael F Sloma

    2017-11-01

    Full Text Available Prediction of RNA tertiary structure from sequence is an important problem, but generating accurate structure models for even short sequences remains difficult. Predictions of RNA tertiary structure tend to be least accurate in loop regions, where non-canonical pairs are important for determining the details of structure. Non-canonical pairs can be predicted using a knowledge-based model of structure that scores nucleotide cyclic motifs, or NCMs. In this work, a partition function algorithm is introduced that allows the estimation of base pairing probabilities for both canonical and non-canonical interactions. Pairs that are predicted to be probable are more likely to be found in the true structure than pairs of lower probability. Pair probability estimates can be further improved by predicting the structure conserved across multiple homologous sequences using the TurboFold algorithm. These pairing probabilities, used in concert with prior knowledge of the canonical secondary structure, allow accurate inference of non-canonical pairs, an important step towards accurate prediction of the full tertiary structure. Software to predict non-canonical base pairs and pairing probabilities is now provided as part of the RNAstructure software package.

  15. Improving the accuracy of protein secondary structure prediction using structural alignment

    Directory of Open Access Journals (Sweden)

    Gallin Warren J

    2006-06-01

    Full Text Available Abstract Background The accuracy of protein secondary structure prediction has steadily improved over the past 30 years. Now many secondary structure prediction methods routinely achieve an accuracy (Q3) of about 75%. We believe this accuracy could be further improved by including structure (as opposed to sequence) database comparisons as part of the prediction process. Indeed, given the large size of the Protein Data Bank (>35,000 sequences), the probability of a newly identified sequence having a structural homologue is actually quite high. Results We have developed a method that performs structure-based sequence alignments as part of the secondary structure prediction process. By mapping the structure of a known homologue (sequence ID >25%) onto the query protein's sequence, it is possible to predict at least a portion of that query protein's secondary structure. By integrating this structural alignment approach with conventional (sequence-based) secondary structure methods and then combining it with a "jury-of-experts" system to generate a consensus result, it is possible to attain very high prediction accuracy. Using a sequence-unique test set of 1644 proteins from EVA, this new method achieves an average Q3 score of 81.3%. Extensive testing indicates this is approximately 4–5% better than any other method currently available. Assessments using non-sequence-unique test sets (typical of those used in proteome annotation or structural genomics) indicate that this new method can achieve a Q3 score approaching 88%. Conclusion By using both sequence and structure databases and by exploiting the latest techniques in machine learning, it is possible to routinely predict protein secondary structure with an accuracy well above 80%. A program and web server, called PROTEUS, that performs these secondary structure predictions is accessible at http://wishart.biology.ualberta.ca/proteus. For high throughput or batch sequence analyses, the PROTEUS programs
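
    The Q3 measure used throughout the abstract is simply per-residue three-state accuracy. A minimal sketch with made-up helix/strand/coil strings, not output from PROTEUS:

```python
# Q3: percentage of residues whose predicted three-state secondary
# structure (H: helix, E: strand, C: coil) matches the observed state.

def q3(predicted, observed):
    """Per-residue three-state accuracy, in percent."""
    assert len(predicted) == len(observed)
    matches = sum(p == o for p, o in zip(predicted, observed))
    return 100.0 * matches / len(observed)

score = q3("HHHHCCEEEE", "HHHHCCCEEE")  # hypothetical 10-residue example
```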

  16. Improving Multi-Sensor Drought Monitoring, Prediction and Recovery Assessment Using Gravimetry Information

    Science.gov (United States)

    Aghakouchak, Amir; Tourian, Mohammad J.

    2015-04-01

    Development of reliable drought monitoring, prediction and recovery assessment tools is fundamental to water resources management. This presentation focuses on how gravimetry information can improve drought assessment. First, we provide an overview of the Global Integrated Drought Monitoring and Prediction System (GIDMaPS), which offers near real-time drought information using remote sensing observations and model simulations. Then, we present a framework for integration of satellite gravimetry information for improving drought prediction and recovery assessment. The input data include satellite-based and model-based precipitation, soil moisture estimates and equivalent water height. Previous studies show that drought assessment based on one single indicator may not be sufficient. For this reason, GIDMaPS provides drought information based on multiple drought indicators including the Standardized Precipitation Index (SPI), the Standardized Soil Moisture Index (SSI) and the Multivariate Standardized Drought Index (MSDI), which combines SPI and SSI probabilistically. MSDI incorporates the meteorological and agricultural drought conditions and provides composite multi-index drought information for overall characterization of droughts. GIDMaPS includes a seasonal prediction component based on a statistical persistence-based approach. The prediction component of GIDMaPS provides the empirical probability of drought for different severity levels. In this presentation we present a new component in which the drought prediction information based on SPI, SSI and MSDI is conditioned on equivalent water height obtained from the Gravity Recovery and Climate Experiment (GRACE). Using a Bayesian approach, GRACE information is used to evaluate the persistence of drought. Finally, the deficit equivalent water height based on GRACE is used for assessing drought recovery. Both the monitoring and prediction components of GIDMaPS will be discussed, along with results from 2014.
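
The SPI mentioned above standardizes accumulated precipitation so that dry and wet anomalies from different climates are comparable on a normal scale. A minimal empirical sketch, using a rank-based (Gringorten) plotting position in place of the gamma-distribution fit commonly used operationally; the rainfall record is invented for illustration:

```python
from statistics import NormalDist

def spi_empirical(precip, value):
    """Empirical Standardized Precipitation Index: map the empirical
    cumulative probability of `value` within the historical record
    `precip` onto a standard normal quantile. The Gringorten plotting
    position avoids probabilities of exactly 0 or 1."""
    n = len(precip)
    rank = sum(1 for p in precip if p <= value)   # 1..n
    prob = (rank - 0.44) / (n + 0.12)             # Gringorten position
    return NormalDist().inv_cdf(prob)

record = [55, 60, 62, 70, 75, 80, 85, 90, 100, 120]  # mm/month, illustrative
print(round(spi_empirical(record, 55), 2))  # driest month -> strongly negative SPI
```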

  17. Chronic mild stress impairs latent inhibition and induces region-specific neural activation in CHL1-deficient mice, a mouse model of schizophrenia.

    Science.gov (United States)

    Buhusi, Mona; Obray, Daniel; Guercio, Bret; Bartlett, Mitchell J; Buhusi, Catalin V

    2017-08-30

    Schizophrenia is a neurodevelopmental disorder characterized by abnormal processing of information and attentional deficits. Schizophrenia has a high genetic component but is precipitated by environmental factors, as proposed by the 'two-hit' theory of schizophrenia. Here we compared latent inhibition as a measure of learning and attention, in CHL1-deficient mice, an animal model of schizophrenia, and their wild-type littermates, under no-stress and chronic mild stress conditions. All unstressed mice as well as the stressed wild-type mice showed latent inhibition. In contrast, CHL1-deficient mice did not show latent inhibition after exposure to chronic stress. Differences in neuronal activation (c-Fos-positive cell counts) were noted in brain regions associated with latent inhibition: Neuronal activation in the prelimbic/infralimbic cortices and the nucleus accumbens shell was affected solely by stress. Neuronal activation in basolateral amygdala and ventral hippocampus was affected independently by stress and genotype. Most importantly, neural activation in nucleus accumbens core was affected by the interaction between stress and genotype. These results provide strong support for a 'two-hit' (genes x environment) effect on latent inhibition in CHL1-deficient mice, and identify CHL1-deficient mice as a model of schizophrenia-like learning and attention impairments. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. GUN4-Porphyrin Complexes Bind the ChlH/GUN5 Subunit of Mg-Chelatase and Promote Chlorophyll Biosynthesis in Arabidopsis

    Science.gov (United States)

    Adhikari, Neil D.; Froehlich, John E.; Strand, Deserah D.; Buck, Stephanie M.; Kramer, David M.; Larkin, Robert M.

    2011-01-01

    The GENOMES UNCOUPLED4 (GUN4) protein stimulates chlorophyll biosynthesis by activating Mg-chelatase, the enzyme that commits protoporphyrin IX to chlorophyll biosynthesis. This stimulation depends on GUN4 binding the ChlH subunit of Mg-chelatase and the porphyrin substrate and product of Mg-chelatase. After binding porphyrins, GUN4 associates more stably with chloroplast membranes and was proposed to promote interactions between ChlH and chloroplast membranes—the site of Mg-chelatase activity. GUN4 was also proposed to attenuate the production of reactive oxygen species (ROS) by binding and shielding light-exposed porphyrins from collisions with O2. To test these proposals, we first engineered Arabidopsis thaliana plants that express only porphyrin binding–deficient forms of GUN4. Using these transgenic plants and particular mutants, we found that the porphyrin binding activity of GUN4 and Mg-chelatase contribute to the accumulation of chlorophyll, GUN4, and Mg-chelatase subunits. Also, we found that the porphyrin binding activity of GUN4 and Mg-chelatase affect the associations of GUN4 and ChlH with chloroplast membranes and have various effects on the expression of ROS-inducible genes. Based on our findings, we conclude that ChlH and GUN4 use distinct mechanisms to associate with chloroplast membranes and that mutant alleles of GUN4 and Mg-chelatase genes cause sensitivity to intense light by a mechanism that is potentially complex. PMID:21467578

  19. An improved method for predicting brittleness of rocks via well logs in tight oil reservoirs

    Science.gov (United States)

    Wang, Zhenlin; Sun, Ting; Feng, Cheng; Wang, Wei; Han, Chuang

    2018-06-01

    There can be no industrial oil production in tight oil reservoirs until fracturing is undertaken. Under such conditions, the brittleness of the rocks is a very important factor. However, it has so far been difficult to predict. In this paper, the selected study area is the tight oil reservoirs in Lucaogou formation, Permian, Jimusaer sag, Junggar basin. According to the transformation of dynamic and static rock mechanics parameters and the correction of confining pressure, an improved method is proposed for quantitatively predicting the brittleness of rocks via well logs in tight oil reservoirs. First, 19 typical tight oil core samples are selected in the study area. Their static Young’s modulus, static Poisson’s ratio and petrophysical parameters are measured. In addition, the static brittleness indices of four other tight oil cores are measured under different confining pressure conditions. Second, the dynamic Young’s modulus, Poisson’s ratio and brittleness index are calculated using the compressional and shear wave velocity. By combining the measured and calculated results, the transformation model of dynamic and static brittleness index is built based on the influence of porosity and clay content. The comparison of the predicted brittleness indices and measured results shows that the model has high accuracy. Third, on the basis of the experimental data under different confining pressure conditions, the amplifying factor of brittleness index is proposed to correct for the influence of confining pressure on the brittleness index. Finally, the above improved models are applied to formation evaluation via well logs. Compared with the results before correction, the results of the improved models agree better with the experimental data, which indicates that the improved models have better application effects. The brittleness index prediction method of tight oil reservoirs is improved in this research. It is of great importance in the optimization of fracturing design.
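
The dynamic elastic parameters in this record follow from standard isotropic elasticity once compressional velocity, shear velocity and bulk density are read from well logs. The sketch below uses those textbook relations plus a Rickman-style normalization for the brittleness index; the normalization bounds and the numeric inputs are illustrative assumptions, not values from the paper:

```python
def dynamic_moduli(vp, vs, rho):
    """Dynamic Poisson's ratio and Young's modulus from compressional
    velocity vp (m/s), shear velocity vs (m/s) and density rho (kg/m3),
    via the standard isotropic-elasticity relations. E returned in GPa."""
    vp2, vs2 = vp * vp, vs * vs
    nu = (vp2 - 2 * vs2) / (2 * (vp2 - vs2))
    e = rho * vs2 * (3 * vp2 - 4 * vs2) / (vp2 - vs2)  # Pa
    return nu, e / 1e9

def brittleness_index(e, nu, e_min=10.0, e_max=80.0, nu_min=0.1, nu_max=0.4):
    """Rickman-style brittleness: rescale E (stiffer = more brittle) and
    nu (lower = more brittle) to 0-100 and average them. The bounds
    here are illustrative, not calibrated to the study area."""
    e_term = 100.0 * (e - e_min) / (e_max - e_min)
    nu_term = 100.0 * (nu_max - nu) / (nu_max - nu_min)
    return 0.5 * (e_term + nu_term)

nu, e = dynamic_moduli(vp=4500.0, vs=2700.0, rho=2550.0)
print(round(nu, 3), round(e, 1), round(brittleness_index(e, nu), 1))
```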

  20. Improved prediction of drug-target interactions using regularized least squares integrating with kernel fusion technique

    Energy Technology Data Exchange (ETDEWEB)

    Hao, Ming; Wang, Yanli, E-mail: ywang@ncbi.nlm.nih.gov; Bryant, Stephen H., E-mail: bryant@ncbi.nlm.nih.gov

    2016-02-25

    Identification of drug-target interactions (DTI) is a central task in drug discovery processes. In this work, a simple but effective regularized least squares integrating with nonlinear kernel fusion (RLS-KF) algorithm is proposed to perform DTI predictions. Using benchmark DTI datasets, our proposed algorithm achieves the state-of-the-art results with area under precision–recall curve (AUPR) of 0.915, 0.925, 0.853 and 0.909 for enzymes, ion channels (IC), G protein-coupled receptors (GPCR) and nuclear receptors (NR) based on 10-fold cross-validation. The performance can further be improved by using a recalculated kernel matrix, especially for the small set of nuclear receptors with AUPR of 0.945. Importantly, most of the top ranked interaction predictions can be validated by experimental data reported in the literature, bioassay results in the PubChem BioAssay database, as well as other previous studies. Our analysis suggests that the proposed RLS-KF is helpful for studying DTI, drug repositioning as well as polypharmacology, and may help to accelerate drug discovery by identifying novel drug targets. - Graphical abstract: Flowchart of the proposed RLS-KF algorithm for drug-target interaction predictions. - Highlights: • A nonlinear kernel fusion algorithm is proposed to perform drug-target interaction predictions. • Performance can further be improved by using the recalculated kernel. • Top predictions can be validated by experimental data.
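
The core of a regularized least squares kernel method is a single linear solve in the dual form, alpha = (K + lambda*I)^-1 y, with scores given by K alpha. The sketch below uses a simple weighted-sum fusion of two toy similarity kernels as a stand-in for the paper's nonlinear kernel fusion, only to show where fusion enters the pipeline; the kernels and labels are invented:

```python
def solve(a, b):
    """Tiny Gauss-Jordan solver for a x = b (square, well-posed)."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

def fuse(k1, k2, w=0.5):
    """Weighted-sum kernel fusion. RLS-KF uses a nonlinear fusion;
    this linear version is a simplified stand-in."""
    return [[w * a + (1 - w) * b for a, b in zip(r1, r2)]
            for r1, r2 in zip(k1, k2)]

def rls_fit(k, y, lam=0.1):
    """Regularized least squares in the dual: alpha = (K + lam*I)^-1 y."""
    n = len(k)
    a = [[k[i][j] + (lam if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    return solve(a, y)

# Two toy 3x3 similarity matrices (e.g. a chemical and a genomic kernel).
k_chem = [[1.0, 0.8, 0.1], [0.8, 1.0, 0.2], [0.1, 0.2, 1.0]]
k_geno = [[1.0, 0.6, 0.3], [0.6, 1.0, 0.1], [0.3, 0.1, 1.0]]
y = [1.0, 1.0, 0.0]                       # known interaction labels
k = fuse(k_chem, k_geno)
alpha = rls_fit(k, y)
scores = [sum(kij * aj for kij, aj in zip(row, alpha)) for row in k]
print([round(s, 2) for s in scores])      # high for the two known pairs
```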

  1. Improved prediction of drug-target interactions using regularized least squares integrating with kernel fusion technique

    International Nuclear Information System (INIS)

    Hao, Ming; Wang, Yanli; Bryant, Stephen H.

    2016-01-01

    Identification of drug-target interactions (DTI) is a central task in drug discovery processes. In this work, a simple but effective regularized least squares integrating with nonlinear kernel fusion (RLS-KF) algorithm is proposed to perform DTI predictions. Using benchmark DTI datasets, our proposed algorithm achieves the state-of-the-art results with area under precision–recall curve (AUPR) of 0.915, 0.925, 0.853 and 0.909 for enzymes, ion channels (IC), G protein-coupled receptors (GPCR) and nuclear receptors (NR) based on 10-fold cross-validation. The performance can further be improved by using a recalculated kernel matrix, especially for the small set of nuclear receptors with AUPR of 0.945. Importantly, most of the top ranked interaction predictions can be validated by experimental data reported in the literature, bioassay results in the PubChem BioAssay database, as well as other previous studies. Our analysis suggests that the proposed RLS-KF is helpful for studying DTI, drug repositioning as well as polypharmacology, and may help to accelerate drug discovery by identifying novel drug targets. - Graphical abstract: Flowchart of the proposed RLS-KF algorithm for drug-target interaction predictions. - Highlights: • A nonlinear kernel fusion algorithm is proposed to perform drug-target interaction predictions. • Performance can further be improved by using the recalculated kernel. • Top predictions can be validated by experimental data.

  2. Improving protein fold recognition and structural class prediction accuracies using physicochemical properties of amino acids.

    Science.gov (United States)

    Raicar, Gaurav; Saini, Harsh; Dehzangi, Abdollah; Lal, Sunil; Sharma, Alok

    2016-08-07

    Predicting the three-dimensional (3-D) structure of a protein is an important task in the field of bioinformatics and biological sciences. However, directly predicting the 3-D structure from the primary structure is hard to achieve. Therefore, predicting the fold or structural class of a protein sequence is generally used as an intermediate step in determining the protein's 3-D structure. For protein fold recognition (PFR) and structural class prediction (SCP), two steps are required: feature extraction and classification. Feature extraction techniques generally utilize syntactical-based information, evolutionary-based information and physicochemical-based information to extract features. In this study, we explore the importance of utilizing the physicochemical properties of amino acids for improving PFR and SCP accuracies. For this, we propose a Forward Consecutive Search (FCS) scheme which aims to strategically select physicochemical attributes that will supplement the existing feature extraction techniques for PFR and SCP. An exhaustive search is conducted on all the existing 544 physicochemical attributes using the proposed FCS scheme and a subset of physicochemical attributes is identified. Features extracted from these selected attributes are then combined with existing syntactical-based and evolutionary-based features, to show an improvement in the recognition and prediction performance on benchmark datasets. Copyright © 2016 Elsevier Ltd. All rights reserved.
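
At its core, a forward search over candidate attributes is a greedy loop: repeatedly add whichever attribute most improves the score, and stop when nothing helps. A minimal sketch of that idea, with an invented scoring function standing in for the classifier's cross-validated accuracy (the real FCS scheme over 544 attributes is more involved):

```python
def forward_search(attributes, score_fn):
    """Greedy forward selection: add the attribute with the largest
    score gain each round; stop when no attribute improves the score."""
    selected, best = [], score_fn([])
    remaining = list(attributes)
    while remaining:
        top_score, top_attr = max((score_fn(selected + [a]), a)
                                  for a in remaining)
        if top_score <= best:
            break
        best = top_score
        selected.append(top_attr)
        remaining.remove(top_attr)
    return selected, best

# Toy scorer: two attributes genuinely help, the rest add noise (a penalty).
useful = {"hydrophobicity": 0.05, "polarity": 0.03}
def toy_accuracy(attrs):
    return 0.70 + sum(useful.get(a, -0.01) for a in attrs)

sel, acc = forward_search(
    ["volume", "hydrophobicity", "charge", "polarity"], toy_accuracy)
print(sel, round(acc, 2))
```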

  3. Cohort-specific imputation of gene expression improves prediction of warfarin dose for African Americans.

    Science.gov (United States)

    Gottlieb, Assaf; Daneshjou, Roxana; DeGorter, Marianne; Bourgeois, Stephane; Svensson, Peter J; Wadelius, Mia; Deloukas, Panos; Montgomery, Stephen B; Altman, Russ B

    2017-11-24

    Genome-wide association studies are useful for discovering genotype-phenotype associations but are limited because they require large cohorts to identify a signal, which can be population-specific. Mapping genetic variation to genes improves power and allows the effects of both protein-coding variation as well as variation in expression to be combined into "gene level" effects. Previous work has shown that warfarin dose can be predicted using information from genetic variation that affects protein-coding regions. Here, we introduce a method that improves dose prediction by integrating tissue-specific gene expression. In particular, we use drug pathways and expression quantitative trait loci knowledge to impute gene expression, on the assumption that differential expression of key pathway genes may impact dose requirement. We focus on 116 genes from the pharmacokinetic and pharmacodynamic pathways of warfarin within training and validation sets comprising both European and African-descent individuals. We build gene-tissue signatures associated with warfarin dose in a cohort-specific manner and identify a signature of 11 gene-tissue pairs that significantly augments the International Warfarin Pharmacogenetics Consortium dosage-prediction algorithm in both populations. Our results demonstrate that imputed expression can improve dose prediction and bridge population-specific compositions. MATLAB code is available at https://github.com/assafgo/warfarin-cohort.

  4. Cohort-specific imputation of gene expression improves prediction of warfarin dose for African Americans

    Directory of Open Access Journals (Sweden)

    Assaf Gottlieb

    2017-11-01

    Full Text Available Abstract Background Genome-wide association studies are useful for discovering genotype–phenotype associations but are limited because they require large cohorts to identify a signal, which can be population-specific. Mapping genetic variation to genes improves power and allows the effects of both protein-coding variation as well as variation in expression to be combined into “gene level” effects. Methods Previous work has shown that warfarin dose can be predicted using information from genetic variation that affects protein-coding regions. Here, we introduce a method that improves dose prediction by integrating tissue-specific gene expression. In particular, we use drug pathways and expression quantitative trait loci knowledge to impute gene expression—on the assumption that differential expression of key pathway genes may impact dose requirement. We focus on 116 genes from the pharmacokinetic and pharmacodynamic pathways of warfarin within training and validation sets comprising both European and African-descent individuals. Results We build gene-tissue signatures associated with warfarin dose in a cohort-specific manner and identify a signature of 11 gene-tissue pairs that significantly augments the International Warfarin Pharmacogenetics Consortium dosage-prediction algorithm in both populations. Conclusions Our results demonstrate that imputed expression can improve dose prediction and bridge population-specific compositions. MATLAB code is available at https://github.com/assafgo/warfarin-cohort

  5. Utilizing multiple scale models to improve predictions of extra-axial hemorrhage in the immature piglet.

    Science.gov (United States)

    Scott, Gregory G; Margulies, Susan S; Coats, Brittany

    2016-10-01

    Traumatic brain injury (TBI) is a leading cause of death and disability in the USA. To help understand and better predict TBI, researchers have developed complex finite element (FE) models of the head which incorporate many biological structures such as scalp, skull, meninges, brain (with gray/white matter differentiation), and vasculature. However, most models drastically simplify the membranes and substructures between the pia and arachnoid membranes. We hypothesize that substructures in the pia-arachnoid complex (PAC) contribute substantially to brain deformation following head rotation, and that when included in FE models accuracy of extra-axial hemorrhage prediction improves. To test these hypotheses, microscale FE models of the PAC were developed to span the variability of PAC substructure anatomy and regional density. The constitutive response of these models were then integrated into an existing macroscale FE model of the immature piglet brain to identify changes in cortical stress distribution and predictions of extra-axial hemorrhage (EAH). Incorporating regional variability of PAC substructures substantially altered the distribution of principal stress on the cortical surface of the brain compared to a uniform representation of the PAC. Simulations of 24 non-impact rapid head rotations in an immature piglet animal model resulted in improved accuracy of EAH prediction (to 94 % sensitivity, 100 % specificity), as well as a high accuracy in regional hemorrhage prediction (to 82-100 % sensitivity, 100 % specificity). We conclude that including a biofidelic PAC substructure variability in FE models of the head is essential for improved predictions of hemorrhage at the brain/skull interface.

  6. Improving the Accuracy of a Heliocentric Potential (HCP) Prediction Model for the Aviation Radiation Dose

    Directory of Open Access Journals (Sweden)

    Junga Hwang

    2016-12-01

    Full Text Available The space radiation dose over air routes including polar routes should be carefully considered, especially when space weather shows sudden disturbances such as coronal mass ejections (CMEs), flares, and accompanying solar energetic particle events. We recently established a heliocentric potential (HCP) prediction model for real-time operation of the CARI-6 and CARI-6M programs. Specifically, the HCP value is used as a critical input value in the CARI-6/6M programs, which estimate the aviation route dose based on the effective dose rate. The CARI-6/6M approach is the most widely used technique, and the programs can be obtained from the U.S. Federal Aviation Administration (FAA). However, HCP values are given at a one month delay on the FAA official webpage, which makes it difficult to obtain real-time information on the aviation route dose. In order to overcome this critical limitation regarding the time delay for space weather customers, we developed a HCP prediction model based on sunspot number variations (Hwang et al. 2015). In this paper, we focus on improvements to our HCP prediction model and update it with neutron monitoring data. We found that the most accurate method to derive the HCP value involves (1) real-time daily sunspot assessments, (2) predictions of the daily HCP by our prediction algorithm, and (3) calculations of the resultant daily effective dose rate. Additionally, we also derived the HCP prediction algorithm in this paper by using ground neutron counts. With the compensation stemming from the use of ground neutron count data, the newly developed HCP prediction model was improved.

  7. The use of patient factors to improve the prediction of operative duration using laparoscopic cholecystectomy.

    Science.gov (United States)

    Thiels, Cornelius A; Yu, Denny; Abdelrahman, Amro M; Habermann, Elizabeth B; Hallbeck, Susan; Pasupathy, Kalyan S; Bingener, Juliane

    2017-01-01

    Reliable prediction of operative duration is essential for improving patient and care team satisfaction, optimizing resource utilization and reducing cost. Current operative scheduling systems are unreliable and contribute to costly over- and underestimation of operative time. We hypothesized that the inclusion of patient-specific factors would improve the accuracy in predicting operative duration. We reviewed all elective laparoscopic cholecystectomies performed at a single institution between 01/2007 and 06/2013. Concurrent procedures were excluded. Univariate analysis evaluated the effect of age, gender, BMI, ASA, laboratory values, smoking, and comorbidities on operative duration. Multivariable linear regression models were constructed using the significant factors (p < 0.05) and compared against the historical surgeon-specific and procedure-specific operative duration. External validation was done using the ACS-NSQIP database (n = 11,842). A total of 1801 laparoscopic cholecystectomy patients met inclusion criteria. Female sex was associated with reduced operative duration (-7.5 min, p < 0.001 vs. male sex) while increasing BMI (+5.1 min BMI 25-29.9, +6.9 min BMI 30-34.9, +10.4 min BMI 35-39.9, +17.0 min BMI 40+, all p < 0.05 vs. normal BMI), increasing ASA (+7.4 min ASA III, +38.3 min ASA IV, all p < 0.01 vs. ASA I), and elevated liver function tests (+7.9 min, p < 0.01 vs. normal) were predictive of increased operative duration on univariate analysis. A model was then constructed using these predictive factors. The traditional surgical scheduling system was poorly predictive of actual operative duration (R2 = 0.001) compared to the patient factors model (R2 = 0.08). The model remained predictive on external validation (R2 = 0.14). The addition of surgeon as a variable in the institutional model further improved the predictive ability of the model (R2 = 0.18). The use of routinely available pre-operative patient factors improves the prediction of operative duration.
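
The reported univariate effects can be read as additive adjustments, in minutes, to a surgeon- and procedure-specific baseline. The sketch below hard-codes the coefficients quoted in the abstract; the baseline value and the purely additive form (rather than the fitted multivariable model) are simplifying assumptions for illustration:

```python
def predicted_duration(base_minutes, sex, bmi, asa, elevated_lfts):
    """Adjust a surgeon/procedure baseline (minutes) using the univariate
    effects quoted in the abstract. Illustrative, not the fitted model."""
    minutes = base_minutes
    if sex == "F":
        minutes -= 7.5                 # female sex: shorter operations
    if 25 <= bmi < 30:
        minutes += 5.1
    elif 30 <= bmi < 35:
        minutes += 6.9
    elif 35 <= bmi < 40:
        minutes += 10.4
    elif bmi >= 40:
        minutes += 17.0
    if asa == 3:
        minutes += 7.4
    elif asa == 4:
        minutes += 38.3
    if elevated_lfts:
        minutes += 7.9                 # elevated liver function tests
    return minutes

# 60-minute baseline, female patient, BMI 36, ASA III, normal LFTs
print(predicted_duration(60, sex="F", bmi=36, asa=3, elevated_lfts=False))
```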

  8. Reliable B cell epitope predictions: impacts of method development and improved benchmarking

    DEFF Research Database (Denmark)

    Kringelum, Jens Vindahl; Lundegaard, Claus; Lund, Ole

    2012-01-01

    biomedical applications such as rational vaccine design, development of disease diagnostics and immunotherapeutics. However, experimental mapping of epitopes is resource intensive, making in silico methods an appealing complementary approach. To date, the reported performance of methods for in silico mapping...... evaluation data set improved from 0.712 to 0.727. Our results thus demonstrate that given proper benchmark definitions, B-cell epitope prediction methods achieve highly significant predictive performances suggesting these tools to be a powerful asset in rational epitope discovery. The updated version

  9. Methods to improve genomic prediction and GWAS using combined Holstein populations

    DEFF Research Database (Denmark)

    Li, Xiujin

    The thesis focuses on methods to improve GWAS and genomic prediction using combined Holstein populations and investigates G by E interaction. The conclusions are: 1) Prediction reliabilities for Brazilian Holsteins can be increased by adding Nordic and French genotyped bulls, and a large G by E interaction exists between populations. 2) Combining data from Chinese and Danish Holstein populations increases the power of GWAS and detects new QTL regions for milk fatty acid traits. 3) The novel multi-trait Bayesian model efficiently estimates region-specific genomic variances, covariances...

  10. Improved prediction of reservoir behavior through integration of quantitative geological and petrophysical data

    Energy Technology Data Exchange (ETDEWEB)

    Auman, J. B.; Davies, D. K.; Vessell, R. K.

    1997-08-01

    Methodology that promises improved reservoir characterization and prediction of permeability, production and injection behavior during primary and enhanced recovery operations was demonstrated. The method is based on identifying intervals of unique pore geometry by a combination of image analysis techniques and traditional petrophysical measurements to calculate rock type and estimate permeability and saturation. Results from a complex carbonate and sandstone reservoir were presented as illustrative examples of the versatility and high level of accuracy of this method in predicting reservoir quality. 16 refs., 5 tabs., 14 figs.

  11. Improved Helicopter Rotor Performance Prediction through Loose and Tight CFD/CSD Coupling

    Science.gov (United States)

    Ickes, Jacob C.

    Helicopters and other Vertical Take-Off or Landing (VTOL) vehicles exhibit an interesting combination of structural dynamic and aerodynamic phenomena which together drive the rotor performance. The combination of factors involved make simulating the rotor a challenging and multidisciplinary effort, and one which is still an active area of interest in the industry because of the money and time it could save during design. Modern tools allow the prediction of rotorcraft physics from first principles. Analysis of the rotor system with this level of accuracy provides the understanding necessary to improve its performance. There has historically been a divide between the comprehensive codes which perform aeroelastic rotor simulations using simplified aerodynamic models, and the very computationally intensive Navier-Stokes Computational Fluid Dynamics (CFD) solvers. As computer resources become more available, efforts have been made to replace the simplified aerodynamics of the comprehensive codes with the more accurate results from a CFD code. The objective of this work is to perform aeroelastic rotorcraft analysis using first-principles simulations for both fluids and structural predictions using tools available at the University of Toledo. Two separate codes are coupled together in both loose coupling (data exchange on a periodic interval) and tight coupling (data exchange each time step) schemes. To allow the coupling to be carried out in a reliable and efficient way, a Fluid-Structure Interaction code was developed which automatically performs primary functions of loose and tight coupling procedures. Flow phenomena such as transonics, dynamic stall, locally reversed flow on a blade, and Blade-Vortex Interaction (BVI) were simulated in this work. Results of the analysis show aerodynamic load improvement due to the inclusion of the CFD-based airloads in the structural dynamics analysis of the Computational Structural Dynamics (CSD) code. 
Improvements came in the form of more accurate predicted airloads and, in turn, a more accurate blade structural response.
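
The difference between the two coupling schemes is simply the frequency of data exchange between the fluid and structural solvers: every time step (tight) versus a periodic interval (loose). A schematic sketch with toy one-variable "solvers" standing in for the CFD and CSD codes; the solver functions and numbers are invented for illustration:

```python
def coupled_solve(fluid_load, structure_deflect, steps, exchange_every):
    """Schematic CFD/CSD coupling loop. exchange_every == 1 swaps data
    every time step (tight coupling); a larger interval lets the fluid
    solver run on a frozen shape between exchanges (loose coupling)."""
    load, deflection = 0.0, 0.0
    for step in range(1, steps + 1):
        load = fluid_load(deflection)            # "CFD": airloads on current shape
        if step % exchange_every == 0:
            deflection = structure_deflect(load)  # "CSD": deform under airloads
    return deflection

# Toy physics: airload drops as the blade deflects; deflection tracks load.
def fluid(d):
    return 1.0 - 0.5 * d

def struct(f):
    return 0.8 * f

tight = coupled_solve(fluid, struct, steps=50, exchange_every=1)
loose = coupled_solve(fluid, struct, steps=50, exchange_every=10)
print(round(tight, 4), round(loose, 4))  # tight converges to the fixed point
```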

  12. Improved Model for Predicting the Free Energy Contribution of Dinucleotide Bulges to RNA Duplex Stability.

    Science.gov (United States)

    Tomcho, Jeremy C; Tillman, Magdalena R; Znosko, Brent M

    2015-09-01

    Predicting the secondary structure of RNA is an intermediate step in predicting RNA three-dimensional structure. Commonly, determining RNA secondary structure from sequence uses free energy minimization and nearest neighbor parameters. Current algorithms utilize a sequence-independent model to predict free energy contributions of dinucleotide bulges. To determine if a sequence-dependent model would be more accurate, short RNA duplexes containing dinucleotide bulges with different sequences and nearest neighbor combinations were optically melted to derive thermodynamic parameters. These data suggested energy contributions of dinucleotide bulges were sequence-dependent, and a sequence-dependent model was derived. This model assigns free energy penalties based on the identity of nucleotides in the bulge (3.06 kcal/mol for two purines, 2.93 kcal/mol for two pyrimidines, 2.71 kcal/mol for 5'-purine-pyrimidine-3', and 2.41 kcal/mol for 5'-pyrimidine-purine-3'). The predictive model also includes a 0.45 kcal/mol penalty for an A-U pair adjacent to the bulge and a -0.28 kcal/mol bonus for a G-U pair adjacent to the bulge. The new sequence-dependent model results in predicted values within, on average, 0.17 kcal/mol of experimental values, a significant improvement over the sequence-independent model. This model and new experimental values can be incorporated into algorithms that predict RNA stability and secondary structure from sequence.
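
The sequence-dependent model described above applies directly from the quoted parameters. The sketch below encodes those penalties and adjacency adjustments; the input encoding (bulge as a 5'-to-3' two-letter string, closing pairs as two-letter strings) is an assumption made here for illustration:

```python
def bulge_penalty(bulge, adjacent_pairs):
    """Free-energy penalty (kcal/mol) of a dinucleotide bulge under the
    sequence-dependent model: base penalty from the identity of the two
    bulged nucleotides, plus adjustments for A-U / G-U closing pairs."""
    purines = set("AG")
    first_pu, second_pu = bulge[0] in purines, bulge[1] in purines
    if first_pu and second_pu:
        dg = 3.06            # two purines
    elif not first_pu and not second_pu:
        dg = 2.93            # two pyrimidines
    elif first_pu:
        dg = 2.71            # 5'-purine-pyrimidine-3'
    else:
        dg = 2.41            # 5'-pyrimidine-purine-3'
    for pair in adjacent_pairs:
        if pair in ("AU", "UA"):
            dg += 0.45       # A-U adjacent to the bulge: penalty
        elif pair in ("GU", "UG"):
            dg -= 0.28       # G-U adjacent to the bulge: bonus
    return round(dg, 2)

# 5'-CA-3' bulge (pyrimidine-purine) closed by one A-U and one G-U pair
print(bulge_penalty("CA", ["AU", "GU"]))  # 2.41 + 0.45 - 0.28 = 2.58
```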

  13. Improving Computational Efficiency of Prediction in Model-Based Prognostics Using the Unscented Transform

    Science.gov (United States)

    Daigle, Matthew John; Goebel, Kai Frank

    2010-01-01

    Model-based prognostics captures system knowledge in the form of physics-based models of components, and how they fail, in order to obtain accurate predictions of end of life (EOL). EOL is predicted based on the estimated current state distribution of a component and expected profiles of future usage. In general, this requires simulations of the component using the underlying models. In this paper, we develop a simulation-based prediction methodology that achieves computational efficiency by performing only the minimal number of simulations needed in order to accurately approximate the mean and variance of the complete EOL distribution. This is performed through the use of the unscented transform, which predicts the means and covariances of a distribution passed through a nonlinear transformation. In this case, the EOL simulation acts as that nonlinear transformation. In this paper, we review the unscented transform, and describe how this concept is applied to efficient EOL prediction. As a case study, we develop a physics-based model of a solenoid valve, and perform simulation experiments to demonstrate improved computational efficiency without sacrificing prediction accuracy.
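
The unscented transform replaces a Monte Carlo sweep with a small, deterministically chosen set of sigma points pushed through the nonlinear function. A scalar sketch with an invented surrogate standing in for the EOL simulation:

```python
import math

def unscented_transform(f, mean, var, kappa=2.0):
    """Scalar unscented transform: propagate (mean, var) through a
    nonlinear function f using three sigma points and their weights,
    instead of many Monte Carlo simulations."""
    n = 1
    spread = math.sqrt((n + kappa) * var)
    points = [mean, mean + spread, mean - spread]
    w0 = kappa / (n + kappa)
    wi = 1.0 / (2.0 * (n + kappa))
    weights = [w0, wi, wi]
    ys = [f(x) for x in points]
    y_mean = sum(w * y for w, y in zip(weights, ys))
    y_var = sum(w * (y - y_mean) ** 2 for w, y in zip(weights, ys))
    return y_mean, y_var

# Invented surrogate "EOL simulation": remaining life shrinks with load.
def eol(load):
    return 1000.0 / (1.0 + 0.5 * load)

m, v = unscented_transform(eol, mean=2.0, var=0.04)
print(round(m, 1), round(v, 1))
```

For a linear map the transform reproduces the mean and variance exactly, which makes a handy sanity check on any implementation.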

  14. A study on improvement of analytical prediction model for spacer grid pressure loss coefficients

    International Nuclear Information System (INIS)

    Lim, Jonh Seon

    2002-02-01

    Nuclear fuel assemblies used in nuclear power plants consist of nuclear fuel rods, control rod guide tubes, an instrument guide tube, spacer grids, a bottom nozzle, and a top nozzle. The spacer grid is the most important of these components for thermal hydraulic and mechanical design and analysis. The spacer grids, fixed to the guide tubes, support the fuel rods and play a key role in enhancing thermal energy transfer through the coolant mixing caused by turbulent flow and crossflow in the subchannels. In this paper, the analytical spacer grid pressure loss prediction model has been studied and improved by considering the test section wall-to-spacer grid gap pressure loss independently and applying the appropriate friction drag coefficient to predict pressure loss more accurately in the low Reynolds number region. The improved analytical model has been verified against hydraulic pressure drop test results for spacer grids of three types with 5x5, 16x16 and 17x17 arrays, respectively. The pressure loss coefficients predicted by the improved analytical model agree with the test results within ±12%. This result shows that the improved analytical model can be used for research on, and design changes of, nuclear fuel assemblies.
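
A classical starting point for spacer-grid pressure loss is a blockage-ratio correlation of the Rehme type, dP = Cv * eps^2 * (rho * v^2 / 2), where eps is the grid-to-subchannel blockage ratio. The sketch below implements only that baseline as a point of reference; it is a stand-in for the improved model in this record (which additionally treats the wall-gap loss and low-Reynolds friction separately), and the numeric inputs are illustrative:

```python
def grid_pressure_loss(cv, blockage, rho, velocity):
    """Rehme-type spacer-grid loss: the dimensionless loss coefficient
    K = Cv * eps^2 scales the dynamic pressure rho*v^2/2.
    cv: modified drag coefficient, blockage: grid-to-subchannel area
    ratio eps, rho: coolant density (kg/m3), velocity: axial speed (m/s)."""
    k = cv * blockage ** 2                 # loss coefficient (dimensionless)
    dp = k * 0.5 * rho * velocity ** 2     # pressure drop (Pa)
    return k, dp

# Illustrative values: Cv = 7, 30% blockage, hot water coolant at 5 m/s
k, dp = grid_pressure_loss(cv=7.0, blockage=0.3, rho=740.0, velocity=5.0)
print(round(k, 3), round(dp, 1))
```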

  15. Survival prediction algorithms miss significant opportunities for improvement if used for case selection in trauma quality improvement programs.

    Science.gov (United States)

    Heim, Catherine; Cole, Elaine; West, Anita; Tai, Nigel; Brohi, Karim

    2016-09-01

    Quality improvement (QI) programs have been shown to reduce preventable mortality in trauma care. Detailed review of all trauma deaths is a time- and resource-consuming process, and the calculated probability of survival (Ps) has been proposed as an audit filter so that review is limited to deaths of patients who were 'expected to survive'. However, no Ps-based algorithm has been validated, and no study has examined elements of preventability associated with deaths classified as 'expected'. The objective of this study was to examine whether trauma performance review can be streamlined using existing mortality prediction tools without missing important areas for improvement. We conducted a retrospective study of all trauma deaths reviewed by our trauma QI program. Deaths were classified as non-preventable, possibly preventable, probably preventable or preventable. Opportunities for improvement (OPIs) involve failure in the process of care and were classified into clinical and system deviations from standards of care. TRISS and PS were used for calculation of probability of survival. Peer-review charts were reviewed by a single investigator. Over 8 years, 626 patients were included. One third showed elements of preventability and 4% were preventable. Preventability occurred across the entire range of the calculated Ps band. Limiting review to unexpected deaths would have missed over 50% of all preventability issues and a third of preventable deaths. 37% of patients showed opportunities for improvement (OPIs). Neither TRISS nor PS allowed for reliable identification of OPIs, and limiting peer-review to patients with unexpected deaths would have missed close to 60% of all issues in care. TRISS and PS fail to identify a significant proportion of avoidable deaths and miss important opportunities for process and system improvement. Based on this, all trauma deaths should be subjected to expert panel review in order to maximize the output of performance improvement programs.
Copyright © 2016 Elsevier

  16. Improving the Prediction of Total Surgical Procedure Time Using Linear Regression Modeling.

    Science.gov (United States)

    Edelman, Eric R; van Kuijk, Sander M J; Hamaekers, Ankie E W; de Korte, Marcel J M; van Merode, Godefridus G; Buhre, Wolfgang F F A

    2017-01-01

    For efficient utilization of operating rooms (ORs), accurate schedules of assigned block time and sequences of patient cases need to be made. The quality of these planning tools is dependent on the accurate prediction of total procedure time (TPT) per case. In this paper, we attempt to improve the accuracy of TPT predictions by using linear regression models based on estimated surgeon-controlled time (eSCT) and other variables relevant to TPT. We extracted data from a Dutch benchmarking database of all surgeries performed in six academic hospitals in The Netherlands from 2012 till 2016. The final dataset consisted of 79,983 records, describing 199,772 h of total OR time. Potential predictors of TPT that were included in the subsequent analysis were eSCT, patient age, type of operation, American Society of Anesthesiologists (ASA) physical status classification, and type of anesthesia used. First, we computed the predicted TPT based on a previously described fixed ratio model for each record, multiplying eSCT by 1.33. This number is based on the research performed by van Veen-Berkx et al., which showed that 33% of SCT is generally a good approximation of anesthesia-controlled time (ACT). We then systematically tested all possible linear regression models to predict TPT using eSCT in combination with the other available independent variables. In addition, all regression models were again tested without eSCT as a predictor to predict ACT separately (which leads to TPT by adding SCT). TPT was most accurately predicted using a linear regression model based on the independent variables eSCT, type of operation, ASA classification, and type of anesthesia. This model performed significantly better than the fixed ratio model and the method of predicting ACT separately. Making use of these more accurate predictions in planning and sequencing algorithms may enable an increase in utilization of ORs, leading to significant financial and productivity related benefits.
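The fixed-ratio versus regression comparison can be sketched on synthetic data. The Dutch benchmarking records are not public, so the generating model below (a hypothetical linear relation between eSCT, ASA class, and anesthesia-controlled time) is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
esct = rng.uniform(30, 240, n)               # estimated surgeon-controlled time (min)
asa = rng.integers(1, 5, n).astype(float)    # ASA class I-IV as a crude numeric covariate
act = 10 + 0.25 * esct + 5 * asa + rng.normal(0, 5, n)  # anesthesia-controlled time
tpt = esct + act                             # total procedure time

# Fixed-ratio model from the abstract: TPT ~ 1.33 * eSCT
# (ACT approximated as 33% of SCT).
pred_fixed = 1.33 * esct

# Linear regression on eSCT plus an additional covariate.
X = np.column_stack([np.ones(n), esct, asa])
beta, *_ = np.linalg.lstsq(X, tpt, rcond=None)
pred_reg = X @ beta

rmse = lambda pred: float(np.sqrt(np.mean((tpt - pred) ** 2)))
```

On data where ACT genuinely depends on more than eSCT, `rmse(pred_reg)` comes out well below `rmse(pred_fixed)`, mirroring the study's finding that covariate-aware regression outperforms the fixed ratio.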

  17. Improving the Prediction of Total Surgical Procedure Time Using Linear Regression Modeling

    Directory of Open Access Journals (Sweden)

    Eric R. Edelman

    2017-06-01

    Full Text Available For efficient utilization of operating rooms (ORs), accurate schedules of assigned block time and sequences of patient cases need to be made. The quality of these planning tools is dependent on the accurate prediction of total procedure time (TPT) per case. In this paper, we attempt to improve the accuracy of TPT predictions by using linear regression models based on estimated surgeon-controlled time (eSCT) and other variables relevant to TPT. We extracted data from a Dutch benchmarking database of all surgeries performed in six academic hospitals in The Netherlands from 2012 till 2016. The final dataset consisted of 79,983 records, describing 199,772 h of total OR time. Potential predictors of TPT that were included in the subsequent analysis were eSCT, patient age, type of operation, American Society of Anesthesiologists (ASA) physical status classification, and type of anesthesia used. First, we computed the predicted TPT based on a previously described fixed ratio model for each record, multiplying eSCT by 1.33. This number is based on the research performed by van Veen-Berkx et al., which showed that 33% of SCT is generally a good approximation of anesthesia-controlled time (ACT). We then systematically tested all possible linear regression models to predict TPT using eSCT in combination with the other available independent variables. In addition, all regression models were again tested without eSCT as a predictor to predict ACT separately (which leads to TPT by adding SCT). TPT was most accurately predicted using a linear regression model based on the independent variables eSCT, type of operation, ASA classification, and type of anesthesia. This model performed significantly better than the fixed ratio model and the method of predicting ACT separately. Making use of these more accurate predictions in planning and sequencing algorithms may enable an increase in utilization of ORs, leading to significant financial and productivity related benefits.

  18. Managing uncertainty in metabolic network structure and improving predictions using EnsembleFBA.

    Directory of Open Access Journals (Sweden)

    Matthew B Biggs

    2017-03-01

    Full Text Available Genome-scale metabolic network reconstructions (GENREs) are repositories of knowledge about the metabolic processes that occur in an organism. GENREs have been used to discover and interpret metabolic functions, and to engineer novel network structures. A major barrier preventing more widespread use of GENREs, particularly to study non-model organisms, is the extensive time required to produce a high-quality GENRE. Many automated approaches have been developed which reduce this time requirement, but automatically-reconstructed draft GENREs still require curation before useful predictions can be made. We present a novel approach to the analysis of GENREs which improves the predictive capabilities of draft GENREs by representing many alternative network structures, all equally consistent with available data, and generating predictions from this ensemble. This ensemble approach is compatible with many reconstruction methods. We refer to this new approach as Ensemble Flux Balance Analysis (EnsembleFBA). We validate EnsembleFBA by predicting growth and gene essentiality in the model organism Pseudomonas aeruginosa UCBPP-PA14. We demonstrate how EnsembleFBA can be included in a systems biology workflow by predicting essential genes in six Streptococcus species and mapping the essential genes to small molecule ligands from DrugBank. We found that some metabolic subsystems contributed disproportionately to the set of predicted essential reactions in a way that was unique to each Streptococcus species, leading to species-specific outcomes from small molecule interactions. Through our analyses of P. aeruginosa and six Streptococci, we show that ensembles increase the quality of predictions without drastically increasing reconstruction time, thus making GENRE approaches more practical for applications which require predictions for many non-model organisms. All of our functions and accompanying example code are available in an open online repository.
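The core ensemble intuition, that many imperfect reconstructions voting together beat any single draft, can be illustrated without metabolic modeling machinery. The sketch below is a toy majority-vote analogy, not EnsembleFBA itself: each "draft" mislabels gene essentiality independently at some error rate, and the ensemble call is markedly more reliable.

```python
import numpy as np

rng = np.random.default_rng(7)
truth = rng.integers(0, 2, 200)      # "true" essential (1) / non-essential (0) calls
n_models = 25                        # number of alternative draft reconstructions

# Each draft flips the true call independently 30% of the time.
flip = rng.random((n_models, truth.size)) < 0.3
drafts = np.where(flip, 1 - truth, truth)

single_acc = (drafts[0] == truth).mean()          # accuracy of one draft
votes = drafts.sum(axis=0) > n_models / 2         # majority vote across the ensemble
ensemble_acc = (votes == truth).mean()            # accuracy of the ensemble call
```

With 25 drafts each wrong 30% of the time, a per-prediction majority is wrong only when 13 or more drafts err simultaneously, which is rare; this is the statistical leverage an ensemble of equally plausible network structures provides.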

  19. Eddy-induced cross-shelf export of high Chl-a coastal waters in the SE Bay of Biscay

    KAUST Repository

    Rubio, Anna

    2017-12-08

    Different remote sensing data were combined to characterise a winter anticyclonic eddy in the southeastern Bay of Biscay and to infer its effects on cross-shelf exchanges, in a period when typical along shelf-slope currents depict a cyclonic pattern. While the joint analysis of available satellite data (infrared, visible and altimetry) permitted the characterisation and tracking of the anticyclone properties and path, data from a coastal high-frequency radar system enabled a quantitative analysis of the surface cross-shelf transports associated with this anticyclone. The warm core anticyclone had a diameter of around 50 km, maximum azimuthal velocities near 50 cm s−1 and a relative vorticity of up to −0.45f. The eddy generation occurred after the relaxation of a cyclonic wind-driven current regime over the shelf-slope; then, the eddy remained stationary for several weeks until it started to drift northwards along the shelf break. The surface signature of this eddy was observed by means of high-frequency radar data for 20 consecutive days, providing a unique opportunity to characterise and quantify, from a Lagrangian perspective, the associated transport and its effect on the Chl-a surface distribution. We observed the presence of mesoscale structures with similar characteristics in the area during different winters within the period 2011–2014. Our results suggest that the eddy-induced recurrent cross-shelf export is an effective mechanism for the expansion of coastal productive waters into the adjacent oligotrophic ocean basin.

  20. Activity Prediction of Schiff Base Compounds using Improved QSAR Models of Cinnamaldehyde Analogues and Derivatives

    Directory of Open Access Journals (Sweden)

    Hui Wang

    2015-10-01

    Full Text Available In past work, QSAR (quantitative structure-activity relationship) models of cinnamaldehyde analogues and derivatives (CADs) have been used to predict the activities of new chemicals based on their mass concentrations, but these approaches are not without shortcomings. Therefore, molar concentrations were used instead of mass concentrations to determine antifungal activity. New QSAR models of CADs against Aspergillus niger and Penicillium citrinum were established, and the molecular design of new CADs was performed. The antifungal properties of the designed CADs were tested, and the experimental Log AR values were in agreement with the predicted Log AR values. The results indicate that the improved QSAR models are more reliable and can be effectively used for CAD molecular design and prediction of the activity of CADs. These findings provide new insight into the development and utilization of cinnamaldehyde compounds.

  1. A New Approach to Improve Accuracy of Grey Model GMC(1,n) in Time Series Prediction

    Directory of Open Access Journals (Sweden)

    Sompop Moonchai

    2015-01-01

    Full Text Available This paper presents a modified grey model GMC(1,n) for use in systems that involve one dependent system behavior and n-1 relative factors. The proposed model was developed from the conventional GMC(1,n) model in order to improve its prediction accuracy by modifying the formula for calculating the background value, the system of parameter estimation, and the model prediction equation. The modified GMC(1,n) model was verified by two cases: the study of forecasting CO2 emission in Thailand and forecasting electricity consumption in Thailand. The results demonstrated that the modified GMC(1,n) model was able to achieve higher fitting and prediction accuracy compared with the conventional GMC(1,n) and D-GMC(1,n) models.
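For orientation, here is a minimal sketch of the conventional single-variable grey model GM(1,1), the building block that the GMC(1,n) family extends with relative-factor series. It shows the background value z(k), the very term whose formula the modified model recalculates; the data are synthetic, not from the Thai case studies.

```python
import numpy as np

x0 = np.array([100.0, 110.0, 121.0, 133.1])  # raw series (exact 10% growth)
x1 = np.cumsum(x0)                           # accumulated generating operation (AGO)

# Conventional background value: z(k) = 0.5 * (x1(k) + x1(k-1)).
z = 0.5 * (x1[1:] + x1[:-1])

# Estimate a, b in the grey differential equation x0(k) + a*z(k) = b.
B = np.column_stack([-z, np.ones_like(z)])
a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]

# One-step-ahead prediction:
# x0_hat(k+1) = (x0(1) - b/a) * (1 - e^a) * e^(-a*k)
k = len(x0)
next_val = (x0[0] - b / a) * (1 - np.exp(a)) * np.exp(-a * k)
```

On exactly geometric data the fit is exact but the continuous-exponential prediction still deviates slightly from the true next value (146.41), which is one motivation for refining the background-value formula and the prediction equation.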

  2. Improving the Accuracy of Predicting Maximal Oxygen Consumption (VO2pk)

    Science.gov (United States)

    Downs, Meghan E.; Lee, Stuart M. C.; Ploutz-Snyder, Lori; Feiveson, Alan

    2016-01-01

    Maximal oxygen consumption (VO2pk) is the maximum amount of oxygen that the body can use during intense exercise and is used for benchmarking endurance exercise capacity. The most accurate method to determine VO2pk requires continuous measurements of ventilation and gas exchange during an exercise test to maximal effort, which necessitates expensive equipment, a trained staff, and time to set up the equipment. For astronauts, accurate VO2pk measures are important to assess mission-critical task performance capabilities and to prescribe exercise intensities to optimize performance. Currently, astronauts perform submaximal exercise tests during flight to predict VO2pk; however, while submaximal VO2pk prediction equations provide reliable estimates of mean VO2pk for populations, they can be unacceptably inaccurate for a given individual. The error in current predictions and the logistical limitations of measuring VO2pk, particularly during spaceflight, highlight the need for improved estimation methods.
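A common submaximal approach (not necessarily the protocol used in flight) fits the linear heart-rate/VO2 relation from submaximal stages and extrapolates it to an age-predicted maximal heart rate. The sketch below uses illustrative numbers; the population-level 220 − age approximation is exactly the kind of assumption that makes such predictions inaccurate for a given individual.

```python
import numpy as np

# Submaximal test stages: heart rate (bpm) and measured VO2 (L/min).
hr = np.array([100.0, 120.0, 140.0, 160.0])
vo2 = np.array([1.2, 1.8, 2.4, 3.0])

# Fit the linear HR-VO2 relation for this individual.
slope, intercept = np.polyfit(hr, vo2, 1)

# Extrapolate to an age-predicted maximal heart rate (population formula).
age = 40
hr_max = 220 - age
vo2pk_est = slope * hr_max + intercept   # predicted VO2pk (L/min)
```

The individual HR-VO2 slope is measured, but `hr_max` is only a population estimate; an individual whose true maximal heart rate differs by 10-15 bpm inherits that error directly in the predicted VO2pk.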

  3. Improved time series prediction with a new method for selection of model parameters

    International Nuclear Information System (INIS)

    Jade, A M; Jayaraman, V K; Kulkarni, B D

    2006-01-01

    A new method for model selection in prediction of time series is proposed. Apart from the conventional criterion of minimizing RMS error, the method also minimizes the error on the distribution of singularities, evaluated through the local Hoelder estimates and its probability density spectrum. Predictions of two simulated and one real time series have been done using kernel principal component regression (KPCR) and model parameters of KPCR have been selected employing the proposed as well as the conventional method. Results obtained demonstrate that the proposed method takes into account the sharp changes in a time series and improves the generalization capability of the KPCR model for better prediction of the unseen test data. (letter to the editor)

  4. At the Nexus of History, Ecology, and Hydrobiogeochemistry: Improved Predictions across Scales through Integration.

    Science.gov (United States)

    Stegen, James C

    2018-01-01

    To improve predictions of ecosystem function in future environments, we need to integrate the ecological and environmental histories experienced by microbial communities with hydrobiogeochemistry across scales. A key issue is whether we can derive generalizable scaling relationships that describe this multiscale integration. There is a strong foundation for addressing these challenges. We have the ability to infer ecological history with null models and reveal impacts of environmental history through laboratory and field experimentation. Recent developments also provide opportunities to inform ecosystem models with targeted omics data. A major next step is coupling knowledge derived from such studies with multiscale modeling frameworks that are predictive under non-steady-state conditions. This is particularly true for systems spanning dynamic interfaces, which are often hot spots of hydrobiogeochemical function. We can advance predictive capabilities through a holistic perspective focused on the nexus of history, ecology, and hydrobiogeochemistry.

  5. Machine-learning scoring functions to improve structure-based binding affinity prediction and virtual screening.

    Science.gov (United States)

    Ain, Qurrat Ul; Aleksandrova, Antoniya; Roessler, Florian D; Ballester, Pedro J

    2015-01-01

    Docking tools to predict whether and how a small molecule binds to a target can be applied if a structural model of such target is available. The reliability of docking depends, however, on the accuracy of the adopted scoring function (SF). Despite intense research over the years, improving the accuracy of SFs for structure-based binding affinity prediction or virtual screening has proven to be a challenging task for any class of method. New SFs based on modern machine-learning regression models, which do not impose a predetermined functional form and thus are able to exploit effectively much larger amounts of experimental data, have recently been introduced. These machine-learning SFs have been shown to outperform a wide range of classical SFs at both binding affinity prediction and virtual screening. The emerging picture from these studies is that the classical approach of using linear regression with a small number of expert-selected structural features can be strongly improved by a machine-learning approach based on nonlinear regression allied with comprehensive data-driven feature selection. Furthermore, the performance of classical SFs does not grow with larger training datasets and hence this performance gap is expected to widen as more training data becomes available in the future. Other topics covered in this review include predicting the reliability of a SF on a particular target class, generating synthetic data to improve predictive performance and modeling guidelines for SF development. WIREs Comput Mol Sci 2015, 5:405-424. doi: 10.1002/wcms.1225

  6. Improved Seasonal Prediction of European Summer Temperatures With New Five-Layer Soil-Hydrology Scheme

    Science.gov (United States)

    Bunzel, Felix; Müller, Wolfgang A.; Dobrynin, Mikhail; Fröhlich, Kristina; Hagemann, Stefan; Pohlmann, Holger; Stacke, Tobias; Baehr, Johanna

    2018-01-01

    We evaluate the impact of a new five-layer soil-hydrology scheme on seasonal hindcast skill of 2 m temperatures over Europe obtained with the Max Planck Institute Earth System Model (MPI-ESM). Assimilation experiments from 1981 to 2010 and 10-member seasonal hindcasts initialized on 1 May each year are performed with MPI-ESM in two soil configurations, one using a bucket scheme and one a new five-layer soil-hydrology scheme. We find the seasonal hindcast skill for European summer temperatures to improve with the five-layer scheme compared to the bucket scheme and investigate possible causes for these improvements. First, improved indirect soil moisture assimilation allows for enhanced soil moisture-temperature feedbacks in the hindcasts. Additionally, this leads to improved prediction of anomalies in the 500 hPa geopotential height surface, reflecting more realistic atmospheric circulation patterns over Europe.

  7. Breast calcifications. A standardized mammographic reporting and data system to improve positive predictive value

    International Nuclear Information System (INIS)

    Perugini, G.; Bonzanini, B.; Valentino, C.

    1999-01-01

    The purpose of this work is to investigate the usefulness of a standardized reporting and data system in improving the positive predictive value of mammography in breast calcifications. Using the Breast Imaging Reporting and Data System lexicon developed by the American College of Radiology, we defined 5 descriptive categories of breast calcifications and classified diagnostic suspicion of malignancy on a 3-grade scale (low, intermediate and high). Two radiologists reviewed 117 mammographic studies selected from those of the patients submitted to surgical biopsy for mammographically detected calcifications from January 1993 to December 1997, and classified them according to the above criteria. The positive predictive value was calculated for all examinations and for the stratified groups. Defining a standardized system for assessing and describing breast calcifications helps improve the diagnostic accuracy of mammography in clinical practice.

  8. An Improved Generalized Predictive Control in a Robust Dynamic Partial Least Square Framework

    Directory of Open Access Journals (Sweden)

    Jin Xin

    2015-01-01

    Full Text Available To tackle the sensitivity to outliers in system identification, a new robust dynamic partial least squares (PLS) model based on an outlier detection method is proposed in this paper. An improved radial basis function network (RBFN) is adopted to construct the predictive model from the inputs and outputs dataset, and a hidden Markov model (HMM) is applied to detect the outliers. After the outliers are removed, a more robust dynamic PLS model is obtained. In addition, an improved generalized predictive control (GPC) with tuning weights under the dynamic PLS framework is proposed to deal with the interaction caused by model mismatch. The results of two simulations demonstrate the effectiveness of the proposed method.

  9. A Model Predictive Control Approach for Fuel Economy Improvement of a Series Hydraulic Hybrid Vehicle

    Directory of Open Access Journals (Sweden)

    Tri-Vien Vu

    2014-10-01

    Full Text Available This study applied a model predictive control (MPC) framework to solve the cruising control problem of a series hydraulic hybrid vehicle (SHHV). The controller regulates not only vehicle velocity, but also engine torque, engine speed, and accumulator pressure to their corresponding reference values. At each time step, a quadratic programming problem is solved within a predictive horizon to obtain the optimal control inputs. The objective is to minimize the output error. This approach ensures that the components operate at high efficiency, thereby improving the total efficiency of the system. The proposed SHHV control system was evaluated under urban and highway driving conditions. By handling constraints and input-output interactions, the MPC-based control system ensures that the system operates safely and efficiently. The fuel economy of the proposed control scheme shows a noticeable improvement in comparison with the PID-based system, in which three Proportional-Integral-Derivative (PID) controllers are used for cruising control.

  10. Biomarkers improve mortality prediction by prognostic scales in community-acquired pneumonia.

    Science.gov (United States)

    Menéndez, R; Martínez, R; Reyes, S; Mensa, J; Filella, X; Marcos, M A; Martínez, A; Esquinas, C; Ramirez, P; Torres, A

    2009-07-01

    Prognostic scales provide a useful tool to predict mortality in community-acquired pneumonia (CAP). However, the inflammatory response of the host, crucial in resolution and outcome, is not included in the prognostic scales. The aim of this study was to investigate whether information about the initial inflammatory cytokine profile and markers increases the accuracy of prognostic scales to predict 30-day mortality. To this aim, a prospective cohort study in two tertiary care hospitals was designed. Procalcitonin (PCT), C-reactive protein (CRP) and the systemic cytokines tumour necrosis factor alpha (TNFalpha) and interleukins IL6, IL8 and IL10 were measured at admission. Initial severity was assessed by the PSI (Pneumonia Severity Index), CURB65 (Confusion, Urea nitrogen, Respiratory rate, Blood pressure, > or = 65 years of age) and CRB65 (Confusion, Respiratory rate, Blood pressure, > or = 65 years of age) scales. A total of 453 hospitalised CAP patients were included. The 36 patients who died (7.8%) had significantly increased levels of IL6, IL8, PCT and CRP. In logistic regression analyses, high levels of CRP and IL6 showed independent value for predicting 30-day mortality, after adjustment for prognostic scales. Adding CRP to PSI significantly increased the area under the receiver operating characteristic curve (AUC) from 0.80 to 0.85, that of CURB65 from 0.82 to 0.85 and that of CRB65 from 0.79 to 0.85. Adding IL6 or PCT values to CRP did not significantly increase the AUC of any scale. When using two scales (PSI and CURB65/CRB65) and CRP simultaneously, the AUC was 0.88. Adding CRP levels to the PSI, CURB65 and CRB65 scales improves 30-day mortality prediction. The highest predictive value is reached with a combination of two scales and CRP. Further validation of this improvement is needed.
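The kind of AUC comparison the study reports can be sketched on synthetic data. The generating model below (a severity-scale score plus an independent biomarker stand-in for CRP) is hypothetical; the point is that a combined score out-discriminates the scale alone, measured by the same Mann-Whitney AUC statistic used for ROC analysis.

```python
import numpy as np

def auc(scores, labels):
    # Area under the ROC curve via the Mann-Whitney statistic: the
    # probability that a random positive case outranks a random negative.
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

rng = np.random.default_rng(42)
n = 400
scale = rng.normal(0, 1, n)     # prognostic scale score (illustrative)
crp = rng.normal(0, 1, n)       # biomarker carrying independent information
died = ((0.8 * scale + 0.8 * crp + rng.normal(0, 1, n)) > 1.0).astype(int)

auc_scale = auc(scale, died)           # scale alone
auc_combined = auc(scale + crp, died)  # scale plus biomarker
```

Whenever the biomarker carries outcome information not already captured by the scale, `auc_combined` exceeds `auc_scale`, which is the statistical pattern behind the reported 0.80 to 0.85 improvements.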

  11. Towards improved hydrologic predictions using data assimilation techniques for water resource management at the continental scale

    Science.gov (United States)

    Naz, Bibi; Kurtz, Wolfgang; Kollet, Stefan; Hendricks Franssen, Harrie-Jan; Sharples, Wendy; Görgen, Klaus; Keune, Jessica; Kulkarni, Ketan

    2017-04-01

    More accurate and reliable hydrologic simulations are important for many applications such as water resource management, future water availability projections and predictions of extreme events. However, simulation of spatial and temporal variations in critical water budget components such as precipitation, snow, evaporation and runoff is highly uncertain, due to errors in e.g. model structure and inputs (hydrologic parameters and forcings). In this study, we use data assimilation techniques to improve the predictability of continental-scale water fluxes, combining in-situ measurements with remotely sensed information to improve hydrologic predictions for water resource systems. The Community Land Model, version 3.5 (CLM), integrated with the Parallel Data Assimilation Framework (PDAF), was implemented at a spatial resolution of 1/36 degree (3 km) over the European CORDEX domain. The modeling system was forced with the high-resolution reanalysis system COSMO-REA6 from the Hans-Ertel Centre for Weather Research (HErZ) and ERA-Interim datasets for the period 1994-2014. A series of data assimilation experiments were conducted to assess the efficiency of assimilating various observations, such as river discharge data, remotely sensed soil moisture, terrestrial water storage and snow measurements, into CLM-PDAF at regional to continental scales. This setup not only allows uncertainties to be quantified, but also improves streamflow predictions by simultaneously updating model states and parameters with observational information. The results from different regions, watershed sizes, spatial resolutions and timescales are compared and discussed in this study.
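The principle behind the assimilation step can be shown with a scalar Kalman analysis update. CLM-PDAF uses ensemble-based filters over high-dimensional states; the sketch below only illustrates the core idea of blending a model forecast with an observation in proportion to their uncertainties.

```python
def kalman_update(forecast, f_var, obs, o_var):
    # Scalar Kalman analysis step: the gain weights the observation more
    # heavily when the forecast is uncertain relative to the measurement.
    gain = f_var / (f_var + o_var)
    analysis = forecast + gain * (obs - forecast)
    a_var = (1 - gain) * f_var       # analysis variance shrinks after update
    return analysis, a_var

# Illustrative numbers: uncertain model forecast, accurate observation.
analysis, a_var = kalman_update(forecast=10.0, f_var=4.0, obs=12.0, o_var=1.0)
```

Here the analysis (11.6) lands closer to the observation than to the forecast because the forecast variance (4.0) dominates the observation variance (1.0), and the posterior variance (0.8) is smaller than either input: assimilation both corrects the state and quantifies the remaining uncertainty.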

  12. Improving behavioral performance under full attention by adjusting response criteria to changes in stimulus predictability.

    Science.gov (United States)

    Katzner, Steffen; Treue, Stefan; Busse, Laura

    2012-09-04

    One of the key features of active perception is the ability to predict critical sensory events. Humans and animals can implicitly learn statistical regularities in the timing of events and use them to improve behavioral performance. Here, we used a signal detection approach to investigate whether such improvements in performance result from changes of perceptual sensitivity or rather from adjustments of a response criterion. In a regular sequence of briefly presented stimuli, human observers performed a noise-limited motion detection task by monitoring the stimulus stream for the appearance of a designated target direction. We manipulated target predictability through the hazard rate, which specifies the likelihood that a target is about to occur, given it has not occurred so far. Analyses of response accuracy revealed that improvements in performance could be accounted for by adjustments of the response criterion; a growing hazard rate was paralleled by an increasing tendency to report the presence of a target. In contrast, the hazard rate did not affect perceptual sensitivity. Consistent with previous research, we also found that reaction time decreases as the hazard rate grows. A simple rise-to-threshold model could well describe this decrease and attribute predictability effects to threshold adjustments rather than changes in information supply. We conclude that, even under conditions of full attention and constant perceptual sensitivity, behavioral performance can be optimized by dynamically adjusting the response criterion to meet ongoing changes in the likelihood of a target.
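The signal-detection decomposition used in the study separates sensitivity (d') from the response criterion (c) via z-transformed hit and false-alarm rates. The sketch below uses illustrative numbers, not the paper's data: raising the hazard rate moves the criterion in the liberal direction (more hits and more false alarms) while leaving d' essentially unchanged.

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse standard normal CDF

def sdt_measures(hit_rate, fa_rate):
    # Standard equal-variance signal detection theory:
    # d' = z(H) - z(FA); criterion c = -(z(H) + z(FA)) / 2.
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Neutral criterion at a low hazard rate...
d_low, c_low = sdt_measures(hit_rate=0.80, fa_rate=0.20)
# ...versus a more liberal criterion at a high hazard rate: both hits and
# false alarms rise together, so sensitivity is essentially unchanged.
d_high, c_high = sdt_measures(hit_rate=0.90, fa_rate=0.344)
```

This is the signature pattern the authors report: accuracy changes driven by the hazard rate show up as a shift in c, not a change in d'.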

  13. An improved shuffled frog leaping algorithm based evolutionary framework for currency exchange rate prediction

    Science.gov (United States)

    Dash, Rajashree

    2017-11-01

    Forecasting the purchasing power of one currency with respect to another is always an interesting topic in the field of financial time series prediction. Despite the existence of several traditional and computational models for currency exchange rate forecasting, there is always a need for developing a simpler and more efficient model with better prediction capability. In this paper, an evolutionary framework is proposed by using an improved shuffled frog leaping (ISFL) algorithm with a computationally efficient functional link artificial neural network (CEFLANN) for prediction of currency exchange rates. The model is validated by observing the monthly prediction measures obtained for three currency exchange data sets, USD/CAD, USD/CHF, and USD/JPY, accumulated within the same period of time. The model performance is also compared with two other evolutionary learning techniques, the shuffled frog leaping algorithm and the particle swarm optimization algorithm. Practical analysis of the results suggests that the proposed model developed using the ISFL algorithm with the CEFLANN network is a promising predictor for currency exchange rate prediction compared to the other models included in the study.

  14. RAPID COMMUNICATION: Improving prediction accuracy of GPS satellite clocks with periodic variation behaviour

    Science.gov (United States)

    Heo, Youn Jeong; Cho, Jeongho; Heo, Moon Beom

    2010-07-01

    The broadcast ephemeris and IGS ultra-rapid predicted (IGU-P) products are primarily available for use in real-time GPS applications. The IGU orbit precision has been remarkably improved since late 2007, but its clock products have not shown acceptably high-quality prediction performance. One reason for this fact is that satellite atomic clocks in space can be easily influenced by various factors such as temperature and environment and this leads to complicated aspects like periodic variations, which are not sufficiently described by conventional models. A more reliable prediction model is thus proposed in this paper in order to be utilized particularly in describing the periodic variation behaviour satisfactorily. The proposed prediction model for satellite clocks adds cyclic terms to overcome the periodic effects and adopts delay coordinate embedding, which offers the possibility of accessing linear or nonlinear coupling characteristics like satellite behaviour. The simulation results have shown that the proposed prediction model outperforms the IGU-P solutions at least on a daily basis.
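Adding cyclic terms to a clock model amounts to augmenting the least-squares design matrix with sinusoids at the known periodic frequencies. The sketch below uses a synthetic clock-bias series with a 12-hour harmonic (illustrative, not IGS data) and compares a polynomial-only fit against one with cyclic terms.

```python
import numpy as np

# Synthetic clock bias: linear drift plus a 12-hour periodic term
# (the kind of harmonic the proposed model adds); units are arbitrary.
t = np.arange(0, 48, 0.25)          # 48 hours sampled every 15 minutes
period = 12.0
bias = 3.0 + 0.5 * t + 0.8 * np.sin(2 * np.pi * t / period)

# Design matrices: conventional polynomial vs polynomial + cyclic terms.
A_poly = np.column_stack([np.ones_like(t), t])
A_cyc = np.column_stack([A_poly,
                         np.sin(2 * np.pi * t / period),
                         np.cos(2 * np.pi * t / period)])

fit = lambda A: A @ np.linalg.lstsq(A, bias, rcond=None)[0]
rms_poly = float(np.sqrt(np.mean((bias - fit(A_poly)) ** 2)))
rms_cyc = float(np.sqrt(np.mean((bias - fit(A_cyc)) ** 2)))
```

The polynomial fit leaves the full periodic variation in its residuals, while the cyclic-term model absorbs it, which is the mechanism by which the proposed model improves on conventional clock prediction when the periodic behaviour is real.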

  15. Improved Prediction of Preterm Delivery Using Empirical Mode Decomposition Analysis of Uterine Electromyography Signals.

    Directory of Open Access Journals (Sweden)

    Peng Ren

    Full Text Available Preterm delivery increases the risk of infant mortality and morbidity, and therefore developing reliable methods for predicting its likelihood is of great importance. Previous work using uterine electromyography (EMG) recordings has shown that they may provide a promising and objective way of predicting the risk of preterm delivery. However, to date, attempts at utilizing computational approaches to achieve sufficient predictive confidence, in terms of area under the curve (AUC) values, have not achieved the high discrimination accuracy that a clinical application requires. In our study, we propose a new analytical approach for assessing the risk of preterm delivery using EMG recordings which first employs Empirical Mode Decomposition (EMD) to obtain their Intrinsic Mode Functions (IMFs). Next, the entropy values of both the instantaneous amplitude and instantaneous frequency of the first ten IMF components are computed in order to derive ratios of these two distinct components as features. The discrimination accuracy of this approach compared to those proposed previously was then calculated using six representative classifiers. Finally, three different electrode positions were analyzed for their accuracy in predicting preterm delivery, in order to establish which uterine EMG recording location yielded the optimal signal data. Overall, our results show a clear improvement in predicting preterm delivery risk compared with previous approaches, achieving an impressive maximum AUC value of 0.986 when using signals from an electrode positioned below the navel. In sum, this provides a promising new method for analyzing uterine EMG signals to permit accurate clinical assessment of preterm delivery risk.

  16. Improving performance of breast cancer risk prediction using a new CAD-based region segmentation scheme

    Science.gov (United States)

    Heidari, Morteza; Zargari Khuzani, Abolfazl; Danala, Gopichandh; Qiu, Yuchen; Zheng, Bin

    2018-02-01

    The objective of this study is to develop and test a new computer-aided detection (CAD) scheme with improved region of interest (ROI) segmentation combined with an image feature extraction framework to improve performance in predicting short-term breast cancer risk. A dataset involving 570 sets of "prior" negative mammography screening cases was retrospectively assembled. In the next sequential "current" screening, 285 cases were positive and 285 cases remained negative. A CAD scheme was applied to all 570 "prior" negative images to stratify cases into high- and low-risk groups for having cancer detected in the "current" screening. First, a new ROI segmentation algorithm was used to automatically remove the useless area of the mammograms. Second, from the matched bilateral craniocaudal view images, a set of 43 image features related to the frequency characteristics of the ROIs was initially computed from the discrete cosine transform and the spatial domain of the images. Third, a support vector machine (SVM) based machine learning classifier was used to classify the selected optimal image features and build a CAD-based risk prediction model. The classifier was trained using a leave-one-case-out cross-validation method. Applying this improved CAD scheme to the testing dataset yielded an area under the ROC curve of AUC = 0.70+/-0.04, significantly higher than extracting features directly from the dataset without the improved ROI segmentation step (AUC = 0.63+/-0.04). This study demonstrated that the proposed approach could improve accuracy in predicting short-term breast cancer risk, which may eventually play an important role in helping establish an optimal personalized breast cancer screening paradigm.
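    The leave-one-case-out evaluation loop can be sketched with scikit-learn as below; the kernel choice and the absence of the study's feature extraction make this a generic illustration, not the authors' pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import roc_auc_score

def loo_risk_scores(X, y):
    """Leave-one-case-out cross-validated SVM decision scores:
    each case is scored by a model trained on all other cases."""
    scores = np.empty(len(y), dtype=float)
    for train, test in LeaveOneOut().split(X):
        clf = SVC(kernel="rbf")  # kernel choice is an assumption
        clf.fit(X[train], y[train])
        scores[test] = clf.decision_function(X[test])
    return scores
```

    The cross-validated AUC is then simply `roc_auc_score(y, loo_risk_scores(X, y))`.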

  17. Factors predicting visual improvement post pars plana vitrectomy for proliferative diabetic retinopathy

    Directory of Open Access Journals (Sweden)

    Evelyn Tai Li Min

    2017-08-01

    Full Text Available AIM: To identify factors predicting visual improvement post vitrectomy for sequelae of proliferative diabetic retinopathy (PDR). METHODS: This was a retrospective analysis of pars plana vitrectomy indicated for sequelae of PDR from Jan. to Dec. 2014 in Hospital Sultanah Bahiyah, Alor Star, Kedah, Malaysia. Data collected included patient demographics, baseline visual acuity (VA) and post-operative logMAR best corrected VA at 1y. Data analysis was performed with IBM SPSS Statistics Version 22.0. RESULTS: A total of 103 patients were included. The mean age was 51.2y. On multivariable analysis, each pre-operative positive deviation of 1 logMAR from a baseline VA of 0 logMAR was associated with a post-operative improvement of 0.859 logMAR (P<0.001). Likewise, an attached macula pre-operatively was associated with a 0.374 (P=0.003) logMAR improvement post vitrectomy. Absence of iris neovascularisation and absence of post-operative complications were associated with post-vitrectomy improvements in logMAR of 1.126 (P=0.001) and 0.377 (P=0.005) respectively. Absence of long-acting intraocular tamponade was associated with a 0.302 (P=0.010) improvement of logMAR post vitrectomy. CONCLUSION: Factors associated with visual improvement after vitrectomy are poor pre-operative VA, an attached macula, absence of iris neovascularisation, absence of post-operative complications and abstaining from use of long-acting intraocular tamponade. A thorough understanding of the factors predicting visual improvement will facilitate decision-making in vitreoretinal surgery.

  18. Catchment coevolution: A useful framework for improving predictions of hydrological change?

    Science.gov (United States)

    Troch, Peter A.

    2017-04-01

    The notion that landscape features have co-evolved over time is well known in the Earth sciences. Hydrologists have recently called for a more rigorous connection between emerging spatial patterns of landscape features and the hydrological response of catchments, and have termed this concept catchment coevolution. In this presentation we present a general framework of catchment coevolution that could improve predictions of hydrologic change. We first present empirical evidence of the interaction and feedback between landscape evolution and changes in hydrological response. From this review it is clear that the independent drivers of catchment coevolution are climate, geology, and tectonics. We identify a common currency that allows comparing the levels of activity of these independent drivers, such that, at least conceptually, we can quantify the rate of evolution or aging. Knowing the hydrologic age of a catchment is by itself not very meaningful without linking age to hydrologic response. Two avenues of investigation have been used to understand the relationship between (differences in) age and hydrological response: (i) one that is based on relating present landscape features to runoff processes that are hypothesized to be responsible for the current fingerprints in the landscape; and (ii) one that takes advantage of an experimental design known as space-for-time substitution. Both methods have yielded significant insights into the hydrologic response of landscapes with different histories. If we want to make accurate predictions of hydrologic change, we will also need to be able to predict how the catchment will further coevolve in association with changes in the activity levels of the drivers (e.g., climate). There is ample evidence in the literature that suggests that whole-system prediction of catchment coevolution is, at least in principle, plausible. With this imperative we outline a research agenda that implements the concepts of catchment coevolution for building

  19. Improved feature selection based on genetic algorithms for real time disruption prediction on JET

    International Nuclear Information System (INIS)

    Rattá, G.A.; Vega, J.; Murari, A.

    2012-01-01

    Highlights: ► A new signal selection methodology to improve disruption prediction is reported. ► The approach is based on Genetic Algorithms. ► An advanced predictor has been created with the new set of signals. ► The new system obtains considerably higher prediction rates. - Abstract: The early prediction of disruptions is an important aspect of research in the field of Tokamak control. A very recent predictor, called “Advanced Predictor Of Disruptions” (APODIS), developed for the “Joint European Torus” (JET), implements real-time recognition of incoming disruptions with the best success rate ever achieved and outstanding stability over long periods following training. In this article, a new methodology to select the set of signal parameters that maximizes the performance of the predictor is reported. The approach is based on “Genetic Algorithms” (GAs). With the feature selection derived from the GAs, a new version of APODIS has been developed. The results are significantly better than those of the previous version, not only in terms of success rates but also in extending the interval before the disruption in which reliable predictions are achieved. Correct disruption predictions with a success rate in excess of 90% have been achieved 200 ms before the time of the disruption. The predictor response is compared with that of JET's Protection System (JPS), and the APODIS predictor is shown to be far superior. Both systems have been carefully tested on a large number of discharges to understand their relative merits and the most profitable directions for further improvement.
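    GA-based feature selection of the kind described can be sketched as a search over binary feature masks. The operators below (elitism, single-point crossover, bit-flip mutation) are generic textbook choices, not the APODIS implementation; `fitness(mask)` would in practice be the predictor's cross-validated success rate on the masked signal set.

```python
import numpy as np

def ga_select(fitness, n_features, pop_size=24, n_gen=40, p_mut=0.05, seed=0):
    """Minimal genetic algorithm over binary feature masks (illustrative)."""
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, 2, size=(pop_size, n_features))
    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        pop = pop[np.argsort(scores)[::-1]]   # sort best-first
        elite = pop[: pop_size // 2]          # keep the top half
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = elite[rng.integers(len(elite), size=2)]
            cut = rng.integers(1, n_features)           # single-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child[rng.random(n_features) < p_mut] ^= 1  # bit-flip mutation
            children.append(child)
        pop = np.vstack([elite, children])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(scores)]
```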

  1. Prediction and moderation of improvement in cognitive-behavioral and psychodynamic psychotherapy for panic disorder.

    Science.gov (United States)

    Chambless, Dianne L; Milrod, Barbara; Porter, Eliora; Gallop, Robert; McCarthy, Kevin S; Graf, Elizabeth; Rudden, Marie; Sharpless, Brian A; Barber, Jacques P

    2017-08-01

    To identify variables predicting psychotherapy outcome for panic disorder or indicating which of two very different forms of psychotherapy, panic-focused psychodynamic psychotherapy (PFPP) or cognitive-behavioral therapy (CBT), would be more effective for particular patients. Data were from 161 adults participating in a randomized controlled trial (RCT) including these psychotherapies. Patients included 104 women; 118 patients were White, 33 were Black, and 10 were of other races; 24 were Latino(a). Predictors/moderators measured at baseline or by Session 2 of treatment were used to predict change on the Panic Disorder Severity Scale (PDSS). Higher expectancy for treatment gains (Credibility/Expectancy Questionnaire d = -1.05, 95% CI [-1.50, -0.60]) and later age of onset (d = -0.65, 95% CI [-0.98, -0.32]) were predictive of greater change. Both variables were also significant moderators: patients with low expectancy of improvement improved significantly less in PFPP than their counterparts in CBT, whereas this was not the case for patients with average or high levels of expectancy. When patients had an onset of panic disorder later in life (≥27.5 years old), they fared as well in PFPP as in CBT. In contrast, at low and mean onset ages, CBT was the more effective treatment. The predictive variables suggest possibly fruitful foci for improving treatment outcome. In terms of moderation, CBT was the more consistently effective treatment, but moderators identified some patients who would do as well in PFPP as in CBT, thereby widening empirically supported options for treatment of this disorder.

  2. Using AIRS retrievals in the WRF-LETKF system to improve regional numerical weather prediction

    Directory of Open Access Journals (Sweden)

    Takemasa Miyoshi

    2012-09-01

    Full Text Available In addition to conventional observations, atmospheric temperature and humidity profile data from the Atmospheric Infrared Sounder (AIRS) Version 5 retrieval products are assimilated into the Weather Research and Forecasting (WRF) model, using the local ensemble transform Kalman filter (LETKF). Although a naive assimilation of all available quality-controlled AIRS retrieval data yields an inferior analysis, the additional enhancements of adaptive inflation and horizontal data thinning result in a general improvement of numerical weather prediction skill due to AIRS data. In particular, the adaptive inflation method is enhanced so that it no longer assumes temporal homogeneity of the observing network and allows for a better treatment of the temporally inhomogeneous AIRS data. Results indicate that the improvements due to AIRS data are more significant at longer forecast lead times. Forecasts of Typhoons Sinlaku and Jangmi in September 2008 show improvements due to AIRS data.
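    The adaptive inflation enhancement is specific to the LETKF implementation, but horizontal data thinning can be sketched very simply: keep at most one retrieval per grid cell. The cell size and the first-hit rule below are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

def thin_obs(lat, lon, box_deg=2.0):
    """Return indices keeping at most one observation per box_deg cell."""
    keep, seen = [], set()
    cells = zip(np.floor(lat / box_deg).astype(int),
                np.floor(lon / box_deg).astype(int))
    for i, cell in enumerate(cells):
        if cell not in seen:
            seen.add(cell)
            keep.append(i)
    return np.array(keep)
```

    Thinning like this reduces the spatially correlated observation errors that make dense satellite retrievals harmful to assimilate naively.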

  3. Improvement of PM10 prediction in East Asia using inverse modeling

    Science.gov (United States)

    Koo, Youn-Seo; Choi, Dae-Ryun; Kwon, Hi-Yong; Jang, Young-Kee; Han, Jin-Seok

    2015-04-01

    Aerosols from anthropogenic emissions in the industrialized regions of China, as well as dust emissions from southern Mongolia and northern China that are transported by the prevailing northwesterly winds, have a large influence on air quality in Korea. The emission inventory for the East Asia region is an important factor in chemical transport modeling (CTM) for PM10 (particulate matter less than 10 μm in aerodynamic diameter) forecasts and air quality management in Korea. Most previous studies showed that CTM predictions of PM10 mass concentration were underestimated when compared with observational data. In order to close the gap between observations and CTM predictions, an inverse Bayesian approach with the Comprehensive Air-quality Model with extensions (CAMx) as the forward model was applied to obtain optimized a posteriori PM10 emissions in East Asia. The predicted PM10 concentrations with a priori emissions were first compared with observations at monitoring sites in China and Korea for January and August 2008. The comparison showed that PM10 concentrations based on a priori PM10 emissions for anthropogenic and dust sources were generally under-predicted. The result from the inverse modeling indicated that anthropogenic PM10 emissions in the industrialized and urbanized areas of China were underestimated, while dust emissions from desert and barren soil in southern Mongolia and northern China were overestimated. A priori PM10 emissions from northeastern China regions including Shenyang, Changchun, and Harbin were underestimated by about 300% (i.e., the ratio of a posteriori to a priori PM10 emissions was a factor of about 3). The predictions of PM10 concentrations with a posteriori emissions showed better agreement with the observations, implying that the inverse modeling minimized the discrepancies in the model predictions by improving PM10 emissions in East Asia.
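    As a toy illustration of the inverse step (not the CAMx/Bayesian machinery of the study), suppose observed concentrations scale linearly with a single emission scaling factor; the maximum a posteriori (MAP) estimate then combines the model-data misfit with a prior on the factor:

```python
import numpy as np

def map_emission_scale(modeled, observed, sigma_obs, prior=1.0, sigma_prior=1.0):
    """MAP estimate of a scalar emission scaling factor beta under the
    (illustrative) linear forward model: observed ~ beta * modeled."""
    h = np.asarray(modeled, dtype=float)   # Jacobian of obs w.r.t. beta
    y = np.asarray(observed, dtype=float)
    precision = (h @ h) / sigma_obs**2 + 1.0 / sigma_prior**2
    rhs = (h @ y) / sigma_obs**2 + prior / sigma_prior**2
    return rhs / precision
```

    With observations about three times the a priori prediction, the estimate approaches 3, mirroring the roughly 300% underestimation reported for the northeastern China regions.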

  4. Improving prediction of heterodimeric protein complexes using combination with pairwise kernel.

    Science.gov (United States)

    Ruan, Peiying; Hayashida, Morihiro; Akutsu, Tatsuya; Vert, Jean-Philippe

    2018-02-19

    Since many proteins become functional only after they interact with their partner proteins and form protein complexes, it is essential to identify the sets of proteins that form complexes. Therefore, several computational methods have been proposed to predict complexes from the topology and structure of experimental protein-protein interaction (PPI) networks. These methods work well to predict complexes involving at least three proteins, but generally fail at identifying complexes involving only two different proteins, called heterodimeric complexes or heterodimers. There is, however, an urgent need for efficient methods to predict heterodimers, since the majority of known protein complexes are precisely heterodimers. In this paper, we use three promising kernel functions: the Min kernel and two pairwise kernels, the Metric Learning Pairwise Kernel (MLPK) and the Tensor Product Pairwise Kernel (TPPK). We also consider normalized forms of the Min kernel. We then combine the Min kernel (or its normalized form) with one of the pairwise kernels by plugging the former into the latter as the base kernel. We applied kernels based on PPI, domain, phylogenetic profile, and subcellular localization properties to predicting heterodimers. We then evaluate our method by employing C-Support Vector Classification (C-SVC), carrying out 10-fold cross-validation, and calculating the average F-measures. The results suggest that the combination of the normalized Min kernel and MLPK leads to the best F-measure and improves on the performance of our previous work, which had been the best existing method so far. We propose new methods to predict heterodimers using a machine learning-based approach. We train a support vector machine (SVM) to discriminate interacting vs non-interacting protein pairs, based on information extracted from PPI, domain, phylogenetic profiles and subcellular localization. We evaluate in detail new kernel functions to encode these data, and report prediction performance that outperforms the state of the art.
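    The kernel constructions are standard and small enough to sketch: the Min kernel on protein feature vectors, and the TPPK built from any base kernel (MLPK additionally requires a learned metric, so it is omitted here). This is a generic sketch, not the authors' code.

```python
import numpy as np

def min_kernel(x, y):
    """Min kernel on nonnegative feature vectors: sum of elementwise minima."""
    return float(np.minimum(x, y).sum())

def tppk(base_k, a, b, c, d):
    """Tensor Product Pairwise Kernel between protein pairs (a, b) and (c, d):
    K((a,b),(c,d)) = k(a,c)*k(b,d) + k(a,d)*k(b,c)."""
    return base_k(a, c) * base_k(b, d) + base_k(a, d) * base_k(b, c)
```

    By construction the TPPK is symmetric both under exchanging the two pairs and under reordering the proteins within a pair, which is exactly what a kernel on unordered protein pairs requires.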

  5. A national-scale model of linear features improves predictions of farmland biodiversity.

    Science.gov (United States)

    Sullivan, Martin J P; Pearce-Higgins, James W; Newson, Stuart E; Scholefield, Paul; Brereton, Tom; Oliver, Tom H

    2017-12-01

    Modelling species distribution and abundance is important for many conservation applications, but it is typically performed using relatively coarse-scale environmental variables such as the area of broad land-cover types. Fine-scale environmental data capturing the most biologically relevant variables have the potential to improve these models. For example, field studies have demonstrated the importance of linear features, such as hedgerows, for multiple taxa, but the absence of large-scale datasets of their extent prevents their inclusion in large-scale modelling studies. We assessed whether a novel spatial dataset mapping linear and woody-linear features across the UK improves the performance of abundance models of 18 bird and 24 butterfly species across 3723 and 1547 UK monitoring sites, respectively. Although improvements in explanatory power were small, the inclusion of linear features data significantly improved model predictive performance for many species. For some species, the importance of linear features depended on landscape context, with greater importance in agricultural areas. Synthesis and applications. This study demonstrates that a national-scale model of the extent and distribution of linear features improves predictions of farmland biodiversity. The ability to model spatial variability in the role of linear features such as hedgerows will be important in targeting agri-environment schemes to maximally deliver biodiversity benefits. Although this study focuses on farmland, data on the extent of different linear features are likely to improve species distribution and abundance models in a wide range of systems and can also potentially be used to assess habitat connectivity.

  6. Improving MJO Prediction and Simulation Using AGCM Coupled Ocean Model with Refined Vertical Resolution

    Science.gov (United States)

    Tu, Chia-Ying; Tseng, Wan-Ling; Kuo, Pei-Hsuan; Lan, Yung-Yao; Tsuang, Ben-Jei; Hsu, Huang-Hsiung

    2017-04-01

    Precipitation in the Taiwan area is significantly influenced by the MJO (Madden-Julian Oscillation) in boreal winter. This study therefore addresses MJO prediction and simulation with a unique model structure. The one-dimensional TKE (Turbulence Kinetic Energy) type ocean model SIT (Snow, Ice, Thermocline), with refined vertical resolution near the surface, is able to resolve the cool skin as well as the diurnal warm layer. SIT can simulate accurate SST and hence capture air-sea interaction precisely. By coupling SIT with ECHAM5 (MPI-Meteorology), CAM5 (NCAR) and HiRAM (GFDL), the MJO simulations in 20-yr climate integrations conducted with the three SIT-coupled AGCMs are significantly improved compared to those driven by prescribed SST. The horizontal resolutions of ECHAM5, CAM5 and HiRAM are 2-deg, 1-deg and 0.5-deg, respectively. This suggests that the improvement of MJO simulation from coupling SIT is independent of AGCM resolution. This study further utilizes HiRAM coupled to SIT to evaluate its MJO forecast skill. HiRAM has been recognized as one of the best models for seasonal forecasts of hurricane/typhoon activity (Zhao et al., 2009; Chen & Lin, 2011; 2013), but was not as successful in MJO forecasting. The preliminary result of the HiRAM-SIT experiment during the DYNAMO period shows improved success in MJO forecasting. These improvements of MJO prediction and simulation in both hindcast experiments and climate integrations come mainly from a better-simulated SST diurnal cycle and diurnal amplitude, contributed by the refined vertical resolution near the ocean surface in SIT. Keywords: MJO Predictability, DYNAMO

  7. TU-CD-BRB-01: Normal Lung CT Texture Features Improve Predictive Models for Radiation Pneumonitis

    International Nuclear Information System (INIS)

    Krafft, S; Briere, T; Court, L; Martel, M

    2015-01-01

    Purpose: Existing normal tissue complication probability (NTCP) models for radiation pneumonitis (RP) traditionally rely on dosimetric and clinical data but are limited in terms of performance and generalizability. Extraction of pre-treatment image features provides a potential new category of data that can improve NTCP models for RP. We consider quantitative measures of total lung CT intensity and texture in a framework for prediction of RP. Methods: Available clinical and dosimetric data were collected for 198 NSCLC patients treated with definitive radiotherapy. Intensity- and texture-based image features were extracted from the T50 phase of the 4D-CT acquired for treatment planning. A total of 3888 features (15 clinical, 175 dosimetric, and 3698 image features) were gathered and considered candidate predictors for modeling of RP grade ≥3. A baseline logistic regression model with mean lung dose (MLD) was first considered. Additionally, a least absolute shrinkage and selection operator (LASSO) logistic regression was applied to the set of clinical and dosimetric features, and subsequently to the full set of clinical, dosimetric, and image features. Model performance was assessed by comparing the area under the curve (AUC). Results: A simple logistic fit of MLD was an inadequate model of the data (AUC∼0.5). Including clinical and dosimetric parameters within the framework of the LASSO resulted in improved performance (AUC=0.648). Analysis of the full set of clinical, dosimetric, and image features provided a further, significant improvement in model performance (AUC=0.727). Conclusions: To achieve significant gains in predictive modeling of RP, new categories of data should be considered in addition to clinical and dosimetric features. We have successfully incorporated CT image features into a framework for modeling RP and have demonstrated improved predictive performance. Validation and further investigation of CT image features in the context of RP NTCP
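    The LASSO step can be sketched with scikit-learn's L1-penalised logistic regression, which performs feature selection and model fitting jointly. The penalty strength and preprocessing below are assumptions for illustration, not the study's setup, and the AUC returned here is the apparent (in-sample) value rather than a validated one.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import StandardScaler

def lasso_rp_model(X, y, C=0.1):
    """L1-penalised logistic regression: selects a sparse feature subset
    and fits an RP risk model. Returns (model, selected indices, AUC)."""
    Xs = StandardScaler().fit_transform(X)
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=C)
    clf.fit(Xs, y)
    selected = np.flatnonzero(clf.coef_[0])  # features with nonzero weight
    auc = roc_auc_score(y, clf.decision_function(Xs))
    return clf, selected, auc
```

    With thousands of candidate features and only 198 patients, the L1 penalty is what keeps the model from overfitting; in practice C would be tuned by cross-validation.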

  8. Applying a health action model to predict and improve healthy behaviors in coal miners.

    Science.gov (United States)

    Vahedian-Shahroodi, Mohammad; Tehrani, Hadi; Mohammadi, Faeze; Gholian-Aval, Mahdi; Peyman, Nooshin

    2018-05-01

    One of the most important ways to prevent work-related diseases in occupations such as mining is to promote healthy behaviors among miners. This study aimed to predict and promote healthy behaviors among coal miners using a health action model (HAM). The study was conducted on 200 coal miners in Iran in two steps. In the first step, a descriptive study was implemented to determine the predictive constructs and the effectiveness of HAM on behavioral intention. The second step involved a quasi-experimental study to determine the effect of an HAM-based education intervention. This intervention was implemented by the researcher and the head of the safety unit, based on the predictive constructs specified in the first step, over 12 sessions of 60 min each. The data were collected using an HAM questionnaire and a checklist of healthy behavior. The results of the first step showed that attitude, belief, and normative constructs were meaningful predictors of behavioral intention. The results of the second step revealed that the mean scores of attitude and behavioral intention increased significantly after the intervention in the experimental group, while the mean scores of these constructs decreased significantly in the control group. The findings of this study showed that an HAM-based educational intervention could improve the healthy behaviors of mine workers. Therefore, it is recommended to extend the application of this model to other working groups to improve healthy behaviors.

  9. Partition dataset according to amino acid type improves the prediction of deleterious non-synonymous SNPs

    International Nuclear Information System (INIS)

    Yang, Jing; Li, Yuan-Yuan; Li, Yi-Xue; Ye, Zhi-Qiang

    2012-01-01

    Highlights: ► Proper dataset partition can improve the prediction of deleterious nsSNPs. ► Partition according to original residue type at nsSNP is a good criterion. ► Similar strategy is supposed promising in other machine learning problems. -- Abstract: Many non-synonymous SNPs (nsSNPs) are associated with diseases, and numerous machine learning methods have been applied to train classifiers for sorting disease-associated nsSNPs from neutral ones. The continuously accumulated nsSNP data allows us to further explore better prediction approaches. In this work, we partitioned the training data into 20 subsets according to either original or substituted amino acid type at the nsSNP site. Using support vector machine (SVM), training classification models on each subset resulted in an overall accuracy of 76.3% or 74.9% depending on the two different partition criteria, while training on the whole dataset obtained an accuracy of only 72.6%. Moreover, the dataset was also randomly divided into 20 subsets, but the corresponding accuracy was only 73.2%. Our results demonstrated that partitioning the whole training dataset into subsets properly, i.e., according to the residue type at the nsSNP site, will improve the performance of the trained classifiers significantly, which should be valuable in developing better tools for predicting the disease-association of nsSNPs.
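    The partitioning strategy described above can be sketched as training one SVM per original amino-acid type at the nsSNP site. The record layout and kernel below are assumptions for illustration, not the study's exact setup.

```python
from collections import defaultdict
from sklearn.svm import SVC

def train_partitioned(records):
    """records: iterable of (orig_residue, feature_vector, label).
    Returns a dict mapping residue type -> SVM trained on that subset,
    so each classifier specialises in substitutions of one residue type."""
    groups = defaultdict(list)
    for aa, x, y in records:
        groups[aa].append((x, y))
    models = {}
    for aa, items in groups.items():
        X = [x for x, _ in items]
        labels = [y for _, y in items]
        if len(set(labels)) < 2:
            continue  # a subset needs both classes to train a classifier
        models[aa] = SVC(kernel="rbf").fit(X, labels)
    return models
```

    At prediction time, each nsSNP is routed to the model matching its original residue, which is the partition criterion the abstract reports as most effective.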

  10. Incorporating Scale-Dependent Fracture Stiffness for Improved Reservoir Performance Prediction

    Science.gov (United States)

    Crawford, B. R.; Tsenn, M. C.; Homburg, J. M.; Stehle, R. C.; Freysteinson, J. A.; Reese, W. C.

    2017-12-01

    We present a novel technique for predicting dynamic fracture network response to production-driven changes in effective stress, with the potential for optimizing depletion planning and improving recovery prediction in stress-sensitive naturally fractured reservoirs. A key component of the method involves laboratory geomechanics testing of single fractures in order to develop a unique scaling relationship between fracture normal stiffness and initial mechanical aperture. Details of the workflow are as follows: tensile, opening mode fractures are created in a variety of low matrix permeability rocks with initial, unstressed apertures in the micrometer to millimeter range, as determined from image analyses of X-ray CT scans; subsequent hydrostatic compression of these fractured samples with synchronous radial strain and flow measurement indicates that both mechanical and hydraulic aperture reduction varies linearly with the natural logarithm of effective normal stress; these stress-sensitive single-fracture laboratory observations are then upscaled to networks with fracture populations displaying frequency-length and length-aperture scaling laws commonly exhibited by natural fracture arrays; functional relationships between reservoir pressure reduction and fracture network porosity, compressibility and directional permeabilities as generated by such discrete fracture network modeling are then exported to the reservoir simulator for improved naturally fractured reservoir performance prediction.
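    The reported log-linear closure behaviour can be sketched as a two-parameter fit, aperture = a - b * ln(stress); the symbols and the stiffness relation in the comment are illustrative assumptions, not the authors' scaling relationship.

```python
import numpy as np

def fit_aperture_closure(stress, aperture):
    """Fit aperture = a - b * ln(stress) by least squares; returns (a, b).
    Under this form, fracture normal stiffness d(stress)/d(aperture)
    grows in proportion to stress: k_n = stress / b."""
    A = np.column_stack([np.ones_like(stress), np.log(stress)])
    coef, *_ = np.linalg.lstsq(A, aperture, rcond=None)
    return float(coef[0]), float(-coef[1])
```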

  11. Improving rational thermal comfort prediction by using subpopulation characteristics: A case study at Hermitage Amsterdam.

    Science.gov (United States)

    Kramer, Rick; Schellen, Lisje; Schellen, Henk; Kingma, Boris

    2017-01-01

    This study aims to improve the prediction accuracy of the rational standard thermal comfort model, known as the Predicted Mean Vote (PMV) model, by (1) calibrating one of its input variables, the metabolic rate, and (2) extending the model to explicitly incorporate the running mean outdoor temperature (RMOT), a variable related to adaptive thermal comfort. The analysis was performed with survey data (n = 1121) and climate measurements of the indoor and outdoor environment from a year-long case study undertaken at the Hermitage Amsterdam museum in the Netherlands. The PMVs were calculated for 35 survey days using (1) an a priori assumed metabolic rate, (2) a calibrated metabolic rate found by fitting the PMVs to the thermal sensation votes (TSVs) of each respondent using an optimization routine, and (3) the PMV model extended with the RMOT. The results show that the calibrated metabolic rate is estimated to be 1.5 Met for this case study, which was predominantly visited by elderly females. However, significant differences in metabolic rate were revealed between adults and elderly, showing the importance of differentiating between subpopulations. Hence, the standard tabular values, which only differentiate between various activities, may be oversimplified for many cases. Moreover, extending the PMV model with the RMOT substantially improves the thermal sensation prediction, although thermal sensation toward the extreme cool and warm ends remains partly underestimated.
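    The calibration step, choosing the metabolic rate so that predicted PMVs best match the observed TSVs, can be sketched as a bounded scalar optimisation. Here `pmv_fn` stands in for a full Fanger PMV implementation and is assumed to be supplied by the user; it is not part of this sketch.

```python
from scipy.optimize import minimize_scalar

def calibrate_met(pmv_fn, conditions, tsv, bounds=(0.8, 4.0)):
    """Return the metabolic rate (Met) minimising the squared error between
    predicted PMV and observed thermal sensation votes (TSV).
    pmv_fn(met, cond) is assumed to compute PMV for one survey record."""
    def sse(met):
        return sum((pmv_fn(met, c) - t) ** 2 for c, t in zip(conditions, tsv))
    return minimize_scalar(sse, bounds=bounds, method="bounded").x
```

    Running the same optimisation separately per subpopulation (e.g., adults vs elderly) would expose the metabolic-rate differences the study reports.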

  12. Improved prediction for the mass of the W boson in the NMSSM

    International Nuclear Information System (INIS)

    Staal, O.; Zeune, L.

    2015-10-01

    Electroweak precision observables, being highly sensitive to loop contributions of new physics, provide a powerful tool to test the theory and to discriminate between different models of the underlying physics. In that context, the W boson mass, M_W, plays a crucial role. The accuracy of the M_W measurement has been significantly improved over the last years, and further improvement of the experimental accuracy is expected from future LHC measurements. In order to fully exploit the precise experimental determination, an accurate theoretical prediction for M_W in the Standard Model (SM) and extensions of it is of central importance. We present the currently most accurate prediction for the W boson mass in the Next-to-Minimal Supersymmetric extension of the Standard Model (NMSSM), including the full one-loop result and all available higher-order corrections of SM and SUSY type. The evaluation of M_W is performed in a flexible framework, which facilitates the extension to other models beyond the SM. We show numerical results for the W boson mass in the NMSSM, focussing on phenomenologically interesting scenarios, in which the Higgs signal can be interpreted as the lightest or second lightest CP-even Higgs boson of the NMSSM. We find that, for both Higgs signal interpretations, the NMSSM M_W prediction is well compatible with the measurement. We study the SUSY contributions to M_W in detail and investigate in particular the genuine NMSSM effects from the Higgs and neutralino sectors.

  13. Sphinx: merging knowledge-based and ab initio approaches to improve protein loop prediction.

    Science.gov (United States)

    Marks, Claire; Nowak, Jaroslaw; Klostermann, Stefan; Georges, Guy; Dunbar, James; Shi, Jiye; Kelm, Sebastian; Deane, Charlotte M

    2017-05-01

    Loops are often vital for protein function; however, their irregular structures make them difficult to model accurately. Current loop modelling algorithms can mostly be divided into two categories: knowledge-based, where databases of fragments are searched to find suitable conformations, and ab initio, where conformations are generated computationally. Existing knowledge-based methods only use fragments that are the same length as the target, even though loops of slightly different lengths may adopt similar conformations. Here, we present a novel method, Sphinx, which combines ab initio techniques with the potential extra structural information contained within loops of a different length to improve structure prediction. We show that Sphinx is able to generate high-accuracy predictions and decoy sets enriched with near-native loop conformations, performing better than the ab initio algorithm on which it is based. In addition, it is able to provide predictions for every target, unlike some knowledge-based methods. Sphinx can be used successfully for the difficult problem of antibody H3 prediction, outperforming RosettaAntibody, one of the leading H3-specific ab initio methods, both in accuracy and speed. Sphinx is available at http://opig.stats.ox.ac.uk/webapps/sphinx. Contact: deane@stats.ox.ac.uk. Supplementary data are available at Bioinformatics online.

  14. Improved Performance and Safety for High Energy Batteries Through Use of Hazard Anticipation and Capacity Prediction

    Science.gov (United States)

    Atwater, Terrill

    1993-01-01

    Predicting the capacity remaining in used high-rate, high-energy batteries provides important information to the user. Knowledge of the capacity remaining in used batteries results in better utilization, which translates into improved readiness and cost savings through complete, efficient use. High-rate batteries, due to their chemical nature, are highly sensitive to misuse (i.e., over-discharge or very high-rate discharge). Battery failure due to misuse or manufacturing defects could be disastrous. Since high-rate, high-energy batteries are expensive and energetic, a reliable method of predicting both failures and remaining energy has been actively sought. Due to concerns over safety, the behavior of lithium/sulphur dioxide cells at different temperatures and current drains was examined. The main thrust of this effort was to determine failure conditions for incorporation in hazard anticipation circuitry. In addition, capacity prediction formulas have been developed from test data. A process that performs continuous, real-time hazard anticipation and capacity prediction was developed. The introduction of this process into microchip technology will enable the production of reliable, safe, and efficient high-energy batteries.

  15. Low-Quality Structural and Interaction Data Improves Binding Affinity Prediction via Random Forest.

    Science.gov (United States)

    Li, Hongjian; Leung, Kwong-Sak; Wong, Man-Hon; Ballester, Pedro J

    2015-06-12

    Docking scoring functions can be used to predict the strength of protein-ligand binding. It is widely believed that training a scoring function with low-quality data is detrimental to its predictive performance. Nevertheless, there is a surprising lack of systematic validation experiments in support of this hypothesis. In this study, we investigated to what extent training a scoring function with low-quality structural and binding data is detrimental to predictive performance. We actually found that low-quality data is not only non-detrimental, but beneficial for the predictive performance of machine-learning scoring functions, although the improvement is smaller than that coming from high-quality data. Furthermore, we observed that classical scoring functions are not able to effectively exploit data beyond an early threshold, regardless of its quality. This demonstrates that exploiting a larger data volume is more important for the performance of machine-learning scoring functions than restricting to a smaller set of higher data quality.
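The experimental design can be mimicked on synthetic data: train a random forest on a small "high-quality" set alone, then on the same set augmented with many noisier "low-quality" examples, and compare held-out error. Everything below is a toy setup under invented assumptions, not the paper's benchmark.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

def make_data(n, noise):
    # Toy "binding affinity": a linear signal in 5 features plus noise.
    X = rng.uniform(-1, 1, (n, 5))
    y = 2 * X[:, 0] - X[:, 1] + rng.normal(0, noise, n)
    return X, y

X_hi, y_hi = make_data(100, 0.1)    # small high-quality training set
X_lo, y_lo = make_data(1000, 0.8)   # large low-quality set: same signal, noisier
X_te, y_te = make_data(500, 0.0)    # noise-free targets for evaluation

def rmse(model, X, y):
    return float(np.sqrt(np.mean((model.predict(X) - y) ** 2)))

rf_hi = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_hi, y_hi)
rf_all = RandomForestRegressor(n_estimators=200, random_state=0).fit(
    np.vstack([X_hi, X_lo]), np.concatenate([y_hi, y_lo]))

err_hi, err_all = rmse(rf_hi, X_te, y_te), rmse(rf_all, X_te, y_te)
```

In this toy the extra noisy examples usually reduce test error because the trees average the noise away, echoing the paper's finding, but the effect depends on how much signal the low-quality data retains.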

  16. Improved Fuzzy Modelling to Predict the Academic Performance of Distance Education Students

    Directory of Open Access Journals (Sweden)

    Osman Yildiz

    2013-12-01

    It is essential to predict distance education students’ year-end academic performance early during the course of the semester and to take precautions based on such predictions. This will, in particular, help enhance their academic performance and, therefore, improve the overall educational quality. The present study developed a mathematical model intended to predict distance education students’ year-end academic performance using the first eight weeks of data from the learning management system. First, two fuzzy models were constructed, namely the classical fuzzy model and the expert fuzzy model, the latter being based on expert opinion. Afterwards, a gene-fuzzy model was developed by optimizing membership functions through a genetic algorithm. The data on distance education were collected through Moodle, an open source learning management system. The data covered a total of 218 students who enrolled in Basic Computer Sciences in 2012. The input data consisted of the following variables: when a student logged on to the system for the last time after the content of a lesson was uploaded, how often he/she logged on to the system, how long he/she stayed online in the last login, what score he/she got in the quiz taken in Week 4, and what score he/she got in the midterm exam taken in Week 8. A comparison was made among the predictions of the three models concerning the students’ year-end academic performance.
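The building blocks of such fuzzy models can be sketched briefly: triangular membership functions over an input (a quiz score here) and a weighted-average defuzzification. The breakpoints and rule outputs below are invented for illustration; the study's membership functions were tuned by a genetic algorithm.

```python
def tri(x, a, b, c):
    """Triangular membership: 0 at a and c, 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def predict_performance(quiz_score):
    # Fuzzy sets "low", "medium", "high" over a 0-100 quiz score.
    mu = {
        "low": tri(quiz_score, -1, 0, 50),
        "medium": tri(quiz_score, 25, 50, 75),
        "high": tri(quiz_score, 50, 100, 101),
    }
    # Hypothetical rule consequents: year-end grade associated with each set.
    out = {"low": 35.0, "medium": 60.0, "high": 85.0}
    num = sum(mu[k] * out[k] for k in mu)
    den = sum(mu.values())
    return num / den if den else 0.0
```

For example, `predict_performance(50)` falls entirely in the "medium" set and returns its consequent, 60. A genetic algorithm would adjust the (a, b, c) breakpoints to minimize prediction error on training data.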

  17. Modeled long-term changes of DIN:DIP ratio in the Changjiang River in relation to Chl-α and DO concentrations in adjacent estuary

    Science.gov (United States)

    Wang, Jianing; Yan, Weijin; Chen, Nengwang; Li, Xinyan; Liu, Lusan

    2015-12-01

    In the past four decades (1970-2013), nitrogen and phosphorous inputs to the Changjiang River basin, mainly from human activities, have increased 3-fold and 306-fold, respectively. The riverine nutrient fluxes to the estuary have also grown exponentially. Dissolved inorganic nitrogen (DIN) and dissolved inorganic phosphorous (DIP) fluxes of the Changjiang River increased by 338% and 574% during the 1970-2013 period, and red tides and benthic hypoxia have been observed in the outflow region of the Changjiang River in the East China Sea (ECS). We assumed that time-series changes in the DIN:DIP ratio of the Changjiang River could have a significant impact on chlorophyll-α (Chl-α) concentration in the surface sea water and dissolved oxygen (DO) concentration in the bottom sea water of the Changjiang estuary. Our study showed that the DIN:DIP ratio of the Changjiang River increased from 76 to 384 between 1970 and 1985, and decreased from 255 to 149 between 1986 and 2013. The observed Chl-α concentration increased by 146% from 1992 to 2010 in the Changjiang estuary, and was negatively related to the DIN:DIP ratio over 1992-2010. Bottom sea water DO concentration decreased by 24.6% during 1992-2010, and a "low oxygen zone" (122°∼123°E, 32°∼33°N) has been observed during summer since 1999. The anthropogenically enhanced nutrient inputs dominated river DIN and DIP fluxes and influenced Chl-α concentrations as well as bottom DO concentrations in the estuary. Scenarios emphasizing global collaboration and proactive environmental problem-solving may result in reductions in river nutrient exports and in Chl-α and DO concentrations in the Changjiang estuary by 2050.

  18. The chlorophyll-deficient golden leaf mutation in cucumber is due to a single nucleotide substitution in CsChlI for magnesium chelatase I subunit.

    Science.gov (United States)

    Gao, Meiling; Hu, Liangliang; Li, Yuhong; Weng, Yiqun

    2016-10-01

    The cucumber chlorophyll-deficient golden leaf mutation is due to a single nucleotide substitution in the CsChlI gene for the magnesium chelatase I subunit, which plays important roles in the chlorophyll biosynthesis pathway. Mg-chelatase, a protein complex encompassing the three subunits CHLI, CHLD, and CHLH, catalyzes the insertion of Mg(2+) into protoporphyrin IX in the chlorophyll biosynthesis pathway. Chlorophyll-deficient mutations in genes encoding the three subunits have played important roles in understanding the structure, function and regulation of this important enzyme. In an EMS mutagenesis population, we identified a chlorophyll-deficient mutant, C528, with golden leaf color throughout its development, which was viable and able to set fruits and seeds. Segregation analysis in multiple populations indicated that this leaf color mutation was recessively inherited and that green color showed complete dominance over golden color. Map-based cloning identified CsChlI as the candidate gene for this mutation; it encodes the CHLI subunit of cucumber Mg-chelatase. The 1757-bp CsChlI gene has three exons, and a single nucleotide change (G to A) in its third exon resulted in an amino acid substitution (G269R) and the golden leaf color in C528. This mutation occurred in the highly conserved nucleotide-binding domain of the CHLI protein, in which chlorophyll-deficient mutations have been frequently identified. The mutant phenotype, CsChlI expression pattern and the mutated residue in the CHLI protein suggest the mutant allele in C528 is unique among mutations identified so far in different species. This golden leaf mutant not only has potential in cucumber breeding, but also provides a useful tool for understanding CHLI function and its regulation in the chlorophyll biosynthesis pathway as well as chloroplast development.

  19. Prediction of multi-wake problems using an improved Jensen wake model

    DEFF Research Database (Denmark)

    Tian, Linlin; Zhu, Wei Jun; Shen, Wen Zhong

    2017-01-01

    The improved analytical wake model named the 2D_k Jensen model (proposed to overcome some shortcomings of the classical Jensen wake model) is applied and validated in this work for wind turbine multi-wake predictions. Different from the original Jensen model, this newly developed 2D_k Jensen...... model uses a cosine shape instead of the top-hat shape for the velocity deficit in the wake, and treats the wake decay rate as a variable related to the ambient turbulence as well as the rotor-generated turbulence. Coupled with four different multi-wake combination models, the 2D_k Jensen model...... is assessed through (1) simulating the interaction of two wakes under full-wake and partial-wake conditions and (2) predicting the power production in the Horns Rev wind farm for different wake sectors around two different wind directions. Through comparisons with field measurements, results from Large Eddy...
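For reference, a sketch of the classical Jensen deficit with a linearly expanding wake, plus an optional cosine radial weighting in the spirit of the 2D_k variant. The cosine form and all parameter values here are illustrative assumptions; the paper's exact 2D_k expressions (including its turbulence-dependent decay rate) are not reproduced in the abstract.

```python
import numpy as np

def jensen_deficit(x, r, D=80.0, Ct=0.8, k=0.075, shape="tophat"):
    """Fractional velocity deficit at downstream distance x and radial offset r."""
    r0 = D / 2.0
    rw = r0 + k * x                          # linearly expanding wake radius
    centreline = (1 - np.sqrt(1 - Ct)) * (r0 / rw) ** 2
    inside = np.abs(r) <= rw
    if shape == "tophat":                    # classical Jensen: uniform deficit
        return np.where(inside, centreline, 0.0)
    # Cosine shape: deficit peaks on the centreline and falls to zero at the
    # wake edge (illustrative stand-in for the 2D_k radial profile).
    w = 0.5 * (1 + np.cos(np.pi * np.clip(np.abs(r) / rw, 0.0, 1.0)))
    return np.where(inside, centreline * w, 0.0)
```

Multi-wake combination models then aggregate such per-turbine deficits at each rotor, e.g. by linear or sum-of-squares superposition.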

  20. Research on the Wire Network Signal Prediction Based on the Improved NNARX Model

    Science.gov (United States)

    Zhang, Zipeng; Fan, Tao; Wang, Shuqing

    It is difficult to accurately obtain the wire network signal of a power system's high-voltage transmission lines during monitoring and repair. To address this, signals measured at a remote substation or in a laboratory are used for multi-point prediction to obtain the needed data; however, the obtained power-grid frequency signal is delayed. To solve this problem, this paper describes an improved NNARX network that predicts the frequency signal from multi-point data collected by a remote-substation PMU. Because the error surface of the NNARX network is complicated, the Levenberg-Marquardt (L-M) algorithm is used to train the network. Simulation results show that the NNARX network has good prediction performance, providing accurate real-time data for field testing and maintenance.
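The "ARX" core of an NNARX model is a regression of the current output on lagged outputs and inputs; the neural network replaces the linear map. A hedged baseline sketch with a linear ARX(1,1) fitted by least squares (the paper's L-M-trained network is not reproduced here):

```python
import numpy as np

def fit_arx(y, u):
    """Estimate (a, b) in y[t] = a*y[t-1] + b*u[t-1] by least squares."""
    Phi = np.column_stack([y[:-1], u[:-1]])      # lagged regressors
    theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
    return theta

def predict_arx(theta, y_prev, u_prev):
    return theta[0] * y_prev + theta[1] * u_prev

# Synthetic "frequency" signal driven by a known ARX process
rng = np.random.default_rng(0)
u = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.9 * y[t - 1] + 0.5 * u[t - 1] + 0.01 * rng.normal()

theta = fit_arx(y, u)        # recovers roughly (0.9, 0.5)
```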

  1. Risk prediction is improved by adding markers of subclinical organ damage to SCORE

    DEFF Research Database (Denmark)

    Sehestedt, Thomas; Jeppesen, Jørgen; Hansen, Tine W

    2010-01-01

    cardiovascular, anti-diabetic, or lipid-lowering treatment, aged 41, 51, 61, or 71 years, we measured traditional cardiovascular risk factors, left ventricular (LV) mass index, atherosclerotic plaques in the carotid arteries, carotid/femoral pulse wave velocity (PWV), and urine albumin/creatinine ratio (UACR......) and followed them for a median of 12.8 years. Eighty-one subjects died because of cardiovascular causes. Risk of cardiovascular death was independently of SCORE associated with LV hypertrophy [hazard ratio (HR) 2.2 (95% CI 1.2-4.0)], plaques [HR 2.5 (1.6-4.0)], UACR > or = 90th percentile [HR 3.3 (1.......07). CONCLUSION: Subclinical organ damage predicted cardiovascular death independently of SCORE and the combination may improve risk prediction....

  2. Merging economics and epidemiology to improve the prediction and management of infectious disease.

    Science.gov (United States)

    Perrings, Charles; Castillo-Chavez, Carlos; Chowell, Gerardo; Daszak, Peter; Fenichel, Eli P; Finnoff, David; Horan, Richard D; Kilpatrick, A Marm; Kinzig, Ann P; Kuminoff, Nicolai V; Levin, Simon; Morin, Benjamin; Smith, Katherine F; Springborn, Michael

    2014-12-01

    Mathematical epidemiology, one of the oldest and richest areas in mathematical biology, has significantly enhanced our understanding of how pathogens emerge, evolve, and spread. Classical epidemiological models, the standard for predicting and managing the spread of infectious disease, assume that contacts between susceptible and infectious individuals depend on their relative frequency in the population. The behavioral factors that underpin contact rates are not generally addressed. There is, however, an emerging class of models that addresses the feedbacks between infectious disease dynamics and the behavioral decisions driving host contact. Referred to as "economic epidemiology" or "epidemiological economics," the approach explores the determinants of decisions about the number and type of contacts made by individuals, using insights and methods from economics. We show how the approach has the potential both to improve predictions of the course of infectious disease and to support the development of novel approaches to infectious disease management.

  3. Considering Organic Carbon for Improved Predictions of Clay Content from Water Vapor Sorption

    DEFF Research Database (Denmark)

    Arthur, Emmanuel; Tuller, Markus; Moldrup, Per

    2014-01-01

    Accurate determination of the soil clay fraction (CF) is of crucial importance for characterization of numerous environmental, agricultural, and engineering processes. Because traditional methods for measurement of the CF are laborious and susceptible to errors, regression models relating the CF...... to water vapor sorption isotherms that can be rapidly measured with a fully automated vapor sorption analyzer are a viable alternative. In this presentation we evaluate the performance of recently developed regression models based on comparison with standard CF measurements for soils with high organic...... carbon (OC) content and propose a modification to improve prediction accuracy. Evaluation of the CF prediction accuracy for 29 soils with clay contents ranging from 6 to 25% and with OC contents from 2.0 to 8.4% showed that the models worked reasonably well for all soils when the OC content was below 2...

  4. Variability in Cadence During Forced Cycling Predicts Motor Improvement in Individuals With Parkinson’s Disease

    Science.gov (United States)

    Ridgel, Angela L.; Abdar, Hassan Mohammadi; Alberts, Jay L.; Discenzo, Fred M.; Loparo, Kenneth A.

    2014-01-01

    Variability in severity and progression of Parkinson’s disease symptoms makes it challenging to design therapy interventions that provide maximal benefit. Previous studies showed that forced cycling, at greater pedaling rates, results in greater improvements in motor function than voluntary cycling. The precise mechanism for differences in function following exercise is unknown. We examined the complexity of biomechanical and physiological features of forced and voluntary cycling and correlated these features to improvements in motor function as measured by the Unified Parkinson’s Disease Rating Scale (UPDRS). Heart rate, cadence, and power were analyzed using entropy signal processing techniques. Pattern variability in heart rate and power were greater in the voluntary group when compared to forced group. In contrast, variability in cadence was higher during forced cycling. UPDRS Motor III scores predicted from the pattern variability data were highly correlated to measured scores in the forced group. This study shows how time series analysis methods of biomechanical and physiological parameters of exercise can be used to predict improvements in motor function. This knowledge will be important in the development of optimal exercise-based rehabilitation programs for Parkinson’s disease. PMID:23144045
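The abstract does not name the specific entropy measure used; sample entropy (SampEn) is a common choice for quantifying pattern variability in heart-rate, cadence, and power series, and is sketched here as a representative technique.

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """SampEn(m, r) = -ln(A/B), where B counts template matches of length m
    and A those of length m+1, with Chebyshev tolerance r."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()

    def matches(mm):
        t = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        c = 0
        for i in range(len(t) - 1):
            d = np.max(np.abs(t[i + 1:] - t[i]), axis=1)
            c += int(np.sum(d <= r))
        return c

    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 20 * np.pi, 500))   # highly regular "cadence"
noisy = rng.normal(size=500)                        # irregular series
```

Lower SampEn means a more regular signal; the sinusoidal series scores well below the white-noise one.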

  5. Improving protein disorder prediction by deep bidirectional long short-term memory recurrent neural networks.

    Science.gov (United States)

    Hanson, Jack; Yang, Yuedong; Paliwal, Kuldip; Zhou, Yaoqi

    2017-03-01

    Capturing long-range interactions between structural but not sequence neighbors of proteins is a long-standing challenging problem in bioinformatics. Recently, long short-term memory (LSTM) networks have significantly improved the accuracy of speech and image classification problems by remembering useful past information in long sequential events. Here, we have implemented deep bidirectional LSTM recurrent neural networks for the problem of protein intrinsic disorder prediction. The new method, named SPOT-Disorder, consistently improved over a similar method using a traditional, window-based neural network (SPINE-D) in all datasets tested, without separate training on short and long disordered regions. Independent tests on four other datasets, including the datasets from critical assessment of structure prediction (CASP) techniques and >10 000 annotated proteins from MobiDB, confirmed SPOT-Disorder as one of the best methods in disorder prediction. Moreover, initial studies indicate that the method is more accurate in predicting functional sites in disordered regions. These results highlight the usefulness of combining LSTM with deep bidirectional recurrent neural networks in capturing non-local, long-range interactions for bioinformatics applications. SPOT-Disorder is available as a web server and as a standalone program at http://sparks-lab.org/server/SPOT-disorder/index.php. Contact: j.hanson@griffith.edu.au, yuedong.yang@griffith.edu.au or yaoqi.zhou@griffith.edu.au. Supplementary data are available at Bioinformatics online.

  6. Predictive modelling of interventions to improve iodine intake in New Zealand.

    Science.gov (United States)

    Schiess, Sonja; Cressey, Peter J; Thomson, Barbara M

    2012-10-01

    The potential effects of four interventions to improve iodine intakes of six New Zealand population groups are assessed. A model was developed to estimate iodine intake when (i) bread is manufactured with or without iodized salt, (ii) recommended foods are consumed to augment iodine intake, (iii) iodine supplementation as recommended for pregnant women is taken and (iv) the level of iodization for use in bread manufacture is doubled from 25-65 mg to 100 mg iodine/kg salt. New Zealanders have low and decreasing iodine intakes and low iodine status. Predictive modelling is a useful tool to assess the likely impact, and potential risk, of nutrition interventions. Food consumption information was sourced from 24 h diet recall records for 4576 New Zealanders aged over 5 years. Most consumers (73-100%) are predicted to achieve an adequate iodine intake when salt iodized at 25-65 mg iodine/kg salt is used in bread manufacture, except pregnant females, of whom only 37% are likely to meet the estimated average requirement. Current dietary advice to achieve estimated average requirements is challenging for some consumers. Pregnant women are predicted to achieve adequate but not excessive iodine intakes when 150 μg of supplemental iodine is taken daily, assuming iodized salt in bread. The manufacture of bread with iodized salt and supplemental iodine for pregnant women are predicted to be effective interventions to lift iodine intakes in New Zealand. Current estimations of iodine intake will be improved with information on discretionary salt and supplemental iodine usage.

  7. Improved predictability of droughts over southern Africa using the standardized precipitation evapotranspiration index and ENSO

    Science.gov (United States)

    Manatsa, Desmond; Mushore, Terrence; Lenouo, Andre

    2017-01-01

    The provision of timely and reliable climate information on which to base management decisions remains a critical component of drought planning for southern Africa. In this observational study, we have proposed a forecasting scheme that not only caters for timeliness and reliability but also improves the relevance of the climate information by using a novel drought index, the standardised precipitation evapotranspiration index (SPEI), instead of the traditional precipitation-only index, the standardised precipitation index (SPI). The SPEI, which includes temperature and other climatic factors in its construction, has a more robust connection to ENSO than the SPI. Consequently, the developed ENSO-SPEI prediction scheme can provide quantitative information about the spatial extent and severity of predicted drought conditions in a way that reflects more closely the level of risk in the global warming context of the subregion. However, it is established that the significant regional impact of ENSO is restricted to the period December-March, implying a revisit of the traditional ENSO-based forecast scheme, which essentially divides the rainfall season into the two periods October to December and January to March. Although the prediction of ENSO events has improved with the refinement of numerical models, this work has demonstrated that the prediction of drought impacts related to ENSO is also possible based only on observations. A large temporal lag is observed between the development of ENSO phenomena (typically in May of the previous year) and the identification of regional SPEI-defined drought conditions. It has been shown that using the Southern Africa Regional Climate Outlook Forum's (SARCOF) traditional 3-month averaged Nino 3.4 SST index (June to August) as a predictor has no added advantage over using only the May SST index values. In this regard, the extended lead time and improved skill demonstrated in this study could immensely benefit
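The standardization behind SPI/SPEI maps a water-balance series (precipitation for SPI; precipitation minus potential evapotranspiration for SPEI) onto standard-normal quantiles, so that values below about -1 flag drought. Below is a sketch using an empirical CDF; operational SPEI instead fits a parametric (log-logistic) distribution, and the climatology here is synthetic.

```python
import numpy as np
from scipy.stats import norm, rankdata

def standardized_index(series):
    """Empirical-CDF version of a standardized drought index."""
    series = np.asarray(series, dtype=float)
    # Plotting-position probabilities in (0, 1), then normal quantiles.
    p = rankdata(series) / (len(series) + 1)
    return norm.ppf(p)

rng = np.random.default_rng(0)
balance = rng.gamma(shape=2.0, scale=30.0, size=480) - 40.0  # toy P - PET, monthly
spei_like = standardized_index(balance)   # roughly zero-mean, unit-variance
```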

  8. Effects of the combinations of caffeine with 137Cs-gamma rays or tritiated water on the proliferation and malignant transformation CHL-1 cells

    International Nuclear Information System (INIS)

    Zou Shuai; Wang Shoufang

    1989-01-01

    The effects of the combinations of caffeine with 137Cs-gamma rays or tritiated water on proliferation and malignant transformation in vitro in CHL-1 cells were observed. At caffeine concentrations from 1 mmol/L to 2 mmol/L, 137Cs-gamma-ray doses from 0.837 Gy to 2.51 Gy, and tritium beta-radiation doses from 0.837 Gy to 0.528 Gy, proliferation of CHL-1 cells was found to be inhibited when cells were exposed to caffeine, gamma and beta radiation, respectively, as well as when they were exposed to various combinations of caffeine with either radiation. The degree of inhibition of cell proliferation depended on the concentration of caffeine and on the doses of radiation. In the transformation experiments, cell malignant transformation rates for all treated groups were higher than that for the control group, and the rates for irradiated plus caffeine-treated groups were higher than those for the corresponding single-agent-treated ones. After subcutaneous injection of transformed cells into irradiated mice, tumours about 2 mm(3) in size were found in some animals, and the tumour cells were histopathologically identical to the in-vitro-transformed CHL-1 cells.

  9. Prostate Health Index improves multivariable risk prediction of aggressive prostate cancer.

    Science.gov (United States)

    Loeb, Stacy; Shin, Sanghyuk S; Broyles, Dennis L; Wei, John T; Sanda, Martin; Klee, George; Partin, Alan W; Sokoll, Lori; Chan, Daniel W; Bangma, Chris H; van Schaik, Ron H N; Slawin, Kevin M; Marks, Leonard S; Catalona, William J

    2017-07-01

    To examine the use of the Prostate Health Index (PHI) as a continuous variable in multivariable risk assessment for aggressive prostate cancer in a large multicentre US study. The study population included 728 men, with prostate-specific antigen (PSA) levels of 2-10 ng/mL and a negative digital rectal examination, enrolled in a prospective, multi-site early detection trial. The primary endpoint was aggressive prostate cancer, defined as biopsy Gleason score ≥7. First, we evaluated whether the addition of PHI improves the performance of currently available risk calculators (the Prostate Cancer Prevention Trial [PCPT] and European Randomised Study of Screening for Prostate Cancer [ERSPC] risk calculators). We also designed and internally validated a new PHI-based multivariable predictive model, and created a nomogram. Of 728 men undergoing biopsy, 118 (16.2%) had aggressive prostate cancer. The PHI predicted the risk of aggressive prostate cancer across the spectrum of values. Adding PHI significantly improved the predictive accuracy of the PCPT and ERSPC risk calculators for aggressive disease. A new model was created using age, previous biopsy, prostate volume, PSA and PHI, with an area under the curve of 0.746. The bootstrap-corrected model showed good calibration with observed risk for aggressive prostate cancer and had net benefit on decision-curve analysis. Using PHI as part of multivariable risk assessment leads to a significant improvement in the detection of aggressive prostate cancer, potentially reducing harms from unnecessary prostate biopsy and overdiagnosis.
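The modelling step can be sketched as a logistic model over clinical covariates with and without the biomarker, compared by AUC. The data, coefficients, and split-half validation below are all synthetic stand-ins for the study's cohort and its bootstrap-corrected internal validation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
age = rng.normal(65, 8, n)
psa = rng.lognormal(1.2, 0.4, n)
phi = rng.normal(40, 12, n)
# Synthetic outcome: aggressive cancer more likely with higher PHI and PSA.
logit = -6.0 + 0.03 * age + 0.15 * psa + 0.06 * phi
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_base = np.column_stack([age, psa])        # without the biomarker
X_phi = np.column_stack([age, psa, phi])    # with PHI added

def cv_auc(X, y):
    # Simple split-half evaluation, a stand-in for internal validation.
    half = len(y) // 2
    model = LogisticRegression(max_iter=1000).fit(X[:half], y[:half])
    return roc_auc_score(y[half:], model.predict_proba(X[half:])[:, 1])

auc_base, auc_phi = cv_auc(X_base, y), cv_auc(X_phi, y)
```

On this synthetic cohort the PHI-augmented model attains the higher AUC, echoing the reported improvement over the base calculators.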

  10. Has growth mixture modeling improved our understanding of how early change predicts psychotherapy outcome?

    Science.gov (United States)

    Koffmann, Andrew

    2017-03-02

    Early change in psychotherapy predicts outcome. Seven studies have used growth mixture modeling [GMM; Muthén, B. (2001). Second-generation structural equation modeling with a combination of categorical and continuous latent variables: New opportunities for latent class-latent growth modeling. In L. M. Collins & A. G. Sawyers (Eds.), New methods for the analysis of change (pp. 291-322). Washington, DC: American Psychological Association] to identify patient classes based on early change but have yielded conflicting results. Here, we review the earlier studies and apply GMM to a new data set. In a university-based training clinic, 251 patients were administered the Outcome Questionnaire-45 [Lambert, M. J., Hansen, N. B., Umphress, V., Lunnen, K., Okiishi, J., Burlingame, G., … Reisinger, C. W. (1996). Administration and scoring manual for the Outcome Questionnaire (OQ 45.2). Wilmington, DE: American Professional Credentialing Services] at each psychotherapy session. We used GMM to identify class structure based on change in the first six sessions and examined trajectories as predictors of outcome. The sample was best described as a single class. There was no evidence of autoregressive trends in the data. We achieved better fit to the data by permitting latent variables some degree of kurtosis, rather than to assume multivariate normality. Treatment outcome was predicted by the amount of early improvement, regardless of initial level of distress. The presence of sudden early gains or losses did not further improve outcome prediction. Early improvement is an easily computed, powerful predictor of psychotherapy outcome. The use of GMM to investigate the relationship between change and outcome is technically complex and computationally intensive. To date, it has not been particularly informative.

  11. Better estimation of protein-DNA interaction parameters improve prediction of functional sites

    Directory of Open Access Journals (Sweden)

    O'Flanagan Ruadhan A

    2008-12-01

    Abstract Background Characterizing transcription factor binding motifs is a common bioinformatics task. For transcription factors with variable binding sites, we need many suboptimal binding sites in our training dataset to get accurate estimates of free energy penalties for deviating from the consensus DNA sequence. One procedure to do that involves a modified SELEX (Systematic Evolution of Ligands by Exponential Enrichment) method designed to produce many such sequences. Results We analyzed low-stringency SELEX data for E. coli Catabolic Activator Protein (CAP), and we show here that appropriate quantitative analysis improves our ability to predict in vitro affinity. To obtain the large number of sequences required for this analysis, we used a SELEX SAGE protocol developed by Roulet et al. The resulting sequences were subjected to bioinformatic analysis. The resulting bioinformatic model characterizes the sequence specificity of the protein more accurately than models estimated in previous analyses from the few known binding sites available in the literature. The consequences of this increase in accuracy for prediction of in vivo binding sites (and especially functional ones) in the E. coli genome are also discussed. We measured the dissociation constants of several putative CAP binding sites by EMSA (Electrophoretic Mobility Shift Assay) and compared the affinities to the bioinformatics scores provided by methods such as the weight matrix method and QPMEME (Quadratic Programming Method of Energy Matrix Estimation) trained on known binding sites as well as on the new sites from the SELEX SAGE data. We also checked predicted genome sites for conservation in the related species S. typhimurium. We found that bioinformatics scores based on SELEX SAGE data do better in terms of prediction of physical binding energies as well as in detecting functional sites. Conclusion We think that training binding site detection
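The weight matrix method mentioned above scores candidate sites by summing per-position log-odds of observed bases versus background. A self-contained sketch with a made-up toy count matrix (not the CAP matrix estimated from the SELEX SAGE data):

```python
import numpy as np

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def make_pwm(counts, background=0.25, pseudo=1.0):
    # Counts -> frequencies (with pseudocounts) -> per-position log-odds.
    counts = np.asarray(counts, dtype=float) + pseudo
    freqs = counts / counts.sum(axis=0, keepdims=True)
    return np.log2(freqs / background)

def score_window(pwm, site):
    return sum(pwm[BASES[b], i] for i, b in enumerate(site))

def best_site(pwm, seq):
    # Slide the matrix along the sequence; return (score, offset) of the best window.
    w = pwm.shape[1]
    return max((score_window(pwm, seq[i:i + w]), i) for i in range(len(seq) - w + 1))

# Toy count matrix (rows A, C, G, T; columns = positions) for a 4-bp motif "ACGA".
counts = [[9, 0, 0, 9],
          [0, 9, 0, 0],
          [0, 0, 9, 0],
          [0, 0, 0, 0]]
pwm = make_pwm(counts)
score, pos = best_site(pwm, "TTTACGATTT")
```

For this toy matrix the consensus ACGA embedded at offset 3 is recovered as the best-scoring window. QPMEME instead estimates an energy matrix by quadratic programming over the observed binding sites.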

  12. An improved model to predict nonuniform deformation of Zr-2.5 Nb pressure tubes

    International Nuclear Information System (INIS)

    Lei, Q.M.; Fan, H.Z.

    1997-01-01

    Present circular pressure-tube ballooning models in most fuel channel codes assume that the pressure tube remains circular during ballooning. This model provides adequate predictions of pressure-tube ballooning behaviour when the pressure tube (PT) and the calandria tube (CT) are concentric and when a small (<100 degrees C) top-to-bottom circumferential temperature gradient is present on the pressure tube. However, nonconcentric ballooning is expected to occur under certain postulated CANDU (CANada Deuterium Uranium) accident conditions. This circular geometry assumption prevents the model from accurately predicting nonuniform pressure-tube straining and local PT/CT contact when the pressure tube is subjected to a large circumferential temperature gradient and consequently deforms in a noncircular pattern. This paper describes an improved model that predicts noncircular pressure-tube deformation. Use of this model (once fully validated) will reduce uncertainties in the prediction of pressure-tube ballooning during a postulated loss-of-coolant accident (LOCA) in a CANDU reactor. The noncircular deformation model considers a ring or cross-section of a pressure tube with unit axial length to calculate deformation in the radial and circumferential directions. The model keeps track of the thinning of the pressure-tube wall as well as the shape deviation from a reference circle. Such deviation is expressed in a cosine Fourier series for the lateral symmetry case. The coefficients of the series for the first m terms are calculated by solving a set of algebraic equations at each time step. The model also takes into account the effects of pressure-tube sag or bow on ballooning, using an input value of the offset distance between the centre of the calandria tube and the initial centre of the pressure tube for determining the position radius of the pressure tube. 
One significant improvement realized in using the noncircular deformation model is a more accurate prediction in
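The shape representation used by this model, a reference circle plus a cosine Fourier series of deviations for the laterally symmetric case, can be illustrated with a short Python sketch (the function names and the two-term example coefficients are hypothetical, not taken from the paper; in the model itself the coefficients are recomputed at each time step by solving a set of algebraic equations):

```python
import math

def tube_radius(theta, r0, coeffs):
    """Radius at angle theta for a laterally symmetric cross-section:
    a reference circle r0 plus a cosine Fourier series of shape deviations."""
    return r0 + sum(a * math.cos(n * theta) for n, a in enumerate(coeffs, start=1))

def max_deviation(r0, coeffs, samples=720):
    """Largest absolute deviation from the reference circle over the full ring."""
    return max(abs(tube_radius(2 * math.pi * i / samples, r0, coeffs) - r0)
               for i in range(samples))
```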

  13. Improvement of prediction ability for genomic selection of dairy cattle by including dominance effects.

    Directory of Open Access Journals (Sweden)

    Chuanyu Sun

    Full Text Available Dominance may be an important source of non-additive genetic variance for many traits of dairy cattle. However, nearly all prediction models for dairy cattle have included only additive effects because of the limited number of cows with both genotypes and phenotypes. The role of dominance in the Holstein and Jersey breeds was investigated for eight traits: milk, fat, and protein yields; productive life; daughter pregnancy rate; somatic cell score; fat percent and protein percent. Additive and dominance variance components were estimated and then used to estimate additive and dominance effects of single nucleotide polymorphisms (SNPs. The predictive abilities of three models with both additive and dominance effects and a model with additive effects only were assessed using ten-fold cross-validation. One procedure estimated dominance values, and another estimated dominance deviations; calculation of the dominance relationship matrix was different for the two methods. The third approach enlarged the dataset by including cows with genotype probabilities derived using genotyped ancestors. For yield traits, dominance variance accounted for 5 and 7% of total variance for Holsteins and Jerseys, respectively; using dominance deviations resulted in smaller dominance and larger additive variance estimates. For non-yield traits, dominance variances were very small for both breeds. For yield traits, including additive and dominance effects fit the data better than including only additive effects; average correlations between estimated genetic effects and phenotypes showed that prediction accuracy increased when both effects rather than just additive effects were included. No corresponding gains in prediction ability were found for non-yield traits. Including cows with derived genotype probabilities from genotyped ancestors did not improve prediction accuracy. The largest additive effects were located on chromosome 14 near DGAT1 for yield traits for both
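One common way to set up additive and dominance SNP design matrices, centered allele counts plus a heterozygosity-based dominance coding, can be sketched as follows (an illustrative sketch only; the paper compares several parameterisations, and this particular coding and the function name are assumptions):

```python
import numpy as np

def design_matrices(genotypes):
    """Additive and dominance SNP design matrices from 0/1/2 genotype codes.
    Additive: allele counts centered by twice the allele frequency.
    Dominance: heterozygote indicator centered by its Hardy-Weinberg
    expectation 2p(1-p)."""
    M = np.asarray(genotypes, dtype=float)
    p = M.mean(axis=0) / 2.0                  # allele frequencies per SNP
    Za = M - 2.0 * p                          # additive coding
    het = (M == 1).astype(float)              # heterozygote indicator
    Zd = het - 2.0 * p * (1.0 - p)            # dominance coding
    return Za, Zd
```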

  14. Improved Predictions of the Geographic Distribution of Invasive Plants Using Climatic Niche Models

    Science.gov (United States)

    Ramírez-Albores, Jorge E.; Bustamante, Ramiro O.

    2016-01-01

    naturally established individuals because this improves the accuracy of predictions about their distribution ranges. PMID:27195983

  15. Mid- and long-term runoff predictions by an improved phase-space reconstruction model

    International Nuclear Information System (INIS)

    Hong, Mei; Wang, Dong; Wang, Yuankun; Zeng, Xiankui; Ge, Shanshan; Yan, Hengqian; Singh, Vijay P.

    2016-01-01

In recent years, the phase-space reconstruction method has usually been used for mid- and long-term runoff predictions. However, the traditional phase-space reconstruction method still needs to be improved. Using the genetic algorithm to improve the phase-space reconstruction method, a new nonlinear model of monthly runoff is constructed. The new model does not rely heavily on embedding dimensions. Recognizing that the rainfall–runoff process is complex, affected by a number of factors, more variables (e.g. temperature and rainfall) are incorporated in the model. In order to detect the possible presence of chaos in the runoff dynamics, chaotic characteristics of the model are also analyzed, which shows the model can represent the nonlinear and chaotic characteristics of the runoff. The model is tested for its forecasting performance in four types of experiments using data from six hydrological stations on the Yellow River and the Yangtze River. Results show that the medium- and long-term runoff is satisfactorily forecasted at the hydrological stations. Not only is the forecasting trend accurate, but also the mean absolute percentage error is no more than 15%. Moreover, the forecast results of wet years and dry years are both good, which means that the improved model can overcome the traditional ‘‘wet years and dry years predictability barrier,’’ to some extent. The model forecasts for different regions are all good, showing the universality of the approach. Compared with selected conceptual and empirical methods, the model exhibits greater reliability and stability in the long-term runoff prediction. Our study provides new insight into the association between monthly runoff and other hydrological factors, and also provides a new method for the prediction of monthly runoff. - Highlights: • The improved phase-space reconstruction model of monthly runoff is established. • Two variables (temperature and rainfall) are incorporated
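Two ingredients named in the abstract, a phase-space (time-delay) reconstruction of the runoff series and the mean absolute percentage error used to judge forecasts, can be sketched in Python (illustrative only; the function names are hypothetical, and the paper's model adds genetic-algorithm optimization and extra variables on top of the embedding):

```python
def delay_embed(series, dim, tau):
    """Phase-space reconstruction by time-delay embedding: each state
    vector is (x[t], x[t-tau], ..., x[t-(dim-1)*tau])."""
    start = (dim - 1) * tau
    return [[series[t - k * tau] for k in range(dim)]
            for t in range(start, len(series))]

def mape(obs, pred):
    """Mean absolute percentage error, the skill score quoted (<= 15%)."""
    return 100.0 * sum(abs((o - p) / o) for o, p in zip(obs, pred)) / len(obs)
```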

  16. Open Access Platforms in Spinal Cord Injury: Existing Clinical Trial Data to Predict and Improve Outcomes.

    Science.gov (United States)

    Kramer, John L K; Geisler, Fred; Ramer, Leanne; Plunet, Ward; Cragg, Jacquelyn J

    2017-05-01

Recovery from acute spinal cord injury (SCI) is characterized by extensive heterogeneity, resulting in uncertain prognosis. Reliable prediction of recovery in the acute phase benefits patients and their families directly and improves the likelihood of detecting efficacy in clinical trials. This issue of heterogeneity is not unique to SCI. In fields such as traumatic brain injury, Parkinson's disease, and amyotrophic lateral sclerosis, one approach to understanding variability in recovery has been to make clinical trial data widely available to the greater research community. We contend that the SCI community should adopt a similar approach in providing open access to clinical trial data.

  17. The improvement of MOSFET prediction in space environments using the conversion model

    International Nuclear Information System (INIS)

    Shvetzov-Shilovsky, I.N.; Cherepko, S.V.; Pershenkov, V.S.

    1994-01-01

The modeling of MOS device response to a low dose rate irradiation has been performed. The existing conversion model, based on a linear dependence between positive oxide charge annealing and interface trap buildup, accurately predicts the long time response of MOSFETs with relatively thick oxides but overestimates the threshold voltage shift for radiation hardened MOSFETs with thin oxides. To explain this, the authors investigate the impulse response function for threshold voltage. A revised model, which incorporates the different energy levels of hole traps in the oxide, improves the fit between the model and data and explains the dependence of the fitting parameters on the oxide field

  18. Mid- and long-term runoff predictions by an improved phase-space reconstruction model

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Mei [Research Center of Ocean Environment Numerical Simulation, Institute of Meteorology and oceanography, PLA University of Science and Technology, Nanjing (China); Wang, Dong, E-mail: wangdong@nju.edu.cn [Key Laboratory of Surficial Geochemistry, Ministry of Education, Department of Hydrosciences, School of Earth Sciences and Engineering, Collaborative Innovation Center of South China Sea Studies, State Key Laboratory of Pollution Control and Resource Reuse, Nanjing University, Nanjing 210093 (China); Wang, Yuankun; Zeng, Xiankui [Key Laboratory of Surficial Geochemistry, Ministry of Education, Department of Hydrosciences, School of Earth Sciences and Engineering, Collaborative Innovation Center of South China Sea Studies, State Key Laboratory of Pollution Control and Resource Reuse, Nanjing University, Nanjing 210093 (China); Ge, Shanshan; Yan, Hengqian [Research Center of Ocean Environment Numerical Simulation, Institute of Meteorology and oceanography, PLA University of Science and Technology, Nanjing (China); Singh, Vijay P. [Department of Biological and Agricultural Engineering Zachry Department of Civil Engineering, Texas A & M University, College Station, TX 77843 (United States)

    2016-07-15

    In recent years, the phase-space reconstruction method has usually been used for mid- and long-term runoff predictions. However, the traditional phase-space reconstruction method is still needs to be improved. Using the genetic algorithm to improve the phase-space reconstruction method, a new nonlinear model of monthly runoff is constructed. The new model does not rely heavily on embedding dimensions. Recognizing that the rainfall–runoff process is complex, affected by a number of factors, more variables (e.g. temperature and rainfall) are incorporated in the model. In order to detect the possible presence of chaos in the runoff dynamics, chaotic characteristics of the model are also analyzed, which shows the model can represent the nonlinear and chaotic characteristics of the runoff. The model is tested for its forecasting performance in four types of experiments using data from six hydrological stations on the Yellow River and the Yangtze River. Results show that the medium-and long-term runoff is satisfactorily forecasted at the hydrological stations. Not only is the forecasting trend accurate, but also the mean absolute percentage error is no more than 15%. Moreover, the forecast results of wet years and dry years are both good, which means that the improved model can overcome the traditional ‘‘wet years and dry years predictability barrier,’’ to some extent. The model forecasts for different regions are all good, showing the universality of the approach. Compared with selected conceptual and empirical methods, the model exhibits greater reliability and stability in the long-term runoff prediction. Our study provides a new thinking for research on the association between the monthly runoff and other hydrological factors, and also provides a new method for the prediction of the monthly runoff. - Highlights: • The improved phase-space reconstruction model of monthly runoff is established. • Two variables (temperature and rainfall) are incorporated

  19. Improvement of bottom-quark associated Higgs-boson production predictions for LHC using HERA data

    Energy Technology Data Exchange (ETDEWEB)

    Andrii, Gizhko; Achim, Geiser [Deutsches Elektronen-Synchrotron, Hamburg (Germany)

    2016-07-01

The dependence of the inclusive total cross section for bottom-quark associated Higgs-boson production at the LHC, pp → (b anti b)H+X, on the treatment of the beauty quark mass is studied in the context of CMS measurements. For two different schemes (four flavour scheme (4FS) and five flavour scheme (5FS)) the theoretical uncertainty due to the beauty quark mass is estimated, and the potential improvement arising from a QCD analysis of HERA beauty data is demonstrated.

  20. Sharing reference data and including cows in the reference population improve genomic predictions in Danish Jersey.

    Science.gov (United States)

    Su, G; Ma, P; Nielsen, U S; Aamand, G P; Wiggans, G; Guldbrandtsen, B; Lund, M S

    2016-06-01

Small reference populations limit the accuracy of genomic prediction in numerically small breeds, such as Danish Jersey. The objective of this study was to investigate two approaches to improve genomic prediction by increasing the size of the reference population in Danish Jersey. The first approach was to include North American Jersey bulls in the Danish Jersey reference population. The second was to genotype cows and use them as reference animals. The validation of genomic prediction was carried out on bulls and cows, respectively. In validation on bulls, about 300 Danish bulls (depending on traits) born in 2005 and later were used as validation data, and the reference populations were: (1) about 1050 Danish bulls, (2) about 1050 Danish bulls and about 1150 US bulls. In validation on cows, about 3000 Danish cows from 87 young half-sib families were used as validation data, and the reference populations were: (1) about 1250 Danish bulls, (2) about 1250 Danish bulls and about 1150 US bulls, (3) about 1250 Danish bulls and about 4800 cows, (4) about 1250 Danish bulls, 1150 US bulls and 4800 Danish cows. A genomic best linear unbiased prediction model was used to predict breeding values. De-regressed proofs were used as response variables. In the validation on bulls for eight traits, the joint DK-US bull reference population led to higher reliability of genomic prediction than the DK bull reference population for six traits, but not for fertility and longevity. Averaged over the eight traits, the gain was 3 percentage points. In the validation on cows for six traits (fertility and longevity were not available), the gain from inclusion of US bulls in the reference population was 6.6 percentage points on average over the six traits, and the gain from inclusion of cows was 8.2 percentage points. However, the gains from cows and US bulls were not cumulative. The total gain of including both US bulls and Danish cows was 10.5 percentage points.
The results indicate that sharing reference
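Validation reliability in genomic-prediction studies of this kind is often reported as the squared correlation between predicted values and de-regressed proofs; a minimal sketch of that metric follows (the exact definition used in the paper is an assumption here, and the function name is hypothetical):

```python
def validation_reliability(gebv, drp):
    """Squared Pearson correlation between genomic predictions (gebv)
    and de-regressed proofs (drp), a common validation-reliability proxy."""
    n = len(gebv)
    mg = sum(gebv) / n
    md = sum(drp) / n
    cov = sum((g - mg) * (d - md) for g, d in zip(gebv, drp))
    vg = sum((g - mg) ** 2 for g in gebv)
    vd = sum((d - md) ** 2 for d in drp)
    return cov * cov / (vg * vd)
```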

  1. Model predictive control as a tool for improving the process operation of MSW combustion plants

    International Nuclear Information System (INIS)

    Leskens, M.; Kessel, L.B.M. van; Bosgra, O.H.

    2005-01-01

    In this paper a feasibility study is presented on the application of the advanced control strategy called model predictive control (MPC) as a tool for obtaining improved process operation performance for municipal solid waste (MSW) combustion plants. The paper starts with a discussion of the operational objectives and control of such plants, from which a motivation follows for applying MPC to them. This is followed by a discussion on the basic idea behind this advanced control strategy. After that, an MPC-based combustion control system is proposed aimed at tackling a typical MSW combustion control problem and, using this proposed control system, an assessment is made of the improvement in performance that an MPC-based MSW combustion control system can provide in comparison to conventional MSW combustion control systems. This assessment is based on simulations using an experimentally obtained process and disturbance model of a real-life large-scale MSW combustion plant
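The receding-horizon idea behind MPC, namely optimize a predicted trajectory subject to actuator constraints, apply only the first move, then repeat, can be sketched for a toy scalar plant (purely illustrative; the real controller uses an experimentally identified combustion and disturbance model and a longer horizon, none of which is reproduced here):

```python
def mpc_step(x, setpoint, a, b, u_min, u_max):
    """One receding-horizon step for a scalar linear plant x+ = a*x + b*u:
    the unconstrained one-step optimum is clipped to the actuator limits."""
    u = (setpoint - a * x) / b
    return max(u_min, min(u_max, u))

def simulate(x0, setpoint, a, b, u_min, u_max, steps):
    """Closed loop: recompute the move each step and apply only it."""
    x, traj = x0, []
    for _ in range(steps):
        u = mpc_step(x, setpoint, a, b, u_min, u_max)
        x = a * x + b * u
        traj.append(x)
    return traj
```

With the input saturated, the toy plant approaches the setpoint geometrically, which is the qualitative behaviour a constrained MPC exhibits.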

  2. The film boiling look-up table: an improvement in predicting post-chf temperatures

    International Nuclear Information System (INIS)

    Groeneveld, D.C.; Leung, L.K.H.; Vasic, A.Z.; Guo, Y.J.; El Nakla, M.; Cheng, S.C.

    2002-01-01

    During the past 50 years more than 60 film boiling prediction methods have been proposed (Groeneveld and Leung, 2000). These prediction methods generally are applicable over limited ranges of flow conditions and do not provide reasonable predictions when extrapolated well outside the range of their respective database. Leung et al. (1996, 1997) and Kirillov et al. (1996) have proposed the use of a film-boiling look-up table as an alternative to the many models, equations and correlations for the inverted annular film boiling (IAFB) and the dispersed flow film-boiling (DFFB) regime. The film-boiling look-up table is a logical follow-up to the development of the successful CHF look-up table (Groeneveld et al., 1996). It is basically a normalized data bank of heat-transfer coefficients for discrete values of pressure, mass flux, quality and heat flux or surface-temperature. The look-up table proposed by Leung et al. (1996, 1997), and referred to as PDO-LW-96, was based on 14,687 data and predicted the surface temperature with an average error of 1.2% and an rms error of 6.73%. The heat-transfer coefficient was predicted with an average error of -4.93% and an rms error of 16.87%. Leung et al. clearly showed that the look-up table approach, as a general predictive tool for film-boiling heat transfer, was superior to the correlation or model approach. Error statistics were not provided for the look-up table proposed by Kirillov et al. (1996). This paper reviews the look-up table approach and describes improvements to the derivation of the film-boiling look-up table. 
These improvements include: (i) a larger data base, (ii) a wider range of thermodynamic qualities, (iii) use of the wall temperature instead of the heat flux as an independent parameter, (iv) employment of fully-developed film-boiling data only for the derivation of the look-up table, (v) a finer subdivision and thus more table entries, (vi) smoother table, and (vii) use of the best of five prediction methods
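A look-up table of this kind is used by interpolating between its discrete entries; bilinear interpolation over two axes shows the idea (the actual table spans pressure, mass flux, quality and wall temperature, but the per-axis interpolation is the same; the function name and toy table below are hypothetical):

```python
from bisect import bisect_right

def interp_bilinear(xs, ys, table, x, y):
    """Bilinear interpolation in a 2-D look-up table: rows indexed by the
    sorted grid xs, columns by ys. Queries outside the grid extrapolate
    from the nearest cell."""
    i = min(max(bisect_right(xs, x) - 1, 0), len(xs) - 2)
    j = min(max(bisect_right(ys, y) - 1, 0), len(ys) - 2)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return ((1 - tx) * (1 - ty) * table[i][j] + tx * (1 - ty) * table[i + 1][j]
            + (1 - tx) * ty * table[i][j + 1] + tx * ty * table[i + 1][j + 1])
```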

  3. Representing leaf and root physiological traits in CLM improves global carbon and nitrogen cycling predictions

    Science.gov (United States)

    Ghimire, Bardan; Riley, William J.; Koven, Charles D.; Mu, Mingquan; Randerson, James T.

    2016-06-01

    In many ecosystems, nitrogen is the most limiting nutrient for plant growth and productivity. However, current Earth System Models (ESMs) do not mechanistically represent functional nitrogen allocation for photosynthesis or the linkage between nitrogen uptake and root traits. The current version of CLM (4.5) links nitrogen availability and plant productivity via (1) an instantaneous downregulation of potential photosynthesis rates based on soil mineral nitrogen availability, and (2) apportionment of soil nitrogen between plants and competing nitrogen consumers assumed to be proportional to their relative N demands. However, plants do not photosynthesize at potential rates and then downregulate; instead photosynthesis rates are governed by nitrogen that has been allocated to the physiological processes underpinning photosynthesis. Furthermore, the role of plant roots in nutrient acquisition has also been largely ignored in ESMs. We therefore present a new plant nitrogen model for CLM4.5 with (1) improved representations of linkages between leaf nitrogen and plant productivity based on observed relationships in a global plant trait database and (2) plant nitrogen uptake based on root-scale Michaelis-Menten uptake kinetics. Our model improvements led to a global bias reduction in GPP, LAI, and biomass of 70%, 11%, and 49%, respectively. Furthermore, water use efficiency predictions were improved conceptually, qualitatively, and in magnitude. The new model's GPP responses to nitrogen deposition, CO2 fertilization, and climate also differed from the baseline model. The mechanistic representation of leaf-level nitrogen allocation and a theoretically consistent treatment of competition with belowground consumers led to overall improvements in global carbon cycling predictions.
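Root-scale Michaelis-Menten uptake kinetics, the second model improvement named above, take the standard saturating form (a sketch; the parameter values are illustrative, not CLM's):

```python
def mm_uptake(vmax, n_conc, km):
    """Michaelis-Menten nitrogen uptake: rate rises with soil mineral N
    concentration n_conc and saturates at vmax, with half-saturation km."""
    return vmax * n_conc / (km + n_conc)
```

At n_conc = km the rate is exactly half of vmax, which is what makes km a convenient trait-level parameter.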

  4. Improving prediction of fall risk among nursing home residents using electronic medical records.

    Science.gov (United States)

    Marier, Allison; Olsho, Lauren E W; Rhodes, William; Spector, William D

    2016-03-01

Falls are physically and financially costly, but may be preventable with targeted intervention. The Minimum Data Set (MDS) is one potential source of information on fall risk factors among nursing home residents, but its limited breadth and relatively infrequent updates may limit its practical utility. Richer, more frequently updated data from electronic medical records (EMRs) may improve ability to identify individuals at highest risk for falls. The authors applied a repeated events survival model to analyze MDS 3.0 and EMR data for 5129 residents in 13 nursing homes within a single large California chain that uses a centralized EMR system from a leading vendor. Estimated regression parameters were used to project resident fall probability. The authors examined the proportion of observed falls within each projected fall risk decile to assess improvements in predictive power from including EMR data. In a model incorporating fall risk factors from the MDS only, 28.6% of observed falls occurred among residents in the highest projected risk decile. In an alternative specification incorporating more frequently updated measures for the same risk factors from the EMR data, 32.3% of observed falls occurred among residents in the highest projected risk decile, a 13% increase over the base MDS-only specification. Incorporating EMR data improves ability to identify those at highest risk for falls relative to prediction using MDS data alone. These improvements stem chiefly from the greater frequency with which EMR data are updated, with minimal additional gains from availability of additional risk factor variables.
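The evaluation metric used here, the share of observed falls occurring among residents in the highest projected-risk decile, can be computed as follows (a sketch; the function name and toy data are hypothetical):

```python
def top_decile_capture(risks, events):
    """Fraction of all observed events (e.g. falls) that occur among the
    individuals whose projected risk is in the top decile."""
    order = sorted(range(len(risks)), key=lambda i: risks[i], reverse=True)
    top = set(order[:max(1, len(risks) // 10)])
    total = sum(events)
    return sum(events[i] for i in top) / total
```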

  5. Predicting subscriber dissatisfaction and improving retention in the wireless telecommunications industry.

    Science.gov (United States)

    Mozer, M C; Wolniewicz, R; Grimes, D B; Johnson, E; Kaushansky, H

    2000-01-01

Competition in the wireless telecommunications industry is fierce. To maintain profitability, wireless carriers must control churn, which is the loss of subscribers who switch from one carrier to another. We explore techniques from statistical machine learning to predict churn and, based on these predictions, to determine what incentives should be offered to subscribers to improve retention and maximize profitability to the carrier. The techniques include logit regression, decision trees, neural networks, and boosting. Our experiments are based on a database of nearly 47,000 U.S. domestic subscribers and include information about their usage, billing, credit, application, and complaint history. Our experiments show that under a wide variety of assumptions concerning the cost of intervention and the retention rate resulting from intervention, using predictive techniques to identify potential churners and offering incentives can yield significant savings to a carrier. We also show the importance of a data representation crafted by domain experts. Finally, we report on a real-world test of the techniques that validates our simulation experiments.
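Two pieces of such a pipeline can be sketched simply: a logit-model churn score, and the expected net saving from offering a retention incentive (illustrative only; the weights, subscriber value, incentive cost and retention lift are made-up parameters, and the paper's actual cost model is richer):

```python
import math

def churn_probability(weights, bias, features):
    """Logit (logistic regression) score: predicted churn probability."""
    z = bias + sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def expected_saving(p_churn, subscriber_value, incentive_cost, retention_lift):
    """Expected net saving from offering an incentive: retained value
    (churn probability x lift x subscriber value) minus the incentive cost.
    Offering is worthwhile only when this is positive."""
    return p_churn * retention_lift * subscriber_value - incentive_cost
```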

  6. Improved prediction of meat and bone meal metabolizable energy content for ducks through in vitro methods.

    Science.gov (United States)

    Garcia, R A; Phillips, J G; Adeola, O

    2012-08-01

Apparent metabolizable energy (AME) of meat and bone meal (MBM) for poultry is highly variable, but impractical to measure routinely. Previous efforts at developing an in vitro method for predicting AME have had limited success. The present study uses data from a previous publication on the AME of 12 MBM samples, determined using 288 White Pekin ducks, as well as composition data on these samples. Here, we investigate the hypothesis that 2 noncompositional attributes of MBM, particle size and protease resistance, will have utility in improving predictions of AME based on in vitro measurements. Using the same MBM samples as the previous study, 2 measurements of particle size were recorded and protease resistance was determined using a modified pepsin digestibility assay. Analysis of the results using a stepwise construction of multiple linear regression models revealed that the measurements of particle size were useful in building models for AME, but the measure of protease resistance was not. Relatively simple (4-term) and complex (7-term) models for both AME and nitrogen-corrected AME were constructed, with R-squared values ranging from 0.959 to 0.996. The rather minor analytical effort required to conduct the measurements involved is discussed. Although the generality of the results is limited by the number of samples involved and the species used, they suggest that AME for poultry can be accurately predicted through simple and inexpensive in vitro methods.
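The stepwise-built models are ordinary multiple linear regressions; fitting one and reporting its R-squared can be sketched as follows (illustrative; the actual 4- and 7-term predictor sets are the paper's and are not shown here):

```python
import numpy as np

def fit_ols(X, y):
    """Ordinary least squares with an intercept, returning coefficients
    (intercept first) and the coefficient of determination R-squared."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1.0 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()
    return beta, r2
```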

  7. EMUDRA: Ensemble of Multiple Drug Repositioning Approaches to Improve Prediction Accuracy.

    Science.gov (United States)

    Zhou, Xianxiao; Wang, Minghui; Katsyv, Igor; Irie, Hanna; Zhang, Bin

    2018-04-24

Availability of large-scale genomic, epigenetic and proteomic data in complex diseases makes it possible to objectively and comprehensively identify therapeutic targets that can lead to new therapies. The Connectivity Map has been widely used to explore novel indications of existing drugs. However, the prediction accuracy of existing methods, such as the Kolmogorov-Smirnov statistic, remains low. Here we present a novel high-performance drug repositioning approach that improves over the state-of-the-art methods. We first designed an expression weighted cosine method (EWCos) to minimize the influence of uninformative expression changes and then developed an ensemble approach termed EMUDRA (Ensemble of Multiple Drug Repositioning Approaches) to integrate EWCos and three existing state-of-the-art methods. EMUDRA significantly outperformed individual drug repositioning methods when applied to simulated and independent evaluation datasets. Using EMUDRA, we predicted and experimentally validated the antibiotic rifabutin as an inhibitor of cell growth in triple negative breast cancer. EMUDRA can identify drugs that more effectively target disease gene signatures and will thus be a useful tool for identifying novel therapies for complex diseases and predicting new indications for existing drugs. The EMUDRA R package is available at doi:10.7303/syn11510888.
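The core idea of EWCos, a cosine similarity in which informative expression changes are weighted up and uninformative ones down, can be sketched as a generic weighted cosine (the exact weighting scheme is the paper's; this form and the function name are assumptions):

```python
import math

def weighted_cosine(query, reference, weights):
    """Weighted cosine similarity between a disease signature (query) and a
    drug signature (reference); weights down-weight uninformative genes."""
    num = sum(w * q * r for w, q, r in zip(weights, query, reference))
    nq = math.sqrt(sum(w * q * q for w, q in zip(weights, query)))
    nr = math.sqrt(sum(w * r * r for w, r in zip(weights, reference)))
    return num / (nq * nr)
```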

  8. Improving sub-pixel imperviousness change prediction by ensembling heterogeneous non-linear regression models

    Science.gov (United States)

    Drzewiecki, Wojciech

    2016-12-01

In this work nine non-linear regression models were compared for sub-pixel impervious surface area mapping from Landsat images. The comparison was done in three study areas, both for the accuracy of imperviousness coverage evaluation at individual points in time and for the accuracy of imperviousness change assessment. The performance of individual machine learning algorithms (Cubist, Random Forest, stochastic gradient boosting of regression trees, k-nearest neighbors regression, random k-nearest neighbors regression, Multivariate Adaptive Regression Splines, averaged neural networks, and support vector machines with polynomial and radial kernels) was also compared with the performance of heterogeneous model ensembles constructed from the best models trained using particular techniques. The results showed that in the case of sub-pixel evaluation the most accurate prediction of change may not necessarily be based on the most accurate individual assessments. When single methods are considered, the Cubist algorithm may be advised for Landsat-based mapping of imperviousness for single dates. However, Random Forest may be endorsed when the most reliable evaluation of imperviousness change is the primary goal. It gave lower accuracies for individual assessments, but better prediction of change due to more correlated errors of individual predictions. Heterogeneous model ensembles performed at least as well as the best individual models for individual time-point assessments. In the case of imperviousness change assessment, the ensembles always outperformed single-model approaches. It means that it is possible to improve the accuracy of sub-pixel imperviousness change assessment using ensembles of heterogeneous non-linear regression models.
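A simple averaging ensemble, and change estimated as the difference of two dates' predictions (where correlated per-model errors partly cancel), can be sketched as follows (illustrative; the paper's ensembles combine trained heterogeneous regressors, whereas the numbers here are fixed inputs):

```python
def ensemble_predict(predictions):
    """Averaging ensemble: mean of several models' per-pixel predictions.
    `predictions` is a list of per-model prediction lists."""
    return [sum(p) / len(p) for p in zip(*predictions)]

def change_estimate(pred_t1, pred_t2):
    """Imperviousness change as the per-pixel difference of two dates'
    estimates; errors correlated across dates cancel in the difference."""
    return [b - a for a, b in zip(pred_t1, pred_t2)]
```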

  9. Improving accuracy of protein-protein interaction prediction by considering the converse problem for sequence representation

    Directory of Open Access Journals (Sweden)

    Wang Yong

    2011-10-01

Full Text Available Abstract Background With the development of genome-sequencing technologies, protein sequences are readily obtained by translating the measured mRNAs. Therefore predicting protein-protein interactions from the sequences is in great demand. The reason lies in the fact that identifying protein-protein interactions is becoming a bottleneck for eventually understanding the functions of proteins, especially for those organisms barely characterized. Although a few methods have been proposed, the converse problem, whether the features used extract sufficient and unbiased information from protein sequences, is almost untouched. Results In this study, we investigate this problem theoretically via an optimization scheme. Motivated by the theoretical investigation, we find novel encoding methods for both protein sequences and protein pairs. Our new methods exploit sufficiently the information of protein sequences and reduce artificial bias and computational cost. Thus, the approach significantly outperforms the available methods regarding sensitivity, specificity, precision, and recall under cross-validation evaluation, and reaches ~80% and ~90% accuracy in Escherichia coli and Saccharomyces cerevisiae, respectively. Our findings here hold important implications for other sequence-based prediction tasks because representation of biological sequence is always the first step in computational biology. Conclusions By considering the converse problem, we propose new representation methods for both protein sequences and protein pairs. The results show that our method significantly improves the accuracy of protein-protein interaction predictions.

  10. Congestion Prediction Modeling for Quality of Service Improvement in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Ga-Won Lee

    2014-04-01

Full Text Available Information technology (IT) is pushing ahead with drastic reforms of modern life for improvement of human welfare. Objects constitute "Information Networks" through smart, self-regulated information gathering that also recognizes and controls current information states in Wireless Sensor Networks (WSNs). Information observed from sensor networks in real-time is used to increase quality of life (QoL) in various industries and daily life. One of the key challenges of WSNs is how to achieve lossless data transmission. Although sensor nodes nowadays have enhanced capacities, it is hard to assure lossless and reliable end-to-end data transmission in WSNs due to unstable wireless links and limited hardware resources, given high quality of service (QoS) requirements. We propose a node and path traffic prediction model to predict and minimize congestion. This solution includes prediction of packet generation due to network congestion from both periodic and event data generation. Simulation using NS-2 and Matlab is used to demonstrate the effectiveness of the proposed solution.

  11. Mutations in gp41 are correlated with coreceptor tropism but do not improve prediction methods substantially.

    Science.gov (United States)

    Thielen, Alexander; Lengauer, Thomas; Swenson, Luke C; Dong, Winnie W Y; McGovern, Rachel A; Lewis, Marilyn; James, Ian; Heera, Jayvant; Valdez, Hernan; Harrigan, P Richard

    2011-01-01

    The main determinants of HIV-1 coreceptor usage are located in the V3-loop of gp120, although mutations in V2 and gp41 are also known. Incorporation of V2 is known to improve prediction algorithms; however, this has not been confirmed for gp41 mutations. Samples with V3 and gp41 genotypes and Trofile assay (Monogram Biosciences, South San Francisco, CA, USA) results were taken from the HOMER cohort (n=444) and from patients screened for the MOTIVATE studies (n=1,916; 859 with maraviroc outcome data). Correlations of mutations with tropism were assessed using Fisher's exact test and prediction models trained using support vector machines. Models were validated by cross-validation, by testing models from one dataset on the other, and by analysing virological outcome. Several mutations within gp41 were highly significant for CXCR4 usage; most strikingly an insertion occurring in 7.7% of HOMER-R5 and 46.3% of HOMER-X4 samples (MOTIVATE 5.7% and 25.2%, respectively). Models trained on gp41 sequence alone achieved relatively high areas under the receiver-operating characteristic curve (AUCs; HOMER 0.713 and MOTIVATE 0.736) that were almost as good as V3 models (0.773 and 0.884, respectively). However, combining the two regions improved predictions only marginally (0.813 and 0.902, respectively). Similar results were found when models were trained on HOMER and validated on MOTIVATE or vice versa. The difference in median log viral load decrease at week 24 between patients with R5 and X4 virus was 1.65 (HOMER 2.45 and MOTIVATE 0.79) for V3 models, 1.59 for gp41-models (2.42 and 0.83, respectively) and 1.58 for the combined predictor (2.44 and 0.86, respectively). Several mutations within gp41 showed strong correlation with tropism in two independent datasets. However, incorporating gp41 mutations into prediction models is not mandatory because they do not improve substantially on models trained on V3 sequences alone.
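The comparison metric, area under the ROC curve, can be computed from prediction scores and tropism labels via the rank-sum (Mann-Whitney) identity (a standard computation, sketched here with made-up scores rather than the study's data):

```python
def auc(scores, labels):
    """Area under the ROC curve: the probability that a randomly chosen
    positive (label 1, e.g. X4) outscores a randomly chosen negative
    (label 0, e.g. R5), counting ties as half."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```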

  12. Prediction method of long-term reliability in improving residual stresses by means of surface finishing

    International Nuclear Information System (INIS)

    Sera, Takehiko; Hirano, Shinro; Chigusa, Naoki; Okano, Shigetaka; Saida, Kazuyoshi; Mochizuki, Masahito; Nishimoto, Kazutoshi

    2012-01-01

    Surface finishing methods such as Water Jet Peening (WJP) have been applied to welds in some major components of nuclear power plants as a countermeasure to Primary Water Stress Corrosion Cracking (PWSCC). In addition, a surface finishing method (buffing treatment) is being standardized, and buffing has thus also come to be recognized as a well-established stress improvement method. The long-term stability of peening techniques has been confirmed by accelerated testing. However, the stress improvement produced by surface treatment is confined to a thin layer, and the effect of the complicated residual stress distribution in the weld metal beneath the surface is not strictly taken into account in assessing long-term stability. This paper therefore describes accelerated tests which confirmed that the long-term stability of a layer subjected to buffing treatment equals that of a layer subjected to WJP. The long-term reliability of the very thin stress-improved layer was also confirmed through a trial evaluation by thermal elastic-plastic creep analysis, even when the effect of the complicated residual stress distribution in the weld metal was taken into account conservatively. Based on these findings, an approach is proposed for constructing a method of predicting the long-term reliability of stress improvement by surface finishing. (author)

  13. Understanding the origin of the solar cyclic activity for an improved earth climate prediction

    Science.gov (United States)

    Turck-Chièze, Sylvaine; Lambert, Pascal

    This review is dedicated to the processes that could explain the origin of the grand extrema of solar activity. We would like to reach a more reliable estimate and prediction of the temporal solar variability and its real impact on Earth climate models. The development of this new field is stimulated by the SoHO helioseismic measurements and by recent improvements in solar modelling that aim to describe the dynamical processes from the core to the surface. We first recall assumptions on the potential different solar variabilities. Then we introduce stellar seismology and summarize the main SoHO results relevant to this field. Finally, we mention the dynamical processes that are presently being introduced in new solar models. We believe that knowledge of two important elements, (1) the magnetic field interplay between the radiative and convective zones and (2) the role of gravity waves, would allow us to understand the origin of the grand minima and maxima observed during the last millennium. Complementary observables such as acoustic and gravity modes, radius, and spectral irradiance from far UV to visible, in parallel with the development of 1D-2D-3D simulations, will improve this field. PICARD, SDO and DynaMICCS are key projects for a prediction of the next century's variability. Helioseismic indicators constitute the first information needed to properly describe the Sun-Earth climatic connection.

  14. Improving diagnosis, prognosis and prediction by using biomarkers in CRC patients (Review).

    Science.gov (United States)

    Nikolouzakis, Taxiarchis Konstantinos; Vassilopoulou, Loukia; Fragkiadaki, Persefoni; Mariolis Sapsakos, Theodoros; Papadakis, Georgios Z; Spandidos, Demetrios A; Tsatsakis, Aristides M; Tsiaoussis, John

    2018-06-01

    Colorectal cancer (CRC) is among the most common cancers. It ranks third among cancers diagnosed in men, after lung and prostate cancer, and second among cancers diagnosed in women, following breast cancer. Moreover, its high mortality rate places it among the leading causes of cancer-related death worldwide. Thus, to help clinicians optimize their practice, it is crucial to introduce more effective tools that improve not only early diagnosis but also prediction of the most likely progression of the disease and of the response to chemotherapy. In that way, both the morbidity and the mortality of patients can be decreased. Accordingly, colon cancer research has described numerous biomarkers for diagnostic, prognostic and predictive purposes that, either alone or as part of a panel, would help improve patients' clinical management. This review aims to describe the most accepted of the biomarkers proposed for use in CRC, grouped by the clinical specimen examined (tissue, faeces or blood), along with their limitations. Lastly, new insights in CRC monitoring are discussed, presenting promising emerging biomarkers (telomerase activity, telomere length and micronuclei frequency).

  15. Can assimilation of crowdsourced data in hydrological modelling improve flood prediction?

    Science.gov (United States)

    Mazzoleni, Maurizio; Verlaan, Martin; Alfonso, Leonardo; Monego, Martina; Norbiato, Daniele; Ferri, Miche; Solomatine, Dimitri P.

    2017-02-01

    Monitoring stations have been used for decades to properly measure hydrological variables and better predict floods, and methods to incorporate these observations into mathematical water models have been developed accordingly. In recent years, continued technological advances, combined with the growing inclusion of citizens in participatory processes related to water resources management, have encouraged an increase in citizen science projects around the globe. In turn, this has stimulated the spread of low-cost sensors that allow citizens to collect hydrological data in a more distributed way than classic static physical sensors do. However, two main disadvantages of such crowdsourced data are their irregular availability and their variable accuracy from sensor to sensor, which make them challenging to use in hydrological modelling. This study aims to demonstrate that streamflow data derived from crowdsourced water level observations can improve flood prediction when assimilated into hydrological models. Two different hydrological models, applied to four case studies, are considered. Realistic (albeit synthetic) time series are used to represent crowdsourced data in all case studies. It is found that data accuracy has much more influence on the model results than the irregular frequency at which the streamflow data are assimilated. This study demonstrates that data collected by citizens, though asynchronous and inaccurate, can complement traditional networks of a few accurate, static sensors and improve the accuracy of flood forecasts.

  16. Modified ensemble Kalman filter for nuclear accident atmospheric dispersion: prediction improved and source estimated.

    Science.gov (United States)

    Zhang, X L; Su, G F; Yuan, H Y; Chen, J G; Huang, Q Y

    2014-09-15

    Atmospheric dispersion models play an important role in nuclear power plant accident management. A reliable estimate of the radioactive material distribution at short range (about 50 km) is urgently needed for population sheltering and evacuation planning. However, the meteorological data and the source term, which greatly influence the accuracy of atmospheric dispersion models, are usually poorly known in the early phase of an emergency. In this study, a modified ensemble Kalman filter data assimilation method, used in conjunction with a Lagrangian puff model, is proposed to simultaneously improve the model prediction and reconstruct the source term for short-range atmospheric dispersion using off-site environmental monitoring data. Four main uncertain parameters are considered: source release rate, plume rise height, wind speed and wind direction. Twin experiments show that the method effectively improves the predicted concentration distribution and successfully reconstructs the temporal profiles of source release rate and plume rise height. Moreover, the time lag in the response of the ensemble Kalman filter is shortened. The proposed method can be a useful tool not only in nuclear power plant accident emergency management but also in other situations where hazardous material is released into the atmosphere. Copyright © 2014 Elsevier B.V. All rights reserved.
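
    The core update the paper builds on, a stochastic ensemble Kalman filter analysis step, can be sketched minimally as below. A linear observation operator and all numbers are synthetic assumptions; the paper's Lagrangian puff model, augmented source-term state and modifications are not reproduced.

    ```python
    # Minimal stochastic EnKF analysis step for a small state vector,
    # observing two of its components (e.g. two monitoring stations).
    import numpy as np

    rng = np.random.default_rng(1)
    n_state, n_obs, n_ens = 5, 2, 50

    H = np.zeros((n_obs, n_state))
    H[0, 0] = H[1, 3] = 1.0                        # two monitored components
    R = 0.1 * np.eye(n_obs)                        # observation-error covariance
    truth = np.array([1.0, 0.5, 0.2, 2.0, 0.8])
    y = H @ truth + rng.multivariate_normal(np.zeros(n_obs), R)

    # forecast ensemble: prior guesses scattered around the truth
    ens_f = truth + rng.normal(0.0, 0.5, (n_ens, n_state))
    A = ens_f - ens_f.mean(axis=0)
    P = A.T @ A / (n_ens - 1)                      # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain

    # update each member with a perturbed observation
    ens_a = ens_f.copy()
    for i in range(n_ens):
        y_pert = y + rng.multivariate_normal(np.zeros(n_obs), R)
        ens_a[i] += K @ (y_pert - H @ ens_a[i])

    print("forecast std (obs component 0):", ens_f[:, 0].std())
    print("analysis std (obs component 0):", ens_a[:, 0].std())
    ```

    Source-term estimation of the kind described above is typically obtained by appending the unknown parameters (release rate, plume rise height) to the state vector so that the same update corrects them as well.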

  17. Prediction of line failure fault based on weighted fuzzy dynamic clustering and improved relational analysis

    Science.gov (United States)

    Meng, Xiaocheng; Che, Renfei; Gao, Shi; He, Juntao

    2018-04-01

    With the advent of the big data age, power system research has entered a new stage. At present, the main application of big data in power systems is early-warning analysis of power equipment: by collecting relevant historical fault data, system security is improved by predicting the warning level and failure rate of different kinds of equipment under certain relational factors. In this paper, a method of line failure rate warning is proposed. First, fuzzy dynamic clustering is carried out on the collected historical information. To account for the imbalance between attributes, the coefficient of variation is used to assign corresponding weights, so that the weighted fuzzy clustering handles the data more effectively. Next, after analysing the basic idea and properties of relational analysis model theory, the grey relational model is improved by combining the slope model with the Deng model; the incremental information of the two sequences is also incorporated into the grey relational model to obtain the grey relational degree between samples. The failure rate is then predicted according to a weighting principle. Finally, the procedure is illustrated with an example, and the validity and superiority of the proposed method are verified.
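
    The grey relational degree underlying the improved model can be sketched in its basic Deng form, with an equal-weight mean and the conventional resolution coefficient rho = 0.5; the paper's slope-combined variant and the fuzzy clustering step are not reproduced, and the data are invented.

    ```python
    # Deng's grey relational degree of each sample sequence to a reference.
    import numpy as np

    def grey_relational_degree(ref, samples, rho=0.5):
        """Return one relational degree per row of `samples`."""
        ref = np.asarray(ref, dtype=float)
        samples = np.asarray(samples, dtype=float)
        diff = np.abs(samples - ref)                       # absolute differences
        dmin, dmax = diff.min(), diff.max()
        coeff = (dmin + rho * dmax) / (diff + rho * dmax)  # relational coefficients
        return coeff.mean(axis=1)                          # equal-weight degree

    ref = [0.8, 0.9, 0.7, 1.0]          # reference (e.g. a known failure pattern)
    samples = [[0.7, 0.9, 0.6, 0.9],    # similar history
               [0.1, 0.3, 0.2, 0.4]]    # dissimilar history
    deg = grey_relational_degree(ref, samples)
    print(deg)                          # first sample scores higher
    ```

    In the weighted setting described above, the plain mean over columns would be replaced by a weighted mean using the coefficient-of-variation weights.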

  18. An Improved User Selection Algorithm in Multiuser MIMO Broadcast with Channel Prediction

    Science.gov (United States)

    Min, Zhi; Ohtsuki, Tomoaki

    In multiuser MIMO-BC (Multiple-Input Multiple-Output Broadcasting) systems, user selection is important to achieve multiuser diversity. The optimal user selection algorithm tries all combinations of users to find the user group that achieves the multiuser diversity. Unfortunately, the high calculation cost of the optimal algorithm prevents its implementation, so suboptimal user selection algorithms based on the semiorthogonality of user channel vectors have been proposed instead. The purpose of this paper is to achieve multiuser diversity with a small amount of calculation. For this purpose, we propose a user selection algorithm that can improve the orthogonality of a selected user group. We also apply a channel prediction technique to a MIMO-BC system to obtain more accurate channel information at the transmitter. Simulation results show that channel prediction improves the accuracy of the channel information used for user selection, and the proposed user selection algorithm achieves higher sum rate capacity than the SUS (Semiorthogonal User Selection) algorithm. We also discuss the setting of the algorithm threshold. An analysis of calculation complexity, using the number of complex multiplications as the metric, shows that the proposed algorithm has a calculation complexity almost equal to that of the SUS algorithm; both are much lower than that of the optimal user selection algorithm.
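
    A toy sketch of greedy user selection by channel-vector orthogonality, in the spirit of SUS: pick the strongest user first, then repeatedly add the user whose channel component orthogonal to the already-selected directions is largest. The semiorthogonality threshold, the paper's proposed improvement and channel prediction are omitted; channel data and sizes are synthetic.

    ```python
    # Greedy semiorthogonal-style user selection on synthetic complex channels.
    import numpy as np

    def select_users(Hc, n_select):
        """Hc: (n_users, n_tx) complex channel matrix; returns selected indices."""
        selected, basis = [], []
        remaining = list(range(Hc.shape[0]))
        for _ in range(n_select):
            best, best_norm, best_g = None, -1.0, None
            for u in remaining:
                g = Hc[u].copy()
                for b in basis:                   # project out selected directions
                    g -= (b.conj() @ g) * b
                if np.linalg.norm(g) > best_norm:
                    best, best_norm, best_g = u, np.linalg.norm(g), g
            selected.append(best)
            remaining.remove(best)
            basis.append(best_g / np.linalg.norm(best_g))
        return selected

    rng = np.random.default_rng(2)
    Hc = (rng.normal(size=(8, 4)) + 1j * rng.normal(size=(8, 4))) / np.sqrt(2)
    print(select_users(Hc, 3))
    ```

    The exhaustive optimal search would instead evaluate the sum rate of every size-3 subset, which is what makes it impractical for large user pools.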

  20. Evaluating and predicting the effectiveness of farmland consolidation on improving agricultural productivity in China.

    Science.gov (United States)

    Fan, Yeting; Jin, Xiaobin; Xiang, Xiaomin; Gan, Le; Yang, Xuhong; Zhang, Zhihong; Zhou, Yinkang

    2018-01-01

    Food security has always been a focal issue in China. Farmland consolidation (FC) has been regarded by the Chinese government as a critical way to increase the quantity and improve the quality of farmland so as to ensure food security. FC projects have been launched nationwide; however, few studies have evaluated the effectiveness of FC at a national scale. An efficient way to evaluate the effectiveness of FC on improving agricultural productivity in China is therefore needed and is critical for future national land consolidation planning. In this study, we selected 7505 FC projects completed between 2006 and 2013 with good-quality Normalized Difference Vegetation Index (NDVI) data as samples to evaluate the effectiveness of FC. We used time-series Moderate Resolution Imaging Spectroradiometer NDVI from 2001 to 2013 to extract four indicators characterizing the agricultural productivity change of the 4442 FC projects completed between 2006 and 2010: productivity level (PL), productivity variation (PV), productivity potential (PP), and multi-cropping index (MI). On this basis, we further predicted the same four characteristics for the 3063 FC projects completed between 2011 and 2013 using Support Vector Machines (SVM). We found that FC was effective overall in improving agricultural productivity between 2006 and 2013 in China, especially in upgrading PL and improving PP. The positive effect was more prominent in southeast and eastern China. It is noteworthy that 27.30% of all 7505 projects were still ineffective in upgrading PL, the elementary improvement of agricultural productivity. Finally, we propose that location-specific factors be taken into consideration when launching FC projects and that diverse financial sources are needed to support FC. The results provide a reference for government to arrange FC projects reasonably and to formulate land consolidation planning in a way that better improves the effectiveness of FC.

  1. Big Data, Predictive Analytics, and Quality Improvement in Kidney Transplantation: A Proof of Concept.

    Science.gov (United States)

    Srinivas, T R; Taber, D J; Su, Z; Zhang, J; Mour, G; Northrup, D; Tripathi, A; Marsden, J E; Moran, W P; Mauldin, P D

    2017-03-01

    We sought proof of concept of a Big Data solution incorporating longitudinal structured and unstructured patient-level data from electronic health records (EHR) to predict graft loss (GL) and mortality. For a quality improvement initiative, GL and mortality prediction models were constructed using baseline and follow-up data (0-90 days posttransplant, structured and unstructured, for 1-year models; data up to 1 year for 3-year models) on adult solitary kidney transplant recipients transplanted during 2007-2015, as follows: Model 1: United Network for Organ Sharing (UNOS) data; Model 2: UNOS and Transplant Database (Tx Database) data; Model 3: UNOS, Tx Database and EHR comorbidity data; and Model 4: UNOS, Tx Database, EHR data, posttransplant trajectory data, and unstructured data. A 10% 3-year GL rate was observed among 891 patients (2007-2015). Layering of data sources improved model performance: Model 1: area under the curve (AUC) 0.66 (95% confidence interval [CI]: 0.60-0.72); Model 2: AUC 0.68 (95% CI: 0.61-0.74); Model 3: AUC 0.72 (95% CI: 0.66-0.77); Model 4: AUC 0.84 (95% CI: 0.79-0.89). One-year GL (AUC 0.87; Model 4) and 3-year mortality (AUC 0.84; Model 4) models performed similarly. A Big Data approach significantly adds efficacy to GL and mortality prediction models and is EHR-deployable to optimize outcomes. © 2016 The American Society of Transplantation and the American Society of Transplant Surgeons.

  2. ESLpred2: improved method for predicting subcellular localization of eukaryotic proteins

    Directory of Open Access Journals (Sweden)

    Raghava Gajendra PS

    2008-11-01

    Background: The expansion of raw protein sequence databases in the post-genomic era and the availability of fresh annotated sequences for major localizations particularly motivated us to introduce a new, improved version of our previously developed eukaryotic subcellular localization prediction method, "ESLpred". Since the subcellular localization of a protein offers essential clues about its function, the availability of a localization predictor would aid and expedite protein deciphering studies. However, the robustness of a predictor is highly dependent on the quality of the dataset and the extracted protein attributes; hence, it becomes imperative to improve the performance of the presently available method using the latest dataset and crucial input features. Results: Here, we describe the improvement in prediction performance obtained for our most popular ESLpred method using new crucial features as input to a Support Vector Machine (SVM). In addition, a recently available, highly non-redundant dataset encompassing three kingdom-specific protein sequence sets (1198 fungal, 2597 animal and 491 plant sequences) was included in the present study. First, using evolutionary information in the form of profile composition along with whole and N-terminal sequence composition as an input feature vector of 440 dimensions, overall accuracies of 72.7, 75.8 and 74.5% were achieved, respectively, after five-fold cross-validation. Performance was further enhanced when similarity-search-based results were coupled with whole and N-terminal sequence composition along with profile composition, yielding overall accuracies of 75.9, 80.8 and 76.6%, respectively; the best accuracies reported to date on the same datasets. Conclusion: These results provide confidence in the reliability and accurate predictions of the SVM modules generated in the present study using sequence and profile compositions along with similarity search
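
    The sequence-composition part of such input features can be sketched as below: a 20-dimensional amino-acid composition for the whole sequence plus one for the N-terminus. The study's full 440-dimensional vector also includes profile composition from evolutionary information, which is omitted here, and the N-terminal window of 25 residues is an assumption for illustration.

    ```python
    # Whole-sequence and N-terminal amino-acid composition features.
    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

    def composition(seq):
        """Fraction of each standard amino acid in `seq` (20 values)."""
        seq = seq.upper()
        return [seq.count(a) / len(seq) for a in AMINO_ACIDS]

    def features(seq, n_terminal=25):
        """Concatenate whole-sequence and N-terminal compositions (40 values)."""
        return composition(seq) + composition(seq[:n_terminal])

    vec = features("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
    print(len(vec))   # 40
    ```

    Vectors of this form, one per protein, are what get fed to the SVM alongside the profile-based features.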

  3. Repeated assessments of symptom severity improve predictions for risk of death among patients with cancer.

    Science.gov (United States)

    Sutradhar, Rinku; Atzema, Clare; Seow, Hsien; Earle, Craig; Porter, Joan; Barbera, Lisa

    2014-12-01

    Although prior studies show the importance of self-reported symptom scores as predictors of cancer survival, most are based on scores recorded at a single point in time. To show that information on repeated assessments of symptom severity improves predictions for risk of death and to use updated symptom information for determining whether worsening of symptom scores is associated with a higher hazard of death. This was a province-based longitudinal study of adult outpatients who had a cancer diagnosis and had assessments of symptom severity. We implemented a time-to-death Cox model with a time-varying covariate for each symptom to account for changing symptom scores over time. This model was compared with that using only a time-fixed (baseline) covariate for each symptom. The regression coefficients of each model were derived based on a randomly selected 60% of patients, and then, the predictive performance of each model was assessed via concordance probabilities when applied to the remaining 40% of patients. This study had 66,112 patients diagnosed with cancer and more than 310,000 assessments of symptoms. The use of repeated assessments of symptom scores improved predictions for risk of death compared with using only baseline symptom scores. Increased pain and fatigue and reduced appetite were the strongest predictors for death. If available, researchers should consider including changing information on symptom scores, as opposed to only baseline information on symptom scores, when examining hazard of death among patients with cancer. Worsening of pain, fatigue, and appetite may be a flag for impending death. Copyright © 2014 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.
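
    Repeated assessments enter a time-varying-covariate Cox model as "counting process" (start-stop) rows, one per inter-assessment interval, with each score carried forward until the next assessment and the event flag set only on the final interval. A minimal sketch of that data layout, with invented field names and data:

    ```python
    # Convert repeated symptom assessments to start-stop rows for a
    # time-varying-covariate survival model.
    def to_start_stop(assessments, follow_up_end, died):
        """assessments: list of (time, symptom_score) pairs sorted by time."""
        rows = []
        # each score applies from its assessment until the next one
        for (t0, score), (t1, _) in zip(assessments, assessments[1:]):
            rows.append({"start": t0, "stop": t1, "score": score, "event": 0})
        t_last, last_score = assessments[-1]
        rows.append({"start": t_last, "stop": follow_up_end,
                     "score": last_score, "event": int(died)})
        return rows

    rows = to_start_stop([(0, 2), (30, 5), (60, 8)], follow_up_end=75, died=True)
    for r in rows:
        print(r)
    ```

    A time-fixed (baseline-only) model, by contrast, would keep a single row per patient with the first score, which is exactly the information loss the study quantifies.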

  4. Improved USLE-K factor prediction: A case study on water erosion areas in China

    Directory of Open Access Journals (Sweden)

    Bin Wang

    2016-09-01

    Soil erodibility (the K-factor) is an essential factor in soil erosion prediction and conservation practice. The major obstacles to accurate, large-scale soil erodibility estimation are the lack of necessary data on soil characteristics and the misuse of K-factor calculators. In this study, we assessed the performance of the available erodibility estimators, the Universal Soil Loss Equation (USLE), the Revised Universal Soil Loss Equation (RUSLE), the Erosion Productivity Impact Calculator (EPIC) and the geometric mean diameter based (Dg) model, for different geographic regions based on the Chinese soil erodibility database (CSED). Results showed that previous estimators overestimated almost all K-values. Furthermore, only the USLE and Dg approaches could be directly and reliably applied to the black soil and loess regions. Using nonlinear best-fitting techniques, we improved soil erodibility prediction by combining Dg and soil organic matter (SOM). The NSE, R2 and RE values were 0.94, 0.67 and 9.5% after independent calibration, with similar model performance in validation. The results obtained via the proposed approach were more accurate than the former K-value predictions. Moreover, these improvements allowed us to establish a regional soil erodibility map (1:250,000 scale) of the water erosion areas in China. The mean K-value of the Chinese water erosion regions was 0.0321 t ha h (ha MJ mm)−1 with a standard deviation of 0.0107 t ha h (ha MJ mm)−1; K-values show a decreasing trend from north to south in the water erosion areas of China. The resulting soil erodibility dataset also corresponded satisfactorily to former K-values at different scales (local, regional and national).
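
    One widely used form of the Dg model, attributed to Römkens et al. and adopted in RUSLE, can be sketched as below. The coefficients are the commonly published ones, not this study's refitted Dg-SOM calibration, so treat them as an assumption to verify against the paper before reuse.

    ```python
    # Geometric-mean-diameter (Dg) soil erodibility estimate, Romkens-style
    # form as used in RUSLE; coefficients are the standard published ones.
    import math

    def k_factor_dg(dg_mm):
        """K in SI units, t ha h (ha MJ mm)^-1, from geometric mean diameter in mm."""
        z = (math.log10(dg_mm) + 1.659) / 0.7101
        return 7.594 * (0.0034 + 0.0405 * math.exp(-0.5 * z * z))

    for dg in (0.01, 0.05, 0.5):                 # silty -> sandier particle sizes
        print(f"Dg = {dg} mm  ->  K = {k_factor_dg(dg):.4f}")
    ```

    The study's improvement amounts to re-estimating such a relationship with SOM as an additional predictor, fitted to the CSED measurements.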

  5. Improving protein-protein interaction prediction using evolutionary information from low-quality MSAs.

    Science.gov (United States)

    Várnai, Csilla; Burkoff, Nikolas S; Wild, David L

    2017-01-01

    Evolutionary information stored in multiple sequence alignments (MSAs) has been used to identify the interaction interface of protein complexes, by measuring either co-conservation or co-mutation of amino acid residues across the interface. Recently, maximum entropy related correlated mutation measures (CMMs), such as direct information, which decouples direct from indirect interactions, have been developed to identify residue pairs interacting across the protein complex interface. These studies have focussed on carefully selected protein complexes with large, good-quality MSAs. In this work, we study protein complexes with a more typical MSA consisting of fewer than 400 sequences, using a set of 79 intramolecular protein complexes. Using a maximum entropy based CMM at the residue level, we develop an interface-level CMM score to be used in re-ranking docking decoys. We demonstrate that our interface-level CMM score compares favourably to the complementarity trace score, an evolutionary information-based score measuring co-conservation, when combined with the number of interface residues, a knowledge-based potential and the variability score of individual amino acid sites. We also demonstrate that, since co-mutation and co-complementarity in the MSA contain orthogonal information, the best prediction performance using evolutionary information can be achieved by combining the co-mutation information of the CMM with the co-conservation information of a complementarity trace score, predicting a near-native structure as the top prediction for 41% of the dataset. The method presented is not restricted to small MSAs, and will likely improve interface prediction for complexes with large, good-quality MSAs as well.
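
    A toy version of the underlying correlated-mutation signal is the mutual information between two MSA columns, computed from column-pair frequencies. Maximum-entropy CMMs such as direct information add a step that decouples indirect correlations, which this sketch omits; the alignment columns are invented.

    ```python
    # Mutual information between two alignment columns as a simple
    # correlated-mutation measure.
    import math
    from collections import Counter

    def column_mi(col_a, col_b):
        """MI in bits between two equal-length MSA columns."""
        n = len(col_a)
        pa, pb = Counter(col_a), Counter(col_b)
        pab = Counter(zip(col_a, col_b))
        mi = 0.0
        for (a, b), c in pab.items():
            p = c / n
            mi += p * math.log2(p / ((pa[a] / n) * (pb[b] / n)))
        return mi

    col1 = list("AAAAKKKK")
    col2 = list("DDDDEEEE")   # co-mutates perfectly with col1
    col3 = list("LILILILI")   # varies independently of col1
    print(column_mi(col1, col2), column_mi(col1, col3))
    ```

    An interface-level score of the kind described above would then aggregate such residue-pair scores over the candidate interface of each docking decoy.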

  6. MEMS based shock pulse detection sensor for improved rotary Stirling cooler end of life prediction

    Science.gov (United States)

    Hübner, M.; Münzberg, M.

    2018-05-01

    The widespread use of rotary Stirling coolers in high-performance thermal imagers used for critical 24/7 surveillance tasks justifies any effort to significantly enhance the reliability and predictable uptime of those coolers. Typically, the lifetime of the whole imaging device is limited by continuous wear and, finally, failure of the rotary compressor of the Stirling cooler, especially failure of its bearings. MTTF-based lifetime predictions, even with refined MTTF models that take operational-scenario-dependent scaling factors into account, still lack the precision to accurately forecast the end of life (EOL) of individual coolers. Consequently, preventive maintenance of individual coolers to avoid failures of the main sensor in critical operational scenarios is very costly or even useless. We have developed an integrated test method based on 'Micro Electromechanical Systems' (MEMS) sensors, which significantly improves cooler EOL prediction. Recently commercially available MEMS acceleration sensors have mechanical resonance frequencies up to 50 kHz. They are able to detect solid-borne shock pulses in the cooler structure, originating from, e.g., metal-on-metal impacts driven by periodic forces acting on moving inner parts of the rotary compressor within wear-dependent slack and play. The impact-driven transient shock pulse analysis uses only the high-frequency signal (>10 kHz) and therefore differs from the commonly used broadband low-frequency vibrational analysis of reciprocating machines. It offers a direct indicator of the individual state of wear. The predictive cooler lifetime model based on the shock pulse analysis is presented and results are discussed.
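
    The signal-processing idea can be sketched as a high-pass filter that isolates the high-frequency band of an accelerometer trace, followed by threshold-crossing counting of shock pulses. The first-order filter, cutoff, threshold and synthetic trace below are illustrative assumptions, not the paper's method or parameters.

    ```python
    # High-pass filtering plus threshold crossing counting on a synthetic
    # accelerometer trace: low-frequency vibration plus two shock bursts.
    import math

    def high_pass(x, fs, fc):
        """Simple first-order RC high-pass filter."""
        rc = 1.0 / (2 * math.pi * fc)
        dt = 1.0 / fs
        alpha = rc / (rc + dt)
        y = [x[0]]
        for i in range(1, len(x)):
            y.append(alpha * (y[-1] + x[i] - x[i - 1]))
        return y

    fs = 100_000                                      # 100 kHz sampling
    n = 2000
    x = [0.5 * math.sin(2 * math.pi * 50 * i / fs) for i in range(n)]
    for start in (500, 1500):                         # two 20 kHz shock bursts
        for i in range(start, start + 20):
            x[i] += math.sin(2 * math.pi * 20_000 * (i - start) / fs)

    y = high_pass(x, fs, fc=10_000)
    # count rising crossings of the threshold in the filtered trace
    pulses = sum(1 for a, b in zip(y, y[1:]) if a < 0.4 <= b)
    print("rising threshold crossings:", pulses)
    ```

    Trending such counts (or pulse amplitudes) over a cooler's operating hours is the kind of wear indicator the paper's EOL model builds on; the low-frequency vibration itself is almost entirely removed by the filter.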

  7. Specific Components of Pediatricians' Medication-Related Care Predict Attention-Deficit/Hyperactivity Disorder Symptom Improvement.

    Science.gov (United States)

    Epstein, Jeffery N; Kelleher, Kelly J; Baum, Rebecca; Brinkman, William B; Peugh, James; Gardner, William; Lichtenstein, Phil; Langberg, Joshua M

    2017-06-01

    The development of attention-deficit/hyperactivity disorder (ADHD) care quality measurements is a prerequisite to improving the quality of community-based pediatric care of children with ADHD. Unfortunately, the evidence base for existing ADHD care quality metrics is poor. The objective of this study was to identify which components of ADHD care best predict patient outcomes. Parents of 372 medication-naïve children in grades 1 to 5 presenting to their community-based pediatrician (N = 195) for an ADHD-related concern and who were subsequently prescribed ADHD medication were identified. Parents completed the Vanderbilt ADHD Parent Rating Scale (VAPRS) at the time ADHD was raised as a concern and then approximately 12 months after starting ADHD medication. Each patient's chart was reviewed to measure 12 different components of ADHD care. Across all children, the mean decrease in VAPRS total symptom score during the first year of treatment was 11.6 (standard deviation 10.1). Of the 12 components of ADHD care, shorter times to first contact and more teacher ratings collected in the first year of treatment significantly predicted greater decreases in patient total symptom scores. Notably, it was timeliness of contacts, defined as office visits, phone calls, or email communication, that predicted more ADHD symptom decreases. Office visits alone, in terms of number or timeliness, did not predict patient outcomes. The magnitude of ADHD symptom decrease that can be achieved with the use of ADHD medications was associated with specific components of ADHD care. Future development and modifications of ADHD quality care metrics should include these ADHD care components. Copyright © 2017 American Academy of Child and Adolescent Psychiatry. Published by Elsevier Inc. All rights reserved.

  8. Changes in the Oswestry Disability Index that predict improvement after lumbar fusion.

    Science.gov (United States)

    Djurasovic, Mladen; Glassman, Steven D; Dimar, John R; Crawford, Charles H; Bratcher, Kelly R; Carreon, Leah Y

    2012-11-01

    Clinical studies use both disease-specific and generic health outcomes measures. Disease-specific measures focus on the health domains most relevant to the clinical population, while generic measures assess overall health-related quality of life. There is little information about which domains of the Oswestry Disability Index (ODI) are most important in determining improvement in overall health-related quality of life, as measured by the 36-Item Short Form Health Survey (SF-36), after lumbar spinal fusion. The objective of the study was to determine which clinical elements assessed by the ODI most influence improvement of overall health-related quality of life. A single tertiary spine center database was used to identify patients undergoing lumbar fusion for standard degenerative indications. Patients with complete preoperative and 2-year outcomes measures were included. Pearson correlation was used to assess the relationship between improvement in each item of the ODI and improvement in the SF-36 physical component summary (PCS) score, as well as achievement of the SF-36 PCS minimum clinically important difference (MCID). Multivariate regression modeling was used to examine which items of the ODI best predicted achievement of the SF-36 PCS MCID. The effect size and standardized response mean were calculated for each item of the ODI. A total of 1104 patients met the inclusion criteria (674 female and 430 male patients). The mean age at surgery was 57 years. All items of the ODI showed significant correlations with the change in SF-36 PCS score and achievement of MCID for the SF-36 PCS, but only pain intensity, walking, and social life had r values > 0.4, reflecting moderate correlation. These 3 variables were also the dimensions that were independent predictors of the SF-36 PCS, and they were the only dimensions with effect sizes and standardized response means that were moderate to large. Of the health dimensions measured by the ODI, pain intensity, walking, and social life thus best predict improvement in overall health-related quality of life after lumbar fusion.

  9. Linking precipitation, evapotranspiration and soil moisture content for the improvement of predictability over land

    Science.gov (United States)

    Catalano, Franco; Alessandri, Andrea; De Felice, Matteo

    2013-04-01

    Climate change scenarios are expected to show an intensification of the hydrological cycle together with modifications of evapotranspiration and soil moisture content. Evapotranspiration changes have already been evidenced for the end of the 20th century. The variance of evapotranspiration has been shown to be strongly related to the variance of precipitation over land. Nevertheless, the feedbacks between evapotranspiration, soil moisture and precipitation are not yet completely understood. Furthermore, soil moisture reservoirs are associated with a memory, and thus their proper initialization may have a strong influence on predictability. In particular, the linkage between precipitation and soil moisture is modulated by the effects on evapotranspiration. Therefore, the investigation of the coupling between these variables appears to be of primary importance for the improvement of predictability over the continents. The coupled manifold (CM) technique (Navarra and Tribbia 2005) is a method designed to separate the effects of the variability of two connected variables. This method has proved successful for the analysis of different climate fields, such as precipitation, vegetation and sea surface temperature. In particular, the coupled variables reveal patterns that may be connected with specific phenomena, thus providing hints regarding potential predictability. In this study we applied the CM to recent observational datasets of precipitation (from CRU), evapotranspiration (from GIMMS and MODIS satellite-based estimates) and soil moisture content (from ESA) spanning a time period of 23 years (1984-2006) with a monthly frequency. Different data stratifications (monthly, seasonal, summer JJA) were employed to analyze the persistence of the patterns and their characteristic time scales and seasonality. The three variables considered show a significant coupling among each other. Interestingly, most of the signal of the

  10. Mutation of Gly195 of the ChlH subunit of Mg-chelatase reduces chlorophyll and further disrupts PS II assembly in a Ycf48-deficient strain of Synechocystis sp. PCC 6803

    Directory of Open Access Journals (Sweden)

    Tim Crawford

    2016-07-01

    Full Text Available Biogenesis of the photosystems in oxygenic phototrophs requires co-translational insertion of chlorophyll a. The first committed step of chlorophyll a biosynthesis is the insertion of a Mg2+ ion into the tetrapyrrole intermediate protoporphyrin IX, catalyzed by Mg-chelatase. We have identified a Synechocystis sp. PCC 6803 strain with a spontaneous mutation in chlH that results in a Gly195-to-Glu substitution in a conserved region of the catalytic subunit of Mg-chelatase. Mutant strains containing the ChlH Gly195-to-Glu mutation were generated using a two-step protocol that introduced the chlH gene into a putative neutral site in the chromosome prior to deletion of the native gene. The Gly195-to-Glu mutation resulted in strains with decreased chlorophyll a. A strain carrying the ChlH Gly195-to-Glu mutation combined with deletion of the PS II assembly factor Ycf48 did not grow photoautotrophically. In addition, the ChlH-G195E:ΔYcf48 strain showed impaired PS II activity and decreased assembly of PS II centers in comparison to the ΔYcf48 strain. We suggest that the decreased chlorophyll in the ChlH-G195E mutant provides a background in which to screen for the role of assembly factors that are not essential under optimal growth conditions.

  11. Improving density functional tight binding predictions of free energy surfaces for peptide condensation reactions in solution

    Science.gov (United States)

    Kroonblawd, Matthew; Goldman, Nir

    First principles molecular dynamics using highly accurate density functional theory (DFT) is a common tool for predicting chemistry, but the accessible time and space scales are often orders of magnitude beyond the resolution of experiments. Semi-empirical methods such as density functional tight binding (DFTB) offer up to a thousand-fold reduction in required CPU hours and can approach experimental scales. However, standard DFTB parameter sets lack good transferability, and calibration for a particular system is usually necessary. Force matching the pairwise repulsive energy term in DFTB to short DFT trajectories can improve the former's accuracy for chemistry that is fast relative to DFT simulation times. (Contract DE-AC52-07NA27344.)

  12. Improving Density Functional Tight Binding Predictions of Free Energy Surfaces for Slow Chemical Reactions in Solution

    Science.gov (United States)

    Kroonblawd, Matthew; Goldman, Nir

    2017-06-01

    First principles molecular dynamics using highly accurate density functional theory (DFT) is a common tool for predicting chemistry, but the accessible time and space scales are often orders of magnitude beyond the resolution of experiments. Semi-empirical methods such as density functional tight binding (DFTB) offer up to a thousand-fold reduction in required CPU hours and can approach experimental scales. However, standard DFTB parameter sets lack good transferability, and calibration for a particular system is usually necessary. Force matching the pairwise repulsive energy term in DFTB to short DFT trajectories can improve the former's accuracy for reactions that are fast relative to DFT simulation times. (Contract DE-AC52-07NA27344.)
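    The force-matching idea, fitting DFTB's pairwise repulsive term so that the total forces reproduce short DFT trajectories, reduces to a linear least-squares problem once the repulsion is expanded in a basis. A minimal sketch with a synthetic polynomial repulsion (the cutoff, coefficients, and noise level are invented for illustration):

```python
import numpy as np

# Hypothetical force-matching sketch: fit a short-range pairwise repulsion
# E_rep(r) = sum_k c_k (r_cut - r)^k for r < r_cut to residual forces
# (reference DFT force minus the electronic-only DFTB force).
rng = np.random.default_rng(1)
r_cut = 3.0
r = rng.uniform(1.0, 2.9, 500)               # sampled pair distances
true_c = np.array([0.0, 2.0, -1.5, 0.4])     # "unknown" coefficients

def rep_force(r, c):
    # Force is -dE/dr = sum_k k * c_k * (r_cut - r)^(k-1)
    return sum(k * ck * (r_cut - r) ** (k - 1) for k, ck in enumerate(c) if k > 0)

f_ref = rep_force(r, true_c) + rng.normal(0, 0.01, r.size)  # noisy reference

# Linear least squares in the coefficients c_1..c_3.
A = np.column_stack([k * (r_cut - r) ** (k - 1) for k in (1, 2, 3)])
c_fit, *_ = np.linalg.lstsq(A, f_ref, rcond=None)
print("fitted coefficients:", c_fit)
```

    A real parameterization would fit splines per element pair and subtract the electronic DFTB forces from full DFT forces, but the normal-equations structure is the same.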

  13. Improving Sediment Transport Prediction by Assimilating Satellite Images in a Tidal Bay Model of Hong Kong

    Directory of Open Access Journals (Sweden)

    Peng Zhang

    2014-03-01

    Full Text Available Numerical models, among the major tools for sediment dynamics studies in complex coastal waters, are now benefiting from remote sensing images that are easily available as model inputs. The present study explored various methods of integrating remote sensing ocean color data into a numerical model to improve sediment transport prediction in a tide-dominated bay in Hong Kong, Deep Bay. Two sea surface sediment datasets delineated from satellite images from the Moderate Resolution Imaging Spectroradiometer (MODIS) were assimilated into a coastal ocean model of the bay for one tidal cycle. Validation against in situ measurements showed that the remote sensing sediment information enhanced the sediment transport model's predictive ability. Model results showed that root mean square errors of forecast sediment, both at the surface layer and at the vertical layers, from the model with satellite sediment assimilation are reduced by at least 36% over the model without assimilation.
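    The benefit of assimilating satellite estimates can be illustrated with the simplest scalar analysis update; the error variances below are invented for the sketch, and the paper's actual assimilation schemes are more elaborate:

```python
import numpy as np

# Minimal sketch of a scalar analysis update (nudging / optimal interpolation):
# analysis = background + K * (observation - background), with the gain K
# chosen from assumed error variances. All values are illustrative.
rng = np.random.default_rng(2)
truth = 50.0                                    # "true" surface sediment (mg/L)
background = truth + rng.normal(0, 10.0, 1000)  # model forecasts (error var 100)
obs = truth + rng.normal(0, 5.0, 1000)          # satellite estimates (error var 25)

var_b, var_o = 100.0, 25.0
K = var_b / (var_b + var_o)                     # optimal gain = 0.8
analysis = background + K * (obs - background)

rmse = lambda x: np.sqrt(np.mean((x - truth) ** 2))
print(f"background RMSE {rmse(background):.1f} -> analysis RMSE {rmse(analysis):.1f}")
```

    The analysis error variance is var_b * var_o / (var_b + var_o), always below both input variances, which is the mechanism behind the reported RMSE reduction.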

  14. Transverse charge and magnetization densities: Improved chiral predictions down to b = 1 fm

    Energy Technology Data Exchange (ETDEWEB)

    Alarcon, Jose Manuel [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); Hiller Blin, Astrid N. [Johannes Gutenberg Univ., Mainz (Germany); Vicente Vacas, Manuel J. [Spanish National Research Council (CSIC), Valencia (Spain). Univ. of Valencia (UV), Inst. de Fisica Corpuscular; Weiss, Christian [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States)

    2018-03-01

    The transverse charge and magnetization densities provide insight into the nucleon's inner structure. In the periphery, the isovector components are clearly dominant and can be computed in a model-independent way by means of a combination of chiral effective field theory (χEFT) and dispersion analysis. With a novel N/D method, we incorporate the pion electromagnetic form factor data into the χEFT calculation, thus taking into account the pion-rescattering effects and the ρ-meson pole. As a consequence, we are able to reliably compute the densities down to distances b ≈ 1 fm, achieving a dramatic improvement of the results compared to traditional χEFT calculations, while remaining predictive and having controlled uncertainties.

  15. Improved pump turbine transient behaviour prediction using a Thoma number-dependent hillchart model

    International Nuclear Information System (INIS)

    Manderla, M; Koutnik, J; Kiniger, K

    2014-01-01

    Water hammer phenomena are important issues for high head hydro power plants. Especially if several reversible pump-turbines are connected to the same waterways, there may be strong interactions between the hydraulic machines. The prediction and coverage of all relevant load cases is challenging and difficult using classical simulation models. On the basis of a recent pumped-storage project, dynamic measurements motivate an improved modeling approach making use of the Thoma number dependency of the actual turbine behaviour. The proposed approach is validated for several transient scenarios and turns out to increase the correlation between measurement and simulation results significantly. By applying a fully automated simulation procedure, broad operating ranges can be covered, which provides a consistent insight into critical load case scenarios. This finally allows the optimization of the closing strategy and hence the overall power plant performance.

  16. Improved pump turbine transient behaviour prediction using a Thoma number-dependent hillchart model

    Science.gov (United States)

    Manderla, M.; Kiniger, K.; Koutnik, J.

    2014-03-01

    Water hammer phenomena are important issues for high head hydro power plants. Especially if several reversible pump-turbines are connected to the same waterways, there may be strong interactions between the hydraulic machines. The prediction and coverage of all relevant load cases is challenging and difficult using classical simulation models. On the basis of a recent pumped-storage project, dynamic measurements motivate an improved modeling approach making use of the Thoma number dependency of the actual turbine behaviour. The proposed approach is validated for several transient scenarios and turns out to increase the correlation between measurement and simulation results significantly. By applying a fully automated simulation procedure, broad operating ranges can be covered, which provides a consistent insight into critical load case scenarios. This finally allows the optimization of the closing strategy and hence the overall power plant performance.

  17. A New Hybrid Method for Improving the Performance of Myocardial Infarction Prediction

    Directory of Open Access Journals (Sweden)

    Hojatollah Hamidi

    2016-06-01

    Full Text Available Abstract Introduction: Myocardial infarction, also known as heart attack, normally occurs due to such causes as smoking, family history, diabetes, and so on. It is recognized as one of the leading causes of death in the world. Therefore, the present study aimed to evaluate the performance of classification models in predicting myocardial infarction, using a feature selection method that includes forward selection and a genetic algorithm. Materials & Methods: The myocardial infarction data set used in this study contains information related to 519 visitors to Shahid Madani Specialized Hospital of Khorramabad, Iran. This data set includes 33 features. The proposed method includes a hybrid feature selection method to enhance the performance of classification algorithms. The first step of this method selects features using forward selection. In the second step, the selected features are given to a genetic algorithm, in order to select the best features. The classification algorithms AdaBoost, Naïve Bayes, J48 decision tree and SimpleCART are applied to the data set with the selected features to predict myocardial infarction. Results: The best results were achieved after applying the proposed feature selection method; they were obtained via the SimpleCART and J48 algorithms with accuracies of 96.53% and 96.34%, respectively. Conclusion: Based on the results, the performance of the classification algorithms is improved. Applying the proposed feature selection method along with classification algorithms can therefore be considered a reliable approach to predicting myocardial infarction.
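    The first (forward-selection) stage of such a hybrid method can be sketched as follows; the genetic-algorithm refinement stage is omitted, and the R²-based scorer and stopping threshold are assumptions made for illustration, not the paper's criteria:

```python
import numpy as np

# Greedy forward selection: repeatedly add the feature that most improves a
# least-squares fit, stopping when the improvement becomes negligible.
rng = np.random.default_rng(3)
n, p = 300, 10
X = rng.normal(size=(n, p))
y = 2 * X[:, 0] - 3 * X[:, 4] + rng.normal(0, 0.5, n)  # only features 0 and 4 matter

def r2(cols):
    A = np.column_stack([X[:, list(cols)], np.ones(n)])
    resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    return 1 - resid.var() / y.var()

selected, best = [], -np.inf
while True:
    scores = {j: r2(selected + [j]) for j in range(p) if j not in selected}
    if not scores:
        break
    j, s = max(scores.items(), key=lambda kv: kv[1])
    if s - best < 0.01:          # stop when improvement is negligible
        break
    selected.append(j)
    best = s
print("selected features:", selected)
```

    In the paper's pipeline, the surviving subset would then seed a genetic algorithm, and the final subsets would be scored by the classifiers' accuracy rather than R².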

  18. Mapping Soil Properties of Africa at 250 m Resolution: Random Forests Significantly Improve Current Predictions.

    Directory of Open Access Journals (Sweden)

    Tomislav Hengl

    Full Text Available 80% of arable land in Africa has low soil fertility and suffers from physical soil problems. Additionally, significant amounts of nutrients are lost every year due to unsustainable soil management practices. This is partially the result of insufficient use of soil management knowledge. To help bridge the soil information gap in Africa, the Africa Soil Information Service (AfSIS) project was established in 2008. Over the period 2008-2014, the AfSIS project compiled two point data sets: the Africa Soil Profiles (legacy) database and the AfSIS Sentinel Site database. These data sets contain over 28 thousand sampling locations and represent the most comprehensive soil sample data sets of the African continent to date. Utilizing these point data sets in combination with a large number of covariates, we have generated a series of spatial predictions of soil properties relevant to agricultural management: organic carbon, pH, sand, silt and clay fractions, bulk density, cation-exchange capacity, total nitrogen, exchangeable acidity, Al content and exchangeable bases (Ca, K, Mg, Na). We specifically investigate differences between two predictive approaches: random forests and linear regression. Results of 5-fold cross-validation demonstrate that the random forests algorithm consistently outperforms the linear regression algorithm, with average decreases of 15-75% in Root Mean Squared Error (RMSE) across soil properties and depths. Fitting and running random forests models takes an order of magnitude more time and the modelling success is sensitive to artifacts in the input data, but as long as quality-controlled point data are provided, an increase in soil mapping accuracy can be expected. Results also indicate that globally predicted soil classes (USDA Soil Taxonomy, especially Alfisols and Mollisols) help improve continental scale soil property mapping, and are among the most important predictors. This indicates a promising potential for transferring
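    The evaluation protocol above, 5-fold cross-validated RMSE comparison between two learners, can be sketched without external dependencies. A mean-only baseline stands in for one of the two models, since a random-forest implementation is beyond a short sketch; the data are synthetic:

```python
import numpy as np

# 5-fold cross-validated RMSE comparison between two stand-in predictors:
# a mean-only baseline and ordinary least-squares linear regression.
rng = np.random.default_rng(4)
n = 500
X = rng.normal(size=(n, 3))
y = 1.5 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(0, 1.0, n)

folds = np.array_split(rng.permutation(n), 5)
rmse_base, rmse_lin = [], []
for test in folds:
    train = np.setdiff1d(np.arange(n), test)
    # Fit linear model with intercept on the training fold.
    A = np.column_stack([X[train], np.ones(train.size)])
    beta, *_ = np.linalg.lstsq(A, y[train], rcond=None)
    pred = np.column_stack([X[test], np.ones(test.size)]) @ beta
    rmse_lin.append(np.sqrt(np.mean((y[test] - pred) ** 2)))
    rmse_base.append(np.sqrt(np.mean((y[test] - y[train].mean()) ** 2)))

print(f"baseline {np.mean(rmse_base):.2f} vs linear {np.mean(rmse_lin):.2f}")
```

    Substituting a random forest for either model (e.g. scikit-learn's RandomForestRegressor) leaves the cross-validation harness unchanged.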

  19. Improved predictions of nuclear reaction rates for astrophysics applications with the TALYS reaction code

    International Nuclear Information System (INIS)

    Goriely, S.; Hilaire, S.; Koning, A.J.

    2008-01-01

    Nuclear reaction rates for astrophysics applications are traditionally determined on the basis of Hauser-Feshbach reaction codes, like MOST. These codes use simplified schemes to calculate the capture reaction cross section on a given target nucleus, not only in its ground state but also on the different thermally populated states of the stellar plasma at a given temperature. Such schemes include a number of approximations that have never been tested, such as an approximate width fluctuation correction, the neglect of delayed particle emission during the electromagnetic decay cascade or the absence of the pre-equilibrium contribution at increasing incident energies. New developments have been brought to the reaction code TALYS to estimate the Maxwellian-averaged reaction rates of astrophysics relevance. These new developments make it possible to calculate the reaction cross sections and the corresponding astrophysics rates with improved accuracy. The TALYS predictions for the thermonuclear rates of astrophysics relevance are presented and compared with those obtained with the MOST code on the basis of the same nuclear ingredients for nuclear structure properties, optical model potential, nuclear level densities and γ-ray strength. It is shown that, in particular, the pre-equilibrium process significantly influences the astrophysics rates of exotic neutron-rich nuclei. The reciprocity theorem traditionally used in astrophysics to determine photo-rates is also shown not to be valid for exotic nuclei. The predictions obtained with different nuclear inputs are also analyzed to provide an estimate of the theoretical uncertainties still affecting the reaction rate prediction far away from the experimentally known regions. (authors)
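    A Maxwellian-averaged rate of the kind TALYS computes reduces to a one-dimensional integral over the cross section. A sketch in dimensionless units with an illustrative constant cross section, chosen so the quadrature can be checked against the exact mean speed:

```python
import numpy as np

# <σv> = sqrt(8/(π μ)) (kT)^(-3/2) ∫ σ(E) E exp(-E/kT) dE, in units kT = μ = 1.
kT, mu = 1.0, 1.0
E = np.linspace(1e-6, 50 * kT, 200_000)
sigma = np.ones_like(E)               # illustrative constant cross section

integrand = sigma * E * np.exp(-E / kT)
integral = np.sum((integrand[1:] + integrand[:-1]) * np.diff(E)) / 2  # trapezoid
rate = np.sqrt(8 / (np.pi * mu)) * kT**-1.5 * integral

# For constant σ the exact result is σ times the mean speed sqrt(8 kT / (π μ)).
exact = np.sqrt(8 * kT / (np.pi * mu))
print(rate, exact)
```

    Real rate calculations replace the constant σ with energy-dependent Hauser-Feshbach cross sections and sum over thermally populated target states, but the averaging integral keeps this form.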

  20. Subject-specific knee joint geometry improves predictions of medial tibiofemoral contact forces

    Science.gov (United States)

    Gerus, Pauline; Sartori, Massimo; Besier, Thor F.; Fregly, Benjamin J.; Delp, Scott L.; Banks, Scott A.; Pandy, Marcus G.; D’Lima, Darryl D.; Lloyd, David G.

    2013-01-01

    Estimating tibiofemoral joint contact forces is important for understanding the initiation and progression of knee osteoarthritis. However, tibiofemoral contact force predictions are influenced by many factors including muscle forces and anatomical representations of the knee joint. This study aimed to investigate the influence of subject-specific geometry and knee joint kinematics on the prediction of tibiofemoral contact forces using a calibrated EMG-driven neuromusculoskeletal model of the knee. One participant fitted with an instrumented total knee replacement walked at a self-selected speed while medial and lateral tibiofemoral contact forces, ground reaction forces, whole-body kinematics, and lower-limb muscle activity were simultaneously measured. The combination of generic and subject-specific knee joint geometry and kinematics resulted in four different OpenSim models used to estimate muscle-tendon lengths and moment arms. The subject-specific geometric model was created from CT scans, and the subject-specific knee joint kinematics representing the translation of the tibia relative to the femur was obtained from fluoroscopy. The EMG-driven model was calibrated using one walking trial, but with three different cost functions that tracked the knee flexion/extension moments with and without constraint over the estimated joint contact forces. The calibrated models then predicted the medial and lateral tibiofemoral contact forces for five other walking trials. The use of subject-specific models with minimization of the peak tibiofemoral contact forces improved the accuracy of the medial contact forces by 47% and of the lateral contact forces by 7%, compared with the use of a generic musculoskeletal model. PMID:24074941

  1. Comparison of measured and predicted thermal mixing tests using improved finite difference technique

    International Nuclear Information System (INIS)

    Hassan, Y.A.; Rice, J.G.; Kim, J.H.

    1983-01-01

    The numerical diffusion introduced by the use of upwind formulations in the finite difference solution of the flow and energy equations for thermal mixing problems (cold water injection after a small-break LOCA in a PWR) was examined. The relative importance of numerical diffusion in the flow equations, compared to its effect on the energy equation, was demonstrated. The flow field equations were solved using both first-order accurate upwind and second-order accurate differencing schemes. The energy equation was treated using the conventional upwind and a mass-weighted skew upwind scheme. Results presented for a simple test case showed that, for thermal mixing problems, the numerical diffusion was most significant in the energy equation. The numerical diffusion effect in the flow field equations was much less significant. A comparison of predictions using the skew upwind and the conventional upwind schemes with experimental data from a two-dimensional thermal mixing test is presented. The use of the skew upwind scheme showed a significant improvement in the accuracy of the steady-state predicted temperatures. (orig./HP)
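    The numerical diffusion of a first-order upwind scheme, the effect the study examines, is easy to demonstrate on a 1-D advected step; the grid size, Courant number, and step count below are arbitrary choices:

```python
import numpy as np

# First-order upwind advection of a sharp front: the exact solution just
# shifts the step, while the upwind solution smears it (numerical diffusion).
nx, c = 200, 0.5                              # grid points, Courant number
u = np.where(np.arange(nx) < 50, 1.0, 0.0)    # initial step profile

for _ in range(80):                           # 80 steps * c = 40 cells of travel
    u[1:] = u[1:] - c * (u[1:] - u[:-1])      # upwind update, flow left -> right

# Width of the smeared transition zone (cells strictly between 0.05 and 0.95);
# the exact solution would have width 0.
width = int(np.sum((u > 0.05) & (u < 0.95)))
print("transition width in cells:", width)
```

    A second-order or skew-upwind scheme sharply reduces this artificial smearing, which is why the energy equation's treatment mattered most in the study.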

  2. A robust model predictive control strategy for improving the control performance of air-conditioning systems

    International Nuclear Information System (INIS)

    Huang Gongsheng; Wang Shengwei; Xu Xinhua

    2009-01-01

    This paper presents a robust model predictive control strategy for improving the supply air temperature control of air-handling units by dealing with the associated uncertainties and constraints directly. This strategy uses a first-order plus time-delay model with uncertain time-delay and system gain to describe air-conditioning process of an air-handling unit usually operating at various weather conditions. The uncertainties of the time-delay and system gain, which imply the nonlinearities and the variable dynamic characteristics, are formulated using an uncertainty polytope. Based on this uncertainty formulation, an offline LMI-based robust model predictive control algorithm is employed to design a robust controller for air-handling units which can guarantee a good robustness subject to uncertainties and constraints. The proposed robust strategy is evaluated in a dynamic simulation environment of a variable air volume air-conditioning system in various operation conditions by comparing with a conventional PI control strategy. The robustness analysis of both strategies under different weather conditions is also presented.
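    The first-order plus time-delay process model at the core of the strategy can be simulated with a simple forward-Euler step; the gain, time constant, and delay values below are illustrative, not taken from the paper:

```python
import numpy as np

# Discrete simulation of a first-order-plus-time-delay (FOPTD) process:
# tau * dy/dt = -y + Kp * u(t - L). Kp (gain) and L (delay) are the
# uncertain parameters the robust MPC design must tolerate.
Kp, tau, L, dt = 2.0, 60.0, 20.0, 1.0     # gain, time constant (s), delay (s), step (s)
n, d = 600, int(L / dt)
u = np.ones(n)                            # unit step in the control signal
y = np.zeros(n)
for t in range(n - 1):
    u_delayed = u[t - d] if t >= d else 0.0
    y[t + 1] = y[t] + dt / tau * (-y[t] + Kp * u_delayed)

print(f"output after {n * dt:.0f} s: {y[-1]:.2f} (steady state = {Kp})")
```

    The uncertainty polytope in the paper corresponds to letting Kp and L range over intervals and requiring the controller to satisfy the constraints for every vertex.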

  3. Improving ELM-Based Service Quality Prediction by Concise Feature Extraction

    Directory of Open Access Journals (Sweden)

    Yuhai Zhao

    2015-01-01

    Full Text Available Web services often run in highly dynamic and changing environments, which generate huge volumes of data. Thus, it is impractical to monitor the change of every QoS parameter to trigger timely precautions, due to the high computational costs associated with the process. To address the problem, this paper proposes an active service quality prediction method based on the extreme learning machine (ELM). First, we extract web service trace logs and QoS information from the service log and convert them into feature vectors. Second, the proposed EC rules enable triggering QoS precautions as early as possible with high confidence. An efficient prefix-tree-based mining algorithm, together with some effective pruning rules, is developed to mine such rules. Finally, we study how to extract a set of diversified features as the representative of all mined results. The problem is proved to be NP-hard. A greedy algorithm is presented to approximate the optimal solution. Experimental results show that an ELM trained on the selected feature subsets can efficiently improve the reliability and the earliness of service quality prediction.
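    The extreme learning machine itself is compact enough to sketch: a random, untrained hidden layer followed by a linear readout fitted in a single least-squares solve. The target function and layer sizes below are illustrative:

```python
import numpy as np

# Minimal extreme learning machine (ELM) for regression.
rng = np.random.default_rng(5)
n, d, hidden = 400, 2, 50
X = rng.uniform(-1, 1, (n, d))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2           # nonlinear target

W = rng.normal(size=(d, hidden))                 # random input weights (never trained)
b = rng.normal(size=hidden)
H = np.tanh(X @ W + b)                           # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # only the readout is fitted

pred = H @ beta
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(f"training RMSE: {rmse:.3f}")
```

    The one-shot solve is what gives ELMs the low training cost that makes them attractive for the online QoS-prediction setting described above.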

  4. Predicting Urban Medical Services Demand in China: An Improved Grey Markov Chain Model by Taylor Approximation.

    Science.gov (United States)

    Duan, Jinli; Jiao, Feng; Zhang, Qishan; Lin, Zhibin

    2017-08-06

    The sharp increase of the aging population has raised the pressure on the current limited medical resources in China. To better allocate resources, a more accurate prediction of medical service demand is urgently needed. This study aims to improve the prediction of medical services demand in China. To achieve this aim, the study combines Taylor approximation with the Grey Markov chain model, and develops a new model named Taylor-Markov Chain GM (1,1) (T-MCGM (1,1)). The new model has been tested using historical data on medical services for the treatment of diabetes, heart disease, and cerebrovascular disease from 1997 to 2015 in China. The model provides a prediction of medical service demand for these three types of disease up to 2022. The results reveal an enormous growth of urban medical service demand in the future. The findings provide practical implications for the Health Administrative Department to allocate medical resources, and help hospitals to manage investments in medical facilities.
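    The underlying GM (1,1) grey model (without the Taylor and Markov extensions the paper adds) can be sketched directly: accumulate the series, fit the whitened equation dx1/dt + a*x1 = b by least squares, and forecast by differencing the fitted exponential. The demand series below is invented:

```python
import numpy as np

# Basic GM(1,1) grey forecasting model.
x0 = np.array([102.0, 110.0, 120.0, 131.0, 143.0])   # illustrative demand series
x1 = np.cumsum(x0)                                   # accumulated (AGO) series
z = 0.5 * (x1[1:] + x1[:-1])                         # background values
B = np.column_stack([-z, np.ones_like(z)])
a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]     # development / grey input

def x1_hat(k):                                       # fitted accumulated series
    return (x0[0] - b / a) * np.exp(-a * k) + b / a

k = np.arange(len(x0) + 1)
forecast = np.diff(x1_hat(k))                        # back to the original scale
print("one-step-ahead forecast:", forecast[-1])
```

    The paper's T-MCGM (1,1) then corrects the residuals of this exponential fit with a Taylor-approximated Markov chain; the GM (1,1) core stays as above.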

  5. Analysis and Prediction on Vehicle Ownership Based on an Improved Stochastic Gompertz Diffusion Process

    Directory of Open Access Journals (Sweden)

    Huapu Lu

    2017-01-01

    Full Text Available This paper aims at introducing a new improved stochastic differential equation related to the Gompertz curve for the projection of vehicle ownership growth. This diffusion model explains the relationship between vehicle ownership and GDP per capita, which has been studied as a Gompertz-like function before. The main innovations of the process lie in two parts: by modifying the deterministic part of the original Gompertz equation, the model can represent the remaining slow increase after the S-shaped curve has reached its saturation level; by introducing the stochastic differential equation, the model can better fit the real data when there are fluctuations. Such comparisons are carried out based on data from the US, UK, Japan, and Korea with a time span of 1960–2008. It turns out that the new process behaves better in fitting curves and predicting short-term growth. Finally, a prediction of Chinese vehicle ownership up to 2025 is presented with the new model, as China is in the initial stage of motorization with much fluctuation in growth.
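    A stochastic Gompertz-type diffusion of the general kind described above can be simulated with an Euler–Maruyama step; the drift form and all parameter values here are illustrative assumptions, not the paper's modified equation:

```python
import numpy as np

# Euler-Maruyama simulation of a stochastic Gompertz-type diffusion:
# dV = r * V * ln(K / V) dt + sigma * V dW.
rng = np.random.default_rng(6)
r, K, sigma = 0.3, 500.0, 0.05       # growth rate, saturation level, noise scale
dt, steps = 0.1, 600
V = np.empty(steps + 1)
V[0] = 20.0                          # initial vehicles per 1000 people (hypothetical)

for t in range(steps):
    dW = rng.normal(0, np.sqrt(dt))  # Brownian increment
    V[t + 1] = V[t] + r * V[t] * np.log(K / V[t]) * dt + sigma * V[t] * dW

print(f"final level: {V[-1]:.0f} (saturation K = {K:.0f})")
```

    The multiplicative noise term is what lets the model absorb the growth fluctuations that a deterministic Gompertz curve cannot fit.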

  6. On the importance of paleoclimate modelling for improving predictions of future climate change

    Directory of Open Access Journals (Sweden)

    J. C. Hargreaves

    2009-12-01

    Full Text Available We use an ensemble of runs from the MIROC3.2 AGCM with slab-ocean to explore the extent to which mid-Holocene simulations are relevant to predictions of future climate change. The results are compared with similar analyses for the Last Glacial Maximum (LGM and pre-industrial control climate. We suggest that the paleoclimate epochs can provide some independent validation of the models that is also relevant for future predictions. Considering the paleoclimate epochs, we find that the stronger global forcing and hence larger climate change at the LGM makes this likely to be the more powerful one for estimating the large-scale changes that are anticipated due to anthropogenic forcing. The phenomena in the mid-Holocene simulations which are most strongly correlated with future changes (i.e., the mid to high northern latitude land temperature and monsoon precipitation do, however, coincide with areas where the LGM results are not correlated with future changes, and these are also areas where the paleodata indicate significant climate changes have occurred. Thus, these regions and phenomena for the mid-Holocene may be useful for model improvement and validation.

  7. Improved methods for predicting peptide binding affinity to MHC class II molecules.

    Science.gov (United States)

    Jensen, Kamilla Kjaergaard; Andreatta, Massimo; Marcatili, Paolo; Buus, Søren; Greenbaum, Jason A; Yan, Zhen; Sette, Alessandro; Peters, Bjoern; Nielsen, Morten

    2018-01-06

    Major histocompatibility complex class II (MHC-II) molecules are expressed on the surface of professional antigen-presenting cells where they display peptides to T helper cells, which orchestrate the onset and outcome of many host immune responses. Understanding which peptides will be presented by the MHC-II molecule is therefore important for understanding the activation of T helper cells and can be used to identify T-cell epitopes. We here present updated versions of two MHC-II-peptide binding affinity prediction methods, NetMHCII and NetMHCIIpan. These were constructed using an extended data set of quantitative MHC-peptide binding affinity data obtained from the Immune Epitope Database covering HLA-DR, HLA-DQ, HLA-DP and H-2 mouse molecules. We show that training with this extended data set improved the performance for peptide binding predictions for both methods. Both methods are publicly available at www.cbs.dtu.dk/services/NetMHCII-2.3 and www.cbs.dtu.dk/services/NetMHCIIpan-3.2. © 2018 John Wiley & Sons Ltd.
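    Binding-affinity predictors in the NetMHC family are typically trained not on raw IC50 values but on a log-transformed score; a sketch of that commonly described transform (the clamping to [0, 1] is an added convenience, and the 50,000 nM cap is the conventional choice):

```python
import math

# Map an IC50 in nM onto [0, 1] via 1 - log(IC50)/log(50000),
# so that strong binders (low IC50) score near 1.
def affinity_score(ic50_nm: float, cap: float = 50000.0) -> float:
    score = 1.0 - math.log(ic50_nm) / math.log(cap)
    return min(1.0, max(0.0, score))   # clamp to [0, 1]

for ic50 in (1.0, 50.0, 500.0, 50000.0):
    print(ic50, round(affinity_score(ic50), 3))
```

    Under this transform the conventional 500 nM binding threshold corresponds to a score of roughly 0.426, which is why trained networks are often thresholded near that value.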

  8. Improved predictions of nuclear reaction rates with the TALYS reaction code for astrophysical applications

    International Nuclear Information System (INIS)

    Goriely, S.; Hilaire, S.; Koning, A.J

    2008-01-01

    Context. Nuclear reaction rates of astrophysical applications are traditionally determined on the basis of Hauser-Feshbach reaction codes. These codes adopt a number of approximations that have never been tested, such as a simplified width fluctuation correction, the neglect of delayed or multiple-particle emission during the electromagnetic decay cascade, or the absence of the pre-equilibrium contribution at increasing incident energies. Aims. The reaction code TALYS has been recently updated to estimate the Maxwellian-averaged reaction rates that are of astrophysical relevance. These new developments enable the reaction rates to be calculated with increased accuracy and reliability and the approximations of previous codes to be investigated. Methods. The TALYS predictions for the thermonuclear rates of relevance to astrophysics are detailed and compared with those derived by widely-used codes for the same nuclear ingredients. Results. It is shown that TALYS predictions may differ significantly from those of previous codes, in particular for nuclei for which no or little nuclear data is available. The pre-equilibrium process is shown to influence the astrophysics rates of exotic neutron-rich nuclei significantly. For the first time, the Maxwellian- averaged (n, 2n) reaction rate is calculated for all nuclei and its competition with the radiative capture rate is discussed. Conclusions. The TALYS code provides a new tool to estimate all nuclear reaction rates of relevance to astrophysics with improved accuracy and reliability. (authors)

  9. Reranking candidate gene models with cross-species comparison for improved gene prediction

    Directory of Open Access Journals (Sweden)

    Pereira Fernando CN

    2008-10-01

    Full Text Available Abstract Background Most gene finders score candidate gene models with state-based methods, typically HMMs, by combining local properties (coding potential, splice donor and acceptor patterns, etc.). Competing models with similar state-based scores may be distinguishable with additional information. In particular, functional and comparative genomics datasets may help to select among competing models of comparable probability by exploiting features likely to be associated with the correct gene models, such as conserved exon/intron structure or protein sequence features. Results We have investigated the utility of a simple post-processing step for selecting among a set of alternative gene models, using global scoring rules to rerank competing models for more accurate prediction. For each gene locus, we first generate the K best candidate gene models using the gene finder Evigan, and then rerank these models using comparisons with putative orthologous genes from closely-related species. Candidate gene models with lower scores in the original gene finder may be selected if they exhibit strong similarity to probable orthologs in coding sequence, splice site location, or signal peptide occurrence. Experiments on Drosophila melanogaster demonstrate that reranking based on cross-species comparison outperforms the best gene models identified by Evigan alone, and also outperforms the comparative gene finders GeneWise and Augustus+. Conclusion Reranking gene models with cross-species comparison improves gene prediction accuracy. This straightforward method can be readily adapted to incorporate additional lines of evidence, as it requires only a ranked source of candidate gene models.

  10. Beyond clay: Towards an improved set of variables for predicting soil organic matter content

    Science.gov (United States)

    Rasmussen, Craig; Heckman, Katherine; Wieder, William R.; Keiluweit, Marco; Lawrence, Corey R.; Berhe, Asmeret Asefaw; Blankinship, Joseph C.; Crow, Susan E.; Druhan, Jennifer; Hicks Pries, Caitlin E.; Marin-Spiotta, Erika; Plante, Alain F.; Schadel, Christina; Schimel, Joshua P.; Sierra, Carlos A.; Thompson, Aaron; Wagai, Rota

    2018-01-01

    Improved quantification of the factors controlling soil organic matter (SOM) stabilization at continental to global scales is needed to inform projections of the largest actively cycling terrestrial carbon pool on Earth, and its response to environmental change. Biogeochemical models rely almost exclusively on clay content to modify rates of SOM turnover and fluxes of climate-active CO2 to the atmosphere. Emerging conceptual understanding, however, suggests other soil physicochemical properties may predict SOM stabilization better than clay content. We addressed this discrepancy by synthesizing data from over 5,500 soil profiles spanning continental scale environmental gradients. Here, we demonstrate that other physicochemical parameters are much stronger predictors of SOM content, with clay content having relatively little explanatory power. We show that exchangeable calcium strongly predicted SOM content in water-limited, alkaline soils, whereas with increasing moisture availability and acidity, iron- and aluminum-oxyhydroxides emerged as better predictors, demonstrating that the relative importance of SOM stabilization mechanisms scales with climate and acidity. These results highlight the urgent need to modify biogeochemical models to better reflect the role of soil physicochemical properties in SOM cycling.

  11. Can video games be used to predict or improve laparoscopic skills?

    Science.gov (United States)

    Rosenberg, Bradley H; Landsittel, Douglas; Averch, Timothy D

    2005-04-01

    Performance of laparoscopic surgery requires adequate hand-eye coordination. Video games are an effective way to judge one's hand-eye coordination, and practicing these games may improve one's skills. Our goal was to see if there is a correlation between skill in video games and skill in laparoscopy. Also, we hoped to demonstrate that practicing video games can improve one's laparoscopic skills. Eleven medical students (nine male, two female) volunteered to participate. On day 1, each student played three commercially available video games (Top Spin, XSN Sports; Project Gotham Racing 2, Bizarre Creations; and Amped 2, XSN Sports) for 30 minutes on an Xbox (Microsoft, Seattle, WA) and was judged both objectively and subjectively. Next, the students performed four laparoscopic tasks (object transfer, tracing a figure-of-eight, suture placement, and knot-tying) in a swine model and were assessed for time to complete the task, number of errors committed, and hand-eye coordination. The students were then randomized to control (group A) or "training" (i.e., video game practicing; group B) arms. Two weeks later, all students repeated the laparoscopic skills laboratory and were reassessed. Spearman correlation coefficients demonstrated a significant relation between many of the parameters, particularly time to complete each task and hand-eye coordination in the different games. There was a weaker association between video game performance and both laparoscopic errors committed and hand-eye coordination. Group B subjects did not improve significantly over those in group A in any measure (P > 0.05 for all). Video game aptitude appears to predict the level of laparoscopic skill in the novice surgeon. In this study, practicing video games did not improve one's laparoscopic skill significantly, but a larger study with more practice time could prove games to be helpful.
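
    The analysis hinges on Spearman rank correlation, which can be computed from scratch. The game scores and task times below are invented for illustration, not the study's measurements.

```python
# Minimal Spearman rank correlation with average ranks for ties, of the
# kind used to relate video-game metrics to laparoscopic task times.

def ranks(xs):
    """Average 1-based ranks, handling ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i+1..j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

game_score = [95, 80, 70, 60, 50]     # higher = better at the game
task_time = [41, 55, 60, 72, 80]      # seconds to finish the drill
rho = spearman(game_score, task_time)
```

    With perfectly opposite rankings as above, rho comes out at exactly -1.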

  12. Prognostic durability of liver fibrosis tests and improvement in predictive performance for mortality by combining tests.

    Science.gov (United States)

    Bertrais, Sandrine; Boursier, Jérôme; Ducancelle, Alexandra; Oberti, Frédéric; Fouchard-Hubert, Isabelle; Moal, Valérie; Calès, Paul

    2017-06-01

    There is currently no recommended time interval between noninvasive fibrosis measurements for monitoring chronic liver diseases. We determined how long a single liver fibrosis evaluation may accurately predict mortality, and assessed whether combining tests improves prognostic performance. We included 1559 patients with chronic liver disease and available baseline liver stiffness measurement (LSM) by Fibroscan, aspartate aminotransferase to platelet ratio index (APRI), FIB-4, Hepascore, and FibroMeter V2G. Median follow-up was 2.8 years, during which 262 (16.8%) patients died, with 115 liver-related deaths. All fibrosis tests were able to predict mortality, although APRI (and FIB-4 for liver-related mortality) showed lower overall discriminative ability than the other tests (differences in Harrell's C-index: P < 0.05). The prognostic durability of a single measurement decreased with baseline fibrosis level, falling to 1 year in patients with significant fibrosis; the model for end-stage liver disease (MELD) score was also considered. Patients were randomly split into training and testing sets. In the training set, blood tests and LSM were independent predictors of all-cause mortality. The best-fit multivariate model included age, sex, LSM, and FibroMeter V2G with C-index = 0.834 (95% confidence interval, 0.803-0.862). The prognostic model for liver-related mortality included the same covariates with C-index = 0.868 (0.831-0.902). In the testing set, the multivariate models had higher prognostic accuracy than FibroMeter V2G or LSM alone for all-cause mortality and FibroMeter V2G alone for liver-related mortality. The prognostic durability of a single baseline fibrosis evaluation depends on the liver fibrosis level. Combining LSM with a blood fibrosis test improves mortality risk assessment. © 2016 Journal of Gastroenterology and Hepatology Foundation and John Wiley & Sons Australia, Ltd.
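
    Harrell's C-index, the discrimination measure used throughout this study, can be sketched directly: among comparable patient pairs, count how often the higher risk score goes with the shorter survival time. The toy data below are purely illustrative.

```python
# Harrell's C-index for right-censored survival data. A pair is comparable
# only if the earlier of the two times is an observed death (not censored).

def c_index(times, events, risk):
    conc = ties = comp = 0
    n = len(times)
    for i in range(n):
        for j in range(i + 1, n):
            a, b = (i, j) if times[i] < times[j] else (j, i)
            if times[a] == times[b] or not events[a]:
                continue  # tied times or earlier time censored: skip pair
            comp += 1
            if risk[a] > risk[b]:
                conc += 1
            elif risk[a] == risk[b]:
                ties += 1
    return (conc + 0.5 * ties) / comp

times = [2, 5, 5.5, 8, 11]          # follow-up (years)
events = [1, 1, 0, 1, 0]            # 1 = death observed, 0 = censored
risk = [0.9, 0.6, 0.3, 0.7, 0.1]    # fibrosis-test risk scores
c = c_index(times, events, risk)    # 7 of 8 comparable pairs concordant
```

    A C-index of 0.5 would mean no discrimination; 1.0 means perfect ranking of event times.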

  13. Density-dependent microbial turnover improves soil carbon model predictions of long-term litter manipulations

    Science.gov (United States)

    Georgiou, Katerina; Abramoff, Rose; Harte, John; Riley, William; Torn, Margaret

    2017-04-01

    Climatic, atmospheric, and land-use changes all have the potential to alter soil microbial activity via abiotic effects on soil or mediated by changes in plant inputs. Recently, many promising microbial models of soil organic carbon (SOC) decomposition have been proposed to advance understanding and prediction of climate and carbon (C) feedbacks. Most of these models, however, exhibit unrealistic oscillatory behavior and SOC insensitivity to long-term changes in C inputs. Here we diagnose the sources of instability in four models that span the range of complexity of these recent microbial models, by sequentially adding complexity to a simple model to include microbial physiology, a mineral sorption isotherm, and enzyme dynamics. We propose a formulation that introduces density-dependence of microbial turnover, which acts to limit population sizes and reduce oscillations. We compare these models to results from 24 long-term C-input field manipulations, including the Detritus Input and Removal Treatment (DIRT) experiments, to show that there are clear metrics that can be used to distinguish and validate the inherent dynamics of each model structure. We find that widely used first-order models and microbial models without density-dependence cannot readily capture the range of long-term responses observed across the DIRT experiments as a direct consequence of their model structures. The proposed formulation improves predictions of long-term C-input changes, and implies greater SOC storage associated with CO2-fertilization-driven increases in C inputs over the coming century compared to common microbial models. Finally, we discuss our findings in the context of improving microbial model behavior for inclusion in Earth System Models.
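
    The structural point, that density-dependent microbial turnover changes the long-term sensitivity of SOC to carbon inputs, can be reproduced with a deliberately tiny two-pool model. This is a sketch under invented parameters, not any of the four models analyzed in the study: with linear turnover (beta = 1) the steady-state substrate pool is analytically independent of inputs, while beta = 2 lets it respond.

```python
# Minimal substrate (S) / microbial biomass (B) model; turnover is
# k * B**beta, so beta > 1 gives density-dependent microbial turnover.
# All parameter values are illustrative only.

def steady_state_S(inputs, beta, years=1000, dt=0.02,
                   vmax=4.0, km=100.0, eps=0.5, k=0.2):
    """Euler-integrate to steady state and return the substrate pool."""
    S, B = 100.0, 2.0
    for _ in range(int(years / dt)):
        uptake = vmax * S * B / (km + S)        # Michaelis-Menten uptake
        S += (inputs - uptake) * dt
        B += (eps * uptake - k * B ** beta) * dt
    return S

s1, s2 = steady_state_S(10.0, 1), steady_state_S(20.0, 1)  # linear
d1, d2 = steady_state_S(10.0, 2), steady_state_S(20.0, 2)  # density-dep.
# Doubling inputs barely moves S when beta = 1, but raises it when beta = 2.
```

    Analytically, beta = 1 gives S* = km / (vmax * eps / k - 1), with inputs cancelling out entirely, which is exactly the input-insensitivity the abstract criticizes.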

  14. Improve accuracy and sensitivity in glycan structure prediction by matching glycan isotope abundance

    International Nuclear Information System (INIS)

    Xu Guang; Liu Xin; Liu Qingyan; Zhou Yanhong; Li Jianjun

    2012-01-01

    Highlights: ► A glycan isotope pattern recognition strategy for glycomics. ► A new data preprocessing procedure to detect ion peaks in a given MS spectrum. ► A linear soft-margin SVM classification for isotope pattern recognition. - Abstract: Mass spectrometry (MS) is a powerful technique for the determination of glycan structures and is capable of providing qualitative and quantitative information. Recent developments in computational methods offer an opportunity to use glycan structure databases and de novo algorithms for extracting valuable information from MS or MS/MS data. However, detecting low-intensity peaks that are buried in noisy data sets is still a challenge, and an algorithm for accurate prediction and annotation of glycan structures from MS data is highly desirable. The present study describes a novel algorithm for glycan structure prediction by matching glycan isotope abundance (mGIA), which takes isotope masses, abundances, and spacing into account. We constructed a comprehensive database containing 808 glycan compositions and their corresponding isotope abundances. Unlike most previously reported methods, we took into account not only the m/z values of the peaks but also the logarithmic Euclidean distance between the calculated and detected isotope vectors. A linear classifier, obtained by training the mGIA algorithm with datasets of three different human tissue samples from the Consortium for Functional Glycomics (CFG) in association with a Support Vector Machine (SVM), was proposed to improve the accuracy of automatic glycan structure annotation. In addition, an effective data preprocessing procedure, including baseline subtraction, smoothing, peak centroiding and composition matching for extracting correct isotope profiles from MS data, was incorporated. The algorithm was validated by analyzing mouse kidney MS data from the CFG, resulting in the identification of 6 more glycan compositions than the previous annotation.
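
    The core matching idea, scoring candidate compositions by the logarithmic Euclidean distance between calculated and detected isotope vectors, can be sketched as follows. The peak heights are invented; the real mGIA algorithm additionally uses masses, spacing, and an SVM-trained classifier.

```python
# Score candidate glycan compositions against a measured isotope envelope
# via the log Euclidean distance of the normalised abundance vectors.
import math

def log_euclidean_distance(theory, measured):
    t = [x / sum(theory) for x in theory]
    m = [x / sum(measured) for x in measured]
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(t, m)))
    return math.log(d) if d > 0 else float("-inf")

measured = [100.0, 55.0, 21.0, 6.0]   # detected isotope peak heights
cand_a = [100.0, 54.0, 20.0, 5.5]     # theoretical pattern: close match
cand_b = [100.0, 80.0, 45.0, 20.0]    # theoretical pattern: poor match
best = min(("A", cand_a), ("B", cand_b),
           key=lambda c: log_euclidean_distance(c[1], measured))[0]
```

    The logarithm spreads out the very small distances of good matches, which is convenient as an input feature for a downstream classifier.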

  15. Plaque Structural Stress Estimations Improve Prediction of Future Major Adverse Cardiovascular Events After Intracoronary Imaging.

    Science.gov (United States)

    Brown, Adam J; Teng, Zhongzhao; Calvert, Patrick A; Rajani, Nikil K; Hennessy, Orla; Nerlekar, Nitesh; Obaid, Daniel R; Costopoulos, Charis; Huang, Yuan; Hoole, Stephen P; Goddard, Martin; West, Nick E J; Gillard, Jonathan H; Bennett, Martin R

    2016-06-01

    Although plaque rupture is responsible for most myocardial infarctions, few high-risk plaques identified by intracoronary imaging actually result in future major adverse cardiovascular events (MACE). Nonimaging markers of individual plaque behavior are therefore required. Rupture occurs when plaque structural stress (PSS) exceeds material strength. We therefore assessed whether PSS could predict future MACE in high-risk nonculprit lesions identified on virtual-histology intravascular ultrasound. Baseline nonculprit lesion features associated with MACE during long-term follow-up (median: 1115 days) were determined in 170 patients undergoing 3-vessel virtual-histology intravascular ultrasound. MACE was associated with plaque burden ≥70% (hazard ratio: 8.6; 95% confidence interval, 2.5-30.6; P<0.001) and minimal luminal area ≤4 mm² (hazard ratio: 6.6; 95% confidence interval, 2.1-20.1; P=0.036), although absolute event rates for high-risk lesions remained <10%. PSS derived from virtual-histology intravascular ultrasound was subsequently estimated in nonculprit lesions responsible for MACE (n=22) versus matched control lesions (n=22). PSS showed marked heterogeneity across and between similar lesions but was significantly increased in MACE lesions at high-risk regions, including plaque burden ≥70% (13.9±11.5 versus 10.2±4.7; P<0.001) and thin-cap fibroatheroma (14.0±8.9 versus 11.6±4.5; P=0.02). Furthermore, PSS improved the ability of virtual-histology intravascular ultrasound to predict MACE in plaques with plaque burden ≥70% (adjusted log-rank, P=0.003) and minimal luminal area ≤4 mm² (P=0.002). Plaques responsible for MACE had larger superficial calcium inclusions, which acted to increase PSS (P<0.05). Baseline PSS is increased in plaques responsible for MACE and improves the ability of intracoronary imaging to predict events. Biomechanical modeling may complement plaque imaging for risk stratification of coronary nonculprit lesions.

  16. Can survival prediction be improved by merging gene expression data sets?

    Directory of Open Access Journals (Sweden)

    Haleh Yasrebi

    BACKGROUND: High-throughput gene expression profiling technologies, generating a wealth of data, are increasingly used for characterization of tumor biopsies for clinical trials. By applying machine learning algorithms to such clinically documented data sets, one hopes to improve tumor diagnosis, prognosis, as well as prediction of treatment response. However, the limited number of patients enrolled in a single trial study limits the power of machine learning approaches due to over-fitting. One could partially overcome this limitation by merging data from different studies. Nevertheless, such data sets differ from each other with regard to technical biases, patient selection criteria and follow-up treatment. It is therefore not clear at all whether the advantage of increased sample size outweighs the disadvantage of higher heterogeneity of merged data sets. Here, we present a systematic study to answer this question specifically for breast cancer data sets. We use survival prediction based on Cox regression as an assay to measure the added value of merged data sets. RESULTS: Using time-dependent Receiver Operating Characteristic-Area Under the Curve (ROC-AUC) and hazard ratio as performance measures, we see overall no significant improvement or deterioration of survival prediction with merged data sets as compared to individual data sets. This apparently was due to the fact that a few genes with strong prognostic power were not available on all microarray platforms and thus were not retained in the merged data sets. Surprisingly, we found that the overall best performance was achieved with a single-gene predictor consisting of CYB5D1. CONCLUSIONS: Merging did not deteriorate performance on average despite (a) the diversity of microarray platforms used, (b) the heterogeneity of patient cohorts, (c) the heterogeneity of breast cancer disease, (d) substantial variation in time to death or relapse, and (e) the reduced number of genes in the merged data sets.

  17. Improving Flood Prediction By the Assimilation of Satellite Soil Moisture in Poorly Monitored Catchments.

    Science.gov (United States)

    Alvarez-Garreton, C. D.; Ryu, D.; Western, A. W.; Crow, W. T.; Su, C. H.; Robertson, D. E.

    2014-12-01

    Flood prediction in poorly monitored catchments is among the greatest challenges faced by hydrologists. To address this challenge, an increasing number of studies in the last decade have explored methods to integrate various existing observations from ground and satellites. One approach in particular is the assimilation of satellite soil moisture (SM-DA) into rainfall-runoff models. The rationale is that satellite soil moisture (SSM) can be used to correct model soil water states, enabling more accurate prediction of catchment response to precipitation and thus better streamflow. However, there is still no consensus on the most effective SM-DA scheme and how this might depend on catchment scale, climate characteristics, runoff mechanisms, and the model and SSM products used. In this work, an operational SM-DA scheme was set up in the poorly monitored, large (>40,000 km2), semi-arid Warrego catchment situated in eastern Australia. We assimilated passive and active SSM products into the probability distributed model (PDM) using an ensemble Kalman filter. We explored factors influencing the SM-DA framework, including relatively new techniques to remove model-observation bias, estimate observation errors and represent model errors. Furthermore, we explored the advantages of accounting for the spatial distribution of forcing and channel routing processes within the catchment by implementing and comparing lumped and semi-distributed model setups. Flood prediction is improved by SM-DA, with a 30% reduction of the average root-mean-squared difference of the ensemble prediction, a 20% reduction of the false alarm ratio and a 40% increase of the ensemble mean Nash-Sutcliffe efficiency. SM-DA skill does not significantly change with different observation error assumptions, but the skill strongly depends on the observational bias correction technique used, and more importantly, on the performance of the open-loop model before assimilation. Our findings imply that proper…
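
    The assimilation step itself is compact. Below is a minimal, purely illustrative scalar ensemble Kalman filter update of a soil moisture state using a perturbed-observation scheme; the PDM model and the study's bias correction and error estimation techniques are not represented.

```python
# Scalar ensemble Kalman filter update for a soil moisture state.
# All numbers (spread, observation, errors) are invented for illustration.
import random

random.seed(7)
n = 100
# Open-loop ensemble of modelled soil moisture; spread encodes model error
ens = [0.25 + random.gauss(0, 0.04) for _ in range(n)]
obs, obs_err = 0.32, 0.03          # bias-corrected satellite soil moisture

mean = sum(ens) / n
var = sum((x - mean) ** 2 for x in ens) / (n - 1)
gain = var / (var + obs_err ** 2)  # Kalman gain (state observed directly)

# Perturbed-observation update: each member assimilates a noisy obs copy
post = [x + gain * (obs + random.gauss(0, obs_err) - x) for x in ens]
post_mean = sum(post) / n
post_var = sum((x - post_mean) ** 2 for x in post) / (n - 1)
```

    The update pulls the ensemble mean toward the retrieval and shrinks the ensemble spread, which is what improves the subsequent streamflow forecast.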

  18. HEPS4Power - Extended-range Hydrometeorological Ensemble Predictions for Improved Hydropower Operations and Revenues

    Science.gov (United States)

    Bogner, Konrad; Monhart, Samuel; Liniger, Mark; Spirig, Christoph; Jordan, Fred; Zappa, Massimiliano

    2015-04-01

    In recent years, substantial progress has been achieved in the operational prediction of floods and hydrological droughts with up to ten days of lead time. Both the public and the private sectors currently use probabilistic runoff forecasts to monitor water resources and take action when critical conditions are expected. The use of extended-range predictions with lead times exceeding 10 days is not yet established. The hydropower sector in particular might benefit greatly from using hydrometeorological forecasts for the next 15 to 60 days to optimize the operations and revenues of its watersheds, dams, water intakes, turbines and pumps. The new Swiss Competence Centers in Energy Research (SCCER) aim at boosting research related to energy issues in Switzerland. The objective of HEPS4POWER is to demonstrate that operational extended-range hydrometeorological forecasts have the potential to become very valuable tools for fine-tuning the production of energy from hydropower systems. The project team covers a specific system-oriented value chain, starting from the collection and forecasting of meteorological data (MeteoSwiss), leading to the operational application of state-of-the-art hydrological models (WSL) and terminating with experience in data presentation and power-production forecasts for end users (e-dric.ch). The first task of HEPS4POWER will be the downscaling and post-processing of ensemble extended-range meteorological forecasts (EPS). The goal is to provide well-tailored forecasts of a probabilistic nature that are statistically reliable and localized at catchment or even station level. The hydrology-related task will consist of feeding the post-processed meteorological forecasts into a HEPS using a multi-model approach, implementing models of different complexity. Also in the case of the hydrological ensemble predictions, post-processing techniques need to be tested in order to improve the quality of the…
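
    One standard post-processing technique of the kind such projects test is empirical quantile mapping. The sketch below is a generic illustration with invented climatologies, not the specific method selected in HEPS4POWER.

```python
# Empirical quantile mapping: correct a forecast value by finding its
# quantile in the forecast climatology and reading the same quantile off
# the observed climatology (nearest-rank). Toy climatologies below.

def quantile_map(value, fcst_clim, obs_clim):
    fc = sorted(fcst_clim)
    ob = sorted(obs_clim)
    # empirical non-exceedance probability of `value` among forecasts
    rank = sum(1 for v in fc if v <= value)
    p = rank / (len(fc) + 1)
    # same quantile in the observed climatology (nearest rank, clamped)
    idx = min(len(ob) - 1, max(0, round(p * (len(ob) + 1)) - 1))
    return ob[idx]

# The model runs systematically drier than the gauge record (factor ~2):
fcst_clim = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
obs_clim = [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
corrected = quantile_map(7, fcst_clim, obs_clim)
```

    A forecast of 7, the 8th of 10 climatological values, maps onto the 8th observed value, 14, removing the systematic dry bias while preserving rank.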

  19. Regression Trees Identify Relevant Interactions: Can This Improve the Predictive Performance of Risk Adjustment?

    Science.gov (United States)

    Buchner, Florian; Wasem, Jürgen; Schillo, Sonja

    2017-01-01

    Risk equalization formulas have been refined since their introduction about two decades ago. Because of the complexity and the abundance of possible interactions between the variables used, hardly any interactions are considered. A regression tree is used to systematically search for interactions, a methodologically new approach in risk equalization. Analyses are based on a data set of nearly 2.9 million individuals from a major German social health insurer. A two-step approach is applied: in the first step, a regression tree is built on the basis of the learning data set. Terminal nodes characterized by more than one morbidity-group split represent interaction effects of different morbidity groups. In the second step, the 'traditional' weighted least squares regression equation is expanded by adding interaction terms for all interactions detected by the tree, and regression coefficients are recalculated. The resulting risk adjustment formula shows an improvement in the adjusted R² from 25.43% to 25.81% on the evaluation data set. Predictive ratios are calculated for subgroups affected by the interactions. The R² improvement detected is only marginal. According to the sample-level performance measures used, omitting a considerable number of morbidity interactions entails no relevant loss in accuracy. Copyright © 2015 John Wiley & Sons, Ltd.
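
    The two-step idea can be miniaturized: an interaction that a tree's terminal node would isolate (here simply hard-coded into synthetic cost data) is captured by adding the corresponding product term to the least squares formula, raising R². Everything below, including the ordinary (unweighted) least squares fit, is illustrative.

```python
# Binary morbidity flags x1, x2 with an extra cost only when both are
# present. Compare R^2 of a main-effects model vs. one with x1*x2 added.

def ols_r2(rows, y):
    """Least squares via normal equations (Gaussian elimination); returns R^2."""
    k = len(rows[0])
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    aty = [sum(r[i] * t for r, t in zip(rows, y)) for i in range(k)]
    for c in range(k):                       # forward elimination w/ pivoting
        p = max(range(c, k), key=lambda r: abs(ata[r][c]))
        ata[c], ata[p] = ata[p], ata[c]
        aty[c], aty[p] = aty[p], aty[c]
        for r in range(c + 1, k):
            f = ata[r][c] / ata[c][c]
            for cc in range(c, k):
                ata[r][cc] -= f * ata[c][cc]
            aty[r] -= f * aty[c]
    beta = [0.0] * k
    for c in reversed(range(k)):             # back substitution
        beta[c] = (aty[c] - sum(ata[c][j] * beta[j]
                                for j in range(c + 1, k))) / ata[c][c]
    pred = [sum(b * v for b, v in zip(beta, r)) for r in rows]
    ybar = sum(y) / len(y)
    ss_res = sum((t - q) ** 2 for t, q in zip(y, pred))
    ss_tot = sum((t - ybar) ** 2 for t in y)
    return 1 - ss_res / ss_tot

data = [(0, 0), (0, 1), (1, 0), (1, 1)] * 25
cost = [1000 + 2000 * x1 + 1500 * x2 + 4000 * x1 * x2 for x1, x2 in data]
main = [[1, x1, x2] for x1, x2 in data]
full = [[1, x1, x2, x1 * x2] for x1, x2 in data]
r2_main, r2_full = ols_r2(main, cost), ols_r2(full, cost)
```

    The main-effects model systematically mis-prices the doubly-morbid group; the product term absorbs exactly that residual.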

  20. Concurrent Modeling of Hydrodynamics and Interaction Forces Improves Particle Deposition Predictions.

    Science.gov (United States)

    Jin, Chao; Ren, Carolyn L; Emelko, Monica B

    2016-04-19

    It is widely believed that media surface roughness enhances particle deposition; numerous, but inconsistent, examples of this effect have been reported. Here, a new mathematical framework describing the effects of hydrodynamics and interaction forces on particle deposition on rough spherical collectors in the absence of an energy barrier was developed and validated. In addition to quantifying the DLVO force, the model includes improved descriptions of flow field profiles and hydrodynamic retardation functions. This work demonstrates that hydrodynamic effects can significantly alter particle deposition relative to expectations when only the DLVO force is considered. Moreover, the combined effects of hydrodynamics and interaction forces on particle deposition on rough, spherical media are not additive, but synergistic. Notably, the developed model's particle deposition predictions are in closer agreement with experimental observations than those from current models, demonstrating the importance of including roughness impacts in particle deposition description/simulation. Consideration of hydrodynamic contributions to particle deposition may help to explain discrepancies between model-based expectations and experimental outcomes and improve descriptions of particle deposition during physicochemical filtration in systems with nonsmooth collector surfaces.

  1. Improvements to executive function during exercise training predict maintenance of physical activity over the following year

    Directory of Open Access Journals (Sweden)

    John Best

    2014-05-01

    Previous studies have shown that exercise training benefits cognitive, neural, and physical health markers in older adults. It is likely that these positive effects will diminish if participants return to sedentary lifestyles following training cessation. Theory posits that the neurocognitive processes underlying self-regulation, namely executive function (EF), are important to maintaining positive health behaviors. Therefore, we examined whether better EF performance in older women would predict greater adherence to routine physical activity (PA) over 1 year following a 12-month resistance exercise training randomized controlled trial. The study sample consisted of 125 community-dwelling women aged 65 to 75 years. Our primary outcome measure was self-reported PA, as measured by the Physical Activity Scale for the Elderly (PASE), assessed on a monthly basis from month 13 to month 25. Executive function was assessed using the Stroop Test at baseline (month 0) and post-training (month 12). Latent growth curve analyses showed that, on average, PA decreased during the follow-up period but at a decelerating rate. Women who made greater improvements to EF during the training period showed better adherence to PA during the 1-year follow-up period (β = -.36, p < .05); other associations did not reach significance (p > .10). Overall, these findings suggest that improving EF plays an important role in whether older women maintain higher levels of PA following exercise training and that this association is only apparent after training when environmental support for PA is low.

  2. Improved Storm Monitoring and Prediction for the San Francisco Bay Area

    Science.gov (United States)

    Cifelli, R.; Chandrasekar, V.; Anderson, M.; Davis, G.

    2017-12-01

    The Advanced Quantitative Precipitation Information (AQPI) System is a multi-faceted project to improve precipitation and hydrologic monitoring, prediction, and decision support for the San Francisco Bay Area. The Bay Area faces a multitude of threats from extreme events, including disrupted transportation from flooded roads and railroad lines, water management challenges related to storm water, river and reservoir management, and storm-related damage demanding emergency response. The threats occur on spatial scales ranging from local communities to the entire region and time scales ranging from hours to days. These challenges will be exacerbated by future sea level rise, more extreme weather events and increased vulnerabilities. AQPI is a collaboration of federal, state and local governments with assistance from the research community. Led by NOAA's Earth System Research Laboratory, in partnership with the Cooperative Institute for Research in the Atmosphere, USGS, and Scripps, AQPI is a four-year effort funded in part by a grant from the California Department of Water Resources' Integrated Regional Water Management Program. The Sonoma County Water Agency is serving as the local sponsor of the project. Other local participants include the Santa Clara Valley Water District, San Francisco Public Utilities Commission, and the Bay Area Flood Protection Agencies Association. AQPI will provide both improved observing capabilities and a suite of numerical forecast models to produce accurate and timely information for the benefit of flood management, emergency response, water quality, ecosystem services, water supply and transportation management for the Bay Area. The resulting information will support decision making to mitigate flood risks, secure water supplies, minimize water quality impacts to the Bay from combined sewer overflows, and provide improved lead time on coastal and Bay inundation from extreme storms like Atmospheric Rivers (ARs). The project is expected to…

  3. Improving protein fold recognition by extracting fold-specific features from predicted residue-residue contacts.

    Science.gov (United States)

    Zhu, Jianwei; Zhang, Haicang; Li, Shuai Cheng; Wang, Chao; Kong, Lupeng; Sun, Shiwei; Zheng, Wei-Mou; Bu, Dongbo

    2017-12-01

    Accurate recognition of protein fold types is a key step for template-based prediction of protein structures. Existing approaches to fold recognition mainly exploit features derived from alignments of the query protein against templates. These approaches have been shown to be successful for fold recognition at the family level, but usually fail at the superfamily/fold levels. To overcome this limitation, one of the key points is to explore more structurally informative features of proteins. Although residue-residue contacts carry abundant structural information, how to thoroughly exploit this information for fold recognition still remains a challenge. In this study, we present an approach (called DeepFR) to improve fold recognition at the superfamily/fold levels. The basic idea of our approach is to extract fold-specific features from predicted residue-residue contacts of proteins using the deep convolutional neural network (DCNN) technique. Based on these fold-specific features, we calculated the similarity between the query protein and templates, and then assigned the query protein the fold type of the most similar template. DCNN has shown excellent performance in image feature extraction and image recognition; the rationale underlying the application of DCNN to fold recognition is that contact likelihood maps are essentially analogous to images, as both display compositional hierarchy. Experimental results on the LINDAHL dataset suggest that, even using the extracted fold-specific features alone, our approach achieved a success rate comparable to the state-of-the-art approaches. When further combining these features with traditional alignment-related features, the success rate of our approach increased to 92.3%, 82.5% and 78.8% at the family, superfamily and fold levels, respectively, which is about 18% higher than the state-of-the-art approach at the fold level, 6% higher at the superfamily level and 1% higher at the family level. An independent assessment on the SCOP_TEST dataset showed consistent…
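
    The analogy between contact maps and images can be illustrated with a hand-crafted stand-in for the learned DCNN filters: pool each map into a coarse fixed-size feature vector and rank templates by feature similarity. The 4x4 maps and average pooling below are toys, not DeepFR's architecture.

```python
# Treat a residue-residue contact map like an image: average-pool it into
# a fixed-size feature grid, then compare proteins by cosine similarity.

def pooled_features(contact_map, grid=2):
    n = len(contact_map)
    step = n // grid
    feats = []
    for bi in range(grid):
        for bj in range(grid):
            block = [contact_map[i][j]
                     for i in range(bi * step, (bi + 1) * step)
                     for j in range(bj * step, (bj + 1) * step)]
            feats.append(sum(block) / len(block))
    return feats

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / ((sum(a * a for a in u) ** 0.5)
                  * (sum(b * b for b in v) ** 0.5))

# Query and template_a share a two-domain diagonal pattern; template_b
# has the opposite (anti-diagonal) contact layout.
query = [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]]
template_a = [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]]
template_b = [[0, 0, 1, 1], [0, 0, 1, 1], [1, 1, 0, 0], [1, 1, 0, 0]]
fq = pooled_features(query)
best = max(["a", "b"], key=lambda t: cosine(fq, pooled_features(
    template_a if t == "a" else template_b)))
```

    Because pooling yields a fixed-length vector regardless of protein size, maps of different dimensions become directly comparable, one reason the image analogy works.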

  4. Improving salt marsh digital elevation model accuracy with full-waveform lidar and nonparametric predictive modeling

    Science.gov (United States)

    Rogers, Jeffrey N.; Parrish, Christopher E.; Ward, Larry G.; Burdick, David M.

    2018-03-01

    Salt marsh vegetation tends to increase vertical uncertainty in light detection and ranging (lidar) derived elevation data, often causing the data to become ineffective for analysis of topographic features governing tidal inundation or vegetation zonation. Previous attempts at improving lidar data collected in salt marsh environments range from simply computing and subtracting the global elevation bias to more complex methods such as computing vegetation-specific, constant correction factors. The vegetation specific corrections can be used along with an existing habitat map to apply separate corrections to different areas within a study site. It is hypothesized here that correcting salt marsh lidar data by applying location-specific, point-by-point corrections, which are computed from lidar waveform-derived features, tidal-datum based elevation, distance from shoreline and other lidar digital elevation model based variables, using nonparametric regression will produce better results. The methods were developed and tested using full-waveform lidar and ground truth for three marshes in Cape Cod, Massachusetts, U.S.A. Five different model algorithms for nonparametric regression were evaluated, with TreeNet's stochastic gradient boosting algorithm consistently producing better regression and classification results. Additionally, models were constructed to predict the vegetative zone (high marsh and low marsh). The predictive modeling methods used in this study estimated ground elevation with a mean bias of 0.00 m and a standard deviation of 0.07 m (0.07 m root mean square error). These methods appear very promising for correction of salt marsh lidar data and, importantly, do not require an existing habitat map, biomass measurements, or image based remote sensing data such as multi/hyperspectral imagery.
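
    The point-by-point correction idea can be sketched with a stand-in regressor: the study used TreeNet's stochastic gradient boosting, whereas the toy below uses k-nearest neighbours over invented per-point features to predict and subtract the local elevation bias.

```python
# Location-specific lidar correction in miniature: learn the elevation
# error from per-point features, then subtract the predicted error.
# Features, biases, and elevations are synthetic illustrations.

def knn_predict(train_X, train_y, x, k=3):
    """Mean target of the k nearest training points (squared Euclidean)."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(row, x)), yv)
        for row, yv in zip(train_X, train_y))
    return sum(yv for _, yv in dists[:k]) / k

# features: (waveform width, distance from shoreline scaled by 1/100 m)
train_X = [(1.0, 0.1), (1.1, 0.2), (2.4, 0.5), (2.6, 0.6), (4.0, 1.0)]
train_y = [0.05, 0.06, 0.15, 0.16, 0.30]   # lidar bias (m) vs. ground truth

point_feats = (2.5, 0.55)
raw_elev = 1.32                            # uncorrected lidar elevation (m)
corrected = raw_elev - knn_predict(train_X, train_y, point_feats)
```

    Unlike a single vegetation-class offset, the predicted correction here varies continuously with each point's own features, which is the paper's central argument for point-by-point models.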

  5. Improvement of Bragg peak shift estimation using dimensionality reduction techniques and predictive linear modeling

    Science.gov (United States)

    Xing, Yafei; Macq, Benoit

    2017-11-01

    With the emergence of clinical prototypes and first patient acquisitions for proton therapy, research on prompt gamma imaging is aiming to make the most use of prompt gamma data for in vivo estimation of any shift from the expected Bragg peak (BP). The simple problem of matching the measured prompt gamma profile of each pencil beam with a reference simulation from the treatment plan is actually made complex by uncertainties which can translate into distortions during treatment. We illustrate this challenge and demonstrate the robustness of a predictive linear model we proposed for BP shift estimation based on the principal component analysis (PCA) method. It considered the first clinical knife-edge slit camera design in use with anthropomorphic phantom CT data. In particular, 4115 error scenarios were simulated for the learning model. PCA was applied to the training input, randomly chosen from 500 scenarios, to eliminate data collinearities. A total variance of 99.95% was used to represent the testing input from 3615 scenarios. This model improved the BP shift estimation by an average of 63+/-19%, in a range between -2.5% and 86%, compared to our previous profile shift (PS) method. The robustness of our method was demonstrated by a comparative study conducted by applying Poisson noise 1000 times to each profile. 67% of the cases obtained by the learning model had lower prediction errors than those obtained by the PS method. The estimation accuracy ranged between 0.31 +/- 0.22 mm and 1.84 +/- 8.98 mm for the learning model, while for the PS method it ranged between 0.3 +/- 0.25 mm and 20.71 +/- 8.38 mm.

  6. Analysis of longitudinal variations in North Pacific alkalinity to improve predictive algorithms

    Science.gov (United States)

    Fry, Claudia H.; Tyrrell, Toby; Achterberg, Eric P.

    2016-10-01

    The causes of natural variation in alkalinity in the North Pacific surface ocean need to be investigated to understand the carbon cycle and to improve predictive algorithms. We used GLODAPv2 to test hypotheses on the causes of three longitudinal phenomena in Alk*, a tracer of calcium carbonate cycling. These phenomena are (a) an increase from east to west between 45°N and 55°N, (b) an increase from west to east between 25°N and 40°N, and (c) a minor increase from west to east in the equatorial upwelling region. Between 45°N and 55°N, Alk* is higher on the western than on the eastern side, and this is associated with denser isopycnals with higher Alk* lying at shallower depths. Between 25°N and 40°N, upwelling along the North American continental shelf causes higher Alk* in the east. Along the equator, a strong east-west trend was not observed, even though the upwelling on the eastern side of the basin is more intense, because the water brought to the surface is not high in Alk*. We created two algorithms to predict alkalinity, one for the entire Pacific Ocean north of 30°S and one for the eastern margin. The Pacific Ocean algorithm is more accurate than the commonly used algorithm published by Lee et al. (2006), of similar accuracy to the best previously published algorithm by Sasse et al. (2013), and is less biased with longitude than other algorithms in the subpolar North Pacific. Our eastern margin algorithm is more accurate than previously published algorithms.

  7. Genetic Gain Increases by Applying the Usefulness Criterion with Improved Variance Prediction in Selection of Crosses.

    Science.gov (United States)

    Lehermeier, Christina; Teyssèdre, Simon; Schön, Chris-Carolin

    2017-12-01

    A crucial step in plant breeding is the selection and combination of parents to form new crosses. Genome-based prediction guides the selection of high-performing parental lines in many crop breeding programs, which ensures a high mean performance of the progeny. To warrant maximum selection progress, a new cross should also provide a large progeny variance. The usefulness concept, as a measure of the gain that can be obtained from a specific cross, accounts for variation in progeny variance. Here, it is shown that genetic gain can be considerably increased when crosses are selected based on their genomic usefulness criterion rather than on mean genomic estimated breeding values. An efficient and improved method is suggested to predict the genetic variance of a cross based on Markov chain Monte Carlo samples of marker effects from a whole-genome regression model. In simulations representing selection procedures in crop breeding programs, the performance of this novel approach is compared with existing methods, such as selection based on mean genomic estimated breeding values and on optimal haploid values. In all cases, higher genetic gain was obtained compared with previously suggested methods. When 1% of progenies per cross were selected, the genetic gain based on the estimated usefulness criterion increased by 0.14 genetic standard deviations compared to a selection based on mean genomic estimated breeding values. Analytical derivations of the progeny genotypic variance-covariance matrix based on parental genotypes and genetic map information make simulations of progeny dispensable, and allow fast implementation in large-scale breeding programs. Copyright © 2017 by the Genetics Society of America.
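
    A minimal sketch of the usefulness criterion, under simplifying assumptions the paper does not make (unlinked loci, doubled-haploid progeny, purely additive marker effects; all names hypothetical — the paper's derivation uses the full genetic map and marker-effect samples):

```python
import random

def progeny_variance_analytic(p1, p2, effects):
    """Genetic variance among doubled-haploid progeny of p1 x p2, assuming
    unlinked loci. Parents coded -1/+1 per locus; only loci where the
    parents differ segregate, each contributing effects[m]**2."""
    return sum(a * a for g1, g2, a in zip(p1, p2, effects) if g1 != g2)

def usefulness(p1, p2, effects, intensity=2.06):
    """UC = progeny mean + selection intensity * progeny genetic std dev."""
    mean = sum(0.5 * (g1 + g2) * a for g1, g2, a in zip(p1, p2, effects))
    return mean + intensity * progeny_variance_analytic(p1, p2, effects) ** 0.5

def simulate_progeny(p1, p2, effects, n=20000, rng=random):
    """Monte Carlo check: sample DH progeny and estimate mean and variance."""
    vals = []
    for _ in range(n):
        g = [g1 if g1 == g2 or rng.random() < 0.5 else g2
             for g1, g2 in zip(p1, p2)]
        vals.append(sum(gi * a for gi, a in zip(g, effects)))
    m = sum(vals) / n
    return m, sum((v - m) ** 2 for v in vals) / (n - 1)
```

    The analytic variance is what makes simulating progeny dispensable: crosses can be ranked on UC directly from parental genotypes and estimated marker effects.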

  8. Prefrontal Cortex Structure Predicts Training-Induced Improvements in Multitasking Performance.

    Science.gov (United States)

    Verghese, Ashika; Garner, K G; Mattingley, Jason B; Dux, Paul E

    2016-03-02

    The ability to perform multiple, concurrent tasks efficiently is a much-desired cognitive skill, but one that remains elusive due to the brain's inherent information-processing limitations. Multitasking performance can, however, be greatly improved through cognitive training (Van Selst et al., 1999, Dux et al., 2009). Previous studies have examined how patterns of brain activity change following training (for review, see Kelly and Garavan, 2005). Here, in a large-scale human behavioral and imaging study of 100 healthy adults, we tested whether multitasking training benefits, assessed using a standard dual-task paradigm, are associated with variability in brain structure. We found that the volume of the rostral part of the left dorsolateral prefrontal cortex (DLPFC) predicted an individual's response to training. Critically, this association was observed exclusively in a task-specific training group, and not in an active-training control group. Our findings reveal a link between DLPFC structure and an individual's propensity to gain from training on a task that taps the limits of cognitive control. Cognitive "brain" training is a rapidly growing, multibillion dollar industry (Hayden, 2012) that has been touted as the panacea for a variety of disorders that result in cognitive decline. A key process targeted by such training is "cognitive control." Here, we combined an established cognitive control measure, multitasking ability, with structural brain imaging in a sample of 100 participants. Our goal was to determine whether individual differences in brain structure predict the extent to which people derive measurable benefits from a cognitive training regime. Ours is the first study to identify a structural brain marker, the volume of the left-hemisphere dorsolateral prefrontal cortex, associated with the magnitude of multitasking performance benefits induced by training at the individual level. Copyright © 2016 the authors.

  9. Multivariate Bias Correction Procedures for Improving Water Quality Predictions from the SWAT Model

    Science.gov (United States)

    Arumugam, S.; Libera, D.

    2017-12-01

    Because of the labor required, water quality observations are usually not available continuously for more than 1-2 years at a time over a decadal period, which makes calibrating and validating mechanistic models difficult. Further, any physical model's predictions inherently have bias (i.e., under/over estimation) and require post-simulation techniques to preserve the long-term mean monthly attributes. This study suggests a multivariate bias-correction technique and compares it with a common technique for improving the performance of the SWAT model in predicting daily streamflow and TN loads across the southeastern U.S., based on split-sample validation. The approach is a dimension-reduction technique, canonical correlation analysis (CCA), that regresses the observed multivariate attributes against the SWAT model simulated values. The common approach is a regression-based technique that uses an ordinary least squares regression to adjust model values. The observed cross-correlation between loadings and streamflow is better preserved when using canonical correlation, while individual biases are simultaneously reduced. Additionally, canonical correlation analysis does a better job of preserving the observed joint likelihood of streamflow and loadings. These procedures were applied to 3 watersheds chosen from the Water Quality Network in the Southeast Region; specifically, watersheds with sufficiently large drainage areas and numbers of observed data points. The performance of the two approaches is compared for the observed period and over a multi-decadal period using loading estimates from the USGS LOADEST model. Lastly, the CCA technique is applied in a forecasting sense by using 1-month-ahead forecasts of precipitation and temperature (P & T) from ECHAM4.5 as forcings in the SWAT model. Skill in using the SWAT model for forecasting loadings and streamflow at the monthly and seasonal timescale is also discussed.
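
    The baseline regression-based correction that the study compares against can be sketched as a simple univariate ordinary-least-squares mapping from simulated to observed values, fitted on a calibration period; CCA generalizes this by correcting streamflow and loadings jointly, which is what preserves their cross-correlation. A minimal sketch of the baseline (names hypothetical):

```python
def ols_correct(sim, obs):
    """Fit obs ~ a*sim + b on paired calibration data and return a
    corrector function; by construction the corrected series reproduces
    the observed mean over the calibration period."""
    n = len(sim)
    ms, mo = sum(sim) / n, sum(obs) / n
    a = (sum((s - ms) * (o - mo) for s, o in zip(sim, obs))
         / sum((s - ms) ** 2 for s in sim))
    b = mo - a * ms
    return lambda s: a * s + b
```

    Applied per variable (streamflow, then TN load) this reduces individual biases but, unlike CCA, does nothing to preserve the joint behavior of the two series.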

  10. Collaborative Research: Improving Decadal Prediction of Arctic Climate Variability and Change Using a Regional Arctic

    Energy Technology Data Exchange (ETDEWEB)

    Gutowski, William J. [Iowa State Univ., Ames, IA (United States)

    2017-12-28

    This project developed and applied a regional Arctic System model for enhanced decadal predictions. It built on successful research by four of the current PIs with support from the DOE Climate Change Prediction Program, which has resulted in the development of a fully coupled Regional Arctic Climate Model (RACM) consisting of atmosphere, land-hydrology, ocean and sea ice components. An expanded RACM, a Regional Arctic System Model (RASM), has been set up to include ice sheets, ice caps, mountain glaciers, and dynamic vegetation to allow investigation of coupled physical processes responsible for decadal-scale climate change and variability in the Arctic. RASM can have high spatial resolution (~4-20 times higher than currently practical in global models) to advance modeling of critical processes and determine the need for their explicit representation in Global Earth System Models (GESMs). The pan-Arctic region is a key indicator of the state of global climate through polar amplification. However, a system-level understanding of critical arctic processes and feedbacks needs further development. Rapid climate change has occurred in a number of Arctic System components during the past few decades, including retreat of the perennial sea ice cover, increased surface melting of the Greenland ice sheet, acceleration and thinning of outlet glaciers, reduced snow cover, thawing permafrost, and shifts in vegetation. Such changes could have significant ramifications for global sea level, the ocean thermohaline circulation and heat budget, ecosystems, native communities, natural resource exploration, and commercial transportation. The overarching goal of the RASM project has been to advance understanding of past and present states of arctic climate and to improve seasonal to decadal predictions. To do this the project has focused on variability and long-term change of energy and freshwater flows through the arctic climate system. 
The three foci of this research are: - Changes

  11. Improved prediction of residue flexibility by embedding optimized amino acid grouping into RSA-based linear models.

    Science.gov (United States)

    Zhang, Hua; Kurgan, Lukasz

    2014-12-01

    Knowledge of protein flexibility is vital for deciphering the corresponding functional mechanisms. This knowledge would help, for instance, in improving computational drug design and refinement in homology-based modeling. We propose a new predictor of residue flexibility, as expressed by B-factors, from protein chains, using local (in-chain) predicted (or native) relative solvent accessibility (RSA) and custom-derived amino acid (AA) alphabets. Our predictor is implemented as a two-stage linear regression model that uses an RSA-based space in a local sequence window in the first stage and a reduced AA pair-based space in the second stage as the inputs. The method has an easily comprehensible, explicit linear form in both stages. Particle swarm optimization was used to find an optimal reduced AA alphabet to simplify the input space and improve the prediction performance. The average correlation coefficients between the native and predicted B-factors, measured on a large benchmark dataset, improved from 0.65 to 0.67 when using the native RSA values and from 0.55 to 0.57 when using the predicted RSA values. Blind tests performed on two independent datasets show consistent improvements in the average correlation coefficients by a modest value of 0.02 for both native and predicted RSA-based predictions.
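
    A toy sketch of the two-stage idea, assuming a single windowed-RSA feature in stage one and group-mean residual offsets in stage two. This simplifies the paper's two regression spaces considerably (names hypothetical; the actual model uses many window positions and PSO-optimized AA groups):

```python
def stage1_fit(rsa, b, w=2):
    """Stage 1: ordinary least squares of B-factor on the mean RSA in a
    +/- w residue window around each chain position."""
    feats = [sum(rsa[max(0, i - w):i + w + 1])
             / len(rsa[max(0, i - w):i + w + 1]) for i in range(len(rsa))]
    n = len(b)
    mx, my = sum(feats) / n, sum(b) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(feats, b))
         / sum((x - mx) ** 2 for x in feats))
    return a, my - a * mx, feats

def stage2_offsets(groups, resid):
    """Stage 2: mean stage-1 residual per reduced-alphabet group, to be
    added back as a per-residue correction."""
    off = {}
    for g, r in zip(groups, resid):
        off.setdefault(g, []).append(r)
    return {g: sum(v) / len(v) for g, v in off.items()}
```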

  12. Improving short-term air quality predictions over the U.S. using chemical data assimilation

    Science.gov (United States)

    Kumar, R.; Delle Monache, L.; Alessandrini, S.; Saide, P.; Lin, H. C.; Liu, Z.; Pfister, G.; Edwards, D. P.; Baker, B.; Tang, Y.; Lee, P.; Djalalova, I.; Wilczak, J. M.

    2017-12-01

    State and local air quality forecasters across the United States use air quality forecasts from the National Air Quality Forecasting Capability (NAQFC) at the National Oceanic and Atmospheric Administration (NOAA) as one of the key tools to protect the public from adverse air pollution related health effects by dispensing timely information about air pollution episodes. This project, funded by the National Aeronautics and Space Administration (NASA), aims to enhance the decision-making process by improving the accuracy of NAQFC short-term predictions of ground-level particulate matter of less than 2.5 µm in diameter (PM2.5) by exploiting NASA Earth Science Data with chemical data assimilation. The NAQFC is based on the Community Multiscale Air Quality (CMAQ) model. To improve the initialization of PM2.5 in CMAQ, we developed a new capability in the community Gridpoint Statistical Interpolation (GSI) system to assimilate Terra/Aqua Moderate Resolution Imaging Spectroradiometer (MODIS) aerosol optical depth (AOD) retrievals in CMAQ. Specifically, we developed new capabilities within GSI to read/write CMAQ data, a forward operator that calculates AOD at 550 nm from CMAQ aerosol chemical composition, and an adjoint of the forward operator that translates the changes in AOD to aerosol chemical composition. A generalized background error covariance program called "GEN_BE" has been extended to calculate background error covariance using CMAQ output. The background error variances are generated using a combination of both emissions and meteorological perturbations to better capture sources of uncertainties in PM2.5 simulations. The newly developed CMAQ-GSI system is used to perform daily 24-h PM2.5 forecasts with and without data assimilation from 15 July to 14 August 2014, and the resulting forecasts are compared against AirNOW PM2.5 measurements at 550 stations across the U.S. 
We find that the assimilation of MODIS AOD retrievals improves initialization of the CMAQ model
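
    The core of any such assimilation step is a variance-weighted blend of the model background and the observation. The one-variable sketch below is only schematic; the GSI/CMAQ implementation uses full background error covariances from GEN_BE and the AOD forward operator with its adjoint to spread the AOD innovation across aerosol species:

```python
def assimilate(background, obs, var_b, var_o, h=1.0):
    """One-variable analysis update: blend a model background with an
    observation (e.g. model-derived AOD vs. a MODIS AOD retrieval)
    according to their error variances; h is a linear observation
    operator mapping model space to observation space."""
    gain = var_b * h / (h * h * var_b + var_o)   # scalar Kalman gain
    return background + gain * (obs - h * background)
```

    With a very accurate observation (small var_o) the analysis moves close to the retrieval; with a very confident background it barely moves, which is the behavior the full covariance machinery generalizes to thousands of grid points.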

  13. CT angiography and CT perfusion improve prediction of infarct volume in patients with anterior circulation stroke

    Energy Technology Data Exchange (ETDEWEB)

    Seeters, Tom van; Schaaf, Irene C. van der; Dankbaar, Jan Willem; Horsch, Alexander D.; Niesten, Joris M.; Luitse, Merel J.A.; Mali, Willem P.T.M.; Velthuis, Birgitta K. [University Medical Center Utrecht, Department of Radiology, Utrecht (Netherlands); Biessels, Geert Jan; Kappelle, L.J. [University Medical Center Utrecht, Department of Neurology, Brain Center Rudolf Magnus, Utrecht (Netherlands); Majoie, Charles B.L.M. [Academic Medical Center, Department of Radiology, Amsterdam (Netherlands); Vos, Jan Albert [St. Antonius Hospital, Department of Radiology, Nieuwegein (Netherlands); Schonewille, Wouter J. [St. Antonius Hospital, Department of Neurology, Nieuwegein (Netherlands); Walderveen, Marianne A.A. van [Leiden University Medical Center, Department of Radiology, Leiden (Netherlands); Wermer, Marieke J.H. [Leiden University Medical Center, Department of Neurology, Leiden (Netherlands); Duijm, Lucien E.M. [Catharina Hospital, Department of Radiology, Eindhoven (Netherlands); Keizer, Koos [Catharina Hospital, Department of Neurology, Eindhoven (Netherlands); Bot, Joseph C.J. [VU University Medical Center, Department of Radiology, Amsterdam (Netherlands); Visser, Marieke C. [VU University Medical Center, Department of Neurology, Amsterdam (Netherlands); Lugt, Aad van der [Erasmus MC University Medical Center, Department of Radiology, Rotterdam (Netherlands); Dippel, Diederik W.J. [Erasmus MC University Medical Center, Department of Neurology, Rotterdam (Netherlands); Kesselring, F.O.H.W. [Rijnstate Hospital, Department of Radiology, Arnhem (Netherlands); Hofmeijer, Jeannette [Rijnstate Hospital, Department of Neurology, Arnhem (Netherlands); Lycklama a Nijeholt, Geert J. [Medical Center Haaglanden, Department of Radiology, The Hague (Netherlands); Boiten, Jelis [Medical Center Haaglanden, Department of Neurology, The Hague (Netherlands); Rooij, Willem Jan van [St. Elisabeth Hospital, Department of Radiology, Tilburg (Netherlands); Kort, Paul L.M. de [St. 
Elisabeth Hospital, Department of Neurology, Tilburg (Netherlands); Roos, Yvo B.W.E.M. [Academic Medical Center, Department of Neurology, Amsterdam (Netherlands); Meijer, Frederick J.A. [Radboud University Medical Center, Department of Radiology, Nijmegen (Netherlands); Pleiter, C.C. [St. Franciscus Hospital, Department of Radiology, Rotterdam (Netherlands); Graaf, Yolanda van der [University Medical Center Utrecht, Julius Center for Health Sciences and Primary Care, Utrecht (Netherlands); Collaboration: Dutch acute stroke study (DUST) investigators

    2016-04-15

    We investigated whether baseline CT angiography (CTA) and CT perfusion (CTP) in acute ischemic stroke could improve prediction of infarct presence and infarct volume on follow-up imaging. We analyzed 906 patients with suspected anterior circulation stroke from the prospective multicenter Dutch acute stroke study (DUST). All patients underwent baseline non-contrast CT, CTA, and CTP and follow-up non-contrast CT/MRI after 3 days. Multivariable regression models were developed including patient characteristics and non-contrast CT, and subsequently, CTA and CTP measures were added. The increase in area under the curve (AUC) and R² was assessed to determine the additional value of CTA and CTP. At follow-up, 612 patients (67.5%) had a detectable infarct on CT/MRI; median infarct volume was 14.8 mL (interquartile range (IQR) 2.8-69.6). Regarding infarct presence, the AUC of 0.82 (95% confidence interval (CI) 0.79-0.85) for patient characteristics and non-contrast CT was improved with addition of CTA measures (AUC 0.85 (95% CI 0.82-0.87); p < 0.001) and was even higher after addition of CTP measures (AUC 0.89 (95% CI 0.87-0.91); p < 0.001) and combined CTA/CTP measures (AUC 0.89 (95% CI 0.87-0.91); p < 0.001). For infarct volume, adding combined CTA/CTP measures (R² = 0.58) was superior to patient characteristics and non-contrast CT alone (R² = 0.44) and to addition of CTA alone (R² = 0.55) or CTP alone (R² = 0.54; all p < 0.001). In the acute stage, CTA and CTP have additional value over patient characteristics and non-contrast CT for predicting infarct presence and infarct volume on follow-up imaging. These findings could be applied for patient selection in future trials on ischemic stroke treatment. (orig.)
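
    The AUC reported above is the probability that a randomly chosen patient with a follow-up infarct receives a higher predicted risk than one without. A minimal sketch of that rank-based (Mann-Whitney) computation, not of the DUST regression models themselves:

```python
def auc(pos, neg):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    fraction of (positive, negative) pairs in which the positive case
    scores higher; ties count one half."""
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))
```

    Comparing auc() for risk scores from the base model and from the model with CTA/CTP measures added is exactly the kind of increase (0.82 to 0.89) the study reports.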

  14. Parameter definition using vibration prediction software leads to significant drilling performance improvements

    Energy Technology Data Exchange (ETDEWEB)

    Amorim, Dalmo; Hanley, Chris Hanley; Fonseca, Isaac; Santos, Juliana [National Oilwell Varco, Houston TX (United States); Leite, Daltro J.; Borella, Augusto; Gozzi, Danilo [Petroleo Brasileiro S.A. (PETROBRAS), Rio de Janeiro, RJ (Brazil)

    2012-07-01

    field monitoring. Vibration prediction diminishes the importance of trial-and-error procedures such as drill-off tests, which are valid only for short sections. It also solves an existing lapse in Mechanical Specific Energy (MSE) real-time drilling control programs applying the theory of Teale, which states that a drilling system is perfectly efficient when it spends exactly the energy needed to overcome the in situ rock strength. Using the proprietary software tool, this paper examines the resonant vibration modes that may be initiated while drilling with different BHA and drill string designs, showing that a proper BHA design combined with the correct selection of input parameters results in an overall improvement in drilling efficiency. Also, because the BHA is predictively analyzed, the potential for vibration or stress fatigue in the drill string components is reduced, leading to a safer operation. In recent years there has been an increased focus on vibration detection, analysis, and mitigation techniques, where new technologies, like Drilling Dynamics Data Recorders (DDDR), may provide the capability to capture high-frequency dynamics data at multiple points along the drilling system. These tools enable drilling performance improvements that were not possible before, opening a whole new array of opportunities for optimization and for verification of predictions calculated by the drill string dynamics modeling software tool. The results of this study will identify how the dynamics of the drilling system, interacting with the formation, relate directly to inefficiencies and to possible solutions for mitigating drilling vibrations in order to improve drilling performance. Software vibration prediction and downhole measurements can be used for non-drilling operations like drilling out casing or reaming, where extremely high vibration levels - devastating to the cutting structure of the bit before it has even touched bottom - have

  15. Improvement of Surface Temperature Prediction Using SVR with MOGREPS Data for Short and Medium range over South Korea

    Science.gov (United States)

    Lim, S. J.; Choi, R. K.; Ahn, K. D.; Ha, J. C.; Cho, C. H.

    2014-12-01

    As the Korea Meteorological Administration (KMA) has operated the Met Office Global and Regional Ensemble Prediction System (MOGREPS) since the introduction of the Unified Model (UM), many attempts have been made in recent years to improve predictability in temperature forecasting. In this study, a post-processing method of MOGREPS for surface temperature prediction is developed with machine learning over 52 locations in South Korea. A 60-day lag window was used as the training phase of the Support Vector Regression (SVR) surface temperature forecast model. The selected inputs for SVR are the following: the date; surface temperatures from numerical weather prediction (NWP) models such as GDAPS; the 24 individual ensemble members; and the mean and median of the ensemble members, every 3 hours out to 12 days. To verify the reliability of SVR-based ensemble prediction (SVR-EP), 93 days are used (from March 1 to May 31, 2014). SVR-EP improved RMSE by 16% over the entire prediction period compared with conventional ensemble prediction (EP). In particular, the short-range predictability of SVR-EP yielded an 18.7% better RMSE for the 1-3 day forecast. The mean temperature bias across all test locations was around 0.36°C for SVR-EP and 1.36°C for EP. SVR-EP is currently being extended for more rigorous sensitivity tests, such as increasing the training phase length and optimizing the machine learning model.

  16. Using Terrain Analysis and Remote Sensing to Improve Snow Mass Balance and Runoff Prediction

    Science.gov (United States)

    Venteris, E. R.; Coleman, A. M.; Wigmosta, M. S.

    2010-12-01

    Approximately 70-80% of the water in the international Columbia River basin is sourced from snowmelt. The demand for this water has competing needs, as it is used for agricultural irrigation, municipal, hydro and nuclear power generation, and environmental in-stream flow requirements. Accurate forecasting of water supply is essential for planning current needs and prediction of future demands due to growth and climate change. A significant limitation on current forecasting is spatial and temporal uncertainty in snowpack characteristics, particularly snow water equivalent. Currently, point measurements of snow mass balance are provided by the NRCS SNOTEL network. Each site consists of a snow mass sensor and meteorology station that monitors snow water equivalent, snow depth, precipitation, and temperature. There are currently 152 sites in the mountains of Oregon and Washington. An important step in improving forecasts is determining how representative each SNOTEL site is of the total mass balance of the watershed through a full accounting of the spatiotemporal variability in snowpack processes. This variation is driven by the interaction between meteorological processes, land cover, and landform. Statistical and geostatistical spatial models relate the state of the snowpack (characterized through SNOTEL, snow course measurements, and multispectral remote sensing) to terrain attributes derived from digital elevation models (elevation, aspect, slope, compound topographic index, topographic shading, etc.) and land cover. Time steps representing the progression of the snow season for several meteorologically distinct water years are investigated to identify and quantify dominant physical processes. The spatially distributed snow balance data can be used directly as model inputs to improve short- and long-range hydrologic forecasts.

  17. Accounting for Landscape Heterogeneity Improves Spatial Predictions of Tree Vulnerability to Drought

    Science.gov (United States)

    Schwantes, A. M.; Parolari, A.; Swenson, J. J.; Johnson, D. M.; Domec, J. C.; Jackson, R. B.; Pelak, N. F., III; Porporato, A. M.

    2017-12-01

    Globally, as climate change continues, forest vulnerability to droughts and heatwaves is increasing, but vulnerability differs regionally and locally depending on landscape position. However, most models used in forecasting forest responses to heatwaves and droughts do not incorporate relevant spatial processes. To improve predictions of spatial tree vulnerability, we employed a non-linear stochastic model of soil moisture dynamics across a landscape, accounting for spatial differences in aspect, topography, and soils. Our unique approach integrated plant hydraulics and landscape processes, incorporating effects from lateral redistribution of water using a topographic index and radiation and temperature differences attributable to aspect. Across a watershed in central Texas we modeled dynamic water stress for a dominant tree species, Juniperus ashei. We compared our results to a detailed spatial dataset of drought-impacted areas (>25% canopy loss) derived from remote sensing during the severe 2011 drought. We then projected future dynamic water stress through the 21st century using climate projections from 10 global climate models under two scenarios, and compared models with and without landscape heterogeneity. Within this watershed, 42% of J. ashei dominated systems were impacted by the 2011 drought. Modeled dynamic water stress tracked these spatial patterns of observed drought-impacted areas. Total accuracy increased from 59%, when accounting only for soil variability, to 73% when including lateral redistribution of water and radiation and temperature effects. Dynamic water stress was projected to increase through the 21st century, with only minimal buffering from the landscape. During the hotter and more severe droughts projected in the 21st century, up to 90% of the watershed crossed a dynamic water stress threshold associated with canopy loss in 2011. 
Favorable microsites may exist across a landscape where trees can persist; however, if future droughts are

  18. Incorporating Single-nucleotide Polymorphisms Into the Lyman Model to Improve Prediction of Radiation Pneumonitis

    Energy Technology Data Exchange (ETDEWEB)

    Tucker, Susan L., E-mail: sltucker@mdanderson.org [Department of Bioinformatics and Computational Biology, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Li Minghuan [Department of Radiation Oncology, Shandong Cancer Hospital, Jinan, Shandong (China); Xu Ting; Gomez, Daniel [Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Yuan Xianglin [Department of Oncology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan (China); Yu Jinming [Department of Radiation Oncology, Shandong Cancer Hospital, Jinan, Shandong (China); Liu Zhensheng; Yin Ming; Guan Xiaoxiang; Wang Lie; Wei Qingyi [Department of Epidemiology, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Mohan, Radhe [Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Vinogradskiy, Yevgeniy [University of Colorado School of Medicine, Aurora, Colorado (United States); Martel, Mary [Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Liao Zhongxing [Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, Texas (United States)

    2013-01-01

    Purpose: To determine whether single-nucleotide polymorphisms (SNPs) in genes associated with DNA repair, cell cycle, transforming growth factor-β, tumor necrosis factor and receptor, folic acid metabolism, and angiogenesis can significantly improve the fit of the Lyman-Kutcher-Burman (LKB) normal-tissue complication probability (NTCP) model of radiation pneumonitis (RP) risk among patients with non-small cell lung cancer (NSCLC). Methods and Materials: Sixteen SNPs from 10 different genes (XRCC1, XRCC3, APEX1, MDM2, TGFβ, TNFα, TNFR, MTHFR, MTRR, and VEGF) were genotyped in 141 NSCLC patients treated with definitive radiation therapy, with or without chemotherapy. The LKB model was used to estimate the risk of severe (grade ≥3) RP as a function of mean lung dose (MLD), with SNPs and patient smoking status incorporated into the model as dose-modifying factors. Multivariate analyses were performed by adding significant factors to the MLD model in a forward stepwise procedure, with significance assessed using the likelihood-ratio test. Bootstrap analyses were used to assess the reproducibility of results under variations in the data. Results: Five SNPs were selected for inclusion in the multivariate NTCP model based on MLD alone. SNPs associated with an increased risk of severe RP were in genes for TGFβ, VEGF, TNFα, XRCC1 and APEX1. With smoking status included in the multivariate model, the SNPs significantly associated with increased risk of RP were in genes for TGFβ, VEGF, and XRCC3. Bootstrap analyses selected a median of 4 SNPs per model fit, with the 6 genes listed above selected most often. Conclusions: This study provides evidence that SNPs can significantly improve the predictive ability of the Lyman MLD model. With a small number of SNPs, it was possible to distinguish cohorts with >50% risk vs <10% risk of RP when they were exposed to high MLDs.
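
    The LKB model underlying the study can be sketched as a probit curve in mean lung dose, with each risk factor (a carried SNP, smoking status) acting as a dose-modifying multiplier on the effective dose. The parameter values below are hypothetical placeholders, not the fitted ones:

```python
import math

def lkb_ntcp(mld, td50, m, modifiers=()):
    """Lyman-Kutcher-Burman NTCP: a probit (cumulative normal) function of
    mean lung dose. td50 is the dose giving 50% complication probability,
    m sets the slope, and each dose-modifying factor (> 1 for a risk
    factor) multiplies the effective dose."""
    d_eff = mld
    for f in modifiers:
        d_eff *= f
    t = (d_eff - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
```

    In this form, a risk SNP shifts a patient's whole dose-response curve leftward, which is how a small number of SNPs can separate high- from low-risk cohorts at high MLDs.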

  19. Improving prediction of ischemic cardiovascular disease in the general population using apolipoprotein B

    DEFF Research Database (Denmark)

    Benn, Marianne; Nordestgaard, Børge G; Jensen, Gorm Boje

    2007-01-01

    Apolipoprotein B (apoB) levels predict fatal myocardial infarction. Whether apoB also predicts nonfatal ischemic cardiovascular events is unclear. We tested the following hypotheses: apoB predicts ischemic cardiovascular events, and apoB is a better predictor of ischemic cardiovascular events tha...

  20. Multiple-Swarm Ensembles: Improving the Predictive Power and Robustness of Predictive Models and Its Use in Computational Biology.

    Science.gov (United States)

    Alves, Pedro; Liu, Shuang; Wang, Daifeng; Gerstein, Mark

    2018-01-01

    Machine learning is an integral part of computational biology, and has already shown its use in various applications, such as prognostic tests. In the last few years in the non-biological machine learning community, ensembling techniques have shown their power in data mining competitions such as the Netflix challenge; however, such methods have not found wide use in computational biology. In this work, we endeavor to show how ensembling techniques can be applied to practical problems, including problems in the field of bioinformatics, and how they often outperform other machine learning techniques in both predictive power and robustness. Furthermore, we develop a methodology of ensembling, the Multi-Swarm Ensemble (MSWE), by using multiple particle swarm optimizations, and demonstrate its ability to further enhance the performance of ensembles.
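
    A single swarm of the kind MSWE combines can be sketched as a generic particle swarm optimization minimizing a test function. The coefficients and the ensembling layer (running several swarms and aggregating the models they select) are illustrative assumptions, not the paper's settings:

```python
import random

def pso(f, dim, n=20, iters=100, lo=-5.0, hi=5.0, rng=random):
    """Minimal particle swarm optimization: each particle tracks its
    personal best and the swarm's global best, and its velocity is pulled
    toward both with random weights plus inertia."""
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v < gval:
                    gbest, gval = pos[i][:], v
    return gbest, gval
```

    In an MSWE-style setup, f would score a candidate model or feature weighting, several independent swarms would be run, and their best models would be pooled into an ensemble.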

  1. Importance weighting of local flux measurements to improve reactivity predictions in nuclear systems

    Energy Technology Data Exchange (ETDEWEB)

    Dulla, Sandra; Hoh, Siew Sin; Nervo, Marta; Ravetto, Piero [Politecnico di Torino, Dipt. Energia (Italy)

    2015-07-15

Reactivity monitoring is a key aspect of the safe operation of nuclear reactors, especially for subcritical source-driven systems. Various methods are available for both off-line and on-line reactivity determination from direct measurements carried out on the reactor. Usually these methods are based on the inverse point kinetics model applied to signals from neutron detectors, and the results may be severely affected by space and spectral effects. Such effects need to be compensated, and correction procedures have to be applied. In this work, a new approach is proposed that uses the full information from different local measurements to generate a global signal through a proper weighting of the signals provided by single neutron detectors. A weighting technique based on the use of the adjoint flux proves to be efficient in improving the prediction capability of inverse techniques. The idea is applied to the recently developed algorithm, named MAρTA, which can be used in both off-line and on-line modes.
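The weighting step can be sketched directly: each detector's count rate is weighted by the adjoint flux (neutron importance) at its position before the signals are combined into one global signal. The function name and the normalization are illustrative assumptions; the MAρTA algorithm itself is not reproduced here.

```python
def global_signal(count_rates, adjoint_flux):
    """Importance-weighted combination of local detector signals.

    count_rates:  detector count rates c_i
    adjoint_flux: adjoint-flux (importance) values w_i at the detector
                  positions; detectors in high-importance regions dominate,
                  which compensates spatially distorted local responses.
    """
    total_w = sum(adjoint_flux)
    return sum(w * c for c, w in zip(count_rates, adjoint_flux)) / total_w

# A detector at a position of three times the importance counts three times as much:
s = global_signal([100.0, 200.0], adjoint_flux=[1.0, 3.0])  # -> 175.0
```

The weighted signal would then feed the usual inverse point kinetics inversion in place of any single local signal.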

  2. Characterizing haploinsufficiency of SHELL gene to improve fruit form prediction in introgressive hybrids of oil palm.

    Science.gov (United States)

    Teh, Chee-Keng; Muaz, Siti Dalila; Tangaya, Praveena; Fong, Po-Yee; Ong, Ai-Ling; Mayes, Sean; Chew, Fook-Tim; Kulaveerasingam, Harikrishna; Appleton, David

    2017-06-08

The fundamental trait in selective breeding of oil palm (Elaeis guineensis Jacq.) is the thickness of the shell surrounding the kernel. The monogenic shell thickness is inversely correlated with mesocarp thickness, where the crude palm oil accumulates. Commercial thin-shelled tenera, derived from thick-shelled dura × shell-less pisifera, generally contain 30% more oil per bunch. Two mutations, shMPOB (M1) and shAVROS (M2), in the SHELL gene - a type II MADS-box transcription factor mainly present in AVROS and Nigerian origins - were reported to be responsible for the different fruit forms. In this study, we tested 1,339 samples maintained in Sime Darby Plantation for both mutations. Five genotype-phenotype discrepancies and eight controls were then re-tested with all five reported mutations (shAVROS, shMPOB, shMPOB2, shMPOB3 and shMPOB4) within the same gene. The integration of genotypic data, pedigree records and a shell-formation model further explained the haploinsufficiency effect of the SHELL gene with different numbers of functional copies. Some rare mutations were also identified, suggesting a need to further confirm the existence of cis-compound mutations in the gene. With this, the prediction accuracy of fruit forms can be further improved, especially in introgressive hybrids of oil palm. Understanding causative variant segregation is extremely important, even for monogenic traits such as shell thickness in oil palm.
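The haploinsufficiency logic reduces to counting functional SHELL copies. A minimal sketch, assuming the five loss-of-function allele names from the abstract and a hypothetical "Sh_wildtype" label for a functional allele:

```python
# Loss-of-function SHELL alleles named in the abstract; "Sh_wildtype" is a
# hypothetical label for any functional allele.
KNOWN_LOF_ALLELES = {"sh_MPOB", "sh_AVROS", "sh_MPOB2", "sh_MPOB3", "sh_MPOB4"}

def predict_fruit_form(allele1, allele2):
    """Haploinsufficiency sketch: the phenotype follows the number of
    functional SHELL copies (2 -> dura, 1 -> tenera, 0 -> pisifera).
    Rare cis-compound mutations, which the abstract flags, would break
    this simple copy-count rule."""
    functional = sum(a not in KNOWN_LOF_ALLELES for a in (allele1, allele2))
    return {2: "dura", 1: "tenera", 0: "pisifera"}[functional]
```

Testing all five mutations per sample, as the study does, matters because a sample carrying a rarer allele would be miscounted as functional if only M1 and M2 were genotyped.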

  3. On the improvement of neural cryptography using erroneous transmitted information with error prediction.

    Science.gov (United States)

    Allam, Ahmed M; Abbas, Hazem M

    2010-12-01

Neural cryptography deals with the problem of "key exchange" between two neural networks using the mutual learning concept. The two networks exchange their outputs (in bits), and the key between the two communicating parties is eventually represented in the final learned weights, when the two networks are said to be synchronized. The security of neural synchronization is put at risk if an attacker is capable of synchronizing with either of the two parties during the training process. Therefore, diminishing the probability of such a threat improves the reliability of exchanging the output bits through a public channel. The synchronization-with-feedback algorithm is one of the existing algorithms that enhances the security of neural cryptography. This paper proposes three new algorithms to enhance the mutual learning process. They mainly depend on disrupting the attacker's confidence in the exchanged outputs and input patterns during training. The first algorithm is called "Do not Trust My Partner" (DTMP), which relies on one party sending erroneous output bits, with the other party being capable of predicting and correcting this error. The second algorithm is called "Synchronization with Common Secret Feedback" (SCSFB), where inputs are kept partially secret and the attacker has to train its network on input patterns that are different from the training sets used by the communicating parties. The third algorithm is a hybrid technique combining the features of the DTMP and SCSFB. The proposed approaches are shown to outperform the synchronization-with-feedback algorithm in the time needed for the parties to synchronize.
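The mutual-learning primitive underlying all of these schemes is the tree parity machine. A minimal sketch of its output bit and the Hebbian synchronization rule follows; the DTMP/SCSFB modifications layered on top of it are not reproduced here.

```python
def tpm_output(weights, inputs):
    """Output bit tau of a tree parity machine: each hidden unit outputs
    the sign of its local field, and tau is the product of those signs."""
    sigma = [1 if sum(w * x for w, x in zip(wr, xr)) >= 0 else -1
             for wr, xr in zip(weights, inputs)]
    tau = 1
    for s in sigma:
        tau *= s
    return tau, sigma

def hebbian_update(weights, inputs, sigma, tau, L=3):
    """Hebbian rule used during synchronization: only hidden units that
    agree with the network output move, and weights stay in [-L, L]."""
    for k, (wr, xr) in enumerate(zip(weights, inputs)):
        if sigma[k] == tau:
            for i, x in enumerate(xr):
                wr[i] = max(-L, min(L, wr[i] + tau * x))
    return weights

# Two hidden units with two weights each, on one public input pattern:
w = [[1, -1], [2, 0]]
x = [[1, 1], [1, -1]]
tau, sigma = tpm_output(w, x)
w = hebbian_update(w, x, sigma, tau)
```

Both parties apply this update only on steps where their output bits agree; DTMP deliberately injects wrong output bits (predictable by the partner, not by the attacker) into exactly this exchange.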

  4. Prediction of Effective Drug Combinations by an Improved Naïve Bayesian Algorithm.

    Science.gov (United States)

    Bai, Li-Yue; Dai, Hao; Xu, Qin; Junaid, Muhammad; Peng, Shao-Liang; Zhu, Xiaolei; Xiong, Yi; Wei, Dong-Qing

    2018-02-05

Drug combinatorial therapy is a promising strategy for combating complex diseases due to its fewer side effects, lower toxicity and better efficacy. However, it is not feasible to determine all the effective drug combinations in the vast space of possible combinations given the increasing number of approved drugs in the market, since the experimental methods for identification of effective drug combinations are both labor- and time-consuming. In this study, we conducted a systematic analysis of various types of features to characterize pairs of drugs. These features included information about the targets of the drugs, the pathways in which the target proteins of the drugs were involved, side effects of drugs, metabolic enzymes of the drugs, and drug transporters. The latter two features (metabolic enzymes and drug transporters) were related to the metabolism and transportation properties of drugs, which were not analyzed or used in previous studies. Then, we devised a novel improved naïve Bayesian algorithm to construct classification models to predict effective drug combinations by using the individual types of features mentioned above. Our results indicated that the performance of our proposed method was indeed better than the naïve Bayesian algorithm and other conventional classification algorithms such as support vector machine and K-nearest neighbor.
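For reference, the baseline the paper improves upon can be sketched as a Bernoulli naïve Bayes classifier over binary drug-pair features; the feature names in the docstring echo the abstract, while the smoothing scheme and this tiny implementation are generic assumptions, not the paper's improved algorithm.

```python
from math import log

def train_bernoulli_nb(X, y, alpha=1.0):
    """Bernoulli naive Bayes with Laplace smoothing over binary drug-pair
    features (e.g. shared target / pathway / side effect / metabolic
    enzyme / transporter indicators); labels: effective (1) vs. not (0)."""
    classes = sorted(set(y))
    n_feat = len(X[0])
    prior, cond = {}, {}
    for c in classes:
        rows = [x for x, yc in zip(X, y) if yc == c]
        prior[c] = len(rows) / len(y)
        cond[c] = [(sum(r[j] for r in rows) + alpha) / (len(rows) + 2 * alpha)
                   for j in range(n_feat)]
    return prior, cond

def predict_nb(model, x):
    """Pick the class with the highest log-posterior."""
    prior, cond = model
    def log_posterior(c):
        lp = log(prior[c])
        for j, xj in enumerate(x):
            p = cond[c][j]
            lp += log(p) if xj else log(1.0 - p)
        return lp
    return max(prior, key=log_posterior)
```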

  5. Prediction of Effective Drug Combinations by an Improved Naïve Bayesian Algorithm

    Directory of Open Access Journals (Sweden)

    Li-Yue Bai

    2018-02-01

Drug combinatorial therapy is a promising strategy for combating complex diseases due to its fewer side effects, lower toxicity and better efficacy. However, it is not feasible to determine all the effective drug combinations in the vast space of possible combinations given the increasing number of approved drugs in the market, since the experimental methods for identification of effective drug combinations are both labor- and time-consuming. In this study, we conducted a systematic analysis of various types of features to characterize pairs of drugs. These features included information about the targets of the drugs, the pathways in which the target proteins of the drugs were involved, side effects of drugs, metabolic enzymes of the drugs, and drug transporters. The latter two features (metabolic enzymes and drug transporters) were related to the metabolism and transportation properties of drugs, which were not analyzed or used in previous studies. Then, we devised a novel improved naïve Bayesian algorithm to construct classification models to predict effective drug combinations by using the individual types of features mentioned above. Our results indicated that the performance of our proposed method was indeed better than the naïve Bayesian algorithm and other conventional classification algorithms such as support vector machine and K-nearest neighbor.

  6. Immunophenotyping does not improve predictivity of the local lymph node assay in mice.

    Science.gov (United States)

    Strauss, Volker; Kolle, Susanne N; Honarvar, Naveed; Dammann, Martina; Groeters, Sibylle; Faulhammer, Frank; Landsiedel, Robert; van Ravenzwaay, Bennard

    2015-04-01

The local lymph node assay (LLNA) is a regulatory accepted test for the identification of skin-sensitizing substances that measures radioactive thymidine incorporation into the lymph node. However, there is evidence that the LLNA overestimates the sensitization potential of certain substance classes, in particular those exerting skin irritation. Some reports describe the additional use of flow cytometry-based immunophenotyping to better discriminate irritants from sensitizing irritants in the LLNA. In the present study, the 22 performance standards plus 8 surfactants were assessed using the radioactive LLNA method. In addition, lymph node cells were immunophenotyped to evaluate the specificity of the lymph node response using cell surface markers such as B220 or CD19, CD3, CD4, CD8, I-A(κ) and CD69, with the aim of better discriminating irritants from sensitizers, as well as non-irritating sensitizers from non-sensitizers. However, the markers assessed in this study did not sufficiently differentiate between irritants and irritant sensitizers and therefore did not improve the predictive capacity of the LLNA. Copyright © 2014 John Wiley & Sons, Ltd.

  7. A predictive control framework for torque-based steering assistance to improve safety in highway driving

    Science.gov (United States)

    Ercan, Ziya; Carvalho, Ashwin; Tseng, H. Eric; Gökaşan, Metin; Borrelli, Francesco

    2018-05-01

The haptic shared control framework opens up new perspectives on the design and implementation of driver steering assistance systems that provide torque feedback to the driver in order to improve safety. While designing such a system, it is important to account for human-machine interactions, since the driver feels the feedback torque through the hand wheel. The controller should consider the driver's impact on the steering dynamics to achieve better performance in terms of driver acceptance and comfort. In this paper we present a predictive control framework which uses a model of driver-in-the-loop steering dynamics to optimize the torque intervention with respect to the driver's neuromuscular response. We first validate the system in simulations to compare the performance of the controller in nominal and model-mismatch cases. Then we implement the controller in a test vehicle and perform experiments with a human driver. The results show the effectiveness of the proposed system in avoiding hazardous situations under different driver behaviours.

  8. Toward Structure Prediction for Short Peptides Using the Improved SAAP Force Field Parameters

    Directory of Open Access Journals (Sweden)

    Kenichi Dedachi

    2013-01-01

Based on the observation that Ramachandran-type potential energy surfaces of single amino acid units in water are in good agreement with statistical structures of the corresponding amino acid residues in proteins, we recently developed a new all-atom force field called SAAP, in which the total energy function for a polypeptide is expressed basically as a sum of single amino acid potentials and electrostatic and Lennard-Jones potentials between the amino acid units. In this study, the SAAP force field (SAAPFF) parameters were improved, and classical canonical Monte Carlo (MC) simulation was carried out for short peptide models, that is, Met-enkephalin and chignolin, at 300 K in an implicit water model. Diverse structures were reasonably obtained for Met-enkephalin, while three folded structures, one of which corresponds to a native-like structure with three native hydrogen bonds, were obtained for chignolin. The results suggested that the SAAP-MC method is useful for conformational sampling for short peptides. A protocol of SAAP-MC simulation followed by structural clustering and examination of the obtained structures by ab initio calculation, or simply by the number of hydrogen bonds (or the hardness), was demonstrated to be an effective strategy toward structure prediction for short peptide molecules.
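The canonical MC sampling named in the abstract rests on the Metropolis criterion. A generic single-step sketch under a toy one-dimensional "energy landscape" (the SAAP energy function itself is not reproduced):

```python
import math
import random

def metropolis_step(energy, state, propose, T=300.0, k_b=0.0019872, rng=random):
    """One canonical (NVT) Monte Carlo step: accept a proposed conformation
    always if it lowers the energy, otherwise with probability exp(-dE/kT).
    k_b is Boltzmann's constant in kcal/(mol*K), matching energies in kcal/mol."""
    trial = propose(state)
    d_e = energy(trial) - energy(state)
    if d_e <= 0 or rng.random() < math.exp(-d_e / (k_b * T)):
        return trial
    return state

# Toy landscape E(s) = |s|: downhill proposals are always accepted.
s = 5
for _ in range(5):
    s = metropolis_step(abs, s, lambda v: v - 1)
```

In a SAAP-MC run the proposal would perturb backbone/side-chain torsions and the energy would be the SAAP total energy; the clustering and hydrogen-bond screening described in the abstract operate on the resulting ensemble of accepted conformations.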

  9. Use of in situ volumetric water content at field capacity to improve prediction of soil water retention properties

    OpenAIRE

    Al Majou , Hassan; Bruand , Ary; Duval , Odile

    2008-01-01

Most pedotransfer functions (PTFs) developed over the last three decades to generate water retention characteristics use soil texture, bulk density and organic carbon content as predictors. Despite the high number of PTFs published, most being class- or continuous-PTFs, the accuracy of prediction remains limited. In this study, we compared the performance ...

  10. Improving a two-equation eddy-viscosity turbulence model to predict the aerodynamic performance of thick wind turbine airfoils

    Science.gov (United States)

    Bangga, Galih; Kusumadewi, Tri; Hutomo, Go; Sabila, Ahmad; Syawitri, Taurista; Setiadi, Herlambang; Faisal, Muhamad; Wiranegara, Raditya; Hendranata, Yongki; Lastomo, Dwi; Putra, Louis; Kristiadi, Stefanus

    2018-03-01

Numerical simulations of relatively thick airfoils are carried out in the present study. An attempt to improve the accuracy of the numerical predictions is made by adjusting the turbulent viscosity of the eddy-viscosity Menter Shear-Stress-Transport (SST) model. The modification involves the addition of a damping factor on wall-bounded flows, incorporating the ratio of the turbulent kinetic energy to its specific dissipation rate for separation detection. The results are compared with available experimental data and CFD simulations using the original Menter SST model. The present model improves the lift polar prediction even though the stall angle is still overestimated. The improvement is caused by the better prediction of separated flow under a strong adverse pressure gradient. The results show that the Reynolds stresses are damped near the wall, causing variation of the logarithmic velocity profiles.
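The SST eddy-viscosity limiter that the modification acts on can be written down directly; the extra damping term is left as a generic multiplier below, because the paper's exact functional form is not given in the abstract.

```python
def sst_eddy_viscosity(k, omega, strain_rate, a1=0.31, F2=1.0, damping=1.0):
    """Menter SST kinematic eddy viscosity nu_t = a1*k / max(a1*omega, S*F2),
    scaled here by an additional near-wall damping factor in [0, 1] that
    stands in for the paper's modification (hypothetical placeholder)."""
    return damping * a1 * k / max(a1 * omega, strain_rate * F2)

# Away from shear-limited regions the limiter reduces to nu_t = k / omega:
nu_t = sst_eddy_viscosity(k=1.0, omega=10.0, strain_rate=1.0)
```

Reducing the damping factor near the wall lowers nu_t there, which is consistent with the abstract's observation that damped near-wall Reynolds stresses shift the logarithmic velocity profiles.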

  11. Improving a prediction system for oil spills in the Yellow Sea: effect of tides on subtidal flow.

    Science.gov (United States)

    Kim, Chang-Sin; Cho, Yang-Ki; Choi, Byoung-Ju; Jung, Kyung Tae; You, Sung Hyup

    2013-03-15

    A multi-nested prediction system for the Yellow Sea using drifter trajectory simulations was developed to predict the movements of an oil spill after the MV Hebei Spirit accident. The speeds of the oil spill trajectories predicted by the model without tidal forcing were substantially faster than the observations; however, predictions taking into account the tides, including both tidal cycle and subtidal periods, were satisfactorily improved. Subtidal flow in the simulation without tides was stronger than in that with tides because of reduced frictional effects. Friction induced by tidal stress decelerated the southward subtidal flows driven by northwesterly winter winds along the Korean coast of the Yellow Sea. These results strongly suggest that in order to produce accurate predictions of oil spill trajectories, simulations must include tidal effects, such as variations within a tidal cycle and advections over longer time scales in tide-dominated areas. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. Improving Wind Farm Dispatchability Using Model Predictive Control for Optimal Operation of Grid-Scale Energy Storage

    Directory of Open Access Journals (Sweden)

    Douglas Halamay

    2014-09-01

This paper demonstrates the use of model-based predictive control for energy storage systems to improve the dispatchability of wind power plants. Large-scale wind penetration increases the variability of power flow on the grid, thus increasing reserve requirements. Large energy storage systems collocated with wind farms can improve the dispatchability of the wind plant by storing energy when generation is over schedule and sourcing energy when generation is under schedule, essentially providing on-site reserves. Model predictive control (MPC) provides a natural framework for this application. By utilizing an accurate energy storage system model, control actions can be planned in the context of system power and state-of-charge limitations. MPC also enables the inclusion of predicted wind farm performance over a near-term horizon, which allows control actions to be planned in anticipation of fast changes, such as wind ramps. This paper demonstrates that model-based predictive control can improve system performance compared with a standard non-predictive, non-model-based control approach. It is also demonstrated that secondary objectives, such as reducing the rate of change of the wind plant output (i.e., ramps), can be considered and successfully implemented within the MPC framework. Specifically, it is shown that scheduling error can be reduced by 81%, reserve requirements can be improved by up to 37%, and the number of ramp events can be reduced by 74%.
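A minimal receding-horizon sketch of the idea, under simplifying assumptions (lossless storage, a greedy per-step plan rather than a full optimization, and hypothetical power/capacity limits); the paper's actual MPC formulation is not reproduced.

```python
def mpc_dispatch(schedule, wind_forecast, soc0, cap, p_max, horizon=4):
    """Receding-horizon storage dispatch: at each step, plan charging power
    over the forecast horizon to track the schedule, then apply only the
    first planned action. Positive power = charging (absorbing surplus)."""
    soc, actions = soc0, []
    for t in range(len(schedule)):
        plan, s = [], soc
        for k in range(t, min(t + horizon, len(schedule))):
            p = wind_forecast[k] - schedule[k]      # surplus vs. schedule
            p = max(-p_max, min(p_max, p))          # power rating limit
            p = max(-s, min(cap - s, p))            # state-of-charge limit
            s += p
            plan.append(p)
        actions.append(plan[0])
        soc += plan[0]
    return actions, soc

# Two intervals: wind 2 MWh over schedule, then 2 MWh under schedule.
actions, soc = mpc_dispatch([10.0, 10.0], [12.0, 8.0],
                            soc0=0.0, cap=5.0, p_max=3.0, horizon=2)
```

Looking ahead over the horizon is what lets the controller pre-position the state of charge before a predicted ramp, the anticipation benefit the abstract describes.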

  13. Multitrait, Random Regression, or Simple Repeatability Model in High-Throughput Phenotyping Data Improve Genomic Prediction for Wheat Grain Yield.

    Science.gov (United States)

    Sun, Jin; Rutkoski, Jessica E; Poland, Jesse A; Crossa, José; Jannink, Jean-Luc; Sorrells, Mark E

    2017-07-01

High-throughput phenotyping (HTP) platforms can be used to measure traits that are genetically correlated with wheat (Triticum aestivum L.) grain yield across time. Incorporating such secondary traits in multivariate pedigree and genomic prediction models would be desirable to improve indirect selection for grain yield. In this study, we evaluated three statistical models, simple repeatability (SR), multitrait (MT), and random regression (RR), for the longitudinal data of secondary traits and compared the impact of the proposed models for secondary traits on their predictive abilities for grain yield. Grain yield and secondary traits, canopy temperature (CT) and normalized difference vegetation index (NDVI), were collected in five diverse environments for 557 wheat lines with available pedigree and genomic information. A two-stage analysis was applied for pedigree and genomic selection (GS). First, secondary traits were fitted by SR, MT, or RR models, separately, within each environment. Then, best linear unbiased predictions (BLUPs) of secondary traits from the above models were used in the multivariate prediction models to compare predictive abilities for grain yield. Predictive ability was substantially improved, by 70% on average, in multivariate pedigree and genomic models when including secondary traits in both training and test populations. Additionally, (i) predictive abilities varied only slightly between MT, RR, and SR models in this data set, (ii) results indicated that including BLUPs of secondary traits from the MT model was best under severe drought, and (iii) the RR model was slightly better than the SR and MT models under drought environments. Copyright © 2017 Crop Science Society of America.

  14. Prediction model of ammonium uranyl carbonate calcination by microwave heating using incremental improved Back-Propagation neural network

    Energy Technology Data Exchange (ETDEWEB)

    Li Yingwei [Faculty of Metallurgical and Energy Engineering, Kunming University of Science and Technology, Kunming, Yunnan Province 650093 (China); Key Laboratory of Unconventional Metallurgy, Ministry of Education, Kunming University of Science and Technology, Kunming, Yunnan Province 650093 (China); Peng Jinhui, E-mail: jhpeng@kmust.edu.c [Faculty of Metallurgical and Energy Engineering, Kunming University of Science and Technology, Kunming, Yunnan Province 650093 (China); Key Laboratory of Unconventional Metallurgy, Ministry of Education, Kunming University of Science and Technology, Kunming, Yunnan Province 650093 (China); Liu Bingguo [Faculty of Metallurgical and Energy Engineering, Kunming University of Science and Technology, Kunming, Yunnan Province 650093 (China); Key Laboratory of Unconventional Metallurgy, Ministry of Education, Kunming University of Science and Technology, Kunming, Yunnan Province 650093 (China); Li Wei [Key Laboratory of Unconventional Metallurgy, Ministry of Education, Kunming University of Science and Technology, Kunming, Yunnan Province 650093 (China); Huang Daifu [No. 272 Nuclear Industry Factory, China National Nuclear Corporation, Hengyang, Hunan Province 421002 (China); Zhang Libo [Faculty of Metallurgical and Energy Engineering, Kunming University of Science and Technology, Kunming, Yunnan Province 650093 (China); Key Laboratory of Unconventional Metallurgy, Ministry of Education, Kunming University of Science and Technology, Kunming, Yunnan Province 650093 (China)

    2011-05-15

Research highlights: The incremental improved Back-Propagation neural network prediction model, using the Levenberg-Marquardt algorithm based on optimization theory, is put forward. The prediction model of the nonlinear system is built, which can effectively predict the experiment of microwave calcination of ammonium uranyl carbonate (AUC). AUC can absorb microwave energy, and microwave heating can quickly decompose AUC. In the experiment of microwave calcination of AUC, the contents of U and U⁴⁺ increased with increasing microwave power and irradiation time, and decreased with increasing average material depth. - Abstract: The incremental improved Back-Propagation (BP) neural network prediction model is put forward to overcome problems that exist in the microwave calcination of ammonium uranyl carbonate (AUC): long testing cycles, large testing quantities, difficult optimization of process parameters, training data that arrive in incremental batches, and system-memory limits that can make full-batch training infeasible. The prediction model of the nonlinear system is built and can effectively predict the microwave calcination experiments on AUC. The predicted results indicated that the contents of U and U⁴⁺ increased with increasing microwave power and irradiation time, and decreased with increasing average material depth.
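The Levenberg-Marquardt idea the model relies on can be shown on a one-parameter least-squares toy problem (a sketch of the damping mechanism only, not the network training code):

```python
def levenberg_marquardt_1d(residual, jacobian, w0, lam=1e-2, iters=20):
    """Levenberg-Marquardt for one parameter: the update
    dw = J*r / (J*J + lam) interpolates between Gauss-Newton (lam -> 0)
    and small gradient steps (large lam); lam adapts to step quality."""
    w = w0
    for _ in range(iters):
        r, J = residual(w), jacobian(w)
        w_new = w - J * r / (J * J + lam)
        if abs(residual(w_new)) < abs(r):
            w, lam = w_new, lam * 0.5   # good step: trust the model more
        else:
            lam *= 2.0                  # bad step: damp harder
    return w

# Fit w so that the toy residual r(w) = w - 3 vanishes:
w_fit = levenberg_marquardt_1d(lambda w: w - 3.0, lambda w: 1.0, w0=0.0)
```

In the incremental setting described by the abstract, such updates are applied batch by batch as new calcination data arrive, instead of refitting on the full accumulated data set.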

  15. Climate-based models for pulsed resources improve predictability of consumer population dynamics: outbreaks of house mice in forest ecosystems.

    Directory of Open Access Journals (Sweden)

    E Penelope Holland

    Full Text Available Accurate predictions of the timing and magnitude of consumer responses to episodic seeding events (masts are important for understanding ecosystem dynamics and for managing outbreaks of invasive species generated by masts. While models relating consumer populations to resource fluctuations have been developed successfully for a range of natural and modified ecosystems, a critical gap that needs addressing is better prediction of resource pulses. A recent model used change in summer temperature from one year to the next (ΔT for predicting masts for forest and grassland plants in New Zealand. We extend this climate-based method in the framework of a model for consumer-resource dynamics to predict invasive house mouse (Mus musculus outbreaks in forest ecosystems. Compared with previous mast models based on absolute temperature, the ΔT method for predicting masts resulted in an improved model for mouse population dynamics. There was also a threshold effect of ΔT on the likelihood of an outbreak occurring. The improved climate-based method for predicting resource pulses and consumer responses provides a straightforward rule of thumb for determining, with one year's advance warning, whether management intervention might be required in invaded ecosystems. The approach could be applied to consumer-resource systems worldwide where climatic variables are used to model the size and duration of resource pulses, and may have particular relevance for ecosystems where global change scenarios predict increased variability in climatic events.
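The ΔT rule of thumb can be sketched as a thresholded logistic risk model; the logistic form and the coefficients b0 and b1 below are illustrative placeholders, not the fitted values from the study.

```python
from math import exp

def outbreak_probability(t_summer_prev, t_summer_before, b0=-1.0, b1=2.0):
    """Sketch of the delta-T method: the predictor is the change in mean
    summer temperature between consecutive years; a positive jump signals
    a likely mast, and hence a likely mouse outbreak the following year."""
    d_t = t_summer_prev - t_summer_before
    return 1.0 / (1.0 + exp(-(b0 + b1 * d_t)))

def intervention_needed(p, threshold=0.5):
    """One-year-ahead rule of thumb: flag ecosystems above a risk threshold."""
    return p >= threshold

p_hi = outbreak_probability(18.0, 16.0)  # summer 2 degrees warmer than last
p_lo = outbreak_probability(15.0, 16.0)  # summer 1 degree cooler
```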

  16. Prediction model of ammonium uranyl carbonate calcination by microwave heating using incremental improved Back-Propagation neural network

    International Nuclear Information System (INIS)

    Li Yingwei; Peng Jinhui; Liu Bingguo; Li Wei; Huang Daifu; Zhang Libo

    2011-01-01

Research highlights: → The incremental improved Back-Propagation neural network prediction model, using the Levenberg-Marquardt algorithm based on optimization theory, is put forward. → The prediction model of the nonlinear system is built, which can effectively predict the experiment of microwave calcination of ammonium uranyl carbonate (AUC). → AUC can absorb microwave energy, and microwave heating can quickly decompose AUC. → In the experiment of microwave calcination of AUC, the contents of U and U⁴⁺ increased with increasing microwave power and irradiation time, and decreased with increasing average material depth. - Abstract: The incremental improved Back-Propagation (BP) neural network prediction model is put forward to overcome problems that exist in the microwave calcination of ammonium uranyl carbonate (AUC): long testing cycles, large testing quantities, difficult optimization of process parameters, training data that arrive in incremental batches, and system-memory limits that can make full-batch training infeasible. The prediction model of the nonlinear system is built and can effectively predict the microwave calcination experiments on AUC. The predicted results indicated that the contents of U and U⁴⁺ increased with increasing microwave power and irradiation time, and decreased with increasing average material depth.

  17. Improved Transient Response Estimations in Predicting 40 Hz Auditory Steady-State Response Using Deconvolution Methods

    Directory of Open Access Journals (Sweden)

    Xiaodan Tan

    2017-12-01

    with variable weights in three templates. The significantly improved prediction accuracy of ASSR achieved by MSAD strongly supports the linear superposition mechanism of ASSR if an accurate template of transient AEPs can be reconstructed. The capacity in obtaining both ASSR and its underlying transient components accurately and simultaneously has the potential to contribute significantly to diagnosis of patients with neuropsychiatric disorders.

  18. Pulmonary edema predictive scoring index (PEPSI), a new index to predict risk of reperfusion pulmonary edema and improvement of hemodynamics in percutaneous transluminal pulmonary angioplasty.

    Science.gov (United States)

    Inami, Takumi; Kataoka, Masaharu; Shimura, Nobuhiko; Ishiguro, Haruhisa; Yanagisawa, Ryoji; Taguchi, Hiroki; Fukuda, Keiichi; Yoshino, Hideaki; Satoh, Toru

    2013-07-01

This study sought to identify useful predictors of hemodynamic improvement and of the risk of reperfusion pulmonary edema (RPE), a major complication of this procedure. Percutaneous transluminal pulmonary angioplasty (PTPA) has been reported to be effective for the treatment of chronic thromboembolic pulmonary hypertension (CTEPH), but has not become widespread because RPE has not been well predicted. We included 140 consecutive procedures in 54 patients with CTEPH. The flow appearance of the target vessels was graded into 4 groups (Pulmonary Flow Grade), and we proposed PEPSI (Pulmonary Edema Predictive Scoring Index) = (sum total change of Pulmonary Flow Grade scores) × (baseline pulmonary vascular resistance). Correlations between the occurrence of RPE and 11 variables, including hemodynamic parameters, number of target vessels, and PEPSI, were analyzed. Hemodynamic parameters improved significantly after a median observation period of 6.4 months, and the sum total changes in Pulmonary Flow Grade scores were significantly correlated with the improvement in hemodynamics. Multivariate analysis revealed that PEPSI was the strongest factor correlated with the occurrence of RPE, and showed PEPSI to be a useful marker of the risk of RPE (cutoff value 35.4, negative predictive value 92.3%). Pulmonary Flow Grade score is useful in determining therapeutic efficacy, and PEPSI is highly supportive in reducing the risk of RPE after PTPA. Using these 2 indexes, PTPA could become a safe and common therapeutic strategy for CTEPH. Copyright © 2013 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
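The index itself is a one-line product, so it can be computed directly from the abstract's definition; only the example grade changes and PVR value below are invented for illustration.

```python
def pepsi(flow_grade_changes, baseline_pvr):
    """PEPSI = (sum of the Pulmonary Flow Grade score changes over all
    treated target vessels) x (baseline pulmonary vascular resistance)."""
    return sum(flow_grade_changes) * baseline_pvr

def high_rpe_risk(score, cutoff=35.4):
    """Cutoff reported in the abstract (negative predictive value 92.3%)."""
    return score > cutoff

# Three vessels improved by 1, 2 and 1 grades at a baseline PVR of 10:
score = pepsi([1, 2, 1], baseline_pvr=10.0)  # -> 40.0
```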

  19. Does early change predict long-term (6 months) improvements in subjects who receive manual therapy for low back pain?

    Science.gov (United States)

    Cook, Chad; Petersen, Shannon; Donaldson, Megan; Wilhelm, Mark; Learman, Ken

    2017-09-01

Early change is commonly assessed for manual therapy interventions and has been used to determine treatment appropriateness. However, current studies have only explored the relationship of between-session or within-session changes to short- and medium-term outcomes. The goal of this study was to determine whether pain changes after two weeks of pragmatic manual therapy could predict those participants with chronic low back pain who demonstrate continued improvements at 6-month follow-up. This study used a retrospective observational design. Univariate logistic regression analyses were performed using a 33% and a 50% pain change to predict improvement. Those who experienced a ≥33% pain reduction by 2 weeks had 6.98 (95% CI = 1.29, 37.53) times higher odds of 50% improvement on the GRoC and 4.74 (95% CI = 1.31, 17.17) times higher odds of 50% improvement on the ODI (at 6 months). Subjects who reported a ≥50% pain reduction at 2 weeks had 5.98 (95% CI = 1.56, 22.88) times higher odds of a 50% improvement in the GRoC and 3.99 (95% CI = 1.23, 12.88) times higher odds of a 50% improvement in the ODI (at 6 months). Future studies may investigate whether a change in plan of care is beneficial for patients who are not showing early improvement predictive of a good long-term outcome.
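The odds ratios with Wald confidence intervals reported above come from a standard 2×2 computation, sketched here; the cell counts in the example are invented for illustration, not the study's data.

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
        a = early responders who improved      b = early responders who did not
        c = non-responders who improved        d = non-responders who did not
    """
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# Hypothetical counts only:
or_, lo, hi = odds_ratio_ci(20, 10, 10, 20)
```

The wide intervals in the abstract (e.g. 1.29 to 37.53) are typical of this formula when one cell count is small, since each 1/n term inflates the standard error of log(OR).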

  20. Predictive Factors for Subjective Improvement in Lumbar Spinal Stenosis Patients with Nonsurgical Treatment: A 3-Year Prospective Cohort Study.

    Directory of Open Access Journals (Sweden)

    Ko Matsudaira

To assess the predictive factors for subjective improvement with nonsurgical treatment in consecutive patients with lumbar spinal stenosis (LSS). Patients with LSS were enrolled from 17 medical centres in Japan. We followed up 274 patients (151 men; mean age, 71 ± 7.4 years) for 3 years. A multivariable logistic regression model was used to assess the predictive factors for subjective symptom improvement with nonsurgical treatment. In 30% of patients, conservative treatment led to a subjective improvement in the symptoms; in 70% of patients, the symptoms remained unchanged, worsened, or required surgical treatment. The multivariable analysis of predictive factors for subjective improvement with nonsurgical treatment showed that the absence of cauda equina symptoms (only radicular symptoms) had an odds ratio (OR) of 3.31 (95% confidence interval [CI]: 1.50-7.31); absence of degenerative spondylolisthesis/scoliosis had an OR of 2.53 (95% CI: 1.13-5.65); <1-year duration of illness had an OR of 3.81 (95% CI: 1.46-9.98); and hypertension had an OR of 2.09 (95% CI: 0.92-4.78). The predictive factors for subjective symptom improvement with nonsurgical treatment in LSS patients were the presence of only radicular symptoms, absence of degenerative spondylolisthesis/scoliosis, and an illness duration of <1 year.

  1. Research on Demand Prediction of Fresh Food Supply Chain Based on Improved Particle Swarm Optimization Algorithm

    OpenAIRE

    He Wang

    2015-01-01

    Demand prediction is an important task and the first premise in supply chain management for enterprises, and has become one of the difficult and actively researched problems in the field. The paper takes fresh food demand prediction as an example and presents a new algorithm for predicting demand in a fresh food supply chain. First, the working principle and the root causes of the defects of the particle swarm optimization algorithm are analyzed in the study; Second, the...

  2. Towards more accurate wind and solar power prediction by improving NWP model physics

    Science.gov (United States)

    Steiner, Andrea; Köhler, Carmen; von Schumann, Jonas; Ritter, Bodo

    2014-05-01

    The growing importance and successive expansion of renewable energies raise new challenges for decision makers, economists, transmission system operators, scientists and many more. In this interdisciplinary field, the role of Numerical Weather Prediction (NWP) is to reduce the errors and provide an a priori estimate of remaining uncertainties associated with the large share of weather-dependent power sources. For this purpose it is essential to optimize NWP model forecasts with respect to those prognostic variables which are relevant for wind and solar power plants. An improved weather forecast serves as the basis for sophisticated power forecasts. Consequently, well-timed energy trading on the stock market and electrical grid stability can be maintained. The German Weather Service (DWD) is currently involved in two projects concerning research in the field of renewable energy, namely ORKA and EWeLiNE. Whereas the latter is in collaboration with the Fraunhofer Institute (IWES), the project ORKA is led by energy & meteo systems (emsys). Both cooperate with German transmission system operators. The goal of the projects is to improve wind and photovoltaic (PV) power forecasts by combining optimized NWP and enhanced power forecast models. In this context, the German Weather Service aims to improve its model system, including the ensemble forecasting system, by working on data assimilation, model physics and statistical post-processing. This presentation is focused on the identification of critical weather situations and the associated errors in the German regional NWP model COSMO-DE. First steps leading to improved physical parameterization schemes within the NWP model are presented. Wind mast measurements reaching up to 200 m height above ground are used for the estimation of the NWP wind forecast error at heights relevant for wind energy plants. One particular problem is the daily cycle in wind speed. The transition from stable stratification during

  3. Improving students' meaningful learning on the predictive nature of quantum mechanics

    Directory of Open Access Journals (Sweden)

    Rodolfo Alves de Carvalho Neto

    2009-03-01

    Full Text Available This paper deals with research about teaching quantum mechanics to 3rd year high school students and their meaningful learning of its predictive aspect; it is based on the Master’s dissertation of one of the authors (CARVALHO NETO, 2006). While teaching quantum mechanics, we emphasized its predictive and essentially probabilistic nature, based on Niels Bohr’s complementarity interpretation (BOHR, 1958). In this context, we have discussed the possibility of predicting measurement results in well-defined experimental contexts, even for individual events. Interviews with students reveal that they have used quantum mechanical ideas, suggesting their meaningful learning of the essentially probabilistic predictions of quantum mechanics.

  4. Measured glomerular filtration rate does not improve prediction of mortality by cystatin C and creatinine.

    Science.gov (United States)

    Sundin, Per-Ola; Sjöström, Per; Jones, Ian; Olsson, Lovisa A; Udumyan, Ruzan; Grubb, Anders; Lindström, Veronica; Montgomery, Scott

    2017-04-01

    Cystatin C may add explanatory power for associations with mortality in combination with other filtration markers, possibly indicating pathways other than glomerular filtration rate (GFR). However, this has not been firmly established since interpretation of associations independent of measured GFR (mGFR) is limited by potential multicollinearity between markers of GFR. The primary aim of this study was to assess associations between cystatin C and mortality, independent of mGFR. A secondary aim was to evaluate the utility of combining cystatin C and creatinine to predict mortality risk. Cox regression was used to assess the associations of cystatin C and creatinine with mortality in 1157 individuals referred for assessment of plasma clearance of iohexol. Since cystatin C and creatinine are inversely related to mGFR, cystatin C⁻¹ and creatinine⁻¹ were used. After adjustment for mGFR, lower cystatin C⁻¹ (higher cystatin C concentration) and higher creatinine⁻¹ (lower creatinine concentration) were independently associated with increased mortality. When nested models were compared, avoiding the potential influence of multicollinearity, the independence of the associations was supported. Among models combining the markers of GFR, adjusted for demographic factors and comorbidity, cystatin C⁻¹ and creatinine⁻¹ combined explained the largest proportion of variance in associations with mortality risk (R² = 0.61). Addition of mGFR did not improve the model. Our results suggest that both creatinine and cystatin C have independent associations with mortality not explained entirely by mGFR and that mGFR does not offer a more precise mortality risk assessment than these endogenous filtration markers combined. © The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.

  5. Improving weather predictability by including land-surface model parameter uncertainty

    Science.gov (United States)

    Orth, Rene; Dutra, Emanuel; Pappenberger, Florian

    2016-04-01

    The land surface forms an important component of Earth system models and interacts nonlinearly with other parts such as ocean and atmosphere. To capture the complex and heterogenous hydrology of the land surface, land surface models include a large number of parameters impacting the coupling to other components of the Earth system model. Focusing on ECMWF's land-surface model HTESSEL we present in this study a comprehensive parameter sensitivity evaluation using multiple observational datasets in Europe. We select 6 poorly constrained effective parameters (surface runoff effective depth, skin conductivity, minimum stomatal resistance, maximum interception, soil moisture stress function shape, total soil depth) and explore their sensitivity to model outputs such as soil moisture, evapotranspiration and runoff using uncoupled simulations and coupled seasonal forecasts. Additionally we investigate the possibility to construct ensembles from the multiple land surface parameters. In the uncoupled runs we find that minimum stomatal resistance and total soil depth have the most influence on model performance. Forecast skill scores are moreover sensitive to the same parameters as HTESSEL performance in the uncoupled analysis. We demonstrate the robustness of our findings by comparing multiple best performing parameter sets and multiple randomly chosen parameter sets. We find better temperature and precipitation forecast skill with the best-performing parameter perturbations demonstrating representativeness of model performance across uncoupled (and hence less computationally demanding) and coupled settings. Finally, we construct ensemble forecasts from ensemble members derived with different best-performing parameterizations of HTESSEL. This incorporation of parameter uncertainty in the ensemble generation yields an increase in forecast skill, even beyond the skill of the default system. Orth, R., E. Dutra, and F. Pappenberger, 2016: Improving weather predictability by

  6. Improved methods for prediction of creep-fatigue in next generation conventional and nuclear plant

    International Nuclear Information System (INIS)

    Payten, Warwick

    2012-01-01

    Materials technology poses a major challenge in the design and construction of next generation super critical/ultra super critical power plant (SC/USC) and Generation IV (GenIV) nuclear plant. New plant is expected to have on the order of a 60-year lifetime, imposing complex design difficulties in areas of creep rupture and creep fatigue damage. For SC/USC plant, the main goal is the enhancement of performance by raising the steam pressure and temperatures. In order to achieve these goals materials with acceptable creep rupture strength at design temperatures and pressures must be used. In GenIV designs, the issue is more complex, with both low and high temperature designs. A key requirement in the majority of the designs, however, will be acceptable resistance to creep rupture, fatigue cracking, and creep fatigue interactions, with the additional effects of void swelling and irradiation creep. The accumulation of creep fatigue damage over time in both SC/USC and GenIV plant will be one of the principal damage mechanisms. This will eventually lead to crack initiation in critical high temperature equipment. Hence, improved knowledge of creep and fatigue interactions is a necessary development as components in power-generating plants move to operate at high temperature under cyclic conditions. The key to safe, reliable operation of these high-energy plants will depend on understanding the factors that affect damage initiation and propagation, as well as developing and validating technologies to predict the accumulation of damage in systems and components.

  7. Sobering stories: narratives of self-redemption predict behavioral change and improved health among recovering alcoholics.

    Science.gov (United States)

    Dunlop, William L; Tracy, Jessica L

    2013-03-01

    The present research examined whether the production of a narrative containing self-redemption (wherein the narrator describes a positive personality change following a negative experience) predicts positive behavioral change. In Study 1, we compared the narratives of alcoholics who had maintained their sobriety for over 4 years with those of alcoholics who had been sober 6 months or less. When describing their last drink, the former were significantly more likely to produce a narrative containing self-redemption than the latter. In Study 2, we examined the relation between the profession of self-redemption and behavioral change using a longitudinal design, by following the newly sober alcoholics from Study 1 over time. Although indistinguishable at initial assessment, newly sober alcoholics whose narratives included self-redemption were substantially more likely to maintain sobriety in the following months, compared to newly sober alcoholics who produced nonredemptive narratives; 83% of the redemptive group maintained sobriety between assessments, compared to 44% of nonredemptive participants. Redemptive participants in Study 2 also demonstrated improved health relative to the nonredemptive group. In both studies, the effects of self-redemption on sobriety and health held after controlling for relevant personality traits, alcohol dependence, recovery program involvement, initial physical and mental health, and additional narrative themes. Collectively, these results suggest that the production of a self-redemptive narrative may stimulate prolonged behavioral change and thus indicate a potentially modifiable psychological process that exhibits a major influence on recovery from addiction. PsycINFO Database Record (c) 2013 APA, all rights reserved

  8. BACE1 elevation engendered by GGA3 deletion increases β-amyloid pathology in association with APP elevation and decreased CHL1 processing in 5XFAD mice.

    Science.gov (United States)

    Kim, WonHee; Ma, Liang; Lomoio, Selene; Willen, Rachel; Lombardo, Sylvia; Dong, Jinghui; Haydon, Philip G; Tesco, Giuseppina

    2018-02-02

    β-site amyloid precursor protein cleaving enzyme 1 (BACE1) is the rate-limiting enzyme in the production of amyloid beta (Aβ), the toxic peptide that accumulates in the brains of Alzheimer's disease (AD) patients. Our previous studies have shown that the clathrin adaptor Golgi-localized γ-ear-containing ARF binding protein 3 (GGA3) plays a key role in the trafficking of BACE1 to lysosomes, where it is normally degraded. GGA3 depletion results in BACE1 stabilization both in vitro and in vivo. Moreover, levels of GGA3 are reduced and inversely related to BACE1 levels in post-mortem brains of AD patients. In order to assess the effect of GGA3 deletion on AD-like phenotypes, we crossed GGA3 -/- mice with 5XFAD mice. BACE1-mediated processing of APP and the cell adhesion molecule L1 like protein (CHL1) was measured as well as levels of Aβ42 and amyloid burden. In 5XFAD mice, we found that hippocampal and cortical levels of GGA3 decreased while BACE1 levels increased with age, similar to what is observed in human AD brains. GGA3 deletion prevented age-dependent elevation of BACE1 in GGA3KO;5XFAD mice. We also found that GGA3 deletion resulted in increased hippocampal levels of Aβ42 and amyloid burden in 5XFAD mice at 12 months of age. While levels of BACE1 did not change with age and gender in GGA3KO;5XFAD mice, amyloid precursor protein (APP) levels increased with age and were higher in female mice. Moreover, elevation of APP was associated with a decreased BACE1-mediated processing of CHL1 not only in 12-month-old 5XFAD mice but also in human brains from subjects affected by Down syndrome, most likely due to substrate competition. This study demonstrates that GGA3 depletion is a leading candidate mechanism underlying elevation of BACE1 in AD. Furthermore, our findings suggest that BACE1 inhibition could exacerbate mechanism-based side effects in conditions associated with APP elevation (e.g. Down syndrome) owing to impairment of BACE1-mediated processing of CHL1

  9. Ensemble approach combining multiple methods improves human transcription start site prediction.

    LENUS (Irish Health Repository)

    Dineen, David G

    2010-01-01

    The computational prediction of transcription start sites is an important unsolved problem. Some recent progress has been made, but many promoters, particularly those not associated with CpG islands, are still difficult to locate using current methods. These methods use different features and training sets, along with a variety of machine learning techniques and result in different prediction sets.

  10. An improved grey model for the prediction of real-time GPS satellite clock bias

    Science.gov (United States)

    Zheng, Z. Y.; Chen, Y. Q.; Lu, X. S.

    2008-07-01

    In real-time GPS precise point positioning (PPP), real-time and reliable satellite clock bias (SCB) prediction is key to implementing real-time GPS PPP. The behaviour of a space-borne GPS atomic clock is difficult to characterize because of its high frequency, sensitivity, and susceptibility to environmental effects; this accords with grey model (GM) theory, i.e., the variation process of SCB can be regarded as a grey system. Firstly, based on the limitations of the quadratic polynomial (QP) model and the traditional GM for predicting SCB, a modified GM(1,1) is put forward in this paper; then, taking GPS SCB data as an example, we analyze clock bias prediction with different sample intervals, the relationship between the GM exponent and prediction accuracy, and the precision of the GM compared with the QP model, and derive a general rule relating SCB type to GM exponent; finally, to test the reliability and validity of the proposed modified GM, taking the IGS clock bias ephemeris product as reference, we analyze the prediction precision of the modified GM. It is shown that the modified GM is reliable and valid for predicting GPS SCB and can offer high-precision SCB predictions for real-time GPS PPP.
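
    For illustration, the standard GM(1,1) fit-and-forecast procedure can be sketched as follows. This is the textbook grey model, not the paper's modified variant, and the sample series is synthetic rather than real clock-bias data:

```python
import numpy as np

def gm11_predict(x0, n_ahead=1):
    """Fit a basic GM(1,1) grey model to a positive series x0 and forecast
    the next n_ahead values (standard GM(1,1) sketch, for illustration)."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                       # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])            # background (mean) sequence
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # develop coeff a, grey input b
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # time-response function
    x0_hat = np.empty_like(x1_hat)
    x0_hat[0] = x1_hat[0]
    x0_hat[1:] = np.diff(x1_hat)             # inverse AGO restores the series
    return x0_hat[-n_ahead:]

# A smoothly drifting synthetic series stands in for clock-bias samples.
series = [100.0, 105.0, 110.25, 115.7625]    # 5% growth per step
print(gm11_predict(series))                  # next value, close to 121.55
```

    Because GM(1,1) fits an exponential to the accumulated series, it tracks smooth, quasi-exponential drifts of the kind described for SCB well, with only a handful of samples.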

  11. Improving runoff prediction using agronomical information in a cropped, loess covered catchment

    NARCIS (Netherlands)

    Lefrancq, Marie; Van Dijk, Paul; Jetten, Victor; Schwob, Matthieu; Payraudeau, Sylvain

    2017-01-01

    Predicting runoff hot spots and hot-moments within a headwater crop-catchment is of the utmost importance to reduce adverse effects on aquatic ecosystems by adapting land use management to control runoff. Reliable predictions of runoff patterns during a crop growing season remain challenging. This

  12. Improving performance of single-path code through a time-predictable memory hierarchy

    DEFF Research Database (Denmark)

    Cilku, Bekim; Puffitsch, Wolfgang; Prokesch, Daniel

    2017-01-01

    -predictable memory hierarchy with a prefetcher that exploits the predictability of execution traces in single-path code to speed up code execution. The new memory hierarchy reduces both the cache-miss penalty time and the cache-miss rate on the instruction cache. The benefit of the approach is demonstrated through...

  13. Regression trees for predicting mortality in patients with cardiovascular disease: What improvement is achieved by using ensemble-based methods?

    Science.gov (United States)

    Austin, Peter C; Lee, Douglas S; Steyerberg, Ewout W; Tu, Jack V

    2012-01-01

    In biomedical research, the logistic regression model is the most commonly used method for predicting the probability of a binary outcome. While many clinical researchers have expressed an enthusiasm for regression trees, this method may have limited accuracy for predicting health outcomes. We aimed to evaluate the improvement that is achieved by using ensemble-based methods, including bootstrap aggregation (bagging) of regression trees, random forests, and boosted regression trees. We analyzed 30-day mortality in two large cohorts of patients hospitalized with either acute myocardial infarction (N = 16,230) or congestive heart failure (N = 15,848) in two distinct eras (1999–2001 and 2004–2005). We found that both the in-sample and out-of-sample prediction of ensemble methods offered substantial improvement in predicting cardiovascular mortality compared to conventional regression trees. However, conventional logistic regression models that incorporated restricted cubic smoothing splines had even better performance. We conclude that ensemble methods from the data mining and machine learning literature increase the predictive performance of regression trees, but may not lead to clear advantages over conventional logistic regression models for predicting short-term mortality in population-based samples of subjects with cardiovascular disease. PMID:22777999
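
    The model comparison described above can be sketched with scikit-learn on synthetic data; the data, hyperparameters, and the use of these particular estimators are illustrative assumptions (the study used clinical cohorts and also evaluated logistic models with restricted cubic splines):

```python
# Sketch: compare a single tree, ensemble tree methods, and logistic
# regression for a binary mortality-style outcome on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=4000, n_features=20, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "single tree": DecisionTreeClassifier(max_depth=4, random_state=0),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "boosted trees": GradientBoostingClassifier(random_state=0),
    "logistic regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: C-statistic = {auc:.3f}")
```

    On out-of-sample data the ensembles typically beat the single tree, mirroring the abstract's finding; whether they beat a well-specified logistic model depends on the data.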

  14. Ensemble approach combining multiple methods improves human transcription start site prediction

    LENUS (Irish Health Repository)

    Dineen, David G

    2010-11-30

    Abstract Background The computational prediction of transcription start sites is an important unsolved problem. Some recent progress has been made, but many promoters, particularly those not associated with CpG islands, are still difficult to locate using current methods. These methods use different features and training sets, along with a variety of machine learning techniques and result in different prediction sets. Results We demonstrate the heterogeneity of current prediction sets, and take advantage of this heterogeneity to construct a two-level classifier ('Profisi Ensemble') using predictions from 7 programs, along with 2 other data sources. Support vector machines using 'full' and 'reduced' data sets are combined in an either/or approach. We achieve a 14% increase in performance over the current state-of-the-art, as benchmarked by a third-party tool. Conclusions Supervised learning methods are a useful way to combine predictions from diverse sources.

  15. A community effort to assess and improve drug sensitivity prediction algorithms.

    Science.gov (United States)

    Costello, James C; Heiser, Laura M; Georgii, Elisabeth; Gönen, Mehmet; Menden, Michael P; Wang, Nicholas J; Bansal, Mukesh; Ammad-ud-din, Muhammad; Hintsanen, Petteri; Khan, Suleiman A; Mpindi, John-Patrick; Kallioniemi, Olli; Honkela, Antti; Aittokallio, Tero; Wennerberg, Krister; Collins, James J; Gallahan, Dan; Singer, Dinah; Saez-Rodriguez, Julio; Kaski, Samuel; Gray, Joe W; Stolovitzky, Gustavo

    2014-12-01

    Predicting the best treatment strategy from genomic information is a core goal of precision medicine. Here we focus on predicting drug response based on a cohort of genomic, epigenomic and proteomic profiling data sets measured in human breast cancer cell lines. Through a collaborative effort between the National Cancer Institute (NCI) and the Dialogue on Reverse Engineering Assessment and Methods (DREAM) project, we analyzed a total of 44 drug sensitivity prediction algorithms. The top-performing approaches modeled nonlinear relationships and incorporated biological pathway information. We found that gene expression microarrays consistently provided the best predictive power of the individual profiling data sets; however, performance was increased by including multiple, independent data sets. We discuss the innovations underlying the top-performing methodology, Bayesian multitask MKL, and we provide detailed descriptions of all methods. This study establishes benchmarks for drug sensitivity prediction and identifies approaches that can be leveraged for the development of new methods.

  16. Improvement of Risk Prediction After Transcatheter Aortic Valve Replacement by Combining Frailty With Conventional Risk Scores.

    Science.gov (United States)

    Schoenenberger, Andreas W; Moser, André; Bertschi, Dominic; Wenaweser, Peter; Windecker, Stephan; Carrel, Thierry; Stuck, Andreas E; Stortecky, Stefan

    2018-02-26

    This study sought to evaluate whether frailty improves mortality prediction in combination with the conventional scores. European System for Cardiac Operative Risk Evaluation (EuroSCORE) or Society of Thoracic Surgeons (STS) score have not been evaluated in combined models with frailty for mortality prediction after transcatheter aortic valve replacement (TAVR). This prospective cohort comprised 330 consecutive TAVR patients ≥70 years of age. Conventional scores and a frailty index (based on assessment of cognition, mobility, nutrition, and activities of daily living) were evaluated to predict 1-year all-cause mortality using Cox proportional hazards regression (providing hazard ratios [HRs] with confidence intervals [CIs]) and measures of test performance (providing likelihood ratio [LR] chi-square test statistic and C-statistic [CS]). All risk scores were predictive of the outcome (EuroSCORE, HR: 1.90 [95% CI: 1.45 to 2.48], LR chi-square test statistic 19.29, C-statistic 0.67; STS score, HR: 1.51 [95% CI: 1.21 to 1.88], LR chi-square test statistic 11.05, C-statistic 0.64; frailty index, HR: 3.29 [95% CI: 1.98 to 5.47], LR chi-square test statistic 22.28, C-statistic 0.66). A combination of the frailty index with either EuroSCORE (LR chi-square test statistic 38.27, C-statistic 0.72) or STS score (LR chi-square test statistic 28.71, C-statistic 0.68) improved mortality prediction. The frailty index accounted for 58.2% and 77.6% of the predictive information in the combined model with EuroSCORE and STS score, respectively. Net reclassification improvement and integrated discrimination improvement confirmed that the added frailty index improved risk prediction. This is the first study showing that the assessment of frailty significantly enhances prediction of 1-year mortality after TAVR in combined risk models with conventional risk scores and relevantly contributes to this improvement. Copyright © 2018 American College of Cardiology Foundation

  17. Plasma proteomics classifiers improve risk prediction for renal disease in patients with hypertension or type 2 diabetes

    DEFF Research Database (Denmark)

    Pena, Michelle J; Jankowski, Joachim; Heinze, Georg

    2015-01-01

    OBJECTIVE: Micro and macroalbuminuria are strong risk factors for progression of nephropathy in patients with hypertension or type 2 diabetes. Early detection of progression to micro and macroalbuminuria may facilitate prevention and treatment of renal diseases. We aimed to develop plasma proteomics classifiers to predict the development of micro or macroalbuminuria in hypertension or type 2 diabetes. METHODS: Patients with hypertension (n = 125) and type 2 diabetes (n = 82) were selected for this case-control study from the Prevention of REnal and Vascular ENd-stage Disease cohort. RESULTS: In hypertensive patients, the classifier improved risk prediction for transition in albuminuria stage on top of the reference model (C-index from 0.69 to 0.78). In patients with type 2 diabetes, the classifier improved risk prediction for transition from micro to macroalbuminuria (C-index from 0...

  18. Pathological fracture prediction in patients with metastatic lesions can be improved with quantitative computed tomography based computer models

    NARCIS (Netherlands)

    Tanck, Esther; van Aken, Jantien B.; van der Linden, Yvette M.; Schreuder, H.W. Bart; Binkowski, Marcin; Huizenga, Henk; Verdonschot, Nico

    2009-01-01

    Purpose: In clinical practice, there is an urgent need to improve the prediction of fracture risk for cancer patients with bone metastases. The methods that are currently used to estimate fracture risk are unsatisfactory, affecting the quality of life of patients with a limited life expectancy.

  19. CT image biomarkers to improve patient-specific prediction of radiation-induced xerostomia and sticky saliva

    NARCIS (Netherlands)

    van Dijk, Lisanne V.; Brouwer, Charlotte L.; van der Schaaf, Arjen; Burgerhof, Johannes G. M.; Beukinga, Roelof J.; Langendijk, Johannes A.; Sijtsema, Nanna M.; Steenbakkers, Roel J. H. M.

    Background and purpose: Current models for the prediction of late patient-rated moderate-to-severe xerostomia (XER12m) and sticky saliva (STIC12m) after radiotherapy are based on dose-volume parameters and baseline xerostomia (XERbase) or sticky saliva (STICbase) scores. The purpose is to improve

  20. Modeling heterogeneous (co)variances from adjacent-SNP groups improves genomic prediction for milk protein composition traits

    DEFF Research Database (Denmark)

    Gebreyesus, Grum; Lund, Mogens Sandø; Buitenhuis, Albert Johannes

    2017-01-01

    Accurate genomic prediction requires a large reference population, which is problematic for traits that are expensive to measure. Traits related to milk protein composition are not routinely recorded due to costly procedures and are considered to be controlled by a few quantitative trait loci of large effect. The amount of variation explained may vary between regions leading to heterogeneous (co)variance patterns across the genome. Genomic prediction models that can efficiently take such heterogeneity of (co)variances into account can result in improved prediction reliability. In this study, we developed and implemented novel univariate and bivariate Bayesian prediction models, based on estimates of heterogeneous (co)variances for genome segments (BayesAS). Available data consisted of milk protein composition traits measured on cows and de-regressed proofs of total protein yield derived for bulls...

  1. Suspended Matter, Chl-a, CDOM, Grain Sizes, and Optical Properties in the Arctic Fjord-Type Estuary, Kangerlussuaq, West Greenland During Summer

    DEFF Research Database (Denmark)

    Lund-Hansen, L. C.; Andersen, T. J.; Nielsen, Morten Holtegaard

    2010-01-01

    Optical constituents such as suspended particulate matter (SPM), chlorophyll (Chl-a), colored dissolved organic matter (CDOM), and grain sizes were obtained on a transect in the arctic fjord-type estuary Kangerlussuaq (66° N) in August 2007 along with optical properties. These comprised diffuse... water outlet. Values of optical constituents and properties decreased with distance from the melt water outlet to a more or less constant level in the central and outer parts of the estuary. There was a strong correlation between inorganic suspended matter (SPMI) and the diffuse attenuation coefficient K_d... from the very highly turbid melt water outlet to clear marine waters. Results showed a strong spatial variation with high values for suspended matter concentrations, CDOM, the diffuse attenuation coefficient K_d(PAR), particle beam attenuation coefficients (c_p), and reflectance R(0−, PAR) at the melt...

  2. Physiologically-based, predictive analytics using the heart-rate-to-Systolic-Ratio significantly improves the timeliness and accuracy of sepsis prediction compared to SIRS.

    Science.gov (United States)

    Danner, Omar K; Hendren, Sandra; Santiago, Ethel; Nye, Brittany; Abraham, Prasad

    2017-04-01

    Enhancing the efficiency of diagnosis and treatment of severe sepsis by using physiologically-based, predictive analytical strategies has not been fully explored. We hypothesize that assessment of the heart-rate-to-systolic ratio significantly increases the timeliness and accuracy of sepsis prediction after emergency department (ED) presentation. We evaluated the records of 53,313 ED patients from a large, urban teaching hospital between January and June 2015. The HR-to-systolic ratio was compared to SIRS criteria for sepsis prediction. There were 884 patients with discharge diagnoses of sepsis, severe sepsis, and/or septic shock. Variations in three presenting variables, heart rate, systolic BP and temperature, were determined to be primary early predictors of sepsis with a 74% (654/884) accuracy compared to 34% (304/884) using SIRS criteria (p < 0.0001) in confirmed septic patients. Physiologically-based predictive analytics improved the accuracy and expediency of sepsis identification via detection of variations in the HR-to-systolic ratio. This approach may lead to earlier sepsis workup and life-saving interventions. Copyright © 2017 Elsevier Inc. All rights reserved.
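
    The screening idea itself is a one-line computation. In the sketch below, the 0.9 cutoff is an assumption for illustration (analogous to a "shock index" threshold), not the paper's trained value:

```python
# Hypothetical sketch: flag possible sepsis when the heart-rate-to-systolic-BP
# ratio is abnormal. The threshold of 0.9 is an illustrative assumption.
def hr_systolic_ratio(heart_rate_bpm, systolic_bp_mmhg):
    return heart_rate_bpm / systolic_bp_mmhg

def flag_possible_sepsis(heart_rate_bpm, systolic_bp_mmhg, threshold=0.9):
    return hr_systolic_ratio(heart_rate_bpm, systolic_bp_mmhg) >= threshold

print(flag_possible_sepsis(110, 95))   # elevated HR, low-normal SBP → True
print(flag_possible_sepsis(72, 120))   # unremarkable vitals → False
```
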

  3. Improving Robustness of Hydrologic Ensemble Predictions Through Probabilistic Pre- and Post-Processing in Sequential Data Assimilation

    Science.gov (United States)

    Wang, S.; Ancell, B. C.; Huang, G. H.; Baetz, B. W.

    2018-03-01

    Data assimilation using the ensemble Kalman filter (EnKF) has been increasingly recognized as a promising tool for probabilistic hydrologic predictions. However, little effort has been made to conduct the pre- and post-processing of assimilation experiments, posing a significant challenge in achieving the best performance of hydrologic predictions. This paper presents a unified data assimilation framework for improving the robustness of hydrologic ensemble predictions. Statistical pre-processing of assimilation experiments is conducted through the factorial design and analysis to identify the best EnKF settings with maximized performance. After the data assimilation operation, statistical post-processing analysis is also performed through the factorial polynomial chaos expansion to efficiently address uncertainties in hydrologic predictions, as well as to explicitly reveal potential interactions among model parameters and their contributions to the predictive accuracy. In addition, the Gaussian anamorphosis is used to establish a seamless bridge between data assimilation and uncertainty quantification of hydrologic predictions. Both synthetic and real data assimilation experiments are carried out to demonstrate feasibility and applicability of the proposed methodology in the Guadalupe River basin, Texas. Results suggest that statistical pre- and post-processing of data assimilation experiments provide meaningful insights into the dynamic behavior of hydrologic systems and enhance robustness of hydrologic ensemble predictions.
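
    The EnKF analysis step at the core of such experiments can be sketched for a directly observed scalar state. This is an illustrative toy assuming the stochastic (perturbed-observation) EnKF; the paper's hydrologic setting is far richer:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(ensemble, obs, obs_var):
    """Update a 1-D state ensemble with a scalar observation using the
    stochastic (perturbed-observation) EnKF; observation operator H = 1."""
    forecast_var = np.var(ensemble, ddof=1)
    gain = forecast_var / (forecast_var + obs_var)   # scalar Kalman gain
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), ensemble.size)
    return ensemble + gain * (perturbed - ensemble)

prior = rng.normal(5.0, 2.0, 100)        # forecast ensemble around 5
posterior = enkf_update(prior, obs=8.0, obs_var=0.5)
print(posterior.mean())                  # pulled toward the observation
```

    The pre-processing the paper describes amounts to tuning choices such as ensemble size and obs_var systematically before running updates like this one.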

  4. Power maximization of a point absorber wave energy converter using improved model predictive control

    Science.gov (United States)

    Milani, Farideh; Moghaddam, Reihaneh Kardehi

    2017-08-01

    This paper considers controlling and maximizing the absorbed power of wave energy converters for irregular waves. With respect to physical constraints of the system, a model predictive control is applied. Irregular waves' behavior is predicted by Kalman filter method. Owing to the great influence of controller parameters on the absorbed power, these parameters are optimized by imperialist competitive algorithm. The results illustrate the method's efficiency in maximizing the extracted power in the presence of unknown excitation force which should be predicted by Kalman filter.
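    The Kalman-filter prediction of the unknown excitation force can be illustrated with a generic linear predict/update cycle; the random-walk model and noise covariances below are illustrative placeholders, not the paper's wave model:

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One predict/update cycle of a linear Kalman filter.
    x, P : state estimate and covariance; z : new measurement."""
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Illustrative: track a slowly varying excitation force with a
# random-walk model (F = I); all matrices are assumed placeholders.
F = np.eye(1); Q = np.eye(1) * 0.01
H = np.eye(1); R = np.eye(1) * 0.5
x, P = np.zeros(1), np.eye(1)
for z in [1.0, 1.2, 0.9, 1.1]:
    x, P = kalman_step(x, P, np.array([z]), F, Q, H, R)
```

    After a few measurements the estimate settles near the noisy force level, and the filtered state is what a model predictive controller would use as its forecast of the excitation.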

  5. An improved method for predicting the effects of flight on jet mixing noise

    Science.gov (United States)

    Stone, J. R.

    1979-01-01

    A method for predicting the effects of flight on jet mixing noise has been developed on the basis of the jet noise theory of Ffowcs Williams (1963) and data derived from model-jet/free-jet simulated flight tests. Predicted and experimental values are compared for the J85 turbojet engine on the Bertin Aerotrain, the low-bypass refanned JT8D engine on a DC-9, and the high-bypass JT9D engine on a DC-10. Over the jet velocity range from 280 to 680 m/sec, the predictions show a standard deviation of 1.5 dB.

  6. A genetic risk score combining ten psoriasis risk loci improves disease prediction.

    Directory of Open Access Journals (Sweden)

    Haoyan Chen

    2011-04-01

    Full Text Available Psoriasis is a chronic, immune-mediated skin disease affecting 2-3% of Caucasians. Recent genetic association studies have identified multiple psoriasis risk loci; however, most of these loci contribute only modestly to disease risk. In this study, we investigated whether a genetic risk score (GRS) combining multiple loci could improve psoriasis prediction. Two approaches were used: a simple risk-allele count (cGRS) and a weighted (wGRS) approach. Ten psoriasis risk SNPs were genotyped in 2815 case-control samples and 858 family samples. We found that the total number of risk alleles in the cases was significantly higher than in controls, mean 13.16 (SD 1.7) versus 12.09 (SD 1.8), p = 4.577×10⁻⁴⁰. The wGRS captured considerably more risk than any SNP considered alone, with a psoriasis OR for high-low wGRS quartiles of 10.55 (95% CI 7.63-14.57, p = 2.010×10⁻⁶⁵). To compare the discriminatory ability of the GRS models, receiver operating characteristic curves were used to calculate the area under the curve (AUC). The AUC for wGRS was significantly greater than for cGRS (72.0% versus 66.5%, p = 2.13×10⁻⁸). Additionally, the AUC for HLA-C alone (rs10484554) was equivalent to the AUC for all nine other risk loci combined (66.2% versus 63.8%, p = 0.18), highlighting the dominance of HLA-C as a risk locus. Logistic regression revealed that the wGRS was significantly associated with two subphenotypes of psoriasis, age of onset (p = 4.91×10⁻⁶) and family history (p = 0.020). Using a liability threshold model, we estimated that the 10 risk loci account for only 11.6% of the genetic variance in psoriasis. In summary, we found that a GRS combining 10 psoriasis risk loci captured significantly more risk than any individual SNP and was associated with early onset of disease and a positive family history. Notably, only a small fraction of psoriasis heritability is captured by the common risk variants identified to date.
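    The two scoring approaches can be sketched directly: the count score sums risk alleles across loci, while the weighted score multiplies each count by the log odds ratio of its locus. The odds ratios below are hypothetical, not the fitted psoriasis values:

```python
import math

def count_grs(risk_allele_counts):
    """Simple count-based score (cGRS): total number of risk alleles,
    each count being 0, 1 or 2 per locus."""
    return sum(risk_allele_counts)

def weighted_grs(risk_allele_counts, odds_ratios):
    """Weighted score (wGRS): each risk-allele count weighted by the
    log odds ratio of its locus."""
    return sum(c * math.log(o)
               for c, o in zip(risk_allele_counts, odds_ratios))
```

    Weighting lets a dominant locus such as HLA-C contribute more than a weak one with the same allele count, which is why the wGRS discriminates better than the cGRS in the study.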

  7. Improving predictive power of physically based rainfall-induced shallow landslide models: a probabilistic approach

    Directory of Open Access Journals (Sweden)

    S. Raia

    2014-03-01

    of several model runs obtained by varying the input parameters are analyzed statistically and compared to the original (deterministic) model output. The comparison suggests an improvement of the predictive power of the model of about 10% and 16% in two small test areas, that is, the Frontignano (Italy) and the Mukilteo (USA) areas. We discuss the computational requirements of TRIGRS-P to determine the potential use of the numerical model to forecast the spatial and temporal occurrence of rainfall-induced shallow landslides in very large areas, extending for several hundreds or thousands of square kilometers. Parallel execution of the code using a simple process distribution and the message passing interface (MPI) on multi-processor machines was successful, opening the possibility of testing the use of TRIGRS-P for the operational forecasting of rainfall-induced shallow landslides over large regions.

  8. Improving predictive power of physically based rainfall-induced shallow landslide models: a probablistic approach

    Science.gov (United States)

    Raia, S.; Alvioli, M.; Rossi, M.; Baum, R.L.; Godt, J.W.; Guzzetti, F.

    2013-01-01

    are analyzed statistically, and compared to the original (deterministic) model output. The comparison suggests an improvement of the predictive power of the model of about 10% and 16% in two small test areas, i.e. the Frontignano (Italy) and the Mukilteo (USA) areas, respectively. We discuss the computational requirements of TRIGRS-P to determine the potential use of the numerical model to forecast the spatial and temporal occurrence of rainfall-induced shallow landslides in very large areas, extending for several hundreds or thousands of square kilometers. Parallel execution of the code using a simple process distribution and the Message Passing Interface (MPI) on multi-processor machines was successful, opening the possibility of testing the use of TRIGRS-P for the operational forecasting of rainfall-induced shallow landslides over large regions.
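    The probabilistic extension both records describe amounts to repeating the deterministic model with sampled inputs and analysing the ensemble of outputs. A toy sketch, with a simple infinite-slope stability index standing in for the full TRIGRS model and assumed parameter ranges:

```python
import random

def factor_of_safety(cohesion, friction_tan, slope_tan, pore_ratio):
    """Toy infinite-slope stability index (a stand-in for the full TRIGRS
    slope model): FS < 1 flags predicted failure."""
    return (cohesion + (1.0 - pore_ratio) * friction_tan) / slope_tan

def failure_probability(n_runs, rng):
    """Probabilistic extension: sample uncertain inputs, rerun the
    deterministic model, and report the fraction of failing runs."""
    failures = 0
    for _ in range(n_runs):
        c = rng.uniform(0.05, 0.25)   # dimensionless cohesion term (assumed range)
        phi = rng.uniform(0.5, 0.9)   # tan(friction angle) (assumed range)
        u = rng.uniform(0.0, 0.8)     # pore-pressure ratio (assumed range)
        if factor_of_safety(c, phi, slope_tan=0.7, pore_ratio=u) < 1.0:
            failures += 1
    return failures / n_runs

p_fail = failure_probability(2000, random.Random(42))
```

    Instead of a binary stable/unstable map, each cell gets a failure probability, which is the statistical output the papers compare against the deterministic run. Each run is independent, which is also why the authors could parallelise the code trivially with MPI.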

  9. Dust in HTRs: Its nature and improving prediction of its resuspension

    Energy Technology Data Exchange (ETDEWEB)

    Kissane, M.P., E-mail: martin.kissane@irsn.fr [Institut de Radioprotection et de Surete Nucleaire, Division de la Prevention des Accidents Majeurs, BP 3, 13115 Saint-Paul-lez-Durance (France); Zhang, F. [Institut de Radioprotection et de Surete Nucleaire, Division de la Prevention des Accidents Majeurs, BP 3, 13115 Saint-Paul-lez-Durance (France); University of Newcastle, School of Mechanical and Systems Engineering, Stephenson Building, Claremont Road, Newcastle-upon-Tyne NE1 7RU (United Kingdom); Reeks, M.W. [University of Newcastle, School of Mechanical and Systems Engineering, Stephenson Building, Claremont Road, Newcastle-upon-Tyne NE1 7RU (United Kingdom)

    2012-10-15

    The HTR primary-system environment comprises nuclear graphites, alloys, dust (primarily carbonaceous) and high-purity helium. The amount of carbonaceous dust produced in a pebble-bed system would be considerably greater than one using a prismatic core, with a significant contribution arising from the partially-graphitized binder of the pebbles. The dust is very fine, <10 μm in size. Experience with HTRs shows the primary system to be contaminated by the isotopes ¹³⁴Cs, ¹³⁷Cs, ⁹⁰Sr, ¹¹⁰ᵐAg, ¹³¹I, ¹³⁵Xe, ⁸⁵Kr and tritium at a level representing an occupational-health issue rather than a safety issue. However, strong sorption of caesium, strontium, iodine and tritium onto carbonaceous dust has been observed. Hence, the extent to which deposited dust can be resuspended during a depressurization accident is a safety issue since the dust comprises the main vector for release of radioactivity into the confinement. For fine dust on a surface, the principal force keeping it in place arises from inter-molecular (van der Waals) forces while aerodynamic forces, mainly drag, act to remove it. The reference model chosen here for improving resuspension predictions is the so-called Rock'n'Roll model. This model is based on a statistical approach leading to a resuspension rate for the escape of particles from a potential well via the action of the fluctuating aerodynamic force caused by turbulence. The as-published Rock'n'Roll model assumes that the fluctuations of the aerodynamic force obey a Gaussian distribution. Here, we introduce calculated statistics for the fluctuations taken from a large-eddy simulation of turbulent channel flow (work is in progress on generating these statistics using direct numerical simulation of turbulence). The overall influence of more-realistic (non-Gaussian) forces on the resuspension rate is found to be an increase in short-term resuspension. Given this and the fact that the adhesive

  10. P-wave characteristics on routine preoperative electrocardiogram improve prediction of new-onset postoperative atrial fibrillation in cardiac surgery.

    Science.gov (United States)

    Wong, Jim K; Lobato, Robert L; Pinesett, Andre; Maxwell, Bryan G; Mora-Mangano, Christina T; Perez, Marco V

    2014-12-01

    To test the hypothesis that including preoperative electrocardiogram (ECG) characteristics with clinical variables significantly improves the new-onset postoperative atrial fibrillation prediction model. Retrospective analysis. Single-center university hospital. Five hundred twenty-six patients, ≥ 18 years of age, who underwent coronary artery bypass grafting, aortic valve replacement, mitral valve replacement/repair, or a combination of valve surgery and coronary artery bypass grafting requiring cardiopulmonary bypass. Retrospective review of medical records. Baseline characteristics and cardiopulmonary bypass times were collected. Digitally-measured timing and voltages from preoperative electrocardiograms were extracted. Postoperative atrial fibrillation was defined as atrial fibrillation requiring therapeutic intervention. Two hundred eight (39.5%) patients developed postoperative atrial fibrillation. Clinical predictors included age and ejection fraction. Adding electrocardiogram variables to the prediction model with only clinical predictors significantly improved the area under the receiver operating characteristic curve, from 0.71 to 0.78 (p < 0.01). Overall net reclassification improvement was 0.059 (p = 0.09). Among those who developed postoperative atrial fibrillation, the net reclassification improvement was 0.063 (p = 0.03). Several P-wave characteristics are independently associated with postoperative atrial fibrillation. Addition of these parameters improves the postoperative atrial fibrillation prediction model. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
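    The headline comparison (AUC 0.71 vs 0.78) rests on the area under the ROC curve, which can be computed from the rank-sum identity: the probability that a random case outscores a random control. The scores below are invented for illustration only:

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney identity:
    the fraction of (positive, negative) pairs where the positive
    scores higher, counting ties as half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]                         # 1 = developed POAF
clinical = [0.6, 0.4, 0.55, 0.5, 0.3, 0.45]         # clinical-only risk scores
combined = [0.8, 0.6, 0.7, 0.4, 0.2, 0.3]           # clinical + P-wave scores
```

    On this toy data the combined model separates cases from controls perfectly (AUC 1.0) while the clinical-only model does not, mirroring (in exaggerated form) the improvement reported in the study.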

  11. Satellite retrievals of leaf chlorophyll and photosynthetic capacity for improved modeling of GPP

    KAUST Repository

    Houborg, Rasmus; Cescatti, Alessandro; Migliavacca, Mirco; Kustas, W.P.

    2013-01-01

    This study investigates the utility of in situ and satellite-based leaf chlorophyll (Chl) estimates for quantifying leaf photosynthetic capacity and for constraining model simulations of Gross Primary Productivity (GPP) over a corn field in Maryland, U.S.A. The maximum rate of carboxylation (V-max) represents a key control on leaf photosynthesis within the widely employed C-3 and C-4 photosynthesis models proposed by Farquhar et al. (1980) and Collatz et al. (1992), respectively. A semi-mechanistic relationship between V-max(25) (V-max normalized to 25 degrees C) and Chl is derived based on interlinkages between V-max(25), Rubisco enzyme kinetics, leaf nitrogen, and Chl reported in the experimental literature. The resulting linear V-max(25) - Chl relationship is embedded within the photosynthesis scheme of the Community Land Model (CLM), thereby bypassing the use of fixed plant functional type (PFT) specific V-max(25) values. The effect of the updated parameterization on simulated carbon fluxes is tested over a corn field growing season using: (1) a detailed Chl time-series established on the basis of intensive field measurements and (2) Chl estimates derived from Landsat imagery using the REGularized canopy reFLECtance (REGFLEC) tool. Validations against flux tower observations demonstrate the benefit of using Chl to parameterize V-max(25) to account for variations in nitrogen availability imposed by severe environmental conditions. The use of V-max(25) that varied seasonally as a function of satellite-based Chl, rather than a fixed PFT-specific value, significantly improved the agreement with observed tower fluxes, with Pearson's correlation coefficient (r) increasing from 0.88 to 0.93 and the root-mean-square-deviation decreasing from 4.77 to 3.48 mu mol m(-2) s(-1). The results support the use of Chl as a proxy for photosynthetic capacity using generalized relationships between V-max(25) and Chl, and advocate the potential of satellite-retrieved Chl for constraining
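    The parameterisation described above replaces a fixed PFT-specific V-max(25) with a linear function of Chl, which is then scaled to leaf temperature inside the photosynthesis scheme. A sketch with placeholder coefficients (the fitted slope, intercept, and activation energy are illustrative, not the study's values):

```python
import math

def vcmax25_from_chl(chl, intercept=10.0, slope=1.0):
    """Linear Vcmax25-Chl parameterisation. Coefficients are assumed
    placeholders; chl in ug cm-2, Vcmax25 in umol m-2 s-1."""
    return intercept + slope * chl

def arrhenius_scale(vcmax25, leaf_temp_c, activation_energy=65330.0):
    """Scale Vcmax from 25 C to leaf temperature with an Arrhenius term,
    as done in Farquhar-type photosynthesis schemes."""
    R = 8.314                                  # J mol-1 K-1
    tk, tk25 = leaf_temp_c + 273.15, 298.15
    return vcmax25 * math.exp(activation_energy * (tk - tk25) / (R * tk25 * tk))
```

    A satellite Chl retrieval for each scene then yields a seasonally varying Vcmax25, which is the mechanism behind the improved flux agreement reported above.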

  12. Satellite retrievals of leaf chlorophyll and photosynthetic capacity for improved modeling of GPP

    KAUST Repository

    Houborg, Rasmus

    2013-08-01

    This study investigates the utility of in situ and satellite-based leaf chlorophyll (Chl) estimates for quantifying leaf photosynthetic capacity and for constraining model simulations of Gross Primary Productivity (GPP) over a corn field in Maryland, U.S.A. The maximum rate of carboxylation (V-max) represents a key control on leaf photosynthesis within the widely employed C-3 and C-4 photosynthesis models proposed by Farquhar et al. (1980) and Collatz et al. (1992), respectively. A semi-mechanistic relationship between V-max(25) (V-max normalized to 25 degrees C) and Chl is derived based on interlinkages between V-max(25), Rubisco enzyme kinetics, leaf nitrogen, and Chl reported in the experimental literature. The resulting linear V-max(25) - Chl relationship is embedded within the photosynthesis scheme of the Community Land Model (CLM), thereby bypassing the use of fixed plant functional type (PFT) specific V-max(25) values. The effect of the updated parameterization on simulated carbon fluxes is tested over a corn field growing season using: (1) a detailed Chl time-series established on the basis of intensive field measurements and (2) Chl estimates derived from Landsat imagery using the REGularized canopy reFLECtance (REGFLEC) tool. Validations against flux tower observations demonstrate the benefit of using Chl to parameterize V-max(25) to account for variations in nitrogen availability imposed by severe environmental conditions. The use of V-max(25) that varied seasonally as a function of satellite-based Chl, rather than a fixed PFT-specific value, significantly improved the agreement with observed tower fluxes, with Pearson's correlation coefficient (r) increasing from 0.88 to 0.93 and the root-mean-square-deviation decreasing from 4.77 to 3.48 mu mol m(-2) s(-1). The results support the use of Chl as a proxy for photosynthetic capacity using generalized relationships between V-max(25) and Chl, and advocate the potential of satellite retrieved Chl for

  13. Numerical Weather Prediction and Relative Economic Value framework to improve Integrated Urban Drainage- Wastewater management

    DEFF Research Database (Denmark)

    Courdent, Vianney Augustin Thomas

    domains during which the IUDWS can be coupled with the electrical smart grid to optimise its energy consumption. The REV framework was used to determine which decision threshold of the EPS (i.e. number of ensemble members predicting an event) provides the highest benefit for a given situation...... in cities where space is scarce and large-scale construction work a nuisance. This thesis focuses on flow domain predictions of IUDWS from numerical weather prediction (NWP) to select relevant control objectives for the IUDWS and develops a framework based on the relative economic value (REV) approach...... to evaluate when acting on the forecast is beneficial or not. Rainfall forecasts are extremely valuable for estimating near future storm-water-related impacts on the IUDWS. Therefore, weather radar extrapolation “nowcasts” provide valuable predictions for RTC. However, radar nowcasts are limited...

  14. Improving genomic prediction for Danish Jersey using a joint Danish-US reference population

    DEFF Research Database (Denmark)

    Su, Guosheng; Nielsen, Ulrik Sander; Wiggans, G

    Accuracy of genomic prediction depends on the information in the reference population. Achieving an adequate sized reference population is a challenge for genomic prediction in small cattle populations. One way to increase the size of reference population is to combine reference data from different...... populations. The objective of this study was to assess the gain of genomic prediction accuracy when including US Jersey bulls in the Danish Jersey reference population. The data included 1,262 Danish progeny-tested bulls and 1,157 US progeny-tested bulls. Genomic breeding values (GEBV) were predicted using...... a GBLUP model from the Danish reference population and the joint Danish-US reference population. The traits in the analysis were milk yield, fat yield, protein yield, fertility, mastitis, longevity, body conformation, feet & legs, and longevity. Eight of the nine traits benefitted from the inclusion of US...

  15. MultiLoc2: integrating phylogeny and Gene Ontology terms improves subcellular protein localization prediction

    Directory of Open Access Journals (Sweden)

    Kohlbacher Oliver

    2009-09-01

    Full Text Available Abstract Background Knowledge of subcellular localization of proteins is crucial to proteomics, drug target discovery and systems biology since localization and biological function are highly correlated. In recent years, numerous computational prediction methods have been developed. Nevertheless, there is still a need for prediction methods that show more robustness and higher accuracy. Results We extended our previous MultiLoc predictor by incorporating phylogenetic profiles and Gene Ontology terms. Two different datasets were used for training the system, resulting in two versions of this high-accuracy prediction method. One version is specialized for globular proteins and predicts up to five localizations, whereas a second version covers all eleven main eukaryotic subcellular localizations. In a benchmark study with five localizations, MultiLoc2 performs considerably better than other methods for animal and plant proteins and comparably for fungal proteins. Furthermore, MultiLoc2 performs clearly better when using a second dataset that extends the benchmark study to all eleven main eukaryotic subcellular localizations. Conclusion MultiLoc2 is an extensive high-performance subcellular protein localization prediction system. By incorporating phylogenetic profiles and Gene Ontology terms MultiLoc2 yields higher accuracies compared to its previous version. Moreover, it outperforms other prediction systems in two benchmark studies. MultiLoc2 is available as a user-friendly and free web-service, available at: http://www-bs.informatik.uni-tuebingen.de/Services/MultiLoc2.

  16. Global proteomics profiling improves drug sensitivity prediction: results from a multi-omics, pan-cancer modeling approach.

    Science.gov (United States)

    Ali, Mehreen; Khan, Suleiman A; Wennerberg, Krister; Aittokallio, Tero

    2018-04-15

    Proteomics profiling is increasingly being used for molecular stratification of cancer patients and cell-line panels. However, systematic assessment of the predictive power of large-scale proteomic technologies across various drug classes and cancer types is currently lacking. To that end, we carried out the first pan-cancer, multi-omics comparative analysis of the relative performance of two proteomic technologies, targeted reverse phase protein array (RPPA) and global mass spectrometry (MS), in terms of their accuracy for predicting the sensitivity of cancer cells to both cytotoxic chemotherapeutics and molecularly targeted anticancer compounds. Our results in two cell-line panels demonstrate how MS profiling improves drug response predictions beyond that of the RPPA or the other omics profiles when used alone. However, frequent missing MS data values complicate its use in predictive modeling and required additional filtering, such as focusing on completely measured or known oncoproteins, to obtain maximal predictive performance. Rather strikingly, the two proteomics profiles provided complementary predictive signal both for the cytotoxic and targeted compounds. Further, information about the cellular-abundance of primary target proteins was found critical for predicting the response of targeted compounds, although the non-target features also contributed significantly to the predictive power. The clinical relevance of the selected protein markers was confirmed in cancer patient data. These results provide novel insights into the relative performance and optimal use of the widely applied proteomic technologies, MS and RPPA, which should prove useful in translational applications, such as defining the best combination of omics technologies and marker panels for understanding and predicting drug sensitivities in cancer patients. Processed datasets, R as well as Matlab implementations of the methods are available at https://github.com/mehr-een/bemkl-rbps.

  17. Improvement of NO and CO predictions for a homogeneous combustion SI engine using a novel emissions model

    International Nuclear Information System (INIS)

    Karvountzis-Kontakiotis, Apostolos; Ntziachristos, Leonidas

    2016-01-01

    Highlights: • Presentation of a novel emissions model to predict pollutants formation in engines. • Model based on detailed chemistry, requires no application-specific calibration. • Combined with 0D and 1D combustion models with low additional computational cost. • Demonstrates accurate prediction of cyclic variability of pollutants emissions. - Abstract: This study proposes a novel emissions model for the prediction of spark ignition (SI) engine emissions at homogeneous combustion conditions, using post combustion analysis and a detailed chemistry mechanism. The novel emissions model considers an unburned and a burned zone, where the latter is considered as a homogeneous reactor and is modeled using a detailed chemical kinetics mechanism. This allows detailed emission predictions at high speed practically based only on combustion pressure and temperature profiles, without the need for calibration of the model parameters. The predictability of the emissions model is compared against the extended Zeldovich mechanism for NO and a simplified two-step reaction kinetic model for CO, which both constitute the most widespread existing approaches in the literature. Under various engine load and speed conditions examined, the mean error in NO prediction was 28% for the existing models and less than 1.3% for the new model proposed. The novel emissions model was also used to predict emissions variation due to cyclic combustion variability and demonstrated mean prediction error of 6% and 3.6% for NO and CO respectively, compared to 36% (NO) and 67% (CO) for the simplified model. The results show that the emissions model proposed offers substantial improvements in the prediction of the results without significant increase in calculation time.
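    For reference, the extended Zeldovich mechanism that the new model is benchmarked against is dominated by the step N2 + O → NO + N. Its initial NO formation rate can be sketched with textbook (Heywood-style) constants; the concentrations in the usage line are arbitrary illustrative values:

```python
import math

def zeldovich_no_rate(temp_k, conc_o, conc_n2):
    """Initial thermal-NO formation rate d[NO]/dt from the rate-limiting
    Zeldovich step N2 + O -> NO + N, assuming steady-state N atoms and
    negligible reverse reactions. Rate constant is the commonly quoted
    textbook value; concentrations in mol cm-3, rate in mol cm-3 s-1."""
    k1f = 1.8e14 * math.exp(-38370.0 / temp_k)   # cm3 mol-1 s-1
    return 2.0 * k1f * conc_o * conc_n2
```

    The steep exponential temperature dependence is why NO predictions are so sensitive to the burned-zone temperature profile, and why cycle-to-cycle combustion variability translates into large NO variability.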

  18. Improved methods of online monitoring and prediction in condensate and feed water system of nuclear power plant

    International Nuclear Information System (INIS)

    Wang, Hang; Peng, Min-jun; Wu, Peng; Cheng, Shou-yu

    2016-01-01

    Highlights: • Different methods for online monitoring and diagnosis are summarized. • Numerical simulation modeling of condensate and feed water system in nuclear power plant are done by FORTRAN programming. • Integrated online monitoring and prediction methods have been developed and tested. • Online monitoring module, fault diagnosis module and trends prediction module can be verified with each other. - Abstract: Faults or accidents may occur in a nuclear power plant (NPP), but it is hard for operators to recognize the situation and take effective measures quickly. So, online monitoring, diagnosis and prediction (OMDP) is used to provide enough information to operators and improve the safety of NPPs. In this paper, distributed conservation equation (DCE) and artificial immunity system (AIS) are proposed for online monitoring and diagnosis. On this basis, quantitative simulation models and interactive database are combined to predict the trends and severity of faults. The effectiveness of OMDP in improving the monitoring and prediction of condensate and feed water system (CFWS) was verified through simulation tests.
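    A conservation-equation residual of the kind used in the DCE approach can be sketched as a simple mass-balance check on a control volume; the three-sample persistence rule for declaring a fault is an illustrative assumption, not the paper's criterion:

```python
def mass_balance_residual(flow_in, flow_out, d_inventory_dt):
    """Conservation-equation residual for a control volume: near zero in
    fault-free operation, persistently large under a leak or sensor fault."""
    return flow_in - flow_out - d_inventory_dt

def detect_fault(residuals, threshold):
    """Flag a fault when the last three residuals all exceed the threshold
    (persistence guards against one-off measurement noise)."""
    return all(abs(r) > threshold for r in residuals[-3:])
```

    In an OMDP-style system the monitoring module would stream such residuals, the diagnosis module would map which balances are violated to a fault candidate, and the prediction module would then simulate the candidate fault forward.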

  19. Combining sequence-based prediction methods and circular dichroism and infrared spectroscopic data to improve protein secondary structure determinations

    Directory of Open Access Journals (Sweden)

    Lees Jonathan G

    2008-01-01

    Full Text Available Abstract Background A number of sequence-based methods exist for protein secondary structure prediction. Protein secondary structures can also be determined experimentally from circular dichroism, and infrared spectroscopic data using empirical analysis methods. It has been proposed that comparable accuracy can be obtained from sequence-based predictions as from these biophysical measurements. Here we have examined the secondary structure determination accuracies of sequence prediction methods with the empirically determined values from the spectroscopic data on datasets of proteins for which both crystal structures and spectroscopic data are available. Results In this study we show that the sequence prediction methods have accuracies nearly comparable to those of spectroscopic methods. However, we also demonstrate that combining the spectroscopic and sequence-based techniques produces significant overall improvements in secondary structure determinations. In addition, combining the extra information content available from synchrotron radiation circular dichroism data with sequence methods also shows improvements. Conclusion Combining sequence prediction with experimentally determined spectroscopic methods for protein secondary structure content significantly enhances the accuracy of the overall results obtained.
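    Combining the two sources of secondary-structure information can be as simple as a weighted average of the fractional contents followed by renormalisation; the 50/50 default weight below is an assumption for illustration, not the paper's combination scheme:

```python
def combine_estimates(seq_frac, spec_frac, w_spec=0.5):
    """Combine sequence-predicted and spectroscopically determined
    secondary-structure fractions (e.g. helix, sheet, other) by weighted
    averaging, then renormalise so the fractions sum to one."""
    mixed = [w_spec * s + (1.0 - w_spec) * q
             for s, q in zip(spec_frac, seq_frac)]
    total = sum(mixed)
    return [m / total for m in mixed]

# e.g. sequence method says 40% helix, CD spectroscopy says 50% helix
combined = combine_estimates([0.4, 0.3, 0.3], [0.5, 0.2, 0.3])
```

    Averaging partially independent error sources is the intuition behind the reported improvement: where the two methods disagree, the blend tends to sit closer to the crystal-structure value than either alone.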

  20. On the Use of Backward Difference Formulae to Improve the Prediction of Direction in Market Related Data

    Directory of Open Access Journals (Sweden)

    E. Momoniat

    2013-01-01

    Full Text Available The use of a BDF method as a tool to correct the direction of predictions made using curve fitting techniques is investigated. Random data is generated in such a fashion that it has the same properties as the data we are modelling. The data is assumed to have “memory” such that certain information embedded in the data will remain within a certain range of points. Data within this period where “memory” exists—say at time steps t1,t2,…,tn—is curve-fitted to produce a prediction at the next discrete time step, tn+1. In this manner a vector of predictions is generated and converted into a discrete ordinary differential equation representing the gradient of the data. The BDF method implemented with this lower order approximation is used as a means of improving upon the direction of the generated predictions. The use of the BDF method in this manner improves the prediction of the direction of the time series by approximately 30%.
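    The second-order backward difference formula underlying the corrector approximates the gradient at the newest point from the last three values; the direction-flipping rule below is a simplified stand-in for the paper's procedure, shown only to make the idea concrete:

```python
def bdf2_gradient(y, h=1.0):
    """Second-order backward difference approximation of the gradient at
    the newest point: (3*y[n] - 4*y[n-1] + y[n-2]) / (2*h)."""
    return (3.0 * y[-1] - 4.0 * y[-2] + y[-3]) / (2.0 * h)

def correct_direction(prediction, last_value, y_history):
    """If the curve-fit prediction moves against the BDF-estimated trend,
    reflect its step around the last observed value (an assumed, simple
    corrector in the spirit of the paper)."""
    trend = bdf2_gradient(y_history)
    step = prediction - last_value
    if trend != 0.0 and step * trend < 0.0:
        return last_value - step
    return prediction
```

    With history [1, 2, 3] the BDF2 trend is upward, so a curve-fit prediction of 2.5 (a downward step from 3.0) is flipped to 3.5, while an already-upward prediction passes through unchanged.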

  1. Improving Air Quality (and Weather) Predictions using Advanced Data Assimilation Techniques Applied to Coupled Models during KORUS-AQ

    Science.gov (United States)

    Carmichael, G. R.; Saide, P. E.; Gao, M.; Streets, D. G.; Kim, J.; Woo, J. H.

    2017-12-01

    Ambient aerosols are important air pollutants with direct impacts on human health and on the Earth's weather and climate systems through their interactions with radiation and clouds. Their role is dependent on their distributions of size, number, phase and composition, which vary significantly in space and time. There remain large uncertainties in simulated aerosol distributions due to uncertainties in emission estimates and in chemical and physical processes associated with their formation and removal. These uncertainties lead to large uncertainties in weather and air quality predictions and in estimates of health and climate change impacts. Despite these uncertainties and challenges, regional-scale coupled chemistry-meteorological models such as WRF-Chem have significant capabilities in predicting aerosol distributions and explaining aerosol-weather interactions. We explore the hypothesis that new advances in on-line, coupled atmospheric chemistry/meteorological models, and new emission inversion and data assimilation techniques applicable to such coupled models, can be applied in innovative ways using current and evolving observation systems to improve predictions of aerosol distributions at regional scales. We investigate the impacts of assimilating AOD from geostationary satellite (GOCI) and surface PM2.5 measurements on predictions of AOD and PM in Korea during KORUS-AQ through a series of experiments. The results suggest assimilating datasets from multiple platforms can improve the predictions of aerosol temporal and spatial distributions.

  2. CT image biomarkers to improve patient-specific prediction of radiation-induced xerostomia and sticky saliva.

    Science.gov (United States)

    van Dijk, Lisanne V; Brouwer, Charlotte L; van der Schaaf, Arjen; Burgerhof, Johannes G M; Beukinga, Roelof J; Langendijk, Johannes A; Sijtsema, Nanna M; Steenbakkers, Roel J H M

    2017-02-01

    Current models for the prediction of late patient-rated moderate-to-severe xerostomia (XER12m) and sticky saliva (STIC12m) after radiotherapy are based on dose-volume parameters and baseline xerostomia (XERbase) or sticky saliva (STICbase) scores. The purpose is to improve prediction of XER12m and STIC12m with patient-specific characteristics, based on CT image biomarkers (IBMs). Planning CT-scans and patient-rated outcome measures were prospectively collected for 249 head and neck cancer patients treated with definitive radiotherapy with or without systemic treatment. The potential IBMs represent geometric, CT intensity and textural characteristics of the parotid and submandibular glands. Lasso regularisation was used to create multivariable logistic regression models, which were internally validated by bootstrapping. The prediction of XER12m could be improved significantly by adding the IBM "Short Run Emphasis" (SRE), which quantifies heterogeneity of parotid tissue, to a model with mean contra-lateral parotid gland dose and XERbase. For STIC12m, the IBM maximum CT intensity of the submandibular gland was selected in addition to STICbase and mean dose to submandibular glands. Prediction of XER12m and STIC12m was improved by including IBMs representing heterogeneity and density of the salivary glands, respectively. These IBMs could guide additional research to the patient-specific response of healthy tissue to radiation dose. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
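    The selected IBM, Short Run Emphasis, is a standard grey-level run-length statistic that weights short runs most heavily, so heterogeneous tissue (many short runs) scores high. A sketch over a precomputed run-length matrix, where row index is grey level and column j counts runs of length j+1:

```python
def short_run_emphasis(run_length_matrix):
    """Short Run Emphasis from a grey-level run-length matrix p(i, j):
    SRE = (1/Nr) * sum_ij p(i, j) / j**2, where j is the run length and
    Nr the total number of runs. 1.0 means all runs have length one."""
    total_runs = sum(sum(row) for row in run_length_matrix)
    sre = 0.0
    for row in run_length_matrix:
        for j, count in enumerate(row, start=1):
            sre += count / (j * j)
    return sre / total_runs
```

    A matrix of all length-1 runs gives SRE = 1.0, while longer runs pull the value toward zero, which is how the biomarker separates heterogeneous from homogeneous parotid tissue.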

  3. A method for improving predictive modeling by taking into account lag time: Example of selenium bioaccumulation in a flowing system

    Energy Technology Data Exchange (ETDEWEB)

    Beckon, William N., E-mail: William_Beckon@fws.gov

    2016-07-15

    Highlights: • A method for estimating response time in cause-effect relationships is demonstrated. • Predictive modeling is appreciably improved by taking into account this lag time. • Bioaccumulation lag is greater for organisms at higher trophic levels. • This methodology may be widely applicable in disparate disciplines. - Abstract: For bioaccumulative substances, efforts to predict concentrations in organisms at upper trophic levels, based on measurements of environmental exposure, have been confounded by the appreciable but hitherto unknown amount of time it may take for bioaccumulation to occur through various pathways and across several trophic transfers. The study summarized here demonstrates an objective method of estimating this lag time by testing a large array of potential lag times for selenium bioaccumulation, selecting the lag that provides the best regression between environmental exposure (concentration in ambient water) and concentration in the tissue of the target organism. Bioaccumulation lag is generally greater for organisms at higher trophic levels, reaching times of more than a year in piscivorous fish. Predictive modeling of bioaccumulation is improved appreciably by taking into account this lag. More generally, the method demonstrated here may improve the accuracy of predictive modeling in a wide variety of other cause-effect relationships in which lag time is substantial but inadequately known, in disciplines as diverse as climatology (e.g., the effect of greenhouse gases on sea levels) and economics (e.g., the effects of fiscal stimulus on employment).

  4. A method for improving predictive modeling by taking into account lag time: Example of selenium bioaccumulation in a flowing system

    International Nuclear Information System (INIS)

    Beckon, William N.

    2016-01-01

    Highlights: • A method for estimating response time in cause-effect relationships is demonstrated. • Predictive modeling is appreciably improved by taking into account this lag time. • Bioaccumulation lag is greater for organisms at higher trophic levels. • This methodology may be widely applicable in disparate disciplines. - Abstract: For bioaccumulative substances, efforts to predict concentrations in organisms at upper trophic levels, based on measurements of environmental exposure, have been confounded by the appreciable but hitherto unknown amount of time it may take for bioaccumulation to occur through various pathways and across several trophic transfers. The study summarized here demonstrates an objective method of estimating this lag time by testing a large array of potential lag times for selenium bioaccumulation, selecting the lag that provides the best regression between environmental exposure (concentration in ambient water) and concentration in the tissue of the target organism. Bioaccumulation lag is generally greater for organisms at higher trophic levels, reaching times of more than a year in piscivorous fish. Predictive modeling of bioaccumulation is improved appreciably by taking into account this lag. More generally, the method demonstrated here may improve the accuracy of predictive modeling in a wide variety of other cause-effect relationships in which lag time is substantial but inadequately known, in disciplines as diverse as climatology (e.g., the effect of greenhouse gases on sea levels) and economics (e.g., the effects of fiscal stimulus on employment).

  5. Improving stability of prediction models based on correlated omics data by using network approaches.

    Directory of Open Access Journals (Sweden)

    Renaud Tissier

    Full Text Available Building prediction models based on complex omics datasets such as transcriptomics, proteomics, metabolomics remains a challenge in bioinformatics and biostatistics. Regularized regression techniques are typically used to deal with the high dimensionality of these datasets. However, due to the presence of correlation in the datasets, it is difficult to select the best model and application of these methods yields unstable results. We propose a novel strategy for model selection where the obtained models also perform well in terms of overall predictability. Several three-step approaches are considered, where the steps are (1) network construction, (2) clustering to empirically derive modules or pathways, and (3) building a prediction model incorporating the information on the modules. For the first step, we use weighted correlation networks and Gaussian graphical modelling. Identification of groups of features is performed by hierarchical clustering. The grouping information is included in the prediction model by using group-based variable selection or group-specific penalization. We compare the performance of our new approaches with standard regularized regression via simulations. Based on these results we provide recommendations for selecting a strategy for building a prediction model given the specific goal of the analysis and the sizes of the datasets. Finally we illustrate the advantages of our approach by application of the methodology to two problems, namely prediction of body mass index in the DIetary, Lifestyle, and Genetic determinants of Obesity and Metabolic syndrome study (DILGOM) and prediction of the response of each breast cancer cell line to treatment with specific drugs using a breast cancer cell lines pharmacogenomics dataset.
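The three-step approach (network, modules, group-aware model) might be sketched roughly as below; the correlation-distance clustering and the per-module mean plus ridge fit are simplified stand-ins for the weighted correlation networks, Gaussian graphical models, and group-penalized regression the authors actually use.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_then_ridge(X, y, n_clusters=3, alpha=1.0):
    """(1) correlation 'network' among features, (2) hierarchical clustering
    into modules, (3) ridge regression on per-module mean features (a simple
    stand-in for group-based selection / group-specific penalization)."""
    corr = np.corrcoef(X, rowvar=False)
    dist = 1.0 - np.abs(corr)                 # correlation distance
    iu = np.triu_indices_from(dist, k=1)      # condensed form for linkage
    Z = linkage(dist[iu], method="average")
    labels = fcluster(Z, t=n_clusters, criterion="maxclust")
    # module summary: mean of each cluster's features
    M = np.column_stack([X[:, labels == c].mean(axis=1)
                         for c in range(1, n_clusters + 1)])
    # closed-form ridge on the module features
    A = M.T @ M + alpha * np.eye(n_clusters)
    beta = np.linalg.solve(A, M.T @ y)
    return labels, beta

# Simulated omics-like data: 3 correlated blocks of 5 features each
rng = np.random.default_rng(0)
latent = rng.normal(size=(120, 3))
X = np.repeat(latent, 5, axis=1) + 0.1 * rng.normal(size=(120, 15))
y = latent @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=120)
labels, beta = cluster_then_ridge(X, y)
print(labels, np.round(beta, 2))
```

With strongly correlated blocks, the clustering recovers the three modules and the module-level coefficients approximate the latent effects.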

  6. Noninvasive work of breathing improves prediction of post-extubation outcome.

    Science.gov (United States)

    Banner, Michael J; Euliano, Neil R; Martin, A Daniel; Al-Rawas, Nawar; Layon, A Joseph; Gabrielli, Andrea

    2012-02-01

    We hypothesized that non-invasively determined work of breathing per minute (WOB(N)/min) (esophageal balloon not required) may be useful for predicting extubation outcome, i.e., appropriate work of breathing values may be associated with extubation success, while inappropriately increased values may be associated with failure. Adult candidates for extubation were divided into a training set (n = 38) to determine threshold values of indices for assessing extubation and a prospective validation set (n = 59) to determine the predictive power of the threshold values for patients successfully extubated and those who failed extubation. All were evaluated for extubation during a spontaneous breathing trial (5 cmH(2)O pressure support ventilation, 5 cmH(2)O positive end expiratory pressure) using routine clinical practice standards. WOB(N)/min data were blinded to attending physicians. Area under the receiver operating characteristic curves (AUC), sensitivity, specificity, and positive and negative predictive values of all extubation indices were determined. The AUC for WOB(N)/min was 0.96, significantly greater than the AUCs of all other indices. WOB(N)/min had a specificity of 0.83, the highest sensitivity at 0.96, positive predictive value at 0.84, and negative predictive value at 0.96 compared to all indices. For 95% of those successfully extubated, WOB(N)/min was ≤10 J/min. WOB(N)/min had the greatest overall predictive accuracy for extubation compared to traditional indices. WOB(N)/min warrants consideration for use in a complementary manner with spontaneous breathing pattern data for predicting extubation outcome.

  7. Theoretical study on new bias factor methods to effectively use critical experiments for improvement of prediction accuracy of neutronic characteristics

    International Nuclear Information System (INIS)

    Kugo, Teruhiko; Mori, Takamasa; Takeda, Toshikazu

    2007-01-01

    Extended bias factor methods are proposed with two new concepts, the LC method and the PE method, in order to effectively use critical experiments and to enhance the applicability of the bias factor method for the improvement of the prediction accuracy of neutronic characteristics of a target core. Both methods utilize a number of critical experimental results and produce a semifictitious experimental value with them. The LC and PE methods define the semifictitious experimental values by a linear combination of experimental values and the product of exponentiated experimental values, respectively, and the corresponding semifictitious calculation values by those of calculation values. A bias factor is defined by the ratio of the semifictitious experimental value to the semifictitious calculation value in both methods. We formulate how to determine weights for the LC method and exponents for the PE method in order to minimize the variance of the design prediction value obtained by multiplying the design calculation value by the bias factor. From a theoretical comparison of these new methods with the conventional method, which utilizes a single experimental result, and the generalized bias factor method, which was previously proposed to utilize a number of experimental results, it is concluded that the PE method is the most useful method for improving the prediction accuracy. The main advantages of the PE method are summarized as follows. The prediction accuracy is necessarily improved compared with the design calculation value even when experimental results include large experimental errors; this is a special feature that the other methods do not have. The prediction accuracy is most effectively improved by utilizing all the experimental results. From these facts, it can be said that the PE method effectively utilizes all the experimental results and may make a full-scale-mockup experiment unnecessary with the use of existing and future benchmark experiments.
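A minimal numeric sketch of the PE method as the abstract describes it: the semifictitious experimental and calculation values are products of exponentiated values, and their ratio is the bias factor applied to the design calculation. The k-eff-like numbers and the exponents below are invented for illustration; the paper derives the exponents by minimizing the variance of the design prediction.

```python
import numpy as np

def pe_bias_factor(experimental, calculated, exponents):
    """PE-method sketch: semifictitious experimental (calculation) value is
    the product of experimental (calculated) values raised to the chosen
    exponents; the bias factor is their ratio."""
    experimental = np.asarray(experimental, dtype=float)
    calculated = np.asarray(calculated, dtype=float)
    exponents = np.asarray(exponents, dtype=float)
    semi_exp = np.prod(experimental ** exponents)
    semi_calc = np.prod(calculated ** exponents)
    return semi_exp / semi_calc

# Hypothetical k-eff values from three critical experiments
E = [1.0012, 0.9987, 1.0005]   # measured
C = [0.9950, 0.9930, 0.9942]   # calculated for the same configurations
w = [0.5, 0.3, 0.2]            # illustrative exponents (paper minimizes variance)

bias = pe_bias_factor(E, C, w)
design_calc = 0.9960           # design calculation for the target core
prediction = design_calc * bias
print(round(bias, 5), round(prediction, 5))
```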

  8. Numerical prediction of cavitating flow around a hydrofoil using PANS and improved shear stress transport k-omega model

    Directory of Open Access Journals (Sweden)

    Zhang De-Sheng

    2015-01-01

    Full Text Available The prediction accuracies of partially-averaged Navier-Stokes model and improved shear stress transport k-ω turbulence model for simulating the unsteady cavitating flow around the hydrofoil were discussed in this paper. Numerical results show that the two turbulence models can effectively reproduce the cavitation evolution process. The numerical prediction for the cycle time of cavitation inception, development, detachment, and collapse agrees well with the experimental data. It is found that the vortex pair induced by the interaction between the re-entrant jet and mainstream is responsible for the instability of the cavitation shedding flow.

  9. Somatic growth of mussels Mytilus edulis in field studies compared to predictions using BEG, DEB, and SFG models

    DEFF Research Database (Denmark)

    Larsen, Poul Scheel; Filgueira, Ramón; Riisgård, Hans Ulrik

    2014-01-01

    Prediction of somatic growth of blue mussels, Mytilus edulis, based on the data from 2 field-growth studies of mussels in suspended net-bags in Danish waters was made by 3 models: the bioenergetic growth (BEG), the dynamic energy budget (DEB), and the scope for growth (SFG). Here, the standard BEG...... at nearly constant environmental conditions with a mean chl a concentration of C = 2.7 μg L−1, and the observed monotonous growth in the dry weight of soft parts was best predicted by DEB while BEG and SFG models produced lower growth. The second 165-day field study was affected by large variations in chl a and temperature, and the observed growth varied accordingly, but nevertheless, DEB and SFG predicted monotonous growth in good agreement with the mean pattern while BEG mimicked the field data in response to observed changes in chl a concentration and temperature. The general features of the models were that DEB...

  10. Using a Simple Binomial Model to Assess Improvement in Predictive Capability: Sequential Bayesian Inference, Hypothesis Testing, and Power Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sigeti, David E. [Los Alamos National Laboratory; Pelak, Robert A. [Los Alamos National Laboratory

    2012-09-11

    We present a Bayesian statistical methodology for identifying improvement in predictive simulations, including an analysis of the number of (presumably expensive) simulations that will need to be made in order to establish with a given level of confidence that an improvement has been observed. Our analysis assumes the ability to predict (or postdict) the same experiments with legacy and new simulation codes and uses a simple binomial model for the probability, {theta}, that, in an experiment chosen at random, the new code will provide a better prediction than the old. This model makes it possible to do statistical analysis with an absolute minimum of assumptions about the statistics of the quantities involved, at the price of discarding some potentially important information in the data. In particular, the analysis depends only on whether or not the new code predicts better than the old in any given experiment, and not on the magnitude of the improvement. We show how the posterior distribution for {theta} may be used, in a kind of Bayesian hypothesis testing, both to decide if an improvement has been observed and to quantify our confidence in that decision. We quantify the predictive probability that should be assigned, prior to taking any data, to the possibility of achieving a given level of confidence, as a function of sample size. We show how this predictive probability depends on the true value of {theta} and, in particular, how there will always be a region around {theta} = 1/2 where it is highly improbable that we will be able to identify an improvement in predictive capability, although the width of this region will shrink to zero as the sample size goes to infinity. We show how the posterior standard deviation may be used, as a kind of 'plan B metric' in the case that the analysis shows that {theta} is close to 1/2 and argue that such a plan B should generally be part of hypothesis testing. All the analysis presented in the paper is done with a
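The binomial model for θ admits a compact Bayesian sketch: with a uniform prior, the posterior after s successes in n trials is Beta(s + 1, n − s + 1), and the confidence that an improvement has been observed is P(θ > 1/2 | data). A grid-based version, with hypothetical counts:

```python
import numpy as np

def prob_new_code_better(successes, trials, prior=(1.0, 1.0)):
    """P(theta > 1/2 | data), where theta is the probability that the new
    code out-predicts the old in a randomly chosen experiment.  Uniform
    Beta(1,1) prior by default; the Beta(a+s, b+f) posterior is evaluated
    on a dense grid and normalized numerically."""
    a, b = prior
    s, f = successes, trials - successes
    theta = np.linspace(1e-6, 1 - 1e-6, 200001)
    log_post = (a + s - 1) * np.log(theta) + (b + f - 1) * np.log(1 - theta)
    post = np.exp(log_post - log_post.max())
    post /= post.sum()                     # uniform grid: dx cancels
    return post[theta > 0.5].sum()

# Hypothetical data: new code predicted better in 14 of 20 experiments
p = prob_new_code_better(14, 20)
print(round(p, 3))
```

As the abstract notes, when the data put θ near 1/2 this posterior probability stays well below any usual confidence threshold regardless of which side of 1/2 the point estimate falls on.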

  11. Validations and improvements of airfoil trailing-edge noise prediction models using detailed experimental data

    DEFF Research Database (Denmark)

    Kamruzzaman, M.; Lutz, Th.; Würz, W.

    2012-01-01

    This paper describes an extensive assessment and a step by step validation of different turbulent boundary-layer trailing-edge noise prediction schemes developed within the European Union funded wind energy project UpWind. To validate prediction models, measurements of turbulent boundary-layer properties such as two-point turbulent velocity correlations, the spectra of the associated wall pressure fluctuations and the emitted trailing-edge far-field noise were performed in the laminar wind tunnel of the Institute of Aerodynamics and Gas Dynamics, University of Stuttgart. The measurements were carried out for a NACA 643-418 airfoil, at Re = 2.5 × 10⁶, angle of attack of −6° to 6°. Numerical results of different prediction schemes are extensively validated and discussed elaborately. The investigations on the TNO-Blake noise prediction model show that the numerical wall pressure fluctuation predictions agree with measurements in the frequency region higher than 1 kHz, whereas they over-predict the sound pressure level in the low-frequency region. Copyright © 2011 John Wiley & Sons, Ltd.

  12. Restraint status improves the predictive value of motor vehicle crash criteria for pediatric trauma team activation.

    Science.gov (United States)

    Bozeman, Andrew P; Dassinger, Melvin S; Recicar, John F; Smith, Samuel D; Rettiganti, Mallikarjuna R; Nick, Todd G; Maxson, Robert T

    2012-12-01

    Most trauma centers incorporate mechanistic criteria (MC) into their algorithm for trauma team activation (TTA). We hypothesized that characteristics of the crash are less reliable than restraint status in predicting significant injury and the need for TTA. We identified 271 patients (age, <15 y) admitted with a diagnosis of motor vehicle crash. Mechanistic criteria and restraint status of each patient were recorded. Both MC and MC plus restraint status were evaluated as separate measures for appropriately predicting TTA based on treatment outcomes and injury scores. Improper restraint alone predicted a need for TTA with an odds ratios of 2.69 (P = .002). MC plus improper restraint predicted the need for TTA with an odds ratio of 2.52 (P = .002). In contrast, the odds ratio when using MC alone was 1.65 (P = .16). When the 5 MC were evaluated individually as predictive of TTA, ejection, death of occupant, and intrusion more than 18 inches were statistically significant. Improper restraint is an independent predictor of necessitating TTA in this single-institution study. Copyright © 2012 Elsevier Inc. All rights reserved.

  13. Improving Gastric Cancer Outcome Prediction Using Single Time-Point Artificial Neural Network Models

    Science.gov (United States)

    Nilsaz-Dezfouli, Hamid; Abu-Bakar, Mohd Rizam; Arasan, Jayanthi; Adam, Mohd Bakri; Pourhoseingholi, Mohamad Amin

    2017-01-01

    In cancer studies, the prediction of cancer outcome based on a set of prognostic variables has been a long-standing topic of interest. Current statistical methods for survival analysis offer the possibility of modelling cancer survivability but require unrealistic assumptions about the survival time distribution or proportionality of hazard. Therefore, attention must be paid in developing nonlinear models with less restrictive assumptions. Artificial neural network (ANN) models are primarily useful in prediction when nonlinear approaches are required to sift through the plethora of available information. The applications of ANN models for prognostic and diagnostic classification in medicine have attracted a lot of interest. The applications of ANN models in modelling the survival of patients with gastric cancer have been discussed in some studies without completely considering the censored data. This study proposes an ANN model for predicting gastric cancer survivability, considering the censored data. Five separate single time-point ANN models were developed to predict the outcome of patients after 1, 2, 3, 4, and 5 years. The performance of ANN model in predicting the probabilities of death is consistently high for all time points according to the accuracy and the area under the receiver operating characteristic curve. PMID:28469384

  14. Improved survival prediction from lung function data in a large population sample

    DEFF Research Database (Denmark)

    Miller, M.R.; Pedersen, O.F.; Lange, P.

    2008-01-01

    Studies relating lung function to survival commonly express lung function impairment as a percent of predicted but this retains age, height and sex bias. We have studied alternative methods of expressing forced expiratory volume in 1 s (FEV1) for predicting all cause and airway related lung disease.......1 respectively. Cut levels of lung function were used to categorise impairment and the HR for multivariate prediction of all cause and airway related lung disease mortality were 10 and 2044 respectively for the worst category of FEV1/ht(2) compared to 5 and 194 respectively for the worst category of FEV1PP.... In univariate predictions of all cause mortality the HR for FEV1/ht(2) categories was 2-4 times higher than those for FEV1PP and 3-10 times higher for airway related lung disease mortality. We conclude that FEV1/ht(2) is superior to FEV1PP for predicting survival in a general population and this method...

  15. Prediction of Negative Conversion Days of Childhood Nephrotic Syndrome Based on the Improved Backpropagation Neural Network with Momentum

    Directory of Open Access Journals (Sweden)

    Yi-jun Liu

    2015-12-01

    Full Text Available Childhood nephrotic syndrome is a chronic disease harmful to the growth of children. Scientific and accurate prediction of negative conversion days for children with nephrotic syndrome offers potential benefits for treatment of patients and helps achieve a better cure effect. In this study, the improved backpropagation neural network with momentum is used for prediction. Momentum speeds up convergence and maintains the generalization performance of the neural network, and therefore overcomes weaknesses of the standard backpropagation algorithm. A three-tier network structure is constructed. Eight indicators, including age, IgG, IgA and IgM, are selected as network inputs. The scientific computing software MATLAB and its neural network tools are used to create the model and predict. The training sample of twenty-eight cases is used to train the neural network. The test sample of six typical cases, belonging to six different age groups, is used to test the predictive model. A low mean absolute error of 0.83 is achieved on the predictions. The experimental results on this small sample show that the proposed approach is to some degree applicable for the prediction of negative conversion days of childhood nephrotic syndrome.
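The momentum update rule the abstract credits for faster, more stable convergence, Δw(t) = −η∇E + αΔw(t−1), can be demonstrated on a toy least-squares problem (illustrative data and a single linear layer; not the study's clinical indicators or three-tier network):

```python
import numpy as np

def train_with_momentum(X, y, lr=0.05, momentum=0.9, epochs=300):
    """Gradient descent on squared error with a momentum term:
    dw(t) = -lr * grad + momentum * dw(t-1)."""
    rng = np.random.default_rng(1)
    w = rng.normal(size=X.shape[1])
    dw = np.zeros_like(w)
    losses = []
    for _ in range(epochs):
        err = X @ w - y
        grad = X.T @ err / len(y)
        dw = -lr * grad + momentum * dw  # momentum smooths and accelerates updates
        w = w + dw
        losses.append(float(err @ err / len(y)))
    return w, losses

# Toy regression target: y = 2*x1 - x2
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = X @ np.array([2.0, -1.0])
w, losses = train_with_momentum(X, y)
print(np.round(w, 3), losses[-1] < losses[0])
```

The momentum term accumulates past gradients, which damps oscillation across steep directions and speeds progress along shallow ones.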

  16. When bad stress goes good: increased threat reactivity predicts improved category learning performance.

    Science.gov (United States)

    Ell, Shawn W; Cosley, Brandon; McCoy, Shannon K

    2011-02-01

    The way in which we respond to everyday stressors can have a profound impact on cognitive functioning. Maladaptive stress responses in particular are generally associated with impaired cognitive performance. We argue, however, that the cognitive system mediating task performance is also a critical determinant of the stress-cognition relationship. Consistent with this prediction, we observed that stress reactivity consistent with a maladaptive, threat response differentially predicted performance on two categorization tasks. Increased threat reactivity predicted enhanced performance on an information-integration task (i.e., learning is thought to depend upon a procedural-based memory system), and a (nonsignificant) trend for impaired performance on a rule-based task (i.e., learning is thought to depend upon a hypothesis-testing system). These data suggest that it is critical to consider both variability in the stress response and variability in the cognitive system mediating task performance in order to fully understand the stress-cognition relationship.

  17. Improving Allergen Prediction in Main Crops Using a Weighted Integrative Method.

    Science.gov (United States)

    Li, Jing; Wang, Jing; Li, Jing

    2017-12-01

    As a public health problem, food allergy is frequently caused by food allergen proteins, which trigger a type-I hypersensitivity reaction in the immune system of atopic individuals. The food allergens in our daily lives are mainly from crops including rice, wheat, soybean and maize. However, allergens in these main crops are far from fully uncovered. Although some bioinformatics tools or methods predicting the potential allergenicity of proteins have been proposed, each method has its limitations. In this paper, we built a novel algorithm, PREALW, which integrates PREAL, the FAO/WHO criteria and a motif-based method by a weighted average score, to combine the advantages of the different methods. Our results illustrate that PREALW performs significantly better in allergen prediction for these crops. This integrative allergen prediction algorithm could be useful for critical food safety matters. PREALW can be accessed at http://lilab.life.sjtu.edu.cn:8080/prealw .
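A weighted integration of component scores, in the spirit of PREALW, reduces to a weighted average; the component names, score scales, and weights below are assumptions for illustration only, not the published parameters.

```python
def weighted_allergen_score(preal, fao_who, motif, weights=(0.5, 0.3, 0.2)):
    """Combine three component allergenicity scores (each assumed scaled
    to [0, 1]) into one weighted-average score.  Weights are illustrative;
    in practice they would be tuned on labelled allergen data."""
    scores = (preal, fao_who, motif)
    return sum(w * s for w, s in zip(weights, scores))

# A protein flagged strongly by PREAL and the motif method, weakly by FAO/WHO
score = weighted_allergen_score(0.9, 0.4, 0.8)
print(round(score, 2))  # 0.73
```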

  18. Predicting Improvement in Writer's Cramp Symptoms following Botulinum Neurotoxin Injection Therapy

    Directory of Open Access Journals (Sweden)

    Mallory Jackman

    2016-09-01

    Full Text Available Introduction: Writer's cramp is a specific focal hand dystonia causing abnormal posturing and tremor in the upper limb. The most popular medical intervention, botulinum neurotoxin type A (BoNT-A) therapy, is variably effective for 50–70% of patients. BoNT-A non-responders undergo ineffective treatment and may experience significant side effects. Various assessments have been used to predict response to BoNT-A, but not in the same population of patients. Methods: A comprehensive assessment was employed to measure various symptom aspects. Clinical scales, full upper-limb kinematic measures, self-report, and task performance measures were assessed for nine writer's cramp patients at baseline. Patients received two BoNT-A injections and were then classified as responders or non-responders based on a quantified self-report measure. Baseline scores were compared between groups, across all measures, to determine which scores predicted a positive BoNT-A response. Results: Five of nine patients were responders. No kinematic measures differed predictably between groups. Analyses revealed three features that predicted a favorable response and separated the two groups: higher than average cramp severity and cramp frequency, and below average cramp latency. Discussion: Non-kinematic measures appear to be superior for making such predictions. Specifically, measures of cramp severity, frequency, and latency during performance of a specific set of writing and drawing tasks were predictive factors. Since kinematics were not used to determine the injection pattern and the injections were visually guided, it may still be possible to use individual patient kinematics for better outcomes.

  19. Improved prediction of MHC class I and class II epitopes using a novel Gibbs sampling approach

    DEFF Research Database (Denmark)

    Nielsen, Morten; Lundegaard, Claus; Worning, Peder

    2004-01-01

    Prediction of which peptides will bind a specific major histocompatibility complex (MHC) constitutes an important step in identifying potential T-cell epitopes suitable as vaccine candidates. MHC class II binding peptides have a broad length distribution complicating such predictions. Thus......, identifying the correct alignment is a crucial part of identifying the core of an MHC class II binding motif. In this context, we wish to describe a novel Gibbs motif sampler method ideally suited for recognizing such weak sequence motifs. The method is based on the Gibbs sampling method, and it incorporates...
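A Gibbs motif sampler of the kind referenced here iterates two steps: build a position weight matrix (PWM) from the current motif positions of all-but-one sequence, then resample the held-out sequence's position in proportion to its PWM score. A minimal nucleotide-alphabet sketch (the MHC setting uses amino acids and more elaborate scoring; everything below, including the planted motif, is illustrative):

```python
import numpy as np

ALPHABET = "ACGT"
IDX = {c: i for i, c in enumerate(ALPHABET)}

def gibbs_motif_sampler(seqs, motif_len, iters=200, seed=0):
    """Hold one sequence out, build a PWM (pseudocount 1) from the others'
    current motif positions, resample the held-out start position from the
    PWM-derived probabilities; repeat."""
    rng = np.random.default_rng(seed)
    starts = [int(rng.integers(0, len(s) - motif_len + 1)) for s in seqs]
    for _ in range(iters):
        for h in range(len(seqs)):
            counts = np.ones((motif_len, len(ALPHABET)))  # pseudocounts
            for j, s in enumerate(seqs):
                if j != h:
                    for p, c in enumerate(s[starts[j]:starts[j] + motif_len]):
                        counts[p, IDX[c]] += 1
            pwm = counts / counts.sum(axis=1, keepdims=True)
            s = seqs[h]
            n_pos = len(s) - motif_len + 1
            scores = np.array([
                np.prod([pwm[p, IDX[s[o + p]]] for p in range(motif_len)])
                for o in range(n_pos)])
            starts[h] = int(rng.choice(n_pos, p=scores / scores.sum()))
    return starts

# Toy demo: plant a 6-mer motif at random positions in random background
rng = np.random.default_rng(1)
motif = "ACGTAC"
seqs = []
for _ in range(8):
    bg = "".join(rng.choice(list(ALPHABET), size=30))
    pos = int(rng.integers(0, 30 - len(motif) + 1))
    seqs.append(bg[:pos] + motif + bg[pos + len(motif):])
found = gibbs_motif_sampler(seqs, len(motif))
print(found)
```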

  20. Improving prediction of Alzheimer’s disease using patterns of cortical thinning and homogenizing images according to disease stage

    DEFF Research Database (Denmark)

    Eskildsen, Simon Fristed; Coupé, Pierrick; García-Lorenzo, Daniel

    Predicting Alzheimer’s disease (AD) in individuals with some symptoms of cognitive decline may have great influence on treatment choice and guide subject selection in trials on disease modifying drugs. Structural MRI has the potential of revealing early signs of neurodegeneration in the human brain...... and may thus aid in predicting and diagnosing AD. Surface-based cortical thickness measurements from T1-weighted MRI have demonstrated high sensitivity to cortical gray matter changes. In this study, we investigated the possibility of using patterns of cortical thickness measurements for predicting AD...... of conversion from MCI to AD can be improved by learning the atrophy patterns that are specific to the different stages of disease progression. This has the potential to guide the further development of imaging biomarkers in AD....

  1. Optimization approach of background value and initial item for improving prediction precision of GM(1,1) model

    Institute of Scientific and Technical Information of China (English)

    Yuhong Wang; Qin Liu; Jianrong Tang; Wenbin Cao; Xiaozhong Li

    2014-01-01

    A combination method of optimization of the background value and optimization of the initial item is proposed. The sequences of the unbiased exponential distribution are simulated and predicted through the optimization of the background value in grey differential equations. The principle of the new information priority in the grey system theory and the rationality of the initial item in the original GM(1,1) model are fully expressed through the improvement of the initial item in the proposed time response function. A numerical example is employed to illustrate that the proposed method is able to simulate and predict sequences of raw data with the unbiased exponential distribution and has better simulation performance and prediction precision than the original GM(1,1) model.
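For reference, the baseline GM(1,1) model that this record improves on can be sketched as follows; the demo series is invented, and the paper's contribution (optimized background value and initial item) is deliberately not implemented here, only the standard construction.

```python
import numpy as np

def gm11(x0, n_forecast=2):
    """Standard GM(1,1) grey model: accumulated generating operation (AGO),
    background value as the plain mean of consecutive AGO terms, least
    squares for the development coefficient a and grey input b, then the
    exponential time-response function."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                        # AGO sequence
    z = 0.5 * (x1[1:] + x1[:-1])              # background values
    B = np.column_stack([-z, np.ones_like(z)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + n_forecast)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(x1_hat, prepend=0.0)     # inverse AGO back to raw scale
    return x0_hat

# Near-exponential demo series (hypothetical raw data)
series = [10.0, 11.1, 12.3, 13.7, 15.2]
fitted = gm11(series)
print(np.round(fitted, 2))
```

The paper's optimizations replace the fixed 0.5 weighting in the background value and the use of the first data point as the initial item.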

  2. Strain Echocardiography Improves Risk Prediction of Ventricular Arrhythmias After Myocardial Infarction

    DEFF Research Database (Denmark)

    Haugaa, Kristina H; Grenne, Bjørnar L; Eek, Christian H

    2013-01-01

    The aim of this study was to test the hypothesis that strain echocardiography might improve arrhythmic risk stratification in patients after myocardial infarction (MI).

  3. CRISPR-Cas9-mediated saturated mutagenesis screen predicts clinical drug resistance with improved accuracy.

    Science.gov (United States)

    Ma, Leyuan; Boucher, Jeffrey I; Paulsen, Janet; Matuszewski, Sebastian; Eide, Christopher A; Ou, Jianhong; Eickelberg, Garrett; Press, Richard D; Zhu, Lihua Julie; Druker, Brian J; Branford, Susan; Wolfe, Scot A; Jensen, Jeffrey D; Schiffer, Celia A; Green, Michael R; Bolon, Daniel N

    2017-10-31

    Developing tools to accurately predict the clinical prevalence of drug-resistant mutations is a key step toward generating more effective therapeutics. Here we describe a high-throughput CRISPR-Cas9-based saturated mutagenesis approach to generate comprehensive libraries of point mutations at a defined genomic location and systematically study their effect on cell growth. As proof of concept, we mutagenized a selected region within the leukemic oncogene BCR-ABL1. Using bulk competitions with a deep-sequencing readout, we analyzed hundreds of mutations under multiple drug conditions and found that the effects of mutations on growth in the presence or absence of drug were critical for predicting clinically relevant resistant mutations, many of which were cancer adaptive in the absence of drug pressure. Using this approach, we identified all clinically isolated BCR-ABL1 mutations and achieved a prediction score that correlated highly with their clinical prevalence. The strategy described here can be broadly applied to a variety of oncogenes to predict patient mutations and evaluate resistance susceptibility in the development of new therapeutics. Published under the PNAS license.
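The bulk-competition readout can be sketched as a per-variant enrichment score, the log-ratio of post- versus pre-selection frequencies. The read counts below are hypothetical, though the mutation names are known clinically isolated BCR-ABL1 variants:

```python
import numpy as np

def enrichment_scores(counts_pre, counts_post):
    """Per-variant fitness as the log2 ratio of post- vs. pre-selection
    frequencies; a pseudocount of 1 avoids division by zero for variants
    that drop out entirely."""
    pre = np.asarray(counts_pre, dtype=float) + 1.0
    post = np.asarray(counts_post, dtype=float) + 1.0
    return np.log2((post / post.sum()) / (pre / pre.sum()))

# Hypothetical read counts for four variants, before and after drug selection
variants = ["WT", "T315I", "E255K", "M351T"]
pre_counts = [1000, 1000, 1000, 1000]
post_counts = [200, 4000, 2500, 100]
scores = enrichment_scores(pre_counts, post_counts)
top = variants[int(np.argmax(scores))]
print(dict(zip(variants, np.round(scores, 2))), top)
```

A positive score under drug marks a candidate resistance mutation; the abstract's point is that the same score measured without drug (cancer adaptiveness) is also needed to predict clinical prevalence.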

  4. An improved liquid film model to predict the CHF based on the influence of churn flow

    International Nuclear Information System (INIS)

    Wang, Ke; Bai, Bofeng; Ma, Weimin

    2014-01-01

    The critical heat flux (CHF) for boiling crisis is one of the most important parameters in thermal management and safe operation of many engineering systems. Traditionally, the liquid film flow model for the “dryout” mechanism gives good predictions in heated annular two-phase flow. However, the general assumption of an initial entrained fraction at the onset of annular flow lacks a reasonable physical interpretation. Since the droplets have great momentum and the length of churn flow is short, the droplets in churn flow have an inevitable effect on the downstream annular flow. To address this, we considered the effect of churn flow and developed the original liquid film flow model in vertical upward flow by suggesting that the calculation starts from the onset of churn flow rather than annular flow. The results indicated satisfactory agreement with the experimental data, and the developed model provides a better understanding of the effect of flow pattern on CHF prediction. - Highlights: •The general assumption of initial entrained fraction is unreasonable. •The droplets in churn flow show an inevitable effect on downstream annular flow. •The original liquid film flow model for prediction of CHF was developed. •The integration process was modified to start from the onset of churn flow

  5. Increased tumour ADC value during chemotherapy predicts improved survival in unresectable pancreatic cancer

    Energy Technology Data Exchange (ETDEWEB)

    Nishiofuku, Hideyuki; Tanaka, Toshihiro; Kichikawa, Kimihiko [Nara Medical University, Department of Radiology and IVR Center, Kashihara-city, Nara (Japan); Marugami, Nagaaki [Nara Medical University, Department of Endoscopy and Ultrasound, Kashihara-city, Nara (Japan); Sho, Masayuki; Akahori, Takahiro; Nakajima, Yoshiyuki [Nara Medical University, Department of Surgery, Kashihara-city, Nara (Japan)

    2016-06-15

    To investigate whether changes to the apparent diffusion coefficient (ADC) of primary tumour in the early period after starting chemotherapy can predict progression-free survival (PFS) or overall survival (OS) in patients with unresectable pancreatic adenocarcinoma. Subjects comprised 43 patients with histologically confirmed unresectable pancreatic cancer treated with first-line chemotherapy. Minimum ADC values in primary tumour were measured using the selected area ADC (sADC), which excluded cystic and necrotic areas and vessels, and the whole tumour ADC (wADC), which included whole tumour components. Relative changes in ADC were calculated from baseline to 4 weeks after initiation of chemotherapy. Relationships between ADC and both PFS and OS were modelled by Cox proportional hazards regression. Median PFS and OS were 6.1 and 11.0 months, respectively. In multivariate analysis, sADC change was the strongest predictor of PFS (hazard ratio (HR), 4.5; 95 % confidence interval (CI), 1.7-11.9; p = 0.002). Multivariate Cox regression analysis for OS revealed sADC change and CRP as independent predictive markers, with sADC change as the strongest predictive biomarker (HR, 6.7; 95 % CI, 2.7-16.6; p = 0.001). Relative changes in sADC could provide a useful imaging biomarker to predict PFS and OS with chemotherapy for unresectable pancreatic adenocarcinoma. (orig.)

  6. Some aspects to improve sound insulation prediction models for lightweight elements

    NARCIS (Netherlands)

    Gerretsen, E.

    2007-01-01

    The best approach to include lightweight building elements in prediction models for airborne and impact sound insulation between rooms, as in EN 12354, is not yet completely clear. Two aspects are at least of importance, i.e. to derive the sound reduction index R for lightweight elements for

  7. Use of Repeated Blood Pressure and Cholesterol Measurements to Improve Cardiovascular Disease Risk Prediction

    DEFF Research Database (Denmark)

    Paige, Ellie; Barrett, Jessica; Pennells, Lisa

    2017-01-01

    The added value of incorporating information from repeated blood pressure and cholesterol measurements to predict cardiovascular disease (CVD) risk has not been rigorously assessed. We used data on 191,445 adults from the Emerging Risk Factors Collaboration (38 cohorts from 17 countries with data...

  8. Machine learning and hurdle models for improving regional predictions of stream water acid neutralizing capacity

    Science.gov (United States)

    Nicholas A. Povak; Paul F. Hessburg; Keith M. Reynolds; Timothy J. Sullivan; Todd C. McDonnell; R. Brion Salter

    2013-01-01

    In many industrialized regions of the world, atmospherically deposited sulfur derived from industrial, nonpoint air pollution sources reduces stream water quality and results in acidic conditions that threaten aquatic resources. Accurate maps of predicted stream water acidity are an essential aid to managers who must identify acid-sensitive streams, potentially...

  9. An improved robust model predictive control for linear parameter-varying input-output models

    NARCIS (Netherlands)

    Abbas, H.S.; Hanema, J.; Tóth, R.; Mohammadpour, J.; Meskin, N.

    2018-01-01

    This paper describes a new robust model predictive control (MPC) scheme to control the discrete-time linear parameter-varying input-output models subject to input and output constraints. Closed-loop asymptotic stability is guaranteed by including a quadratic terminal cost and an ellipsoidal terminal

  10. Individual Differences in Executive Functioning Predict Preschoolers' Improvement from Theory-of-Mind Training

    Science.gov (United States)

    Benson, Jeannette E.; Sabbagh, Mark A.; Carlson, Stephanie M.; Zelazo, Philip David

    2013-01-01

    Twenty-four 3.5-year-old children who initially showed poor performance on false-belief tasks participated in a training protocol designed to promote performance on these tasks. Our aim was to determine whether the extent to which children benefited from training was predicted by their performance on a battery of executive functioning tasks.…

  11. Accuracy of eosinophils and eosinophil cationic protein to predict steroid improvement in asthma

    NARCIS (Netherlands)

    Meijer, RJ; Postma, DS; Kauffman, HF; Arends, LR; Koeter, GH; Kerstjens, HAM

    Background There is a large variability in clinical response to corticosteroid treatment in patients with asthma. Several markers of inflammation like eosinophils and eosinophil cationic protein (ECP), as well as exhaled nitric oxide (NO), are good candidates to predict clinical response. Aim We

  12. Land-surface initialisation improves seasonal climate prediction skill for maize yield forecast.

    Science.gov (United States)

    Ceglar, Andrej; Toreti, Andrea; Prodhomme, Chloe; Zampieri, Matteo; Turco, Marco; Doblas-Reyes, Francisco J

    2018-01-22

    Seasonal crop yield forecasting represents an important source of information to maintain market stability, minimise socio-economic impacts of crop losses and guarantee humanitarian food assistance, while it fosters the use of climate information favouring adaptation strategies. As climate variability and extremes have significant influence on agricultural production, the early prediction of severe weather events and unfavourable conditions can contribute to the mitigation of adverse effects. Seasonal climate forecasts provide additional value for agricultural applications in several regions of the world. However, they currently play a very limited role in supporting agricultural decisions in Europe, mainly due to the poor skill of relevant surface variables. Here we show how a combined stress index (CSI), considering both drought and heat stress in summer, can predict maize yield in Europe and how land-surface initialised seasonal climate forecasts can be used to predict it. The CSI explains on average nearly 53% of the inter-annual maize yield variability under observed climate conditions and shows how concurrent heat stress and drought events have influenced recent yield anomalies. Seasonal climate forecasts initialised with realistic land-surface conditions achieve better (and marginally useful) skill in predicting the CSI than forecasts with climatological land-surface initialisation in south-eastern Europe, parts of central Europe, France and Italy.

  13. Energy optimization and prediction of complex petrochemical industries using an improved artificial neural network approach integrating data envelopment analysis

    International Nuclear Information System (INIS)

    Han, Yong-Ming; Geng, Zhi-Qiang; Zhu, Qun-Xiong

    2016-01-01

    Graphical abstract: This paper proposed an energy optimization and prediction of complex petrochemical industries based on a DEA-integrated ANN approach (DEA-ANN). The proposed approach utilizes the DEA model with slack variables for sensitivity analysis to determine the effective decision making units (DMUs) and indicate the optimized direction of the ineffective DMUs. Compared with the traditional ANN approach, the DEA-ANN prediction model is effectively verified by executing a linear comparison between all DMUs and the effective DMUs through the standard data source from the UCI (University of California at Irvine) repository. Finally, the proposed model is validated through an application in a complex ethylene production system of China petrochemical industry. Meanwhile, the optimization result and the prediction value are obtained to reduce energy consumption of the ethylene production system, guide ethylene production and improve energy efficiency. - Highlights: • The DEA-integrated ANN approach is proposed. • The DEA-ANN prediction model is effectively verified through the standard data source from the UCI repository. • The energy optimization and prediction framework of complex petrochemical industries based on the proposed method is obtained. • The proposed method is valid and efficient in improvement of energy efficiency in complex petrochemical plants. - Abstract: Since the complex petrochemical data have characteristics of multi-dimension, uncertainty and noise, it is difficult to accurately optimize and predict the energy usage of complex petrochemical systems. Therefore, this paper proposes a data envelopment analysis (DEA) integrated artificial neural network (ANN) approach (DEA-ANN). The proposed approach utilizes the DEA model with slack variables for sensitivity analysis to determine the effective decision making units (DMUs) and indicate the optimized direction of the ineffective DMUs. Compared with the traditional ANN approach, the DEA

  14. Improved therapy-success prediction with GSS estimated from clinical HIV-1 sequences.

    Science.gov (United States)

    Pironti, Alejandro; Pfeifer, Nico; Kaiser, Rolf; Walter, Hauke; Lengauer, Thomas

    2014-01-01

    Rules-based HIV-1 drug-resistance interpretation (DRI) systems disregard many amino-acid positions of the drug's target protein. The aims of this study are (1) the development of a drug-resistance interpretation system that is based on HIV-1 sequences from clinical practice rather than hard-to-get phenotypes, and (2) the assessment of the benefit of taking all available amino-acid positions into account for DRI. A dataset containing 34,934 therapy-naïve and 30,520 drug-exposed HIV-1 pol sequences with treatment history was extracted from the EuResist database and the Los Alamos National Laboratory database. 2,550 therapy-change-episode baseline sequences (TCEB) were assigned to test set A. Test set B contains 1,084 TCEB from the HIVdb TCE repository. Sequences from patients absent in the test sets were used to train three linear support vector machines to produce scores that predict drug exposure pertaining to each of 20 antiretrovirals: the first one uses the full amino-acid sequences (DEfull), the second one only considers IAS drug-resistance positions (DEonlyIAS), and the third one disregards IAS drug-resistance positions (DEnoIAS). For performance comparison, test sets A and B were evaluated with DEfull, DEnoIAS, DEonlyIAS, geno2pheno[resistance], HIVdb, ANRS, HIV-GRADE, and REGA. Clinically-validated cut-offs were used to convert the continuous output of the first four methods into susceptible-intermediate-resistant (SIR) predictions. With each method, a genetic susceptibility score (GSS) was calculated for each therapy episode in each test set by converting the SIR prediction for its compounds to integer: S=2, I=1, and R=0. The GSS were used to predict therapy success as defined by the EuResist standard datum definition. Statistical significance was assessed using a Wilcoxon signed-rank test. 
A comparison of the therapy-success prediction performances among the different interpretation systems for test set A can be found in Table 1, while those for test set
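
    The GSS computation described above is concrete: each compound in a therapy episode receives an SIR call, which is mapped to an integer (S=2, I=1, R=0) and summed across the regimen. A minimal sketch; the drug names are illustrative placeholders, not from the study:

```python
SIR_POINTS = {"S": 2, "I": 1, "R": 0}

def genetic_susceptibility_score(sir_calls: dict) -> int:
    """Sum the integer-converted SIR predictions over a therapy's compounds."""
    return sum(SIR_POINTS[call] for call in sir_calls.values())

# Hypothetical three-drug regimen with mixed susceptibility calls
regimen = {"drugA": "S", "drugB": "I", "drugC": "R"}
gss = genetic_susceptibility_score(regimen)  # 2 + 1 + 0 = 3
```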

  15. Multi-scale enhancement of climate prediction over land by improving the model sensitivity to vegetation variability

    Science.gov (United States)

    Alessandri, A.; Catalano, F.; De Felice, M.; Hurk, B. V. D.; Doblas-Reyes, F. J.; Boussetta, S.; Balsamo, G.; Miller, P. A.

    2017-12-01

    Here we demonstrate, for the first time, that the implementation of a realistic representation of vegetation in Earth System Models (ESMs) can significantly improve climate simulation and prediction across multiple time-scales. The effective sub-grid vegetation fractional coverage varies seasonally and at interannual time-scales in response to leaf-canopy growth, phenology and senescence. It therefore affects biophysical parameters such as the surface resistance to evapotranspiration, albedo, roughness length, and soil field capacity. To adequately represent this effect in the EC-Earth ESM, we included an exponential dependence of the vegetation cover on the Leaf Area Index. By comparing two sets of simulations performed with and without the new variable fractional-coverage parameterization, spanning from centennial (20th Century) simulations and retrospective predictions to the decadal (5-year), seasonal (2-4 month) and weather (4-day) time-scales, we show for the first time a significant multi-scale enhancement of vegetation impacts in climate simulation and prediction over land. Particularly large effects at multiple time scales are shown in boreal winter over middle-to-high latitudes in Canada, the western US, Eastern Europe, Russia and eastern Siberia, due to the implemented time-varying shadowing effect of tree vegetation on snow surfaces. Over Northern Hemisphere boreal forest regions, the improved representation of vegetation cover consistently corrects the winter warm biases and improves the climate-change sensitivity, the decadal potential predictability, and the skill of forecasts at seasonal and weather time-scales. Significant improvements in the prediction of 2 m temperature and rainfall are also shown over transitional land-surface hot spots. Both the potential predictability at the decadal time-scale and seasonal-forecast skill are enhanced over the Sahel, the North American Great Plains, Nordeste Brazil and South East Asia, mainly related to improved performance in

  16. Review of the quality of total mesorectal excision does not improve the prediction of outcome.

    Science.gov (United States)

    Demetter, P; Jouret-Mourin, A; Silversmit, G; Vandendael, T; Sempoux, C; Hoorens, A; Nagy, N; Cuvelier, C; Van Damme, N; Penninckx, F

    2016-09-01

    A fair to moderate concordance in grading of the total mesorectal excision (TME) surgical specimen by local pathologists and a central review panel has been observed in the PROCARE (Project on Cancer of the Rectum) project. The aim of the present study was to evaluate the difference, if any, in the accuracy of predicting the oncological outcome through TME grading by local pathologists or by the review panel. The quality of the TME specimen was reviewed for 482 surgical specimens registered on a prospective database between 2006 and 2011. Patients with a Stage IV tumour, with unknown incidence date or without follow-up information were excluded, resulting in a study population of 383 patients. Quality assessment of the specimen was based on three grades including mesorectal resection (MRR), intramesorectal resection (IMR) and muscularis propria resection (MPR). Using univariable Cox regression models, local and review panel histopathological gradings of the quality of TME were assessed as predictors of local recurrence, distant metastasis and disease-free and overall survival. Differences in the predictions between local and review grading were determined. Resection planes were concordant in 215 (56.1%) specimens. Downgrading from MRR to MPR was noted in 23 (6.0%). There were no significant differences in the prediction error between the two models; local and central review TME grading predicted the outcome equally well. Any difference in grading of the TME specimen between local histopathologists and the review panel had no significant impact on the prediction of oncological outcome for this patient cohort. Grading of the quality of TME as reported by local histopathologists can therefore be used for outcome analysis. Quality control of TME grading is not warranted provided the histopathologist is adequately trained. Colorectal Disease © 2016 The Association of Coloproctology of Great Britain and Ireland.

  17. Gene network inherent in genomic big data improves the accuracy of prognostic prediction for cancer patients.

    Science.gov (United States)

    Kim, Yun Hak; Jeong, Dae Cheon; Pak, Kyoungjune; Goh, Tae Sik; Lee, Chi-Seung; Han, Myoung-Eun; Kim, Ji-Young; Liangwen, Liu; Kim, Chi Dae; Jang, Jeon Yeob; Cha, Wonjae; Oh, Sae-Ock

    2017-09-29

    Accurate prediction of prognosis is critical for therapeutic decisions regarding cancer patients. Many previously developed prognostic scoring systems have limitations in reflecting recent progress in the field of cancer biology such as microarray, next-generation sequencing, and signaling pathways. To develop a new prognostic scoring system for cancer patients, we used mRNA expression and clinical data in various independent breast cancer cohorts (n=1214) from the Molecular Taxonomy of Breast Cancer International Consortium (METABRIC) and Gene Expression Omnibus (GEO). A new prognostic score that reflects gene network inherent in genomic big data was calculated using Network-Regularized high-dimensional Cox-regression (Net-score). We compared its discriminatory power with those of two previously used statistical methods: stepwise variable selection via univariate Cox regression (Uni-score) and Cox regression via Elastic net (Enet-score). The Net scoring system showed better discriminatory power in prediction of disease-specific survival (DSS) than other statistical methods (p=0 in METABRIC training cohort, p=0.000331, 4.58e-06 in two METABRIC validation cohorts) when accuracy was examined by log-rank test. Notably, comparison of C-index and AUC values in receiver operating characteristic analysis at 5 years showed fewer differences between training and validation cohorts with the Net scoring system than other statistical methods, suggesting minimal overfitting. The Net-based scoring system also successfully predicted prognosis in various independent GEO cohorts with high discriminatory power. In conclusion, the Net-based scoring system showed better discriminative power than previous statistical methods in prognostic prediction for breast cancer patients. This new system will mark a new era in prognosis prediction for cancer patients.

  18. Predicted versus observed cosmic-ray-produced noble gases in lunar samples: improved Kr production ratios

    International Nuclear Information System (INIS)

    Regnier, S.; Hohenberg, C.M.; Marti, K.; Reedy, R.C.

    1979-01-01

    New sets of cross sections for the production of krypton isotopes from targets of Rb, Sr, Y, and Zr were constructed primarily on the basis of experimental excitation functions for Kr production from Y. These cross sections were used to calculate galactic-cosmic-ray and solar-proton production rates for Kr isotopes in the moon. Spallation Kr data obtained from ilmenite separates of rocks 10017 and 10047 are reported. Production rates and isotopic ratios for cosmogenic Kr observed in ten well-documented lunar samples and in ilmenite separates and bulk samples from several lunar rocks with long but unknown irradiation histories were compared with predicted rates and ratios. The agreements were generally quite good. Erosion of rock surfaces affected rates or ratios only for near-surface samples, where solar-proton production is important. There were considerable spreads in predicted-to-observed production rates of 83Kr, due at least in part to uncertainties in chemical abundances. The 78Kr/83Kr ratios were predicted quite well for samples with a wide range of Zr/Sr abundance ratios. The calculated 80Kr/83Kr ratios were greater than the observed ratios when production by the 79Br(n,γ) reaction was included, but were slightly undercalculated if the Br reaction was omitted; these results suggest that Br(n,γ)-produced Kr is not well retained by lunar rocks. The productions of 81Kr and 82Kr were overcalculated by approximately 10% relative to 83Kr. Predicted-to-observed 84Kr/83Kr ratios scattered considerably, possibly because of uncertainties in corrections for trapped and fission components and in cross sections for 84Kr production. Most predicted 84Kr and 86Kr production rates were lower than observed. Shielding depths of several Apollo 11 rocks were determined from the measured 78Kr/83Kr ratios of ilmenite separates. 4 figures, 5 tables

  19. Improvement of cardiovascular risk prediction: time to review current knowledge, debates, and fundamentals on how to assess test characteristics.

    Science.gov (United States)

    Romanens, Michel; Ackermann, Franz; Spence, John David; Darioli, Roger; Rodondi, Nicolas; Corti, Roberto; Noll, Georg; Schwenkglenks, Matthias; Pencina, Michael

    2010-02-01

    Cardiovascular risk assessment might be improved with the addition of emerging, new tests derived from atherosclerosis imaging, laboratory tests or functional tests. This article reviews relative risk, odds ratios, receiver-operating curves, posttest risk calculations based on likelihood ratios, the net reclassification improvement and integrated discrimination. This serves to determine whether a new test has an added clinical value on top of conventional risk testing and how this can be verified statistically. Two clinically meaningful examples serve to illustrate novel approaches. This work serves as a review and basic work for the development of new guidelines on cardiovascular risk prediction, taking into account emerging tests, to be proposed by members of the 'Taskforce on Vascular Risk Prediction' under the auspices of the Working Group 'Swiss Atherosclerosis' of the Swiss Society of Cardiology in the future.
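
    One of the metrics reviewed above, posttest risk from likelihood ratios, follows a standard calculation: convert the pretest probability to odds, multiply by the test's likelihood ratio, and convert back to a probability. A minimal sketch with invented numbers:

```python
def posttest_risk(pretest_risk: float, likelihood_ratio: float) -> float:
    """Posttest probability given a pretest probability and a likelihood ratio."""
    pretest_odds = pretest_risk / (1.0 - pretest_risk)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1.0 + posttest_odds)

# Example: 10% pretest CVD risk, positive test with LR+ = 3
risk = posttest_risk(0.10, 3.0)  # -> 0.25
```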

  20. The value of model averaging and dynamical climate model predictions for improving statistical seasonal streamflow forecasts over Australia

    Science.gov (United States)

    Pokhrel, Prafulla; Wang, Q. J.; Robertson, David E.

    2013-10-01

    Seasonal streamflow forecasts are valuable for planning and allocation of water resources. In Australia, the Bureau of Meteorology employs a statistical method to forecast seasonal streamflows. The method uses predictors that are related to catchment wetness at the start of a forecast period and to climate during the forecast period. For the latter, a predictor is selected among a number of lagged climate indices as candidates to give the "best" model in terms of model performance in cross validation. This study investigates two strategies for further improvement in seasonal streamflow forecasts. The first is to combine, through Bayesian model averaging, multiple candidate models with different lagged climate indices as predictors, to take advantage of different predictive strengths of the multiple models. The second strategy is to introduce additional candidate models, using rainfall and sea surface temperature predictions from a global climate model as predictors. This is to take advantage of the direct simulations of various dynamic processes. The results show that combining forecasts from multiple statistical models generally yields more skillful forecasts than using only the best model and appears to moderate the worst forecast errors. The use of rainfall predictions from the dynamical climate model marginally improves the streamflow forecasts when viewed over all the study catchments and seasons, but the use of sea surface temperature predictions provide little additional benefit.
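
    The first strategy above, combining candidate models through Bayesian model averaging, reduces at forecast time to weighting each model's forecast by its posterior model probability. A minimal sketch; the forecasts and weights below are illustrative, not from the study:

```python
def bma_forecast(forecasts, weights):
    """Posterior-weighted average of candidate-model forecasts (weights sum to 1)."""
    assert abs(sum(weights) - 1.0) < 1e-9, "posterior model weights must sum to 1"
    return sum(w * f for w, f in zip(weights, forecasts))

# Three candidate models, each using a different lagged climate index
flows = [120.0, 150.0, 90.0]   # seasonal streamflow forecasts (GL)
w = [0.5, 0.3, 0.2]            # posterior model weights
combined = bma_forecast(flows, w)  # 60 + 45 + 18 = 123.0
```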

  1. Aerodynamic noise prediction of a Horizontal Axis Wind Turbine using Improved Delayed Detached Eddy Simulation and acoustic analogy

    International Nuclear Information System (INIS)

    Ghasemian, Masoud; Nejat, Amir

    2015-01-01

    Highlights: • The noise predictions are performed by Ffowcs Williams and Hawkings method. • There is a direct relation between the radiated noise and the wind speed. • The tonal peaks in the sound spectra match with the blade passing frequency. • The quadrupole noises have negligible effect on the low frequency noises. - Abstract: This paper presents the results of the aerodynamic and aero-acoustic prediction of the flow field around the National Renewable Energy Laboratory Phase VI wind turbine. The Improved Delayed Detached Eddy Simulation turbulence model is applied to obtain the instantaneous turbulent flow field. The noise prediction is carried out using the Ffowcs Williams and Hawkings acoustic analogy. Simulations are performed for three different inflow conditions, U = 7, 10, 15 m/s. The capability of the Improved Delayed Detached Eddy Simulation turbulence model in massive separation is verified with available experimental data for pressure coefficient. The broadband noises of the turbulent boundary layers and the tonal noises due to the blade passing frequency are predicted via flow field noise simulation. The contribution of the thickness, loading and quadrupole noises are investigated, separately. The results indicated that there is a direct relation between the strength of the radiated noise and the wind speed. Furthermore, the effect of the receiver location on the Overall Sound Pressure Level is investigated
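
    The Overall Sound Pressure Level examined above is conventionally the energy sum of the band levels of the sound spectrum. A minimal sketch of that summation; the band levels are invented for illustration, not taken from the paper:

```python
import math

def overall_spl(band_levels_db):
    """Overall Sound Pressure Level: energy sum of band SPLs, in dB."""
    return 10.0 * math.log10(sum(10.0 ** (level / 10.0) for level in band_levels_db))

# Three equal 60 dB bands add to 60 + 10*log10(3) ≈ 64.8 dB
oaspl = overall_spl([60.0, 60.0, 60.0])
```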

  2. The prediction of resting energy expenditure in type 2 diabetes mellitus is improved by factoring for glycemia.

    Science.gov (United States)

    Gougeon, R; Lamarche, M; Yale, J-F; Venuta, T

    2002-12-01

    Predictive equations have been reported to overestimate resting energy expenditure (REE) for obese persons. The presence of hyperglycemia results in elevated REE in obese persons with type 2 diabetes, and its effect on the validity of these equations is unknown. We tested whether (1) indicators of diabetes control were independent associates of REE in type 2 diabetes and (2) their inclusion would improve predictive equations. A cross-sectional study of 65 (25 men, 40 women) obese type 2 diabetic subjects. Variables measured were: REE by ventilated-hood indirect calorimetry, body composition by bioimpedance analysis, body circumferences, fasting plasma glucose (FPG) and hemoglobin A(1c). Data were analyzed using stepwise multiple linear regression. REE, corrected for weight, fat-free mass, age and gender, was significantly greater with FPG>10 mmol/l (P=0.017) and correlated with FPG (P=0.013) and hemoglobin A(1c) as percentage upper limit of normal (P=0.02). Weight was the main determinant of REE. Together with hip circumference and FPG, it explained 81% of the variation. FPG improved the predictability of the equation by >3%. With poor glycemic control, it can represent an increase in REE of up to 8%. Our data indicate that in a population of obese subjects with type 2 diabetes mellitus, REE is better predicted when fasting plasma glucose is included as a variable.
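
    The study's final equation predicts REE from weight, hip circumference and fasting plasma glucose (FPG). A hypothetical sketch of such a multiple linear model; the coefficients below are invented for illustration, as the abstract does not report them:

```python
def predict_ree(weight_kg, hip_cm, fpg_mmol_l,
                b0=500.0, b_weight=10.0, b_hip=5.0, b_fpg=20.0):
    """REE (kcal/day) from a linear model that includes glycemia (FPG)."""
    return b0 + b_weight * weight_kg + b_hip * hip_cm + b_fpg * fpg_mmol_l

# Poor glycemic control (FPG 12 vs 6 mmol/l) raises predicted REE
ree_high = predict_ree(100.0, 110.0, 12.0)
ree_low = predict_ree(100.0, 110.0, 6.0)
```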

  3. Prediction of improvement in skin fibrosis in diffuse cutaneous systemic sclerosis: a EUSTAR analysis.

    Science.gov (United States)

    Dobrota, Rucsandra; Maurer, Britta; Graf, Nicole; Jordan, Suzana; Mihai, Carina; Kowal-Bielecka, Otylia; Allanore, Yannick; Distler, Oliver

    2016-10-01

    Improvement of skin fibrosis is part of the natural course of diffuse cutaneous systemic sclerosis (dcSSc). Recognising those patients most likely to improve could help tailoring clinical management and cohort enrichment for clinical trials. In this study, we aimed to identify predictors for improvement of skin fibrosis in patients with dcSSc. We performed a longitudinal analysis of the European Scleroderma Trials And Research (EUSTAR) registry including patients with dcSSc, fulfilling American College of Rheumatology criteria, baseline modified Rodnan skin score (mRSS) ≥7 and follow-up mRSS at 12±2 months. The primary outcome was skin improvement (decrease in mRSS of >5 points and ≥25%) at 1 year follow-up. A respective increase in mRSS was considered progression. Candidate predictors for skin improvement were selected by expert opinion and logistic regression with bootstrap validation was applied. From the 919 patients included, 218 (24%) improved and 95 (10%) progressed. Eleven candidate predictors for skin improvement were analysed. The final model identified high baseline mRSS and absence of tendon friction rubs as independent predictors of skin improvement. The baseline mRSS was the strongest predictor of skin improvement, independent of disease duration. An upper threshold between 18 and 25 performed best in enriching for progressors over regressors. Patients with advanced skin fibrosis at baseline and absence of tendon friction rubs are more likely to regress in the next year than patients with milder skin fibrosis. These evidence-based data can be implemented in clinical trial design to minimise the inclusion of patients who would regress under standard of care. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  4. FDG-PET/CT in the prediction of pulmonary function improvement in nonspecific interstitial pneumonia. A Pilot Study

    Energy Technology Data Exchange (ETDEWEB)

    Jacquelin, V. [AP-HP, Hosp. Avicenne, Department of Nuclear Medicine, Bobigny (France); Mekinian, A. [AP-HP, Hosp. Saint-Antoine, Department of Internal Medicine and Inflammation-Immunopathology-Biotherapy Department (DHU i2B), Paris (France); Brillet, P.Y. [AP-HP, Hosp. Avicenne, Department of Radiology, Bobigny (France); Univ. Paris 13, Sorbonne Paris Cité, Bobigny (France); Nunes, H. [AP-HP, Hosp. Avicenne, Department of Pneumology, Bobigny (France); Univ. Paris 13, Sorbonne Paris Cité, Bobigny (France); Fain, O. [AP-HP, Hosp. Saint-Antoine, Department of Internal Medicine and Inflammation-Immunopathology-Biotherapy Department (DHU i2B), Paris (France); Valeyre, D. [AP-HP, Hosp. Avicenne, Department of Pneumology, Bobigny (France); Univ. Paris 13, Sorbonne Paris Cité, Bobigny (France); Soussan, M., E-mail: michael.soussan@aphp.fr [AP-HP, Hosp. Avicenne, Department of Nuclear Medicine, Bobigny (France); Univ. Paris 13, Sorbonne Paris Cité, Bobigny (France)

    2016-12-15

    Purpose: Our study aimed to analyse the characteristics of nonspecific interstitial pneumonia (NSIP) using FDG-PET/CT (PET) and to evaluate its ability to predict the therapeutic response. Procedures: Eighteen NSIP patients were included. Maximum standardized uptake value (SUVmax), FDG uptake extent (in percentage of lung volume), high resolution CT scan (HRCT) elementary lesions, and HRCT fibrosis score were recorded. The predictive value of the parameters for lung function improvement was evaluated using logistic regression and Receiver Operating Characteristic (ROC) curve analysis (n = 13/18). Results: All patients had an increased pulmonary FDG uptake (median SUVmax = 3.1 [2–7.6]), with a median extent of 19% [6–67]. Consolidations, ground-glass opacities, honeycombing and reticulations showed uptake in 90%, 89%, 85% and 76%, respectively. FDG uptake extent was associated with improvement of pulmonary function under treatment (increase in forced vital capacity > 10%, p = 0.03), whereas SUVmax and HRCT fibrosis score were not (p > 0.5). For FDG uptake extent, ROC analysis showed an area under the curve at 0.85 ± 0.11 and sensitivity/specificity was 88%/80% for a threshold fixed at 21%. Conclusions: Increased FDG uptake was observed in all NSIP patients, both in inflammatory and fibrotic HRCT lesions. The quantification of FDG uptake extent might be useful to predict functional improvement under treatment.
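
    The threshold analysis described above can be sketched directly: at a fixed cut-off on FDG uptake extent, classify patients as predicted responders and compute sensitivity and specificity against observed functional improvement. The patient data below are invented for illustration:

```python
def sens_spec(extents, improved, threshold):
    """Sensitivity/specificity of 'extent > threshold' as a responder test."""
    tp = sum(1 for e, y in zip(extents, improved) if e > threshold and y)
    fn = sum(1 for e, y in zip(extents, improved) if e <= threshold and y)
    tn = sum(1 for e, y in zip(extents, improved) if e <= threshold and not y)
    fp = sum(1 for e, y in zip(extents, improved) if e > threshold and not y)
    return tp / (tp + fn), tn / (tn + fp)

extent = [10, 25, 30, 15, 40, 8]                 # FDG uptake extent, % of lung volume
resp = [False, True, True, False, True, False]   # forced vital capacity gain > 10%
se, sp = sens_spec(extent, resp, threshold=21)
```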

  5. FDG-PET/CT in the prediction of pulmonary function improvement in nonspecific interstitial pneumonia. A Pilot Study

    International Nuclear Information System (INIS)

    Jacquelin, V.; Mekinian, A.; Brillet, P.Y.; Nunes, H.; Fain, O.; Valeyre, D.; Soussan, M.

    2016-01-01

    Purpose: Our study aimed to analyse the characteristics of nonspecific interstitial pneumonia (NSIP) using FDG-PET/CT (PET) and to evaluate its ability to predict the therapeutic response. Procedures: Eighteen NSIP patients were included. Maximum standardized uptake value (SUVmax), FDG uptake extent (in percentage of lung volume), high resolution CT scan (HRCT) elementary lesions, and HRCT fibrosis score were recorded. The predictive value of the parameters for lung function improvement was evaluated using logistic regression and Receiver Operating Characteristic (ROC) curve analysis (n = 13/18). Results: All patients had an increased pulmonary FDG uptake (median SUVmax = 3.1 [2–7.6]), with a median extent of 19% [6–67]. Consolidations, ground-glass opacities, honeycombing and reticulations showed uptake in 90%, 89%, 85% and 76%, respectively. FDG uptake extent was associated with improvement of pulmonary function under treatment (increase in forced vital capacity > 10%, p = 0.03), whereas SUVmax and HRCT fibrosis score were not (p > 0.5). For FDG uptake extent, ROC analysis showed an area under the curve at 0.85 ± 0.11 and sensitivity/specificity was 88%/80% for a threshold fixed at 21%. Conclusions: Increased FDG uptake was observed in all NSIP patients, both in inflammatory and fibrotic HRCT lesions. The quantification of FDG uptake extent might be useful to predict functional improvement under treatment.

  6. Global Optimization of Ventricular Myocyte Model to Multi-Variable Objective Improves Predictions of Drug-Induced Torsades de Pointes

    Directory of Open Access Journals (Sweden)

    Trine Krogh-Madsen

    2017-12-01

    In silico cardiac myocyte models present powerful tools for drug safety testing and for predicting phenotypical consequences of ion channel mutations, but their accuracy is sometimes limited. For example, several models describing human ventricular electrophysiology perform poorly when simulating effects of long QT mutations. Model optimization represents one way of obtaining models with stronger predictive power. Using a recent human ventricular myocyte model, we demonstrate that model optimization to clinical long QT data, in conjunction with physiologically-based bounds on intracellular calcium and sodium concentrations, better constrains model parameters. To determine if the model optimized to congenital long QT data better predicts risk of drug-induced long QT arrhythmogenesis, in particular Torsades de Pointes risk, we tested the optimized model against a database of known arrhythmogenic and non-arrhythmogenic ion channel blockers. When doing so, the optimized model provided an improved risk assessment. In particular, we demonstrate an elimination of false-positive outcomes generated by the baseline model, in which simulations of non-torsadogenic drugs, in particular verapamil, predict action potential prolongation. Our results underscore the importance of currents beyond those directly impacted by a drug block in determining torsadogenic risk. Our study also highlights the need for rich data in cardiac myocyte model optimization and substantiates such optimization as a method to generate models with higher accuracy of predictions of drug-induced cardiotoxicity.

  7. Missing Value Imputation Improves Mortality Risk Prediction Following Cardiac Surgery: An Investigation of an Australian Patient Cohort.

    Science.gov (United States)

    Karim, Md Nazmul; Reid, Christopher M; Tran, Lavinia; Cochrane, Andrew; Billah, Baki

    2017-03-01

    The aim of this study was to evaluate the impact of missing values on the performance of a model predicting 30-day mortality following cardiac surgery. Information from 83,309 eligible patients who underwent cardiac surgery, recorded in the Australia and New Zealand Society of Cardiac and Thoracic Surgeons (ANZSCTS) database registry between 2001 and 2014, was used. An existing 30-day mortality risk prediction model developed from the ANZSCTS database was re-estimated using complete-case (CC) analysis and using multiple imputation (MI) analysis. Agreement between the risks generated by the CC and MI approaches was assessed by the Bland-Altman method, and the performances of the two models were compared. One or more predictor variables were missing for 15.8% of the patients in the dataset. The Bland-Altman plot demonstrated significant disagreement between the risk scores generated by the two approaches. Compared to CC analysis, MI analysis resulted in an average 8.5% decrease in standard error, a measure of uncertainty. The MI model provided better prediction of mortality risk (observed: 2.69%; MI: 2.63% versus CC: 2.37%). In conclusion, imputation of missing values improved 30-day mortality risk prediction following cardiac surgery. Copyright © 2016 Australian and New Zealand Society of Cardiac and Thoracic Surgeons (ANZSCTS) and the Cardiac Society of Australia and New Zealand (CSANZ). Published by Elsevier B.V. All rights reserved.
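
    The core comparison this abstract makes (complete-case analysis discards every row with a missing predictor, while imputation retains the full cohort) can be sketched as below. The data and logistic model are synthetic; the ANZSCTS risk model itself is not reproduced here, and a full multiple-imputation analysis would repeat the imputation with different random draws and pool the estimates rather than impute once.

```python
# Hedged sketch: complete-case (CC) analysis versus model-based imputation
# before fitting a mortality-style risk model, on synthetic data.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 1000
X = rng.normal(size=(n, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)

# Make ~15% of one predictor missing (cf. 15.8% of patients in the study).
X_miss = X.copy()
X_miss[rng.random(n) < 0.15, 2] = np.nan

# Complete-case analysis: drop any row containing a missing value.
cc_rows = ~np.isnan(X_miss).any(axis=1)
cc_model = LogisticRegression().fit(X_miss[cc_rows], y[cc_rows])

# Single imputation pass; multiple imputation repeats this and pools results.
X_imp = IterativeImputer(random_state=0).fit_transform(X_miss)
mi_model = LogisticRegression().fit(X_imp, y)

print(f"CC rows used: {cc_rows.sum()} / {n}; imputed rows used: {n}")
```

    Because CC analysis fits on fewer rows, its estimates carry larger standard errors; the study's reported 8.5% average decrease in standard error under MI reflects exactly this recovered sample size.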

  8. Predicted channel types - Potential for Habitat Improvement in the Columbia River Basin

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Basin-wide analysis of potential to improve tributary habitats in the Columbia River basin through restoration of habitat-forming processes. Identification of...

  9. Carbon Sequestration in Dryland and Irrigated Agroecosystems: Quantification at Different Scales for Improved Prediction

    Energy Technology Data Exchange (ETDEWEB)

    Verma, Shashi B. [Univ. of Nebraska, Lincoln, NE (United States); Cassman, Kenneth G. [Univ. of Nebraska, Lincoln, NE (United States); Arkebauer, Timothy J. [Univ. of Nebraska, Lincoln, NE (United States); Hubbard, Kenneth G. [Univ. of Nebraska, Lincoln, NE (United States); Knops, Johannes M. [Univ. of Nebraska, Lincoln, NE (United States); Suyker, Andrew E. [Univ. of Nebraska, Lincoln, NE (United States)

    2012-09-14

    The overall objective of this research is to improve our basic understanding of the biophysical processes that govern C sequestration in major rainfed and irrigated agroecosystems in the north-central USA.

  10. Airport Gate Activity Monitoring Tool Suite for Improved Turnaround Prediction, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of this research is to create a suite of tools for monitoring airport gate activities with the objective of improving aircraft turnaround. Airport ramp...

  11. Predicted riparian vegetation - Potential for Habitat Improvement in the Columbia River Basin

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Basin-wide analysis of potential to improve tributary habitats in the Columbia River basin through restoration of habitat-forming processes. Identification of...

  12. Alternative research funding to improve clinical outcomes: model of prediction and prevention of sudden cardiac death.

    Science.gov (United States)

    Myerburg, Robert J; Ullmann, Steven G

    2015-04-01

    Although identification and management of cardiovascular risk markers have provided important population risk insights and public health benefits, individual risk prediction remains challenging. Using sudden cardiac death risk as a base case, the complex epidemiology of sudden cardiac death risk and the substantial new funding required to study individual risk are explored. Complex epidemiology derives from the multiple subgroups having different denominators and risk profiles, while funding limitations emerge from saturation of conventional sources of research funding without foreseeable opportunities for increases. A resolution to this problem would have to emerge from new sources of funding targeted to individual risk prediction. In this analysis, we explore the possibility of a research funding strategy that would offer business incentives to the insurance industries, while providing support for unresolved research goals. The model is developed for the case of sudden cardiac death risk, but the concept is applicable to other areas of the medical enterprise. © 2015 American Heart Association, Inc.

  13. Specific trauma subtypes improve the predictive validity of the Harvard Trauma Questionnaire in Iraqi refugees.

    Science.gov (United States)

    Arnetz, Bengt B; Broadbridge, Carissa L; Jamil, Hikmet; Lumley, Mark A; Pole, Nnamdi; Barkho, Evone; Fakhouri, Monty; Talia, Yousif Rofa; Arnetz, Judith E

    2014-12-01

    Trauma exposure contributes to poor mental health among refugees, and exposure often is m