WorldWideScience

Sample records for models predicted tg

  1. Synaptic alterations in the rTg4510 mouse model of tauopathy.

    Science.gov (United States)

    Kopeikina, Katherine J; Polydoro, Manuela; Tai, Hwan-Ching; Yaeger, Erich; Carlson, George A; Pitstick, Rose; Hyman, Bradley T; Spires-Jones, Tara L

    2013-04-15

    Synapse loss, rather than the hallmark amyloid-β (Aβ) plaques or tau-filled neurofibrillary tangles (NFT), is considered the pathological feature most predictive of cognitive status in the Alzheimer's disease (AD) brain. The role of Aβ in synapse loss is well established, but despite data linking tau to synaptic function, the role of tau in synapse loss remains largely undetermined. Here we test the hypothesis that human mutant P301L tau overexpression in a mouse model (rTg4510) leads to age-dependent synaptic loss and dysfunction. Using array tomography and two methods of quantification (automated, threshold-based counting and a manual stereology-based technique), we demonstrate that overall synapse density is maintained in the neuropil, implying synapse loss commensurate with the cortical atrophy known to occur in this model. Multiphoton in vivo imaging reveals close to 30% loss of apical dendritic spines of individual pyramidal neurons, suggesting that these cells may be particularly vulnerable to tau-induced degeneration. Postmortem, we confirm the presence of tau in dendritic spines of rTg4510-YFP mouse brain by array tomography. These data implicate tau-induced loss of a subset of synapses that may be accompanied by compensatory increases in other synaptic subtypes, thereby preserving overall synapse density. Biochemical fractionation of synaptosomes from rTg4510 brain demonstrates a significant decrease in the expression of several synaptic proteins, suggesting a functional deficit of the remaining synapses in the rTg4510 brain. Together, these data show morphological and biochemical synaptic consequences of tau overexpression in the rTg4510 mouse model. Copyright © 2012 Wiley Periodicals, Inc.

  2. Ocular changes in TgF344-AD rat model of Alzheimer's disease.

    Science.gov (United States)

    Tsai, Yuchun; Lu, Bin; Ljubimov, Alexander V; Girman, Sergey; Ross-Cisneros, Fred N; Sadun, Alfredo A; Svendsen, Clive N; Cohen, Robert M; Wang, Shaomei

    2014-01-29

    Alzheimer's disease (AD) is the most common neurodegenerative disorder, characterized by progressive decline in learning, memory, and executive functions. In addition to cognitive and behavioral deficits, vision disturbances have been reported in the early stages of AD, well before the diagnosis is clearly established. To further investigate ocular abnormalities, a novel AD transgenic rat model was analyzed. Transgenic (Tg) rats (TgF344-AD) heterozygous for human mutant APPswe/PS1ΔE9 and age-matched wild-type (WT) rats, as well as 20 human postmortem retinal samples from both AD and healthy donors, were used. Visual function in the rodent was analyzed using the optokinetic response and luminance threshold recordings from the superior colliculus. Immunohistochemistry on retinal and brain sections was used to detect various markers, including amyloid-β (Aβ) plaques. As expected, Aβ plaques were detected in the hippocampus, cortex, and retina of Tg rats. Plaque-like structures were also found in two AD human whole-mount retinas. Choroidal thickness was significantly reduced in both Tg rat and AD human eyes when compared with age-matched controls. Tg rat eyes also showed hypertrophic retinal pigment epithelial (RPE) cells, inflammatory cells, and upregulation of complement factor C3. Although visual acuity was lower in Tg than in WT rats, there was no significant difference in retinal ganglion cell number or retinal vasculature. In this study, we observed pathological changes in the choroid and in RPE cells in the TgF344-AD rat model; choroidal thinning was further observed in human AD retina. Along with Aβ deposition, the inflammatory response was manifested by microglial recruitment and complement activation. Further studies are needed to elucidate the significance and mechanisms of these pathological changes.

  3. Characterization of cure in model photocrosslinking acrylate systems: Relationships among tensile properties, Tg and ultraviolet dose

    Energy Technology Data Exchange (ETDEWEB)

    Rakas, M.A. [Loctite Corp., Rocky Hill, CT (United States)]

    1996-10-01

    The extent of cure of a thermosetting polymer is governed largely by polymerization kinetics and the difference between the polymerization temperature and the material's ultimate glass transition temperature (Tg). For prepolymers that cure when exposed to ultraviolet (UV) radiation, other factors that strongly determine the extent of cure are the UV intensity and exposure time, and the interrelationship between the optical absorbance of the photoinitiator (PI) and the rate of formation of excited-state PI radicals. Beer's Law can be used to understand the relationship between the PI's molar absorptivity, its concentration, and the adhesive film thickness. Many adhesive users are more concerned with bulk properties such as tensile modulus and Tg than with a numerical measurement of degree of cure. Therefore, this research employed model acrylate formulations and determined changes in tensile properties and Tg as a function of film thickness and UV dose. These results enabled correlation of bulk and photoinitiator properties.
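
    The Beer's Law relationship invoked above can be sketched numerically. The photoinitiator values in the snippet are illustrative assumptions, not measurements from this study; the point is only how absorbance, and hence the fraction of UV absorbed, scales with film thickness:

```python
def absorbance(molar_absorptivity, concentration, path_length_cm):
    """Beer-Lambert law: A = epsilon * c * l (base-10 absorbance)."""
    return molar_absorptivity * concentration * path_length_cm

def fraction_absorbed(molar_absorptivity, concentration, path_length_cm):
    """Fraction of incident UV absorbed over the film thickness."""
    a = absorbance(molar_absorptivity, concentration, path_length_cm)
    return 1.0 - 10.0 ** (-a)

# Hypothetical PI parameters (assumed for illustration only):
eps = 200.0  # molar absorptivity, L mol^-1 cm^-1
c = 0.05     # PI concentration, mol L^-1
for thickness_um in (10, 50, 100, 500):
    l_cm = thickness_um * 1e-4  # micrometres -> cm
    print(thickness_um, "um:", round(fraction_absorbed(eps, c, l_cm), 3))
```

    With these assumed values, thicker films absorb a markedly larger fraction of the incident UV, which is why cure (and hence tensile properties and Tg) varies with depth and dose.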

  4. Effects of repetitive exposure to anesthetics and analgesics in the Tg2576 mouse Alzheimer's model.

    Science.gov (United States)

    Quiroga, Carolina; Chaparro, Rafael E; Karlnoski, Rachel; Erasso, Diana; Gordon, Marcia; Morgan, David; Bosco, Gerardo; Rubini, Alessandro; Parmagnani, Andrea; Paoli, Antonio; Mangar, Devanand; Camporesi, Enrico M

    2014-11-01

    The use of anesthetics and sedatives has been suggested as a contributor to Alzheimer's disease neuropathogenesis. We wanted to address the in vivo relevance of these substances in the Tg2576 Alzheimer's mouse model. Tg2576 mice were anesthetized or sedated for 90 min once a week for 4 weeks. Y maze, Congo Red, and amyloid beta (Aβ) immunochemistry assays were performed. We did not find any significant change in the navigation behavior of the exposed mice compared to the controls. Significantly less deposition of Aβ was observed in the CA1 area of the hippocampus and frontal cortex of mice exposed to isoflurane, propofol, diazepam, ketamine, and pentobarbital. In the dentate gyrus, Aβ deposition was significantly greater in the group treated with pentobarbital. Congo Red staining evidenced significantly fewer fibrils in the cortex of mice exposed to diazepam, ketamine, or pentobarbital. The adopted repetitive exposure did not cause a significant detriment in Tg2576 mice.

  5. Cognitive impairment in the Tg6590 transgenic rat model of Alzheimer's disease

    DEFF Research Database (Denmark)

    Kloskowska, Ewa; Pham, Therese M; Nilsson, Tatjana;

    2010-01-01

    Recently, interest in the rat as an animal model of Alzheimer's disease (AD) has been growing. We have previously described the Tg6590 transgenic rat line expressing the amyloid precursor protein containing the Swedish AD mutation (K670M/N671L) that shows early stages of Abeta deposition...

  6. Ocular Changes in TgF344-AD Rat Model of Alzheimer's Disease

    OpenAIRE

    2014-01-01

    In this study, we observed pathological changes in the choroid and in RPE cells in the TgF344-AD rat model; choroidal thinning was further observed in human AD retina. Along with Aβ deposition, the inflammatory response was manifested by microglial recruitment and complement activation.

  7. Neuritin attenuates cognitive function impairments in tg2576 mouse model of Alzheimer's disease.

    Directory of Open Access Journals (Sweden)

    Yoori Choi

    Full Text Available Neuritin, also known as CPG15, is a neurotrophic factor that was initially discovered in a screen to identify genes involved in activity-dependent synaptic plasticity. Neuritin plays multiple roles in neural development and synaptic plasticity, although its binding receptor(s) and downstream signaling effectors remain unclear. In this study, we found that the cortical and hippocampal expression of neuritin is reduced in the brains of Alzheimer's disease (AD) patients, and demonstrated that viral-mediated expression of neuritin in the dentate gyrus of 13-month-old Tg2576 mice, an AD animal model, attenuated deficits in learning and memory as assessed by the Morris water maze test. We also found that neuritin restored the reduction in dendritic spine density and the maturity of individual spines in primary hippocampal neuron cultures prepared from Tg2576 mice. Viral-mediated expression of neuritin in the dentate gyrus of 7-week-old Sprague-Dawley rats also increased neurogenesis in the hippocampus. Taken together, our results demonstrate that neuritin restores dendritic spine density and the maturity of individual spines in primary hippocampal neurons from Tg2576 mice, and attenuates cognitive deficits in the Tg2576 mouse model of AD, suggesting that neuritin possesses therapeutic potential for AD.

  8. Iron Chelation Inhibits Osteoclastic Differentiation In Vitro and in Tg2576 Mouse Model of Alzheimer's Disease.

    Directory of Open Access Journals (Sweden)

    Jun-Peng Guo

    Full Text Available Patients with Alzheimer's disease (AD) frequently have lower bone mineral density and a higher rate of hip fracture. Tg2576, a well-characterized AD animal model that ubiquitously expresses Swedish mutant amyloid precursor protein (APPswe), displays not only AD-relevant neuropathology but also age-dependent bone deficits. However, the underlying mechanisms remain poorly understood. As APP is implicated as a regulator of iron export, and metal chelation is considered a potential therapeutic strategy for AD, we examined the effect of iron chelation on the osteoporotic deficit in Tg2576 mice. Remarkably, in vivo treatment with the iron chelator clioquinol (CQ) increased both trabecular and cortical bone mass selectively in Tg2576, but not wild-type (WT), mice. Further in vitro studies showed that low concentrations of CQ, as well as deferoxamine (DFO), another iron chelator, selectively inhibited osteoclast (OC) differentiation without an obvious effect on osteoblast (OB) differentiation. Intriguingly, the inhibitory effects of both CQ and DFO on OC differentiation were more potent in bone marrow macrophages (BMMs) from Tg2576 mice than in those from wild-type controls. The reduction of intracellular iron levels by CQ was also more dramatic in APPswe-expressing BMMs. Taken together, these results demonstrate a potent inhibition of OC formation and activation in APPswe-expressing BMMs by iron chelation, and reveal a potential therapeutic value of CQ in treating AD-associated osteoporotic deficits.

  9. Iron Chelation Inhibits Osteoclastic Differentiation In Vitro and in Tg2576 Mouse Model of Alzheimer's Disease.

    Science.gov (United States)

    Guo, Jun-Peng; Pan, Jin-Xiu; Xiong, Lei; Xia, Wen-Fang; Cui, Shun; Xiong, Wen-Cheng

    2015-01-01

    Patients with Alzheimer's disease (AD) frequently have lower bone mineral density and a higher rate of hip fracture. Tg2576, a well-characterized AD animal model that ubiquitously expresses Swedish mutant amyloid precursor protein (APPswe), displays not only AD-relevant neuropathology but also age-dependent bone deficits. However, the underlying mechanisms remain poorly understood. As APP is implicated as a regulator of iron export, and metal chelation is considered a potential therapeutic strategy for AD, we examined the effect of iron chelation on the osteoporotic deficit in Tg2576 mice. Remarkably, in vivo treatment with the iron chelator clioquinol (CQ) increased both trabecular and cortical bone mass selectively in Tg2576, but not wild-type (WT), mice. Further in vitro studies showed that low concentrations of CQ, as well as deferoxamine (DFO), another iron chelator, selectively inhibited osteoclast (OC) differentiation without an obvious effect on osteoblast (OB) differentiation. Intriguingly, the inhibitory effects of both CQ and DFO on OC differentiation were more potent in bone marrow macrophages (BMMs) from Tg2576 mice than in those from wild-type controls. The reduction of intracellular iron levels by CQ was also more dramatic in APPswe-expressing BMMs. Taken together, these results demonstrate a potent inhibition of OC formation and activation in APPswe-expressing BMMs by iron chelation, and reveal a potential therapeutic value of CQ in treating AD-associated osteoporotic deficits.

  10. Chronic Hypertension Leads to Neurodegeneration in the TgSwDI Mouse Model of Alzheimer's Disease.

    Science.gov (United States)

    Kruyer, Anna; Soplop, Nadine; Strickland, Sidney; Norris, Erin H

    2015-07-01

    Numerous epidemiological studies link vascular disorders, such as hypertension, diabetes mellitus, and stroke, with Alzheimer's disease (AD). Hypertension, specifically, is an important modifiable risk factor for late-onset AD. To examine the link between midlife hypertension and the onset of AD later in life, we chemically induced chronic hypertension in the TgSwDI mouse model of AD in early adulthood. Hypertension accelerated cognitive deficits in the Barnes maze test and induced hippocampal neurodegeneration at an early age in this mouse line (43% reduction in the dorsal subiculum; P<0.05), establishing this as a useful research model of AD with mixed vascular and amyloid pathologies.

  11. Increased food intake leads to obesity and insulin resistance in the tg2576 Alzheimer's disease mouse model.

    Science.gov (United States)

    Kohjima, Motoyuki; Sun, Yuxiang; Chan, Lawrence

    2010-04-01

    Recent studies suggest that hyperinsulinemia and insulin resistance are linked to Alzheimer's disease (AD). In this study, we used Tg2576 transgenic (Tg) mice, a widely used transgenic mouse model for AD, to explore the relationship between increased amyloid beta-peptide (Abeta) and insulin resistance. When fed a high-fat diet (HFD), Tg mice developed obesity and insulin resistance at 16 wk of age. Furthermore, HFD-fed Tg mice displayed abnormal feeding behavior and increased caloric intake with time. Although caloric intake of HFD-fed Tg mice was similar to that of normal diet-fed Tg or wild-type mice during 4 to 8 wk of age, it increased sharply at 12 wk, and went up further at 16 wk, which paralleled changes in the level of Abeta40 and Abeta42 in the brain of these mice. Limiting food intake in HFD-fed Tg mice by pair-feeding a caloric intake identical with that of normal diet-fed mice completely prevented the obesity and insulin intolerance of HFD-fed Tg mice. The hypothalamus of HFD-fed Tg mice had a significant decrease in the expression of the anorexigenic neuropeptide, brain-derived neurotrophic factor, at both the mRNA and protein levels. These findings suggest that the increased Abeta in the brain of HFD-fed Tg2576 mice is associated with reduced brain-derived neurotrophic factor expression, which led to abnormal feeding behavior and increased food intake, resulting in obesity and insulin resistance in these animals.

  12. TG-43 U1 based dosimetric characterization of model 67-6520 Cs-137 brachytherapy source

    Energy Technology Data Exchange (ETDEWEB)

    Meigooni, Ali S.; Wright, Clarissa; Koona, Rafiq A.; Awan, Shahid B.; Granero, Domingo; Perez-Calatayud, Jose; Ballester, Facundo [Department of Radiation Medicine, North Shore University Hospital, 300 Community Drive, Manhasset, New York 11030 and Department of Radiation Medicine, University of Kentucky Chandler Medical Center, Lexington, Kentucky 40536-0084 (United States); Department of Radiation Medicine, University of Kentucky Chandler Medical Center, Lexington, Kentucky 40536-0084 (United States); Department of Radiation Physics, ERESA, Hospital General Universitario, Avenida Tres Cruces, 2, E-46014 Valencia (Spain); Department of Oncology, Physics Section, 'La Fe' University Hospital, Avenida Campanar 21, E-46009 Valencia (Spain); Department of Atomic, Molecular and Nuclear Physics, University of Valencia, C/ Dr. Moliner 50, E-46100 Burjassot, Spain and Instituto de Fisica Corpuscular (IFIC), C/ Dr. Moliner 50, E-46100 Burjassot (Spain)]

    2009-10-15

    Purpose: Brachytherapy treatment has been a cornerstone of management for various cancer sites, particularly for the treatment of gynecological malignancies. In low dose rate brachytherapy treatments, ¹³⁷Cs sources have been used for several decades. A new ¹³⁷Cs source design has been introduced (model 67-6520, source B3-561) by Isotope Products Laboratories (IPL) for clinical application. The goal of the present work is to implement the TG-43 U1 protocol in the characterization of the aforementioned ¹³⁷Cs source. Methods: The dosimetric characteristics of the IPL ¹³⁷Cs source were measured using LiF thermoluminescent dosimeters in Solid Water phantom material and calculated using Monte Carlo simulations with the GEANT4 code in Solid Water and liquid water. The dose rate constant, radial dose function, and two-dimensional anisotropy function of this source model were obtained following the TG-43 U1 recommendations. In addition, the primary and scatter dose separation (PSS) formalism, which can be used in convolution/superposition methods to calculate dose distributions around brachytherapy sources in heterogeneous media, was studied. Results: The measured and calculated dose rate constants of the IPL ¹³⁷Cs source in Solid Water were found to be 0.930 (±7.3%) and 0.928 (±2.6%) cGy h⁻¹ U⁻¹, respectively. The agreement between these two methods was within our experimental uncertainties. The Monte Carlo calculated value of the dose rate constant in liquid water was Λ = 0.948 (±2.6%) cGy h⁻¹ U⁻¹. Similarly, the agreement between measured and calculated radial dose functions and anisotropy functions was found to be within ±5%. In addition, the tabulated data required to characterize the source using the PSS formalism were derived. Conclusions: In this article, the complete dosimetry of the newly designed ¹³⁷Cs IPL source following the AAPM TG-43 U1 dosimetric protocol and the PSS formalism is presented.
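
    The TG-43 calculation these parameters feed into can be sketched in its point-source form along the transverse axis, where dose rate = S_K · Λ · (r0/r)² · g(r). The tabulated g(r) values below are illustrative placeholders, not the published data for the model 67-6520 source; Λ = 0.948 cGy h⁻¹ U⁻¹ is the liquid-water value quoted in the abstract:

```python
import bisect

# Illustrative radial dose function table (NOT the published 67-6520 data):
R_TAB = [0.5, 1.0, 2.0, 3.0, 5.0, 10.0]       # distance r, cm
G_TAB = [1.04, 1.00, 0.97, 0.94, 0.89, 0.77]  # assumed g(r) values

def g(r):
    """Piecewise-linear interpolation of the tabulated radial dose function."""
    i = bisect.bisect_left(R_TAB, r)
    i = min(max(i, 1), len(R_TAB) - 1)
    r1, r2 = R_TAB[i - 1], R_TAB[i]
    g1, g2 = G_TAB[i - 1], G_TAB[i]
    return g1 + (g2 - g1) * (r - r1) / (r2 - r1)

def dose_rate(s_k, lam, r, r0=1.0):
    """TG-43 point-source dose rate (cGy/h) on the transverse axis.

    s_k: air-kerma strength (U); lam: dose rate constant (cGy h^-1 U^-1);
    the (r0/r)**2 factor is the point-source geometry function ratio.
    """
    return s_k * lam * (r0 / r) ** 2 * g(r)

# e.g. a 10 U source with the Monte Carlo liquid-water Lambda:
print(dose_rate(10.0, 0.948, 2.0), "cGy/h at 2 cm")
```

    A full TG-43 U1 implementation would replace the point-source geometry ratio with the line-source geometry function and multiply by the 2D anisotropy function F(r,θ); this sketch shows only the structure of the formalism.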

  13. Increased hippocampal excitability in the 3xTgAD mouse model for Alzheimer's disease in vivo.

    Directory of Open Access Journals (Sweden)

    Katherine E Davis

    Full Text Available Mouse Alzheimer's disease (AD) models develop age- and region-specific pathology throughout the hippocampal formation. One recently established pathological correlate is an increase in hippocampal excitability in vivo. Hippocampal pathology also produces episodic memory decline in human AD, and we have shown a similar episodic deficit in 3xTgAD model mice aged 3-6 months. Here, we tested whether hippocampal synaptic dysfunction accompanies this cognitive deficit by probing dorsal CA1 and DG synaptic responses in anaesthetized, 4-6 month-old 3xTgAD mice. As our previous reports highlighted a decline in episodic performance in aged control mice, we included aged cohorts for comparison. CA1 and DG responses to low-frequency perforant path stimulation were comparable between 3xTgAD and controls at both age ranges. As expected, DG recordings in controls showed paired-pulse depression; however, paired-pulse facilitation was observed in DG and CA1 of young and old 3xTgAD mice. During stimulus trains, both short-latency (presumably monosynaptic: 'direct') and long-latency (presumably polysynaptic: 're-entrant') responses were observed. Facilitation of direct responses was modest in 3xTgAD animals. However, re-entrant responses in DG and CA1 of young 3xTgAD mice developed earlier in the stimulus train and with larger amplitude when compared to controls. Old mice showed less DG paired-pulse depression and no evidence of re-entrance. In summary, DG and CA1 responses to low-frequency stimulation in all groups were comparable, suggesting no loss of synaptic connectivity in 3xTgAD mice. However, higher-frequency activation revealed complex changes in synaptic excitability in DG and CA1 of 3xTgAD mice. In particular, short-term plasticity in DG and CA1 was facilitated in 3xTgAD mice, most evidently in younger animals. In addition, re-entrance was facilitated in young 3xTgAD mice. Overall, these data suggest that the episodic-like memory deficit in 3xTgAD mice

  14. Environmental enrichment does not influence hypersynchronous network activity in the Tg2576 mouse model of Alzheimer's disease.

    Science.gov (United States)

    Bezzina, Charlotte; Verret, Laure; Halley, Hélène; Dahan, Lionel; Rampon, Claire

    2015-01-01

    The cognitive reserve hypothesis holds that the brain can overcome pathology by reinforcing preexistent processes or by developing alternative cognitive strategies. Epidemiological studies have revealed that this reserve can be built through life experiences such as education or leisure activities. We previously showed that an early, transient environmental enrichment (EE) durably improves memory performance in the Tg2576 mouse model of Alzheimer's disease (AD). Recently, we demonstrated hypersynchronous brain network activity in young adult Tg2576 mice. As aberrant oscillatory activity can contribute to memory deficits, we wondered whether the long-lasting memory improvements observed after EE were associated with a reduction of neuronal network hypersynchrony. Thus, we exposed non-transgenic (NTg) and Tg2576 mice to standard or enriched housing conditions for 10 weeks, starting at 3 months of age. Two weeks after the EE period, Tg2576 mice presented similar seizure susceptibility to a GABA receptor antagonist. Immediately after and 2 weeks after this enrichment period, standard- and enriched-housed Tg2576 mice did not differ with regard to the frequency of interictal spikes in their electroencephalographic (EEG) recordings. Thus, the long-lasting effect of this EE protocol on memory capacities in Tg2576 mice is not mediated by a reduction of their aberrant cerebral neuronal activity at early ages.

  15. Environmental enrichment does not influence hypersynchronous network activity in the Tg2576 mouse model of Alzheimer’s disease

    Science.gov (United States)

    Bezzina, Charlotte; Verret, Laure; Halley, Hélène; Dahan, Lionel; Rampon, Claire

    2015-01-01

    The cognitive reserve hypothesis holds that the brain can overcome pathology by reinforcing preexistent processes or by developing alternative cognitive strategies. Epidemiological studies have revealed that this reserve can be built through life experiences such as education or leisure activities. We previously showed that an early, transient environmental enrichment (EE) durably improves memory performance in the Tg2576 mouse model of Alzheimer’s disease (AD). Recently, we demonstrated hypersynchronous brain network activity in young adult Tg2576 mice. As aberrant oscillatory activity can contribute to memory deficits, we wondered whether the long-lasting memory improvements observed after EE were associated with a reduction of neuronal network hypersynchrony. Thus, we exposed non-transgenic (NTg) and Tg2576 mice to standard or enriched housing conditions for 10 weeks, starting at 3 months of age. Two weeks after the EE period, Tg2576 mice presented similar seizure susceptibility to a GABA receptor antagonist. Immediately after and 2 weeks after this enrichment period, standard- and enriched-housed Tg2576 mice did not differ with regard to the frequency of interictal spikes in their electroencephalographic (EEG) recordings. Thus, the long-lasting effect of this EE protocol on memory capacities in Tg2576 mice is not mediated by a reduction of their aberrant cerebral neuronal activity at early ages. PMID:26441640

  16. Cerebrospinal fluid neurofilament light chain as a biomarker of neurodegeneration in the Tg4510 and MitoPark mouse models

    DEFF Research Database (Denmark)

    Clement, Amalie; Mitchelmore, Cathy; Andersson, Daniel

    2017-01-01

    disorders like Alzheimer's disease (AD), Parkinson's disease (PD) and tauopathies. We hypothesized that CSF neurofilament light (NF-L) can be used to track progression of neurodegeneration and potentially monitor the efficacy of novel therapeutic agents in preclinical development. To substantiate this, we...... examined whether changes in NF-L levels in brain, plasma, and CSF reflect the changing disease status of preclinical models of neurodegeneration. Using Western Blot and ELISA we characterized NF-L and disease-related proteins in brain, CSF and plasma samples from Tg4510 mice (tauopathy/AD), MitoPark mice...... (PD), and their age-matched control littermates. We found that CSF NF-L clearly discriminates Tg4510 from control littermates, which was not observed for the MitoPark model. However, both Tg4510 and MitoPark showed altered expression and solubilization of NFs compared to control littermates. We found...

  17. Characterization of hippocampal Cajal-Retzius cells during development in a mouse model of Alzheimer’s disease (Tg2576)

    Institute of Scientific and Technical Information of China (English)

    Dongming Yu; Wenjuan Fan; Ping Wu; Jiexin Deng; Jing Liu; Yanli Niu; Mingshan Li; Jinbo Deng

    2014-01-01

    Cajal-Retzius cells are reelin-secreting neurons in the marginal zone of the neocortex and hippocampus. The aim of this study was to investigate Cajal-Retzius cells in Alzheimer’s disease pathology. Results revealed that the number of Cajal-Retzius cells markedly decreased with age both in wild-type mice and in mice overexpressing the Swedish double mutant form of amyloid precursor protein 695 (transgenic (Tg) 2576 mice). Numerous reelin-positive neurons were positive for activated caspase-3 in Tg2576 mice, suggesting that Cajal-Retzius neuronal loss occurred via apoptosis in this Alzheimer’s disease model. Compared with wild type, the number of Cajal-Retzius cells was significantly lower in Tg2576 mice. Western blot analysis confirmed that reelin levels were markedly lower in Tg2576 mice than in wild-type mice. The decline in Cajal-Retzius cells in Tg2576 mice was found to occur concomitantly with the onset of Alzheimer’s disease amyloid pathology and related behavioral deficits. Overall, these data indicate that Cajal-Retzius cell loss occurs with the onset and development of Alzheimer’s disease.

  18. Decreased Myelinated Fibers in the Hippocampal Dentate Gyrus of the Tg2576 Mouse Model of Alzheimer’s Disease

    Science.gov (United States)

    Lu, Wei; Yang, Shu; Zhang, Lei; Chen, Lin; Chao, Feng-Lei; Luo, Yan-min; Xiao, Qian; Gu, Heng-Wei; Jiang, Rong; Tang, Yong

    2016-01-01

    Alzheimer’s disease (AD), the most common cause of dementia in the elderly, is characterized by deficits in cognition and memory. Although amyloid-β (Aβ) accumulation is known to be the earliest pathological event that triggers subsequent neurodegeneration, how Aβ accumulation causes behavioral deficits remains incompletely understood. In this study, using the Morris water maze test, ELISA and stereological methods, we examined spatial learning and memory performance, the soluble Aβ concentration and the myelination of fibers in the hippocampus of 4-, 6-, 8- and 10-month-old Tg2576 AD model mice. Our results showed that spatial learning and memory performance was significantly impaired in the Tg2576 mice compared to the wild type (WT) controls and that the myelinated fiber length in the hippocampal dentate gyrus (DG) was markedly decreased from 0.33 ± 0.03 km in the WT controls to 0.17 ± 0.02 km in the Tg2576 mice at 10 months of age. However, the concentrations of soluble Aβ40 and Aβ42 were significantly increased as early as 4-6 months of age. The decreased myelinated fiber length in the DG may contribute to the spatial learning and memory deficits of Tg2576 mice. Therefore, we suggest that the significant accumulation of soluble Aβ may serve as a preclinical biomarker for AD diagnosis and that protecting myelinated fibers may represent a novel strategy for delaying the progression of early-stage AD. PMID:26971933

  19. Modifications of hippocampal circuits and early disruption of adult neurogenesis in the tg2576 mouse model of Alzheimer's disease.

    Directory of Open Access Journals (Sweden)

    Alice Krezymon

    Full Text Available At advanced stages of Alzheimer's disease, cognitive dysfunction is accompanied by severe alterations of hippocampal circuits that may largely underlie memory impairments. However, it is likely that anatomical remodeling in the hippocampus starts long before any cognitive alteration is detected. Using the well-described Tg2576 mouse model of Alzheimer's disease, which develops progressive age-dependent amyloidosis and cognitive deficits, we examined whether specific stages of the disease were associated with the expression of anatomical markers of hippocampal dysfunction. We found that these mice develop a complex pattern of changes in their dentate gyrus with aging. Those include aberrant expression of neuropeptide Y and reduced levels of calbindin, reflecting a profound remodeling of inhibitory and excitatory circuits in the dentate gyrus. Preceding these changes, we identified severe alterations of adult hippocampal neurogenesis in Tg2576 mice. We gathered converging data in Tg2576 mice at a young age indicating impaired maturation of new neurons that may compromise their functional integration into hippocampal circuits. Thus, disruption of adult hippocampal neurogenesis occurred before network remodeling in this mouse model and may therefore represent an early event in the etiology of Alzheimer's pathology. Ultimately, both events may constitute key components of the hippocampal dysfunction and associated cognitive deficits occurring in Alzheimer's disease.

  20. Primary motor cortex alterations in Alzheimer disease: A study in the 3xTg-AD model.

    Science.gov (United States)

    Orta-Salazar, E; Feria-Velasco, A I; Díaz-Cintra, S

    2017-04-19

    In humans and animal models, Alzheimer disease (AD) is characterised by accumulation of amyloid-β peptide (Aβ) and hyperphosphorylated tau protein, neuronal degeneration, and astrocytic gliosis, especially in vulnerable brain regions (hippocampus and cortex). These alterations are associated with cognitive impairment (loss of memory) and non-cognitive impairment (motor impairment). The purpose of this study was to identify cell changes (neurons and glial cells) and aggregation of Aβ and hyperphosphorylated tau protein in the primary motor cortex (M1) of the 3xTg-AD mouse model at an intermediate stage of AD. We used female 3xTg-AD mice aged 11 months and compared them to non-transgenic mice of the same age. In both groups, we assessed motor performance (open field test) and neuronal damage in M1 using specific markers: BAM10 (extracellular Aβ aggregates), tau 499 (hyperphosphorylated tau protein), GFAP (astrocytes), and Klüver-Barrera staining (neurons). Female 3xTg-AD mice at intermediate stages of the disease displayed motor and cellular alterations associated with Aβ and hyperphosphorylated tau protein deposition in M1. Patients with AD display signs and symptoms of functional impairment from early stages. According to our results, M1 cell damage in intermediate-stage AD affects motor function, which is linked to progression of the disease. Copyright © 2017 Sociedad Española de Neurología. Published by Elsevier España, S.L.U. All rights reserved.

  1. Impaired thermoregulation and beneficial effects of thermoneutrality in the 3×Tg-AD model of Alzheimer's disease.

    Science.gov (United States)

    Vandal, Milene; White, Philip J; Tournissac, Marine; Tremblay, Cyntia; St-Amour, Isabelle; Drouin-Ouellet, Janelle; Bousquet, Melanie; Traversy, Marie-Thérèse; Planel, Emmanuel; Marette, Andre; Calon, Frederic

    2016-07-01

    The sharp rise in the incidence of Alzheimer's disease (AD) at an old age coincides with a reduction in energy metabolism and core body temperature. We found that the triple-transgenic mouse model of AD (3×Tg-AD) spontaneously develops a lower basal body temperature and is more vulnerable to a cold environment compared with age-matched controls. This was despite higher nonshivering thermogenic activity, as evidenced by brown adipose tissue norepinephrine content and uncoupling protein 1 expression. A 24-hour exposure to cold (4 °C) aggravated key neuropathologic markers of AD such as: tau phosphorylation, soluble amyloid beta concentrations, and synaptic protein loss in the cortex of 3×Tg-AD mice. Strikingly, raising the body temperature of aged 3×Tg-AD mice via exposure to a thermoneutral environment improved memory function and reduced amyloid and synaptic pathologies within a week. Our results suggest the presence of a vicious cycle between impaired thermoregulation and AD-like neuropathology, and it is proposed that correcting thermoregulatory deficits might be therapeutic in AD. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. Sleep-Wake Cycle Dysfunction in the TgCRND8 Mouse Model of Alzheimer's Disease: From Early to Advanced Pathological Stages.

    Directory of Open Access Journals (Sweden)

    Jessica Colby-Milley

    Full Text Available In addition to cognitive decline, individuals affected by Alzheimer's disease (AD) can experience important neuropsychiatric symptoms including sleep disturbances. We characterized the sleep-wake cycle in the TgCRND8 mouse model of AD, which overexpresses a mutant human form of amyloid precursor protein resulting in high levels of β-amyloid and plaque formation by 3 months of age. Polysomnographic recordings in freely-moving mice were conducted to study sleep-wake cycle architecture at 3, 7 and 11 months of age, and corresponding levels of β-amyloid in brain regions regulating sleep-wake states were measured. At all ages, TgCRND8 mice showed increased wakefulness and reduced non-rapid eye movement (NREM) sleep during the resting and active phases. Increased wakefulness in TgCRND8 mice was accompanied by a shift in the waking power spectrum towards fast frequency oscillations in the beta (14-20 Hz) and low gamma (20-50 Hz) ranges. Given the phenotype of hyperarousal observed in TgCRND8 mice, the role of noradrenergic transmission in the promotion of arousal, and previous work reporting an early disruption of the noradrenergic system in TgCRND8, we tested the effects of the alpha-1-adrenoreceptor antagonist prazosin on sleep-wake patterns in TgCRND8 and non-transgenic (NTg) mice. We found that a lower dose (2 mg/kg) of prazosin increased NREM sleep in NTg but not in TgCRND8 mice, whereas a higher dose (5 mg/kg) increased NREM sleep in both genotypes, suggesting altered sensitivity to noradrenergic blockade in TgCRND8 mice. Collectively, our results demonstrate that amyloidosis in TgCRND8 mice is associated with sleep-wake cycle dysfunction, characterized by hyperarousal, validating this model as a tool towards understanding the relationship between β-amyloid overproduction and disrupted sleep-wake patterns in AD.

  3. Mesenchymal Stem Cells Preserve Working Memory in the 3xTg-AD Mouse Model of Alzheimer's Disease.

    Science.gov (United States)

    Ruzicka, Jiri; Kulijewicz-Nawrot, Magdalena; Rodrigez-Arellano, Jose Julio; Jendelova, Pavla; Sykova, Eva

    2016-01-25

    The transplantation of stem cells may have a therapeutic effect on the pathogenesis and progression of neurodegenerative disorders. In the present study, we transplanted human mesenchymal stem cells (MSCs) into the lateral ventricle of a triple transgenic mouse model of Alzheimer's disease (3xTg-AD) at the age of eight months. We evaluated spatial reference and working memory after MSC treatment and the possible underlying mechanisms, such as the influence of transplanted MSCs on neurogenesis in the subventricular zone (SVZ) and the expression levels of a 56 kDa oligomer of amyloid β (Aβ*56), glutamine synthetase (GS) and glutamate transporters (Glutamate aspartate transporter (GLAST) and Glutamate transporter-1 (GLT-1)) in the entorhinal and prefrontal cortices and the hippocampus. At 14 months of age we observed the preservation of working memory in MSC-treated 3xTg-AD mice, suggesting that such preservation might be due to the protective effect of MSCs on GS levels and the considerable downregulation of Aβ*56 levels in the entorhinal cortex. These changes were observed six months after transplantation, accompanied by clusters of proliferating cells in the SVZ. Since the grafted cells did not survive for the whole experimental period, it is likely that the observed effects could have been transiently more pronounced at earlier time points than at six months after cell application.

  5. Investigation into the cancer protective effect of flaxseed in Tg.NK (MMTV/c-neu) mice, a murine mammary tumor model

    DEFF Research Database (Denmark)

    Birkved, Franziska Kramer; Mortensen, Alicja; Penalvo, Jose L;

    2011-01-01

    The aim of the present study was to investigate whether low flaxseed doses relevant to human dietary exposure can prevent mammary tumors in transgenic Tg.NK mice, a model of breast cancer. Animals were exposed to flaxseed through the diet at human relevant levels. Tumor-related parameters and tumor...... to the controls. Thus, the effect of small dietary doses of flaxseed on mammary tumor development in Tg.NK mice remains to be established....

  6. Early onset of hypersynchronous network activity and expression of a marker of chronic seizures in the Tg2576 mouse model of Alzheimer's disease.

    Directory of Open Access Journals (Sweden)

    Charlotte Bezzina

    Full Text Available Cortical and hippocampal hypersynchrony of neuronal networks seems to be an early event in Alzheimer's disease pathogenesis. Many mouse models of the disease also present neuronal network hypersynchrony, as evidenced by higher susceptibility to pharmacologically-induced seizures, electroencephalographic seizures accompanied by spontaneous interictal spikes, and expression of markers of chronic seizures such as ectopic neuropeptide Y expression in mossy fibers. This network hypersynchrony is thought to contribute to memory deficits, but whether it precedes the onset of memory deficits in mouse models remains unknown. The earliest memory impairments in the Tg2576 mouse model of Alzheimer's disease have been observed at 3 months of age. We therefore assessed network hypersynchrony in Tg2576 and non-transgenic male mice at 1.5, 3 and 6 months of age. As early as 1.5 months of age, Tg2576 mice presented higher seizure susceptibility to systemic injection of a GABAA receptor antagonist. They also displayed spontaneous interictal spikes on EEG recordings. Some Tg2576 mice presented hippocampal ectopic expression of neuropeptide Y, whose incidence seems to increase with age in the Tg2576 population. Our data reveal that network hypersynchrony appears very early in Tg2576 mice, before any demonstrated memory impairments.

  8. TG-FTIR analysis of biomass pyrolysis

    Energy Technology Data Exchange (ETDEWEB)

    Bassilakis, R.; Carangelo, R.M.; Wojtowicz, M.A. [Advanced Fuel Research Inc., Hartford, CT (United States)

    2001-10-09

    A great need exists for comprehensive biomass-pyrolysis models that could predict yields and evolution patterns of selected volatile products as a function of feedstock characteristics and process conditions. A thermogravimetric analyzer coupled with Fourier transform infrared analysis of evolving products (TG-FTIR) can provide useful input to such models in the form of kinetic information obtained under low heating rate conditions. In this work, robust TG-FTIR quantification routes were developed for infrared analysis of volatile products relevant to biomass pyrolysis. The analysis was applied to wheat straw, three types of tobacco (Burley, Oriental, and Bright) and three biomass model compounds (xylan, chlorogenic acid, and D-glucose). Product yields were compared with literature data, and species potentially quantifiable by FT-IR are reviewed. Product-evolution patterns are reported for all seven biomass samples. 41 refs., 7 figs., 2 tabs.
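    Kinetic information of the kind TG-FTIR supplies is commonly reduced to a first-order, non-isothermal Arrhenius model for each evolving species. The sketch below illustrates that standard reduction by integrating dα/dT = (A/β)·exp(−E/RT)·(1−α) along a linear temperature ramp; the rate parameters A and E are round illustrative numbers, not values fitted to any of the feedstocks in this study.

```python
# Illustrative first-order, non-isothermal pyrolysis kinetics of the kind
# used to reduce TG evolution curves: dα/dT = (A/β)·exp(−E/RT)·(1−α),
# where α is conversion and β the constant heating rate. A and E are
# hypothetical round numbers, not fitted values from the study.
import math

R = 8.314     # gas constant, J/(mol·K)
A = 1.0e13    # pre-exponential factor, 1/s (assumed)
E = 180e3     # activation energy, J/mol (assumed)
BETA = 0.5    # heating rate, K/s (30 K/min, typical of TG experiments)

def conversion_curve(t0=300.0, t1=900.0, dT=0.01):
    """Euler-integrate conversion α over a linear temperature ramp."""
    alpha, temps, curve = 0.0, [], []
    T = t0
    while T < t1:
        k = A * math.exp(-E / (R * T))             # rate constant at T
        alpha += (k / BETA) * (1.0 - alpha) * dT   # dα for this dT step
        alpha = min(alpha, 1.0)
        temps.append(T)
        curve.append(alpha)
        T += dT
    return temps, curve

temps, curve = conversion_curve()
# Peak devolatilization temperature = steepest point of the α(T) curve.
rates = [b - a for a, b in zip(curve, curve[1:])]
T_peak = temps[rates.index(max(rates))]
```

    With these assumed parameters the conversion curve is sigmoidal and the evolution-rate peak falls in the 600-700 K range, the qualitative shape one fits to each quantified volatile product.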

  9. Prolonged running, not fluoxetine treatment, increases neurogenesis, but does not alter neuropathology, in the 3xTg mouse model of Alzheimer's disease.

    Science.gov (United States)

    Marlatt, Michael W; Potter, Michelle C; Bayer, Thomas A; van Praag, Henriette; Lucassen, Paul J

    2013-01-01

    Reductions in adult neurogenesis have been documented in the original 3xTg mouse model of Alzheimer's disease (AD), notably occurring at the same age when spatial memory deficits and amyloid plaque pathology appeared. As this suggested reduced neurogenesis was associated with behavioral deficits, we tested whether activity and pharmacological stimulation could prevent memory deficits and modify neurogenesis and/or neuropathology in the 3xTg model backcrossed to the C57Bl/6 strain. We chronically administered the antidepressant fluoxetine to one group of mice, allowed access to a running wheel in another, and combined both treatments in a third cohort. All treatments lasted for 11 months. The female 3xTg mice failed to exhibit any deficits in spatial learning and memory as measured in the Morris water maze, indicating that when backcrossed to the C57Bl/6 strain, the 3xTg mice lost the behavioral phenotype that was present in the original 3xTg mouse maintained on a hybrid background. Despite this, the backcrossed 3xTg mice expressed prominent intraneuronal amyloid beta (Aβ) levels in the cortex and amygdala, with lower levels in the CA1 area of the hippocampus. In the combined cohort, fluoxetine treatment interfered with exercise and reduced the total distance run. The extent of Aβ neuropathology, the tau accumulations, or BDNF levels, were not altered by prolonged exercise. Thus, neuropathology was present but not paralleled by spatial memory deficits in the backcrossed 3xTg mouse model of AD. Prolonged exercise for 11 months did improve the long-term survival of newborn neurons generated during middle-age, whereas fluoxetine had no effect. We further review and discuss the relevant literature in this respect.

  10. Variable (Tg, Ts) Measurements of Alkane Dissociative Sticking Coefficients

    Science.gov (United States)

    Valadez, Leticia; Dewitt, Kristy; Abbott, Heather; Kolasinski, Kurt; Harrision, Ian

    2006-03-01

    Dissociative sticking coefficients S(Tg, Ts) for CH4 and C2H6 on Pt(111) have been measured as a function of gas temperature (Tg) and surface temperature (Ts) using an effusive molecular beam. Microcanonical unimolecular rate theory (MURT) was employed to extract transition state characteristics [e.g., E0(CH4) = 52.5±3.5 kJ/mol and E0(C2H6) = 26.5±3 kJ/mol]. MURT allows our S(Tg, Ts) values to be directly compared to other supersonic molecular beam and thermal equilibrium sticking measurements. The S(Tg, Ts) depend strongly on Ts; however, only for CH4 is a strong Tg dependence observed. The fairly weak Tg dependence for C2H6 suggests that vibrational mode specific behavior and/or molecular rotations play stronger roles in the dissociative chemisorption of C2H6 than they do for CH4. Interestingly, thermal S(Tg=Ts) predictions based on MURT modeling of our CH4/Pt(111) data are three orders of magnitude higher than recent thermal equilibrium measurements on supported Pt nanocrystallite catalysts [J. M. Wei, E. Iglesia, J. Phys. Chem. B 108, 4094 (2004)].
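    The MURT analysis in this record is considerably richer than a simple Arrhenius treatment, but the basic idea of extracting an apparent activation energy from temperature-dependent sticking coefficients can be sketched as follows; the temperatures and S values below are invented for illustration and are not data from the study.

```python
# Hedged sketch: apparent activation energy from two temperature-dependent
# sticking coefficients via an Arrhenius form S(T) = S0·exp(−Ea/RT).
# A deliberate simplification of the microcanonical (MURT) analysis;
# all numbers are made up for illustration.
import math

R = 8.314  # J/(mol·K)

def apparent_ea(t1, s1, t2, s2):
    """Ea from ln(S1/S2) = −(Ea/R)·(1/T1 − 1/T2)."""
    return -R * math.log(s1 / s2) / (1.0 / t1 - 1.0 / t2)

# Invented data: sticking rises 100-fold between 500 K and 700 K.
ea = apparent_ea(500.0, 1e-6, 700.0, 1e-4)
ea_kj = ea / 1e3  # apparent activation energy in kJ/mol
```

    For these made-up points the two-temperature formula gives roughly 67 kJ/mol; the full MURT treatment instead averages over the microcanonical reactivity of the molecule-surface collision complex.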

  11. Methylene blue does not reverse existing neurofibrillary tangle pathology in the rTg4510 mouse model of tauopathy.

    Science.gov (United States)

    Spires-Jones, Tara L; Friedman, Taylor; Pitstick, Rose; Polydoro, Manuela; Roe, Allyson; Carlson, George A; Hyman, Bradley T

    2014-03-06

    Alzheimer's disease is characterized pathologically by aggregation of amyloid beta into senile plaques and aggregation of pathologically modified tau into neurofibrillary tangles. While changes in amyloid processing are strongly implicated in disease initiation, the recent failure of amyloid-based therapies has highlighted the importance of tau as a therapeutic target. "Tangle busting" compounds including methylene blue and analogous molecules are currently being evaluated as therapeutics in Alzheimer's disease. Previous studies indicated that methylene blue can reverse tau aggregation in vitro after 10 min, and subsequent studies suggested that high levels of drug reduce tau protein levels (assessed biochemically) in vivo. Here, we tested whether methylene blue could remove established neurofibrillary tangles in the rTg4510 model of tauopathy, which develops robust tangle pathology. We find that 6 weeks of methylene blue dosing in the water from 16 months to 17.5 months of age decreases soluble tau but does not remove sarkosyl insoluble tau, or histologically defined PHF1 or Gallyas positive tangle pathology. These data indicate that methylene blue treatment will likely not rapidly reverse existing tangle pathology. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  12. Age-related changes of protein SUMOylation balance in the AβPP Tg2576 mouse model of Alzheimer's disease

    Directory of Open Access Journals (Sweden)

    Robert eNisticò

    2014-04-01

    Full Text Available Alzheimer's disease (AD) is a complex disorder that affects the central nervous system, causing severe neurodegeneration. This pathology affects an increasing number of people worldwide due to the overall aging of the human population. In recent years, SUMO protein modification has emerged as a possible cellular mechanism involved in AD. Some of the proteins engaged in the physiopathological process of AD, like BACE1, GSK3-β, tau, AβPP and JNK, are in fact subject to SUMO modifications or interactions. Here, we have investigated the SUMOylation/deSUMOylation balance and SUMO-related proteins during the onset and progression of the pathology in the Tg2576 mouse model of AD. We examined four age stages (1.5, 3, 6 and 17 months old) and observed an increase in SUMO-1 protein conjugation at 3 and 6 months in transgenic mice with respect to WT in both cortex and hippocampus. Interestingly, this is paralleled by increased expression levels of Ubc9 and SENP1 in both brain regions. At 6 months of age, SUMO-1 mRNA levels were also increased. SUMO-2-ylation was surprisingly decreased in old transgenic mice and unaltered at the other time points. The fact that alterations in the SUMOylation/deSUMOylation equilibrium occur from the early phases of AD suggests that global posttranslational modifications may play an important role in the mechanisms underlying disease pathogenesis, thus providing potential targets for pharmacological interventions.

  13. Highly stabilized curcumin nanoparticles tested in an in vitro blood-brain barrier model and in Alzheimer's disease Tg2576 mice.

    Science.gov (United States)

    Cheng, Kwok Kin; Yeung, Chin Fung; Ho, Shuk Wai; Chow, Shing Fung; Chow, Albert H L; Baum, Larry

    2013-04-01

    The therapeutic effects of curcumin in treating Alzheimer's disease (AD) depend on its ability to penetrate the blood-brain barrier. The latest nanoparticle technology can help to improve the bioavailability of curcumin, which is affected by the final particle size and stability. We developed a stable curcumin nanoparticle formulation to test in vitro and in AD model Tg2576 mice. Flash nanoprecipitation of curcumin, polyethylene glycol-polylactic acid co-block polymer, and polyvinylpyrrolidone in a multi-inlet vortex mixer, followed by freeze drying with β-cyclodextrin, produced dry nanocurcumin; these nanoparticles showed positive treatment effects in Tg2576 mice.

  14. Prolonged Running, not Fluoxetine Treatment, Increases Neurogenesis, but does not Alter Neuropathology, in the 3xTg Mouse Model of Alzheimer's Disease.

    NARCIS (Netherlands)

    Marlatt, M.W.; Potter, M.C.; Bayer, T.A.; van Praag, H.; Lucassen, P.J.

    2013-01-01

    Reductions in adult neurogenesis have been documented in the original 3xTg mouse model of Alzheimer's disease (AD), notably occurring at the same age when spatial memory deficits and amyloid plaque pathology appeared. As this suggested reduced neurogenesis was associated with behavioral deficits, we

  16. Wind power prediction models

    Science.gov (United States)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.
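    A stochastic simulator of the kind described, producing hourly wind speeds that honor both a target speed distribution and hour-to-hour correlation, can be sketched by driving a Weibull marginal with an AR(1) Gaussian process. This is a standard construction, not the paper's specific model; the Weibull shape/scale and the lag-1 correlation below are illustrative values, not the Goldstone statistics.

```python
# Sketch of a correlated wind-speed sampler: an AR(1) Gaussian process is
# mapped through the normal CDF and then the inverse Weibull CDF, so the
# samples follow a Weibull speed distribution while keeping hour-to-hour
# correlation. SHAPE, SCALE and RHO are illustrative, not fitted values.
import math
import random

SHAPE, SCALE, RHO = 2.0, 8.0, 0.8  # Weibull k, λ (m/s), lag-1 correlation

def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def weibull_inv(u):
    """Inverse CDF of Weibull(SHAPE, SCALE)."""
    return SCALE * (-math.log(1.0 - u)) ** (1.0 / SHAPE)

def simulate(n, seed=1):
    """n hourly speed samples with AR(1) correlation structure."""
    rng = random.Random(seed)
    z, speeds = 0.0, []
    for _ in range(n):
        z = RHO * z + math.sqrt(1.0 - RHO**2) * rng.gauss(0.0, 1.0)
        u = min(max(normal_cdf(z), 1e-12), 1.0 - 1e-12)  # guard tails
        speeds.append(weibull_inv(u))
    return speeds

speeds = simulate(20000)
mean_speed = sum(speeds) / len(speeds)  # Weibull mean is λ·Γ(1 + 1/k)
```

    Setting RHO to zero reduces this to the interim uncorrelated sampler described above; a nonzero RHO gives the stochastic model that reproduces both the speed distribution and its correlations.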

  17. Report of the Task Group 186 on model-based dose calculation methods in brachytherapy beyond the TG-43 formalism: Current status and recommendations for clinical implementation

    Energy Technology Data Exchange (ETDEWEB)

    Beaulieu, Luc [Departement de Radio-Oncologie et Centre de Recherche en Cancerologie de l' Universite Laval, Centre hospitalier universitaire de Quebec, Quebec, Quebec G1R 2J6 (Canada) and Departement de Physique, de Genie Physique et d' Optique, Universite Laval, Quebec, Quebec G1R 2J6 (Canada); Carlsson Tedgren, Asa [Department of Medical and Health Sciences (IMH), Radiation Physics, Faculty of Health Sciences, Linkoeping University, SE-581 85 Linkoeping (Sweden) and Swedish Radiation Safety Authority, SE-171 16 Stockholm (Sweden); Carrier, Jean-Francois [Departement de radio-oncologie, CRCHUM, Centre hospitalier de l' Universite de Montreal, Montreal, Quebec H2L 4M1 (Canada) and Departement de physique, Universite de Montreal, Montreal, Quebec H3C 3J7 (Canada); and others

    2012-10-15

    The charge of Task Group 186 (TG-186) is to provide guidance for early adopters of model-based dose calculation algorithms (MBDCAs) for brachytherapy (BT) dose calculations to ensure practice uniformity. Contrary to external beam radiotherapy, heterogeneity correction algorithms have only recently been made available to the BT community. Yet, BT dose calculation accuracy is highly dependent on scatter conditions and photoelectric effect cross-sections relative to water. In specific situations, differences between the current water-based BT dose calculation formalism (TG-43) and MBDCAs can lead to differences in calculated doses exceeding a factor of 10. MBDCAs raise three major issues that are not addressed by current guidance documents: (1) MBDCA calculated doses are sensitive to the dose specification medium, resulting in energy-dependent differences between dose calculated to water in a homogeneous water geometry (TG-43), dose calculated to the local medium in the heterogeneous medium, and the intermediate scenario of dose calculated to a small volume of water in the heterogeneous medium. (2) MBDCA doses are sensitive to voxel-by-voxel interaction cross sections. Neither conventional single-energy CT nor ICRU/ICRP tissue composition compilations provide useful guidance for the task of assigning interaction cross sections to each voxel. (3) Since each patient-source-applicator combination is unique, having reference data for each possible combination to benchmark MBDCAs is an impractical strategy. Hence, a new commissioning process is required. TG-186 addresses in detail the above issues through the literature review and provides explicit recommendations based on the current state of knowledge. TG-43-based dose prescription and dose calculation remain in effect, with MBDCA dose reporting performed in parallel when available. 
In using MBDCAs, it is recommended that the radiation transport should be performed in the heterogeneous medium and, at minimum, the dose to
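    The water-based TG-43 formalism that MBDCAs go beyond has, in its point-source approximation, the well-known factored form D(r) = Sk · Λ · (r0/r)² · g(r) · φan(r) with r0 = 1 cm. The sketch below shows that structure only; the air-kerma strength, dose-rate constant, and radial dose function used here are invented placeholders, not consensus data for any real source.

```python
# Sketch of the TG-43 point-source dose-rate formalism:
#   D(r) = Sk · Λ · (r0/r)^2 · g(r) · φan(r),  r0 = 1 cm.
# Sk, Λ, φan and the radial dose function g(r) are illustrative numbers,
# not consensus data for any clinical source.

R0 = 1.0  # TG-43 reference distance, cm

def g_radial(r):
    """Hypothetical radial dose function, normalized so g(R0) = 1."""
    poly = lambda x: 1.0 + 0.05 * x - 0.02 * x * x
    return poly(r) / poly(R0)

def dose_rate(r, sk=10.0, lam=1.1, phi_an=0.95):
    """Point-source TG-43 dose rate (cGy/h) at distance r cm in water."""
    return sk * lam * (R0 / r) ** 2 * g_radial(r) * phi_an

d1 = dose_rate(1.0)  # at the reference point this is just Sk·Λ·φan
d2 = dose_rate(2.0)  # inverse-square falloff modulated by g(r)
```

    An MBDCA replaces this single homogeneous-water curve with radiation transport through the patient's actual heterogeneous geometry, which is exactly where the dose-specification and cross-section issues listed above arise.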

  18. Maternal high-fat diet worsens memory deficits in the triple-transgenic (3xTgAD) mouse model of Alzheimer's disease.

    Directory of Open Access Journals (Sweden)

    Sarah A L Martin

    Full Text Available Alzheimer's disease (AD) is not normally diagnosed until later in life, although evidence suggests that the disease starts at a much earlier age. Risk factors for AD, such as diabetes, hypertension and obesity, are known to have their effects during mid-life, though events very early in life, including maternal over-nutrition, can predispose offspring to develop these conditions. This study tested whether over-nutrition during pregnancy and lactation affected the development of AD in offspring, using a transgenic AD mouse model. Female triple-transgenic AD (3xTgAD) dams were exposed to a high-fat (60% energy from fat) or control diet during pregnancy and lactation. After weaning (at 3 weeks of age), female offspring were placed on a control diet and monitored up until 12 months of age, during which time behavioural tests were performed. A transient increase in body weight was observed in 4-week-old 3xTgAD offspring from dams fed a high-fat diet. However, by 5 weeks of age the body weight of 3xTgAD mice from the maternal high-fat fed group was no different from that of control-fed mice. A maternal high-fat diet led to a significant impairment in memory in 2- and 12-month-old 3xTgAD offspring when compared to offspring from control-fed dams. These effects of a maternal high-fat diet on memory were accompanied by a significant increase (50%) in the number of tau-positive neurones in the hippocampus. These data demonstrate that a high-fat diet during pregnancy and lactation increases memory impairments in female 3xTgAD mice and suggest that early life events during development might influence the onset and progression of AD later in life.

  19. Predictive models in urology.

    Science.gov (United States)

    Cestari, Andrea

    2013-01-01

    Predictive modeling is emerging as an important knowledge-based technology in healthcare. The interest in the use of predictive modeling reflects advances on different fronts, such as the availability of health information from increasingly complex databases and electronic health records, a better understanding of causal or statistical predictors of health, disease processes and multifactorial models of ill-health, and developments in nonlinear computer models using artificial intelligence or neural networks. These new computer-based forms of modeling are increasingly able to establish technical credibility in clinical contexts. Our understanding is still quite young regarding how this so-called 'machine intelligence' will evolve, and therefore how current relatively sophisticated predictive models will respond to improvements in technology, which is advancing along a wide front. Predictive models in urology are steadily gaining popularity, not only for academic and scientific purposes but also in clinical practice, with the introduction of several nomograms dealing with the main fields of onco-urology.
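    A clinical nomogram of the kind mentioned above typically encodes a logistic regression model: each covariate contributes points to a linear predictor, which a logistic link converts into a risk estimate. The minimal sketch below shows that mechanism only; the covariates and coefficients are entirely invented and have no clinical validity.

```python
# Minimal logistic-regression risk calculator of the kind a clinical
# nomogram encodes. Covariates and coefficients are invented for
# illustration and carry no clinical meaning.
import math

# Hypothetical model: intercept plus weights for age (years) and a 0-100
# severity score (both assumed predictors, not from any published model).
COEFFS = {"intercept": -6.0, "age": 0.05, "score": 0.03}

def predicted_risk(age, score):
    """Probability in (0, 1) via the logistic link on the linear predictor."""
    eta = (COEFFS["intercept"]
           + COEFFS["age"] * age
           + COEFFS["score"] * score)
    return 1.0 / (1.0 + math.exp(-eta))

low = predicted_risk(50, 10)   # eta = -6 + 2.5 + 0.3 = -3.2, low risk
high = predicted_risk(75, 80)  # eta = -6 + 3.75 + 2.4 = 0.15, near 50%
```

    The paper nomogram is just a graphical lookup of this computation: axes convert each covariate to points, points sum to eta, and a final scale maps eta to probability.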

  20. [Morphological analysis of the hippocampal region associated with an innate behaviour task in the transgenic mouse model (3xTg-AD) for Alzheimer disease].

    Science.gov (United States)

    Orta-Salazar, E; Feria-Velasco, A; Medina-Aguirre, G I; Díaz-Cintra, S

    2013-10-01

    Different animal models for Alzheimer disease (AD) have been designed to support the hypothesis that the neurodegeneration (loss of neurons and synapses with reactive gliosis) associated with Aβ and tau deposition in these models is similar to that in the human brain. These alterations produce functional changes beginning with decreased ability to carry out daily and social life activities, memory loss, and neuropsychiatric disorders in general. Neuronal alteration plays an important role in early stages of the disease, especially in the CA1 area of hippocampus in both human and animal models. Two groups (WT and 3xTg-AD) of 11-month-old female mice were used in a behavioural analysis (nest building) and a morphometric analysis of the CA1 region of the dorsal hippocampus. The 3xTg-AD mice showed a 50% reduction in nest quality associated with a significant increase in damaged neurons in the CA1 hippocampal area (26%±6%, P<.05) compared to the WT group. The decreased ability to carry out activities of daily living (humans) or nest building (3xTg-AD mice) is related to the neuronal alterations observed in AD. These alterations are controlled by the hippocampus. Post-mortem analyses of the human hippocampus, and the CA1 region in 3xTg-AD mice, show that these areas are associated with alterations in the deposition of Aβ and tau proteins, which start accumulating in the early stages of AD. Copyright © 2013 Sociedad Española de Neurología. Published by Elsevier Espana. All rights reserved.

  1. Dietary DHA supplementation causes selective changes in phospholipids from different brain regions in both wild type mice and the Tg2576 mouse model of Alzheimer's disease.

    Science.gov (United States)

    Bascoul-Colombo, Cécile; Guschina, Irina A; Maskrey, Benjamin H; Good, Mark; O'Donnell, Valerie B; Harwood, John L

    2016-06-01

    Alzheimer's disease (AD) is of major concern in ageing populations and we have used the Tg2576 mouse model to understand connections between brain lipids and amyloid pathology. Because dietary docosahexaenoic acid (DHA) has been identified as beneficial, we compared mice fed with a DHA-supplemented diet to those on a nutritionally-sufficient diet. Major phospholipids from cortex, hippocampus and cerebellum were separated and analysed. Each phosphoglyceride had a characteristic fatty acid composition which was similar in cortex and hippocampus but different in the cerebellum. The biggest changes on DHA-supplementation were within ethanolamine phospholipids which, together with phosphatidylserine, had the highest proportions of DHA. Reciprocal alterations in DHA and arachidonate were found. The main diet-induced alterations were found in ethanolamine phospholipids (including their ether derivatives), as were the changes observed due to genotype. Tg mice appeared more sensitive to diet, with generally lower DHA percentages when on the standard diet and higher relative proportions of DHA when the diet was supplemented. All four major phosphoglycerides analysed showed age-dependent decreases in polyunsaturated fatty acid content. These data provide, for the first time, a detailed evaluation of phospholipids in different brain areas previously shown to be relevant to behaviour in the Tg2576 mouse model of AD. The lipid changes observed with genotype are consistent with the subtle alterations found in AD patients, especially for the ethanolamine phospholipid molecular species. They also emphasise the contrasting changes in fatty acid content induced by DHA supplementation within individual phospholipid classes.

  2. MODEL PREDICTIVE CONTROL FUNDAMENTALS

    African Journals Online (AJOL)

    2012-07-02

    ... paper, we will present an introduction to the theory and application of MPC with Matlab codes written to ... model predictive control, linear systems, discrete-time systems, ... and then compute very rapidly for this open-loop con-.
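    The defining idea of MPC for the discrete-time linear systems this record covers is the receding horizon: at each step, predict over a finite horizon, minimize a quadratic cost, apply only the first input, and re-plan. The sketch below illustrates that loop for a scalar system using a deliberately naive grid search over constant inputs (a toy stand-in for the quadratic program an MPC solver would use); all system and cost parameters are illustrative.

```python
# Receding-horizon (MPC-style) sketch for a scalar discrete-time system
# x[k+1] = a·x[k] + b·u[k]. Each step searches a coarse grid of candidate
# inputs, simulates the prediction horizon under a quadratic cost, applies
# only the first move, then re-plans. Parameters are illustrative only.

def predict_cost(x0, u_seq, a, b, q, r):
    """Quadratic cost of applying u_seq from state x0, plus terminal term."""
    x, cost = x0, 0.0
    for u in u_seq:
        cost += q * x * x + r * u * u
        x = a * x + b * u
    return cost + q * x * x  # terminal state penalty

def mpc_step(x, a=1.2, b=1.0, q=1.0, r=0.1, horizon=5):
    """Best constant input over the horizon (naive shooting, grid search)."""
    candidates = [i / 50.0 for i in range(-100, 101)]  # u in [-2, 2]
    return min(candidates,
               key=lambda u: predict_cost(x, [u] * horizon, a, b, q, r))

# Closed loop: the open-loop plan is recomputed at every step, which is
# what stabilizes this open-loop-unstable plant (a = 1.2).
x = 1.0
for _ in range(30):
    u = mpc_step(x)
    x = 1.2 * x + 1.0 * u
```

    A practical MPC implementation replaces the grid search with a constrained quadratic program over the full input sequence, but the replan-apply-first-move structure is the same.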

  3. Oxidative Stress during the Progression of β-Amyloid Pathology in the Neocortex of the Tg2576 Mouse Model of Alzheimer's Disease.

    Science.gov (United States)

    Porcellotti, Sara; Fanelli, Francesca; Fracassi, Anna; Sepe, Sara; Cecconi, Francesco; Bernardi, Cinzia; Cimini, AnnaMaria; Cerù, Maria Paola; Moreno, Sandra

    2015-01-01

    Alzheimer's disease (AD) is the most common form of dementia, characterized by progressive neurodegeneration. Pathogenetic mechanisms, triggered by β-amyloid (Aβ) accumulation, include oxidative stress, derived from energy homeostasis deregulation and involving mitochondria and peroxisomes. We here addressed the oxidative stress status and the elicited cellular response at the onset and during the progression of Aβ pathology, studying the neocortex of the Tg2576 model of AD. Age-dependent changes of oxidative damage markers, antioxidant enzymes, and related transcription factors were analysed in relation to the distribution of Aβ peptide and oligomers, by a combined molecular/morphological approach. Nucleic acid oxidative damage, accompanied by defective antioxidant defences, and decreased PGC1α expression are already detected in 3-month-old Tg2576 neurons. Conversely, PPARα is increased in these cells, with its cytoplasmic localization suggesting nongenomic, anti-inflammatory actions. At 6 months, when intracellular Aβ accumulates, PMP70 is downregulated, indicating impaired peroxisomal translocation of fatty acids and their consequent harmful accumulation. In 9-month-old Tg2576 neocortex, Aβ oligomers and acrolein deposition correlate with GFAP, GPX1, and PMP70 increases, supporting a compensatory response, involving astroglial peroxisomes. At severe pathological stages, when senile plaques disrupt cortical cytoarchitecture, antioxidant capacity is gradually lost. Overall, our data support early therapeutic intervention in AD, also targeting peroxisomes.

  4. Tau causes synapse loss without disrupting calcium homeostasis in the rTg4510 model of tauopathy.

    Directory of Open Access Journals (Sweden)

    Katherine J Kopeikina

    Full Text Available Neurofibrillary tangles (NFTs) of tau are one of the defining hallmarks of Alzheimer's disease (AD), and are closely associated with neuronal degeneration. Although it has been suggested that calcium dysregulation is important to AD pathogenesis, few studies have probed the link between calcium homeostasis, synapse loss and pathological changes in tau. Here we test the hypothesis that pathological changes in tau are associated with changes in calcium by utilizing in vivo calcium imaging in adult rTg4510 mice that exhibit severe tau pathology due to over-expression of human mutant P301L tau. We observe prominent dendritic spine loss without disruptions in calcium homeostasis, indicating that tangles do not disrupt this fundamental feature of neuronal health, and that tau likely induces spine loss in a calcium-independent manner.

  5. Spared piriform cortical single-unit odor processing and odor discrimination in the Tg2576 mouse model of Alzheimer's disease.

    Science.gov (United States)

    Xu, Wenjin; Lopez-Guzman, Mirielle; Schoen, Chelsea; Fitzgerald, Shane; Lauer, Stephanie L; Nixon, Ralph A; Levy, Efrat; Wilson, Donald A

    2014-01-01

    Alzheimer's disease is a neurodegenerative disorder that is the most common cause of dementia in the elderly today. One of the earliest reported signs of Alzheimer's disease is olfactory dysfunction, which may manifest in a variety of ways. The present study sought to address this issue by investigating odor coding in the anterior piriform cortex, the primary cortical region involved in higher order olfactory function, and how it relates to performance on olfactory behavioral tasks. An olfactory habituation task was performed on cohorts of transgenic and age-matched wild-type mice at 3, 6 and 12 months of age. These animals were then anesthetized and acute, single-unit electrophysiology was performed in the anterior piriform cortex. In addition, in a separate group of animals, a longitudinal odor discrimination task was conducted from 3-12 months of age. Results showed that while odor habituation was impaired at all ages, Tg2576 performed comparably to age-matched wild-type mice on the olfactory discrimination task. The behavioral data mirrored intact anterior piriform cortex single-unit odor responses and receptive fields in Tg2576, which were comparable to wild-type at all age groups. The present results suggest that odor processing in the olfactory cortex and basic odor discrimination is especially robust in the face of amyloid β precursor protein (AβPP) over-expression and advancing amyloid β (Aβ) pathology. Odor identification deficits known to emerge early in Alzheimer's disease progression, therefore, may reflect impairments in linking the odor percept to associated labels in cortical regions upstream of the primary olfactory pathway, rather than in the basic odor processing itself.

  6. Spared piriform cortical single-unit odor processing and odor discrimination in the Tg2576 mouse model of Alzheimer's disease.

    Directory of Open Access Journals (Sweden)

    Wenjin Xu

    Full Text Available Alzheimer's disease is a neurodegenerative disorder that is the most common cause of dementia in the elderly today. One of the earliest reported signs of Alzheimer's disease is olfactory dysfunction, which may manifest in a variety of ways. The present study sought to address this issue by investigating odor coding in the anterior piriform cortex, the primary cortical region involved in higher order olfactory function, and how it relates to performance on olfactory behavioral tasks. An olfactory habituation task was performed on cohorts of transgenic and age-matched wild-type mice at 3, 6 and 12 months of age. These animals were then anesthetized and acute, single-unit electrophysiology was performed in the anterior piriform cortex. In addition, in a separate group of animals, a longitudinal odor discrimination task was conducted from 3-12 months of age. Results showed that while odor habituation was impaired at all ages, Tg2576 performed comparably to age-matched wild-type mice on the olfactory discrimination task. The behavioral data mirrored intact anterior piriform cortex single-unit odor responses and receptive fields in Tg2576, which were comparable to wild-type at all age groups. The present results suggest that odor processing in the olfactory cortex and basic odor discrimination is especially robust in the face of amyloid β precursor protein (AβPP) over-expression and advancing amyloid β (Aβ) pathology. Odor identification deficits known to emerge early in Alzheimer's disease progression, therefore, may reflect impairments in linking the odor percept to associated labels in cortical regions upstream of the primary olfactory pathway, rather than in the basic odor processing itself.

  7. Nominal model predictive control

    OpenAIRE

    Grüne, Lars

    2013-01-01

    5 p., to appear in Encyclopedia of Systems and Control, Tariq Samad, John Baillieul (eds.); International audience; Model Predictive Control is a controller design method which synthesizes a sampled data feedback controller from the iterative solution of open loop optimal control problems. We describe the basic functionality of MPC controllers, their properties regarding feasibility, stability and performance and the assumptions needed in order to rigorously ensure these properties in a nomina...
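The receding-horizon idea summarized in this entry (repeatedly solving an open-loop optimal control problem and applying only the first input) can be sketched for a scalar linear-quadratic system, where each horizon problem is solved exactly by a backward Riccati recursion. All system and cost parameters below are illustrative assumptions, not values from the encyclopedia entry.

```python
# Minimal receding-horizon (MPC) sketch for a scalar linear system
#   x[k+1] = a*x[k] + b*u[k],  cost = sum of q*x^2 + r*u^2 over N steps.
# The horizon-N open-loop problem is solved exactly by a backward Riccati
# recursion; only the first input of each solution is applied, then the
# problem is re-solved at the next sampling instant (receding horizon).

def mpc_gain(a, b, q, r, N):
    """Return the first-step feedback gain of the horizon-N LQ problem."""
    p = q  # terminal cost weight
    k = 0.0
    for _ in range(N):
        k = (b * p * a) / (r + b * b * p)  # optimal gain at this stage
        p = q + a * p * (a - b * k)        # Riccati update
    return k

def simulate(a=1.1, b=1.0, q=1.0, r=0.1, N=10, x0=5.0, steps=30):
    """Close the loop: re-solve the horizon problem and apply the first input."""
    x, traj = x0, []
    for _ in range(steps):
        u = -mpc_gain(a, b, q, r, N) * x
        x = a * x + b * u
        traj.append(x)
    return traj

traj = simulate()  # the open-loop-unstable state is regulated toward 0
```

The gain could be cached for a time-invariant system; it is recomputed each step here purely to mirror the iterative structure of MPC.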

  8. Nominal Model Predictive Control

    OpenAIRE

    Grüne, Lars

    2014-01-01

    5 p., to appear in Encyclopedia of Systems and Control, Tariq Samad, John Baillieul (eds.); International audience; Model Predictive Control is a controller design method which synthesizes a sampled data feedback controller from the iterative solution of open loop optimal control problems. We describe the basic functionality of MPC controllers, their properties regarding feasibility, stability and performance and the assumptions needed in order to rigorously ensure these properties in a nomina...

  9. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines...... the possibilities w.r.t. different numerical weather predictions actually available to the project....

  10. Uridine prodrug improves memory in Tg2576 and TAPP mice and reduces pathological factors associated with Alzheimer's disease in related models.

    Science.gov (United States)

    Saydoff, Joel A; Olariu, Ana; Sheng, Jin; Hu, Zhongyi; Li, Qin; Garcia, Rolando; Pei, Jiong; Sun, Grace Y; von Borstel, Reid

    2013-01-01

    Uridine prodrug PN401 has been shown to have neuroprotective effects in models of Parkinson's disease and Huntington's disease. These age-related neurodegenerative diseases including Alzheimer's disease (AD) are associated with mitochondrial dysfunction, oxidative stress, and inflammation. Attenuation of these pathological factors in AD, in addition to amyloid fibrils and neurofibrillary tangles, is critical to prevent cognitive impairment. The effects of PN401 treatment were tested in the Tg2576 and Tg2576 X P301L (TAPP) mouse models of AD. Treatment with PN401 reduced impairments in the Tg2576 mice in contextual fear conditioning and novel object recognition. In the TAPP mice, PN401 reduced the impairments in novel object recognition and social transmission of food preference. PN401 also improved motor behavior and reduced anxiety-like behavior in the TAPP mice. TAPP mouse hippocampal tau phosphorylation and lipid peroxidation were reduced by PN401 treatment. Increased tau phosphorylation was significantly correlated with worsening novel object recognition memory. PN401 did not affect amyloid plaque area in the AD mice. In other AD-related animal studies, PN401 treatment reduced blood-brain barrier damage due to intracortical LPS, elevation of serum TNFα due to systemic LPS, and hippocampal CA1 neuronal loss in the gerbil stroke model. Uridine dose-dependently protected cells from chemical hypoxia and ceramide, and decreased formation of reactive oxygen species and mitochondrial DNA damage due to hydrogen peroxide. These protective effects were achieved by raising uridine levels to at least 25-50 μM and serum uridine levels in this range in humans were obtained with oral PN401.

  11. Predictive Surface Complexation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  12. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines...

  13. A generic high-dose rate (192)Ir brachytherapy source for evaluation of model-based dose calculations beyond the TG-43 formalism.

    Science.gov (United States)

    Ballester, Facundo; Carlsson Tedgren, Åsa; Granero, Domingo; Haworth, Annette; Mourtada, Firas; Fonseca, Gabriel Paiva; Zourari, Kyveli; Papagiannis, Panagiotis; Rivard, Mark J; Siebert, Frank-André; Sloboda, Ron S; Smith, Ryan L; Thomson, Rowan M; Verhaegen, Frank; Vijande, Javier; Ma, Yunzhi; Beaulieu, Luc

    2015-06-01

    In order to facilitate a smooth transition for brachytherapy dose calculations from the American Association of Physicists in Medicine (AAPM) Task Group No. 43 (TG-43) formalism to model-based dose calculation algorithms (MBDCAs), treatment planning systems (TPSs) using a MBDCA require a set of well-defined test case plans characterized by Monte Carlo (MC) methods. This also permits direct dose comparison to TG-43 reference data. Such test case plans should be made available for use in the software commissioning process performed by clinical end users. To this end, a hypothetical, generic high-dose rate (HDR) (192)Ir source and a virtual water phantom were designed, which can be imported into a TPS. A hypothetical, generic HDR (192)Ir source was designed based on commercially available sources as well as a virtual, cubic water phantom that can be imported into any TPS in DICOM format. The dose distribution of the generic (192)Ir source when placed at the center of the cubic phantom, and away from the center under altered scatter conditions, was evaluated using two commercial MBDCAs [Oncentra(®) Brachy with advanced collapsed-cone engine (ACE) and BrachyVision ACUROS™]. Dose comparisons were performed using state-of-the-art MC codes for radiation transport, including ALGEBRA, BrachyDose, GEANT4, MCNP5, MCNP6, and PENELOPE2008. The methodologies adhered to recommendations in the AAPM TG-229 report on high-energy brachytherapy source dosimetry. TG-43 dosimetry parameters, an along-away dose-rate table, and primary and scatter separated (PSS) data were obtained. The virtual water phantom of (201)³ voxels (1 mm sides) was used to evaluate the calculated dose distributions. Two test case plans involving a single position of the generic HDR (192)Ir source in this phantom were prepared: (i) source centered in the phantom and (ii) source displaced 7 cm laterally from the center. Datasets were independently produced by different investigators. MC results were then
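The TG-43 formalism that the MBDCA test cases are compared against reduces, in its point-source (1D) approximation, to D(r) = S_K · Λ · (r0/r)² · g(r) · φan(r) with r0 = 1 cm. The sketch below illustrates this structure only; the radial dose function and anisotropy tables are hypothetical placeholders, not consensus data for any real (192)Ir source.

```python
# Point-source (1D) TG-43 dose-rate sketch:
#   D(r) = S_K * Lambda * (r0/r)**2 * g(r) * phi_an(r),  r0 = 1 cm.
# The g(r) and phi_an(r) tables are ILLUSTRATIVE ONLY.
import bisect

R0 = 1.0  # TG-43 reference distance, cm

def interp(table, r):
    """Piecewise-linear interpolation in a sorted (r, value) table."""
    rs = [p[0] for p in table]
    i = min(max(bisect.bisect_left(rs, r), 1), len(rs) - 1)
    (r1, v1), (r2, v2) = table[i - 1], table[i]
    return v1 + (v2 - v1) * (r - r1) / (r2 - r1)

# Hypothetical radial dose function g(r) and 1D anisotropy function phi_an(r)
G_TABLE = [(0.5, 1.00), (1.0, 1.00), (2.0, 1.00), (5.0, 0.97), (10.0, 0.90)]
PHI_TABLE = [(0.5, 0.97), (1.0, 0.98), (5.0, 0.99), (10.0, 0.99)]

def dose_rate(r_cm, air_kerma_strength=40000.0, dose_rate_constant=1.11):
    """Dose rate (cGy/h) at r_cm; S_K in U, Lambda in cGy/(h*U) (assumed)."""
    geometry = (R0 / r_cm) ** 2       # point-source geometry-function ratio
    g = interp(G_TABLE, r_cm)         # radial dose function
    phi = interp(PHI_TABLE, r_cm)     # 1D anisotropy function
    return air_kerma_strength * dose_rate_constant * geometry * g * phi

d1 = dose_rate(1.0)  # at the reference point: S_K * Lambda * g(1) * phi_an(1)
d2 = dose_rate(2.0)  # inverse-square falloff dominates at short range
```

A 2D implementation would replace the inverse-square ratio with the line-source geometry function G(r,θ) and the 1D anisotropy function with F(r,θ).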

  14. A generic high-dose rate {sup 192}Ir brachytherapy source for evaluation of model-based dose calculations beyond the TG-43 formalism

    Energy Technology Data Exchange (ETDEWEB)

    Ballester, Facundo, E-mail: Facundo.Ballester@uv.es [Department of Atomic, Molecular and Nuclear Physics, University of Valencia, Burjassot 46100 (Spain); Carlsson Tedgren, Åsa [Department of Medical and Health Sciences (IMH), Radiation Physics, Faculty of Health Sciences, Linköping University, Linköping SE-581 85, Sweden and Department of Medical Physics, Karolinska University Hospital, Stockholm SE-171 76 (Sweden); Granero, Domingo [Department of Radiation Physics, ERESA, Hospital General Universitario, Valencia E-46014 (Spain); Haworth, Annette [Department of Physical Sciences, Peter MacCallum Cancer Centre and Royal Melbourne Institute of Technology, Melbourne, Victoria 3000 (Australia); Mourtada, Firas [Department of Radiation Oncology, Helen F. Graham Cancer Center, Christiana Care Health System, Newark, Delaware 19713 (United States); Fonseca, Gabriel Paiva [Instituto de Pesquisas Energéticas e Nucleares – IPEN-CNEN/SP, São Paulo 05508-000, Brazil and Department of Radiation Oncology (MAASTRO), GROW, School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht 6201 BN (Netherlands); Zourari, Kyveli; Papagiannis, Panagiotis [Medical Physics Laboratory, Medical School, University of Athens, 75 MikrasAsias, Athens 115 27 (Greece); Rivard, Mark J. [Department of Radiation Oncology, Tufts University School of Medicine, Boston, Massachusetts 02111 (United States); Siebert, Frank-André [Clinic of Radiotherapy, University Hospital of Schleswig-Holstein, Campus Kiel, Kiel 24105 (Germany); Sloboda, Ron S. [Department of Medical Physics, Cross Cancer Institute, Edmonton, Alberta T6G 1Z2, Canada and Department of Oncology, University of Alberta, Edmonton, Alberta T6G 2R3 (Canada); and others

    2015-06-15

    Purpose: In order to facilitate a smooth transition for brachytherapy dose calculations from the American Association of Physicists in Medicine (AAPM) Task Group No. 43 (TG-43) formalism to model-based dose calculation algorithms (MBDCAs), treatment planning systems (TPSs) using a MBDCA require a set of well-defined test case plans characterized by Monte Carlo (MC) methods. This also permits direct dose comparison to TG-43 reference data. Such test case plans should be made available for use in the software commissioning process performed by clinical end users. To this end, a hypothetical, generic high-dose rate (HDR) {sup 192}Ir source and a virtual water phantom were designed, which can be imported into a TPS. Methods: A hypothetical, generic HDR {sup 192}Ir source was designed based on commercially available sources as well as a virtual, cubic water phantom that can be imported into any TPS in DICOM format. The dose distribution of the generic {sup 192}Ir source when placed at the center of the cubic phantom, and away from the center under altered scatter conditions, was evaluated using two commercial MBDCAs [Oncentra{sup ®} Brachy with advanced collapsed-cone engine (ACE) and BrachyVision ACUROS{sup TM}]. Dose comparisons were performed using state-of-the-art MC codes for radiation transport, including ALGEBRA, BrachyDose, GEANT4, MCNP5, MCNP6, and PENELOPE2008. The methodologies adhered to recommendations in the AAPM TG-229 report on high-energy brachytherapy source dosimetry. TG-43 dosimetry parameters, an along-away dose-rate table, and primary and scatter separated (PSS) data were obtained. The virtual water phantom of (201){sup 3} voxels (1 mm sides) was used to evaluate the calculated dose distributions. Two test case plans involving a single position of the generic HDR {sup 192}Ir source in this phantom were prepared: (i) source centered in the phantom and (ii) source displaced 7 cm laterally from the center. Datasets were independently produced by

  15. Melanoma risk prediction models

    Directory of Open Access Journals (Sweden)

    Nikolić Jelena

    2014-01-01

    Full Text Available Background/Aim. The lack of effective therapy for advanced stages of melanoma emphasizes the importance of preventive measures and screenings of population at risk. Identifying individuals at high risk should allow targeted screenings and follow-up involving those who would benefit most. The aim of this study was to identify the most significant factors for melanoma prediction in our population and to create prognostic models for identification and differentiation of individuals at risk. Methods. This case-control study included 697 participants (341 patients and 356 controls) that underwent extensive interview and skin examination in order to check risk factors for melanoma. Pairwise univariate statistical comparison was used for the coarse selection of the most significant risk factors. These factors were fed into logistic regression (LR) and alternating decision trees (ADT) prognostic models that were assessed for their usefulness in identification of patients at risk to develop melanoma. Validation of the LR model was done by the Hosmer and Lemeshow test, whereas the ADT was validated by 10-fold cross-validation. The achieved sensitivity, specificity, accuracy and AUC for both models were calculated. The melanoma risk score (MRS) based on the outcome of the LR model was presented. Results. The LR model showed that the following risk factors were associated with melanoma: sunbeds (OR = 4.018; 95% CI 1.724-9.366 for those that sometimes used sunbeds), solar damage of the skin (OR = 8.274; 95% CI 2.661-25.730 for those with severe solar damage), hair color (OR = 3.222; 95% CI 1.984-5.231 for light brown/blond hair), the number of common naevi (over 100 naevi had OR = 3.57; 95% CI 1.427-8.931), the number of dysplastic naevi (from 1 to 10 dysplastic naevi OR was 2.672; 95% CI 1.572-4.540; for more than 10 naevi OR was 6.487; 95% CI 1.993-21.119), Fitzpatrick's phototype and the presence of congenital naevi. 
Red hair, phototype I and large congenital naevi were
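The odds ratios and risk score reported above follow directly from the fitted logistic-regression coefficients: OR = exp(β), and the linear predictor maps to a probability through the logistic function. The coefficients below are hypothetical illustrations of this mechanism, not the fitted values from the study.

```python
# How a logistic-regression melanoma risk model yields odds ratios and a
# predicted probability. Coefficients are HYPOTHETICAL, not from the study.
import math

COEFS = {  # beta coefficients for binary risk-factor indicators (assumed)
    "intercept": -2.0,
    "sunbed_use": 1.39,          # exp(1.39) ~ 4.0, near the reported OR
    "severe_solar_damage": 2.11,
    "light_hair": 1.17,
}

def odds_ratio(beta):
    """The OR for a one-unit change in a predictor is exp(beta)."""
    return math.exp(beta)

def predicted_risk(features):
    """P(melanoma) = 1 / (1 + exp(-(b0 + sum(b_i * x_i))))."""
    z = COEFS["intercept"] + sum(COEFS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

low = predicted_risk({"sunbed_use": 0, "severe_solar_damage": 0, "light_hair": 0})
high = predicted_risk({"sunbed_use": 1, "severe_solar_damage": 1, "light_hair": 1})
```

An integer melanoma risk score of the kind described would typically round or scale these β values so the linear predictor can be summed by hand.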

  16. Antibodies targeted to the brain with image-guided focused ultrasound reduces amyloid-beta plaque load in the TgCRND8 mouse model of Alzheimer's disease.

    Directory of Open Access Journals (Sweden)

    Jessica F Jordão

    Full Text Available Immunotherapy for Alzheimer's disease (AD) relies on antibodies directed against toxic amyloid-beta peptide (Abeta), which circulate in the bloodstream and remove Abeta from the brain. In mouse models of AD, the administration of anti-Abeta antibodies directly into the brain, in comparison to the bloodstream, was shown to be more efficient at reducing Abeta plaque pathology. Therefore, delivering anti-Abeta antibodies to the brain of AD patients may also improve treatment efficiency. Transcranial focused ultrasound (FUS) is known to transiently enhance the permeability of the blood-brain barrier (BBB), allowing intravenously administered therapeutics to enter the brain. Our goal was to establish that anti-Abeta antibodies delivered to the brain using magnetic resonance imaging-guided FUS (MRIgFUS) can reduce plaque pathology. To test this, TgCRND8 mice received intravenous injections of MRI and FUS contrast agents, as well as anti-Abeta antibody, BAM-10. MRIgFUS was then applied transcranially. Within minutes, the MRI contrast agent entered the brain, and BAM-10 was later found bound to Abeta plaques in targeted cortical areas. Four days post-treatment, Abeta pathology was significantly reduced in TgCRND8 mice. In conclusion, this is the first report to demonstrate that MRIgFUS delivery of anti-Abeta antibodies provides the combined advantages of using a low dose of antibody and rapidly reducing plaque pathology.

  17. Differential contribution of APP metabolites to early cognitive deficits in a TgCRND8 mouse model of Alzheimer’s disease

    Science.gov (United States)

    Hamm, Valentine; Héraud, Céline; Bott, Jean-Bastien; Herbeaux, Karine; Strittmatter, Carole; Mathis, Chantal; Goutagny, Romain

    2017-01-01

    Alzheimer’s disease (AD) is a neurodegenerative pathology commonly characterized by a progressive and irreversible deterioration of cognitive functions, especially memory. Although the etiology of AD remains unknown, a consensus has emerged on the amyloid hypothesis, which posits that increased production of soluble amyloid β (Aβ) peptide induces neuronal network dysfunctions and cognitive deficits. However, the relative failures of Aβ-centric therapeutics suggest that the amyloid hypothesis is incomplete and/or that the treatments were given too late in the course of AD, when neuronal damage was already too extensive. Hence, it is striking to see that very few studies have extensively characterized, from anatomy to behavior, the alterations associated with pre-amyloid stages in mouse models of AD amyloid pathology. To fill this gap, we examined memory capacities as well as hippocampal network anatomy and dynamics in young adult pre-plaque TgCRND8 mice when hippocampal Aβ levels are still low. We showed that TgCRND8 mice present alterations in hippocampal inhibitory networks and γ oscillations at this stage. Further, these mice exhibited deficits only in a subset of hippocampal-dependent memory tasks, which are all affected at later stages. Last, using a pharmacological approach, we showed that some of these early memory deficits were Aβ-independent. Our results could partly explain the limited efficacy of Aβ-directed treatments and favor multitherapy approaches for early symptomatic treatment for AD.

  18. Serum TC/HDL-C, TG/HDL-C and LDL-C/HDL-C in predicting the risk of myocardial infarction in normolipidaemic patients in South Asia: A case-control study

    Institute of Scientific and Technical Information of China (English)

    Arun Kumar; Ramiah Sivakanesan

    2008-01-01

    Dyslipidemia, a major cause of atherosclerosis, is suggested to act synergistically with non-lipid risk factors to increase atherogenesis. Low-density lipoprotein cholesterol (LDL-C) is the main therapeutic target in the prevention of CVD. Increased triglycerides (TG) and decreased high-density lipoprotein cholesterol (HDL-C) are considered major risk factors for the development of insulin resistance and the metabolic syndrome. Although the TG/HDL-C ratio has been used in recent studies as a clinical indicator for insulin resistance, results were inconsistent. The TG/HDL-C ratio is also widely used to assess lipid atherogenesis; however, the utility of this ratio for predicting coronary heart disease (CHD) risk is not clear. We encountered myocardial infarct patients with normal serum lipid concentrations, so this study was undertaken to evaluate the usefulness of these lipid ratios in predicting CHD risk in normolipidemic AMI patients and to compare the results with healthy subjects. The aim of the present study was to evaluate serum TC/HDL-C, TG/HDL-C and LDL-C/HDL-C in myocardial infarct subjects with a normal lipid profile. To study this, the lipid profile was determined in 165 normolipidemic acute myocardial infarction patients and 165 age/sex-matched controls. Total cholesterol, triglycerides, and HDL-cholesterol were analyzed enzymatically using kits obtained from Randox Laboratories Limited, Crumlin, UK. Plasma LDL-cholesterol was determined from the values of total cholesterol, triglycerides, and HDL-cholesterol using the Friedewald formula. The values were expressed as means ± standard deviation (SD) and data from patients and controls were compared using Student's t-test. The results and conclusions of the study were: total cholesterol, the TC/HDL-C ratio, triglycerides, LDL-cholesterol, and the LDL/HDL-C ratio were higher in MI patients (p<0.001). HDL-C concentration was significantly lower in MI patients than in controls (p<0.001). Higher ratios of TC/HDL-C, TG/HDL-C and LDL-C/HDL-C were observed in AMI patients compared
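The quantities in this study are simple arithmetic on the measured lipid panel: the Friedewald estimate LDL-C = TC − HDL-C − TG/5 (all in mg/dL, valid only for TG ≤ 400 mg/dL), and the three ratios formed against HDL-C. A minimal sketch, with illustrative values rather than patient data from the study:

```python
# Friedewald LDL-C estimate and the three lipid ratios, in mg/dL.
# Example inputs are illustrative, not data from the case-control study.

def friedewald_ldl(tc, hdl, tg):
    """LDL-C = TC - HDL-C - TG/5 (mg/dL); unreliable when TG > 400 mg/dL."""
    if tg > 400:
        raise ValueError("Friedewald formula is unreliable above TG 400 mg/dL")
    return tc - hdl - tg / 5.0

def lipid_ratios(tc, hdl, tg):
    """Return the TC/HDL-C, TG/HDL-C and LDL-C/HDL-C ratios."""
    ldl = friedewald_ldl(tc, hdl, tg)
    return {"TC/HDL-C": tc / hdl, "TG/HDL-C": tg / hdl, "LDL-C/HDL-C": ldl / hdl}

ratios = lipid_ratios(tc=180.0, hdl=40.0, tg=150.0)  # a normolipidemic example
```

Note that because LDL-C is derived from the other three measurements, the LDL-C/HDL-C ratio is not statistically independent of the TC/HDL-C and TG/HDL-C ratios.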

  19. Use of thermal analysis techniques (TG-DSC) for the characterization of diverse organic municipal waste streams to predict biological stability prior to land application.

    Science.gov (United States)

    Fernández, José M; Plaza, César; Polo, Alfredo; Plante, Alain F

    2012-01-01

    The use of organic municipal wastes as soil amendments is an increasing practice that can divert significant amounts of waste from landfill, and provides a potential source of nutrients and organic matter to ameliorate degraded soils. Due to the high heterogeneity of organic municipal waste streams, it is difficult to rapidly and cost-effectively establish their suitability as soil amendments using a single method. Thermal analysis has been proposed as an evolving technique to assess the stability and composition of the organic matter present in these wastes. In this study, three different organic municipal waste streams (i.e., a municipal waste compost (MC), a composted sewage sludge (CS) and a thermally dried sewage sludge (TS)) were characterized using conventional and thermal methods. The conventional methods used to test organic matter stability included laboratory incubation with measurement of respired C, and spectroscopic methods to characterize chemical composition. Carbon mineralization was measured during a 90-day incubation, and samples before and after incubation were analyzed by chemical (elemental analysis) and spectroscopic (infrared and nuclear magnetic resonance) methods. Results were compared with those obtained by thermogravimetry (TG) and differential scanning calorimetry (DSC) techniques. Total amounts of CO(2) respired indicated that the organic matter in the TS was the least stable, while that in the CS was the most stable. This was confirmed by changes detected with the spectroscopic methods in the composition of the organic wastes due to C mineralization. Differences were especially pronounced for TS, which showed a remarkable loss of aliphatic and proteinaceous compounds during the incubation process. TG, and especially DSC analysis, clearly reflected these differences between the three organic wastes before and after the incubation. Furthermore, the calculated energy density, which represents the energy available per unit of organic

  20. Study on pyrolysis of typical medical waste materials by using TG-FTIR analysis.

    Science.gov (United States)

    Zhu, H M; Yan, J H; Jiang, X G; Lai, Y E; Cen, K F

    2008-05-01

    Pyrolysis of certain medical waste materials was studied using a thermogravimetric analyzer coupled with Fourier transform infrared spectroscopy (TG-FTIR). Pyrolysis characteristics of three common materials were discussed. The pyrolysis of absorbent cotton turned out to be the most concentrated, followed by the medical respirator and the bamboo stick. From the TG and DTG curves, pyrolysis of these three materials occurred in single, two and three stages, respectively. Evolved volatile products from all three materials included 2-butanone, benzaldehyde, formic acid, acetic acid, hydrocarbons, carbon dioxide, carbon monoxide, and water, whereas no sulphur dioxide, ammonia or hydrogen cyanide was detected. There are several differences in yield among them. The study in this paper is nevertheless essential for building a medical waste pyrolysis model: the TG-FTIR approach has the potential to provide valuable inputs for predictive modeling of medical waste pyrolysis. More studies are needed to obtain the kinetic parameters and pyrolysis models that can predict yields and evolution patterns of selected volatile products for CFD applications.
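The kinetic parameters mentioned above are typically fitted to a single-step Arrhenius conversion law, dα/dt = A·exp(−E/RT)·(1−α), integrated under the constant heating rate of the TG run. The sketch below shows that model structure only; A, E and the heating rate are illustrative assumptions, not values fitted to the medical-waste data.

```python
# Single-step, first-order pyrolysis kinetics of the kind fitted to TG data:
#   d(alpha)/dt = A * exp(-E / (R*T)) * (1 - alpha)
# integrated by forward Euler under a constant heating rate beta (K/min).
# A and E are ILLUSTRATIVE, not fitted to the study's materials.
import math

R = 8.314  # gas constant, J/(mol*K)

def tg_curve(A=1e10, E=120e3, beta=10.0, T0=300.0, T1=900.0, dT=0.5):
    """Return (temperature K, conversion alpha) points along the heating ramp."""
    alpha, points = 0.0, []
    T = T0
    dt = dT / beta  # minutes elapsed per temperature step
    while T < T1 and alpha < 0.999:
        alpha += A * math.exp(-E / (R * T)) * (1.0 - alpha) * dt
        alpha = min(alpha, 1.0)
        T += dT
        points.append((T, alpha))
    return points

curve = tg_curve()  # conversion rises sigmoidally with temperature
```

The DTG curve corresponds to the derivative of this conversion profile; multi-stage materials like the bamboo stick would be modelled as a sum of such single-step reactions.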

  1. Predictive Models for Music

    OpenAIRE

    Paiement, Jean-François; Grandvalet, Yves; Bengio, Samy

    2008-01-01

    Modeling long-term dependencies in time series has proved very difficult to achieve with traditional machine learning methods. This problem occurs when considering music data. In this paper, we introduce generative models for melodies. We decompose melodic modeling into two subtasks. We first propose a rhythm model based on the distributions of distances between subsequences. Then, we define a generative model for melodies given chords and rhythms based on modeling sequences of Narmour featur...

  2. Zephyr - the prediction models

    DEFF Research Database (Denmark)

    Nielsen, Torben Skov; Madsen, Henrik; Nielsen, Henrik Aalborg

    2001-01-01

    This paper briefly describes new models and methods for predicting the wind power output from wind farms. The system is being developed in a project which has the research organization Risø and the department of Informatics and Mathematical Modelling (IMM) as the modelling team and all the Dani...

  3. A generic TG-186 shielded applicator for commissioning model-based dose calculation algorithms for high-dose-rate (192) Ir brachytherapy.

    Science.gov (United States)

    Ma, Yunzhi; Vijande, Javier; Ballester, Facundo; Carlsson Tedgren, Åsa; Granero, Domingo; Haworth, Annette; Mourtada, Firas; Fonseca, Gabriel Paiva; Zourari, Kyveli; Papagiannis, Panagiotis; Rivard, Mark J; Siebert, Frank André; Sloboda, Ron S; Smith, Ryan; Chamberland, Marc J P; Thomson, Rowan M; Verhaegen, Frank; Beaulieu, Luc

    2017-07-19

    A joint working group was created by the American Association of Physicists in Medicine (AAPM), the European Society for Radiotherapy and Oncology (ESTRO), and the Australasian Brachytherapy Group (ABG) with the charge, among others, to develop a set of well-defined test case plans and perform model-based dose calculation algorithms (MBDCA) dose calculations and comparisons. Its main goal is to facilitate a smooth transition from the AAPM Task Group No. 43 (TG-43) dose calculation formalism, widely being used in clinical practice for brachytherapy, to the one proposed by Task Group No. 186 (TG-186) for MBDCAs. To do so, in this work a hypothetical, generic high-dose rate (HDR) (192) Ir shielded applicator has been designed and benchmarked. A generic HDR (192) Ir shielded applicator was designed based on three commercially available gynecological applicators as well as a virtual cubic water phantom that can be imported into any DICOM-RT compatible treatment planning system (TPS). The absorbed dose distribution around the applicator with the TG-186 (192) Ir source located at one dwell position at its center was computed using two commercial TPSs incorporating MBDCAs (Oncentra(®) Brachy with Advanced Collapsed-cone Engine, ACE(™) , and BrachyVision ACUROS(™) ) and state-of-the-art Monte Carlo (MC) codes, including ALGEBRA, BrachyDose, egs_brachy, Geant4, MCNP6, and Penelope2008. TPS-based volumetric dose distributions for the previously reported "source centered in water" and "source displaced" test cases, and the new "source centered in applicator" test case, were analyzed here using the MCNP6 dose distribution as a reference. Volumetric dose comparisons of TPS results against results for the other MC codes were also performed. Distributions of local and global dose difference ratios are reported. The local dose differences among MC codes are comparable to the statistical uncertainties of the reference datasets for the "source centered in water" and "source

  4. Confidence scores for prediction models

    DEFF Research Database (Denmark)

    Gerds, Thomas Alexander; van de Wiel, MA

    2011-01-01

    modelling strategy is applied to different training sets. For each modelling strategy we estimate a confidence score based on the same repeated bootstraps. A new decomposition of the expected Brier score is obtained, as well as the estimates of population average confidence scores. The latter can be used...... to distinguish rival prediction models with similar prediction performances. Furthermore, on the subject level a confidence score may provide useful supplementary information for new patients who want to base a medical decision on predicted risk. The ideas are illustrated and discussed using data from cancer...
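The repeated-bootstrap idea in this record can be illustrated with a small sketch: refit a modelling strategy on bootstrap resamples, score the stability of the predicted risk for a new subject, and compute the apparent Brier score. The data, the toy fitting routine, and the particular confidence formula below are all hypothetical stand-ins, not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary outcome data: one predictor, risk increasing with x.
n = 400
x = rng.normal(size=n)
y = (rng.random(n) < 1 / (1 + np.exp(-2 * x))).astype(float)

def fit_predict(x_tr, y_tr, x_new):
    """A deliberately simple 'modelling strategy': a logistic model fitted
    by a few gradient steps (stand-in for any strategy)."""
    w, b = 0.0, 0.0
    for _ in range(500):
        p = 1 / (1 + np.exp(-(w * x_tr + b)))
        g = p - y_tr
        w -= 0.1 * np.mean(g * x_tr)
        b -= 0.1 * np.mean(g)
    return 1 / (1 + np.exp(-(w * x_new + b)))

# Repeated bootstrap: refit the strategy, record the predicted risk for a new subject.
x_new = np.array([1.0])
boot_preds = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    boot_preds.append(fit_predict(x[idx], y[idx], x_new)[0])
boot_preds = np.array(boot_preds)

# Illustrative confidence score: stability of the predicted risk across
# bootstraps (1 minus twice the bootstrap standard deviation, floored at 0).
conf = max(0.0, 1 - 2 * boot_preds.std())
print(f"predicted risk ~ {boot_preds.mean():.2f}, confidence score ~ {conf:.2f}")

# Apparent Brier score of the strategy on the full data.
brier = np.mean((fit_predict(x, y, x) - y) ** 2)
print(f"Brier score: {brier:.3f}")
```

A subject whose bootstrap predictions scatter widely would get a low confidence score even if the strategy's average Brier score is good, which is the supplementary information the abstract alludes to.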

  5. Modelling, controlling, predicting blackouts

    CERN Document Server

    Wang, Chengwei; Baptista, Murilo S

    2016-01-01

    The electric power system is one of the cornerstones of modern society. One of its most serious malfunctions is the blackout, a catastrophic event that may disrupt a substantial portion of the system, wreaking havoc on human life and causing great economic losses. Thus, understanding the mechanisms leading to blackouts and creating a reliable and resilient power grid has been a major issue, attracting the attention of scientists, engineers and stakeholders. In this paper, we study the blackout problem in power grids by considering a practical phase-oscillator model. This model allows one to simultaneously consider different types of power sources (e.g., traditional AC power plants and renewable power sources connected by DC/AC inverters) and different types of loads (e.g., consumers connected to distribution networks and consumers directly connected to power plants). We propose two new control strategies based on our model, one for traditional power grids, and another one for smart grids. The control strategie...

  6. Melanoma Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing melanoma cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  7. Prediction models in complex terrain

    DEFF Research Database (Denmark)

    Marti, I.; Nielsen, Torben Skov; Madsen, Henrik

    2001-01-01

    are calculated using on-line measurements of power production as well as HIRLAM predictions as input, thus taking advantage of the auto-correlation present in the power production at shorter prediction horizons. Statistical models are used to describe the relationship between observed energy production......The objective of the work is to investigate the performance of HIRLAM in complex terrain when used as input to energy production forecasting models, and to develop a statistical model to adapt HIRLAM predictions to the wind farm. The features of the terrain, especially the topography, influence...... and HIRLAM predictions. The statistical models belong to the class of conditional parametric models. The models are estimated using local polynomial regression, but the estimation method is here extended to be adaptive in order to allow for slow changes in the system, e.g. caused by the annual variations...
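The local polynomial regression mentioned in this record can be sketched on a toy power curve: a locally weighted linear fit of observed production against predicted wind speed. The data, kernel, and bandwidth below are invented for illustration and do not reproduce the authors' adaptive conditional parametric estimator:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy power curve: normalized production depends nonlinearly on predicted wind speed.
ws = rng.uniform(0, 25, 500)                       # predicted wind speed (m/s)
power = np.tanh((ws - 8) / 4) * 0.5 + 0.5 + 0.05 * rng.normal(size=500)

def local_linear(x0, bandwidth=2.0):
    """Local polynomial (here: linear) estimate of expected power at wind speed x0."""
    w = np.exp(-0.5 * ((ws - x0) / bandwidth) ** 2)   # Gaussian kernel weights
    X = np.column_stack([np.ones_like(ws), ws - x0])  # local linear design matrix
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * power))
    return beta[0]                                    # fitted value at x0

print(f"expected normalized power at 12 m/s ~ {local_linear(12.0):.2f}")
```

An adaptive version, as described in the abstract, would additionally down-weight old observations so the fitted relationship can drift with, for example, seasonal changes.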

  8. Whole body exposure to 2.4 GHz WIFI signals: effects on cognitive impairment in adult triple transgenic mouse models of Alzheimer's disease (3xTg-AD).

    Science.gov (United States)

    Banaceur, Sana; Banasr, Sihem; Sakly, Mohsen; Abdelmelek, Hafedh

    2013-03-01

    The present investigation aimed at evaluating the effects of long-term exposure to WIFI type radiofrequency (RF) signals (2.40 GHz), two hours per day during one month at a Specific Absorption Rate (SAR) of 1.60 W/kg. The effects of RF exposure were studied on wildtype mice and triple transgenic mice (3xTg-AD) destined to develop Alzheimer's-like cognitive impairment. Mice were divided into four groups: two sham groups (WT, TG; n=7) and two exposed groups (WTS, TGS; n=7). The cognitive interference task used in this study was designed from an analogous human cognitive interference task including the Flex field activity system test, the two-compartment box test and the Barnes maze test. Our data demonstrate for the first time that RF improves cognitive behavior of 3xTg-AD mice. We conclude that RF exposure may represent an effective memory-enhancing approach in Alzheimer's disease.

  9. Prediction models in complex terrain

    DEFF Research Database (Denmark)

    Marti, I.; Nielsen, Torben Skov; Madsen, Henrik

    2001-01-01

    The objective of the work is to investigate the performance of HIRLAM in complex terrain when used as input to energy production forecasting models, and to develop a statistical model to adapt HIRLAM predictions to the wind farm. The features of the terrain, especially the topography, influence...

  10. Reducing AD-like pathology in 3xTg-AD mouse model by DNA epitope vaccine - a novel immunotherapeutic strategy.

    Directory of Open Access Journals (Sweden)

    Nina Movsesyan

    Full Text Available BACKGROUND: The development of a safe and effective AD vaccine requires a delicate balance between providing an anti-Abeta antibody response sufficient for therapeutic benefit and eliminating an adverse T cell-mediated proinflammatory autoimmune response. To achieve this goal we have designed a prototype chemokine-based DNA epitope vaccine expressing a fusion protein that consists of 3 copies of the self B cell epitope of Abeta(42) (Abeta(1-11)), a non-self T helper cell epitope (PADRE), and macrophage-derived chemokine (MDC/CCL22) as a molecular adjuvant to promote a strong anti-inflammatory Th2 phenotype. METHODS AND FINDINGS: We generated the pMDC-3Abeta(1-11)-PADRE construct and immunized the 3xTg-AD mouse model starting at 3-4 months of age. We demonstrated that prophylactic immunizations with the DNA epitope vaccine generated a robust Th2 immune response that induced high titers of anti-Abeta antibody, which in turn inhibited accumulation of Abeta pathology in the brains of older mice. Importantly, vaccination reduced glial activation and prevented the development of behavioral deficits in aged animals without increasing the incidence of microhemorrhages. CONCLUSIONS: Data from this transitional pre-clinical study suggest that our DNA epitope vaccine could be used as a safe and effective strategy for AD therapy. Future safety and immunology studies in large animals, aimed at achieving effective humoral immunity without adverse effects, should help to translate this study to human clinical trials.

  11. Predictive models of forest dynamics.

    Science.gov (United States)

    Purves, Drew; Pacala, Stephen

    2008-06-13

    Dynamic global vegetation models (DGVMs) have shown that forest dynamics could dramatically alter the response of the global climate system to increased atmospheric carbon dioxide over the next century. But there is little agreement between different DGVMs, making forest dynamics one of the greatest sources of uncertainty in predicting future climate. DGVM predictions could be strengthened by integrating the ecological realities of biodiversity and height-structured competition for light, facilitated by recent advances in the mathematics of forest modeling, ecological understanding of diverse forest communities, and the availability of forest inventory data.

  12. High dietary consumption of trans fatty acids decreases brain docosahexaenoic acid but does not alter amyloid-beta and tau pathologies in the 3xTg-AD model of Alzheimer's disease.

    Science.gov (United States)

    Phivilay, A; Julien, C; Tremblay, C; Berthiaume, L; Julien, P; Giguère, Y; Calon, F

    2009-03-03

    Dietary consumption of trans fatty acids (TFA) has increased during the 20th century and is a suspected risk factor for cardiovascular diseases. More recently, high TFA intake has been associated with a higher risk of developing Alzheimer's disease (AD). To investigate the impact of TFA on an animal model genetically programmed to express amyloid-beta (Abeta) and tau pathological markers of AD, we have fed 3xTg-AD mice with either control (0% TFA/total fatty acid), high TFA (16% TFA) or very high TFA (43% TFA) isocaloric diets from 2 to 16 months of age. Effects of TFA on plasma hepatic enzymes, glucose and lipid profile were minimal but very high TFA intake decreased visceral fat of non-transgenic mice. Importantly, dietary TFA increased brain TFA concentrations in a dose-related manner. Very high TFA consumption substantially modified the brain fatty acid profile by increasing mono-unsaturated fatty acids and decreasing polyunsaturated fatty acids (PUFA). Very high TFA intake induced a shift from docosahexaenoic acid (DHA, 22:6n-3) toward n-6 docosapentaenoic acid (DPA, 22:5n-6) without altering the n-3:n-6 PUFA ratio in the cortex of both control and 3xTg-AD mice. Changes in levels of Abeta(40), Abeta(42), tau protein, phosphorylated tau protein and synaptic markers were not statistically significant in the three groups of 3xTg-AD mice, despite a trend toward decreased insoluble tau in very high TFA-fed 3xTg-AD animals. In summary, TFA intake modulated brain fatty acid profiles but had no significant effect on major brain neuropathological hallmarks of AD in an animal model.

  13. PREDICT: model for prediction of survival in localized prostate cancer

    NARCIS (Netherlands)

    Kerkmeijer, Linda G W; Monninkhof, Evelyn M.; van Oort, Inge M.; van der Poel, Henk G.; de Meerleer, Gert; van Vulpen, Marco

    2016-01-01

    Purpose: Current models for prediction of prostate cancer-specific survival do not incorporate all present-day interventions. In the present study, a pre-treatment prediction model for patients with localized prostate cancer was developed.Methods: From 1989 to 2008, 3383 patients were treated with I

  14. Comparison of TG-43 and TG-186 in breast irradiation using a low energy electronic brachytherapy source

    Energy Technology Data Exchange (ETDEWEB)

    White, Shane A.; Landry, Guillaume; Reniers, Brigitte, E-mail: brigitte.reniers@maastro.nl [Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Center (MUMC), Maastricht 6201 BN (Netherlands); Fonseca, Gabriel Paiva [Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Center (MUMC), Maastricht 6201 BN, The Netherlands and Instituto de Pesquisas Energéticas e Nucleares – IPEN-CNEN/SP, São Paulo CP 11049, 05422-970 (Brazil); Holt, Randy; Rusch, Thomas [Xoft, A Subsidiary of iCAD, Sunnyvale, California 94085-4115 (United States); Beaulieu, Luc [Centre Hospitalier Universitaire de Québec Université Laval, Radio-Oncologie et Centre de Recherche en Cancérologie de l’Université Laval, Québec, Québec G1R 2J6 Canada (Canada); Verhaegen, Frank [Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Center (MUMC), Maastricht 6201 BN, The Netherlands and Department of Oncology, McGill University, Montreal, Quebec H3G 1A4 (Canada)

    2014-06-15

    Purpose: The recently updated guidelines for dosimetry in brachytherapy in TG-186 have recommended the use of model-based dosimetry calculations as a replacement for TG-43. TG-186 highlights shortcomings in the water-based approach in TG-43, particularly for low energy brachytherapy sources. The Xoft Axxent is a low energy (<50 kV) brachytherapy system used in accelerated partial breast irradiation (APBI). Breast tissue is heterogeneous in terms of density and composition. Dosimetric calculations of seven APBI patients treated with Axxent were made using a model-based Monte Carlo platform for a number of tissue models and dose reporting methods and compared to TG-43 based plans. Methods: A model of the Axxent source, the S700, was created and validated against experimental data. CT scans of the patients were used to create realistic multi-tissue/heterogeneous models with breast tissue segmented using a published technique. Alternative water models were used to isolate the influence of tissue heterogeneity and backscatter on the dose distribution. Dose calculations were performed using Geant4 according to the original treatment parameters. The effect of the Axxent balloon applicator used in APBI, which could not be represented in the CT-based model, was modeled using a novel technique that utilizes CAD-based geometries. These techniques were validated experimentally. Results were calculated using two dose reporting methods, dose to water (Dw,m) and dose to medium (Dm,m), for the heterogeneous simulations. All results were compared against TG-43-based dose distributions and evaluated using dose ratio maps and DVH metrics. Changes in skin and PTV dose were highlighted. Results: All simulated heterogeneous models showed a reduced dose to the DVH metrics that is dependent on the method of dose reporting and patient geometry. Based on a prescription dose of 34 Gy, the average D90 to PTV was reduced by between ~4% and ~40%, depending on the

  15. Predictive Modeling of Cardiac Ischemia

    Science.gov (United States)

    Anderson, Gary T.

    1996-01-01

    The goal of the Contextual Alarms Management System (CALMS) project is to develop sophisticated models to predict the onset of clinical cardiac ischemia before it occurs. The system will continuously monitor cardiac patients and set off an alarm when they appear about to suffer an ischemic episode. The models take as inputs information from patient history and combine it with continuously updated information extracted from blood pressure, oxygen saturation and ECG lines. Expert system, statistical, neural network and rough set methodologies are then used to forecast the onset of clinical ischemia before it transpires, thus allowing early intervention aimed at preventing morbid complications from occurring. The models will differ from previous attempts by including combinations of continuous and discrete inputs. A commercial medical instrumentation and software company has invested funds in the project with a goal of commercialization of the technology. The end product will be a system that analyzes physiologic parameters and produces an alarm when myocardial ischemia is present. If proven feasible, a CALMS-based system will be added to existing heart monitoring hardware.

  16. Numerical weather prediction model tuning via ensemble prediction system

    Science.gov (United States)

    Jarvinen, H.; Laine, M.; Ollinaho, P.; Solonen, A.; Haario, H.

    2011-12-01

    This paper discusses a novel approach to tune the predictive skill of numerical weather prediction (NWP) models. NWP models contain tunable parameters which appear in parameterization schemes of sub-grid scale physical processes. Currently, numerical values of these parameters are specified manually. In a recent dual manuscript (QJRMS, revised) we developed a new concept and method for on-line estimation of the NWP model parameters. The EPPES ("Ensemble prediction and parameter estimation system") method requires only minimal changes to the existing operational ensemble prediction infrastructure and it seems very cost-effective because practically no new computations are introduced. The approach provides an algorithmic decision making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating each member of the ensemble of predictions using different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In the presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to an improved forecast skill. Second, results with an atmospheric general circulation model based ensemble prediction system show that the NWP model tuning capacity of EPPES scales up to realistic models and ensemble prediction systems. Finally, preliminary results of a tuning exercise with a global top-end NWP model are presented.
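The two-step EPPES loop described in this record, (i) drawing a parameter per ensemble member and (ii) feeding verification back into the proposal, can be sketched on a one-parameter toy "model". Everything below (the damping-factor model, noise levels, likelihood) is invented for illustration and is not the EPPES implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy forecast model: tomorrow's value = damping * today's analysis.
# The damping factor plays the role of a tunable physics parameter; truth is 0.7.
true_param = 0.7
n_steps, n_ens = 40, 20

mu, sigma = 0.3, 0.3                                    # proposal distribution
for _ in range(n_steps):
    analysis = rng.uniform(0.5, 1.5)                    # today's initial state
    verifying = true_param * analysis + 0.02 * rng.normal()  # tomorrow's observation
    params = rng.normal(mu, sigma, n_ens)               # (i) one draw per member
    forecasts = params * analysis                       # run the ensemble one step
    ll = -(forecasts - verifying) ** 2 / (2 * 0.02**2)  # log-likelihood vs observation
    w = np.exp(ll - ll.max()); w /= w.sum()
    mu = np.sum(w * params)                             # (ii) feed merits back into
    sigma = max(0.05, np.sqrt(np.sum(w * (params - mu) ** 2)))  # the proposal

print(f"estimated parameter: {mu:.2f} (truth {true_param})")
```

The loop introduces essentially no extra computation beyond the ensemble runs already being made, which is the cost-effectiveness argument in the abstract.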

  17. Novel ketone body therapy for managing Alzheimer's disease: An Editorial Highlight for Effects of a dietary ketone ester on hippocampal glycolytic and tricarboxylic acid cycle intermediates and amino acids in a 3xTgAD mouse model of Alzheimer's disease.

    Science.gov (United States)

    Puchowicz, Michelle A; Seyfried, Thomas N

    2017-03-15

    Read the highlighted article 'Effects of a dietary ketone ester on hippocampal glycolytic and tricarboxylic acid cycle intermediates and amino acids in a 3xTgAD mouse model of Alzheimer's disease' on doi: 10.1111/jnc.13958.

  18. Maysin and Its Flavonoid Derivative from Centipedegrass Attenuates Amyloid Plaques by Inducting Humoral Immune Response with Th2 Skewed Cytokine Response in the Tg (APPswe, PS1dE9) Alzheimer’s Mouse Model

    Science.gov (United States)

    Hong, Il-Hwa; Won, Chung-Kil; Bai, Hyoung-Woo; Lee, Seung Sik; Lee, SungBeom; Chung, Byung Yeoup; Cho, Jae-Hyeon

    2017-01-01

    Alzheimer’s disease (AD) is a slow, progressive neurodegenerative disease and the most common type of dementia in the elderly. The etiology of AD and its underlying mechanism are still not clear. In a previous study, we found that an ethyl acetate extract of Centipedegrass (CG) (i.e., EA-CG) contained 4 types of Maysin derivatives, including Luteolin, Isoorientin, Rhamnosylisoorientin, and Derhamnosylmaysin, and showed protective effects against Amyloid beta (Aβ) by inhibiting oligomeric Aβ in cellular and in vitro models. Here, we examined the preventative effects of EA-CG treatment on the Aβ burden in the Tg (Mo/Hu APPswe PS1dE9) AD mouse model. We have investigated the EA-CG efficacy as novel anti-AD likely preventing amyloid plaques using immunofluorescence staining to visually analyze Aβ40/42 and fibril formation with Thioflavin-S or 6E10 which are the profile of immunoreactivity against epitope Aβ1–16 or neuritic plaque, the quantitation of humoral immune response against Aβ, and the inflammatory cytokine responses (Th1 and Th2) using ELISA and QRT-PCR. To minimize the toxicity of the extracted CG, we addressed the liver toxicity in response to the CG extract treatment in Tg mice using relevant markers, such as aspartate aminotransferase (AST)/ alanine aminotransferase (ALT) measurements in serum. The EA-CG extract significantly reduced the Aβ burden, the concentration of soluble Aβ40/42 protein, and fibril formation in the hippocampus and cortex of the Tg mice treated with EA-CG (50 mg/kg BW/day) for 6 months compared with the Tg mice treated with a normal diet. Additionally, the profile of anti-inflammatory cytokines revealed that the levels of Th2 (interleukin-4 (IL-4) and interleukin-10 (IL-10)) cytokines are more significantly increased than Th1 (interferon-γ (IFN-γ), interleukin-2(IL-2)) in the sera. These results suggest that the EA-CG fraction induces IL-4/IL-10-dependent anti-inflammatory cytokines (Th2) rather than pro

  19. Return Predictability, Model Uncertainty, and Robust Investment

    DEFF Research Database (Denmark)

    Lukas, Manuel

    Stock return predictability is subject to great uncertainty. In this paper we use the model confidence set approach to quantify uncertainty about expected utility from investment, accounting for potential return predictability. For monthly US data and six representative return prediction models, we...

  20. Predictive Model Assessment for Count Data

    Science.gov (United States)

    2007-09-05

    critique count regression models for patent data, and assess the predictive performance of Bayesian age-period-cohort models for larynx cancer counts in Germany. We consider a recent suggestion by Baker and... (Figure 5: boxplots of various scores for the patent data count regressions. Table 1: four predictive models for larynx cancer counts in Germany, 1998–2002.)

  1. Nonlinear chaotic model for predicting storm surges

    Directory of Open Access Journals (Sweden)

    M. Siek

    2010-09-01

    Full Text Available This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by the adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables. We implemented the univariate and multivariate chaotic models with direct and multi-steps prediction techniques and optimized these models using an exhaustive search method. The built models were tested for predicting storm surge dynamics for different stormy conditions in the North Sea, and are compared to neural network models. The results show that the chaotic models can generally provide reliable and accurate short-term storm surge predictions.
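The approach described here, reconstructing the phase space by delay embedding and predicting with an adaptive local model over dynamical neighbors, can be sketched on a stand-in chaotic series (a logistic map; the storm-surge data and the paper's model settings are not reproduced):

```python
import numpy as np

# Stand-in chaotic series: the logistic map x' = 4x(1 - x).
x = [0.4]
for _ in range(1500):
    x.append(4 * x[-1] * (1 - x[-1]))
x = np.array(x)

# Reconstruct the phase space by delay embedding (dimension m, delay tau).
m, tau = 3, 1
N = len(x) - (m - 1) * tau - 1
emb = np.array([x[t : t + m * tau : tau] for t in range(N)])  # embedded vectors
target = x[(m - 1) * tau + 1 : (m - 1) * tau + 1 + N]         # successor of each vector

# Local model: predict the next value as the mean successor of the
# k nearest dynamical neighbors found in the training portion.
train = 1200
def predict_next(query, k=10):
    dist = np.linalg.norm(emb[:train] - query, axis=1)
    neighbors = np.argsort(dist)[:k]
    return target[:train][neighbors].mean()

errors = [abs(predict_next(emb[t]) - target[t]) for t in range(train, N)]
print(f"mean absolute one-step error: {np.mean(errors):.4f}")
```

Multi-step prediction, as in the paper, would feed each predicted value back into the embedded query vector; a multivariate variant embeds several observables jointly.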

  2. Nonlinear chaotic model for predicting storm surges

    NARCIS (Netherlands)

    Siek, M.; Solomatine, D.P.

    This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by the adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables.

  3. EFFICIENT PREDICTIVE MODELLING FOR ARCHAEOLOGICAL RESEARCH

    OpenAIRE

    Balla, A.; Pavlogeorgatos, G.; Tsiafakis, D.; Pavlidis, G.

    2014-01-01

    The study presents a general methodology for designing, developing and implementing predictive modelling for identifying areas of archaeological interest. The methodology is based on documented archaeological data and geographical factors, geospatial analysis and predictive modelling, and has been applied to the identification of possible Macedonian tombs’ locations in Northern Greece. The model was tested extensively and the results were validated using a commonly used predictive gain,...

  4. Sulfur removal and release behaviors of sulfur-containing model compounds during pyrolysis under inert atmosphere by TG-MS connected with Py-GC

    Institute of Scientific and Technical Information of China (English)

    郭慧卿; 谢丽丽; 王鑫龙; 刘粉荣; 王美君; 胡瑞生

    2014-01-01

    Sulfur-containing model compounds, tetradecyl mercaptan, dibutyl sulfide, phenyl sulfide, 2-methyl thiophene, benzothiophene and dibenzothiophene, were selected to investigate their sulfur removal and release behaviors during pyrolysis under inert atmosphere, using thermogravimetric analysis coupled with mass spectrometry (TG-MS) and pyrolysis coupled with gas chromatography (Py-GC). It was found that the order of sulfur removal was tetradecyl mercaptan > dibutyl sulfide > 2-methyl thiophene > benzothiophene > phenyl sulfide > dibenzothiophene. Except for phenyl sulfide, this order is the reverse of the decomposition temperature order of the sulfur functional groups. SO2 evolution was detected by MS and GC for all of these model compounds, and COS evolution was also found except for phenyl sulfide and dibenzothiophene, while H2S evolution was measured only for tetradecyl mercaptan, dibutyl sulfide and 2-methyl thiophene. However, the SO2 content was much higher than that of H2S and COS in the pyrolysis gas of each model compound, which may be because indigenous hydrogen was much scarcer than indigenous oxygen under the inert atmosphere when activated carbon was used as the carrier. Thus, most sulfur radicals can combine with indigenous oxygen and be released in the form of SO2. For phenyl sulfide, benzothiophene and dibenzothiophene, whose indigenous hydrogen was not sufficient to react with sulfur radicals, no H2S was detected during pyrolysis under inert atmosphere, while SO2 was found at very high content in the pyrolysis gas.

  5. How to Establish Clinical Prediction Models

    Directory of Open Access Journals (Sweden)

    Yong-ho Lee

    2016-03-01

    Full Text Available A clinical prediction model can be applied to several challenging clinical scenarios: screening high-risk individuals for asymptomatic disease, predicting future events such as disease or death, and assisting medical decision-making and health education. Despite the impact of clinical prediction models on practice, prediction modeling is a complex process requiring careful statistical analyses and sound clinical judgement. Although there is no definite consensus on the best methodology for model development and validation, a few recommendations and checklists have been proposed. In this review, we summarize five steps for developing and validating a clinical prediction model: preparation for establishing clinical prediction models; dataset selection; handling variables; model generation; and model evaluation and validation. We also review several studies that detail methods for developing clinical prediction models with comparable examples from real practice. After model development and rigorous validation in relevant settings, possibly with evaluation of utility/usability and fine-tuning, good models can be ready for use in practice. We anticipate that this framework will revitalize the use of predictive or prognostic research in endocrinology, leading to active applications in real clinical practice.
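The five steps summarized in this review can be sketched end-to-end on simulated data. The cohort, predictors, coefficients, and the simple gradient-descent logistic fit are all illustrative assumptions, not taken from the review:

```python
import numpy as np

rng = np.random.default_rng(3)

# Steps 1-2 (preparation, dataset selection): a simulated cohort with two predictors.
n = 2000
age = rng.normal(55, 10, n)
marker = rng.normal(1.0, 0.3, n)
logit = -12 + 0.15 * age + 2.0 * marker          # hypothetical true risk model
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

# Step 3 (handling variables): standardize predictors, add an intercept.
X = np.column_stack([
    np.ones(n),
    (age - age.mean()) / age.std(),
    (marker - marker.mean()) / marker.std(),
])

# Step 4 (model generation): logistic regression by gradient descent on a training split.
tr = rng.random(n) < 0.7
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X[tr] @ w))
    w -= 0.5 * X[tr].T @ (p - y[tr]) / tr.sum()

# Step 5 (evaluation and validation): discrimination (c-statistic) on the held-out split.
p_val = 1 / (1 + np.exp(-X[~tr] @ w))
pos, neg = p_val[y[~tr] == 1], p_val[y[~tr] == 0]
c_stat = (pos[:, None] > neg[None, :]).mean()
print(f"validation c-statistic: {c_stat:.2f}")
```

A full validation would also check calibration (e.g. observed versus predicted risk by decile) and, ideally, performance in an external cohort.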

  6. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    is a realization of a continuous-discrete multivariate stochastic transfer function model. The proposed prediction error-methods are demonstrated for a SISO system parameterized by the transfer functions with time delays of a continuous-discrete-time linear stochastic system. The simulations for this case suggest......Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which...... computational resources. The identification method is suitable for predictive control....

  7. Characterization of Anopheles gambiae Transglutaminase 3 (AgTG3) and Its Native Substrate Plugin*

    Science.gov (United States)

    Le, Binh V.; Nguyen, Jennifer B.; Logarajah, Shankar; Wang, Bo; Marcus, Jacob; Williams, Hazel P.; Catteruccia, Flaminia; Baxter, Richard H. G.

    2013-01-01

    Male Anopheles mosquitoes coagulate their seminal fluids via cross-linking of a substrate, called Plugin, by the seminal transglutaminase AgTG3. Formation of the “mating plug” by cross-linking Plugin is necessary for efficient sperm storage by females. AgTG3 has a similar degree of sequence identity (~30%) to both human Factor XIII (FXIII) and tissue transglutaminase 2 (hTG2). Here we report the solution structure and in vitro activity for the cross-linking reaction of AgTG3 and Plugin. AgTG3 is a dimer in solution and exhibits Ca2+-dependent nonproteolytic activation analogous to cytoplasmic FXIII. The C-terminal domain of Plugin is predominantly α-helical with extended tertiary structure and oligomerizes in solution. The specific activity of AgTG3 was measured as 4.25 × 10⁻² units mg⁻¹. AgTG3 is less active than hTG2 assayed using the general substrate TVQQEL but has 8–10× higher relative activity when Plugin is the substrate. Mass spectrometric analysis of cross-linked Plugin detects specific peptides including a predicted consensus motif for cross-linking by AgTG3. These results support the development of AgTG3 inhibitors as specific and effective chemosterilants for A. gambiae. PMID:23288850

  8. Influence of Urea Formaldehyde Resin on Pyrolysis of Biomass: A Modeling Study by TG-FTIR

    Institute of Scientific and Technical Information of China (English)

    李思锦; 母军; 张宇

    2014-01-01

    Pyrolysis is an efficient, resource-recovering way to utilize waste wood-based panels, and the urea-formaldehyde (UF) resin they contain is the main difference between wood-based boards and other kinds of biomass. To utilize or dispose of waste wood-based panels effectively and cleanly by pyrolysis, this study examined the influence of UF resin on the pyrolytic characteristics of wood and explored in depth the mechanism of its effect on each of the three main components of wood (cellulose, hemicellulose, lignin), using poplar as the subject. The weight-loss characteristics and gas evolution of a model mixture (made from cellulose, xylan and lignin, proportioned according to the chemical composition of poplar wood), of the individual main components, and of each mixed with UF were analyzed by TG-FTIR (a thermogravimetric analyzer coupled to a Fourier transform infrared spectrometer). Results indicated that UF promoted the generation of water and carboxylic acid species during cellulose pyrolysis. UF combined with lignin to form an unstable nitrogen-containing structure that released a large amount of NH3, which took part in the low-temperature (200-300 °C) pyrolysis of lignin and directly affected the pyrolysis products. It can be concluded that, among the three main components of wood, lignin is the one UF mainly affects during the pyrolysis of waste wood-based panels.

  9. Case studies in archaeological predictive modelling

    NARCIS (Netherlands)

    Verhagen, Jacobus Wilhelmus Hermanus Philippus

    2007-01-01

    In this thesis, a collection of papers is put together dealing with various quantitative aspects of predictive modelling and archaeological prospection. Among the issues covered are the effects of survey bias on the archaeological data used for predictive modelling, and the complexities of testing p

  10. Childhood asthma prediction models: a systematic review.

    Science.gov (United States)

    Smit, Henriette A; Pinart, Mariona; Antó, Josep M; Keil, Thomas; Bousquet, Jean; Carlsen, Kai H; Moons, Karel G M; Hooft, Lotty; Carlsen, Karin C Lødrup

    2015-12-01

    Early identification of children at risk of developing asthma at school age is crucial, but the usefulness of childhood asthma prediction models in clinical practice is still unclear. We systematically reviewed all existing prediction models to identify preschool children with asthma-like symptoms at risk of developing asthma at school age. Studies were included if they developed a new prediction model or updated an existing model in children aged 4 years or younger with asthma-like symptoms, with assessment of asthma done between 6 and 12 years of age. 12 prediction models were identified in four types of cohorts of preschool children: those with health-care visits, those with parent-reported symptoms, those at high risk of asthma, or children in the general population. Four basic models included non-invasive, easy-to-obtain predictors only, notably family history, allergic disease comorbidities or precursors of asthma, and severity of early symptoms. Eight extended models included additional clinical tests, mostly specific IgE determination. Some models could better predict asthma development and other models could better rule out asthma development, but the predictive performance of no single model stood out in both aspects simultaneously. This finding suggests that there is a large proportion of preschool children with wheeze for which prediction of asthma development is difficult.

  11. Early alterations in blood and brain RANTES and MCP-1 expression and the effect of exercise frequency in the 3xTg-AD mouse model of Alzheimer's disease.

    Science.gov (United States)

    Haskins, Morgan; Jones, Terry E; Lu, Qun; Bareiss, Sonja K

    2016-01-01

    Exercise has been shown to protect against cognitive decline and Alzheimer's disease (AD) progression; however, the dose of exercise required to protect against AD is unknown. Recent studies show that the pathological processes leading to AD cause characteristic alterations in blood and brain inflammatory proteins that are associated with the progression of AD, suggesting that these markers could be used to diagnose and monitor disease progression. The purpose of this study was to determine the impact of exercise frequency on AD blood chemokine profiles, and to correlate these findings with chemokine brain expression changes in the triple transgenic AD (3xTg-AD) mouse model. Three-month-old 3xTg-AD mice were subjected to 12 weeks of moderate intensity wheel running at a frequency of either 1×/week or 3×/week. Blood and cortical tissue were analyzed for expression of monocyte chemotactic protein-1 (MCP-1) and regulated on activation, normal T cell expressed and secreted (RANTES). Alterations in blood RANTES and MCP-1 expression were evident in 3- and 6-month-old animals compared to wild-type (WT) animals. Exercise 3×/week, but not 1×/week, was effective at restoring serum and brain RANTES and MCP-1 expression to the levels of WT controls, revealing a dose-dependent response to exercise. Analysis of these chemokines showed a strong negative correlation between blood and brain expression of RANTES. The results indicate that alterations in serum and brain inflammatory chemokines are evident as early signs of Alzheimer's disease pathology and that higher frequency exercise was necessary to restore blood and brain inflammatory expression levels in this AD mouse model.

  12. Differences between IC Analysis and TG Approach

    Institute of Scientific and Technical Information of China (English)

    张美玲

    2014-01-01

    Structuralism and the generative approach are two representative syntax theories, which study language from different perspectives. They employ different methodologies, i.e., immediate constituent (IC) analysis and the transformational-generative (TG) approach, to make syntactic analyses. In this paper, these two methods are applied to analyze some sentences for further discussion and comparison. The analysis by examples shows that both methods have their merits and inadequacies. To some extent, the TG method can help IC analysis solve some problems. However, TG grammar is by no means complete and perfect. Improvements are needed to reach its ultimate goal of producing a universal grammar for all human languages.

  13. Model predictive control classical, robust and stochastic

    CERN Document Server

    Kouvaritakis, Basil

    2016-01-01

    For the first time, a textbook that brings together classical predictive control with treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered, and the state of the art in computationally tractable methods based on uncertainty tubes is presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...
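
    The receding-horizon idea at the core of classical predictive control can be sketched in a few lines: optimize over a finite horizon, apply only the first control move, then re-solve. The scalar system, weights and horizon below are illustrative assumptions, not examples from the book.

```python
# Minimal receding-horizon (model predictive) control sketch for a scalar
# linear system x[k+1] = a*x[k] + b*u[k], with quadratic costs q*x^2 + r*u^2.

def mpc_gain(a, b, q, r, horizon):
    """First-step feedback gain from a finite-horizon Riccati recursion."""
    p = q  # terminal cost weight
    k_gain = 0.0
    for _ in range(horizon):
        k_gain = (a * b * p) / (r + b * b * p)
        p = q + a * a * p - a * b * p * k_gain
    return k_gain

def simulate(a=1.2, b=1.0, q=1.0, r=0.1, horizon=10, steps=30, x0=5.0):
    x = x0
    for _ in range(steps):
        u = -mpc_gain(a, b, q, r, horizon) * x  # apply only the first move
        x = a * x + b * u
    return x

x_final = simulate()  # open-loop-unstable plant (a = 1.2) regulated toward zero
```

    For this unconstrained linear-quadratic case the receding-horizon law reduces to a fixed feedback gain; the constrained and uncertain settings treated in the book require online optimization instead.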

  14. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

    In order to reach robust and simplified yet accurate prediction models, energy-based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA), as well as more elaborate...... principles such as, e.g., wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy-based prediction models are discussed and critically reviewed. Special attention is placed...
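
    As a minimal illustration of the energy-flow principle underlying SEA, the sketch below solves the steady-state power balance for two coupled subsystems (input power equals power dissipated plus net power exchanged). All loss factors and powers are illustrative assumptions, not values from the EN 12354 series.

```python
# Two-subsystem SEA power balance:
#   P1 = omega * ((eta1 + eta12) * E1 - eta21 * E2)
#   P2 = omega * ((eta2 + eta21) * E2 - eta12 * E1)
# where eta1, eta2 are damping loss factors and eta12, eta21 coupling loss factors.

import math

def sea_energies(p1, p2, omega, eta1, eta2, eta12, eta21):
    """Solve the 2x2 energy balance for the subsystem energies (E1, E2)."""
    a11 = omega * (eta1 + eta12)
    a12 = -omega * eta21
    a21 = -omega * eta12
    a22 = omega * (eta2 + eta21)
    det = a11 * a22 - a12 * a21
    e1 = (p1 * a22 - a12 * p2) / det   # Cramer's rule
    e2 = (a11 * p2 - a21 * p1) / det
    return e1, e2

# 1 W injected into subsystem 1 only, at a 1 kHz band centre frequency
e1, e2 = sea_energies(p1=1.0, p2=0.0, omega=2 * math.pi * 1000,
                      eta1=0.01, eta2=0.01, eta12=0.001, eta21=0.002)
```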

  15. Massive Predictive Modeling using Oracle R Enterprise

    CERN Document Server

    CERN. Geneva

    2014-01-01

    R is fast becoming the lingua franca for analyzing data via statistics, visualization, and predictive analytics. For enterprise-scale data, R users have three main concerns: scalability, performance, and production deployment. Oracle's R-based technologies - Oracle R Distribution, Oracle R Enterprise, Oracle R Connector for Hadoop, and the R package ROracle - address these concerns. In this talk, we introduce Oracle's R technologies, highlighting how each enables R users to achieve scalability and performance while making production deployment of R results a natural outcome of the data analyst/scientist efforts. The focus then turns to Oracle R Enterprise with code examples using the transparency layer and embedded R execution, targeting massive predictive modeling. One goal behind massive predictive modeling is to build models per entity, such as customers, zip codes, simulations, in an effort to understand behavior and tailor predictions at the entity level. Predictions...

  16. Liver Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing liver cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  17. Colorectal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  18. Cervical Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing cervical cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  19. Prostate Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing prostate cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  20. Pancreatic Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing pancreatic cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  1. Colorectal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  2. Bladder Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing bladder cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  3. Esophageal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing esophageal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  4. Lung Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing lung cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  5. Breast Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing breast cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  6. Ovarian Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing ovarian cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  7. Testicular Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing testicular cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  8. Posterior Predictive Model Checking in Bayesian Networks

    Science.gov (United States)

    Crawford, Aaron

    2014-01-01

    This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex…
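
    A minimal sketch of the PPMC idea, assuming a toy Poisson model and a variance/mean discrepancy measure rather than the Bayesian-network setting of the study: draw parameters from the posterior, generate replicated data sets, and ask how often the replicated discrepancy is as extreme as the observed one. The data and the conjugate Gamma(1, 1) prior are illustrative assumptions.

```python
# Posterior predictive model check: a posterior predictive p-value near 0 or 1
# flags data-model misfit (here, overdispersion relative to a Poisson model).

import math
import random

random.seed(0)

obs = [0] * 12 + [1] * 4 + [8, 9, 10, 12]   # clearly overdispersed counts

def dispersion(xs):
    """Variance/mean ratio; roughly 1 for Poisson data."""
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return v / m

def poisson_draw(lam):
    """Knuth's method; fine for small lambda."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= random.random()
    return k - 1

n, s = len(obs), sum(obs)
d_obs = dispersion(obs)
reps, extreme = 500, 0
for _ in range(reps):
    lam = random.gammavariate(1 + s, 1.0 / (1 + n))   # posterior draw
    rep = [poisson_draw(lam) for _ in range(n)]
    if dispersion(rep) >= d_obs:
        extreme += 1
ppp = extreme / reps   # posterior predictive p-value
```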

  9. A Course in... Model Predictive Control.

    Science.gov (United States)

    Arkun, Yaman; And Others

    1988-01-01

    Describes a graduate engineering course which specializes in model predictive control. Lists course outline and scope. Discusses some specific topics and teaching methods. Suggests final projects for the students. (MVL)

  10. Equivalency and unbiasedness of grey prediction models

    Institute of Scientific and Technical Information of China (English)

    Bo Zeng; Chuan Li; Guo Chen; Xianjun Long

    2015-01-01

    In order to deeply research the structure discrepancy and modeling mechanism among different grey prediction models, the equivalence and unbiasedness of grey prediction models are analyzed and verified. The results show that all the grey prediction models that are strictly derived from x(0)(k) + az(1)(k) = b have the identical model structure and simulation precision. Moreover, the unbiased simulation for the homogeneous exponential sequence can be accomplished. However, the models derived from dx(1)/dt + ax(1) = b are only close to those derived from x(0)(k) + az(1)(k) = b provided that |a| < 0.1; neither could the unbiased simulation for the homogeneous exponential sequence be achieved. The above conclusions are proved and verified through some theorems and examples.
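
    The two model forms compared above can be illustrated with a minimal GM(1,1) sketch: a and b are estimated by least squares from x(0)(k) + az(1)(k) = b, while prediction uses the solution of the whitening equation dx(1)/dt + ax(1) = b, so a homogeneous exponential sequence is reproduced only approximately (the bias the paper analyzes). The test sequence is an illustrative assumption.

```python
# Classical GM(1,1) grey prediction model.

import math

def gm11(x0):
    n = len(x0)
    x1 = [sum(x0[: i + 1]) for i in range(n)]              # accumulated series
    z1 = [0.5 * (x1[i] + x1[i - 1]) for i in range(1, n)]  # background values
    y = x0[1:]
    # least squares for x0(k) = -a*z1(k) + b (2x2 normal equations)
    m = len(z1)
    szz = sum(z * z for z in z1)
    sz = sum(z1)
    szy = sum(z * v for z, v in zip(z1, y))
    sy = sum(y)
    det = m * szz - sz * sz
    a = -(m * szy - sz * sy) / det
    b = (szz * sy - sz * szy) / det

    def predict(k):
        """x0(k) for 1-based index k, from the whitening-equation solution."""
        x1k = (x0[0] - b / a) * math.exp(-a * (k - 1)) + b / a
        x1km = (x0[0] - b / a) * math.exp(-a * (k - 2)) + b / a
        return x0[0] if k == 1 else x1k - x1km

    return a, b, predict

x0 = [2.0, 2.4, 2.88, 3.456]        # homogeneous exponential: 2 * 1.2**(k-1)
a, b, predict = gm11(x0)
```

    Even though the difference equation fits this exponential sequence exactly, the whitening-equation predictions are slightly biased, consistent with the |a| < 0.1 closeness condition cited above (here |a| is about 0.18).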

  11. Predictability of extreme values in geophysical models

    Directory of Open Access Journals (Sweden)

    A. E. Sterk

    2012-09-01

    Full Text Available Extreme value theory in deterministic systems is concerned with unlikely large (or small) values of an observable evaluated along evolutions of the system. In this paper we study the finite-time predictability of extreme values, such as convection, energy, and wind speeds, in three geophysical models. We study whether finite-time Lyapunov exponents are larger or smaller for initial conditions leading to extremes. General statements on whether extreme values are more or less predictable are not possible: the predictability of extreme values depends on the observable, the attractor of the system, and the prediction lead time.
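
    A minimal sketch of the finite-time Lyapunov exponents used in this kind of study, computed here for the standard Lorenz-63 system by tracking the growth of a small perturbation with periodic renormalisation. The step sizes, integration times and initial condition are illustrative assumptions.

```python
# Finite-time Lyapunov exponent (FTLE) estimate for Lorenz-63.

import math

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(state, dt):
    def add(u, v, s):
        return tuple(a + s * b for a, b in zip(u, v))
    k1 = lorenz(state)
    k2 = lorenz(add(state, k1, dt / 2))
    k3 = lorenz(add(state, k2, dt / 2))
    k4 = lorenz(add(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def ftle(x0, t_total=20.0, dt=0.01, d0=1e-8):
    for _ in range(2000):            # settle onto the attractor first
        x0 = rk4_step(x0, dt)
    x, y = x0, (x0[0] + d0, x0[1], x0[2])
    log_growth = 0.0
    steps = int(t_total / dt)
    for _ in range(steps):
        x, y = rk4_step(x, dt), rk4_step(y, dt)
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
        log_growth += math.log(d / d0)
        # renormalise the perturbation back to size d0
        y = tuple(a + d0 * (b - a) / d for a, b in zip(x, y))
    return log_growth / (steps * dt)

lam = ftle((1.0, 1.0, 1.0))   # roughly 0.9 for the classical parameters
```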

  12. Hybrid modeling and prediction of dynamical systems

    Science.gov (United States)

    Lloyd, Alun L.; Flores, Kevin B.

    2017-01-01

    Scientific analysis often relies on the ability to make accurate predictions of a system’s dynamics. Mechanistic models, parameterized by a number of unknown parameters, are often used for this purpose. Accurate estimation of the model state and parameters prior to prediction is necessary, but may be complicated by issues such as noisy data and uncertainty in parameters and initial conditions. At the other end of the spectrum exist nonparametric methods, which rely solely on data to build their predictions. While these nonparametric methods do not require a model of the system, their performance is strongly influenced by the amount and noisiness of the data. In this article, we consider a hybrid approach to modeling and prediction which merges recent advancements in nonparametric analysis with standard parametric methods. The general idea is to replace a subset of a mechanistic model’s equations with their corresponding nonparametric representations, resulting in a hybrid modeling and prediction scheme. Overall, we find that this hybrid approach allows for more robust parameter estimation and improved short-term prediction in situations where there is a large uncertainty in model parameters. We demonstrate these advantages in the classical Lorenz-63 chaotic system and in networks of Hindmarsh-Rose neurons before application to experimentally collected structured population data. PMID:28692642

  13. Risk terrain modeling predicts child maltreatment.

    Science.gov (United States)

    Daley, Dyann; Bachmann, Michael; Bachmann, Brittany A; Pedigo, Christian; Bui, Minh-Thuy; Coffman, Jamye

    2016-12-01

    As indicated by research on the long-term effects of adverse childhood experiences (ACEs), maltreatment has far-reaching consequences for affected children. Effective prevention measures have been elusive, partly due to difficulty in identifying vulnerable children before they are harmed. This study employs Risk Terrain Modeling (RTM), an analysis of the cumulative effect of environmental factors thought to be conducive for child maltreatment, to create a highly accurate prediction model for future substantiated child maltreatment cases in the City of Fort Worth, Texas. The model is superior to commonly used hotspot predictions and more beneficial in aiding prevention efforts in a number of ways: 1) it identifies the highest risk areas for future instances of child maltreatment with improved precision and accuracy; 2) it aids the prioritization of risk-mitigating efforts by informing about the relative importance of the most significant contributing risk factors; 3) since predictions are modeled as a function of easily obtainable data, practitioners do not have to undergo the difficult process of obtaining official child maltreatment data to apply it; 4) the inclusion of a multitude of environmental risk factors creates a more robust model with higher predictive validity; and, 5) the model does not rely on a retrospective examination of past instances of child maltreatment, but adapts predictions to changing environmental conditions. The present study introduces and examines the predictive power of this new tool to aid prevention efforts seeking to improve the safety, health, and wellbeing of vulnerable children.

  14. Property predictions using microstructural modeling

    Energy Technology Data Exchange (ETDEWEB)

    Wang, K.G. [Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, CII 9219, 110 8th Street, Troy, NY 12180-3590 (United States)]. E-mail: wangk2@rpi.edu; Guo, Z. [Sente Software Ltd., Surrey Technology Centre, 40 Occam Road, Guildford GU2 7YG (United Kingdom); Sha, W. [Metals Research Group, School of Civil Engineering, Architecture and Planning, The Queen' s University of Belfast, Belfast BT7 1NN (United Kingdom); Glicksman, M.E. [Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, CII 9219, 110 8th Street, Troy, NY 12180-3590 (United States); Rajan, K. [Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, CII 9219, 110 8th Street, Troy, NY 12180-3590 (United States)

    2005-07-15

    Precipitation hardening in an Fe-12Ni-6Mn maraging steel during overaging is quantified. First, applying our recent kinetic model of coarsening [Phys. Rev. E, 69 (2004) 061507], and incorporating the Ashby-Orowan relationship, we link quantifiable aspects of the microstructures of these steels to their mechanical properties, including especially the hardness. Specifically, hardness measurements allow calculation of the precipitate size as a function of time and temperature through the Ashby-Orowan relationship. Second, calculated precipitate sizes and thermodynamic data determined with Thermo-Calc® are used with our recent kinetic coarsening model to extract diffusion coefficients during overaging from hardness measurements. Finally, employing more accurate diffusion parameters, we determined the hardness of these alloys independently from theory, and found agreement with experimental hardness data. Diffusion coefficients determined during overaging of these steels are notably higher than those found during aging - an observation suggesting that precipitate growth during aging and precipitate coarsening during overaging are not controlled by the same diffusion mechanism.

  15. Spatial Economics Model Predicting Transport Volume

    Directory of Open Access Journals (Sweden)

    Lu Bo

    2016-10-01

    Full Text Available It is extremely important to predict logistics requirements in a scientific and rational way. However, in recent years the improvement in prediction methods has not been significant, and the traditional statistical prediction method has the defects of low precision and poor interpretability: it can neither guarantee the generalization ability of the prediction model theoretically nor explain the model effectively. Therefore, in combination with the theories of spatial economics, industrial economics, and neo-classical economics, and taking the city of Zhuanghe as the research object, the study identifies the leading industries that can produce a large number of cargoes, and further predicts the static logistics generation of Zhuanghe and its hinterland. By integrating various factors that can affect regional logistics requirements, this study established a logistics requirements potential model based on spatial economic principles, and expanded logistics requirements prediction from single statistical principles to the new area of spatial and regional economics.

  16. Modeling and Prediction Using Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Juhl, Rune; Møller, Jan Kloppenborg; Jørgensen, John Bagterp

    2016-01-01

    Pharmacokinetic/pharmacodynamic (PK/PD) modeling for a single subject is most often performed using nonlinear models based on deterministic ordinary differential equations (ODEs), and the variation between subjects in a population of subjects is described using a population (mixed effects) setup...... that describes the variation between subjects. The ODE setup implies that the variation for a single subject is described by a single parameter (or vector), namely the variance (covariance) of the residuals. Furthermore the prediction of the states is given as the solution to the ODEs and hence assumed...... deterministic and can predict the future perfectly. A more realistic approach would be to allow for randomness in the model due to, e.g., the model being too simple or errors in the input. We describe a modeling and prediction setup which better reflects reality and suggests stochastic differential equations (SDEs...
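
    A minimal sketch of the SDE approach, simulating an Ornstein-Uhlenbeck process with the Euler-Maruyama scheme, the simplest way randomness in the dynamics (rather than only in the residuals) can be allowed for. The process and all parameter values are illustrative assumptions, not the paper's PK/PD model.

```python
# Euler-Maruyama simulation of the SDE dx = theta*(mu - x) dt + sigma dW.

import math
import random

random.seed(1)

def simulate_ou(theta=2.0, mu=1.0, sigma=0.2, dt=0.01, n=5000, x0=0.0):
    x, path = x0, []
    for _ in range(n):
        dw = random.gauss(0.0, math.sqrt(dt))   # Brownian increment
        x += theta * (mu - x) * dt + sigma * dw  # drift + diffusion
        path.append(x)
    return path

path = simulate_ou()
long_run_mean = sum(path[1000:]) / len(path[1000:])   # after burn-in
```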

  17. Precision Plate Plan View Pattern Predictive Model

    Institute of Scientific and Technical Information of China (English)

    ZHAO Yang; YANG Quan; HE An-rui; WANG Xiao-chen; ZHANG Yun

    2011-01-01

    According to the rolling features of a plate mill, a 3D elastic-plastic FEM (finite element model) based on the full restart method of ANSYS/LS-DYNA was established to study the inhomogeneous plastic deformation of multipass plate rolling. By analyzing the simulation results, the difference between the head-end and tail-end predictive models was found and corrected. Based on the numerical simulation results of 120 different conditions, a precision plate plan view pattern predictive model was established. Based on these models, the sizing MAS (Mizushima automatic plan view pattern control system) method was designed and used on a 2 800 mm plate mill. Comparing plates rolled with and without the PVPP (plan view pattern predictive) model, the reduced width deviation indicates that the plate plan view pattern predictive model is precise.

  18. NBC Hazard Prediction Model Capability Analysis

    Science.gov (United States)

    1999-09-01

    Puff (SCIPUFF) Model Verification and Evaluation Study, Air Resources Laboratory, NOAA, May 1998. Based on the NOAA review, the VLSTRACK developers... TO SUBSTANTIAL DIFFERENCES IN PREDICTIONS... HPAC uses a transport and dispersion (T&D) model called SCIPUFF and an associated mean wind field model... SCIPUFF is a model for atmospheric dispersion that uses the Gaussian puff method - an arbitrary time-dependent concentration field is represented
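
    The Gaussian puff method mentioned above can be sketched for a single puff: a released mass spreads with Gaussian widths while being advected by the mean wind, and the full concentration field is a sum over puffs. The formula is the standard textbook one and all parameter values are illustrative assumptions, not HPAC or SCIPUFF internals.

```python
# Concentration from one Gaussian puff of mass q released at the origin,
# advected downwind at speed u, with spreads (sx, sy, sz).

import math

def puff_concentration(x, y, z, t, q=1.0, u=5.0, sx=10.0, sy=10.0, sz=5.0):
    norm = q / ((2 * math.pi) ** 1.5 * sx * sy * sz)
    return norm * math.exp(-((x - u * t) ** 2) / (2 * sx ** 2)
                           - (y ** 2) / (2 * sy ** 2)
                           - (z ** 2) / (2 * sz ** 2))

c_centre = puff_concentration(50.0, 0.0, 0.0, t=10.0)   # at the puff centre
c_offset = puff_concentration(50.0, 30.0, 0.0, t=10.0)  # 30 m crosswind
```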

  19. Distinct transmissibility features of TSE sources derived from ruminant prion diseases by the oral route in a transgenic mouse model (TgOvPrP4) overexpressing the ovine prion protein.

    Directory of Open Access Journals (Sweden)

    Jean-Noël Arsac

    Full Text Available Transmissible spongiform encephalopathies (TSEs) are a group of fatal neurodegenerative diseases associated with a misfolded form of host-encoded prion protein (PrP). Some of them, such as classical bovine spongiform encephalopathy in cattle (BSE), transmissible mink encephalopathy (TME), kuru and variant Creutzfeldt-Jakob disease in humans, are acquired by oral-route exposure to infected tissues. We investigated the possible transmission by the oral route of a panel of strains derived from ruminant prion diseases in a transgenic mouse model (TgOvPrP4) overexpressing the ovine prion protein (A136R154Q171) under the control of the neuron-specific enolase promoter. Sources derived from Nor98, CH1641 or 87V scrapie sources, as well as sources derived from L-type BSE or cattle-passaged TME, failed to transmit by the oral route, whereas those derived from classical BSE and classical scrapie were successfully transmitted. Apart from a possible effect of the passage history of the TSE agent in the inocula, this implied the occurrence of subtle molecular changes in the protease-resistant prion protein (PrPres) following oral transmission that raises concerns about our ability to correctly identify sheep that might be orally infected by the BSE agent in the field. Our results provide proof of principle that transgenic mouse models can be used to examine the transmissibility of TSE agents by the oral route, providing novel insights regarding the pathogenesis of prion diseases.

  20. Carbon fluxes in ecosystems of Yellowstone National Park predicted from remote sensing data and simulation modeling

    Directory of Open Access Journals (Sweden)

    Huang Shengli

    2011-08-01

    Full Text Available Abstract Background A simulation model based on remote sensing data for spatial vegetation properties has been used to estimate ecosystem carbon fluxes across Yellowstone National Park (YNP). The CASA (Carnegie Ames Stanford Approach) model was applied at a regional scale to estimate seasonal and annual carbon fluxes as net primary production (NPP) and soil respiration components. Predicted net ecosystem production (NEP) flux of CO2 is estimated from the model for carbon sinks and sources over multi-year periods that varied in climate and (wildfire) disturbance histories. Monthly Enhanced Vegetation Index (EVI) image coverages from the NASA Moderate Resolution Imaging Spectroradiometer (MODIS) instrument (from 2000 to 2006) were direct inputs to the model. New map products have been added to CASA from airborne remote sensing of coarse woody debris (CWD) in areas burned by wildfires over the past two decades. Results Model results indicated that relatively cooler and wetter summer growing seasons were the most favorable for annual plant production and net ecosystem carbon gains in representative landscapes of YNP. When summed across vegetation class areas, the predominance of evergreen forest and shrubland (sagebrush) cover was evident, with these two classes together accounting for 88% of the total annual NPP flux of 2.5 Tg C yr-1 (1 Tg = 10^12 g) for the entire Yellowstone study area from 2000-2006. Most vegetation classes were estimated as net ecosystem sinks of atmospheric CO2 on an annual basis, making the entire study area a moderate net sink of about +0.13 Tg C yr-1. This average sink value for forested lands nonetheless masks the contribution of areas burned during the 1988 wildfires, which were estimated as net sources of CO2 to the atmosphere, totaling to a NEP flux of -0.04 Tg C yr-1 for the entire burned area. Several areas burned in the 1988 wildfires were estimated to be among the lowest in overall yearly NPP, namely the Hellroaring Fire, Mink

  1. Carbon fluxes in ecosystems of Yellowstone National Park predicted from remote sensing data and simulation modeling.

    Science.gov (United States)

    Potter, Christopher; Klooster, Steven; Crabtree, Robert; Huang, Shengli; Gross, Peggy; Genovese, Vanessa

    2011-08-11

    A simulation model based on remote sensing data for spatial vegetation properties has been used to estimate ecosystem carbon fluxes across Yellowstone National Park (YNP). The CASA (Carnegie Ames Stanford Approach) model was applied at a regional scale to estimate seasonal and annual carbon fluxes as net primary production (NPP) and soil respiration components. Predicted net ecosystem production (NEP) flux of CO2 is estimated from the model for carbon sinks and sources over multi-year periods that varied in climate and (wildfire) disturbance histories. Monthly Enhanced Vegetation Index (EVI) image coverages from the NASA Moderate Resolution Imaging Spectroradiometer (MODIS) instrument (from 2000 to 2006) were direct inputs to the model. New map products have been added to CASA from airborne remote sensing of coarse woody debris (CWD) in areas burned by wildfires over the past two decades. Model results indicated that relatively cooler and wetter summer growing seasons were the most favorable for annual plant production and net ecosystem carbon gains in representative landscapes of YNP. When summed across vegetation class areas, the predominance of evergreen forest and shrubland (sagebrush) cover was evident, with these two classes together accounting for 88% of the total annual NPP flux of 2.5 Tg C yr-1 (1 Tg = 10^12 g) for the entire Yellowstone study area from 2000-2006. Most vegetation classes were estimated as net ecosystem sinks of atmospheric CO2 on an annual basis, making the entire study area a moderate net sink of about +0.13 Tg C yr-1. This average sink value for forested lands nonetheless masks the contribution of areas burned during the 1988 wildfires, which were estimated as net sources of CO2 to the atmosphere, totaling to a NEP flux of -0.04 Tg C yr-1 for the entire burned area. Several areas burned in the 1988 wildfires were estimated to be among the lowest in overall yearly NPP, namely the Hellroaring Fire, Mink Fire, and Falls Fire areas. Rates of

  2. Corporate prediction models, ratios or regression analysis?

    NARCIS (Netherlands)

    Bijnen, E.J.; Wijn, M.F.C.M.

    1994-01-01

    The models developed in the literature with respect to the prediction of a company's failure are based on ratios. It has been shown before that these models should be rejected on theoretical grounds. Our study of industrial companies in the Netherlands shows that the ratios which are used in

  3. Modelling Chemical Reasoning to Predict Reactions

    CERN Document Server

    Segler, Marwin H S

    2016-01-01

    The ability to reason beyond established knowledge allows Organic Chemists to solve synthetic problems and to invent novel transformations. Here, we propose a model which mimics chemical reasoning and formalises reaction prediction as finding missing links in a knowledge graph. We have constructed a knowledge graph containing 14.4 million molecules and 8.2 million binary reactions, which represents the bulk of all chemical reactions ever published in the scientific literature. Our model outperforms a rule-based expert system in the reaction prediction task for 180,000 randomly selected binary reactions. We show that our data-driven model generalises even beyond known reaction types, and is thus capable of effectively (re-) discovering novel transformations (even including transition-metal catalysed reactions). Our model enables computers to infer hypotheses about reactivity and reactions by only considering the intrinsic local structure of the graph, and because each single reaction prediction is typically ac...
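
    The missing-link formulation can be illustrated with a toy common-neighbours heuristic on a small graph: score a candidate pair by how many reaction partners they share. This is a generic link-prediction baseline under illustrative assumptions, not the paper's actual model or its 14.4-million-molecule knowledge graph.

```python
# Score candidate "reacts with" links by counting shared neighbours.

# undirected edges between (hypothetical) molecule identifiers
edges = [("A", "B"), ("A", "C"), ("B", "C"), ("B", "D"), ("C", "D"), ("E", "A")]

neighbours = {}
for u, v in edges:
    neighbours.setdefault(u, set()).add(v)
    neighbours.setdefault(v, set()).add(u)

def score(u, v):
    """Number of shared neighbours; higher suggests a plausible missing link."""
    return len(neighbours.get(u, set()) & neighbours.get(v, set()))
```

    Here the unobserved pair ("A", "D") shares two partners and so ranks above ("E", "D"), which shares none; the paper's data-driven model generalises this idea to learned structural patterns in the graph.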

  4. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico

    2009-01-01

    The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.

  5. Genetic models of homosexuality: generating testable predictions

    OpenAIRE

    Gavrilets, Sergey; Rice, William R.

    2006-01-01

    Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality inclu...

  6. Wind farm production prediction - The Zephyr model

    Energy Technology Data Exchange (ETDEWEB)

    Landberg, L. [Risoe National Lab., Wind Energy Dept., Roskilde (Denmark); Giebel, G. [Risoe National Lab., Wind Energy Dept., Roskilde (Denmark); Madsen, H. [IMM (DTU), Kgs. Lyngby (Denmark); Nielsen, T.S. [IMM (DTU), Kgs. Lyngby (Denmark); Joergensen, J.U. [Danish Meteorologisk Inst., Copenhagen (Denmark); Lauersen, L. [Danish Meteorologisk Inst., Copenhagen (Denmark); Toefting, J. [Elsam, Fredericia (DK); Christensen, H.S. [Eltra, Fredericia (Denmark); Bjerge, C. [SEAS, Haslev (Denmark)

    2002-06-01

    This report describes a project - funded by the Danish Ministry of Energy and the Environment - which developed a next-generation prediction system called Zephyr. The Zephyr system is a merger of two state-of-the-art prediction systems: Prediktor of Risoe National Laboratory and WPPT of IMM at the Danish Technical University. The numerical weather predictions were generated by DMI's HIRLAM model. Due to technical difficulties in programming the system, only the computational core and a very simple version of the originally very complex system were developed. The project partners were: Risoe, DMU, DMI, Elsam, Eltra, Elkraft System, SEAS and E2. (au)

  7. Predictive model for segmented poly(urea)

    Directory of Open Access Journals (Sweden)

    Frankl P.

    2012-08-01

    Full Text Available Segmented poly(urea) has been shown to be of significant benefit in protecting vehicles from blast and impact, and there have been several experimental studies to determine the mechanisms by which this protective function might occur. One suggested route is mechanical activation of the glass transition. In order to enable design of protective structures using this material, a constitutive model and equation of state are needed for numerical simulation in hydrocodes. Determination of such a predictive model may also help elucidate the beneficial mechanisms that occur in polyurea during high-rate loading. The tool deployed to do this has been Group Interaction Modelling (GIM), a mean-field technique that has been shown to predict the mechanical and physical properties of polymers from their structure alone. The structure of polyurea has been used to characterise the parameters in the GIM scheme without recourse to experimental data, and the resulting equation of state and constitutive model predict response over a wide range of temperatures and strain rates. The shock Hugoniot has been predicted and validated against existing data. Mechanical response in tensile tests has also been predicted and validated.

  8. PREDICTIVE CAPACITY OF ARCH FAMILY MODELS

    Directory of Open Access Journals (Sweden)

    Raphael Silveira Amaro

    2016-03-01

    Full Text Available In the last decades, a remarkable number of models, variants of the Autoregressive Conditional Heteroscedastic family, have been developed and empirically tested, making the process of choosing a particular model extremely complex. This research aims to compare the predictive capacity, using the Model Confidence Set procedure, of five conditional heteroscedasticity models, considering eight different statistical probability distributions. The financial series used refer to the log-return series of the Bovespa index and the Dow Jones Industrial Index in the period between 27 October 2008 and 30 December 2014. The empirical evidence showed that, in general, the competing models are highly homogeneous in their predictions, whether for the stock market of a developed country or for that of a developing country. An equivalent result can be inferred for the statistical probability distributions that were used.
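    As a minimal illustration of the conditional-heteroscedasticity family compared above, a GARCH(1,1) process can be simulated in a few lines; the parameter values are arbitrary illustrative choices, not estimates from the Bovespa or Dow Jones series:

    ```python
    import numpy as np

    # Illustrative GARCH(1,1): sigma2_t = omega + alpha*r_{t-1}^2 + beta*sigma2_{t-1}.
    # Parameters are invented for demonstration, not fitted to the paper's data.
    rng = np.random.default_rng(0)
    omega, alpha, beta = 0.05, 0.1, 0.85   # alpha + beta < 1 => covariance-stationary
    n = 2000
    r = np.empty(n)
    sigma2 = np.empty(n)
    sigma2[0] = omega / (1.0 - alpha - beta)        # unconditional variance
    r[0] = rng.normal(scale=np.sqrt(sigma2[0]))
    for t in range(1, n):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
        r[t] = rng.normal(scale=np.sqrt(sigma2[t]))

    # Volatility clustering: squared returns are autocorrelated although returns are not.
    def autocorr(x, lag=1):
        x = x - x.mean()
        return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

    print(autocorr(r ** 2), autocorr(r))
    ```

    The lag-1 autocorrelation of squared returns is clearly positive while that of the returns themselves is near zero, which is the stylised fact ARCH-family models are built to capture.
    
    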

  9. Predictive QSAR modeling of phosphodiesterase 4 inhibitors.

    Science.gov (United States)

    Kovalishyn, Vasyl; Tanchuk, Vsevolod; Charochkina, Larisa; Semenuta, Ivan; Prokopenko, Volodymyr

    2012-02-01

    A series of diverse organic compounds, phosphodiesterase type 4 (PDE-4) inhibitors, have been modeled using a QSAR-based approach. 48 QSAR models were compared by following the same procedure with different combinations of descriptors and machine learning methods. QSAR methodologies used random forests and associative neural networks. The predictive ability of the models was tested through leave-one-out cross-validation, giving a Q² = 0.66-0.78 for regression models and total accuracies Ac=0.85-0.91 for classification models. Predictions for the external evaluation sets obtained accuracies in the range of 0.82-0.88 (for active/inactive classifications) and Q² = 0.62-0.76 for regressions. The method showed itself to be a potential tool for estimation of IC₅₀ of new drug-like candidates at early stages of drug development. Copyright © 2011 Elsevier Inc. All rights reserved.
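    The leave-one-out cross-validation used to test predictive ability can be sketched as follows; a 1-nearest-neighbour classifier on synthetic descriptors stands in for the paper's random forests and neural networks, since no real PDE-4 data are reproduced here:

    ```python
    import numpy as np

    # Leave-one-out cross-validation sketch. A 1-nearest-neighbour classifier is a
    # stand-in model; descriptors X and activity labels y are synthetic.
    rng = np.random.default_rng(1)
    n, d = 100, 4
    X = rng.normal(size=(n, d))            # molecular descriptors (synthetic)
    y = (X[:, 0] > 0).astype(int)          # "active" vs "inactive" label

    correct = 0
    for i in range(n):
        dist = np.linalg.norm(X - X[i], axis=1)
        dist[i] = np.inf                   # leave sample i out of its own prediction
        correct += int(y[int(np.argmin(dist))] == y[i])
    loo_accuracy = correct / n
    print(f"LOO accuracy: {loo_accuracy:.2f}")
    ```
    
    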

  10. Aging of Dielectric Properties below Tg

    DEFF Research Database (Denmark)

    Olsen, Niels Boye; Dyre, Jeppe; Christensen, Tage Emil

    The dielectric loss at 1 Hz in TPP is studied during a temperature step from one equilibrium state to another. In the applied cryostat the temperature can be equilibrated on a timescale of 1 second. The aging time dependence of the dielectric loss is studied below Tg applying temperature steps...

  11. Modelling the predictive performance of credit scoring

    Directory of Open Access Journals (Sweden)

    Shi-Wei Shen

    2013-02-01

    Full Text Available Orientation: The article discussed the importance of rigour in credit risk assessment. Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: A goodness-of-fit test demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI) together with micro- and macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality of the study is that three models were developed to predict corporate firms' defaults based on different microeconomic and macroeconomic factors, such as the TCRI, asset growth rates, the stock index and gross domestic product. Contribution/value-add: The study utilises different goodness-of-fit measures and receiver operating characteristics to examine the robustness of the predictive power of these factors.
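    The logistic-regression approach to default prediction can be sketched on synthetic data; the covariate names (a TCRI-like index, a macroeconomic variable) and all coefficients are invented for illustration:

    ```python
    import numpy as np

    # Minimal logistic regression for default prediction, fitted by gradient ascent
    # on the log-likelihood. Data and coefficients are synthetic, not from the study.
    rng = np.random.default_rng(2)
    n = 500
    risk_index = rng.normal(size=n)        # e.g. a TCRI-like credit risk score
    gdp_growth = rng.normal(size=n)        # e.g. a macroeconomic covariate
    X = np.column_stack([np.ones(n), risk_index, gdp_growth])
    true_beta = np.array([-1.0, 1.5, -0.8])
    p = 1.0 / (1.0 + np.exp(-X @ true_beta))
    y = (rng.random(n) < p).astype(float)  # 1 = default

    beta = np.zeros(3)
    for _ in range(2000):                  # plain gradient ascent, step 0.1
        mu = 1.0 / (1.0 + np.exp(-X @ beta))
        beta += 0.1 * X.T @ (y - mu) / n

    pred = (1.0 / (1.0 + np.exp(-X @ beta)) > 0.5).astype(float)
    accuracy = float((pred == y).mean())
    print(beta, accuracy)
    ```
    
    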

  12. Calibrated predictions for multivariate competing risks models.

    Science.gov (United States)

    Gorfine, Malka; Hsu, Li; Zucker, David M; Parmigiani, Giovanni

    2014-04-01

    Prediction models for time-to-event data play a prominent role in assessing the individual risk of a disease, such as cancer. Accurate disease prediction models provide an efficient tool for identifying individuals at high risk, and provide the groundwork for estimating the population burden and cost of disease and for developing patient care guidelines. We focus on risk prediction of a disease in which family history is an important risk factor that reflects inherited genetic susceptibility, shared environment, and common behavior patterns. In this work family history is accommodated using frailty models, with the main novel feature being allowing for competing risks, such as other diseases or mortality. We show through a simulation study that naively treating competing risks as independent right censoring events results in non-calibrated predictions, with the expected number of events overestimated. Discrimination performance is not affected by ignoring competing risks. Our proposed prediction methodologies correctly account for competing events, are very well calibrated, and easy to implement.
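    The calibration point, that treating a competing event as independent censoring overestimates the expected number of events, can be checked with a small Monte-Carlo sketch; the exponential latent-time setup is an illustrative assumption, not the paper's frailty model:

    ```python
    import numpy as np

    # Competing risks: disease (cause 1) vs death (cause 2), exponential hazards.
    rng = np.random.default_rng(3)
    lam_disease, lam_death = 0.10, 0.15          # cause-specific hazards (per year)
    n, horizon = 100_000, 5.0                    # 5-year risk

    t_disease = rng.exponential(1.0 / lam_disease, n)
    t_death = rng.exponential(1.0 / lam_death, n)
    t_obs = np.minimum(t_disease, t_death)
    cause = np.where(t_disease <= t_death, 1, 2)

    # True (empirical) cumulative incidence of disease by the horizon:
    true_risk = float(np.mean((t_obs <= horizon) & (cause == 1)))

    # Naive analysis: estimate the disease hazard treating death as censoring
    # (exponential MLE), then predict risk as if no one could die first.
    events = int(np.sum((t_obs <= horizon) & (cause == 1)))
    time_at_risk = float(np.sum(np.minimum(t_obs, horizon)))
    naive_risk = float(1.0 - np.exp(-(events / time_at_risk) * horizon))

    print(true_risk, naive_risk)   # naive_risk exceeds true_risk
    ```

    The naive marginal prediction is systematically too high because some subjects die before they can develop the disease, which is exactly the miscalibration the proposed methodology corrects.
    
    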

  13. Modelling language evolution: Examples and predictions.

    Science.gov (United States)

    Gong, Tao; Shuai, Lan; Zhang, Menghan

    2014-06-01

    We survey recent computer modelling research of language evolution, focusing on a rule-based model simulating the lexicon-syntax coevolution and an equation-based model quantifying the language competition dynamics. We discuss four predictions of these models: (a) correlation between domain-general abilities (e.g. sequential learning) and language-specific mechanisms (e.g. word order processing); (b) coevolution of language and relevant competences (e.g. joint attention); (c) effects of cultural transmission and social structure on linguistic understandability; and (d) commonalities between linguistic, biological, and physical phenomena. All these contribute significantly to our understanding of the evolutions of language structures, individual learning mechanisms, and relevant biological and socio-cultural factors. We conclude the survey by highlighting three future directions of modelling studies of language evolution: (a) adopting experimental approaches for model evaluation; (b) consolidating empirical foundations of models; and (c) multi-disciplinary collaboration among modelling, linguistics, and other relevant disciplines.

  15. Global Solar Dynamo Models: Simulations and Predictions

    Indian Academy of Sciences (India)

    Mausumi Dikpati; Peter A. Gilman

    2008-03-01

    Flux-transport type solar dynamos have achieved considerable success in correctly simulating many solar cycle features, and are now being used for prediction of solar cycle timing and amplitude.We first define flux-transport dynamos and demonstrate how they work. The essential added ingredient in this class of models is meridional circulation, which governs the dynamo period and also plays a crucial role in determining the Sun’s memory about its past magnetic fields.We show that flux-transport dynamo models can explain many key features of solar cycles. Then we show that a predictive tool can be built from this class of dynamo that can be used to predict mean solar cycle features by assimilating magnetic field data from previous cycles.

  16. Model Predictive Control of Sewer Networks

    Science.gov (United States)

    Pedersen, Einar B.; Herbertsson, Hannes R.; Niemann, Henrik; Poulsen, Niels K.; Falk, Anne K. V.

    2017-01-01

    The developments in solutions for management of urban drainage are of vital importance, as the amount of sewer water from urban areas continues to increase due to the growth of the world's population and changing climate conditions. How a sewer network is structured, monitored and controlled has thus become an essential factor for efficient performance of waste water treatment plants. This paper examines methods for simplified modelling and control of a sewer network. A practical approach to the problem is used by analysing a simplified design model, which is based on the Barcelona benchmark model. Due to the inherent constraints, the applied approach is based on Model Predictive Control.
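    A receding-horizon loop of the kind described can be sketched for a toy scalar system; the dynamics, horizon and weights below are invented, and a real sewer model such as the Barcelona benchmark is far richer:

    ```python
    import numpy as np

    # Minimal unconstrained MPC sketch for a scalar linear "storage" system
    # x_{k+1} = a*x_k + b*u_k, tracking a reference level x_ref.
    a, b = 0.9, 0.5
    N = 10                 # prediction horizon
    rho = 0.1              # input penalty
    x_ref = 1.0

    def mpc_step(x0):
        # Stack the horizon into one least-squares problem:
        # x_{k+1} = a^{k+1} x0 + sum_{j<=k} a^{k-j} b u_j;
        # minimise sum (x_k - x_ref)^2 + rho * sum u_j^2.
        G = np.zeros((N, N))
        for k in range(N):
            for j in range(k + 1):
                G[k, j] = a ** (k - j) * b
        free = np.array([a ** (k + 1) * x0 for k in range(N)])
        A = np.vstack([G, np.sqrt(rho) * np.eye(N)])
        rhs = np.concatenate([x_ref - free, np.zeros(N)])
        u = np.linalg.lstsq(A, rhs, rcond=None)[0]
        return u[0]        # apply only the first move, then recede

    x = 0.0
    for _ in range(30):
        x = a * x + b * mpc_step(x)
    print(x)               # settles near x_ref
    ```

    Only the first optimised input is applied at each step; re-solving at the next sample is what makes the scheme a receding-horizon (model predictive) controller.
    
    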

  17. DKIST Polarization Modeling and Performance Predictions

    Science.gov (United States)

    Harrington, David

    2016-05-01

    Calibrating the Mueller matrices of large aperture telescopes and associated coude instrumentation requires astronomical sources and several modeling assumptions to predict the behavior of the system polarization with field of view, altitude, azimuth and wavelength. The Daniel K Inouye Solar Telescope (DKIST) polarimetric instrumentation requires very high accuracy calibration of a complex coude path with an off-axis f/2 primary mirror, time dependent optical configurations and substantial field of view. Polarization predictions across a diversity of optical configurations, tracking scenarios, slit geometries and vendor coating formulations are critical to both construction and continued operations efforts. Recent daytime sky based polarization calibrations of the 4m AEOS telescope and HiVIS spectropolarimeter on Haleakala have provided system Mueller matrices over full telescope articulation for a 15-reflection coude system. AEOS and HiVIS are a DKIST analog with a many-fold coude optical feed and similar mirror coatings creating 100% polarization cross-talk with altitude, azimuth and wavelength. Polarization modeling predictions using Zemax have successfully matched the altitude-azimuth-wavelength dependence on HiVIS to within the few-percent amplitude limitations of several instrument artifacts. Polarization predictions for coude beam paths depend greatly on modeling the angle-of-incidence dependences in powered optics and the mirror coating formulations. A 6 month HiVIS daytime sky calibration plan has been analyzed for accuracy under a wide range of sky conditions and data analysis algorithms. Predictions of polarimetric performance for the DKIST first-light instrumentation suite have been created under a range of configurations. These new modeling tools and polarization predictions have substantial impact for the design, fabrication and calibration process in the presence of manufacturing issues, science use-case requirements and ultimate system calibration.
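    The cross-talk mechanism, polarization mixing introduced by rotated fold mirrors, can be illustrated by composing Mueller matrices; the diattenuation and retardance values below are made-up stand-ins for real coating data:

    ```python
    import numpy as np

    # Sketch: two fold mirrors with a rotation between them mix Stokes Q into U and V.
    # Matrix forms are the standard ideal-mirror and frame-rotation Mueller matrices;
    # d (diattenuation) and delta (retardance) are invented example values.
    def mirror_mueller(d=0.05, delta=np.deg2rad(170)):
        c, s = np.cos(delta), np.sin(delta)
        r = np.sqrt(1 - d * d)
        return np.array([
            [1, d, 0, 0],
            [d, 1, 0, 0],
            [0, 0, r * c, r * s],
            [0, 0, -r * s, r * c],
        ])

    def rotation_mueller(theta):
        c2, s2 = np.cos(2 * theta), np.sin(2 * theta)
        return np.array([
            [1, 0, 0, 0],
            [0, c2, s2, 0],
            [0, -s2, c2, 0],
            [0, 0, 0, 1],
        ])

    # Two mirrors with a 30-degree relative rotation (e.g. an altitude axis between folds).
    M = mirror_mueller() @ rotation_mueller(np.deg2rad(30)) @ mirror_mueller()
    s_in = np.array([1.0, 1.0, 0.0, 0.0])   # purely Q-polarized input beam
    s_out = M @ s_in
    print(s_out)                             # power leaks into U and V: cross-talk
    ```

    The output Stokes vector has substantial U and V components even though the input was pure Q, which is why system Mueller matrices must be calibrated as a function of telescope articulation.
    
    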

  18. Modelling Chemical Reasoning to Predict Reactions

    OpenAIRE

    Segler, Marwin H. S.; Waller, Mark P.

    2016-01-01

    The ability to reason beyond established knowledge allows Organic Chemists to solve synthetic problems and to invent novel transformations. Here, we propose a model which mimics chemical reasoning and formalises reaction prediction as finding missing links in a knowledge graph. We have constructed a knowledge graph containing 14.4 million molecules and 8.2 million binary reactions, which represents the bulk of all chemical reactions ever published in the scientific literature. Our model outpe...

  19. Predictive Modeling of the CDRA 4BMS

    Science.gov (United States)

    Coker, Robert; Knox, James

    2016-01-01

    Fully predictive models of the Four Bed Molecular Sieve of the Carbon Dioxide Removal Assembly on the International Space Station are being developed. This virtual laboratory will be used to help reduce mass, power, and volume requirements for future missions. In this paper we describe current and planned modeling developments in the area of carbon dioxide removal to support future crewed Mars missions as well as the resolution of anomalies observed in the ISS CDRA.

  20. Raman Model Predicting Hardness of Covalent Crystals

    OpenAIRE

    Zhou, Xiang-Feng; Qian, Quang-Rui; Sun, Jian; Tian, Yongjun; Wang, Hui-Tian

    2009-01-01

    Based on the fact that both hardness and vibrational Raman spectrum depend on the intrinsic property of chemical bonds, we propose a new theoretical model for predicting hardness of a covalent crystal. The quantitative relationship between hardness and vibrational Raman frequencies deduced from the typical zincblende covalent crystals is validated to be also applicable for the complex multicomponent crystals. This model enables us to nondestructively and indirectly characterize the hardness o...

  1. Introducing Human APOE into Aβ Transgenic Mouse Models

    Directory of Open Access Journals (Sweden)

    Leon M. Tai

    2011-01-01

    Full Text Available Apolipoprotein E (apoE) and apoE/amyloid-β (Aβ) transgenic (Tg) mouse models are critical to understanding apoE-isoform effects on Alzheimer's disease risk. Compared to wild type, apoE−/− mice exhibit neuronal deficits, similar to apoE4-Tg compared to apoE3-Tg mice, providing a model for Aβ-independent apoE effects on neurodegeneration. To determine the effects of apoE on Aβ-induced neuropathology, apoE−/− mice were crossed with Aβ-Tg mice, resulting in a significant delay in plaque deposition. Surprisingly, crossing human-apoE-Tg mice with apoE−/−/Aβ-Tg mice further delayed plaque deposition, which eventually developed in apoE4/Aβ-Tg mice prior to apoE3/Aβ-Tg. One approach to address the hAPOE-induced temporal delay in Aβ pathology is an additional insult, like head injury. Another is crossing human-apoE-Tg mice with Aβ-Tg mice that have rapid-onset Aβ pathology. For example, because 5xFAD mice develop plaques by 2 months, the prediction is that human-apoE/5xFAD-Tg mice develop plaques around 6 months, 12 months before other human-apoE/Aβ-Tg mice. Thus, tractable models for human-apoE/Aβ-Tg mice continue to evolve.

  2. Predictive Modelling of Mycotoxins in Cereals

    NARCIS (Netherlands)

    Fels, van der H.J.; Liu, C.

    2015-01-01

    This article presents the summaries of the presentations given at the 30th meeting of the Werkgroep Fusarium. The topics are: Predictive Modelling of Mycotoxins in Cereals; Microbial degradation of DON; Exposure to green leaf volatiles primes wheat against FHB but boosts

  3. Unreachable Setpoints in Model Predictive Control

    DEFF Research Database (Denmark)

    Rawlings, James B.; Bonné, Dennis; Jørgensen, John Bagterp

    2008-01-01

    steady state is established for terminal constraint model predictive control (MPC). The region of attraction is the steerable set. Existing analysis methods for closed-loop properties of MPC are not applicable to this new formulation, and a new analysis method is developed. It is shown how to extend...

  5. Prediction modelling for population conviction data

    NARCIS (Netherlands)

    Tollenaar, N.

    2017-01-01

    In this thesis, the possibilities of using prediction models for judicial penal case data are investigated. The development and refinement of a risk taxation scale based on these data is discussed. When false positives are weighted as severely as false negatives, 70% of cases can be classified correctly.

  6. A Predictive Model for MSSW Student Success

    Science.gov (United States)

    Napier, Angela Michele

    2011-01-01

    This study tested a hypothetical model for predicting both graduate GPA and graduation of University of Louisville Kent School of Social Work Master of Science in Social Work (MSSW) students entering the program during the 2001-2005 school years. The preexisting characteristics of demographics, academic preparedness and culture shock along with…

  7. Predictability of extreme values in geophysical models

    NARCIS (Netherlands)

    Sterk, A.E.; Holland, M.P.; Rabassa, P.; Broer, H.W.; Vitolo, R.

    2012-01-01

    Extreme value theory in deterministic systems is concerned with unlikely large (or small) values of an observable evaluated along evolutions of the system. In this paper we study the finite-time predictability of extreme values, such as convection, energy, and wind speeds, in three geophysical model

  8. A revised prediction model for natural conception

    NARCIS (Netherlands)

    Bensdorp, A.J.; Steeg, J.W. van der; Steures, P.; Habbema, J.D.; Hompes, P.G.; Bossuyt, P.M.; Veen, F. van der; Mol, B.W.; Eijkemans, M.J.; Kremer, J.A.M.; et al.,

    2017-01-01

    One of the aims in reproductive medicine is to differentiate between couples that have favourable chances of conceiving naturally and those that do not. Since the development of the prediction model of Hunault, characteristics of the subfertile population have changed. The objective of this analysis

  9. Distributed Model Predictive Control via Dual Decomposition

    DEFF Research Database (Denmark)

    Biegel, Benjamin; Stoustrup, Jakob; Andersen, Palle

    2014-01-01

    This chapter presents dual decomposition as a means to coordinate a number of subsystems coupled by state and input constraints. Each subsystem is equipped with a local model predictive controller while a centralized entity manages the subsystems via prices associated with the coupling constraints...

  11. Leptogenesis in minimal predictive seesaw models

    CERN Document Server

    Björkeroth, Fredrik; Varzielas, Ivo de Medeiros; King, Stephen F

    2015-01-01

    We estimate the Baryon Asymmetry of the Universe (BAU) arising from leptogenesis within a class of minimal predictive seesaw models involving two right-handed neutrinos and simple Yukawa structures with one texture zero. The two right-handed neutrinos are dominantly responsible for the "atmospheric" and "solar" neutrino masses with Yukawa couplings to $(\

  12. Specialized Language Models using Dialogue Predictions

    CERN Document Server

    Popovici, C; Popovici, Cosmin; Baggia, Paolo

    1996-01-01

    This paper analyses language modeling in spoken dialogue systems for accessing a database. The use of several language models obtained by exploiting dialogue predictions gives better results than the use of a single model for the whole dialogue interaction. For this reason several models have been created, each one for a specific system question, such as the request or the confirmation of a parameter. The use of dialogue-dependent language models increases the performance both at the recognition and at the understanding level, especially on answers to system requests. Moreover, other methods to increase performance, like automatic clustering of vocabulary words or the use of better acoustic models during recognition, do not affect the improvements given by dialogue-dependent language models. The system used in our experiments is Dialogos, the Italian spoken dialogue system used for accessing railway timetable information over the telephone. The experiments were carried out on a large corpus of dialogues coll...

  13. Caries risk assessment models in caries prediction

    Directory of Open Access Journals (Sweden)

    Amila Zukanović

    2013-11-01

    Full Text Available Objective. The aim of this research was to assess the efficiency of different multifactor models in caries prediction. Material and methods. Data from the questionnaire and objective examination of 109 examinees were entered into the Cariogram, PreViser and Caries-Risk Assessment Tool (CAT) multifactor risk assessment models. Caries risk was assessed with the help of all three models for each patient, classifying them as low-, medium- or high-risk patients. The development of new caries lesions over a period of three years [Decayed Missing Filled Tooth (DMFT) increment = difference between the Decayed Missing Filled Tooth Surface (DMFTS) index at baseline and follow-up] provided for examination of the predictive capacity of the different multifactor models. Results. The data gathered showed that the different multifactor risk assessment models give significantly different results (Friedman test: Chi square = 100.073, p = 0.000). The Cariogram was the model that identified the majority of examinees as medium-risk patients (70%). The other two models were more radical in risk assessment, giving more unfavourable risk profiles for patients. In only 12% of the patients did the three multifactor models assess the risk in the same way. PreViser and CAT gave the same results in 63% of cases; the Wilcoxon test showed that there is no statistically significant difference in caries risk assessment between these two models (Z = -1.805, p = 0.071). Conclusions. Evaluation of three different multifactor caries risk assessment models (Cariogram, PreViser and CAT) showed that only the Cariogram can successfully predict new caries development in 12-year-old Bosnian children.

  14. Disease prediction models and operational readiness.

    Directory of Open Access Journals (Sweden)

    Courtney D Corley

    Full Text Available The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. We define a disease event to be a biological event with focus on the One Health paradigm. These events are characterized by evidence of infection and/or disease condition. We reviewed models that attempted to predict a disease event, not merely its transmission dynamics, and we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). We searched commercial and government databases and harvested Google search results for eligible models, using terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. After removal of duplications and extraneous material, a core collection of 6,524 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. As a result, we systematically reviewed 44 papers, and the results are presented in this analysis. We identified 44 models, classified as one or more of the following: event prediction (4), spatial (26), ecological niche (28), diagnostic or clinical (6), spread or response (9), and reviews (3). The model parameters (e.g., etiology, climatic, spatial, cultural) and data sources (e.g., remote sensing, non-governmental organizations, expert opinion, epidemiological) were recorded and reviewed. A component of this review is the identification of verification and validation (V&V) methods applied to each model, if any were reported. All models were classified as either having undergone some verification or validation method, or none. We close by outlining an initial set of operational readiness level guidelines for disease prediction models based upon established Technology

  15. Model Predictive Control based on Finite Impulse Response Models

    DEFF Research Database (Denmark)

    Prasath, Guru; Jørgensen, John Bagterp

    2008-01-01

    We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations...
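    The FIR prediction model with a constant output-disturbance filter can be sketched as follows; the impulse response below is an arbitrary stable example, not a model from the paper:

    ```python
    import numpy as np

    # FIR model: y_k = sum_i h_i u_{k-i}. Feedback enters via a constant output
    # disturbance estimate d = y_measured - y_model, added to all future predictions.
    h = 0.4 * 0.7 ** np.arange(20)          # truncated impulse response (order 20)

    def fir_predict(u_past, u_future, y_measured):
        """Predict future outputs from past/future inputs and the latest measurement."""
        u = np.concatenate([u_past, u_future])
        y_model_now = float(np.dot(h, u_past[::-1][: len(h)]))
        d = y_measured - y_model_now         # constant output disturbance estimate
        preds = []
        for k in range(1, len(u_future) + 1):
            window = u[: len(u_past) + k][::-1][: len(h)]
            preds.append(float(np.dot(h, window)) + d)
        return np.array(preds)

    u_past = np.ones(30)                     # system has been at steady state u = 1
    y_ss = h.sum()                           # model steady-state output
    preds = fir_predict(u_past, np.ones(10), y_measured=y_ss + 0.1)
    print(preds)                             # the 0.1 model-plant bias is carried forward
    ```

    Carrying the measured bias forward is what gives the controller offset-free behaviour despite plant-model mismatch.
    
    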

  16. ENSO Prediction using Vector Autoregressive Models

    Science.gov (United States)

    Chapman, D. R.; Cane, M. A.; Henderson, N.; Lee, D.; Chen, C.

    2013-12-01

    A recent comparison (Barnston et al, 2012 BAMS) shows the ENSO forecasting skill of dynamical models now exceeds that of statistical models, but the best statistical models are comparable to all but the very best dynamical models. In this comparison the leading statistical model is the one based on the Empirical Model Reduction (EMR) method. Here we report on experiments with multilevel Vector Autoregressive models using only sea surface temperatures (SSTs) as predictors. VAR(L) models generalize Linear Inverse Models (LIM), which are a VAR(1) method, as well as multilevel univariate autoregressive models. Optimal forecast skill is achieved using 12 to 14 months of prior state information (i.e. 12-14 levels), which allows SSTs alone to capture the effects of other variables such as heat content as well as seasonality. The use of multiple levels allows the model advancing one month at a time to perform at least as well for a 6 month forecast as a model constructed to explicitly forecast 6 months ahead. We infer that the multilevel model has fully captured the linear dynamics (cf. Penland and Magorian, 1993 J. Climate). Finally, while VAR(L) is equivalent to L-level EMR, we show in a 150 year cross validated assessment that we can increase forecast skill by improving on the EMR initialization procedure. The greatest benefit of this change is in allowing the prediction to make effective use of information over many more months.
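    Fitting a VAR(L) model by stacked least squares can be sketched on a synthetic two-variable series standing in for SST indices; the coefficient matrices below are arbitrary illustrative choices:

    ```python
    import numpy as np

    # VAR(L): x_t = A_1 x_{t-1} + ... + A_L x_{t-L} + noise, fitted by least squares.
    rng = np.random.default_rng(4)
    L, T, d = 3, 1000, 2
    A_true = [np.array([[0.5, 0.1], [0.0, 0.4]]),
              np.array([[0.2, 0.0], [0.1, 0.2]]),
              np.array([[0.1, 0.0], [0.0, 0.1]])]

    x = np.zeros((T, d))
    for t in range(L, T):
        x[t] = sum(A_true[l] @ x[t - 1 - l] for l in range(L)) + 0.1 * rng.normal(size=d)

    # Regressors: concatenated lagged states; targets: current state.
    Z = np.hstack([x[L - 1 - l: T - 1 - l] for l in range(L)])   # shape (T-L, d*L)
    Y = x[L:]                                                    # shape (T-L, d)
    coef = np.linalg.lstsq(Z, Y, rcond=None)[0]                  # shape (d*L, d)
    A_hat = [coef[l * d:(l + 1) * d].T for l in range(L)]

    forecast = sum(A_hat[l] @ x[T - 1 - l] for l in range(L))    # one-step forecast
    print(np.max(np.abs(A_hat[0] - A_true[0])))
    ```

    Iterating the one-step forecast month by month, feeding predictions back in as lagged states, gives the multi-month forecasts discussed in the abstract.
    
    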

  17. Electrostatic ion thrusters - towards predictive modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kalentev, O.; Matyash, K.; Duras, J.; Lueskow, K.F.; Schneider, R. [Ernst-Moritz-Arndt Universitaet Greifswald, D-17489 (Germany); Koch, N. [Technische Hochschule Nuernberg Georg Simon Ohm, Kesslerplatz 12, D-90489 Nuernberg (Germany); Schirra, M. [Thales Electronic Systems GmbH, Soeflinger Strasse 100, D-89077 Ulm (Germany)

    2014-02-15

    The development of electrostatic ion thrusters so far has mainly been based on empirical and qualitative know-how, and on evolutionary iteration steps. This resulted in considerable effort regarding prototype design, construction and testing and therefore in significant development and qualification costs and high time demands. For future developments it is anticipated to implement simulation tools which allow for quantitative prediction of ion thruster performance, long-term behavior and space craft interaction prior to hardware design and construction. Based on integrated numerical models combining self-consistent kinetic plasma models with plasma-wall interaction modules a new quality in the description of electrostatic thrusters can be reached. These open the perspective for predictive modeling in this field. This paper reviews the application of a set of predictive numerical modeling tools on an ion thruster model of the HEMP-T (High Efficiency Multi-stage Plasma Thruster) type patented by Thales Electron Devices GmbH. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  18. Gas explosion prediction using CFD models

    Energy Technology Data Exchange (ETDEWEB)

    Niemann-Delius, C.; Okafor, E. [RWTH Aachen Univ. (Germany); Buhrow, C. [TU Bergakademie Freiberg Univ. (Germany)

    2006-07-15

    A number of CFD models are currently available to model gaseous explosions in complex geometries. Some of these tools allow the representation of complex environments within hydrocarbon production plants. In certain explosion scenarios, a correction is usually made for the presence of buildings and other complexities by using crude approximations to obtain realistic estimates of explosion behaviour as can be found when predicting the strength of blast waves resulting from initial explosions. With the advance of computational technology, and greater availability of computing power, computational fluid dynamics (CFD) tools are becoming increasingly available for solving such a wide range of explosion problems. A CFD-based explosion code - FLACS can, for instance, be confidently used to understand the impact of blast overpressures in a plant environment consisting of obstacles such as buildings, structures, and pipes. With its porosity concept representing geometry details smaller than the grid, FLACS can represent geometry well, even when using coarse grid resolutions. The performance of FLACS has been evaluated using a wide range of field data. In the present paper, the concept of computational fluid dynamics (CFD) and its application to gas explosion prediction is presented. Furthermore, the predictive capabilities of CFD-based gaseous explosion simulators are demonstrated using FLACS. Details about the FLACS-code, some extensions made to FLACS, model validation exercises, application, and some results from blast load prediction within an industrial facility are presented. (orig.)

  19. Genetic models of homosexuality: generating testable predictions.

    Science.gov (United States)

    Gavrilets, Sergey; Rice, William R

    2006-12-22

    Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality including: (i) chromosomal location, (ii) dominance among segregating alleles and (iii) effect sizes that distinguish between the two major models for their polymorphism: the overdominance and sexual antagonism models. We conclude that the measurement of the genetic characteristics of quantitative trait loci (QTLs) found in genomic screens for genes influencing homosexuality can be highly informative in resolving the form of natural selection maintaining their polymorphism.

  20. Characterizing Attention with Predictive Network Models.

    Science.gov (United States)

    Rosenberg, M D; Finn, E S; Scheinost, D; Constable, R T; Chun, M M

    2017-04-01

    Recent work shows that models based on functional connectivity in large-scale brain networks can predict individuals' attentional abilities. While being some of the first generalizable neuromarkers of cognitive function, these models also inform our basic understanding of attention, providing empirical evidence that: (i) attention is a network property of brain computation; (ii) the functional architecture that underlies attention can be measured while people are not engaged in any explicit task; and (iii) this architecture supports a general attentional ability that is common to several laboratory-based tasks and is impaired in attention deficit hyperactivity disorder (ADHD). Looking ahead, connectivity-based predictive models of attention and other cognitive abilities and behaviors may potentially improve the assessment, diagnosis, and treatment of clinical dysfunction. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. A Study On Distributed Model Predictive Consensus

    CERN Document Server

    Keviczky, Tamas

    2008-01-01

    We investigate convergence properties of a proposed distributed model predictive control (DMPC) scheme, where agents negotiate to compute an optimal consensus point using an incremental subgradient method based on primal decomposition as described in Johansson et al. [2006, 2007]. The objective of the distributed control strategy is to agree upon and achieve an optimal common output value for a group of agents in the presence of constraints on the agent dynamics using local predictive controllers. Stability analysis using a receding horizon implementation of the distributed optimal consensus scheme is performed. Conditions are given under which convergence can be obtained even if the negotiations do not reach full consensus.
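    The negotiation idea in the abstract, where agents iteratively agree on a common optimal output, can be illustrated with a minimal incremental-gradient sweep over quadratic agent costs. This is a simplified sketch, not the primal-decomposition scheme of Johansson et al.; the agent weights and target outputs are illustrative assumptions.

    ```python
    def optimal_consensus(agents, cycles=500, step=0.05):
        """Negotiate a common output value theta minimizing sum_i w_i*(theta - t_i)^2.

        Each agent in turn applies the gradient of its own local cost to the
        shared iterate (an incremental subgradient sweep).  A constant step
        size is used, so the iterate converges only to a neighborhood of the
        optimum -- a sketch of the negotiation loop, not the paper's scheme.
        agents: list of (w_i, t_i) pairs (illustrative weights and targets).
        """
        theta = 0.0
        for _ in range(cycles):
            for w, target in agents:
                theta -= step * 2.0 * w * (theta - target)  # agent-local gradient step
        return theta

    # Two equally weighted agents preferring outputs 0 and 10: the optimal
    # consensus value is their mean, 5.
    theta = optimal_consensus([(1.0, 0.0), (1.0, 10.0)])
    ```

    With a constant step the sweep settles slightly off the true optimum (here near 5.26 rather than 5.0), which is exactly the kind of incomplete consensus the paper's convergence conditions address.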

  2. Hydroplasticization of polymers: model predictions and application to emulsion polymers.

    Science.gov (United States)

    Tsavalas, John G; Sundberg, Donald C

    2010-05-18

The plasticization of a polymer by solvent has a dramatic impact on both its thermal and mechanical behavior. With increasing demand for zero-volatile-organic-compound materials and coatings, water is often the sole solvent used both in polymer synthesis and in formulation and application; latex colloids derived from emulsion polymerization are a good example. The impact of water on the glass transition temperature of a polymer thus becomes a critical physical property to predict. It has been shown here that in order to do so, one simply needs the dry-state glass transition temperature (T(g)) of the (co)polymer, the T(g) of water, and the saturated weight fraction of water for the sample in question. Facile calculation of the latter can be achieved using water sorption data and the group additivity method. With these readily available data, we show that a form of the Flory-Fox equation can be used to predict the hydroplasticized state of copolymers in exceptional agreement with direct experimental measurement. Furthermore, extending the prediction to include the impact of the degree of ionization for pH-responsive components, requiring only additional knowledge of the pK(a), was also validated by experiment.
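    The Tg-depression rule the abstract describes can be sketched with the classical Fox mixing equation. The 136 K value used for water's Tg and the 2 wt% saturation level in the example are illustrative assumptions, not values from the paper.

    ```python
    def fox_tg(w_water, tg_dry_K, tg_water_K=136.0):
        """Fox-equation estimate of the wet-state glass transition (kelvin):
            1/Tg = w_water/Tg(water) + (1 - w_water)/Tg(dry polymer)

        w_water    : saturated weight fraction of water in the polymer
        tg_dry_K   : dry-state Tg of the (co)polymer, K
        tg_water_K : Tg of water, K (~136 K in the literature; an assumption here)
        """
        if not 0.0 <= w_water < 1.0:
            raise ValueError("weight fraction must be in [0, 1)")
        inv_tg = w_water / tg_water_K + (1.0 - w_water) / tg_dry_K
        return 1.0 / inv_tg

    # Example: a polymer with dry Tg = 378 K absorbing 2 wt% water at saturation;
    # even this small amount of water depresses Tg noticeably.
    wet_tg = fox_tg(0.02, 378.0)
    ```

    Because water's Tg is so low, even a few weight percent of absorbed water shifts the predicted Tg by tens of kelvin, which is the hydroplasticization effect the paper quantifies.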

  3. NONLINEAR MODEL PREDICTIVE CONTROL OF CHEMICAL PROCESSES

    Directory of Open Access Journals (Sweden)

    R. G. SILVA

    1999-03-01

Full Text Available A new algorithm for model predictive control is presented. The algorithm utilizes a simultaneous solution and optimization strategy to solve the model's differential equations. The equations are discretized by equidistant collocation and, together with the algebraic model equations, are included as constraints in a nonlinear programming (NLP) problem. This algorithm is compared with an algorithm that uses orthogonal collocation on finite elements. The equidistant collocation algorithm results in simpler equations, providing a decrease in computation time for the control moves. Simulation results are presented and show satisfactory performance of this algorithm.
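    The receding-horizon idea behind model predictive control can be sketched on a toy scalar plant. Here the ODE is discretized with an implicit-Euler rule (the simplest equidistant scheme), the discretized dynamics are eliminated by forward simulation, and a coarse grid search stands in for the NLP solver; the plant, horizon, and weights are all illustrative assumptions, not the paper's process model.

    ```python
    def mpc_step(x0, x_ref, horizon=5, dt=0.1, r=0.01):
        """One receding-horizon step for the toy scalar plant dx/dt = -x + u.

        Implicit-Euler discretization: x[k+1] = (x[k] + dt*u) / (1 + dt).
        The finite-dimensional problem is solved by exhaustive search over a
        coarse grid of constant inputs -- a stand-in for the NLP solver.
        """
        u_grid = [i * 0.25 for i in range(-8, 9)]      # candidate inputs in [-2, 2]

        def cost(u):
            x, j = x0, 0.0
            for _ in range(horizon):
                x = (x + dt * u) / (1.0 + dt)          # discretized dynamics
                j += (x - x_ref) ** 2 + r * u ** 2     # tracking + control effort
            return j

        return min(u_grid, key=cost)

    # Closed loop: apply only the first move, then re-solve (receding horizon).
    x = 0.0
    for _ in range(60):
        u = mpc_step(x, x_ref=1.0)
        x = (x + 0.1 * u) / 1.1
    ```

    The controller pushes harder when the state is far from the reference and eases off as it approaches, which is the qualitative behavior any MPC variant, including the collocation-based NLP formulation of the paper, should reproduce.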

  4. Performance model to predict overall defect density

    Directory of Open Access Journals (Sweden)

    J Venkatesh

    2012-08-01

Full Text Available Management by metrics is the expectation from IT service providers in order to stay differentiated. Given a project and its associated parameters and dynamics, the behaviour and outcome need to be predicted. There is a lot of focus on the end state and on minimizing defect leakage as much as possible. In most cases, the actions taken are re-active; it is too late in the life cycle. Root cause analysis and corrective actions can be implemented only to the benefit of the next project. The focus has to shift left, towards the execution phase, rather than waiting for lessons to be learnt after implementation. How do we pro-actively predict defect metrics and have a preventive action plan in place? This paper illustrates a process performance model to predict overall defect density based on data from projects in an organization.

  5. Neuro-fuzzy modeling in bankruptcy prediction

    Directory of Open Access Journals (Sweden)

    Vlachos D.

    2003-01-01

Full Text Available For the past 30 years, the problem of bankruptcy prediction has been thoroughly studied. From the paper of Altman in 1968 to the recent papers of the '90s, the progress in prediction accuracy has not been satisfactory. This paper investigates an alternative modeling of the system (firm), combining neural networks and fuzzy controllers, i.e. using neuro-fuzzy models. Classical modeling is based on mathematical models that describe the behavior of the firm under consideration. The main idea of fuzzy control, on the other hand, is to build a model of a human control expert who is capable of controlling the process without thinking in terms of a mathematical model. This control expert specifies his control action in the form of linguistic rules. These control rules are translated into the framework of fuzzy set theory, providing a calculus which can simulate the behavior of the control expert and enhance its performance. The accuracy of the model is studied using datasets from previous research papers.

  6. Pressure prediction model for compression garment design.

    Science.gov (United States)

    Leung, W Y; Yuen, D W; Ng, Sun Pui; Shi, S Q

    2010-01-01

Based on the application of Laplace's law to compression garments, an equation for predicting garment pressure, incorporating the body circumference, the cross-sectional area of the fabric, the applied strain (as a function of reduction factor), and its corresponding Young's modulus, is developed. Design procedures are presented to predict garment pressure using the aforementioned parameters for clinical applications. Compression garments have been widely used in treating burn scars. Fabricating a compression garment with a required pressure is important in the healing process. A systematic and scientific design method can enable the occupational therapist and compression garment manufacturers to custom-make a compression garment with a specific pressure. The objectives of this study are 1) to develop a pressure prediction model incorporating different design factors to estimate the pressure exerted by the compression garments before fabrication; and 2) to propose design procedures for clinical applications. Three kinds of fabrics cut at different bias angles were tested under uniaxial tension, as were samples made in a double-layered structure. Sets of nonlinear force-extension data were obtained for calculating the predicted pressure. Using the value at 0° bias angle as reference, the Young's modulus can vary by as much as 29% for fabric type P11117, 43% for fabric type PN2170, and even 360% for fabric type AP85120 at a reduction factor of 20%. When comparing the predicted pressure calculated from the single-layered and double-layered fabrics, the double-layered construction provides a larger range of target pressure at a particular strain. The anisotropic and nonlinear behaviors of the fabrics have thus been determined. Compression garments can be methodically designed by the proposed analytical pressure prediction model.
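    The Laplace-law relationship the abstract builds on can be sketched for a cylindrical limb segment: membrane tension per unit width follows from Hooke's law, and pressure is tension divided by radius. The exact published model differs (it includes the reduction factor and bias-angle effects); the function below is only the generic simplification, and all numeric inputs in the example are illustrative assumptions.

    ```python
    import math

    def garment_pressure_pa(circumference_m, youngs_modulus_pa, strain,
                            fabric_area_m2, fabric_width_m):
        """Laplace-law pressure estimate for a cylindrical limb segment.

        Tension per unit width: T = E * strain * (A / w)  [N/m]
        Radius from circumference: r = C / (2*pi)
        Pressure: P = T / r  [Pa]
        This is the generic simplification, not the paper's full model.
        """
        tension_per_width = youngs_modulus_pa * strain * (fabric_area_m2 / fabric_width_m)
        radius = circumference_m / (2.0 * math.pi)
        return tension_per_width / radius

    # Example (illustrative values): 0.2 m limb circumference, E = 0.5 MPa,
    # 20% strain, 2.5e-5 m^2 fabric cross-section over a 0.05 m sample width.
    p = garment_pressure_pa(0.2, 5.0e5, 0.20, 2.5e-5, 0.05)   # ~1.6 kPa (~12 mmHg)
    ```

    In this linear simplification the pressure scales directly with strain; the paper's point is that real fabrics are nonlinear and anisotropic, so E itself must be taken from the measured force-extension data at the relevant bias angle and strain.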

  7. Statistical assessment of predictive modeling uncertainty

    Science.gov (United States)

    Barzaghi, Riccardo; Marotta, Anna Maria

    2017-04-01

When the results of geophysical models are compared with data, the uncertainties of the model are typically disregarded. We propose a method for defining the uncertainty of a geophysical model based on a numerical procedure that estimates the empirical auto- and cross-covariances of model-estimated quantities. These empirical values are then fitted by proper covariance functions and used to compute the covariance matrix associated with the model predictions. The method is tested using a geophysical finite element model in the Mediterranean region. Using a novel χ2 analysis in which both data and model uncertainties are taken into account, the model's estimated tectonic strain pattern due to the Africa-Eurasia convergence in the area that extends from the Calabrian Arc to the Alpine domain is compared with that estimated from GPS velocities, while taking into account the model uncertainty through its covariance structure and the covariance of the GPS estimates. The results indicate that including the estimated model covariance in the testing procedure leads to lower observed χ2 values that have better statistical significance and may help sharpen the identification of the best-fitting geophysical models.

  8. Seasonal Predictability in a Model Atmosphere.

    Science.gov (United States)

    Lin, Hai

    2001-07-01

The predictability of atmospheric mean-seasonal conditions in the absence of externally varying forcing is examined. A perfect-model approach is adopted, in which a global T21 three-level quasigeostrophic atmospheric model is integrated over 21 000 days to obtain a reference atmospheric orbit. The model is driven by a time-independent forcing, so that the only source of time variability is the internal dynamics. The forcing is set to perpetual winter conditions in the Northern Hemisphere (NH) and perpetual summer in the Southern Hemisphere. A significant temporal variability in the NH 90-day mean states is observed. The component of that variability associated with the higher-frequency motions, or climate noise, is estimated using a method developed by Madden. In the polar region, and to a lesser extent in the midlatitudes, the temporal variance of the winter means is significantly greater than the climate noise, suggesting some potential predictability in those regions. Forecast experiments are performed to see whether the presence of variance in the 90-day mean states that is in excess of the climate noise leads to some skill in the prediction of these states. Ensemble forecast experiments with nine members starting from slightly different initial conditions are performed for 200 different 90-day means along the reference atmospheric orbit. The serial correlation between the ensemble means and the reference orbit shows that there is skill in the 90-day mean predictions. The skill is concentrated in those regions of the NH that have the largest variance in excess of the climate noise. An EOF analysis shows that nearly all the predictive skill in the seasonal means is associated with one mode of variability with a strong axisymmetric component.
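    The climate-noise estimate referred to in the abstract can be sketched with the standard Madden-type formula: the variance of an n-day mean of an autocorrelated daily series is (σ²/n)·(1 + 2·Σₖ(1 − k/n)·ρₖ). The truncation lag and the toy daily series below are assumptions for illustration.

    ```python
    def seasonal_noise_variance(daily, n=90, max_lag=30):
        """Climate-noise variance of an n-day mean of an autocorrelated series.

        Var(mean) = (sigma^2 / n) * (1 + 2 * sum_k (1 - k/n) * rho_k),
        where rho_k is the lag-k autocorrelation of the daily series and the
        sum is truncated at max_lag (assumption: correlations die out by then).
        """
        m = sum(daily) / len(daily)
        var = sum((x - m) ** 2 for x in daily) / len(daily)

        def rho(k):
            num = sum((daily[i] - m) * (daily[i + k] - m)
                      for i in range(len(daily) - k))
            return num / (len(daily) * var)

        factor = 1.0 + 2.0 * sum((1.0 - k / n) * rho(k) for k in range(1, max_lag + 1))
        return var / n * factor

    # Example: a strongly anticorrelated daily series -- day-to-day cancellation
    # makes the 90-day mean noticeably less noisy than sigma^2 / 90.
    alternating = [1.0 if i % 2 == 0 else -1.0 for i in range(360)]
    noise_var = seasonal_noise_variance(alternating)
    ```

    Comparing the observed variance of seasonal means against this noise level is the test the paper uses to flag regions of potential predictability: where the observed variance clearly exceeds the noise estimate, the excess may be predictable.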

  9. A kinetic model for predicting biodegradation.

    Science.gov (United States)

    Dimitrov, S; Pavlov, T; Nedelcheva, D; Reuschenbach, P; Silvani, M; Bias, R; Comber, M; Low, L; Lee, C; Parkerton, T; Mekenyan, O

    2007-01-01

    Biodegradation plays a key role in the environmental risk assessment of organic chemicals. The need to assess biodegradability of a chemical for regulatory purposes supports the development of a model for predicting the extent of biodegradation at different time frames, in particular the extent of ultimate biodegradation within a '10 day window' criterion as well as estimating biodegradation half-lives. Conceptually this implies expressing the rate of catabolic transformations as a function of time. An attempt to correlate the kinetics of biodegradation with molecular structure of chemicals is presented. A simplified biodegradation kinetic model was formulated by combining the probabilistic approach of the original formulation of the CATABOL model with the assumption of first order kinetics of catabolic transformations. Nonlinear regression analysis was used to fit the model parameters to OECD 301F biodegradation kinetic data for a set of 208 chemicals. The new model allows the prediction of biodegradation multi-pathways, primary and ultimate half-lives and simulation of related kinetic biodegradation parameters such as biological oxygen demand (BOD), carbon dioxide production, and the nature and amount of metabolites as a function of time. The model may also be used for evaluating the OECD ready biodegradability potential of a chemical within the '10-day window' criterion.
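    The first-order kinetics assumption in the abstract can be sketched directly: the extent of ultimate degradation follows 1 − exp(−kt), the half-life is ln 2 / k, and the "10-day window" criterion checks whether degradation reaches the pass level within 10 days of first exceeding 10%. This is only a crude illustration of the window logic, not the CATABOL-based multi-pathway model; the 60% ThOD pass threshold and 28-day test length follow the OECD ready-biodegradability convention.

    ```python
    import math

    def extent(t_days, k_per_day):
        """First-order extent of biodegradation (fraction of the theoretical maximum)."""
        return 1.0 - math.exp(-k_per_day * t_days)

    def half_life(k_per_day):
        """Degradation half-life in days for a first-order rate constant."""
        return math.log(2.0) / k_per_day

    def passes_ready_test(k_per_day, threshold=0.6, window_days=10.0,
                          trigger=0.1, test_days=28.0):
        """Crude check of the '10-day window': degradation must reach the pass
        threshold (60% here) within 10 days of first exceeding 10%, inside the
        28-day test.  Simplified single-rate illustration only."""
        if extent(test_days, k_per_day) < trigger:
            return False
        t_trigger = -math.log(1.0 - trigger) / k_per_day   # time to reach 10%
        end = min(t_trigger + window_days, test_days)
        return extent(end, k_per_day) >= threshold
    ```

    With a single rate constant the window criterion reduces to a condition on k; the paper's model instead fits pathway-specific rate constants so that BOD, CO2 production, and metabolite amounts can all be simulated as functions of time.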

  10. Disease Prediction Models and Operational Readiness

    Energy Technology Data Exchange (ETDEWEB)

    Corley, Courtney D.; Pullum, Laura L.; Hartley, David M.; Benedum, Corey M.; Noonan, Christine F.; Rabinowitz, Peter M.; Lancaster, Mary J.

    2014-03-19

INTRODUCTION: The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. One of the primary goals of this research was to characterize the viability of biosurveillance models to provide operationally relevant information for decision makers and to identify areas for future research. Two critical characteristics differentiate this work from other infectious disease modeling reviews. First, we reviewed models that attempted to predict the disease event, not merely its transmission dynamics. Second, we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). METHODS: We searched dozens of commercial and government databases and harvested Google search results for eligible models, utilizing terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. The publication dates of the returned search results are bounded by the dates of coverage of each database and the date on which the search was performed; all searching was completed by December 31, 2010. This returned 13,767 webpages and 12,152 citations. After de-duplication and removal of extraneous material, a core collection of 6,503 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. Next, PNNL’s IN-SPIRE visual analytics software was used to cross-correlate these publications with the definition of a biosurveillance model, resulting in the selection of 54 documents that matched the criteria. Ten of these documents, however, dealt purely with disease spread models, inactivation of bacteria, or the modeling of human immune system responses to pathogens rather than predicting disease events. As a result, we systematically reviewed 44 papers and the

  11. Nonlinear model predictive control theory and algorithms

    CERN Document Server

    Grüne, Lars

    2017-01-01

    This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine—the core of any nonlinear model predictive controller—works. Accompanying software in MATLAB® and C++ (downloadable from extras.springer.com/), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the possibilities and limitations of NMPC. T...

  12. Predictive Modeling in Actinide Chemistry and Catalysis

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Ping [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-16

    These are slides from a presentation on predictive modeling in actinide chemistry and catalysis. The following topics are covered in these slides: Structures, bonding, and reactivity (bonding can be quantified by optical probes and theory, and electronic structures and reaction mechanisms of actinide complexes); Magnetic resonance properties (transition metal catalysts with multi-nuclear centers, and NMR/EPR parameters); Moving to more complex systems (surface chemistry of nanomaterials, and interactions of ligands with nanoparticles); Path forward and conclusions.

  13. Probabilistic prediction models for aggregate quarry siting

    Science.gov (United States)

    Robinson, G.R.; Larkins, P.M.

    2007-01-01

    Weights-of-evidence (WofE) and logistic regression techniques were used in a GIS framework to predict the spatial likelihood (prospectivity) of crushed-stone aggregate quarry development. The joint conditional probability models, based on geology, transportation network, and population density variables, were defined using quarry location and time of development data for the New England States, North Carolina, and South Carolina, USA. The Quarry Operation models describe the distribution of active aggregate quarries, independent of the date of opening. The New Quarry models describe the distribution of aggregate quarries when they open. Because of the small number of new quarries developed in the study areas during the last decade, independent New Quarry models have low parameter estimate reliability. The performance of parameter estimates derived for Quarry Operation models, defined by a larger number of active quarries in the study areas, were tested and evaluated to predict the spatial likelihood of new quarry development. Population density conditions at the time of new quarry development were used to modify the population density variable in the Quarry Operation models to apply to new quarry development sites. The Quarry Operation parameters derived for the New England study area, Carolina study area, and the combined New England and Carolina study areas were all similar in magnitude and relative strength. The Quarry Operation model parameters, using the modified population density variables, were found to be a good predictor of new quarry locations. Both the aggregate industry and the land management community can use the model approach to target areas for more detailed site evaluation for quarry location. The models can be revised easily to reflect actual or anticipated changes in transportation and population features. ?? International Association for Mathematical Geology 2007.
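    The weights-of-evidence calculation underlying the model can be sketched from its standard definition: the positive weight is the log ratio of the pattern's occurrence rate among deposit cells to its rate among non-deposit cells, the negative weight is the complementary ratio, and their difference is the contrast. The quarry and cell counts in the example are illustrative, not figures from the study.

    ```python
    import math

    def weights_of_evidence(n_dep_in, n_dep_out, n_cells_in, n_cells_out):
        """Weights of evidence for a binary predictor pattern.

        n_dep_in    : deposits (here: quarries) inside the pattern
        n_dep_out   : quarries outside the pattern
        n_cells_in  : unit cells inside the pattern
        n_cells_out : unit cells outside the pattern
        Returns (W_plus, W_minus, contrast = W_plus - W_minus).
        """
        d = n_dep_in + n_dep_out
        nd_in = n_cells_in - n_dep_in        # non-deposit cells inside the pattern
        nd_out = n_cells_out - n_dep_out     # non-deposit cells outside
        w_plus = math.log((n_dep_in / d) / (nd_in / (nd_in + nd_out)))
        w_minus = math.log((n_dep_out / d) / (nd_out / (nd_in + nd_out)))
        return w_plus, w_minus, w_plus - w_minus

    # Illustrative counts: 40 of 50 quarries fall inside a favorable geologic
    # unit that covers 2,000 of 10,000 unit cells.
    w_plus, w_minus, contrast = weights_of_evidence(40, 10, 2000, 8000)
    ```

    A strongly positive contrast, as in this example, marks a pattern that is spatially associated with quarry development; summing weights over conditionally independent patterns yields the posterior prospectivity map described in the abstract.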

  14. Predicting Footbridge Response using Stochastic Load Models

    DEFF Research Database (Denmark)

    Pedersen, Lars; Frier, Christian

    2013-01-01

    Walking parameters such as step frequency, pedestrian mass, dynamic load factor, etc. are basically stochastic, although it is quite common to adapt deterministic models for these parameters. The present paper considers a stochastic approach to modeling the action of pedestrians, but when doing s...... as it pinpoints which decisions to be concerned about when the goal is to predict footbridge response. The studies involve estimating footbridge responses using Monte-Carlo simulations and focus is on estimating vertical structural response to single person loading....

  15. Nonconvex Model Predictive Control for Commercial Refrigeration

    DEFF Research Database (Denmark)

    Hovgaard, Tobias Gybel; Larsen, Lars F.S.; Jørgensen, John Bagterp

    2013-01-01

    is to minimize the total energy cost, using real-time electricity prices, while obeying temperature constraints on the zones. We propose a variation on model predictive control to achieve this goal. When the right variables are used, the dynamics of the system are linear, and the constraints are convex. The cost...... the iterations, which is more than fast enough to run in real-time. We demonstrate our method on a realistic model, with a full year simulation and 15 minute time periods, using historical electricity prices and weather data, as well as random variations in thermal load. These simulations show substantial cost...

  16. Predictive In Vivo Models for Oncology.

    Science.gov (United States)

    Behrens, Diana; Rolff, Jana; Hoffmann, Jens

    2016-01-01

Experimental oncology research and preclinical drug development both substantially require specific, clinically relevant in vitro and in vivo tumor models. The increasing knowledge about the heterogeneity of cancer has required a substantial restructuring of the test systems for the different stages of development. To be able to cope with the complexity of the disease, larger panels of patient-derived tumor models have to be implemented and extensively characterized. Together with individual genetically engineered tumor models, and supported by core functions for expression profiling and data analysis, an integrated discovery process has been generated for predictive and personalized drug development. Improved “humanized” mouse models should help to overcome the current limitations imposed by the xenogeneic barrier between humans and mice. Establishment of a functional human immune system and a corresponding human microenvironment in laboratory animals will strongly support further research. Drug discovery, systems biology, and translational research are moving closer together to address all the new hallmarks of cancer, increase the success rate of drug development, and increase the predictive value of preclinical models.

  17. Constructing predictive models of human running.

    Science.gov (United States)

    Maus, Horst-Moritz; Revzen, Shai; Guckenheimer, John; Ludwig, Christian; Reger, Johann; Seyfarth, Andre

    2015-02-06

    Running is an essential mode of human locomotion, during which ballistic aerial phases alternate with phases when a single foot contacts the ground. The spring-loaded inverted pendulum (SLIP) provides a starting point for modelling running, and generates ground reaction forces that resemble those of the centre of mass (CoM) of a human runner. Here, we show that while SLIP reproduces within-step kinematics of the CoM in three dimensions, it fails to reproduce stability and predict future motions. We construct SLIP control models using data-driven Floquet analysis, and show how these models may be used to obtain predictive models of human running with six additional states comprising the position and velocity of the swing-leg ankle. Our methods are general, and may be applied to any rhythmic physical system. We provide an approach for identifying an event-driven linear controller that approximates an observed stabilization strategy, and for producing a reduced-state model which closely recovers the observed dynamics. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  18. Statistical Seasonal Sea Surface based Prediction Model

    Science.gov (United States)

    Suarez, Roberto; Rodriguez-Fonseca, Belen; Diouf, Ibrahima

    2014-05-01

The interannual variability of the sea surface temperature (SST) plays a key role in the strongly seasonal rainfall regime of the West African region. The predictability of the seasonal cycle of rainfall is a topic widely discussed by the scientific community, with results that fail to be satisfactory due to the difficulty dynamical models have in reproducing the behavior of the Inter-Tropical Convergence Zone (ITCZ). To tackle this problem, a statistical model based on oceanic predictors has been developed at the Universidad Complutense de Madrid (UCM) with the aim of complementing and enhancing the predictability of the West African Monsoon (WAM) as an alternative to the coupled models. The model, called S4CAST (SST-based Statistical Seasonal Forecast), is based on discriminant analysis techniques, specifically Maximum Covariance Analysis (MCA) and Canonical Correlation Analysis (CCA). Beyond the application of the model to the prediction of rainfall in West Africa, its use extends to a range of different oceanic, atmospheric and health-related parameters influenced by the temperature of the sea surface as a defining factor of variability.

  19. Predictive modeling by the cerebellum improves proprioception.

    Science.gov (United States)

    Bhanpuri, Nasir H; Okamura, Allison M; Bastian, Amy J

    2013-09-04

    Because sensation is delayed, real-time movement control requires not just sensing, but also predicting limb position, a function hypothesized for the cerebellum. Such cerebellar predictions could contribute to perception of limb position (i.e., proprioception), particularly when a person actively moves the limb. Here we show that human cerebellar patients have proprioceptive deficits compared with controls during active movement, but not when the arm is moved passively. Furthermore, when healthy subjects move in a force field with unpredictable dynamics, they have active proprioceptive deficits similar to cerebellar patients. Therefore, muscle activity alone is likely insufficient to enhance proprioception and predictability (i.e., an internal model of the body and environment) is important for active movement to benefit proprioception. We conclude that cerebellar patients have an active proprioceptive deficit consistent with disrupted movement prediction rather than an inability to generally enhance peripheral proprioceptive signals during action and suggest that active proprioceptive deficits should be considered a fundamental cerebellar impairment of clinical importance.

  20. Micro-scale prediction method for API-solubility in polymeric matrices and process model for forming amorphous solid dispersion by hot-melt extrusion.

    Science.gov (United States)

    Bochmann, Esther S; Neumann, Dirk; Gryczke, Andreas; Wagner, Karl G

    2016-10-01

    A new predictive micro-scale solubility and process model for amorphous solid dispersions (ASDs) by hot-melt extrusion (HME) is presented. It is based on DSC measurements consisting of an annealing step and a subsequent analysis of the glass transition temperature (Tg). The application of a complex mathematical model (BCKV-equation) to describe the dependency of Tg on the active pharmaceutical ingredient (API)/polymer ratio, enables the prediction of API solubility at ambient conditions (25°C). Furthermore, estimation of the minimal processing temperature for forming ASDs during HME trials could be defined and was additionally confirmed by X-ray powder diffraction data. The suitability of the DSC method was confirmed with melt rheological trials (small amplitude oscillatory system). As an example, ball milled physical mixtures of dipyridamole, indomethacin, itraconazole and nifedipine in poly(vinylpyrrolidone-co-vinylacetate) (copovidone) and polyvinyl caprolactam-polyvinyl acetate-polyethylene glycol graft copolymer (Soluplus®) were used.
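    The Tg-composition dependence the abstract fits with the BCKV equation extends the simpler classical Gordon-Taylor form, which is sketched below for illustration. The API and polymer Tg values and the mixing parameter k in the example are illustrative assumptions, not fitted values from the paper.

    ```python
    def gordon_taylor_tg(w_api, tg_api_K, tg_polymer_K, k):
        """Gordon-Taylor Tg (kelvin) of an API/polymer mixture:
            Tg = (w_api*Tg_api + k*w_poly*Tg_poly) / (w_api + k*w_poly)

        The paper fits the more flexible BCKV equation; Gordon-Taylor is the
        simpler classical form shown here.  k is a fitted mixing parameter
        (assumption: supplied by the user).
        """
        w_poly = 1.0 - w_api
        return (w_api * tg_api_K + k * w_poly * tg_polymer_K) / (w_api + k * w_poly)

    # Illustrative values (not from the paper): amorphous API with Tg = 315 K
    # in a polymer with Tg = 379 K, fitted k = 0.8, 30 wt% drug loading.
    tg_mix = gordon_taylor_tg(0.3, 315.0, 379.0, 0.8)
    ```

    Reading the fitted Tg-composition curve at a chosen processing temperature gives the API fraction that can be dissolved in the melt, which is how the DSC-based curve translates into a minimal HME processing temperature.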

  1. A prediction model for Clostridium difficile recurrence

    Directory of Open Access Journals (Sweden)

    Francis D. LaBarbera

    2015-02-01

Full Text Available Background: Clostridium difficile infection (CDI) is a growing problem in the community and hospital setting. Its incidence has been on the rise over the past two decades, and it is quickly becoming a major concern for the health care system. A high rate of recurrence is one of the major hurdles in the successful treatment of C. difficile infection. There have been few studies that have looked at patterns of recurrence. The studies currently available have shown a number of risk factors associated with C. difficile recurrence (CDR); however, there is little consensus on the impact of most of the identified risk factors. Methods: Our study was a retrospective chart review of 198 patients diagnosed with CDI via polymerase chain reaction (PCR) from February 2009 to June 2013. In our study, we decided to use a machine learning algorithm called the Random Forest (RF) to analyze all of the factors proposed to be associated with CDR. This model is capable of making predictions based on a large number of variables, and has outperformed numerous other models and statistical methods. Results: We arrived at a model that was able to accurately predict CDR with a sensitivity of 83.3%, a specificity of 63.1%, and an area under the curve of 82.6%. Like other similar studies that have used the RF model, we also had very impressive results. Conclusions: We hope that in the future, machine learning algorithms, such as the RF, will see wider application.
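    The performance figures reported for the model (sensitivity, specificity, area under the ROC curve) have simple definitions worth making explicit. The confusion-matrix counts and scores below are illustrative assumptions chosen to reproduce numbers of the same magnitude; they are not the study's data.

    ```python
    def sensitivity_specificity(tp, fn, tn, fp):
        """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
        return tp / (tp + fn), tn / (tn + fp)

    def auc(scores_pos, scores_neg):
        """Area under the ROC curve via the Mann-Whitney rank statistic:
        the probability that a random positive case scores higher than a
        random negative case (ties count one half)."""
        wins = 0.0
        for p in scores_pos:
            for n in scores_neg:
                if p > n:
                    wins += 1.0
                elif p == n:
                    wins += 0.5
        return wins / (len(scores_pos) * len(scores_neg))

    # Illustrative confusion-matrix counts (25 true positives, 5 false
    # negatives, 106 true negatives, 62 false positives).
    sens, spec = sensitivity_specificity(25, 5, 106, 62)
    ```

    For a Random Forest classifier, `scores_pos` and `scores_neg` would be the predicted recurrence probabilities of the patients who did and did not recur, so the AUC summarizes ranking quality independently of any single decision threshold.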

  2. Gamma-Ray Pulsars Models and Predictions

    CERN Document Server

    Harding, A K

    2001-01-01

    Pulsed emission from gamma-ray pulsars originates inside the magnetosphere, from radiation by charged particles accelerated near the magnetic poles or in the outer gaps. In polar cap models, the high energy spectrum is cut off by magnetic pair production above an energy that is dependent on the local magnetic field strength. While most young pulsars with surface fields in the range B = 10^{12} - 10^{13} G are expected to have high energy cutoffs around several GeV, the gamma-ray spectra of old pulsars having lower surface fields may extend to 50 GeV. Although the gamma-ray emission of older pulsars is weaker, detecting pulsed emission at high energies from nearby sources would be an important confirmation of polar cap models. Outer gap models predict more gradual high-energy turnovers at around 10 GeV, but also predict an inverse Compton component extending to TeV energies. Detection of pulsed TeV emission, which would not survive attenuation at the polar caps, is thus an important test of outer gap models. N...

  3. Artificial Neural Network Model for Predicting Compressive

    Directory of Open Access Journals (Sweden)

    Salim T. Yousif

    2013-05-01

Full Text Available Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort in applying neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. Testing of the model with unused data within the range of the input parameters shows that the maximum absolute error of the model is about 20%, and 88% of the output results have absolute errors of less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results showed that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.

  4. Ground Motion Prediction Models for Caucasus Region

    Science.gov (United States)

    Jorjiashvili, Nato; Godoladze, Tea; Tvaradze, Nino; Tumanova, Nino

    2016-04-01

    Ground motion prediction models (GMPMs) relate ground motion intensity measures to variables describing earthquake source, path, and site effects. Estimation of expected ground motion is fundamental to earthquake hazard assessment. The most commonly used parameters in attenuation relations are peak ground acceleration and spectral acceleration, because these parameters give useful information for seismic hazard assessment. Development of the Georgian Digital Seismic Network began in 2003. In this study, new GMP models are obtained based on new data from the Georgian seismic network and from neighboring countries. The models are estimated in the classical statistical way, by regression analysis. Site ground conditions are additionally considered, because the same earthquake recorded at the same distance may cause different damage depending on ground conditions. Empirical ground-motion prediction models require adjustment to make them appropriate for site-specific scenarios; however, the process of making such adjustments remains a challenge. This work presents a holistic framework for the development of a peak ground acceleration (PGA) or spectral acceleration (SA) GMPE that is easily adjustable to different seismological conditions and does not suffer from the practical problems associated with adjustments in the response spectral domain.
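
    The regression step can be sketched as an ordinary least-squares fit of a conventional attenuation form, ln PGA = c0 + c1·M + c2·ln R + c3·S. The coefficients, data ranges, and site flag below are invented for illustration, not the Georgian dataset:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
mag = rng.uniform(4.0, 7.0, n)      # moment magnitude
dist = rng.uniform(5.0, 150.0, n)   # source distance, km
site = rng.integers(0, 2, n)        # hypothetical site-condition flag (0 rock, 1 soil)

# Synthetic "observed" ln(PGA) from assumed true coefficients plus aleatory scatter
ln_pga = 0.8 * mag - 1.1 * np.log(dist) + 0.4 * site - 2.0 + rng.normal(0.0, 0.3, n)

# Classical regression: design matrix [1, M, ln R, S]
A = np.column_stack([np.ones(n), mag, np.log(dist), site])
coef, *_ = np.linalg.lstsq(A, ln_pga, rcond=None)
```

    With enough records the fitted coefficients recover the assumed magnitude scaling and geometric attenuation; the residual standard deviation estimates the aleatory variability of the GMPE.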

  5. Modeling and Prediction of Krueger Device Noise

    Science.gov (United States)

    Guo, Yueping; Burley, Casey L.; Thomas, Russell H.

    2016-01-01

    This paper presents the development of a noise prediction model for aircraft Krueger flap devices that are considered as alternatives to leading edge slotted slats. The prediction model decomposes the total Krueger noise into four components, generated by the unsteady flows, respectively, in the cove under the pressure side surface of the Krueger, in the gap between the Krueger trailing edge and the main wing, around the brackets supporting the Krueger device, and around the cavity on the lower side of the main wing. For each noise component, the modeling follows a physics-based approach that aims at capturing the dominant noise-generating features in the flow and developing correlations between the noise and the flow parameters that control the noise generation processes. The far field noise is modeled using each noise component's spectral function, far-field directivity, Mach number dependence, amplitude, and other parametric trends. Preliminary validations are carried out using small-scale experimental data, and two applications are discussed: one for conventional aircraft and the other for advanced configurations. The former focuses on the parametric trends of Krueger noise with design parameters, while the latter reveals its importance in relation to other airframe noise components.

  6. A generative model for predicting terrorist incidents

    Science.gov (United States)

    Verma, Dinesh C.; Verma, Archit; Felmlee, Diane; Pearson, Gavin; Whitaker, Roger

    2017-05-01

    A major concern in coalition peace-support operations is the incidence of terrorist activity. In this paper, we propose a generative model for the occurrence of terrorist incidents and illustrate that an increase in diversity, as measured by the number of different social groups to which an individual belongs, is inversely correlated with the likelihood of a terrorist incident in the society. A generative model is one that can predict the likelihood of events in new contexts, as opposed to statistical models, which predict future incidents from the history of incidents in an existing context. Generative models can be useful in planning for persistent Intelligence, Surveillance and Reconnaissance (ISR), since they allow an estimation of the regions in the theater of operation where terrorist incidents may arise and thus can be used to better allocate the assignment and deployment of ISR assets. In this paper, we present a taxonomy of terrorist incidents, identify factors related to the occurrence of terrorist incidents, and provide a mathematical analysis calculating the likelihood of occurrence of terrorist incidents in three common real-life scenarios arising in peace-keeping operations.
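
    A toy version of such a generative link (the exponential form and the β coefficient are assumptions for this sketch, not the paper's model) ties the incident rate to mean group membership and then samples incident counts:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical link: the incident rate decays as "diversity"
# (mean number of social groups per individual) rises
def incident_rate(base_rate, diversity, beta=0.8):
    return base_rate * np.exp(-beta * diversity)

# Simulate yearly incident counts for regions of increasing diversity
diversity = np.array([0.5, 1.0, 2.0, 3.0])
rates = incident_rate(5.0, diversity)
counts = rng.poisson(rates)  # Poisson occurrence model
```

    Such a model can be queried for regions never observed before, which is what distinguishes it from a purely historical statistical fit.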

  7. Pilgrim Dark Energy in $f(T, T_G)$ cosmology

    CERN Document Server

    Chattopadhyay, Surajit; Momeni, Davood; Myrzakulov, Ratbay

    2014-01-01

    We work on the reconstruction scenario of the recently proposed dynamical dark energy model, "pilgrim dark energy" (PDE) with the Hubble horizon, in modified $f(T, T_G)$ gravity. The PDE model assumes that the repulsive force accelerating the Universe is of phantom type ($w_{DE}<-1$) and is so strong that it prevents the formation of black holes. We construct the $f(T, T_G)$ models and correspondingly evaluate the equation-of-state parameter for various choices of the scale factor. We also assume a polynomial form of $f(T, T_G)$ in terms of cosmic time and reconstruct $H$ and $w_{DE}$ in this manner. From this discussion, it is concluded that PDE shows aggressive phantom-like behavior for $s=-2$ in $f(T, T_G)$ gravity.

  8. Optimal feedback scheduling of model predictive controllers

    Institute of Scientific and Technical Information of China (English)

    Pingfang ZHOU; Jianying XIE; Xiaolong DENG

    2006-01-01

    Model predictive control (MPC) cannot be reliably applied to real-time control systems because its computation time is not well defined. Implemented as an anytime algorithm, an MPC task allows computation time to be traded for control performance, thus obtaining predictability in time. Optimal feedback scheduling (FS-CBS) of a set of MPC tasks is presented to maximize the global control performance subject to limited processor time. Each MPC task is assigned a constant bandwidth server (CBS), whose reserved processor time is adjusted dynamically. The constraints in the FS-CBS guarantee schedulability of the total task set and stability of each component. The FS-CBS is shown to be robust against variation in the execution time of MPC tasks at runtime. Simulation results illustrate its effectiveness.

  9. Objective calibration of numerical weather prediction models

    Science.gov (United States)

    Voudouri, A.; Khain, P.; Carmona, I.; Bellprat, O.; Grazzini, F.; Avgoustoglou, E.; Bettems, J. M.; Kaufmann, P.

    2017-07-01

    Numerical weather prediction (NWP) and climate models use parameterization schemes for physical processes, which often include free or poorly constrained parameters. Model developers normally calibrate the values of these parameters subjectively to improve the agreement of forecasts with available observations, a procedure referred to as expert tuning. A practicable objective multivariate calibration method built on a quadratic meta-model (MM), which has been applied to a regional climate model (RCM), has been shown to be at least as good as expert tuning. Based on these results, an approach to applying the methodology to an NWP model is presented in this study. The challenges in transferring the methodology from RCM to NWP are not restricted to the use of higher resolution and different time scales: the sensitivity of NWP model quality with respect to the model parameter space has to be clarified, and the overall procedure has to be optimized in terms of the computing resources required to calibrate an NWP model. Three free model parameters, affecting mainly the turbulence parameterization schemes, were selected with respect to their influence on variables associated with daily forecasts, such as daily minimum and maximum 2 m temperature as well as 24 h accumulated precipitation. Preliminary results indicate that the approach is both affordable in terms of computer resources and meaningful in terms of improved forecast quality. In addition, the proposed methodology has the advantage of being a replicable procedure that can be applied when an updated model version is launched and/or to customize the same model implementation over different climatological areas.
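
    The quadratic meta-model idea can be sketched in a two-parameter toy problem: sample the parameter space, fit a quadratic surrogate to a scalar forecast-error score by least squares, and minimize the surrogate analytically. The error function and its optimum below are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical scalar "forecast error" with optimum at p = (0.3, -0.5)
def forecast_error(p):
    return 1.0 + 4 * (p[0] - 0.3) ** 2 + 2 * (p[1] + 0.5) ** 2 \
        + 1.5 * (p[0] - 0.3) * (p[1] + 0.5)

# Sample the 2-parameter space and fit a quadratic meta-model by least squares
P = rng.uniform(-1.0, 1.0, size=(40, 2))
y = np.array([forecast_error(p) for p in P])
# Basis: 1, x1, x2, x1^2, x2^2, x1*x2
A = np.column_stack([np.ones(len(P)), P[:, 0], P[:, 1],
                     P[:, 0] ** 2, P[:, 1] ** 2, P[:, 0] * P[:, 1]])
c = np.linalg.lstsq(A, y, rcond=None)[0]

# Minimize the fitted quadratic analytically: solve grad = 0
H = np.array([[2 * c[3], c[5]], [c[5], 2 * c[4]]])
g = -np.array([c[1], c[2]])
p_opt = np.linalg.solve(H, g)
```

    In the real calibration each "sample" is a full model run scored against observations, so the payoff of the surrogate is that only a few dozen runs are needed rather than a dense parameter scan.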

  10. Anti-inflammatory and anti-amyloidogenic effects of a small molecule, 2,4-bis(p-hydroxyphenyl)-2-butenal, in the Tg2576 Alzheimer’s disease mouse model

    Directory of Open Access Journals (Sweden)

    Jin Peng

    2013-01-01

    Full Text Available Abstract Background Alzheimer’s disease (AD) is pathologically characterized by excessive accumulation of amyloid-beta (Aβ) fibrils within the brain and activation of astrocytes and microglial cells. In this study, we examined the anti-inflammatory and anti-amyloidogenic effects of 2,4-bis(p-hydroxyphenyl)-2-butenal (HPB242), an anti-inflammatory compound produced by the tyrosine-fructose Maillard reaction. Methods 12-month-old Tg2576 mice were treated with HPB242 (5 mg/kg) for 1 month, and cognitive function was then assessed by the Morris water maze test and passive avoidance test. In addition, western blot analysis, gel electromobility shift assay, immunostaining, immunofluorescence staining, ELISA and enzyme activity assays were used to examine the degree of Aβ deposition in the brains of Tg2576 mice. The Morris water maze task was analyzed using two-way ANOVA with repeated measures; other data were analyzed by one-way ANOVA followed by Dunnett’s post hoc test. Results Treatment with HPB242 (5 mg/kg for 1 month) significantly attenuated cognitive impairments in Tg2576 transgenic mice. HPB242 also prevented amyloidogenesis in Tg2576 transgenic mouse brains, as evidenced by Aβ accumulation, BACE1, APP and C99 expression, and β-secretase activity. In addition, HPB242 suppressed the expression of inducible nitric oxide synthase (iNOS) and cyclooxygenase-2 (COX-2) as well as the activation of astrocytes and microglial cells. Furthermore, activation of nuclear factor-kappaB (NF-κB) and signal transducer and activator of transcription 1/3 (STAT1/3) in the brain was potently inhibited by HPB242. Conclusions These results suggest that HPB242 might be useful for intervening in the development or progression of neurodegeneration in AD through its anti-inflammatory and anti-amyloidogenic effects.

  11. Prediction models from CAD models of 3D objects

    Science.gov (United States)

    Camps, Octavia I.

    1992-11-01

    In this paper we present a probabilistic, prediction-based approach for CAD-based object recognition. Given a CAD model of an object, the PREMIO system combines techniques of analytic graphics and physical models of lights and sensors to predict how features of the object will appear in images. In nearly 4,000 experiments on analytically generated and real images, we show that in a semi-controlled environment, predicting the detectability of features of the image can successfully guide a search procedure to make informed choices of model and image features in its search for correspondences that can be used to hypothesize the pose of the object. Furthermore, we provide a rigorous experimental protocol that can be used to determine the optimal number of correspondences to seek so that the probabilities of failing to find a pose and of finding an inaccurate pose are minimized.

  12. Model predictive control of MSMPR crystallizers

    Science.gov (United States)

    Moldoványi, Nóra; Lakatos, Béla G.; Szeifert, Ferenc

    2005-02-01

    A multi-input-multi-output (MIMO) control problem of isothermal continuous crystallizers is addressed in order to create an adequate model-based control system. The moment equation model of mixed suspension, mixed product removal (MSMPR) crystallizers that forms a dynamical system is used, the state of which is represented by a vector of six variables: the first four leading moments of the crystal size, the solute concentration, and the solvent concentration. Hence, the time evolution of the system occurs in a bounded region of the six-dimensional phase space. The controlled variables are the mean grain size and the crystal size distribution; the manipulated variables are the input concentration of the solute and the flow rate. The controllability and observability, as well as the coupling between the inputs and the outputs, were analyzed by simulation using the linearized model. It is shown that the crystallizer is a nonlinear MIMO system with strong coupling between the state variables. Considering the possibilities of model reduction, a third-order model was found quite adequate for the model estimation in model predictive control (MPC). The mean crystal size and the variance of the size distribution can be nearly separately controlled by the residence time and the inlet solute concentration, respectively. By seeding, the controllability of the crystallizer increases significantly, and the overshoots and oscillations become smaller. The results of the control study have shown that linear MPC is an adaptable and feasible controller for continuous crystallizers.
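
    As a minimal illustration of linear MPC (a scalar toy plant, not the crystallizer moment model; the dynamics, horizon, and weights below are invented), each sampling instant solves a finite-horizon tracking problem and applies only the first input:

```python
import numpy as np

# Toy scalar plant x[k+1] = a*x[k] + b*u[k], setpoint r, horizon N
a, b, r, N = 0.9, 0.5, 1.0, 10

def mpc_step(x0, rho=0.01):
    # Prediction over the horizon: x = F*x0 + G @ u
    F = np.array([a ** k for k in range(1, N + 1)])
    G = np.zeros((N, N))
    for k in range(N):
        for j in range(k + 1):
            G[k, j] = a ** (k - j) * b
    # Minimize ||x - r||^2 + rho*||u||^2 as a ridge least-squares problem
    A = np.vstack([G, np.sqrt(rho) * np.eye(N)])
    y = np.concatenate([r - F * x0, np.zeros(N)])
    u = np.linalg.lstsq(A, y, rcond=None)[0]
    return u[0]  # receding horizon: apply only the first move

x = 0.0
for _ in range(30):
    x = a * x + b * mpc_step(x)  # closed loop drives x toward the setpoint
```

    A MIMO version of the same recipe replaces the scalars with the (reduced, third-order) state-space matrices and adds input/output constraints, which is where a QP solver replaces the least-squares step.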

  13. An Anisotropic Hardening Model for Springback Prediction

    Science.gov (United States)

    Zeng, Danielle; Xia, Z. Cedric

    2005-08-01

    As more Advanced High-Strength Steels (AHSS) are heavily used for automotive body structures and closure panels, accurate springback prediction for these components becomes more challenging because of their rapid hardening characteristics and ability to sustain even higher stresses. In this paper, a modified Mroz hardening model is proposed to capture the realistic Bauschinger effect at reverse loading, such as when material passes through die radii or a drawbead during the sheet metal forming process. This model accounts for the material's anisotropic yield surface and nonlinear isotropic/kinematic hardening behavior. Material tension/compression test data are used to accurately represent the Bauschinger effect. The effectiveness of the model is demonstrated by comparison of numerical and experimental springback results for a DP600 straight U-channel test.

  14. Predictive modelling of ferroelectric tunnel junctions

    Science.gov (United States)

    Velev, Julian P.; Burton, John D.; Zhuravlev, Mikhail Ye; Tsymbal, Evgeny Y.

    2016-05-01

    Ferroelectric tunnel junctions combine the phenomena of quantum-mechanical tunnelling and switchable spontaneous polarisation of a nanometre-thick ferroelectric film into novel device functionality. Switching the ferroelectric barrier polarisation direction produces a sizable change in resistance of the junction—a phenomenon known as the tunnelling electroresistance effect. From a fundamental perspective, ferroelectric tunnel junctions and their version with ferromagnetic electrodes, i.e., multiferroic tunnel junctions, are testbeds for studying the underlying mechanisms of tunnelling electroresistance as well as the interplay between electric and magnetic degrees of freedom and their effect on transport. From a practical perspective, ferroelectric tunnel junctions hold promise for disruptive device applications. In a very short time, they have traversed the path from basic model predictions to prototypes for novel non-volatile ferroelectric random access memories with non-destructive readout. This remarkable progress is to a large extent driven by a productive cycle of predictive modelling and innovative experimental effort. In this review article, we outline the development of the ferroelectric tunnel junction concept and the role of theoretical modelling in guiding experimental work. We discuss a wide range of physical phenomena that control the functional properties of ferroelectric tunnel junctions and summarise the state-of-the-art achievements in the field.

  15. Simple predictions from multifield inflationary models.

    Science.gov (United States)

    Easther, Richard; Frazer, Jonathan; Peiris, Hiranya V; Price, Layne C

    2014-04-25

    We explore whether multifield inflationary models make unambiguous predictions for fundamental cosmological observables. Focusing on N-quadratic inflation, we numerically evaluate the full perturbation equations for models with 2, 3, and O(100) fields, using several distinct methods for specifying the initial values of the background fields. All scenarios are highly predictive, with the probability distribution functions of the cosmological observables becoming more sharply peaked as N increases. For N=100 fields, 95% of our Monte Carlo samples fall in the ranges ns∈(0.9455,0.9534), α∈(-9.741,-7.047)×10-4, r∈(0.1445,0.1449), and riso∈(0.02137,3.510)×10-3 for the spectral index, running, tensor-to-scalar ratio, and isocurvature-to-adiabatic ratio, respectively. The expected amplitude of isocurvature perturbations grows with N, raising the possibility that many-field models may be sensitive to postinflationary physics and suggesting new avenues for testing these scenarios.

  16. The triglyceride to high-density lipoprotein cholesterol (TG/HDL-C) ratio as a predictor of β-cell function in African American women.

    Science.gov (United States)

    Maturu, Amita; DeWitt, Peter; Kern, Philip A; Rasouli, Neda

    2015-05-01

    The TG/HDL-C ratio is used as a marker of insulin resistance (IR) in Caucasians. However, there are conflicting data on the TG/HDL-C ratio as a predictor of IR in African Americans. Compared to Caucasians, African Americans have lower TG levels and increased insulin levels despite a greater risk for diabetes. We hypothesized that the TG/HDL-C ratio is predictive of IR and/or β-cell function in African American (AA) women. Non-diabetic AA women (n = 41) with a BMI > 25 kg/m² underwent a frequently sampled intravenous glucose tolerance test (FSIGTT). Insulin sensitivity (SI) and the acute insulin response to glucose (AIRg) were measured using the minimal model, and β-cell function was determined by the disposition index (DI = SI × AIRg). IR was defined as the lowest tertile of SI, and an AUC-ROC above 0.70 was defined as significant discrimination. The mean (± SD) age was 38.5 ± 11.3 years, with a BMI of 33.5 ± 6.7 kg/m² and fasting glucose of 86.5 ± 10.5 mg/dL. The AUC-ROC for the prediction of low DI was significant, and an increased TG/HDL-C ratio was associated with decreased DI. However, the AUC-ROC for the prediction of IR or low AIRg was not, indicating that the TG/HDL-C ratio is a poor predictor of IR in AA women. We did, however, show an inverse association between the TG/HDL-C ratio and β-cell function, suggesting that this simple tool may effectively identify AA women at risk for DM2. Copyright © 2015 Elsevier Inc. All rights reserved.
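
    The AUC-ROC used here is the Mann-Whitney statistic: the probability that a randomly chosen case has a higher marker value than a randomly chosen non-case. A small sketch with synthetic data (the distributions and the outcome link are invented, not the study's cohort):

```python
import numpy as np

def auc_roc(marker, outcome):
    """Mann-Whitney AUC: P(marker of a random case > marker of a random non-case)."""
    pos = marker[outcome == 1]
    neg = marker[outcome == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
        + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

rng = np.random.default_rng(3)
n = 200
tg_hdl = rng.lognormal(0.5, 0.4, n)  # hypothetical TG/HDL-C ratios
# Synthetic "low beta-cell function" label, made more likely at high TG/HDL-C
p_low = 1.0 / (1.0 + np.exp(-(tg_hdl - np.median(tg_hdl))))
low_di = (rng.random(n) < p_low).astype(int)

auc = auc_roc(tg_hdl, low_di)
```

    An AUC of 0.5 means no discrimination; the study's 0.70 threshold for "significant discrimination" corresponds to a marker that ranks cases above non-cases 70% of the time.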

  17. Predictions of models for environmental radiological assessment

    Energy Technology Data Exchange (ETDEWEB)

    Peres, Sueli da Silva; Lauria, Dejanira da Costa, E-mail: suelip@ird.gov.br, E-mail: dejanira@irg.gov.br [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Servico de Avaliacao de Impacto Ambiental, Rio de Janeiro, RJ (Brazil); Mahler, Claudio Fernando [Coppe. Instituto Alberto Luiz Coimbra de Pos-Graduacao e Pesquisa de Engenharia, Universidade Federal do Rio de Janeiro (UFRJ) - Programa de Engenharia Civil, RJ (Brazil)

    2011-07-01

    In the field of environmental impact assessment, models are used for estimating the source term, environmental dispersion and transfer of radionuclides, exposure pathways, radiation dose, and the risk for human beings. Although it is recognized that specific local data are important to improve the quality of dose assessment results, in practice obtaining them can be very difficult and expensive. Sources of uncertainty are numerous, among which we can cite the subjectivity of modelers, exposure scenarios and pathways, the codes used, and general parameters. The various models available use different mathematical approaches of different complexity, which can result in different predictions; thus, for the same inputs, different models can produce very different outputs. This paper briefly presents the main advances in the field of environmental radiological assessment that aim to improve the reliability of the models used in the assessment of environmental radiological impact. A model intercomparison exercise supplied incompatible results for ¹³⁷Cs and ⁶⁰Co, underlining the need to develop reference methodologies for environmental radiological assessment that allow dose estimates to be compared on a common basis. The results of the intercomparison exercise are presented briefly. (author)

  18. Predicting Protein Secondary Structure with Markov Models

    DEFF Research Database (Denmark)

    Fischer, Paul; Larsen, Simon; Thomsen, Claus

    2004-01-01

    The primary structure of a protein is the sequence of its amino acids. The secondary structure describes structural properties of the molecule such as which parts of it form sheets, helices or coils. Spatial and other properties are described by the higher order structures. The classification task we are considering here is to predict the secondary structure from the primary one. To this end we train a Markov model on training data and then use it to classify parts of unknown protein sequences as sheets, helices or coils. We show how to exploit the directional information contained...
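
    A minimal version of this classification scheme (the alphabet and training strings below are invented toys, not real protein data) trains one first-order Markov chain per structure class and assigns a new segment to the class with the highest log-likelihood:

```python
import numpy as np
from collections import defaultdict

# Toy per-class training segments (invented; real models use annotated proteins)
train = {
    "helix": ["AALLAALL", "LLAALLAA"],
    "sheet": ["VVIVVIVV", "IVVIIVVI"],
}

def transition_probs(seqs, alpha=1.0):
    """First-order Markov transition probabilities with add-alpha smoothing."""
    letters = sorted(set("".join(seqs)))
    counts = defaultdict(lambda: defaultdict(float))
    for s in seqs:
        for a, b in zip(s, s[1:]):
            counts[a][b] += 1.0
    return {a: {b: (counts[a][b] + alpha) / (sum(counts[a].values()) + alpha * len(letters))
                for b in letters} for a in letters}

models = {cls: transition_probs(seqs) for cls, seqs in train.items()}

def classify(seq):
    # Unseen letters get a tiny floor probability so the log stays finite
    def loglik(m):
        return sum(np.log(m.get(a, {}).get(b, 1e-6)) for a, b in zip(seq, seq[1:]))
    return max(models, key=lambda cls: loglik(models[cls]))
```

    The directional information the abstract mentions enters through the transition counts: a chain trained left-to-right scores a segment differently than one trained on reversed sequences.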

  19. A Modified Model Predictive Control Scheme

    Institute of Scientific and Technical Information of China (English)

    Xiao-Bing Hu; Wen-Hua Chen

    2005-01-01

    In implementations of MPC (Model Predictive Control) schemes, two issues need to be addressed. One is how to enlarge the stability region as much as possible. The other is how to guarantee stability when a computational time limitation exists. In this paper, a modified MPC scheme for constrained linear systems is described. An offline LMI-based iteration process is introduced to expand the stability region. At the same time, a database of feasible control sequences is generated offline so that stability can still be guaranteed in the case of computational time limitations. Simulation results illustrate the effectiveness of this new approach.

  20. Hierarchical Model Predictive Control for Resource Distribution

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob

    2010-01-01

    This paper deals with hierarchical model predictive control (MPC) of distributed systems. A three-level hierarchical approach is proposed, consisting of a high-level MPC controller, a second level of so-called aggregators, controlled by an online MPC-like algorithm, and a lower level of autonomous... facilitates plug-and-play addition of subsystems without redesign of any controllers. The method is supported by a number of simulations featuring a three-level smart-grid power control system for a small isolated power grid.

  1. Explicit model predictive control accuracy analysis

    OpenAIRE

    Knyazev, Andrew; Zhu, Peizhen; Di Cairano, Stefano

    2015-01-01

    Model Predictive Control (MPC) can efficiently control constrained systems in real-time applications. The MPC feedback law for a linear system with linear inequality constraints can be explicitly computed off-line, which results in an off-line partition of the state space into non-overlapping convex regions, with affine control laws associated with each region of the partition. An actual implementation of this explicit MPC in low-cost micro-controllers requires the data to be "quantized", i.e. repre...

  2. Critical conceptualism in environmental modeling and prediction.

    Science.gov (United States)

    Christakos, G

    2003-10-15

    Many important problems in environmental science and engineering are of a conceptual nature. Research and development, however, often becomes so preoccupied with technical issues, which are themselves fascinating, that it neglects essential methodological elements of conceptual reasoning and theoretical inquiry. This work suggests that valuable insight into environmental modeling can be gained by means of critical conceptualism which focuses on the software of human reason and, in practical terms, leads to a powerful methodological framework of space-time modeling and prediction. A knowledge synthesis system develops the rational means for the epistemic integration of various physical knowledge bases relevant to the natural system of interest in order to obtain a realistic representation of the system, provide a rigorous assessment of the uncertainty sources, generate meaningful predictions of environmental processes in space-time, and produce science-based decisions. No restriction is imposed on the shape of the distribution model or the form of the predictor (non-Gaussian distributions, multiple-point statistics, and nonlinear models are automatically incorporated). The scientific reasoning structure underlying knowledge synthesis involves teleologic criteria and stochastic logic principles which have important advantages over the reasoning method of conventional space-time techniques. Insight is gained in terms of real world applications, including the following: the study of global ozone patterns in the atmosphere using data sets generated by instruments on board the Nimbus 7 satellite and secondary information in terms of total ozone-tropopause pressure models; the mapping of arsenic concentrations in the Bangladesh drinking water by assimilating hard and soft data from an extensive network of monitoring wells; and the dynamic imaging of probability distributions of pollutants across the Kalamazoo river.

  3. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution of partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.

  4. A Predictive Maintenance Model for Railway Tracks

    DEFF Research Database (Denmark)

    Li, Rui; Wen, Min; Salling, Kim Bang

    2015-01-01

    For modern railways, maintenance is critical for ensuring safety, train punctuality and overall capacity utilization. The cost of railway maintenance in Europe is high, on average between 30,000 – 100,000 Euro per km per year [1]. Aiming to reduce such maintenance expenditure, this paper presents a mathematical model based on Mixed Integer Programming (MIP) which is designed to optimize the predictive railway tamping activities for ballasted track over a time horizon of up to four years. The objective function is set up to minimize the actual costs for the tamping machine (measured by time... recovery of the track quality after the tamping operation and (5) tamping machine operation factors. A Danish railway track between Odense and Fredericia, 57.2 km in length, is used over a time period of two to four years in the proposed maintenance model. The total cost can be reduced by up to 50...

  5. A predictive fitness model for influenza

    Science.gov (United States)

    Łuksza, Marta; Lässig, Michael

    2014-03-01

    The seasonal human influenza A/H3N2 virus undergoes rapid evolution, which produces significant year-to-year sequence turnover in the population of circulating strains. Adaptive mutations respond to human immune challenge and occur primarily in antigenic epitopes, the antibody-binding domains of the viral surface protein haemagglutinin. Here we develop a fitness model for haemagglutinin that predicts the evolution of the viral population from one year to the next. Two factors are shown to determine the fitness of a strain: adaptive epitope changes and deleterious mutations outside the epitopes. We infer both fitness components for the strains circulating in a given year, using population-genetic data of all previous strains. From fitness and frequency of each strain, we predict the frequency of its descendent strains in the following year. This fitness model maps the adaptive history of influenza A and suggests a principled method for vaccine selection. Our results call for a more comprehensive epidemiology of influenza and other fast-evolving pathogens that integrates antigenic phenotypes with other viral functions coupled by genetic linkage.
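
    The year-to-year prediction step has the form of discrete-generation selection, x_{t+1} ∝ x_t · exp(f); the strain frequencies and fitness values below are invented for illustration, not inferred from haemagglutinin data:

```python
import numpy as np

# Hypothetical strain frequencies and inferred fitnesses for one season
freq = np.array([0.50, 0.30, 0.15, 0.05])
fitness = np.array([-0.2, 0.6, 0.1, 1.2])  # e.g. epitope gain minus mutational load

def propagate(x, f):
    """One season of selection: x_{t+1} proportional to x_t * exp(f)."""
    w = x * np.exp(f)
    return w / w.sum()

next_freq = propagate(freq, fitness)
```

    High-fitness strains grow in frequency at the expense of the rest, which is the basis for ranking candidate vaccine strains by their predicted descendants' frequencies.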

  6. Predictive Model of Radiative Neutrino Masses

    CERN Document Server

    Babu, K S

    2013-01-01

    We present a simple and predictive model of radiative neutrino masses. It is a special case of the Zee model which introduces two Higgs doublets and a charged singlet. We impose a family-dependent $Z_4$ symmetry acting on the leptons, which reduces the number of parameters describing neutrino oscillations to four. A variety of predictions follow: The hierarchy of neutrino masses must be inverted; the lightest neutrino mass is extremely small and calculable; one of the neutrino mixing angles is determined in terms of the other two; the phase parameters take CP-conserving values with $\delta_{CP} = \pi$; and the effective mass in neutrinoless double beta decay lies in a narrow range, $m_{\beta\beta} = (17.6 - 18.5)$ meV. The ratio of vacuum expectation values of the two Higgs doublets, $\tan\beta$, is determined to be either 1.9 or 0.19 from neutrino oscillation data. Flavor-conserving and flavor-changing couplings of the Higgs doublets are also determined from neutrino data. The non-standard neutral Higgs bosons, if t...

  7. A predictive model for dimensional errors in fused deposition modeling

    DEFF Research Database (Denmark)

    Stolfi, A.

    2015-01-01

    This work concerns the effect of deposition angle (a) and layer thickness (L) on the dimensional performance of FDM parts using a predictive model based on the geometrical description of the FDM filament profile. An experimental validation over the whole a range from 0° to 177° at 3° steps and two...

  8. Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.

    Science.gov (United States)

    Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F

    2013-04-01

    In biomedical research, it is often of interest to characterize the biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference while making as few restrictive parametric assumptions as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models by one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.

  9. Continuous-Discrete Time Prediction-Error Identification Relevant for Linear Model Predictive Control

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

A prediction-error method tailored for model-based predictive control is presented. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model. The linear discrete-time stochastic state space model is realized from a continuous-discrete-time linear stochastic system specified using transfer functions with time-delays. It is argued that the prediction-error criterion should be selected such that it is compatible with the objective function of the predictive controller in which the model...
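A one-step-ahead Kalman predictor of the kind referred to here can be sketched as follows. This is a generic textbook formulation for a linear discrete-time stochastic state space model, not the authors' tailored prediction-error criterion, and the scalar system parameters are illustrative.

```python
import numpy as np

def kalman_predictor(A, B, C, Q, R, xhat, P, u, y):
    """One-step-ahead Kalman predictor for
    x[k+1] = A x[k] + B u[k] + w,  y[k] = C x[k] + v.
    Returns the predicted state, covariance, and one-step output prediction."""
    # Measurement update (filtering) at time k
    S = C @ P @ C.T + R
    K = P @ C.T @ np.linalg.inv(S)
    xfilt = xhat + K @ (y - C @ xhat)
    Pfilt = P - K @ C @ P
    # Time update: one-step-ahead prediction
    xpred = A @ xfilt + B @ u
    Ppred = A @ Pfilt @ A.T + Q
    ypred = C @ xpred
    return xpred, Ppred, ypred

# Scalar example: slow first-order system driven by a step input.
A = np.array([[0.9]]); B = np.array([[0.1]]); C = np.array([[1.0]])
Q = np.array([[0.01]]); R = np.array([[0.1]])
x, P = np.array([0.0]), np.array([[1.0]])
x, P, yp = kalman_predictor(A, B, C, Q, R, x, P, np.array([1.0]), np.array([0.5]))
```
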

  10. Two criteria for evaluating risk prediction models.

    Science.gov (United States)

    Pfeiffer, R M; Gail, M H

    2011-09-01

We propose and study two criteria to assess the usefulness of models that predict risk of disease incidence for screening and prevention, or the usefulness of prognostic models for management following disease diagnosis. The first criterion, the proportion of cases followed PCF(q), is the proportion of individuals who will develop disease who are included in the proportion q of individuals in the population at highest risk. The second criterion is the proportion needed to follow-up, PNF(p), namely the proportion of the general population at highest risk that one needs to follow in order that a proportion p of those destined to become cases will be followed. PCF(q) assesses the effectiveness of a program that follows 100q% of the population at highest risk. PNF(p) assesses the feasibility of covering 100p% of cases by indicating how much of the population at highest risk must be followed. We show the relationship of these two criteria to the Lorenz curve and its inverse, and present distribution theory for estimates of PCF and PNF. We develop new methods, based on influence functions, for inference for a single risk model, and also for comparing the PCFs and PNFs of two risk models, both of which were evaluated in the same validation data.
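Both criteria can be estimated empirically from predicted risks and observed outcomes. The sketch below uses simulated, well-calibrated risks (all data synthetic, not from the paper):

```python
import numpy as np

def pcf(risk, case, q):
    """Proportion of cases followed: fraction of eventual cases found
    among the top-q fraction of the population ranked by predicted risk."""
    order = np.argsort(-risk)               # highest risk first
    n_follow = int(np.ceil(q * len(risk)))
    return case[order][:n_follow].sum() / case.sum()

def pnf(risk, case, p):
    """Proportion needed to follow: smallest top-risk population fraction
    whose follow-up captures at least a fraction p of all cases."""
    order = np.argsort(-risk)
    captured = np.cumsum(case[order]) / case.sum()
    return (np.argmax(captured >= p) + 1) / len(risk)

rng = np.random.default_rng(0)
risk = rng.uniform(size=1000)
case = (rng.uniform(size=1000) < risk).astype(int)   # well-calibrated risks
# With informative risk scores, the top 20% of the population
# contains well over 20% of the cases.
```
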

  11. Methods for Handling Missing Variables in Risk Prediction Models

    NARCIS (Netherlands)

    Held, Ulrike; Kessels, Alfons; Aymerich, Judith Garcia; Basagana, Xavier; ter Riet, Gerben; Moons, Karel G. M.; Puhan, Milo A.

    2016-01-01

Prediction models should be externally validated before being used in clinical practice. Many published prediction models have never been validated. Uncollected predictor variables in otherwise suitable validation cohorts are the main factor precluding external validation. We used individual patient

  12. Alterations in synaptic plasticity coincide with deficits in spatial working memory in presymptomatic 3xTg-AD mice.

    Science.gov (United States)

    Clark, Jason K; Furgerson, Matthew; Crystal, Jonathon D; Fechheimer, Marcus; Furukawa, Ruth; Wagner, John J

    2015-11-01

Alzheimer's disease is a neurodegenerative condition believed to be initiated by production of amyloid-beta peptide, which leads to synaptic dysfunction and progressive memory loss. Using a mouse model of Alzheimer's disease (3xTg-AD), an 8-arm radial maze was employed to assess spatial working memory. Unexpectedly, the younger (3-month-old) 3xTg-AD mice were as impaired in the spatial working memory task as the older (8-month-old) 3xTg-AD mice when compared with age-matched NonTg control animals. Field potential recordings from the CA1 region of slices prepared from the ventral hippocampus were obtained to assess synaptic transmission and capability for synaptic plasticity. At 3 months of age, the NMDA receptor-dependent component of LTP was reduced in 3xTg-AD mice. However, the magnitude of the non-NMDA receptor-dependent component of LTP was concomitantly increased, resulting in a similar amount of total LTP in 3xTg-AD and NonTg mice. At 8 months of age, the NMDA receptor-dependent LTP was again reduced in 3xTg-AD mice, but now the non-NMDA receptor-dependent component was decreased as well, resulting in a significantly reduced total amount of LTP in 3xTg-AD compared with NonTg mice. Both 3- and 8-month-old 3xTg-AD mice exhibited reductions in paired-pulse facilitation and NMDA receptor-dependent LTP that coincided with the deficit in spatial working memory. The early presence of this cognitive impairment and the associated alterations in synaptic plasticity demonstrate that the onset of some behavioral and neurophysiological consequences can occur before the detectable presence of plaques and tangles in the 3xTg-AD mouse model of Alzheimer's disease.

  13. Estimating the magnitude of prediction uncertainties for the APLE model

    Science.gov (United States)

    Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, we conduct an uncertainty analysis for the Annual P ...

  14. Prediction of Catastrophes: an experimental model

    CERN Document Server

    Peters, Randall D; Pomeau, Yves

    2012-01-01

Catastrophes of all kinds can be roughly defined as short-duration, large-amplitude events following and followed by long periods of "ripening". Major earthquakes surely belong to the class of 'catastrophic' events. Because of the space-time scales involved, an experimental approach is often difficult, not to say impossible, however desirable it could be. Described in this article is a "laboratory" setup that yields data of a type that is amenable to theoretical methods of prediction. Observations are made of a critical slowing down in the noisy signal of a solder wire creeping under constant stress. This effect is shown to be a fair signal of the forthcoming catastrophe in both of two dynamical models. The first is an "abstract" model in which a time-dependent quantity drifts slowly but makes quick jumps from time to time. The second is a realistic physical model for the collective motion of dislocations (the Ananthakrishna set of equations for creep). Hope thus exists that similar changes in the response to ...
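Critical slowing down is commonly quantified as a rise in lag-1 autocorrelation within a sliding window. The toy sketch below illustrates the indicator on synthetic AR(1) noise, not on the solder-wire data of the record:

```python
import numpy as np

def lag1_autocorr(x):
    """Lag-1 autocorrelation of a 1-D signal."""
    x = x - x.mean()
    return (x[:-1] @ x[1:]) / (x @ x)

rng = np.random.default_rng(1)

def ar1(phi, n):
    """AR(1) noise: slower recovery (phi -> 1) mimics critical slowing down."""
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.normal()
    return x

early, late = ar1(0.2, 5000), ar1(0.9, 5000)
# The lag-1 autocorrelation rises as the system approaches the transition.
```
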

  15. Predictive modeling of low solubility semiconductor alloys

    Science.gov (United States)

    Rodriguez, Garrett V.; Millunchick, Joanna M.

    2016-09-01

    GaAsBi is of great interest for applications in high efficiency optoelectronic devices due to its highly tunable bandgap. However, the experimental growth of high Bi content films has proven difficult. Here, we model GaAsBi film growth using a kinetic Monte Carlo simulation that explicitly takes cation and anion reactions into account. The unique behavior of Bi droplets is explored, and a sharp decrease in Bi content upon Bi droplet formation is demonstrated. The high mobility of simulated Bi droplets on GaAsBi surfaces is shown to produce phase separated Ga-Bi droplets as well as depressions on the film surface. A phase diagram for a range of growth rates that predicts both Bi content and droplet formation is presented to guide the experimental growth of high Bi content GaAsBi films.
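The event-selection core of a kinetic Monte Carlo simulation (the standard "n-fold way"/Gillespie scheme) can be sketched as follows. The event names and rates are purely illustrative, not the GaAsBi growth model of the record:

```python
import math
import random

def kmc_step(rates, t, rng=random):
    """One kinetic Monte Carlo step: choose an event with probability
    proportional to its rate, then advance time by an exponentially
    distributed waiting time with mean 1/total_rate."""
    total = sum(rates.values())
    r = rng.random() * total
    chosen, acc = None, 0.0
    for event, rate in rates.items():
        acc += rate
        if r < acc:
            chosen = event
            break
    if chosen is None:                   # guard against float round-off
        chosen = list(rates)[-1]
    dt = -math.log(1.0 - rng.random()) / total
    return chosen, t + dt

# Toy event table (rates in events/s); names and values are illustrative only.
rates = {"Ga_adatom_hop": 1e6, "Bi_attach": 5e4, "Bi_detach": 2e5}
random.seed(0)
event, t = kmc_step(rates, 0.0)
```

Fast events dominate the selection, while the waiting time shrinks as the total rate grows, which is what lets KMC reach physical time scales that direct molecular dynamics cannot.
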

  16. Distributed model predictive control made easy

    CERN Document Server

    Negenborn, Rudy

    2014-01-01

    The rapid evolution of computer science, communication, and information technology has enabled the application of control techniques to systems beyond the possibilities of control theory just a decade ago. Critical infrastructures such as electricity, water, traffic and intermodal transport networks are now in the scope of control engineers. The sheer size of such large-scale systems requires the adoption of advanced distributed control approaches. Distributed model predictive control (MPC) is one of the promising control methodologies for control of such systems.   This book provides a state-of-the-art overview of distributed MPC approaches, while at the same time making clear directions of research that deserve more attention. The core and rationale of 35 approaches are carefully explained. Moreover, detailed step-by-step algorithmic descriptions of each approach are provided. These features make the book a comprehensive guide both for those seeking an introduction to distributed MPC as well as for those ...

  17. Leptogenesis in minimal predictive seesaw models

    Science.gov (United States)

    Björkeroth, Fredrik; de Anda, Francisco J.; de Medeiros Varzielas, Ivo; King, Stephen F.

    2015-10-01

We estimate the Baryon Asymmetry of the Universe (BAU) arising from leptogenesis within a class of minimal predictive seesaw models involving two right-handed neutrinos and simple Yukawa structures with one texture zero. The two right-handed neutrinos are dominantly responsible for the "atmospheric" and "solar" neutrino masses with Yukawa couplings to (νe, νμ, ντ) proportional to (0, 1, 1) and (1, n, n − 2), respectively, where n is a positive integer. The neutrino Yukawa matrix is therefore characterised by two proportionality constants with their relative phase providing a leptogenesis-PMNS link, enabling the lightest right-handed neutrino mass to be determined from neutrino data and the observed BAU. We discuss an SU(5) SUSY GUT example, where A4 vacuum alignment provides the required Yukawa structures with n = 3, while a Z9 symmetry fixes the relative phase to be a ninth root of unity.

  18. Comparing model predictions for ecosystem-based management

    DEFF Research Database (Denmark)

    Jacobsen, Nis Sand; Essington, Timothy E.; Andersen, Ken Haste

    2016-01-01

Ecosystem modeling is becoming an integral part of fisheries management, but there is a need to identify differences between predictions derived from models employed for scientific and management purposes. Here, we compared two models: a biomass-based food-web model (Ecopath with Ecosim (EwE)) and a size-structured fish community model. The models were compared with respect to predicted ecological consequences of fishing to identify commonalities and differences in model predictions for the California Current fish community. We compared the models regarding direct and indirect responses to fishing on one or more species. The size-based model predicted a higher fishing mortality needed to reach maximum sustainable yield than EwE for most species. The size-based model also predicted stronger top-down effects of predator removals than EwE. In contrast, EwE predicted stronger bottom-up effects...

  19. Remaining Useful Lifetime (RUL) - Probabilistic Predictive Model

    Directory of Open Access Journals (Sweden)

    Ephraim Suhir

    2011-01-01

Reliability evaluations and assurances cannot be delayed until the device (system) is fabricated and put into operation. Reliability of an electronic product should be conceived at the early stages of its design; implemented during manufacturing; evaluated (considering customer requirements and the existing specifications) by electrical, optical and mechanical measurements and testing; checked (screened) during manufacturing (fabrication); and, if necessary and appropriate, maintained in the field during the product's operation. A simple and physically meaningful probabilistic predictive model is suggested for the evaluation of the remaining useful lifetime (RUL) of an electronic device (system) after an appreciable deviation from its normal operation conditions has been detected, and the increase in the failure rate and the change in the configuration of the wear-out portion of the bathtub curve have been assessed. The general concepts are illustrated by numerical examples. The model can be employed, along with other PHM forecasting and interfering tools and means, to evaluate and to maintain a high level of reliability (probability of non-failure) of a device (system) at the operation stage of its lifetime.
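The basic RUL idea can be illustrated with a deliberately simple constant-failure-rate sketch: once monitoring shows the failure rate has risen, the time until the probability of non-failure drops below a target shrinks proportionally. This exponential assumption is an illustration only; the record's model treats the wear-out portion of the bathtub curve in more detail.

```python
import math

def rul_hours(failure_rate_per_hour: float, reliability_target: float) -> float:
    """Constant-failure-rate sketch: time until the probability of
    non-failure falls to reliability_target, using P(survive t) = exp(-lambda*t)."""
    return -math.log(reliability_target) / failure_rate_per_hour

lam0, lam1 = 1e-6, 5e-6     # healthy vs. degraded failure rate (illustrative)
rul_healthy = rul_hours(lam0, 0.99)
rul_degraded = rul_hours(lam1, 0.99)
# A five-fold rise in failure rate cuts the RUL estimate five-fold.
```
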

  20. A Predictive Model of Geosynchronous Magnetopause Crossings

    CERN Document Server

    Dmitriev, A; Chao, J -K

    2013-01-01

We have developed a model predicting whether or not the magnetopause crosses geosynchronous orbit at a given location for a given solar wind pressure Psw, Bz component of the interplanetary magnetic field (IMF), and geomagnetic conditions characterized by the 1-min SYM-H index. The model is based on more than 300 geosynchronous magnetopause crossings (GMCs) and about 6000 minutes when geosynchronous satellites of the GOES and LANL series were located in the magnetosheath (so-called MSh intervals) in 1994 to 2001. Minimizing the Psw required for GMCs and MSh intervals at various locations, Bz and SYM-H allows describing both the effect of magnetopause dawn-dusk asymmetry and the saturation of the Bz influence for very large southward IMF. The asymmetry is strong for large negative Bz and almost disappears when Bz is positive. We found that the larger the amplitude of negative SYM-H, the lower the solar wind pressure required for GMCs. We attribute this effect to a depletion of the dayside magnetic field by a storm-time intensification of t...
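For comparison, a widely used empirical alternative, the Shue et al. (1998) standoff-distance model, gives a quick check of whether the magnetopause can reach geosynchronous orbit (6.6 Earth radii). This is a different model from the one in the record, shown only as an illustration:

```python
import math

def shue_standoff_re(bz_nt: float, pdyn_npa: float) -> float:
    """Subsolar magnetopause standoff distance (Earth radii) from the
    empirical Shue et al. (1998) model: Bz in nT, dynamic pressure in nPa."""
    return (10.22 + 1.29 * math.tanh(0.184 * (bz_nt + 8.14))) * pdyn_npa ** (-1.0 / 6.6)

# Quiet conditions: the magnetopause sits well outside 6.6 RE.
print(shue_standoff_re(0.0, 2.0) > 6.6)
# Strong southward IMF plus high pressure pushes it inside geosynchronous orbit.
print(shue_standoff_re(-20.0, 20.0) < 6.6)
```
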

  1. Predictive modeling for EBPC in EBDW

    Science.gov (United States)

    Zimmermann, Rainer; Schulz, Martin; Hoppe, Wolfgang; Stock, Hans-Jürgen; Demmerle, Wolfgang; Zepka, Alex; Isoyan, Artak; Bomholt, Lars; Manakli, Serdar; Pain, Laurent

    2009-10-01

We demonstrate a flow for e-beam proximity correction (EBPC) for e-beam direct write (EBDW) wafer manufacturing processes, with a solution that covers all steps from the generation of a test pattern for (experimental or virtual) measurement data creation, through e-beam model fitting and proximity effect correction (PEC), to verification of the results. We base our approach on a predictive, physical e-beam simulation tool, with the possibility to complement this with experimental data, and the goal of preparing the EBPC methods for the advent of high-volume EBDW tools. As an example, we apply and compare dose correction and geometric correction for low and high electron energies on 1D and 2D test patterns. In particular, we show some results of model-based geometric correction as it is typical for the optical case, but enhanced for the particularities of e-beam technology. The results are used to discuss PEC strategies, with respect to short and long range effects.

  2. Accurate assessment of the biodegradation of cationic surfactants in activated sludge reactors (OECD TG 303A)

    NARCIS (Netherlands)

    Geerts, R.; Ginkel, van C.G.; Plugge, C.M.

    2015-01-01

    The continuous-fed activated sludge test (OECD TG 303A) was used to predict the removal of cationic surfactants from wastewater in activated sludge plants. However, a method to differentiate between adsorption and biodegradation is not provided in these guidelines. Assessment of removal by biodegrad

  5. Relationship TG/HDL-C and insulin resistance in adult women by nutritional status

    Directory of Open Access Journals (Sweden)

    Lorena Belén

    2014-04-01

Introduction: The TG/HDL-C ratio is an indicator of LDL particle size, facilitating the detection of individuals at increased atherogenic risk. Estimating LDL size becomes important especially in patients with TG and HDL-C values near the limits of the reference range. The objective of the study is to estimate the association between the TG/HDL-C ratio and insulin resistance (IR), by nutritional status, in adult women attending the Foundation for Endocrine Metabolic Diseases Research and Applied Clinical Research (FIEEM). Material and methods: Cross-sectional design; apparently healthy, non-pregnant adult women older than 30 years attending FIEEM in the Autonomous City of Buenos Aires. Dependent variable: TG/HDL-C ≥ 3.0, considered a high value. Independent variables: IR defined by the homeostatic model index HOMA-IR ≥ 2.5, categorizing the sample into two groups (with and without IR), controlled by nutritional status using body mass index (BMI) and waist circumference (WC). Analyses in SPSS Statistics 15.0 used the chi-square or Fisher exact test and ORs with 95% confidence intervals, with logistic regression significance set at p < 0.05. Results: We evaluated a purposive sample of 104 women (31.4% with IR and 26% with a high TG/HDL-C ratio). 84.6% were overweight or obese and 88.5% had an increased WC. Women with an increased BMI had a significantly increased risk of 0.15-fold (95% CI = 0.01 to 1.26) for a high TG/HDL-C ratio (p = 0.04) compared with control women. No significant association was found with increased WC. A high TG/HDL-C ratio was significantly correlated with IR (r = 0.30, p = 0.002). Conclusions: Body weight was significantly associated with IR and an increased TG/HDL-C ratio. This ratio correlated significantly with IR in apparently healthy women.
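The two screening indices used in the study can be computed directly. The sketch below applies the cutoffs quoted in the abstract (TG/HDL-C ≥ 3.0, HOMA-IR ≥ 2.5) together with the conventional-unit HOMA-IR formula; the input values are illustrative, not patient data.

```python
def homa_ir(glucose_mg_dl: float, insulin_uu_ml: float) -> float:
    """HOMA-IR in conventional units: glucose [mg/dL] x insulin [uU/mL] / 405."""
    return glucose_mg_dl * insulin_uu_ml / 405.0

def atherogenic_flags(tg_mg_dl, hdl_mg_dl, glucose_mg_dl, insulin_uu_ml):
    """Apply the study's cutoffs: TG/HDL-C >= 3.0 and HOMA-IR >= 2.5."""
    return {
        "tg_hdl_high": tg_mg_dl / hdl_mg_dl >= 3.0,
        "insulin_resistant": homa_ir(glucose_mg_dl, insulin_uu_ml) >= 2.5,
    }

# Illustrative values only.
flags = atherogenic_flags(tg_mg_dl=180, hdl_mg_dl=45, glucose_mg_dl=100, insulin_uu_ml=12)
```
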

  6. Model for predicting mountain wave field uncertainties

    Science.gov (United States)

    Damiens, Florentin; Lott, François; Millet, Christophe; Plougonven, Riwal

    2017-04-01

Studying the propagation of acoustic waves through the troposphere requires knowledge of wind speed and temperature gradients from the ground up to about 10-20 km. Typical planetary boundary layer flows are known to present vertical low-level shears that can interact with mountain waves, thereby triggering small-scale disturbances. Resolving these fluctuations for long-range propagation problems is, however, not feasible because of computer memory/time restrictions, and thus they need to be parameterized. When the disturbances are small enough, these fluctuations can be described by linear equations. Previous works by the co-authors have shown that the critical layer dynamics that occur near the ground produce large horizontal flows and buoyancy disturbances that result in intense downslope winds and gravity wave breaking. While these phenomena manifest almost systematically for high Richardson numbers and when the boundary layer depth is relatively small compared to the mountain height, the process by which static stability affects downslope winds remains unclear. In the present work, new linear mountain gravity wave solutions are tested against numerical predictions obtained with the Weather Research and Forecasting (WRF) model. For Richardson numbers typically larger than unity, the mesoscale model is used to quantify the effect of neglected nonlinear terms on downslope winds and mountain wave patterns. At these regimes, the large downslope winds transport warm air, a so-called "Foehn" effect that can impact sound propagation properties. The sensitivity of small-scale disturbances to the Richardson number is quantified using two-dimensional spectral analysis. It is shown through a pilot study of subgrid-scale fluctuations of boundary layer flows over realistic mountains that the cross-spectrum of the mountain wave field is made up of the same components found in WRF simulations. The impact of each individual component on acoustic wave propagation is discussed in terms of

  7. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

We propose a weather prediction model in this article based on neural networks and fuzzy inference systems (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the "fuzzy rule-based neural network", which simulates sequential relations among fuzzy sets using an artificial neural network; and the second part is the "neural fuzzy inference system", which is based on the first part but can learn new fuzzy rules from the previous ones according to the algorithm we propose. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. It is well known that the need for accurate weather prediction is apparent when considering the benefits. However, the excessive pursuit of accuracy in weather prediction makes some of the "accurate" prediction results meaningless, and numerical prediction models are often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we obtain more accurate precipitation predictions with simpler methods than the complex numerical forecasting models, which occupy large computational resources, are time-consuming, and have a low predictive accuracy rate. Accordingly, we achieve more accurate predictive precipitation results than by using traditional artificial neural networks that have low predictive accuracy.

  8. RFI modeling and prediction approach for SATOP applications: RFI prediction models

    Science.gov (United States)

    Nguyen, Tien M.; Tran, Hien T.; Wang, Zhonghai; Coons, Amanda; Nguyen, Charles C.; Lane, Steven A.; Pham, Khanh D.; Chen, Genshe; Wang, Gang

    2016-05-01

This paper describes a technical approach for the development of RFI prediction models using a carrier synchronization loop when calculating Bit or Carrier SNR degradation due to interferences for (i) detecting narrow-band and wideband RFI signals, and (ii) estimating and predicting the behavior of the RFI signals. The paper presents analytical and simulation models and provides both analytical and simulation results on the performance of USB (Unified S-Band) waveforms in the presence of narrow-band and wideband RFI signals. The models presented in this paper will allow the future USB command systems to detect the RFI presence, estimate the RFI characteristics and predict the RFI behavior in real time for accurate assessment of the impacts of RFI on the command Bit Error Rate (BER) performance. The command BER degradation model presented in this paper also allows the ground system operator to estimate the optimum transmitted SNR to maintain a required command BER level in the presence of both friendly and unfriendly RFI sources.
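The flavor of such SNR-degradation estimates can be sketched by treating RFI power as additional Gaussian noise in the standard BPSK BER formula. This Gaussian-interference simplification is an assumption for illustration, not the record's carrier-synchronization-loop model:

```python
import math

def bpsk_ber(ebn0_db: float) -> float:
    """BPSK bit error rate in AWGN: 0.5 * erfc(sqrt(Eb/N0))."""
    ebn0 = 10 ** (ebn0_db / 10)
    return 0.5 * math.erfc(math.sqrt(ebn0))

def ber_with_rfi(ebn0_db: float, eb_i0_db: float) -> float:
    """Treat RFI as extra Gaussian noise: effective Eb/(N0 + I0)."""
    n0 = 10 ** (-ebn0_db / 10)
    i0 = 10 ** (-eb_i0_db / 10)
    eff_db = -10 * math.log10(n0 + i0)
    return bpsk_ber(eff_db)

# A strong interferer (Eb/I0 = 6 dB) degrades an otherwise clean 9.6 dB link.
clean, degraded = bpsk_ber(9.6), ber_with_rfi(9.6, 6.0)
```
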

  9. Prediction models : the right tool for the right problem

    NARCIS (Netherlands)

    Kappen, Teus H.; Peelen, Linda M.

    2016-01-01

    PURPOSE OF REVIEW: Perioperative prediction models can help to improve personalized patient care by providing individual risk predictions to both patients and providers. However, the scientific literature on prediction model development and validation can be quite technical and challenging to unders

  10. Foundation Settlement Prediction Based on a Novel NGM Model

    Directory of Open Access Journals (Sweden)

    Peng-Yu Chen

    2014-01-01

Prediction of foundation or subgrade settlement is very important during engineering construction. Given that many settlement-time sequences follow a nonhomogeneous index trend, a novel grey forecasting model called the NGM(1,1,k,c) model is proposed in this paper. With an optimized whitenization differential equation, the proposed NGM(1,1,k,c) model has the property of white exponential law coincidence and can predict a pure nonhomogeneous index sequence precisely. We used two case studies to verify the predictive effect of the NGM(1,1,k,c) model for settlement prediction. The results show that this model can achieve excellent prediction accuracy; thus, the model is quite suitable for simulation and prediction of approximately nonhomogeneous index sequences and has excellent application value in settlement prediction.
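The NGM(1,1,k,c) model extends the classic GM(1,1) grey model. A minimal GM(1,1) sketch (the base model, not the NGM variant itself) shows the accumulate-fit-forecast pattern on synthetic, settlement-like data:

```python
import numpy as np

def gm11_forecast(x0, steps):
    """Classic GM(1,1) grey model: fit on series x0, forecast `steps` ahead."""
    x1 = np.cumsum(x0)                            # accumulated series (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])                 # background values
    B = np.column_stack([-z1, np.ones(len(z1))])  # x0[k] = -a*z1[k] + b
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.concatenate([[x1_hat[0]], np.diff(x1_hat)])
    return x0_hat[len(x0):]

# A settlement-like series with an approximately exponential trend (synthetic).
x0 = np.array([2.0, 2.2, 2.42, 2.66, 2.93])
print(gm11_forecast(x0, 2))
```
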

  11. Predictability of the Indian Ocean Dipole in the coupled models

    Science.gov (United States)

    Liu, Huafeng; Tang, Youmin; Chen, Dake; Lian, Tao

    2017-03-01

In this study, the Indian Ocean Dipole (IOD) predictability, measured by the Indian Dipole Mode Index (DMI), is comprehensively examined at the seasonal time scale, including its actual prediction skill and potential predictability, using the ENSEMBLES multi-model ensembles and the recently developed information-based theoretical framework of predictability. It was found that all model predictions have useful skill, normally defined as an anomaly correlation coefficient larger than 0.5, only at leads of around 2-3 months. This is mainly because there are more false alarms in predictions as lead time increases. The DMI predictability has significant seasonal variation, and predictions whose target seasons are boreal summer (JJA) and autumn (SON) are more reliable than those for other seasons. All of the models fail to predict the IOD onset before May and suffer from the winter (DJF) predictability barrier. The potential predictability study indicates that, with improvements in model development and initialization, the prediction of IOD onset is likely to improve, but the winter barrier cannot be overcome. The IOD predictability also has decadal variation, with high skill during the 1960s and the early 1990s, and low skill during the early 1970s and early 1980s, which is very consistent with the potential predictability. The main factors controlling the IOD predictability, including its seasonal and decadal variations, are also analyzed in this study.
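The skill measure mentioned here, the anomaly correlation coefficient (ACC) with a conventional "useful skill" threshold of 0.5, can be computed as sketched below on synthetic data:

```python
import numpy as np

def anomaly_correlation(forecast, observed, climatology):
    """Anomaly correlation coefficient: Pearson correlation between
    forecast and observed anomalies about a common climatology."""
    fa = forecast - climatology
    oa = observed - climatology
    return (fa @ oa) / np.sqrt((fa @ fa) * (oa @ oa))

rng = np.random.default_rng(2)
clim = np.zeros(200)                        # flat climatology for simplicity
obs = rng.normal(size=200)
good = obs + 0.5 * rng.normal(size=200)     # skilful forecast
poor = rng.normal(size=200)                 # unrelated forecast
# Skill is conventionally called "useful" when the ACC exceeds 0.5.
```
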

  12. Very mild disease phenotype of congenic CftrTgH(neoim)Hgu cystic fibrosis mice

    Directory of Open Access Journals (Sweden)

    Leonhard-Marek Sabine

    2008-04-01

Background: A major boost to cystic fibrosis disease research was given by the generation of various mouse models using gene targeting in embryonal stem cells. Moreover, the introduction of the same mutation on different inbred strains, generating congenic strains, facilitated the search for modifier genes. From the original CftrTgH(neoim)Hgu mouse model with a divergent genetic background (129/Sv, C57BL/6, HsdOla:MF1), two inbred mutant mouse strains CF/1-CftrTgH(neoim)Hgu and CF/3-CftrTgH(neoim)Hgu had been generated using strict brother × sister mating. CF/1-CftrTgH(neoim)Hgu and CF/3-CftrTgH(neoim)Hgu mice were fertile and showed normal growth and lifespan. In this work the CftrTgH(neoim)Hgu insertional mutation was backcrossed from CF/3-CftrTgH(neoim)Hgu onto the inbred backgrounds C57BL/6J and DBA/2J, generating congenic animals in order to clarify the differential impact of the Cftr mutation and the genetic background on the disease phenotype of the cystic fibrosis mutant mice. Clinical and electrophysiological features of the two congenic strains were compared with those of CF/1-CftrTgH(neoim)Hgu and CF/3-CftrTgH(neoim)Hgu and wild type controls. Results: Under the standardized housing conditions of the animal facility, the four mouse strains CF/1-CftrTgH(neoim)Hgu, CF/3-CftrTgH(neoim)Hgu, D2.129P2(CF/3-CftrTgH(neoim)Hgu) and B6.129P2(CF/3-CftrTgH(neoim)Hgu) exhibited normal life expectancy. Growth of congenic cystic fibrosis mice was comparable with that of wild type controls. All mice but D2.129P2(CF/3-CftrTgH(neoim)Hgu) females were fertile. Short circuit current measurements revealed characteristic response profiles of the HsdOla:MF1, DBA/2J and C57BL/6J backgrounds in nose, ileum and colon. All cystic fibrosis mouse lines showed the disease-typical hyperresponsiveness to amiloride in the respiratory epithelium. The mean chloride secretory responses to carbachol or forskolin were 15–100% of those of the cognate wild type control animals

  13. Nonconvex model predictive control for commercial refrigeration

    Science.gov (United States)

    Gybel Hovgaard, Tobias; Boyd, Stephen; Larsen, Lars F. S.; Bagterp Jørgensen, John

    2013-08-01

We consider the control of a commercial multi-zone refrigeration system, consisting of several cooling units that share a common compressor, used to cool multiple areas or rooms. In each time period we choose the cooling capacity of each unit and a common evaporation temperature. The goal is to minimise the total energy cost, using real-time electricity prices, while obeying temperature constraints on the zones. We propose a variation on model predictive control to achieve this goal. When the right variables are used, the dynamics of the system are linear, and the constraints are convex. The cost function, however, is nonconvex due to the temperature dependence of thermodynamic efficiency. To handle this nonconvexity we propose a sequential convex optimisation method, which typically converges in 5 or fewer iterations. We employ a fast convex quadratic programming solver to carry out the iterations, which is more than fast enough to run in real time. We demonstrate our method on a realistic model, with a full year simulation and 15-minute time periods, using historical electricity prices and weather data, as well as random variations in thermal load. These simulations show substantial cost savings, on the order of 30%, compared to a standard thermostat-based control system. Perhaps more important, we see that the method exhibits sophisticated response to real-time variations in electricity prices. This demand response is critical to help balance real-time uncertainties in generation capacity associated with large penetration of intermittent renewable energy sources in a future smart grid.

  14. Leptogenesis in minimal predictive seesaw models

    Energy Technology Data Exchange (ETDEWEB)

    Björkeroth, Fredrik [School of Physics and Astronomy, University of Southampton,Southampton, SO17 1BJ (United Kingdom); Anda, Francisco J. de [Departamento de Física, CUCEI, Universidad de Guadalajara,Guadalajara (Mexico); Varzielas, Ivo de Medeiros; King, Stephen F. [School of Physics and Astronomy, University of Southampton,Southampton, SO17 1BJ (United Kingdom)

    2015-10-15

We estimate the Baryon Asymmetry of the Universe (BAU) arising from leptogenesis within a class of minimal predictive seesaw models involving two right-handed neutrinos and simple Yukawa structures with one texture zero. The two right-handed neutrinos are dominantly responsible for the "atmospheric" and "solar" neutrino masses with Yukawa couplings to (νe, νμ, ντ) proportional to (0,1,1) and (1,n,n−2), respectively, where n is a positive integer. The neutrino Yukawa matrix is therefore characterised by two proportionality constants with their relative phase providing a leptogenesis-PMNS link, enabling the lightest right-handed neutrino mass to be determined from neutrino data and the observed BAU. We discuss an SU(5) SUSY GUT example, where A4 vacuum alignment provides the required Yukawa structures with n=3, while a Z9 symmetry fixes the relative phase to be a ninth root of unity.

  15. QSPR Models for Octane Number Prediction

    Directory of Open Access Journals (Sweden)

    Jabir H. Al-Fahemi

    2014-01-01

Quantitative structure-property relationship (QSPR) modelling is performed as a means to predict the octane number of hydrocarbons by correlating the property to parameters calculated from molecular structure; such parameters are molecular mass M, hydration energy EH, boiling point BP, octanol/water distribution coefficient logP, molar refractivity MR, critical pressure CP, critical volume CV, and critical temperature CT. Principal component analysis (PCA) and the multiple linear regression technique (MLR) were performed to examine the relationship between multiple variables of the above parameters and the octane number of hydrocarbons. The results of PCA explain the interrelationships between octane number and the different variables. Correlation coefficients were calculated using MS Excel to examine the relationship between multiple variables of the above parameters and the octane number of hydrocarbons. The data set was split into a training set of 40 hydrocarbons and a validation set of 25 hydrocarbons. The linear relationship between the selected descriptors and the octane number has a coefficient of determination (R2 = 0.932), statistical significance (F = 53.21), and standard error (s = 7.7). The obtained QSPR model was applied to the validation set of octane numbers for hydrocarbons, giving RCV2 = 0.942 and s = 6.328.
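The MLR step of such a QSPR study can be sketched with ordinary least squares. The descriptor values below are synthetic stand-ins (not the paper's dataset), and R² is computed as reported in the abstract:

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy descriptor matrix: 40 "training" molecules x 3 descriptors
# (stand-ins for M, logP, BP -- values are synthetic, not the paper's data).
X = rng.normal(size=(40, 3))
true_coef = np.array([5.0, -3.0, 1.5])
y = 90 + X @ true_coef + rng.normal(scale=2.0, size=40)   # "octane numbers"

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef

# Coefficient of determination R^2, as reported in the abstract.
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
```
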

  16. Predictability in models of the atmospheric circulation.

    NARCIS (Netherlands)

    Houtekamer, P.L.

    1992-01-01

    It will be clear from the above discussions that skill forecasts are still in their infancy. Operational skill predictions do not exist. One is still struggling to prove that skill predictions, at any range, have any quality at all. It is not clear what the statistics of the analysis error are. The

  17. Pyrolysis characteristic of tobacco stem studied by Py-GC/MS, TG-FTIR, and TG-MS

    Directory of Open Access Journals (Sweden)

    Bei Liu

    2013-02-01

    Full Text Available Pyrolysis characteristics and mechanism of tobacco stem were studied by pyrolysis coupled with gas chromatography/mass spectrometry (Py-GC/MS) and by thermogravimetric analysis coupled with Fourier transform infrared spectrometry (TG-FTIR) and with mass spectrometry (TG-MS). The composition of evolved volatiles from fast pyrolysis of tobacco stem was determined by Py-GC/MS analysis, and the evolution patterns of the major products were investigated by TG-FTIR and TG-MS. Py-GC/MS data indicated that furfural and phenol were the major products in low-temperature pyrolysis, and these were generated from depolymerization of cellulose. Indene and naphthalene were the major products in high-temperature pyrolysis. TG-FTIR and TG-MS results showed that CO, CO2, phenols, aldehydes, and ketones were released between 167°C and 500°C; at temperatures >500°C, CO and CO2 were the main gaseous products.

  18. Allostasis: a model of predictive regulation.

    Science.gov (United States)

    Sterling, Peter

    2012-04-12

    The premise of the standard regulatory model, "homeostasis", is flawed: the goal of regulation is not to preserve constancy of the internal milieu. Rather, it is to continually adjust the milieu to promote survival and reproduction. Regulatory mechanisms need to be efficient, but homeostasis (error-correction by feedback) is inherently inefficient. Thus, although feedbacks are certainly ubiquitous, they could not possibly serve as the primary regulatory mechanism. A newer model, "allostasis", proposes that efficient regulation requires anticipating needs and preparing to satisfy them before they arise. The advantages: (i) errors are reduced in magnitude and frequency; (ii) response capacities of different components are matched -- to prevent bottlenecks and reduce safety factors; (iii) resources are shared between systems to minimize reserve capacities; (iv) errors are remembered and used to reduce future errors. This regulatory strategy requires a dedicated organ, the brain. The brain tracks multitudinous variables and integrates their values with prior knowledge to predict needs and set priorities. The brain coordinates effectors to mobilize resources from modest bodily stores and enforces a system of flexible trade-offs: from each organ according to its ability, to each organ according to its need. The brain also helps regulate the internal milieu by governing anticipatory behavior. Thus, an animal conserves energy by moving to a warmer place - before it cools, and it conserves salt and water by moving to a cooler one before it sweats. The behavioral strategy requires continuously updating a set of specific "shopping lists" that document the growing need for each key component (warmth, food, salt, water). These appetites funnel into a common pathway that employs a "stick" to drive the organism toward filling the need, plus a "carrot" to relax the organism when the need is satisfied. 
The stick corresponds broadly to the sense of anxiety, and the carrot broadly to

  19. Required Collaborative Work in Online Courses: A Predictive Modeling Approach

    Science.gov (United States)

    Smith, Marlene A.; Kellogg, Deborah L.

    2015-01-01

    This article describes a predictive model that assesses whether a student will have greater perceived learning in group assignments or in individual work. The model produces correct classifications 87.5% of the time. The research is notable in that it is the first in the education literature to adopt a predictive modeling methodology using data…

  20. A prediction model for assessing residential radon concentration in Switzerland

    NARCIS (Netherlands)

    Hauri, D.D.; Huss, A.; Zimmermann, F.; Kuehni, C.E.; Roosli, M.

    2012-01-01

    Indoor radon is regularly measured in Switzerland. However, a nationwide model to predict residential radon levels has not been developed. The aim of this study was to develop a prediction model to assess indoor radon concentrations in Switzerland. The model was based on 44,631 measurements from the

  1. Distributional Analysis for Model Predictive Deferrable Load Control

    OpenAIRE

    Chen, Niangjun; Gan, Lingwen; Low, Steven H.; Wierman, Adam

    2014-01-01

    Deferrable load control is essential for handling the uncertainties associated with the increasing penetration of renewable generation. Model predictive control has emerged as an effective approach for deferrable load control, and has received considerable attention. In particular, previous work has analyzed the average-case performance of model predictive deferrable load control. However, to this point, distributional analysis of model predictive deferrable load control has been elusive. In ...

  2. Prediction for Major Adverse Outcomes in Cardiac Surgery: Comparison of Three Prediction Models

    Directory of Open Access Journals (Sweden)

    Cheng-Hung Hsieh

    2007-09-01

    Conclusion: The Parsonnet score performed as well as the logistic regression models in predicting major adverse outcomes. The Parsonnet score appears to be a very suitable model for clinicians to use in risk stratification of cardiac surgery.

  3. On hydrological model complexity, its geometrical interpretations and prediction uncertainty

    NARCIS (Netherlands)

    Arkesteijn, E.C.M.M.; Pande, S.

    2013-01-01

    Knowledge of hydrological model complexity can aid selection of an optimal prediction model out of a set of available models. Optimal model selection is formalized as selection of the least complex model out of a subset of models that have lower empirical risk. This may be considered equivalent to

  4. Probabilistic Modeling and Visualization for Bankruptcy Prediction

    DEFF Research Database (Denmark)

    Antunes, Francisco; Ribeiro, Bernardete; Pereira, Francisco Camara

    2017-01-01

    In accounting and finance domains, bankruptcy prediction is of great utility for all of the economic stakeholders. The challenge of accurate assessment of business failure prediction, especially under scenarios of financial crisis, is known to be complicated. Although there have been many successful… studies on bankruptcy detection, probabilistic approaches have seldom been carried out. In this paper we assume a probabilistic point-of-view by applying Gaussian Processes (GP) in the context of bankruptcy prediction, comparing it against the Support Vector Machines (SVM) and the Logistic Regression (LR…). Using real-world bankruptcy data, an in-depth analysis is conducted showing that, in addition to a probabilistic interpretation, the GP can effectively improve the bankruptcy prediction performance with high accuracy when compared to the other approaches. We additionally generate a complete graphical…

  5. Predictive modeling of dental pain using neural network.

    Science.gov (United States)

    Kim, Eun Yeob; Lim, Kun Ok; Rhee, Hyun Sill

    2009-01-01

    The mouth, as the gateway for ingesting food, is one of the most basic and important parts of the body. In this study, dental pain was predicted with a neural network model. The resulting predictive model of dental pain factors achieved a fit of 80.0%. For people identified by the neural network model as likely to experience dental pain, preventive measures including proper eating habits, education on oral hygiene, and stress release should precede any dental treatment.

  6. TG Grammar's Implications for the Foreign Language Teaching

    Institute of Scientific and Technical Information of China (English)

    殷彩

    2009-01-01

    Chomsky's Transformational-Generative (TG) grammar was another revolution in linguistics after Saussure's structuralism, and it plays an important role in modern linguistics. Introducing the research perspective and method of TG grammar, this paper analyses its implications for foreign language teaching.

  7. Prediction of peptide bonding affinity: kernel methods for nonlinear modeling

    CERN Document Server

    Bergeron, Charles; Sundling, C Matthew; Krein, Michael; Katt, Bill; Sukumar, Nagamani; Breneman, Curt M; Bennett, Kristin P

    2011-01-01

    This paper presents regression models obtained from a process of blind prediction of peptide binding affinity from provided descriptors for several distinct datasets as part of the 2006 Comparative Evaluation of Prediction Algorithms (COEPRA) contest. This paper finds that kernel partial least squares, a nonlinear partial least squares (PLS) algorithm, outperforms PLS, and that the incorporation of transferable atom equivalent features improves predictive capability.

  8. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Full Text Available Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are always faced with low prediction accuracy, which causes costly maintenance. Although many researchers have developed performance prediction models, the accuracy of prediction has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Then three models, including a multivariate nonlinear regression (MNLR) model, an artificial neural network (ANN) model, and a Markov chain (MC) model, are tested and compared using a set of actual pavement survey data taken on interstate highways with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and not explicitly related to quantitative physical parameters. This paper then suggests that the further direction for developing performance prediction models is to combine the advantages and disadvantages of the different models to obtain better accuracy.
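The Markov chain approach the abstract favors for limited data can be sketched as a state-transition model over discrete condition ratings; the transition probabilities below are invented for illustration, not calibrated to any pavement survey:

```python
# Condition states 0 (excellent) .. 3 (failed); row-stochastic transition
# matrix for one year of deterioration -- probabilities are illustrative.
P = [
    [0.80, 0.20, 0.00, 0.00],
    [0.00, 0.75, 0.25, 0.00],
    [0.00, 0.00, 0.70, 0.30],
    [0.00, 0.00, 0.00, 1.00],  # failed state is absorbing
]

def step(dist, P):
    """Propagate a condition-state probability distribution one year forward."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0, 0.0, 0.0]          # new pavement starts in state 0
for year in range(10):
    dist = step(dist, P)

# Expected condition rating after 10 years
expected_state = sum(s * p for s, p in enumerate(dist))
print([round(p, 3) for p in dist], round(expected_state, 2))
```

In practice the transition matrix is estimated from repeated visual-inspection ratings, which is why the abstract notes the MC model is not explicitly tied to quantitative physical parameters.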

  9. Prediction using patient comparison vs. modeling: a case study for mortality prediction.

    Science.gov (United States)

    Hoogendoorn, Mark; El Hassouni, Ali; Mok, Kwongyen; Ghassemi, Marzyeh; Szolovits, Peter

    2016-08-01

    Information in Electronic Medical Records (EMRs) can be used to generate accurate predictions for the occurrence of a variety of health states, which can contribute to more pro-active interventions. The very nature of EMRs does make the application of off-the-shelf machine learning techniques difficult. In this paper, we study two approaches to making predictions that have hardly been compared in the past: (1) extracting high-level (temporal) features from EMRs and building a predictive model, and (2) defining a patient similarity metric and predicting based on the outcome observed for similar patients. We analyze and compare both approaches on the MIMIC-II ICU dataset to predict patient mortality and find that the patient similarity approach does not scale well and results in a less accurate model (AUC of 0.68) compared to the modeling approach (0.84). We also show that mortality can be predicted within a median of 72 hours.
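The patient-similarity approach (2) amounts to k-nearest-neighbour prediction over a distance between patient feature vectors; a minimal sketch on invented "patients" (features, values, and labels are synthetic, not MIMIC-II data):

```python
import math

# (age, lactate, mean blood pressure) feature vectors with 0/1 mortality
# labels -- entirely synthetic, for illustration only.
cohort = [
    ((45, 1.0, 85), 0), ((50, 1.2, 80), 0), ((62, 1.1, 78), 0),
    ((70, 4.0, 55), 1), ((80, 5.5, 50), 1), ((75, 3.8, 60), 1),
]

def knn_predict(x, cohort, k=3):
    """Predicted risk = fraction of the k most similar patients who died."""
    nearest = sorted(cohort, key=lambda rec: math.dist(x, rec[0]))[:k]
    return sum(label for _, label in nearest) / k

print(knn_predict((78, 5.0, 52), cohort))  # resembles the high-risk patients -> 1.0
print(knn_predict((48, 1.1, 82), cohort))  # resembles the low-risk patients -> 0.0
```

Note that raw Euclidean distance lets large-scale features (here age and blood pressure) dominate unless features are standardized, and every prediction requires scanning the cohort, which illustrates the scaling problem the study reports for the similarity approach.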

  10. Fuzzy predictive filtering in nonlinear economic model predictive control for demand response

    DEFF Research Database (Denmark)

    Santos, Rui Mirra; Zong, Yi; Sousa, Joao M. C.;

    2016-01-01

    The performance of a model predictive controller (MPC) is highly correlated with the model's accuracy. This paper introduces an economic model predictive control (EMPC) scheme based on a nonlinear model, which uses a branch-and-bound tree search for solving the inherent non-convex optimization...... problem. Moreover, to reduce the computation time and improve the controller's performance, a fuzzy predictive filter is introduced. With the purpose of testing the developed EMPC, a simulation controlling the temperature levels of an intelligent office building (PowerFlexHouse), with and without fuzzy...

  11. Predictive modeling and reducing cyclic variability in autoignition engines

    Energy Technology Data Exchange (ETDEWEB)

    Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li; Larimore, Jacob

    2016-08-30

    Methods and systems are provided for controlling a vehicle engine to reduce cycle-to-cycle combustion variation. A predictive model is applied to predict cycle-to-cycle combustion behavior of an engine based on observed engine performance variables. Conditions are identified, based on the predicted cycle-to-cycle combustion behavior, that indicate high cycle-to-cycle combustion variation. Corrective measures are then applied to prevent the predicted high cycle-to-cycle combustion variation.

  12. Intelligent predictive model of ventilating capacity of imperial smelt furnace

    Institute of Scientific and Technical Information of China (English)

    唐朝晖; 胡燕瑜; 桂卫华; 吴敏

    2003-01-01

    In order to know the ventilating capacity of an imperial smelting furnace (ISF) and increase the output of lead, an intelligent modeling method based on grey theory and artificial neural networks (ANN) is proposed, in which the weight values in the integrated model can be adjusted automatically. An intelligent predictive model of the ventilating capacity of the ISF is established and analyzed by this method. The simulation results and industrial applications demonstrate that the predictive model is close to the real plant: the relative predictive error is 0.72%, which is 50% less than that of the single model, leading to a notable increase in the output of lead.

  13. A Prediction Model of the Capillary Pressure J-Function

    Science.gov (United States)

    Xu, W. S.; Luo, P. Y.; Sun, L.; Lin, N.

    2016-01-01

    The capillary pressure J-function is a dimensionless measure of the capillary pressure of a fluid in a porous medium. The function was derived based on a capillary bundle model; however, the dependence of the J-function on the saturation Sw is not well understood. A prediction model is presented based on a capillary pressure model, in which the J-function is a power function of saturation rather than an exponential or polynomial function. Relative permeability is calculated with the J-function prediction model, resulting in an easier calculation and more representative results. PMID:27603701
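The J-function itself is the standard Leverett scaling of capillary pressure; a sketch assuming SI units throughout (Pc in Pa, interfacial tension in N/m, permeability in m²), with an illustrative power-law form for J(Sw) whose parameters are invented, not the paper's fitted values:

```python
import math

def leverett_j(pc, sigma, theta_deg, k, phi):
    """Leverett J-function: J = Pc / (sigma * cos(theta)) * sqrt(k / phi)."""
    return pc / (sigma * math.cos(math.radians(theta_deg))) * math.sqrt(k / phi)

def j_power_model(sw, a, b):
    """Power-function form J(Sw) = a * Sw**(-b), as the abstract proposes."""
    return a * sw ** (-b)

# Example: Pc = 5 kPa, sigma = 0.03 N/m, theta = 0 deg, k = 1e-13 m^2, phi = 0.2
j = leverett_j(5e3, 0.03, 0.0, 1e-13, 0.2)
print(round(j, 3))  # → 0.118
```

Because J is dimensionless, capillary pressure curves measured on rocks with different permeability and porosity can be compared on a single J(Sw) curve, which is what makes a predictive J(Sw) model useful.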

  14. Adaptation of Predictive Models to PDA Hand-Held Devices

    Directory of Open Access Journals (Sweden)

    Lin, Edward J

    2008-01-01

    Full Text Available Prediction models using multiple logistic regression are appearing with increasing frequency in the medical literature. Problems associated with these models include the complexity of computations when applied in their pure form and lack of availability at the bedside. Personal digital assistant (PDA) hand-held devices equipped with spreadsheet software offer the clinician a readily available and easily applied means of applying predictive models at the bedside. The purposes of this article are to briefly review regression as a means of creating predictive models and to describe a method of choosing and adapting logistic regression models to emergency department (ED) clinical practice.
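The spreadsheet adaptation the article describes reduces to evaluating the fitted logistic equation p = 1 / (1 + e^-(β0 + Σ βi·xi)) from published coefficients; a sketch with invented coefficients (NOT a validated clinical model):

```python
import math

# Hypothetical fitted coefficients: intercept plus terms for age (years),
# heart rate (bpm), and a 0/1 comorbidity flag -- illustrative only.
BETA0 = -6.0
BETAS = {"age": 0.05, "heart_rate": 0.02, "comorbidity": 0.9}

def predicted_probability(patient):
    """Evaluate the logistic model for one patient's predictor values."""
    z = BETA0 + sum(BETAS[name] * patient[name] for name in BETAS)
    return 1.0 / (1.0 + math.exp(-z))

p = predicted_probability({"age": 70, "heart_rate": 110, "comorbidity": 1})
print(round(p, 3))  # → 0.646
```

The same arithmetic is what a one-row spreadsheet implements: one cell per coefficient-times-value product, a summed linear predictor, and the logistic transform.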

  15. A model to predict the power output from wind farms

    Energy Technology Data Exchange (ETDEWEB)

    Landberg, L. [Riso National Lab., Roskilde (Denmark)

    1997-12-31

    This paper will describe a model that can predict the power output from wind farms. To give examples of input the model is applied to a wind farm in Texas. The predictions are generated from forecasts from the NGM model of NCEP. These predictions are made valid at individual sites (wind farms) by applying a matrix calculated by the sub-models of WASP (Wind Atlas Application and Analysis Program). The actual wind farm production is calculated using the Riso PARK model. Because of the preliminary nature of the results, they will not be given. However, similar results from Europe will be given.

  16. Modelling microbial interactions and food structure in predictive microbiology

    NARCIS (Netherlands)

    Malakar, P.K.

    2002-01-01

    Keywords: modelling, dynamic models, microbial interactions, diffusion, microgradients, colony growth, predictive microbiology.

    Growth response of microorganisms in foods is a complex process. Innovations in food production and preservation techniques have resulted in adoption of

  18. A Noninvasive Score Model for Prediction of NASH in Patients with Chronic Hepatitis B and Nonalcoholic Fatty Liver Disease

    Science.gov (United States)

    Liang, Jing; Liu, Fang; Han, Tao; Jing, Li; Ma, Zhe; Gao, Yingtang

    2017-01-01

    Aims. To develop a noninvasive score model to predict NASH in patients with combined CHB and NAFLD. Objective and Methods. 65 CHB patients with NAFLD were divided into a NASH group (34 patients) and a non-NASH group (31 patients) according to the NAS score. Biochemical indexes, liver stiffness, and the Controlled Attenuation Parameter (CAP) were determined. Data in the two groups were compared and subjected to multivariate analysis, to establish a score model for the prediction of NASH. Results. In the NASH group, ALT, TG, fasting blood glucose (FBG), M30 CK-18, CAP, and the HBeAg-positive ratio were significantly higher than in the non-NASH group. Multivariate analysis showed that CK-18 M30, CAP, FBG, and HBV DNA level were independent predictors of NASH. Therefore, a new model combining CK-18 M30, CAP, FBG, and HBV DNA level was established using logistic regression. The AUROC for predicting NASH was 0.961 (95% CI: 0.920–1.00; cutoff value 0.218), with a sensitivity of 100% and a specificity of 80.6%. Conclusion. A noninvasive score model might be considered for the prediction of NASH in patients with CHB combined with NAFLD.
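A score model of this kind combines the independent predictors through a fitted logistic equation and classifies at the reported probability cutoff; the coefficients below are placeholders for illustration, not the published model:

```python
import math

CUTOFF = 0.218  # probability cutoff reported in the abstract

def nash_score(ck18_m30, cap, fbg, hbv_dna_log):
    """Hypothetical logistic combination of CK-18 M30, CAP, FBG, log HBV DNA."""
    z = -12.0 + 0.01 * ck18_m30 + 0.03 * cap + 0.4 * fbg + 0.2 * hbv_dna_log
    return 1.0 / (1.0 + math.exp(-z))

def classify(prob):
    return "NASH" if prob >= CUTOFF else "non-NASH"

p = nash_score(ck18_m30=400, cap=300, fbg=6.5, hbv_dna_log=5.0)
print(round(p, 3), classify(p))  # high predictor values -> classified as NASH
```

Lowering the cutoff (0.218 rather than 0.5) trades specificity for sensitivity, which is consistent with the reported 100% sensitivity and 80.6% specificity.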

  19. Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?

    Science.gov (United States)

    Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander

    2016-01-01

    Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random-effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
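The distinction can be illustrated by simulation: MSEP_fixed scores one model with fixed parameters against data, while MSEP_uncertain(X) averages squared error over the distribution of the model's parameters, adding a model-variance term. A toy linear "crop model" with invented numbers:

```python
import random

random.seed(0)

TRUE_SLOPE = 2.0

def observe(x):
    """Data-generating process: true response plus observation noise."""
    return TRUE_SLOPE * x + random.gauss(0.0, 0.5)

def model(x, slope):
    return slope * x

xs = [random.uniform(0.0, 5.0) for _ in range(2000)]

# MSEP_fixed: one fitted model with a fixed (slightly biased) slope
fixed_slope = 1.9
msep_fixed = sum((observe(x) - model(x, fixed_slope)) ** 2 for x in xs) / len(xs)

# MSEP_uncertain(X): the slope itself is uncertain and drawn per prediction,
# so squared error picks up a model-variance term on top of bias and noise
def draw_slope():
    return random.gauss(1.9, 0.2)

msep_uncertain = sum((observe(x) - model(x, draw_slope())) ** 2
                     for x in xs) / len(xs)

print(round(msep_fixed, 2), round(msep_uncertain, 2))
```

With these numbers, theory predicts MSEP_fixed ≈ noise variance + bias² · E[x²] and MSEP_uncertain larger by the slope-variance contribution, which the simulation reproduces.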

  20. Predicting Career Advancement with Structural Equation Modelling

    Science.gov (United States)

    Heimler, Ronald; Rosenberg, Stuart; Morote, Elsa-Sofia

    2012-01-01

    Purpose: The purpose of this paper is to use the authors' prior findings concerning basic employability skills in order to determine which skills best predict career advancement potential. Design/methodology/approach: Utilizing survey responses of human resource managers, the employability skills showing the largest relationships to career…

  2. Modeling and prediction of surgical procedure times

    NARCIS (Netherlands)

    P.S. Stepaniak (Pieter); C. Heij (Christiaan); G. de Vries (Guus)

    2009-01-01

    textabstractAccurate prediction of medical operation times is of crucial importance for cost efficient operation room planning in hospitals. This paper investigates the possible dependence of procedure times on surgeon factors like age, experience, gender, and team composition. The effect of these f

  3. Prediction Model of Sewing Technical Condition by Grey Neural Network

    Institute of Scientific and Technical Information of China (English)

    DONG Ying; FANG Fang; ZHANG Wei-yuan

    2007-01-01

    The grey system theory and artificial neural network technology were applied to predict the sewing technical condition. Representative parameters, such as needle and stitch, were selected. The prediction model was established based on the mechanical properties of different fabrics measured by a KES instrument. Grey relevant degree analysis was applied to choose the input parameters of the neural network. The results showed that the prediction model has good precision: the average relative error was 4.08% for needle and 4.25% for stitch.
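The grey-model component in hybrids like this is typically the GM(1,1) model: accumulate the series, fit the whitening equation dx¹/dt + a·x¹ = b by least squares, and forecast by differencing the fitted accumulation. A generic GM(1,1) sketch (standard algorithm, not the authors' exact integrated model):

```python
import math

def gm11_forecast(series, steps=1):
    """GM(1,1) grey forecast: predictions for the next `steps` points."""
    n = len(series)
    # 1-AGO: accumulated generating operation
    x1, total = [], 0.0
    for v in series:
        total += v
        x1.append(total)
    # Background values z(k), then least squares for a, b in x0(k) + a*z(k) = b
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]
    y = series[1:]
    m = n - 1
    sz, sy = sum(z), sum(y)
    szz = sum(v * v for v in z)
    szy = sum(zi * yi for zi, yi in zip(z, y))
    det = m * szz - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det
    # Time-response function: x1_hat(k) = (x0(1) - b/a) * exp(-a*k) + b/a
    def x1_hat(k):
        return (series[0] - b / a) * math.exp(-a * k) + b / a
    # Inverse AGO: difference consecutive accumulated predictions
    return [x1_hat(n + i) - x1_hat(n + i - 1) for i in range(steps)]

data = [10.0, 11.0, 12.1, 13.3, 14.6]   # roughly 10% growth per period
print([round(v, 2) for v in gm11_forecast(data, steps=2)])
```

GM(1,1) fits near-exponential trends from very few points, which is why grey models are often blended with neural networks that capture the residual nonlinear structure.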

  4. Active diagnosis of hybrid systems - A model predictive approach

    OpenAIRE

    2009-01-01

    A method for active diagnosis of hybrid systems is proposed. The main idea is to predict the future output of both normal and faulty models of the system; then at each time step an optimization problem is solved with the objective of maximizing the difference between the predicted normal and faulty outputs, constrained by tolerable performance requirements. As in standard model predictive control, the first element of the optimal input is applied to the system and the whole procedure is repeated...

  5. Evaluation of Fast-Time Wake Vortex Prediction Models

    Science.gov (United States)

    Proctor, Fred H.; Hamilton, David W.

    2009-01-01

    Current fast-time wake models are reviewed and three basic types are defined. Predictions from several of the fast-time models are compared. Previous statistical evaluations of the APA-Sarpkaya and D2P fast-time models are discussed. Root Mean Square errors between fast-time model predictions and Lidar wake measurements are examined for a 24 hr period at Denver International Airport. Shortcomings in current methodology for evaluating wake errors are also discussed.

  6. Comparison of Simple Versus Performance-Based Fall Prediction Models

    Directory of Open Access Journals (Sweden)

    Shekhar K. Gadkaree BS

    2015-05-01

    Full Text Available Objective: To compare the predictive ability of standard falls prediction models based on physical performance assessments with more parsimonious prediction models based on self-reported data. Design: We developed a series of fall prediction models progressing in complexity and compared the area under the receiver operating characteristic curve (AUC) across models. Setting: National Health and Aging Trends Study (NHATS), which surveyed a nationally representative sample of Medicare enrollees (age ≥65) at baseline (Round 1: 2011-2012) and 1-year follow-up (Round 2: 2012-2013). Participants: In all, 6,056 community-dwelling individuals participated in Rounds 1 and 2 of NHATS. Measurements: Primary outcomes were 1-year incidence of “any fall” and “recurrent falls.” Prediction models were compared and validated in development and validation sets, respectively. Results: A prediction model that included demographic information, self-reported problems with balance and coordination, and previous fall history was the most parsimonious model that optimized AUC for both any fall (AUC = 0.69, 95% confidence interval [CI] = [0.67, 0.71]) and recurrent falls (AUC = 0.77, 95% CI = [0.74, 0.79]) in the development set. Physical performance testing provided marginal additional predictive value. Conclusion: A simple clinical prediction model that does not include physical performance testing could facilitate routine, widespread falls risk screening in the ambulatory care setting.
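The AUC used to compare these models can be computed directly as the Mann–Whitney probability that a randomly chosen faller is scored above a randomly chosen non-faller; a minimal sketch on invented risk scores (not NHATS data):

```python
def auc(scores, labels):
    """AUC as the probability that a positive case outranks a negative one."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# Invented risk scores from two models for the same 8 subjects (1 = fell)
labels       = [0, 0, 0, 0, 1, 1, 1, 1]
simple_model = [0.1, 0.3, 0.2, 0.5, 0.4, 0.6, 0.7, 0.8]
performance  = [0.2, 0.1, 0.4, 0.3, 0.5, 0.7, 0.6, 0.9]

print(auc(simple_model, labels), auc(performance, labels))  # → 0.9375 1.0
```

Comparing AUCs this way is exactly the study's criterion for deciding whether physical performance testing adds predictive value over the simpler self-report model.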

  7. Testing and analysis of internal hardwood log defect prediction models

    Science.gov (United States)

    R. Edward. Thomas

    2011-01-01

    The severity and location of internal defects determine the quality and value of lumber sawn from hardwood logs. Models have been developed to predict the size and position of internal defects based on external defect indicator measurements. These models were shown to predict approximately 80% of all internal knots based on external knot indicators. However, the size...

  9. Refining the Committee Approach and Uncertainty Prediction in Hydrological Modelling

    NARCIS (Netherlands)

    Kayastha, N.

    2014-01-01

    Due to the complexity of hydrological systems a single model may be unable to capture the full range of a catchment response and accurately predict the streamflows. The multi modelling approach opens up possibilities for handling such difficulties and allows improve the predictive capability of mode

  13. Adding propensity scores to pure prediction models fails to improve predictive performance

    Directory of Open Access Journals (Sweden)

    Amy S. Nowacki

    2013-08-01

    Full Text Available Background. Propensity score usage seems to be growing in popularity, leading researchers to question the possible role of propensity scores in prediction modeling, despite the lack of a theoretical rationale. It is suspected that such requests are due to a lack of differentiation between the goals of predictive modeling and causal inference modeling. Therefore, the purpose of this study is to formally examine the effect of propensity scores on predictive performance. Our hypothesis is that a multivariable regression model that adjusts for all covariates will perform as well as or better than models utilizing propensity scores with respect to model discrimination and calibration. Methods. The most commonly encountered statistical scenarios for medical prediction (logistic and proportional hazards regression) were used to investigate this research question. Random cross-validation was performed 500 times to correct for optimism. The multivariable regression models adjusting for all covariates were compared with models that included adjustment for or weighting with the propensity scores. The methods were compared based on three predictive performance measures: (1) concordance indices; (2) Brier scores; and (3) calibration curves. Results. Multivariable models adjusting for all covariates had the highest average concordance index, the lowest average Brier score, and the best calibration. Propensity score adjustment and inverse probability weighting models without adjustment for all covariates performed worse than full models and failed to improve predictive performance with full covariate adjustment. Conclusion. Propensity score techniques did not improve prediction performance measures beyond multivariable adjustment. Propensity scores are not recommended if the analytical goal is pure prediction modeling.
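Of the three performance measures, the Brier score is the simplest: the mean squared difference between predicted probability and 0/1 outcome. A sketch comparing two hypothetical sets of predictions (the numbers are invented, not the study's results):

```python
def brier_score(probs, outcomes):
    """Mean squared error of probabilistic predictions against 0/1 outcomes."""
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)

# Invented predictions for six subjects: a full-covariate model whose
# probabilities track the outcomes closely, and a less sharp alternative.
outcomes         = [1, 0, 1, 0, 1, 0]
full_model       = [0.9, 0.2, 0.8, 0.1, 0.7, 0.3]
propensity_model = [0.7, 0.4, 0.6, 0.3, 0.5, 0.5]

print(round(brier_score(full_model, outcomes), 3))        # → 0.047
print(round(brier_score(propensity_model, outcomes), 3))  # → 0.167
```

Lower is better; a model predicting the base rate for everyone gets the outcome variance as its score, so improvements below that reflect genuine discrimination and calibration, the other two measures the study reports.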

  14. Impact of modellers' decisions on hydrological a priori predictions

    Science.gov (United States)

    Holländer, H. M.; Bormann, H.; Blume, T.; Buytaert, W.; Chirico, G. B.; Exbrayat, J.-F.; Gustafsson, D.; Hölzel, H.; Krauße, T.; Kraft, P.; Stoll, S.; Blöschl, G.; Flühler, H.

    2014-06-01

    In practice, the catchment hydrologist is often confronted with the task of predicting discharge without having the needed records for calibration. Here, we report the discharge predictions of 10 modellers - using the model of their choice - for the man-made Chicken Creek catchment (6 ha, northeast Germany, Gerwin et al., 2009b) and we analyse how well they improved their prediction in three steps based on adding information prior to each following step. The modellers predicted the catchment's hydrological response in its initial phase without having access to the observed records. They used conceptually different physically based models and their modelling experience differed largely. Hence, they encountered two problems: (i) to simulate discharge for an ungauged catchment and (ii) using models that were developed for catchments, which are not in a state of landscape transformation. The prediction exercise was organized in three steps: (1) for the first prediction the modellers received a basic data set describing the catchment to a degree somewhat more complete than usually available for a priori predictions of ungauged catchments; they did not obtain information on stream flow, soil moisture, nor groundwater response and had therefore to guess the initial conditions; (2) before the second prediction they inspected the catchment on-site and discussed their first prediction attempt; (3) for their third prediction they were offered additional data by charging them pro forma with the costs for obtaining this additional information. Holländer et al. (2009) discussed the range of predictions obtained in step (1). Here, we detail the modeller's assumptions and decisions in accounting for the various processes. We document the prediction progress as well as the learning process resulting from the availability of added information. 
For the second and third steps, the progress in prediction quality is evaluated in relation to individual modelling experience and costs of

  15. Econometric models for predicting confusion crop ratios

    Science.gov (United States)

    Umberger, D. E.; Proctor, M. H.; Clark, J. E.; Eisgruber, L. M.; Braschler, C. B. (Principal Investigator)

    1979-01-01

    Results for both the United States and Canada show that econometric models can provide estimates of confusion crop ratios that are more accurate than historical ratios. Whether these models can support the LACIE 90/90 accuracy criterion is uncertain. In the United States, experimenting with additional model formulations could yield improved models in some CRDs, particularly for winter wheat. Improved models may also be possible for the Canadian CDs. The more aggregated province/state models outperformed the individual CD/CRD models. This result was expected, partly because acreage statistics are based on sampling procedures and sampling precision declines from the province/state to the CD/CRD level. Declining sampling precision and the need to substitute province/state data for CD/CRD data introduced measurement error into the CD/CRD models.

  16. PEEX Modelling Platform for Seamless Environmental Prediction

    Science.gov (United States)

    Baklanov, Alexander; Mahura, Alexander; Arnold, Stephen; Makkonen, Risto; Petäjä, Tuukka; Kerminen, Veli-Matti; Lappalainen, Hanna K.; Ezau, Igor; Nuterman, Roman; Zhang, Wen; Penenko, Alexey; Gordov, Evgeny; Zilitinkevich, Sergej; Kulmala, Markku

    2017-04-01

    The Pan-Eurasian EXperiment (PEEX) is a multidisciplinary, multi-scale research programme started in 2012 and aimed at resolving the major uncertainties in Earth System Science and global sustainability issues concerning the Arctic, the boreal Northern Eurasian regions, and China. Such challenges include climate change, air quality, biodiversity loss, chemicalization, food supply, and the use of natural resources by mining, industry, energy production and transport. This contribution introduces the current state-of-the-art modelling platform and observation systems in the Pan-Eurasian region and presents the future baselines for coherent and coordinated research infrastructures in the PEEX domain. The PEEX Modelling Platform is characterized by a complex, seamless, integrated Earth System Modelling (ESM) approach, combined with specific models of different processes and elements of the system acting on different temporal and spatial scales. An ensemble approach is taken to the integration of modelling results from different models, participants and countries. PEEX utilizes the full potential of a hierarchy of models: scenario analysis, inverse modelling, and modelling based on measurement needs and processes. The models are validated and constrained by available in-situ and remote sensing data of various spatial and temporal scales using data assimilation and top-down modelling. The analyses of the anticipated large volumes of data produced by the available models and sensors will be supported by a dedicated virtual research environment developed for these purposes.

  17. Models Predicting Success of Infertility Treatment: A Systematic Review

    Science.gov (United States)

    Zarinara, Alireza; Zeraati, Hojjat; Kamali, Koorosh; Mohammad, Kazem; Shahnazari, Parisa; Akhondi, Mohammad Mehdi

    2016-01-01

    Background: Infertile couples face problems that affect their marital life. Infertility treatment is expensive, time consuming, and not always feasible. Prediction models for infertility treatment have been proposed, and predicting treatment success is a new field in infertility care. Because prediction of treatment success is a new need for infertile couples, this paper reviews previous studies to obtain a general picture of the applicability of such models. Methods: This study was conducted as a systematic review at Avicenna Research Institute in 2015. Six databases were searched using WHO definitions and MeSH keywords. Papers on prediction models in infertility were evaluated. Results: Eighty-one papers were eligible for the study. Papers covered the years after 1986, and studies were designed both retrospectively and prospectively. IVF prediction models accounted for the largest share of papers. The most common predictors were age, duration of infertility, and ovarian and tubal problems. Conclusion: A prediction model can be applied clinically if it can be statistically evaluated and is well validated for treatment success. To achieve better results, estimates of treatment success for the physician and the couple should be based on history, examination, and clinical tests. Models must be checked for a sound theoretical approach and appropriate validation. The advantages of applying prediction models are reduced cost and time, avoidance of painful treatment for patients, assessment of the treatment approach for physicians, and support for decision making by health managers. A careful choice of the approach for designing and using these models is therefore essential. PMID:27141461

  18. MULTI MODEL DATA MINING APPROACH FOR HEART FAILURE PREDICTION

    Directory of Open Access Journals (Sweden)

    Priyanka H U

    2016-09-01

    Full Text Available Developing predictive modelling solutions for risk estimation is extremely challenging in health-care informatics. Risk estimation involves integrating heterogeneous clinical sources with different representations from different health-care providers, making the task increasingly complex. Such sources are typically voluminous and diverse, and change significantly over time. Therefore, distributed and parallel computing tools, collectively termed big data tools, are needed to synthesize these sources and assist the physician in making the right clinical decisions. In this work we propose a multi-model predictive architecture, a novel approach that combines the predictive ability of multiple models for better prediction accuracy. We demonstrate the effectiveness and efficiency of the proposed work on data from the Framingham Heart Study. Results show that the proposed multi-model predictive architecture provides better accuracy than the best single-model approach. By modelling the errors of the predictive models we are able to choose a subset of models that yields accurate results. More information was modelled into the system by multi-level mining, which resulted in enhanced predictive accuracy.
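The abstract does not give implementation details of the multi-model architecture; as an illustrative sketch only (the model names, data, and `keep` parameter below are hypothetical, not the paper's), one simple way to combine several models is to weight each by its validation error and keep only the best-performing subset:

```python
import numpy as np

def ensemble_predict(val_preds, val_y, test_preds, keep=2):
    # val_preds: dict of model name -> predicted probabilities on a validation set
    # weight each model by its validation accuracy and keep the `keep` best
    errs = {m: np.mean((p >= 0.5).astype(int) != val_y) for m, p in val_preds.items()}
    best = sorted(errs, key=errs.get)[:keep]
    w = np.array([1.0 - errs[m] for m in best])
    w = w / w.sum()
    stacked = np.stack([test_preds[m] for m in best])
    return w @ stacked  # weighted average of the retained models' probabilities

# hypothetical validation/test outputs from three candidate models
val_y = np.array([1, 0, 1, 0])
val_preds = {"logreg": np.array([0.9, 0.1, 0.8, 0.2]),
             "tree":   np.array([0.7, 0.3, 0.6, 0.4]),
             "weak":   np.array([0.1, 0.9, 0.2, 0.8])}
test_preds = {"logreg": np.array([0.8]), "tree": np.array([0.6]),
              "weak":   np.array([0.0])}
combined = ensemble_predict(val_preds, val_y, test_preds, keep=2)
```

Here the "weak" model is excluded because of its high validation error, so the combined prediction averages only the two accurate models.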

  19. The regional prediction model of PM10 concentrations for Turkey

    Science.gov (United States)

    Güler, Nevin; Güneri İşçi, Öznur

    2016-11-01

    This study aims to develop a regional model for weekly PM10 concentrations measured at air pollution monitoring stations in Turkey. There are seven geographical regions in Turkey and numerous monitoring stations in each region. Building a model conventionally for each monitoring station requires considerable labor and time, and prediction quality may degrade when the number of measurements obtained from a monitoring station is small. Moreover, prediction models obtained this way reflect the air pollutant behavior of only a small area. This study uses the Fuzzy C-Auto Regressive Model (FCARM) to find a prediction model that reflects the regional behavior of weekly PM10 concentrations. The advantage of FCARM is its ability to consider simultaneously the PM10 concentrations measured at all monitoring stations in the specified region. It also works even if the number of measurements obtained from the monitoring stations differs or is small. To evaluate its performance, FCARM was executed for all regions in Turkey and the prediction results were compared to statistical autoregressive (AR) models fitted to each station separately. According to the Mean Absolute Percentage Error (MAPE) criterion, FCARM provides better predictions with a smaller number of models.
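The FCARM method itself is not spelled out in the abstract, but the per-station baseline it is compared against can be sketched: a least-squares AR(1) fit per station, scored with MAPE (the series below is hypothetical, not Turkish PM10 data):

```python
import numpy as np

def fit_ar1(y):
    # least-squares fit of y_t = a + b * y_{t-1}
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    a, b = np.linalg.lstsq(X, y[1:], rcond=None)[0]
    return a, b

def mape(actual, predicted):
    # Mean Absolute Percentage Error, in percent
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

# hypothetical weekly PM10 series for one station
y = np.array([55.0, 60.0, 58.0, 62.0, 59.0, 63.0, 61.0, 65.0])
a, b = fit_ar1(y)
pred = a + b * y[:-1]        # one-step-ahead in-sample predictions
err = mape(y[1:], pred)
```

A regional model in the spirit of FCARM would pool all stations of a region instead of fitting one such model per station.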

  20. Gaussian mixture models as flux prediction method for central receivers

    Science.gov (United States)

    Grobler, Annemarie; Gauché, Paul; Smit, Willie

    2016-05-01

    Flux prediction methods are crucial to the design and operation of central receiver systems. Current methods such as the circular and elliptical (bivariate) Gaussian prediction methods are often used in field layout design and aiming strategies. For experimental or small central receiver systems, the flux profile of a single heliostat often deviates significantly from the circular and elliptical Gaussian models. Therefore a novel method of flux prediction was developed by incorporating the fitting of Gaussian mixture models onto flux profiles produced by flux measurement or ray tracing. A method was also developed to predict the Gaussian mixture model parameters of a single heliostat for a given time using image processing. Recording the predicted parameters in a database ensures that more accurate predictions are made in a shorter time frame.
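The abstract fits bivariate Gaussian mixtures to full flux maps; as a hedged, much smaller illustration of the same idea, a two-component one-dimensional mixture can be fitted to a flux cross-section with a few lines of EM (the data and initialisation below are illustrative, not the paper's procedure):

```python
import numpy as np

def em_gmm_1d(x, iters=100):
    # EM for a two-component 1D Gaussian mixture
    mu = np.percentile(x, [25, 75]).astype(float)   # crude initialisation
    var = np.full(2, x.var())
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        dens = (pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
                / np.sqrt(2 * np.pi * var))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / len(x)
    return pi, mu, var
```

For real flux profiles the same update equations apply in two dimensions with full covariance matrices, which is what lets the mixture capture profiles that deviate from a single circular or elliptical Gaussian.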

  1. Nonlinear model predictive control of a packed distillation column

    Energy Technology Data Exchange (ETDEWEB)

    Patwardhan, A.A.; Edgar, T.F. (Univ. of Texas, Austin, TX (United States). Dept. of Chemical Engineering)

    1993-10-01

    A rigorous dynamic model based on fundamental chemical engineering principles was formulated for a packed distillation column separating a mixture of cyclohexane and n-heptane. This model was simplified to a form suitable for use in on-line model predictive control calculations. A packed distillation column was operated at several operating conditions to estimate two unknown model parameters in the rigorous and simplified models. The actual column response to step changes in the feed rate, distillate rate, and reboiler duty agreed well with dynamic model predictions. One unusual characteristic observed was that the packed column exhibited gain-sign changes, which are very difficult to treat using conventional linear feedback control. Nonlinear model predictive control was used to control the distillation column at an operating condition where the process gain changed sign. An on-line, nonlinear model-based scheme was used to estimate unknown/time-varying model parameters.
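The column model and controller of the paper are not reproduced here, but the receding-horizon idea behind model predictive control can be sketched for a toy scalar linear plant (all coefficients and the setpoint below are illustrative, not the distillation column's):

```python
import numpy as np

def mpc_step(x, a, b, ref, horizon=5, lam=0.1):
    # free response: with zero input, x_{t+j+1} = a^(j+1) * x_t
    f = a ** np.arange(1, horizon + 1) * x
    # forced response: Phi[j, i] = a^(j-i) * b for i <= j
    Phi = np.zeros((horizon, horizon))
    for j in range(horizon):
        for i in range(j + 1):
            Phi[j, i] = a ** (j - i) * b
    # minimise ||Phi u + f - ref||^2 + lam * ||u||^2 over the input sequence u
    u = np.linalg.solve(Phi.T @ Phi + lam * np.eye(horizon),
                        Phi.T @ (np.full(horizon, ref) - f))
    return u[0]  # receding horizon: apply only the first input, then re-solve

# drive a toy plant x_{k+1} = 0.9 x_k + 0.5 u_k toward a setpoint of 1.0
x = 0.0
for _ in range(40):
    u = mpc_step(x, a=0.9, b=0.5, ref=1.0)
    x = 0.9 * x + 0.5 * u
```

A nonlinear MPC such as the paper's replaces the linear prediction equations with the (simplified) nonlinear column model and solves the horizon optimisation numerically at every step, which is what allows it to cope with gain-sign changes.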

  2. Application of Nonlinear Predictive Control Based on RBF Network Predictive Model in MCFC Plant

    Institute of Scientific and Technical Information of China (English)

    CHEN Yue-hua; CAO Guang-yi; ZHU Xin-jian

    2007-01-01

    This paper describes a nonlinear model predictive controller for regulating a molten carbonate fuel cell (MCFC). A detailed mechanistic model of the output voltage of an MCFC is presented first. However, this model is too complicated to be used in a control system. Consequently, an off-line radial basis function (RBF) network was introduced to build a nonlinear predictive model. The optimal control sequences were then obtained by applying the golden mean (golden section) method. The models and controller were implemented in the MATLAB environment. Simulation results indicate that the proposed algorithm exhibits a satisfactory control effect even when the current densities vary widely.
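The "golden mean method" presumably refers to golden-section search over a scalar control input; a generic sketch (the quadratic surrogate cost below is hypothetical, since the abstract does not give the MCFC cost function):

```python
import math

def golden_section_min(f, lo, hi, tol=1e-8):
    # golden-section search for the minimum of a unimodal f on [lo, hi]
    invphi = (math.sqrt(5.0) - 1.0) / 2.0   # 1/phi, about 0.618
    a, b = lo, hi
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):          # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                    # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

# e.g. minimise an illustrative quadratic cost over a scalar control input
u_opt = golden_section_min(lambda u: (u - 2.0) ** 2 + 1.0, 0.0, 5.0)
```

Because each iteration shrinks the bracket by the constant golden ratio, the method needs no derivatives of the predictive model, which suits a black-box RBF network cost.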

  3. A burnout prediction model based around char morphology

    Energy Technology Data Exchange (ETDEWEB)

    T. Wu; E. Lester; M. Cloke [University of Nottingham, Nottingham (United Kingdom). Nottingham Energy and Fuel Centre

    2005-07-01

    Poor burnout in a coal-fired power plant carries marked penalties in the form of reduced energy efficiency and elevated waste material that cannot be utilized. The prediction of coal combustion behaviour in a furnace is of great significance, providing valuable information not only for process optimization but also for coal buyers in the international market. Coal combustion models have been developed that can make predictions about burnout behaviour and burnout potential. Most of these kinetic models require standard parameters such as volatile content, particle size and an assumed char porosity in order to make a burnout prediction. This paper presents a new model, the Char Burnout Model (ChB), that also uses detailed information about char morphology in its prediction. The model can take data input from one of two sources, both derived from image analysis techniques: the first from individual analysis and characterization of real char types using an automated program, the second from predicted char types based on data collected during automated image analysis of coal particles. Modelling results were compared with a different carbon burnout kinetic model and with burnout data from re-firing the chars in a drop tube furnace operating at 1300°C and 5% oxygen across several residence times. The improved agreement between the ChB model and the DTF experimental data shows that the inclusion of char morphology in combustion models can improve model predictions. 27 refs., 4 figs., 4 tabs.

  4. Model-based uncertainty in species range prediction

    DEFF Research Database (Denmark)

    Pearson, R. G.; Thuiller, Wilfried; Bastos Araujo, Miguel;

    2006-01-01

    Aim Many attempts to predict the potential range of species rely on environmental niche (or 'bioclimate envelope') modelling, yet the effects of using different niche-based methodologies require further investigation. Here we investigate the impact that the choice of model can have on predictions...... day (using the area under the receiver operating characteristic curve (AUC) and kappa statistics) and by assessing consistency in predictions of range size changes under future climate (using cluster analysis). Results Our analyses show significant differences between predictions from different models......, with predicted changes in range size by 2030 differing in both magnitude and direction (e.g. from 92% loss to 322% gain). We explain differences with reference to two characteristics of the modelling techniques: data input requirements (presence/absence vs. presence-only approaches) and assumptions made by each...

  5. A new ensemble model for short term wind power prediction

    DEFF Research Database (Denmark)

    Madsen, Henrik; Albu, Razvan-Daniel; Felea, Ioan;

    2012-01-01

    As the objective of this study, a non-linear ensemble system is used to develop a new model for predicting wind speed on a short-term time scale. Short-term wind power prediction has become an extremely important field of research for the energy sector. Despite recent advancements in the research of prediction models, it was observed that different models have different capabilities and no single model is suitable under all situations. The idea behind EPS (ensemble prediction systems) is to take advantage of the unique features of each subsystem to capture the diverse patterns that exist in the dataset.... The reported results show that the prediction errors can be decreased, while the computation time is reduced....

  6. Improving Environmental Model Calibration and Prediction

    Science.gov (United States)

    2011-01-18

    [Record excerpt consists of reference-list fragments only, including: "... groundwater model calibration. Adv. Water Resour., 29(4):605-623, 2006"; "[9] B.E. Skahill, J.S. Baggett, S. Frankenstein, and C.W. Downer. More efficient ..."; and "Skahill, B., Baggett, J., Frankenstein, S., and Downer, C.W. (2009)" (journal given as Journal of Hydrology, Environmental Modelling & Software, or Water Resources Research).]

  7. Identification and classification of the amylolytic bacteria isolates TG12, TG19, and TG31 causing sourness in wet sago starch, based on 16S rDNA gene analysis

    Directory of Open Access Journals (Sweden)

    Tri Gunaedi

    2012-02-01

    Full Text Available The 16S rDNA gene is essential to prokaryotic life, including bacteria. Because the gene is highly conserved, it is useful for bacterial identification and classification in the construction of phylogenetic trees. The objective of this research was to identify and classify the amylolytic bacterial isolates TG12, TG19 and TG31, which cause sourness in wet sago starch, by 16S rDNA gene sequence analysis. The native isolates came from wet sago starch processed traditionally around Jayapura and were selected for amylolytic activity and organic acid productivity. Before genomic DNA extraction, the isolates underwent generic assignment analysis. The genomic DNA was then amplified and purified by PCR with the 27f and 1529r primers. The purified DNA was sequenced on an ABI PRISM 310 DNA sequencer with the internal primers 27f, 357f, 790f and 1230f. The generic assignment indicated that the isolates were related to Bacillus. The 16S rDNA data were aligned with corresponding Bacillus sequences retrieved from the NCBI database using the CLUSTAL X software. The phylogenetic tree was constructed with the PHYLIP programme and visualized with the Treeview programme. The resulting trees extended the value of 16S rDNA sequencing for amylolytic bacteria causing sourness in wet sago starch. The complete 16S rDNA sequence data showed that isolate TG12 formed a distinct center of diversity with Bacillus subtilis DSM 10 (AJ276351), isolate TG19 with Bacillus subtilis strain 1778 (EU982544), and isolate TG31 was genetically similar to Bacillus cereus strain WJL-063 (FJ527559). Identification based on 16S rDNA gene sequences of the amylolytic bacteria causing sourness in wet sago starch provided a powerful way of uncovering the genetics of strains within the species Bacillus subtilis and Bacillus cereus.

  8. Model Predictive Control for Smart Energy Systems

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus

    load shifting capabilities of the units that adapt to the given price predictions. We furthermore evaluated control performance in terms of economic savings for different control strategies and forecasts. Chapter 5 describes and compares the proposed large-scale Aggregator control strategies....... Aggregators are assumed to play an important role in the future Smart Grid and to coordinate a large portfolio of units. The developed economic MPC controllers interface each unit directly to an Aggregator. We developed several MPC-based aggregation strategies that coordinate the global behavior of a portfolio...

  9. Combining logistic regression and neural networks to create predictive models.

    OpenAIRE

    Spackman, K. A.

    1992-01-01

    Neural networks are being used widely in medicine and other areas to create predictive models from data. The statistical method that most closely parallels neural networks is logistic regression. This paper outlines some ways in which neural networks and logistic regression are similar, shows how a small modification of logistic regression can be used in the training of neural network models, and illustrates the use of this modification for variable selection and predictive model building wit...
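The parallel the paper outlines can be made concrete: logistic regression is exactly a single sigmoid "neuron" trained by gradient descent on the cross-entropy loss. A minimal sketch on toy data (not the paper's data or code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.5, epochs=2000):
    # gradient descent on the cross-entropy loss; identical to training a
    # one-neuron network with a sigmoid output unit
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        grad = p - y                    # dL/dz for sigmoid + cross-entropy
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

# toy 1-D data: the class flips at x = 1.5
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
w, b = train_logreg(X, y)
pred = (sigmoid(X @ w + b) >= 0.5).astype(float)
```

Adding a hidden layer to this single neuron yields a standard feed-forward network, which is the "small modification" direction the paper exploits.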

  10. Assessment of performance of survival prediction models for cancer prognosis

    Directory of Open Access Journals (Sweden)

    Chen Hung-Chia

    2012-07-01

    Full Text Available Abstract Background Cancer survival studies are commonly analyzed using survival-time prediction models for cancer prognosis. A number of different performance metrics are used to ascertain the concordance between the predicted risk score of each patient and the actual survival time, but these metrics can sometimes conflict. Alternatively, patients are sometimes divided into two classes according to a survival-time threshold, and binary classifiers are applied to predict each patient’s class. Although this approach has several drawbacks, it does provide natural performance metrics such as positive and negative predictive values to enable unambiguous assessments. Methods We compare the survival-time prediction and survival-time threshold approaches to analyzing cancer survival studies. We review and compare common performance metrics for the two approaches. We present new randomization tests and cross-validation methods to enable unambiguous statistical inferences for several performance metrics used with the survival-time prediction approach. We consider five survival prediction models consisting of one clinical model, two gene expression models, and two models from combinations of clinical and gene expression models. Results A public breast cancer dataset was used to compare several performance metrics using the five prediction models. (1) For some prediction models, the hazard ratio from fitting a Cox proportional hazards model was significant, but the two-group comparison was insignificant, and vice versa. (2) The randomization test and cross-validation were generally consistent with the p-values obtained from the standard performance metrics. (3) Binary classifiers highly depended on how the risk groups were defined; a slight change of the survival threshold for assignment of classes led to very different prediction results. Conclusions (1) Different performance metrics for evaluation of a survival prediction model may give different conclusions in
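One of the standard concordance metrics discussed for survival-time prediction, Harrell's C-index, can be computed directly from risk scores and (possibly censored) survival times; a minimal sketch with toy numbers, not the paper's dataset:

```python
def concordance_index(times, events, risk):
    # times: observed times; events: 1 = event observed, 0 = censored
    # risk: model risk scores (higher score = predicted shorter survival)
    num, den = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # a pair is usable only if subject i has an observed event
            # strictly before subject j's observed/censoring time
            if events[i] == 1 and times[i] < times[j]:
                den += 1
                if risk[i] > risk[j]:
                    num += 1.0
                elif risk[i] == risk[j]:
                    num += 0.5
    return num / den

# perfectly concordant toy example: shorter survival gets higher risk
c = concordance_index([1, 2, 3, 4], [1, 1, 1, 1], [4, 3, 2, 1])
```

A C-index of 1.0 means every usable pair is ranked correctly, 0.5 corresponds to random ranking, and 0.0 to a fully reversed ranking.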

  11. A thermodynamic model to predict wax formation in petroleum fluids

    Energy Technology Data Exchange (ETDEWEB)

    Coutinho, J.A.P. [Universidade de Aveiro (Portugal). Dept. de Quimica. Centro de Investigacao em Quimica]. E-mail: jcoutinho@dq.ua.pt; Pauly, J.; Daridon, J.L. [Universite de Pau et des Pays de l' Adour, Pau (France). Lab. des Fluides Complexes

    2001-12-01

    Some years ago the authors proposed a model for the non-ideality of the solid phase, based on the Predictive Local Composition concept. This was first applied to the Wilson equation and later extended to the NRTL and UNIQUAC models. Predictive UNIQUAC proved to be extraordinarily successful in predicting the behaviour of both model and real hydrocarbon fluids at low temperatures. This work illustrates the ability of Predictive UNIQUAC in the description of the low-temperature behaviour of petroleum fluids. It will be shown that by using Predictive UNIQUAC in the description of the solid-phase non-ideality, a complete prediction of the low-temperature behaviour of synthetic paraffin solutions, fuels and crude oils is achieved. The composition of both liquid and solid phases, the amount of crystals formed and the cloud points are predicted within the accuracy of the experimental data. The extension of Predictive UNIQUAC to high pressures, by coupling it with an EOS/G^E model based on the SRK EOS used with the LCVM mixing rule, is proposed, and predictions of phase envelopes for live oils are compared with experimental data. (author)

  12. A THERMODYNAMIC MODEL TO PREDICT WAX FORMATION IN PETROLEUM FLUIDS

    Directory of Open Access Journals (Sweden)

    J.A.P. Coutinho

    2001-12-01

    Full Text Available Some years ago the authors proposed a model for the non-ideality of the solid phase, based on the Predictive Local Composition concept. This was first applied to the Wilson equation and later extended to the NRTL and UNIQUAC models. Predictive UNIQUAC proved to be extraordinarily successful in predicting the behaviour of both model and real hydrocarbon fluids at low temperatures. This work illustrates the ability of Predictive UNIQUAC in the description of the low-temperature behaviour of petroleum fluids. It will be shown that by using Predictive UNIQUAC in the description of the solid-phase non-ideality, a complete prediction of the low-temperature behaviour of synthetic paraffin solutions, fuels and crude oils is achieved. The composition of both liquid and solid phases, the amount of crystals formed and the cloud points are predicted within the accuracy of the experimental data. The extension of Predictive UNIQUAC to high pressures, by coupling it with an EOS/G^E model based on the SRK EOS used with the LCVM mixing rule, is proposed, and predictions of phase envelopes for live oils are compared with experimental data.

  13. A systematic review of predictive modeling for bronchiolitis.

    Science.gov (United States)

    Luo, Gang; Nkoy, Flory L; Gesteland, Per H; Glasgow, Tiffany S; Stone, Bryan L

    2014-10-01

    Bronchiolitis is the most common cause of illness leading to hospitalization in young children. At present, many bronchiolitis management decisions are made subjectively, leading to significant practice variation among hospitals and physicians caring for children with bronchiolitis. To standardize care for bronchiolitis, researchers have proposed various models to predict the disease course to help determine a proper management plan. This paper reviews the existing state of the art of predictive modeling for bronchiolitis. Predictive modeling for respiratory syncytial virus (RSV) infection is covered whenever appropriate, as RSV accounts for about 70% of bronchiolitis cases. A systematic review was conducted through a PubMed search up to April 25, 2014. The literature on predictive modeling for bronchiolitis was retrieved using a comprehensive search query, which was developed through an iterative process. Search results were limited to human subjects, the English language, and children (birth to 18 years). The literature search returned 2312 references in total. After manual review, 168 of these references were determined to be relevant and are discussed in this paper. We identify several limitations and open problems in predictive modeling for bronchiolitis, and provide some preliminary thoughts on how to address them, with the hope to stimulate future research in this domain. Many problems remain open in predictive modeling for bronchiolitis. Future studies will need to address them to achieve optimal predictive models. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  14. Climate predictability and prediction skill on seasonal time scales over South America from CHFP models

    Science.gov (United States)

    Osman, Marisol; Vera, C. S.

    2016-11-01

    This work presents an assessment of the predictability and skill of climate anomalies over South America. The study considers a multi-model ensemble of seasonal forecasts for surface air temperature, precipitation and regional circulation from coupled global circulation models included in the Climate Historical Forecast Project. Predictability was evaluated through the estimation of the signal-to-total variance ratio, while prediction skill was assessed by computing anomaly correlation coefficients. Over the continent, both indicators show higher values in the tropics than in the extratropics for both surface air temperature and precipitation. Moreover, predictability and prediction skill for temperature are slightly higher in DJF than in JJA, while for precipitation they exhibit similar levels in both seasons. The largest values of predictability and skill for both variables and seasons are found over northwestern South America, while modest but still significant values are found for extratropical precipitation over southeastern South America and the extratropical Andes. The predictability levels of both variables in ENSO years are slightly higher, with the same spatial distribution, than those obtained considering all years. Nevertheless, predictability in the tropics for both variables and seasons diminishes in both warm and cold ENSO years with respect to all years. The latter can be attributed to changes in the signal rather than in the noise. Predictability and prediction skill for low-level winds and upper-level zonal winds over South America were also assessed. Maximum levels of predictability for low-level winds were found where the maximum mean values are observed, i.e. the regions associated with the equatorial trade winds, the midlatitude westerlies and the South American Low-Level Jet. Predictability maxima for upper-level zonal winds are located where the subtropical jet peaks. 
Seasonal changes in wind predictability are observed that seem to be related to

  15. Predicting and Modelling of Survival Data when Cox's Regression Model does not hold

    DEFF Research Database (Denmark)

    Scheike, Thomas H.; Zhang, Mei-Jie

    2002-01-01

    Aalen model; additive risk model; counting processes; competing risk; Cox regression; flexible modeling; goodness of fit; prediction of survival; survival analysis; time-varying effects

  16. MrBayes tgMC³: a tight GPU implementation of MrBayes.

    Directory of Open Access Journals (Sweden)

    Cheng Ling

    Full Text Available MrBayes is a model-based phylogenetic inference tool using Bayesian statistics. However, model-based assessment of phylogenetic trees adds to the computational burden of tree searching, and so poses significant computational challenges. Graphics Processing Units (GPUs) have been proposed as high-performance, low-cost acceleration platforms, and several parallelized versions of the Metropolis Coupled Markov Chain Monte Carlo (MC³) algorithm in MrBayes have been presented that can run on GPUs. However, some bottlenecks decrease the efficiency of these implementations. To address these bottlenecks, we propose a tight GPU MC³ (tgMC³) algorithm. tgMC³ implements a different architecture from the one-to-one acceleration architecture employed in previously proposed methods. It merges multiple discrete GPU kernels according to the data dependency and hence decreases the number of kernels launched and the complexity of data transfer. We implemented tgMC³ and compared its performance with an earlier proposed algorithm, nMC³, and also with MrBayes MC³ under serial and multiple concurrent CPU processes. All of the methods were benchmarked on the same computing node from DEGIMA. Experiments indicate that the tgMC³ method outstrips nMC³ (v1.0) with speedup factors from 2.1 to 2.7×. In addition, tgMC³ outperforms the serial MrBayes MC³ by a factor of 6 to 30× when using a single GTX 480 card, whereas a speedup factor of around 51× can be achieved by using two GTX 480 cards on relatively long sequences. Moreover, tgMC³ was compared with MrBayes accelerated by BEAGLE, and achieved speedup factors from 3.7 to 5.7×. The reported performance improvement of tgMC³ is significant and appears to scale well with increasing dataset sizes. In addition, the strategy proposed in tgMC³ could benefit the acceleration of other Bayesian-based phylogenetic analysis methods using GPUs.
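For readers unfamiliar with MC³ itself (independent of the GPU work above): heated chains sample a flattened version of the target and periodically propose swapping states with the cold chain. A toy single-parameter sketch, not MrBayes' implementation and with illustrative temperatures:

```python
import math
import random

def mc3_sample(logp, n_steps, temps=(1.0, 0.5), seed=1):
    # Metropolis-coupled MCMC (MC^3): chain i targets p(x)^temps[i];
    # the cold chain (temp 1.0) provides the samples, heated chains mix faster
    rng = random.Random(seed)
    states = [0.0 for _ in temps]
    cold = []
    for _ in range(n_steps):
        # one Metropolis update per chain
        for i, t in enumerate(temps):
            prop = states[i] + rng.gauss(0.0, 1.0)
            if math.log(rng.random()) < t * (logp(prop) - logp(states[i])):
                states[i] = prop
        # propose swapping the states of two adjacent chains
        i = rng.randrange(len(temps) - 1)
        dlog = (temps[i] - temps[i + 1]) * (logp(states[i + 1]) - logp(states[i]))
        if math.log(rng.random()) < dlog:
            states[i], states[i + 1] = states[i + 1], states[i]
        cold.append(states[0])
    return cold

# sample a standard normal target through the cold chain
samples = mc3_sample(lambda x: -0.5 * x * x, 20000)
```

In MrBayes the "state" is an entire tree plus substitution-model parameters, and it is the per-state likelihood evaluation that the GPU kernels accelerate.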

  17. Predictive error analysis for a water resource management model

    Science.gov (United States)

    Gallagher, Mark; Doherty, John

    2007-02-01

    Summary: In calibrating a model, a set of parameters is assigned to the model which will be employed for the making of all future predictions. If these parameters are estimated through solution of an inverse problem, formulated to be properly posed through either pre-calibration or mathematical regularisation, then solution of this inverse problem will, of necessity, lead to a simplified parameter set that omits the details of reality, while still fitting historical data acceptably well. Furthermore, estimates of parameters so obtained will be contaminated by measurement noise. Both of these phenomena will lead to errors in predictions made by the model, with the potential for error increasing with the hydraulic property detail on which the prediction depends. Integrity of model usage demands that model predictions be accompanied by some estimate of the possible errors associated with them. The present paper applies theory developed in a previous work to the analysis of predictive error associated with a real-world water resource management model. The analysis offers many challenges, including the fact that the model is a complex one that was partly calibrated by hand. Nevertheless, it is typical of models which are commonly employed as the basis for the making of important decisions, and for which such an analysis must be made. The potential errors associated with point-based and averaged water level and creek inflow predictions are examined, together with the dependence of these errors on the amount of averaging involved. Error variances associated with predictions made by the existing model are compared with "optimized error variances" that could have been obtained had calibration been undertaken in such a way as to minimize predictive error variance. The contributions by different parameter types to the overall error variance of selected predictions are also examined.

  18. Models for short term malaria prediction in Sri Lanka

    Directory of Open Access Journals (Sweden)

    Galappaththy Gawrie NL

    2008-05-01

    Full Text Available Abstract Background Malaria in Sri Lanka is unstable and fluctuates in intensity both spatially and temporally. Although the case counts are dwindling at present, given the past history of resurgence of outbreaks despite effective control measures, the control programmes have to stay prepared. The availability of long time series of monitored/diagnosed malaria cases allows for the study of forecasting models, with an aim to developing a forecasting system which could assist in the efficient allocation of resources for malaria control. Methods Exponentially weighted moving average models, autoregressive integrated moving average (ARIMA models with seasonal components, and seasonal multiplicative autoregressive integrated moving average (SARIMA models were compared on monthly time series of district malaria cases for their ability to predict the number of malaria cases one to four months ahead. The addition of covariates such as the number of malaria cases in neighbouring districts or rainfall were assessed for their ability to improve prediction of selected (seasonal ARIMA models. Results The best model for forecasting and the forecasting error varied strongly among the districts. The addition of rainfall as a covariate improved prediction of selected (seasonal ARIMA models modestly in some districts but worsened prediction in other districts. Improvement by adding rainfall was more frequent at larger forecasting horizons. Conclusion Heterogeneity of patterns of malaria in Sri Lanka requires regionally specific prediction models. Prediction error was large at a minimum of 22% (for one of the districts for one month ahead predictions. The modest improvement made in short term prediction by adding rainfall as a covariate to these prediction models may not be sufficient to merit investing in a forecasting system for which rainfall data are routinely processed.
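    The exponentially weighted moving average is the simplest of the model families compared above; a minimal one-step-ahead version can be sketched as follows (the case counts are invented for illustration, not Sri Lankan surveillance data):

```python
def ewma_forecast(series, alpha=0.3):
    """One-step-ahead forecast with an exponentially weighted moving average.

    Each observation updates the level estimate; the forecast for the next
    month is simply the current level. alpha (0 < alpha <= 1) controls how
    quickly older case counts are discounted.
    """
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

# Invented monthly district case counts, purely illustrative
cases = [120, 135, 150, 160, 140, 130]
next_month = ewma_forecast(cases)
```

The (S)ARIMA models compared in the paper add autoregressive, differencing and seasonal terms on top of this kind of smoothing, at the cost of per-district model selection.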

  19. Validation of static IMRT. TG-119 recommendations of the American Association of Physicists in Medicine; Validacion de IMRT estatica. Recomendaciones TG-119 de la AAPM

    Energy Technology Data Exchange (ETDEWEB)

    Gomez Barrado, A.; Sanchez Jimenez, E.; Sanchez-Reyes, A.

    2013-07-01

The implementation of intensity-modulated radiotherapy (IMRT) requires a series of prior checks that ensure the quality of the treatments. There are several national and international recommendations, among which we highlight the conclusions of Task Group 119 of the American Association of Physicists in Medicine (AAPM). This work describes the implementation and results of the tests proposed in the AAPM TG-119 recommendations, to validate an improved model of our linear accelerator (LINAC) in our treatment planning system. (Author)

  20. Aggregate driver model to enable predictable behaviour

    Science.gov (United States)

    Chowdhury, A.; Chakravarty, T.; Banerjee, T.; Balamuralidhar, P.

    2015-09-01

The categorization of driving styles, particularly in terms of aggressiveness and skill, is an emerging area of interest under the broader theme of intelligent transportation. There are two possible discriminatory techniques that can be applied for such categorization: a micro-scale (event-based) model and a macro-scale (aggregate) model. It is believed that an aggregate model will reveal many interesting aspects of human-machine interaction; for example, we may be able to understand the propensities of individuals to carry out a given task over longer periods of time. A useful driver model may include the adaptive capability of the human driver, aggregated as the individual propensity to control speed/acceleration. Towards that objective, we carried out experiments by deploying a smartphone-based application used for data collection by a group of drivers. Data were primarily collected from GPS measurements, including position and speed on a second-by-second basis, for a number of trips over a two-month period. Analysing the data set, aggregate models for individual drivers were created and their natural aggressiveness was deduced. In this paper, we present the initial results for 12 drivers. It is shown that the higher-order moments of the acceleration profile are important parameters and identifiers of journey quality. It is also observed that the kurtosis of the acceleration profiles stores major information about the driving styles. Such an observation leads to two different ranking systems based on acceleration data. Such driving behaviour models can be integrated with vehicle and road models and used to generate behavioural models for real traffic scenarios.
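    The kurtosis-based ranking described above hinges on the fourth moment of the acceleration series; a minimal sketch (with invented acceleration samples, not the study's GPS data) might look like:

```python
def kurtosis(xs):
    """Pearson kurtosis (fourth standardized moment); 3.0 for a Gaussian.
    Heavy-tailed (spiky) acceleration profiles score higher."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m4 / var ** 2

# Invented second-by-second acceleration samples (m/s^2)
smooth_driver = [0.1, -0.1, 0.2, -0.2, 0.15, -0.15, 0.05, -0.05]
spiky_driver = [0.05, -0.05, 0.0, 0.05, 2.5, -0.05, 0.0, -2.5]
```

Ranking by this statistic places spiky_driver above smooth_driver, consistent with the idea that occasional harsh acceleration events dominate the tails of an aggressive driver's profile.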

  1. Validating predictions from climate envelope models

    Science.gov (United States)

    Watling, J.; Bucklin, D.; Speroterra, C.; Brandt, L.; Cabal, C.; Romañach, Stephanie S.; Mazzotti, Frank J.

    2013-01-01

    Climate envelope models are a potentially important conservation tool, but their ability to accurately forecast species’ distributional shifts using independent survey data has not been fully evaluated. We created climate envelope models for 12 species of North American breeding birds previously shown to have experienced poleward range shifts. For each species, we evaluated three different approaches to climate envelope modeling that differed in the way they treated climate-induced range expansion and contraction, using random forests and maximum entropy modeling algorithms. All models were calibrated using occurrence data from 1967–1971 (t1) and evaluated using occurrence data from 1998–2002 (t2). Model sensitivity (the ability to correctly classify species presences) was greater using the maximum entropy algorithm than the random forest algorithm. Although sensitivity did not differ significantly among approaches, for many species, sensitivity was maximized using a hybrid approach that assumed range expansion, but not contraction, in t2. Species for which the hybrid approach resulted in the greatest improvement in sensitivity have been reported from more land cover types than species for which there was little difference in sensitivity between hybrid and dynamic approaches, suggesting that habitat generalists may be buffered somewhat against climate-induced range contractions. Specificity (the ability to correctly classify species absences) was maximized using the random forest algorithm and was lowest using the hybrid approach. Overall, our results suggest cautious optimism for the use of climate envelope models to forecast range shifts, but also underscore the importance of considering non-climate drivers of species range limits. The use of alternative climate envelope models that make different assumptions about range expansion and contraction is a new and potentially useful way to help inform our understanding of climate change effects on species.
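    The sensitivity and specificity measures used above reduce to simple ratios over the t2 confusion counts; the numbers below are made up for illustration:

```python
def sensitivity(tp, fn):
    """Fraction of observed presences the model classifies correctly."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of observed absences the model classifies correctly."""
    return tn / (tn + fp)

# Hypothetical evaluation against t2 (1998-2002) survey data
sens = sensitivity(tp=45, fn=5)   # 0.9
spec = specificity(tn=30, fp=20)  # 0.6
```

The trade-off reported in the paper (hybrid approach: higher sensitivity, lower specificity) is exactly a shift between these two ratios as predicted ranges expand.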

  2. An evaluation of the AAPM-TG43 dosimetry protocol for I-125 brachytherapy seed.

    Science.gov (United States)

    Mobit, Paul; Badragan, Iulian

    2004-07-21

    The EGSnrc Monte Carlo system has been used to calculate the dose distributions from 125I radioactive seeds (model 6711). The results showed that the agreement between EGSnrc and the American Association of Physicists in Medicine Task Group Report 43 (AAPM-TG43) dosimetry protocol is generally within +/-15% for radial distances less than 1.0 cm in both the transverse axis and longitudinal axis of the source. For radial distances between 1.0 and 2.5 cm the agreement between Monte Carlo simulations and the AAPM-TG43 dosimetry protocol is within +/-20%. In the longitudinal axis of the source the difference between Monte Carlo simulations and the AAPM-TG43 dosimetry is up to 40% for radial distances greater than 2.5 cm. The agreement between the EGSnrc/Monte Carlo simulation and the AAPM-TG43 dosimetry protocol improved significantly when recently published data of the anisotropic function were implemented (Weaver 1998 Med. Phys. 25 2271-8). The difference between Monte Carlo simulations and the AAPM-TG43 dosimetry protocol is not more than +/-10% in the transverse axis of the source up to a radius of 2.5 cm. The EGSnrc Monte Carlo simulation and the AAPM-TG43 with the Weaver anisotropic data were also used to investigate the differences in the dose distribution caused by small differences in the construction of individual seeds (Sloboda and Menon 2000 Med. Phys. 27 1789-99). The results show that a change in length of the silver rod containing the 125I radioactive material of 0.14 mm does not affect the dose distribution significantly in the transverse and longitudinal axes but a change of 0.13 mm in the thickness of the welded end of the encapsulation affected the dose significantly in the longitudinal axis of the source.
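    For context, the TG-43 dose rate at a point reduces, in the 1D point-source approximation, to an inverse-square term scaled by the dose-rate constant and a tabulated radial dose function. The sketch below uses that simplified form with an invented g(r) table; these are not the consensus model 6711 data, and the full formalism adds the geometry and anisotropy functions discussed in the abstract:

```python
def radial_dose_function(r, table):
    """Piecewise-linear interpolation of tabulated g(r) values."""
    for (r1, g1), (r2, g2) in zip(table, table[1:]):
        if r1 <= r <= r2:
            return g1 + (g2 - g1) * (r - r1) / (r2 - r1)
    raise ValueError("radius outside tabulated range")

def dose_rate(r_cm, s_k, dose_rate_constant, g_table):
    """1D point-source TG-43 approximation:
    D(r) = S_K * Lambda * (r0 / r)**2 * g(r), with r0 = 1 cm.
    The full 2D formalism multiplies in geometry and anisotropy terms."""
    r0 = 1.0
    return (s_k * dose_rate_constant * (r0 / r_cm) ** 2
            * radial_dose_function(r_cm, g_table))

# Illustrative g(r) table -- invented values, not consensus seed data
g_table = [(0.5, 1.04), (1.0, 1.00), (2.0, 0.87), (3.0, 0.76)]
```

At the reference radius r0 = 1 cm with g(r0) = 1 the formula collapses to S_K times the dose-rate constant, which is a convenient sanity check on any implementation.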

  3. Noncausal spatial prediction filtering based on an ARMA model

    Institute of Scientific and Technical Information of China (English)

    Liu Zhipeng; Chen Xiaohong; Li Jingye

    2009-01-01

Conventional f-x prediction filtering methods are based on an autoregressive (AR) model. The error section is first computed as a source noise but is removed as additive noise to obtain the signal, which results in an assumption inconsistency before and after filtering. In this paper, an autoregressive moving-average (ARMA) model is employed to avoid this inconsistency. Based on the ARMA model, a noncausal prediction filter is computed and a self-deconvolved projection filter is used for estimating additive noise in order to suppress random noise. The 1-D ARMA model is also extended to the 2-D spatial domain, which is the basis for noncausal spatial prediction filtering for random noise attenuation on 3-D seismic data. Synthetic and field data processing indicates that this method can suppress random noise more effectively while preserving the signal, and does much better than other conventional prediction filtering methods.
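    The noncausal ARMA filter construction is beyond a short sketch, but the underlying prediction step is a least-squares fit of each sample from its neighbours. A minimal causal AR(2) analogue in one dimension, fitted by solving the 2x2 normal equations directly, illustrates the idea (this is a simplification for illustration, not the paper's noncausal ARMA filter):

```python
import math

def fit_ar2(x):
    """Least-squares fit of x[t] ~ a1*x[t-1] + a2*x[t-2]
    by solving the 2x2 normal equations."""
    s11 = s12 = s22 = b1 = b2 = 0.0
    for t in range(2, len(x)):
        s11 += x[t-1] * x[t-1]
        s12 += x[t-1] * x[t-2]
        s22 += x[t-2] * x[t-2]
        b1 += x[t] * x[t-1]
        b2 += x[t] * x[t-2]
    det = s11 * s22 - s12 * s12
    a1 = (b1 * s22 - b2 * s12) / det
    a2 = (s11 * b2 - s12 * b1) / det
    return a1, a2

# A noiseless sinusoid obeys x[t] = 2*cos(w)*x[t-1] - x[t-2] exactly,
# so the fitted AR(2) predictor recovers it with negligible error.
x = [math.sin(0.3 * t) for t in range(50)]
a1, a2 = fit_ar2(x)
pred = a1 * x[-1] + a2 * x[-2]  # one-step-ahead prediction of x[50]
```

The f-x methods in the paper apply the same least-squares machinery per frequency slice across spatial traces, with the ARMA form additionally modeling the noise term.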

  4. Performance Predictable ServiceBSP Model for Grid Computing

    Institute of Scientific and Technical Information of China (English)

    TONG Weiqin; MIAO Weikai

    2007-01-01

This paper proposes a performance prediction model for the grid computing model ServiceBSP to support the development of high-quality applications in a grid environment. In the ServiceBSP model, the agents carrying computing tasks are dispatched to the local domain of the selected computation services. Using an IP (integer programming) approach, the Service Selection Agent selects the computation services with globally optimized QoS (quality of service) in mind. The performance of a ServiceBSP application can be predicted according to the performance prediction model based on the QoS of the selected services. The performance prediction model can help users analyze their applications and improve them by optimizing the factors that affect performance. The experiment shows that the Service Selection Agent can provide ServiceBSP users with satisfactory application QoS.

  5. Two Predictions of a Compound Cue Model of Priming

    OpenAIRE

    Walenski, Matthew

    2003-01-01

    This paper examines two predictions of the compound cue model of priming (Ratcliff and McKoon, 1988). While this model has been used to provide an account of a wide range of priming effects, it may not actually predict priming in these or other circumstances. In order to predict priming effects, the compound cue model relies on an assumption that all items have the same number of associates. This assumption may be true in only a restricted number of cases. This paper demonstrates that when th...

  6. Aerodynamic Noise Prediction Using stochastic Turbulence Modeling

    Directory of Open Access Journals (Sweden)

    Arash Ahmadzadegan

    2008-01-01

Full Text Available Amongst many approaches to determine the sound propagated from turbulent flows, hybrid methods, in which the turbulent noise source field is computed or modeled separately from the far field calculation, are frequently used. For basic estimation of sound propagation, less computationally intensive methods can be developed using stochastic models of the turbulent fluctuations (turbulent noise source field). A simple and easy-to-use stochastic model for generating turbulent velocity fluctuations, called the continuous filter white noise (CFWN) model, was used. This method is based on the use of a classical Langevin equation to model the details of the fluctuating field superimposed on averaged computed quantities. The resulting sound field due to the generated unsteady flow field was evaluated using Lighthill's acoustic analogy. A volume integral method was used for evaluating the acoustic analogy. This formulation presents an advantage, as it confers the possibility to determine separately the contribution of the different integral terms and also integration regions to the radiated acoustic pressure. Our results were validated by comparing the directivity and the overall sound pressure level (OSPL) magnitudes with the available experimental results. Numerical results showed reasonable agreement with the experiments, both in maximum directivity and magnitude of the OSPL. This method presents a very suitable tool for the noise calculation of different engineering problems in early stages of the design process, where rough estimates using cheaper methods are needed for different geometries.
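    A discrete Langevin update of the kind the CFWN model builds on can be sketched as an Ornstein-Uhlenbeck recursion; the parameter names and values here are illustrative, not the paper's:

```python
import math
import random

def cfwn_series(n, dt=0.01, tau=0.1, sigma=1.0, seed=0):
    """Discrete Langevin (Ornstein-Uhlenbeck) recursion generating a
    velocity-fluctuation series with integral time scale tau and
    standard deviation sigma."""
    rng = random.Random(seed)
    a = math.exp(-dt / tau)
    # b is chosen so the stationary variance stays exactly sigma^2
    b = sigma * math.sqrt(1.0 - a * a)
    u = 0.0
    out = []
    for _ in range(n):
        u = a * u + b * rng.gauss(0.0, 1.0)
        out.append(u)
    return out
```

Superimposing such a series on the averaged flow quantities gives an unsteady source field whose time correlation decays over tau, which is the property Lighthill's analogy then integrates over.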

  7. A Predictive Model of High Shear Thrombus Growth.

    Science.gov (United States)

    Mehrabadi, Marmar; Casa, Lauren D C; Aidun, Cyrus K; Ku, David N

    2016-08-01

    The ability to predict the timescale of thrombotic occlusion in stenotic vessels may improve patient risk assessment for thrombotic events. In blood contacting devices, thrombosis predictions can lead to improved designs to minimize thrombotic risks. We have developed and validated a model of high shear thrombosis based on empirical correlations between thrombus growth and shear rate. A mathematical model was developed to predict the growth of thrombus based on the hemodynamic shear rate. The model predicts thrombus deposition based on initial geometric and fluid mechanic conditions, which are updated throughout the simulation to reflect the changing lumen dimensions. The model was validated by comparing predictions against actual thrombus growth in six separate in vitro experiments: stenotic glass capillary tubes (diameter = 345 µm) at three shear rates, the PFA-100(®) system, two microfluidic channel dimensions (heights = 300 and 82 µm), and a stenotic aortic graft (diameter = 5.5 mm). Comparison of the predicted occlusion times to experimental results shows excellent agreement. The model is also applied to a clinical angiography image to illustrate the time course of thrombosis in a stenotic carotid artery after plaque cap rupture. Our model can accurately predict thrombotic occlusion time over a wide range of hemodynamic conditions.
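    The feedback in the model is that thrombus growth narrows the lumen, which at fixed flow raises the wall shear rate and accelerates further deposition. A toy Euler integration of that loop (with a made-up linear growth law and coefficient, not the paper's empirical shear-rate correlation) looks like:

```python
import math

def wall_shear_rate(q_ml_s, radius_cm):
    """Poiseuille wall shear rate in a tube: gamma = 4*Q / (pi * R^3)."""
    return 4.0 * q_ml_s / (math.pi * radius_cm ** 3)

def occlusion_time(radius_cm=0.05, q_ml_s=0.01, k=1e-7, dt=0.1):
    """Euler integration: the lumen radius shrinks at a rate proportional
    to the current wall shear rate. k is an invented growth coefficient,
    standing in for the paper's empirical correlation. Returns the time
    until the lumen has lost 90% of its radius."""
    t, r = 0.0, radius_cm
    while r > 0.1 * radius_cm:
        gamma = wall_shear_rate(q_ml_s, r)
        r -= k * gamma * dt
        t += dt
    return t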

  8. The application of modeling and prediction with MRA wavelet network

    Institute of Scientific and Technical Information of China (English)

    LU Shu-ping; YANG Xue-jing; ZHAO Xi-ren

    2004-01-01

As there are many non-linear systems in real engineering, it is very important to further research the modeling and prediction of non-linear systems. Based on the multi-resolution analysis (MRA) of wavelet theory, this paper combined wavelet theory with neural networks and established an MRA wavelet network with the scaling function and wavelet function as its neurons. From the analysis in the frequency domain, the results indicated that the MRA wavelet network was better than other wavelet networks in its ability to approximate signals. An essential study was carried out on modeling and prediction with the MRA wavelet network in non-linear systems. Using the lengthwise sway data received from a ship model experiment, an offline prediction model was established and applied to the short-time prediction of ship motion. The simulation results indicated that the forecasting model improved the prediction precision effectively, lengthened the forecasting time and gave better prediction results than the AR linear model. The research indicates that it is feasible to use the MRA wavelet network in the short-time prediction of ship motion.

  9. A COMPARISON BETWEEN THREE PREDICTIVE MODELS OF COMPUTATIONAL INTELLIGENCE

    Directory of Open Access Journals (Sweden)

    DUMITRU CIOBANU

    2013-12-01

Full Text Available Time series prediction is an open problem and many researchers are trying to find new predictive methods and improvements for the existing ones. Lately, methods based on neural networks have been used extensively for time series prediction. Also, support vector machines have solved some of the problems faced by neural networks and have begun to be widely used for time series prediction. The main drawback of those two methods is that they are global models, and in the case of a chaotic time series it is unlikely that such a model can be found. This paper presents a comparison between three predictive models from the computational intelligence field: one based on neural networks, one based on support vector machines and another based on chaos theory. We show that the model based on chaos theory is an alternative to the other two methods.

  10. MJO prediction skill, predictability, and teleconnection impacts in the Beijing Climate Center Atmospheric General Circulation Model

    Science.gov (United States)

    Wu, Jie; Ren, Hong-Li; Zuo, Jinqing; Zhao, Chongbo; Chen, Lijuan; Li, Qiaoping

    2016-09-01

    This study evaluates performance of Madden-Julian oscillation (MJO) prediction in the Beijing Climate Center Atmospheric General Circulation Model (BCC_AGCM2.2). By using the real-time multivariate MJO (RMM) indices, it is shown that the MJO prediction skill of BCC_AGCM2.2 extends to about 16-17 days before the bivariate anomaly correlation coefficient drops to 0.5 and the root-mean-square error increases to the level of the climatological prediction. The prediction skill showed a seasonal dependence, with the highest skill occurring in boreal autumn, and a phase dependence with higher skill for predictions initiated from phases 2-4. The results of the MJO predictability analysis showed that the upper bounds of the prediction skill can be extended to 26 days by using a single-member estimate, and to 42 days by using the ensemble-mean estimate, which also exhibited an initial amplitude and phase dependence. The observed relationship between the MJO and the North Atlantic Oscillation was accurately reproduced by BCC_AGCM2.2 for most initial phases of the MJO, accompanied with the Rossby wave trains in the Northern Hemisphere extratropics driven by MJO convection forcing. Overall, BCC_AGCM2.2 displayed a significant ability to predict the MJO and its teleconnections without interacting with the ocean, which provided a useful tool for fully extracting the predictability source of subseasonal prediction.
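    The skill scores quoted above rest on the bivariate anomaly correlation over the two RMM indices, which for a set of verification times can be computed as:

```python
import math

def bivariate_acc(obs, fcst):
    """Bivariate anomaly correlation for (RMM1, RMM2) pairs, following
    the standard MJO verification definition:
    COR = sum(a1*b1 + a2*b2)
          / (sqrt(sum(a1^2 + a2^2)) * sqrt(sum(b1^2 + b2^2)))."""
    num = sum(a1 * b1 + a2 * b2 for (a1, a2), (b1, b2) in zip(obs, fcst))
    d_obs = math.sqrt(sum(a1 * a1 + a2 * a2 for a1, a2 in obs))
    d_fcst = math.sqrt(sum(b1 * b1 + b2 * b2 for b1, b2 in fcst))
    return num / (d_obs * d_fcst)
```

The "16-17 day" skill quoted above is the lead time at which this correlation, accumulated over all forecast start dates, drops below 0.5.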

  11. Predicting Market Impact Costs Using Nonparametric Machine Learning Models.

    Directory of Open Access Journals (Sweden)

    Saerom Park

Full Text Available Market impact cost is the most significant portion of implicit transaction costs that can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural networks, Gaussian processes, and support vector regression, to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data of the US stock market from the Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art parametric benchmark, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance.

  12. Predicting Market Impact Costs Using Nonparametric Machine Learning Models.

    Science.gov (United States)

    Park, Saerom; Lee, Jaewook; Son, Youngdoo

    2016-01-01

Market impact cost is the most significant portion of implicit transaction costs that can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural networks, Gaussian processes, and support vector regression, to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data of the US stock market from the Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art parametric benchmark, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance.

  13. New Approaches for Channel Prediction Based on Sinusoidal Modeling

    Directory of Open Access Journals (Sweden)

    Ekman Torbjörn

    2007-01-01

    Full Text Available Long-range channel prediction is considered to be one of the most important enabling technologies to future wireless communication systems. The prediction of Rayleigh fading channels is studied in the frame of sinusoidal modeling in this paper. A stochastic sinusoidal model to represent a Rayleigh fading channel is proposed. Three different predictors based on the statistical sinusoidal model are proposed. These methods outperform the standard linear predictor (LP in Monte Carlo simulations, but underperform with real measurement data, probably due to nonstationary model parameters. To mitigate these modeling errors, a joint moving average and sinusoidal (JMAS prediction model and the associated joint least-squares (LS predictor are proposed. It combines the sinusoidal model with an LP to handle unmodeled dynamics in the signal. The joint LS predictor outperforms all the other sinusoidal LMMSE predictors in suburban environments, but still performs slightly worse than the standard LP in urban environments.

  14. Prediction model for spring dust weather frequency in North China

    Institute of Scientific and Technical Information of China (English)

    LANG XianMei

    2008-01-01

It is of great social and scientific importance and also very difficult to make reliable prediction for dust weather frequency (DWF) in North China. In this paper, the correlation between spring DWF at the Beijing and Tianjin observation stations, taken as examples in North China, and seasonally averaged surface air temperature, precipitation, Arctic Oscillation, Antarctic Oscillation, Southern Oscillation, near-surface meridional wind and Eurasian westerly index is respectively calculated so as to construct a prediction model for spring DWF in North China using these climatic factors. Two prediction models, i.e. model-I and model-II, are then set up respectively based on observed climate data and the 32-year (1970-2001) extra-seasonal hindcast experiment data as reproduced by the nine-level Atmospheric General Circulation Model developed at the Institute of Atmospheric Physics (IAP9L-AGCM). It is indicated that the correlation coefficient between the observed and predicted DWF reaches 0.933 in model-I, suggesting a high prediction skill one season ahead. The corresponding value is as high as 0.948 for the subsequent model-II, which involves synchronous spring climate data reproduced by the IAP9L-AGCM relative to model-I. Model-II can not only make more precise predictions but also bring forward the lead time of real-time prediction from model-I's one season to half a year. At last, the real-time predictability of the two models is evaluated. It follows that both models display high prediction skill for both the interannual variation and linear trend of spring DWF in North China, and each is also featured by different advantages. As for model-II, the prediction skill is much higher than that of the original approach using the IAP9L-AGCM alone. Therefore, the prediction idea put forward here should be popularized in other regions of China where dust weather occurs frequently.

  16. Model Predictive Control of Sewer Networks

    DEFF Research Database (Denmark)

    Pedersen, Einar B.; Herbertsson, Hannes R.; Niemann, Henrik;

    2016-01-01

    The developments in solutions for management of urban drainage are of vital importance, as the amount of sewer water from urban areas continues to increase due to the increase of the world’s population and the change in the climate conditions. How a sewer network is structured, monitored...... and controlled have thus become essential factors for efficient performance of waste water treatment plants. This paper examines methods for simplified modelling and controlling a sewer network. A practical approach to the problem is used by analysing simplified design model, which is based on the Barcelona...

  17. A burnout prediction model based around char morphology

    Energy Technology Data Exchange (ETDEWEB)

    Tao Wu; Edward Lester; Michael Cloke [University of Nottingham, Nottingham (United Kingdom). School of Chemical, Environmental and Mining Engineering

    2006-05-15

Several combustion models have been developed that can make predictions about coal burnout and burnout potential. Most of these kinetic models require standard parameters such as volatile content and particle size to make a burnout prediction. This article presents a new model called the char burnout (ChB) model, which also uses detailed information about char morphology in its prediction. The input data to the model is based on information derived from two different image analysis techniques. One technique generates characterization data from real char samples, and the other predicts char types based on characterization data from image analysis of coal particles. The pyrolyzed chars in this study were created in a drop tube furnace operating at 1300°C, 200 ms, and 1% oxygen. Modeling results were compared with a different carbon burnout kinetic model as well as the actual burnout data from refiring the same chars in a drop tube furnace operating at 1300°C, 5% oxygen, and residence times of 200, 400, and 600 ms. Good agreement between the ChB model and experimental data indicates that the inclusion of char morphology in combustion models could well improve model predictions. 38 refs., 5 figs., 6 tabs.

  18. An evaluation of mathematical models for predicting skin permeability.

    Science.gov (United States)

    Lian, Guoping; Chen, Longjian; Han, Lujia

    2008-01-01

A number of mathematical models have been proposed for predicting skin permeability, mostly empirical and very few deterministic. Early empirical models use simple lipophilicity parameters. The recent trend is to use more complicated molecular structure descriptors. There has been much debate on which models best predict skin permeability. This article evaluates various mathematical models using a comprehensive experimental dataset of skin permeability for 124 chemical compounds compiled from various sources. Of the seven models compared, the deterministic model of Mitragotri gives the best prediction. The simple quantitative structure permeability relationship (QSPR) model of Potts and Guy gives the second best prediction. The two models have many features in common. Both assume the lipid matrix as the pathway of transdermal permeation. Both use the octanol-water partition coefficient and molecular size. Even the mathematical formulae are similar. All other empirical QSPR models that use more complicated molecular structure descriptors fail to provide satisfactory prediction. The molecular structure descriptors in the more complicated QSPR models are empirically related to skin permeation. The mechanism by which these descriptors affect transdermal permeation is not clear. Mathematically it is an ill-defined approach to use many collinearly related parameters rather than fewer independent parameters in multi-linear regression.
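    The Potts and Guy QSPR mentioned above is a two-parameter linear model; the coefficients below are the commonly quoted 1992 values, which should be checked against the original paper before use:

```python
def log_kp_potts_guy(log_kow, mw):
    """Potts & Guy (1992) QSPR: log of the permeability coefficient
    kp (cm/h) from the octanol-water partition coefficient (log Kow)
    and molecular weight (MW). Coefficients as commonly quoted."""
    return -2.72 + 0.71 * log_kow - 0.0061 * mw
```

Both trends discussed in the article are visible directly in the formula: predicted permeability rises with lipophilicity (log Kow) and falls with molecular size (MW), using only two independent parameters.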

  19. Bayesian Age-Period-Cohort Modeling and Prediction - BAMP

    Directory of Open Access Journals (Sweden)

    Volker J. Schmid

    2007-10-01

Full Text Available The software package BAMP provides a method of analyzing incidence or mortality data on the Lexis diagram, using a Bayesian version of an age-period-cohort model. A hierarchical model is assumed, with a binomial model in the first stage. As smoothing priors for the age, period and cohort parameters, random walks of first and second order, with and without an additional unstructured component, are available. Unstructured heterogeneity can also be included in the model. In order to evaluate the model fit, posterior deviance, DIC and predictive deviances are computed. By projecting the random walk prior into the future, future death rates can be predicted.
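    Projecting a second-order random-walk prior into the future amounts, at the level of the point forecast, to linear extrapolation of the last two effects; a minimal sketch (not BAMP's full posterior machinery, which also propagates uncertainty):

```python
def project_rw2(effects, horizon):
    """Point projection of a second-order random-walk prior: the prior
    mean of each future value is the linear extrapolation
    x[t+1] = 2*x[t] - x[t-1]. Returns the projected values only."""
    out = list(effects)
    for _ in range(horizon):
        out.append(2 * out[-1] - out[-2])
    return out[len(effects):]
```

A first-order random walk would instead project a flat continuation of the last value, which is why the order of the smoothing prior matters for forecasts.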

  20. Validation of a tuber blight (Phytophthora infestans) prediction model

    Science.gov (United States)

    Potato tuber blight caused by Phytophthora infestans accounts for significant losses in storage. There is limited published quantitative data on predicting tuber blight. We validated a tuber blight prediction model developed in New York with cultivars Allegany, NY 101, and Katahdin using independent...

  1. Prediction of bypass transition with differential Reynolds stress models

    NARCIS (Netherlands)

    Westin, K.J.A.; Henkes, R.A.W.M.

    1998-01-01

    Boundary layer transition induced by high levels of free-stream turbulence (FST), so-called bypass transition, cannot be predicted with conventional stability calculations (e.g. the e^n method). The use of turbulence models for transition prediction has shown some success for this type of flow, and

  2. Prediction Models of Free-Field Vibrations from Railway Traffic

    DEFF Research Database (Denmark)

    Malmborg, Jens; Persson, Kent; Persson, Peter

    2017-01-01

    and railways close to where people work and live. Annoyance from traffic-induced vibrations and noise is expected to be a growing issue. To predict the level of vibration and noise in buildings caused by railway and road traffic, calculation models are needed. In the present paper, a simplified prediction...

  3. A new ensemble model for short term wind power prediction

    DEFF Research Database (Denmark)

    Madsen, Henrik; Albu, Razvan-Daniel; Felea, Ioan

    2012-01-01

    As the objective of this study, a non-linear ensemble system is used to develop a new model for predicting wind speed on a short-term time scale. Short-term wind power prediction has become an extremely important field of research for the energy sector. Regardless of the recent advancements in the research...

  4. Space Weather: Measurements, Models and Predictions

    Science.gov (United States)

    2014-03-21

    and record high levels of cosmic ray flux. There were broad-ranging terrestrial responses to this inactivity of the Sun. BC was involved in the...techniques for converting from one coordinate system (e.g., the invariant coordinate system used for the model) to another (e.g., the latitude-radius

  5. Monotone models for prediction in data mining

    NARCIS (Netherlands)

    Velikova, M.V.

    2006-01-01

    This dissertation studies the incorporation of monotonicity constraints as a type of domain knowledge into a data mining process. Monotonicity constraints are enforced at two stages: data preparation and data modeling. The main contributions of the research are a novel procedure to test the degree of

  6. Predicting Magazine Audiences with a Loglinear Model.

    Science.gov (United States)

    1987-07-01

    important use of e.d. estimates is in media selection (Aaker 1975; Lee 1962, 1963; Little and Lodish 1969). All advertising campaigns have a budget. It... Reference: Aaker, D.A. (1975), "ADMOD: An Advertising Decision Model," Journal of Marketing Research, February, 37-45.

  7. A Sensitive Tg Assay or rhTSH Stimulated Tg : What's the Best in the Long-Term Follow-Up of Patients with Differentiated Thyroid Carcinoma?

    NARCIS (Netherlands)

    Persoon, Adrienne C. M.; Jager, Pieter L.; Sluiter, Wim J.; Plukker, John T. M.; Wolffenbuttel, Bruce H. R.; Links, Thera P.

    2007-01-01

    Sensitivity of thyroglobulin (Tg) measurement in the follow-up of differentiated thyroid carcinoma (DTC) can be optimized by using a sensitive Tg assay and rhTSH stimulation. We evaluated the diagnostic yield of a sensitive Tg assay and rhTSH stimulated Tg in the detection of recurrences in the foll

  8. IMRT Commissioning: application of the AAPM's TG-119; Comissionamento de IMRT: aplicacao do TG-119 da AAPM

    Energy Technology Data Exchange (ETDEWEB)

    Zeppellini, Caroline; Furnari, Laura, E-mail: laurafurnari@hotmail.com [Universidade de Sao Paulo (USP), Sao Paulo, SP (Brazil). Fac. de Medicina. Inst. de Radiologia

    2013-08-15

    To verify the commissioning of an intensity-modulated radiation therapy (IMRT) planning system, TG-119 of the American Association of Physicists in Medicine (AAPM) was applied. Using predefined targets and normal structures, plans were created, absolute and relative doses were measured with an ionization chamber and films, and the results were compared with planned values. The maximum deviation of the ionization chamber measurements was 3.6%, but of the eleven measurements in total, only two exceeded the tolerance limit of 3% recommended by TG-119. The fraction of points passing the gamma criterion of 3%/3 mm ranged between 96.36% and 99.92%, and all measurements were within the recommended 95%. The confidence limits found for both film and chamber were lower than those achieved in TG-119. Our results show good agreement with TG-119, which means that the system is adequate for clinical applications. (author)

  9. Scanpath Based N-Gram Models for Predicting Reading Behavior

    DEFF Research Database (Denmark)

    Mishra, Abhijit; Bhattacharyya, Pushpak; Carl, Michael

    2013-01-01

    Predicting reading behavior is a difficult task. Reading behavior depends on various linguistic factors (e.g., sentence length, structural complexity) and other factors (e.g., an individual's reading style or age). Ideally, a reading model should be similar to a language model where the model i...

  10. Better predictions when models are wrong or underspecified

    NARCIS (Netherlands)

    Ommen, Matthijs van

    2015-01-01

    Many statistical methods rely on models of reality in order to learn from data and to make predictions about future data. By necessity, these models usually do not match reality exactly, but are either wrong (none of the hypotheses in the model provides an accurate description of reality) or undersp

  11. Hybrid Corporate Performance Prediction Model Considering Technical Capability

    Directory of Open Access Journals (Sweden)

    Joonhyuck Lee

    2016-07-01

    Full Text Available Many studies have tried to predict corporate performance and stock prices to enhance investment profitability using qualitative approaches such as the Delphi method. However, developments in data processing technology and machine-learning algorithms have resulted in efforts to develop quantitative prediction models in various managerial subject areas. We propose a quantitative corporate performance prediction model that applies the support vector regression (SVR algorithm to solve the problem of the overfitting of training data and can be applied to regression problems. The proposed model optimizes the SVR training parameters based on the training data, using the genetic algorithm to achieve sustainable predictability in changeable markets and managerial environments. Technology-intensive companies represent an increasing share of the total economy. The performance and stock prices of these companies are affected by their financial standing and their technological capabilities. Therefore, we apply both financial indicators and technical indicators to establish the proposed prediction model. Here, we use time series data, including financial, patent, and corporate performance information of 44 electronic and IT companies. Then, we predict the performance of these companies as an empirical verification of the prediction performance of the proposed model.

  12. A Multistep Chaotic Model for Municipal Solid Waste Generation Prediction.

    Science.gov (United States)

    Song, Jingwei; He, Jiaying

    2014-08-01

    In this study, a univariate local chaotic model is proposed to make one-step and multistep forecasts of daily municipal solid waste (MSW) generation in Seattle, Washington. For MSW generation prediction with long historical data, the forecasting model was created based on a nonlinear dynamic method called phase-space reconstruction. Compared with other nonlinear predictive models, such as the artificial neural network (ANN) and partial least squares-support vector machine (PLS-SVM), and a commonly used linear seasonal autoregressive integrated moving average (sARIMA) model, this method demonstrated better prediction accuracy from 1-step-ahead to 14-step-ahead prediction, as assessed by both mean absolute percentage error (MAPE) and root mean square error (RMSE). Maximum error, MAPE, and RMSE show that the chaotic models were more reliable than the other three models. As chaotic models do not involve a random walk, their performance does not vary between trials, while ANN and PLS-SVM make different forecasts in each trial. Moreover, the chaotic model was less time-consuming than the ANN and PLS-SVM models.
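    The phase-space reconstruction approach described above can be sketched as delay embedding followed by nearest-neighbour local prediction. The function names, embedding dimension, delay, and neighbour count below are illustrative defaults, not the paper's settings:

```python
def embed(series, dim, tau):
    """Delay-embed a scalar series into dim-dimensional phase space."""
    n = len(series) - (dim - 1) * tau
    return [tuple(series[i + j * tau] for j in range(dim)) for i in range(n)]

def predict_next(series, dim=3, tau=1, k=2):
    """One-step local prediction: average the successors of the k
    nearest neighbours of the last embedded point."""
    points = embed(series, dim, tau)
    query = points[-1]
    # candidate neighbours are earlier points, each with a known successor
    candidates = list(enumerate(points[:-1]))
    candidates.sort(key=lambda p: sum((a - b) ** 2 for a, b in zip(p[1], query)))
    idx = [i for i, _ in candidates[:k]]
    successors = [series[i + (dim - 1) * tau + 1] for i in idx]
    return sum(successors) / len(successors)

# A periodic series is predicted well by its neighbours' successors
s = [0, 1, 2, 1, 0, 1, 2, 1, 0, 1, 2, 1]
print(predict_next(s))  # -> 0.0
```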

  13. Using Pareto points for model identification in predictive toxicology.

    Science.gov (United States)

    Palczewska, Anna; Neagu, Daniel; Ridley, Mick

    2013-03-22

    Predictive toxicology is concerned with the development of models that are able to predict the toxicity of chemicals. A reliable prediction of toxic effects of chemicals in living systems is highly desirable in cosmetics, drug design or food protection to speed up the process of chemical compound discovery while reducing the need for lab tests. There is an extensive literature associated with the best practice of model generation and data integration, but management and automated identification of relevant models from available collections of models is still an open problem. Currently, the decision on which model should be used for a new chemical compound is left to users. This paper intends to initiate the discussion on automated model identification. We present an algorithm, based on Pareto optimality, which mines model collections and identifies a model that offers a reliable prediction for a new chemical compound. The performance of this new approach is verified for two endpoints: IGC50 and LogP. The results show great potential for automated model identification methods in predictive toxicology.
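    The core of the Pareto-optimality idea above is to keep only models that are not dominated on every criterion. A sketch, assuming each candidate model is scored on two hypothetical criteria (local accuracy and neighbourhood coverage), both to be maximized; the criteria and scores are invented for illustration:

```python
def pareto_front(models):
    """Return the models not dominated on any criterion (higher is better).

    `models` maps a model name to a tuple of scores, e.g.
    (local_accuracy, neighbourhood_coverage) -- criteria are illustrative.
    """
    def dominated(a, b):
        # b dominates a: at least as good everywhere, strictly better somewhere
        return all(y >= x for x, y in zip(a, b)) and any(y > x for x, y in zip(a, b))
    return {name for name, score in models.items()
            if not any(dominated(score, other)
                       for n, other in models.items() if n != name)}

scores = {"m1": (0.9, 0.2), "m2": (0.7, 0.8), "m3": (0.6, 0.6)}
print(sorted(pareto_front(scores)))  # m3 is dominated by m2 -> ['m1', 'm2']
```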

  14. On the Predictiveness of Single-Field Inflationary Models

    CERN Document Server

    Burgess, C.P.; Trott, Michael

    2014-01-01

    We re-examine the predictiveness of single-field inflationary models and discuss how an unknown UV completion can complicate determining inflationary model parameters from observations, even from precision measurements. Besides the usual naturalness issues associated with having a shallow inflationary potential, we describe another issue for inflation, namely, unknown UV physics modifies the running of Standard Model (SM) parameters and thereby introduces uncertainty into the potential inflationary predictions. We illustrate this point using the minimal Higgs Inflationary scenario, which is arguably the most predictive single-field model on the market, because its predictions for $A_s$, $r$ and $n_s$ are made using only one new free parameter beyond those measured in particle physics experiments, and run up to the inflationary regime. We find that this issue can already have observable effects. At the same time, this UV-parameter dependence in the Renormalization Group allows Higgs Inflation to occur (in prin...

  15. A Composite Model Predictive Control Strategy for Furnaces

    Institute of Scientific and Technical Information of China (English)

    Hao Zang; Hongguang Li; Jingwen Huang; Jia Wang

    2014-01-01

    Tube furnaces are essential and primary energy-intensive facilities in petrochemical plants. Operational optimization of furnaces can not only help improve product quality but also reduce energy consumption and exhaust emissions. Inspired by this idea, this paper presents a composite model predictive control (CMPC) strategy which, taking advantage of distributed model predictive control architectures, combines tracking nonlinear model predictive control and economic nonlinear model predictive control metrics to keep the process running smoothly and optimize operational conditions. The controllers, connected by two kinds of communication networks, are easy to organize and maintain, and robust to process disturbances. A fast solution algorithm combining interior-point solvers and Newton's method is accommodated in the CMPC realization, with reasonable CPU computing time suitable for online applications. Simulation of an industrial case demonstrates that the proposed approach can ensure stable operation of furnaces, improve heat efficiency, and reduce emissions effectively.

  16. Submission Form for Peer-Reviewed Cancer Risk Prediction Models

    Science.gov (United States)

    If you have information about a peer-reviewed cancer risk prediction model that you would like to be considered for inclusion on this list, submit as much information as possible through the form on this page.

  17. ACCIDENT PREDICTION MODELS FOR UNSIGNALISED URBAN JUNCTIONS IN GHANA

    Directory of Open Access Journals (Sweden)

    Mohammed SALIFU, MSc., PhD, MIHT, MGhIE

    2004-01-01

    The accident prediction models developed have a potentially wide area of application and their systematic use is likely to improve considerably the quality and delivery of the engineering aspects of accident mitigation and prevention in Ghana.

  18. Using a Prediction Model to Manage Cyber Security Threats

    National Research Council Canada - National Science Library

    Jaganathan, Venkatesh; Cherurveettil, Priyesh; Muthu Sivashanmugam, Premapriya

    2015-01-01

    .... The cost impact due to worms, viruses, or other malicious software is significant. This paper proposes a mathematical model to predict the impact of an attack based on significant factors that influence cyber security...

  19. Development of a multi-year climate prediction model | Alexander ...

    African Journals Online (AJOL)

    Development of a multi-year climate prediction model. ... The available water resources in Southern Africa are rapidly approaching the limits of economic exploitation. ... that could be attributed to climate change arising from human activities.

  20. Compensatory versus noncompensatory models for predicting consumer preferences

    Directory of Open Access Journals (Sweden)

    Anja Dieckmann

    2009-04-01

    Full Text Available Standard preference models in consumer research assume that people weigh and add all attributes of the available options to derive a decision, while there is growing evidence for the use of simplifying heuristics. Recently, a greedoid algorithm has been developed (Yee, Dahan, Hauser and Orlin, 2007; Kohli and Jedidi, 2007) to model lexicographic heuristics from preference data. We compare predictive accuracies of the greedoid approach and standard conjoint analysis in an online study with a rating and a ranking task. The lexicographic model derived from the greedoid algorithm was better at predicting ranking compared to rating data, but overall, it achieved lower predictive accuracy for hold-out data than the compensatory model estimated by conjoint analysis. However, a considerable minority of participants was better predicted by lexicographic strategies. We conclude that the new algorithm will not replace standard tools for analyzing preferences, but can boost the study of situational and individual differences in preferential choice processes.
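    A lexicographic heuristic of the kind the greedoid algorithm recovers compares options attribute by attribute, in order of importance, and decides on the first attribute that discriminates. A minimal sketch with invented options and an invented attribute order:

```python
def lexicographic_choice(options, attribute_order):
    """Pick the option winning on the first discriminating attribute.

    `options` maps names to attribute dicts (higher values preferred);
    `attribute_order` lists attributes from most to least important.
    """
    remaining = dict(options)
    for attr in attribute_order:
        best = max(v[attr] for v in remaining.values())
        remaining = {k: v for k, v in remaining.items() if v[attr] == best}
        if len(remaining) == 1:
            break
    # ties after all attributes are broken alphabetically
    return sorted(remaining)[0]

phones = {
    "A": {"battery": 3, "camera": 2, "price": 1},
    "B": {"battery": 3, "camera": 3, "price": 2},
}
print(lexicographic_choice(phones, ["battery", "camera", "price"]))  # -> B
```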

  1. Haskell financial data modeling and predictive analytics

    CERN Document Server

    Ryzhov, Pavel

    2013-01-01

    This book is a hands-on guide that teaches readers how to use Haskell's tools and libraries to analyze data from real-world sources in an easy-to-understand manner. This book is great for developers who are new to financial data modeling using Haskell. A basic knowledge of functional programming is not required but will be useful. An interest in high frequency finance is essential.

  2. Mesoscale Wind Predictions for Wave Model Evaluation

    Science.gov (United States)

    2016-06-07

    N0001400WX20041(B) http://www.nrlmry.navy.mil LONG TERM GOALS: The long-term goal is to demonstrate the significance and importance of high...ocean waves by an appropriate wave model. OBJECTIVES: The main objectives of this project are to: 1. Build the infrastructure to generate the...temperature for all COAMPS grids at the resolution of each of these grids. These analyses are important for the proper specification of the lower

  3. Modeling Seizure Self-Prediction: An E-Diary Study

    Science.gov (United States)

    Haut, Sheryl R.; Hall, Charles B.; Borkowski, Thomas; Tennen, Howard; Lipton, Richard B.

    2013-01-01

    Purpose: A subset of patients with epilepsy successfully self-predicted seizures in a paper diary study. We conducted an e-diary study to ensure that prediction precedes seizures, and to characterize the prodromal features and time windows that underlie self-prediction. Methods: Subjects 18 or older with LRE and ≥3 seizures/month maintained an e-diary, reporting AM/PM data daily, including mood, premonitory symptoms, and all seizures. Self-prediction was rated by asking, “How likely are you to experience a seizure [time frame]?” Five choices ranged from almost certain (>95% chance) to very unlikely. Relative odds of seizure (OR) within time frames were examined using Poisson models with log-normal random effects to adjust for multiple observations. Key Findings: Nineteen subjects reported 244 eligible seizures. The OR for prediction choices within 6 hrs was as high as 9.31 (1.92, 45.23) for “almost certain”. Prediction was most robust within 6 hrs of diary entry, and remained significant up to 12 hrs. For the 9 best predictors, average sensitivity was 50%. Older age contributed to successful self-prediction, and self-prediction appeared to be driven by mood and premonitory symptoms. In multivariate modeling of seizure occurrence, self-prediction (2.84; 1.68, 4.81), favorable change in mood (0.82; 0.67, 0.99) and number of premonitory symptoms (1.11; 1.00, 1.24) were significant. Significance: Some persons with epilepsy can self-predict seizures. In these individuals, the odds of a seizure following a positive prediction are high. Predictions were robust, not attributable to recall bias, and were related to self-awareness of mood and premonitory features. The 6-hour prediction window is suitable for the development of pre-emptive therapy. PMID:24111898

  4. Personalized Predictive Modeling and Risk Factor Identification using Patient Similarity.

    Science.gov (United States)

    Ng, Kenney; Sun, Jimeng; Hu, Jianying; Wang, Fei

    2015-01-01

    Personalized predictive models are customized for an individual patient and trained using information from similar patients. Compared to global models trained on all patients, they have the potential to produce more accurate risk scores and capture more relevant risk factors for individual patients. This paper presents an approach for building personalized predictive models and generating personalized risk factor profiles. A locally supervised metric learning (LSML) similarity measure is trained for diabetes onset and used to find clinically similar patients. Personalized risk profiles are created by analyzing the parameters of the trained personalized logistic regression models. A 15,000 patient data set, derived from electronic health records, is used to evaluate the approach. The predictive results show that the personalized models can outperform the global model. Cluster analysis of the risk profiles show groups of patients with similar risk factors, differences in the top risk factors for different groups of patients and differences between the individual and global risk factors.

  5. Model predictive control of P-time event graphs

    Science.gov (United States)

    Hamri, H.; Kara, R.; Amari, S.

    2016-12-01

    This paper deals with model predictive control of discrete event systems modelled by P-time event graphs. First, the model is obtained using the dater evolution model written in the standard algebra. For the control law, we use finite-horizon model predictive control; for closed-loop control, we use infinite-horizon model predictive control (IH-MPC). The latter is an approach that calculates static feedback gains that ensure the stability of the closed-loop system while respecting the constraints on the control vector. The IH-MPC problem is formulated as a convex linear program subject to a linear matrix inequality. Finally, the proposed methodology is applied to a transportation system.
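    The receding-horizon principle underlying both controllers (optimize over a finite horizon, apply only the first input, then repeat) can be illustrated on a generic scalar linear system. This sketch is not the paper's max-plus dater model; the system x+ = a*x + b*u, the horizon, and the discretized input grid are all invented for illustration:

```python
import itertools

def mpc_step(x, horizon, candidates, target, a=1.0, b=1.0, u_max=1.0):
    """One receding-horizon step for x+ = a*x + b*u:
    search input sequences over the horizon, score by tracking error,
    and return only the first input (which is all MPC applies)."""
    best_u, best_cost = 0.0, float("inf")
    for seq in itertools.product(candidates, repeat=horizon):
        if any(abs(u) > u_max for u in seq):
            continue  # input constraint
        cost, xi = 0.0, x
        for u in seq:
            xi = a * xi + b * u
            cost += (xi - target) ** 2
        if cost < best_cost:
            best_cost, best_u = cost, seq[0]
    return best_u

# Drive x toward 0 from x = 2 with inputs in {-1, -0.5, 0, 0.5, 1}
x = 2.0
for _ in range(3):
    x += mpc_step(x, horizon=3, candidates=[-1, -0.5, 0, 0.5, 1], target=0.0)
print(x)  # -> 0.0
```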

  6. Prediction of cloud droplet number in a general circulation model

    Energy Technology Data Exchange (ETDEWEB)

    Ghan, S.J.; Leung, L.R. [Pacific Northwest National Lab., Richland, WA (United States)

    1996-04-01

    We have applied the Colorado State University Regional Atmospheric Modeling System (RAMS) bulk cloud microphysics parameterization to the treatment of stratiform clouds in the National Center for Atmospheric Research Community Climate Model (CCM2). The RAMS predicts mass concentrations of cloud water, cloud ice, rain and snow, and the number concentration of ice. We have introduced the droplet number conservation equation to predict droplet number and its dependence on aerosols.

  7. Using connectome-based predictive modeling to predict individual behavior from brain connectivity.

    Science.gov (United States)

    Shen, Xilin; Finn, Emily S; Scheinost, Dustin; Rosenberg, Monica D; Chun, Marvin M; Papademetris, Xenophon; Constable, R Todd

    2017-03-01

    Neuroimaging is a fast-developing research area in which anatomical and functional images of human brains are collected using techniques such as functional magnetic resonance imaging (fMRI), diffusion tensor imaging (DTI), and electroencephalography (EEG). Technical advances and large-scale data sets have allowed for the development of models capable of predicting individual differences in traits and behavior using brain connectivity measures derived from neuroimaging data. Here, we present connectome-based predictive modeling (CPM), a data-driven protocol for developing predictive models of brain-behavior relationships from connectivity data using cross-validation. This protocol includes the following steps: (i) feature selection, (ii) feature summarization, (iii) model building, and (iv) assessment of prediction significance. We also include suggestions for visualizing the most predictive features (i.e., brain connections). The final result should be a generalizable model that takes brain connectivity data as input and generates predictions of behavioral measures in novel subjects, accounting for a considerable amount of the variance in these measures. It has been demonstrated that the CPM protocol performs as well as or better than many of the existing approaches in brain-behavior prediction. As CPM focuses on linear modeling and a purely data-driven approach, neuroscientists with limited or no experience in machine learning or optimization will find it easy to implement these protocols. Depending on the volume of data to be processed, the protocol can take 10-100 min for model building, 1-48 h for permutation testing, and 10-20 min for visualization of results.
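    The four CPM steps listed above can be sketched on toy data. This pure-Python version uses a hypothetical correlation threshold for edge selection and omits the cross-validation and permutation testing the protocol prescribes:

```python
import statistics

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0

def cpm_train(connectomes, behavior, threshold=0.5):
    """Connectome-based predictive modeling, minimal sketch.

    (i) select edges whose correlation with behavior exceeds `threshold`,
    (ii) summarize each subject as the sum over selected edges,
    (iii) fit a one-variable linear model summary -> behavior.
    """
    n_edges = len(connectomes[0])
    selected = [e for e in range(n_edges)
                if abs(pearson([c[e] for c in connectomes], behavior)) > threshold]
    summaries = [sum(c[e] for e in selected) for c in connectomes]
    slope = (pearson(summaries, behavior)
             * statistics.stdev(behavior) / statistics.stdev(summaries))
    intercept = statistics.mean(behavior) - slope * statistics.mean(summaries)
    return selected, slope, intercept

def cpm_predict(connectome, selected, slope, intercept):
    """(iv) apply the fitted model to a new subject's connectome."""
    return intercept + slope * sum(connectome[e] for e in selected)
```

For example, with four subjects whose first edge tracks behavior and second edge is noise, only edge 0 survives selection, and a new subject is scored from that edge alone.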

  8. Mixed models for predictive modeling in actuarial science

    NARCIS (Netherlands)

    Antonio, K.; Zhang, Y.

    2012-01-01

    We start with a general discussion of mixed (also called multilevel) models and continue with illustrating specific (actuarial) applications of this type of models. Technical details on (linear, generalized, non-linear) mixed models follow: model assumptions, specifications, estimation techniques

  9. Webinar of paper 2013, Which method predicts recidivism best? A comparison of statistical, machine learning and data mining predictive models

    NARCIS (Netherlands)

    Tollenaar, N.; Van der Heijden, P.G.M.

    2013-01-01

    Using criminal population criminal conviction history information, prediction models are developed that predict three types of criminal recidivism: general recidivism, violent recidivism and sexual recidivism. The research question is whether prediction techniques from modern statistics, data mining

  11. Catalytic cracking models developed for predictive control purposes

    Directory of Open Access Journals (Sweden)

    Dag Ljungqvist

    1993-04-01

    Full Text Available The paper deals with state-space modeling issues in the context of model-predictive control, with application to catalytic cracking. Emphasis is placed on model establishment, verification and online adjustment. Both the Fluid Catalytic Cracking (FCC) and the Residual Catalytic Cracking (RCC) units are discussed. Catalytic cracking units involve complex interactive processes which are difficult to operate and control in an economically optimal way. The strong nonlinearities of the FCC process mean that the control calculation should be based on a nonlinear model with the relevant constraints included. However, the model can be simple compared to the complexity of the catalytic cracking plant. Model validity is ensured by a robust online model adjustment strategy. Model-predictive control schemes based on linear convolution models have been successfully applied to the supervisory dynamic control of catalytic cracking units, and the control can be further improved by the SSPC scheme.

  12. Technical note: A linear model for predicting δ13 Cprotein.

    Science.gov (United States)

    Pestle, William J; Hubbe, Mark; Smith, Erin K; Stevenson, Joseph M

    2015-08-01

    Development of a model for the prediction of δ13Cprotein from δ13Ccollagen and Δ13Cap-co. Model-generated values could, in turn, serve as "consumer" inputs for multisource mixture modeling of paleodiet. Linear regression analysis of previously published controlled-diet data facilitated the development of a mathematical model for predicting δ13Cprotein (with an experimentally generated error term) from isotopic data routinely generated during the analysis of osseous remains (δ13Cco and Δ13Cap-co). Regression analysis resulted in a two-term linear model, δ13Cprotein (‰) = (0.78 × δ13Cco) − (0.58 × Δ13Cap-co) − 4.7, possessing a high R-value of 0.93 (r² = 0.86, P < 0.01), with an experimentally generated error term of ±1.9‰ for any predicted individual value of δ13Cprotein. This model was tested using isotopic data from Formative Period individuals from northern Chile's Atacama Desert. The model presented here appears to hold significant potential for the prediction of the carbon isotope signature of dietary protein using only such data as are routinely generated in the course of stable isotope analysis of human osseous remains. These predicted values are ideal for use in multisource mixture modeling of dietary protein source contribution. © 2015 Wiley Periodicals, Inc.
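    The two-term linear model quoted above is directly computable. A small sketch, using the published coefficients and treating the reported ±1.9 error term as a rough prediction interval (function and variable names are illustrative):

```python
def predict_d13c_protein(d13c_collagen, delta13c_ap_co):
    """Two-term linear model from the abstract:
    d13C_protein = 0.78 * d13C_collagen - 0.58 * Delta13C_ap-co - 4.7

    Returns (estimate, (low, high)), applying the reported +/-1.9
    error term as a rough prediction interval.
    """
    est = 0.78 * d13c_collagen - 0.58 * delta13c_ap_co - 4.7
    return est, (est - 1.9, est + 1.9)

# Example inputs (hypothetical collagen and apatite-collagen spacing values)
est, (lo, hi) = predict_d13c_protein(-19.0, 5.0)
print(round(est, 2))  # -> -22.42
```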

  13. Traffic Prediction Scheme based on Chaotic Models in Wireless Networks

    Directory of Open Access Journals (Sweden)

    Xiangrong Feng

    2013-09-01

    Full Text Available Based on the local support vector algorithm for chaotic time series analysis, the Hannan-Quinn information criterion and SAX symbolization are introduced, and a novel prediction algorithm is proposed, which is successfully applied to the prediction of wireless network traffic. For the problem of correctly predicting short-term traffic from a smaller data set, the weaknesses of existing algorithms during model construction are analyzed by comparison with the LDK prediction algorithm. It is verified that the Hannan-Quinn information criterion can be used to calculate the number of neighbor points, replacing the previous empirical method and yielding a more accurate prediction model. Finally, actual traffic data are used to confirm the accuracy of the proposed algorithm, LSDHQ. Our experiments show that it also outperforms the LDK algorithm in adaptability.

  14. Toward a predictive model for elastomer seals

    Science.gov (United States)

    Molinari, Nicola; Khawaja, Musab; Sutton, Adrian; Mostofi, Arash

    Nitrile butadiene rubber (NBR) and hydrogenated-NBR (HNBR) are widely used elastomers, especially as seals in oil and gas applications. During exposure to well-hole conditions, ingress of gases causes degradation of performance, including mechanical failure. We use computer simulations to investigate this problem at two different length and time-scales. First, we study the solubility of gases in the elastomer using a chemically-inspired description of HNBR based on the OPLS all-atom force-field. Starting with a model of NBR, C=C double bonds are saturated with either hydrogen or intramolecular cross-links, mimicking the hydrogenation of NBR to form HNBR. We validate against trends for the mass density and glass transition temperature for HNBR as a function of cross-link density, and for NBR as a function of the fraction of acrylonitrile in the copolymer. Second, we study mechanical behaviour using a coarse-grained model that overcomes some of the length and time-scale limitations of an all-atom approach. Nanoparticle fillers added to the elastomer matrix to enhance mechanical response are also included. Our initial focus is on understanding the mechanical properties at the elevated temperatures and pressures experienced in well-hole conditions.

  15. Using a Prediction Model to Manage Cyber Security Threats

    Directory of Open Access Journals (Sweden)

    Venkatesh Jaganathan

    2015-01-01

    Full Text Available Cyber-attacks are an important issue faced by all organizations. Securing information systems is critical. Organizations should be able to understand the ecosystem and predict attacks. Predicting attacks quantitatively should be part of risk management. The cost impact due to worms, viruses, or other malicious software is significant. This paper proposes a mathematical model to predict the impact of an attack based on significant factors that influence cyber security. This model also considers the environmental information required. It is generalized and can be customized to the needs of the individual organization.

  17. Active diagnosis of hybrid systems - A model predictive approach

    DEFF Research Database (Denmark)

    Tabatabaeipour, Seyed Mojtaba; Ravn, Anders P.; Izadi-Zamanabadi, Roozbeh;

    2009-01-01

    A method for active diagnosis of hybrid systems is proposed. The main idea is to predict the future output of both normal and faulty model of the system; then at each time step an optimization problem is solved with the objective of maximizing the difference between the predicted normal and faulty...... outputs constrained by tolerable performance requirements. As in standard model predictive control, the first element of the optimal input is applied to the system and the whole procedure is repeated until the fault is detected by a passive diagnoser. It is demonstrated how the generated excitation signal...

  18. Aero-acoustic noise of wind turbines. Noise prediction models

    Energy Technology Data Exchange (ETDEWEB)

Maribo Pedersen, B. [ed.]

    1997-12-31

Semi-empirical and CAA (Computational AeroAcoustics) noise prediction techniques are the subject of this expert meeting. The meeting presents and discusses models and methods. The meeting may provide answers to the following questions: Which noise sources are the most important? How are the sources best modelled? What needs to be done to make better predictions? Does it boil down to correct prediction of the unsteady aerodynamics around the rotor? Or is the difficult part converting the aerodynamics into acoustics? (LN)

  19. Application of AAPM TG 119 to volumetric arc therapy (VMAT).

    Science.gov (United States)

    Mynampati, Dinesh Kumar; Yaparpalvi, Ravindra; Hong, Linda; Kuo, Hsiang-Chi; Mah, Dennis

    2012-09-06

The purpose of this study was to create AAPM TG 119 benchmark plans for volumetric arc therapy (VMAT) and to compare the VMAT plans with IMRT plan data. AAPM TG 119 proposes a set of clinical test cases for testing the accuracy of IMRT planning and delivery systems. For these test cases, we generated two treatment plans: a first plan using 7-9 static dMLC IMRT fields, and a second plan using a one- or two-arc VMAT technique. Dose optimization and calculations were performed using 6 MV photons and the Eclipse treatment planning system. Dose prescription and planning objectives were set according to the TG 119 goals, and plans were scored based on the TG 119 planning objectives. Treatment plans were compared using the conformity index (CI) for the reference dose and the homogeneity index (HI) (for D(5)-D(95)). For the prostate, head-and-neck, C-shape, and multitarget test cases, the prescription doses are 75.6 Gy, 50.4 Gy, 50 Gy, and 50 Gy, respectively. VMAT dose distributions were comparable to those of the dMLC IMRT plans, and our planning results matched the TG 119 planning results. For the treatment plans studied, conformity indices ranged from 1.05-1.23 (IMRT) and 1.04-1.23 (VMAT), and homogeneity indices ranged from 4.6%-11.0% (IMRT) and 4.6%-10.5% (VMAT). The ratio of total monitor units necessary for dMLC IMRT to that for VMAT was in the range of 1.1-2.0. The AAPM TG 119 test cases are useful for generating VMAT benchmark plans. At the preclinical implementation stage, comparing VMAT and IMRT plans for the AAPM TG 119 test cases allowed us to understand the basic capabilities of the VMAT technique.
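
The two plan-scoring metrics can be computed directly from dose-volume data. The definitions below (CI as reference-isodose volume over target volume, HI as (D5 - D95)/prescription in percent) are common conventions assumed here for illustration, not quoted from the TG 119 report.

```python
# Assumed metric definitions on toy voxel dose lists (Gy).

def dose_percentile(doses, p):
    """Dp = minimum dose received by the hottest p% of the volume."""
    s = sorted(doses, reverse=True)
    k = max(1, round(len(s) * p / 100.0))
    return s[k - 1]

def conformity_index(body_doses, target_voxels, d_ref):
    """CI = (volume receiving >= reference dose) / (target volume)."""
    v_ref = sum(1 for d in body_doses if d >= d_ref)
    return v_ref / target_voxels

def homogeneity_index(target_doses, d_prescribed):
    """HI = 100 * (D5 - D95) / prescription; 0% for a uniform dose."""
    d5 = dose_percentile(target_doses, 5)
    d95 = dose_percentile(target_doses, 95)
    return 100.0 * (d5 - d95) / d_prescribed
```

A perfectly uniform target dose gives HI = 0%, and a plan whose reference isodose exactly covers the target gives CI = 1, matching the near-unity CI values reported above.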

  20. Model Predictive Control of a Wave Energy Converter

    DEFF Research Database (Denmark)

    Andersen, Palle; Pedersen, Tom Søndergård; Nielsen, Kirsten Mølgaard;

    2015-01-01

    In this paper reactive control and Model Predictive Control (MPC) for a Wave Energy Converter (WEC) are compared. The analysis is based on a WEC from Wave Star A/S designed as a point absorber. The model predictive controller uses wave models based on the dominating sea states combined with a model...... connecting undisturbed wave sequences to sequences of torque. Losses in the conversion from mechanical to electrical power are taken into account in two ways. Conventional reactive controllers are tuned for each sea state with the assumption that the converter has the same efficiency back and forth. MPC...

  1. Modelling and prediction of non-stationary optical turbulence behaviour

    Science.gov (United States)

    Doelman, Niek; Osborn, James

    2016-07-01

    There is a strong need to model the temporal fluctuations in turbulence parameters, for instance for scheduling, simulation and prediction purposes. This paper aims at modelling the dynamic behaviour of the turbulence coherence length r0, utilising measurement data from the Stereo-SCIDAR instrument installed at the Isaac Newton Telescope at La Palma. Based on an estimate of the power spectral density function, a low order stochastic model to capture the temporal variability of r0 is proposed. The impact of this type of stochastic model on the prediction of the coherence length behaviour is shown.
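
As an illustration of fitting a low-order stochastic model to a turbulence-parameter series, here is a hypothetical AR(1) fit on a synthetic log-r0 fluctuation series. The paper's actual model order and data differ; this only shows the mechanics of fitting and one-step prediction.

```python
import random

def fit_ar1(series):
    """Least-squares estimate of a in x[k+1] = a*x[k] + e[k]."""
    num = sum(series[k + 1] * series[k] for k in range(len(series) - 1))
    den = sum(x * x for x in series[:-1])
    return num / den

def predict_next(series, a):
    """One-step-ahead prediction of the next fluctuation value."""
    return a * series[-1]

# synthetic zero-mean fluctuation series with a known coefficient a = 0.8
rng = random.Random(0)
x, series = 0.0, []
for _ in range(2000):
    x = 0.8 * x + rng.gauss(0.0, 0.1)
    series.append(x)
a_hat = fit_ar1(series)
```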

  2. Research on Drag Torque Prediction Model for the Wet Clutches

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

Considering the surface tension effect and the centrifugal effect, a mathematical model based on the Reynolds equation is presented for predicting the drag torque of disengaged wet clutches. The model indicates that the equivalent radius is a function of clutch speed and flow rate. The drag torque reaches its peak at a critical speed; above this speed, drag torque drops due to the shrinking of the oil film. The model also shows how viscosity and flow rate affect drag torque. Experimental results indicate that the model is reasonable and that it performs well for predicting the drag torque peak.

  3. Model output statistics applied to wind power prediction

    Energy Technology Data Exchange (ETDEWEB)

    Joensen, A.; Giebel, G.; Landberg, L. [Risoe National Lab., Roskilde (Denmark); Madsen, H.; Nielsen, H.A. [The Technical Univ. of Denmark, Dept. of Mathematical Modelling, Lyngby (Denmark)

    1999-03-01

Being able to predict the output of a wind farm online for a day or two in advance has significant advantages for utilities, such as a better ability to schedule fossil-fuelled power plants and a better position on electricity spot markets. In this paper prediction methods based on Numerical Weather Prediction (NWP) models are considered. The spatial resolution used in NWP models implies that these predictions are not valid locally at a specific wind farm. Furthermore, due to the non-stationary nature and complexity of the processes in the atmosphere, and occasional changes of NWP models, the deviation between the predicted and the measured wind will be time dependent. If observational data is available, and if the deviation between the predictions and the observations exhibits systematic behavior, this should be corrected for; if statistical methods are used, this approach is usually referred to as MOS (Model Output Statistics). The influence of atmospheric turbulence intensity, topography, prediction horizon length and auto-correlation of wind speed and power is considered, and to take the time-variations into account, adaptive estimation methods are applied. Three estimation techniques are considered and compared: Extended Kalman Filtering, recursive least squares, and a new modified recursive least squares algorithm. (au) EU-JOULE-3. 11 refs.

  4. Development and application of chronic disease risk prediction models.

    Science.gov (United States)

    Oh, Sun Min; Stefani, Katherine M; Kim, Hyeon Chang

    2014-07-01

    Currently, non-communicable chronic diseases are a major cause of morbidity and mortality worldwide, and a large proportion of chronic diseases are preventable through risk factor management. However, the prevention efficacy at the individual level is not yet satisfactory. Chronic disease prediction models have been developed to assist physicians and individuals in clinical decision-making. A chronic disease prediction model assesses multiple risk factors together and estimates an absolute disease risk for the individual. Accurate prediction of an individual's future risk for a certain disease enables the comparison of benefits and risks of treatment, the costs of alternative prevention strategies, and selection of the most efficient strategy for the individual. A large number of chronic disease prediction models, especially targeting cardiovascular diseases and cancers, have been suggested, and some of them have been adopted in the clinical practice guidelines and recommendations of many countries. Although few chronic disease prediction tools have been suggested in the Korean population, their clinical utility is not as high as expected. This article reviews methodologies that are commonly used for developing and evaluating a chronic disease prediction model and discusses the current status of chronic disease prediction in Korea.
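
What such a prediction model computes can be illustrated with a toy logistic risk score: several risk factors combined into an absolute probability of disease. The coefficients below are invented for illustration and are not from any published score.

```python
import math

def absolute_risk(age, sbp, smoker, coef=None):
    """Toy absolute-risk score: logistic function of a linear predictor.
    Hypothetical coefficients: intercept, per-year age, per-mmHg SBP, smoking."""
    b0, b_age, b_sbp, b_smoke = coef or (-9.0, 0.08, 0.02, 0.7)
    z = b0 + b_age * age + b_sbp * sbp + b_smoke * (1 if smoker else 0)
    return 1.0 / (1.0 + math.exp(-z))
```

The clinical use described above corresponds to comparing such absolute risks across candidate prevention strategies for one individual.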

  5. Evaluation of burst pressure prediction models for line pipes

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Xian-Kui, E-mail: zhux@battelle.org [Battelle Memorial Institute, 505 King Avenue, Columbus, OH 43201 (United States); Leis, Brian N. [Battelle Memorial Institute, 505 King Avenue, Columbus, OH 43201 (United States)

    2012-01-15

Accurate prediction of burst pressure plays a central role in engineering design and integrity assessment of oil and gas pipelines. Theoretical and empirical solutions for such prediction are evaluated in this paper relative to a burst pressure database comprising more than 100 tests covering a variety of pipeline steel grades and pipe sizes. Solutions considered include three based on plasticity theory for the end-capped, thin-walled, defect-free line pipe subjected to internal pressure in terms of the Tresca, von Mises, and ZL (or Zhu-Leis) criteria, one based on a cylindrical instability stress (CIS) concept, and a large group of analytical and empirical models previously evaluated by Law and Bowie (International Journal of Pressure Vessels and Piping, 84, 2007: 487-492). It is found that these models can be categorized into either a Tresca family or a von Mises family of solutions, except for the Margetson and Zhu-Leis models. The viability of predictions is measured via statistical analyses in terms of a mean error and its standard deviation. Consistent with an independent parallel evaluation using another large database, the Zhu-Leis solution is found best for predicting burst pressure, including consideration of strain-hardening effects, while the Tresca strength solutions, including Barlow, maximum shear stress, Turner, and the ASME boiler code, provide reasonably good predictions for the class of line-pipe steels with intermediate strain-hardening response. Highlights: ► This paper evaluates different burst pressure prediction models for line pipes. ► The existing models are categorized into two major groups of Tresca and von Mises solutions. ► Prediction quality of each model is assessed statistically using a large full-scale burst test database. ► The Zhu-Leis solution is identified as the best predictive model.
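
The theory-based solutions mentioned above share a compact form for a thin-walled, defect-free, end-capped pipe: P = (4t/D) * sigma_uts * c**(n+1), where n is the strain-hardening exponent and c is a criterion constant (1/2 for Tresca, 1/sqrt(3) for von Mises, and their average for the Zhu-Leis ZL criterion). This is a sketch of those published forms from memory; verify the constants against the cited papers before use.

```python
import math

# criterion constants: Tresca, von Mises, and ZL (average of the two)
_C = {"tresca": 0.5,
      "mises": 1.0 / math.sqrt(3.0),
      "zl": 0.5 * (0.5 + 1.0 / math.sqrt(3.0))}

def burst_pressure(d, t, sigma_uts, n, criterion="zl"):
    """Burst pressure of a thin-walled, defect-free, end-capped pipe:
    P = (4t/D) * sigma_uts * c**(n+1), n = strain-hardening exponent."""
    return 4.0 * t / d * sigma_uts * _C[criterion] ** (n + 1.0)
```

For n = 0 the Tresca form reduces to the Barlow strength solution 2*t*sigma/D, and the three criteria bracket each other (Tresca lowest, von Mises highest, ZL between), which is why the abstract groups the models into Tresca and von Mises families.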

  6. Outcome Prediction in Mathematical Models of Immune Response to Infection.

    Directory of Open Access Journals (Sweden)

    Manuel Mai

Full Text Available Clinicians need to predict patient outcomes with high accuracy as early as possible after disease inception. In this manuscript, we show that patient-to-patient variability sets a fundamental limit on outcome prediction accuracy for a general class of mathematical models for the immune response to infection. However, accuracy can be increased at the expense of delayed prognosis. We investigate several systems of ordinary differential equations (ODEs) that model the host immune response to a pathogen load. Advantages of systems of ODEs for investigating the immune response to infection include the ability to collect data on large numbers of 'virtual patients', each with a given set of model parameters, and to obtain many time points during the course of the infection. We implement patient-to-patient variability v in the ODE models by randomly selecting the model parameters from distributions with coefficients of variation v that are centered on physiological values. We use logistic regression with one-versus-all classification to predict the discrete steady-state outcomes of the system. We find that the prediction algorithm achieves near 100% accuracy for v = 0, and the accuracy decreases with increasing v for all ODE models studied. The fact that multiple steady-state outcomes can be obtained for a given initial condition, i.e. the basins of attraction overlap in the space of initial conditions, limits the prediction accuracy for v > 0. Increasing the elapsed time of the variables used to train and test the classifier increases the prediction accuracy, while adding explicit external noise to the ODE models decreases the prediction accuracy. Our results quantify the competition between early prognosis and high prediction accuracy that is frequently encountered by clinicians.
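
A minimal illustration of the basin-overlap effect described above, using an assumed bistable ODE dx/dt = x(1-x)(x-a) in place of the paper's immune-response models: the steady state (0 or 1) is the outcome, parameter variability moves the basin boundary a, and an early-time classifier that assumes the nominal boundary loses accuracy as the variability v grows. A simple threshold stands in for the paper's logistic regression.

```python
import random

def simulate(x0, a, t_end, dt=0.02):
    """Forward-Euler integration of the assumed bistable ODE
    dx/dt = x(1-x)(x-a); attractors at 0 and 1, separatrix at x = a."""
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * x * (1.0 - x) * (x - a)
    return x

def outcome(x0, a):
    """'True' steady-state outcome, read off after a long run."""
    return 1 if simulate(x0, a, 150.0) > 0.5 else 0

def early_predict(x0, a, t_obs=1.0, a_nominal=0.5):
    """Early prognosis that only knows the nominal separatrix."""
    return 1 if simulate(x0, a, t_obs) > a_nominal else 0

def accuracy(v, n=40, seed=1):
    """Fraction of virtual patients whose early prognosis is correct;
    the parameter a is drawn around 0.5 with spread proportional to v."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n):
        a = min(0.95, max(0.05, rng.gauss(0.5, 0.5 * v)))
        x0 = rng.uniform(0.05, 0.95)
        correct += early_predict(x0, a) == outcome(x0, a)
    return correct / n
```

For v = 0 the nominal boundary is exact and early prediction is perfect; for v > 0, initial conditions that fall between the true and the nominal boundary are misclassified, mirroring the accuracy loss reported in the abstract.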

  7. Development of Interpretable Predictive Models for BPH and Prostate Cancer

    Science.gov (United States)

    Bermejo, Pablo; Vivo, Alicia; Tárraga, Pedro J; Rodríguez-Montes, JA

    2015-01-01

BACKGROUND Traditional methods for deciding whether to recommend a patient for a prostate biopsy are based on cut-off levels of stand-alone markers such as prostate-specific antigen (PSA) or any of its derivatives. However, in the last decade we have seen the increasing use of predictive models that combine, in a non-linear manner, several predictors that are better able to predict prostate cancer (PC), but these fail to help the clinician to distinguish between PC and benign prostate hyperplasia (BPH) patients. We construct two new models that are capable of predicting both PC and BPH. METHODS An observational study was performed on 150 patients with PSA ≥3 ng/mL and age >50 years. We built a decision tree and a logistic regression model, validated with the leave-one-out methodology, in order to predict PC or BPH, or reject both. RESULTS Statistical dependence with PC and BPH was found for prostate volume (P-value < 0.001), PSA (P-value < 0.001), international prostate symptom score (IPSS; P-value < 0.001), digital rectal examination (DRE; P-value < 0.001), age (P-value < 0.002), antecedents (P-value < 0.006), and meat consumption (P-value < 0.08). The two predictive models that were constructed selected a subset of these, namely volume, PSA, DRE, and IPSS, obtaining an area under the ROC curve (AUC) between 72% and 80% for both PC and BPH prediction. CONCLUSION PSA and volume together help to build predictive models that accurately distinguish among PC, BPH, and patients without any of these pathologies. Our decision tree and logistic regression models outperform the AUC obtained in the compared studies. Using these models as decision support, the number of unnecessary biopsies might be significantly reduced. PMID:25780348

  8. Model Predictive Control of Wind Turbines

    DEFF Research Database (Denmark)

    Henriksen, Lars Christian

Wind turbines play a major role in the transformation from a fossil fuel based energy production to a more sustainable production of energy. Total-cost-of-ownership is an important parameter when investors decide in which energy technology they should place their capital. Modern wind turbines...... are controlled by pitching the blades and by controlling the electro-magnetic torque of the generator, thus slowing the rotation of the blades. Improved control of wind turbines, leading to reduced fatigue loads, can be exploited by using less materials in the construction of the wind turbine or by reducing...... the need for maintenance of the wind turbine. Either way, better total-cost-of-ownership for wind turbine operators can be achieved by improved control of the wind turbines. Wind turbine control can be improved in two ways, by improving the model on which the controller bases its design or by improving......

  9. Numerical modeling capabilities to predict repository performance

    Energy Technology Data Exchange (ETDEWEB)

    1979-09-01

    This report presents a summary of current numerical modeling capabilities that are applicable to the design and performance evaluation of underground repositories for the storage of nuclear waste. The report includes codes that are available in-house, within Golder Associates and Lawrence Livermore Laboratories; as well as those that are generally available within the industry and universities. The first listing of programs are in-house codes in the subject areas of hydrology, solute transport, thermal and mechanical stress analysis, and structural geology. The second listing of programs are divided by subject into the following categories: site selection, structural geology, mine structural design, mine ventilation, hydrology, and mine design/construction/operation. These programs are not specifically designed for use in the design and evaluation of an underground repository for nuclear waste; but several or most of them may be so used.

  10. Generalized ghost pilgrim dark energy in F(T,TG) cosmology

    Science.gov (United States)

    Sharif, M.; Nazir, Kanwal

    2016-07-01

This paper is devoted to the study of the generalized ghost pilgrim dark energy (PDE) model in F(T,TG) gravity with a flat Friedmann-Robertson-Walker (FRW) universe. In this scenario, we reconstruct F(T,TG) models and evaluate the corresponding equation of state (EoS) parameter for different choices of the scale factor. We assume a power-law scale factor, a scale factor for the unification of two phases, and intermediate and bouncing scale factors. We study the behavior of the reconstructed models and EoS parameters graphically. It is found that all the reconstructed models show decreasing behavior for the PDE parameter u = -2. On the other hand, the EoS parameter indicates a transition from dust-like matter to the phantom era for all choices of the scale factor except the intermediate one, for which it is less than -1. We conclude that all the results are in agreement with the PDE phenomenon.

  11. Comparison of Linear Prediction Models for Audio Signals

    Directory of Open Access Journals (Sweden)

    2009-03-01

Full Text Available While linear prediction (LP) has become immensely popular in speech modeling, it does not seem to provide a good approach for modeling audio signals. This is somewhat surprising, since a tonal signal consisting of a number of sinusoids can be perfectly predicted based on an (all-pole) LP model with a model order that is twice the number of sinusoids. We provide an explanation why this result cannot simply be extrapolated to LP of audio signals. If noise is taken into account in the tonal signal model, a low-order all-pole model appears to be appropriate only when the tonal components are uniformly distributed in the Nyquist interval. Based on this observation, different alternatives to the conventional LP model can be suggested. Either the model should be changed to a pole-zero, a high-order all-pole, or a pitch prediction model, or the conventional LP model should be preceded by an appropriate frequency transform, such as a frequency warping or downsampling. By comparing these alternative LP models to the conventional LP model in terms of frequency estimation accuracy, residual spectral flatness, and perceptual frequency resolution, we obtain several new and promising approaches to LP-based audio modeling.

  12. Comparison of Linear Prediction Models for Audio Signals

    Directory of Open Access Journals (Sweden)

    van Waterschoot Toon

    2008-01-01

Full Text Available While linear prediction (LP) has become immensely popular in speech modeling, it does not seem to provide a good approach for modeling audio signals. This is somewhat surprising, since a tonal signal consisting of a number of sinusoids can be perfectly predicted based on an (all-pole) LP model with a model order that is twice the number of sinusoids. We provide an explanation why this result cannot simply be extrapolated to LP of audio signals. If noise is taken into account in the tonal signal model, a low-order all-pole model appears to be appropriate only when the tonal components are uniformly distributed in the Nyquist interval. Based on this observation, different alternatives to the conventional LP model can be suggested. Either the model should be changed to a pole-zero, a high-order all-pole, or a pitch prediction model, or the conventional LP model should be preceded by an appropriate frequency transform, such as a frequency warping or downsampling. By comparing these alternative LP models to the conventional LP model in terms of frequency estimation accuracy, residual spectral flatness, and perceptual frequency resolution, we obtain several new and promising approaches to LP-based audio modeling.
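
The "perfectly predicted" claim above is easy to check for a single sinusoid, which satisfies the order-2 all-pole recurrence x[n] = 2*cos(w)*x[n-1] - x[n-2] exactly (K sinusoids need order 2K):

```python
import math

def lp2_residual(w, n_samples=200, phase=0.3):
    """Maximum prediction error of the exact order-2 LP model
    x[n] = 2*cos(w)*x[n-1] - x[n-2] on a noise-free sinusoid."""
    x = [math.sin(w * n + phase) for n in range(n_samples)]
    a1, a2 = 2.0 * math.cos(w), -1.0
    errs = [abs(x[n] - (a1 * x[n - 1] + a2 * x[n - 2]))
            for n in range(2, n_samples)]
    return max(errs)
```

The residual is zero up to rounding error, which is exactly why added noise, rather than the tonal part, dictates the model order needed for audio.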

  13. Hidden Markov models for prediction of protein features

    DEFF Research Database (Denmark)

    Bystroff, Christopher; Krogh, Anders

    2008-01-01

    Hidden Markov Models (HMMs) are an extremely versatile statistical representation that can be used to model any set of one-dimensional discrete symbol data. HMMs can model protein sequences in many ways, depending on what features of the protein are represented by the Markov states. For protein...... structure prediction, states have been chosen to represent either homologous sequence positions, local or secondary structure types, or transmembrane locality. The resulting models can be used to predict common ancestry, secondary or local structure, or membrane topology by applying one of the two standard...... algorithms for comparing a sequence to a model. In this chapter, we review those algorithms and discuss how HMMs have been constructed and refined for the purpose of protein structure prediction....
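
One of the "two standard algorithms for comparing a sequence to a model" is the Viterbi algorithm, which recovers the most probable state path. Below is a minimal sketch on a toy two-state HMM; the states, symbols, and probabilities are made up for illustration (think of a crude helix/coil labelling), not taken from any published protein model.

```python
# Viterbi decoding for a discrete HMM, pure Python.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most probable state path for the observation sequence."""
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][r][0] * trans_p[r][s] * emit_p[s][obs[t]], r)
                for r in states)
            V[t][s] = (prob, prev)
    # backtrack from the best final state
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))

# toy model: sticky states "H" and "C" with different emission preferences
states = ("H", "C")
start_p = {"H": 0.5, "C": 0.5}
trans_p = {"H": {"H": 0.8, "C": 0.2}, "C": {"H": 0.2, "C": 0.8}}
emit_p = {"H": {"a": 0.9, "g": 0.1}, "C": {"a": 0.2, "g": 0.8}}
path = viterbi("aaagg", states, start_p, trans_p, emit_p)
```

The sticky transition probabilities make the decoder prefer contiguous runs of one state, which is the property exploited when states represent secondary-structure segments.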

  14. Modelling of physical properties - databases, uncertainties and predictive power

    DEFF Research Database (Denmark)

    Gani, Rafiqul

Physical and thermodynamic property data, in the form of raw data or estimated values for pure compounds and mixtures, are important prerequisites for performing tasks such as process design, simulation and optimization; computer aided molecular/mixture (product) design; and product-process analysis...... connectivity approach. The development of these models requires measured property data, and based on them, the regression of model parameters is performed. Although this class of models is empirical by nature, they do allow extrapolation from the regressed model parameters to predict properties of chemicals...... not included in the measured data-set. Therefore, they are also considered as predictive models. The paper will highlight different issues/challenges related to the role of the databases and the mathematical and thermodynamic consistency of the measured/estimated data, the predictive nature of the developed...

  15. Modeling, Prediction, and Control of Heating Temperature for Tube Billet

    Directory of Open Access Journals (Sweden)

    Yachun Mao

    2015-01-01

    Full Text Available Annular furnaces have multivariate, nonlinear, large time lag, and cross coupling characteristics. The prediction and control of the exit temperature of a tube billet are important but difficult. We establish a prediction model for the final temperature of a tube billet through OS-ELM-DRPLS method. We address the complex production characteristics, integrate the advantages of PLS and ELM algorithms in establishing linear and nonlinear models, and consider model update and data lag. Based on the proposed model, we design a prediction control algorithm for tube billet temperature. The algorithm is validated using the practical production data of Baosteel Co., Ltd. Results show that the model achieves the precision required in industrial applications. The temperature of the tube billet can be controlled within the required temperature range through compensation control method.

  16. Predicting artificially drained areas by means of selective model ensemble

    DEFF Research Database (Denmark)

    Møller, Anders Bjørn; Beucher, Amélie; Iversen, Bo Vangsø

...... out since the mid-19th century, and it has been estimated that half of the cultivated area is artificially drained (Olesen, 2009). A number of machine learning approaches can be used to predict artificially drained areas in geographic space. However, instead of choosing the most accurate model.... The study aims firstly to train a large number of models to predict the extent of artificially drained areas using various machine learning approaches. The approaches employed include decision trees, discriminant analysis, regression models, neural networks and support vector machines amongst others. Several models are trained with each method, using variously the original soil covariates and principal components of the covariates. With a large ensemble...... Secondly, the study will develop a method for selecting the models, which give a good prediction of artificially drained areas, when used in conjunction......

  17. Experimental study on prediction model for maximum rebound ratio

    Institute of Scientific and Technical Information of China (English)

    LEI Wei-dong; TENG Jun; A.HEFNY; ZHAO Jian; GUAN Jiong

    2007-01-01

The proposed prediction model for estimating the maximum rebound ratio was applied to a field explosion test, the Mandai test in Singapore. The estimated possible maximum peak particle velocities (PPVs) were compared with the field records. Three of the four available field-recorded PPVs lie below the estimated possible maximum values, as expected, while the fourth lies close to and slightly higher than the estimated maximum possible PPV. The comparison shows that the PPVs predicted by the proposed model for the maximum rebound ratio match the field-recorded PPVs better than those from two empirical formulae. The very good agreement between the estimated and field-recorded values validates the proposed prediction model for estimating PPV in a rock mass with a set of joints due to the application of a two-dimensional compressional wave at the boundary of a tunnel or a borehole.

  18. Groundwater Level Prediction using M5 Model Trees

    Science.gov (United States)

    Nalarajan, Nitha Ayinippully; Mohandas, C.

    2015-01-01

Groundwater is an important resource, readily available and having high economic value and social benefit, and it has been considered a dependable source of uncontaminated water. During the past two decades, increased rates of extraction and other unsustainable human actions have resulted in a groundwater crisis, both qualitative and quantitative. Under the prevailing circumstances, the availability of predicted groundwater levels increases the value of this resource as an aid in the planning of groundwater resources. For this purpose, data-driven prediction models are widely used today. The M5 model tree (MT) is a popular soft computing method emerging as a promising technique for numeric prediction that produces understandable models. The present study discusses groundwater level prediction using MT, employing only the historical groundwater levels from a groundwater monitoring well. The results showed that MT can be successfully used for forecasting groundwater levels.

  19. A Fusion Model for CPU Load Prediction in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Dayu Xu

    2013-11-01

Full Text Available Load prediction plays a key role in cost-optimal resource allocation and datacenter energy saving. In this paper, we use real-world traces from a Cloud platform and propose a fusion model to forecast future CPU loads. First, long CPU load time series are divided into short sequences of equal length, taken from the historical data on the basis of the cloud control cycle. Then we use the kernel fuzzy c-means clustering algorithm to put the subsequences into different clusters. For each cluster, given the current load sequence, a genetic-algorithm-optimized wavelet Elman neural network prediction model is exploited to predict the CPU load in the next time interval. Finally, we obtain the optimal cloud computing CPU load prediction results from the cluster and its corresponding predictor with minimum forecasting error. Experimental results show that our algorithm performs better than other models reported in previous works.

  20. Modelling proteins' hidden conformations to predict antibiotic resistance

    Science.gov (United States)

    Hart, Kathryn M.; Ho, Chris M. W.; Dutta, Supratik; Gross, Michael L.; Bowman, Gregory R.

    2016-10-01

    TEM β-lactamase confers bacteria with resistance to many antibiotics and rapidly evolves activity against new drugs. However, functional changes are not easily explained by differences in crystal structures. We employ Markov state models to identify hidden conformations and explore their role in determining TEM's specificity. We integrate these models with existing drug-design tools to create a new technique, called Boltzmann docking, which better predicts TEM specificity by accounting for conformational heterogeneity. Using our MSMs, we identify hidden states whose populations correlate with activity against cefotaxime. To experimentally detect our predicted hidden states, we use rapid mass spectrometric footprinting and confirm our models' prediction that increased cefotaxime activity correlates with reduced Ω-loop flexibility. Finally, we design novel variants to stabilize the hidden cefotaximase states, and find their populations predict activity against cefotaxime in vitro and in vivo. Therefore, we expect this framework to have numerous applications in drug and protein design.

  1. A Hybrid Neural Network Prediction Model of Air Ticket Sales

    Directory of Open Access Journals (Sweden)

    Han-Chen Huang

    2013-11-01

Full Text Available Air ticket sales revenue is an important source of revenue for travel agencies, and if future air ticket sales revenue can be accurately forecast, travel agencies will be able to procure a sufficient number of cost-effective tickets in advance. Therefore, this study applied Artificial Neural Networks (ANN) and Genetic Algorithms (GA) to establish a prediction model of travel agency air ticket sales revenue. By verifying against empirical data, this study shows that the established prediction model has accurate predictive power, with a MAPE (mean absolute percentage error) of only 9.11%. The established model can provide business operators with reliable and efficient prediction data as a reference for operational decisions.
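
The MAPE figure quoted above (9.11%) is computed the standard way; a small sketch on made-up numbers, not the study's data:

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100.0 / len(actual) * sum(
        abs((a - p) / a) for a, p in zip(actual, predicted))
```

For example, actuals (100, 200) forecast as (110, 180) give a MAPE of 10%.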

  2. E-commerce business model mining and prediction

    Institute of Scientific and Technical Information of China (English)

    Zhou-zhou HE; Zhong-fei ZHANG; Chun-ming CHEN; Zheng-gang WANG

    2015-01-01

We study the problem of business model mining and prediction in the e-commerce context. Unlike most existing approaches, where this is typically formulated as a regression problem or a time-series prediction problem, we take a different formulation of this problem by noting that these existing approaches fail to consider the potential relationships both among the consumers (consumer influence) and among the shops (competition or collaboration). Taking this observation into consideration, we propose a new method for e-commerce business model mining and prediction, called EBMM, which combines regression with community analysis. The challenge is that the links in the network are typically not directly observed, which is addressed by applying information diffusion theory through the consumer-shop network. Extensive evaluations using Alibaba Group e-commerce data demonstrate the promise and superiority of EBMM over state-of-the-art methods in terms of business model mining and prediction.

  3. Model for Predicting Passage of Invasive Fish Species Through Culverts

    Science.gov (United States)

    Neary, V.

    2010-12-01

    Conservation efforts to promote or inhibit fish passage include the application of simple fish passage models to determine whether an open channel flow allows passage of a given fish species. Derivations of simple fish passage models for uniform and nonuniform flow conditions are presented. For uniform flow conditions, a model equation is developed that predicts the mean-current velocity threshold in a fishway, or velocity barrier, which causes exhaustion at a given maximum distance of ascent. The derivation of a simple expression for this exhaustion-threshold (ET) passage model is presented using kinematic principles coupled with fatigue curves for threatened and endangered fish species. Mean current velocities at or above the threshold predict failure to pass. Mean current velocities below the threshold predict successful passage. The model is therefore intuitive and easily applied to predict passage or exclusion. The ET model’s simplicity comes with limitations, however, including its application only to uniform flow, which is rarely found in the field. This limitation is addressed by deriving a model that accounts for nonuniform conditions, including backwater profiles and drawdown curves. Comparison of these models with experimental data from volitional swimming studies of fish indicates reasonable performance, but limitations are still present due to the difficulty in predicting fish behavior and passage strategies that can vary among individuals and different fish species.
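
The exhaustion-threshold logic described above can be sketched as follows, with a hypothetical exponential fatigue curve t(Us) = exp(a - b*Us) (coefficients invented; real curves are species-specific). A fish swimming at speed Us against a current Uf gains ground at Us - Uf until exhaustion, so its maximum ascent is D(Us) = (Us - Uf) * t(Us), and passage succeeds if some swim speed covers the culvert length L.

```python
import math

def max_ascent(u_flow, a=4.0, b=1.2, u_max=5.0, steps=500):
    """Maximize D(Us) = (Us - u_flow) * exp(a - b*Us) over swim speeds
    between the flow velocity and a burst-speed cap (grid search)."""
    best = 0.0
    for i in range(1, steps + 1):
        us = u_flow + (u_max - u_flow) * i / steps  # must outswim the flow
        best = max(best, (us - u_flow) * math.exp(a - b * us))
    return best

def can_pass(u_flow, culvert_length):
    return max_ascent(u_flow) >= culvert_length

def threshold_velocity(culvert_length, lo=0.0, hi=5.0, tol=1e-4):
    """Bisection for the mean-velocity threshold at which passage fails."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if can_pass(mid, culvert_length):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Mean velocities below the threshold predict passage and those above predict exclusion, which is the intuitive pass/fail use of the ET model described in the abstract.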

  4. Some Remarks on CFD Drag Prediction of an Aircraft Model

    Science.gov (United States)

    Peng, S. H.; Eliasson, P.

    Several issues observed in CFD drag predictions for the DLR-F6 aircraft model with various configurations are addressed, with emphasis placed on the effect of turbulence modeling and grid resolution. With several different turbulence models, the predicted flow features around the aircraft are highlighted. It is shown that the prediction of the separation bubble in the wing-body junction is closely related to the inherent modeling mechanism of turbulence production. For the configuration with an additional fairing, which effectively removes the separation bubble, it is illustrated that the drag prediction may be altered even for an attached turbulent boundary layer when different turbulence models are used. Grid sensitivity studies are performed with two groups of successively refined grids. It is observed that, in contrast to the lift, the drag prediction is rather sensitive to the grid refinement, as well as to the artificial diffusion added for solving the turbulence transport equation. It is demonstrated that an effective grid refinement should drive the predicted drag components to converge monotonically and linearly to a finite value.

  5. Signature prediction for model-based automatic target recognition

    Science.gov (United States)

    Keydel, Eric R.; Lee, Shung W.

    1996-06-01

    The moving and stationary target recognition (MSTAR) model-based automatic target recognition (ATR) system utilizes a paradigm which matches features extracted from an unknown SAR target signature against predictions of those features generated from models of the sensing process and candidate target geometries. The candidate target geometry yielding the best match between predicted and extracted features defines the identity of the unknown target. MSTAR will extend the current model-based ATR state-of-the-art in a number of significant directions. These include: use of Bayesian techniques for evidence accrual, reasoning over target subparts, coarse-to-fine hypothesis search strategies, and explicit reasoning over target articulation, configuration, occlusion, and lay-over. These advances also imply significant technical challenges, particularly for the MSTAR feature prediction module (MPM). In addition to accurate electromagnetics, the MPM must provide traceback between input target geometry and output features, on-line target geometry manipulation, target subpart feature prediction, explicit models for local scene effects, and generation of sensitivity and uncertainty measures for the predicted features. This paper describes the MPM design which is being developed to satisfy these requirements. The overall module structure is presented, along with the specific design elements focused on MSTAR requirements. Particular attention is paid to design elements that enable on-line prediction of features within the time constraints mandated by model-driven ATR. Finally, the current status, development schedule, and further extensions in the module design are described.

  6. Multi-model ensemble hydrologic prediction and uncertainties analysis

    Directory of Open Access Journals (Sweden)

    S. Jiang

    2014-09-01

    Full Text Available Modelling uncertainties (i.e. input errors, parameter uncertainties and model structural errors) inevitably exist in hydrological prediction. A lot of recent attention has focused on these, of which input error modelling, parameter optimization and multi-model ensemble strategies are the three most popular methods to demonstrate the impacts of modelling uncertainties. In this paper the Xinanjiang model, the Hybrid rainfall–runoff model and the HYMOD model were applied to the Mishui Basin, south China, for daily streamflow ensemble simulation and uncertainty analysis. The three models were first calibrated by two parameter optimization algorithms, namely, the Shuffled Complex Evolution method (SCE-UA) and the Shuffled Complex Evolution Metropolis method (SCEM-UA); next, the input uncertainty was accounted for by introducing a normally-distributed error multiplier; then, the simulation sets calculated from the three models were combined by Bayesian model averaging (BMA). The results show that both parameter optimization algorithms generate good streamflow simulations; in particular, the SCEM-UA can characterize parameter uncertainty and give the posterior distribution of the parameters. Accounting for the precipitation input uncertainty does not improve the streamflow simulation precision very much, whereas the BMA combination not only improves the streamflow prediction precision but also gives quantitative uncertainty bounds for the simulation sets. The prediction interval calculated with SCEM-UA is better than that calculated with SCE-UA. These results suggest that considering the uncertainties of the model parameters and performing multi-model ensemble simulations are very practical for streamflow prediction and flood forecasting, from which more precise predictions and more reliable uncertainty bounds can be generated.
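The BMA combination step can be illustrated with a toy calculation: each model's simulation is weighted by a Gaussian likelihood of its residuals against observations, and the ensemble is the weighted mean. This is only a sketch; the paper's BMA fits weights and variances by an EM-type procedure, whereas the fixed-sigma likelihood weights, the data, and the three labelled series below are all invented for illustration.

```python
import math

obs = [3.0, 4.0, 5.0, 4.5]                 # synthetic "observed" flows
sims = {                                    # synthetic model simulations
    "Xinanjiang": [3.2, 4.1, 5.3, 4.4],    # small errors
    "Hybrid":     [2.5, 3.6, 4.4, 4.0],    # biased low
    "HYMOD":      [3.6, 4.7, 5.6, 5.2],    # biased high
}

def likelihood(sim, sigma=0.5):
    """Gaussian likelihood of the residuals with a fixed error scale."""
    return math.prod(math.exp(-0.5 * ((s - o) / sigma) ** 2)
                     for s, o in zip(sim, obs))

total = sum(likelihood(s) for s in sims.values())
weights = {name: likelihood(s) / total for name, s in sims.items()}

# Weighted ensemble mean at each time step
bma = [sum(w * s[i] for w, s in zip(weights.values(), sims.values()))
       for i in range(len(obs))]
print({k: round(v, 3) for k, v in weights.items()})
```

The closer a model tracks the observations, the larger its weight, so the combined series leans toward the best-performing member while still pooling information from all three.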

  7. Model predictive torque control with an extended prediction horizon for electrical drive systems

    Science.gov (United States)

    Wang, Fengxiang; Zhang, Zhenbin; Kennel, Ralph; Rodríguez, José

    2015-07-01

    This paper presents a model predictive torque control method for electrical drive systems. A two-step prediction horizon is achieved by considering the reduction of the torque ripples. The errors between the predicted electromagnetic torque and stator flux and their references, together with an over-current protection term, are considered in the cost function design. The best voltage vector is selected by minimising the value of the cost function, which aims to achieve a low torque ripple over two intervals. The study is carried out experimentally. The results show that the proposed method achieves good performance in both steady and transient states.
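The selection logic of finite-set predictive torque control can be sketched as: evaluate a cost for each candidate voltage vector from one-step predictions of torque and flux, add a large penalty when the predicted current exceeds its limit, and apply the vector with the minimum cost. The one-step prediction below is a placeholder (a real drive would discretise the machine equations), and all gains and limits are invented for illustration.

```python
CANDIDATES = range(8)              # 8 voltage vectors of a two-level inverter

def predict(state, v):
    """Placeholder one-step prediction of torque, flux and current;
    the linear dependence on the vector index is illustrative only."""
    torque = state["torque"] + 0.1 * (v - 3.5)
    flux = state["flux"] + 0.01 * (v - 3.5)
    current = abs(torque) / max(state["flux"], 1e-6)
    return torque, flux, current

def cost(state, v, t_ref, f_ref, i_max=50.0, k_flux=5.0):
    """Weighted torque and flux tracking errors plus an
    over-current protection penalty."""
    t, f, i = predict(state, v)
    over = 1e6 if i > i_max else 0.0
    return abs(t_ref - t) + k_flux * abs(f_ref - f) + over

state = {"torque": 4.0, "flux": 0.9}
best = min(CANDIDATES, key=lambda v: cost(state, v, t_ref=5.0, f_ref=0.9))
print(best)  # → 7
```

Here the torque lags its reference, so the vector that pushes torque up most (index 7 in this toy parameterisation) wins despite a small flux penalty; the over-current term never dominates unless a candidate would violate the limit.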

  8. Embryo quality predictive models based on cumulus cells gene expression

    Directory of Open Access Journals (Sweden)

    Devjak R

    2016-06-01

    Full Text Available Since the introduction of in vitro fertilization (IVF) into the clinical practice of infertility treatment, indicators of high-quality embryos have been investigated. Cumulus cells (CC) have a specific gene expression profile according to the developmental potential of the oocyte they surround, and therefore specific gene expression could be used as a biomarker. The aim of our study was to combine more than one biomarker to observe improvement in the prediction of embryo development. In this study, 58 CC samples from 17 IVF patients were analyzed. This study was approved by the Republic of Slovenia National Medical Ethics Committee. Gene expression analysis [quantitative real-time polymerase chain reaction (qPCR)] for five genes, analyzed according to embryo quality level, was performed. Two prediction models were tested for embryo quality prediction: a binary logistic model and a decision tree model. As the main outcome, gene expression levels for the five genes were taken and the area under the curve (AUC) for the two prediction models was calculated. Among the tested genes, AMHR2 and LIF showed significant expression differences between high-quality and low-quality embryos. These two genes were used for the construction of the two prediction models: the binary logistic model yielded an AUC of 0.72 ± 0.08 and the decision tree model yielded an AUC of 0.73 ± 0.03. The two prediction models thus yielded similar predictive power to differentiate high- and low-quality embryos. In terms of eventual clinical decision making, the decision tree model resulted in easy-to-interpret rules that are highly applicable in clinical practice.
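The AUC reported for both models has a simple rank interpretation: the probability that a randomly chosen high-quality embryo receives a higher score than a randomly chosen low-quality one. A minimal sketch, using made-up expression-like scores rather than the study's AMHR2/LIF data:

```python
def auc(pos_scores, neg_scores):
    """Rank-based AUC: fraction of (positive, negative) pairs where the
    positive scores higher, counting ties as half a win."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

high = [2.1, 1.8, 2.5, 1.9]   # hypothetical scores, high-quality embryos
low = [1.2, 1.7, 0.9, 2.0]    # hypothetical scores, low-quality embryos
print(auc(high, low))          # → 0.875
```

An AUC of 0.5 would mean the score carries no ranking information; values around 0.72-0.73, as in the study, mean the score orders a high/low pair correctly roughly three times out of four.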

  9. Validation of Biomarker-based risk prediction models

    OpenAIRE

    Taylor, Jeremy M.G.; Ankerst, Donna P.; Andridge, Rebecca R.

    2008-01-01

    The increasing availability and use of predictive models to facilitate informed decision making highlights the need for careful assessment of the validity of these models. In particular, models involving biomarkers require careful validation for two reasons: issues with overfitting when complex models involve a large number of biomarkers, and inter-laboratory variation in assays used to measure biomarkers. In this paper we distinguish between internal and external statistical validation. Inte...

  10. Prediction error, ketamine and psychosis: An updated model.

    Science.gov (United States)

    Corlett, Philip R; Honey, Garry D; Fletcher, Paul C

    2016-11-01

    In 2007, we proposed an explanation of delusion formation as aberrant prediction error-driven associative learning. Further, we argued that the NMDA receptor antagonist ketamine provided a good model for this process. Subsequently, we validated the model in patients with psychosis, relating aberrant prediction error signals to delusion severity. During the ensuing period, we have developed these ideas, drawing on the simple principle that brains build a model of the world and refine it by minimising prediction errors, as well as using it to guide perceptual inferences. While previously we focused on the prediction error signal per se, an updated view takes into account its precision, as well as the precision of prior expectations. With this expanded perspective, we see several possible routes to psychotic symptoms - which may explain the heterogeneity of psychotic illness, as well as the fact that other drugs, with different pharmacological actions, can produce psychotomimetic effects. In this article, we review the basic principles of this model and highlight specific ways in which prediction errors can be perturbed, in particular considering the reliability and uncertainty of predictions. The expanded model explains hallucinations as perturbations of the uncertainty mediated balance between expectation and prediction error. Here, expectations dominate and create perceptions by suppressing or ignoring actual inputs. Negative symptoms may arise due to poor reliability of predictions in service of action. By mapping from biology to belief and perception, the account proffers new explanations of psychosis. However, challenges remain. We attempt to address some of these concerns and suggest future directions, incorporating other symptoms into the model, building towards better understanding of psychosis. © The Author(s) 2016.
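The precision-weighting at the heart of the updated model can be written as a single belief update: the influence of a prediction error is scaled by its precision relative to the precision of the prior expectation. The scalar form below is a deliberately minimal sketch of that principle, not the authors' full hierarchical formulation.

```python
def update(belief, observation, prior_precision, sensory_precision):
    """Precision-weighted belief update: the learning rate is the
    sensory precision's share of the total precision."""
    prediction_error = observation - belief
    learning_rate = sensory_precision / (sensory_precision + prior_precision)
    return belief + learning_rate * prediction_error

# Over-precise priors (as hypothesised for hallucinations) suppress input:
print(update(0.0, 1.0, prior_precision=100.0, sensory_precision=1.0))
# Over-precise prediction errors (as hypothesised for delusions) dominate:
print(update(0.0, 1.0, prior_precision=1.0, sensory_precision=100.0))
```

The two calls show the two failure modes the article describes: in the first, the belief barely moves and expectation effectively creates the percept; in the second, the belief is captured almost entirely by the (possibly aberrant) input.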

  11. Predicting Solar Cycle 25 using Surface Flux Transport Model

    Science.gov (United States)

    Imada, Shinsuke; Iijima, Haruhisa; Hotta, Hideyuki; Shiota, Daiko; Kusano, Kanya

    2017-08-01

    It is thought that longer-term variations of solar activity may affect the Earth’s climate. Predicting the next solar cycle is therefore crucial for forecasting the “solar-terrestrial environment”, and building prediction schemes for the next solar cycle is a key for long-term space weather study. Recently, the relationship between the polar magnetic field at solar minimum and the next solar activity has been intensively discussed. Because the polar magnetic field at solar minimum can be determined roughly 3 years before the next solar maximum, the next solar cycle may be discussed 3 years in advance. Further, a longer-term (~5 years) prediction might be achieved by estimating the polar magnetic field with the Surface Flux Transport (SFT) model. We are now developing a prediction scheme based on the SFT model as a part of PSTEP (Project for Solar-Terrestrial Environment Prediction) and applying it to the Cycle 25 prediction. The predicted polar field strength at the Cycle 24/25 minimum is several tens of percent smaller than at the Cycle 23/24 minimum. The result suggests that the amplitude of Cycle 25 will be weaker than that of the current cycle. We also try to obtain the meridional flow, differential rotation, and turbulent diffusivity from recent modern observations (Hinode and the Solar Dynamics Observatory). These parameters will be used in the SFT model to predict the polar magnetic field strength at solar minimum. In this presentation, we will explain the outline of our strategy to predict the next solar cycle and discuss the initial results for the Cycle 25 prediction.

  12. Predicting soil acidification trends at Plynlimon using the SAFE model

    Directory of Open Access Journals (Sweden)

    B. Reynolds

    1997-01-01

    Full Text Available The SAFE model has been applied to an acid grassland site, located on base-poor stagnopodzol soils derived from Lower Palaeozoic greywackes. The model predicts that acidification of the soil has occurred in response to increased acid deposition following the industrial revolution. Limited recovery is predicted following the decline in sulphur deposition during the mid to late 1970s. Reducing excess sulphur and NOx deposition in 1998 to 40% and 70% of 1980 levels results in further recovery, but soil chemical conditions (base saturation, soil water pH and ANC) do not return to the values predicted for pre-industrial times. The SAFE model predicts that critical loads (expressed in terms of the (Ca+Mg+K):Al critical ratio) for six vegetation species found in acid grassland communities are not exceeded despite the increase in deposited acidity following the industrial revolution. The relative growth response of selected vegetation species characteristic of acid grassland swards has been predicted using a damage function linking growth to the soil solution base cation to aluminium ratio. The results show that very small growth reductions can be expected for 'acid tolerant' plants growing in acid upland soils. For more sensitive species such as Holcus lanatus, SAFE predicts that growth would have been reduced by about 20% between 1951 and 1983, when acid inputs were greatest. Recovery to c. 90% of normal growth (under laboratory conditions) is predicted as acidic inputs decline.

  13. Physics-Informed Machine Learning for Predictive Turbulence Modeling: A Priori Assessment of Prediction Confidence

    CERN Document Server

    Wu, Jin-Long; Xiao, Heng; Ling, Julia

    2016-01-01

    Although Reynolds-Averaged Navier-Stokes (RANS) equations are still the dominant tool for engineering design and analysis applications involving turbulent flows, standard RANS models are known to be unreliable in many flows of engineering relevance, including flows with separation, strong pressure gradients or mean flow curvature. With increasing amounts of 3-dimensional experimental data and high fidelity simulation data from Large Eddy Simulation (LES) and Direct Numerical Simulation (DNS), data-driven turbulence modeling has become a promising approach to increase the predictive capability of RANS simulations. Recently, a data-driven turbulence modeling approach via machine learning has been proposed to predict the Reynolds stress anisotropy of a given flow based on high fidelity data from closely related flows. In this work, the closeness of different flows is investigated to assess the prediction confidence a priori. Specifically, the Mahalanobis distance and the kernel density estimation (KDE) technique...

  14. Extended Range Hydrological Predictions: Uncertainty Associated with Model Parametrization

    Science.gov (United States)

    Joseph, J.; Ghosh, S.; Sahai, A. K.

    2016-12-01

    The better understanding of various atmospheric processes has led to improved predictions of meteorological conditions at various temporal scales, ranging from short term, covering a period of up to 2 days, to long term, covering a period of more than 10 days. Accurate prediction of hydrological variables can be carried out using these predicted meteorological conditions, which would be helpful in the proper management of water resources. Extended range hydrological simulation involves the prediction of hydrological variables for a period of more than 10 days. The main sources of uncertainty in hydrological predictions include uncertainty in the initial conditions, the meteorological forcing and the model parametrization. In the present study, the Extended Range Prediction developed for the Indian monsoon by the Indian Institute of Tropical Meteorology (IITM), Pune is used as meteorological forcing for the Variable Infiltration Capacity (VIC) model. Sensitive hydrological parameters, as identified from the literature, along with a few vegetation parameters, are assumed to be uncertain, and 1000 random values are generated within their prescribed ranges. Uncertainty bands are generated by performing Monte-Carlo Simulations (MCS) for the generated sets of parameters and observed meteorological forcings. Basins with minimum human intervention within the Indian Peninsular region are identified, and validation of the results is carried out using observed gauge discharge. Further, uncertainty bands are generated for the extended range hydrological predictions by performing MCS for the same set of parameters and the extended range meteorological predictions. The results demonstrate the uncertainty associated with model parametrization for extended range hydrological simulations. Keywords: Extended Range Prediction, Variable Infiltration Capacity model, Monte Carlo Simulation.
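The Monte-Carlo step described above reduces to: sample the uncertain parameters within their prescribed ranges, run the model once per sample, and read uncertainty bands off the percentiles of the resulting ensemble. The stand-in "model" and the parameter range below are invented for illustration; a real study would run VIC here.

```python
import random
random.seed(42)  # reproducible sampling

def toy_model(infiltration, forcing):
    """Placeholder for a hydrological model run: discharge scales
    with the forcing and one uncertain parameter."""
    return [infiltration * f for f in forcing]

forcing = [2.0, 3.5, 1.0, 4.0]                  # synthetic forcing series
runs = [toy_model(random.uniform(0.2, 0.8), forcing)  # 1000 parameter draws
        for _ in range(1000)]

def band(i, lo=5, hi=95):
    """5th-95th percentile uncertainty band at time step i."""
    vals = sorted(r[i] for r in runs)
    return vals[int(len(vals) * lo / 100)], vals[int(len(vals) * hi / 100)]

print([tuple(round(v, 2) for v in band(i)) for i in range(len(forcing))])
```

Observed discharge falling inside these bands is the usual check that the parameter ranges are wide enough; bands that are far too wide instead signal over-stated parameter uncertainty.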

  15. Predictive modeling of coral disease distribution within a reef system.

    Directory of Open Access Journals (Sweden)

    Gareth J Williams

    Full Text Available Diseases often display complex and distinct associations with their environment due to differences in etiology, modes of transmission between hosts, and the shifting balance between pathogen virulence and host resistance. Statistical modeling has been underutilized in coral disease research to explore the spatial patterns that result from this triad of interactions. We tested the hypotheses that: (1) coral diseases show distinct associations with multiple environmental factors, (2) incorporating interactions (synergistic collinearities) among environmental variables is important when predicting coral disease spatial patterns, and (3) modeling overall coral disease prevalence (the prevalence of multiple diseases as a single proportion value) will increase predictive error relative to modeling the same diseases independently. Four coral diseases: Porites growth anomalies (PorGA), Porites tissue loss (PorTL), Porites trematodiasis (PorTrem), and Montipora white syndrome (MWS), and their interactions with 17 predictor variables were modeled using boosted regression trees (BRT) within a reef system in Hawaii. Each disease showed distinct associations with the predictors. The environmental predictors showing the strongest overall associations with the coral diseases were both biotic and abiotic. PorGA was optimally predicted by a negative association with turbidity, PorTL and MWS by declines in butterflyfish and juvenile parrotfish abundance respectively, and PorTrem by a modal relationship with Porites host cover. Incorporating interactions among predictor variables contributed to the predictive power of our models, particularly for PorTrem. Combining diseases (using overall disease prevalence as the model response) led to an average six-fold increase in cross-validation predictive deviance over modeling the diseases individually. We therefore recommend coral diseases to be modeled separately, unless known to have etiologies that respond in a similar manner to

  16. A CHAID Based Performance Prediction Model in Educational Data Mining

    Directory of Open Access Journals (Sweden)

    R. Bhaskaran

    2010-01-01

    Full Text Available Performance in higher secondary school education in India is a turning point in the academic lives of all students. As this academic performance is influenced by many factors, it is essential to develop a predictive data mining model for students' performance so as to identify slow learners and study the influence of the dominant factors on their academic performance. In the present investigation, a survey-cum-experimental methodology was adopted to generate a database, which was constructed from a primary and a secondary source. While the primary data was collected from the regular students, the secondary data was gathered from the school and the office of the Chief Educational Officer (CEO). A total of 1000 datasets of the year 2006 from five different schools in three different districts of Tamilnadu were collected. The raw data was preprocessed in terms of filling in missing values, transforming values from one form into another and relevant attribute/variable selection. As a result, 772 student records remained, which were used for CHAID prediction model construction. A set of prediction rules was extracted from the CHAID prediction model and the efficiency of the generated model was determined. The accuracy of the present model was compared with that of another model and has been found to be satisfactory.
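CHAID grows its tree by cross-tabulating each candidate predictor against the outcome and splitting on the one with the strongest chi-square association. The split criterion can be sketched with a hand-rolled chi-square statistic; the attendance/locality counts below are hypothetical, not the study's 772 records.

```python
def chi_square(table):
    """Pearson chi-square statistic for a contingency table
    (list of rows), computed against independence expectations."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    return sum((table[i][j] - rows[i] * cols[j] / n) ** 2
               / (rows[i] * cols[j] / n)
               for i in range(len(rows)) for j in range(len(cols)))

# Hypothetical counts of (pass, fail) by two candidate predictors
attendance = [[60, 20], [30, 50]]   # regular vs irregular attendance
locality = [[45, 35], [45, 35]]     # urban vs rural (no association)
print(round(chi_square(attendance), 2), chi_square(locality))
```

CHAID would split on attendance here, since its chi-square (about 22.9) dwarfs the locality table's exact zero; the full algorithm additionally merges categories and applies Bonferroni-adjusted p-values, which this sketch omits.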

  17. Precise methods for conducted EMI modeling,analysis,and prediction

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Focusing on state-of-the-art conducted EMI prediction, this paper presents a noise source lumped circuit modeling and identification method, an EMI modeling method based on multiple slope approximation of switching transitions, and a double Fourier integral method modeling PWM conversion units, to achieve an accurate modeling of the EMI noise source. Meanwhile, a new sensitivity analysis method, a general coupling model for steel ground loops, and a partial element equivalent circuit method are proposed to identify and characterize conducted EMI coupling paths. The EMI noise and propagation modeling provide an accurate prediction of conducted EMI in the entire frequency range (0-10 MHz) with good practicability and generality. Finally, a new measurement approach is presented to identify the surface current of a large dimensional metal shell. The proposed analytical modeling methodology is verified by experimental results.

  18. Precise methods for conducted EMI modeling,analysis, and prediction

    Institute of Scientific and Technical Information of China (English)

    MA WeiMing; ZHAO ZhiHua; MENG Jin; PAN QiJun; ZHANG Lei

    2008-01-01

    Focusing on state-of-the-art conducted EMI prediction, this paper presents a noise source lumped circuit modeling and identification method, an EMI modeling method based on multiple slope approximation of switching transitions, and a double Fourier integral method modeling PWM conversion units, to achieve an accurate modeling of the EMI noise source. Meanwhile, a new sensitivity analysis method, a general coupling model for steel ground loops, and a partial element equivalent circuit method are proposed to identify and characterize conducted EMI coupling paths. The EMI noise and propagation modeling provide an accurate prediction of conducted EMI in the entire frequency range (0-10 MHz) with good practicability and generality. Finally, a new measurement approach is presented to identify the surface current of a large dimensional metal shell. The proposed analytical modeling methodology is verified by experimental results.

  19. Predictive Control, Competitive Model Business Planning, and Innovation ERP

    DEFF Research Database (Denmark)

    Nourani, Cyrus F.; Lauth, Codrina

    2015-01-01

    New optimality principles are put forth based on competitive model business planning. A Generalized MinMax local optimum dynamic programming algorithm is presented and applied to business model computing, where predictive techniques can determine local optima. Based on a systems model, an enterprise is not viewed as the sum of its component elements but as the product of their interactions. The paper starts by introducing a systems approach to business modeling. A competitive business modeling technique, based on the author's planning techniques, is applied. Systemic decisions are based on common... Loops are applied to complex management decisions. Predictive modeling specifics are briefed. A preliminary optimal game modeling technique is presented in brief, with applications to innovation and R&D management. Conducting gap and risk analysis can assist with this process. Example application areas...

  20. Predicting nucleosome positioning using a duration Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Widom Jonathan

    2010-06-01

    Full Text Available Abstract Background The nucleosome is the fundamental packing unit of DNAs in eukaryotic cells. Its detailed positioning on the genome is closely related to chromosome functions. Increasing evidence has shown that genomic DNA sequence itself is highly predictive of nucleosome positioning genome-wide. Therefore a fast software tool for predicting nucleosome positioning can help understanding how a genome's nucleosome organization may facilitate genome function. Results We present a duration Hidden Markov model for nucleosome positioning prediction by explicitly modeling the linker DNA length. The nucleosome and linker models trained from yeast data are re-scaled when making predictions for other species to adjust for differences in base composition. A software tool named NuPoP is developed in three formats for free download. Conclusions Simulation studies show that modeling the linker length distribution and utilizing a base composition re-scaling method both improve the prediction of nucleosome positioning regarding sensitivity and false discovery rate. NuPoP provides a user-friendly software tool for predicting the nucleosome occupancy and the most probable nucleosome positioning map for genomic sequences of any size. When compared with two existing methods, NuPoP shows improved performance in sensitivity.

  1. Experience-based model predictive control using reinforcement learning

    NARCIS (Netherlands)

    Negenborn, R.R.; De Schutter, B.; Wiering, M.A.; Hellendoorn, J.

    2004-01-01

    Model predictive control (MPC) is becoming an increasingly popular method to select actions for controlling dynamic systems. TraditionallyMPC uses a model of the system to be controlled and a performance function to characterize the desired behavior of the system. The MPC agent finds actions over a

  2. Evaluation of performance of Predictive Models for Deoxynivalenol in Wheat

    NARCIS (Netherlands)

    Fels, van der H.J.

    2014-01-01

    The aim of this study was to evaluate the performance of two predictive models for deoxynivalenol contamination of wheat at harvest in the Netherlands, including the use of weather forecast data and external model validation. Data were collected in a different year and from different wheat fields th

  3. Katz model prediction of Caenorhabditis elegans mutagenesis on STS-42

    Science.gov (United States)

    Cucinotta, Francis A.; Wilson, John W.; Katz, Robert; Badhwar, Gautam D.

    1992-01-01

    Response parameters that describe the production of recessive lethal mutations in C. elegans from ionizing radiation are obtained with the Katz track structure model. The authors used models of the space radiation environment and radiation transport to predict and discuss mutation rates for C. elegans on the IML-1 experiment aboard STS-42.

  4. A model to predict the sound reflection from forests

    NARCIS (Netherlands)

    Wunderli, J.M.; Salomons, E.M.

    2009-01-01

    A model is presented to predict the reflection of sound at forest edges. A single tree is modelled as a vertical cylinder. For the reflection at a cylinder an analytical solution is given based on the theory of scattering of spherical waves. The entire forest is represented by a line of cylinders

  5. Atmospheric modelling for seasonal prediction at the CSIR

    CSIR Research Space (South Africa)

    Landman, WA

    2014-10-01

    Full Text Available by observed monthly sea-surface temperature (SST) and sea-ice fields. The AGCM is the conformal-cubic atmospheric model (CCAM) administered by the Council for Scientific and Industrial Research. Since the model is forced with observed rather than predicted...

  6. A model to predict the sound reflection from forests

    NARCIS (Netherlands)

    Wunderli, J.M.; Salomons, E.M.

    2009-01-01

    A model is presented to predict the reflection of sound at forest edges. A single tree is modelled as a vertical cylinder. For the reflection at a cylinder an analytical solution is given based on the theory of scattering of spherical waves. The entire forest is represented by a line of cylinders pl

  7. Validation of a multi-objective, predictive urban traffic model

    NARCIS (Netherlands)

    Wilmink, I.R.; Haak, P. van den; Woldeab, Z.; Vreeswijk, J.

    2013-01-01

    This paper describes the results of the verification and validation of the ecoStrategic Model, which was developed, implemented and tested in the eCoMove project. The model uses real-time and historical traffic information to determine the current, predicted and desired state of traffic in a network

  8. Prediction of speech intelligibility based on an auditory preprocessing model

    DEFF Research Database (Denmark)

    Christiansen, Claus Forup Corlin; Pedersen, Michael Syskind; Dau, Torsten

    2010-01-01

    in noise experiment was used for training and an ideal binary mask experiment was used for evaluation. All three models were able to capture the trends in the speech in noise training data well, but the proposed model provides a better prediction of the binary mask test data, particularly when the binary...... masks degenerate to a noise vocoder....

  9. Evaluation of performance of Predictive Models for Deoxynivalenol in Wheat

    NARCIS (Netherlands)

    Fels, van der H.J.

    2014-01-01

    The aim of this study was to evaluate the performance of two predictive models for deoxynivalenol contamination of wheat at harvest in the Netherlands, including the use of weather forecast data and external model validation. Data were collected in a different year and from different wheat fields

  10. A Climate System Model, Numerical Simulation and Climate Predictability

    Institute of Scientific and Technical Information of China (English)

    ZENG Qingcun; WANG Huijun; LIN Zhaohui; ZHOU Guangqing; YU Yongqiang

    2007-01-01

    The implementation of the project has lasted for more than 20 years. As a result, the following key innovative achievements have been obtained, ranging from the basic theory of climate dynamics, numerical model development and its related computational theory to dynamical climate prediction using the climate system models:

  11. Prediction horizon effects on stochastic modelling hints for neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Drossu, R.; Obradovic, Z. [Washington State Univ., Pullman, WA (United States)

    1995-12-31

    The objective of this paper is to investigate the relationship between stochastic models and neural network (NN) approaches to time series modelling. Experiments on a complex real life prediction problem (entertainment video traffic) indicate that prior knowledge can be obtained through stochastic analysis both with respect to an appropriate NN architecture as well as to an appropriate sampling rate, in the case of a prediction horizon larger than one. An improvement of the obtained NN predictor is also proposed through a bias removal post-processing, resulting in much better performance than the best stochastic model.
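The bias-removal post-processing mentioned in the closing sentence is simple to illustrate: estimate the predictor's mean residual on data with known targets and subtract it from subsequent predictions. The numbers below are synthetic, constructed so the predictor runs a constant 0.5 high.

```python
preds = [10.2, 11.1, 9.8, 10.5, 10.9]    # hypothetical NN predictions
actual = [9.7, 10.6, 9.3, 10.0, 10.4]    # corresponding true values

# Estimate the systematic bias as the mean residual, then remove it
bias = sum(p - a for p, a in zip(preds, actual)) / len(preds)
corrected = [p - bias for p in preds]

def rmse(ps):
    return (sum((p - a) ** 2 for p, a in zip(ps, actual)) / len(ps)) ** 0.5

print(round(bias, 2), round(rmse(preds), 2), round(rmse(corrected), 2))
```

Because the toy error is purely a constant offset, correction removes it entirely; on real traffic data the improvement is partial, since only the systematic component of the error is removable this way.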

  12. Three-model ensemble wind prediction in southern Italy

    Science.gov (United States)

    Torcasio, Rosa Claudia; Federico, Stefano; Calidonna, Claudia Roberta; Avolio, Elenio; Drofa, Oxana; Landi, Tony Christian; Malguzzi, Piero; Buzzi, Andrea; Bonasoni, Paolo

    2016-03-01

    Quality of wind prediction is of great importance since a good wind forecast allows the prediction of available wind power, improving the penetration of renewable energies into the energy market. Here, a 1-year (1 December 2012 to 30 November 2013) three-model ensemble (TME) experiment for wind prediction is considered. The models employed, run operationally at National Research Council - Institute of Atmospheric Sciences and Climate (CNR-ISAC), are RAMS (Regional Atmospheric Modelling System), BOLAM (BOlogna Limited Area Model), and MOLOCH (MOdello LOCale in H coordinates). The area considered for the study is southern Italy and the measurements used for the forecast verification are those of the GTS (Global Telecommunication System). Comparison with observations is made every 3 h up to 48 h of forecast lead time. Results show that the three-model ensemble outperforms the forecast of each individual model. The RMSE improvement compared to the best model is between 22 and 30 %, depending on the season. It is also shown that the three-model ensemble outperforms the IFS (Integrated Forecasting System) of the ECMWF (European Centre for Medium-Range Weather Forecast) for the surface wind forecasts. Notably, the three-model ensemble forecast performs better than each unbiased model, showing the added value of the ensemble technique. Finally, the sensitivity of the three-model ensemble RMSE to the length of the training period is analysed.
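The added value of the multi-model ensemble comes from averaging forecasts whose errors are partly independent, so individual errors cancel. A toy illustration with synthetic numbers (not GTS observations or the RAMS/BOLAM/MOLOCH output):

```python
import math

obs = [5.0, 7.0, 6.0, 8.0]     # synthetic observed wind speeds
m1 = [5.6, 6.5, 6.4, 7.6]      # three synthetic, roughly unbiased forecasts
m2 = [4.6, 7.5, 5.7, 8.5]
m3 = [5.3, 7.2, 5.5, 7.7]

# Equal-weight three-model ensemble mean
tme = [(a + b + c) / 3 for a, b, c in zip(m1, m2, m3)]

def rmse(forecast):
    return math.sqrt(sum((f - o) ** 2 for f, o in zip(forecast, obs))
                     / len(obs))

print([round(rmse(m), 3) for m in (m1, m2, m3)], round(rmse(tme), 3))
```

With errors constructed to partly offset one another, the ensemble RMSE falls below every member's, which is the qualitative behaviour the paper reports (a 22-30% RMSE improvement over the best model, after per-model bias is accounted for).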

  13. Predicting the Yield Stress of SCC using Materials Modelling

    DEFF Research Database (Denmark)

    Thrane, Lars Nyholm; Hasholt, Marianne Tange; Pade, Claus

    2005-01-01

    A conceptual model for predicting the Bingham rheological parameter yield stress of SCC has been established. The model used here is inspired by previous work of Oh et al. (1), predicting that the yield stress of concrete relative to the yield stress of paste is a function of the relative thickness...... of excess paste around the aggregate. The thickness of excess paste is itself a function of particle shape, particle size distribution, and particle packing. Seven types of SCC were tested at four different excess paste contents in order to verify the conceptual model. Paste composition and aggregate shape...... and distribution were varied between SCC types. The results indicate that yield stress of SCC may be predicted using the model....

  14. Stability of theoretical model for catastrophic weather prediction

    Institute of Scientific and Technical Information of China (English)

    SHI Wei-hui; WANG Yue-peng

    2007-01-01

    The stability of theoretical models for catastrophic weather prediction, including the non-hydrostatic perfect elastic model and the anelastic model, is discussed and analyzed in detail. It is proved that the non-hydrostatic perfect elastic equation set is stable in the class of infinitely differentiable functions. For the anelastic equation set, however, the continuity equation changes form because of the particular hypothesis adopted for the fluid, so "the matching of the viscosity coefficient with the incompressibility assumption" appears; as a result, the most important equation set of this class in practical prediction shows the same instability in topological property as the Navier-Stokes equations, which should first be avoided in practical numerical prediction. In light of this, suggestions to amend the applied model are finally presented.

  15. Comparison of tropospheric scintillation prediction models of the Indonesian climate

    Science.gov (United States)

    Chen, Cheng Yee; Singh, Mandeep Jit

    2014-12-01

    Tropospheric scintillation is a phenomenon that causes signal degradation in satellite communication links with low fade margins. Few studies of scintillation have been conducted in tropical regions. To analyze tropospheric scintillation, we obtained data from a satellite link installed at Bandung, Indonesia, at an elevation angle of 64.7° and a frequency of 12.247 GHz from 1999 to 2000. The data were processed and compared with the predictions of several well-known scintillation prediction models. From the analysis, we found that the ITU-R model gives the lowest error rate, 4.68%, when predicting the scintillation fade intensity. However, the model should be further tested using data from higher-frequency bands, such as the K and Ka bands, to verify its accuracy.

  16. [Predicting suicide or predicting the unpredictable in an uncertain world: Reinforcement Learning Model-Based analysis].

    Science.gov (United States)

    Desseilles, Martin

    2012-01-01

    In general, it appears that the suicidal act is highly unpredictable with the current scientific means available. In this article, the author submits the hypothesis that predicting suicide is complex because it results in predicting a choice, in itself unpredictable. The article proposes a Reinforcement learning model-based analysis. In this model, we integrate on the one hand, four ascending modulatory neurotransmitter systems (acetylcholine, noradrenalin, serotonin, and dopamine) with their regions of respective projections and afferences, and on the other hand, various observations of brain imaging identified until now in the suicidal process.

  17. OBSERVATION OF ENERGY DISSIPATION PEAK IN POLYSTYRENE MELT ABOVE Tg

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In this paper, two different dynamic mechanical techniques (an inverted torsion pendulum and an energy dissipation apparatus) were used to study the dynamic behavior of atactic monodisperse polystyrene above the glass transition. Plots of energy dissipation versus temperature are presented for two atactic polystyrene samples. An apparent energy dissipation peak occurred above Tg in each plot measured by the inverted torsion pendulum, and the sample was simultaneously found to flow. To exclude the influence of the flow and demonstrate that there was indeed a peak above Tg, the energy dissipation apparatus was used, in which the samples were placed in a cup. An obvious peak appeared, in agreement with the peak observed by the inverted torsion pendulum. On the basis of the results measured by the two kinds of apparatus, it is concluded that a peak occurs above Tg, which provides evidence for the existence of the liquid-liquid transition.

  18. Statistical characteristics of irreversible predictability time in regional ocean models

    Directory of Open Access Journals (Sweden)

    P. C. Chu

    2005-01-01

    Full Text Available Probabilistic aspects of regional ocean model predictability are analyzed using the probability density function (PDF) of the irreversible predictability time (IPT), called the τ-PDF, computed from an unconstrained ensemble of stochastic perturbations in initial conditions, winds, and open boundary conditions. Two attractors (a chaotic attractor and a small-amplitude stable limit cycle) are found in the wind-driven circulation. The relationship between the attractor's residence time and the IPT determines the τ-PDF for short (up to several weeks) and intermediate (up to two months) predictions. The τ-PDF is usually non-Gaussian but not multi-modal for red-noise perturbations in initial conditions and perturbations in the wind and open boundary conditions. Bifurcation of the τ-PDF occurs as the tolerance level varies. Generally, extremely successful predictions (corresponding to the tail of the τ-PDF toward the large-IPT domain) are not outliers and share the same statistics as the whole ensemble of predictions.
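The IPT concept lends itself to a compact sketch: for each ensemble member, the IPT is the first time the prediction error exceeds a tolerance level, and the collection of IPTs gives the empirical τ-PDF. The error trajectories below are synthetic, not output from an ocean model.

```python
# Sketch: empirical tau-PDF of the irreversible predictability time (IPT)
# from an ensemble of (synthetic) error trajectories.
import random

def ipt(errors, tolerance):
    """First time index at which |error| exceeds the tolerance (None if never)."""
    for t, e in enumerate(errors):
        if abs(e) > tolerance:
            return t
    return None

def tau_histogram(trajectories, tolerance, horizon):
    counts = [0] * horizon
    for traj in trajectories:
        t = ipt(traj, tolerance)
        if t is not None:
            counts[t] += 1
    total = sum(counts) or 1
    return [c / total for c in counts]   # normalized empirical tau-PDF

random.seed(0)
# Synthetic ensemble: errors grow on average, with random perturbations.
ensemble = [[0.1 * t * (1 + random.uniform(-0.5, 0.5)) for t in range(20)]
            for _ in range(500)]
pdf = tau_histogram(ensemble, tolerance=1.0, horizon=20)
```

Recomputing `pdf` for several tolerance values is the numerical analogue of the bifurcation the abstract describes.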

  19. Determining the prediction limits of models and classifiers with applications for disruption prediction in JET

    Science.gov (United States)

    Murari, A.; Peluso, E.; Vega, J.; Gelfusa, M.; Lungaroni, M.; Gaudio, P.; Martínez, F. J.; JET Contributors

    2017-01-01

    Understanding the many aspects of tokamak physics requires the development of quite sophisticated models. Moreover, in the operation of the devices, prediction of the future evolution of discharges can be of crucial importance, particularly in the case of the prediction of disruptions, which can cause serious damage to various parts of the machine. The determination of the limits of predictability is therefore an important issue for modelling, classifying and forecasting. In all these cases, once a certain level of performance has been reached, the question typically arises as to whether all the information available in the data has been exploited, or whether there are still margins for improvement of the tools being developed. In this paper, an information-theoretic approach is proposed to address this issue. The excellent properties of the developed indicator, called the prediction factor (PF), have been proved with the help of a series of numerical tests. Its application to some typical behaviour relating to macroscopic instabilities in tokamaks has shown very positive results. The prediction factor has also been used to assess the performance of disruption predictors running in real time in the JET system, including the one systematically deployed in the feedback loop for mitigation purposes. The main conclusion is that the most advanced predictors basically exploit all the information contained in the locked mode signal on which they are based. Therefore, qualitative improvements in disruption prediction performance in JET would need the processing of additional signals, probably profiles.

  20. A predictive model of music preference using pairwise comparisons

    DEFF Research Database (Denmark)

    Jensen, Bjørn Sand; Gallego, Javier Saez; Larsen, Jan

    2012-01-01

    Music recommendation is an important aspect of many streaming services and multi-media systems, however, it is typically based on so-called collaborative filtering methods. In this paper we consider the recommendation task from a personal viewpoint and examine to which degree music preference can...... be elicited and predicted using simple and robust queries such as pairwise comparisons. We propose to model - and in turn predict - the pairwise music preference using a very flexible model based on Gaussian Process priors for which we describe the required inference. We further propose a specific covariance...... function and evaluate the predictive performance on a novel dataset. In a recommendation style setting we obtain a leave-one-out accuracy of 74% compared to 50% with random predictions, showing potential for further refinement and evaluation....
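Preference learning from pairwise comparisons can be illustrated with a much simpler stand-in for the paper's Gaussian-process model: a Bradley-Terry score model fitted by gradient ascent. The items and comparisons below are invented; this is a sketch of the pairwise-comparison setup, not the authors' method.

```python
# Bradley-Terry preference model fitted to pairwise comparisons
# (a simplified stand-in for a GP preference model).
import math

def fit_bradley_terry(n_items, comparisons, lr=0.1, steps=500):
    """comparisons: list of (winner, loser) index pairs."""
    s = [0.0] * n_items
    for _ in range(steps):
        grad = [0.0] * n_items
        for w, l in comparisons:
            p = 1.0 / (1.0 + math.exp(s[l] - s[w]))  # P(winner beats loser)
            grad[w] += 1.0 - p
            grad[l] -= 1.0 - p
        s = [si + lr * gi for si, gi in zip(s, grad)]
        mean = sum(s) / n_items
        s = [si - mean for si in s]   # fix the gauge: scores sum to zero
    return s

# Hypothetical comparisons among 3 tracks: 0 preferred over 1 and 2, 1 over 2.
scores = fit_bradley_terry(3, [(0, 1), (0, 2), (1, 2), (0, 1)])
```

Predicting a held-out comparison from the fitted scores (higher score wins) is the same leave-one-out protocol the abstract evaluates.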

  1. The ARIC predictive model reliably predicted risk of type II diabetes in Asian populations

    Directory of Open Access Journals (Sweden)

    Chin Calvin

    2012-04-01

    Full Text Available Abstract Background Identification of high-risk individuals is crucial for effective implementation of type 2 diabetes mellitus prevention programs. Several studies have shown that multivariable predictive functions perform as well as the 2-hour post-challenge glucose in identifying these high-risk individuals. The performance of these functions in Asian populations, where the rise in prevalence of type 2 diabetes mellitus is expected to be the greatest in the next several decades, is relatively unknown. Methods Using data from three Asian populations in Singapore, we compared the performance of three multivariate predictive models in terms of their discriminatory power and calibration quality: the San Antonio Health Study model, Atherosclerosis Risk in Communities model and the Framingham model. Results The San Antonio Health Study and Atherosclerosis Risk in Communities models had better discriminative powers than using only fasting plasma glucose or the 2-hour post-challenge glucose. However, the Framingham model did not perform significantly better than fasting glucose or the 2-hour post-challenge glucose. All published models suffered from poor calibration. After recalibration, the Atherosclerosis Risk in Communities model achieved good calibration, the San Antonio Health Study model showed a significant lack of fit in females and the Framingham model showed a significant lack of fit in both females and males. Conclusions We conclude that adoption of the ARIC model for Asian populations is feasible and highly recommended when local prospective data is unavailable.

  2. Models predicting non-sentinel node involvement also predict for regional recurrence in breast cancer patients without axillary treatment

    NARCIS (Netherlands)

    Pepels, M.J.; Vestjens, J.H.; Boer, M. de; Bult, P.; Dijck, J.A.A.M. van; Menke-Pluijmers, M.; Diest, P.J. van; Borm, G.; Tjan-Heijnen, V.C.

    2013-01-01

    BACKGROUND: Non-SN prediction models are frequently used in clinical decision making to identify patients that may not need axillary treatment, but these models still need to be validated by follow-up data. Our purpose was the validation of non-sentinel node (SN) prediction models in predicting

  3. Hierarchical Neural Regression Models for Customer Churn Prediction

    Directory of Open Access Journals (Sweden)

    Golshan Mohammadi

    2013-01-01

    Full Text Available As customers are the main assets of each industry, customer churn prediction is becoming a major task for companies to remain competitive. In the literature, the better applicability and efficiency of hierarchical data mining techniques has been reported. This paper considers three hierarchical models built by combining four different data mining techniques for churn prediction: backpropagation artificial neural networks (ANN), self-organizing maps (SOM), alpha-cut fuzzy c-means (α-FCM), and the Cox proportional hazards regression model. The hierarchical models are ANN + ANN + Cox, SOM + ANN + Cox, and α-FCM + ANN + Cox. In particular, the first component of each model clusters the data into churner and nonchurner groups and also filters out unrepresentative data or outliers. The clustered data are then used by the second technique to assign customers to churner and nonchurner groups. Finally, the correctly classified data are used to build the Cox proportional hazards model. To evaluate the performance of the hierarchical models, an Iranian mobile dataset is considered. The experimental results show that the hierarchical models outperform the single Cox regression baseline model in terms of prediction accuracy, Type I and II errors, RMSE, and MAD metrics. In addition, the α-FCM + ANN + Cox model performs significantly better than the two other hierarchical models.

  4. TSC STUDIES ON SUB-Tg STORAGE OF POLYETHYLENE TEREPHTHALATE

    Institute of Scientific and Technical Information of China (English)

    LI Wei; TONG Gang; ZHOU Yiqin; QI Zongneng

    1987-01-01

    The thermally stimulated current (TSC) spectra of a series of sub-Tg annealed polyethylene terephthalate (PET) specimens have been measured. It is found that there is only one peak above room temperature, at 80°C, which is related to the thermal relaxation of frozen-in dipoles. The activation energy of this dipole motion has been calculated. The relation between the maximum current and the storage time can be explained by free volume theory and agrees with the results from the excess thermodynamic properties. Compared with differential scanning calorimetry (DSC) and the tensile stress-strain method, TSC is a simpler and more sensitive method for studying sub-Tg annealed polymers.

  5. Predicting nucleic acid binding interfaces from structural models of proteins.

    Science.gov (United States)

    Dror, Iris; Shazman, Shula; Mukherjee, Srayanta; Zhang, Yang; Glaser, Fabian; Mandel-Gutfreund, Yael

    2012-02-01

    The function of DNA- and RNA-binding proteins can be inferred from the characterization and accurate prediction of their binding interfaces. However, the main pitfall of various structure-based methods for predicting nucleic acid binding function is that they are all limited to a relatively small number of proteins for which high-resolution three-dimensional structures are available. In this study, we developed a pipeline for extracting functional electrostatic patches from surfaces of protein structural models, obtained using the I-TASSER protein structure predictor. The largest positive patches are extracted from the protein surface using the patchfinder algorithm. We show that functional electrostatic patches extracted from an ensemble of structural models highly overlap the patches extracted from high-resolution structures. Furthermore, by testing our pipeline on a set of 55 known nucleic acid binding proteins for which I-TASSER produces high-quality models, we show that the method accurately identifies the nucleic acid binding interface on structural models of proteins. Employing a combined patch approach, we show that patches extracted from an ensemble of models predict the real nucleic acid binding interfaces better than patches extracted from independent models. Overall, these results suggest that combining information from a collection of low-resolution structural models could be a valuable approach for functional annotation. We suggest that our method will be further applicable for predicting other functional surfaces of proteins with unknown structure. Copyright © 2011 Wiley Periodicals, Inc.

  6. An evaporation duct prediction model coupled with the MM5

    Institute of Scientific and Technical Information of China (English)

    JIAO Lin; ZHANG Yonggang

    2015-01-01

    The evaporation duct is an abnormal refractive phenomenon in the marine atmospheric boundary layer. It is generally accepted that the evaporation duct prominently affects the performance of electronic equipment over the sea because of its wide distribution and frequent occurrence, and it has become a research focus of navies all over the world. At present, diagnostic models of the evaporation duct are all based on Monin-Obukhov similarity theory, differing only in how the fluxes and character scales in the surface layer are calculated. These models are applicable to stationary and uniform open sea areas and do not consider alongshore effects. This paper introduces the nonlinear factor av and the gust wind term wg into the Babin model, thus extending the evaporation duct diagnostic model to offshore areas under extremely low wind speeds. In addition, an evaporation duct prediction model is designed and coupled with the fifth-generation mesoscale model (MM5). Tower observational data and radar data at the Pingtan island of Fujian Province on May 25-26, 2002 were used to validate the forecast results. The outputs of the prediction model agree with the observations from 0 to 48 h. The relative error of the predicted evaporation duct height is 19.3%, and the prediction results are consistent with the radar detection.

  7. Cloud Based Metalearning System for Predictive Modeling of Biomedical Data

    Directory of Open Access Journals (Sweden)

    Milan Vukićević

    2014-01-01

    Full Text Available Rapid growth and storage of biomedical data have enabled many opportunities for predictive modeling and improvement of healthcare processes. On the other hand, the analysis of such large amounts of data is a difficult and computationally intensive task for most existing data mining algorithms. This problem is addressed by proposing a cloud-based system that integrates a metalearning framework for ranking and selecting the best predictive algorithms for the data at hand with open-source big data technologies for the analysis of biomedical data.

  8. Predictive Models of Li-ion Battery Lifetime

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Kandler; Wood, Eric; Santhanagopalan, Shriram; Kim, Gi-heon; Shi, Ying; Pesaran, Ahmad

    2015-06-15

    It remains an open question how best to predict real-world battery lifetime based on accelerated calendar and cycle aging data from the laboratory. Multiple degradation mechanisms due to (electro)chemical, thermal, and mechanical coupled phenomena influence Li-ion battery lifetime, each with different dependence on time, cycling and thermal environment. The standardization of life predictive models would benefit the industry by reducing test time and streamlining development of system controls.
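A common semi-empirical form for the coupled calendar and cycle aging the abstract describes is square-root-of-time calendar fade plus cycle-count fade, with Arrhenius temperature scaling. This is a generic sketch of that family of models, not the NREL model itself, and every coefficient below is invented for illustration.

```python
# Sketch of a semi-empirical Li-ion capacity-fade model:
# calendar fade ~ sqrt(time), cycling fade ~ number of cycles,
# with Arrhenius acceleration of calendar aging. Coefficients invented.
import math

R = 8.314  # gas constant, J/(mol K)

def capacity_fraction(t_days, n_cycles, temp_K,
                      a=2e-3, b=5e-5, Ea=50e3, T_ref=298.15):
    # Arrhenius acceleration of calendar aging relative to T_ref.
    accel = math.exp(-Ea / R * (1.0 / temp_K - 1.0 / T_ref))
    calendar_loss = a * accel * math.sqrt(t_days)
    cycle_loss = b * n_cycles
    return max(0.0, 1.0 - calendar_loss - cycle_loss)

# One year of storage at 25 C with 300 cycles, hypothetical cell.
q = capacity_fraction(365, 300, 298.15)
```

Fitting `a`, `b` and `Ea` to accelerated calendar and cycle tests, then evaluating at field conditions, is the extrapolation step whose standardization the abstract advocates.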

  9. Preoperative prediction model of outcome after cholecystectomy for symptomatic gallstones

    DEFF Research Database (Denmark)

    Borly, L; Anderson, I B; Bardram, Linda

    1999-01-01

    BACKGROUND: After cholecystectomy for symptomatic gallstone disease 20%-30% of the patients continue to have abdominal pain. The aim of this study was to investigate whether preoperative variables could predict the symptomatic outcome after cholecystectomy. METHODS: One hundred and two patients...... and sonography evaluated gallbladder motility, gallstones, and gallbladder volume. Preoperative variables in patients with or without postcholecystectomy pain were compared statistically, and significant variables were combined in a logistic regression model to predict the postoperative outcome. RESULTS: Eighty...

  10. A predictive fatigue life model for anodized 7050 aluminium alloy

    OpenAIRE

    Chaussumier, Michel; Mabru, Catherine; Shahzad, Majid; Chieragatti, Rémy; Rezaï-Aria, Farhad

    2013-01-01

    The objective of this study is to predict the fatigue life of anodized 7050 aluminum alloy specimens. In the case of the anodized 7050-T7451 alloy, fractographic observations of fatigue-tested specimens showed that pickling pits were the predominant sites for crack nucleation and subsequent failure. It has been shown that fatigue failure was favored by the presence of multiple cracks. From these experimental results, a fatigue life predictive model has been developed including...

  11. Support vector machine-based multi-model predictive control

    Institute of Scientific and Technical Information of China (English)

    Zhejing BA; Youxian SUN

    2008-01-01

    In this paper, a support vector machine-based multi-model predictive control is proposed, in which SVM classification combines well with SVM regression. First, each working environment is modeled by SVM regression, and a support vector machine network-based model predictive control (SVMN-MPC) algorithm corresponding to each environment is developed; a multi-class SVM model is then established to recognize the multiple operating conditions. For control, the current environment is identified by the multi-class SVM model and the corresponding SVMN-MPC controller is activated at each sampling instant. The proposed modeling, switching and controller design is demonstrated in simulation results.
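The switching logic described above, identify the operating condition, then activate the controller tuned for that regime, can be sketched compactly. A nearest-centroid classifier stands in for the paper's multi-class SVM, and a proportional gain stands in for the per-regime MPC solve; regimes, centroids and gains are all invented.

```python
# Sketch of multi-model switching control: classify the operating
# condition, then apply that regime's controller. Values are invented.

REGIME_CENTROIDS = {"low_load": (0.2, 0.3), "high_load": (0.8, 0.7)}
CONTROLLER_GAIN = {"low_load": 1.5, "high_load": 0.6}

def identify_regime(x):
    # Nearest centroid in feature space (stand-in for a multi-class SVM).
    def dist2(c):
        return sum((xi - ci) ** 2 for xi, ci in zip(x, c))
    return min(REGIME_CENTROIDS, key=lambda r: dist2(REGIME_CENTROIDS[r]))

def control_action(x, setpoint, measurement):
    regime = identify_regime(x)
    # Proportional action as a stand-in for the per-regime MPC solve.
    return regime, CONTROLLER_GAIN[regime] * (setpoint - measurement)

regime, u = control_action((0.25, 0.35), setpoint=1.0, measurement=0.8)
```

Running this at every sampling instant reproduces the "identify, then switch controller" loop of the abstract.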

  12. Robust Model Predictive Control of a Wind Turbine

    DEFF Research Database (Denmark)

    Mirzaei, Mahmood; Poulsen, Niels Kjølstad; Niemann, Hans Henrik

    2012-01-01

    In this work the problem of robust model predictive control (robust MPC) of a wind turbine in the full load region is considered. A minimax robust MPC approach is used to tackle the problem. Nonlinear dynamics of the wind turbine are derived by combining blade element momentum (BEM) theory...... and first principle modeling of the turbine flexible structure. Thereafter the nonlinear model is linearized using Taylor series expansion around system operating points. Operating points are determined by effective wind speed and an extended Kalman filter (EKF) is employed to estimate this. In addition...... of the uncertain system is employed and a norm-bounded uncertainty model is used to formulate a minimax model predictive control. The resulting optimization problem is simplified by semidefinite relaxation and the controller obtained is applied on a full complexity, high fidelity wind turbine model. Finally...

  13. TgCDPK3 Regulates Calcium-Dependent Egress of Toxoplasma gondii from Host Cells

    Science.gov (United States)

    McCoy, James M.; Whitehead, Lachlan; van Dooren, Giel G.; Tonkin, Christopher J.

    2012-01-01

    The phylum Apicomplexa comprises a group of obligate intracellular parasites of broad medical and agricultural significance, including Toxoplasma gondii and the malaria-causing Plasmodium spp. Key to their parasitic lifestyle is the need to egress from an infected cell, actively move through tissue, and reinvade another cell, thus perpetuating infection. Ca2+-mediated signaling events modulate key steps required for host cell egress, invasion and motility, including secretion of microneme organelles and activation of the force-generating actomyosin-based motor. Here we show that a plant-like Calcium-Dependent Protein Kinase (CDPK) in T. gondii, TgCDPK3, which localizes to the inner side of the plasma membrane, is not essential to the parasite but is required for optimal in vitro growth. We demonstrate that TgCDPK3, the orthologue of Plasmodium PfCDPK1, regulates Ca2+ ionophore- and DTT-induced host cell egress, but not motility or invasion. Furthermore, we show that targeting to the inner side of the plasma membrane by dual acylation is required for its activity. Interestingly, TgCDPK3 regulates microneme secretion when parasites are intracellular but not extracellular. Indeed, the requirement for TgCDPK3 is most likely determined by the high K+ concentration of the host cell. Our results therefore suggest that TgCDPK3's role differs from that previously hypothesized, and rather support a model where this kinase plays a role in rapidly responding to Ca2+ signaling in specific ionic environments to upregulate multiple processes required for gliding motility. PMID:23226109

  14. A Prediction Model of MF Radiation in Environmental Assessment

    Institute of Scientific and Technical Information of China (English)

    HE-SHAN GE; YAN-FENG HONG

    2006-01-01

    Objective To predict the impact of MF radiation on human health. Methods The vertical distribution of field intensity was estimated by analogy on the basis of measured values from a simulation measurement. Results A kind of analogy method based on a geometric-proportion decay pattern is put forward in this paper. It showed that, with increasing height, the field intensity increased according to a geometric proportion law. Conclusion This geometric proportion prediction model can be used to estimate the impact of MF radiation on the inhabited environment, and can serve as a reference pattern in predicting the environmental impact level of MF radiation.
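The geometric-proportion extrapolation is simple enough to state as code: starting from a reference measurement, multiply by a constant ratio for every height step. This is a sketch of the abstract's idea; the reference value, step and ratio below are invented.

```python
# Geometric-proportion extrapolation of field intensity with height
# (illustrative values only).

def field_at_height(e_ref, h_ref, h, step, ratio):
    """Extrapolate intensity from height h_ref to h in increments of
    `step`, multiplying by `ratio` per step (ratio > 1: grows with height)."""
    n_steps = (h - h_ref) / step
    return e_ref * ratio ** n_steps

# Hypothetical: 2.0 V/m measured at 2 m, ratio 1.1 per metre of height.
e10 = field_at_height(2.0, 2.0, 10.0, step=1.0, ratio=1.1)
```

The ratio would be fitted from the simulation measurements the abstract mentions; here it is a placeholder.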

  15. A Comprehensive Behavioral Test Battery to Assess Learning and Memory in 129S6/Tg2576 Mice.

    Science.gov (United States)

    Wolf, Andrea; Bauer, Björn; Abner, Erin L; Ashkenazy-Frolinger, Tal; Hartz, Anika M S

    2016-01-01

    Transgenic Tg2576 mice overexpressing human amyloid precursor protein (hAPP) are a widely used Alzheimer's disease (AD) mouse model to evaluate treatment effects on amyloid beta (Aβ) pathology and cognition. Tg2576 mice on a B6;SJL background strain carry a recessive rd1 mutation that leads to early retinal degeneration and visual impairment in homozygous carriers. This can impair performance in behavioral tests that rely on visual cues, and thus affect study results. Therefore, B6;SJL/Tg2576 mice were systematically backcrossed with 129S6/SvEvTac mice, resulting in 129S6/Tg2576 mice that lack the rd1 mutation. 129S6/Tg2576 mice do not develop retinal degeneration but still show Aβ accumulation in the brain that is comparable to the original B6;SJL/Tg2576 mouse. However, comprehensive studies on cognitive decline in 129S6/Tg2576 mice are limited. In this study, we used two dementia mouse models on a 129S6 background, scopolamine-treated 129S6/SvEvTac mice (3-5 months old) and transgenic 129S6/Tg2576 mice (11-13 months old), to establish a behavioral test battery for assessing learning and memory. The test battery consisted of five tests to evaluate different aspects of cognitive impairment: a Y-maze forced alternation task, a novel object recognition test, the Morris water maze, the radial arm water maze, and a Y-maze spontaneous alternation task. We first established this behavioral test battery with the scopolamine-induced dementia model using 129S6/SvEvTac mice and then evaluated 129S6/Tg2576 mice using the same testing protocol. Both models showed distinctive patterns of cognitive impairment. Together, the non-invasive behavioral test battery presented here allows the detection of cognitive impairment in scopolamine-treated 129S6/SvEvTac mice and in transgenic 129S6/Tg2576 mice. Due to the modular nature of this test battery, more behavioral tests, e.g. invasive assays to gain additional cognitive information, can easily be added.

  16. Community monitoring for youth violence surveillance: testing a prediction model.

    Science.gov (United States)

    Henry, David B; Dymnicki, Allison; Kane, Candice; Quintana, Elena; Cartland, Jenifer; Bromann, Kimberly; Bhatia, Shaun; Wisnieski, Elise

    2014-08-01

    Predictive epidemiology is an embryonic field that involves developing informative signatures for disorder and tracking them using surveillance methods. Through such efforts assistance can be provided to the planning and implementation of preventive interventions. Believing that certain minor crimes indicative of gang activity are informative signatures for the emergence of serious youth violence in communities, in this study we aim to predict outbreaks of violence in neighborhoods from pre-existing levels and changes in reports of minor offenses. We develop a prediction equation that uses publicly available neighborhood-level data on disorderly conduct, vandalism, and weapons violations to predict neighborhoods likely to have increases in serious violent crime. Data for this study were taken from the Chicago Police Department ClearMap reporting system, which provided data on index and non-index crimes for each of the 844 Chicago census tracts. Data were available in three month segments for a single year (fall 2009, winter, spring, and summer 2010). Predicted change in aggravated battery and overall violent crime correlated significantly with actual change. The model was evaluated by comparing alternative models using randomly selected training and test samples, producing favorable results with reference to overfitting, seasonal variation, and spatial autocorrelation. A prediction equation based on winter and spring levels of the predictors had area under the curve ranging from .65 to .71 for aggravated battery, and .58 to .69 for overall violent crime. We discuss future development of such a model and its potential usefulness in violence prevention and community policing.
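The abstract reports model performance as area under the ROC curve (AUC). A self-contained rank-based AUC, the probability that a randomly chosen positive case outranks a randomly chosen negative one, makes the metric concrete; the neighborhood scores and labels below are invented.

```python
# Rank-based AUC: P(score of a random positive > score of a random
# negative), with ties counted as half. Data are invented.

def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical neighborhood risk scores and observed violence increases.
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0, 0]
a = auc(scores, labels)
```

An AUC of 0.5 corresponds to random prediction and 1.0 to perfect ranking, which frames the abstract's .58-.71 range.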

  17. Modeling the prediction of business intelligence system effectiveness.

    Science.gov (United States)

    Weng, Sung-Shun; Yang, Ming-Hsien; Koo, Tian-Lih; Hsiao, Pei-I

    2016-01-01

    Although business intelligence (BI) technologies are continually evolving, the capability to apply BI technologies has become an indispensable resource for enterprises operating in today's complex, uncertain and dynamic business environment. This study performed pioneering work by constructing models and rules for the prediction of business intelligence system effectiveness (BISE) in relation to the implementation of BI solutions. For enterprises, effectively managing the critical attributes that determine BISE and developing prediction models with a set of rules for self-evaluation of the effectiveness of BI solutions is necessary to improve BI implementation and ensure its success. The main study findings identified the critical prediction indicators of BISE that are important for forecasting BI performance and highlighted five classification and prediction rules of BISE derived from decision tree structures, as well as a refined regression prediction model with four critical prediction indicators constructed by logistic regression analysis, which can enable enterprises to improve BISE while effectively managing BI solution implementation, and which offers a reference for academics to whom theory is important.

  18. Charge transport model to predict intrinsic reliability for dielectric materials

    Energy Technology Data Exchange (ETDEWEB)

    Ogden, Sean P. [Howard P. Isermann Department of Chemical and Biological Engineering, Rensselaer Polytechnic Institute, Troy, New York 12180 (United States); GLOBALFOUNDRIES, 400 Stonebreak Rd. Ext., Malta, New York 12020 (United States); Borja, Juan; Plawsky, Joel L., E-mail: plawsky@rpi.edu; Gill, William N. [Howard P. Isermann Department of Chemical and Biological Engineering, Rensselaer Polytechnic Institute, Troy, New York 12180 (United States); Lu, T.-M. [Department of Physics, Rensselaer Polytechnic Institute, Troy, New York 12180 (United States); Yeap, Kong Boon [GLOBALFOUNDRIES, 400 Stonebreak Rd. Ext., Malta, New York 12020 (United States)

    2015-09-28

    Several lifetime models, mostly empirical in nature, are used to predict reliability for low-k dielectrics used in integrated circuits. There is a dispute over which model provides the most accurate prediction for device lifetime at operating conditions. As a result, there is a need to transition from the use of these largely empirical models to one built entirely on theory. Therefore, a charge transport model was developed to predict the device lifetime of low-k interconnect systems. The model is based on electron transport and donor-type defect formation. Breakdown occurs when a critical defect concentration accumulates, resulting in electron tunneling and the emptying of positively charged traps. The enhanced local electric field lowers the barrier for electron injection into the dielectric, causing a positive feedforward failure. The charge transport model is able to replicate experimental I-V and I-t curves, capturing the current decay at early stress times and the rapid current increase at failure. The model is based on field-driven and current-driven failure mechanisms and uses a minimal number of parameters. All the parameters have some theoretical basis or have been measured experimentally and are not directly used to fit the slope of the time-to-failure versus applied field curve. Despite this simplicity, the model is able to accurately predict device lifetime for three different sources of experimental data. The simulation's predictions at low fields and very long lifetimes show that the use of a single empirical model can lead to inaccuracies in device reliability.

  19. In silico modeling to predict drug-induced phospholipidosis

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Sydney S.; Kim, Jae S.; Valerio, Luis G., E-mail: luis.valerio@fda.hhs.gov; Sadrieh, Nakissa

    2013-06-01

    Drug-induced phospholipidosis (DIPL) is a preclinical finding during pharmaceutical drug development with implications for the course of drug development and regulatory safety review. A principal characteristic of drugs inducing DIPL is known to be a cationic amphiphilic structure. This provides evidence for a structure-based explanation and an opportunity to analyze the properties and structures of drugs with histopathologic findings for DIPL. In previous work from the FDA, in silico quantitative structure–activity relationship (QSAR) modeling using machine learning approaches showed promise with a large dataset of drugs, but that dataset included unconfirmed data as well. In this study, we report the construction and validation of a battery of complementary in silico QSAR models using the FDA's updated database on phospholipidosis and new algorithms and predictive technologies; in particular, we address high performance with a high-confidence dataset. The results of our modeling for DIPL include rigorous external validation tests showing 80–81% concordance. Furthermore, the predictive performance characteristics include models with high sensitivity and specificity, in most cases at or above 80%, leading to the desired high negative and positive predictivity. These models are intended to be utilized for regulatory toxicology applied science needs in screening new drugs for DIPL. - Highlights: • New in silico models for predicting drug-induced phospholipidosis (DIPL) are described. • The training set data in the models are derived from the FDA's phospholipidosis database. • We find excellent predictivity values for the models based on external validation. • The models can support drug screening and regulatory decision-making on DIPL.
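
    The external-validation figures quoted above (concordance, sensitivity, specificity, and negative/positive predictivity) all derive from a 2×2 confusion matrix. A minimal sketch of the arithmetic, using hypothetical counts rather than the FDA dataset:

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard 2x2 confusion-matrix metrics used in QSAR model validation."""
    total = tp + fp + tn + fn
    return {
        "concordance": (tp + tn) / total,  # overall agreement with observations
        "sensitivity": tp / (tp + fn),     # true-positive rate
        "specificity": tn / (tn + fp),     # true-negative rate
        "ppv": tp / (tp + fp),             # positive predictivity
        "npv": tn / (tn + fn),             # negative predictivity
    }

# Hypothetical external-validation counts (illustrative, not the FDA data):
m = binary_metrics(tp=40, fp=10, tn=40, fn=10)
```

    With these counts all five statistics come out at 0.8, in the same range as the performance the abstract reports.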

  20. A novel mutation in GJB1 (c.212T>G) in a Chinese family with X-linked Charcot-Marie-Tooth disease.

    Science.gov (United States)

    Xiao, Fei; Tan, Jia-ze; Zhang, Xu; Wang, Xue-Feng

    2015-03-01

    Gap junction protein beta 1 (GJB1) gene mutations lead to X-linked Charcot-Marie-Tooth (CMTX) disease. We investigated a Chinese family with CMTX and identified a novel GJB1 point mutation. Clinical and electrophysiological features of the pedigree were examined, and sequence alterations in the coding region of GJB1, which encodes connexin32, were determined by direct sequencing. Sequence alignment of the mutation site was performed using ClustalW. Mutation effects were analysed using PolyPhen-2, SIFT and MutationTaster software. The three-dimensional structures of the mutant and wild-type proteins were predicted by modeling with the SWISS-MODEL online server. The affected family members displayed typical Charcot-Marie-Tooth phenotypes, but phenotypic heterogeneity was observed. Nerve conduction velocities of all affected patients were slow. Sequencing of GJB1 revealed a heterozygous T>G missense mutation at nucleotide 212 in the proband, the proband's mother and the proband's daughter. The affected male sibling of the proband displayed a hemizygous missense mutation with the T>G transition at the identical position in the GJB1 gene. This mutation results in an amino acid change from isoleucine to serine that was predicted to lead to tertiary structural alterations that would disrupt the function of the GJB1 protein. A novel point mutation in GJB1 was detected, expanding the spectrum of GJB1 mutations known to be associated with CMTX.

  1. Should we believe model predictions of future climate change? (Invited)

    Science.gov (United States)

    Knutti, R.

    2009-12-01

    As computers get faster and our understanding of the climate system improves, climate models to predict the future are getting more complex by including more and more processes, and they are run at higher and higher resolution to resolve more of the small-scale processes. As a result, some of the simulated features and structures, e.g., ocean eddies or tropical cyclones, look surprisingly real. But are these appearances deceptive? A pattern can look perfectly real but be in the wrong place. So can the current global models really provide the kind of information on local scales and on the quantities (e.g., extreme events) that a decision maker would need to know to invest, for example, in adaptation? A closer look indicates that evaluating the skill of climate models and quantifying uncertainties in predictions is very difficult. This presentation shows that while models are improving in simulating the climate features we observe (e.g., the present-day mean state, or the El Nino Southern Oscillation), the spread from multiple models in predicting future changes is often not decreasing. The main problem is that (unlike with weather forecasts, for example) we cannot evaluate the model on a prediction (for example, for the year 2100) and we have to use present or past changes as metrics of skill. But there are infinite ways of testing a model, and many metrics used to test models do not clearly relate to the prediction. Therefore there is little agreement in the community on metrics to separate ‘good’ and ‘bad’ models, and there is a concern that model development, evaluation and posterior weighting or ranking of models all use the same datasets. While models are continuously improving in representing what we believe to be the key processes, many models also share ideas, parameterizations or even pieces of model code. The current models can therefore not be considered independent. Robustness of a model simulated result is often interpreted as increasing the confidence

  2. Tank System Integrated Model: A Cryogenic Tank Performance Prediction Program

    Science.gov (United States)

    Bolshinskiy, L. G.; Hedayat, A.; Hastings, L. J.; Sutherlin, S. G.; Schnell, A. R.; Moder, J. P.

    2017-01-01

    Accurate predictions of the thermodynamic state of the cryogenic propellants, pressurization rate, and performance of pressure control techniques in cryogenic tanks are required for development of cryogenic fluid long-duration storage technology and planning for future space exploration missions. This Technical Memorandum (TM) presents the analytical tool, Tank System Integrated Model (TankSIM), which can be used for modeling pressure control and predicting the behavior of cryogenic propellant for long-term storage for future space missions. Utilizing TankSIM, the following processes can be modeled: tank self-pressurization, boiloff, ullage venting, mixing, and condensation on the tank wall. This TM also includes comparisons of TankSIM program predictions with test data and examples of multiphase mission calculations.

  3. Improving Saliency Models by Predicting Human Fixation Patches

    KAUST Repository

    Dubey, Rachit

    2015-04-16

    There is growing interest in studying the Human Visual System (HVS) to supplement and improve the performance of computer vision tasks. A major challenge for current visual saliency models is predicting saliency in cluttered scenes (i.e., a high false positive rate). In this paper, we propose a fixation patch detector that predicts image patches that contain human fixations with high probability. Our proposed model detects sparse fixation patches with an accuracy of 84% and eliminates non-fixation patches with an accuracy of 84%, demonstrating that low-level image features can indeed be used to short-list and identify human fixation patches. We then show how these detected fixation patches can be used as saliency priors for popular saliency models, thus reducing false positives while maintaining true positives. Extensive experimental results show that our proposed approach allows state-of-the-art saliency methods to achieve better prediction performance on benchmark datasets.

  4. Model for Predicting End User Web Page Response Time

    CERN Document Server

    Nagarajan, Sathya Narayanan

    2012-01-01

    Perceived responsiveness of a web page is one of the most important and least understood metrics of web page design, and is critical for attracting and maintaining a large audience. Web pages can be designed to meet performance SLAs early in the product lifecycle if there is a way to predict the apparent responsiveness of a particular page layout. Response time of a web page is largely influenced by page layout and various network characteristics. Since the network characteristics vary widely from country to country, accurately modeling and predicting the perceived responsiveness of a web page from the end user's perspective has traditionally proven very difficult. We propose a model for predicting end user web page response time based on web page, network, browser download and browser rendering characteristics. We start by understanding the key parameters that affect perceived response time. We then model each of these parameters individually using experimental tests and statistical techniques. Finally, we d...

  5. Mantis: Predicting System Performance through Program Analysis and Modeling

    CERN Document Server

    Chun, Byung-Gon; Lee, Sangmin; Maniatis, Petros; Naik, Mayur

    2010-01-01

    We present Mantis, a new framework that automatically predicts program performance with high accuracy. Mantis integrates techniques from programming languages and machine learning for performance modeling, and is a radical departure from traditional approaches. Mantis extracts program features, which capture information about program execution runs, through program instrumentation. It uses machine learning techniques to select the features relevant to performance and creates prediction models as a function of the selected features. Through program analysis, it then generates compact code slices that compute these feature values for prediction. Our evaluation shows that Mantis can achieve more than 93% accuracy with less than 10% of the data set used for training, a significant improvement over models that are oblivious to program features. The generated code slices are cheap to execute, making feature values inexpensive to compute.

  6. Meteorological Drought Prediction Using a Multi-Model Ensemble Approach

    Science.gov (United States)

    Chen, L.; Mo, K. C.; Zhang, Q.; Huang, J.

    2013-12-01

    In the United States, drought is among the costliest natural hazards, causing an annual average of 6 billion dollars in damage. Drought prediction on monthly to seasonal time scales is of critical importance to disaster mitigation, agricultural planning, and multi-purpose reservoir management. Since December 2012, the NOAA Climate Prediction Center (CPC) has provided operational Standardized Precipitation Index (SPI) Outlooks using the National Multi-Model Ensemble (NMME) forecasts, to support CPC's monthly drought outlooks and briefing activities. The current NMME system consists of six model forecasts from U.S. and Canadian modeling centers: the CFSv2, CM2.1, GEOS-5, CCSM3.0, CanCM3, and CanCM4 models. In this study, we assess meteorological drought predictability using the retrospective NMME forecasts for the period 1982 to 2010. Before predicting SPI, monthly-mean precipitation (P) forecasts from each model were bias corrected and spatially downscaled (BCSD) to regional grids of 0.5-degree resolution over the contiguous United States, based on probability distribution functions derived from the hindcasts. The corrected P forecasts were then appended to the CPC Unified Precipitation Analysis to form a P time series for computing 3-month and 6-month SPIs. The ensemble SPI forecasts are the equally weighted mean of the six model forecasts. Two performance measures, the anomaly correlation and the root-mean-square error against observations, are used to evaluate forecast skill. For P forecasts, errors vary among models, and skill generally is low after the second month. All model P forecasts have higher skill in winter and lower skill in summer. In wintertime, BCSD improves both P and SPI forecast skill. Most improvements are over the western mountainous regions and along the Great Lakes. Overall, SPI predictive skill is regionally and seasonally dependent. The six-month SPI forecasts are skillful out to four months.
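
    The SPI computation described above (accumulate precipitation over a 3- or 6-month window, then transform the totals to standard-normal scores) can be sketched as follows. Operational SPI fits a gamma distribution to the accumulated totals; the version below substitutes an empirical rank-based transform as a simplification, and the precipitation series is synthetic:

```python
from statistics import NormalDist

def spi(precip, window=3):
    """Empirical Standardized Precipitation Index.

    Accumulates precipitation over `window` months, converts each sum to a
    plotting-position percentile within the record, then maps that percentile
    to a standard-normal z-score. (Operational SPI fits a gamma distribution
    instead; equal sums here share the same rank.)
    """
    sums = [sum(precip[i - window + 1:i + 1]) for i in range(window - 1, len(precip))]
    n = len(sums)
    ranks = {s: r for r, s in enumerate(sorted(sums), start=1)}
    nd = NormalDist()
    return [nd.inv_cdf(ranks[s] / (n + 1)) for s in sums]

# 24 months of synthetic precipitation (mm), 3-month SPI:
z = spi([30 + 10 * ((i * 7) % 13) for i in range(24)], window=3)
```

    Positive z-scores mark anomalously wet windows and negative scores mark drought; the wettest 3-month window in the record receives the highest SPI.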

  7. Consumer Choice Prediction: Artificial Neural Networks versus Logistic Models

    Directory of Open Access Journals (Sweden)

    Christopher Gan

    2005-01-01

    Full Text Available Conventional econometric models, such as discriminant analysis and logistic regression, have been used to predict consumer choice. However, in recent years there has been growing interest in applying artificial neural networks (ANN) to analyse consumer behaviour and to model the consumer decision-making process. The purpose of this paper is to empirically compare the predictive power of the probabilistic neural network (PNN), a special class of neural network, and a multilayer feedforward network (MLFN) with a logistic model on consumers' choices between electronic banking and non-electronic banking. Data for this analysis were obtained through a mail survey sent to 1,960 New Zealand households. The questionnaire gathered information on the factors consumers use to decide between electronic and non-electronic banking. The factors include service quality dimensions, perceived risk factors, user input factors, price factors, service product characteristics and individual factors. In addition, demographic variables including age, gender, marital status, ethnic background, educational qualification, employment, income and area of residence are considered in the analysis. Empirical results showed that both ANN models (MLFN and PNN) exhibit a higher overall percentage correct on consumer choice predictions than the logistic model. Furthermore, the PNN proved to be the best predictive model, since it has the highest overall percentage correct and a very low percentage error on both Type I and Type II errors.

  8. Mathematical models for predicting indoor air quality from smoking activity.

    Science.gov (United States)

    Ott, W R

    1999-05-01

    Much progress has been made over four decades in developing, testing, and evaluating the performance of mathematical models for predicting pollutant concentrations from smoking in indoor settings. Although largely overlooked by the regulatory community, these models provide regulators and risk assessors with practical tools for quantitatively estimating the exposure level that people receive indoors for a given level of smoking activity. This article reviews the development of the mass balance model and its application to predicting indoor pollutant concentrations from cigarette smoke and derives the time-averaged version of the model from the basic laws of conservation of mass. A simple table is provided of computed respirable particulate concentrations for any indoor location for which the active smoking count, volume, and concentration decay rate (deposition rate combined with air exchange rate) are known. Using the indoor ventilatory air exchange rate causes slightly higher indoor concentrations and therefore errs on the side of protecting health, since it excludes particle deposition effects, whereas using the observed particle decay rate gives a more accurate prediction of indoor concentrations. This table permits easy comparisons of indoor concentrations with air quality guidelines and indoor standards for different combinations of active smoking counts and air exchange rates. The published literature on mathematical models of environmental tobacco smoke also is reviewed and indicates that these models generally give good agreement between predicted concentrations and actual indoor measurements.
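
    For a well-mixed room, the mass balance model this article reviews reduces to dC/dt = g·N/V − φ·C, whose steady state is g·N/(φ·V). A minimal sketch with illustrative values (the emission rate, room volume, and decay rate below are assumptions for the example, not figures from the article):

```python
def indoor_concentration(emission_rate, n_smokers, volume, decay_rate,
                         duration_h, dt=0.001):
    """Well-mixed single-compartment mass-balance model, forward Euler:

        dC/dt = g*N/V - phi*C

    emission_rate g is per-smoker source strength (mg/h), volume V in m^3,
    decay_rate phi in 1/h (air exchange plus deposition). Returns the
    concentration time series in mg/m^3.
    """
    c, series = 0.0, []
    for _ in range(int(duration_h / dt)):
        c += dt * (emission_rate * n_smokers / volume - decay_rate * c)
        series.append(c)
    return series

# Illustrative scenario: 2 smokers emitting 14 mg/h of respirable particles
# each, a 50 m^3 room, total decay rate 1.5 /h, 8 h of continuous smoking.
series = indoor_concentration(14.0, 2, 50.0, 1.5, 8.0)
steady_state = 14.0 * 2 / (1.5 * 50.0)  # g*N/(phi*V), approx 0.37 mg/m^3
```

    After several decay time constants the simulated concentration approaches the closed-form steady state, which is the quantity tabulated in the article for various smoking counts and air exchange rates.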

  9. Testing the Predictions of the Universal Structured GRB Jet Model

    CERN Document Server

    Nakar, E; Guetta, D; Nakar, Ehud; Granot, Jonathan; Guetta, Dafne

    2004-01-01

    The two leading models for the structure of GRB jets are (1) the uniform jet model, where the energy per solid angle, $\epsilon$, is roughly constant within some finite half-opening angle, $\theta_j$, and sharply drops outside of $\theta_j$, and (2) the universal structured jet (USJ) model, where all GRB jets are intrinsically identical, and $\epsilon$ drops as the inverse square of the angle from the jet axis. The simplicity of the USJ model gives it a strong predictive power, including a specific prediction for the observed GRB distribution as a function of both the redshift $z$ and the viewing angle $\theta$ from the jet axis. We show that the current sample of GRBs with known $z$ and estimated $\theta$ does not agree with the predictions of the USJ model. This can be best seen for a relatively narrow range in $z$, in which the USJ model predicts that most GRBs should be near the upper end of the observed range in $\theta$, while in the observed sample most GRBs are near the lower end of that range. Since ...

  10. Predicting functional brain ROIs via fiber shape models.

    Science.gov (United States)

    Zhang, Tuo; Guo, Lei; Li, Kaiming; Zhu, Dajing; Cui, Guangbin; Liu, Tianming

    2011-01-01

    Study of the structural and functional connectivities of the human brain has received significant interest and effort recently. A fundamental question arises when attempting to measure the structural and/or functional connectivities of specific brain networks: how best to identify possible Regions of Interest (ROIs)? In this paper, we present a novel ROI prediction framework that localizes ROIs in individual brains based on fiber shape models learned from multimodal task-based fMRI and diffusion tensor imaging (DTI) data. In the training stage, ROIs are identified as activation peaks in task-based fMRI data. Then, shape models of the white matter fibers emanating from these functional ROIs are learned. In addition, a model of the ROIs' location distribution is learned for use as an anatomical constraint. In the prediction stage, functional ROIs are predicted in individual brains based on DTI data. The ROI prediction is formulated and solved as an energy minimization problem, in which the two learned models are used as energy terms. Our experimental results show that the average ROI prediction error is 3.45 mm, in comparison with the benchmark data provided by working memory task-based fMRI. Promising results were also obtained on the ADNI-2 longitudinal DTI dataset.

  11. Land-ice modeling for sea-level prediction

    Energy Technology Data Exchange (ETDEWEB)

    Lipscomb, William H [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2010-06-11

    There has been major progress in ice sheet modeling since IPCC AR4. We will soon have efficient higher-order ice sheet models that can run at ~1 km resolution for entire ice sheets, either standalone or coupled to GCMs. These models should significantly reduce uncertainties in sea-level predictions. However, the least certain and potentially greatest contributions to 21st century sea-level rise may come from ice-ocean interactions, especially in West Antarctica. This is a coupled modeling problem that requires collaboration among ice, ocean and atmosphere modelers.

  12. Support vector regression model for complex target RCS predicting

    Institute of Scientific and Technical Information of China (English)

    Wang Gu; Chen Weishi; Miao Jungang

    2009-01-01

    Electromagnetic scattering computation has developed rapidly over many years, yet some computing problems for complex and coated targets cannot be solved using existing theory and computing models. A data-driven computing model is established to compensate for the insufficiency of theoretical models. Based on the support vector regression method, which is formulated on the principle of minimizing structural risk, a data model is given to predict the unknown radar cross section of designated targets. Comparison between the actual data and the results of this predictive model based on support vector regression shows that the method is workable and reasonably precise.

  13. Statistical procedures for evaluating daily and monthly hydrologic model predictions

    Science.gov (United States)

    Coffey, M.E.; Workman, S.R.; Taraba, J.L.; Fogle, A.W.

    2004-01-01

    The overall study objective was to evaluate the applicability of different qualitative and quantitative methods for comparing daily and monthly SWAT computer model hydrologic streamflow predictions to observed data, and to recommend statistical methods for use in future model evaluations. Statistical methods were tested using daily streamflows and monthly equivalent runoff depths. The statistical techniques included linear regression, Nash-Sutcliffe efficiency, nonparametric tests, t-test, objective functions, autocorrelation, and cross-correlation. None of the methods specifically applied to the non-normal distribution and dependence between data points for the daily predicted and observed data. Of the tested methods, median objective functions, sign test, autocorrelation, and cross-correlation were most applicable for the daily data. The robust coefficient of determination (CD*) and robust modeling efficiency (EF*) objective functions were the preferred methods for daily model results due to the ease of comparing these values with a fixed ideal reference value of one. Predicted and observed monthly totals were more normally distributed, and there was less dependence between individual monthly totals than was observed for the corresponding predicted and observed daily values. More statistical methods were available for comparing SWAT model-predicted and observed monthly totals. The 1995 monthly SWAT model predictions and observed data had a regression R² of 0.70, a Nash-Sutcliffe efficiency of 0.41, and the t-test failed to reject the equal data means hypothesis. The Nash-Sutcliffe coefficient and the R² coefficient were the preferred methods for monthly results due to the ability to compare these coefficients to a set ideal value of one.
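
    Of the statistics discussed above, the Nash-Sutcliffe efficiency is simple enough to sketch directly: it compares the model's squared errors to the variance of the observations, so 1 is a perfect fit and 0 means the model is no better than predicting the observed mean.

```python
def nash_sutcliffe(observed, predicted):
    """Nash-Sutcliffe efficiency: 1 - SSE / SST.

    SSE is the sum of squared prediction errors; SST is the total sum of
    squares about the observed mean.
    """
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    sst = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / sst

obs = [1.0, 2.0, 3.0, 4.0, 5.0]
assert nash_sutcliffe(obs, obs) == 1.0        # perfect prediction
assert nash_sutcliffe(obs, [3.0] * 5) == 0.0  # predicting the mean scores 0
```

    Negative values are possible and indicate a model worse than the observed mean, which is why the study treats one as the ideal reference value.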

  14. Numerical Weather Prediction (NWP) and hybrid ARMA/ANN model to predict global radiation

    CERN Document Server

    Voyant, Cyril; Paoli, Christophe; Nivet, Marie Laure

    2012-01-01

    We propose in this paper an original technique to predict global radiation using a hybrid ARMA/ANN model and data from a numerical weather prediction model (ALADIN). We focus in particular on the multi-layer perceptron. After optimizing our architecture with ALADIN and endogenous data previously made stationary, and using an innovative pre-input layer selection method, we combined it with an ARMA model via a rule based on the analysis of hourly data series. This model has been used to forecast hourly global radiation for five places in the Mediterranean area. Our technique outperforms classical models for all the places. The nRMSE for our hybrid ANN/ARMA model is 14.9%, compared to 26.2% for the naïve persistence predictor. Note that in the stand-alone ANN case the nRMSE is 18.4%. Finally, in order to discuss the reliability of the forecaster outputs, a complementary study concerning the confidence interval of each prediction is proposed

  15. Predicting human walking gaits with a simple planar model.

    Science.gov (United States)

    Martin, Anne E; Schmiedeler, James P

    2014-04-11

    Models of human walking with moderate complexity have the potential to accurately capture both joint kinematics and whole body energetics, thereby offering more simultaneous information than very simple models and less computational cost than very complex models. This work examines four- and six-link planar biped models with knees and rigid circular feet. The two differ in that the six-link model includes ankle joints. Stable periodic walking gaits are generated for both models using a hybrid zero dynamics-based control approach. To establish a baseline of how well the models can approximate normal human walking, gaits were optimized to match experimental human walking data, ranging in speed from very slow to very fast. The six-link model well matched the experimental step length, speed, and mean absolute power, while the four-link model did not, indicating that ankle work is a critical element in human walking models of this type. Beyond simply matching human data, the six-link model can be used in an optimization framework to predict normal human walking using a torque-squared objective function. The model well predicted experimental step length, joint motions, and mean absolute power over the full range of speeds.

  16. Towards predictive food process models: A protocol for parameter estimation.

    Science.gov (United States)

    Vilas, Carlos; Arias-Méndez, Ana; Garcia, Miriam R; Alonso, Antonio A; Balsa-Canto, E

    2016-05-31

    Mathematical models, in particular physics-based models, are essential tools for food product and process design, optimization and control. The success of mathematical models relies on their predictive capabilities. However, describing physical, chemical and biological changes in food processing requires the values of some, typically unknown, parameters. Therefore, parameter estimation from experimental data is critical to achieving the desired model predictive properties. This work takes a new look at the parameter estimation (or identification) problem in food process modeling. First, we examine common pitfalls such as lack of identifiability and multimodality. Second, we present the theoretical background of a parameter identification protocol intended to deal with those challenges. Finally, we illustrate the performance of the proposed protocol with an example related to the thermal processing of packaged foods.
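
    As a toy instance of the estimation problem the protocol addresses, consider recovering the rate constant k of a first-order kinetic model C(t) = C0·e^(−kt) by log-linear least squares. This is a deliberately simple stand-in for the identifiability-aware protocol in the paper; the model, data, and values are invented for illustration:

```python
import math

def fit_first_order_rate(times, concentrations):
    """Estimate k in C(t) = C0*exp(-k*t) by linear regression of ln(C) on t.

    Taking logs turns the exponential decay into a straight line whose
    slope is -k, so ordinary least squares recovers the parameter.
    """
    y = [math.log(c) for c in concentrations]
    n = len(times)
    tbar, ybar = sum(times) / n, sum(y) / n
    slope = (sum((t - tbar) * (yi - ybar) for t, yi in zip(times, y))
             / sum((t - tbar) ** 2 for t in times))
    return -slope

# Noise-free synthetic data generated with k = 0.3 /min and C0 = 10:
t = [0.0, 2.0, 4.0, 6.0, 8.0]
c = [10.0 * math.exp(-0.3 * ti) for ti in t]
k_hat = fit_first_order_rate(t, c)
```

    With noise-free data the estimator recovers k essentially exactly; the pitfalls the paper discusses (identifiability, multimodality) only bite once the model has several interacting parameters and the data are noisy.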

  17. Prediction of speech intelligibility based on an auditory preprocessing model

    DEFF Research Database (Denmark)

    Christiansen, Claus Forup Corlin; Pedersen, Michael Syskind; Dau, Torsten

    2010-01-01

    Classical speech intelligibility models, such as the speech transmission index (STI) and the speech intelligibility index (SII) are based on calculations on the physical acoustic signals. The present study predicts speech intelligibility by combining a psychoacoustically validated model of auditory...... preprocessing [Dau et al., 1997. J. Acoust. Soc. Am. 102, 2892-2905] with a simple central stage that describes the similarity of the test signal with the corresponding reference signal at a level of the internal representation of the signals. The model was compared with previous approaches, whereby a speech...... in noise experiment was used for training and an ideal binary mask experiment was used for evaluation. All three models were able to capture the trends in the speech in noise training data well, but the proposed model provides a better prediction of the binary mask test data, particularly when the binary...

  18. Model predictive control for a thermostatic controlled system

    DEFF Research Database (Denmark)

    Shafiei, Seyed Ehsan; Rasmussen, Henrik; Stoustrup, Jakob

    2013-01-01

    This paper proposes a model predictive control scheme to provide temperature set-points to thermostatically controlled cooling units in refrigeration systems. The control problem is formulated as a convex programming problem to minimize the overall operating cost of the system. The foodstuff temperat...

  19. Physical/chemical modeling for photovoltaic module life prediction

    Science.gov (United States)

    Moacanin, J.; Carroll, W. F.; Gupta, A.

    1979-01-01

    The paper presents a generalized methodology for identification and evaluation of potential degradation and failure of terrestrial photovoltaic encapsulation. Failure progression modeling and an interaction matrix are utilized to complement the conventional approach to failure degradation mode identification. Comparison of the predicted performance based on these models can produce: (1) constraints on system or component design, materials or operating conditions, (2) qualification (predicted satisfactory function), and (3) uncertainty. The approach has been applied to an investigation of an unexpected delamination failure; it is being used to evaluate thermomechanical interactions in photovoltaic modules and to study corrosion of contacts and interconnects.

  20. A neural network model for olfactory glomerular activity prediction

    Science.gov (United States)

    Soh, Zu; Tsuji, Toshio; Takiguchi, Noboru; Ohtake, Hisao

    2012-12-01

    Recently, the importance of odors and methods for their evaluation have seen increased emphasis, especially in the fragrance and food industries. Although odors can be characterized by their odorant components, their chemical information cannot be directly related to the flavors we perceive. Biological research has revealed that neuronal activity related to glomeruli (which form part of the olfactory system) is closely connected to odor qualities. Here we report on a neural network model of the olfactory system that can predict glomerular activity from odorant molecule structures. We also report on the learning and prediction ability of the proposed model.

  1. Ensemble ecosystem modeling for predicting ecosystem response to predator reintroduction.

    Science.gov (United States)

    Baker, Christopher M; Gordon, Ascelin; Bode, Michael

    2017-04-01

    Introducing a new or extirpated species to an ecosystem is risky, and managers need quantitative methods that can predict the consequences for the recipient ecosystem. Proponents of keystone predator reintroductions commonly argue that the presence of the predator will restore ecosystem function, but this has not always been the case, and mathematical modeling has an important role to play in predicting how reintroductions will likely play out. We devised an ensemble modeling method that integrates species interaction networks and dynamic community simulations and used it to describe the range of plausible consequences of 2 keystone-predator reintroductions: wolves (Canis lupus) to Yellowstone National Park and dingoes (Canis dingo) to a national park in Australia. Although previous methods for predicting ecosystem responses to such interventions focused on predicting changes around a given equilibrium, we used Lotka-Volterra equations to predict changing abundances through time. We applied our method to interaction networks for wolves in Yellowstone National Park and for dingoes in Australia. Our model replicated the observed dynamics in Yellowstone National Park and produced a larger range of potential outcomes for the dingo network. However, we also found that changes in small vertebrates or invertebrates gave a good indication about the potential future state of the system. Our method allowed us to predict when the systems were far from equilibrium. Our results showed that the method can also be used to predict which species may increase or decrease following a reintroduction and can identify species that are important to monitor (i.e., species whose changes in abundance give extra insight into broad changes in the system). Ensemble ecosystem modeling can also be applied to assess the ecosystem-wide implications of other types of interventions including assisted migration, biocontrol, and invasive species eradication. © 2016 Society for Conservation Biology.
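
    The dynamic-community step of the ensemble method rests on generalized Lotka-Volterra equations with sampled interaction strengths. A minimal sketch of that idea, using a made-up three-species chain with illustrative parameter ranges rather than the Yellowstone or dingo networks:

```python
import random

def glv_trajectory(r, a, n0, dt=0.01, steps=2000):
    """Forward-Euler integration of generalized Lotka-Volterra dynamics:

        dN_i/dt = N_i * (r_i + sum_j a[i][j] * N_j)

    Returns the final abundances; negative values are clipped to zero.
    """
    n = list(n0)
    for _ in range(steps):
        rates = [n[i] * (r[i] + sum(a[i][j] * n[j] for j in range(len(n))))
                 for i in range(len(n))]
        n = [max(0.0, n[i] + dt * rates[i]) for i in range(len(n))]
    return n

# Toy chain: predator, mesopredator, basal prey. Off-diagonal interaction
# strengths are sampled per ensemble member; diagonals are self-limitation.
random.seed(1)
finals = []
for _ in range(50):
    a12 = random.uniform(0.1, 0.5)   # predator gains from the mesopredator
    a21 = random.uniform(0.1, 0.5)   # mesopredator loses to the predator
    r = [-0.2, 0.2, 1.0]             # only the basal prey grows on its own
    a = [[-1.0,  a12,  0.0],
         [-a21, -1.0,  0.3],
         [ 0.0, -0.4, -1.0]]
    finals.append(glv_trajectory(r, a, [0.5, 0.5, 0.5]))
```

    The spread of `finals` across the 50 sampled parameter sets is the "range of plausible consequences" the abstract describes; rerunning the ensemble with the predator's initial abundance set to zero would mimic the no-reintroduction counterfactual.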

  2. Boolean network model predicts knockout mutant phenotypes of fission yeast.

    Directory of Open Access Journals (Sweden)

    Maria I Davidich

    Full Text Available Boolean networks (or: networks of switches) are extremely simple mathematical models of biochemical signaling networks. Under certain circumstances, Boolean networks, despite their simplicity, are capable of predicting dynamical activation patterns of gene regulatory networks in living cells. For example, the temporal sequences of cell cycle activation patterns in the yeasts S. pombe and S. cerevisiae are faithfully reproduced by Boolean network models. An interesting question is whether this simple model class could also predict a more complex cellular phenomenology, for example the cell cycle dynamics under various knockout mutants instead of only the wild-type dynamics. Here we show that a Boolean network model for the cell cycle control network of the yeast S. pombe correctly predicts viability of a large number of known mutants. So far this had been left to the more detailed differential equation models of the biochemical kinetics of the yeast cell cycle network and was commonly thought to be out of reach for models as simplistic as Boolean networks. The new results support our vision that Boolean networks may complement other mathematical models in systems biology to a larger extent than expected so far, and may fill a gap where simplicity of the model and a preference for an overall dynamical blueprint of cellular regulation, instead of biochemical details, are in focus.
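
    To make the model class concrete, here is a minimal synchronous Boolean network with a knockout operation. The three-node circuit is a toy example, not the published S. pombe cell-cycle network:

```python
def step(state, rules):
    """Synchronous update: every node applies its Boolean rule
    to the current state simultaneously."""
    return {node: rule(state) for node, rule in rules.items()}

def trajectory(state, rules, steps):
    """Record the sequence of states under synchronous updating."""
    seq = [dict(state)]
    for _ in range(steps):
        state = step(state, rules)
        seq.append(dict(state))
    return seq

def knockout(rules, node):
    """Model a knockout mutant by pinning one node to False."""
    ko = dict(rules)
    ko[node] = lambda s: False
    return ko

# Toy 3-node circuit (illustrative only): A activates B,
# B activates C, C inhibits A.
rules = {
    "A": lambda s: not s["C"],
    "B": lambda s: s["A"],
    "C": lambda s: s["B"],
}
```

    The wild-type circuit cycles through its states, while pinning "B" to False (the knockout) sends the network to a fixed point, which is the kind of qualitative wild-type-versus-mutant distinction the paper tests at much larger scale.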

  3. Boolean Network Model Predicts Knockout Mutant Phenotypes of Fission Yeast

    Science.gov (United States)

    Davidich, Maria I.; Bornholdt, Stefan

    2013-01-01

    Boolean networks (or: networks of switches) are extremely simple mathematical models of biochemical signaling networks. Under certain circumstances, Boolean networks, despite their simplicity, are capable of predicting dynamical activation patterns of gene regulatory networks in living cells. For example, the temporal sequences of cell cycle activation patterns in the yeasts S. pombe and S. cerevisiae are faithfully reproduced by Boolean network models. An interesting question is whether this simple model class could also predict a more complex cellular phenomenology, for example the cell cycle dynamics under various knockout mutants instead of only the wild-type dynamics. Here we show that a Boolean network model for the cell cycle control network of the yeast S. pombe correctly predicts viability of a large number of known mutants. So far this had been left to the more detailed differential equation models of the biochemical kinetics of the yeast cell cycle network and was commonly thought to be out of reach for models as simplistic as Boolean networks. The new results support our vision that Boolean networks may complement other mathematical models in systems biology to a larger extent than expected so far, and may fill a gap where simplicity of the model and a preference for an overall dynamical blueprint of cellular regulation, instead of biochemical details, are in focus. PMID:24069138

  4. Lepton Flavor Violation in Predictive SUSY-GUT Models

    Energy Technology Data Exchange (ETDEWEB)

    Albright, Carl H.; /Northern Illinois U. /Fermilab; Chen, Mu-Chun; /UC, Irvine

    2008-02-01

    There have been many theoretical models constructed which aim to explain the neutrino masses and mixing patterns. While many of these models will be eliminated once more accurate determinations of the mixing parameters, especially sin{sup 2} 2{theta}{sub 13}, are obtained, charged lepton flavor violation (LFV) experiments are able to differentiate even further among the models. In this paper, we investigate various rare LFV processes, such as {ell}{sub i} {yields} {ell}{sub j} + {gamma} and {mu} - e conversion, in five predictive SUSY SO(10) models and their allowed soft SUSY breaking parameter space in the constrained minimal SUSY standard model (CMSSM). Utilizing the WMAP dark matter constraints, we obtain lower bounds on the branching ratios of these rare processes and find that at least three of the five models considered give rise to predictions for {mu} {yields} e + {gamma} that will be tested by the MEG collaboration at PSI. In addition, the next-generation {mu} - e conversion experiment has sensitivity to the predictions of all five models, making it an even more robust way to test these models. While generic studies have emphasized the dependence of the branching ratios of these rare processes on the reactor neutrino angle, {theta}{sub 13}, and the mass of the heaviest right-handed neutrino, M{sub 3}, we find that a very massive M{sub 3} is more significant than a large {theta}{sub 13} in leading to branching ratios near the present upper limits.

  5. Evaluation of Spatial Agreement of Distinct Landslide Prediction Models

    Science.gov (United States)

    Sterlacchini, Simone; Bordogna, Gloria; Frigerio, Ivan

    2013-04-01

    The aim of the study was to assess the degree of spatial agreement of the different predicted patterns in a set of coherent landslide prediction maps with almost identical success and prediction rate curves. If two or more models have similar performance, choosing the best one is not a trivial operation and cannot be based on success and prediction rate curves alone. In fact, two or more prediction maps with similar accuracy and predictive power may not have the same degree of agreement in terms of spatially predicted patterns. The selected study area is the upper Valtellina valley in northern Italy, covering a surface of about 450 km2, for which mapping of historical landslides is available. In order to assess landslide susceptibility, we applied the Weights of Evidence (WofE) modeling technique implemented by the USGS in the Arc-SDM tool. WofE efficiently investigates the spatial relationships between past events and multiple predisposing factors, providing useful information to identify the most probable locations of future landslide occurrences. We carried out 13 distinct experiments, changing the number of morphometric and geo-environmental explanatory variables in each experiment while keeping the same training set, thus generating distinct landslide prediction models that compute the probability of landslide occurrence in each pixel. Expert knowledge and previous results from indirect statistically-based methods suggested slope, land use, and geology as the best "driving controlling factors". The Success Rate Curve (SRC) was used to estimate how well the results of each model fit the occurrence of landslides used for training the models. The Prediction Rate Curve (PRC) was used to estimate how well each model predicts the occurrence of landslides in the validation set. We found that the performances were very similar for the different models.
    Also the dendrogram of the Cohen's kappa statistic and Principal Component Analysis (PCA) were
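
    The core WofE computation for a single binary evidence layer is compact. A minimal sketch follows; the pixel counts in the example are synthetic, not the Valtellina data:

```python
import math

def weights_of_evidence(evidence, landslide):
    """Compute W+, W- and the contrast C = W+ - W- for one binary
    evidence layer against binary landslide occurrence.
    evidence, landslide: equal-length lists of 0/1 values per pixel."""
    n11 = sum(1 for e, d in zip(evidence, landslide) if e and d)
    n10 = sum(1 for e, d in zip(evidence, landslide) if e and not d)
    n01 = sum(1 for e, d in zip(evidence, landslide) if not e and d)
    n00 = sum(1 for e, d in zip(evidence, landslide) if not e and not d)
    d = n11 + n01            # landslide pixels
    nd = n10 + n00           # non-landslide pixels
    w_plus = math.log((n11 / d) / (n10 / nd))
    w_minus = math.log((n01 / d) / (n00 / nd))
    return w_plus, w_minus, w_plus - w_minus
```

    A positive contrast indicates that the evidence layer (say, a geology class) favors landslide occurrence; summing the weights of all layers per pixel yields the posterior susceptibility map.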

  6. Hybrid multiscale modeling and prediction of cancer cell behavior.

    Science.gov (United States)

    Zangooei, Mohammad Hossein; Habibi, Jafar

    2017-01-01

    Understanding cancer development across several spatial-temporal scales is of great practical significance for better understanding and treating cancers. It is difficult to tackle this challenge with purely biological means. Hybrid modeling techniques have therefore been proposed that combine the advantages of continuum and discrete methods to model multiscale problems. In light of these problems, we have proposed a new hybrid vascular model to facilitate the multiscale modeling and simulation of cancer development, combining agent-based, cellular automata and machine learning methods. The purpose of this simulation is to create a dataset that can be used for prediction of cell phenotypes. By using a proposed Q-learning method based on SVR-NSGA-II, the cells have the capability to predict their phenotypes autonomously, that is, to act on their own without external direction in response to situations they encounter. Computational simulations of the model were performed in order to analyze its performance. The most striking feature of our results is that each cell can select its phenotype at each time step according to its condition. We provide evidence that the prediction of cell phenotypes is reliable. Our proposed model, which we term a hybrid multiscale model of cancer cell behavior, has the potential to combine the best features of both continuum and discrete models. The in silico results indicate that the 3D model can represent key features of cancer growth, angiogenesis, and the related micro-environment, and show that the findings are in good agreement with biological tumor behavior. To the best of our knowledge, this paper presents the first hybrid vascular multiscale model of cancer cell behavior with the capability to predict cell phenotypes individually from a self-generated dataset.

  7. Mathematical modelling methodologies in predictive food microbiology: a SWOT analysis.

    Science.gov (United States)

    Ferrer, Jordi; Prats, Clara; López, Daniel; Vives-Rego, Josep

    2009-08-31

    Predictive microbiology is the area of food microbiology that attempts to forecast the quantitative evolution of microbial populations over time. This is achieved to a great extent through models that include the mechanisms governing population dynamics. Traditionally, the models used in predictive microbiology are whole-system continuous models that describe population dynamics by means of equations applied to extensive or averaged variables of the whole system. Many existing models can be classified by specific criteria. We can distinguish between survival and growth models by seeing whether they tackle mortality or cell duplication. We can distinguish between empirical (phenomenological) models, which mathematically describe specific behaviour, and theoretical (mechanistic) models with a biological basis, which search for the underlying mechanisms driving already observed phenomena. We can also distinguish between primary, secondary and tertiary models, by examining their treatment of the effects of external factors and constraints on the microbial community. Recently, the use of spatially explicit Individual-based Models (IbMs) has spread through predictive microbiology, due to the current technological capacity of performing measurements on single individual cells and thanks to the consolidation of computational modelling. Spatially explicit IbMs are bottom-up approaches to microbial communities that build bridges between the description of micro-organisms at the cell level and macroscopic observations at the population level. They provide greater insight into the mesoscale phenomena that link unicellular and population levels. Every model is built in response to a particular question and with different aims. Even so, in this research we conducted a SWOT (Strengths, Weaknesses, Opportunities and Threats) analysis of the different approaches (population continuous modelling and Individual-based Modelling), which we hope will be helpful for current and future
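
    As a concrete example of a primary model, here is the modified-logistic growth curve, one of the standard empirical primary models in this field; the parameter values in the usage below are illustrative, not fitted to any dataset:

```python
import math

def logistic_growth(t, n0, nmax, mu_max, lam):
    """Modified logistic primary growth model on log10 counts:
    y(t) = n0 + A / (1 + exp(4*mu_max*(lam - t)/A + 2)), A = nmax - n0.
    t: time (h); n0, nmax: initial and maximum log10 CFU/ml;
    mu_max: maximum specific growth rate (log10 units/h);
    lam: lag time (h)."""
    a = nmax - n0
    return n0 + a / (1.0 + math.exp(4.0 * mu_max * (lam - t) / a + 2.0))

# Illustrative fit: a culture starting at 3 log10 CFU/ml with a
# 2 h lag, growing at 0.5 log10 units/h toward 9 log10 CFU/ml.
counts = [logistic_growth(t, 3.0, 9.0, 0.5, 2.0) for t in (0, 10, 50)]
```

    Secondary models would then describe how mu_max and lam respond to temperature or pH, and a tertiary model packages both levels into software.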

  8. Predictive RANS simulations via Bayesian Model-Scenario Averaging

    Science.gov (United States)

    Edeling, W. N.; Cinnella, P.; Dwight, R. P.

    2014-10-01

    The turbulence closure model is the dominant source of error in most Reynolds-Averaged Navier-Stokes simulations, yet no reliable estimators for this error component currently exist. Here we develop a stochastic, a posteriori error estimate, calibrated to specific classes of flow. It is based on variability in model closure coefficients across multiple flow scenarios, for multiple closure models. The variability is estimated using Bayesian calibration against experimental data for each scenario, and Bayesian Model-Scenario Averaging (BMSA) is used to collate the resulting posteriors, to obtain a stochastic estimate of a Quantity of Interest (QoI) in an unmeasured (prediction) scenario. The scenario probabilities in BMSA are chosen using a sensor which automatically weights those scenarios in the calibration set which are similar to the prediction scenario. The methodology is applied to the class of turbulent boundary-layers subject to various pressure gradients. For all considered prediction scenarios the standard-deviation of the stochastic estimate is consistent with the measurement ground truth. Furthermore, the mean of the estimate is more consistently accurate than the individual model predictions.
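
    The collation step can be sketched as a posterior-weighted mixture over (model, scenario) pairs. The weights, means, and variances below are placeholders, not calibrated posteriors from the paper:

```python
def bmsa_estimate(components):
    """Collate per-(model, scenario) predictive means and variances
    into one stochastic estimate of the QoI. components: list of
    (posterior_weight, mean, variance); weights must sum to 1.
    The mixture mean and variance follow the laws of total
    expectation and total variance."""
    mean = sum(w * m for w, m, _ in components)
    var = sum(w * (v + (m - mean) ** 2) for w, m, v in components)
    return mean, var

# Placeholder posteriors for two closure models in one scenario:
# equal weight, different calibrated means, same predictive spread.
mixture = bmsa_estimate([(0.5, 1.0, 0.04), (0.5, 2.0, 0.04)])
```

    The between-model spread of the means feeds directly into the mixture variance, which is why BMSA yields a larger (and, per the paper, more honest) uncertainty band than any single closure model.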

  9. Predictive RANS simulations via Bayesian Model-Scenario Averaging

    Energy Technology Data Exchange (ETDEWEB)

    Edeling, W.N., E-mail: W.N.Edeling@tudelft.nl [Arts et Métiers ParisTech, DynFluid laboratory, 151 Boulevard de l'Hôpital, 75013 Paris (France); Delft University of Technology, Faculty of Aerospace Engineering, Kluyverweg 2, Delft (Netherlands); Cinnella, P., E-mail: P.Cinnella@ensam.eu [Arts et Métiers ParisTech, DynFluid laboratory, 151 Boulevard de l'Hôpital, 75013 Paris (France); Dwight, R.P., E-mail: R.P.Dwight@tudelft.nl [Delft University of Technology, Faculty of Aerospace Engineering, Kluyverweg 2, Delft (Netherlands)

    2014-10-15

    The turbulence closure model is the dominant source of error in most Reynolds-Averaged Navier–Stokes simulations, yet no reliable estimators for this error component currently exist. Here we develop a stochastic, a posteriori error estimate, calibrated to specific classes of flow. It is based on variability in model closure coefficients across multiple flow scenarios, for multiple closure models. The variability is estimated using Bayesian calibration against experimental data for each scenario, and Bayesian Model-Scenario Averaging (BMSA) is used to collate the resulting posteriors, to obtain a stochastic estimate of a Quantity of Interest (QoI) in an unmeasured (prediction) scenario. The scenario probabilities in BMSA are chosen using a sensor which automatically weights those scenarios in the calibration set which are similar to the prediction scenario. The methodology is applied to the class of turbulent boundary-layers subject to various pressure gradients. For all considered prediction scenarios the standard-deviation of the stochastic estimate is consistent with the measurement ground truth. Furthermore, the mean of the estimate is more consistently accurate than the individual model predictions.

  10. Neural Network Based Model for Predicting Housing Market Performance

    Institute of Scientific and Technical Information of China (English)

    Ahmed Khalafallah

    2008-01-01

    The United States real estate market is currently facing its worst hit in two decades due to the slowdown of housing sales. The most affected by this decline are real estate investors and home developers who are currently struggling to break even financially on their investments. For these investors, it is of utmost importance to evaluate the current status of the market and predict its performance over the short term in order to make appropriate financial decisions. This paper presents the development of artificial neural network based models to support real estate investors and home developers in this critical task. The paper describes the decision variables, design methodology, and the implementation of these models. The models utilize historical market performance data sets to train the artificial neural networks in order to predict unforeseen future performances. An application example is analyzed to demonstrate the model capabilities in analyzing and predicting the market performance. The model testing and validation showed that the error in prediction is in the range between -2% and +2%.
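
    The prediction step of such a model is a plain feed-forward pass. A minimal sketch of a single-hidden-layer network; the architecture, weights, and the idea of mapping market indicators to a performance index are illustrative assumptions, not the paper's trained model:

```python
import math

def forward(x, w1, b1, w2, b2):
    """One forward pass of a single-hidden-layer neural network:
    hidden = tanh(W1 @ x + b1), output = w2 . hidden + b2."""
    hidden = [math.tanh(sum(wij * xj for wij, xj in zip(row, x)) + bi)
              for row, bi in zip(w1, b1)]
    return sum(w * h for w, h in zip(w2, hidden)) + b2

# Hypothetical inputs: two normalized market indicators
# (e.g. sales volume, median price change); weights are made up.
y = forward([1.0, 0.0],
            w1=[[1.0, 0.0], [0.0, 1.0]], b1=[0.0, 0.0],
            w2=[1.0, 1.0], b2=0.0)
```

    Training (not shown) would fit the weights to historical market performance data by minimizing the prediction error, which is what bounds the reported error to the ±2% range.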

  11. Neural Network Modeling to Predict Shelf Life of Greenhouse Lettuce

    Directory of Open Access Journals (Sweden)

    Wei-Chin Lin

    2009-04-01

    Full Text Available Greenhouse-grown butter lettuce (Lactuca sativa L.) can potentially be stored for 21 days at constant 0°C. When storage temperature was increased to 5°C or 10°C, shelf life was shortened to 14 or 10 days, respectively, in our previous observations. Also, commercial shelf life of 7 to 10 days is common, due to postharvest temperature fluctuations. The objective of this study was to establish neural network (NN) models to predict the remaining shelf life (RSL) under fluctuating postharvest temperatures. A box of 12 - 24 lettuce heads constituted a sample unit. The end of the shelf life of each head was determined when it showed initial signs of decay or yellowing. Air temperatures inside a shipping box were recorded. Daily average temperatures in storage and averaged shelf life of each box were used as inputs, and the RSL was modeled as an output. An R2 of 0.57 could be observed when a simple NN structure was employed. Since the "future" (or remaining) storage temperatures were unavailable at the time of making a prediction, a second NN model was introduced to accommodate a range of future temperatures and associated shelf lives. Using such 2-stage NN models, an R2 of 0.61 could be achieved for predicting RSL. This study indicated that NN modeling has potential for cold chain quality control and shelf life prediction.

  12. A COMPACT MODEL FOR PREDICTING ROAD TRAFFIC NOISE

    Directory of Open Access Journals (Sweden)

    R. Golmohammadi, M. Abbaspour, P. Nassiri, H. Mahjub

    2009-07-01

    Full Text Available Noise is one of the most important sources of pollution in metropolitan areas. The recognition of road traffic noise as one of the main sources of environmental pollution has led to the development of models that enable us to predict noise level from fundamental variables. Traffic noise prediction models are required as aids in the design of roads and sometimes in the assessment of existing, or envisaged changes in, traffic noise conditions. The purpose of this study was to design a road traffic noise prediction model based on traffic variables and transportation conditions in Iran. This paper is the result of research conducted in the city of Hamadan with the ultimate objective of setting up a traffic noise model based on the traffic conditions of Iranian cities. Noise levels and other variables were measured in 282 samples to develop a statistical regression model based on the A-weighted equivalent noise level for Iranian road conditions. The results revealed that the average LAeq over all stations was 69.04 ± 4.25 dB(A), the average speed of vehicles was 44.57 ± 11.46 km/h, and the average traffic load was 1231.9 ± 910.2 V/h. The developed model has seven explanatory entrance variables in order to achieve a high regression coefficient (R2 = 0.901). Comparing means of predicted and measured equivalent sound pressure levels (LAeq) showed small differences of less than -0.42 dB(A) and -0.77 dB(A) for the cities of Tehran and Hamadan, respectively. The suggested road traffic noise model can be effectively used as a decision support tool for predicting the equivalent sound pressure level index in the cities of Iran.

  13. Comparing Sediment Yield Predictions from Different Hydrologic Modeling Schemes

    Science.gov (United States)

    Dahl, T. A.; Kendall, A. D.; Hyndman, D. W.

    2015-12-01

    Sediment yield, or the delivery of sediment from the landscape to a river, is a difficult process to accurately model. It is primarily a function of hydrology and climate, but influenced by landcover and the underlying soils. These additional factors make it much more difficult to accurately model than water flow alone. It is not intuitive what impact different hydrologic modeling schemes may have on the prediction of sediment yield. Here, two implementations of the Modified Universal Soil Loss Equation (MUSLE) are compared to examine the effects of hydrologic model choice. Both the Soil and Water Assessment Tool (SWAT) and the Landscape Hydrology Model (LHM) utilize the MUSLE for calculating sediment yield. SWAT is a lumped parameter hydrologic model developed by the USDA, which is commonly used for predicting sediment yield. LHM is a fully distributed hydrologic model developed primarily for integrated surface and groundwater studies at the watershed to regional scale. SWAT and LHM models were developed and tested for two large, adjacent watersheds in the Great Lakes region; the Maumee River and the St. Joseph River. The models were run using a variety of single model and ensemble downscaled climate change scenarios from the Coupled Model Intercomparison Project 5 (CMIP5). The initial results of this comparison are discussed here.
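
    Both models share the MUSLE as their sediment-yield core, a single algebraic relation, shown here in the form used by SWAT; the factor values in the example are illustrative, not from the Maumee or St. Joseph models:

```python
def musle_sediment_yield(q_surf, q_peak, area_hru, k, c, p, ls, cfrg=1.0):
    """MUSLE event sediment yield (metric tons), SWAT form:
    sed = 11.8 * (Q_surf * q_peak * A)^0.56 * K * C * P * LS * CFRG.
    q_surf: surface runoff volume (mm H2O), q_peak: peak runoff
    rate (m^3/s), area_hru: unit area (ha); K, C, P, LS, CFRG are
    the USLE-style erodibility, cover, practice, topographic and
    coarse-fragment factors."""
    return 11.8 * (q_surf * q_peak * area_hru) ** 0.56 * k * c * p * ls * cfrg

# Illustrative event on a 100 ha unit: doubling runoff volume
# raises the predicted yield, but less than proportionally (^0.56).
low = musle_sediment_yield(10, 1.0, 100, k=0.3, c=0.2, p=1.0, ls=1.2)
high = musle_sediment_yield(20, 1.0, 100, k=0.3, c=0.2, p=1.0, ls=1.2)
```

    Because the runoff term enters through the hydrologic model, the same MUSLE factors can yield different sediment predictions under SWAT's lumped and LHM's distributed runoff, which is exactly the comparison the study makes.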

  14. Predicting Category Intuitiveness with the Rational Model, the Simplicity Model, and the Generalized Context Model

    Science.gov (United States)

    Pothos, Emmanuel M.; Bailey, Todd M.

    2009-01-01

    Naive observers typically perceive some groupings for a set of stimuli as more intuitive than others. The problem of predicting category intuitiveness has been historically considered the remit of models of unsupervised categorization. In contrast, this article develops a measure of category intuitiveness from one of the most widely supported…

  15. Predictive modeling of respiratory tumor motion for real-time prediction of baseline shifts

    Science.gov (United States)

    Balasubramanian, A.; Shamsuddin, R.; Prabhakaran, B.; Sawant, A.

    2017-03-01

    Baseline shifts in respiratory patterns can result in significant spatiotemporal changes in patient anatomy (compared to that captured during simulation), in turn, causing geometric and dosimetric errors in the administration of thoracic and abdominal radiotherapy. We propose predictive modeling of the tumor motion trajectories for predicting a baseline shift ahead of its occurrence. The key idea is to use the features of the tumor motion trajectory over a 1 min window, and predict the occurrence of a baseline shift in the 5 s that immediately follow (lookahead window). In this study, we explored a preliminary trend-based analysis with multi-class annotations as well as a more focused binary classification analysis. In both analyses, a number of different inter-fraction and intra-fraction training strategies were studied, both offline as well as online, along with data sufficiency and skew compensation for class imbalances. The performance of different training strategies were compared across multiple machine learning classification algorithms, including nearest neighbor, Naïve Bayes, linear discriminant and ensemble Adaboost. The prediction performance is evaluated using metrics such as accuracy, precision, recall and the area under the curve (AUC) of the receiver operating characteristic (ROC) curve. The key results of the trend-based analysis indicate that (i) intra-fraction training strategies achieve highest prediction accuracies (90.5-91.4%); (ii) the predictive modeling yields lowest accuracies (50-60%) when the training data does not include any information from the test patient; (iii) the prediction latencies are as low as a few hundred milliseconds, and thus conducive for real-time prediction. The binary classification performance is promising, indicated by high AUCs (0.96-0.98). It also confirms the utility of prior data from previous patients, and also the necessity of training the classifier on some initial data from the new patient for reasonable
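
    The windowing idea can be sketched as follows. The feature set (mean, range, linear trend) and the 2 mm shift threshold are illustrative assumptions, not the paper's exact choices:

```python
def window_features(traj):
    """Features of one tumor-motion window (list of positions, mm):
    mean, range, and least-squares linear trend per sample."""
    n = len(traj)
    mean = sum(traj) / n
    rng = max(traj) - min(traj)
    t_mean = (n - 1) / 2.0
    slope = (sum(i * x for i, x in enumerate(traj)) - n * t_mean * mean) \
            / sum((i - t_mean) ** 2 for i in range(n))
    return mean, rng, slope

def shifted(baseline_mean, features, tol=2.0):
    """Binary label for the lookahead window: flag a baseline shift
    when the window mean drifts more than tol (mm, illustrative)
    from the reference baseline."""
    mean, _, _ = features
    return abs(mean - baseline_mean) > tol
```

    A classifier (nearest neighbor, Naïve Bayes, etc., as in the study) would then be trained to predict the lookahead label from the 1 min window's features.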

  16. The development of U. S. soil erosion prediction and modeling

    Directory of Open Access Journals (Sweden)

    John M. Laflen

    2013-09-01

    Full Text Available Soil erosion prediction technology began over 70 years ago when Austin Zingg published a relationship between soil erosion (by water) and land slope and length, followed shortly by a relationship by Dwight Smith that expanded this equation to include conservation practices. But it was nearly 20 years before this work's expansion resulted in the Universal Soil Loss Equation (USLE), perhaps the foremost achievement in soil erosion prediction in the last century. The USLE has increased in application and complexity, and its usefulness and limitations have led to the development of additional technologies and new science in soil erosion research and prediction. Chief among these new technologies is the Water Erosion Prediction Project (WEPP) model, which has helped to overcome many of the shortcomings of the USLE and increased the scale over which erosion by water can be predicted. Areas of application of erosion prediction include almost all land types: urban, rural, cropland, forests, rangeland, and construction sites. Specialty applications of WEPP include prediction of radioactive material movement with soils at a Superfund cleanup site, and near-real-time daily estimation of soil erosion for the entire state of Iowa.
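
    The USLE itself is a simple multiplicative relation. A sketch with illustrative, not site-derived, factor values:

```python
def usle_soil_loss(r, k, ls, c, p):
    """USLE: average annual soil loss A = R * K * LS * C * P.
    r: rainfall erosivity, k: soil erodibility, ls: combined slope
    length-steepness factor, c: cover-management factor,
    p: support-practice factor. Units follow the factor system in
    use; the values below are purely illustrative."""
    return r * k * ls * c * p

# A conservation practice (p < 1) reduces the predicted loss
# proportionally, which is the Smith extension the text mentions.
baseline = usle_soil_loss(r=170, k=0.3, ls=1.5, c=0.2, p=1.0)
contoured = usle_soil_loss(r=170, k=0.3, ls=1.5, c=0.2, p=0.5)
```

    WEPP replaces this lumped annual product with process-based, event-driven simulation, which is what lets it extend to the applications listed above.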

  17. Oxidative insults to neurons and synapse are prevented by aged garlic extract and S-allyl-L-cysteine treatment in the neuronal culture and APP-Tg mouse model.

    Science.gov (United States)

    Ray, Balmiki; Chauhan, Neelima B; Lahiri, Debomoy K

    2011-05-01

    Alzheimer's disease (AD) is one of the most common forms of dementia in the elderly. In AD patients, β-amyloid peptide (Aβ) plaques and neurofibrillary tangles are common features observed in the CNS. Aβ deposition results in the production of reactive oxygen species (ROS), leading to the hyperphosphorylation of tau, which is associated with neuronal damage. Cholinesterase inhibitors and a partial NMDA receptor antagonist (memantine) have been identified as potential treatment options for AD. However, clinical studies have found that these drugs fail to prevent disease progression. Since ancient times, garlic (Allium sativum) has been used to treat several diseases. By 'aging' garlic, some of its adverse reactions can be eliminated. Recent findings suggest that 'aged garlic extract' (AGE) may be a therapeutic agent for AD because of its antioxidant and Aβ-lowering properties. To date, the molecular properties of AGE have been sparsely studied in vitro or in vivo. The present study tested specific biochemical and molecular effects of AGE in neuronal and AD rodent models. Furthermore, we identified S-allyl-L-cysteine (SAC) as one of the most active chemicals responsible for the AGE-mediated effect(s). We observed significant neuroprotective and neurorescue properties of AGE and one of its ingredients, SAC, against ROS (H(2)O(2))-mediated insults to neuronal cells. Treatment with AGE and SAC was found to protect neuronal cells when each was independently co-administered with ROS. Furthermore, a novel neuropreservation effect of AGE was detected in that pre-treatment with AGE alone protected ∼ 80% of neuronal cells from ROS-mediated damage. AGE was also found to preserve the pre-synaptic protein synaptosomal-associated protein of 25 kDa (SNAP25) from ROS-mediated insult. For example, treatment with a 2% AGE-containing diet and SAC (20 mg/kg of diet) independently increased (∼70%) levels of SNAP25 and synaptophysin in Alzheimer's amyloid precursor protein-transgenic mice

  18. Prediction of blast-induced air overpressure: a hybrid AI-based predictive model.

    Science.gov (United States)

    Jahed Armaghani, Danial; Hajihassani, Mohsen; Marto, Aminaton; Shirani Faradonbeh, Roohollah; Mohamad, Edy Tonnizam

    2015-11-01

    Blast operations in the vicinity of residential areas usually produce significant environmental problems which may cause severe damage to the nearby areas. Blast-induced air overpressure (AOp) is one of the most important environmental impacts of blast operations which needs to be predicted to minimize the potential risk of damage. This paper presents an artificial neural network (ANN) optimized by the imperialist competitive algorithm (ICA) for the prediction of AOp induced by quarry blasting. For this purpose, 95 blasting operations were precisely monitored in a granite quarry site in Malaysia and AOp values were recorded in each operation. Furthermore, the most influential parameters on AOp, including the maximum charge per delay and the distance between the blast-face and monitoring point, were measured and used to train the ICA-ANN model. Based on the generalized predictor equation and considering the measured data from the granite quarry site, a new empirical equation was developed to predict AOp. For comparison purposes, conventional ANN models were developed and compared with the ICA-ANN results. The results demonstrated that the proposed ICA-ANN model is able to predict blast-induced AOp more accurately than other presented techniques.

  19. Optimal model-free prediction from multivariate time series.

    Science.gov (United States)

    Runge, Jakob; Donner, Reik V; Kurths, Jürgen

    2015-05-01

    Forecasting a time series from multivariate predictors constitutes a challenging problem, especially using model-free approaches. Most techniques, such as nearest-neighbor prediction, quickly suffer from the curse of dimensionality and overfitting for more than a few predictors, which has limited their application mostly to the univariate case. Therefore, selection strategies are needed that harness the available information as efficiently as possible. Since often the right combination of predictors matters, ideally all subsets of possible predictors should be tested for their predictive power, but the exponentially growing number of combinations makes such an approach computationally prohibitive. Here a prediction scheme that overcomes this strong limitation is introduced utilizing a causal preselection step which drastically reduces the number of possible predictors to the most predictive set of causal drivers, making a globally optimal search scheme tractable. The information-theoretic optimality is derived and practical selection criteria are discussed. As demonstrated for multivariate nonlinear stochastic delay processes, the optimal scheme can even be less computationally expensive than commonly used suboptimal schemes like forward selection. The method suggests a general framework to apply the optimal model-free approach to select variables and subsequently fit a model to further improve a prediction or learn statistical dependencies. The performance of this framework is illustrated on a climatological index of El Niño Southern Oscillation.
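
    A drastically simplified stand-in for the two-stage idea: correlation-based preselection (a linear proxy for the paper's information-theoretic causal criterion) followed by plain k-nearest-neighbor prediction in the reduced predictor space. All names and data below are synthetic:

```python
import math

def corr(xs, ys):
    """Pearson correlation of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

def preselect(candidates, target, k):
    """Rank candidate predictor series by |correlation| with the
    target and keep the top k, shrinking the search space before
    any model-free prediction is attempted."""
    ranked = sorted(candidates,
                    key=lambda name: -abs(corr(candidates[name], target)))
    return ranked[:k]

def knn_predict(train_X, train_y, x, k=3):
    """Nearest-neighbor prediction in the reduced predictor space."""
    order = sorted(range(len(train_y)),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(train_X[i], x)))
    return sum(train_y[i] for i in order[:k]) / k
```

    With fewer, more informative predictors, the nearest-neighbor step escapes the curse of dimensionality that the abstract describes.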

  20. Risk models to predict hypertension: a systematic review.

    Directory of Open Access Journals (Sweden)

    Justin B Echouffo-Tcheugui

    Full Text Available BACKGROUND: As well as being a risk factor for cardiovascular disease, hypertension is also a health condition in its own right. Risk prediction models may be of value in identifying those individuals at risk of developing hypertension who are likely to benefit most from interventions. METHODS AND FINDINGS: To synthesize existing evidence on the performance of these models, we searched MEDLINE and EMBASE; examined bibliographies of retrieved articles; contacted experts in the field; and searched our own files. Dual review of identified studies was conducted. Included studies had to report on the development, validation, or impact analysis of a hypertension risk prediction model. For each publication, information was extracted on study design and characteristics, predictors, model discrimination, calibration and reclassification ability, validation and impact analysis. Eleven studies reporting on 15 different hypertension risk prediction models were identified. Age, sex, body mass index, diabetes status, and blood pressure variables were the most common predictor variables included in models. Most risk models had acceptable-to-good discriminatory ability (C-statistic > 0.70) in the derivation sample. Calibration was less commonly assessed, but overall acceptable. Two hypertension risk models, the Framingham and Hopkins, have been externally validated, displaying acceptable-to-good discrimination, with C-statistics ranging from 0.71 to 0.81. Lack of individual-level data precluded analyses of the risk models in subgroups. CONCLUSIONS: The discrimination ability of existing hypertension risk prediction tools is acceptable, but the impact of using these tools on prescriptions and outcomes of hypertension prevention is unclear.
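
    The discrimination metric the review relies on is easy to state directly. A minimal sketch of the concordance (C-) statistic on synthetic scores:

```python
def c_statistic(scores, outcomes):
    """Concordance statistic (equivalently, the AUC) for a
    binary-outcome risk model: the fraction of case/non-case pairs
    in which the case received the higher predicted risk,
    with ties counting one half."""
    cases = [s for s, o in zip(scores, outcomes) if o == 1]
    controls = [s for s, o in zip(scores, outcomes) if o == 0]
    wins = sum(1.0 if c > n else 0.5 if c == n else 0.0
               for c in cases for n in controls)
    return wins / (len(cases) * len(controls))
```

    A value of 0.5 is chance-level discrimination; the 0.70-0.81 range reported for the hypertension models sits in the acceptable-to-good band.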

  1. TG13 miscellaneous etiology of cholangitis and cholecystitis.

    Science.gov (United States)

    Higuchi, Ryota; Takada, Tadahiro; Strasberg, Steven M; Pitt, Henry A; Gouma, Dirk J; Garden, O James; Büchler, Markus W; Windsor, John A; Mayumi, Toshihiko; Yoshida, Masahiro; Miura, Fumihiko; Kimura, Yasutoshi; Okamoto, Kohji; Gabata, Toshifumi; Hata, Jiro; Gomi, Harumi; Supe, Avinash N; Jagannath, Palepu; Singh, Harijt; Kim, Myung-Hwan; Hilvano, Serafin C; Ker, Chen-Guo; Kim, Sun-Whe

    2013-01-01

    This paper describes typical diseases and morbidities classified in the category of miscellaneous etiology of cholangitis and cholecystitis. The paper also comments on the evidence presented in the Tokyo Guidelines for the management of acute cholangitis and cholecystitis (TG 07) published in 2007 and the evidence reported subsequently, as well as miscellaneous etiology that has not so far been touched on. (1) Oriental cholangitis is the type of cholangitis that occurs following intrahepatic stones and is frequently referred to as an endemic disease in Southeast Asian regions. The characteristics and diagnosis of oriental cholangitis are also commented on. (2) TG 07 recommended percutaneous transhepatic biliary drainage in patients with cholestasis (many of the patients have obstructive jaundice or acute cholangitis and present clinical signs due to hilar biliary stenosis or obstruction). However, the usefulness of endoscopic naso-biliary drainage has increased along with the spread of endoscopic biliary drainage procedures. (3) As for biliary tract infections in patients who underwent biliary tract surgery, the incidence rate of cholangitis after reconstruction of the biliary tract and liver transplantation is presented. (4) As for primary sclerosing cholangitis, the frequency, age of predilection and the rate of combination of inflammatory enteropathy and biliary tract cancer are presented. (5) In the case of acalculous cholecystitis, the frequency of occurrence, causative factors and complications as well as the frequency of gangrenous cholecystitis, gallbladder perforation and diagnostic accuracy are included in the updated Tokyo Guidelines 2013 (TG13). Free full-text articles and a mobile application of TG13 are available via http://www.jshbps.jp/en/guideline/tg13.html.

  2. A prediction model for ocular damage - Experimental validation.

    Science.gov (United States)

    Heussner, Nico; Vagos, Márcia; Spitzer, Martin S; Stork, Wilhelm

    2015-08-01

    With the increasing number of laser applications in medicine and technology, accidental as well as intentional exposure of the human eye to laser sources has become a major concern. Therefore, a prediction model for ocular damage (PMOD) is presented within this work and validated for long-term exposure. This model is a combination of a ray-tracing model with a thermodynamic model of the human eye and an application which determines the thermal damage by the implementation of the Arrhenius integral. The model is based on our earlier work and is here validated against temperature measurements taken with porcine eye samples. For this validation, three different powers were used: 50 mW, 100 mW and 200 mW, with a spot size of 1.9 mm. The measurements were also taken with two different sensing systems, an infrared camera and a fibre-optic probe placed within the tissue. The temperatures were measured for up to 60 s and then compared against simulations. The measured temperatures were found to be in good agreement with the values predicted by the PMOD. To the best of our knowledge, this is the first model validated for both short-term and long-term irradiations in terms of temperature, and it thus demonstrates that temperatures can be accurately predicted within the thermal damage regime. Copyright © 2015 Elsevier Ltd. All rights reserved.
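
    The thermal damage criterion named above is the Arrhenius integral, Ω(t) = ∫ A·exp(−Ea/(R·T(τ))) dτ, with Ω ≥ 1 conventionally taken as the onset of irreversible damage. A minimal numerical sketch follows; the coefficients are commonly cited retinal values used purely for illustration, and the PMOD's actual parameters may differ:

```python
import numpy as np

# Illustrative Arrhenius coefficients for retinal tissue (assumed values).
A_FREQ = 3.1e99   # frequency factor (1/s)
E_A = 6.28e5      # activation energy (J/mol)
R_GAS = 8.314     # universal gas constant (J/(mol K))


def arrhenius_damage(temps_kelvin, dt):
    """Rectangle-rule approximation of the Arrhenius damage integral Omega
    over a sampled temperature history temps_kelvin with time step dt (s)."""
    rates = A_FREQ * np.exp(-E_A / (R_GAS * np.asarray(temps_kelvin, dtype=float)))
    return float(np.sum(rates) * dt)
```

    The exponential makes the integrand extraordinarily temperature-sensitive: a sustained exposure near body temperature accumulates negligible damage, while a few tens of kelvin more drives Ω past 1 within seconds.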

  3. MULTIVARIATE MODEL FOR CORPORATE BANKRUPTCY PREDICTION IN ROMANIA

    Directory of Open Access Journals (Sweden)

    Daniel BRÎNDESCU – OLARIU

    2016-06-01

    Full Text Available The current paper proposes a methodology for bankruptcy prediction applicable for Romanian companies. Low bankruptcy frequencies registered in the past have limited the importance of bankruptcy prediction in Romania. The changes in the economic environment brought by the economic crisis, as well as by the entrance in the European Union, make the availability of performing bankruptcy assessment tools more important than ever before. The proposed methodology is centred on a multivariate model, developed through discriminant analysis. Financial ratios are employed as explanatory variables within the model. The study has included 53,252 yearly financial statements from the period 2007 – 2010, with the state of the companies being monitored until the end of 2012. It thus employs the largest sample ever used in Romanian research in the field of bankruptcy prediction, not targeting high levels of accuracy over isolated samples, but reliability and ease of use over the entire population.
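
    Discriminant analysis of the kind described builds a linear score from financial ratios and classifies against a cut-off. A minimal two-class Fisher-discriminant sketch follows; the data, ratio set and threshold rule are hypothetical, not the paper's fitted model:

```python
import numpy as np


def fit_lda(X_healthy, X_bankrupt):
    """Two-class Fisher discriminant over financial ratios:
    w = S_pooled^-1 (mu_bankrupt - mu_healthy), with the decision
    threshold at the midpoint of the projected class means."""
    mu0, mu1 = X_healthy.mean(axis=0), X_bankrupt.mean(axis=0)
    S = np.cov(X_healthy, rowvar=False) + np.cov(X_bankrupt, rowvar=False)
    w = np.linalg.solve(S, mu1 - mu0)       # discriminant coefficients
    threshold = w @ (mu0 + mu1) / 2         # midpoint cut-off on the score
    return w, threshold


def predict_bankrupt(X, w, threshold):
    """A score above the threshold flags a company as bankruptcy-prone."""
    return X @ w > threshold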

  4. Predictive Model of Energy Consumption in Beer Production

    Directory of Open Access Journals (Sweden)

    Tiecheng Pu

    2013-07-01

    Full Text Available The predictive model of energy consumption is presented based on subtractive clustering and the Adaptive-Network-Based Fuzzy Inference System (ANFIS) in beer production. By using subtractive clustering on the historical data of energy consumption, the limit of artificial experience is overcome while confirming the number of fuzzy rules. The parameters of the fuzzy inference system are acquired through the structure of the adaptive network and a hybrid on-line learning algorithm. The method can predict and guide the energy consumption of the actual production process. A consumption-reduction scheme is provided based on the actual situation of the enterprise. Finally, concrete examples verified the feasibility of this method in comparison with a Radial Basis Function (RBF) neural network predictive model.
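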

  5. The Next Page Access Prediction Using Markov Model

    Directory of Open Access Journals (Sweden)

    Deepti Razdan

    2011-09-01

    Full Text Available Predicting the next page to be accessed by Web users has attracted a large amount of research. In this paper, a new web usage mining approach is proposed to predict next page access. It is proposed to identify similar access patterns from the web log using K-means clustering, after which a Markov model is used for prediction of next page accesses. The tightness of clusters is improved by setting a similarity threshold while forming clusters. In traditional recommendation models, clustering by non-sequential data decreases recommendation accuracy. This paper involves incorporating clustering with a low-order Markov model, which can improve the prediction accuracy. The main area of research in this paper is preprocessing and identification of useful patterns from web data using mining techniques with the help of open source software.
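
    The low-order Markov model used after the clustering step reduces to counting page-to-page transitions within sessions and predicting the most frequent successor. A minimal first-order sketch (page names hypothetical):

```python
from collections import Counter, defaultdict


def train_markov(sessions):
    """First-order Markov model: count page-to-page transitions per session."""
    counts = defaultdict(Counter)
    for session in sessions:
        for current, nxt in zip(session, session[1:]):
            counts[current][nxt] += 1
    return counts


def predict_next(counts, page):
    """Most frequent successor of the given page, or None if the page is unseen."""
    if page not in counts:
        return None
    return counts[page].most_common(1)[0][0]
```

    In the clustered variant described above, one such model is trained per access-pattern cluster, so predictions are conditioned on the type of session as well as the current page.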

  6. Nonlinear turbulence models for predicting strong curvature effects

    Institute of Scientific and Technical Information of China (English)

    XU Jing-lei; MA Hui-yang; HUANG Yu-ning

    2008-01-01

    Prediction of the characteristics of turbulent flows with strong streamline curvature, such as flows in turbomachines, curved channel flows, and flows around airfoils and buildings, is of great importance in engineering applications and poses a very practical challenge for turbulence modeling. In this paper, we analyze qualitatively the curvature effects on the structure of turbulence and conduct numerical simulations of a turbulent U-duct flow with a number of turbulence models in order to assess their overall performance. The models evaluated in this work are some typical linear eddy viscosity turbulence models, nonlinear eddy viscosity turbulence models (NLEVM) (quadratic and cubic), a quadratic explicit algebraic stress model (EASM), and a Reynolds stress model (RSM) developed based on the second-moment closure. Our numerical results show that a cubic NLEVM that performs considerably well in other benchmark turbulent flows, such as the Craft, Launder and Suga model and the Huang and Ma model, is able to capture the major features of the highly curved turbulent U-duct flow, including the damping of turbulence near the convex wall, the enhancement of turbulence near the concave wall, and the subsequent turbulent flow separation. The predictions of the cubic models are quite close to those of the RSM and in relatively good agreement with the experimental data, which suggests that these models may be employed to simulate turbulent curved flows in engineering applications.

  7. Simple Predictive Models for Saturated Hydraulic Conductivity of Technosands

    DEFF Research Database (Denmark)

    Arthur, Emmanuel; Razzaghi, Fatemeh; Møldrup, Per

    2012-01-01

    Accurate estimation of saturated hydraulic conductivity (Ks) of technosands (gravel-free, coarse sands with negligible organic matter content) is important for irrigation and drainage management of athletic fields and golf courses. In this study, we developed two simple models for predicting Ks......-connectivity parameter (m) obtained for pure coarse sand after fitting to measured Ks data was 1.68 for both models and in good agreement with m values obtained from recent solute and gas diffusion studies. Both the modified K-C and R-C models are easy to use and require limited parameter input, and both models gave...

  8. Unascertained measurement classifying model of goaf collapse prediction

    Institute of Scientific and Technical Information of China (English)

    DONG Long-jun; PENG Gang-jian; FU Yu-hua; BAI Yun-fei; LIU You-fang

    2008-01-01

    Based on the optimized forecast method of unascertained classifying, an unascertained measurement classifying model (UMC) to predict mining-induced goaf collapse was established. The discriminated factors of the model are influential factors including overburden layer type, overburden layer thickness, the complexity of the geologic structure, the inclination angle of the coal bed, the volume rate of the cavity region, the vertical goaf depth from the surface, and the space superposition layer of the goaf region. The unascertained measurement (UM) function of each factor was calculated. The unascertained measurement indicating the classification center and the grade of each waiting forecast sample was determined by the UM distance between the synthesis index of the waiting forecast samples and the index of every classification. The training samples were tested by the established model, and the correct rate is 100%. Furthermore, seven waiting forecast samples were predicted by the UMC model. The results show that the forecast results are fully consistent with the actual situation.

  9. Maxent modelling for predicting the potential distribution of Thai Palms

    DEFF Research Database (Denmark)

    Tovaranonte, Jantrararuk; Barfod, Anders S.; Overgaard, Anne Blach

    2011-01-01

    Increasingly species distribution models are being used to address questions related to ecology, biogeography and species conservation on global and regional scales. We used the maximum entropy approach implemented in the MAXENT programme to build a habitat suitability model for Thai palms based...... on presence data. The aim was to identify potential hot spot areas, assess the determinants of palm distribution ranges, and provide a firmer knowledge base for future conservation actions. We focused on a relatively small number of climatic, environmental and spatial variables in order to avoid...... overprediction of species distribution ranges. The models with the best predictive power were found by calculating the area under the curve (AUC) of receiver-operating characteristic (ROC). Here, we provide examples of contrasting predicted species distribution ranges as well as a map of modeled palm diversity...

  10. Modelling of physical properties - databases, uncertainties and predictive power

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    Physical and thermodynamic property in the form of raw data or estimated values for pure compounds and mixtures are important pre-requisites for performing tasks such as, process design, simulation and optimization; computer aided molecular/mixture (product) design; and, product-process analysis....... While use of experimentally measured values of the needed properties is desirable in these tasks, the experimental data of the properties of interest may not be available or may not be measurable in many cases. Therefore, property models that are reliable, predictive and easy to use are necessary....... However, which models should be used to provide the reliable estimates of the required properties? And, how much measured data is necessary to regress the model parameters? How to ensure predictive capabilities in the developed models? Also, as it is necessary to know the associated uncertainties...

  11. Predictions of titanium alloy properties using thermodynamic modeling tools

    Science.gov (United States)

    Zhang, F.; Xie, F.-Y.; Chen, S.-L.; Chang, Y. A.; Furrer, D.; Venkatesh, V.

    2005-12-01

    Thermodynamic modeling tools have become essential in understanding the effect of alloy chemistry on the final microstructure of a material. Implementation of such tools to improve titanium processing via parameter optimization has resulted in significant cost savings through the elimination of shop/laboratory trials and tests. In this study, a thermodynamic modeling tool developed at CompuTherm, LLC, is being used to predict β transus, phase proportions, phase chemistries, partitioning coefficients, and phase boundaries of multicomponent titanium alloys. This modeling tool includes Pandat, software for multicomponent phase equilibrium calculations, and PanTitanium, a thermodynamic database for titanium alloys. Model predictions are compared with experimental results for one α-β alloy (Ti-64) and two near-β alloys (Ti-17 and Ti-10-2-3). The alloying elements, especially the interstitial elements O, N, H, and C, have been shown to have a significant effect on the β transus temperature, and are discussed in more detail herein.

  12. Model for performance prediction in multi-axis machining

    CERN Document Server

    Lavernhe, Sylvain; Lartigue, Claire; 10.1007/s00170-007-1001-4

    2009-01-01

    This paper deals with a predictive model of kinematical performance in 5-axis milling within the context of High Speed Machining. Indeed, 5-axis high speed milling makes it possible to improve quality and productivity thanks to the degrees of freedom brought by the tool axis orientation. The tool axis orientation can be set efficiently in terms of productivity by considering kinematical constraints resulting from the set machine-tool/NC unit. Capacities of each axis as well as some NC unit functions can be expressed as limiting constraints. The proposed model relies on each axis displacement in the joint space of the machine-tool and predicts the most limiting axis for each trajectory segment. Thus, the calculation of the tool feedrate can be performed highlighting zones for which the programmed feedrate is not reached. This constitutes an indicator for trajectory optimization. The efficiency of the model is illustrated through examples. Finally, the model could be used for optimizing process planning.
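
    The limiting-axis idea at the core of the model can be sketched simply: on each trajectory segment, the axis that needs the most time at its velocity limit governs the achievable feedrate. The sketch below uses hypothetical displacements and limits and ignores the acceleration, jerk and NC-unit constraints the full model also accounts for:

```python
def segment_feedrate(axis_displacements, axis_vmax, path_length, f_prog):
    """Kinematically achievable feedrate on one trajectory segment.

    axis_displacements: joint-space displacement of each axis over the segment
    axis_vmax:          velocity limit of each axis (same units per second)
    path_length:        tool-path length of the segment
    f_prog:             programmed feedrate
    Returns (achievable feedrate, index of the limiting axis).
    """
    times = [abs(d) / v for d, v in zip(axis_displacements, axis_vmax)]
    t_kin = max(times)                    # traversal time imposed by the slowest axis
    limiting_axis = times.index(t_kin)
    f_kin = path_length / t_kin           # feedrate that time allows along the path
    return min(f_prog, f_kin), limiting_axis
```

    Segments where the returned value falls below `f_prog` are exactly the zones the abstract describes as candidates for trajectory optimization, for instance by re-orienting the tool axis.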

  13. Probabilistic Modeling of Fatigue Damage Accumulation for Reliability Prediction

    Directory of Open Access Journals (Sweden)

    Vijay Rathod

    2011-01-01

    Full Text Available A methodology for probabilistic modeling of fatigue damage accumulation for single-stress-level and multi-stress-level loading is proposed in this paper. The methodology uses the linear damage accumulation model of Palmgren-Miner, a probabilistic S-N curve, and an approach for a one-to-one transformation of probability density functions to achieve the objective. The damage accumulation is modeled as a nonstationary process, as both the expected damage accumulation and its variability change with time. The proposed methodology is then used for reliability prediction under single-stress-level and multi-stress-level loading, utilizing a dynamic statistical model of cumulative fatigue damage. The reliability prediction under both types of loading is demonstrated with examples.
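
    The deterministic core here is Miner's rule, D = Σ nᵢ/Nᵢ with failure predicted at D = 1. Wrapping it in a Monte Carlo loop with lognormally distributed lives gives a crude reliability estimate; this is a sketch of the general idea under assumed distributions, not the paper's transformation-based method:

```python
import numpy as np


def miner_damage(cycle_counts, sn_life):
    """Palmgren-Miner linear damage: D = sum(n_i / N_i), failure at D >= 1."""
    return sum(n / N for n, N in zip(cycle_counts, sn_life))


def reliability(cycle_counts, median_life, cov, n_sim=10000, seed=0):
    """Monte Carlo survival probability under a probabilistic S-N curve:
    each stress level's life N_i is lognormal around its median with the
    given coefficient of variation."""
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(np.log(1.0 + cov**2))
    failures = 0
    for _ in range(n_sim):
        lives = [m * rng.lognormal(0.0, sigma) for m in median_life]
        failures += miner_damage(cycle_counts, lives) >= 1.0
    return 1.0 - failures / n_sim
```

    Because the damage sum grows with applied cycles while its variance grows with the scatter of the S-N curve, the estimated reliability decreases over time, which is the nonstationarity the abstract emphasizes.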

  14. Predictive modelling of contagious deforestation in the Brazilian Amazon.

    Directory of Open Access Journals (Sweden)

    Isabel M D Rosa

    Full Text Available Tropical forests are diminishing in extent due primarily to the rapid expansion of agriculture, but the magnitude and geographical distribution of future tropical deforestation is uncertain. Here, we introduce a dynamic and spatially-explicit model of deforestation that predicts the potential magnitude and spatial pattern of Amazon deforestation. Our model differs from previous models in three ways: (1) it is probabilistic and quantifies uncertainty around predictions and parameters; (2) the overall deforestation rate emerges "bottom up", as the sum of local-scale deforestation driven by local processes; and (3) deforestation is contagious, such that local deforestation rate increases through time if adjacent locations are deforested. For the scenarios evaluated, pre- and post-PPCDAM ("Plano de Ação para Proteção e Controle do Desmatamento na Amazônia"), the parameter estimates confirmed that forests near roads and already deforested areas are significantly more likely to be deforested in the near future and less likely in protected areas. Validation tests showed that our model correctly predicted the magnitude and spatial pattern of deforestation that accumulates over time, but that there is very high uncertainty surrounding the exact sequence in which pixels are deforested. The model predicts that under pre-PPCDAM (assuming no change in parameter values due to, for example, changes in government policy, annual deforestation rates would halve by 2050 compared to 2002, although this partly reflects reliance on a static map of the road network. Consistent with other models, under the pre-PPCDAM scenario, states in the south and east of the Brazilian Amazon have a high predicted probability of losing nearly all forest outside of protected areas by 2050. This pattern is less strong in the post-PPCDAM scenario. Contagious spread along roads and through areas lacking formal protection could allow deforestation to reach the core, which is

  15. Predictive modelling of contagious deforestation in the Brazilian Amazon.

    Science.gov (United States)

    Rosa, Isabel M D; Purves, Drew; Souza, Carlos; Ewers, Robert M

    2013-01-01

    Tropical forests are diminishing in extent due primarily to the rapid expansion of agriculture, but the magnitude and geographical distribution of future tropical deforestation is uncertain. Here, we introduce a dynamic and spatially-explicit model of deforestation that predicts the potential magnitude and spatial pattern of Amazon deforestation. Our model differs from previous models in three ways: (1) it is probabilistic and quantifies uncertainty around predictions and parameters; (2) the overall deforestation rate emerges "bottom up", as the sum of local-scale deforestation driven by local processes; and (3) deforestation is contagious, such that local deforestation rate increases through time if adjacent locations are deforested. For the scenarios evaluated, pre- and post-PPCDAM ("Plano de Ação para Proteção e Controle do Desmatamento na Amazônia"), the parameter estimates confirmed that forests near roads and already deforested areas are significantly more likely to be deforested in the near future and less likely in protected areas. Validation tests showed that our model correctly predicted the magnitude and spatial pattern of deforestation that accumulates over time, but that there is very high uncertainty surrounding the exact sequence in which pixels are deforested. The model predicts that under pre-PPCDAM (assuming no change in parameter values due to, for example, changes in government policy), annual deforestation rates would halve by 2050 compared to 2002, although this partly reflects reliance on a static map of the road network. Consistent with other models, under the pre-PPCDAM scenario, states in the south and east of the Brazilian Amazon have a high predicted probability of losing nearly all forest outside of protected areas by 2050. This pattern is less strong in the post-PPCDAM scenario. Contagious spread along roads and through areas lacking formal protection could allow deforestation to reach the core, which is currently

  16. Maximum likelihood Bayesian model averaging and its predictive analysis for groundwater reactive transport models

    Science.gov (United States)

    Curtis, Gary P.; Lu, Dan; Ye, Ming

    2015-01-01

    While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. This study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. These reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. 
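
    The maximum likelihood variant of BMA avoids the full Bayesian integration by weighting models with an information criterion. A common approximation uses BIC, p(Mₖ|D) ∝ p(Mₖ)·exp(−ΔBICₖ/2); this is a generic sketch of that weighting scheme, not the study's exact implementation (which may use KIC and hierarchical priors):

```python
import numpy as np


def mlbma_weights(neg2_log_likelihoods, n_params, n_obs, priors=None):
    """Approximate posterior model probabilities from BIC:
    p(M_k | D) proportional to p(M_k) * exp(-delta_BIC_k / 2)."""
    bic = np.asarray(neg2_log_likelihoods, dtype=float) \
        + np.asarray(n_params, dtype=float) * np.log(n_obs)
    if priors is None:
        priors = np.full(len(bic), 1.0 / len(bic))   # equal prior model probabilities
    w = np.asarray(priors) * np.exp(-(bic - bic.min()) / 2.0)
    return w / w.sum()


def bma_prediction(weights, model_predictions):
    """Model-averaged prediction: weighted mean of the individual model predictions."""
    return np.average(model_predictions, axis=0, weights=weights)
```

    Assigning smaller prior probabilities to structurally correlated models, one of the two strategies mentioned above, simply enters through the `priors` argument.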
Limitations of applying MLBMA to the

  17. A prediction model for progressive disease in systemic sclerosis

    Science.gov (United States)

    Meijs, Jessica; Schouffoer, Anne A; Ajmone Marsan, Nina; Stijnen, Theo; Putter, Hein; Ninaber, Maarten K; Huizinga, Tom W J; de Vries-Bouwstra, Jeska K

    2015-01-01

    Objective To develop a model that assesses the risk for progressive disease in patients with systemic sclerosis (SSc) over the short term, in order to guide clinical management. Methods Baseline characteristics and 1 year follow-up results of 163 patients with SSc referred to a multidisciplinary healthcare programme were evaluated. Progressive disease was defined as: death, ≥10% decrease in forced vital capacity, ≥15% decrease in diffusing capacity for carbon monoxide, ≥10% decrease in body weight, ≥30% decrease in estimated glomerular filtration rate, ≥30% increase in modified Rodnan Skin Score (with Δ≥5) or ≥0.25 increase in Scleroderma Health Assessment Questionnaire. The number of patients with progressive disease was determined. Univariable and multivariable logistic regression analyses were used to assess the probability of progressive disease for each individual patient. Performance of the prediction model was evaluated using a calibration plot and area under the receiver operating characteristic curve. Results 63 patients had progressive disease, including 8 patients who died ≤18 months after first evaluation. Multivariable analysis showed that friction rubs, proximal muscular weakness and decreased maximum oxygen uptake as % predicted, adjusted for age, gender and use of immunosuppressive therapy at baseline, were significantly associated with progressive disease. Using the prediction model, the predicted chance of progressive disease increased from a pretest chance of 37% to 67–89%. Conclusions Using the prediction model, the estimated chance of progressive disease for individual patients could be doubled. Friction rubs, proximal muscular weakness and maximum oxygen uptake as % predicted were identified as relevant parameters. PMID:26688749

  18. The predictive performance and stability of six species distribution models.

    Science.gov (United States)

    Duan, Ren-Yan; Kong, Xiao-Quan; Huang, Min-Yi; Fan, Wei-Yi; Wang, Zhi-Gao

    2014-01-01

    Predicting species' potential geographical range by species distribution models (SDMs) is central to understand their ecological requirements. However, the effects of using different modeling techniques need further investigation. In order to improve the prediction effect, we need to assess the predictive performance and stability of different SDMs. We collected the distribution data of five common tree species (Pinus massoniana, Betula platyphylla, Quercus wutaishanica, Quercus mongolica and Quercus variabilis) and simulated their potential distribution area using 13 environmental variables and six widely used SDMs: BIOCLIM, DOMAIN, MAHAL, RF, MAXENT, and SVM. Each model run was repeated 100 times (trials). We compared the predictive performance by testing the consistency between observations and simulated distributions and assessed the stability by the standard deviation, coefficient of variation, and the 99% confidence interval of Kappa and AUC values. The mean values of AUC and Kappa from MAHAL, RF, MAXENT, and SVM trials were similar and significantly higher than those from BIOCLIM and DOMAIN trials (p<0.05), while the associated standard deviations and coefficients of variation were larger for BIOCLIM and DOMAIN trials (p<0.05), and the 99% confidence intervals for AUC and Kappa values were narrower for MAHAL, RF, MAXENT, and SVM. Compared to BIOCLIM and DOMAIN, other SDMs (MAHAL, RF, MAXENT, and SVM) had higher prediction accuracy, smaller confidence intervals, and were more stable and less affected by the random variable (randomly selected pseudo-absence points). According to the prediction performance and stability of SDMs, we can divide these six SDMs into two categories: a high performance and stability group including MAHAL, RF, MAXENT, and SVM, and a low performance and stability group consisting of BIOCLIM, and DOMAIN. We highlight that choosing appropriate SDMs to address a specific problem is an important part of the modeling process.

  19. The predictive performance and stability of six species distribution models.

    Directory of Open Access Journals (Sweden)

    Ren-Yan Duan

    Full Text Available Predicting species' potential geographical range by species distribution models (SDMs) is central to understand their ecological requirements. However, the effects of using different modeling techniques need further investigation. In order to improve the prediction effect, we need to assess the predictive performance and stability of different SDMs. We collected the distribution data of five common tree species (Pinus massoniana, Betula platyphylla, Quercus wutaishanica, Quercus mongolica and Quercus variabilis) and simulated their potential distribution area using 13 environmental variables and six widely used SDMs: BIOCLIM, DOMAIN, MAHAL, RF, MAXENT, and SVM. Each model run was repeated 100 times (trials). We compared the predictive performance by testing the consistency between observations and simulated distributions and assessed the stability by the standard deviation, coefficient of variation, and the 99% confidence interval of Kappa and AUC values. The mean values of AUC and Kappa from MAHAL, RF, MAXENT, and SVM trials were similar and significantly higher than those from BIOCLIM and DOMAIN trials (p<0.05), while the associated standard deviations and coefficients of variation were larger for BIOCLIM and DOMAIN trials (p<0.05), and the 99% confidence intervals for AUC and Kappa values were narrower for MAHAL, RF, MAXENT, and SVM. Compared to BIOCLIM and DOMAIN, other SDMs (MAHAL, RF, MAXENT, and SVM) had higher prediction accuracy, smaller confidence intervals, and were more stable and less affected by the random variable (randomly selected pseudo-absence points). According to the prediction performance and stability of SDMs, we can divide these six SDMs into two categories: a high performance and stability group including MAHAL, RF, MAXENT, and SVM, and a low performance and stability group consisting of BIOCLIM, and DOMAIN. We highlight that choosing appropriate SDMs to address a specific problem is an important part of the modeling process.
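
    Kappa, used alongside AUC above, measures agreement between observed and predicted presence/absence maps corrected for the agreement expected by chance. A minimal sketch with hypothetical binary maps:

```python
import numpy as np


def cohen_kappa(observed, predicted):
    """Cohen's kappa for binary presence/absence maps:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    observed = np.asarray(observed, dtype=bool)
    predicted = np.asarray(predicted, dtype=bool)
    po = float(np.mean(observed == predicted))            # observed agreement
    p_presence = observed.mean() * predicted.mean()       # both say presence by chance
    p_absence = (1 - observed.mean()) * (1 - predicted.mean())
    pe = p_presence + p_absence                           # total chance agreement
    return (po - pe) / (1 - pe)
```

    A kappa of 1 indicates perfect agreement and 0 indicates agreement no better than chance, which is why the narrower 99% confidence intervals on kappa for MAHAL, RF, MAXENT, and SVM signal more stable predictions.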

  20. Formalization of the model of the enterprise insolvency risk prediction

    Directory of Open Access Journals (Sweden)

    Elena V. Shirinkina

    2015-12-01

    Full Text Available Objective: to improve the conceptual apparatus and analytical procedures of insolvency risk identification. Methods: general scientific methods of systemic and comparative analysis; economic-statistical and dynamic analysis of economic processes and phenomena. Results: nowadays, managing the insolvency risk is relevant for any company, regardless of the economy sector. Instability manifests itself through the uncertainty of the directions of external environment changes and their high frequency. Analysis of the economic literature showed that currently there is no single approach to the systematization of methods for insolvency risk prediction, which means that there is no objective view on the tools that can be used to monitor the insolvency risk. In this respect, the scientific and practical search for representative indicators for the formalization of models predicting insolvency is very important. Therefore, the study has solved the following tasks: defined the nature of the insolvency risk and its identification in the process of financial relations in the management system; proved the representativeness of the indicators in insolvency risk prediction; and formed the model of insolvency risk prediction. Scientific novelty: grounding the model of insolvency risk prediction. Practical significance: development of a theoretical framework to address issues arising in the diagnosis of insolvent enterprises, and application of the results obtained in the practice of the bankruptcy institution bodies. The presented model allows predicting the insolvency risk of the enterprise through the general development trend and the fluctuation boundaries of bankruptcy risk, determining the significance of each indicator-factor and its quantitative impact, and therefore avoiding the risk of enterprise insolvency.