WorldWideScience

Sample records for model predicts density

  1. Computational modeling of oligonucleotide positional densities for human promoter prediction.

    Science.gov (United States)

    Narang, Vipin; Sung, Wing-Kin; Mittal, Ankush

    2005-01-01

    The gene promoter region controls transcriptional initiation of a gene, which is the most important step in gene regulation. In-silico detection of promoter regions in genomic sequences has a number of applications in gene discovery and in understanding the regulation of gene expression. However, computational prediction of eukaryotic pol-II promoters has remained a difficult task. This paper introduces a novel statistical technique for detecting promoter regions in long genomic sequences. A number of existing techniques analyze the occurrence frequencies of oligonucleotides in promoter sequences as compared to other genomic regions. In contrast, the present work studies the positional densities of oligonucleotides in promoter sequences. The analysis does not require any non-promoter sequence dataset or any model of the background oligonucleotide content of the genome. The statistical model learnt from a dataset of promoter sequences automatically recognizes a number of transcription factor binding sites simultaneously with their occurrence positions relative to the transcription start site. Based on this model, a continuous naïve Bayes classifier is developed for the detection of human promoters and transcription start sites in genomic sequences. The present study extends the scope of statistical models in general promoter modeling and prediction. Promoter sequence features learnt by the model correlate well with known biological facts. Results of human transcription start site prediction compare favorably with existing second-generation promoter prediction tools.
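
    To illustrate the classification idea, the following is a minimal sketch of a continuous naive Bayes scorer built on Gaussian positional densities. The oligonucleotides, observed positions, and training values are hypothetical, not taken from the paper:

      import numpy as np
      from scipy.stats import norm

      # Hypothetical training data: observed positions (relative to the TSS)
      # of two informative oligonucleotides across a set of promoters.
      positions = {
          "TATAA": np.array([-31.0, -29.0, -30.0, -28.0, -32.0]),
          "GGGCGG": np.array([-80.0, -95.0, -60.0, -70.0, -110.0]),
      }

      # Fit one Gaussian positional density per oligonucleotide.
      models = {k: (v.mean(), v.std(ddof=1)) for k, v in positions.items()}

      def log_score(observed):
          # Naive Bayes: sum log-likelihoods, assuming the oligonucleotide
          # positions are independent given the promoter class.
          return sum(norm.logpdf(p, loc=models[k][0], scale=models[k][1])
                     for k, p in observed.items())

      print(log_score({"TATAA": -30, "GGGCGG": -85}))   # promoter-like window
      print(log_score({"TATAA": 150, "GGGCGG": 400}))   # background-like window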

  2. Modelling and predicting the spatial distribution of tree root density in heterogeneous forest ecosystems.

    Science.gov (United States)

    Mao, Zhun; Saint-André, Laurent; Bourrier, Franck; Stokes, Alexia; Cordonnier, Thomas

    2015-08-01

    In mountain ecosystems, predicting root density in three dimensions (3-D) is highly challenging due to the spatial heterogeneity of forest communities. This study presents a simple and semi-mechanistic model, named ChaMRoots, that predicts root interception density (RID, number of roots m(-2)). ChaMRoots hypothesizes that RID at a given point is affected by the presence of roots from surrounding trees forming a polygon shape. The model comprises three sub-models for predicting: (1) the spatial heterogeneity - RID of the finest roots in the top soil layer as a function of tree basal area at breast height, and the distance between the tree and a given point; (2) the diameter spectrum - the distribution of RID as a function of root diameter of up to 50 mm; and (3) the vertical profile - the distribution of RID as a function of soil depth. The RID data used to fit the model were measured in two uneven-aged mountain forest ecosystems in the French Alps. These sites differ in tree density and species composition. In general, the validation of each sub-model indicated that all sub-models of ChaMRoots had good fits. The model achieved a highly satisfactory compromise between the number of aerial input parameters and the fit to the observed data. The semi-mechanistic ChaMRoots model focuses on the spatial distribution of root density at the tree cluster scale, in contrast to the majority of published root models, which function at the level of the individual. Based on easy-to-measure characteristics, simple forest inventory protocols and three sub-models, it achieves a good compromise between the complexity of the case study area and that of the global model structure. ChaMRoots can be easily coupled with spatially explicit individual-based forest dynamics models and thus provides a highly transferable approach for modelling 3-D root spatial distribution in complex forest ecosystems.

  3. Level density of the sd-nuclei-Statistical shell-model predictions

    Science.gov (United States)

    Karampagia, S.; Senkov, R. A.; Zelevinsky, V.

    2018-03-01

    Accurate knowledge of the nuclear level density is important both from a theoretical viewpoint as a powerful instrument for studying nuclear structure and for numerous applications. For example, astrophysical reactions responsible for the nucleosynthesis in the universe can be understood only if we know the nuclear level density. We use the configuration-interaction nuclear shell model to predict nuclear level density for all nuclei in the sd-shell, both total and for individual spins (only with positive parity). To avoid the diagonalization in large model spaces we use the moments method based on statistical properties of nuclear many-body systems. In the cases where the diagonalization is possible, the results of the moments method practically coincide with those from the shell-model calculations. Using the computed level densities, we fit the parameters of the Constant Temperature phenomenological model, which can be used by practitioners in their studies of nuclear reactions at excitation energies appropriate for the sd-shell nuclei.
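
    As a sketch of the final fitting step described above, the constant-temperature formula ρ(E) = (1/T)·exp((E − E0)/T) can be fitted to computed level densities. The data points and starting values below are placeholders, not shell-model output from the paper:

      import numpy as np
      from scipy.optimize import curve_fit

      def rho_ct(E, T, E0):
          # Constant-temperature level density: rho(E) = exp((E - E0)/T) / T
          return np.exp((E - E0) / T) / T

      E = np.array([2.0, 4.0, 6.0, 8.0, 10.0])      # excitation energy (MeV)
      rho = np.array([3.0, 4.0e1, 5.0e2, 6.0e3, 8.0e4])  # levels/MeV (hypothetical)

      # Fit in log space so the highest densities do not dominate the residuals.
      popt, _ = curve_fit(lambda E, T, E0: np.log(rho_ct(E, T, E0)),
                          E, np.log(rho), p0=(1.0, 0.0))
      T_fit, E0_fit = popt
      print(f"T = {T_fit:.2f} MeV, E0 = {E0_fit:.2f} MeV")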

  4. Density-dependent microbial turnover improves soil carbon model predictions of long-term litter manipulations

    Science.gov (United States)

    Georgiou, Katerina; Abramoff, Rose; Harte, John; Riley, William; Torn, Margaret

    2017-04-01

    Climatic, atmospheric, and land-use changes all have the potential to alter soil microbial activity via abiotic effects on soil or mediated by changes in plant inputs. Recently, many promising microbial models of soil organic carbon (SOC) decomposition have been proposed to advance understanding and prediction of climate and carbon (C) feedbacks. Most of these models, however, exhibit unrealistic oscillatory behavior and SOC insensitivity to long-term changes in C inputs. Here we diagnose the sources of instability in four models that span the range of complexity of these recent microbial models, by sequentially adding complexity to a simple model to include microbial physiology, a mineral sorption isotherm, and enzyme dynamics. We propose a formulation that introduces density-dependence of microbial turnover, which acts to limit population sizes and reduce oscillations. We compare these models to results from 24 long-term C-input field manipulations, including the Detritus Input and Removal Treatment (DIRT) experiments, to show that there are clear metrics that can be used to distinguish and validate the inherent dynamics of each model structure. We find that widely used first-order models and microbial models without density-dependence cannot readily capture the range of long-term responses observed across the DIRT experiments as a direct consequence of their model structures. The proposed formulation improves predictions of long-term C-input changes, and implies greater SOC storage associated with CO2-fertilization-driven increases in C inputs over the coming century compared to common microbial models. Finally, we discuss our findings in the context of improving microbial model behavior for inclusion in Earth System Models.
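
    A minimal sketch of the core idea, assuming a generic two-pool substrate/biomass model in which microbial turnover scales as B**beta (beta = 1 gives the classic linear turnover that tends to oscillate; beta > 1 damps oscillations and limits population size). The pool structure and all parameter values are illustrative, not the paper's model:

      import numpy as np
      from scipy.integrate import solve_ivp

      def soc_model(t, y, inputs, vmax, km, cue, k, beta):
          # Two pools: substrate carbon C and microbial biomass B.
          C, B = y
          uptake = vmax * B * C / (km + C)      # Michaelis-Menten uptake
          dC = inputs - uptake + k * B**beta    # dead biomass returns to substrate
          dB = cue * uptake - k * B**beta       # growth minus density-dep. turnover
          return [dC, dB]

      for beta in (1.0, 2.0):
          sol = solve_ivp(soc_model, (0.0, 5000.0), [100.0, 2.0],
                          args=(1.0, 0.6, 200.0, 0.4, 0.02, beta))
          print(f"beta={beta}: C={sol.y[0, -1]:.1f}, B={sol.y[1, -1]:.2f}")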

  5. Evaluating the effect of Tikhonov regularization schemes on predictions in a variable‐density groundwater model

    Science.gov (United States)

    White, Jeremy T.; Langevin, Christian D.; Hughes, Joseph D.

    2010-01-01

    Calibration of highly‐parameterized numerical models typically requires explicit Tikhonov-type regularization to stabilize the inversion process. This regularization can take the form of a preferred parameter values scheme or preferred relations between parameters, such as the preferred equality scheme. The resulting parameter distributions calibrate the model to a user‐defined acceptable level of model‐to‐measurement misfit, and also minimize regularization penalties on the total objective function. To evaluate the potential impact of these two regularization schemes on model predictive ability, a dataset generated from a synthetic model was used to calibrate a highly-parameterized variable‐density SEAWAT model. The key prediction is the length of time a synthetic pumping well will produce potable water. A bi‐objective Pareto analysis was used to explicitly characterize the relation between two competing objective function components: measurement error and regularization error. Results of the Pareto analysis indicate that both types of regularization schemes affect the predictive ability of the calibrated model.
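
    The preferred-value scheme can be sketched as ridge-like least squares, minimizing ||Gm - d||^2 + mu*||m - m_pref||^2; sweeping mu traces the Pareto front between measurement misfit and regularization penalty. The matrices G, d, and m_pref below are random stand-ins for a real model Jacobian and dataset:

      import numpy as np

      rng = np.random.default_rng(0)
      G = rng.normal(size=(20, 10))        # sensitivity (Jacobian) matrix
      m_true = rng.normal(size=10)
      d = G @ m_true + 0.1 * rng.normal(size=20)
      m_pref = np.zeros(10)                # preferred parameter values

      def tikhonov(G, d, m_pref, mu):
          # Normal equations of the regularized least-squares problem.
          A = G.T @ G + mu * np.eye(G.shape[1])
          return np.linalg.solve(A, G.T @ d + mu * m_pref)

      for mu in (0.01, 0.1, 1.0, 10.0):
          m = tikhonov(G, d, m_pref, mu)
          misfit = np.linalg.norm(G @ m - d)        # measurement objective
          penalty = np.linalg.norm(m - m_pref)      # regularization objective
          print(f"mu={mu:5.2f}  misfit={misfit:.3f}  penalty={penalty:.3f}")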

  6. A predictive model of rats' calorie intake as a function of diet energy density.

    Science.gov (United States)

    Beheshti, Rahmatollah; Treesukosol, Yada; Igusa, Takeru; Moran, Timothy H

    2018-01-17

    Easy access to high-energy food has been linked to high rates of obesity in the world. Understanding the way that access to palatable (high fat or high calorie) food can lead to overconsumption is essential for both preventing and treating obesity. Although the body of studies focused on the effects of high energy diets is growing, our understanding of how different factors contribute to food choices is not complete. In this study, we present a mathematical model that can predict rats' calorie intake on a high-energy diet based on their ingestive behavior on a standard chow diet. Specifically, we propose an equation that describes the relation between the body weight (W), energy density (E), time elapsed from the start of diet (T), and daily calorie intake (C). We tested our model on two independent data sets. Our results show that the suggested model can predict the calorie intake patterns with high accuracy. Additionally, the only free parameter of our proposed equation (ρ), which is unique to each animal, has a strong association with their calorie intake.

  7. Thermospheric mass density variations during geomagnetic storms and a prediction model based on the merging electric field

    Directory of Open Access Journals (Sweden)

    R. Liu

    2010-09-01

    With the help of four years (2002–2005) of CHAMP accelerometer data we have investigated the dependence of low- and mid-latitude thermospheric density on the merging electric field, Em, during major magnetic storms. Altogether 30 intensive storm events (Dstmin < −100 nT) are chosen for a statistical study. In order to achieve a good correlation, Em is preconditioned. Contrary to general opinion, Em has to be applied without saturation effect in order to obtain good results for magnetic storms of all activity levels. The memory effect of the thermosphere is accounted for by a weighted integration of Em over the past 3 h. In addition, a lag time of the mass density response to solar wind input of 0 to 4.5 h, depending on latitude and local time, is considered. A linear model using the preconditioned Em as the main controlling parameter for predicting mass density changes during magnetic storms is developed: ρ = 0.5·Em + ρamb, where ρamb is based on the mean density during the quiet day before the storm. We show that this simple relation predicts all storm-induced mass density variations at CHAMP altitude fairly well, especially if orbital averages are considered.
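
    A sketch of the predictor described above: precondition Em by a weighted integral over the previous 3 h, apply a response lag, then form ρ = 0.5·Em + ρamb. The paper only states that a weighted 3-h integration and a latitude/local-time dependent lag (0 to 4.5 h) are used; the exponential weights and the fixed 1.5 h lag below are assumptions for illustration:

      import numpy as np

      def precondition_em(em, dt_h=0.25, window_h=3.0, lag_h=1.5):
          # Weighted average of Em over the previous 3 hours (assumed
          # exponential weights, most recent sample weighted highest).
          n = int(window_h / dt_h)
          w = np.exp(-np.arange(n) * dt_h / window_h)
          w /= w.sum()
          smoothed = np.convolve(em, w)[:len(em)]   # causal weighted average
          k = int(lag_h / dt_h)                     # density response lag
          lagged = np.empty_like(smoothed)
          lagged[:k] = smoothed[0]
          lagged[k:] = smoothed[:-k]
          return lagged

      def predict_density(em, rho_ambient):
          # rho = 0.5 * Em_preconditioned + rho_amb (units as in the paper)
          return 0.5 * precondition_em(em) + rho_ambient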

  8. Thermospheric mass density variations during geomagnetic storms and a prediction model based on the merging electric field

    Science.gov (United States)

    Liu, R.; Lühr, H.; Doornbos, E.; Ma, S.-Y.

    2010-09-01

    With the help of four years (2002-2005) of CHAMP accelerometer data we have investigated the dependence of low- and mid-latitude thermospheric density on the merging electric field, Em, during major magnetic storms. Altogether 30 intensive storm events (Dstmin < −100 nT) are chosen for a statistical study. In order to achieve a good correlation, Em is preconditioned. Contrary to general opinion, Em has to be applied without saturation effect in order to obtain good results for magnetic storms of all activity levels. The memory effect of the thermosphere is accounted for by a weighted integration of Em over the past 3 h. In addition, a lag time of the mass density response to solar wind input of 0 to 4.5 h, depending on latitude and local time, is considered. A linear model using the preconditioned Em as the main controlling parameter for predicting mass density changes during magnetic storms is developed: ρ = 0.5·Em + ρamb, where ρamb is based on the mean density during the quiet day before the storm. We show that this simple relation predicts all storm-induced mass density variations at CHAMP altitude fairly well, especially if orbital averages are considered.

  9. Bone fragility beyond strength and mineral density: Raman spectroscopy predicts femoral fracture toughness in a murine model of rheumatoid arthritis.

    Science.gov (United States)

    Inzana, Jason A; Maher, Jason R; Takahata, Masahiko; Schwarz, Edward M; Berger, Andrew J; Awad, Hani A

    2013-02-22

    Clinical prediction of bone fracture risk primarily relies on measures of bone mineral density (BMD). BMD is strongly correlated with bone strength, but strength is independent of fracture toughness, which refers to the bone's resistance to crack initiation and propagation. In that sense, fracture toughness is more relevant to assessing fragility-related fracture risk, independent of trauma. We hypothesized that bone biochemistry, determined by Raman spectroscopy, predicts bone fracture toughness better than BMD. This hypothesis was tested in tumor necrosis factor-transgenic mice (TNF-tg), which develop inflammatory-erosive arthritis and osteoporosis. The left femurs of TNF-tg and wild type (WT) littermates were measured with Raman spectroscopy and micro-computed tomography. Fracture toughness was assessed by cutting a sharp notch into the anterior surface of the femoral mid-diaphysis and propagating the crack under three-point bending. Femoral fracture toughness of TNF-tg mice was significantly reduced compared to WT controls (p=0.04). A Raman spectrum-based prediction model of fracture toughness was generated by partial least squares regression (PLSR). Raman spectrum PLSR analysis produced strong predictions of fracture toughness, while BMD was not significantly correlated and produced very weak predictions. Raman spectral components associated with mineralization quality and bone collagen were strongly leveraged in predicting fracture toughness, reiterating the limitations of mineralization density alone. Copyright © 2012 Elsevier Ltd. All rights reserved.
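
    A minimal sketch of a PLSR pipeline of the kind described, using scikit-learn. The spectra X and toughness values y are random placeholders; a real analysis would use baseline-corrected Raman spectra and the study's sample sizes:

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_predict

      rng = np.random.default_rng(1)
      X = rng.normal(size=(24, 800))   # 24 femurs x 800 Raman wavenumber bins
      y = rng.normal(size=24)          # measured fracture toughness

      pls = PLSRegression(n_components=3)
      y_hat = cross_val_predict(pls, X, y, cv=len(y))   # leave-one-out CV
      r2 = np.corrcoef(y, y_hat)[0, 1] ** 2
      print(f"cross-validated R^2 = {r2:.2f}")

      # After fitting, the largest regression coefficients indicate which
      # spectral bands (e.g. mineral and collagen peaks) the model leverages.
      pls.fit(X, y)
      top_bands = np.argsort(np.abs(pls.coef_.ravel()))[-10:]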

  10. SRMDAP: SimRank and Density-Based Clustering Recommender Model for miRNA-Disease Association Prediction

    Directory of Open Access Journals (Sweden)

    Xiaoying Li

    2018-01-01

    Aberrant expression of microRNAs (miRNAs) can be applied for the diagnosis, prognosis, and treatment of human diseases. Identifying the relationship between miRNA and human disease is important to further investigate the pathogenesis of human diseases. However, experimental identification of the associations between diseases and miRNAs is time-consuming and expensive. Computational methods are efficient approaches to determine the potential associations between diseases and miRNAs. This paper presents a new computational method based on the SimRank and density-based clustering recommender model for miRNA-disease association prediction (SRMDAP). The AUC of 0.8838 based on leave-one-out cross-validation and case studies suggested the excellent performance of the SRMDAP in predicting miRNA-disease associations. SRMDAP could also predict diseases without any related miRNAs and miRNAs without any related diseases.

  11. A Phenomenological Model to Predict the Density and Distribution of Pacific Hake by Season and Geography

    National Research Council Canada - National Science Library

    Nero, Redwood

    2000-01-01

    .... Oceanographic and bathymetric data at a spatial resolution of 10 km are used as a geographic framework in which the migration is placed, allowing the formation of raster images of fish density...

  12. Density prediction and dimensionality reduction of mid-term electricity demand in China: A new semiparametric-based additive model

    International Nuclear Information System (INIS)

    Shao, Zhen; Yang, Shan-Lin; Gao, Fei

    2014-01-01

    Highlights: • A new stationary time series smoothing-based semiparametric model is established. • A novel semiparametric additive model based on piecewise smooth is proposed. • We model the uncertainty of data distribution for mid-term electricity forecasting. • We provide efficient long horizon simulation and extraction for external variables. • We provide stable and accurate density predictions for mid-term electricity demand. - Abstract: Accurate mid-term electricity demand forecasting is critical for efficient electric planning, budgeting and operating decisions. Mid-term electricity demand forecasting is notoriously complicated, since the demand is subject to a range of external drivers, such as climate change, economic development, which will exhibit monthly, seasonal, and annual complex variations. Conventional models are based on the assumption that original data is stable and normally distributed, which is generally insignificant in explaining actual demand pattern. This paper proposes a new semiparametric additive model that, in addition to considering the uncertainty of the data distribution, includes practical discussions covering the applications of the external variables. To effectively detach the multi-dimensional volatility of mid-term demand, a novel piecewise smooth method which allows reduction of the data dimensionality is developed. Besides, a semi-parametric procedure that makes use of bootstrap algorithm for density forecast and model estimation is presented. Two typical cases in China are presented to verify the effectiveness of the proposed methodology. The results suggest that both meteorological and economic variables play a critical role in mid-term electricity consumption prediction in China, while the extracted economic factor is adequate to reveal the potentially complex relationship between electricity consumption and economic fluctuation. Overall, the proposed model can be easily applied to mid-term demand forecasting, and

  13. Predictions of Taylor's power law, density dependence and pink noise from a neutrally modeled time series

    Czech Academy of Sciences Publication Activity Database

    Keil, P.; Herben, Tomáš; Rosindell, J.; Storch, D.

    2010-01-01

    Vol. 265, No. 1 (2010), pp. 68-86. ISSN 0022-5193. R&D Projects: GA MŠk LC06073. Institutional research plan: CEZ:AV0Z60050516. Keywords: Taylor's power law * density dependence * pink noise. Subject RIV: EF - Botanics. Impact factor: 2.371, year: 2010

  14. Model comparison on genomic predictions using high-density markers for different groups of bulls in the Nordic Holstein population.

    Science.gov (United States)

    Gao, H; Su, G; Janss, L; Zhang, Y; Lund, M S

    2013-07-01

    This study compared genomic predictions based on imputed high-density markers (~777,000) in the Nordic Holstein population using a genomic BLUP (GBLUP) model, 4 Bayesian exponential power models with different shape parameters (0.3, 0.5, 0.8, and 1.0) for the exponential power distribution, and a Bayesian mixture model (a mixture of 4 normal distributions). Direct genomic values (DGV) were estimated for milk yield, fat yield, protein yield, fertility, and mastitis, using deregressed proofs (DRP) as response variable. The validation animals were split into 4 groups according to their genetic relationship with the training population. Group-smgs had both the sire and the maternal grandsire (MGS), Group-sire had only the sire, Group-mgs had only the MGS, and Group-non had neither the sire nor the MGS in the training population. Reliability of DGV was measured as the squared correlation between DGV and DRP divided by the reliability of DRP for the bulls in the validation data set. Unbiasedness of DGV was measured as the regression of DRP on DGV. The results indicated that DGV were more accurate and less biased for animals that were more related to the training population. In general, the Bayesian mixture model and the exponential power model with shape parameter of 0.30 led to higher reliability of DGV than did the other models. The differences between reliabilities of DGV from the Bayesian models and the GBLUP model were statistically significant for some traits. We observed a tendency that the superiority of the Bayesian models over the GBLUP model was more profound for the groups having weaker relationships with the training population. Averaged over the 5 traits, the Bayesian mixture model improved the reliability of DGV by 2.0 percentage points for Group-smgs, 2.7 percentage points for Group-sire, 3.3 percentage points for Group-mgs, and 4.3 percentage points for Group-non compared with GBLUP. The results showed that a Bayesian model with intense shrinkage of the explanatory
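
    The two validation metrics described above have direct numerical forms. A sketch with synthetic data (the bulls and the DRP reliability value are invented):

      import numpy as np

      def dgv_reliability(dgv, drp, r2_drp):
          # Squared correlation between DGV and DRP, divided by the (mean)
          # reliability of the DRP for the validation bulls.
          return np.corrcoef(dgv, drp)[0, 1] ** 2 / r2_drp

      def dgv_unbiasedness(dgv, drp):
          # Slope of the regression of DRP on DGV; 1.0 means unbiased DGV.
          slope, _ = np.polyfit(dgv, drp, 1)
          return slope

      rng = np.random.default_rng(2)
      dgv = rng.normal(size=200)                        # synthetic bulls
      drp = 0.9 * dgv + rng.normal(scale=0.5, size=200)
      print(dgv_reliability(dgv, drp, r2_drp=0.85),
            dgv_unbiasedness(dgv, drp))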

  15. Model comparison on genomic predictions using high-density markers for different groups of bulls in the Nordic Holstein population

    DEFF Research Database (Denmark)

    Gao, Hongding; Su, Guosheng; Janss, Luc

    2013-01-01

    This study compared genomic predictions based on imputed high-density markers (~777,000) in the Nordic Holstein population using a genomic BLUP (GBLUP) model, 4 Bayesian exponential power models with different shape parameters (0.3, 0.5, 0.8, and 1.0) for the exponential power distribution...... relationship with the training population. Group-smgs had both the sire and the maternal grandsire (MGS), Group-sire had only the sire, Group-mgs had only the MGS, and Group-non had neither the sire nor the MGS in the training population. Reliability of DGV was measured as the squared correlation between DGV...... and DRP divided by the reliability of DRP for the bulls in the validation data set. Unbiasedness of DGV was measured as the regression of DRP on DGV. The results indicated that DGV were more accurate and less biased for animals that were more related to the training population. In general, the Bayesian...

  16. Precision prediction for the cosmological density distribution

    Science.gov (United States)

    Repp, Andrew; Szapudi, István

    2018-01-01

    The distribution of matter in the Universe is approximately lognormal, and one can improve this approximation by characterizing the third moment (skewness) of the log density field. Thus, using Millennium Simulation phenomenology and building on previous work, we present analytic fits for the mean, variance and skewness of the log density field A, allowing prediction of these moments given a set of cosmological parameter values. We further show that a generalized extreme value (GEV) distribution accurately models A; we submit that this GEV behaviour is the result of strong intrapixel correlations, without which the smoothed distribution would tend towards a Gaussian (by the central limit theorem). Our GEV model (with the predicted values of the first three moments) yields cumulative distribution functions accurate to within 1.7 per cent for near-concordance cosmologies, over a range of redshifts and smoothing scales.
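
    A sketch of the fitting step, using SciPy's generalized extreme value distribution and an empirical-CDF accuracy check. The sample of log densities A = ln(1 + δ) below is synthetic, and the distribution parameters are invented, not the paper's fitted values:

      import numpy as np
      from scipy.stats import genextreme

      rng = np.random.default_rng(3)
      # Stand-in for measured values of A = ln(1 + delta) in smoothed cells:
      A = genextreme.rvs(c=-0.2, loc=-0.3, scale=0.8, size=20000,
                         random_state=rng)

      c, loc, scale = genextreme.fit(A)          # fit shape, location, scale
      x = np.sort(A)
      empirical = np.arange(1, x.size + 1) / x.size
      max_err = np.abs(genextreme.cdf(x, c, loc=loc, scale=scale)
                       - empirical).max()
      print(f"shape={c:.3f}, max CDF error={max_err:.4f}")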

  17. Population Density Modeling Tool

    Science.gov (United States)

    2012-06-26

    Population Density Modeling Tool, by Davy Andrew Michael Knott and David Burke, report NAWCADPAX/TR-2012/194, Patuxent River, Maryland, 26 June 2012.

  18. Predicting moisture content and density distribution of Scots pine by microwave scanning of sawn timber II: Evaluation of models generated on a pixel level

    International Nuclear Information System (INIS)

    Lundgren, N.; Hagman, O.; Johansson, J.

    2006-01-01

    The purpose of this study was to use images from a microwave sensor on a pixel level for simultaneous prediction of moisture content and density of wood. The microwave sensor functions as a line-scan camera with a pixel size of 8 mm. Boards of Scots pine (Pinus sylvestris), 25 and 50 mm thick, were scanned at three different moisture contents. Dry density and moisture content for each pixel were calculated from measurements with a computed tomography scanner. It was possible to create models for prediction of density on a pixel level. Models for prediction of moisture content had to be based on average values over homogeneous regions. Accuracy will be improved if it is possible to make a classification of knots, heartwood, sapwood, etc., and calibrate different models for different types of wood. The limitations of the sensor used are high noise in amplitude measurements and the restriction to one period for phase measurements

  19. Using dynamic energy budget modeling to predict the influence of temperature and food density on the effect of Cu on earthworm mediated litter consumption.

    NARCIS (Netherlands)

    Hobbelen, P.H.F.; van Gestel, C.A.M.

    2007-01-01

    The aim of this study was to predict the dependence on temperature and food density of effects of Cu on the litter consumption by the earthworm Lumbricus rubellus, using a dynamic energy budget model (DEB-model). As a measure of the effects of Cu on food consumption, EC50s (soil concentrations

  20. Adsorption of CH4 on nitrogen- and boron-containing carbon models of coal predicted by density-functional theory

    Science.gov (United States)

    Liu, Xiao-Qiang; Xue, Ying; Tian, Zhi-Yue; Mo, Jing-Jing; Qiu, Nian-Xiang; Chu, Wei; Xie, He-Ping

    2013-11-01

    Graphene doped by nitrogen (N) and/or boron (B) is used to represent the surface models of coal with structural heterogeneity. Through density functional theory (DFT) calculations, the interactions between coalbed methane (CBM) and coal surfaces have been investigated. Several adsorption sites and orientations of methane (CH4) on graphenes were systematically considered. Our calculations predicted adsorption energies of CH4 on graphenes of up to -0.179 eV, with the strongest binding mode, in which three hydrogen atoms of CH4 point toward the graphene surface, observed for N-doped graphene, compared to the perfect (-0.154 eV), B-doped (-0.150 eV), and NB-doped graphenes (-0.170 eV). Doping N in graphene increases the adsorption energies of CH4, but slightly reduced binding is found when graphene is doped by B. Our results indicate that all of the graphenes act as a weak electron acceptor with respect to CH4. The interactions between CH4 and graphenes are physical adsorption and depend slightly upon the adsorption sites on graphenes and the orientations of methane as well as the electronegativity of dopant atoms in graphene.
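
    For reference, the usual adsorption-energy bookkeeping behind such numbers is E_ads = E(surface+CH4) − E(surface) − E(CH4), with more negative values meaning stronger binding. A small sketch using only the E_ads values quoted in the abstract (the comparison logic is illustrative):

      # Adsorption energies from the abstract, in eV (more negative = stronger).
      e_ads = {"N-doped": -0.179, "NB-doped": -0.170,
               "perfect": -0.154, "B-doped": -0.150}
      strongest = min(e_ads, key=e_ads.get)
      print(f"strongest CH4 binding: {strongest} graphene model")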

  1. Characterization of Mixtures. Part 2: QSPR Models for Prediction of Excess Molar Volume and Liquid Density Using Neural Networks.

    Science.gov (United States)

    Ajmani, Subhash; Rogers, Stephen C; Barley, Mark H; Burgess, Andrew N; Livingstone, David J

    2010-09-17

    In our earlier work, we have demonstrated that it is possible to characterize binary mixtures using single component descriptors by applying various mixing rules. We also showed that these methods were successful in building predictive QSPR models to study various mixture properties of interest. Herein, we develop a QSPR model of an excess thermodynamic property of binary mixtures, i.e., excess molar volume (V^E). In the present study, we use a set of mixture descriptors which we earlier designed to specifically account for intermolecular interactions between the components of a mixture and applied successfully to the prediction of infinite-dilution activity coefficients using neural networks (part 1 of this series). We obtain a significant QSPR model for the prediction of excess molar volume (V^E) using consensus neural networks and five mixture descriptors. We find that hydrogen bond and thermodynamic descriptors are the most important in determining excess molar volume (V^E), which is in line with the theory of intermolecular forces governing excess mixture properties. The results also suggest that the mixture descriptors utilized herein may be sufficient to model a wide variety of properties of binary and possibly even more complex mixtures. Copyright © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Finite element model predicts current density distribution for clinical applications of tDCS and tACS

    Directory of Open Access Journals (Sweden)

    Toralf Neuling

    2012-09-01

    Transcranial direct current stimulation (tDCS) has been applied in numerous scientific studies over the past decade. However, the possibility to apply tDCS in therapy of neuropsychiatric disorders is still debated. While transcranial magnetic stimulation (TMS) has been approved for treatment of major depression in the United States by the Food and Drug Administration (FDA), tDCS is not as widely accepted. One of the criticisms against tDCS is the lack of spatial specificity. Focality is limited by the electrode size (35 cm2 electrodes are commonly used) and the bipolar arrangement. However, a current flow through the head directly from anode to cathode is an outdated view. Finite element (FE) models have recently been used to predict the exact current flow during tDCS. These simulations have demonstrated that the current flow depends on tissue shape and conductivity. To face the challenge of predicting the location, magnitude and direction of the current flow induced by tDCS and transcranial alternating current stimulation (tACS), we used a refined realistic FE modeling approach. With respect to the literature on clinical tDCS and tACS, we analyzed two common setups for the location of the stimulation electrodes which target the frontal lobe and the occipital lobe, respectively. We compared lateral and medial electrode configurations with regard to their usability. We were able to demonstrate that the lateral configurations yielded more focused stimulation areas as well as higher current intensities in the target areas. The high resolution of our simulation allows one to combine the modeled current flow with the knowledge of neuronal orientation to predict the consequences of tDCS and tACS. Our results not only offer a basis for a deeper understanding of the stimulation sites currently in use for clinical applications but also offer a better interpretation of observed effects.

  3. Prediction Models for Density and Viscosity of Biodiesel and their Effects on Fuel Supply System in CI Engines

    OpenAIRE

    Tesfa, Belachew; Mishra, Rakesh; Gu, Fengshou; Powles, Nicholas

    2010-01-01

    Biodiesel is a promising non-toxic and biodegradable alternative fuel used in the transport sector. Nevertheless, the higher viscosity and density of biodiesel pose some acute problems when it is used in an unmodified engine. Taking this into consideration, this study has been focused towards two objectives. The first objective is to identify the effect of temperature on density and viscosity for a variety of biodiesels and also to develop a correlation between density and viscosity for these...

  4. Localized Density/Drag Prediction for Improved Onboard Orbit Propagation

    Science.gov (United States)

    Stastny, N.; Lin, C.; Lovell, A.; Luck, J.; Chavez, F.

    Since the development of Luigi G. Jacchia's first density model in 1970 (J70), atmospheric density modeling has steadily focused on large monolithic codes that provide global density coverage. The most recent instantiation of the global density model is the Jacchia-Bowman 2008 (JB08) model developed by Bruce Bowman of the Air Force Space Command. As the models have evolved and improved, their complexity has grown as well. Where the J70 model required 2 indices and various time averages to determine density, the JB08 model requires 5 indices to determine density. Due to computational complexity, the number of real-time inputs required, and limited forecasting abilities, these models are not well suited for onboard satellite orbit propagation. In contrast to the global models, this paper proposes the development of a density prediction tool that is only concerned with the trajectory of a specific satellite. Since the orbital parameters of most low Earth orbiting satellites remain relatively constant in the short term, there is also minimal variation in the density profile observed by the satellite. Limiting the density model to a smaller orbit regime will also increase the ability to forecast the density along that orbital track. As a first step, this paper evaluates the feasibility of using a localized density prediction algorithm to generate the density profile that will be seen by satellite, allowing for high-accuracy orbit propagation with minimal or no input from the ground. The algorithm evaluated in this paper is a simple Yule-Walker auto-regressive filter that, given previously measured density values, provides predictions on the upcoming density profile. This first approach requires zero information about the satellite's current orbit, but does require an onboard method for determining the current, local density. Though this aspect of the onboard system is not analyzed here, it is envisioned that this current, local density (or equivalently drag acceleration
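
    A minimal sketch of the Yule-Walker autoregressive predictor proposed above: estimate AR coefficients from the autocovariances of the measured density history, then extrapolate. The density series, AR order, and forecast horizon below are synthetic choices for illustration:

      import numpy as np

      def yule_walker(x, order):
          # Solve the Yule-Walker equations R a = r for the AR coefficients.
          x = x - x.mean()
          r = np.array([np.dot(x[:x.size - k], x[k:]) / x.size
                        for k in range(order + 1)])   # autocovariances
          R = np.array([[r[abs(i - j)] for j in range(order)]
                        for i in range(order)])
          return np.linalg.solve(R, r[1:])

      def ar_forecast(x, a, steps):
          # Iterate the AR recursion forward from the observed history.
          hist = list(x - x.mean())
          for _ in range(steps):
              hist.append(np.dot(a, hist[-1:-a.size - 1:-1]))
          return np.array(hist[x.size:]) + x.mean()

      # Synthetic along-track density with a once-per-orbit (~90 min) variation:
      t = np.arange(2000.0)                                  # minutes
      rho = 1e-12 * (1.0 + 0.3 * np.sin(2 * np.pi * t / 90.0))
      a = yule_walker(rho, order=10)
      rho_next = ar_forecast(rho, a, steps=90)               # next 90 minutes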

  5. Towards predicting wading bird densities from predicted prey densities in a post-barrage Severn estuary

    International Nuclear Information System (INIS)

    Goss-Custard, J.D.; McGrorty, S.; Clarke, R.T.; Pearson, B.; Rispin, W.E.; Durell, S.E.A. le V. dit; Rose, R.J.; Warwick, R.M.; Kirby, R.

    1991-01-01

    A winter survey of seven species of wading birds in six estuaries in south-west England was made to develop a method for predicting bird densities should a tidal power barrage be built on the Severn estuary. Within most estuaries, bird densities correlated with the densities of widely taken prey species. A barrage would substantially reduce the area of intertidal flats available at low water for the birds to feed but the invertebrate density could increase in the generally more benign post-barrage environmental conditions. Wader densities would have to increase approximately twofold to allow the same overall numbers of birds to remain post-barrage as occur on the Severn at present. Provisional estimates are given of the increases in prey density required to allow bird densities to increase by this amount. With the exception of the prey of dunlin, these fall well within the ranges of densities found in other estuaries, and so could in principle be attained in the post-barrage Severn. An attempt was made to derive equations with which to predict post-barrage densities of invertebrates from easily measured, static environmental variables. The fact that a site was in the Severn had a significant additional effect on invertebrate density in seven cases. This suggests that there is a special feature of the Severn, probably one associated with its highly dynamic nature. This factor must be identified if the post-barrage densities of invertebrates are to be successfully predicted. (author)

  6. Prediction of bending moment resistance of screw connected joints in plywood members using regression models, and comparison with commercial medium density fiberboard (MDF) and particleboard

    Directory of Open Access Journals (Sweden)

    Sadegh Maleki

    2014-11-01

    The study aimed at predicting the bending moment resistance of screwed joints (coarse and fine thread) in plywood members using regression models. The thickness of the plywood member was 19 mm, and it was compared with medium density fiberboard (MDF) and particleboard of 18 mm thickness. Two types of screws were used: coarse and fine thread drywall screws with nominal diameters of 6, 8 and 10 mm and lengths of 3.5, 4 and 5 cm, respectively, and sheet metal screws with diameters of 8 and 10 mm and a length of 4 cm. The results of the study showed that the bending moment resistance of screwed joints increased with increasing screw diameter and penetration depth. Screw length was found to have a larger influence on bending moment resistance than screw diameter. Bending moment resistance with coarse thread drywall screws was higher than with fine thread drywall screws. The highest bending moment resistance (71.76 N.m) was observed in joints made with coarse screws 5 mm in diameter with 28 mm depth of penetration. The lowest bending moment resistance (12.08 N.m) was observed in joints having fine screws with 3.5 mm diameter and 9 mm penetration. Furthermore, bending moment resistance in plywood was higher than in medium density fiberboard (MDF) and particleboard. Finally, it was found that the ultimate bending moment resistance of a plywood joint can be predicted by the formula Wc = 0.189×D^0.726×P^0.577 for coarse thread drywall screws and Wf = 0.086×D^0.942×P^0.704 for fine ones, where D is the screw diameter and P the depth of penetration. The analysis of variance of the experimental and predicted data showed that the developed models provide a fair approximation of actual experimental measurements.
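
    The two fitted equations transcribe directly into a small helper; the coefficients are as quoted in the abstract, and D and P are assumed to be in the units used by the study:

      def bending_moment(D, P, thread="coarse"):
          # Fitted prediction equations from the study:
          #   coarse thread: Wc = 0.189 * D^0.726 * P^0.577
          #   fine thread:   Wf = 0.086 * D^0.942 * P^0.704
          if thread == "coarse":
              return 0.189 * D**0.726 * P**0.577
          return 0.086 * D**0.942 * P**0.704

      print(bending_moment(D=5, P=28, thread="coarse"))
      print(bending_moment(D=3.5, P=9, thread="fine"))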

  7. Tigers and their prey: Predicting carnivore densities from prey abundance

    Science.gov (United States)

    Karanth, K.U.; Nichols, J.D.; Kumar, N.S.; Link, W.A.; Hines, J.E.

    2004-01-01

    The goal of ecology is to understand interactions that determine the distribution and abundance of organisms. In principle, ecologists should be able to identify a small number of limiting resources for a species of interest, estimate densities of these resources at different locations across the landscape, and then use these estimates to predict the density of the focal species at these locations. In practice, however, development of functional relationships between abundances of species and their resources has proven extremely difficult, and examples of such predictive ability are very rare. Ecological studies of prey requirements of tigers Panthera tigris led us to develop a simple mechanistic model for predicting tiger density as a function of prey density. We tested our model using data from a landscape-scale long-term (1995-2003) field study that estimated tiger and prey densities in 11 ecologically diverse sites across India. We used field techniques and analytical methods that specifically addressed sampling and detectability, two issues that frequently present problems in macroecological studies of animal populations. Estimated densities of ungulate prey ranged between 5.3 and 63.8 animals per km2. Estimated tiger densities (3.2-16.8 tigers per 100 km2) were reasonably consistent with model predictions. The results provide evidence of a functional relationship between abundances of large carnivores and their prey under a wide range of ecological conditions. In addition to generating important insights into carnivore ecology and conservation, the study provides a potentially useful model for the rigorous conduct of macroecological science.

  8. Tigers and their prey: Predicting carnivore densities from prey abundance.

    Science.gov (United States)

    Karanth, K Ullas; Nichols, James D; Kumar, N Samba; Link, William A; Hines, James E

    2004-04-06

    The goal of ecology is to understand interactions that determine the distribution and abundance of organisms. In principle, ecologists should be able to identify a small number of limiting resources for a species of interest, estimate densities of these resources at different locations across the landscape, and then use these estimates to predict the density of the focal species at these locations. In practice, however, development of functional relationships between abundances of species and their resources has proven extremely difficult, and examples of such predictive ability are very rare. Ecological studies of prey requirements of tigers Panthera tigris led us to develop a simple mechanistic model for predicting tiger density as a function of prey density. We tested our model using data from a landscape-scale long-term (1995-2003) field study that estimated tiger and prey densities in 11 ecologically diverse sites across India. We used field techniques and analytical methods that specifically addressed sampling and detectability, two issues that frequently present problems in macroecological studies of animal populations. Estimated densities of ungulate prey ranged between 5.3 and 63.8 animals per km2. Estimated tiger densities (3.2-16.8 tigers per 100 km2) were reasonably consistent with model predictions. The results provide evidence of a functional relationship between abundances of large carnivores and their prey under a wide range of ecological conditions. In addition to generating important insights into carnivore ecology and conservation, the study provides a potentially useful model for the rigorous conduct of macroecological science.

  9. Phalangeal bone mineral density predicts incident fractures

    DEFF Research Database (Denmark)

    Friis-Holmberg, Teresa; Brixen, Kim; Rubin, Katrine Hass

    2012-01-01

    This prospective study investigates the use of phalangeal bone mineral density (BMD) in predicting fractures in a cohort (15,542) who underwent a BMD scan. In both women and men, a decrease in BMD was associated with an increased risk of fracture when adjusted for age and prevalent fractures. PURPOSE: The aim of this study was to evaluate the ability of a compact and portable scanner using radiographic absorptiometry (RA) to predict major osteoporotic fractures. METHODS: This prospective study included a cohort of 15,542 men and women aged 18-95 years, who underwent a BMD scan in the Danish Health Examination Survey 2007-2008. BMD at the middle phalanges of the second, third and fourth digits of the non-dominant hand was measured using RA (Alara MetriScan®). These data were merged with information on incident fractures retrieved from the Danish National Patient Registry comprising the International

  10. FleaTickRisk: a meteorological model developed to monitor and predict the activity and density of three tick species and the cat flea in Europe

    Directory of Open Access Journals (Sweden)

    Frédéric Beugnet

    2009-11-01

    Mathematical modelling is quite a recent tool in epidemiology. Geographical information systems (GIS) combined with remote sensing (data collection and analysis) provide valuable models, but the integration of climatologic models in parasitology and epidemiology is less common. The aim of our model, called "FleaTickRisk", was to use meteorological data and forecasts to monitor the activity and density of some arthropods. Our parasitological model uses the Weather Research and Forecasting (WRF) meteorological model integrating biological parameters. The WRF model provides a temperature and humidity picture four times a day (at 6:00, 12:00, 18:00 and 24:00 hours). Its geographical resolution is 27 x 27 km over Europe (area between longitudes 10.5° W and 30° E and latitudes 37.75° N and 62° N). The model also provides weekly forecasts. Past data were compared and revalidated using current meteorological data generated by ground stations and weather satellites. The WRF model also includes geographical information stemming from United States Geological Survey biotope maps with a 30'' spatial resolution (approximately 900 x 900 m). WRF takes into account specific climatic conditions due to valleys, altitudes, lakes and wind specificities. The biological parameters of Ixodes ricinus, Dermacentor reticulatus, Rhipicephalus sanguineus and Ctenocephalides felis felis were transformed into a matrix of activity. This activity matrix is expressed as a percentage, ranging from 0 to 100, for each interval of temperature x humidity. The activity of these arthropods is defined by their ability to infest hosts, take blood meals and reproduce. For each arthropod, the matrix was calculated using existing data collected under optimal temperature and humidity conditions, as well as the timing of the life cycle. The mathematical model integrating both the WRF model (meteorological data + geographical data) and the biological matrix provides two indexes: an

  11. Maximum likelihood estimation for predicting the probability of obtaining variable shortleaf pine regeneration densities

    Science.gov (United States)

    Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin

    2003-01-01

    A logistic equation is the basis for a model that predicts the probability of obtaining regeneration at specified densities. The density of regeneration (trees/ha) for which an estimate of probability is desired can be specified by means of independent variables in the model. When estimating parameters, the dependent variable is set to 1 if the regeneration density (...
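
    A sketch of the logistic form described: the probability of obtaining regeneration at a specified density is p = 1 / (1 + exp(-(b0 + b1*x1 + ...))), where the specified density enters as one of the independent variables. All coefficients and covariates below are hypothetical:

      import numpy as np

      def p_regeneration(x, beta):
          # x: covariate vector, including the specified regeneration density;
          # beta: fitted coefficients, intercept first.
          z = beta[0] + np.dot(beta[1:], x)
          return 1.0 / (1.0 + np.exp(-z))

      beta = np.array([0.8, -0.002, 0.05])   # hypothetical fitted coefficients
      x = np.array([2500.0, 12.0])           # e.g. target trees/ha, site covariate
      print(p_regeneration(x, beta))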

  12. Cultural Resource Predictive Modeling

    Science.gov (United States)

    2017-10-01

    refining formal, inductive predictive models is the quality of the archaeological and environmental data. To build models efficiently, relevant... geomorphology, and historic information. Lessons Learned: The original model was focused on the identification of prehistoric resources. This... system but uses predictive modeling informally. For example, there is no probability for buried archaeological deposits on the Burton Mesa, but there is

  13. Predictive modeling of complications.

    Science.gov (United States)

    Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P

    2016-09-01

    Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions.

  14. Archaeological predictive model set.

    Science.gov (United States)

    2015-03-01

    This report is the documentation for Task 7 of the Statewide Archaeological Predictive Model Set. The goal of this project is to: develop a set of statewide predictive models to assist the planning of transportation projects. PennDOT is developing t...

  15. Wind power prediction models

    Science.gov (United States)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.

  16. Zephyr - the prediction models

    DEFF Research Database (Denmark)

    Nielsen, Torben Skov; Madsen, Henrik; Nielsen, Henrik Aalborg

    2001-01-01

    This paper briefly describes new models and methods for predicting the wind power output from wind farms. The system is being developed in a project which has the research organization Risø and the department of Informatics and Mathematical Modelling (IMM) as the modelling team and all the Danish utilities as partners and users. The new models are evaluated for five wind farms in Denmark as well as one wind farm in Spain. It is shown that the predictions based on conditional parametric models are superior to the predictions obtained by state-of-the-art parametric models.

  17. Inverse and Predictive Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Syracuse, Ellen Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-27

    The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple – one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions – to the complex – multidimensional models that are constrained by several types of data and result in more accurate predictions. While team members typically build models of geophysical characteristics of Earth and source distributions at scales of 1 to 1000s of km, the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.

  18. Escherichia coli bacteria density in relation to turbidity, streamflow characteristics, and season in the Chattahoochee River near Atlanta, Georgia, October 2000 through September 2008—Description, statistical analysis, and predictive modeling

    Science.gov (United States)

    Lawrence, Stephen J.

    2012-01-01

    Water-based recreation—such as rafting, canoeing, and fishing—is popular among visitors to the Chattahoochee River National Recreation Area (CRNRA) in north Georgia. The CRNRA is a 48-mile reach of the Chattahoochee River upstream from Atlanta, Georgia, managed by the National Park Service (NPS). Historically, high densities of fecal-indicator bacteria have been documented in the Chattahoochee River and its tributaries at levels that commonly exceeded Georgia water-quality standards. In October 2000, the NPS partnered with the U.S. Geological Survey (USGS), State and local agencies, and non-governmental organizations to monitor Escherichia coli bacteria (E. coli) density and develop a system to alert river users when E. coli densities exceeded the U.S. Environmental Protection Agency (USEPA) single-sample beach criterion of 235 colonies (most probable number) per 100 milliliters (MPN/100 mL) of water. This program, called BacteriALERT, monitors E. coli density, turbidity, and water temperature at two sites on the Chattahoochee River upstream from Atlanta, Georgia. This report summarizes E. coli bacteria density and turbidity values in water samples collected between 2000 and 2008 as part of the BacteriALERT program; describes the relations between E. coli density and turbidity, streamflow characteristics, and season; and describes the regression analyses used to develop predictive models that estimate E. coli density in real time at both sampling sites.
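
    A sketch of the kind of real-time regression used in such programs: regress log10 of E. coli density on log10 of turbidity (plus, for example, a season term), then compare the back-transformed prediction with the 235 MPN/100 mL criterion. The coefficients below are placeholders, not the published BacteriALERT model:

      import numpy as np

      def predict_ecoli(turbidity, summer, b=(0.9, 1.1, 0.2)):
          # log10(density) = b0 + b1*log10(turbidity) + b2*summer (assumed form)
          log10_density = b[0] + b[1] * np.log10(turbidity) + b[2] * summer
          return 10.0 ** log10_density          # MPN/100 mL

      for turb in (5, 50, 500):
          d = predict_ecoli(turb, summer=1)
          flag = "ALERT" if d > 235 else "ok"   # USEPA single-sample criterion
          print(f"turbidity {turb:4d}: {d:8.0f} MPN/100 mL  {flag}")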

  19. Prediction of maximum dry density of local granular fills | Worku ...

    African Journals Online (AJOL)

    The paper presents a relation developed to predict maximum dry density (MDD) in terms of the solid density and the gradation coefficients that characterize the grain size distribution of locally employed granular fill materials. For this purpose, two geologically different soils commonly used as selected fill materials are ...

  20. PREDICTION OF MAXIMUM DRY DENSITY OF LOCAL GRANULAR ...

    African Journals Online (AJOL)

    Department of Civil Engineering, Addis Ababa University. ABSTRACT: The paper presents a relation developed to predict maximum dry density (MDD) in terms of the solid density and the gradation coefficients that characterize the grain size distribution of locally employed granular fill materials. For this purpose,

  1. Excess seawater nutrients, enlarged algal symbiont densities and bleaching sensitive reef locations: 2. A regional-scale predictive model for the Great Barrier Reef, Australia.

    Science.gov (United States)

    Wooldridge, Scott A; Heron, Scott F; Brodie, Jon E; Done, Terence J; Masiri, Itsara; Hinrichs, Saskia

    2017-01-15

    A spatial risk assessment model is developed for the Great Barrier Reef (GBR, Australia) that helps identify reef locations at higher or lower risk of coral bleaching in summer heat-wave conditions. The model confirms the considerable benefit of discriminating nutrient-enriched areas that contain corals with enlarged (suboptimal) symbiont densities for the purpose of identifying bleaching-sensitive reef locations. The benefit of the new system-level understanding is showcased in terms of: (i) improving early-warning forecasts of summer bleaching risk, (ii) explaining historical bleaching patterns, (iii) testing the bleaching-resistant quality of the current marine protected area (MPA) network, (iv) identifying routinely monitored coral health attributes, such as the tissue energy reserves and skeletal growth characteristics (viz. density and extension rates) that correlate with bleaching resistant reef locations, and (v) targeting region-specific water quality improvement strategies that may increase reef-scale coral health and bleaching resistance. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  2. A Trade Study of Thermosphere Empirical Neutral Density Models

    Science.gov (United States)

    Lin, C. S.; Cable, S. B.; Sutton, E. K.

    2014-12-01

    Accurate orbit prediction of space objects critically relies on modeling of the thermospheric neutral density that determines drag force. In a trade study we have investigated a methodology to assess the performance of neutral density models in predicting orbit against a baseline orbit trajectory. We use a metric defined as the along-track error accumulated over one day of orbit prediction for a given neutral density model, compared against the satellite's GPS positions. A set of ground truth data including Gravity Recovery and Climate Experiment (GRACE) accelerometer and GPS data, solar radio F10.7 proxy and magnetic activity measurements are used to calculate the baseline orbit. This approach is applied to compare the daily along-track errors among the HASDM, JB08, MSISE-00 and DTM-2012 neutral density models. The dynamically calibrated HASDM model yields a daily along-track error close to the baseline error and lower than the other empirical models. Among the three empirical models (JB08, MSISE-00 and DTM-2012), the MSISE-00 model produced the smallest daily along-track error. The results suggest that the developed metric and methodology could be used to assess the overall errors in orbit prediction expected from empirical density models. They have also been incorporated into an analysis tool, the Satellite Orbital Drag Error Estimator (SODEE), to estimate orbit prediction errors.

  3. A cosmological model with compact space sections and low mass density

    International Nuclear Information System (INIS)

    Fagundes, H.V.

    1982-01-01

    A general relativistic cosmological model is presented, which has closed space sections and mass density below a critical density similar to that of Friedmann's models. The model may predict double images of cosmic sources. (Author)

  4. Linking density functional and mode coupling models for supercooled liquids

    OpenAIRE

    Premkumar, Leishangthem; Bidhoodi, Neeta; Das, Shankar P.

    2015-01-01

    We compare predictions from two familiar models of the metastable supercooled liquid, constructed respectively with a thermodynamic and a dynamic approach. In the so-called density functional theory (DFT) the free energy $F[\rho]$ of the liquid is a functional of the inhomogeneous density $\rho({\bf r})$. The metastable state is identified as a local minimum of $F[\rho]$. The sharp density profile characterizing $\rho({\bf r})$ is identified as a single particle oscillator, whose frequency is obta...

  5. Tigers and their prey: Predicting carnivore densities from prey abundance

    OpenAIRE

    Karanth, K. Ullas; Nichols, James D.; Kumar, N. Samba; Link, William A.; Hines, James E.

    2004-01-01

    The goal of ecology is to understand interactions that determine the distribution and abundance of organisms. In principle, ecologists should be able to identify a small number of limiting resources for a species of interest, estimate densities of these resources at different locations across the landscape, and then use these estimates to predict the density of the focal species at these locations. In practice, however, development of functional relationships between abundances of species and...

  6. Accurate prediction of defect properties in density functional supercell calculations

    International Nuclear Information System (INIS)

    Lany, Stephan; Zunger, Alex

    2009-01-01

    The theoretical description of defects and impurities in semiconductors is largely based on density functional theory (DFT) employing supercell models. The literature discussion of uncertainties that limit the predictivity of this approach has focused mostly on two issues: (1) finite-size effects, in particular for charged defects; (2) the band-gap problem in local or semi-local DFT approximations. We here describe how finite-size effects (1) in the formation energy of charged defects can be accurately corrected in a simple way, i.e. by potential alignment in conjunction with a scaling of the Madelung-like screened first order correction term. The factor involved with this scaling depends only on the dielectric constant and the shape of the supercell, and quite accurately accounts for the full third order correction according to Makov and Payne. We further discuss in some detail the background and justification for this correction method, and also address the effect of the ionic screening on the magnitude of the image charge energy. In regard to (2) the band-gap problem, we discuss the merits of non-local external potentials that are added to the DFT Hamiltonian and allow for an empirical band-gap correction without significantly increasing the computational demand over that of standard DFT calculations. In combination with LDA + U, these potentials are further instrumental for the prediction of polaronic defects with localized holes in anion-p orbitals, such as the metal-site acceptors in wide-gap oxide semiconductors
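
    A minimal sketch (not the authors' code) of a scaled first-order image-charge correction of the kind described, with potential alignment; atomic units are assumed, and the 2/3 factor shown is only an illustrative shape factor for a near-cubic cell, since the paper derives the scaling from the dielectric constant and supercell shape:

    ```python
    # Illustrative sketch of a scaled first-order image-charge correction
    # for a charged defect in a supercell, plus potential alignment.

    def image_charge_correction(q, alpha_M, eps, L, shape_scale=2.0/3.0):
        """First-order Madelung-like term E1 = q^2 alpha_M / (2 eps L),
        scaled to approximate the full Makov-Payne correction."""
        e1 = q**2 * alpha_M / (2.0 * eps * L)
        return shape_scale * e1

    def aligned_formation_energy(E_def, E_host, q, dV_align, E_img):
        """Potential alignment shifts the defect energy by q * dV, where dV
        compares electrostatic potentials far from the defect. Chemical
        potential and Fermi-level terms are omitted for brevity."""
        return E_def - E_host + q * dV_align + E_img
    ```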

  7. Temperature- and density-dependent quark mass model

    Indian Academy of Sciences (India)

    We report on the study of the mass–radius (M–R) relation and the radial oscillations of magnetized proto strange stars. For the quark matter we have employed the very recent modification, the temperature- and density-dependent quark mass model, of the well-known density-dependent quark mass model. We find that the ...

  8. Predictive Surface Complexation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  9. Density contrast indicators in cosmological dust models

    Indian Academy of Sciences (India)

    Density contrast indicators are among the most useful indicators to measure the degree of inhomogeneity of cosmological models. Over the years a large number of such proposals have been put forward (see [10]), many of which however have the undesirable ...

  10. Density functional theory and multiscale materials modeling

    Indian Academy of Sciences (India)

    One of the vital ingredients in the theoretical tools useful in materials modeling at all the length scales of interest is the concept of density. In the microscopic length scale, it is the electron density that has played a major role in providing a deeper understanding of chemical binding in atoms, molecules and solids.

  11. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines...

  12. Predictive Modeling of Black Spruce (Picea mariana (Mill.) B.S.P.) Wood Density Using Stand Structure Variables Derived from Airborne LiDAR Data in Boreal Forests of Ontario

    Directory of Open Access Journals (Sweden)

    Bharat Pokharel

    2016-12-01

    Our objective was to model the average wood density in black spruce trees in representative stands across a boreal forest landscape based on relationships with predictor variables extracted from airborne light detection and ranging (LiDAR) point cloud data. Increment core samples were collected from dominant or co-dominant black spruce trees in a network of 400 m2 plots distributed among forest stands representing the full range of species composition and stand development across a 1,231,707 ha forest management unit in northeastern Ontario, Canada. Wood quality data were generated from optical microscopy, image analysis, X-ray densitometry and diffractometry as employed in SilviScan™. Each increment core was associated with a set of field measurements at the plot level as well as a suite of LiDAR-derived variables calculated on a 20 × 20 m raster from a wall-to-wall coverage at a resolution of ~1 point m−2. We used a multiple linear regression approach to identify important predictor variables and describe relationships between stand structure and wood density for average black spruce trees in the stands we observed. A hierarchical classification model was then fitted using random forests to make spatial predictions of mean wood density for average trees in black spruce stands. The model explained 39 percent of the variance in the response variable, with an estimated root mean square error of 38.8 kg·m−3. Among the predictor variables, P20 (second-decile LiDAR height, in m) and quadratic mean diameter were most important. Other predictors describing canopy depth and cover were of secondary importance and differed according to the modeling approach. LiDAR-derived variables appear to capture differences in stand structure that reflect different constraints on growth rates, determining the proportion of thin-walled earlywood cells in black spruce stems, and ultimately influencing the pattern of variation in important wood quality attributes.
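
    A minimal sketch of the second modeling stage with scikit-learn, assuming a hypothetical plot-level table whose columns carry the predictors named in the abstract:

    ```python
    # Sketch only: file and column names are hypothetical placeholders.
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    plots = pd.read_csv("plots.csv")            # plot-level predictor table
    X = plots[["P20", "qmd", "canopy_cover"]]   # e.g. 2nd-decile LiDAR height, quadratic mean diameter
    y = plots["wood_density"]                   # mean wood density, kg m^-3

    rf = RandomForestRegressor(n_estimators=500, random_state=0)
    rmse = -cross_val_score(rf, X, y, scoring="neg_root_mean_squared_error", cv=5).mean()
    print(f"CV RMSE: {rmse:.1f} kg m^-3")       # the abstract reports ~38.8 kg m^-3
    rf.fit(X, y)                                # then apply to the wall-to-wall raster
    ```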

  13. Prediction of crystal densities of organic explosives by group additivity

    Energy Technology Data Exchange (ETDEWEB)

    Stine, J R

    1981-08-01

    The molar volume of a crystalline organic compound is assumed to be a linear combination of its constituent volumes. Compounds consisting only of the elements hydrogen, carbon, nitrogen, oxygen, and fluorine are considered. The constituent volumes are taken to be the volumes of atoms in particular bonding environments and are evaluated from a large set of crystallographic data. The predicted density has an expected error of about 3%. These results are applied to a large number of explosive compounds.
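
    A minimal sketch of the group-additivity idea: density is molar mass divided by a molar volume summed from per-atom increments. The volume table below is purely illustrative, not Stine's fitted values:

    ```python
    # Hypothetical atomic volume increments, cm^3/mol (illustrative only)
    ATOM_VOLUMES = {"C": 9.0, "H": 2.0, "N": 8.0, "O": 9.0, "F": 9.0}

    def crystal_density(formula, molar_mass):
        """formula: dict of element counts; molar_mass in g/mol -> g/cm^3."""
        v_molar = sum(ATOM_VOLUMES[el] * n for el, n in formula.items())
        return molar_mass / v_molar

    # Example: TNT, C7H5N3O6 (M = 227.13 g/mol)
    print(crystal_density({"C": 7, "H": 5, "N": 3, "O": 6}, 227.13))
    ```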

  14. Melanoma risk prediction models

    Directory of Open Access Journals (Sweden)

    Nikolić Jelena

    2014-01-01

    Background/Aim. The lack of effective therapy for advanced stages of melanoma emphasizes the importance of preventive measures and screenings of the population at risk. Identifying individuals at high risk should allow targeted screenings and follow-up involving those who would benefit most. The aim of this study was to identify the most significant factors for melanoma prediction in our population and to create prognostic models for identification and differentiation of individuals at risk. Methods. This case-control study included 697 participants (341 patients and 356 controls) that underwent an extensive interview and skin examination in order to check risk factors for melanoma. Pairwise univariate statistical comparison was used for the coarse selection of the most significant risk factors. These factors were fed into logistic regression (LR) and alternating decision tree (ADT) prognostic models that were assessed for their usefulness in identification of patients at risk to develop melanoma. Validation of the LR model was done by the Hosmer and Lemeshow test, whereas the ADT was validated by 10-fold cross-validation. The achieved sensitivity, specificity, accuracy and AUC for both models were calculated. The melanoma risk score (MRS) based on the outcome of the LR model was presented. Results. The LR model showed that the following risk factors were associated with melanoma: sunbeds (OR = 4.018; 95% CI 1.724-9.366 for those that sometimes used sunbeds), solar damage of the skin (OR = 8.274; 95% CI 2.661-25.730 for those with severe solar damage), hair color (OR = 3.222; 95% CI 1.984-5.231 for light brown/blond hair), the number of common naevi (over 100 naevi had OR = 3.57; 95% CI 1.427-8.931), the number of dysplastic naevi (for 1 to 10 dysplastic naevi, OR was 2.672; 95% CI 1.572-4.540; for more than 10 naevi, OR was 6.487; 95% CI 1.993-21.119), Fitzpatrick's phototype and the presence of congenital naevi. Red hair, phototype I and large congenital naevi were ...

  15. Thermospheric density and satellite drag modeling

    Science.gov (United States)

    Mehta, Piyush Mukesh

    The United States depends heavily on its space infrastructure for a vast number of commercial and military applications. Space Situational Awareness (SSA) and Threat Assessment require maintaining accurate knowledge of the orbits of resident space objects (RSOs) and the associated uncertainties. Atmospheric drag is the largest source of uncertainty for low-perigee RSOs. The uncertainty stems from inaccurate modeling of neutral atmospheric mass density and inaccurate modeling of the interaction between the atmosphere and the RSO. In order to reduce the uncertainty in drag modeling, both atmospheric density and drag coefficient (CD) models need to be improved. Early atmospheric density models were developed from orbital drag data or observations of a few early compact satellites. To simplify calculations, densities derived from orbit data used a fixed CD value of 2.2 measured in a laboratory using clean surfaces. Measurements from pressure gauges obtained in the early 1990s have confirmed the adsorption of atomic oxygen on satellite surfaces. The varying levels of adsorbed oxygen along with the constantly changing atmospheric conditions cause large variations in CD with altitude and along the orbit of the satellite. Therefore, the use of a fixed CD in early development has resulted in large biases in atmospheric density models. A technique for generating corrections to empirical density models using precision orbit ephemerides (POE) as measurements in an optimal orbit determination process was recently developed. The process generates simultaneous corrections to the atmospheric density and ballistic coefficient (BC) by modeling the corrections as statistical exponentially decaying Gauss-Markov processes. The technique has been successfully implemented in generating density corrections using the CHAMP and GRACE satellites. This work examines the effectiveness, specifically the transfer of density models errors into BC estimates, of the technique using the CHAMP and
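
    The correction technique described above models density and ballistic-coefficient corrections as exponentially decaying Gauss-Markov processes; a minimal sketch of such a first-order process (all parameter values hypothetical) is:

    ```python
    import numpy as np

    def gauss_markov(n_steps, dt, tau, sigma, rng=np.random.default_rng(0)):
        """First-order Gauss-Markov process: x_{k+1} = exp(-dt/tau) x_k + w_k,
        with stationary standard deviation sigma and correlation time tau."""
        phi = np.exp(-dt / tau)
        q = sigma**2 * (1.0 - phi**2)      # discrete process-noise variance
        x = np.zeros(n_steps)
        for k in range(n_steps - 1):
            x[k + 1] = phi * x[k] + rng.normal(0.0, np.sqrt(q))
        return x

    # e.g. a density-correction factor with a 1.8 h correlation time, 10 s steps
    rho_corr = gauss_markov(n_steps=8640, dt=10.0, tau=6480.0, sigma=0.1)
    ```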

  16. The role of station density for predicting daily runoff by top-kriging interpolation in Austria

    Directory of Open Access Journals (Sweden)

    Parajka Juraj

    2015-09-01

    Direct interpolation of daily runoff observations to ungauged sites is an alternative to hydrological model regionalisation. Such estimation is particularly important in small headwater basins characterized by sparse hydrological and climate observations, but often large spatial variability. The main objective of this study is to evaluate the predictive accuracy of top-kriging interpolation driven by different numbers of stations (i.e. station densities) in an input dataset. The idea is to interpolate daily runoff for different station densities in Austria and to evaluate the minimum number of stations needed for accurate runoff predictions. Top-kriging efficiency is tested for ten different random samples at ten different station densities. The predictive accuracy is evaluated by ordinary cross-validation and full-sample cross-validations. The methodology is tested using 555 gauges with daily observations in the period 1987-1997. The results of the cross-validation indicate that, in Austria, top-kriging interpolation is superior to hydrological model regionalisation if station density exceeds approximately 2 stations per 1000 km2 (175 stations in Austria). The average median Nash-Sutcliffe cross-validation efficiency is larger than 0.7 for densities above 2.4 stations/1000 km2. For such densities, the variability of runoff efficiency is very small over the ten random samples. Lower runoff efficiency is found for low station densities (less than 1 station/1000 km2) and in some smaller headwater basins.
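
    For reference, the Nash-Sutcliffe efficiency used above to score cross-validated runoff predictions can be computed as in this short sketch:

    ```python
    import numpy as np

    def nash_sutcliffe(obs, sim):
        """NSE = 1 - SSE / variance of observations; 1 is a perfect fit,
        0 means no better than predicting the mean observed runoff."""
        obs, sim = np.asarray(obs), np.asarray(sim)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    # NSE > 0.7 (the threshold reported above) indicates predictions that
    # clearly outperform simply using the mean observed runoff.
    ```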

  17. Combinatorial nuclear level-density model

    International Nuclear Information System (INIS)

    Uhrenholt, H.; Åberg, S.; Dobrowolski, A.; Døssing, Th.; Ichikawa, T.; Möller, P.

    2013-01-01

    A microscopic nuclear level-density model is presented. The model is a completely combinatorial (micro-canonical) model based on the folded-Yukawa single-particle potential and includes explicit treatment of pairing, rotational and vibrational states. The microscopic character of all states enables extraction of level-distribution functions with respect to pairing gaps, parity and angular momentum. The results of the model are compared to available experimental data: level spacings at neutron separation energy, data on total level-density functions from the Oslo method, cumulative level densities from low-lying discrete states, and data on parity ratios. Spherical and deformed nuclei follow basically different coupling schemes, and we focus on deformed nuclei

  18. Confidence scores for prediction models

    DEFF Research Database (Denmark)

    Gerds, Thomas Alexander; van de Wiel, MA

    2011-01-01

    In medical statistics, many alternative strategies are available for building a prediction model based on training data. Prediction models are routinely compared by means of their prediction performance in independent validation data. If only one data set is available for training and validation......, then rival strategies can still be compared based on repeated bootstraps of the same data. Often, however, the overall performance of rival strategies is similar and it is thus difficult to decide for one model. Here, we investigate the variability of the prediction models that results when the same...... to distinguish rival prediction models with similar prediction performances. Furthermore, on the subject level a confidence score may provide useful supplementary information for new patients who want to base a medical decision on predicted risk. The ideas are illustrated and discussed using data from cancer...

  19. Predicting the morphological characteristics and basic density of Eucalyptus wood using the NIRS technique

    Directory of Open Access Journals (Sweden)

    Lívia Cássia Viana

    2009-12-01

    This work aimed to apply the near infrared spectroscopy (NIRS) technique for fast prediction of the basic density and morphological characteristics of wood fibers in Eucalyptus clones. Six Eucalyptus clones aged three years were used, obtained from plantations in Cocais, Guanhães, Rio Doce and Santa Bárbara, in Minas Gerais state. The morphological characteristics of the fibers and the basic density of the wood were determined by conventional methods and correlated with near infrared spectra using partial least squares (PLS) regression. The best calibration correlations were obtained in basic density prediction, with values of 0.95 for the cross-validation correlation coefficient (Rcv) and 3.4 for the ratio of performance to deviation (RPD) in clone 57. Fiber length can be predicted by models with Rcv ranging from 0.61 to 0.89 and standard error (SECV) ranging from 0.037 to 0.079 mm. The prediction model for wood fiber width presented higher Rcv (0.82) and RPD (1.9) values in clone 1046. The best fits to estimate lumen diameter and fiber wall thickness were obtained with information from clone 1046. In some clones, the NIRS technique proved efficient in predicting the anatomical properties and basic density of wood in Eucalyptus clones.
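
    A minimal sketch of this kind of PLS calibration using scikit-learn, with hypothetical file names and an illustrative number of components:

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    X = np.loadtxt("nir_spectra.csv", delimiter=",")    # rows: samples, cols: wavelengths
    y = np.loadtxt("basic_density.csv", delimiter=",")  # lab-measured basic density

    pls = PLSRegression(n_components=8)                 # component count is illustrative
    y_cv = cross_val_predict(pls, X, y, cv=10).ravel()  # cross-validated predictions

    r_cv = np.corrcoef(y, y_cv)[0, 1]                   # cf. Rcv = 0.95 reported above
    rpd = y.std() / np.sqrt(np.mean((y - y_cv) ** 2))   # ratio of performance to deviation
    ```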

  20. Propulsion Physics Using the Chameleon Density Model

    Science.gov (United States)

    Robertson, Glen A.

    2011-01-01

    To grow as a space faring race, future spaceflight systems will require a new theory of propulsion. Specifically one that does not require mass ejection without limiting the high thrust necessary to accelerate within or beyond our solar system and return within a normal work period or lifetime. The Chameleon Density Model (CDM) is one such model that could provide new paths in propulsion toward this end. The CDM is based on Chameleon Cosmology, a dark matter theory introduced by Khoury and Weltman in 2004, named Chameleon as it is hidden within known physics, where the Chameleon field represents a scalar field within and about an object; even in the vacuum. The CDM relates to density changes in the Chameleon field, where the density changes are related to matter accelerations within and about an object. These density changes in turn change how an object couples to its environment. Whereby, thrust is achieved by causing a differential in the environmental coupling about an object. As a demonstration to show that the CDM fits within known propulsion physics, this paper uses the model to estimate the thrust from a solid rocket motor. Under the CDM, a solid rocket constitutes a two-body system, i.e., the changing density of the rocket and the changing density in the nozzle arising from the accelerated mass. Whereby, the interactions between these systems cause a differential coupling to the local gravity environment of the earth. It is shown that the resulting differential in coupling produces a calculated value for the thrust near equivalent to the conventional thrust model used in Sutton and Ross, Rocket Propulsion Elements. Even though embedded in the equations are the Universe energy scale factor, the reduced Planck mass and the Planck length, which relate the large Universe scale to the subatomic scale.

  1. Models for Experimental High Density Housing

    Science.gov (United States)

    Bradecki, Tomasz; Swoboda, Julia; Nowak, Katarzyna; Dziechciarz, Klaudia

    2017-10-01

    The article presents the effects of research on models of high density housing. The authors present urban projects for experimental high density housing estates. The design was based on research performed on 38 examples of similar housing in Poland that have been built after 2003. Some of the case studies show extreme density and that inspired the researchers to test individual virtual solutions that would answer the question: How far can we push the limits? The experimental housing projects show strengths and weaknesses of design driven only by such indexes as FAR (floor area ratio) and DPH (dwellings per hectare, i.e. housing density). Although such projects are implemented, the authors believe that there are reasons for limits since high index values may be in contradiction to the optimum character of housing environment. Virtual models on virtual plots presented by the authors were oriented toward maximising the DPH index and DAI (dwellings area index) which is very often the main driver for developers. The authors also raise the question of sustainability of such solutions. The research was carried out in the URBAN model research group (Gliwice, Poland) that consists of academic researchers and architecture students. The models reflect architectural and urban regulations that are valid in Poland. Conclusions might be helpful for urban planners, urban designers, developers, architects and architecture students.

  2. Density contrast indicators in cosmological dust models

    Indian Academy of Sciences (India)

    We study the evolution of these indicators with time in the context of inhomogeneous Szekeres models. We find that different observers (having either different spatial locations or different indicators) see different evolutions for the density contrast, which may or may not be monotonically increasing with time. We also find that ...

  3. Predicting Intra-Urban Population Densities in Africa using SAR and Optical Remote Sensing Data

    Science.gov (United States)

    Linard, C.; Steele, J.; Forget, Y.; Lopez, J.; Shimoni, M.

    2017-12-01

    The population of Africa is predicted to double over the next 40 years, driving profound social, environmental and epidemiological changes within rapidly growing cities. Estimations of within-city variations in population density must be improved in order to take urban heterogeneities into account and better help urban research and decision making, especially for vulnerability and health assessments. Satellite remote sensing offers an effective solution for mapping settlements and monitoring urbanization at different spatial and temporal scales. In Africa, the urban landscape is covered by slums and small houses, where the heterogeneity is high and where the man-made materials are natural. Innovative methods that combine optical and SAR data are therefore necessary for improving settlement mapping and population density predictions. An automatic method was developed to estimate built-up densities using recent and archived optical and SAR data and a multi-temporal database of built-up densities was produced for 48 African cities. Geo-statistical methods were then used to study the relationships between census-derived population densities and satellite-derived built-up attributes. Best predictors were combined in a Random Forest framework in order to predict intra-urban variations in population density in any large African city. Models show significant improvement of our spatial understanding of urbanization and urban population distribution in Africa in comparison to the state of the art.

  4. Predicting insect migration density and speed in the daytime convective boundary layer.

    Directory of Open Access Journals (Sweden)

    James R Bell

    Insect migration needs to be quantified if spatial and temporal patterns in populations are to be resolved. Yet so little ecology is understood above the flight boundary layer (i.e. >10 m), where in north-west Europe an estimated 3 billion insects km−1 month−1, comprising pests, beneficial insects and other species that contribute to biodiversity, use the atmosphere to migrate. Consequently, we elucidate meteorological mechanisms, principally related to wind speed and temperature, that drive variation in daytime aerial density and insect displacement speeds with increasing altitude (150-1200 m above ground level). We derived average aerial densities and displacement speeds of 1.7 million insects in the daytime convective atmospheric boundary layer using vertical-looking entomological radars. We first studied patterns of insect aerial densities and displacement speeds over a decade and linked these with average temperatures and wind velocities from a numerical weather prediction model. Generalized linear mixed models showed that average insect densities decline with increasing wind speed and increase with increasing temperatures, and that the relationship between displacement speed and density was negative. We then sought to determine how general these patterns were over space using a paired-site approach in which the relationship between sites was examined using simple linear regression. Both average speeds and densities were predicted remotely from a site over 100 km away, although insect densities were much noisier due to local 'spiking'. By late morning and afternoon, when insects are migrating in a well-developed convective atmosphere at high altitude, they become much more difficult to predict remotely than during the early morning and at lower altitudes. Overall, our findings suggest that predicting migrating insects at altitude at distances of ≈ 100 km is promising, but additional radars are needed to parameterise spatial covariance.

  5. Linking density functional and mode coupling models for supercooled liquids.

    Science.gov (United States)

    Premkumar, Leishangthem; Bidhoodi, Neeta; Das, Shankar P

    2016-03-28

    We compare predictions from two familiar models of the metastable supercooled liquid, respectively, constructed with thermodynamic and dynamic approaches. In the so-called density functional theory the free energy F[ρ] of the liquid is a functional of the inhomogeneous density ρ(r). The metastable state is identified as a local minimum of F[ρ]. The sharp density profile characterizing ρ(r) is identified as a single particle oscillator, whose frequency is obtained from the parameters of the optimum density function. On the other hand, a dynamic approach to supercooled liquids is taken in the mode coupling theory (MCT), which predicts a sharp ergodicity-non-ergodicity transition at a critical density. The single particle dynamics in the non-ergodic state, treated approximately, represents a propagating mode whose characteristic frequency is computed from the corresponding memory function of the MCT. The mass localization parameters in the above two models (treated in their simplest forms) are obtained, respectively, in terms of the corresponding natural frequencies depicted and are shown to have comparable magnitudes.

  6. Linking density functional and mode coupling models for supercooled liquids

    Energy Technology Data Exchange (ETDEWEB)

    Premkumar, Leishangthem; Bidhoodi, Neeta; Das, Shankar P. [School of Physical Sciences, Jawaharlal Nehru University, New Delhi 110067 (India)

    2016-03-28

    We compare predictions from two familiar models of the metastable supercooled liquid, respectively, constructed with thermodynamic and dynamic approaches. In the so-called density functional theory the free energy F[ρ] of the liquid is a functional of the inhomogeneous density ρ(r). The metastable state is identified as a local minimum of F[ρ]. The sharp density profile characterizing ρ(r) is identified as a single particle oscillator, whose frequency is obtained from the parameters of the optimum density function. On the other hand, a dynamic approach to supercooled liquids is taken in the mode coupling theory (MCT), which predicts a sharp ergodicity-non-ergodicity transition at a critical density. The single particle dynamics in the non-ergodic state, treated approximately, represents a propagating mode whose characteristic frequency is computed from the corresponding memory function of the MCT. The mass localization parameters in the above two models (treated in their simplest forms) are obtained, respectively, in terms of the corresponding natural frequencies depicted and are shown to have comparable magnitudes.

  7. Predicting moisture content and density distribution of Scots pine by microwave scanning of sawn timber

    International Nuclear Information System (INIS)

    Johansson, J.; Hagman, O.; Fjellner, B.A.

    2003-01-01

    This study was carried out to investigate the possibility of calibrating a prediction model for the moisture content and density distribution of Scots pine (Pinus sylvestris) using microwave sensors. The material was initially of green moisture content and was thereafter dried in several steps to zero moisture content. At each step, all the pieces were weighed, scanned with a microwave sensor (Satimo 9,4GHz), and computed tomography (CT)-scanned with a medical CT scanner (Siemens Somatom AR.T.). The output variables from the microwave sensor were used as predictors, and CT images that correlated with known moisture content were used as response variables. Multivariate models to predict average moisture content and density were calibrated using the partial least squares (PLS) regression. The models for average moisture content and density were applied at the pixel level, and the distribution was visualized. The results show that it is possible to predict both moisture content distribution and density distribution with high accuracy using microwave sensors. (author)

  8. Axial and appendicular bone density predict fractures in older women

    Science.gov (United States)

    Black, D. M.; Cummings, S. R.; Genant, H. K.; Nevitt, M. C.; Palermo, L.; Browner, W.

    1992-01-01

    To determine whether measurement of hip and spine bone mass by dual-energy x-ray absorptiometry (DEXA) predicts fractures in women and to compare the predictive value of DEXA with that of single-photon absorptiometry (SPA) of appendicular sites, we prospectively studied 8134 nonblack women age 65 years and older who had both DEXA and SPA measurements of bone mass. A total of 208 nonspine fractures, including 37 wrist fractures, occurred during the follow-up period, which averaged 0.7 years. The risk of fracture was inversely related to bone density at all measurement sites. After adjusting for age, the relative risk per decrease of 1 standard deviation in bone density for the occurrence of any fracture was 1.40 for measurement at the proximal femur (95% confidence interval 1.20-1.63) and 1.35 (1.15-1.58) for measurement at the spine. Results were similar for all regions of the proximal femur as well as SPA measurements at the calcaneus, distal radius, and proximal radius. None of these measurements was a significantly better predictor of fractures than the others. Furthermore, measurement of the distal radius was not a better predictor of wrist fracture (relative risk 1.64; 95% CI 1.13-2.37) than other sites, such as the lumbar spine (RR 1.56; CI 1.07-2.26), the femoral neck (RR 1.65; CI 1.12-2.41), or the calcaneus (RR 1.83; CI 1.26-2.64). We conclude that the inverse relationship between bone mass and risk of fracture in older women is similar for absorptiometric measurements made at the hip, spine, and appendicular sites.

  9. Prediction of nanofluids properties: the density and the heat capacity

    Science.gov (United States)

    Zhelezny, V. P.; Motovoy, I. V.; Ustyuzhanin, E. E.

    2017-11-01

    The results given in this report show that the addition of Al2O3 nanoparticles increases the density and decreases the heat capacity of isopropanol. Based on the experimental data, the excess molar volume and the excess molar heat capacity were calculated. The report suggests a new method for predicting the molar volume and molar heat capacity of nanofluids. It is established that the values of the excess thermodynamic functions are determined by the properties and the volume of the structurally oriented layers of the base fluid molecules near the surface of the nanoparticles. The heat capacity of these structurally oriented layers is less than the heat capacity of the base fluid at the given parameters due to the greater ordering of their structure. It is shown that information on the geometric dimensions of the structured layers of the base fluid near the nanoparticles can be obtained from data on the nanofluid density and, at ambient temperature, by the dynamic light scattering method. For calculations of the nanofluid heat capacity over a wide range of temperatures a new correlation based on extended scaling is proposed.
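
    The excess quantities discussed above are deviations from the ideal volume-weighted mixture rules; a short sketch of those baseline rules follows (the property values in the example are typical literature figures, not the report's data):

    ```python
    def nanofluid_density(phi, rho_np, rho_bf):
        """Ideal mixture density from nanoparticle volume fraction phi."""
        return phi * rho_np + (1.0 - phi) * rho_bf

    def nanofluid_heat_capacity(phi, rho_np, cp_np, rho_bf, cp_bf):
        """Ideal mass-basis heat capacity of the mixture, J/(kg K)."""
        rho_nf = nanofluid_density(phi, rho_np, rho_bf)
        return (phi * rho_np * cp_np + (1.0 - phi) * rho_bf * cp_bf) / rho_nf

    # e.g. 2 vol% Al2O3 (rho ~ 3970 kg/m^3) in isopropanol (rho ~ 786 kg/m^3)
    print(nanofluid_density(0.02, 3970.0, 786.0))  # > 786: density increases
    ```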

  10. Bootstrap prediction and Bayesian prediction under misspecified models

    OpenAIRE

    Fushiki, Tadayoshi

    2005-01-01

    We consider a statistical prediction problem under misspecified models. In a sense, Bayesian prediction is an optimal prediction method when an assumed model is true. Bootstrap prediction is obtained by applying Breiman's `bagging' method to a plug-in prediction. Bootstrap prediction can be considered to be an approximation to the Bayesian prediction under the assumption that the model is true. However, in applications, there are frequently deviations from the assumed model. In this paper, bo...

  11. Modeling of microcrack density based damage evolution in ceramic rods

    International Nuclear Information System (INIS)

    Grove, D.J.; Rajendran, A.M.

    2000-01-01

    This paper presents results from simulations of shock wave propagation in ceramic rods with and without confinement. The experiments involved steel and graded-density flyer plates impacting sleeved and unsleeved AD995 ceramic rods. The main objectives of simulating these experiments were: 1) to validate the Rajendran-Grove (RG) ceramic model constants, and 2) to investigate the effects of confinement on damage evolution in ceramic rods, as predicted by the RG model. While the experimental measurements do not indicate the details of damage evolution in the ceramic rod, the numerical modeling has provided some valuable insight into the damage initiation and propagation processes in ceramic rods

  12. Prediction models in complex terrain

    DEFF Research Database (Denmark)

    Marti, I.; Nielsen, Torben Skov; Madsen, Henrik

    2001-01-01

    The objective of the work is to investigate the performance of HIRLAM in complex terrain when used as input to energy production forecasting models, and to develop a statistical model to adapt HIRLAM predictions to the wind farm. The features of the terrain, especially the topography, influence...... the performance of HIRLAM in particular with respect to wind predictions. To estimate the performance of the model two spatial resolutions (0.5 Deg. and 0.2 Deg.) and different sets of HIRLAM variables were used to predict wind speed and energy production. The predictions of energy production for the wind farms...... are calculated using on-line measurements of power production as well as HIRLAM predictions as input, thus taking advantage of the auto-correlation which is present in the power production for shorter prediction horizons. Statistical models are used to describe the relationship between observed energy production...

  13. MODEL PREDICTIVE CONTROL FUNDAMENTALS

    African Journals Online (AJOL)

    2012-07-02

    Linear MPC: (1) uses a linear model ẋ = Ax + Bu; (2) quadratic cost function F = xᵀQx + uᵀRu; (3) linear constraints Hx + Gu < 0; (4) solved as a quadratic program. Nonlinear MPC: (1) nonlinear model ẋ = f(x, u); (2) cost function can be nonquadratic, F = F(x, u); (3) nonlinear constraints h(x, u) < 0; (4) solved as a nonlinear program.
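
    A minimal linear MPC sketch consistent with the outline above (linear dynamics, quadratic cost, linear constraints, solved as a quadratic program), using the cvxpy modeling library; all matrices and bounds are illustrative:

    ```python
    import numpy as np
    import cvxpy as cp

    A = np.array([[1.0, 0.1], [0.0, 1.0]])   # discretized double-integrator model
    B = np.array([[0.005], [0.1]])
    Q, R, N = np.eye(2), 0.1 * np.eye(1), 20  # state/input weights, horizon

    x = cp.Variable((2, N + 1))
    u = cp.Variable((1, N))
    x0 = np.array([1.0, 0.0])

    cost, constr = 0, [x[:, 0] == x0]
    for k in range(N):
        cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
        constr += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                   cp.abs(u[:, k]) <= 1.0]    # linear input constraints

    cp.Problem(cp.Minimize(cost), constr).solve()
    u_apply = u.value[:, 0]  # receding horizon: apply first input, then re-solve
    ```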

  14. Teaching Chemistry with Electron Density Models

    Science.gov (United States)

    Shusterman, Gwendolyn P.; Shusterman, Alan J.

    1997-07-01

    Linus Pauling once said that a topic must satisfy two criteria before it can be taught to students. First, students must be able to assimilate the topic within a reasonable amount of time. Second, the topic must be relevant to the educational needs and interests of the students. Unfortunately, the standard general chemistry textbook presentation of "electronic structure theory", set as it is in the language of molecular orbitals, has a difficult time satisfying either criterion. Many of the quantum mechanical aspects of molecular orbitals are too difficult for most beginning students to appreciate, much less master, and the few applications that are presented in the typical textbook are too limited in scope to excite much student interest. This article describes a powerful new method for teaching students about electronic structure and its relevance to chemical phenomena. This method, which we have developed and used for several years in general chemistry (G.P.S.) and organic chemistry (A.J.S.) courses, relies on computer-generated three-dimensional models of electron density distributions, and largely satisfies Pauling's two criteria. Students find electron density models easy to understand and use, and because these models are easily applied to a broad range of topics, they successfully convey to students the importance of electronic structure. In addition, when students finally learn about orbital concepts they are better prepared because they already have a well-developed three-dimensional picture of electronic structure to fall back on. We note in this regard that the types of models we use have found widespread, rigorous application in chemical research (1, 2), so students who understand and use electron density models do not need to "unlearn" anything before progressing to more advanced theories.

  15. Disentangling density-dependent dynamics using full annual cycle models and Bayesian model weight updating

    Science.gov (United States)

    Robinson, Orin J.; McGowan, Conor P.; Devers, Patrick K.

    2017-01-01

    Density dependence regulates populations of many species across all taxonomic groups. Understanding density dependence is vital for predicting the effects of climate, habitat loss and/or management actions on wild populations. Migratory species likely experience seasonal changes in the relative influence of density dependence on population processes such as survival and recruitment throughout the annual cycle. These effects must be accounted for when characterizing migratory populations via population models. To evaluate effects of density on seasonal survival and recruitment of a migratory species, we used an existing full annual cycle model framework for American black ducks Anas rubripes, and tested different density effects (including no effects) on survival and recruitment. We then used a Bayesian model weight updating routine to determine which population model best fit observed breeding population survey data between 1990 and 2014. The models that best fit the survey data suggested that survival and recruitment were affected by density dependence and that density effects were stronger on adult survival during the breeding season than during the non-breeding season. Analysis also suggests that regulation of survival and recruitment by density varied over time. Our results showed that different characterizations of density regulation changed every 8–12 years (three times in the 25-year period) for our population. Synthesis and applications. Using a full annual cycle modelling framework and model weighting routine will be helpful in evaluating density dependence for migratory species in both the short and long term. We used this method to disentangle the seasonal effects of density on the continental American black duck population, which will allow managers to better evaluate the effects of habitat loss and potential habitat management actions throughout the annual cycle. The method here may allow researchers to hone in on the proper form and/or strength of
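
    A minimal sketch of the model-weight updating idea: each candidate density-dependence model carries a weight that is renewed in proportion to how well it predicted the observed survey count. The Gaussian observation error and all numbers below are hypothetical, not the paper's specification:

    ```python
    import numpy as np
    from scipy.stats import norm

    def update_weights(weights, predictions, observed, sd):
        """weights: prior model weights; predictions: each model's forecast
        of the survey count; observed: the survey count; sd: obs. error."""
        lik = norm.pdf(observed, loc=np.asarray(predictions), scale=sd)
        post = np.asarray(weights) * lik
        return post / post.sum()            # renormalize to sum to 1

    # e.g. three candidate models, equal prior weights, one survey year
    w = np.ones(3) / 3
    w = update_weights(w, [950.0, 1020.0, 1100.0], observed=1000.0, sd=50.0)
    ```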

  16. Modelling bankruptcy prediction models in Slovak companies

    Directory of Open Access Journals (Sweden)

    Kovacova Maria

    2017-01-01

    Intensive research by academics and practitioners has addressed models for bankruptcy prediction and credit risk management. In spite of numerous studies focusing on forecasting bankruptcy using traditional statistical techniques (e.g. discriminant analysis and logistic regression) and early artificial intelligence models (e.g. artificial neural networks), there is a trend towards machine learning models (support vector machines, bagging, boosting, and random forest) to predict bankruptcy one year prior to the event. Comparing the performance of these unconventional approaches with results obtained by discriminant analysis, logistic regression, and neural networks, it has been found that bagging, boosting, and random forest models outperform the other techniques, and that prediction accuracy in the testing sample improves when additional variables are included. On the other side, the prediction accuracy of old and well-known bankruptcy prediction models is quite high. Therefore, we aim to analyse these older models on a dataset of Slovak companies to validate their prediction ability in specific conditions. Furthermore, these models will be modelled according to new trends by calculating the influence of the elimination of selected variables on their overall prediction ability.

  17. CT Measured Psoas Density Predicts Outcomes After Enterocutaneous Fistula Repair

    Science.gov (United States)

    Lo, Wilson D.; Evans, David C.; Yoo, Taehwan

    2018-01-01

    Background Low muscle mass and quality are associated with poor surgical outcomes. We evaluated CT-measured psoas muscle density as a marker of muscle quality and physiologic reserve, and hypothesized that it would predict outcomes after enterocutaneous fistula (ECF) repair. Methods We conducted a retrospective cohort study of patients 18-90 years old with ECF failing non-operative management and requiring elective operative repair at Ohio State University from 2005-2016 who received a pre-operative abdomen/pelvis CT with intravenous contrast within 3 months of their operation. The psoas Hounsfield unit average calculation (HUAC) was measured at the L3 level. One-year leak rate; 90-day, 1-year, and 3-year mortality; complication risk; length of stay; dependent discharge; and 30-day readmission were compared to HUAC. Results 100 patients met inclusion criteria. Patients were stratified into interquartile (IQR) ranges based on HUAC. The lowest HUAC IQR was our low muscle quality (LMQ) cutoff, and was associated with 1-year leak (OR 3.50, p < 0.01), 1-year (OR 2.95, p < 0.04) and 3-year mortality (OR 3.76, p < 0.01), complication risk (OR 14.61, p < 0.01), and dependent discharge (OR 4.07, p < 0.01) compared to non-LMQ patients. Conclusions Psoas muscle density is a significant predictor of poor outcomes in ECF repair. This readily available measure of physiologic reserve can identify patients with ECF on pre-operative evaluation who have significantly increased risk and may benefit from additional interventions and recovery time to mitigate risk before operative repair. PMID:29505144

  18. Melanoma Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing melanoma cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  19. Predictive models of moth development

    Science.gov (United States)

    Degree-day models link ambient temperature to insect life-stages, making such models valuable tools in integrated pest management. These models increase management efficacy by predicting pest phenology. In Wisconsin, the top insect pest of cranberry production is the cranberry fruitworm, Acrobasis v...
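
    A minimal sketch of the simple averaging degree-day calculation such models are built on; the base temperature and any emergence threshold would be species-specific values not given in the abstract:

    ```python
    def daily_degree_days(t_max, t_min, t_base):
        """Heat units accumulated in one day above the developmental threshold."""
        return max((t_max + t_min) / 2.0 - t_base, 0.0)

    def cumulative_degree_days(daily_max, daily_min, t_base):
        """Season-long accumulation used to predict pest phenology stages."""
        return sum(daily_degree_days(hi, lo, t_base)
                   for hi, lo in zip(daily_max, daily_min))

    # e.g. flag the day a (hypothetical) emergence threshold is crossed
    ```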

  20. Predictive Models and Computational Embryology

    Science.gov (United States)

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  1. Predictions models with neural nets

    Directory of Open Access Journals (Sweden)

    Vladimír Konečný

    2008-01-01

    The contribution focuses on predicting basic trends of economic indicators using neural networks. The problems addressed include the choice of a suitable model and the corresponding neural network configuration, the choice of the neurons' computational functions, and the training method. The contribution contains two basic models that use multilayer neural network structures, and a way of determining their configuration. A simple rule is postulated for the training period of the neural network in order to obtain the most credible prediction. Experiments are carried out on real data describing the evolution of the Kč/Euro exchange rate. The main reason for choosing this time series is its availability over a sufficiently long period. In the experiments, both basic prediction models are verified with the most frequently used neuron functions. The prediction results achieved are presented in both numerical and graphical form.

  2. Density functional theory and multiscale materials modeling

    Indian Academy of Sciences (India)

    In the macroscopic length scale, however, matter is usually treated as a continuous medium and a description using local mass density, energy density and other related density functions has been found to be quite appropriate. A unique single unified theoretical framework that emerges through the density concept at these ...

  3. Modelling interactions of toxicants and density dependence in wildlife populations

    Science.gov (United States)

    Schipper, Aafke M.; Hendriks, Harrie W.M.; Kauffman, Matthew J.; Hendriks, A. Jan; Huijbregts, Mark A.J.

    2013-01-01

    1. A major challenge in the conservation of threatened and endangered species is to predict population decline and design appropriate recovery measures. However, anthropogenic impacts on wildlife populations are notoriously difficult to predict due to potentially nonlinear responses and interactions with natural ecological processes like density dependence. 2. Here, we incorporated both density dependence and anthropogenic stressors in a stage-based matrix population model and parameterized it for a density-dependent population of peregrine falcons Falco peregrinus exposed to two anthropogenic toxicants [dichlorodiphenyldichloroethylene (DDE) and polybrominated diphenyl ethers (PBDEs)]. Log-logistic exposure–response relationships were used to translate toxicant concentrations in peregrine falcon eggs to effects on fecundity. Density dependence was modelled as the probability of a nonbreeding bird acquiring a breeding territory as a function of the current number of breeders. 3. The equilibrium size of the population, as represented by the number of breeders, responded nonlinearly to increasing toxicant concentrations, showing a gradual decrease followed by a relatively steep decline. Initially, toxicant-induced reductions in population size were mitigated by an alleviation of the density limitation, that is, an increasing probability of territory acquisition. Once population density was no longer limiting, the toxicant impacts were no longer buffered by an increasing proportion of nonbreeders shifting to the breeding stage, resulting in a strong decrease in the equilibrium number of breeders. 4. Median critical exposure concentrations, that is, median toxicant concentrations in eggs corresponding with an equilibrium population size of zero, were 33 and 46 μg g−1 fresh weight for DDE and PBDEs, respectively. 5. Synthesis and applications. Our modelling results showed that particular life stages of a density-limited population may be relatively insensitive to

  4. Assessment of two mammographic density related features in predicting near-term breast cancer risk

    Science.gov (United States)

    Zheng, Bin; Sumkin, Jules H.; Zuley, Margarita L.; Wang, Xingwei; Klym, Amy H.; Gur, David

    2012-02-01

    In order to establish a personalized breast cancer screening program, it is important to develop risk models that have high discriminatory power in predicting the likelihood of a woman developing an imaging-detectable breast cancer in the near term. Using woman's age, BIRADS rating, and computed mammographic density related features, we compared classification performance in estimating the likelihood of detecting cancer during the subsequent examination using areas under the ROC curves (AUC). The AUCs were 0.63+/-0.03, 0.54+/-0.04, 0.57+/-0.03, and 0.68+/-0.03 when using woman's age, BIRADS rating, computed mean density, and difference in computed bilateral mammographic density, respectively. Performance increased to 0.62+/-0.03 and 0.72+/-0.03 when we fused mean and difference in density with woman's age. The results suggest that, in this study, bilateral mammographic tissue density is a significantly stronger (p<0.01) risk indicator than both woman's age and mean breast density.
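
    A rough sketch of this kind of AUC comparison with scikit-learn; the column layout is hypothetical, and the fusion step uses logistic regression as one plausible choice, which the abstract does not specify:

    ```python
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    # X columns (hypothetical): age, mean_density, bilateral_density_difference
    def feature_aucs(X, y):
        """AUC of each single feature used directly as a risk score."""
        return [roc_auc_score(y, X[:, j]) for j in range(X.shape[1])]

    def fused_auc(X, y, cols):
        """AUC after fusing selected features with a simple classifier."""
        p = LogisticRegression().fit(X[:, cols], y).predict_proba(X[:, cols])[:, 1]
        return roc_auc_score(y, p)  # in-sample; use cross-validation in practice
    ```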

  5. Predicting Ligand Binding Sites on Protein Surfaces by 3-Dimensional Probability Density Distributions of Interacting Atoms

    Science.gov (United States)

    Jian, Jhih-Wei; Elumalai, Pavadai; Pitti, Thejkiran; Wu, Chih Yuan; Tsai, Keng-Chang; Chang, Jeng-Yih; Peng, Hung-Pin; Yang, An-Suei

    2016-01-01

    Predicting ligand binding sites (LBSs) on protein structures, which are obtained either from experimental or computational methods, is a useful first step in functional annotation or structure-based drug design for the protein structures. In this work, the structure-based machine learning algorithm ISMBLab-LIG was developed to predict LBSs on protein surfaces with input attributes derived from the three-dimensional probability density maps of interacting atoms, which were reconstructed on the query protein surfaces and were relatively insensitive to local conformational variations of the tentative ligand binding sites. The prediction accuracy of the ISMBLab-LIG predictors is comparable to that of the best LBS predictors benchmarked on several well-established testing datasets. More importantly, the ISMBLab-LIG algorithm has substantial tolerance to the prediction uncertainties of computationally derived protein structure models. As such, the method is particularly useful for predicting LBSs not only on experimental protein structures without known LBS templates in the database but also on computationally predicted model protein structures with structural uncertainties in the tentative ligand binding sites. PMID:27513851

  6. What do saliency models predict?

    Science.gov (United States)

    Koehler, Kathryn; Guo, Fei; Zhang, Sheng; Eckstein, Miguel P.

    2014-01-01

    Saliency models have been frequently used to predict eye movements made during image viewing without a specified task (free viewing). Use of a single image set to systematically compare free viewing to other tasks has never been performed. We investigated the effect of task differences on the ability of three models of saliency to predict the performance of humans viewing a novel database of 800 natural images. We introduced a novel task where 100 observers made explicit perceptual judgments about the most salient image region. Other groups of observers performed a free viewing task, saliency search task, or cued object search task. Behavior on the popular free viewing task was not best predicted by standard saliency models. Instead, the models most accurately predicted the explicit saliency selections and eye movements made while performing saliency judgments. Observers' fixations varied similarly across images for the saliency and free viewing tasks, suggesting that these two tasks are related. The variability of observers' eye movements was modulated by the task (lowest for the object search task and greatest for the free viewing and saliency search tasks) as well as the clutter content of the images. Eye movement variability in saliency search and free viewing might be also limited by inherent variation of what observers consider salient. Our results contribute to understanding the tasks and behavioral measures for which saliency models are best suited as predictors of human behavior, the relationship across various perceptual tasks, and the factors contributing to observer variability in fixational eye movements. PMID:24618107

  7. Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.

    Science.gov (United States)

    Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F

    2013-04-01

    In biomedical research, it is often of interest to characterize biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference making as few assumptions about restrictive parametric models as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models by one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.

  8. Ultrasonic vibration-assisted (UV-A) pelleting of wheat straw: a constitutive model for pellet density.

    Science.gov (United States)

    Song, Xiaoxu; Zhang, Meng; Pei, Z J; Wang, Donghai

    2015-07-01

    Ultrasonic vibration-assisted (UV-A) pelleting can increase cellulosic biomass density and reduce biomass handling and transportation costs in cellulosic biofuel manufacturing. Effects of input variables on pellet density in UV-A pelleting have been studied experimentally. However, there are no reports on modeling of pellet density in UV-A pelleting. Furthermore, in the literature, most reported density models in other pelleting methods of biomass are empirical. This paper presents a constitutive model to predict pellet density in UV-A pelleting. With the predictive model, relations between input variables (ultrasonic power and pelleting pressure) and pellet density are predicted. The predicted relations are compared with those determined experimentally in the literature. Model predictions agree well with reported experimental results. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Predicting the density and viscosity of biodiesel-diesel blends

    International Nuclear Information System (INIS)

    Aleksovski, Slavcho A.; Miteva, Karmina K.

    2010-01-01

    In this study, biodiesel produced from rapeseed oil was blended with commercially available diesel fuel at ratios of 2, 6, 8, 10, 20, 50 and 75% on a volume basis. In order to analyze key fuel properties such as density and viscosity, the experiments were carried out at various temperatures. The results obtained for the biodiesel blends were compared with the properties of fossil diesel fuel. According to the results, the density of the blends increases proportionally with the biodiesel fraction and decreases with temperature. The proposed empirical equation showed excellent agreement between the measured densities and estimated values. The viscosity of the biodiesel blends increased with the increase of the biodiesel fraction in the fuel blend. The experimental data were correlated as a function of the biodiesel fraction by an empirical second-degree equation. Very good agreement between experimental and estimated values was observed.
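
    A sketch of fitting the second-degree empirical correlation described above with NumPy; the viscosity values below are illustrative placeholders, not the paper's measurements:

    ```python
    import numpy as np

    frac = np.array([0.02, 0.06, 0.08, 0.10, 0.20, 0.50, 0.75])  # biodiesel volume fraction
    visc = np.array([2.71, 2.78, 2.82, 2.86, 3.07, 3.77, 4.37])  # mm^2/s, illustrative

    # Second-degree correlation: nu(x) = a x^2 + b x + c
    a, b, c = np.polyfit(frac, visc, deg=2)
    nu_b20 = np.polyval([a, b, c], 0.20)   # estimate viscosity of a B20 blend
    ```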

  10. Prediction of crack density and electrical resistance changes in indium tin oxide/polymer thin films under tensile loading

    KAUST Repository

    Mora Cordova, Angel

    2014-06-11

    We present unified predictions for the crack onset strain, evolution of crack density, and changes in electrical resistance in indium tin oxide/polymer thin films under tensile loading. We propose a damage mechanics model to quantify and predict such changes as an alternative to fracture mechanics formulations. Our predictions are obtained by assuming that there are no flaws at the onset of loading as opposed to the assumptions of fracture mechanics approaches. We calibrate the crack onset strain and the damage model based on experimental data reported in the literature. We predict crack density and changes in electrical resistance as a function of the damage induced in the films. We implement our model in the commercial finite element software ABAQUS using a user subroutine UMAT. We obtain fair to good agreement with experiments. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  11. Nuclear symmetry energy in density dependent hadronic models

    International Nuclear Information System (INIS)

    Haddad, S.

    2008-12-01

    The density dependence of the symmetry energy and the correlation between parameters of the symmetry energy and the neutron skin thickness in the nucleus 208Pb are investigated in relativistic hadronic models. The dependence of the symmetry energy on density is linear around saturation density. A correlation exists between the neutron skin thickness in 208Pb and the value of the nuclear symmetry energy at saturation density, but not with the slope of the symmetry energy at saturation density. (author)

  12. Neural Networks for Predicting Conditional Probability Densities: Improved Training Scheme Combining EM and RVFL.

    Science.gov (United States)

    Taylor, John G.; Husmeier, Dirk

    1998-01-01

    Predicting conditional probability densities with neural networks requires complex (at least two-hidden-layer) architectures, which normally leads to rather long training times. By adopting the RVFL concept and constraining a subset of the parameters to randomly chosen initial values (such that the EM-algorithm can be applied), the training process can be accelerated by about two orders of magnitude. This allows training of a whole ensemble of networks at the same computational costs as would be required otherwise for training a single model. The simulations performed suggest that in this way a significant improvement of the generalization performance can be achieved. Copyright 1997 Elsevier Science Ltd.
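
    A numpy sketch of the RVFL idea mentioned above: the hidden-layer weights are fixed at random values, so only the output layer is trained, by a single linear least-squares solve. This is our illustration of why training is fast; it omits the paper's EM scheme, ensemble, and conditional-density output, and the data are synthetic.

    ```python
    # RVFL sketch: random, untrained hidden features plus direct input-output
    # links; "training" is one least-squares solve for the output weights.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(200, 1))
    y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(200)

    n_hidden = 50
    W = rng.standard_normal((X.shape[1], n_hidden))  # random, never updated
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                           # random hidden features
    H = np.hstack([H, X])                            # RVFL direct links

    coef, *_ = np.linalg.lstsq(H, y, rcond=None)     # the only fitted parameters
    print("train RMSE:", np.sqrt(np.mean((H @ coef - y) ** 2)))
    ```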

  13. Neighborhood Density and Word Frequency Predict Vocabulary Size in Toddlers

    Science.gov (United States)

    Stokes, Stephanie F.

    2010-01-01

    Purpose: To document the lexical characteristics of neighborhood density (ND) and word frequency (WF) in the lexicons of a large sample of English-speaking toddlers. Method: Parents of 222 British-English-speaking children aged 27([plus or minus]3) months completed a British adaptation of the MacArthur-Bates Communicative Development Inventory:…

  14. On Gravity Prediction Using Density and Seismic Data

    Science.gov (United States)

    1989-07-01


  15. Propulsion Physics Under the Changing Density Field Model

    Science.gov (United States)

    Robertson, Glen A.

    2011-01-01

    To grow as a spacefaring race, future spaceflight systems will require new propulsion physics: specifically, a propulsion physics model that does not require mass ejection yet does not limit the high thrust necessary to accelerate within or beyond our solar system and return within a normal work period or lifetime. In 2004, Khoury and Weltman produced a density-dependent cosmology theory they called Chameleon Cosmology because, by its nature, it is hidden within known physics. This theory represents a scalar field within and about an object, even in the vacuum. These scalar fields can be viewed as vacuum energy fields with definable densities that permeate all matter, with implications for dark matter/energy and universe acceleration properties, implying a new force mechanism for propulsion physics. Using Chameleon Cosmology, the author has developed a new propulsion physics model, called the Changing Density Field (CDF) Model. In this model, changes in the density of these fields are related to the acceleration of matter within an object; these density changes in turn change how the object couples to the surrounding density fields, and thrust is achieved by causing a differential in the coupling to these density fields about an object. Since the model indicates that the density of the density field in an object can be changed by internal mass acceleration, even without exhausting mass, the CDF Model implies a new propellant-less propulsion physics model.

  16. Transverse charge and magnetization densities: Improved chiral predictions down to b = 1 fm

    Energy Technology Data Exchange (ETDEWEB)

    Alarcon, Jose Manuel [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); Hiller Blin, Astrid N. [Johannes Gutenberg Univ., Mainz (Germany); Vicente Vacas, Manuel J. [Spanish National Research Council (CSIC), Valencia (Spain). Univ. of Valencia (UV), Inst. de Fisica Corpuscular; Weiss, Christian [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States)

    2018-03-01

    The transverse charge and magnetization densities provide insight into the nucleon's inner structure. In the periphery, the isovector components are clearly dominant and can be computed in a model-independent way by means of a combination of chiral effective field theory (χEFT) and dispersion analysis. With a novel N/D method, we incorporate the pion electromagnetic form factor data into the χEFT calculation, thus taking into account the pion-rescattering effects and the ρ-meson pole. As a consequence, we are able to reliably compute the densities down to distances b ≈ 1 fm, achieving a dramatic improvement over traditional χEFT calculations while remaining predictive and having controlled uncertainties.

  17. Hybrid neural network for density limit disruption prediction and avoidance on J-TEXT tokamak

    Science.gov (United States)

    Zheng, W.; Hu, F. R.; Zhang, M.; Chen, Z. Y.; Zhao, X. Q.; Wang, X. L.; Shi, P.; Zhang, X. L.; Zhang, X. Q.; Zhou, Y. N.; Wei, Y. N.; Pan, Y.; J-TEXT team

    2018-05-01

    Increasing the plasma density is one of the key methods for achieving an efficient fusion reaction, and high-density operation is one of the hot topics in tokamak plasmas. Density limit disruptions remain an important issue for safe operation, and an effective density limit disruption prediction and avoidance system is the key to avoiding them in long pulse steady state operations. An artificial neural network has been developed for the prediction of density limit disruptions on the J-TEXT tokamak. The network has been improved from a simple multi-layer design to a hybrid two-stage structure. The first stage is a custom network which uses time series diagnostics as inputs to predict plasma density, and the second stage is a three-layer feedforward neural network to predict the probability of density limit disruptions. It is found that the hybrid neural network structure, combined with radiation profile information as an input, can significantly improve the prediction performance, especially the average warning time (T_warn). In particular, T_warn is eight times better than that in previous work (Wang et al 2016 Plasma Phys. Control. Fusion 58 055014) (from 5 ms to 40 ms). The success rate for density limit disruptive shots is above 90%, while the false alarm rate for other shots is below 10%. Based on the density limit disruption prediction system and the real-time density feedback control system, the on-line density limit disruption avoidance system has been implemented on the J-TEXT tokamak.

  18. Compensation in Root Water Uptake Models Combined with Three-Dimensional Root Length Density Distribution

    NARCIS (Netherlands)

    Heinen, M.

    2014-01-01

    A three-dimensional root length density distribution function is introduced that made it possible to compare two empirical uptake models with a more mechanistic uptake model. Adding a compensation component to the more empirical model resulted in predictions of root water uptake distributions

  19. Combining GPS measurements and IRI model predictions

    International Nuclear Information System (INIS)

    Hernandez-Pajares, M.; Juan, J.M.; Sanz, J.; Bilitza, D.

    2002-01-01

    The free electrons distributed in the ionosphere (between one hundred and thousands of km in height) produce a frequency-dependent effect on Global Positioning System (GPS) signals: a delay in the pseudo-range and an advance in the carrier phase. These effects are proportional to the columnar electron density between the satellite and receiver, i.e. the integrated electron density along the ray path. Global ionospheric TEC (total electron content) maps can be obtained with GPS data from a network of ground IGS (international GPS service) reference stations with an accuracy of a few TEC units. The comparison with the TOPEX TEC, mainly measured over the oceans far from the IGS stations, shows a mean bias and standard deviation of about 2 and 5 TECUs respectively. The discrepancies between the STEC predictions and the observed values show an RMS typically below 5 TECUs (which also includes the alignment code noise). The existence of a growing database of 2-hourly global TEC maps with a resolution of 5x2.5 degrees in longitude and latitude can be used to improve the IRI prediction capability of the TEC. When the IRI predictions and the GPS estimations are compared for a three month period around the Solar Maximum, they are in good agreement for middle latitudes. An over-determination of IRI TEC has been found at the extreme latitudes, the IRI predictions being typically two times higher than the GPS estimations. Finally, local fits of the IRI model can be done by tuning the SSN from STEC GPS observations.
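
    The frequency-dependent effect described above follows, to first order, the standard ionospheric group-delay relation delay[m] = 40.3 * TEC / f^2, with TEC in electrons/m^2 and f in Hz; this is a textbook formula rather than anything specific to this record. A quick check at the GPS L1 frequency:

    ```python
    # First-order ionospheric group delay as a function of TEC at GPS L1.
    def iono_delay_m(tec_tecu, freq_hz):
        tec = tec_tecu * 1e16           # 1 TECU = 1e16 electrons/m^2
        return 40.3 * tec / freq_hz**2

    L1 = 1575.42e6  # GPS L1 carrier frequency, Hz
    for tec in (5, 20, 50):
        print(f"{tec:3d} TECU -> {iono_delay_m(tec, L1):.2f} m group delay at L1")
    ```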

  20. Predicting and Modeling RNA Architecture

    Science.gov (United States)

    Westhof, Eric; Masquida, Benoît; Jossinet, Fabrice

    2011-01-01

    SUMMARY A general approach for modeling the architecture of large and structured RNA molecules is described. The method exploits the modularity and the hierarchical folding of RNA architecture that is viewed as the assembly of preformed double-stranded helices defined by Watson-Crick base pairs and RNA modules maintained by non-Watson-Crick base pairs. Despite the extensive molecular neutrality observed in RNA structures, specificity in RNA folding is achieved through global constraints like lengths of helices, coaxiality of helical stacks, and structures adopted at the junctions of helices. The Assemble integrated suite of computer tools allows for sequence and structure analysis as well as interactive modeling by homology or ab initio assembly, with possibilities for fitting within electron density maps. The key local role of non-Watson-Crick pairs guides RNA architecture formation and offers metrics for assessing the accuracy of three-dimensional models in a more useful way than the usual root mean square deviation (RMSD) values. PMID:20504963

  1. Neighborhood density and word frequency predict vocabulary size in toddlers.

    Science.gov (United States)

    Stokes, Stephanie F

    2010-06-01

    To document the lexical characteristics of neighborhood density (ND) and word frequency (WF) in the lexicons of a large sample of English-speaking toddlers. Parents of 222 British-English-speaking children aged 27(+/-3) months completed a British adaptation of the MacArthur-Bates Communicative Development Inventory: Words and Sentences (MCDI; Klee & Harrison, 2001). Child words were coded for ND and WF, and the relationships among vocabulary, ND, and WF were examined. A cut-point of 1 SD below the mean on the MCDI classified children into one of two groups: low or high vocabulary size. Group differences on ND and WF were examined using nonparametric statistics. In a hierarchical regression, ND and WF accounted for 47% and 14% of unique variance in MCDI scores, respectively. Low-vocabulary children scored significantly higher on ND and significantly lower on WF than did high-vocabulary children, but there was more variability in ND and WF for children at the lowest points of the vocabulary continuum. Children at the lowest points of a continuum of vocabulary size may be extracting statistical properties of the input language in a manner quite different from their more able age peers.

  2. Kernel density estimation-based real-time prediction for respiratory motion

    International Nuclear Information System (INIS)

    Ruan, Dan

    2010-01-01

    Effective delivery of adaptive radiotherapy requires locating the target with high precision in real time. System latency caused by data acquisition, streaming, processing and delivery control necessitates prediction. Prediction is particularly challenging for highly mobile targets such as thoracic and abdominal tumors undergoing respiration-induced motion. The complexity of the respiratory motion makes it difficult to build and justify explicit models. In this study, we honor the intrinsic uncertainties in respiratory motion and propose a statistical treatment of the prediction problem. Instead of asking for a deterministic covariate-response map and a unique estimate value for future target position, we aim to obtain a distribution of the future target position (response variable) conditioned on the observed historical sample values (covariate variable). The key idea is to estimate the joint probability density function (pdf) of the covariate and response variables using an efficient kernel density estimation method. Then, the problem of identifying the distribution of the future target position reduces to identifying the section in the joint pdf based on the observed covariate. Subsequently, estimators are derived based on this estimated conditional distribution. This probabilistic perspective has some distinctive advantages over existing deterministic schemes: (1) it is compatible with potentially inconsistent training samples, i.e., when close covariate variables correspond to dramatically different response values; (2) it is not restricted by any prior structural assumption on the map between the covariate and the response; (3) the two-stage setup allows much freedom in choosing statistical estimates and provides a full nonparametric description of the uncertainty for the resulting estimate. We evaluated the prediction performance on ten patient RPM traces, using the root mean squared difference between the prediction and the observed value normalized by the
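
    A minimal sketch of the kernel-density idea described above, on synthetic data (ours, not the paper's implementation): estimate the joint pdf of a covariate and a response with a Gaussian KDE, then obtain a conditional estimate by normalizing the pdf slice at the observed covariate value.

    ```python
    # KDE-based conditional prediction: joint pdf -> conditional slice -> mean.
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(1)
    x = rng.standard_normal(500)                   # covariate (e.g., recent positions)
    y = 0.8 * x + 0.3 * rng.standard_normal(500)   # response (future position)

    kde = gaussian_kde(np.vstack([x, y]))          # joint pdf estimate

    def conditional_mean(x_obs, grid=np.linspace(-4.0, 4.0, 201)):
        """E[y | x = x_obs] from the KDE joint, by normalizing a pdf slice."""
        pts = np.vstack([np.full_like(grid, x_obs), grid])
        slice_pdf = kde(pts)
        slice_pdf /= np.trapz(slice_pdf, grid)     # normalize the conditional
        return np.trapz(grid * slice_pdf, grid)

    print(conditional_mean(1.0))  # should land near 0.8 for this synthetic map
    ```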

  3. Predictive mapping for tree sizes and densities in southeast Alaska.

    Science.gov (United States)

    John P. Caouette; Eugene J. DeGayner

    2005-01-01

    The Forest Service has relied on a single forest measure, timber volume, to meet many management and planning information needs in southeast Alaska. This economic-based categorization of forest types tends to mask critical information relevant to other contemporary forest-management issues, such as modeling forest structure, ecosystem diversity, or wildlife habitat. We...

  4. Toxicity prediction of ionic liquids based on Daphnia magna by using density functional theory

    Science.gov (United States)

    Nu’aim, M. N.; Bustam, M. A.

    2018-04-01

    Density functional theory (DFT) can be used to predict the toxicity of ionic liquids. DFT provides a substantial computational tool for determining the quantum state of atoms, molecules and solids, and underpins molecular dynamics simulation methods. The prediction is done by using structure-based quantum chemical reactivity descriptors. The ionic liquids and their Log[EC50] data were taken from the literature, specifically from Ismail Hossain's thesis entitled "Synthesis, Characterization and Quantitative Structure Toxicity Relationship of Imidazolium, Pyridinium and Ammonium Based Ionic Liquids". Each cation and anion of the ionic liquids was geometry-optimized, and the calculations produced the energies of the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO). From the HOMO and LUMO values, the remaining toxicity descriptors were obtained according to their formulas; the descriptors involved are the electrophilicity index, HOMO, LUMO, energy gap, chemical potential, hardness and electronegativity. The interrelations between the descriptors were determined using multiple linear regression (MLR), all descriptors were analyzed, and the significant ones were selected to develop the best model equation for toxicity prediction of ionic liquids. The model equation was validated against the Log[EC50] data from the literature and then finalized. Nearly 108 ionic liquids can be predicted with this model equation.
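
    The descriptor step can be made concrete with the standard conceptual-DFT formulas that derive these quantities from the HOMO/LUMO energies, followed by an ordinary least-squares fit standing in for the MLR step. The orbital energies and Log[EC50] values below are hypothetical placeholders, not data from the cited thesis.

    ```python
    # Conceptual-DFT reactivity descriptors from HOMO/LUMO energies, plus a
    # toy MLR fit of Log[EC50] on two of them (all inputs are hypothetical).
    import numpy as np

    def reactivity_descriptors(e_homo, e_lumo):
        gap = e_lumo - e_homo                 # energy gap
        mu = (e_homo + e_lumo) / 2.0          # chemical potential
        eta = gap / 2.0                       # hardness
        chi = -mu                             # electronegativity
        omega = mu**2 / (2.0 * eta)           # electrophilicity index
        return gap, mu, eta, chi, omega

    orbitals = [(-7.1, -1.2), (-6.8, -0.9), (-7.5, -1.6), (-6.5, -0.7)]  # eV
    log_ec50 = np.array([2.1, 2.6, 1.7, 3.0])

    desc = [reactivity_descriptors(h, l) for h, l in orbitals]
    X = np.array([[1.0, d[0], d[4]] for d in desc])   # intercept, gap, omega
    coef, *_ = np.linalg.lstsq(X, log_ec50, rcond=None)
    print("toy MLR coefficients (intercept, gap, omega):", np.round(coef, 3))
    ```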

  5. Predictions of new ABO3 perovskite compounds by combining machine learning and density functional theory

    Science.gov (United States)

    Balachandran, Prasanna V.; Emery, Antoine A.; Gubernatis, James E.; Lookman, Turab; Wolverton, Chris; Zunger, Alex

    2018-04-01

    We apply machine learning (ML) methods to a database of 390 experimentally reported ABO3 compounds to construct two statistical models that predict possible new perovskite materials and possible new cubic perovskites. The first ML model classified the 390 compounds into 254 perovskites and 136 that are not perovskites with a 90% average cross-validation (CV) accuracy; the second ML model further classified the perovskites into 22 known cubic perovskites and 232 known noncubic perovskites with a 94% average CV accuracy. We find that the most effective chemical descriptors affecting our classification include largely geometric constructs such as the A and B Shannon ionic radii, the tolerance and octahedral factors, the A-O and B-O bond length, and the A and B Villars' Mendeleev numbers. We then construct an additional list of 625 ABO3 compounds assembled from charge conserving combinations of A and B atoms absent from our list of known compounds. Then, using the two ML models constructed on the known compounds, we predict that 235 of the 625 exist in a perovskite structure with a confidence greater than 50% and among them that 20 exist in the cubic structure (albeit, the latter with only ~50% confidence). We find that the new perovskites are most likely to occur when the A and B atoms are a lanthanide or actinide, when the A atom is an alkali, alkali earth, or late transition metal atom, or when the B atom is a p-block atom. We also compare the ML findings with the density functional theory calculations and convex hull analyses in the Open Quantum Materials Database (OQMD), which predicts the T = 0 K ground-state stability of all the ABO3 compounds. We find that OQMD predicts 186 of 254 of the perovskites in the experimental database to be thermodynamically stable within 100 meV/atom of the convex hull and predicts 87 of the 235 ML-predicted perovskite compounds to be thermodynamically stable within 100 meV/atom of the convex hull, including 6 of these to
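
    Two of the geometric descriptors named above have standard closed forms: the Goldschmidt tolerance factor t = (r_A + r_O) / (sqrt(2) * (r_B + r_O)) and the octahedral factor r_B / r_O. A small check using approximate Shannon radii for SrTiO3 (values quoted from memory, so treat them as illustrative):

    ```python
    # Tolerance and octahedral factors from Shannon ionic radii (angstroms).
    import math

    def tolerance_factor(r_a, r_b, r_o=1.40):
        return (r_a + r_o) / (math.sqrt(2) * (r_b + r_o))

    def octahedral_factor(r_b, r_o=1.40):
        return r_b / r_o

    r_sr, r_ti = 1.44, 0.605  # approximate Sr(XII) and Ti(VI) radii
    print(f"SrTiO3: t = {tolerance_factor(r_sr, r_ti):.3f}, "
          f"octahedral = {octahedral_factor(r_ti):.3f}")  # t near 1 -> cubic
    ```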

  6. Densities and isothermal compressibilities of ionic liquids - Modelling and application

    DEFF Research Database (Denmark)

    Abildskov, Jens; Ellegaard, Martin Dela; O’Connell, J.P.

    2010-01-01

    Two corresponding-states forms have been developed for direct correlation function integrals in liquids to represent pressure effects on the volume of ionic liquids over wide ranges of temperature and pressure. The correlations can be analytically integrated from a chosen reference density to provide a full equation of state for ionic liquids over reduced densities from 1.5 to more than 3.6. One approach is empirical with 3 parameters, the other is a 2-parameter theoretical form which is directly connected to a method for predicting gas solubilities in ionic liquids. Parameters for both … to an entirely predictive method for ambient pressure densities and densities of compressed ionic liquids. Extensive comparisons are made with other techniques.

  7. Modelling CO2-Brine Interfacial Tension using Density Gradient Theory

    KAUST Repository

    Ruslan, Mohd Fuad Anwari Che

    2018-03-01

    Knowledge of carbon dioxide (CO2)-brine interfacial tension (IFT) is important for the petroleum industry and for Carbon Capture and Storage (CCS) strategies. In the petroleum industry, CO2-brine IFT is especially important for CO2-based enhanced oil recovery, as it affects phase behavior and fluid transport in porous media. CCS, which involves storing CO2 in geological storage sites, also requires understanding of CO2-brine IFT, as this parameter affects the quantity of CO2 that can be securely stored in the storage site. Several methods have been used to compute CO2-brine interfacial tension; one of them is the Density Gradient Theory (DGT) approach, in which IFT is computed from the component density distribution across the interface. However, the current model is applicable only to solutions of low to medium ionic strength. This limitation arises because the model considers only the increase of IFT due to changes in bulk phase properties and does not account for the ion distribution at the interface. In this study, a new modelling strategy to compute CO2-brine IFT based on DGT is proposed. In the proposed model, the ion distribution across the interface is accounted for by separating the interface into two sections, with the saddle point of the tangent plane distance defined as the boundary between them. Electrolyte is assumed to be present only in the second section, which is connected to the bulk liquid phase side. Numerical simulations were performed using the proposed approach for single and mixed salt solutions of three salts (NaCl, KCl, and CaCl2), for temperatures of 298 K to 443 K, pressures of 2 MPa to 70 MPa, and ionic strengths of 0.085 mol·kg-1 to 15 mol·kg-1. The simulation results show that the tuned model predicts CO2-brine IFT with good accuracy for all studied cases. Comparison with the current DGT model shows that the proposed approach yields a better match with the experimental data.

  8. Habitat-Based Density Models for Three Cetacean Species off Southern California Illustrate Pronounced Seasonal Differences

    Directory of Open Access Journals (Sweden)

    Elizabeth A. Becker

    2017-05-01

    Managing marine species effectively requires spatially and temporally explicit knowledge of their density and distribution. Habitat-based density models, a type of species distribution model (SDM) that uses habitat covariates to estimate species density and distribution patterns, are increasingly used for marine management and conservation because they provide a tool for assessing potential impacts (e.g., from fishery bycatch, ship strikes, anthropogenic sound) over a variety of spatial and temporal scales. The abundance and distribution of many pelagic species exhibit substantial seasonal variability, highlighting the importance of predicting density specific to the season of interest. This is particularly true in dynamic regions like the California Current, where significant seasonal shifts in cetacean distribution have been documented at coarse scales. Finer scale (10 km) habitat-based density models were previously developed for many cetacean species occurring in this region, but most models were limited to summer/fall. The objectives of our study were two-fold: (1) develop spatially-explicit density estimates for winter/spring to support management applications, and (2) compare model-predicted density and distribution patterns to previously developed summer/fall model results in the context of species ecology. We used a well-established Generalized Additive Modeling framework to develop cetacean SDMs based on 20 California Cooperative Oceanic Fisheries Investigations (CalCOFI) shipboard surveys conducted during winter and spring between 2005 and 2015. Models were fit for short-beaked common dolphin (Delphinus delphis delphis), Dall's porpoise (Phocoenoides dalli), and humpback whale (Megaptera novaeangliae). Model performance was evaluated based on a variety of established metrics, including the percentage of explained deviance, ratios of observed to predicted density, and visual inspection of predicted and observed distributions. Final models were

  9. Predicting the relative binding affinity of mineralocorticoid receptor antagonists by density functional methods

    Science.gov (United States)

    Roos, Katarina; Hogner, Anders; Ogg, Derek; Packer, Martin J.; Hansson, Eva; Granberg, Kenneth L.; Evertsson, Emma; Nordqvist, Anneli

    2015-12-01

    In drug discovery, prediction of binding affinity ahead of synthesis to aid compound prioritization is still hampered by the low throughput of the more accurate methods and the lack of general pertinence of one method that fits all systems. Here we show the applicability of a method based on density functional theory, using core fragments and a protein model with only the first-shell residues surrounding the core, to predict the relative binding affinity of a matched series of mineralocorticoid receptor (MR) antagonists. Antagonists of MR are used for treatment of chronic heart failure and hypertension. The marketed MR antagonists spironolactone and eplerenone are also believed to be highly efficacious in treatment of chronic kidney disease in diabetes patients, but are contraindicated due to the increased risk of hyperkalemia. These findings and a significant unmet medical need among patients with chronic kidney disease continue to stimulate efforts in the discovery of new MR antagonists with maintained efficacy but low or no risk of hyperkalemia. Applied to a matched series of MR antagonists, the quantum mechanical method gave an R2 = 0.76 for experimental lipophilic ligand efficiency versus relative predicted binding affinity calculated with the M06-2X functional in gas phase, and an R2 = 0.64 for experimental binding affinity versus relative predicted binding affinity calculated with the M06-2X functional including an implicit solvation model. The quantum mechanical approach using core fragments was compared to free energy perturbation calculations using the full-sized compound structures.

  10. Genomic prediction of complex human traits: relatedness, trait architecture and predictive meta-models

    Science.gov (United States)

    Spiliopoulou, Athina; Nagy, Reka; Bermingham, Mairead L.; Huffman, Jennifer E.; Hayward, Caroline; Vitart, Veronique; Rudan, Igor; Campbell, Harry; Wright, Alan F.; Wilson, James F.; Pong-Wong, Ricardo; Agakov, Felix; Navarro, Pau; Haley, Chris S.

    2015-01-01

    We explore the prediction of individuals' phenotypes for complex traits using genomic data. We compare several widely used prediction models, including Ridge Regression, LASSO and Elastic Nets estimated from cohort data, and polygenic risk scores constructed using published summary statistics from genome-wide association meta-analyses (GWAMA). We evaluate the interplay between relatedness, trait architecture and optimal marker density, by predicting height, body mass index (BMI) and high-density lipoprotein level (HDL) in two data cohorts, originating from Croatia and Scotland. We empirically demonstrate that dense models are better when all genetic effects are small (height and BMI) and target individuals are related to the training samples, while sparse models predict better in unrelated individuals and when some effects have moderate size (HDL). For HDL sparse models achieved good across-cohort prediction, performing similarly to the GWAMA risk score and to models trained within the same cohort, which indicates that, for predicting traits with moderately sized effects, large sample sizes and familial structure become less important, though still potentially useful. Finally, we propose a novel ensemble of whole-genome predictors with GWAMA risk scores and demonstrate that the resulting meta-model achieves higher prediction accuracy than either model on its own. We conclude that although current genomic predictors are not accurate enough for diagnostic purposes, performance can be improved without requiring access to large-scale individual-level data. Our methodologically simple meta-model is a means of performing predictive meta-analysis for optimizing genomic predictions and can be easily extended to incorporate multiple population-level summary statistics or other domain knowledge. PMID:25918167
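
    The ensemble idea can be sketched on simulated data: a whole-genome ridge predictor and a polygenic risk score built from noisy "published" weights, combined through a simple linear meta-model. Nothing below reproduces the study's cohorts, tuning, or cross-validation scheme.

    ```python
    # Toy predictive meta-model: stack a whole-genome ridge predictor with a
    # polygenic risk score via linear regression (simulated genotypes).
    import numpy as np
    from sklearn.linear_model import Ridge, LinearRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(2)
    G = rng.binomial(2, 0.3, size=(600, 1000)).astype(float)   # 0/1/2 genotypes
    beta = 0.05 * rng.standard_normal(1000)                    # true effects
    pheno = G @ beta + rng.standard_normal(600)

    G_tr, G_te, y_tr, y_te = train_test_split(G, pheno, random_state=0)

    wgp = Ridge(alpha=100.0).fit(G_tr, y_tr)                   # whole-genome predictor
    gwama_w = beta + 0.05 * rng.standard_normal(1000)          # noisy "published" weights
    meta_tr = np.column_stack([wgp.predict(G_tr), G_tr @ gwama_w])
    meta_te = np.column_stack([wgp.predict(G_te), G_te @ gwama_w])

    meta = LinearRegression().fit(meta_tr, y_tr)               # the meta-model
    print("meta-model test R^2:", round(meta.score(meta_te, y_te), 3))
    ```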

  11. A generalized model for estimating the energy density of invertebrates

    Science.gov (United States)

    James, Daniel A.; Csargo, Isak J.; Von Eschen, Aaron; Thul, Megan D.; Baker, James M.; Hayer, Cari-Ann; Howell, Jessica; Krause, Jacob; Letvin, Alex; Chipps, Steven R.

    2012-01-01

    Invertebrate energy density (ED) values are traditionally measured using bomb calorimetry. However, many researchers rely on a few published literature sources to obtain ED values because of time and sampling constraints on measuring ED with bomb calorimetry. Literature values often do not account for spatial or temporal variability associated with invertebrate ED, so these values can be unreliable for use in models and other ecological applications. We evaluated the generality of the relationship between invertebrate ED and the proportion of dry-to-wet mass (pDM). We then developed and tested a regression model to predict ED from pDM based on a taxonomically, spatially, and temporally diverse sample of invertebrates representing 28 orders in aquatic (freshwater, estuarine, and marine) and terrestrial (temperate and arid) habitats from 4 continents and 2 oceans. Samples included invertebrates collected in all seasons over the last 19 y. Evaluation of these data revealed a significant relationship between ED and pDM (r2 = 0.96), and predicting ED from pDM offers considerable cost savings compared to traditional bomb calorimetry approaches. This model should prove useful for a wide range of ecological studies because it is unaffected by taxonomic, seasonal, or spatial variability.
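
    The regression described above reduces to a few lines: fit ED as a linear function of pDM and report r^2. The paired values below are fabricated for illustration; the paper's fitted coefficients differ.

    ```python
    # Linear prediction of energy density (ED) from dry-to-wet mass (pDM).
    import numpy as np

    pdm = np.array([0.10, 0.15, 0.20, 0.25, 0.30])   # proportion dry-to-wet mass
    ed = np.array([2.8, 4.1, 5.3, 6.8, 8.0])         # hypothetical kJ/g wet mass

    slope, intercept = np.polyfit(pdm, ed, 1)
    r2 = np.corrcoef(pdm, ed)[0, 1] ** 2
    print(f"ED ~ {slope:.2f} * pDM + {intercept:.2f}  (r^2 = {r2:.3f})")
    ```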

  12. Fracture Risk Prediction Using Phalangeal Bone Mineral Density or FRAX(®)?

    DEFF Research Database (Denmark)

    Friis-Holmberg, Teresa; Rubin, Katrine Hass; Brixen, Kim

    2014-01-01

    In this prospective study, we investigated the ability of the Fracture Risk Assessment Tool (FRAX), phalangeal bone mineral density (BMD), and age alone to predict fractures using data from a Danish cohort study, the Danish Health Examination Survey 2007-2008, including men (n = 5206) and women (n = 7552). … variables performed overall best in the prediction of major osteoporotic fractures. In predicting hip fractures, there was a tendency for the T-score to perform worse than the other methods.

  13. NOx, Soot, and Fuel Consumption Predictions under Transient Operating Cycle for Common Rail High Power Density Diesel Engines

    Directory of Open Access Journals (Sweden)

    N. H. Walke

    2016-01-01

    Diesel engines presently face the challenge of controlling NOx and soot emissions on transient cycles, to meet stricter emission norms and to control emissions during field operations. Development of a simulation tool for NOx and soot emissions prediction on transient operating cycles has become an important objective, since it can significantly reduce the experimentation time and cost required for tuning these emissions. Hence, in this work, a 0D comprehensive predictive model has been formulated by selecting appropriate combustion and emissions models and coupling them to engine cycle models. The selected combustion and emissions models were further modified to improve their prediction accuracy over the full operating zone. Responses of the combustion and emissions models have been validated for load and start-of-injection changes. Model-predicted transient fuel consumption, air handling system parameters, and NOx and soot emissions are in good agreement with measured data on a turbocharged high power density common rail engine for the nonroad transient cycle (NRTC). It can be concluded that 0D models can be used for prediction of transient emissions on modern engines. Extension of the formulated approach to transient emissions prediction for other applications and fuels is also discussed.

  14. Whole-brain grey matter density predicts balance stability irrespective of age and protects older adults from falling.

    Science.gov (United States)

    Boisgontier, Matthieu P; Cheval, Boris; van Ruitenbeek, Peter; Levin, Oron; Renaud, Olivier; Chanal, Julien; Swinnen, Stephan P

    2016-03-01

    Functional and structural imaging studies have demonstrated the involvement of the brain in balance control. Nevertheless, how decisive grey matter density and white matter microstructural organisation are in predicting balance stability, and especially when linked to the effects of ageing, remains unclear. Standing balance was tested on a platform moving at different frequencies and amplitudes in 30 young and 30 older adults, with eyes open and with eyes closed. Centre of pressure variance was used as an indicator of balance instability. The mean density of grey matter and mean white matter microstructural organisation were measured using voxel-based morphometry and diffusion tensor imaging, respectively. Mixed-effects models were built to analyse the extent to which age, grey matter density, and white matter microstructural organisation predicted balance instability. Results showed that both grey matter density and age independently predicted balance instability. These predictions were reinforced when the level of difficulty of the conditions increased. Furthermore, grey matter predicted balance instability beyond age and at least as consistently as age across conditions. In other words, for balance stability, the level of whole-brain grey matter density is at least as decisive as being young or old. Finally, brain grey matter appeared to be protective against falls in older adults as age increased the probability of losing balance in older adults with low, but not moderate or high grey matter density. No such results were observed for white matter microstructural organisation, thereby reinforcing the specificity of our grey matter findings. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Prediction of Reduction Potentials of Copper Proteins with Continuum Electrostatics and Density Functional Theory.

    Science.gov (United States)

    Fowler, Nicholas J; Blanford, Christopher F; Warwicker, Jim; de Visser, Sam P

    2017-11-02

    Blue copper proteins, such as azurin, show dramatic changes in Cu2+/Cu+ reduction potential upon mutation over the full physiological range. Hence, they have important functions in electron transfer and oxidation chemistry and have applications in industrial biotechnology. The details of what determines these reduction potential changes upon mutation are still unclear. Moreover, it has been difficult to model and predict the reduction potential of azurin mutants and currently no unique procedure or workflow pattern exists. Furthermore, high-level computational methods can be accurate but are too time consuming for practical use. In this work, a novel approach for calculating reduction potentials of azurin mutants is shown, based on a combination of continuum electrostatics, density functional theory and empirical hydrophobicity factors. Our method accurately reproduces experimental reduction potential changes of 30 mutants with respect to wildtype within experimental error and highlights the factors contributing to the reduction potential change. Finally, reduction potentials are predicted for a series of 124 new mutants that have not yet been investigated experimentally. Several mutants are identified that are located well over 10 Å from the copper center that change the reduction potential by more than 85 mV. The work shows that secondary coordination sphere mutations mostly lead to long-range electrostatic changes and hence can be modeled accurately with continuum electrostatics. © 2017 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.

  16. MODEL COMPARISON FOR THE DENSITY STRUCTURE ACROSS SOLAR CORONAL WAVEGUIDES

    Energy Technology Data Exchange (ETDEWEB)

    Arregui, I.; Asensio Ramos, A. [Instituto de Astrofísica de Canarias, Vía Láctea s/n, E-38205 La Laguna, Tenerife (Spain); Soler, R., E-mail: iarregui@iac.es [Solar Physics Group, Departament de Física, Universitat de les Illes Balears, E-07122 Palma de Mallorca (Spain)

    2015-10-01

    The spatial variation of physical quantities, such as the mass density, across solar atmospheric waveguides governs the timescales and spatial scales for wave damping and energy dissipation. The direct measurement of the spatial distribution of density, however, is difficult, and indirect seismology inversion methods have been suggested as an alternative. We applied Bayesian inference, model comparison, and model-averaging techniques to the inference of the cross-field density structuring in solar magnetic waveguides using information on periods and damping times for resonantly damped magnetohydrodynamic transverse kink oscillations. Three commonly employed alternative profiles were used to model the variation of the mass density across the waveguide boundary. Parameter inference enabled us to obtain information on physical quantities such as the Alfvén travel time, the density contrast, and the transverse inhomogeneity length scale. The inference results from alternative density models were compared and their differences quantified. Then, the relative plausibility of the considered models was assessed by performing model comparison. Our results indicate that the evidence in favor of any of the three models is minimal, unless the oscillations are strongly damped. In such a circumstance, the application of model-averaging techniques enables the computation of an evidence-weighted inference that takes into account the plausibility of each model in the calculation of a combined inversion for the unknown physical parameters.

  17. Experimental measurements and prediction of liquid densities for n-alkane mixtures

    International Nuclear Information System (INIS)

    Ramos-Estrada, Mariana; Iglesias-Silva, Gustavo A.; Hall, Kenneth R.

    2006-01-01

    We present experimental liquid densities for n-pentane, n-hexane and n-heptane and their binary mixtures from (273.15 to 363.15) K over the entire composition range (for the mixtures) at atmospheric pressure. A vibrating tube densimeter produces the experimental densities. Also, we present a generalized correlation to predict the liquid densities of n-alkanes and their mixtures. We have combined the principle of congruence with the Tait equation to obtain an equation that uses as variables: temperature, pressure and the equivalent carbon number of the mixture. Also, we present a generalized correlation for the atmospheric liquid densities of n-alkanes. The average absolute percentage deviation of this equation from the literature experimental density values is 0.26%. The Tait equation has an average percentage deviation of 0.15% from experimental density measurements
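
    The Tait equation referred to above is commonly written in the volume-explicit form V(P) = V0 * [1 - C * ln((B + P) / (B + P0))], from which density follows as rho(P) = rho0 / (1 - C * ln((B + P) / (B + P0))). A sketch with illustrative B and C values (not the paper's fitted parameters):

    ```python
    # Tait-form density extrapolation from a reference density at pressure p0.
    import math

    def tait_density(rho0, p, p0=0.101325, B=70.0, C=0.2):
        """rho0 in kg/m^3 at reference pressure p0; pressures in MPa."""
        v_ratio = 1.0 - C * math.log((B + p) / (B + p0))
        return rho0 / v_ratio

    rho0 = 626.0  # kg/m^3, roughly n-pentane near 298 K
    for p in (0.101325, 10.0, 50.0):
        print(f"P = {p:8.3f} MPa -> rho = {tait_density(rho0, p):6.1f} kg/m^3")
    ```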

  18. Modelling spatial density using continuous wavelet transforms

    Indian Academy of Sciences (India)

    Reddy, D Sudheer; Reddy, N Gopal; Anilkumar, A K

    Keywords: space debris; wavelets; Mexican hat; Laplace distribution; random search; parameter estimation.

  19. Modelling spatial density using continuous wavelet transforms

    Indian Academy of Sciences (India)


  20. Density contrast indicators in cosmological dust models

    Indian Academy of Sciences (India)

    … contrast, which may or may not be monotonically increasing with time. We also find that monotonicity seems to be related to the initial conditions of the model, which may be of potential interest in connection with debates regarding gravitational entropy and the arrow of time.

  1. Current Density and Continuity in Discretized Models

    Science.gov (United States)

    Boykin, Timothy B.; Luisier, Mathieu; Klimeck, Gerhard

    2010-01-01

    Discrete approaches have long been used in numerical modelling of physical systems in both research and teaching. Discrete versions of the Schrodinger equation employing either one or several basis functions per mesh point are often used by senior undergraduates and beginning graduate students in computational physics projects. In studying…

  2. Evaluation of the Troxler Model 4640 Thin Lift Nuclear Density Gauge. Research report (Interim)

    International Nuclear Information System (INIS)

    Solaimanian, M.; Holmgreen, R.J.; Kennedy, T.W.

    1990-07-01

    The report describes the results of a research study to determine the effectiveness of the Troxler Model 4640 Thin Lift Nuclear Density Gauge. The densities obtained from cores and the nuclear density gauge from seven construction projects were compared. The projects were either newly constructed or under construction when the tests were performed. A linear regression technique was used to investigate how well the core densities could be predicted from nuclear densities. Correlation coefficients were determined to indicate the degree of correlation between the core and nuclear densities. Using a statistical analysis technique, the range of the mean difference between core and nuclear measurements was established for specified confidence levels for each project. Analysis of the data indicated that the accuracy of the gauge is material dependent. While relatively acceptable results were obtained with limestone mixtures, the gauge did not perform satisfactorily with mixtures containing siliceous aggregate
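
    The comparison described above reduces to a simple linear regression of core densities on gauge readings. A sketch with invented placeholder readings (not the study's measurements):

    ```python
    # Regress core densities on thin-lift nuclear gauge readings.
    import numpy as np

    nuclear = np.array([2215.0, 2230.0, 2198.0, 2250.0, 2241.0, 2205.0])  # kg/m^3
    core = np.array([2230.0, 2248.0, 2210.0, 2265.0, 2255.0, 2220.0])     # kg/m^3

    slope, intercept = np.polyfit(nuclear, core, 1)
    r = np.corrcoef(nuclear, core)[0, 1]
    print(f"core ~ {slope:.3f} * nuclear + {intercept:.1f}  (r = {r:.3f})")
    print(f"mean difference: {np.mean(core - nuclear):.1f} kg/m^3")
    ```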

  3. Thermodynamic prediction of glass formation tendency, cluster-in-jellium model for metallic glasses, ab initio tight-binding calculations, and new density functional theory development for systems with strong electron correlation

    Energy Technology Data Exchange (ETDEWEB)

    Yao, Yongxin [Iowa State Univ., Ames, IA (United States)

    2009-01-01

    Solidification of liquid is a very rich and complicated field, although there is always the famous homogeneous nucleation theory in a standard physics or materials science textbook. Depending on the material and processing conditions, a liquid may solidify to a single crystal, a polycrystal with various textures, a quasicrystal, or an amorphous solid or glass (glass is a kind of amorphous solid in general, which has short-range and medium-range order). Traditional oxide glass may easily be formed since the covalent directional bonded network is apt to be disturbed. In other words, the energy landscape of an oxide glass is so complicated that the system needs an extremely long time to explore the whole configuration space. On the other hand, metallic liquids usually crystallize upon cooling because of the metallic bonding nature. However, Klement et al. (1960) reported that an Au-Si liquid underwent an amorphous or "glassy" phase transformation with rapid quenching. In the recent two decades, bulk metallic glasses have also been found in several multicomponent alloys (Inoue et al., 2002). Both thermodynamic factors (e.g., free energy of various competitive phases, interfacial free energy, free energy of local clusters, etc.) and kinetic factors (e.g., long range mass transport, local atomic position rearrangement, etc.) play important roles in the metallic glass formation process. Metallic glass is fundamentally different from nanocrystalline alloys: metallic glasses have to undergo a nucleation process upon heating in order to crystallize, so the short-range and medium-range order of metallic glasses has to be completely different from that of a crystal. Hence a method to calculate the energetics of different local clusters in the undercooled liquid or glass becomes important for setting up a statistical model of metallic glass formation. Scattering techniques like x-ray and neutron scattering have been widely used to study the structures of metallic glasses. Meanwhile, computer simulation

  4. Iowa calibration of MEPDG performance prediction models.

    Science.gov (United States)

    2013-06-01

    This study aims to improve the accuracy of AASHTO Mechanistic-Empirical Pavement Design Guide (MEPDG) pavement performance predictions for Iowa pavement systems through local calibration of MEPDG prediction models. A total of 130 representative p...

  5. Model complexity control for hydrologic prediction

    NARCIS (Netherlands)

    Schoups, G.; Van de Giesen, N.C.; Savenije, H.H.G.

    2008-01-01

    A common concern in hydrologic modeling is overparameterization of complex models given limited and noisy data. This leads to problems of parameter nonuniqueness and equifinality, which may negatively affect prediction uncertainties. A systematic way of controlling model complexity is therefore

  6. Osteoporosis risk prediction for bone mineral density assessment of postmenopausal women using machine learning.

    Science.gov (United States)

    Yoo, Tae Keun; Kim, Sung Kean; Kim, Deok Won; Choi, Joon Yul; Lee, Wan Hyung; Oh, Ein; Park, Eun-Cheol

    2013-11-01

    A number of clinical decision tools for osteoporosis risk assessment have been developed to select postmenopausal women for the measurement of bone mineral density. We developed and validated machine learning models with the aim of more accurately identifying the risk of osteoporosis in postmenopausal women compared to the ability of conventional clinical decision tools. We collected medical records from Korean postmenopausal women based on the Korea National Health and Nutrition Examination Surveys. The training data set was used to construct models based on popular machine learning algorithms such as support vector machines (SVM), random forests, artificial neural networks (ANN), and logistic regression (LR) based on simple surveys. The machine learning models were compared to four conventional clinical decision tools: osteoporosis self-assessment tool (OST), osteoporosis risk assessment instrument (ORAI), simple calculated osteoporosis risk estimation (SCORE), and osteoporosis index of risk (OSIRIS). SVM had significantly better area under the curve (AUC) of the receiver operating characteristic than ANN, LR, OST, ORAI, SCORE, and OSIRIS for the training set. SVM predicted osteoporosis risk with an AUC of 0.827, accuracy of 76.7%, sensitivity of 77.8%, and specificity of 76.0% at total hip, femoral neck, or lumbar spine for the testing set. The significant factors selected by SVM were age, height, weight, body mass index, duration of menopause, duration of breast feeding, estrogen therapy, hyperlipidemia, hypertension, osteoarthritis, and diabetes mellitus. Considering various predictors associated with low bone density, the machine learning methods may be effective tools for identifying postmenopausal women at high risk for osteoporosis.
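
    The model-comparison step can be sketched compactly: train an SVM and a logistic regression on survey-style features and compare ROC AUC on held-out data. The features and outcome below are synthetic stand-ins for the KNHANES records used in the study.

    ```python
    # Compare SVM and logistic regression by ROC AUC on synthetic survey data.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(3)
    n = 800
    X = np.column_stack([
        rng.uniform(50, 85, n),   # age, years
        rng.normal(155, 6, n),    # height, cm
        rng.normal(58, 9, n),     # weight, kg
    ])
    risk = 0.08 * (X[:, 0] - 50) - 0.05 * (X[:, 2] - 58) + rng.logistic(size=n)
    y = (risk > np.median(risk)).astype(int)   # 1 = low bone density (synthetic)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    models = {"SVM": SVC(probability=True), "LR": LogisticRegression(max_iter=1000)}
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
        print(f"{name}: AUC = {auc:.3f}")
    ```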

  7. Unified model of nuclear mass and level density formulas

    International Nuclear Information System (INIS)

    Nakamura, Hisashi

    2001-01-01

    The objective of the present work is to obtain a unified description of nuclear shell, pairing and deformation effects for both ground state masses and level densities, and to find a new set of parameter systematics for both the mass and the level density formulas on the basis of a model for new single-particle state densities. In this model, an analytical expression is adopted for the anisotropic harmonic oscillator spectra, but the shell-pairing correlations are introduced in a new way. (author)

  8. Integrating geophysics and hydrology for reducing the uncertainty of groundwater model predictions and improved prediction performance

    DEFF Research Database (Denmark)

    Christensen, Nikolaj Kruse; Christensen, Steen; Ferre, Ty

    … constructed from geological and hydrological data. However, geophysical data are increasingly used to inform hydrogeologic models because they are collected at lower cost and much higher density than geological and hydrological data. Despite increased use of geophysics, it is still unclear whether the integration of geophysical data in the construction of a groundwater model increases the prediction performance. We suggest that modelers should perform a hydrogeophysical "test-bench" analysis of the likely value of geophysics data for improving groundwater model prediction performance before actually collecting geophysical data. At a minimum, an analysis should be conducted assuming settings that are favorable for the chosen geophysical method. If the analysis suggests that data collected by the geophysical method is unlikely to improve model prediction performance under these favorable settings…

  9. Spatially explicit modeling of lesser prairie-chicken lek density in Texas

    Science.gov (United States)

    Timmer, Jennifer M.; Butler, M.J.; Ballard, Warren; Boal, Clint W.; Whitlaw, Heather A.

    2014-01-01

    As with many other grassland birds, lesser prairie-chickens (Tympanuchus pallidicinctus) have experienced population declines in the Southern Great Plains. Currently they are proposed for federal protection under the Endangered Species Act. In addition to a history of land-uses that have resulted in habitat loss, lesser prairie-chickens now face a new potential disturbance from energy development. We estimated lek density in the occupied lesser prairie-chicken range of Texas, USA, and modeled anthropogenic and vegetative landscape features associated with lek density. We used an aerial line-transect survey method to count lesser prairie-chicken leks in spring 2010 and 2011 and surveyed 208 randomly selected 51.84-km² blocks. We divided each survey block into 12.96-km² quadrats and summarized landscape variables within each quadrat. We then used hierarchical distance-sampling models to examine the relationship between lek density and anthropogenic and vegetative landscape features and predict how lek density may change in response to changes on the landscape, such as an increase in energy development. Our best models indicated lek density was related to percent grassland, region (i.e., the northeast or southwest region of the Texas Panhandle), total percentage of grassland and shrubland, paved road density, and active oil and gas well density. Predicted lek density peaked at 0.39 leks/12.96 km² (SE = 0.09) and 2.05 leks/12.96 km² (SE = 0.56) in the northeast and southwest region of the Texas Panhandle, respectively, which corresponds to approximately 88% and 44% grassland in the northeast and southwest region. Lek density increased with an increase in total percentage of grassland and shrubland and was greatest in areas with lower densities of paved roads and lower densities of active oil and gas wells. We used the 2 most competitive models to predict lek abundance and estimated 236 leks (CV = 0.138, 95% CI = 177-306 leks) for our sampling area. Our results suggest that

  10. Staying Power of Churn Prediction Models

    NARCIS (Netherlands)

    Risselada, Hans; Verhoef, Peter C.; Bijmolt, Tammo H. A.

    In this paper, we study the staying power of various churn prediction models. Staying power is defined as the predictive performance of a model in a number of periods after the estimation period. We examine two methods, logit models and classification trees, both with and without applying a bagging

  11. PVT characterization and viscosity modeling and prediction of crude oils

    DEFF Research Database (Denmark)

    Cisneros, Eduardo Salvador P.; Dalberg, Anders; Stenby, Erling Halfdan

    2004-01-01

    In previous works, the general one-parameter friction theory (f-theory) models have been applied to the accurate viscosity modeling of reservoir fluids. As a base, the f-theory approach requires a compositional characterization procedure for the application of an equation of state (EOS), in most … method based on an accurate description of the fluid mass distribution is presented. The characterization procedure accurately matches the fluid saturation pressure. Additionally, a Peneloux volume translation scheme, capable of accurately reproducing the fluid density above and below the saturation … deliver accurate viscosity predictions. The modeling approach presented in this work can deliver accurate viscosity and density modeling and prediction results over wide ranges of reservoir conditions, including the compositional changes induced by recovery processes such as gas injection.

  12. Social Inclusion Predicts Lower Blood Glucose and Low-Density Lipoproteins in Healthy Adults.

    Science.gov (United States)

    Floyd, Kory; Veksler, Alice E; McEwan, Bree; Hesse, Colin; Boren, Justin P; Dinsmore, Dana R; Pavlich, Corey A

    2017-08-01

    Loneliness has been shown to have direct effects on one's personal well-being. Specifically, a greater feeling of loneliness is associated with negative mental health outcomes, negative health behaviors, and an increased likelihood of premature mortality. Using the neuroendocrine hypothesis, we expected social inclusion to predict decreases in both blood glucose levels and low-density lipoproteins (LDLs) and increases in high-density lipoproteins (HDLs). Fifty-two healthy adults provided self-report data for social inclusion and blood samples for hematological tests. Results indicated that higher social inclusion predicted lower levels of blood glucose and LDL, but had no effect on HDL. Implications for theory and practice are discussed.

  13. Chemical theory and modelling through density across length scales

    International Nuclear Information System (INIS)

    Ghosh, Swapan K.

    2016-01-01

    One of the concepts that has played a major role in the conceptual as well as computational developments covering all the length scales of interest in a number of areas of chemistry, physics, chemical engineering and materials science is the concept of single-particle density. Density functional theory has been a versatile tool for the description of many-particle systems across length scales. Thus, in the microscopic length scale, an electron density based description has played a major role in providing a deeper understanding of chemical binding in atoms, molecules and solids. Density concept has been used in the form of single particle number density in the intermediate mesoscopic length scale to obtain an appropriate picture of the equilibrium and dynamical processes, dealing with a wide class of problems involving interfacial science and soft condensed matter. In the macroscopic length scale, however, matter is usually treated as a continuous medium and a description using local mass density, energy density and other related property density functions has been found to be quite appropriate. The basic ideas underlying the versatile uses of the concept of density in the theory and modelling of materials and phenomena, as visualized across length scales, along with selected illustrative applications to some recent areas of research on hydrogen energy, soft matter, nucleation phenomena, isotope separation, and separation of mixture in condensed phase, will form the subject matter of the talk. (author)

  14. Densities of Pure Ionic Liquids and Mixtures: Modeling and Data Analysis

    DEFF Research Database (Denmark)

    Abildskov, Jens; O’Connell, John P.

    2015-01-01

    Our two-parameter corresponding states model for liquid densities and compressibilities has been extended to more pure ionic liquids and to their mixtures with one or two solvents. A total of 19 new group contributions (5 new cations and 14 new anions) have been obtained for predicting pressure...

  15. Viscosity and Liquid Density of Asymmetric n-Alkane Mixtures: Measurement and Modelling

    DEFF Research Database (Denmark)

    Queimada, António J.; Marrucho, Isabel M.; Coutinho, João A.P.

    2005-01-01

    Viscosity and liquid density measurements were performed, at atmospheric pressure, in pure and mixed n-decane, n-eicosane, n-docosane, and n-tetracosane from 293.15 K (or above the melting point) up to 343.15 K. The viscosity was determined with a rolling ball viscometer and liquid densities with a vibrating U-tube densimeter. Pure component results agreed, on average, with literature values within 0.2% for liquid density and 3% for viscosity. The measured data were used to evaluate the predictive performance of two models: the friction theory coupled with the Peng-Robinson equation of state, and a corresponding states model recently proposed for surface tension, viscosity, vapor pressure, and liquid densities of the series of n-alkanes. Advantages and shortcomings of these models are discussed.

  16. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a r...

  17. Models and tests of optimal density and maximal yield for crop plants.

    Science.gov (United States)

    Deng, Jianming; Ran, Jinzhi; Wang, Zhiqiang; Fan, Zhexuan; Wang, Genxuan; Ji, Mingfei; Liu, Jing; Wang, Yun; Liu, Jianquan; Brown, James H

    2012-09-25

    We introduce a theoretical framework that predicts the optimum planting density and maximal yield for an annual crop plant. Two critical parameters determine the trajectory of plant growth and the optimal density, N(opt), at which canopies of growing plants just come into contact and competition begins: (i) maximal size at maturity, M(max), which differs among varieties due to artificial selection for different usable products; and (ii) intrinsic growth rate, g, which may vary with variety and environmental conditions. The model predicts that (i) when planting density is less than N(opt), all plants of a crop mature at the same maximal size, M(max), and biomass yield per area increases linearly with density; and (ii) when planting density is greater than N(opt), size at maturity and yield decrease with the -4/3 and -1/3 powers of density, respectively. Field data from China show that most annual crops, regardless of variety and life form, exhibit similar scaling relations, with maximal size at maturity, M(max), accounting for most of the variation in optimal density, maximal yield, and energy use per area. Crops provide elegantly simple empirical model systems for studying the basic processes that determine the performance of plants in agricultural and less managed ecosystems.
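    The piecewise scaling relations reported above translate directly into code. The sketch below is a minimal Python illustration of them, not the authors' implementation; the parameter values (an optimal density of 20 plants per square metre, 50 g per plant at maturity) are hypothetical.

```python
import numpy as np

def mass_at_maturity(n, n_opt, m_max):
    """Per-plant mass at maturity vs. planting density n.

    Below n_opt every plant reaches m_max; above it, size at maturity
    scales as (n / n_opt)**(-4/3), per the abstract.
    """
    n = np.asarray(n, dtype=float)
    return np.where(n <= n_opt, m_max, m_max * (n / n_opt) ** (-4.0 / 3.0))

def yield_per_area(n, n_opt, m_max):
    """Biomass yield per area, n * M(n): linear below n_opt, ~n**(-1/3) above."""
    return np.asarray(n, dtype=float) * mass_at_maturity(n, n_opt, m_max)

# Hypothetical parameters: n_opt = 20 plants/m^2, m_max = 50 g.
n = np.array([5.0, 20.0, 80.0])
print(yield_per_area(n, n_opt=20.0, m_max=50.0))  # peaks at n_opt, declines slowly above
```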

  18. Acoustic Velocity and Attenuation in Magnetorheological fluids based on an effective density fluid model

    Directory of Open Access Journals (Sweden)

    Shen Min

    2016-01-01

    Full Text Available Magnetorheological fluids (MRFs) represent a class of smart materials whose rheological properties change in response to a magnetic field, resulting in a drastic change of the acoustic impedance. This paper presents an acoustic propagation model that approximates a fluid-saturated porous medium as a fluid with a bulk modulus and an effective density (EDFM), in order to study acoustic propagation in MRF materials under a magnetic field. The effective density fluid model is derived from Biot's theory, with some minor changes applied so that both the fluid-like and the solid-like states of the MRF material can be modeled. The attenuation and velocity variation of the MRF are calculated numerically. The calculated results show that, for the MRF material, the attenuation and velocity predicted with the effective density fluid model are in close agreement with previous predictions by Biot's theory. We demonstrate that, for acoustic predictions in MRF materials, the effective density fluid model is an accurate alternative to the full Biot theory and is much simpler to implement.

  19. Low bone mineral density in noncholestatic liver cirrhosis: prevalence, severity and prediction

    Directory of Open Access Journals (Sweden)

    Figueiredo Fátima Aparecida Ferreira

    2003-01-01

    Full Text Available BACKGROUND: Metabolic bone disease has long been associated with cholestatic disorders. However, data in noncholestatic cirrhosis are relatively scant. AIMS: To determine the prevalence and severity of low bone mineral density in noncholestatic cirrhosis and to investigate whether age, gender, etiology, severity of underlying liver disease, and/or laboratory tests are predictive of the diagnosis. PATIENTS/METHODS: Between March and September 1998, 89 patients with noncholestatic cirrhosis and 20 healthy controls were enrolled in a cross-sectional study. All subjects underwent standard laboratory tests and bone densitometry at the lumbar spine and femoral neck by dual X-ray absorptiometry. RESULTS: Bone mass was significantly reduced at both sites in patients compared to controls. The prevalence of low bone mineral density in noncholestatic cirrhosis, defined by the World Health Organization criteria, was 78% at the lumbar spine and 71% at the femoral neck. Bone density significantly decreased with age at both sites, especially in patients older than 50 years. Bone density was significantly lower in post-menopausal women than in pre-menopausal women and in men at both sites. There was no significant difference in bone mineral density among noncholestatic etiologies. Lumbar spine bone density significantly decreased with the progression of liver dysfunction. No biochemical variable was significantly associated with low bone mineral density. CONCLUSIONS: Low bone mineral density is highly prevalent in patients with noncholestatic cirrhosis. Older patients, post-menopausal women and patients with severe hepatic dysfunction experienced more advanced bone disease. The laboratory tests routinely determined in patients with liver disease did not reliably predict low bone mineral density.

  20. Calibration of PMIS pavement performance prediction models.

    Science.gov (United States)

    2012-02-01

    Improve the accuracy of TxDOT's existing pavement performance prediction models through calibrating these models using actual field data obtained from the Pavement Management Information System (PMIS). Ensure logical performance superiority patte...

  1. Predictive Model Assessment for Count Data

    National Research Council Canada - National Science Library

    Czado, Claudia; Gneiting, Tilmann; Held, Leonhard

    2007-01-01

    .... In case studies, we critique count regression models for patent data, and assess the predictive performance of Bayesian age-period-cohort models for larynx cancer counts in Germany. Key words: Calibration...

  2. Modeling and Prediction Using Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Juhl, Rune; Møller, Jan Kloppenborg; Jørgensen, John Bagterp

    2016-01-01

    deterministic and can predict the future perfectly. A more realistic approach would be to allow for randomness in the model due to e.g., the model be too simple or errors in input. We describe a modeling and prediction setup which better reflects reality and suggests stochastic differential equations (SDEs......) for modeling and forecasting. It is argued that this gives models and predictions which better reflect reality. The SDE approach also offers a more adequate framework for modeling and a number of efficient tools for model building. A software package (CTSM-R) for SDE-based modeling is briefly described....... that describes the variation between subjects. The ODE setup implies that the variation for a single subject is described by a single parameter (or vector), namely the variance (covariance) of the residuals. Furthermore the prediction of the states is given as the solution to the ODEs and hence assumed...
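    The contrast drawn here between ODE and SDE predictions is easy to see in a toy simulation. The Euler-Maruyama sketch below is a generic illustration, not CTSM-R (which is an R package); the Ornstein-Uhlenbeck drift and all parameter values are assumptions made for the demo.

```python
import numpy as np

def euler_maruyama(x0, theta, mu, sigma, dt, n_steps, rng):
    """Simulate dX = theta*(mu - X) dt + sigma dW with the Euler-Maruyama scheme."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))          # Brownian increment
        x[k + 1] = x[k] + theta * (mu - x[k]) * dt + sigma * dw
    return x

rng = np.random.default_rng(0)
# sigma = 0 recovers the deterministic ODE path that "predicts the future
# perfectly"; sigma > 0 adds the system noise that absorbs model error.
ode_path = euler_maruyama(1.0, 0.5, 0.0, 0.0, 0.01, 1000, rng)
sde_path = euler_maruyama(1.0, 0.5, 0.0, 0.2, 0.01, 1000, rng)
print(ode_path[-1], sde_path[-1])
```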

  3. Classical density functional theory & simulations on a coarse-grained model of aromatic ionic liquids.

    Science.gov (United States)

    Turesson, Martin; Szparaga, Ryan; Ma, Ke; Woodward, Clifford E; Forsman, Jan

    2014-05-14

    A new classical density functional approach is developed to accurately treat a coarse-grained model of room temperature aromatic ionic liquids. Our major innovation is the introduction of charge-charge correlations, which are treated in a simple phenomenological way. We test this theory on a generic coarse-grained model for aromatic RTILs with oligomeric forms for both cations and anions, approximating 1-alkyl-3-methyl imidazoliums and BF₄⁻, respectively. We find that predictions by the new density functional theory for fluid structures at charged surfaces are very accurate, as compared with molecular dynamics simulations, across a range of surface charge densities and lengths of the alkyl chain. Predictions of interactions between charged surfaces are also presented.

  4. Predicting soil particle density from clay and soil organic matter contents

    DEFF Research Database (Denmark)

    Schjønning, Per; McBride, R.A.; Keller, T.

    2017-01-01

    Soil particle density (Dp) is an important soil property for calculating soil porosity expressions. However, many studies assume a constant value, typically 2.65 Mg m−3 for arable, mineral soils. Few models exist for the prediction of Dp from soil organic matter (SOM) content. We hypothesized that b...

  5. Predictive densities for day-ahead electricity prices using time-adaptive quantile regression

    DEFF Research Database (Denmark)

    Jónsson, Tryggvi; Pinson, Pierre; Madsen, Henrik

    2014-01-01

    A large part of the decision-making problems actors of the power system are facing on a daily basis requires scenarios for day-ahead electricity market prices. These scenarios are most likely to be generated based on marginal predictive densities for such prices, then enhanced with a temporal...
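    As a rough illustration of building a predictive density from quantile regressions, the sketch below fits several quantiles with statsmodels. The paper's method is time-adaptive, which this static fit deliberately omits; the load/price data are synthetic stand-ins.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
# Synthetic stand-ins: a forecast driver (e.g., load) and a heteroskedastic price.
load = rng.uniform(0.3, 1.0, 300)
price = 20 + 40 * load + rng.normal(0, 5 + 10 * load, 300)

X = sm.add_constant(load)
quantile_preds = {}
for q in (0.05, 0.25, 0.5, 0.75, 0.95):
    res = sm.QuantReg(price, X).fit(q=q)
    # Predicted price quantile for a new day with load = 0.8:
    quantile_preds[q] = res.predict(np.array([[1.0, 0.8]]))[0]
print(quantile_preds)  # a piecewise description of the predictive density
```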

  6. Strange matter equation of state in the quark mass-density-dependent model

    International Nuclear Information System (INIS)

    Benvenuto, O.G.; Lugones, G.

    1995-01-01

    We study the properties and stability of strange matter at T=0 in the quark mass-density-dependent model for noninteracting quarks. We found a wide ''stability window'' for the values of the parameters (C, M_s0), and the resulting equation of state at low densities is stiffer than that of the MIT bag model. At high densities it tends to the ultrarelativistic behavior expected because of the asymptotic freedom of quarks. The density of zero pressure is near the one predicted by the bag model and not shifted away as stated before; nevertheless, at these densities the velocity of sound is ∼50% larger in this model than in the bag model. We have integrated the equations of stellar structure for strange stars with the present equation of state. We found that the mass-radius relation is very much the same as in the bag model, although it extends to more massive objects, due to the stiffening of the equation of state at low densities
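    For context, the density-dependent quark masses in this class of models are usually written as below; this is the commonly quoted form, and the paper's exact parametrization may differ in detail.

```latex
% Quark mass-density-dependent (QMDD) model, commonly quoted form:
m_{u,d} = \frac{C}{3\,n_B}, \qquad m_s = m_{s0} + \frac{C}{3\,n_B}
% n_B is the baryon number density; (C, m_{s0}) are the parameters
% spanning the ''stability window'' discussed in the abstract.
```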

  7. Predictive models for arteriovenous fistula maturation.

    Science.gov (United States)

    Al Shakarchi, Julien; McGrogan, Damian; Van der Veer, Sabine; Sperrin, Matthew; Inston, Nicholas

    2016-05-07

    Haemodialysis (HD) is a lifeline therapy for patients with end-stage renal disease (ESRD). A critical factor in the survival of renal dialysis patients is the surgical creation of vascular access, and international guidelines recommend arteriovenous fistulas (AVF) as the gold standard of vascular access for haemodialysis. Despite this, AVFs have been associated with high failure rates. Although risk factors for AVF failure have been identified, their utility for predicting AVF failure through predictive models remains unclear. The objectives of this review are to systematically and critically assess the methodology and reporting of studies developing prognostic predictive models for AVF outcomes and to assess their suitability for clinical practice. Electronic databases were searched for studies reporting prognostic predictive models for AVF outcomes. Dual review was conducted to identify studies that reported on the development or validation of a model constructed to predict AVF outcome following creation. Data were extracted on study characteristics, risk predictors, statistical methodology, model type, as well as validation process. We included four different studies reporting five different predictive models. Parameters identified that were common to all scoring systems were age and cardiovascular disease. This review has found a small number of predictive models in vascular access. The disparity between each study limits the development of a unified predictive model.

  8. Model Predictive Control Fundamentals | Orukpe | Nigerian Journal ...

    African Journals Online (AJOL)

    Model Predictive Control (MPC) has developed considerably over the last two decades, both within the research control community and in industries. MPC strategy involves the optimization of a performance index with respect to some future control sequence, using predictions of the output signal based on a process model, ...
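    To make the strategy concrete, here is a minimal receding-horizon sketch for an unconstrained linear plant: stack the model-based predictions over the horizon, minimize a quadratic performance index over the future control sequence, and apply only the first move. The plant matrices, horizon, and weights are hypothetical, and a practical MPC would add constraints.

```python
import numpy as np

def mpc_step(A, B, x0, r, horizon, q, rho):
    """One unconstrained MPC step for x+ = A x + B u with quadratic cost."""
    nx, nu = B.shape
    # Stacked predictions over the horizon: X = F x0 + G U.
    F = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(horizon)])
    G = np.zeros((horizon * nx, horizon * nu))
    for i in range(horizon):
        for j in range(i + 1):
            G[i*nx:(i+1)*nx, j*nu:(j+1)*nu] = np.linalg.matrix_power(A, i - j) @ B
    R = np.tile(r, horizon)
    H = q * G.T @ G + rho * np.eye(horizon * nu)    # Hessian of the cost in U
    U = np.linalg.solve(H, q * G.T @ (R - F @ x0))  # minimizer of the index
    return U[:nu]                                   # receding horizon: first move only

A = np.array([[1.0, 0.1], [0.0, 1.0]])              # hypothetical plant
B = np.array([[0.0], [0.1]])
x = np.array([1.0, 0.0])
for _ in range(50):
    u = mpc_step(A, B, x, r=np.zeros(2), horizon=20, q=1.0, rho=0.01)
    x = A @ x + B @ u
print(x)                                            # driven toward the setpoint
```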

  9. Unreachable Setpoints in Model Predictive Control

    DEFF Research Database (Denmark)

    Rawlings, James B.; Bonné, Dennis; Jørgensen, John Bagterp

    2008-01-01

    In this work, a new model predictive controller is developed that handles unreachable setpoints better than traditional model predictive control methods. The new controller induces an interesting fast/slow asymmetry in the tracking response of the system. Nominal asymptotic stability of the optim...

  10. Clinical Prediction Models for Cardiovascular Disease: Tufts Predictive Analytics and Comparative Effectiveness Clinical Prediction Model Database.

    Science.gov (United States)

    Wessler, Benjamin S; Lai Yh, Lana; Kramer, Whitney; Cangelosi, Michael; Raman, Gowri; Lutz, Jennifer S; Kent, David M

    2015-07-01

    Clinical prediction models (CPMs) estimate the probability of clinical outcomes and hold the potential to improve decision making and individualize care. For patients with cardiovascular disease, there are numerous CPMs available although the extent of this literature is not well described. We conducted a systematic review for articles containing CPMs for cardiovascular disease published between January 1990 and May 2012. Cardiovascular disease includes coronary heart disease, heart failure, arrhythmias, stroke, venous thromboembolism, and peripheral vascular disease. We created a novel database and characterized CPMs based on the stage of development, population under study, performance, covariates, and predicted outcomes. There are 796 models included in this database. The number of CPMs published each year is increasing steadily over time. Seven hundred seventeen (90%) are de novo CPMs, 21 (3%) are CPM recalibrations, and 58 (7%) are CPM adaptations. This database contains CPMs for 31 index conditions, including 215 CPMs for patients with coronary artery disease, 168 CPMs for population samples, and 79 models for patients with heart failure. There are 77 distinct index/outcome pairings. Of the de novo models in this database, 450 (63%) report a c-statistic and 259 (36%) report some information on calibration. There is an abundance of CPMs available for a wide assortment of cardiovascular disease conditions, with substantial redundancy in the literature. The comparative performance of these models, the consistency of effects and risk estimates across models and the actual and potential clinical impact of this body of literature is poorly understood. © 2015 American Heart Association, Inc.

  11. Hybrid approaches to physiologic modeling and prediction

    Science.gov (United States)

    Olengü, Nicholas O.; Reifman, Jaques

    2005-05-01

    This paper explores how the accuracy of a first-principles physiological model can be enhanced by integrating data-driven, "black-box" models with the original model to form a "hybrid" model system. Both linear (autoregressive) and nonlinear (neural network) data-driven techniques are separately combined with a first-principles model to predict human body core temperature. Rectal core temperature data from nine volunteers, subject to four 30/10-minute cycles of moderate exercise/rest regimen in both CONTROL and HUMID environmental conditions, are used to develop and test the approach. The results show significant improvements in prediction accuracy, with average improvements of up to 30% for prediction horizons of 20 minutes. The models developed from one subject's data are also used in the prediction of another subject's core temperature. Initial results for this approach for a 20-minute horizon show no significant improvement over the first-principles model by itself.
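    The general recipe (let a data-driven model learn the first-principles model's residuals) can be sketched compactly. The AR(1) choice below mirrors the paper's autoregressive variant only loosely; the synthetic "physics" output and all constants are illustrative.

```python
import numpy as np

def fit_ar1(res):
    """Least-squares AR(1) coefficient for a residual series."""
    return np.dot(res[:-1], res[1:]) / np.dot(res[:-1], res[:-1])

def hybrid_forecast(physics_pred, observed, n_ahead):
    """Correct a first-principles prediction with an AR(1) model of its residuals."""
    res = observed - physics_pred                 # historical model error
    phi = fit_ar1(res)
    correction = res[-1] * phi ** np.arange(1, n_ahead + 1)
    # For simplicity the physics forecast is held at its last value here;
    # in practice future physics-model outputs would be corrected instead.
    return physics_pred[-1] + correction

t = np.arange(200)
truth = 37.0 + 0.5 * np.sin(t / 20.0)             # "core temperature" with slow drift
physics = np.full_like(truth, 37.0)               # biased first-principles output
print(hybrid_forecast(physics, truth, n_ahead=20))
```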

  12. Speed-Density Model of Interrupted Traffic Flow Based on Coil Data

    Directory of Open Access Journals (Sweden)

    Chen Yu

    2016-01-01

    Full Text Available As a fundamental traffic diagram, the speed-density relationship can provide a solid foundation for traffic flow analysis and efficient traffic management. Because of changes in modern travel modes, the dramatic increase in the number of vehicles and in traffic density, and the impact of traffic signals and other factors, vehicles change velocity frequently, which means that a speed-density model based on uninterrupted traffic flow is not suitable for interrupted traffic flow. Based on coil data from urban roads in Wuhan, China, a new method is proposed that accurately describes the speed-density relation of interrupted traffic flow and its speed fluctuation characteristics. The model of upper and lower bounds on the critical values, obtained by fitting the coil data, can accurately and intuitively describe the state of urban road traffic, and the physical meaning of each parameter plays an important role in the prediction and analysis of such traffic.
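    The abstract does not give the fitted functional form, so as a stand-in the sketch below fits the classic Greenshields linear speed-density relation to synthetic "coil" observations; free-flow speed and jam density then fall out of the fit.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic coil observations: density (veh/km) and noisy speed (km/h).
density = rng.uniform(10, 120, 200)
speed = 60.0 * (1.0 - density / 150.0) + rng.normal(0, 2, 200)

# Greenshields: v = v_f * (1 - k / k_j), linear in k, so OLS suffices.
slope, intercept = np.polyfit(density, speed, 1)
v_free = intercept                # free-flow speed (k -> 0)
k_jam = -intercept / slope        # jam density (v -> 0)
print(f"v_f ~ {v_free:.1f} km/h, k_j ~ {k_jam:.0f} veh/km")
```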

  13. Comparison of Mars Atmospheric Density Estimates from Models to Measurements from Mars Global Surveyor (MGS) Data

    Science.gov (United States)

    Justh, Hilary L.; Justus, C. G.

    2009-01-01

    A recent study (Desai, 2008) has shown that the actual landing sites of Mars Pathfinder, the Mars Exploration Rovers (Spirit and Opportunity) and the Phoenix Mars Lander were further downrange than predicted by models prior to landing. Desai's reconstruction of their entries into the Martian atmosphere showed that the models consistently predicted higher densities than those found upon entry, descent and landing. Desai's results have raised the question of whether there is a systemic problem within Mars atmospheric models. The proposal is to compare Mars atmospheric density estimates from Mars atmospheric models to measurements made by Mars Global Surveyor (MGS). The comparison study requires the completion of several tasks that would result in a greater understanding of the reasons behind the discrepancy found during recent landings on Mars and of possible solutions to this problem.

  14. Charge and transition densities of samarium isotopes in the interacting Boson model

    International Nuclear Information System (INIS)

    Moinester, M.A.; Alster, J.; Dieperink, A.E.L.

    1982-01-01

    The interacting boson approximation (IBA) model has been used to interpret the ground-state charge distributions and lowest 2⁺ transition charge densities of the even samarium isotopes for A = 144-154. Phenomenological boson transition densities associated with the nucleons comprising the s- and d-bosons of the IBA were determined via a least-squares fit analysis of charge and transition densities in the Sm isotopes. The application of these boson transition densities to higher excited 0⁺ and 2⁺ states of Sm, and to 0⁺ and 2⁺ transitions in neighboring nuclei, such as Nd and Gd, is described. IBA predictions for the transition densities of the three lowest 2⁺ levels of ¹⁵⁴Gd are given and compared to theoretical transition densities based on Hartree-Fock calculations. The deduced quadrupole boson transition densities are in fair agreement with densities derived previously from ¹⁵⁰Nd data. It is also shown how certain moments of the best-fit boson transition densities can simply and successfully describe rms radii, isomer shifts, B(E2) strengths, and transition radii for the Sm isotopes. (orig.)

  15. Density Functional Theory and Materials Modeling at Atomistic Length Scales

    Directory of Open Access Journals (Sweden)

    Swapan K. Ghosh

    2002-04-01

    Full Text Available Abstract: We discuss the basic concepts of density functional theory (DFT) as applied to materials modeling at the microscopic, mesoscopic and macroscopic length scales. The picture that emerges is that of a single unified framework for the study of both quantum and classical systems. While for quantum DFT the central equation is a one-particle Schrödinger-like Kohn-Sham equation, classical DFT consists of Boltzmann type distributions, both corresponding to a system of noninteracting particles in the field of a density-dependent effective potential, the exact functional form of which is unknown. One therefore approximates the exchange-correlation potential for quantum systems and the excess free energy density functional or the direct correlation functions for classical systems. Illustrative applications of quantum DFT to microscopic modeling of molecular interactions and of classical DFT to mesoscopic modeling of soft condensed matter systems are highlighted.

  16. A mass-density model can account for the size-weight illusion

    Science.gov (United States)

    Bergmann Tiest, Wouter M.; Drewing, Knut

    2018-01-01

    When judging the heaviness of two objects with equal mass, people perceive the smaller and denser of the two as being heavier. Despite the large number of theories, covering bottom-up and top-down approaches, none of them can fully account for all aspects of this size-weight illusion and thus for human heaviness perception. Here we propose a new maximum-likelihood estimation model which describes the illusion as the weighted average of two heaviness estimates with correlated noise: one estimate derived from the object's mass, and the other from the object's density, with the estimates' weights based on their relative reliabilities. While information about mass can be perceived directly, information about density will in some cases first have to be derived from mass and volume. However, according to our model, at the crucial perceptual level heaviness judgments will be biased by the object's density, not by its size. In two magnitude estimation experiments, we tested model predictions for the visual and the haptic size-weight illusion. Participants lifted objects which varied in mass and density. We additionally varied the reliability of the density estimate by varying the quality of either visual (Experiment 1) or haptic (Experiment 2) volume information. As predicted, with increasing quality of volume information, heaviness judgments were increasingly biased towards the object's density: objects of the same density were perceived as more similar, and big objects were perceived as increasingly lighter than small (denser) objects of the same mass. This perceived difference increased with an increasing difference in density. In an additional two-alternative forced choice heaviness experiment, we replicated that the illusion strength increased with the quality of volume information (Experiment 3). Overall, the results highly corroborate our model, which seems promising as a starting point for a unifying framework for the size-weight illusion and human heaviness perception.
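    The core of the model is standard reliability-weighted cue combination. The sketch below shows the independent-noise special case (the paper's model also allows correlated noise between the two estimates); all numbers are made up for the demo.

```python
import numpy as np

def combined_heaviness(m_est, d_est, sigma_m, sigma_d):
    """Reliability-weighted average of a mass estimate and a density estimate.

    Weights are inverse variances; the correlated-noise term of the paper's
    model is omitted here for clarity.
    """
    w_m, w_d = 1.0 / sigma_m**2, 1.0 / sigma_d**2
    return (w_m * m_est + w_d * d_est) / (w_m + w_d)

# Same mass estimate, denser-looking object: the combined judgment shifts
# toward density as volume information (hence the density cue) gets reliable.
print(combined_heaviness(m_est=1.0, d_est=1.6, sigma_m=0.2, sigma_d=0.1))  # ~1.48
print(combined_heaviness(m_est=1.0, d_est=1.6, sigma_m=0.2, sigma_d=0.5))  # ~1.08
```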

  17. Modeling relaxation length and density of acacia mangium wood using gamma - ray attenuation technique

    International Nuclear Information System (INIS)

    Tamer A Tabet; Fauziah Abdul Aziz

    2009-01-01

    Wood density measurement is related to several factors that influence wood quality. In this paper, the density, relaxation length and half-thickness value of Acacia mangium wood at ages of 3, 5, 7, 10, 11, 13 and 15 years were determined using gamma radiation from a ¹³⁷Cs source. Results show that the 3-year-old Acacia mangium tree has the highest relaxation length, 83.33 cm, and the lowest density, 0.43 g cm⁻³, while the 15-year-old tree has the lowest relaxation length, 28.56 cm, and the highest density, 0.76 g cm⁻³. Results also show that the 3-year-old wood has the highest half-thickness value, 57.75 cm, and the 15-year-old wood the lowest, 19.85 cm. Two mathematical models have been developed for the prediction of density and its variation with relaxation length and half-thickness value for trees of different ages. A good agreement (greater than 85% in most cases) was observed between the measured values and the predicted ones. A very good linear correlation was found between measured density and tree age (R² = 0.824), and between estimated density and tree age (R² = 0.952). (Author)
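    The quantities in this record are tied together by the Beer-Lambert law, I = I₀ exp(−μx): the relaxation length is 1/μ and the half-thickness is ln 2/μ. A small sketch follows; the mass attenuation coefficient is an assumed input, not a value from the paper.

```python
import numpy as np

def linear_attenuation(i0, i, thickness_cm):
    """mu from Beer-Lambert attenuation: I = I0 * exp(-mu * x)."""
    return np.log(i0 / i) / thickness_cm

def relaxation_length(mu):
    return 1.0 / mu                      # cm

def half_thickness(mu):
    return np.log(2.0) / mu              # cm; relaxation length times ln 2

def density_from_mu(mu, mass_atten_cm2_per_g):
    """rho from mu = mu_m * rho; mu_m for 137Cs photons in wood assumed known."""
    return mu / mass_atten_cm2_per_g

mu_15yr = 1.0 / 28.56                    # from the reported relaxation length
print(half_thickness(mu_15yr))           # ~19.8 cm, close to the reported 19.85 cm
```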

  18. Evaluating the Predictive Value of Growth Prediction Models

    Science.gov (United States)

    Murphy, Daniel L.; Gaertner, Matthew N.

    2014-01-01

    This study evaluates four growth prediction models--projection, student growth percentile, trajectory, and transition table--commonly used to forecast (and give schools credit for) middle school students' future proficiency. Analyses focused on vertically scaled summative mathematics assessments, and two performance standards conditions (high…

  19. Model predictive control classical, robust and stochastic

    CERN Document Server

    Kouvaritakis, Basil

    2016-01-01

    For the first time, a textbook that brings together classical predictive control with treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered and the state of the art in computationally tractable methods based on uncertainty tubes presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...

  20. Modeling of Fluctuating Mass Flux in Variable Density Flows

    Science.gov (United States)

    So, R. M. C.; Mongia, H. C.; Nikjooy, M.

    1983-01-01

    The approach solves for both Reynolds and Favre averaged quantities and calculates the scalar pdf. Turbulent models used to close the governing equations are formulated to account for complex mixing and variable density effects. In addition, turbulent mass diffusivities are not assumed to be in constant proportion to turbulent momentum diffusivities. The governing equations are solved by a combination of finite-difference technique and Monte-Carlo simulation. Some preliminary results on simple variable density shear flows are presented. The differences between these results and those obtained using conventional models are discussed.
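    For readers unfamiliar with the two averages mentioned, the Favre (density-weighted) average differs from the Reynolds average precisely by a fluctuating mass-flux term, which is why modelling that term matters in variable density flows:

```latex
% Reynolds (ensemble) average:  \bar{u} = \langle u \rangle, \quad u = \bar{u} + u'
% Favre (density-weighted) average:
\tilde{u} = \frac{\langle \rho u \rangle}{\langle \rho \rangle}, \qquad u = \tilde{u} + u''
% The two means differ by the fluctuating mass flux:
\bar{u} - \tilde{u} = -\,\frac{\langle \rho' u' \rangle}{\langle \rho \rangle}
```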

  1. An empirical topside electron density model for calculation of absolute ion densities in IRI

    Czech Academy of Sciences Publication Activity Database

    Třísková, Ludmila; Truhlík, Vladimír; Šmilauer, Jan

    2006-01-01

    Roč. 37, č. 5 (2006), s. 928-934 ISSN 0273-1177 R&D Projects: GA ČR GP205/02/P037; GA AV ČR IAA3042201; GA MŠk ME 651 Grant - others:National Science Foundation(US) 0245457 Institutional research plan: CEZ:AV0Z30420517 Keywords : Plasma density * Topside ionosphere * Ion composition * Empirical models Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics Impact factor: 0.706, year: 2005

  2. Void fraction prediction in two-phase flows independent of the liquid phase density changes

    International Nuclear Information System (INIS)

    Nazemi, E.; Feghhi, S.A.H.; Roshani, G.H.

    2014-01-01

    Gamma-ray densitometry is a frequently used non-invasive method to determine the void fraction in two-phase gas-liquid pipe flows. The performance of flow meters using gamma-ray attenuation depends strongly on the fluid properties. Variations of fluid properties such as density, in situations where temperature and pressure fluctuate, would cause significant errors in the determination of the void fraction in two-phase flows. The conventional solution to this obstacle is periodic recalibration, which is a difficult task. This paper presents a method based on dual-modality densitometry using an Artificial Neural Network (ANN), which offers the advantage of measuring the void fraction independently of changes in the liquid phase. An experimental setup was implemented to generate the required input data for training the network. ANNs were trained on the registered counts of the transmission and scattering detectors at different liquid-phase densities and void fractions. Void fractions were predicted by the ANNs with a mean relative error of less than 0.45% over a density range of 0.735 to 0.98 g cm⁻³. Applying this method would improve the performance of two-phase flow meters and eliminate the necessity of periodic recalibration. - Highlights: • Void fraction was predicted independent of density changes. • Recorded counts of detectors/void fraction were used as inputs/output of ANN. • ANN eliminated necessity of recalibration in changeable density of two-phase flows
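    A minimal stand-in for the approach, two detector counts in and a void fraction out across varying liquid density, can be written with scikit-learn. The synthetic count model below is only a plausible shape for transmission and scattering responses, not the experimental calibration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
n = 500
void = rng.uniform(0.0, 1.0, n)
rho_liq = rng.uniform(0.735, 0.98, n)            # the density range studied
# Assumed response shapes: transmission rises with void fraction,
# scattering tracks the amount of liquid in the beam.
transm = np.exp(-2.0 * rho_liq * (1.0 - void)) + rng.normal(0, 0.01, n)
scatter = rho_liq * (1.0 - void) + rng.normal(0, 0.01, n)
X = np.column_stack([transm, scatter])

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(X, void)
print(model.predict(X[:5]))                      # compare against void[:5]
```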

  3. A density functional theory based approach for predicting melting points of ionic liquids.

    Science.gov (United States)

    Chen, Lihua; Bryantsev, Vyacheslav S

    2017-02-01

    Accurate prediction of melting points of ILs is important both from the fundamental point of view and from the practical perspective of screening ILs with low melting points and broadening their utilization over a wider temperature range. In this work, we present an ab initio approach to calculate melting points of ILs with known crystal structures and illustrate its application for a series of 11 ILs containing imidazolium/pyrrolidinium cations and halide/polyatomic fluoro-containing anions. The melting point is determined as the temperature at which the Gibbs free energy of fusion is zero. The Gibbs free energy of fusion can be expressed through the use of the Born-Fajans-Haber cycle via the lattice free energy of forming a solid IL from gaseous phase ions and the sum of the solvation free energies of the ions comprising the IL. Dispersion-corrected density functional theory (DFT) involving (semi)local (PBE-D3) and hybrid exchange-correlation (HSE06-D3) functionals is applied to estimate the lattice enthalpy, entropy, and free energy. The ions' solvation free energies are calculated with the SMD-generic-IL solvation model at the M06-2X/6-31+G(d) level of theory under standard conditions. The melting points of ILs computed with the HSE06-D3 functional are in good agreement with the experimental data, with a mean absolute error of 30.5 K and a mean relative error of 8.5%. The model is capable of accurately reproducing the trends in melting points upon variation of alkyl substituents in organic cations and replacement of one anion by another. The results verify that the lattice energies of ILs containing polyatomic fluoro-containing anions can be approximated reasonably well using the volume-based thermodynamic approach. However, there is no correlation of the computed lattice energies with molecular volume for ILs containing halide anions. Moreover, entropies of solid ILs follow two different linear relationships with molecular volume for halides and polyatomic fluoro-containing anions.

  4. Radiomic modeling of BI-RADS density categories

    Science.gov (United States)

    Wei, Jun; Chan, Heang-Ping; Helvie, Mark A.; Roubidoux, Marilyn A.; Zhou, Chuan; Hadjiiski, Lubomir

    2017-03-01

    Screening mammography is the most effective and low-cost method to date for early cancer detection. Mammographic breast density has been shown to be highly correlated with breast cancer risk. We are developing a radiomic model for BI-RADS density categorization on full-field digital mammography (FFDM) with a supervised machine learning approach. With IRB approval, we retrospectively collected 478 FFDMs from 478 women. As a gold standard, breast density was assessed by an MQSA radiologist based on BI-RADS categories. The raw FFDMs were used for computerized density assessment. Each raw FFDM first underwent a log-transform to approximate the x-ray sensitometric response, followed by multiscale processing to enhance the fibroglandular densities and parenchymal patterns. Three ROIs were automatically identified based on the keypoint distribution, where the keypoints were obtained as the extrema in the image Gaussian scale-space. A total of 73 features, including intensity and texture features that describe the density and the parenchymal pattern, were extracted from each breast. Our BI-RADS density estimator was constructed using a random forest classifier. We used a 10-fold cross-validation resampling approach to estimate the errors. With the random forest classifier, computerized density categories for 412 of the 478 cases agreed with the radiologist's assessment (weighted kappa = 0.93). The machine learning method with radiomic features as predictors demonstrated a high accuracy in classifying FFDMs into BI-RADS density categories. Further work is underway to improve our system performance as well as to perform an independent testing using a large unseen FFDM set.
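    In scikit-learn terms, the classification step described here is a few lines; the sketch below wires up the same shapes (73 features, 478 cases, 10-fold cross-validation) with random placeholder data, since the real radiomic features and BI-RADS labels come from the FFDM set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.normal(size=(478, 73))      # placeholder intensity/texture features
y = rng.integers(1, 5, size=478)    # placeholder BI-RADS categories 1-4

clf = RandomForestClassifier(n_estimators=300, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)   # 10-fold CV as in the study
print(scores.mean())                # near chance here; meaningful only on real data
```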

  5. Canopy Chlorophyll Density Based Index for Estimating Nitrogen Status and Predicting Grain Yield in Rice

    Directory of Open Access Journals (Sweden)

    Xiaojun Liu

    2017-10-01

    Full Text Available Canopy chlorophyll density (Chl) has a pivotal role in diagnosing crop growth and nutrition status. The purpose of this study was to develop Chl-based models for estimating the N status and predicting the grain yield of rice (Oryza sativa L.) using leaf area index (LAI) and the chlorophyll concentration of the upper leaves. Six field experiments were conducted in Jiangsu Province of East China during 2007, 2008, 2009, 2013, and 2014. Different N rates were applied to generate contrasting conditions of N availability in six Japonica cultivars (9915, 27123, Wuxiangjing 14, Wuyunjing 19, Yongyou 8, and Wuyunjing 24) and two Indica cultivars (Liangyoupei 9, YLiangyou 1). The SPAD values of the four uppermost leaves and the LAI were measured from the tillering to flowering growth stages. Two N indicators, leaf N accumulation (LNA) and plant N accumulation (PNA), were measured. The LAI values estimated by the LAI-2000 and LI-3050C were compared and calibrated with a conversion equation. Linear regression analysis showed significant relationships between the Chl value and the N indicators: PNA = 0.092 × Chl − 1.179 (R² = 0.94, P < 0.001, relative root mean square error (RRMSE) = 0.196) and LNA = 0.052 × Chl − 0.269 (R² = 0.93, P < 0.001, RRMSE = 0.185). A standardized method was used to quantify the correlation between the Chl value and grain yield: normalized yield = 0.601 × normalized Chl + 0.400 (R² = 0.81, P < 0.001, RRMSE = 0.078). Independent experimental data also validated the use of the Chl value to accurately estimate rice N status and predict grain yield.
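    The fitted regressions above are directly usable as formulas. A small sketch follows (units as in the paper; the Chl input itself would come from LAI and upper-leaf chlorophyll readings):

```python
def plant_n_accumulation(chl):
    """PNA = 0.092 * Chl - 1.179 (R^2 = 0.94), per the reported fit."""
    return 0.092 * chl - 1.179

def leaf_n_accumulation(chl):
    """LNA = 0.052 * Chl - 0.269 (R^2 = 0.93)."""
    return 0.052 * chl - 0.269

def normalized_yield(norm_chl):
    """Normalized yield = 0.601 * normalized Chl + 0.400 (R^2 = 0.81)."""
    return 0.601 * norm_chl + 0.400

# Example with an arbitrary canopy chlorophyll density value:
print(plant_n_accumulation(50.0), leaf_n_accumulation(50.0), normalized_yield(0.8))
```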

  6. Assessment of Nucleation Site Density Models for CFD Simulations of Subcooled Flow Boiling

    International Nuclear Information System (INIS)

    Hoang, N. H.; Chu, I. C.; Euh, D. J.; Song, C. H.

    2015-01-01

    The framework of a CFD simulation of subcooled flow boiling basically consists of a block of wall boiling models communicating with the governing equations of a two-phase flow via parameters such as temperature, rate of phase change, etc. In the block of wall boiling models, a heat flux partitioning model, which describes how heat is taken away from a heated surface, is combined with models quantifying the boiling parameters, i.e., the nucleation site density and the bubble departure diameter and frequency. The nucleation site density is an important parameter for predicting subcooled flow boiling: the number of nucleation sites per unit area determines the influence region of each heat transfer mechanism, and its variation changes the dynamics of the vapor bubbles formed at these sites. In addition, the nucleation site density is needed as an initial and boundary condition for solving the interfacial area transport equation. Much effort has been devoted to the mathematical formulation of the nucleation site density, and as a consequence numerous correlations are available in the literature. These correlations commonly differ considerably in their mathematical form as well as in their application range. Some correlations of the nucleation site density have been applied successfully to CFD simulations of specific subcooled boiling flows, but in combination with different correlations of the bubble departure diameter and frequency. In addition, the values of the nucleation site density and the bubble departure diameter and frequency obtained from simulations of the same problem differ appreciably depending on which models are used, even when global characteristics, e.g., void fraction and mean bubble diameter, agree well with experimental values. A good CFD simulation of subcooled flow boiling therefore requires a detailed validation of all the models used. Owing to the importance
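    To give one concrete instance of the correlations discussed: the widely used Lemmert-Chwala fit (adopted in many RPI-type wall-boiling setups, though not necessarily among the ones assessed in this paper) ties site density to wall superheat alone, and illustrates how sensitive the predicted site count is to a single parameter.

```python
def nucleation_site_density(wall_superheat_K):
    """Lemmert-Chwala correlation: N'' = (210 * dT_sup)**1.805 [sites/m^2]."""
    return (210.0 * wall_superheat_K) ** 1.805

for dT in (2.0, 5.0, 10.0):
    # Site density grows steeply with superheat, spanning orders of magnitude.
    print(dT, f"{nucleation_site_density(dT):.3e}")
```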

  7. Ionospheric topside models compared with experimental electron density profiles

    Directory of Open Access Journals (Sweden)

    S. M. Radicella

    2005-06-01

    Full Text Available Recently an increasing number of topside electron density profiles has been made available to the scientific community on the Internet. These data are important for ionospheric modeling purposes, since experimental information on the electron density above the ionospheric maximum of ionization is very scarce. The present work compares the NeQuick and IRI models with the topside electron density profiles available in the databases of the ISIS2, IK19 and Cosmos 1809 satellites. Experimental electron content from the F2 peak up to satellite height and electron densities at fixed heights above the peak have been compared under a wide range of conditions. The analysis points out the behavior of the models and the improvements needed for a better reproduction of the experimental results. The NeQuick topside is a modified Epstein layer, with a thickness parameter determined by an empirical relation. Its performance appears to be strongly affected by this parameter, indicating the need for improvements in its formulation. The IRI topside is based on Booker's approach, which considers two parts with constant height gradients. This formulation appears to lead to an overestimation of the electron density in the upper part of the profiles, and to an overestimation of TEC.
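    For reference, the Epstein layer mentioned here has a simple closed form; the sketch below evaluates it with a constant thickness parameter, whereas NeQuick's topside makes that parameter height-dependent via an empirical relation. All numbers are illustrative.

```python
import numpy as np

def epstein_layer(h_km, n_peak, h_peak_km, H_km):
    """Epstein layer: N(h) = 4*Nm*exp(z) / (1 + exp(z))**2, z = (h - hm)/H."""
    z = (h_km - h_peak_km) / H_km
    e = np.exp(z)
    return 4.0 * n_peak * e / (1.0 + e) ** 2

h = np.array([300.0, 450.0, 800.0])
# Returns Nm at the peak height and decays roughly as exp(-z) well above it.
print(epstein_layer(h, n_peak=1e12, h_peak_km=300.0, H_km=60.0))
```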

  8. Platelet density per monocyte predicts adverse events in patients after percutaneous coronary intervention.

    Science.gov (United States)

    Rutten, Bert; Roest, Mark; McClellan, Elizabeth A; Sels, Jan W; Stubbs, Andrew; Jukema, J Wouter; Doevendans, Pieter A; Waltenberger, Johannes; van Zonneveld, Anton-Jan; Pasterkamp, Gerard; De Groot, Philip G; Hoefer, Imo E

    2016-01-01

    Monocyte recruitment to damaged endothelium is enhanced by platelet binding to monocytes and contributes to vascular repair. Therefore, we studied whether the number of platelets per monocyte affects the recurrence of adverse events in patients after percutaneous coronary intervention (PCI). Platelet-monocytes complexes with high and low median fluorescence intensities (MFI) of the platelet marker CD42b were isolated using cell sorting. Microscopic analysis revealed that a high platelet marker MFI on monocytes corresponded with a high platelet density per monocyte while a low platelet marker MFI corresponded with a low platelet density per monocyte (3.4 ± 0.7 vs 1.4 ± 0.1 platelets per monocyte, P=0.01). Using real-time video microscopy, we observed increased recruitment of high platelet density monocytes to endothelial cells as compared with low platelet density monocytes (P=0.01). Next, we classified PCI scheduled patients (N=263) into groups with high, medium and low platelet densities per monocyte and assessed the recurrence of adverse events. After multivariate adjustment for potential confounders, we observed a 2.5-fold reduction in the recurrence of adverse events in patients with a high platelet density per monocyte as compared with a low platelet density per monocyte [hazard ratio=0.4 (95% confidence interval, 0.2-0.8), P=0.01]. We show that a high platelet density per monocyte increases monocyte recruitment to endothelial cells and predicts a reduction in the recurrence of adverse events in patients after PCI. These findings may imply that a high platelet density per monocyte protects against recurrence of adverse events.

  9. Allometric scaling of population variance with mean body size is predicted from Taylor's law and density-mass allometry.

    Science.gov (United States)

    Cohen, Joel E; Xu, Meng; Schuster, William S F

    2012-09-25

    Two widely tested empirical patterns in ecology are combined here to predict how the variation of population density relates to the average body size of organisms. Taylor's law (TL) asserts that the variance of the population density of a set of populations is a power-law function of the mean population density. Density-mass allometry (DMA) asserts that the mean population density of a set of populations is a power-law function of the mean individual body mass. Combined, DMA and TL predict that the variance of the population density is a power-law function of mean individual body mass. We call this relationship "variance-mass allometry" (VMA). We confirmed the theoretically predicted power-law form and the theoretically predicted parameters of VMA, using detailed data on individual oak trees (Quercus spp.) of Black Rock Forest, Cornwall, New York. These results connect the variability of population density to the mean body mass of individuals.
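    In symbols (with D for population density, M for individual body mass, and a, b, c, d the power-law parameters), the combination is a one-line substitution:

```latex
% Taylor's law (TL) and density-mass allometry (DMA):
\operatorname{Var}(D) = a\,\bar{D}^{\,b}, \qquad \bar{D} = c\,\bar{M}^{\,d}
% Substituting DMA into TL yields variance-mass allometry (VMA):
\operatorname{Var}(D) = a\left(c\,\bar{M}^{\,d}\right)^{b} = a\,c^{\,b}\,\bar{M}^{\,b d}
```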

  10. A Global Model for Bankruptcy Prediction.

    Science.gov (United States)

    Alaminos, David; Del Castillo, Agustín; Fernández, Manuel Ángel

    2016-01-01

    The recent world financial crisis has increased the number of bankruptcies in numerous countries and has resulted in a new area of research which responds to the need to predict this phenomenon, not only at the level of individual countries, but also at a global level, offering explanations of the common characteristics shared by the affected companies. Nevertheless, few studies focus on the prediction of bankruptcies globally. In order to compensate for this lack of empirical literature, this study has used a methodological framework of logistic regression to construct predictive bankruptcy models for Asia, Europe and America, and other global models for the whole world. The objective is to construct a global model with a high capacity for predicting bankruptcy in any region of the world. The results obtained have allowed us to confirm the superiority of the global model in comparison to regional models over periods of up to three years prior to bankruptcy.

  11. Re-examining Prostate-specific Antigen (PSA) Density: Defining the Optimal PSA Range and Patients for Using PSA Density to Predict Prostate Cancer Using Extended Template Biopsy.

    Science.gov (United States)

    Jue, Joshua S; Barboza, Marcelo Panizzutti; Prakash, Nachiketh S; Venkatramani, Vivek; Sinha, Varsha R; Pavan, Nicola; Nahar, Bruno; Kanabur, Pratik; Ahdoot, Michael; Dong, Yan; Satyanarayana, Ramgopal; Parekh, Dipen J; Punnen, Sanoj

    2017-07-01

    To compare the predictive accuracy of prostate-specific antigen (PSA) density vs PSA across different PSA ranges and by prior biopsy status in a prospective cohort undergoing prostate biopsy. Men from a prospective trial underwent an extended template biopsy to evaluate for prostate cancer at 26 sites throughout the United States. The area under the receiver operating characteristic curve (AUC) was used to assess the predictive accuracy of PSA density vs PSA across 3 PSA ranges (<4, 4-10, and >10 ng/mL). We also investigated the effect of varying the PSA density cutoffs on the detection of cancer and assessed the performance of PSA density vs PSA in men with or without a prior negative biopsy. Among 1290 patients, 585 (45%) and 284 (22%) men had prostate cancer and significant prostate cancer, respectively. PSA density performed better than PSA in detecting any prostate cancer within a PSA of 4-10 ng/mL (AUC: 0.70 vs 0.53) and within a PSA >10 ng/mL (AUC: 0.84 vs 0.65). PSA density was also significantly more predictive than PSA in detecting any prostate cancer in men without a prior negative biopsy (AUC: 0.73 vs 0.67). As PSA increases, PSA density becomes a better marker for predicting prostate cancer compared with PSA alone. Additionally, PSA density performed better than PSA in men with a prior negative biopsy. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Fingerprint verification prediction model in hand dermatitis.

    Science.gov (United States)

    Lee, Chew K; Chang, Choong C; Johor, Asmah; Othman, Puwira; Baba, Roshidah

    2015-07-01

    Hand dermatitis associated fingerprint changes is a significant problem and affects fingerprint verification processes. This study was done to develop a clinically useful prediction model for fingerprint verification in patients with hand dermatitis. A case-control study involving 100 patients with hand dermatitis. All patients verified their thumbprints against their identity card. Registered fingerprints were randomized into a model derivation and model validation group. Predictive model was derived using multiple logistic regression. Validation was done using the goodness-of-fit test. The fingerprint verification prediction model consists of a major criterion (fingerprint dystrophy area of ≥ 25%) and two minor criteria (long horizontal lines and long vertical lines). The presence of the major criterion predicts it will almost always fail verification, while presence of both minor criteria and presence of one minor criterion predict high and low risk of fingerprint verification failure, respectively. When none of the criteria are met, the fingerprint almost always passes the verification. The area under the receiver operating characteristic curve was 0.937, and the goodness-of-fit test showed agreement between the observed and expected number (P = 0.26). The derived fingerprint verification failure prediction model is validated and highly discriminatory in predicting risk of fingerprint verification in patients with hand dermatitis. © 2014 The International Society of Dermatology.
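    The derived decision rule is simple enough to state as code; the sketch below transcribes the criteria exactly as the abstract gives them, with hypothetical function and argument names.

```python
def verification_risk(dystrophy_area_pct, long_horizontal, long_vertical):
    """Fingerprint-verification risk per the derived criteria.

    Major criterion: dystrophy area >= 25%. Minor criteria: long horizontal
    lines and long vertical lines.
    """
    if dystrophy_area_pct >= 25.0:
        return "almost always fails verification"
    n_minor = int(long_horizontal) + int(long_vertical)
    if n_minor == 2:
        return "high risk of verification failure"
    if n_minor == 1:
        return "low risk of verification failure"
    return "almost always passes verification"

print(verification_risk(30.0, False, False))   # major criterion met
print(verification_risk(10.0, True, True))     # both minor criteria met
```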

  13. Massive Predictive Modeling using Oracle R Enterprise

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    R is fast becoming the lingua franca for analyzing data via statistics, visualization, and predictive analytics. For enterprise-scale data, R users have three main concerns: scalability, performance, and production deployment. Oracle's R-based technologies - Oracle R Distribution, Oracle R Enterprise, Oracle R Connector for Hadoop, and the R package ROracle - address these concerns. In this talk, we introduce Oracle's R technologies, highlighting how each enables R users to achieve scalability and performance while making production deployment of R results a natural outcome of the data analyst/scientist efforts. The focus then turns to Oracle R Enterprise with code examples using the transparency layer and embedded R execution, targeting massive predictive modeling. One goal behind massive predictive modeling is to build models per entity, such as customers, zip codes, simulations, in an effort to understand behavior and tailor predictions at the entity level. Predictions...

  14. First-Principles Prediction of Densities of Amorphous Materials: The Case of Amorphous Silicon

    Science.gov (United States)

    Furukawa, Yoritaka; Matsushita, Yu-ichiro

    2018-02-01

    A novel approach to predict the atomic densities of amorphous materials is explored on the basis of Car-Parrinello molecular dynamics (CPMD) within density functional theory. Although determining the atomic density of matter is crucial to understanding its physical properties, no first-principles method for amorphous materials had been proposed until now. We have extended the conventional method for crystalline materials in a natural manner and pointed out the importance of the canonical ensemble of the total energy in the determination of the atomic densities of amorphous materials. To take into account the canonical distribution of the total energy, we generate multiple amorphous structures with several different volumes by CPMD simulations and average the total energies at each volume. The density is then determined as the one that minimizes the averaged total energy. In this study, this approach is implemented for amorphous silicon (a-Si) to demonstrate its validity, and we have determined the density of a-Si to be 4.1% lower and its bulk modulus to be 28 GPa smaller than those of the crystal, in good agreement with experiments. We have also confirmed that generating samples through classical molecular dynamics simulations produces a comparable result. The findings suggest that the presented method is applicable to other amorphous systems, including those for which experimental knowledge is lacking.
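    The last step of the procedure, averaging the total energy over samples at each volume and then minimizing the averaged E(V), is a small fitting exercise. The numbers below are synthetic placeholders for the ensemble means; in the paper each point comes from multiple CPMD-generated structures.

```python
import numpy as np

volumes = np.array([0.96, 0.98, 1.00, 1.02, 1.04, 1.06])      # relative volume
mean_energy = np.array([0.40, 0.18, 0.05, 0.00, 0.03, 0.14])  # ensemble-averaged E

# Quadratic fit near the minimum of the averaged energy-volume curve:
c2, c1, c0 = np.polyfit(volumes, mean_energy, 2)
v0 = -c1 / (2.0 * c2)            # equilibrium volume -> atomic density
bulk_modulus = v0 * 2.0 * c2     # B = V0 * d2E/dV2 at the minimum
print(f"V0 = {v0:.3f}, B = {bulk_modulus:.2f}")
```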

  15. A Creep Model for High-Density Snow

    Science.gov (United States)

    2017-04-01

    plotted as an additional parameter. From these data, I extracted the trend in Y vs. ρ and provide this as a look-up table in ABAQUS (Table 2). Table 2 ... could be used in the standard ABAQUS creep model. Comparing model-predicted strain and settlement to published data shows that the secondary creep ... Available laboratory data helped to determine the parameters for these models. These models were recast into a form compatible with the ABAQUS finite

  16. Absolute densities in exoplanetary systems. Photodynamical modelling of Kepler-138.

    Science.gov (United States)

    Almenara, J. M.; Díaz, R. F.; Dorn, C.; Bonfils, X.; Udry, S.

    2018-04-01

    In favourable conditions, the density of transiting planets in multiple systems can be determined from photometry data alone. Dynamical information can be extracted from light curves, provided modelling is done self-consistently, i.e., using a photodynamical model, which simulates the individual photometric observations instead of the more generally used transit times. We apply this methodology to the Kepler-138 planetary system. The derived planetary bulk densities are a factor of two more precise than previous determinations, and we find a discrepancy in the stellar bulk density with respect to a previous study. This leads, in turn, to a discrepancy in the determination of the masses and radii of the star and the planets. In particular, we find that the innermost planet, Kepler-138 b, has a size between those of Mars and the Earth. Given our mass and density estimates, we characterize the planetary interiors using a generalized Bayesian inference model. This model allows us to quantify interior degeneracy and to calculate confidence regions of interior parameters such as the thicknesses of the core, the mantle, and the ocean and gas layers. We find that Kepler-138 b and Kepler-138 d have significantly thick volatile layers, and that the gas layer of Kepler-138 b is likely enriched. On the other hand, Kepler-138 c can be purely rocky.

  17. Probability density estimation in stochastic environmental models using reverse representations

    NARCIS (Netherlands)

    Van den Berg, E.; Heemink, A.W.; Lin, H.X.; Schoenmakers, J.G.M.

    2003-01-01

    The estimation of probability densities of variables described by systems of stochastic differential equations has long been done using forward time estimators, which rely on the generation of realizations of the model, forward in time. Recently, an estimator based on the combination of forward and

  18. Predictive Model of Systemic Toxicity (SOT)

    Science.gov (United States)

    In an effort to ensure chemical safety in light of regulatory advances away from reliance on animal testing, USEPA and L'Oréal have collaborated to develop a quantitative systemic toxicity prediction model. Prediction of human systemic toxicity has proved difficult and remains a ...

  19. Regional 4-D modeling of the ionospheric electron density

    Science.gov (United States)

    Schmidt, M.; Bilitza, D.; Shum, C. K.; Zeilhofer, C.

    2008-08-01

    The knowledge of the electron density is the key point in correcting ionospheric delays of electromagnetic measurements and in studying the ionosphere. During the last decade GNSS, in particular GPS, has become a promising tool for monitoring the total electron content (TEC), i.e., the integral of the electron density along the ray-path between the transmitting satellite and the receiver. Hence, geometry-free GNSS measurements provide information on the electron density, which is basically a four-dimensional function of spatial position and time. In addition, these GNSS measurements can be combined with other available data, including nadir, over-ocean TEC observations from dual-frequency radar altimetry (T/P, JASON, ENVISAT), and TECs from GPS-LEO occultation systems (e.g., FORMOSAT-3/COSMIC, CHAMP) with heterogeneous sampling and accuracy. In this paper, we present different multi-dimensional approaches for modeling spatio-temporal variations of the ionospheric electron density. To be more specific, we split the target function into a reference part, computed from the International Reference Ionosphere (IRI), and an unknown correction term. Due to the localizing feature of B-spline functions, we apply tensor-product spline expansions to model the correction term in a certain multi-dimensional region either completely or partly. Furthermore, the multi-resolution representation derived from wavelet analysis allows monitoring the ionosphere at different resolution levels. For demonstration we apply three approaches to electron density data over South America.
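    A toy version of the tensor-product correction term is easy to assemble with SciPy B-splines; here over two dimensions only (latitude by longitude), with random coefficients standing in for the estimated ones and the IRI reference part omitted.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(knots, degree, x):
    """Matrix of all 1-D B-spline basis functions evaluated at points x."""
    n_basis = len(knots) - degree - 1
    B = np.empty((len(x), n_basis))
    for j in range(n_basis):
        coeff = np.zeros(n_basis)
        coeff[j] = 1.0
        B[:, j] = BSpline(knots, coeff, degree)(x)
    return B

deg = 2
knots = np.concatenate([[0.0, 0.0], np.linspace(0.0, 1.0, 5), [1.0, 1.0]])
lat = np.linspace(0.0, 1.0, 50)    # normalized coordinates
lon = np.linspace(0.0, 1.0, 60)
n_b = knots.size - deg - 1
C = np.random.default_rng(3).normal(size=(n_b, n_b))   # placeholder coefficients
# Tensor product: correction(lat, lon) = B_lat @ C @ B_lon^T
correction = bspline_basis(knots, deg, lat) @ C @ bspline_basis(knots, deg, lon).T
print(correction.shape)            # (50, 60) gridded correction to the reference
```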

  20. Testicular Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing testicular cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  1. Pancreatic Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing pancreatic cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  2. Colorectal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  3. Prostate Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing prostate cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  4. Bladder Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing bladder cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  5. Esophageal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing esophageal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  6. Cervical Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing cervical cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  7. Breast Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing breast cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  8. Lung Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing lung cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  9. Liver Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing liver cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  10. Ovarian Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing ovarian cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  11. Posterior Predictive Model Checking in Bayesian Networks

    Science.gov (United States)

    Crawford, Aaron

    2014-01-01

    This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex…

  12. Probability density based gradient projection method for inverse kinematics of a robotic human body model.

    Science.gov (United States)

    Lura, Derek; Wernke, Matthew; Alqasemi, Redwan; Carey, Stephanie; Dubey, Rajiv

    2012-01-01

    This paper presents the probability density based gradient projection (GP) of the null space of the Jacobian for a 25 degree of freedom bilateral robotic human body model (RHBM). This method was used to predict the inverse kinematics of the RHBM and maximize the similarity between predicted inverse kinematic poses and recorded data of 10 subjects performing activities of daily living. The density function was created for discrete increments of the workspace. The number of increments in each direction (x, y, and z) was varied from 1 to 20. Performance of the method was evaluated by finding the root mean squared (RMS) difference between the predicted joint angles and the joint angles recorded from motion capture. The amount of data included in the creation of the probability density function was varied from 1 to 10 subjects, creating sets of subjects included in and excluded from the density function. The performance of the GP method for included and excluded subjects was evaluated to test the robustness of the method. Accuracy of the GP method varied with the incremental division of the workspace: increasing the number of increments decreased the RMS error of the method, with the average RMS error of included subjects ranging from 7.7° to 3.7°. However, increasing the number of increments also decreased the robustness of the method.
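
    The projection step itself is standard redundancy resolution; a generic sketch (not the RHBM code, with a toy gradient standing in for the learned probability density) is:

```python
# Generic null-space gradient projection step for a redundant manipulator:
# qdot = J^+ xdot + (I - J^+ J) grad_H, where grad_H ascends a secondary
# objective H(q) (here a placeholder for the learned probability density).
import numpy as np

def gp_step(J, xdot, grad_H):
    J_pinv = np.linalg.pinv(J)                   # Moore-Penrose pseudoinverse
    null_proj = np.eye(J.shape[1]) - J_pinv @ J  # projector onto null space of J
    return J_pinv @ xdot + null_proj @ grad_H    # task motion + null-space motion

J = np.random.default_rng(1).normal(size=(6, 25))   # 6-D task, 25-DOF model
xdot = np.array([0.01, 0, 0, 0, 0, 0])              # desired end-effector twist
grad_H = np.zeros(25); grad_H[3] = 0.1              # toy density gradient
qdot = gp_step(J, xdot, grad_H)
print(np.allclose(J @ qdot, xdot))  # null-space term is task-invisible -> True
```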

  13. Ground-State Gas-Phase Structures of Inorganic Molecules Predicted by Density Functional Theory Methods

    KAUST Repository

    Minenkov, Yury

    2017-11-29

    We tested a battery of density functional theory (DFT) methods ranging from generalized gradient approximation (GGA) via meta-GGA to hybrid meta-GGA schemes as well as Møller–Plesset perturbation theory of the second order and a single and double excitation coupled-cluster (CCSD) theory for their ability to reproduce accurate gas-phase structures of di- and triatomic molecules derived from microwave spectroscopy. We obtained the most accurate molecular structures using the hybrid and hybrid meta-GGA approximations with B3PW91, APF, TPSSh, mPW1PW91, PBE0, mPW1PBE, B972, and B98 functionals, which yielded the lowest errors. We recommend using these methods to predict accurate three-dimensional structures of inorganic molecules when intramolecular dispersion interactions play an insignificant role. The structures that the CCSD method predicts are of similar quality although at considerably larger computational cost. The structures that GGA and meta-GGA schemes predict are less accurate with the largest absolute errors detected with BLYP and M11-L, suggesting that these methods should not be used if accurate three-dimensional molecular structures are required. Because of numerical problems related to the integration of the exchange–correlation part of the functional and large scattering of errors, most of the Minnesota models tested, particularly MN12-L, M11, M06-L, SOGGA11, and VSXC, are also not recommended for geometry optimization. When maintaining a low computational budget is essential, the nonseparable gradient functional N12 might work within an acceptable range of error. As expected, the DFT-D3 dispersion correction had a negligible effect on the internuclear distances when combined with the functionals tested on nonweakly bonded di- and triatomic inorganic molecules. By contrast, the dispersion correction for the APF-D functional has been found to shorten the bonds significantly, up to 0.064 Å (AgI), in Ag halides, BaO, BaS, BaF, BaCl, Cu halides, and Li and

  14. A local leaky-box model for the local stellar surface density-gas surface density-gas phase metallicity relation

    Science.gov (United States)

    Zhu, Guangtun Ben; Barrera-Ballesteros, Jorge K.; Heckman, Timothy M.; Zakamska, Nadia L.; Sánchez, Sebastian F.; Yan, Renbin; Brinkmann, Jonathan

    2017-07-01

    We revisit the relation between the stellar surface density, the gas surface density and the gas-phase metallicity of typical disc galaxies in the local Universe with the SDSS-IV/MaNGA survey, using the star formation rate surface density as an indicator for the gas surface density. We show that these three local parameters form a tight relationship, confirming previous works (e.g. by the PINGS and CALIFA surveys), but with a larger sample. We present a new local leaky-box model, assuming that star-formation history and chemical evolution are localized except for outflowing materials. We derive closed-form solutions for the evolution of stellar surface density, gas surface density and gas-phase metallicity, and show that these parameters form a tight relation independent of initial gas density and time. We show that, with canonical values of model parameters, this predicted relation matches the observed one well. In addition, we briefly describe a pathway to improving the current semi-analytic models of galaxy formation by incorporating the local leaky-box model in the cosmological context, which can potentially explain simultaneously multiple properties of Milky Way-type disc galaxies, such as the size growth and the global stellar mass-gas metallicity relation.
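
    For orientation, the closed-form solution of the classic global leaky box under instantaneous recycling, with net yield y and an outflow of η times the star formation rate, has the structure the abstract alludes to; this is the textbook version, not the paper's local derivation:

```latex
% Classic (global) leaky box under instantaneous recycling:
%   gas:    dM_g/dt = -(1+\eta)\,\psi
%   metals: dM_Z/dt = y\,\psi - (1+\eta)\,Z\,\psi
% Integrating gives a metallicity set only by the consumed gas fraction:
Z(t) = \frac{y}{1+\eta}\,\ln\!\left[\frac{M_g(0)}{M_g(t)}\right]
```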

  15. Exploring the Role of the Spatial Characteristics of Visible and Near-Infrared Reflectance in Predicting Soil Organic Carbon Density

    Directory of Open Access Journals (Sweden)

    Long Guo

    2017-10-01

    Full Text Available Soil organic carbon stock plays a key role in the global carbon cycle and precision agriculture. Visible and near-infrared reflectance spectroscopy (VNIRS) can directly reflect the internal physical construction and chemical substances of soil. Partial least squares regression (PLSR) is a classical and widely used model for constructing soil spectral models and predicting soil properties. Nevertheless, using PLSR alone does not account for the strong spatial heterogeneity and dependence that characterize soil. Considering the spatial characteristics of soil can offer valuable spatial information to improve the prediction accuracy of soil spectral models. Thus, this study aims to construct a rapid and accurate soil spectral model for predicting soil organic carbon density (SOCD) with the aid of the spatial autocorrelation of soil spectral reflectance. A total of 231 topsoil samples (0–30 cm) were collected from the Jianghan Plain, Wuhan, China. The spectral reflectance (350–2500 nm) was used as an auxiliary variable. A geographically-weighted regression (GWR) model was used to evaluate the potential improvement of SOCD prediction when the spatial information of the spectral features was considered. Results showed that: (1) the principal components extracted from PLSR have a strong relationship with the regression coefficients at the average sampling distance (300 m) based on the Moran's I values; (2) the eigenvectors of the principal components exhibited strong relationships with the absorption spectral features, and the regression coefficients of GWR varied with the geographical locations; and (3) GWR displayed a higher accuracy than PLSR in predicting the SOCD by VNIRS. This study aims to highlight the importance of the spatial characteristics of soil properties and their spectra. This work also introduces guidelines for the application of GWR in predicting soil properties by VNIRS.
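
    A minimal GWR step, showing how a separate distance-weighted least-squares fit at each location lets coefficients vary geographically (synthetic data; the Gaussian kernel and the fixed 300 m bandwidth are assumptions, chosen only to echo the average sampling distance):

```python
# Minimal GWR: a separate weighted least-squares fit at each target location,
# with weights that decay with distance (Gaussian kernel, fixed bandwidth).
import numpy as np

def gwr_coefficients(coords, X, y, target, bandwidth):
    d = np.linalg.norm(coords - target, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)        # spatial weights
    Xd = np.column_stack([np.ones(len(X)), X])     # add intercept
    W = np.diag(w)
    beta = np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)
    return beta                                    # local intercept + slopes

rng = np.random.default_rng(2)
coords = rng.uniform(0, 1000, size=(231, 2))           # sample locations (m)
X = rng.normal(size=(231, 3))                          # e.g. spectral components
slope = 1 + coords[:, 0] / 1000                        # spatially varying effect
y = slope * X[:, 0] + rng.normal(scale=0.1, size=231)  # SOCD stand-in
print(gwr_coefficients(coords, X, y, coords[0], bandwidth=300.0))
```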

  16. Multiple Steps Prediction with Nonlinear ARX Models

    OpenAIRE

    Zhang, Qinghua; Ljung, Lennart

    2007-01-01

    NLARX (NonLinear AutoRegressive with eXogenous inputs) models are frequently used in black-box nonlinear system identification. Though it is easy to make one step ahead prediction with such models, multiple steps prediction is far from trivial. The main difficulty is that in general there is no easy way to compute the mathematical expectation of an output conditioned by past measurements. An optimal solution would require intensive numerical computations related to nonlinear filtering. The pur...
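
    The conditional expectation discussed above is commonly approximated by Monte Carlo simulation over future noise realizations; a toy sketch with a made-up first-order NARX map f is:

```python
# Monte Carlo k-step-ahead prediction for a NARX model y_t = f(y_{t-1}, u_t) + e_t:
# propagate many noise realizations and average, approximating E[y_{t+k} | past].
import numpy as np

def f(y_prev, u):                       # toy nonlinear ARX map (assumption)
    return 0.8 * np.tanh(y_prev) + 0.2 * u

def k_step_prediction(y_last, u_future, sigma_e, n_paths=10000, seed=0):
    rng = np.random.default_rng(seed)
    y = np.full(n_paths, y_last)
    for u in u_future:                  # simulate each horizon step
        y = f(y, u) + rng.normal(scale=sigma_e, size=n_paths)
    return y.mean()                     # Monte Carlo estimate of the expectation

print(k_step_prediction(y_last=0.5, u_future=[1.0, 1.0, 1.0], sigma_e=0.1))
```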

  17. Predictability of bone density at posterior mandibular implant sites using cone-beam computed tomography intensity values.

    Science.gov (United States)

    Alkhader, Mustafa; Hudieb, Malik; Khader, Yousef

    2017-01-01

    The aim of this study was to investigate the predictability of bone density at posterior mandibular implant sites using cone-beam computed tomography (CBCT) intensity values. CBCT cross-sectional images for 436 posterior mandibular implant sites were selected for the study. Using Invivo software (Anatomage, San Jose, California, USA), two observers classified the bone density into three categories: low, intermediate, and high, and CBCT intensity values were generated. Based on the consensus of the two observers, 15.6% of sites were of low bone density, 47.9% were of intermediate density, and 36.5% were of high density. Receiver-operating characteristic analysis showed that CBCT intensity values had a high predictive power for predicting high density sites (area under the curve [AUC] = 0.94, P < 0.005) and intermediate density sites (AUC = 0.81, P < 0.005). The best cut-off value for intensity to predict intermediate density sites was 218 (sensitivity = 0.77 and specificity = 0.76) and the best cut-off value for intensity to predict high density sites was 403 (sensitivity = 0.93 and specificity = 0.77). CBCT intensity values are considered useful for predicting bone density at posterior mandibular implant sites.
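
    The cut-off selection reported above can be reproduced in outline on synthetic data; the Youden index used below is one common criterion, an assumption here since the record does not state how the best cut-offs were chosen:

```python
# Choosing an intensity cut-off from a ROC curve (synthetic data);
# the Youden index J = sensitivity + specificity - 1 picks the threshold.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(3)
is_high = rng.integers(0, 2, 436)                  # 1 = high-density site
intensity = np.where(is_high == 1,
                     rng.normal(450, 80, 436),     # toy CBCT intensities
                     rng.normal(250, 80, 436))

fpr, tpr, thresholds = roc_curve(is_high, intensity)
best = np.argmax(tpr - fpr)                        # maximize Youden's J
print('AUC:', roc_auc_score(is_high, intensity))
print('best cut-off:', thresholds[best],
      'sensitivity:', tpr[best], 'specificity:', 1 - fpr[best])
```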

  18. Single crystal plasticity by modeling dislocation density rate behavior

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Benjamin L [Los Alamos National Laboratory; Bronkhorst, Curt [Los Alamos National Laboratory; Beyerlein, Irene [Los Alamos National Laboratory; Cerreta, E. K. [Los Alamos National Laboratory; Dennis-Koller, Darcie [Los Alamos National Laboratory

    2010-12-23

    The goal of this work is to formulate a constitutive model for the deformation of metals over a wide range of strain rates. Damage and failure of materials frequently occurs at a variety of deformation rates within the same sample. The present state of the art in single crystal constitutive models relies on thermally-activated models which are believed to become less reliable for problems exceeding strain rates of 10⁴ s⁻¹. This talk presents work in which we extend the applicability of the single crystal model to the strain rate region where dislocation drag is believed to dominate. The elastic model includes effects from volumetric change and pressure sensitive moduli. The plastic model transitions from the low-rate thermally-activated regime to the high-rate drag dominated regime. The direct use of dislocation density as a state parameter gives a measurable physical mechanism to strain hardening. Dislocation densities are separated according to type and given a systematic set of interaction rates adaptable by type. The form of the constitutive model is motivated by previously published dislocation dynamics work which articulated important behaviors unique to high-rate response in fcc systems. The proposed material model incorporates thermal coupling. The hardening model tracks the varying dislocation population with respect to each slip plane and computes the slip resistance based on those values. Comparisons can be made between the responses of single crystals and polycrystals at a variety of strain rates. The material model is fit to copper.

  19. Expected packing density allows prediction of both amyloidogenic and disordered regions in protein chains

    International Nuclear Information System (INIS)

    Galzitskaya, Oxana V; Garbuzynskiy, Sergiy O; Lobanov, Michail Yu

    2007-01-01

    The determination of factors that influence conformational changes in proteins is very important for the identification of potentially amyloidogenic and disordered regions in polypeptide chains. In our work we introduce a new parameter, mean packing density, to detect both amyloidogenic and disordered regions in a protein sequence. It has been shown that regions with high expected packing density are responsible for amyloid formation. Our predictions are consistent with known disease-related amyloidogenic regions for 9 of 12 amyloid-forming proteins and peptides in which the positions of amyloidogenic regions have been revealed experimentally. Our findings support the concept that the mechanism of formation of amyloid fibrils is similar for different peptides and proteins. Moreover, we have demonstrated that regions with low expected packing density are responsible for the appearance of disordered regions. Our method has been tested on datasets of globular proteins and long disordered protein segments, and it shows improved performance over other widely used methods. Thus, we demonstrate that the expected packing density is a useful value for predicting both disordered and amyloidogenic regions of a protein based on sequence alone. Our results are important for understanding the structural characteristics of protein folding and misfolding.

  20. Predictability of extreme values in geophysical models

    Directory of Open Access Journals (Sweden)

    A. E. Sterk

    2012-09-01

    Full Text Available Extreme value theory in deterministic systems is concerned with unlikely large (or small) values of an observable evaluated along evolutions of the system. In this paper we study the finite-time predictability of extreme values, such as convection, energy, and wind speeds, in three geophysical models. We study whether finite-time Lyapunov exponents are larger or smaller for initial conditions leading to extremes. General statements on whether extreme values are more or less predictable are not possible: the predictability of extreme values depends on the observable, the attractor of the system, and the prediction lead time.
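
    Finite-time Lyapunov exponents of the kind studied here can be estimated by accumulating local derivative growth along an orbit; the one-dimensional logistic-map toy below stands in for the geophysical models, which are only summarized in this record:

```python
# Finite-time Lyapunov exponent along an orbit of the logistic map:
# lambda_T(x0) = (1/T) * sum_t log|f'(x_t)|, with f(x) = r x (1 - x).
import numpy as np

def ftle_logistic(x0, r=4.0, T=50):
    x, total = x0, 0.0
    for _ in range(T):
        total += np.log(abs(r * (1 - 2 * x)))  # log |f'(x)| at the current point
        x = r * x * (1 - x)                    # advance the orbit
    return total / T

print(ftle_logistic(0.3))   # near log(2) ~ 0.693 for r = 4
```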

  1. Model complexity control for hydrologic prediction

    Science.gov (United States)

    Schoups, G.; van de Giesen, N. C.; Savenije, H. H. G.

    2008-12-01

    A common concern in hydrologic modeling is overparameterization of complex models given limited and noisy data. This leads to problems of parameter nonuniqueness and equifinality, which may negatively affect prediction uncertainties. A systematic way of controlling model complexity is therefore needed. We compare three model complexity control methods for hydrologic prediction, namely, cross validation (CV), Akaike's information criterion (AIC), and structural risk minimization (SRM). Results show that simulation of water flow using non-physically-based models (polynomials in this case) leads to increasingly better calibration fits as the model complexity (polynomial order) increases. However, prediction uncertainty worsens for complex non-physically-based models because of overfitting of noisy data. Incorporation of physically based constraints into the model (e.g., storage-discharge relationship) effectively bounds prediction uncertainty, even as the number of parameters increases. The conclusion is that overparameterization and equifinality do not lead to a continued increase in prediction uncertainty, as long as models are constrained by such physical principles. Complexity control of hydrologic models reduces parameter equifinality and identifies the simplest model that adequately explains the data, thereby providing a means of hydrologic generalization and classification. SRM is a promising technique for this purpose, as it (1) provides analytic upper bounds on prediction uncertainty, hence avoiding the computational burden of CV, and (2) extends the applicability of classic methods such as AIC to finite data. The main hurdle in applying SRM is the need for an a priori estimation of the complexity of the hydrologic model, as measured by its Vapnik-Chervonenkis (VC) dimension. Further research is needed in this area.
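
    The AIC route to complexity control can be illustrated with the same kind of non-physically-based polynomial models the study uses; the Gaussian-error form of AIC below and the synthetic data are assumptions for the sketch:

```python
# Selecting polynomial order by AIC: fit increasing orders, penalize parameters.
# AIC = n * log(RSS / n) + 2k for Gaussian errors with k parameters.
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0, 1, 50)
y = 2 * x - 3 * x ** 2 + rng.normal(scale=0.1, size=x.size)  # noisy data

def aic_for_order(order):
    coeffs = np.polyfit(x, y, order)
    rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
    k = order + 1
    return x.size * np.log(rss / x.size) + 2 * k

orders = range(1, 10)
best = min(orders, key=aic_for_order)
print('AIC-selected polynomial order:', best)  # should be near the true order 2
```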

  2. A kinetic approach to modeling the manufacture of high density structural foam: Foaming and polymerization

    Energy Technology Data Exchange (ETDEWEB)

    Rao, Rekha R. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sandia National Laboratories, Livermore, CA (United States); Mondy, Lisa Ann [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sandia National Laboratories, Livermore, CA (United States); Noble, David R. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sandia National Laboratories, Livermore, CA (United States); Brunini, Victor [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sandia National Laboratories, Livermore, CA (United States); Roberts, Christine Cardinal [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sandia National Laboratories, Livermore, CA (United States); Long, Kevin Nicholas [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sandia National Laboratories, Livermore, CA (United States); Soehnel, Melissa Marie [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sandia National Laboratories, Livermore, CA (United States); Celina, Mathias C. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sandia National Laboratories, Livermore, CA (United States); Wyatt, Nicholas B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sandia National Laboratories, Livermore, CA (United States); Thompson, Kyle R. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sandia National Laboratories, Livermore, CA (United States); Tinsley, James

    2015-09-01

    We are studying PMDI polyurethane with a fast catalyst, such that filling and polymerization occur simultaneously. The foam is over-packed to twice or more of its free rise density to reach the density of interest. Our approach is to combine model development closely with experiments to discover new physics, to parameterize models and to validate the models once they have been developed. The model must be able to represent the expansion, filling, curing, and final foam properties. PMDI is a chemically blown foam, where carbon dioxide is produced via the reaction of water and isocyanate. The isocyanate also reacts with polyol in a competing reaction, which produces the polymer. A new kinetic model is developed and implemented, which follows a simplified mathematical formalism that decouples these two reactions. The model predicts the polymerization reaction via condensation chemistry, where vitrification and glass transition temperature evolution must be included to correctly predict this quantity. The foam gas generation kinetics are determined by tracking the molar concentration of both water and carbon dioxide. Understanding the thermal history and loads on the foam due to exothermicity and oven heating is very important to the results, since the kinetics and material properties are all very sensitive to temperature. The conservation equations, including the equations of motion, an energy balance, and three rate equations are solved via a stabilized finite element method. We assume generalized-Newtonian rheology that is dependent on the cure, gas fraction, and temperature. The conservation equations are combined with a level set method to determine the location of the free surface over time. Results from the model are compared to experimental flow visualization data and post-test CT data for the density. Several geometries are investigated including a mock encapsulation part, two configurations of a mock structural part, and a bar geometry to
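
    The decoupled competing reactions described above can be sketched as a pair of second-order rate laws integrated in time; the rate constants and initial concentrations below are placeholders, not the calibrated parameters of the Sandia model:

```python
# Toy competing-reaction kinetics for a chemically blown foam:
# blowing:  NCO + H2O    -> CO2     (rate k_blow * [NCO][H2O])
# gelling:  NCO + polyol -> polymer (rate k_gel  * [NCO][OH])
from scipy.integrate import solve_ivp

def rhs(t, c, k_blow=0.8, k_gel=0.5):
    nco, h2o, oh, co2 = c
    r_blow = k_blow * nco * h2o        # gas-generating reaction
    r_gel = k_gel * nco * oh           # polymer-forming reaction
    return [-r_blow - r_gel, -r_blow, -r_gel, r_blow]

sol = solve_ivp(rhs, (0.0, 10.0), [2.0, 0.5, 1.5, 0.0], dense_output=True)
print('CO2 generated:', sol.y[3, -1])  # drives the foam expansion
```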

  3. Online traffic flow model applying dynamic flow-density relation

    International Nuclear Information System (INIS)

    Kim, Y.

    2002-01-01

    This dissertation describes a new approach to online traffic flow modelling based on the hydrodynamic traffic flow model and an online process to adapt the flow-density relation dynamically. The new modelling approach was tested on real traffic situations in various homogeneous motorway sections and a motorway section with ramps and gave encouraging simulation results. This work is composed of two parts: first the analysis of traffic flow characteristics and second the development of a new online traffic flow model applying these characteristics. For homogeneous motorway sections traffic flow is classified into six different traffic states with different characteristics. Delimitation criteria were developed to separate these states. The hysteresis phenomena were analysed during the transitions between these traffic states. The traffic states and the transitions are represented on a states diagram with the flow axis and the density axis. For motorway sections with ramps the complicated traffic flow is simplified and classified into three traffic states depending on the propagation of congestion. The traffic states are represented on a phase diagram with the upstream demand axis and the interaction strength axis, which was defined in this research. The states diagram and the phase diagram provide a basis for the development of the dynamic flow-density relation. The first-order hydrodynamic traffic flow model was programmed according to the cell-transmission scheme extended by the modification of flow dependent sending/receiving functions, the classification of cells and the determination strategy for the flow-density relation in the cells. The unreasonable results of macroscopic traffic flow models, which may occur in the first and last cells in certain conditions, are alleviated by applying buffer cells between the traffic data and the model. The sending/receiving functions of the cells are determined dynamically based on the classification of the
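
    The cell-transmission update at the heart of the scheme moves vehicles according to the minimum of a sending (demand) and a receiving (supply) function. The minimal sketch below uses a fixed triangular flow-density relation, whereas the dissertation's point is to adapt this relation online; all parameter values are placeholders:

```python
# One cell-transmission-model step with a triangular flow-density relation:
# interface flow = min(sending(upstream cell), receiving(downstream cell)).
import numpy as np

V, W, K_JAM, Q_MAX = 100.0, 20.0, 150.0, 2000.0  # free speed, wave speed, jam density, capacity

def sending(k):   return np.minimum(V * k, Q_MAX)            # demand of a cell
def receiving(k): return np.minimum(W * (K_JAM - k), Q_MAX)  # supply of a cell

def ctm_step(k, dt=10 / 3600, dx=1.0):  # densities (veh/km), dt in h, dx in km
    flow = np.minimum(sending(k[:-1]), receiving(k[1:]))     # interface flows
    k = k.copy()
    k[1:-1] += dt / dx * (flow[:-1] - flow[1:])              # conservation update
    return k

k = np.array([30.0, 30.0, 120.0, 30.0, 30.0])  # a congested cell in the middle
print(ctm_step(k))
```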

  4. Quantifying predictive accuracy in survival models.

    Science.gov (United States)

    Lirette, Seth T; Aban, Inmaculada

    2017-12-01

    For time-to-event outcomes in medical research, survival models are the most appropriate to use. Unlike logistic regression models, quantifying the predictive accuracy of these models is not a trivial task. We present the classes of concordance (C) statistics and R² statistics often used to assess the predictive ability of these models. The discussion focuses on Harrell's C, Kent and O'Quigley's R², and Royston and Sauerbrei's R². We present similarities and differences between the statistics, discuss the software options from the most widely used statistical analysis packages, and give a practical example using the Worcester Heart Attack Study dataset.
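
    Harrell's C can be computed directly from its definition over permissible pairs; the plain sketch below ignores ties in survival time, which fuller implementations handle:

```python
# Harrell's C: among pairs where the ordering of event times is usable (the
# earlier time is an observed event), count pairs the risk score orders correctly.
def harrells_c(time, event, risk):
    concordant = discordant = tied = 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if time[i] < time[j] and event[i]:   # permissible pair (i fails first)
                if risk[i] > risk[j]:   concordant += 1
                elif risk[i] < risk[j]: discordant += 1
                else:                   tied += 1
    return (concordant + 0.5 * tied) / (concordant + discordant + tied)

# Perfectly ordered risks (higher risk -> earlier failure) give C = 1.0:
print(harrells_c([2, 5, 7, 9], [1, 1, 0, 1], [0.9, 0.7, 0.5, 0.1]))
```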

  5. Predictive power of nuclear-mass models

    Directory of Open Access Journals (Sweden)

    Yu. A. Litvinov

    2013-12-01

    Full Text Available Ten different theoretical models are tested for their predictive power in the description of nuclear masses. Two sets of experimental masses are used for the test: the older set of 2003 and the newer one of 2011. The predictive power is studied in two regions of nuclei: the global region (Z, N ≥ 8) and the heavy-nuclei region (Z ≥ 82, N ≥ 126). No clear correlation is found between the predictive power of a model and the accuracy of its description of the masses.

  6. Return Predictability, Model Uncertainty, and Robust Investment

    DEFF Research Database (Denmark)

    Lukas, Manuel

    Stock return predictability is subject to great uncertainty. In this paper we use the model confidence set approach to quantify uncertainty about expected utility from investment, accounting for potential return predictability. For monthly US data and six representative return prediction models, we...... find that confidence sets are very wide, change significantly with the predictor variables, and frequently include expected utilities for which the investor prefers not to invest. The latter motivates a robust investment strategy maximizing the minimal element of the confidence set. The robust investor...... allocates a much lower share of wealth to stocks compared to a standard investor....

  7. Variable-density numerical modeling of seawater intrusion in coastal aquifer with well-developed conduits

    Science.gov (United States)

    Xu, Z.; Hu, B. X.

    2015-12-01

    Karst aquifers are an important drinking water supply for nearly 25% of the world's population. Well-developed subsurface conduit systems can usually be found in a well-developed karst aquifer, forming a dual-permeability system. The hydraulic characteristics of non-laminar flow in conduits can be significantly different from Darcian flow in porous media; therefore, hybrid models and different governing equations are necessary in numerical modeling of karst hydrogeology. Seawater intrusion, observed and studied for several decades, has also become a worldwide problem due to groundwater over-pumping and rising sea level. The density difference between freshwater and seawater is recognized as the major factor governing the movement of the two fluids in coastal aquifers. Several models have been developed to simulate groundwater flow in karst aquifers, but they hardly describe seawater intrusion through the conduits without coupling variable-density flow and solute transport. In this study, a numerical SEAWAT model has been developed to simulate variable-density flow and transport in a heterogeneous karst aquifer. High-density seawater is shown to intrude further inland through the high-permeability conduit network rather than the porous medium. The numerical model also predicts the effect of different scenarios on seawater intrusion in a coastal karst aquifer, such as rising sea level, tidal stages and freshwater discharge. A series of local and global uncertainty analyses has been performed to evaluate the sensitivity of hydraulic conductivity, porosity, groundwater pumping, sea level, salinity and dispersivity. Heterogeneous conduit and porous-medium hydraulic characteristics play an important role in groundwater flow and solute transport simulation. Meanwhile, another hybrid model, the VDFST-CFP model, is currently under development to couple turbulent conduit flow and variable-density groundwater flow in porous media, which provides a new method and better description in

  8. Spatial Economics Model Predicting Transport Volume

    Directory of Open Access Journals (Sweden)

    Lu Bo

    2016-10-01

    Full Text Available It is extremely important to predict logistics requirements in a scientific and rational way. However, in recent years the improvement in prediction methods has not been significant, and traditional statistical prediction methods suffer from low precision and poor interpretability, which can neither guarantee the generalization ability of the prediction model theoretically nor explain the model effectively. Therefore, combining theories from spatial economics, industrial economics, and neo-classical economics, and taking the city of Zhuanghe as the research object, this study identifies the leading industries that generate large cargo volumes and predicts the static logistics generation of Zhuanghe and its hinterland. By integrating the various factors that affect regional logistics requirements, this study establishes a logistics requirements potential model based on spatial economic principles, expanding logistics requirements prediction from purely statistical principles to the new area of spatial and regional economics.

  9. Cyclone-track based seasonal prediction for South Pacific tropical cyclone activity using APCC multi-model ensemble prediction

    Science.gov (United States)

    Kim, Ok-Yeon; Chan, Johnny C. L.

    2018-01-01

    This study aims to predict the seasonal TC track density over the South Pacific by combining the Asia-Pacific Economic Cooperation (APEC) Climate Center (APCC) multi-model ensemble (MME) dynamical prediction system with a statistical model. The hybrid dynamical-statistical model is developed for each of the three clusters that represent major groups of TC best tracks in the South Pacific. The cross-validation result from the MME hybrid model demonstrates moderate but statistically significant skill in predicting TC numbers across all TC clusters, with correlation coefficients of 0.4 to 0.6 between the hindcasts and observations for 1982/1983 to 2008/2009. The prediction skill in the area east of about 170°E is significantly influenced by strong El Niño, whereas the skill in the southwest Pacific region mainly comes from the linear trend of TC number. The prediction skill for TC track density is particularly high in the region of climatologically high TC track density around 160°E-180° and 20°S. Since this area has a mixed response with respect to ENSO, the prediction skill for TC track density is higher in non-ENSO years than in ENSO years. Even though the cross-validation prediction skill is higher in the area east of about 170°E compared to other areas, this region shows less skill for track density in the categorical verification due to the strong influence of intense El Niño years. While the prediction skill of the developed methodology varies across the region, it is important that the model demonstrates skill in the area where TC activity is high. Such a result has an important practical implication: improving the accuracy of seasonal forecasts and providing communities at risk with advance information that could assist with preparedness and disaster risk reduction.

  10. An exospheric temperature model from CHAMP thermospheric density

    Science.gov (United States)

    Weng, Libin; Lei, Jiuhou; Sutton, Eric; Dou, Xiankang; Fang, Hanxian

    2017-02-01

    In this study, the effective exospheric temperature, denoted T∞, derived from thermospheric densities measured by the CHAMP satellite during 2002-2010 was utilized to develop an exospheric temperature model (ETM) with the aid of the NRLMSISE-00 model. In the ETM, the temperature variations are characterized as a function of latitude, local time, season, and solar and geomagnetic activity. The ETM is validated against independent GRACE measurements, and it is found that T∞ and thermospheric densities from the ETM are in better agreement with the GRACE data than those from the NRLMSISE-00 model. In addition, the ETM captures well the thermospheric equatorial anomaly feature, the seasonal variation, and the hemispheric asymmetry in the thermosphere.

  11. Model dependence in the density content of nuclear symmetry energy

    International Nuclear Information System (INIS)

    Mondal, C.; Agrawal, B.K.; Singh, S.K.; Patra, S.K.; Centelles, M.; Viñas, X.; Colò, G.; Roca-Maza, X.; Paar, N.

    2014-01-01

    Apart from very few light nuclei, all nuclear systems in nature, from tiny finite nuclei to huge astrophysical objects like neutron stars, are asymmetric. The densities of these systems vary over a wide range, so accurate knowledge of the symmetry energy over a wide range of densities is essential to understand several phenomena in finite nuclei as well as in neutron stars. We have shown, using a representative set of systematically varied mean-field models, that the correlation of the symmetry energy slope parameter with the neutron skin thickness in the ²⁰⁸Pb nucleus has a noticeable amount of model dependence. Investigations to unveil the source of the model dependence in such correlations are underway

  12. Accuracy assessment of landslide prediction models

    International Nuclear Information System (INIS)

    Othman, A N; Mohd, W M N W; Noraini, S

    2014-01-01

    The increasing population and the expansion of settlements over hilly areas have greatly increased the impact of natural disasters such as landslides. Therefore, it is important to develop models which can accurately predict landslide hazard zones. Over the years, various techniques and models have been developed to predict landslide hazard zones. The aim of this paper is to assess the accuracy of landslide prediction models developed by the authors. The methodology involved the selection of the study area, data acquisition, data processing, model development and data analysis. The development of these models is based on nine different landslide-inducing parameters, i.e., slope, land use, lithology, soil properties, geomorphology, flow accumulation, aspect, proximity to river and proximity to road. Rank sum, rating, pairwise comparison and AHP techniques are used to determine the weights for each of the parameters used. Four (4) different models which consider different parameter combinations are developed by the authors. Results obtained are compared to landslide history; the accuracies for Model 1, Model 2, Model 3 and Model 4 are 66.7%, 66.7%, 60% and 22.9%, respectively. From the results, rank sum, rating and pairwise comparison can be useful techniques to predict landslide hazard zones
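
    Of the weighting techniques listed, AHP is the most algorithmic: weights come from the principal eigenvector of a reciprocal pairwise comparison matrix. A small sketch with three illustrative parameters follows; the judgment values are invented, not the authors':

```python
# AHP weights: the principal eigenvector of a reciprocal pairwise comparison
# matrix, normalized to sum to one (three illustrative parameters).
import numpy as np

A = np.array([[1.0, 3.0, 5.0],   # toy judgments: slope vs land use vs lithology
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = principal / principal.sum()
print('AHP weights:', weights)   # e.g. slope gets the largest weight
```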

  13. Fire spread in chaparral – a comparison of laboratory data and model predictions in burning live fuels

    Science.gov (United States)

    David R. Weise; Eunmo Koo; Xiangyang Zhou; Shankar Mahalingam; Frédéric Morandini; Jacques-Henri Balbi

    2016-01-01

    Fire behaviour data from 240 laboratory fires in high-density live chaparral fuel beds were compared with model predictions. Logistic regression was used to develop a model to predict fire spread success in the fuel beds and linear regression was used to predict rate of spread. Predictions from the Rothermel equation and three proposed changes as well as two physically...

  14. Predictive validation of an influenza spread model.

    Directory of Open Access Journals (Sweden)

    Ayaz Hyder

    Full Text Available BACKGROUND: Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. METHODS AND FINDINGS: We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998-1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks earlier with reasonable reliability and depended on the method of forecasting (static or dynamic). CONCLUSIONS: Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve

  15. Predictive Validation of an Influenza Spread Model

    Science.gov (United States)

    Hyder, Ayaz; Buckeridge, David L.; Leung, Brian

    2013-01-01

    Background Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. Methods and Findings We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998–1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks earlier with reasonable reliability and depended on the method of forecasting (static or dynamic). Conclusions Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve their predictive

  16. Chromospheric extents predicted by time-dependent acoustic wave models

    Science.gov (United States)

    Cuntz, Manfred

    1990-01-01

    Theoretical models for chromospheric structures of late-type giant stars are computed, including the time-dependent propagation of acoustic waves. Models with short-period monochromatic shock waves as well as a spectrum of acoustic waves are discussed, and the method is applied to the stars Arcturus, Aldebaran, and Betelgeuse. Chromospheric extents, defined by the monotonic decrease with height of the time-averaged electron densities, are found to be 1.12, 1.13, and 1.22 stellar radii for the three stars, respectively; this corresponds to a time-averaged electron density of 10⁷ cm⁻³. Predictions of the extended chromospheres obtained using a simple scaling law agree well with those obtained by the time-dependent wave models; thus, the chromospheres of all stars for which the scaling law is valid consist of the same number of pressure scale heights.

  17. Chromospheric extents predicted by time-dependent acoustic wave models

    Energy Technology Data Exchange (ETDEWEB)

    Cuntz, M. (Joint Institute for Laboratory Astrophysics, Boulder, CO (USA) Heidelberg Universitaet (Germany, F.R.))

    1990-01-01

    Theoretical models for chromospheric structures of late-type giant stars are computed, including the time-dependent propagation of acoustic waves. Models with short-period monochromatic shock waves as well as a spectrum of acoustic waves are discussed, and the method is applied to the stars Arcturus, Aldebaran, and Betelgeuse. Chromospheric extents, defined by the monotonic decrease with height of the time-averaged electron densities, are found to be 1.12, 1.13, and 1.22 stellar radii for the three stars, respectively; this corresponds to a time-averaged electron density of 10⁷ cm⁻³. Predictions of the extended chromospheres obtained using a simple scaling law agree well with those obtained by the time-dependent wave models; thus, the chromospheres of all stars for which the scaling law is valid consist of the same number of pressure scale heights. 74 refs.

  18. The Indigo Molecule Revisited Again: Assessment of the Minnesota Family of Density Functionals for the Prediction of Its Maximum Absorption Wavelengths in Various Solvents

    Directory of Open Access Journals (Sweden)

    Francisco Cervantes-Navarro

    2013-01-01

    Full Text Available The Minnesota family of density functionals (M05, M05-2X, M06, M06-L, M06-2X, and M06-HF) was evaluated for the calculation of the UV-Vis spectra of the indigo molecule in solvents of different polarities using time-dependent density functional theory (TD-DFT) and the polarized continuum model (PCM). The maximum absorption wavelengths predicted for each functional were compared with the known experimental results.

  19. Sleep Spindle Density Predicts the Effect of Prior Knowledge on Memory Consolidation.

    Science.gov (United States)

    Hennies, Nora; Lambon Ralph, Matthew A; Kempkes, Marleen; Cousins, James N; Lewis, Penelope A

    2016-03-30

    Information that relates to a prior knowledge schema is remembered better and consolidates more rapidly than information that does not. Another factor that influences memory consolidation is sleep, and growing evidence suggests that sleep-related processing is important for integration with existing knowledge. Here, we examine how sleep-related mechanisms interact with the schema-dependent memory advantage. Participants first established a schema over 2 weeks. Next, they encoded new facts, which were either related to the schema or completely unrelated. After a 24 h retention interval, including a night of sleep, which we monitored with polysomnography, participants encoded a second set of facts. Finally, memory for all facts was tested in a functional magnetic resonance imaging scanner. Behaviorally, sleep spindle density predicted an increase of the schema benefit to memory across the retention interval. Higher spindle densities were associated with reduced decay of schema-related memories. Functionally, spindle density predicted increased disengagement of the hippocampus across 24 h for schema-related memories only. Together, these results suggest that sleep spindle activity is associated with the effect of prior knowledge on memory consolidation. Episodic memories are gradually assimilated into long-term memory and this process is strongly influenced by sleep. The consolidation of new information is also influenced by its relationship to existing knowledge structures, or schemas, but the role of sleep in such schema-related consolidation is unknown. We show that sleep spindle density predicts the extent to which schemas influence the consolidation of related facts. This is the first evidence that sleep is associated with the interaction between prior knowledge and long-term memory formation. Copyright © 2016 Hennies et al.

  20. Spatial conservation prioritization for mobile top predators in French waters: Comparing encounter rates and predicted densities as input

    Science.gov (United States)

    Delavenne, J.; Lepareur, F.; Witté, I.; Touroult, J.; Lambert, C.; Pettex, E.; Virgili, A.; Siblet, J.-P.

    2017-07-01

    EU member states have to develop their Natura 2000 networks in their national waters to fulfill their conservation obligations regarding species and habitats listed in the Birds and Habitats directives. In France, a coastal network of Natura 2000 areas has existed since 2008, but it had to be completed in offshore waters for some marine megafauna species. The SAMM aerial surveys (Aerial Census of Marine Megafauna), which took place in winter 2011 and summer 2011-2012 over a large area comprising the whole metropolitan French Exclusive Economic Zone, produced sighting data for species listed in the Birds and Habitats directives. These surveys yielded different types of species distribution data: encounter rates and densities predicted by kriging and habitat modelling. Using these species distribution data, the aim of the present study was to compare these different types of input in the same conservation prioritization process to complete the existing Natura 2000 network in French waters. We ran prioritization analyses using the encounter rates only (scenario 1), then using the predicted densities provided by kriging and habitat modelling (scenario 2). We then compared the outputs of the two prioritization processes. The prioritization outputs were different but not contradictory, with similar areas appearing as important to reach the conservation targets. Habitat models were thought to provide better pictures of seasonal species distributions and informed scientists about the phenology and ecology of species. However, the use of encounter rates as input data for the prioritization process in the Natura 2000 program is acceptable provided that sufficient survey effort is available.

  1. Posterior predictive checking of multiple imputation models.

    Science.gov (United States)

    Nguyen, Cattram D; Lee, Katherine J; Carlin, John B

    2015-07-01

    Multiple imputation is gaining popularity as a strategy for handling missing data, but there is a scarcity of tools for checking imputation models, a critical step in model fitting. Posterior predictive checking (PPC) has been recommended as an imputation diagnostic. PPC involves simulating "replicated" data from the posterior predictive distribution of the model under scrutiny. Model fit is assessed by examining whether the analysis from the observed data appears typical of results obtained from the replicates produced by the model. A proposed diagnostic measure is the posterior predictive "p-value", an extreme value of which (i.e., a value close to 0 or 1) suggests a misfit between the model and the data. The aim of this study was to evaluate the performance of the posterior predictive p-value as an imputation diagnostic. Using simulation methods, we deliberately misspecified imputation models to determine whether posterior predictive p-values were effective in identifying these problems. When estimating the regression parameter of interest, we found that more extreme p-values were associated with poorer imputation model performance, although the results highlighted that traditional thresholds for classical p-values do not apply in this context. A shortcoming of the PPC method was its reduced ability to detect misspecified models with increasing amounts of missing data. Despite the limitations of posterior predictive p-values, they appear to have a valuable place in the imputer's toolkit. In addition to automated checking using p-values, we recommend imputers perform graphical checks and examine other summaries of the test quantity distribution. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
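
    The recipe behind the posterior predictive p-value can be shown on a deliberately misspecified toy model; the normal-mean setup, the discrepancy choice (sample variance), and all numbers below are illustrative assumptions, not the imputation models of the study:

```python
# Posterior predictive p-value for a toy normal model with known variance:
# p = Pr(T(y_rep) >= T(y_obs)), with T the sample variance as discrepancy.
import numpy as np

rng = np.random.default_rng(5)
y_obs = rng.normal(0, 2.0, size=100)          # observed data (overdispersed)
n = y_obs.size

# Posterior for the mean under a flat prior and (wrongly) assumed unit variance:
post_mean, post_sd = y_obs.mean(), 1.0 / np.sqrt(n)

T_obs = y_obs.var()
count = 0
for _ in range(2000):
    mu = rng.normal(post_mean, post_sd)       # draw parameter from posterior
    y_rep = rng.normal(mu, 1.0, size=n)       # replicate data under the model
    count += y_rep.var() >= T_obs
print('posterior predictive p-value:', count / 2000)  # near 0 -> misfit detected
```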

  2. Insights into plant size-density relationships from models and agricultural crops.

    Science.gov (United States)

    Deng, Jianming; Zuo, Wenyun; Wang, Zhiqiang; Fan, Zhexuan; Ji, Mingfei; Wang, Genxuan; Ran, Jinzhi; Zhao, Changming; Liu, Jianquan; Niklas, Karl J; Hammond, Sean T; Brown, James H

    2012-05-29

    There is general agreement that competition for resources results in a tradeoff between plant mass, M, and density, but the mathematical form of the resulting thinning relationship and the mechanisms that generate it are debated. Here, we evaluate two complementary models, one based on the space-filling properties of canopy geometry and the other on the metabolic basis of resource use. For densely packed stands, both models predict that density scales as M(-3/4), energy use as M(0), and total biomass as M(1/4). Compilation and analysis of data from 183 populations of herbaceous crop species, 473 stands of managed tree plantations, and 13 populations of bamboo gave four major results: (i) At low initial planting densities, crops grew at similar rates, did not come into contact, and attained similar mature sizes; (ii) at higher initial densities, crops grew until neighboring plants came into contact, growth ceased as a result of competition for limited resources, and a tradeoff between density and size resulted in critical density scaling as M(-0.78), total resource use as M(-0.02), and total biomass as M(0.22); (iii) these scaling exponents are very close to the predicted values of M(-3/4), M(0), and M(1/4), respectively, and significantly different from the exponents suggested by some earlier studies; and (iv) our data extend previously documented scaling relationships for trees in natural forests to small herbaceous annual crops. These results provide a quantitative, predictive framework with important implications for the basic and applied plant sciences.

  3. Insights into plant size-density relationships from models and agricultural crops

    Science.gov (United States)

    Deng, Jianming; Zuo, Wenyun; Wang, Zhiqiang; Fan, Zhexuan; Ji, Mingfei; Wang, Genxuan; Ran, Jinzhi; Zhao, Changming; Liu, Jianquan; Niklas, Karl J.; Hammond, Sean T.; Brown, James H.

    2012-01-01

    There is general agreement that competition for resources results in a tradeoff between plant mass, M, and density, but the mathematical form of the resulting thinning relationship and the mechanisms that generate it are debated. Here, we evaluate two complementary models, one based on the space-filling properties of canopy geometry and the other on the metabolic basis of resource use. For densely packed stands, both models predict that density scales as M−3/4, energy use as M0, and total biomass as M1/4. Compilation and analysis of data from 183 populations of herbaceous crop species, 473 stands of managed tree plantations, and 13 populations of bamboo gave four major results: (i) At low initial planting densities, crops grew at similar rates, did not come into contact, and attained similar mature sizes; (ii) at higher initial densities, crops grew until neighboring plants came into contact, growth ceased as a result of competition for limited resources, and a tradeoff between density and size resulted in critical density scaling as M−0.78, total resource use as M−0.02, and total biomass as M0.22; (iii) these scaling exponents are very close to the predicted values of M−3/4, M0, and M1/4, respectively, and significantly different from the exponents suggested by some earlier studies; and (iv) our data extend previously documented scaling relationships for trees in natural forests to small herbaceous annual crops. These results provide a quantitative, predictive framework with important implications for the basic and applied plant sciences. PMID:22586097

  4. Predictive power of theoretical modelling of the nuclear mean field: examples of improving predictive capacities

    Science.gov (United States)

    Dedes, I.; Dudek, J.

    2018-03-01

    We examine the effects of parametric correlations on the predictive capacities of theoretical modelling, with nuclear structure applications in mind. The main purpose of this work is to illustrate the method of establishing the presence and determining the form of parametric correlations within a model, as well as an algorithm of elimination by substitution (see text) of parametric correlations. We examine the effects of the elimination of the parametric correlations on the stabilisation of the model predictions further and further away from the fitting zone. It follows that the choice of the physics case and the selection of the associated model are of secondary importance in this case. Under these circumstances we give priority to the relative simplicity of the underlying mathematical algorithm, provided the model is realistic. Following such criteria, we focus specifically on an important but relatively simple case of doubly magic spherical nuclei. To profit from the algorithmic simplicity we chose to work with the phenomenological spherically symmetric Woods–Saxon mean field. We employ two variants of the underlying Hamiltonian, the traditional one involving both the central and the spin-orbit potential in the Woods–Saxon form and the more advanced version with the self-consistent density-dependent spin–orbit interaction. We compare the effects of eliminating various types of correlations and discuss the improvement of the quality of predictions (‘predictive power’) under realistic parameter adjustment conditions.

  5. Evaluating the Ecological Integrity of Structural Stand Density Management Models Developed for Boreal Conifers

    Directory of Open Access Journals (Sweden)

    Peter F. Newton

    2015-04-01

    Full Text Available Density management decision-support systems (e.g., modular-based structural stand density management models (SSDMMs)), which are built upon the modeling platform used to develop stand density management diagrams, incorporate a number of functional relationships derived from forest production theory and quantitative ecology. Empirically, however, the ecological integrity of these systems has not been verified and hence the degree of their compliance with expected ecological axioms is unknown. Consequently, the objective of this study was to evaluate the ecological integrity of six SSDMMs developed for black spruce (Picea mariana) and jack pine (Pinus banksiana) stand-types (natural-origin and planted upland black spruce and jack pine stands, upland natural-origin black spruce and jack pine mixtures, and natural-origin lowland black spruce stands). The assessment included the determination of the biological reasonableness of model predictions by determining the degree of consistency between predicted developmental patterns and those expected from known ecological axioms derived from even-aged stand dynamics theoretical constructs, employing Bakuzis graphical matrices. Although the results indicated the SSDMMs performed well, a notable departure from expectation was a possible systematic site quality effect on the asymptotic yield-density relationships. Combining these results with confirmatory evidence derived from the literature suggests that the site-invariant self-thinning axiom may be untenable for certain stand-types.

  6. Model dependence of isospin sensitive observables at high densities

    International Nuclear Information System (INIS)

    Guo, Wen-Mei; Yong, Gao-Chan; Wang, Yongjia; Li, Qingfeng; Zhang, Hongfei; Zuo, Wei

    2013-01-01

    Within two different frameworks of isospin-dependent transport model, i.e., the Boltzmann–Uehling–Uhlenbeck (IBUU04) and Ultrarelativistic Quantum Molecular Dynamics (UrQMD) transport models, sensitive probes of the nuclear symmetry energy are simulated and compared. It is shown that the neutron to proton ratio of free nucleons, the π⁻/π⁺ ratio, as well as the isospin-sensitive transverse and elliptic flows given by the two transport models with their “best settings” all show obvious differences. The discrepancy in the isospin-sensitive n/p ratio of free nucleons from the two models mainly originates from the different symmetry potentials used, and the discrepancies in the charged π⁻/π⁺ ratio and the isospin-sensitive flows mainly originate from the different isospin-dependent nucleon–nucleon cross sections. These findings call for more detailed studies of the model inputs (i.e., the density- and momentum-dependent symmetry potential and the isospin-dependent in-medium nucleon–nucleon cross section) of the isospin-dependent transport models used. Studies of the model dependence of isospin-sensitive observables can help nuclear physicists to pin down the density dependence of the nuclear symmetry energy through systematic comparison between experiments and theoretical simulations

  7. Predicting Protein Secondary Structure with Markov Models

    DEFF Research Database (Denmark)

    Fischer, Paul; Larsen, Simon; Thomsen, Claus

    2004-01-01

    The task we consider here is to predict the secondary structure of a protein from its primary one. To this end we train a Markov model on training data and then use it to classify parts of unknown protein sequences as sheets, helices or coils. We show how to exploit the directional information contained in the Markov model for this task. Classifications that are purely based on statistical models might not always be biologically meaningful. We present combinatorial methods to incorporate biological background knowledge and enhance the prediction performance.

  8. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

    In order to reach robust and simplified yet accurate prediction models, energy based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA), as well as more elaborate principles such as wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy based prediction models are discussed and critically reviewed. Special attention is placed on underlying basic assumptions, such as diffuse fields, high modal overlap, and a dominant resonant field, and on the consequences of these in terms of limitations in the theory and in the practical use of the models.

  9. Comparative Study of Bankruptcy Prediction Models

    Directory of Open Access Journals (Sweden)

    Isye Arieshanti

    2013-09-01

    Full Text Available Early indication of bankruptcy is important for a company. If a company is aware of its potential bankruptcy, it can take preventive action to anticipate it. In order to detect the potential for bankruptcy, a company can utilize a bankruptcy prediction model. Such a model can be built using machine learning methods. However, the choice of machine learning method should be made carefully, because the suitability of a model depends on the specific problem. Therefore, in this paper we perform a comparative study of several machine learning methods for bankruptcy prediction. Comparing the performance of several models based on machine learning methods (k-NN, fuzzy k-NN, SVM, Bagging Nearest Neighbour SVM, Multilayer Perceptron (MLP), and a hybrid of MLP + Multiple Linear Regression), the study shows that the fuzzy k-NN method achieves the best performance, with an accuracy of 77.5%.

  10. A New Simplified Local Density Model for Adsorption of Pure Gases and Binary Mixtures

    Science.gov (United States)

    Hasanzadeh, M.; Dehghani, M. R.; Feyzi, F.; Behzadi, B.

    2010-12-01

    Adsorption modeling is an important tool for process simulation and design. Many theoretical models have been developed to describe adsorption data for pure and multicomponent gases. The simplified local density (SLD) approach is a thermodynamic model that can be used with any equation of state and offers some predictive capability with adjustable parameters for modeling of slit-shaped pores. In previous studies, the SLD model has been utilized with the Lennard-Jones potential function for modeling of fluid-solid interactions. In this article, we have focused on application of the Sutherland potential function in an SLD-Peng-Robinson model. The advantages and disadvantages of using the new potential function for adsorption of methane, ethane, carbon dioxide, nitrogen, and three binary mixtures on two types of activated carbon are illustrated. The results have been compared with previous models. It is shown that the new SLD model can correlate adsorption data for different pressures and temperatures with minimum error.
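
    For orientation, the Sutherland form that replaces the Lennard-Jones function is simply a hard core with a power-law attractive tail; a minimal sketch with illustrative parameter values (in the actual SLD framework this pair potential is integrated over the slit walls):

```python
import numpy as np

def sutherland(r, sigma=0.35, eps_k=120.0):
    """Sutherland pair potential U(r)/k_B in kelvin: a hard core of
    diameter sigma (nm) plus an attractive -eps*(sigma/r)**6 tail.
    Parameter values are illustrative, not fitted to any adsorbent."""
    r = np.asarray(r, dtype=float)
    tail = -eps_k * (sigma / np.clip(r, sigma, None)) ** 6
    return np.where(r < sigma, np.inf, tail)

print(sutherland([0.30, 0.35, 0.50, 1.00]))
```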

  11. Three-dimensional model for multi-component reactive transport with variable density groundwater flow

    Science.gov (United States)

    Mao, X.; Prommer, H.; Barry, D.A.; Langevin, C.D.; Panteleit, B.; Li, L.

    2006-01-01

    PHWAT is a new model that couples a geochemical reaction model (PHREEQC-2) with a density-dependent groundwater flow and solute transport model (SEAWAT) using the split-operator approach. PHWAT was developed to simulate multi-component reactive transport in variable density groundwater flow. Fluid density in PHWAT depends not only on the concentration of a single species, as in SEAWAT, but also on the concentrations of other dissolved chemicals that can be subject to reactive processes. Simulation results of PHWAT and PHREEQC-2 were compared in their predictions of effluent concentration from a column experiment. Both models produced identical results, showing that PHWAT has correctly coupled the sub-packages. PHWAT was then applied to the simulation of a tank experiment in which seawater intrusion was accompanied by cation exchange. The density dependence of the intrusion and the snow-plough effect in the breakthrough curves were reflected in the model simulations, which were in good agreement with the measured breakthrough data. Comparison simulations that, in turn, excluded density effects and reactions allowed us to quantify the marked effect of ignoring these processes. Next, we explored numerical issues involved in the practical application of PHWAT using the example of a dense plume flowing into a tank containing fresh water. It was shown that PHWAT could model physically unstable flow and that numerical instabilities were suppressed. Physical instability developed in the model in accordance with the increase of the modified Rayleigh number for density-dependent flow, in agreement with previous research. © 2004 Elsevier Ltd. All rights reserved.

  12. Prediction Models for Dynamic Demand Response

    Energy Technology Data Exchange (ETDEWEB)

    Aman, Saima; Frincu, Marc; Chelmis, Charalampos; Noor, Muhammad; Simmhan, Yogesh; Prasanna, Viktor K.

    2015-11-02

    As Smart Grids move closer to dynamic curtailment programs, Demand Response (DR) events will become necessary not only on fixed time intervals and weekdays predetermined by static policies, but also during changing decision periods and weekends to react to real-time demand signals. Unique challenges arise in this context vis-a-vis demand prediction and curtailment estimation, and the transformation of such tasks into an automated, efficient dynamic demand response (D2R) process. While existing work has concentrated on increasing the accuracy of prediction models for DR, there is a lack of studies of prediction models for D2R, which we address in this paper. Our first contribution is the formal definition of D2R and the description of its challenges and requirements. Our second contribution is a feasibility analysis of very-short-term prediction of electricity consumption for D2R over a diverse, large-scale dataset that includes both small residential customers and large buildings. Our third, and major, contribution is a set of insights into the predictability of electricity consumption in the context of D2R. Specifically, we focus on prediction models that can operate at a very small data granularity (here 15-min intervals), for both weekdays and weekends - all conditions that characterize scenarios for D2R. We find that short-term time series and simple averaging models used by Independent Service Operators and utilities achieve superior prediction accuracy. We also observe that workdays are more predictable than weekends and holidays. Also, smaller customers have larger variation in consumption and are less predictable than larger buildings. Key implications of our findings are that better models are required for small customers and for non-workdays, both of which are critical for D2R. Also, prediction models require just a few days’ worth of data, indicating that small amounts of historical data suffice for accurate very-short-term prediction.
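
    For intuition, the kind of simple averaging baseline the authors find competitive can be stated in a few lines: forecast each 15-min slot as the mean of the same time-of-day slot over recent days. A minimal sketch follows; the window sizes and synthetic data are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def averaging_forecast(history, horizon=4, days=3, slots_per_day=96):
    """Forecast the next `horizon` 15-min slots as the mean of the same
    time-of-day slots over the previous `days` days."""
    history = np.asarray(history, dtype=float)
    day_matrix = history[-days * slots_per_day:].reshape(days, slots_per_day)
    profile = day_matrix.mean(axis=0)          # average daily profile
    next_slot = len(history) % slots_per_day   # where "now" falls in the day
    idx = (next_slot + np.arange(horizon)) % slots_per_day
    return profile[idx]

# usage: three days of synthetic 15-min demand, predict the next hour
rng = np.random.default_rng(0)
demand = 50 + 10 * np.sin(np.linspace(0, 6 * np.pi, 3 * 96)) + rng.normal(0, 1, 3 * 96)
print(averaging_forecast(demand, horizon=4))
```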

  13. Serum total and non-high-density lipoprotein cholesterol and the risk prediction of cardiovascular events - the JALS-ECC -.

    Science.gov (United States)

    Tanabe, Naohito; Iso, Hiroyasu; Okada, Katsutoshi; Nakamura, Yasuyuki; Harada, Akiko; Ohashi, Yasuo; Ando, Takashi; Ueshima, Hirotsugu

    2010-07-01

    Few Japanese studies have compared serum non-high-density lipoprotein (non-HDL) cholesterol with serum total cholesterol as factors for predicting risk of cardiovascular events. Currently, few tools accurately estimate the probability of developing cardiovascular events for the Japanese general population. A total of 22,430 Japanese men and women (aged 40-89 years) without a history of cardiovascular events from 10 community-based cohorts were followed. In an average 7.6-year follow up, 104 individuals experienced acute myocardial infarction (AMI) and 339 experienced stroke. Compared to serum total cholesterol, serum non-HDL cholesterol was more strongly associated with risk of AMI in a dose-response manner (multivariable adjusted incidence rate ratio per 1 SD increment [95% confidence interval] =1.49 [1.24-1.79] and 1.62 [1.35-1.95], respectively). Scoring systems were constructed based on multivariable Poisson regression models for predicting a 5-year probability of developing AMI; the non-HDL cholesterol model was found to have a better predictive ability (area under the receiver operating curve [AUC] =0.825) than the total cholesterol model (AUC =0.815). Neither total nor non-HDL serum cholesterol levels were associated with any stroke subtype. The risk of AMI can be more reliably predicted by serum non-HDL cholesterol than serum total cholesterol. The scoring systems are useful tools to predict risk of AMI. Neither total nor non-HDL serum cholesterol can predict stroke risk in the Japanese general population.

  14. Modelling high density phenomena in hydrogen fibre Z-pinches

    International Nuclear Information System (INIS)

    Chittenden, J.P.

    1990-09-01

    The application of hydrogen fibre Z-pinches to the study of the radiative collapse phenomenon is studied computationally. Two areas of difficulty, the formation of a fully ionized pinch from a cryogenic fibre and the processes leading to collapse termination, are addressed in detail. A zero-D model based on the energy equation highlights the importance of particle end losses and changes in the Coulomb logarithm upon collapse initiation and termination. A 1-D Lagrangian resistive MHD code shows the importance of the changing radial profile shapes, particularly in delaying collapse termination. A 1-D, three fluid MHD code is developed to model the ionization of the fibre by thermal conduction from a high temperature surface corona to the cold core. Rate equations for collisional ionization, 3-body recombination and equilibration are solved in tandem with fluid equations for the electrons, ions and neutrals. Continuum lowering is found to assist ionization at the corona-core interface. The high density plasma phenomena responsible for radiative collapse termination are identified as the self-trapping of radiation and free electron degeneracy. A radiation transport model and computational analogues for the effects of degeneracy upon the equation of state, transport coefficients and opacity are implemented in the 1-D, single fluid model. As opacity increases the emergent spectrum is observed to become increasingly Planckian and a fall off in radiative cooling at small radii and low frequencies occurs giving rise to collapse termination. Electron degeneracy terminates radiative collapse by supplementing the radial pressure gradient until the electromagnetic pinch force is balanced. Collapse termination is found to be a hybrid process of opacity and degeneracy effects across a wide range of line densities with opacity dominant at large line densities but with electron degeneracy becoming increasingly important at lower line densities. (author)

  15. Droplet and bubble nucleation modeled by density gradient theory – cubic equation of state versus saft model

    Directory of Open Access Journals (Sweden)

    Hrubý Jan

    2012-04-01

    Full Text Available The study presents some preliminary results of the density gradient theory (GT) combined with two different equations of state (EoS): the classical cubic equation by van der Waals and a recent approach based on the statistical associating fluid theory (SAFT), namely its perturbed-chain (PC) modification. The results showed that, for a given surface tension, the cubic EoS predicted the density profile with a noticeable defect. Bulk densities predicted by the cubic EoS differed by as much as 100% from the reference data. On the other hand, the PC-SAFT EoS provided accurate results for the density profile and both bulk densities over a large range of temperatures. It has been shown that PC-SAFT is a promising tool for accurate modeling of nucleation using the GT. Besides the basic case of a planar phase interface, the spherical interface was analyzed to model a critical cluster occurring in the nucleation of either droplets (condensation) or bubbles (boiling, cavitation). However, the general solution for the spherical interface will require more attention due to its numerical difficulty.

  16. Are animal models predictive for humans?

    Directory of Open Access Journals (Sweden)

    Greek Ray

    2009-01-01

    Full Text Available It is one of the central aims of the philosophy of science to elucidate the meanings of scientific terms and also to think critically about their application. The focus of this essay is the scientific term predict and whether there is credible evidence that animal models, especially in toxicology and pathophysiology, can be used to predict human outcomes. Whether animals can be used to predict human response to drugs and other chemicals is apparently a contentious issue. However, when one empirically analyzes animal models using scientific tools, they fall far short of being able to predict human responses. This is not surprising considering what we have learned from fields such as evolutionary and developmental biology, gene regulation and expression, epigenetics, complexity theory, and comparative genomics.

  17. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico

    2009-01-01

    The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.

  18. Model predictive controller design of hydrocracker reactors

    OpenAIRE

    GÖKÇE, Dila

    2014-01-01

    This study summarizes the design of a Model Predictive Controller (MPC) for the hydrocracker unit reactors at the Tüpraş İzmit Refinery. The hydrocracking process, in which heavy vacuum gas oil is converted into lighter, more valuable products at high temperature and pressure, is described briefly. The controller design, identification and modeling studies are examined, and the model variables are presented. WABT (Weighted Average Bed Temperature) equalization and conversion increase are simulated…

  19. Prediction of lumbar spine bone mineral density from the mandibular cortical width in postmenopausal women

    Directory of Open Access Journals (Sweden)

    Ehsan Hekmatin

    2013-01-01

    Full Text Available Background: Osteoporosis is one of the most common bone diseases, characterized by a generalized reduction of bone mass. Osteoporotic fractures are associated with morbidity but can be a predictable condition if an early diagnosis is made. The diagnosis is based on the World Health Organization's (WHO) T-score criteria. Panoramic images have also been used to predict low bone mineral density. The aim of the study was to evaluate the prediction of lumbar spine bone mineral density (BMD) from the mandibular cortical width in postmenopausal women. Materials and Methods: On panoramic radiographic images, the mandibular cortical width (MCW) was measured by drawing a line parallel to the long axis of the mandible, a line tangential to the inferior border of the mandible, and a constructed line perpendicular to the tangent intersecting the inferior border of the mental foramen. The correlation of the recorded MCW with BMD and T-score was analyzed using SPSS software with linear regression and bivariate correlation tests. Results: Bivariate correlation showed a significant correlation between BMD and MCW (r = 0.945, P = 0.000). There was also a significant correlation between T-score and MCW (r = 0.835, P = 0.000). Linear regression analyses yielded two equations for predicting T-score and BMD from MCW, with a confidence interval of 95%: T-score = −7.087 + 1.497 × MCW; BMD = 0.334 + 0.163 × MCW. Conclusion: The MCW is a good index to help dentists predict osteoporosis from panoramic radiographs, with a significant role in patient screening and early diagnosis of osteoporosis.
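
    The two regression equations reported above can be applied directly; a minimal sketch (the example MCW value is hypothetical, and the units follow the study's measurements):

```python
def predict_from_mcw(mcw_mm):
    """Apply the regression equations reported in the abstract to a
    mandibular cortical width (MCW, mm) from a panoramic radiograph."""
    t_score = -7.087 + 1.497 * mcw_mm
    bmd = 0.334 + 0.163 * mcw_mm   # lumbar spine BMD
    return bmd, t_score

bmd, t = predict_from_mcw(3.5)     # hypothetical MCW of 3.5 mm
print(f"predicted BMD = {bmd:.2f}, T-score = {t:.2f}")
```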

  20. Thermodynamics predicts density-dependent energy use in organisms and ecological communities.

    Science.gov (United States)

    Yen, Jian D L; Paganin, David M; Thomson, James R; Mac Nally, Ralph

    2015-04-01

    Linking our knowledge of organisms to our knowledge of ecological communities and ecosystems is a key challenge for ecology. Individual size distributions (ISDs) link the size of individual organisms to the structure of ecological communities, so that studying ISDs might provide insight into how organism functioning affects ecosystems. Similarly shaped ISDs among ecosystems, coupled with allometric links between organism size and resource use, suggest the possibility of emergent resource-use patterns in ecological communities. We drew on thermodynamics to develop a maximization principle that predicted both organism and community energy use. These predictions highlighted the importance of density-dependent metabolic rates and were able to explain nonlinear relationships between community energy use and community biomass. We analyzed data on fish community energy use and biomass and found evidence of nonlinear scaling, which was predicted by the thermodynamic principle developed here and is not explained by other theories of ISDs. Detailed measurements of organism energy use will clarify the role of density dependence in driving metabolic rates and will further test our derived thermodynamic principle. Importantly, our study highlights the potential for fundamental links between ecology and thermodynamics.

  1. Prediction of melanoma metastasis by the Shields index based on lymphatic vessel density

    Directory of Open Access Journals (Sweden)

    Metcalfe Chris

    2010-05-01

    Full Text Available Background: Melanoma usually presents as an initial skin lesion without evidence of metastasis. A significant proportion of patients develop subsequent local, regional or distant metastasis, sometimes many years after the initial lesion was removed. The current most effective staging method to identify early regional metastasis is sentinel lymph node biopsy (SLNB), which is invasive, not without morbidity and, while improving staging, may not improve overall survival. The combination of lymphatic density, Breslow's thickness and the presence or absence of lymphatic invasion has been proposed by Shields et al. as a prognostic index of metastasis in a patient group. Methods: Here we undertook a retrospective analysis of 102 malignant melanomas from patients with more than five years' follow-up to evaluate the Shields index and compare it with existing indicators. Results: The Shields index accurately predicted outcome in 90% of patients with metastases and 84% without metastases. For these, the Shields index was more predictive than thickness or lymphatic density. An alternate lymphatic measurement (hot-spot analysis) was also effective when combined into the Shields index in a cohort of 24 patients. Conclusions: These results show that the Shields index, a non-invasive analysis based on immunohistochemistry of lymphatics surrounding primary lesions, can accurately predict outcome and is a simple, useful prognostic tool in malignant melanoma.

  2. Multi-Model Ensemble Wake Vortex Prediction

    Science.gov (United States)

    Koerner, Stephan; Holzaepfel, Frank; Ahmad, Nash'at N.

    2015-01-01

    Several multi-model ensemble methods are investigated for predicting wake vortex transport and decay. This study is a joint effort between National Aeronautics and Space Administration and Deutsches Zentrum fuer Luft- und Raumfahrt to develop a multi-model ensemble capability using their wake models. An overview of different multi-model ensemble methods and their feasibility for wake applications is presented. The methods include Reliability Ensemble Averaging, Bayesian Model Averaging, and Monte Carlo Simulations. The methodologies are evaluated using data from wake vortex field experiments.

  3. Uncertainty and Sensitivity of Alternative Rn-222 Flux Density Models Used in Performance Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Greg J. Shott, Vefa Yucel, Lloyd Desotell

    2007-06-01

    Performance assessments for the Area 5 Radioactive Waste Management Site on the Nevada Test Site have used three different mathematical models to estimate Rn-222 flux density. This study describes the performance, uncertainty, and sensitivity of the three models, which include the U.S. Nuclear Regulatory Commission Regulatory Guide 3.64 analytical method and two numerical methods. The uncertainty of each model was determined by Monte Carlo simulation using Latin hypercube sampling. The global sensitivity was investigated using the Morris one-at-a-time screening method, sample-based correlation and regression methods, the variance-based extended Fourier amplitude sensitivity test, and Sobol's sensitivity indices. The models were found to produce similar estimates of the mean and median flux density, but to have different uncertainties and sensitivities. When the Rn-222 effective diffusion coefficient was estimated using five different published predictive models, the radon flux density models were found to be most sensitive to the effective diffusion coefficient model selected, the emanation coefficient, and the radionuclide inventory. Using a site-specific measured effective diffusion coefficient significantly reduced the output uncertainty. When a site-specific effective-diffusion coefficient was used, the models were most sensitive to the emanation coefficient and the radionuclide inventory.

  4. Uncertainty and Sensitivity of Alternative Rn-222 Flux Density Models Used in Performance Assessment

    International Nuclear Information System (INIS)

    Greg J. Shott, Vefa Yucel, Lloyd Desotell; non-NSTec authors: G. Pyles and Jon Carilli

    2007-01-01

    Performance assessments for the Area 5 Radioactive Waste Management Site on the Nevada Test Site have used three different mathematical models to estimate Rn-222 flux density. This study describes the performance, uncertainty, and sensitivity of the three models, which include the U.S. Nuclear Regulatory Commission Regulatory Guide 3.64 analytical method and two numerical methods. The uncertainty of each model was determined by Monte Carlo simulation using Latin hypercube sampling. The global sensitivity was investigated using the Morris one-at-a-time screening method, sample-based correlation and regression methods, the variance-based extended Fourier amplitude sensitivity test, and Sobol's sensitivity indices. The models were found to produce similar estimates of the mean and median flux density, but to have different uncertainties and sensitivities. When the Rn-222 effective diffusion coefficient was estimated using five different published predictive models, the radon flux density models were found to be most sensitive to the effective diffusion coefficient model selected, the emanation coefficient, and the radionuclide inventory. Using a site-specific measured effective diffusion coefficient significantly reduced the output uncertainty. When a site-specific effective-diffusion coefficient was used, the models were most sensitive to the emanation coefficient and the radionuclide inventory.
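
    The uncertainty-propagation step — a Latin hypercube sample of the sensitive inputs pushed through a flux-density model — looks roughly like the sketch below. The one-dimensional diffusion expression and the parameter ranges are illustrative stand-ins, not the Regulatory Guide 3.64 model or the site-specific values.

```python
import numpy as np
from scipy.stats import qmc

LAMBDA = 2.1e-6  # Rn-222 decay constant (1/s)

def flux_density(ra_activity, emanation, diff_coeff, depth=1.0):
    """Illustrative 1-D diffusion estimate of surface Rn-222 flux
    (Bq m^-2 s^-1); a stand-in for the assessment models."""
    L = np.sqrt(diff_coeff / LAMBDA)              # diffusion length (m)
    return ra_activity * emanation * LAMBDA * L * np.tanh(depth / L)

# Latin hypercube sample of the three most sensitive inputs (ranges assumed)
sampler = qmc.LatinHypercube(d=3, seed=1)
lo = np.array([1e3, 0.1, 1e-7])   # Ra-226 (Bq/m^3), emanation, D_e (m^2/s)
hi = np.array([1e4, 0.4, 5e-6])
x = qmc.scale(sampler.random(n=10_000), lo, hi)

flux = flux_density(x[:, 0], x[:, 1], x[:, 2])
print(f"mean = {flux.mean():.2e}, median = {np.median(flux):.2e} Bq m^-2 s^-1")
```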

  5. Thermodynamic modeling of activity coefficient and prediction of solubility: Part 1. Predictive models.

    Science.gov (United States)

    Mirmehrabi, Mahmoud; Rohani, Sohrab; Perry, Luisa

    2006-04-01

    A new activity coefficient model was developed from the excess Gibbs free energy in the form G^ex = c·A^a·x_1^b·…·x_n^b. The constants of the proposed model were considered to be functions of the solute and solvent dielectric constants, Hildebrand solubility parameters and specific volumes of the solute and solvent molecules. The proposed model obeys the Gibbs-Duhem condition for activity coefficient models. To generalize the model and make it a purely predictive model without any adjustable parameters, its constants were found using the experimental activity coefficients and physical properties of 20 vapor-liquid systems. The predictive capability of the proposed model was tested by calculating the activity coefficients of 41 binary vapor-liquid equilibrium systems; it showed good agreement with the experimental data in comparison with two other predictive models, the UNIFAC and Hildebrand models. The only data used for the prediction of activity coefficients were dielectric constants, Hildebrand solubility parameters, and specific volumes of the solute and solvent molecules. Furthermore, the proposed model was used to predict the activity coefficient of an organic compound, stearic acid, whose physical properties were available, in methanol and 2-butanone. The predicted activity coefficient along with the thermal properties of stearic acid were used to calculate the solubility of stearic acid in these two solvents, and resulted in better agreement with the experimental data compared to the UNIFAC and Hildebrand predictive models.

  6. Prediction of Carbohydrate Binding Sites on Protein Surfaces with 3-Dimensional Probability Density Distributions of Interacting Atoms

    Science.gov (United States)

    Tsai, Keng-Chang; Jian, Jhih-Wei; Yang, Ei-Wen; Hsu, Po-Chiang; Peng, Hung-Pin; Chen, Ching-Tai; Chen, Jun-Bo; Chang, Jeng-Yih; Hsu, Wen-Lian; Yang, An-Suei

    2012-01-01

    Non-covalent protein-carbohydrate interactions mediate molecular targeting in many biological processes. Prediction of non-covalent carbohydrate binding sites on protein surfaces not only provides insights into the functions of the query proteins; information on key carbohydrate-binding residues could suggest site-directed mutagenesis experiments, design therapeutics targeting carbohydrate-binding proteins, and provide guidance in engineering protein-carbohydrate interactions. In this work, we show that non-covalent carbohydrate binding sites on protein surfaces can be predicted with relatively high accuracy when the query protein structures are known. The prediction capabilities were based on a novel encoding scheme of the three-dimensional probability density maps describing the distributions of 36 non-covalent interacting atom types around protein surfaces. One machine learning model was trained for each of the 30 protein atom types. The machine learning algorithms predicted tentative carbohydrate binding sites on query proteins by recognizing the characteristic interacting atom distribution patterns specific for carbohydrate binding sites from known protein structures. The prediction results for all protein atom types were integrated into surface patches as tentative carbohydrate binding sites based on normalized prediction confidence level. The prediction capabilities of the predictors were benchmarked by a 10-fold cross validation on 497 non-redundant proteins with known carbohydrate binding sites. The predictors were further tested on an independent test set with 108 proteins. The residue-based Matthews correlation coefficient (MCC) for the independent test was 0.45, with prediction precision and sensitivity (or recall) of 0.45 and 0.49 respectively. In addition, 111 unbound carbohydrate-binding protein structures for which the structures were determined in the absence of the carbohydrate ligands were predicted with the trained predictors. The overall

  7. PREDICTIVE CAPACITY OF ARCH FAMILY MODELS

    Directory of Open Access Journals (Sweden)

    Raphael Silveira Amaro

    2016-03-01

    Full Text Available In recent decades, a remarkable number of models, variants of the Autoregressive Conditional Heteroscedastic family, have been developed and empirically tested, making the process of choosing a particular model extremely complex. This research aims to compare the predictive capacity, using the Model Confidence Set procedure, of five conditional heteroskedasticity models, considering eight different statistical probability distributions. The financial series used are the log-return series of the Bovespa index and the Dow Jones Industrial Index in the period between 27 October 2008 and 30 December 2014. The empirical evidence shows that, in general, the competing models have great homogeneity in making predictions, whether for the stock market of a developed country or for that of a developing country. An equivalent result can be inferred for the statistical probability distributions that were used.
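
    As a concrete instance of the family being compared, a GARCH(1,1) with Student-t innovations can be fitted and queried for the one-step-ahead variance forecasts that a Model Confidence Set procedure would score. A minimal sketch using the third-party arch package on synthetic returns (not the Bovespa or Dow Jones data):

```python
# pip install arch
import numpy as np
from arch import arch_model

rng = np.random.default_rng(42)
returns = 0.8 * rng.standard_t(df=5, size=1500)   # synthetic log-returns (%)

# one family member: constant mean, GARCH(1,1) volatility, Student-t errors
am = arch_model(returns, mean="Constant", vol="GARCH", p=1, q=1, dist="t")
res = am.fit(disp="off")

# one-step-ahead conditional variance, the quantity a loss function would
# score inside a Model Confidence Set comparison
print(res.forecast(horizon=1).variance.iloc[-1])
```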

  8. Modeling the impact of the indigenous microbial population on the maximum population density of Salmonella on alfalfa

    NARCIS (Netherlands)

    Rijgersberg, H.; Nierop Groot, M.N.; Tromp, S.O.; Franz, E.

    2013-01-01

    Within a microbial risk assessment framework, modeling the maximum population density (MPD) of a pathogenic microorganism is important but often not considered. This paper describes a model predicting the MPD of Salmonella on alfalfa as a function of the initial contamination level, the total count

  9. A revised prediction model for natural conception.

    Science.gov (United States)

    Bensdorp, Alexandra J; van der Steeg, Jan Willem; Steures, Pieternel; Habbema, J Dik F; Hompes, Peter G A; Bossuyt, Patrick M M; van der Veen, Fulco; Mol, Ben W J; Eijkemans, Marinus J C

    2017-06-01

    One of the aims in reproductive medicine is to differentiate between couples that have favourable chances of conceiving naturally and those that do not. Since the development of the prediction model of Hunault, the characteristics of the subfertile population have changed. The objective of this analysis was to assess whether additional predictors can refine the Hunault model and extend its applicability. Consecutive subfertile couples with unexplained and mild male subfertility presenting in fertility clinics were asked to participate in a prospective cohort study. We constructed a multivariable prediction model with the predictors from the Hunault model and new potential predictors. The primary outcome, natural conception leading to an ongoing pregnancy, was observed in 1053 women of the 5184 included couples (20%). All predictors of the Hunault model were selected into the revised model, plus an additional seven: woman's body mass index, cycle length, basal FSH level, tubal status, history of previous pregnancies in the current relationship (ongoing pregnancies after natural conception, fertility treatment or miscarriages), semen volume, and semen morphology. Predictions from the revised model seem to concur better with observed pregnancy rates than the Hunault model: a c-statistic of 0.71 (95% CI 0.69 to 0.73) compared with 0.59 (95% CI 0.57 to 0.61). Copyright © 2017. Published by Elsevier Ltd.

  10. Prediction of Excitation Energies for Conjugated Oligomers and Polymers from Time-Dependent Density Functional Theory

    Science.gov (United States)

    Tao, Jianmin; Tretiak, Sergei; Zhu, Jian-Xin

    2010-01-01

    With technological advances, light-emitting conjugated oligomers and polymers have become competitive candidates in the commercial market of light-emitting diodes for display and other technologies, due to the ultralow cost, light weight, and flexibility. Prediction of excitation energies of these systems plays a crucial role in the understanding of their optical properties and device design. In this review article, we discuss the calculation of excitation energies with time-dependent density functional theory, which is one of the most successful methods in the investigation of the dynamical response of molecular systems to external perturbation, owing to its high computational efficiency.

  11. Modeling a nucleon system: static and dynamical properties - density fluctuations

    International Nuclear Information System (INIS)

    Idier, D.

    1997-01-01

    This thesis sets forth a quasi-particle model for the static and dynamical properties of nuclear matter. The model is based on a scale ratio of quasi-particles to nucleons and the projection of the semi-classical distribution onto a basis of coherent Gaussian states. The first chapter deals with the transport equations, particularly the Vlasov equation for the Wigner distribution function. The second is devoted to the statics of nuclear matter. Here, the sampling effect upon the nuclear density is treated, and the equation of state of the Gaussian fluid is compared with that given by the Hartree-Fock approximation. We define the equation of state as the relationship between the nucleon binding energy and density for a given temperature. The curvature around the minimum of the equation of state of the quasi-particle system is shown to be related to the speed of propagation of density perturbations. The volume energy and the surface properties of a (semi-)infinite nucleon system are derived. For the resulting saturated, self-consistent semi-infinite system of quasi-particles, the surface coefficient appearing in the mass formula is extracted, as well as the system's density profile. The third chapter treats the dynamics of two-particle residual interactions. The effect of different parameters on the relaxation of a nucleon system without a mean field is studied by means of Eulerian and Lagrangian modeling. The fourth chapter treats volume instabilities (spinodal decomposition) in nuclear matter. Quasi-particle systems, initially prepared in the spinodal region of the interaction used, are allowed to evolve. It is shown that the scale ratio controls the amount of fluctuations injected into the system. An inhomogeneity degree and a proper time are defined, and the role of collisions in the spinodal decomposition, as well as that of the initial temperature and density, is investigated. Assuming different effective macroscopic interactions, the influence of quantities such as…

  12. Ensemble Assimilation Using Three First-Principles Thermospheric Models as a Tool for 72-hour Density and Satellite Drag Forecasts

    Science.gov (United States)

    Hunton, D.; Pilinski, M.; Crowley, G.; Azeem, I.; Fuller-Rowell, T. J.; Matsuo, T.; Fedrizzi, M.; Solomon, S. C.; Qian, L.; Thayer, J. P.; Codrescu, M.

    2014-12-01

    Much as aircraft are affected by the prevailing winds and weather conditions in which they fly, satellites are affected by variability in the density and motion of the near-earth space environment. Drastic changes in the neutral density of the thermosphere, caused by geomagnetic storms or other phenomena, result in perturbations of satellite motions through drag on the satellite surfaces. This can lead to difficulties in locating important satellites, temporarily losing track of satellites, and errors when predicting collisions in space. As the population of satellites in Earth orbit grows, higher space-weather prediction accuracy is required for critical missions, such as accurate catalog maintenance, collision avoidance for manned and unmanned space flight, reentry prediction, satellite lifetime prediction, defining on-board fuel requirements, and satellite attitude dynamics. We describe ongoing work to build a comprehensive nowcast and forecast system for neutral density, winds, temperature, composition, and satellite drag. This modeling tool will be called the Atmospheric Density Assimilation Model (ADAM). It will be based on three state-of-the-art coupled models of the thermosphere-ionosphere running in real time, using assimilative techniques to produce a thermospheric nowcast. It will also produce, in real time, 72-hour predictions of the global thermosphere-ionosphere system using the nowcast as the initial condition. We will review the requirements for the ADAM system, the underlying full-physics models, the plethora of input options available to drive the models, and a feasibility study showing the performance of first-principles models as it pertains to satellite-drag operational needs, and we will discuss challenges in designing an assimilative space-weather prediction model. The performance of the ensemble assimilative model is expected to exceed that of current empirical and assimilative density models.

  13. Dynamic density functional theory of solid tumor growth: Preliminary models

    Directory of Open Access Journals (Sweden)

    Arnaud Chauviere

    2012-03-01

    Full Text Available Cancer is a disease that can be seen as a complex system whose dynamics and growth result from nonlinear processes coupled across wide ranges of spatio-temporal scales. The current mathematical modeling literature addresses issues at various scales, but the development of theoretical methodologies capable of bridging gaps across scales needs further study. We present a new theoretical framework based on Dynamic Density Functional Theory (DDFT) extended, for the first time, to the dynamics of living tissues by accounting for cell density correlations, different cell types, phenotypes and cell birth/death processes, in order to provide a biophysically consistent description of processes across the scales. We present an application of this approach to tumor growth.

  14. The level density parameters for fermi gas model

    International Nuclear Information System (INIS)

    Zuang Youxiang; Wang Cuilan; Zhou Chunmei; Su Zongdi

    1986-01-01

    Nuclear level densities are a crucial ingredient in statistical models, for instance in the calculations of widths, cross sections, and emitted-particle spectra for various reaction channels. In this work, 667 sets of newer and more reliable experimental data, published between 1973 and 1983, are adopted; these include the average level spacing D_0 and the radiative capture width Γ_γ^0 at the neutron binding energy, and the cumulative level number N_0 at low excitation energy. Based on the parameters given by Gilbert-Cameron and Cook, the physical quantities mentioned above are calculated. The calculated results deviate noticeably from the experimental values. In order to improve the fit, the parameters in the G-C formula are adjusted and a new set of level density parameters is obtained. The parameters of this work are more suitable for fitting the new measurements.

  15. Net-baryon number fluctuations in the hybrid quark-meson-nucleon model at finite density

    Science.gov (United States)

    Marczenko, Michał; Sasaki, Chihiro

    2018-02-01

    We study the mean-field thermodynamics and the characteristics of the net-baryon number fluctuations at the phase boundaries for the chiral and deconfinement transitions in the hybrid quark-meson-nucleon model. The chiral dynamics is described in the linear sigma model, whereas the quark confinement is manipulated by a medium-dependent modification of the particle distribution functions, where an additional scalar field is introduced. At low temperature and finite baryon density, the model predicts a first- or second-order chiral phase transition, or a crossover, depending on the expectation value of the scalar field, and a first-order deconfinement phase transition. We focus on the influence of the confinement on higher-order cumulants of the net-baryon number density. We find that the cumulants show a substantial enhancement around the chiral phase transition; they are not as sensitive to the deconfinement transition.

  16. A fragment based method for modeling of protein segments into cryo-EM density maps.

    Science.gov (United States)

    Ismer, Jochen; Rose, Alexander S; Tiemann, Johanna K S; Hildebrand, Peter W

    2017-11-13

    Single-particle analysis of electron cryo-microscopy (cryo-EM) is a key technology for the elucidation of macromolecular structures. Recent technical advances in hardware and software have significantly enhanced the resolution of cryo-EM density maps and broadened the applicability and the circle of users. To facilitate modeling of macromolecules into cryo-EM density maps, fast and easy-to-use methods for modeling are now in demand. Here we investigated and benchmarked the suitability of a classical and well-established fragment-based approach for modeling segments into cryo-EM density maps (termed FragFit). FragFit uses a hierarchical strategy to select fragments from a pre-calculated set of billions of fragments derived from structures deposited in the Protein Data Bank, based on sequence similarity, fit of stem atoms and fit to a cryo-EM density map. The user only has to specify the sequence of the segment and the number of the N- and C-terminal stem residues in the protein. Using a representative data set of protein structures, we show that protein segments can be accurately modeled into cryo-EM density maps of different resolution by FragFit. Prediction quality depends on segment length, the type of secondary structure of the segment and the local quality of the map. Fast and automated calculation renders FragFit applicable for implementation in interactive web applications, e.g., to model missing segments, flexible protein parts or hinge regions into cryo-EM density maps.

  17. Optimal Design of Low-Density SNP Arrays for Genomic Prediction: Algorithm and Applications.

    Directory of Open Access Journals (Sweden)

    Xiao-Lin Wu

    Full Text Available Low-density (LD) single nucleotide polymorphism (SNP) arrays provide a cost-effective solution for genomic prediction and selection, but algorithms and computational tools are needed for the optimal design of LD SNP chips. A multiple-objective, local optimization (MOLO) algorithm was developed for the design of optimal LD SNP chips that can be imputed accurately to medium-density (MD) or high-density (HD) SNP genotypes for genomic prediction. The objective function facilitates maximization of non-gap map length and system information for the SNP chip, and the latter is computed either as locus-averaged (LASE) or haplotype-averaged Shannon entropy (HASE) and adjusted for uniformity of the SNP distribution. HASE performed better than LASE with ≤1,000 SNPs, but required considerably more computing time. Nevertheless, the differences diminished when >5,000 SNPs were selected. Optimization was accomplished conditionally on the presence of SNPs that were obligated to each chromosome. The frame location of SNPs on a chip can be either uniform (evenly spaced) or non-uniform. For the latter design, a tunable empirical Beta distribution was used to guide the location distribution of frame SNPs such that both ends of each chromosome were enriched with SNPs. The SNP distribution on each chromosome was finalized through the objective function that was locally and empirically maximized. This MOLO algorithm was capable of selecting a set of approximately evenly-spaced and highly-informative SNPs, which in turn led to increased imputation accuracy compared with selection solely of evenly-spaced SNPs. Imputation accuracy increased with LD chip size, and imputation error rate was extremely low for chips with ≥3,000 SNPs. Assuming that genotyping or imputation error occurs at random, imputation error rate can be viewed as the upper limit for genomic prediction error. Our results show that about 25% of the imputation error rate was propagated to genomic prediction in an Angus
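
    The locus-averaged Shannon entropy (LASE) term of the objective is easy to compute from allele frequencies; a minimal sketch for biallelic loci (random candidate pool; the map-length and uniformity terms of the full MOLO objective are ignored):

```python
import numpy as np

def locus_shannon_entropy(maf):
    """Locus-averaged Shannon entropy (LASE, bits) of a SNP panel.
    Each biallelic locus contributes H = -(p*log2(p) + q*log2(q))."""
    p = np.clip(np.asarray(maf, dtype=float), 1e-12, 1 - 1e-12)
    q = 1.0 - p
    return float(np.mean(-(p * np.log2(p) + q * np.log2(q))))

# entropy rises with minor-allele frequency, so a greedy panel simply
# keeps the highest-MAF SNPs from the candidate pool
maf_pool = np.random.default_rng(3).uniform(0.01, 0.5, size=50_000)
panel = np.sort(maf_pool)[-3_000:]
print(f"panel LASE = {locus_shannon_entropy(panel):.3f} bits")
```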

  18. Bayesian inference of a lake water quality model by emulating its posterior density

    Science.gov (United States)

    Dietzel, A.; Reichert, P.

    2014-10-01

    We use a Gaussian stochastic process emulator to interpolate the posterior probability density of a computationally demanding application of the biogeochemical-ecological lake model BELAMO, in order to accelerate statistical inference of deterministic model and error model parameters. The deterministic model consists of a mechanistic description of key processes influencing the mass balance of nutrients, dissolved oxygen, organic particles, and phytoplankton and zooplankton in the lake. This model is complemented by a Gaussian stochastic process to describe the remaining model bias and by Normal, independent observation errors. A small subsample of the Markov chain representing the posterior of the model parameters is propagated through the full model to get model predictions and uncertainty estimates. We expect this approximation to be more accurate, at only slightly higher computational cost, than using a Normal approximation to the posterior probability density and linear error propagation to the results, as we did in an earlier paper. The performance of the two techniques is compared for a didactical example as well as for the lake model. As expected, for the didactical example the use of the emulator led to posterior marginals of the model parameters that are closer to those calculated by Markov chain simulation using the full model than those based on the Normal approximation. For the lake model, the new technique proved applicable without an excessive increase in computational requirements, but we faced challenges in the choice of the design data set for emulator calibration. As the posterior is a scalar function of the parameters, the suggested technique is an alternative to emulating a potentially more complex, structured output of the simulation model, which allows for the use of a less case-specific emulator. This comes at the cost that the full model still has to be used for prediction (which can be done with a smaller, approximately independent subsample
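
    The core trick — replace the expensive log-posterior with a Gaussian-process interpolation and run MCMC on the emulator — can be sketched briefly. The quadratic toy "posterior", design size and kernel below are illustrative assumptions, far simpler than BELAMO:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def log_post(theta):           # stand-in for an expensive simulator run
    a, b = theta
    return -0.5 * (a**2 + (b - 0.5 * a)**2 / 0.3)

rng = np.random.default_rng(7)
X = rng.uniform(-3, 3, size=(80, 2))     # design points (choice is delicate)
y = np.array([log_post(t) for t in X])
gp = GaussianProcessRegressor(kernel=ConstantKernel(1.0) * RBF([1.0, 1.0]),
                              normalize_y=True).fit(X, y)

# cheap Metropolis sampling on the emulated log-density
theta, samples = np.zeros(2), []
cur = gp.predict(theta[None])[0]
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.5, size=2)
    cand = gp.predict(prop[None])[0]
    if np.log(rng.uniform()) < cand - cur:
        theta, cur = prop, cand
    samples.append(theta.copy())
print(np.asarray(samples).mean(axis=0))  # posterior mean under the emulator
```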

  19. Modelling the predictive performance of credit scoring

    Directory of Open Access Journals (Sweden)

    Shi-Wei Shen

    2013-07-01

    Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: A goodness-of-fit test demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI) and micro- as well as macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality of the study is that three models were developed to predict corporate firms’ defaults based on different microeconomic and macroeconomic factors such as the TCRI, asset growth rates, stock index and gross domestic product. Contribution/value-add: The study utilises different goodness-of-fit measures and receiver operating characteristics in examining the robustness of the predictive power of these factors.
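
    The modelling step itself is a standard logistic regression of default on micro- and macroeconomic covariates, scored by discrimination on a hold-out set. A minimal sketch on synthetic data (the variable list mimics the TCRI-plus-macro specification; all coefficients and default rates are invented):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000
X = np.column_stack([
    rng.integers(1, 10, n),     # TCRI credit-risk index (1 = best)
    rng.normal(0.05, 0.20, n),  # asset growth rate
    rng.normal(0.00, 0.15, n),  # stock index return
    rng.normal(0.03, 0.02, n),  # GDP growth
])
logit = -4.0 + 0.45 * X[:, 0] - 1.5 * X[:, 1] - 1.0 * X[:, 2] - 20 * X[:, 3]
y = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))   # default indicator

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"hold-out AUC = {auc:.3f}")      # discrimination, cf. the ROC analyses
```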

  20. Modelling language evolution: Examples and predictions

    Science.gov (United States)

    Gong, Tao; Shuai, Lan; Zhang, Menghan

    2014-06-01

    We survey recent computer modelling research of language evolution, focusing on a rule-based model simulating the lexicon-syntax coevolution and an equation-based model quantifying the language competition dynamics. We discuss four predictions of these models: (a) correlation between domain-general abilities (e.g. sequential learning) and language-specific mechanisms (e.g. word order processing); (b) coevolution of language and relevant competences (e.g. joint attention); (c) effects of cultural transmission and social structure on linguistic understandability; and (d) commonalities between linguistic, biological, and physical phenomena. All these contribute significantly to our understanding of the evolutions of language structures, individual learning mechanisms, and relevant biological and socio-cultural factors. We conclude the survey by highlighting three future directions of modelling studies of language evolution: (a) adopting experimental approaches for model evaluation; (b) consolidating empirical foundations of models; and (c) multi-disciplinary collaboration among modelling, linguistics, and other relevant disciplines.

  1. Hidden Semi-Markov Models for Predictive Maintenance

    Directory of Open Access Journals (Sweden)

    Francesco Cartella

    2015-01-01

    Full Text Available Realistic predictive maintenance approaches are essential for condition monitoring and predictive maintenance of industrial machines. In this work, we propose Hidden Semi-Markov Models (HSMMs) with (i) no constraints on the state duration density function and (ii) applicability to continuous or discrete observations. To deal with such a type of HSMM, we also propose modifications to the learning, inference, and prediction algorithms. Finally, automatic model selection has been made possible using the Akaike Information Criterion. This paper describes the theoretical formalization of the model as well as several experiments performed on simulated and real data with the aim of validating the methodology. In all performed experiments, the model is able to correctly estimate the current state and to effectively predict the time to a predefined event with a low overall average absolute error. As a consequence, its applicability to real-world settings can be beneficial, especially where the Remaining Useful Lifetime (RUL) of the machine is calculated in real time.

  2. Predicting coastal cliff erosion using a Bayesian probabilistic model

    Science.gov (United States)

    Hapke, Cheryl J.; Plant, Nathaniel G.

    2010-01-01

    Regional coastal cliff retreat is difficult to model due to the episodic nature of failures and the along-shore variability of retreat events. There is a growing demand, however, for predictive models that can be used to forecast areas vulnerable to coastal erosion hazards. Increasingly, probabilistic models are being employed that require data sets of high temporal density to define the joint probability density function that relates forcing variables (e.g. wave conditions) and initial conditions (e.g. cliff geometry) to erosion events. In this study we use a multi-parameter Bayesian network to investigate correlations between key variables that control and influence variations in cliff retreat processes. The network uses Bayesian statistical methods to estimate event probabilities using existing observations. Within this framework, we forecast the spatial distribution of cliff retreat along two stretches of cliffed coast in Southern California. The input parameters are the height and slope of the cliff, a descriptor of material strength based on the dominant cliff-forming lithology, and the long-term cliff erosion rate that represents prior behavior. The model is forced using predicted wave impact hours. Results demonstrate that the Bayesian approach is well-suited to the forward modeling of coastal cliff retreat, with the correct outcomes forecast in 70–90% of the modeled transects. The model also performs well in identifying specific locations of high cliff erosion, thus providing a foundation for hazard mapping. This approach can be employed to predict cliff erosion at time-scales ranging from storm events to the impacts of sea-level rise at the century-scale.

  3. Model Predictive Control of Sewer Networks

    DEFF Research Database (Denmark)

    Pedersen, Einar B.; Herbertsson, Hannes R.; Niemann, Henrik

    2016-01-01

    The developments in solutions for management of urban drainage are of vital importance, as the amount of sewer water from urban areas continues to increase due to the increase of the world’s population and the change in the climate conditions. How a sewer network is structured, monitored and controlled has thus become an essential factor for efficient performance of waste water treatment plants. This paper examines methods for simplified modelling and controlling a sewer network. A practical approach to the problem is used by analysing a simplified design model, which is based on the Barcelona benchmark model. Due to the inherent constraints, the applied approach is based on Model Predictive Control.

  4. Bayesian Predictive Models for Rayleigh Wind Speed

    DEFF Research Database (Denmark)

    Shahirinia, Amir; Hajizadeh, Amin; Yu, David C

    2017-01-01

    One of the major challenges with the increase in wind power generation is the uncertain nature of wind speed. So far the uncertainty about wind speed has been presented through probability distributions. Also, the existing models that consider the uncertainty of the wind speed primarily view… The predictive model of the wind speed aggregates the non-homogeneous distributions into a single continuous distribution. Therefore, the result is able to capture the variation among the probability distributions of the wind speeds at the turbines’ locations in a wind farm. More specifically, instead of using a wind speed distribution whose parameters are known or estimated, the parameters are considered as random, with variations according to probability distributions. The Bayesian predictive model for a Rayleigh distribution, which has only a single scale parameter, has been proposed. Also closed-form posterior…
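
    The Rayleigh case is fully conjugate: with theta = sigma^2 given an inverse-gamma prior, the posterior is again inverse-gamma, and the predictive distribution can be sampled by mixing over the posterior. A minimal sketch on synthetic wind speeds with an invented weak prior (the paper's closed forms are not reproduced here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
x = stats.rayleigh.rvs(scale=7.0, size=200, random_state=rng)  # wind speeds (m/s)

# conjugate inverse-gamma prior on theta = sigma^2 (weakly informative)
alpha_n = 2.0 + len(x)                 # posterior shape
beta_n = 50.0 + 0.5 * np.sum(x**2)     # posterior scale

# posterior predictive by mixing: draw theta, then a new wind speed
theta = stats.invgamma.rvs(alpha_n, scale=beta_n, size=20_000, random_state=rng)
x_new = stats.rayleigh.rvs(scale=np.sqrt(theta), random_state=rng)
print(f"posterior mean sigma  = {np.sqrt(theta).mean():.2f} m/s")
print(f"predictive mean speed = {x_new.mean():.2f} m/s")
```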

  5. Comparison of two ordinal prediction models

    DEFF Research Database (Denmark)

    Kattan, Michael W; Gerds, Thomas A

    2015-01-01

    …system (i.e. old or new), such as the level of evidence for one or more factors included in the system or the general opinions of expert clinicians. However, given the major objective of estimating prognosis on an ordinal scale, we argue that the rival staging system candidates should be compared on their ability to predict outcome. We sought to outline an algorithm that would compare two rival ordinal systems on their predictive ability. RESULTS: We devised an algorithm based largely on the concordance index, which is appropriate for comparing two models in their ability to rank observations. We demonstrate our algorithm with a prostate cancer staging system example. CONCLUSION: We have provided an algorithm for selecting the preferred staging system based on prognostic accuracy. It appears to be useful for the purpose of selecting between two ordinal prediction models.
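
    The comparison hinges on the concordance index; a minimal sketch of a Harrell-style c-index for ordinal stages against an ordinal outcome (all data are hypothetical):

```python
import itertools

def concordance_index(stage, outcome):
    """Among pairs with different outcomes, the fraction whose ordinal
    stage ranks them correctly; ties in stage count one half."""
    conc = ties = n = 0
    for (s_i, o_i), (s_j, o_j) in itertools.combinations(zip(stage, outcome), 2):
        if o_i == o_j:
            continue
        n += 1
        if (s_i - s_j) * (o_i - o_j) > 0:
            conc += 1
        elif s_i == s_j:
            ties += 1
    return (conc + 0.5 * ties) / n

# two rival staging systems scored against the observed ordinal outcome
outcome   = [0, 0, 1, 1, 2, 2, 3, 3]
old_stage = [1, 2, 1, 2, 2, 3, 2, 3]
new_stage = [1, 1, 2, 2, 2, 3, 3, 3]
print(concordance_index(old_stage, outcome),
      concordance_index(new_stage, outcome))
```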

  6. Dose prediction accuracy of anisotropic analytical algorithm and pencil beam convolution algorithm beyond high density heterogeneity interface

    Directory of Open Access Journals (Sweden)

    Suresh B Rana

    2013-01-01

    Full Text Available Purpose: It is well known that photon beam radiation therapy requires dose calculation algorithms. The objective of this study was to measure and assess the ability of the pencil beam convolution (PBC) and anisotropic analytical algorithm (AAA) to predict doses beyond a high density heterogeneity. Materials and Methods: An inhomogeneous phantom of five layers was created in the Eclipse planning system (version 8.6.15). Each layer of the phantom was assigned as water (first or top), air (second), water (third), bone (fourth), and water (fifth or bottom) medium. Depth doses in water (bottom medium) were calculated for 100 monitor units (MUs) with a 6 Megavoltage (MV) photon beam for different field sizes using AAA and PBC with heterogeneity correction. Combinations of solid water, Poly Vinyl Chloride (PVC), and Styrofoam were then manufactured to mimic the phantoms, and doses for 100 MUs were acquired with a cylindrical ionization chamber at selected depths beyond the high density heterogeneity interface. The measured and calculated depth doses were then compared. Results: AAA's values had better agreement with measurements at all measured depths. Dose overestimation by AAA (up to 5.3%) and by PBC (up to 6.7%) was found to be higher in proximity to the high-density heterogeneity interface, and the dose discrepancies were more pronounced for larger field sizes. The errors in dose estimation by AAA and PBC may be due to improper beam modeling of primary beam attenuation or lateral scatter contributions, or a combination of both, in heterogeneous media that include low and high density materials. Conclusions: AAA is more accurate than PBC for dose calculations in treating deep-seated tumors beyond a high-density heterogeneity interface.

  7. Predictive analytics can support the ACO model.

    Science.gov (United States)

    Bradley, Paul

    2012-04-01

    Predictive analytics can be used to rapidly spot hard-to-identify opportunities to better manage care--a key tool in accountable care. When considering analytics models, healthcare providers should: Make value-based care a priority and act on information from analytics models. Create a road map that includes achievable steps, rather than major endeavors. Set long-term expectations and recognize that the effectiveness of an analytics program takes time, unlike revenue cycle initiatives that may show a quick return.

  8. Predictive modeling in homogeneous catalysis: a tutorial

    NARCIS (Netherlands)

    Maldonado, A.G.; Rothenberg, G.

    2010-01-01

    Predictive modeling has become a practical research tool in homogeneous catalysis. It can help to pinpoint ‘good regions’ in the catalyst space, narrowing the search for the optimal catalyst for a given reaction. Just like any other new idea, in silico catalyst optimization is accepted by some

  9. Model predictive control of smart microgrids

    DEFF Research Database (Denmark)

    Hu, Jiefeng; Zhu, Jianguo; Guerrero, Josep M.

    2014-01-01

    required to realise high performance of distributed generations and will realise innovative control techniques utilising model predictive control (MPC) to assist in coordinating the plethora of generation and load combinations, thus enabling the effective exploitation of the clean renewable energy sources...

  10. Feedback model predictive control by randomized algorithms

    NARCIS (Netherlands)

    Batina, Ivo; Stoorvogel, Antonie Arij; Weiland, Siep

    2001-01-01

    In this paper we present a further development of an algorithm for stochastic disturbance rejection in model predictive control with input constraints based on randomized algorithms. The algorithm presented in our work can solve the problem of stochastic disturbance rejection approximately but with

  11. A Robustly Stabilizing Model Predictive Control Algorithm

    Science.gov (United States)

    Acikmese, A. Behcet; Carson, John M., III

    2007-01-01

    A model predictive control (MPC) algorithm that differs from prior MPC algorithms has been developed for controlling an uncertain nonlinear system. This algorithm guarantees the resolvability of an associated finite-horizon optimal-control problem in a receding-horizon implementation.

  12. Hierarchical Model Predictive Control for Resource Distribution

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob

    2010-01-01

    This paper deals with hierarchical model predictive control (MPC) of distributed systems. A three-level hierarchical approach is proposed, consisting of a high level MPC controller, a second level of so-called aggregators, controlled by an online MPC-like algorithm, and a lower level of autonomous...

  13. Model Predictive Control based on Finite Impulse Response Models

    DEFF Research Database (Denmark)

    Prasath, Guru; Jørgensen, John Bagterp

    2008-01-01

    We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations ...

  14. Application of Molecular Dynamics Simulations in Molecular Property Prediction I: Density and Heat of Vaporization

    Science.gov (United States)

    Wang, Junmei; Hou, Tingjun

    2011-01-01

    Molecular mechanical force field (FF) methods are useful in studying condensed phase properties. They are complementary to experiment and can often go beyond experiment in atomic details. Even if a FF is specific for studying structures, dynamics and functions of biomolecules, it is still important for the FF to accurately reproduce the experimental liquid properties of small molecules that represent the chemical moieties of biomolecules. Otherwise, the force field may not describe the structures and energies of macromolecules in aqueous solutions properly. In this work, we have carried out a systematic study to evaluate the General AMBER Force Field (GAFF) in studying densities and heats of vaporization for a large set of organic molecules that covers the most common chemical functional groups. The latest techniques, such as the particle mesh Ewald (PME) method for calculating electrostatic energies and Langevin dynamics for scaling temperatures, have been applied in the molecular dynamics (MD) simulations. For density, the average percent error (APE) of 71 organic compounds is 4.43% when compared to the experimental values. More encouragingly, the APE drops to 3.43% after the exclusion of two outliers and four other compounds for which the experimental densities were measured at pressures higher than 1.0 atm. For heat of vaporization, several protocols have been investigated and the best one, P4/ntt0, achieves an average unsigned error (AUE) and a root-mean-square error (RMSE) of 0.93 and 1.20 kcal/mol, respectively. How to reduce the prediction errors through proper van der Waals (vdW) parameterization has been discussed. An encouraging finding in vdW parameterization is that both densities and heats of vaporization approach their “ideal” values in a synchronous fashion when vdW parameters are tuned. The following hydration free energy calculation using thermodynamic integration further justifies the vdW refinement. We conclude that simple vdW parameterization

  15. Spin density waves predicted in zigzag puckered phosphorene, arsenene and antimonene nanoribbons

    Directory of Open Access Journals (Sweden)

    Xiaohua Wu

    2016-04-01

    The pursuit of controlled magnetism in semiconductors has been a persisting goal in condensed matter physics. Recently, Vene (phosphorene, arsenene and antimonene) has been predicted as a new class of 2D-semiconductor with suitable band gap and high carrier mobility. In this work, we investigate the edge magnetism in zigzag puckered Vene nanoribbons (ZVNRs) based on the density functional theory. The band structures of ZVNRs show half-filled bands crossing the Fermi level at the midpoint of reciprocal lattice vectors, indicating a strong Peierls instability. To remove this instability, we consider two different mechanisms, namely, spin density wave (SDW) caused by electron-electron interaction and charge density wave (CDW) caused by electron-phonon coupling. We have found that an antiferromagnetic Mott-insulating state defined by SDW is the ground state of ZVNRs. In particular, SDW in ZVNRs displays several surprising characteristics: 1) compared with other nanoribbon systems, their magnetic moments are antiparallelly arranged at each zigzag edge and almost independent of the width of nanoribbons; 2) compared with other SDW systems, the magnetic moments and band gap of the SDW are unexpectedly large, indicating a higher SDW transition temperature in ZVNRs; 3) the SDW can be effectively modified by strains and charge doping, which indicates that ZVNRs have bright prospects in nanoelectronic devices.

  16. Spin density waves predicted in zigzag puckered phosphorene, arsenene and antimonene nanoribbons

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Xiaohua; Zhang, Xiaoli; Wang, Xianlong [Key Laboratory of Materials Physics, Institute of Solid State Physics, Chinese Academy of Sciences, Hefei 230031 (China); Zeng, Zhi, E-mail: zzeng@theory.issp.ac.cn [Key Laboratory of Materials Physics, Institute of Solid State Physics, Chinese Academy of Sciences, Hefei 230031 (China); University of Science and Technology of China, Hefei 230026 (China)

    2016-04-15

    The pursuit of controlled magnetism in semiconductors has been a persisting goal in condensed matter physics. Recently, Vene (phosphorene, arsenene and antimonene) has been predicted as a new class of 2D-semiconductor with suitable band gap and high carrier mobility. In this work, we investigate the edge magnetism in zigzag puckered Vene nanoribbons (ZVNRs) based on the density functional theory. The band structures of ZVNRs show half-filled bands crossing the Fermi level at the midpoint of reciprocal lattice vectors, indicating a strong Peierls instability. To remove this instability, we consider two different mechanisms, namely, spin density wave (SDW) caused by electron-electron interaction and charge density wave (CDW) caused by electron-phonon coupling. We have found that an antiferromagnetic Mott-insulating state defined by SDW is the ground state of ZVNRs. In particular, SDW in ZVNRs displays several surprising characteristics: 1) compared with other nanoribbon systems, their magnetic moments are antiparallelly arranged at each zigzag edge and almost independent of the width of nanoribbons; 2) compared with other SDW systems, the magnetic moments and band gap of the SDW are unexpectedly large, indicating a higher SDW transition temperature in ZVNRs; 3) the SDW can be effectively modified by strains and charge doping, which indicates that ZVNRs have bright prospects in nanoelectronic devices.

  17. Using Tree Detection Algorithms to Predict Stand Sapwood Area, Basal Area and Stocking Density in Eucalyptus regnans Forest

    Directory of Open Access Journals (Sweden)

    Dominik Jaskierniak

    2015-06-01

    Managers of forested water supply catchments require efficient and accurate methods to quantify changes in forest water use due to changes in forest structure and density after disturbance. Using Light Detection and Ranging (LiDAR) data with as few as 0.9 pulses m−2, we applied a local maximum filtering (LMF) method and normalised cut (NCut) algorithm to predict stocking density (SDen) of a 69-year-old Eucalyptus regnans forest comprising 251 plots with resolution of the order of 0.04 ha. Using the NCut method we predicted basal area per hectare (BAHa) and sapwood area per hectare (SAHa), a well-established proxy for transpiration. Sapwood area was also indirectly estimated with allometric relationships dependent on LiDAR-derived SDen and BAHa using a computationally efficient procedure. The individual tree detection (ITD) rates for the LMF and NCut methods respectively had 72% and 68% of stems correctly identified, 25% and 20% of stems missed, and 2% and 12% of stems over-segmented. The significantly higher computational requirement of the NCut algorithm makes the LMF method more suitable for predicting SDen across large forested areas. Using NCut-derived ITD segments, observed versus predicted stand BAHa had R2 ranging from 0.70 to 0.98 across six catchments, whereas a generalised parsimonious model applied to all sites used the portion of hits greater than 37 m in height (PH37) to explain 68% of BAHa. For extrapolating one-ha resolution SAHa estimates across large forested catchments, we found that directly relating SAHa to NCut-derived LiDAR indices (R2 = 0.56) was slightly more accurate but computationally more demanding than indirect estimates of SAHa using allometric relationships consisting of BAHa (R2 = 0.50) or a sapwood perimeter index, defined as (BAHa·SDen)½ (R2 = 0.48).

  18. A joint calibration model for combining predictive distributions

    Directory of Open Access Journals (Sweden)

    Patrizia Agati

    2013-05-01

    In many research fields, as for example in probabilistic weather forecasting, valuable predictive information about a future random phenomenon may come from several, possibly heterogeneous, sources. Forecast combining methods have been developed over the years in order to deal with ensembles of sources: the aim is to combine several predictions in such a way as to improve forecast accuracy and reduce the risk of bad forecasts. In this context, we propose the use of a Bayesian approach to information combining, which consists in treating the predictive probability density functions (pdfs) from the individual ensemble members as data in a Bayesian updating problem. The likelihood function is shown to be proportional to the product of the pdfs, adjusted by a joint “calibration function” describing the predicting skill of the sources (Morris, 1977). In this paper, after rephrasing Morris’ algorithm in a predictive context, we propose to model the calibration function in terms of bias, scale and correlation and to estimate its parameters according to the least squares criterion. The performance of our method is investigated and compared with that of Bayesian Model Averaging (Raftery, 2005) on simulated data.
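
As a toy illustration of treating the ensemble members' pdfs as data and pooling them by product, here is the Gaussian special case with a flat calibration function (the paper's calibration function, with bias, scale and correlation terms, is not reproduced):

```python
import numpy as np

def combine_gaussians(means, sds):
    """Normalized product of independent Gaussian predictive pdfs:
    precisions add, and the pooled mean is precision-weighted. This is
    Morris-style pooling with the calibration function set to one."""
    means, prec = np.asarray(means, float), 1.0 / np.square(sds)
    var = 1.0 / prec.sum()
    return var * np.sum(prec * means), np.sqrt(var)

# Two hypothetical ensemble members forecasting the same quantity.
mean, sd = combine_gaussians([2.0, 3.5], [1.0, 0.5])
print(mean, sd)   # pooled forecast leans toward the sharper member
```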

  19. Calibration models for density borehole logging - construction report

    International Nuclear Information System (INIS)

    Engelmann, R.E.; Lewis, R.E.; Stromswold, D.C.

    1995-10-01

    Two machined blocks of magnesium and aluminum alloys form the basis for Hanford's density models. The blocks provide known densities of 1.780 ± 0.002 g/cm³ and 2.804 ± 0.002 g/cm³ for calibrating borehole logging tools that measure density based on gamma-ray scattering from a source in the tool. Each block is approximately 33 x 58 x 91 cm (13 x 23 x 36 in.) with cylindrical grooves cut into the sides of the blocks to hold steel casings of inner diameter 15 cm (6 in.) and 20 cm (8 in.). Spacers that can be inserted between the blocks and casings can create air gaps of thickness 0.64, 1.3, 1.9, and 2.5 cm (0.25, 0.5, 0.75 and 1.0 in.), simulating air gaps that can occur in actual wells from hole enlargements behind the casing.
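
A sketch of how the two blocks of known density can calibrate a gamma-gamma logging tool, assuming the usual exponential count-rate response (the response form and the count rates are assumptions of this sketch, not stated in the record):

```python
import numpy as np

RHO_MG, RHO_AL = 1.780, 2.804   # block densities from the record, g/cm^3

def fit_two_point(n_mg, n_al):
    """Fit N = N0 * exp(-k * rho) through the two calibration points."""
    k = np.log(n_mg / n_al) / (RHO_AL - RHO_MG)
    n0 = n_mg * np.exp(k * RHO_MG)
    return n0, k

def density_from_counts(n, n0, k):
    """Invert the fitted response to read density off a count rate."""
    return np.log(n0 / n) / k

n0, k = fit_two_point(n_mg=5200.0, n_al=2100.0)   # hypothetical count rates
print(density_from_counts(3400.0, n0, k))          # ~2.26 g/cm^3
```

The air-gap spacers described in the record would be handled the same way, by calibrating an additional response curve for each simulated gap thickness.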

  20. Modelling the broadband propagation of marine mammal echolocation clicks for click-based population density estimates.

    Science.gov (United States)

    von Benda-Beckmann, Alexander M; Thomas, Len; Tyack, Peter L; Ainslie, Michael A

    2018-02-01

    Passive acoustic monitoring with widely-dispersed hydrophones has been suggested as a cost-effective method to monitor population densities of echolocating marine mammals. This requires an estimate of the area around each receiver over which vocalizations are detected, the "effective detection area" (EDA). In the absence of auxiliary measurements enabling estimation of the EDA, it can be modelled instead. Common simplifying model assumptions include approximating the spectrum of clicks by flat energy spectra, and neglecting the frequency-dependence of sound absorption within the click bandwidth (narrowband assumption), rendering the problem amenable to solution using the sonar equation. Here, it is investigated how these approximations affect the estimated EDA and their potential for biasing the estimated density. EDA was estimated using the passive sonar equation, and by applying detectors to simulated clicks injected into measurements of background noise. By comparing model predictions made using these two approaches for different spectral energy distributions of echolocation clicks, but identical click source energy level and detector settings, EDA differed by up to a factor of 2 for Blainville's beaked whales. Both methods predicted that the relative density bias due to the narrowband assumptions ranged from 5% to more than 100%, depending on the species, detector settings, and noise conditions.
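
A minimal sonar-equation sketch of an EDA calculation with a hard detection threshold; all source, noise and absorption values are invented placeholders, and the narrowband simplification that the record scrutinizes is deliberately retained:

```python
import numpy as np

def detection_probability(r, source_level=200.0, noise_level=95.0,
                          detection_threshold=10.0, alpha_db_per_km=0.03):
    """Hard-threshold detector from the passive sonar equation, with
    transmission loss = spherical spreading + linear absorption."""
    tl = 20.0 * np.log10(np.maximum(r, 1.0)) + alpha_db_per_km * r / 1000.0
    snr = source_level - tl - noise_level
    return (snr >= detection_threshold).astype(float)

# EDA is the integral over range of p(detect | r) * 2 * pi * r dr.
r = np.linspace(1.0, 60_000.0, 600_000)            # range in metres
p = detection_probability(r)
eda_km2 = np.sum(p * 2.0 * np.pi * r) * (r[1] - r[0]) / 1e6
print(f"effective detection area ~ {eda_km2:.0f} km^2")
```

Replacing the single absorption coefficient with a frequency-dependent one, integrated over the click spectrum, is exactly the refinement whose omission the record shows can bias density estimates.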

  1. Strongly interacting matter at high densities with a soliton model

    Science.gov (United States)

    Johnson, Charles Webster

    1998-12-01

    One of the major goals of modern nuclear physics is to explore the phase diagram of strongly interacting matter. The study of these 'extreme' conditions is the primary motivation for the construction of the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory which will accelerate nuclei to a center of mass (c.m.) energy of about 200 GeV/nucleon. From a theoretical perspective, a test of quantum chromodynamics (QCD) requires the expansion of the conditions examined from one phase point to the entire phase diagram of strongly-interacting matter. In the present work we focus attention on what happens when the density is increased, at low excitation energies. Experimental results from the Brookhaven Alternating Gradient Synchrotron (AGS) indicate that this regime may be tested in the 'full stopping' (maximum energy deposition) scenario achieved at the AGS having a c.m. collision energy of about 2.5 GeV/nucleon for two equal-mass heavy nuclei. Since the solution of QCD on nuclear length-scales is computationally prohibitive even on today's most powerful computers, progress in the theoretical description of high densities has come through the application of models incorporating some of the essential features of the full theory. The simplest such model is the MIT bag model. We use a significantly more sophisticated model, a nonlocal confining soliton model developed in part at Kent. This model has proven its value in the calculation of the properties of individual mesons and nucleons. In the present application, the many-soliton problem is addressed with the same model. We describe nuclear matter as a lattice of solitons and apply the Wigner-Seitz approximation to the lattice. This means that we consider spherical cells with one soliton centered in each, corresponding to the average properties of the lattice. The average density is then varied by changing the size of the Wigner-Seitz cell. To arrive at a solution, we need to solve a coupled set of

  2. Disease prediction models and operational readiness.

    Directory of Open Access Journals (Sweden)

    Courtney D Corley

    The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. We define a disease event to be a biological event with focus on the One Health paradigm. These events are characterized by evidence of infection and/or disease condition. We reviewed models that attempted to predict a disease event, not merely its transmission dynamics, and we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). We searched commercial and government databases and harvested Google search results for eligible models, using terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. After removal of duplications and extraneous material, a core collection of 6,524 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. As a result, we systematically reviewed 44 papers, and the results are presented in this analysis. We identified 44 models, classified as one or more of the following: event prediction (4), spatial (26), ecological niche (28), diagnostic or clinical (6), spread or response (9), and reviews (3). The model parameters (e.g., etiology, climatic, spatial, cultural) and data sources (e.g., remote sensing, non-governmental organizations, expert opinion, epidemiological) were recorded and reviewed. A component of this review is the identification of verification and validation (V&V) methods applied to each model, if any V&V method was reported. All models were classified as either having undergone Some Verification or Validation method, or No Verification or Validation. We close by outlining an initial set of operational readiness level guidelines for disease prediction models based upon established Technology

  3. Caries risk assessment models in caries prediction

    Directory of Open Access Journals (Sweden)

    Amila Zukanović

    2013-11-01

    Objective. The aim of this research was to assess the efficiency of different multifactor models in caries prediction. Material and methods. Data from the questionnaire and objective examination of 109 examinees was entered into the Cariogram, PreViser and Caries-Risk Assessment Tool (CAT) multifactor risk assessment models. Caries risk was assessed with the help of all three models for each patient, classifying them as low, medium or high-risk patients. The development of new caries lesions over a period of three years [Decay Missing Filled Tooth (DMFT) increment = difference between Decay Missing Filled Tooth Surface (DMFTS) index at baseline and follow-up] provided for examination of the predictive capacity concerning different multifactor models. Results. The data gathered showed that different multifactor risk assessment models give significantly different results (Friedman test: Chi square = 100.073, p=0.000). The Cariogram is the model which identified the majority of examinees as medium-risk patients (70%). The other two models were more radical in risk assessment, giving more unfavorable risk profiles for patients. In only 12% of the patients did the three multifactor models assess the risk in the same way. PreViser and CAT gave the same results in 63% of cases; the Wilcoxon test showed that there is no statistically significant difference in caries risk assessment between these two models (Z = -1.805, p=0.071). Conclusions. Evaluation of three different multifactor caries risk assessment models (Cariogram, PreViser and CAT) showed that only the Cariogram can successfully predict new caries development in 12-year-old Bosnian children.

  4. Spin-density functional for exchange anisotropic Heisenberg model

    International Nuclear Information System (INIS)

    Prata, G.N.; Penteado, P.H.; Souza, F.C.; Libero, Valter L.

    2009-01-01

    Ground-state energies for antiferromagnetic Heisenberg models with exchange anisotropy are estimated by means of a local-spin approximation made in the context of the density functional theory. Correlation energy is obtained using the non-linear spin-wave theory for homogeneous systems, from which the spin functional is built. Although applicable to chains of any size, the results are shown for a small number of sites, to exhibit finite-size effects and allow comparison with exact numerical data from direct diagonalization of small chains.

  5. Neutron density optimal control of A-1 reactor analogue model

    International Nuclear Information System (INIS)

    Grof, V.

    1975-01-01

    Two applications are described of the optimal control of a reactor analog model. Both cases consider the control of neutron density. Control loops containing the on-line controlled process, the reactor of the first Czechoslovak nuclear power plant A-1, are simulated on an analog computer. Two versions of the optimal control algorithm are derived using modern control theory (Pontryagin's maximum principle, the calculus of variations, and Kalman's estimation theory), the minimum time performance index, and the quadratic performance index. The results of the optimal control analysis are compared with the A-1 reactor conventional control. (author)

  6. Using Prediction Markets to Generate Probability Density Functions for Climate Change Risk Assessment

    Science.gov (United States)

    Boslough, M.

    2011-12-01

    Climate-related uncertainty is traditionally presented as an error bar, but it is becoming increasingly common to express it in terms of a probability density function (PDF). PDFs are a necessary component of probabilistic risk assessments, for which simple "best estimate" values are insufficient. Many groups have generated PDFs for climate sensitivity using a variety of methods. These PDFs are broadly consistent, but vary significantly in their details. One axiom of the verification and validation community is, "codes don't make predictions, people make predictions." This is a statement of the fact that subject domain experts generate results using assumptions within a range of epistemic uncertainty and interpret them according to their expert opinion. Different experts with different methods will arrive at different PDFs. For effective decision support, a single consensus PDF would be useful. We suggest that market methods can be used to aggregate an ensemble of opinions into a single distribution that expresses the consensus. Prediction markets have been shown to be highly successful at forecasting the outcome of events ranging from elections to box office returns. In prediction markets, traders can take a position on whether some future event will or will not occur. These positions are expressed as contracts that are traded in a double-auction market that aggregates price, which can be interpreted as a consensus probability that the event will take place. Since climate sensitivity cannot directly be measured, it cannot be predicted. However, the changes in global mean surface temperature are a direct consequence of climate sensitivity, changes in forcing, and internal variability. Viable prediction markets require an undisputed event outcome on a specific date. Climate-related markets exist on Intrade.com, an online trading exchange. One such contract is titled "Global Temperature Anomaly for Dec 2011 to be greater than 0.65 Degrees C." Settlement is based
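
A sketch of how a ladder of binary contracts can be read as a consensus distribution: the prices are exceedance probabilities, so differencing adjacent prices yields the probability mass per bin (thresholds and prices below are invented):

```python
import numpy as np

# Hypothetical contracts "anomaly exceeds t deg C" and their last prices,
# read as consensus exceedance probabilities P(X > t).
thresholds = np.array([0.45, 0.55, 0.65, 0.75, 0.85])
prices     = np.array([0.90, 0.72, 0.45, 0.20, 0.06])

bin_mass = -np.diff(prices)        # P(t_i < X <= t_{i+1}) per bin
pdf = bin_mass / np.diff(thresholds)
for lo, hi, d in zip(thresholds[:-1], thresholds[1:], pdf):
    print(f"({lo:.2f}, {hi:.2f}] deg C: consensus density {d:.2f}")
```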

  7. Link Prediction via Sparse Gaussian Graphical Model

    Directory of Open Access Journals (Sweden)

    Liangliang Zhang

    2016-01-01

    Link prediction is an important task in complex network analysis. Traditional link prediction methods are limited by network topology and lack of node property information, which makes predicting links challenging. In this study, we address link prediction using a sparse Gaussian graphical model and demonstrate its theoretical and practical effectiveness. In theory, link prediction is executed by estimating the inverse covariance matrix of samples to overcome information limits. The proposed method was evaluated with four small and four large real-world datasets. The experimental results show that the area under the curve (AUC) value obtained by the proposed method improved by an average of 3% and 12.5% on the small and large datasets, respectively, compared to 13 mainstream similarity methods. This method outperforms the baseline method, and the prediction accuracy is superior to mainstream methods when using only 80% of the training set. The method also provides significantly higher AUC values when using only 60% of the training set in the Dolphin and Taro datasets. Furthermore, the error rate of the proposed method demonstrates superior performance with all datasets compared to mainstream methods.
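
A minimal sketch of link scoring with a sparse inverse covariance estimate, using scikit-learn's GraphicalLasso on synthetic node-activity data (the paper's datasets, estimator settings and evaluation protocol are not reproduced):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
n_samples, n_nodes = 200, 10
X = rng.normal(size=(n_samples, n_nodes))
X[:, 1] += 0.8 * X[:, 0]          # plant a dependency between nodes 0 and 1

# Nonzero off-diagonal precision entries flag direct (conditional)
# dependencies; their magnitudes serve as link-prediction scores.
precision = GraphicalLasso(alpha=0.1).fit(X).precision_
scores = np.abs(precision)
np.fill_diagonal(scores, 0.0)

i, j = np.unravel_index(np.argmax(scores), scores.shape)
print(f"top predicted link: ({i}, {j}), score {scores[i, j]:.3f}")
```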

  8. Secretome Prediction of Two M. tuberculosis Clinical Isolates Reveals Their High Antigenic Density and Potential Drug Targets

    Science.gov (United States)

    Cornejo-Granados, Fernanda; Zatarain-Barrón, Zyanya L.; Cantu-Robles, Vito A.; Mendoza-Vargas, Alfredo; Molina-Romero, Camilo; Sánchez, Filiberto; Del Pozo-Yauner, Luis; Hernández-Pando, Rogelio; Ochoa-Leyva, Adrián

    2017-01-01

    The Excreted/Secreted (ES) proteins play important roles during Mycobacterium tuberculosis invasion, virulence, and survival inside the host, and they are a major source of immunogenic proteins. However, the molecular complexity of the bacillus cell wall has made the experimental isolation of the total bacterial ES proteins difficult. Here, we reported the genomes of two Beijing genotype M. tuberculosis clinical isolates obtained from patients from Vietnam (isolate 46) and South Africa (isolate 48). We developed a bioinformatics pipeline to predict their secretomes and observed that ~12% of the genome-encoded proteins are ES, with PE, PE-PGRS, and PPE being the most abundant protein domains. Additionally, the Gene Ontology, KEGG pathways and Enzyme Classes annotations supported the expected functions for the secretomes. Approximately 70% of an experimental secretome compiled from the literature was contained in our predicted secretomes, while only 34–41% of the experimental secretome was contained in the two previously reported secretomes for H37Rv. These results suggest that our bioinformatics pipeline is better at predicting a more complete set of ES proteins in M. tuberculosis genomes. The predicted ES proteins showed a significantly higher antigenic density, measured by the Abundance of Antigenic Regions (AAR) value, than the non-ES proteins and also compared to randomly constructed secretomes. Additionally, we predicted the secretomes for H37Rv, H37Ra, and two M. bovis BCG genomes. The antigenic density for BCG and for isolates 46 and 48 was higher than that observed for the H37Rv and H37Ra secretomes. In addition, two sets of immunogenic proteins previously reported in patients with tuberculosis also showed a high antigenic density. Interestingly, mice infected with isolate 46 showed a significantly lower survival rate than the ones infected with isolate 48, and both survival rates were lower than the one previously reported for H37Rv in the same murine model. Finally, after a

  9. A Replacement for the Silt Density Index: Permanganate Demand to Predict Reverse Osmosis Membrane Fouling.

    Science.gov (United States)

    1983-10-13

  10. Electrostatic ion thrusters - towards predictive modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kalentev, O.; Matyash, K.; Duras, J.; Lueskow, K.F.; Schneider, R. [Ernst-Moritz-Arndt Universitaet Greifswald, D-17489 (Germany); Koch, N. [Technische Hochschule Nuernberg Georg Simon Ohm, Kesslerplatz 12, D-90489 Nuernberg (Germany); Schirra, M. [Thales Electronic Systems GmbH, Soeflinger Strasse 100, D-89077 Ulm (Germany)

    2014-02-15

    The development of electrostatic ion thrusters so far has mainly been based on empirical and qualitative know-how, and on evolutionary iteration steps. This resulted in considerable effort regarding prototype design, construction and testing and therefore in significant development and qualification costs and high time demands. For future developments it is anticipated to implement simulation tools which allow for quantitative prediction of ion thruster performance, long-term behavior and spacecraft interaction prior to hardware design and construction. Based on integrated numerical models combining self-consistent kinetic plasma models with plasma-wall interaction modules a new quality in the description of electrostatic thrusters can be reached. These open the perspective for predictive modeling in this field. This paper reviews the application of a set of predictive numerical modeling tools on an ion thruster model of the HEMP-T (High Efficiency Multi-stage Plasma Thruster) type patented by Thales Electron Devices GmbH. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  11. Characterizing Attention with Predictive Network Models.

    Science.gov (United States)

    Rosenberg, M D; Finn, E S; Scheinost, D; Constable, R T; Chun, M M

    2017-04-01

    Recent work shows that models based on functional connectivity in large-scale brain networks can predict individuals' attentional abilities. While being some of the first generalizable neuromarkers of cognitive function, these models also inform our basic understanding of attention, providing empirical evidence that: (i) attention is a network property of brain computation; (ii) the functional architecture that underlies attention can be measured while people are not engaged in any explicit task; and (iii) this architecture supports a general attentional ability that is common to several laboratory-based tasks and is impaired in attention deficit hyperactivity disorder (ADHD). Looking ahead, connectivity-based predictive models of attention and other cognitive abilities and behaviors may potentially improve the assessment, diagnosis, and treatment of clinical dysfunction. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Genetic models of homosexuality: generating testable predictions

    Science.gov (United States)

    Gavrilets, Sergey; Rice, William R

    2006-01-01

    Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality including: (i) chromosomal location, (ii) dominance among segregating alleles and (iii) effect sizes that distinguish between the two major models for their polymorphism: the overdominance and sexual antagonism models. We conclude that the measurement of the genetic characteristics of quantitative trait loci (QTLs) found in genomic screens for genes influencing homosexuality can be highly informative in resolving the form of natural selection maintaining their polymorphism. PMID:17015344

  13. Models of asthma: density-equalizing mapping and output benchmarking

    Directory of Open Access Journals (Sweden)

    Fischer Tanja C

    2008-02-01

    Despite the large amount of experimental studies already conducted on bronchial asthma, further insights into the molecular basics of the disease are required to establish new therapeutic approaches. As a basis for this research different animal models of asthma have been developed in the past years. However, precise bibliometric data on the use of different models do not exist so far. Therefore the present study was conducted to establish a data base of the existing experimental approaches. Density-equalizing algorithms were used and data was retrieved from a Thomson Institute for Scientific Information database. During the period from 1900 to 2006 a number of 3489 filed items were connected to animal models of asthma, the first being published in the year 1968. The studies were published by 52 countries with the US, Japan and the UK being the most productive suppliers, participating in 55.8% of all published items. Analyzing the average citation per item as an indicator for research quality, Switzerland ranked first (30.54/item) and New Zealand ranked second for countries with more than 10 published studies. The 10 most productive journals included 4 with a main focus on allergy and immunology and 4 with a main focus on the respiratory system. Two journals focussed on pharmacology or pharmacy. In all assigned subject categories examined for a relation to animal models of asthma, immunology ranked first. Assessing numbers of published items in relation to animal species it was found that mice were the preferred species followed by guinea pigs. In summary it can be concluded from density-equalizing calculations that the use of animal models of asthma is restricted to a relatively small number of countries. There are also differences in the use of species. These differences are based on variations in the research focus as assessed by subject category analysis.

  14. Anopheles atroparvus density modeling using MODIS NDVI in a former malarious area in Portugal.

    Science.gov (United States)

    Lourenço, Pedro M; Sousa, Carla A; Seixas, Júlia; Lopes, Pedro; Novo, Maria T; Almeida, A Paulo G

    2011-12-01

    Malaria is dependent on environmental factors and considered as potentially re-emerging in temperate regions. Remote sensing data have been used successfully for monitoring environmental conditions that influence the patterns of such arthropod vector-borne diseases. Anopheles atroparvus density data were collected from 2002 to 2005, on a bimonthly basis, at three sites in a former malarial area in Southern Portugal. The development of the Remote Vector Model (RVM) was based upon two main variables: temperature and the Normalized Differential Vegetation Index (NDVI) from the Moderate Resolution Imaging Spectroradiometer (MODIS) Terra satellite. Temperature influences the mosquito life cycle and affects its intra-annual prevalence, and MODIS NDVI was used as a proxy for suitable habitat conditions. Mosquito data were used for calibration and validation of the model. For areas with high mosquito density, the model validation demonstrated a Pearson correlation of 0.68 between observed densities and predictions based on temperature and MODIS NDVI. RVM is a satellite data-based assimilation algorithm that uses temperature fields to predict the intra- and inter-annual densities of this mosquito species using MODIS NDVI. RVM is a relevant tool for vector density estimation, contributing to the risk assessment of transmission of mosquito-borne diseases, and can be part of the early warning system and contingency plans providing support to the decision making process of relevant authorities. © 2011 The Society for Vector Ecology.

  15. Systematics of nuclear densities, deformations and excitation energies within the context of the generalized rotation-vibration model

    Energy Technology Data Exchange (ETDEWEB)

    Chamon, L.C., E-mail: luiz.chamon@dfn.if.usp.b [Departamento de Fisica Nuclear, Instituto de Fisica da Universidade de Sao Paulo, Caixa Postal 66318, 05315-970, Sao Paulo, SP (Brazil); Carlson, B.V. [Departamento de Fisica, Instituto Tecnologico de Aeronautica, Centro Tecnico Aeroespacial, Sao Jose dos Campos, SP (Brazil)

    2010-11-30

    We present a large-scale systematics of charge densities, excitation energies and deformation parameters for hundreds of heavy nuclei. The systematics is based on a generalized rotation-vibration model for the quadrupole and octupole modes and takes into account second-order contributions of the deformations as well as the effects of finite diffuseness values for the nuclear densities. We compare our results with the predictions of classical surface vibrations in the hydrodynamical approximation.

  16. A statistical model for predicting muscle performance

    Science.gov (United States)

    Byerly, Diane Leslie De Caix

    The objective of these studies was to develop a capability for predicting muscle performance and fatigue to be utilized for both space- and ground-based applications. To develop this predictive model, healthy test subjects performed a defined, repetitive dynamic exercise to failure using a Lordex spinal machine. Throughout the exercise, surface electromyography (SEMG) data were collected from the erector spinae using a Mega Electronics ME3000 muscle tester and surface electrodes placed on both sides of the back muscle. These data were analyzed using a 5th order Autoregressive (AR) model and statistical regression analysis. It was determined that an AR derived parameter, the mean average magnitude of AR poles, significantly correlated with the maximum number of repetitions (designated Rmax) that a test subject was able to perform. Using the mean average magnitude of AR poles, a test subject's performance to failure could be predicted as early as the sixth repetition of the exercise. This predictive model has the potential to provide a basis for improving post-space flight recovery, monitoring muscle atrophy in astronauts and assessing the effectiveness of countermeasures, monitoring astronaut performance and fatigue during Extravehicular Activity (EVA) operations, providing pre-flight assessment of the ability of an EVA crewmember to perform a given task, improving the design of training protocols and simulations for strenuous International Space Station assembly EVA, and enabling EVA work task sequences to be planned enhancing astronaut performance and safety. Potential ground-based, medical applications of the predictive model include monitoring muscle deterioration and performance resulting from illness, establishing safety guidelines in the industry for repetitive tasks, monitoring the stages of rehabilitation for muscle-related injuries sustained in sports and accidents, and enhancing athletic performance through improved training protocols while reducing
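
A sketch of the pole statistic the record builds on: fit an AR(5) model to an SEMG segment by least squares and average the magnitudes of the model's poles (the signal here is synthetic noise standing in for one exercise repetition):

```python
import numpy as np

def mean_ar_pole_magnitude(signal, order=5):
    """Least-squares AR fit; the poles are the roots of the AR
    characteristic polynomial 1 - a1*z^-1 - ... - ap*z^-p."""
    x = np.asarray(signal, dtype=float)
    X = np.column_stack([x[order - k - 1 : len(x) - k - 1]
                         for k in range(order)])
    coeffs, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    poles = np.roots(np.concatenate(([1.0], -coeffs)))
    return np.abs(poles).mean()

rng = np.random.default_rng(1)
semg = rng.normal(size=2000)        # stand-in for one SEMG repetition
print("mean AR pole magnitude:", mean_ar_pole_magnitude(semg))
```

Tracking this statistic repetition by repetition, and regressing it against the repetition count at failure, is the essence of the predictive scheme the abstract describes.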

  17. Prediction models: the right tool for the right problem

    NARCIS (Netherlands)

    Kappen, Teus H.; Peelen, Linda M.

    2016-01-01

    PURPOSE OF REVIEW: Perioperative prediction models can help to improve personalized patient care by providing individual risk predictions to both patients and providers. However, the scientific literature on prediction model development and validation can be quite technical and challenging to

  18. Parametric Density Recalibration of a Fundamental Market Model to Forecast Electricity Prices

    Directory of Open Access Journals (Sweden)

    Antonio Bello

    2016-11-01

    This paper proposes a new approach to hybrid forecasting methodology, characterized as the statistical recalibration of forecasts from fundamental market price formation models. Such hybrid methods based upon fundamentals are particularly appropriate to medium term forecasting and in this paper the application is to month-ahead, hourly prediction of electricity wholesale prices in Spain. The recalibration methodology is innovative in seeking to perform the recalibration into parametrically defined density functions. The density estimation method selects from a wide diversity of general four-parameter distributions to fit hourly spot prices, in which the first four moments are dynamically estimated as latent functions of the outputs from the fundamental model and several other plausible exogenous drivers. The proposed approach demonstrated its effectiveness against benchmark methods across the full range of percentiles of the price distribution and performed particularly well in the tails.

  19. Tumor Microvessel Density as a Potential Predictive Marker for Bevacizumab Benefit: GOG-0218 Biomarker Analyses.

    Science.gov (United States)

    Bais, Carlos; Mueller, Barbara; Brady, Mark F; Mannel, Robert S; Burger, Robert A; Wei, Wei; Marien, Koen M; Kockx, Mark M; Husain, Amreen; Birrer, Michael J

    2017-11-01

    Combining bevacizumab with frontline chemotherapy statistically significantly improved progression-free survival (PFS) but not overall survival (OS) in the phase III GOG-0218 trial. Evaluation of candidate biomarkers was an exploratory objective. Patients with stage III (incompletely resected) or IV ovarian cancer were randomly assigned to receive six chemotherapy cycles with placebo or bevacizumab followed by single-agent placebo or bevacizumab. Five candidate tumor biomarkers were assessed by immunohistochemistry. The biomarker-evaluable population was categorized into high or low biomarker-expressing subgroups using median and quartile cutoffs. Associations between biomarker expression and efficacy were analyzed. All statistical tests were two-sided. The biomarker-evaluable population (n = 980) comprising 78.5% of the intent-to-treat population had representative baseline characteristics and efficacy outcomes. Neither prognostic nor predictive associations were seen for vascular endothelial growth factor (VEGF) receptor-2, neuropilin-1, or MET. Higher microvessel density (MVD; measured by CD31) showed predictive value for PFS (hazard ratio [HR] for bevacizumab vs placebo = 0.40, 95% confidence interval [CI] = 0.29 to 0.54, vs 0.80, 95% CI = 0.59 to 1.07, for high vs low MVD, respectively, P interaction = .003) and OS (HR = 0.67, 95% CI = 0.51 to 0.88, vs 1.10, 95% CI = 0.84 to 1.44, P interaction = .02). Tumor VEGF-A was not predictive for PFS but showed potential predictive value for OS using a third-quartile cutoff for high VEGF-A expression. These retrospective tumor biomarker analyses suggest a positive association between density of vascular endothelial cells (the predominant cell type expressing VEGF receptors) and tumor VEGF-A levels and magnitude of bevacizumab effect in ovarian cancer. The potential predictive value of MVD (CD31) and tumor VEGF-A is consistent with a mechanism of action driven by VEGF-A signaling blockade.

  20. Neuro-fuzzy modeling in bankruptcy prediction

    Directory of Open Access Journals (Sweden)

    Vlachos D.

    2003-01-01

    For the past 30 years the problem of bankruptcy prediction has been thoroughly studied. From the paper of Altman in 1968 to the recent papers in the '90s, the progress of prediction accuracy was not satisfactory. This paper investigates an alternative modeling of the system (firm), combining neural networks and fuzzy controllers, i.e. using neuro-fuzzy models. Classical modeling is based on mathematical models that describe the behavior of the firm under consideration. The main idea of fuzzy control, on the other hand, is to build a model of a human control expert who is capable of controlling the process without thinking in terms of a mathematical model. This control expert specifies his control action in the form of linguistic rules. These control rules are translated into the framework of fuzzy set theory, providing a calculus which can simulate the behavior of the control expert and enhance its performance. The accuracy of the model is studied using datasets from previous research papers.

  1. Phase-field-based lattice Boltzmann modeling of large-density-ratio two-phase flows

    Science.gov (United States)

    Liang, Hong; Xu, Jiangrong; Chen, Jiangxing; Wang, Huili; Chai, Zhenhua; Shi, Baochang

    2018-03-01

    In this paper, we present a simple and accurate lattice Boltzmann (LB) model for immiscible two-phase flows, which is able to deal with large density contrasts. This model utilizes two LB equations, one of which is used to solve the conservative Allen-Cahn equation, and the other is adopted to solve the incompressible Navier-Stokes equations. A forcing distribution function is elaborately designed in the LB equation for the Navier-Stokes equations, which makes it much simpler than the existing LB models. In addition, the proposed model can achieve superior numerical accuracy compared with previous Allen-Cahn-type LB models. Several benchmark two-phase problems, including static droplet, layered Poiseuille flow, and spinodal decomposition, are simulated to validate the present LB model. It is found that the present model achieves spurious velocities that are relatively small for the LB community, and the obtained numerical results also show good agreement with the analytical solutions or some available results. Lastly, we use the present model to investigate the droplet impact on a thin liquid film with a large density ratio of 1000 and the Reynolds number ranging from 20 to 500. The fascinating phenomenon of droplet splashing is successfully reproduced by the present model, and the numerically predicted spreading radius is found to obey the power law reported in the literature.
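
The interface-capturing half of such a model can be illustrated without the lattice Boltzmann machinery. Below is a 1D finite-difference relaxation of the conservative Allen-Cahn equation toward its tanh profile; this is a sketch of the target equation, not the authors' LB scheme, and the mobility and width values are arbitrary:

```python
import numpy as np

# d(phi)/dt = M * d/dx [ d(phi)/dx - 4*phi*(1-phi)/W * n ],  n = sign(dphi/dx)
nx, dx, dt = 200, 1.0, 0.1
M, W = 0.1, 4.0                     # mobility and interface width (assumed)
x = np.arange(nx) * dx
phi = np.clip((x - 100.0) / 30.0 + 0.5, 0.0, 1.0)   # smeared initial interface

for _ in range(5000):
    dphi = np.gradient(phi, dx)
    flux = dphi - 4.0 * phi * (1.0 - phi) / W * np.sign(dphi)
    phi += dt * M * np.gradient(flux, dx)

# The profile relaxes toward 0.5 * (1 + tanh(2 * (x - x0) / W)).
print(phi[95:106].round(3))
```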

  2. Estimation and Modeling of Enceladus Plume Jet Density Using Reaction Wheel Control Data

    Science.gov (United States)

    Lee, Allan Y.; Wang, Eric K.; Pilinski, Emily B.; Macala, Glenn A.; Feldman, Antonette

    2010-01-01

    The Cassini spacecraft was launched on October 15, 1997 by a Titan 4B launch vehicle. After an interplanetary cruise of almost seven years, it arrived at Saturn on June 30, 2004. In 2005, Cassini completed three flybys of Enceladus, a small, icy satellite of Saturn. Observations made during these flybys confirmed the existence of a water vapor plume in the south polar region of Enceladus. Five additional low-altitude flybys of Enceladus were successfully executed in 2008-9 to better characterize these watery plumes. The first of these flybys was the 50-km Enceladus-3 (E3) flyby executed on March 12, 2008. During the E3 flyby, the spacecraft attitude was controlled by a set of three reaction wheels. During the flyby, multiple plume jets imparted disturbance torque on the spacecraft resulting in small but visible attitude control errors. Using the known and unique transfer function between the disturbance torque and the attitude control error, the collected attitude control error telemetry could be used to estimate the disturbance torque. The effectiveness of this methodology is confirmed using the E3 telemetry data. Given good estimates of spacecraft's projected area, center of pressure location, and spacecraft velocity, the time history of the Enceladus plume density is reconstructed accordingly. The 1-sigma uncertainty of the estimated density is 7.7%. Next, we modeled the density due to each plume jet as a function of both the radial and angular distances of the spacecraft from the plume source. We also conjecture that the total plume density experienced by the spacecraft is the sum of the component plume densities. By comparing the time history of the reconstructed E3 plume density with that predicted by the plume model, values of the plume model parameters are determined. Results obtained are compared with those determined by other Cassini science instruments.
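
A sketch of the density-reconstruction step, assuming a simple free-molecular drag law links the recovered disturbance torque to plume density; every number below is an illustrative stand-in, not a Cassini value:

```python
def plume_density(torque, velocity, area, moment_arm, drag_coeff=2.1):
    """Invert the drag torque for density:
    T = 0.5 * rho * v^2 * Cd * A * L  =>  rho = 2T / (v^2 * Cd * A * L)."""
    return 2.0 * torque / (velocity**2 * drag_coeff * area * moment_arm)

# Hypothetical snapshot: torque from attitude-control error telemetry,
# flyby speed, projected area, and center-of-pressure moment arm.
rho = plume_density(torque=2.0e-3, velocity=14_000.0,
                    area=20.0, moment_arm=2.0)
print(f"plume density ~ {rho:.2e} kg/m^3")
```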

  3. High-Density Signal Interface Electromagnetic Radiation Prediction for Electromagnetic Compatibility Evaluation.

    Energy Technology Data Exchange (ETDEWEB)

    Halligan, Matthew

    2017-11-01

    Radiated power calculation approaches for practical scenarios of incomplete high-density interface characterization information and incomplete incident power information are presented. The suggested approaches build upon a method that characterizes power losses through the definition of power loss constant matrices. Potential radiated power estimates include using total power loss information, partial radiated power loss information, worst case analysis, and statistical bounding analysis. A method is also proposed to calculate radiated power when incident power information is not fully known for non-periodic signals at the interface. Incident data signals are modeled from a two-state Markov chain where bit state probabilities are derived. The total spectrum for windowed signals is postulated as the superposition of spectra from individual pulses in a data sequence. Statistical bounding methods are proposed as a basis for the radiated power calculation due to the statistical calculation complexity to find a radiated power probability density function.
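
A sketch of the two ingredients the record names for non-periodic signals: a two-state Markov chain for the bit states, with its stationary probabilities, and a spectrum built by superposing the pulses of a simulated data sequence (the transition probabilities and pulse shape are assumptions):

```python
import numpy as np

# P(next=1 | current=0) and P(next=0 | current=1); values are assumed.
p01, p10 = 0.3, 0.4
P = np.array([[1 - p01, p01],
              [p10, 1 - p10]])

# Stationary bit-state probabilities of the two-state chain.
pi = np.array([p10, p01]) / (p01 + p10)
print("stationary P(bit=0), P(bit=1):", pi)

# Simulate a bit sequence, give each bit a rectangular pulse, and take
# the spectrum of the windowed superposition of all pulses.
rng = np.random.default_rng(0)
bits = [0]
for _ in range(1023):
    bits.append(rng.choice(2, p=P[bits[-1]]))
pulse = np.ones(8)                          # one-bit pulse shape
signal = np.concatenate([b * pulse for b in bits])
spectrum = np.abs(np.fft.rfft(signal * np.hanning(signal.size))) ** 2
print("total spectral power:", spectrum.sum())
```

Bounding the radiated power then amounts to propagating this incident spectrum through the interface's power-loss constant matrices, per the record.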

  4. Predictive Models for Carcinogenicity and Mutagenicity ...

    Science.gov (United States)

    Mutagenicity and carcinogenicity are endpoints of major environmental and regulatory concern. These endpoints are also important targets for development of alternative methods for screening and prediction due to the large number of chemicals of potential concern and the tremendous cost (in time, money, animals) of rodent carcinogenicity bioassays. Both mutagenicity and carcinogenicity involve complex, cellular processes that are only partially understood. Advances in technologies and generation of new data will permit a much deeper understanding. In silico methods for predicting mutagenicity and rodent carcinogenicity based on chemical structural features, along with current mutagenicity and carcinogenicity data sets, have performed well for local prediction (i.e., within specific chemical classes), but are less successful for global prediction (i.e., for a broad range of chemicals). The predictivity of in silico methods can be improved by improving the quality of the database and endpoints used for modelling. In particular, in vitro assays for clastogenicity need to be improved to reduce false positives (relative to rodent carcinogenicity) and to detect compounds that do not interact directly with DNA or have epigenetic activities. New assays emerging to complement or replace some of the standard assays include Vitotox™, GreenScreenGC, and RadarScreen. The needs of industry and regulators to assess thousands of compounds necessitate the development of high-t

  5. Using broad landscape level features to predict redd densities of steelhead trout (Oncorhynchus mykiss) and Chinook Salmon (Oncorhynchus tshawytscha) in the Methow River watershed, Washington

    Science.gov (United States)

    Romine, Jason G.; Perry, Russell W.; Connolly, Patrick J.

    2013-01-01

    We used broad-scale landscape feature variables to model redd densities of spring Chinook salmon (Oncorhynchus tshawytscha) and steelhead trout (Oncorhynchus mykiss) in the Methow River watershed. Redd densities were estimated from redd counts conducted from 2005 to 2007 and 2009 for steelhead trout and 2005 to 2009 for spring Chinook salmon. These densities were modeled using generalized linear mixed models. Variables examined included primary and secondary geology type, habitat type, flow type, sinuosity, and slope of stream channel. In addition, we included spring effect and hatchery effect variables to account for high densities of redds near known springs and hatchery outflows. Variables were associated with National Hydrography Database reach designations for modeling redd densities within each reach. Reaches were assigned a dominant habitat type, geology, mean slope, and sinuosity. The best fit model for spring Chinook salmon included sinuosity, critical slope, habitat type, flow type, and hatchery effect. Flow type, slope, and habitat type variables accounted for most of the variation in the data. The best fit model for steelhead trout included year, habitat type, flow type, hatchery effect, and spring effect. The spring effect, flow type, and hatchery effect variables explained most of the variation in the data. Our models illustrate how broad-scale landscape features may be used to predict spawning habitat over large areas where fine-scale data may be lacking.
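
A sketch of reach-level density modelling with landscape covariates, here as a Poisson GLM with a reach-length offset via statsmodels (the study fitted generalized linear mixed models; the random effects and the real covariates are omitted to keep the sketch short):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 120

# Hypothetical reach covariates standing in for the record's features.
slope = rng.uniform(0.0, 0.08, n)
sinuosity = rng.uniform(1.0, 2.5, n)
pool_riffle = rng.integers(0, 2, n)            # habitat-type indicator
X = sm.add_constant(np.column_stack([slope, sinuosity, pool_riffle]))

# Synthetic redd counts: log link with a log(reach length) offset.
reach_km = rng.uniform(0.5, 3.0, n)
mu = np.exp(0.5 - 25.0 * slope + 0.6 * sinuosity + 0.8 * pool_riffle)
redds = rng.poisson(mu * reach_km)

fit = sm.GLM(redds, X, family=sm.families.Poisson(),
             offset=np.log(reach_km)).fit()
print(fit.params)    # should roughly recover the planted coefficients
```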

  6. Probabilistic predictive modelling of carbon nanocomposites for medical implants design.

    Science.gov (United States)

    Chua, Matthew; Chui, Chee-Kong

    2015-04-01

    Modelling of the mechanical properties of carbon nanocomposites based on input variables like percentage weight of Carbon Nanotubes (CNT) inclusions is important for the design of medical implants and other structural scaffolds. Current constitutive models for the mechanical properties of nanocomposites may not predict well due to differences in conditions, fabrication techniques and inconsistencies in reagents properties used across industries and laboratories. Furthermore, the mechanical properties of the designed products are not deterministic, but exist as a probabilistic range. A predictive model based on a modified probabilistic surface response algorithm is proposed in this paper to address this issue. Tensile testing of three groups of different CNT weight fractions of carbon nanocomposite samples displays scattered stress-strain curves, with the instantaneous stresses assumed to vary according to a normal distribution at a specific strain. From the probability density function of the experimental data, a two-factor Central Composite Design (CCD) experimental matrix based on strain and CNT weight fraction inputs with their corresponding stress distributions was established. Monte Carlo simulation was carried out on this design matrix to generate a predictive probabilistic polynomial equation. The equation and method were subsequently validated with more tensile experiments and Finite Element (FE) studies. The method was subsequently demonstrated in the design of an artificial tracheal implant. Our algorithm provides an effective way to accurately model the mechanical properties in implants of various compositions based on experimental data of samples. Copyright © 2015 Elsevier Ltd. All rights reserved.
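
A minimal sketch of the CCD-plus-Monte-Carlo idea: draw stress samples at each design point and fit a quadratic response surface to them; the stress-generating function below is an invented stand-in for the measured distributions:

```python
import numpy as np

# Two-factor central composite design in coded units: factorial corners,
# axial points at +/- sqrt(2), and repeated centre points.
ax = np.sqrt(2.0)
ccd = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                [-ax, 0], [ax, 0], [0, -ax], [0, ax],
                [0, 0], [0, 0], [0, 0]])

rng = np.random.default_rng(0)

def stress_sample(strain, cnt_frac, n=1000):
    """Stand-in for the stress distribution at a design point: normal,
    with mean and spread depending on both coded factors (assumed)."""
    mean = 40.0 + 12.0 * strain + 8.0 * cnt_frac + 3.0 * strain * cnt_frac
    return rng.normal(mean, 1.5 + 0.5 * abs(cnt_frac), size=n)

# Monte Carlo over the design, then a quadratic surface in the factors.
rows, y = [], []
for s, w in ccd:
    draws = stress_sample(s, w)
    rows += [[1.0, s, w, s * w, s**2, w**2]] * draws.size
    y += draws.tolist()
beta, *_ = np.linalg.lstsq(np.array(rows), np.array(y), rcond=None)
print("response-surface coefficients:", np.round(beta, 2))
```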

  7. Disease Prediction Models and Operational Readiness

    Energy Technology Data Exchange (ETDEWEB)

    Corley, Courtney D.; Pullum, Laura L.; Hartley, David M.; Benedum, Corey M.; Noonan, Christine F.; Rabinowitz, Peter M.; Lancaster, Mary J.

    2014-03-19

    INTRODUCTION: The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. One of the primary goals of this research was to characterize the viability of biosurveillance models to provide operationally relevant information for decision makers to identify areas for future research. Two critical characteristics differentiate this work from other infectious disease modeling reviews. First, we reviewed models that attempted to predict the disease event, not merely its transmission dynamics. Second, we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). Methods: We searched dozens of commercial and government databases and harvested Google search results for eligible models utilizing terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. The publication dates of the search results returned are bounded by the dates of coverage of each database and the date on which the search was performed; however, all searching was completed by December 31, 2010. This returned 13,767 webpages and 12,152 citations. After de-duplication and removal of extraneous material, a core collection of 6,503 items was established and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. Next, PNNL’s IN-SPIRE visual analytics software was used to cross-correlate these publications with the definition for a biosurveillance model, resulting in the selection of 54 documents that matched the criteria. Ten of these documents, however, dealt purely with disease spread models, inactivation of bacteria, or the modeling of human immune system responses to pathogens rather than predicting disease events. As a result, we systematically reviewed 44 papers and the

  8. Nonlinear model predictive control theory and algorithms

    CERN Document Server

    Grüne, Lars

    2017-01-01

    This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine—the core of any nonlinear model predictive controller—works. Accompanying software in MATLAB® and C++ (downloadable from extras.springer.com/), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the possibilities and limitations of NMPC. T...
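    The receding-horizon principle at the heart of NMPC can be condensed into a few lines. The toy sketch below (not taken from the book's accompanying MATLAB/C++ software) regulates an assumed scalar nonlinear system by repeatedly minimizing a finite-horizon cost with a terminal penalty and applying only the first input; the dynamics, weights, and horizon are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x, u: x + 0.1 * (x**2 + u)   # toy discrete-time nonlinear dynamics (assumption)
N, Q, R = 10, 1.0, 0.1                  # horizon length and stage-cost weights

def cost(u_seq, x0):
    x, J = x0, 0.0
    for u in u_seq:                     # simulate over the horizon, accumulating stage costs
        J += Q * x**2 + R * u**2
        x = f(x, u)
    return J + 10.0 * x**2              # terminal penalty in lieu of a terminal constraint

x = 1.0
for k in range(30):                     # closed loop: optimize, apply the first input, repeat
    res = minimize(cost, np.zeros(N), args=(x,), method="SLSQP")
    x = f(x, res.x[0])
print("final state:", x)
```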

  9. A predictive model for dimensional errors in fused deposition modeling

    DEFF Research Database (Denmark)

    Stolfi, A.

    2015-01-01

    This work concerns the effect of deposition angle (α) and layer thickness (L) on the dimensional performance of FDM parts, using a predictive model based on the geometrical description of the FDM filament profile. An experimental validation over the whole α range from 0° to 177° at 3° steps and two values of L (0.254 mm, 0.330 mm) was produced by comparing predicted values with external face-to-face measurements. After removing outliers, the results show that the developed two-parameter model can serve as a tool for modeling the FDM dimensional behavior over a wide range of deposition angles.

  10. A predictive model for dimensional errors in fused deposition modeling

    DEFF Research Database (Denmark)

    Stolfi, A.

    2015-01-01

    This work concerns the effect of deposition angle (α) and layer thickness (L) on the dimensional performance of FDM parts, using a predictive model based on the geometrical description of the FDM filament profile. An experimental validation over the whole α range from 0° to 177° at 3° steps and two values of L (0.254 mm, 0.330 mm) was produced by comparing predicted values with external face-to-face measurements. After removing outliers, the results show that the developed two-parameter model can serve as a tool for modeling the FDM dimensional behavior over a wide range of deposition angles.

  11. Predictive Modeling in Actinide Chemistry and Catalysis

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Ping [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-16

    These are slides from a presentation on predictive modeling in actinide chemistry and catalysis. The following topics are covered in these slides: Structures, bonding, and reactivity (bonding can be quantified by optical probes and theory, and electronic structures and reaction mechanisms of actinide complexes); Magnetic resonance properties (transition metal catalysts with multi-nuclear centers, and NMR/EPR parameters); Moving to more complex systems (surface chemistry of nanomaterials, and interactions of ligands with nanoparticles); Path forward and conclusions.

  12. Predictive modelling of evidence informed teaching

    OpenAIRE

    Zhang, Dell; Brown, C.

    2017-01-01

    In this paper, we analyse questionnaire survey data collected from 79 English primary schools about the situation of evidence-informed teaching, where the evidence could come from research journals or conferences. Specifically, we build a predictive model to see what external factors could help to close the gap between teachers’ belief and behaviour in evidence-informed teaching, which is the first of its kind to our knowledge. The major challenge, from the data mining perspective, is th...

  13. A Predictive Model for Cognitive Radio

    Science.gov (United States)

    2006-09-14

    Vadde et al. have applied response surface methodology to produce a model for predicting the response in a given situation, studying factor interaction on service delivery in mobile ad hoc networks. Cited works include: [3] K. K. Vadde and V. R. Syrotiuk, "Factor interaction on service delivery in mobile ad hoc networks"; [4] K. K. Vadde, M.-V. R. Syrotiuk, and D. C. Montgomery.

  14. The efficiency of the RULES-4 classification learning algorithm in predicting the density of agents

    Directory of Open Access Journals (Sweden)

    Ziad Salem

    2014-12-01

    Full Text Available Learning is the act of obtaining new, or modifying existing, knowledge, behaviours, skills or preferences. The ability to learn is found in humans, other organisms and some machines. Learning is always based on some sort of observations or data, such as examples, direct experience or instruction. This paper presents a classification algorithm that learns the density of agents in an arena from the measurements of the six proximity sensors of combined actuator-sensor units (CASUs). Rules induced by the learning algorithm are presented; the algorithm was trained with datasets based on the CASUs’ sensor data streams collected during a number of experiments with Bristlebots (agents) in the arena (environment). It was found that the set of rules generated by the learning algorithm is able to predict the number of Bristlebots in the arena from the CASUs’ sensor readings with satisfactory accuracy.
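    RULES-4 itself does not ship as a packaged library (an assumption here), but the flavour of inducing IF-THEN rules that map sensor readings to an agent-density class can be imitated with a decision tree, whose branches read directly as rules. The sensor data below are synthetic.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
# Hypothetical data: six proximity-sensor readings per sample; the class is the
# number of agents (0-3), and readings rise with density by construction.
n = 400
density = rng.integers(0, 4, n)
X = rng.normal(0.2 * density[:, None], 0.1, (n, 6))

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, density)
# The tree prints as nested IF-THEN rules over the sensor readings
print(export_text(tree, feature_names=[f"sensor_{i}" for i in range(6)]))
```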

  15. Tectonic predictions with mantle convection models

    Science.gov (United States)

    Coltice, Nicolas; Shephard, Grace E.

    2018-04-01

    Over the past 15 yr, numerical models of convection in Earth's mantle have made a leap forward: they can now produce self-consistent plate-like behaviour at the surface together with deep mantle circulation. These digital tools provide a new window into the intimate connections between plate tectonics and mantle dynamics, and can therefore be used for tectonic predictions, in principle. This contribution explores this assumption. First, initial conditions at 30, 20, 10 and 0 Ma are generated by driving a convective flow with imposed plate velocities at the surface. We then compute instantaneous mantle flows in response to the guessed temperature fields without imposing any boundary conditions. Plate boundaries self-consistently emerge at correct locations with respect to reconstructions, except for small plates close to subduction zones. As already observed for other types of instantaneous flow calculations, the structure of the top boundary layer and upper-mantle slab is the dominant character that leads to accurate predictions of surface velocities. Perturbations of the rheological parameters have little impact on the resulting surface velocities. We then compute fully dynamic model evolution from 30 and 10 to 0 Ma, without imposing plate boundaries or plate velocities. Contrary to instantaneous calculations, errors in kinematic predictions are substantial, although the plate layout and kinematics in several areas remain consistent with the expectations for the Earth. For these calculations, varying the rheological parameters makes a difference for plate boundary evolution. Also, identified errors in initial conditions contribute to first-order kinematic errors. This experiment shows that the tectonic predictions of dynamic models over 10 My are highly sensitive to uncertainties of rheological parameters and initial temperature field in comparison to instantaneous flow calculations. Indeed, the initial conditions and the rheological parameters can be good enough

  16. Model representations of kerogen structures: An insight from density functional theory calculations and spectroscopic measurements.

    Science.gov (United States)

    Weck, Philippe F; Kim, Eunja; Wang, Yifeng; Kruichak, Jessica N; Mills, Melissa M; Matteo, Edward N; Pellenq, Roland J-M

    2017-08-01

    Molecular structures of kerogen control hydrocarbon production in unconventional reservoirs. Significant progress has been made in developing model representations of various kerogen structures. These models have been widely used for the prediction of gas adsorption and migration in the shale matrix. However, using density functional perturbation theory (DFPT) calculations and vibrational spectroscopic measurements, we show here that a large gap may still remain between the existing model representations and actual kerogen structures, therefore calling for new model development. Using DFPT, we calculated Fourier transform infrared (FTIR) spectra for the six most widely used kerogen structure models. The computed spectra were then systematically compared to the FTIR absorption spectra collected for kerogen samples isolated from the Mancos, Woodford and Marcellus formations, representing a wide range of kerogen origins and maturation conditions. Limited agreement between the model predictions and the measurements highlights that the existing kerogen models may still miss some key features of structural representation. A combination of DFPT calculations with spectroscopic measurements may provide a useful diagnostic tool for assessing the adequacy of a proposed structural model as well as for future model development. This approach may eventually help develop comprehensive infrared (IR) fingerprints for tracing kerogen evolution.

  17. Predictive Modeling of the CDRA 4BMS

    Science.gov (United States)

    Coker, Robert F.; Knox, James C.

    2016-01-01

    As part of NASA's Advanced Exploration Systems (AES) program and the Life Support Systems Project (LSSP), fully predictive models of the Four Bed Molecular Sieve (4BMS) of the Carbon Dioxide Removal Assembly (CDRA) on the International Space Station (ISS) are being developed. This virtual laboratory will be used to help reduce mass, power, and volume requirements for future missions. In this paper we describe current and planned modeling developments in the area of carbon dioxide removal to support future crewed Mars missions as well as the resolution of anomalies observed in the ISS CDRA.

  18. Fluid and gyrokinetic modelling of particle transport in plasmas with hollow density profiles

    International Nuclear Information System (INIS)

    Tegnered, D; Oberparleiter, M; Nordman, H; Strand, P

    2016-01-01

    Hollow density profiles occur in connection with pellet fuelling and L–H transitions. A positive density gradient could potentially stabilize the turbulence or change the relation between convective and diffusive fluxes, thereby reducing the turbulent transport of particles towards the center and making the fuelling scheme inefficient. In the present work, the particle transport driven by ITG/TE mode turbulence in regions of hollow density profiles is studied by fluid as well as gyrokinetic simulations. The fluid model used, an extended version of the Weiland transport model, the Extended Drift Wave Model (EDWM), incorporates an arbitrary number of ion species in a multi-fluid description and an extended wavelength spectrum. The fluid model, which is fast and hence suitable for use in predictive simulations, is compared to gyrokinetic simulations using the code GENE. Typical tokamak parameters are used, based on the Cyclone Base Case. Parameter scans in key plasma parameters like plasma β, R/L_T, and magnetic shear are investigated. It is found that β in particular has a stabilizing effect in the negative R/L_n region: both nonlinear GENE and EDWM show a decrease in inward flux for negative R/L_n and a change of direction from inward to outward for positive R/L_n. This might have serious consequences for pellet fuelling of high-β plasmas. (paper)

  19. Addressing Conceptual Model Uncertainty in the Evaluation of Model Prediction Errors

    Science.gov (United States)

    Carrera, J.; Pool, M.

    2014-12-01

    Model predictions are uncertain because of errors in model parameters, future forcing terms, and model concepts. The latter remain the largest and most difficult to assess source of uncertainty in long-term model predictions. We first review existing methods to evaluate conceptual model uncertainty. We argue that they are highly sensitive to the ingenuity of the modeler, in the sense that they rely on the modeler's ability to propose alternative model concepts. Worse, we find that the standard practice of stochastic methods leads to poor, potentially biased and often too optimistic estimation of actual model errors. This is bad news because stochastic methods are purported to properly represent uncertainty. We contend that the problem does not lie in the stochastic approach itself, but in the way it is applied. Specifically, stochastic inversion methodologies, which demand quantitative information, tend to ignore geological understanding, which is conceptually rich. We illustrate some of these problems with the application to the Mar del Plata aquifer, where extensive data are available for nearly a century. Geologically based models, where spatial variability is handled through zonation, yield calibration fits similar to geostatistically based models, but much better predictions. In fact, the appearance of the stochastic T fields is similar to the geologically based models only in areas with a high density of data. We take this finding to illustrate the ability of stochastic models to accommodate many data, but also, ironically, their inability to address conceptual model uncertainty. In fact, stochastic model realizations tend to be too close to the "most likely" one (i.e., they do not really realize the full conceptual uncertainty). The second part of the presentation is devoted to arguing that acknowledging model uncertainty may lead to qualitatively different decisions than just working with "most likely" model predictions. Therefore, efforts should concentrate on

  20. Practical steady-state temperature prediction of active embedded chips into high density electronic board

    International Nuclear Information System (INIS)

    Monier-Vinard, Eric; Rogie, Brice; Bissuel, Valentin; Daniel, Olivier; Nguyen, Nhat-Minh; Laraqi, Najib

    2016-01-01

    Printed Wiring Board die embedding technology is an innovative packaging alternative that achieves a very high degree of integration by stacking multiple core layers housing active chips. Nevertheless, this increases the thermal management challenges by concentrating heat dissipation at the heart of the substrate and exacerbates the need for adequate cooling. In order to allow electronic designers to analyse early the limits of in-layer power dissipation, depending on the chip location inside the board, various analytical thermal modelling approaches were investigated. The buried active chips can be represented using surface or volumetric heating sources, according to the expected accuracy. Moreover, the current work compares the volumetric heating source analytical model with state-of-the-art detailed numerical models of several embedded chip configurations, and discusses whether it is necessary to simulate in full detail the embedded chips as well as the surrounding layers and micro-via structures of the substrate. The results highlight that the thermal behaviour predictions of the analytical model are within ±5% relative error, demonstrating its relevance for modelling an embedded chip and its neighbouring heating chips or components. Furthermore, this predictive model proves to be in good agreement with an experimental characterization performed on a thermal test vehicle. To summarize, the developed analytical approach offers several practical solutions to achieve a more efficient design and to identify potential board cooling issues early. (paper)

  1. Use of a mixture statistical model in studying malaria vectors density.

    Science.gov (United States)

    Boussari, Olayidé; Moiroux, Nicolas; Iwaz, Jean; Djènontin, Armel; Bio-Bangana, Sahabi; Corbel, Vincent; Fonton, Noël; Ecochard, René

    2012-01-01

    Vector control is a major step in the process of malaria control and elimination. This requires vector counts and appropriate statistical analyses of these counts. However, vector counts are often overdispersed. A non-parametric mixture of Poisson model (NPMP) is proposed to allow for overdispersion and better describe vector distribution. Mosquito collections using Human Landing Catches, as well as collection of environmental and climatic data, were carried out from January to December 2009 in 28 villages in Southern Benin. An NPMP regression model with "village" as a random effect is used to test statistical correlations between malaria vector density and environmental and climatic factors. Furthermore, the villages were ranked using the latent classes derived from the NPMP model. Based on this classification of the villages, the impacts of four vector control strategies implemented in the villages were compared. Vector counts were highly variable and overdispersed, with an important proportion of zeros (75%). The NPMP model predicted the observed values well and showed that: i) proximity to a freshwater body, market gardening, and high levels of rain were associated with high vector density; ii) water conveyance, cattle breeding, and vegetation index were associated with low vector density. The 28 villages could then be ranked according to the mean vector number as estimated by the random part of the model after adjustment on all covariates. The NPMP model made it possible to describe the distribution of the vector across the study area. The villages were ranked according to the mean vector density after taking into account the most important covariates. This study demonstrates the necessity and possibility of adapting methods of vector counting and sampling to each setting.
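    The NPMP model places latent mass points on the Poisson rate; a minimal finite-mixture stand-in (without the village random effect or covariates) can be fit with expectation-maximization. Everything below, including the zero-inflated toy counts, is illustrative.

```python
import numpy as np

def poisson_mixture_em(counts, K=3, iters=200, seed=0):
    """EM for a K-component Poisson mixture -- a simplified stand-in for the
    NPMP model (no covariates, no village random effect)."""
    rng = np.random.default_rng(seed)
    lam = rng.uniform(0.5, 1.5, K) * (counts.mean() + 1) + np.arange(K)
    pi = np.full(K, 1.0 / K)
    for _ in range(iters):
        # E-step: responsibilities from the log Poisson pmf (constant log(y!) omitted)
        logp = counts[:, None] * np.log(lam) - lam + np.log(pi)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update mixing weights and component rates
        pi = np.clip(r.mean(axis=0), 1e-8, None)
        lam = np.clip((r * counts[:, None]).sum(axis=0) / r.sum(axis=0), 1e-8, None)
    return pi, lam

# Overdispersed toy counts with 75% zeros, mimicking the vector-count data
rng = np.random.default_rng(42)
y = np.where(rng.random(1000) < 0.75, 0, rng.poisson(8.0, 1000))
pi, lam = poisson_mixture_em(y)
print("weights:", pi.round(3), "rates:", lam.round(2))
```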

  2. Use of a mixture statistical model in studying malaria vectors density.

    Directory of Open Access Journals (Sweden)

    Olayidé Boussari

    Full Text Available Vector control is a major step in the process of malaria control and elimination. This requires vector counts and appropriate statistical analyses of these counts. However, vector counts are often overdispersed. A non-parametric mixture of Poisson model (NPMP) is proposed to allow for overdispersion and better describe vector distribution. Mosquito collections using Human Landing Catches, as well as collection of environmental and climatic data, were carried out from January to December 2009 in 28 villages in Southern Benin. An NPMP regression model with "village" as a random effect is used to test statistical correlations between malaria vector density and environmental and climatic factors. Furthermore, the villages were ranked using the latent classes derived from the NPMP model. Based on this classification of the villages, the impacts of four vector control strategies implemented in the villages were compared. Vector counts were highly variable and overdispersed, with an important proportion of zeros (75%). The NPMP model predicted the observed values well and showed that: i) proximity to a freshwater body, market gardening, and high levels of rain were associated with high vector density; ii) water conveyance, cattle breeding, and vegetation index were associated with low vector density. The 28 villages could then be ranked according to the mean vector number as estimated by the random part of the model after adjustment on all covariates. The NPMP model made it possible to describe the distribution of the vector across the study area. The villages were ranked according to the mean vector density after taking into account the most important covariates. This study demonstrates the necessity and possibility of adapting methods of vector counting and sampling to each setting.

  3. Element-specific density profiles in interacting biomembrane models

    International Nuclear Information System (INIS)

    Schneck, Emanuel; Rodriguez-Loureiro, Ignacio; Bertinetti, Luca; Gochev, Georgi; Marin, Egor; Novikov, Dmitri; Konovalov, Oleg

    2017-01-01

    Surface interactions involving biomembranes, such as cell–cell interactions or membrane contacts inside cells play important roles in numerous biological processes. Structural insight into the interacting surfaces is a prerequisite to understand the interaction characteristics as well as the underlying physical mechanisms. Here, we work with simplified planar experimental models of membrane surfaces, composed of lipids and lipopolymers. Their interaction is quantified in terms of pressure–distance curves using ellipsometry at controlled dehydrating (interaction) pressures. For selected pressures, their internal structure is investigated by standing-wave x-ray fluorescence (SWXF). This technique yields specific density profiles of the chemical elements P and S belonging to lipid headgroups and polymer chains, as well as counter-ion profiles for charged surfaces. (paper)

  4. Mechanical properties of zirconium alloys and zirconium hydrides predicted from density functional perturbation theory.

    Science.gov (United States)

    Weck, Philippe F; Kim, Eunja; Tikare, Veena; Mitchell, John A

    2015-11-21

    The elastic properties and mechanical stability of zirconium alloys and zirconium hydrides have been investigated within the framework of density functional perturbation theory. Results show that the lowest-energy cubic Pn-3m polymorph of δ-ZrH1.5 does not satisfy all the Born requirements for mechanical stability, unlike its nearly degenerate tetragonal P42/mcm polymorph. Elastic moduli predicted with the Voigt-Reuss-Hill approximations suggest that the mechanical stability of α-Zr, Zr-alloy and Zr-hydride polycrystalline aggregates is limited by the shear modulus. According to both Pugh's and Poisson's ratios, α-Zr, Zr-alloy and Zr-hydride polycrystalline aggregates can be considered ductile. The Debye temperatures predicted for γ-ZrH, δ-ZrH1.5 and ε-ZrH2 are θD = 299.7, 415.6 and 356.9 K, respectively, while θD = 273.6, 284.2, 264.1 and 257.1 K for the α-Zr, Zry-4, ZIRLO and M5 matrices, suggesting that Zry-4 possesses the highest micro-hardness among the Zr matrices.

  5. Predictive Modeling by the Cerebellum Improves Proprioception

    Science.gov (United States)

    Bhanpuri, Nasir H.; Okamura, Allison M.

    2013-01-01

    Because sensation is delayed, real-time movement control requires not just sensing, but also predicting limb position, a function hypothesized for the cerebellum. Such cerebellar predictions could contribute to perception of limb position (i.e., proprioception), particularly when a person actively moves the limb. Here we show that human cerebellar patients have proprioceptive deficits compared with controls during active movement, but not when the arm is moved passively. Furthermore, when healthy subjects move in a force field with unpredictable dynamics, they have active proprioceptive deficits similar to cerebellar patients. Therefore, muscle activity alone is likely insufficient to enhance proprioception and predictability (i.e., an internal model of the body and environment) is important for active movement to benefit proprioception. We conclude that cerebellar patients have an active proprioceptive deficit consistent with disrupted movement prediction rather than an inability to generally enhance peripheral proprioceptive signals during action and suggest that active proprioceptive deficits should be considered a fundamental cerebellar impairment of clinical importance. PMID:24005283

  6. Prediction of Chemical Function: Model Development and ...

    Science.gov (United States)

    The United States Environmental Protection Agency’s Exposure Forecaster (ExpoCast) project is developing both statistical and mechanism-based computational models for predicting exposures to thousands of chemicals, including those in consumer products. The high-throughput (HT) screening-level exposures developed under ExpoCast can be combined with HT screening (HTS) bioactivity data for the risk-based prioritization of chemicals for further evaluation. The functional role (e.g. solvent, plasticizer, fragrance) that a chemical performs can drive both the types of products in which it is found and the concentration in which it is present, and therefore impacts exposure potential. However, critical chemical use information (including functional role) is lacking for the majority of commercial chemicals for which exposure estimates are needed. A suite of machine-learning-based models for classifying chemicals in terms of their likely functional roles in products, based on structure, was developed. This effort required collection, curation, and harmonization of publicly available data sources of chemical functional use information from government and industry bodies. Physicochemical and structure descriptor data were generated for chemicals with function data. Machine-learning classifier models for function were then built in a cross-validated manner from the descriptor/function data using the method of random forests. The models were applied to: 1) predict chemi

  7. Gamma-Ray Pulsars Models and Predictions

    CERN Document Server

    Harding, A K

    2001-01-01

    Pulsed emission from gamma-ray pulsars originates inside the magnetosphere, from radiation by charged particles accelerated near the magnetic poles or in the outer gaps. In polar cap models, the high energy spectrum is cut off by magnetic pair production above an energy that is dependent on the local magnetic field strength. While most young pulsars with surface fields in the range B = 10^{12} - 10^{13} G are expected to have high energy cutoffs around several GeV, the gamma-ray spectra of old pulsars having lower surface fields may extend to 50 GeV. Although the gamma-ray emission of older pulsars is weaker, detecting pulsed emission at high energies from nearby sources would be an important confirmation of polar cap models. Outer gap models predict more gradual high-energy turnovers at around 10 GeV, but also predict an inverse Compton component extending to TeV energies. Detection of pulsed TeV emission, which would not survive attenuation at the polar caps, is thus an important test of outer gap models. N...

  8. A prediction model for Clostridium difficile recurrence

    Directory of Open Access Journals (Sweden)

    Francis D. LaBarbera

    2015-02-01

    Full Text Available Background: Clostridium difficile infection (CDI) is a growing problem in the community and hospital setting. Its incidence has been on the rise over the past two decades, and it is quickly becoming a major concern for the health care system. A high rate of recurrence is one of the major hurdles in the successful treatment of C. difficile infection. There have been few studies that have looked at patterns of recurrence. The studies currently available have shown a number of risk factors associated with C. difficile recurrence (CDR); however, there is little consensus on the impact of most of the identified risk factors. Methods: Our study was a retrospective chart review of 198 patients diagnosed with CDI via Polymerase Chain Reaction (PCR) from February 2009 to June 2013. In our study, we decided to use a machine learning algorithm called the Random Forest (RF) to analyze all of the factors proposed to be associated with CDR. This model is capable of making predictions based on a large number of variables, and has outperformed numerous other models and statistical methods. Results: We came up with a model that was able to accurately predict CDR with a sensitivity of 83.3%, specificity of 63.1%, and an area under the curve of 82.6%. Like other similar studies that have used the RF model, we also had very impressive results. Conclusions: We hope that in the future, machine learning algorithms, such as the RF, will see wider application.
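    For readers unfamiliar with the workflow, the sketch below trains a random forest on a synthetic cohort and reports sensitivity, specificity, and AUC; the features, labels, and effect sizes are invented and carry no clinical meaning.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(3)
# Synthetic cohort: rows = patients, columns = candidate risk factors;
# labels = recurrence yes/no (purely illustrative)
n = 198
X = rng.normal(size=(n, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, n) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)

prob = rf.predict_proba(X_te)[:, 1]
tn, fp, fn, tp = confusion_matrix(y_te, (prob > 0.5).astype(int)).ravel()
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("AUC:", roc_auc_score(y_te, prob))
```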

  9. Artificial Neural Network Model for Predicting Compressive Strength

    Directory of Open Access Journals (Sweden)

    Salim T. Yousif

    2013-05-01

    Full Text Available Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort in applying neural-network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model was successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. Testing of the model with unused data within the range of the input parameters shows that the maximum absolute error of the model is about 20% and that 88% of the outputs have absolute errors of less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results showed that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.
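    A minimal sketch of the same idea, assuming scikit-learn as a stand-in for the paper's back-propagation network: the mix proportions are generated synthetically, and strength follows an invented Abrams-like decay with the water/cement ratio, so the numbers are illustrative only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n = 300
# Synthetic mixes: cement, water, fine/coarse aggregate (kg/m3), MAS (mm), slump (mm)
X = np.column_stack([rng.uniform(250, 500, n),    # cement
                     rng.uniform(140, 220, n),    # water
                     rng.uniform(600, 900, n),    # fine aggregate
                     rng.uniform(900, 1200, n),   # coarse aggregate
                     rng.choice([10, 20, 40], n), # maximum aggregate size
                     rng.uniform(25, 200, n)])    # slump
w_c = X[:, 1] / X[:, 0]
fc = 80 * np.exp(-1.5 * w_c) + rng.normal(0, 2, n)  # invented Abrams-like strength law (MPa)

model = make_pipeline(StandardScaler(), MLPRegressor((10,), max_iter=5000, random_state=0))
model.fit(X[:200], fc[:200])
rel_err = np.abs(model.predict(X[200:]) - fc[200:]) / fc[200:]
print("share of test mixes with absolute relative error < 10%:", (rel_err < 0.10).mean())
```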

  10. Evaluating predictive models of software quality

    International Nuclear Information System (INIS)

    Ciaschini, V; Canaparo, M; Ronchieri, E; Salomoni, D

    2014-01-01

    Applications from the High Energy Physics scientific community are constantly growing and are implemented by a large number of developers. This implies strong churn in the code and an associated risk of faults, which is unavoidable as long as the software undergoes active evolution. However, the necessities of production systems run counter to this. Stability and predictability are of paramount importance; in addition, a short turn-around time for the defect discovery-correction-deployment cycle is required. A way to reconcile these opposite foci is to use a software quality model to approximate the risk before releasing a program, so as to deliver only software with a risk lower than an agreed threshold. In this article we evaluated two predictive quality models to identify the operational risk and the quality of some software products. We applied these models to the development history of several EMI packages with the intent of discovering the risk factor of each product and comparing it with its real history. We attempted to determine whether the models reasonably map reality for the applications under evaluation, and we conclude by suggesting directions for further studies.

  11. A generative model for predicting terrorist incidents

    Science.gov (United States)

    Verma, Dinesh C.; Verma, Archit; Felmlee, Diane; Pearson, Gavin; Whitaker, Roger

    2017-05-01

    A major concern in coalition peace-support operations is the incidence of terrorist activity. In this paper, we propose a generative model for the occurrence of terrorist incidents, and illustrate that an increase in diversity, as measured by the number of different social groups to which an individual belongs, is inversely correlated with the likelihood of a terrorist incident in the society. A generative model is one that can predict the likelihood of events in new contexts, as opposed to statistical models, which predict future incidents based on the history of incidents in an existing context. Generative models can be useful in planning for persistent Intelligence, Surveillance and Reconnaissance (ISR), since they allow an estimation of the regions in the theater of operation where terrorist incidents may arise, and thus can be used to better allocate the assignment and deployment of ISR assets. In this paper, we present a taxonomy of terrorist incidents, identify factors related to the occurrence of terrorist incidents, and provide a mathematical analysis calculating the likelihood of occurrence of terrorist incidents in three common real-life scenarios arising in peace-keeping operations.
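    The paper's model details are not reproduced in the abstract; a minimal generative sketch consistent with the stated finding simply makes the incident rate decay with group-membership diversity. The functional form, parameter values, and scenario are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(11)

def incident_rate(diversity, base_rate=2.0, beta=0.8):
    """Toy generative assumption: the expected incident count per period decays
    exponentially with diversity (mean number of social groups per individual)."""
    return base_rate * np.exp(-beta * diversity)

for d in [1, 2, 3, 4]:
    sims = rng.poisson(incident_rate(d), 10000)  # simulate incident counts per period
    print(f"diversity={d}: P(at least one incident) = {(sims > 0).mean():.3f}")
```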

  12. PREDICTION MODELS OF GRAIN YIELD AND CHARACTERIZATION

    Directory of Open Access Journals (Sweden)

    Narciso Ysac Avila Serrano

    2009-06-01

    Full Text Available With the objective of characterizing the grain yield of five cowpea cultivars and finding linear regression models to predict it, a study was developed in La Paz, Baja California Sur, Mexico. A complete randomized block design was used. Simple and multivariate analyses of variance were carried out, using the canonical variables to characterize the cultivars. The variables clusters per plant, pods per plant, pods per cluster, seed weight per plant, seed hectoliter weight, 100-seed weight, seed length, seed width, seed thickness, pod length, pod width, pod weight, seeds per pod, and seed weight per pod showed significant differences (P ≤ 0.05) among cultivars. The Paceño and IT90K-277-2 cultivars showed the highest seed weight per plant. The linear regression models showed correlation coefficients ≥ 0.92. In these models, seed weight per plant, pods per cluster, pods per plant, clusters per plant and pod length showed significant correlations (P ≤ 0.05). In conclusion, the results showed that grain yield differs among cultivars and that, for its estimation, the prediction models showed highly dependable determination coefficients.

  13. Using soft computing techniques to predict corrected air permeability using Thomeer parameters, air porosity and grain density

    Science.gov (United States)

    Nooruddin, Hasan A.; Anifowose, Fatai; Abdulraheem, Abdulazeez

    2014-03-01

    Soft computing techniques have recently become very popular in the oil industry. A number of computational-intelligence-based predictive methods have been widely applied in the industry with high prediction capabilities. Some of the popular methods include feed-forward neural networks, radial basis function networks, generalized regression neural networks, functional networks, support vector regression and adaptive network fuzzy inference systems. A comparative study among the most popular soft computing techniques is presented, using a large dataset published in the literature describing multimodal pore systems in the Arab D formation. The inputs to the models are air porosity, grain density, and Thomeer parameters obtained using mercury injection capillary pressure profiles. Corrected air permeability is the target variable. Applying the developed permeability models in recent reservoir characterization workflows ensures consistency between micro- and macro-scale information, represented mainly by the Thomeer parameters and absolute permeability. The dataset was divided into two parts, with 80% of the data used for training and 20% for testing. The target permeability variable was transformed to the logarithmic scale as a pre-processing step, as it shows better correlations with the input variables. Statistical and graphical analyses of the results, including permeability cross-plots and detailed error measures, were created. In general, the comparative study showed very close results among the developed models. The feed-forward neural network permeability model showed the lowest average relative error, average absolute relative error, standard deviation of error and root mean square error, making it the best model for such problems. The adaptive network fuzzy inference system also showed very good results.
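    The evaluation protocol the abstract describes (log-scale target, 80/20 split, relative-error measures) is easy to reproduce in outline. The sketch below compares a feed-forward network against support vector regression on synthetic porosity/grain-density/Thomeer inputs; the data-generating formula is an assumption, not the Arab D dataset.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(5)
n = 500
X = np.column_stack([rng.uniform(0.05, 0.30, n),  # air porosity (frac)
                     rng.uniform(2.60, 2.90, n),  # grain density (g/cc)
                     rng.lognormal(2.0, 0.8, n),  # Thomeer entry pressure Pd
                     rng.uniform(0.1, 1.5, n),    # Thomeer pore-geometrical factor G
                     rng.uniform(0.05, 0.35, n)]) # Thomeer bulk volume Bv
log_k = 4 + 8 * X[:, 0] - 1.2 * np.log(X[:, 2]) + rng.normal(0, 0.3, n)  # toy log10 permeability

# 80/20 split; the target is modelled on the log scale, as in the study
X_tr, X_te, y_tr, y_te = train_test_split(X, log_k, test_size=0.2, random_state=0)
models = [("FFNN", make_pipeline(StandardScaler(), MLPRegressor((20,), max_iter=5000, random_state=0))),
          ("SVR", make_pipeline(StandardScaler(), SVR(C=10.0)))]
for name, reg in models:
    k_pred = 10 ** reg.fit(X_tr, y_tr).predict(X_te)
    aare = np.mean(np.abs(k_pred - 10 ** y_te) / 10 ** y_te)  # average absolute relative error
    print(f"{name}: AARE = {aare:.2f}")
```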

  14. A density model based on the Modified Quasichemical Model and applied to the (NaCl + KCl + ZnCl2) liquid

    International Nuclear Information System (INIS)

    Ouzilleau, Philippe; Robelin, Christian; Chartrand, Patrice

    2012-01-01

    Highlights: ► A model for the density of multicomponent inorganic liquids. ► The density model is based on the Modified Quasichemical Model. ► Application to the (NaCl + KCl + ZnCl2) ternary liquid. ► A Kohler–Toop-like asymmetric interpolation method was used. - Abstract: A theoretical model for the density of multicomponent inorganic liquids based on the Modified Quasichemical Model has been presented previously. By introducing into the Gibbs free energy of the liquid phase temperature-dependent molar volume expressions for the pure components and pressure-dependent excess parameters for the binary (and sometimes higher-order) interactions, it is possible to reproduce, and eventually predict, the molar volume and the density of the multicomponent liquid phase using standard interpolation methods. In the present article, this density model is applied to the (NaCl + KCl + ZnCl2) ternary liquid and a Kohler–Toop-like asymmetric interpolation method is used. All available density data for the (NaCl + KCl + ZnCl2) liquid were collected and critically evaluated, and optimized pressure-dependent model parameters have been found. This new volumetric model can be used with Gibbs free energy minimization software to calculate the molar volume and the density of (NaCl + KCl + ZnCl2) ternary melts.

  15. Predictive Models for Normal Fetal Cardiac Structures.

    Science.gov (United States)

    Krishnan, Anita; Pike, Jodi I; McCarter, Robert; Fulgium, Amanda L; Wilson, Emmanuel; Donofrio, Mary T; Sable, Craig A

    2016-12-01

    Clinicians rely on age- and size-specific measures of cardiac structures to diagnose cardiac disease. No universally accepted normative data exist for fetal cardiac structures, and most fetal cardiac centers do not use the same standards. The aim of this study was to derive predictive models for Z scores for 13 commonly evaluated fetal cardiac structures using a large heterogeneous population of fetuses without structural cardiac defects. The study used archived normal fetal echocardiograms in representative fetuses aged 12 to 39 weeks. Thirteen cardiac dimensions were remeasured by a blinded echocardiographer from digitally stored clips. Studies with inadequate imaging views were excluded. Regression models were developed to relate each dimension to estimated gestational age (EGA) by dates, biparietal diameter, femur length, and estimated fetal weight by the Hadlock formula. Dimension outcomes were transformed (e.g., using the logarithm or square root) as necessary to meet the normality assumption. Higher-order terms, quadratic or cubic, were added as needed to improve model fit. Information criteria and adjusted R² values were used to guide final model selection. Each Z-score equation is based on measurements derived from 296 to 414 unique fetuses. EGA yielded the best predictive model for the majority of dimensions; adjusted R² values ranged from 0.72 to 0.893. However, each of the other highly correlated (r > 0.94) biometric parameters was an acceptable surrogate for EGA. In most cases, the best-fitting model included squared and cubic terms to introduce curvilinearity. For each dimension, models based on EGA provided the best fit for determining normal measurements of fetal cardiac structures. Nevertheless, other biometric parameters, including femur length, biparietal diameter, and estimated fetal weight, provided results that were nearly as good. Comprehensive Z-score results are available on the basis of highly predictive models derived from gestational
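    The Z-score construction reduces to a regression of each dimension on EGA plus a residual spread. A minimal numpy sketch, assuming a cubic mean model and a constant residual standard deviation (the published models also allow transformed outcomes and age-dependent variance), with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(9)
# Synthetic training data: EGA (weeks) vs. a cardiac dimension (mm) growing
# curvilinearly, as in the cubic models described above
ega = rng.uniform(12, 39, 400)
dim = 0.01 * ega**2 + 0.3 * ega - 2 + rng.normal(0, 0.1 * np.sqrt(ega), 400)

coef = np.polyfit(ega, dim, deg=3)        # cubic mean model in EGA
resid = dim - np.polyval(coef, ega)
sd = resid.std(ddof=4)                    # residual SD (constant-variance assumption)

def z_score(measured_mm, ega_weeks):
    """Z = (observed - predicted) / residual SD of the normative model."""
    return (measured_mm - np.polyval(coef, ega_weeks)) / sd

print(z_score(12.0, 28.0))
```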

  16. An analytical model for climatic predictions

    International Nuclear Information System (INIS)

    Njau, E.C.

    1990-12-01

    A climatic model based upon analytical expressions is presented. This model is capable of making long-range predictions of heat energy variations on regional or global scales. These variations can then be transformed into corresponding variations of some other key climatic parameters, since weather and climatic changes are basically driven by differential heating and cooling around the earth. On the basis of the mathematical expressions upon which the model is based, it is shown that the global heat energy structure (and hence the associated climatic system) is characterized by zonally as well as latitudinally propagating fluctuations at frequencies below 0.5 day⁻¹. We have calculated the propagation speeds for those particular frequencies that are well documented in the literature. The calculated speeds are in excellent agreement with the measured speeds. (author). 13 refs

  17. An Anisotropic Hardening Model for Springback Prediction

    International Nuclear Information System (INIS)

    Zeng, Danielle; Xia, Z. Cedric

    2005-01-01

    As more Advanced High-Strength Steels (AHSS) are heavily used for automotive body structures and closure panels, accurate springback prediction for these components becomes more challenging because of their rapid hardening characteristics and ability to sustain even higher stresses. In this paper, a modified Mroz hardening model is proposed to capture the realistic Bauschinger effect under reverse loading, such as when the material passes through die radii or a drawbead during the sheet metal forming process. This model accounts for an anisotropic material yield surface and nonlinear isotropic/kinematic hardening behavior. Material tension/compression test data are used to accurately represent the Bauschinger effect. The effectiveness of the model is demonstrated by comparison of numerical and experimental springback results for a DP600 straight U-channel test.

  18. Density Functional Theory Modeling of Ferrihydrite Nanoparticle Adsorption Behavior

    Science.gov (United States)

    Kubicki, J.

    2016-12-01

    Ferrihydrite is a critical substrate for adsorption of oxyanion species in the environment [1]. The nanoparticulate nature of ferrihydrite is inherent to its formation, and hence it has been called a "nano-mineral" [2]. The nano-scale size and unusual composition of ferrihydrite have made structural determination of this phase problematic. Michel et al. [3] have proposed an atomic structure for ferrihydrite, but this model has been controversial [4,5]. Recent work has shown that the Michel et al. [3] model structure may be reasonably accurate despite some deficiencies [6-8]. An alternative model has been proposed by Manceau [9]. This work utilizes density functional theory (DFT) calculations to model the structure of ferrihydrite nanoparticles based on both the Michel et al. [3] model as refined in Hiemstra [8] and the modified akdalaite model of Manceau [9]. Adsorption energies of carbonate, phosphate, sulfate, chromate, arsenite and arsenate are calculated. Periodic projector-augmented planewave calculations were performed with the Vienna Ab-initio Simulation Package (VASP) [10] on an approximately 1.7 nm diameter Michel nanoparticle (Fe38O112H110) and on a 2 nm Manceau nanoparticle (Fe38O95H76). After energy minimization of the surface H and O atoms, the models will be used to assess the possible configurations of adsorbed oxyanions on the model nanoparticles. [1] Brown G.E. Jr. and Calas G. (2012) Geochemical Perspectives, 1, 483-742. [2] Hochella M.F. and Madden A.S. (2005) Elements, 1, 199-203. [3] Michel, F.M., Ehm, L., Antao, S.M., Lee, P.L., Chupas, P.J., Liu, G., Strongin, D.R., Schoonen, M.A.A., Phillips, B.L., and Parise, J.B. (2007) Science, 316, 1726-1729. [4] Rancourt, D.G., and Meunier, J.F. (2008) American Mineralogist, 93, 1412-1417. [5] Manceau, A. (2011) American Mineralogist, 96, 521-533. [6] Maillot, F., Morin, G., Wang, Y., Bonnin, D., Ildefonse, P., Chaneac, C., Calas, G. (2011) Geochimica et Cosmochimica Acta, 75, 2708-2720. [7] Pinney, N., Kubicki, J.D., Middlemiss, D.S., Grey, C.P., and Morgan, D

  19. Density (dis)economies in transportation: revisiting the core-periphery model

    OpenAIRE

    Carl Gaigne; Kristian Behrens

    2006-01-01

    We study how density (dis)economies in interregional transportation influence location patterns in a standard new economic geography model. Density economies may well delay the occurrence of agglomeration when compared to the case without such economies, while agglomeration is both more likely and more gradual under density diseconomies than under density economies.

  20. An improved statistical analysis for predicting the critical temperature and critical density with Gibbs ensemble Monte Carlo simulation.

    Science.gov (United States)

    Messerly, Richard A; Rowley, Richard L; Knotts, Thomas A; Wilding, W Vincent

    2015-09-14

    A rigorous statistical analysis is presented for Gibbs ensemble Monte Carlo simulations. This analysis reduces the uncertainty in the critical point estimate when compared with traditional methods found in the literature. Two different improvements are recommended on the basis of the following results. First, the traditional propagation-of-error approach for estimating the standard deviations used in regression improperly weights the terms in the objective function, due to the inherent interdependence of the vapor and liquid densities. For this reason, an error model is developed to predict the standard deviations. Second, and most importantly, a rigorous algorithm for nonlinear regression is compared to the traditional approach of linearizing the equations and propagating the error in the slope and the intercept. The traditional regression approach can yield nonphysical confidence intervals for the critical constants. By contrast, the rigorous algorithm restricts the confidence regions to values that are physically sensible. To demonstrate the effect of these conclusions, a case study is performed to enhance the reliability of molecular simulations in resolving the n-alkane family trend for the critical temperature and critical density.
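    For context, the quantities being regressed are the coexistence densities: the critical point follows from a simultaneous nonlinear fit of the density scaling law and the law of rectilinear diameters. A sketch with scipy, using invented coexistence data and a fixed Ising exponent β = 0.326:

```python
import numpy as np
from scipy.optimize import curve_fit

BETA = 0.326  # 3D Ising critical exponent, held fixed (standard practice)

# Invented GEMC coexistence data: T (K), liquid and vapor densities (g/cm^3)
T = np.array([300.0, 320.0, 340.0, 360.0, 380.0])
rho_l = np.array([0.55, 0.52, 0.49, 0.45, 0.40])
rho_v = np.array([0.02, 0.03, 0.05, 0.08, 0.13])

def coexistence(T, Tc, rhoc, A, B):
    """Density scaling law + law of rectilinear diameters, stacked so that both
    branches are fit in a single nonlinear regression."""
    dT = np.maximum(Tc - T, 1e-12)        # guard against Tc dipping below max(T)
    half_width = 0.5 * A * dT**BETA
    diameter = rhoc + B * dT
    return np.concatenate([diameter + half_width, diameter - half_width])

popt, pcov = curve_fit(coexistence, T, np.concatenate([rho_l, rho_v]),
                       p0=[400.0, 0.27, 0.10, 1e-3])
Tc, rhoc = popt[0], popt[1]
print(f"Tc = {Tc:.1f} +/- {np.sqrt(pcov[0, 0]):.1f} K, rhoc = {rhoc:.3f} g/cm^3")
```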

  1. Integrating a human thermoregulatory model with a clothing model to predict core and skin temperatures.

    Science.gov (United States)

    Yang, Jie; Weng, Wenguo; Wang, Faming; Song, Guowen

    2017-05-01

    This paper aims to integrate a human thermoregulatory model with a clothing model to predict core and skin temperatures. The human thermoregulatory model, consisting of an active system and a passive system, was used to determine the thermoregulation and heat exchanges within the body. The clothing model simulated heat and moisture transfer from the human skin to the environment through the microenvironment and fabric. In this clothing model, the air gap between skin and clothing, as well as clothing properties such as thickness, thermal conductivity, density, porosity, and tortuosity, were taken into consideration. The simulated core and mean skin temperatures were compared to published experimental results of subject tests at three ambient temperatures: 20 °C, 30 °C, and 40 °C. Although a lower signal-to-noise ratio was observed, the developed model demonstrated positive performance in predicting core temperatures, with a maximum difference between simulations and measurements of no more than 0.43 °C. Generally, the current model predicted the mean skin temperatures with reasonable accuracy. It could be applied to predict human physiological responses and assess thermal comfort and heat stress. Copyright © 2017 Elsevier Ltd. All rights reserved.
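    The coupling of body and clothing can be caricatured with a two-node (core/skin) energy balance in which the clothing and air gap appear as a lumped thermal resistance; the paper's model is far more detailed, and every parameter value below is an assumption.

```python
# Two-node (core/skin) caricature of the coupled body-clothing system; every
# parameter below is an assumption, not a value from the paper.
dt, t_end = 1.0, 3600.0            # time step and duration (s)
T_core, T_skin = 36.8, 33.5        # initial temperatures (deg C)
T_amb = 40.0                       # ambient temperature (deg C)
C_core, C_skin = 2.0e5, 2.0e4      # node heat capacities (J/K)
K_cs = 10.0                        # core-skin conductance (W/K)
R_cl = 0.2                         # lumped clothing + air-gap resistance (K/W)
M = 100.0                          # metabolic heat production (W)

for _ in range(int(t_end / dt)):
    q_cs = K_cs * (T_core - T_skin)         # heat flow core -> skin
    q_env = (T_skin - T_amb) / R_cl         # skin -> environment through clothing
    sweat = 30.0 * max(T_core - 36.8, 0.0)  # crude evaporative control (active system)
    T_core += dt * (M - q_cs - sweat) / C_core
    T_skin += dt * (q_cs - q_env) / C_skin
print(f"after 1 h at {T_amb} deg C: core {T_core:.2f}, skin {T_skin:.2f} deg C")
```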

  2. Can we Predict Quantum Yields Using Excited State Density Functional Theory for New Families of Fluorescent Dyes?

    Science.gov (United States)

    Kohn, Alexander W.; Lin, Zhou; Shepherd, James J.; Van Voorhis, Troy

    2016-06-01

    For a fluorescent dye, the quantum yield characterizes the efficiency of energy transfer from the absorbed light to the emitted fluorescence. In screening among potential families of dyes, those with higher quantum yields are expected to have more advantages. From the perspective of theoreticians, an efficient prediction of the quantum yield using a universal excited-state electronic structure theory is in demand but still challenging. The most representative examples of such excited-state theories include time-dependent density functional theory (TDDFT) and restricted open-shell Kohn-Sham (ROKS). In the present study, we explore the possibility of predicting the quantum yields for conventional and new families of organic dyes using a combination of TDDFT and ROKS. We focus on radiative (k_r) and nonradiative (k_nr) rates for the decay of the first singlet excited state (S_1) into the ground state (S_0), in accordance with Kasha's rule. For each dye compound, k_r is calculated with the S_1-S_0 energy gap and transition dipole moment obtained using ROKS and TDDFT, respectively, at the relaxed S_1 geometry. Our predicted k_r agrees well with the experimental value, so long as the order of energy levels is correctly predicted. Evaluation of k_nr is less straightforward, as multiple processes are involved. Our study focuses on the S_1-T_1 intersystem crossing (ISC) and the S_1-S_0 internal conversion (IC): we investigate the properties that allow us to model the k_nr value using a Marcus-like expression, such as the Stokes shift, the reorganization energy, and the S_1-T_1 and S_1-S_0 energy gaps. Taking these factors into consideration, we compare our results with those obtained using the actual Marcus theory and provide an explanation for the discrepancy. T. Kowalczyk, T. Tsuchimochi, L. Top, P.-T. Chen, and T. Van Voorhis, J. Chem. Phys., 138, 164101 (2013). M. Kasha, Discuss. Faraday Soc., 9, 14 (1950).
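    For reference, a Marcus-like rate expression of the kind alluded to above (the standard form; the authors' exact parameterization is not given in the abstract) is

```latex
k_{\mathrm{nr}} \;=\; \frac{2\pi}{\hbar}\,\lvert V\rvert^{2}\,
\frac{1}{\sqrt{4\pi\lambda k_{\mathrm{B}}T}}\,
\exp\!\left[-\,\frac{(\Delta E+\lambda)^{2}}{4\lambda k_{\mathrm{B}}T}\right]
```

    Here V is the electronic coupling between the initial and final states, λ the reorganization energy, and ΔE the driving-force energy gap (S_1-S_0 for IC, S_1-T_1 for ISC); the Stokes shift enters through its relation to λ.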

  3. Web tools for predictive toxicology model building.

    Science.gov (United States)

    Jeliazkova, Nina

    2012-07-01

    The development and use of web tools in chemistry has already accumulated more than 15 years of history. Powered by advances in Internet technologies, the current generation of web systems is starting to expand into areas traditionally reserved for desktop applications. Web platforms integrate data storage, cheminformatics and data analysis tools. The ease of use and the collaborative potential of the web are compelling, despite the challenges. The topic of this review is a set of recently published web tools that facilitate predictive toxicology model building. The focus is on software platforms offering web access to chemical-structure-based methods, although some of the frameworks could also provide bioinformatics or hybrid data analysis functionalities. A number of historical and current developments are cited. In order to provide a comparable assessment, the following characteristics are considered: support for workflows, descriptor calculations, visualization, modeling algorithms, data management and data sharing capabilities, availability of GUI or programmatic access, and implementation details. The success of the Web is largely due to its highly decentralized, yet sufficiently interoperable, model for information access. The expected future convergence between cheminformatics and bioinformatics databases poses new challenges for the management and analysis of large data sets. The web tools in predictive toxicology will likely continue to evolve toward the right mix of flexibility, performance, scalability, interoperability, sets of unique features offered, friendly user interfaces, programmatic access for advanced users, platform independence, results reproducibility, curation and crowdsourcing utilities, collaborative sharing and secure access.

  4. [Endometrial cancer: Predictive models and clinical impact].

    Science.gov (United States)

    Bendifallah, Sofiane; Ballester, Marcos; Daraï, Emile

    2017-12-01

    In France, in 2015, endometrial cancer (EC) was the most frequent gynecological cancer in terms of incidence and the fourth most common cancer among women. About 8151 new cases and nearly 2179 deaths were reported. Treatments (surgery, external radiotherapy, brachytherapy and chemotherapy) are currently delivered on the basis of an estimate of the recurrence risk, of the risk of lymph node metastasis, or of the survival probability. This risk is determined on the basis of prognostic factors (clinical, histological, imaging, biological) taken alone or grouped together in the form of classification systems, which are currently insufficient to account for the evolutionary and prognostic heterogeneity of endometrial cancer. For endometrial cancer, the concept of mathematical modeling and its application to prediction have developed in recent years. These biomathematical tools have opened a new era of care oriented towards the promotion of targeted therapies and personalized treatments. Many predictive models have been published to estimate the risk of recurrence and lymph node metastasis, but only a tiny fraction of them are sufficiently relevant and of clinical utility. The avenues for optimization are multiple and varied, suggesting that these mathematical models may find a place in clinical practice in the near future. The development of high-throughput genomics is likely to offer a more detailed molecular characterization of the disease and its heterogeneity. Copyright © 2017 Société Française du Cancer. Published by Elsevier Masson SAS. All rights reserved.

  5. Stratified flows with variable density: mathematical modelling and numerical challenges.

    Science.gov (United States)

    Murillo, Javier; Navas-Montilla, Adrian

    2017-04-01

    Stratified flows appear in a wide variety of fundamental problems in the hydrological and geophysical sciences. They may range from hyperconcentrated floods carrying sediment, causing collapse, landslides and debris flows, to suspended material in turbidity currents where turbulence is a key process. Stratified flows also exhibit variable horizontal density. Depending on the case, density varies according to the volumetric concentration of the different components or species, which can represent transported or suspended materials or soluble substances. Multilayer approaches based on the shallow water equations provide suitable models, but are not free from difficulties when moving to the numerical resolution of the governing equations. Considering the variety of temporal and spatial scales, the transfer of mass and energy among layers may strongly differ from one case to another. As a consequence, in order to provide accurate solutions, very high order methods of proven quality are demanded. Under these complex scenarios it is necessary to verify that the numerical solution not only provides the expected order of accuracy but also converges to the physically based solution, which is not an easy task. To this purpose, this work will focus on the use of energy-balanced augmented solvers, in particular the Augmented Roe Flux ADER scheme. References: J. Murillo, P. García-Navarro, Wave Riemann description of friction terms in unsteady shallow flows: Application to water and mud/debris floods. J. Comput. Phys. 231 (2012) 1963-2001. J. Murillo, B. Latorre, P. García-Navarro. A Riemann solver for unsteady computation of 2D shallow flows with variable density. J. Comput. Phys. 231 (2012) 4775-4807. A. Navas-Montilla, J. Murillo, Energy balanced numerical schemes with very high order. The Augmented Roe Flux ADER scheme. Application to the shallow water equations, J. Comput. Phys. 290 (2015) 188-218. A. Navas-Montilla, J. Murillo, Asymptotically and exactly energy balanced augmented flux

  6. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based both on the authors' experience and on their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution of partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements of M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.

  7. Spatially-explicit models of global tree density

    Science.gov (United States)

    Glick, Henry B.; Bettigole, Charlie; Maynard, Daniel S.; Covey, Kristofer R.; Smith, Jeffrey R.; Crowther, Thomas W.

    2016-01-01

    Remote sensing and geographic analysis of woody vegetation provide means of evaluating the distribution of natural resources, patterns of biodiversity and ecosystem structure, and socio-economic drivers of resource utilization. While these methods bring geographic datasets with global coverage into our day-to-day analytic spheres, many of the studies that rely on these strategies do not capitalize on the extensive collection of existing field data. We present the methods and maps associated with the first spatially-explicit models of global tree density, which relied on over 420,000 forest inventory field plots from around the world. This research is the result of a collaborative effort engaging over 20 scientists and institutions, and capitalizes on an array of analytical strategies. Our spatial data products offer precise estimates of the number of trees at global and biome scales, but should not be used for local-level estimation. At larger scales, these datasets can contribute valuable insight into resource management, ecological modelling efforts, and the quantification of ecosystem services. PMID:27529613

  8. Predictions of models for environmental radiological assessment

    International Nuclear Information System (INIS)

    Peres, Sueli da Silva; Lauria, Dejanira da Costa; Mahler, Claudio Fernando

    2011-01-01

    In the field of environmental impact assessment, models are used for estimating the source term, environmental dispersion and transfer of radionuclides, exposure pathways, radiation dose and the risk to human beings. Although it is recognized that specific local data are important for improving the quality of dose assessment results, obtaining such data can be very difficult and expensive. Sources of uncertainty are numerous, among which we can cite the subjectivity of modelers, exposure scenarios and pathways, the codes used and generic parameters. The various models available use different mathematical approaches of differing complexity, which can result in different predictions. Thus, for the same inputs, different models can produce very different outputs. This paper briefly presents the main advances in the field of environmental radiological assessment that aim to improve the reliability of the models used in the assessment of environmental radiological impact. A model intercomparison exercise supplied incompatible results for 137Cs and 60Co, reinforcing the need to develop reference methodologies for environmental radiological assessment that allow dose estimates to be confronted on a common comparison basis. The results of the intercomparison exercise are presented briefly. (author)

  9. Modelling of the reactive sputtering process with non-uniform discharge current density and different temperature conditions

    Science.gov (United States)

    Vašina, P; Hytková, T; Eliáš, M

    2009-05-01

    The majority of current models of reactive magnetron sputtering assume a uniform discharge current density and the same temperature near the target and the substrate. However, in a real experimental set-up, the presence of the magnetic field causes a high-density plasma to form in front of the cathode in the shape of a toroid. Consequently, the discharge current density is laterally non-uniform. In addition, the heating of the background gas by sputtered particles, usually referred to as gas rarefaction, plays an important role. This paper presents an extended model of reactive magnetron sputtering that assumes a non-uniform discharge current density and accommodates the gas rarefaction effect. It is devoted mainly to the study of the behaviour of the reactive sputtering process rather than to the prediction of coating properties. Outputs of this model are compared with those of a model that assumes a uniform discharge current density and a uniform temperature profile in the deposition chamber. Particular attention is paid to modelling the radial variation of the target composition near transitions from the metallic to the compound mode and vice versa. A study of target utilization in the metallic and compound modes is performed for two different discharge current density profiles corresponding to typical two-pole and multipole magnet assemblies currently available on the market. Different shapes of the discharge current density were tested. Finally, hysteresis curves are plotted for various temperature conditions in the reactor.
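
    To make the target-composition idea concrete, here is a schematic Berg-type surface balance solved pointwise along the target radius, under assumed illustrative parameters (Gaussian "racetrack" current density, sticking coefficient, compound sputter yield); it is a far simpler reduction than the paper's extended model. At steady state, compound formation by the reactive-gas flux on the metallic fraction balances compound removal by ion sputtering, 2·F·α·(1−θ) = (J/e)·Yc·θ, which gives a closed-form compound coverage θ(r).

```python
import numpy as np

E_CHARGE = 1.602e-19  # elementary charge (C)

def compound_coverage(J, F=4e19, alpha=0.5, Yc=0.3):
    """Steady-state compound fraction theta on the target from the balance
    2*F*alpha*(1 - theta) = (J/e)*Yc*theta  (diatomic reactive gas).
    J    : discharge current density (A/m^2), may be an array over radius
    F    : reactive-gas impingement flux (m^-2 s^-1), assumed value
    alpha: sticking coefficient on the metallic fraction, assumed value
    Yc   : sputtering yield of the compound, assumed value
    """
    formation = 2.0 * F * alpha
    removal = (J / E_CHARGE) * Yc
    return formation / (formation + removal)

# Laterally non-uniform current density: a Gaussian racetrack centered at r0
r = np.linspace(0.0, 0.05, 11)                   # target radius (m)
J = 5000.0 * np.exp(-((r - 0.03) / 0.008) ** 2)  # A/m^2, illustrative
for ri, th in zip(r, compound_coverage(J)):
    print(f"r = {ri * 100:4.1f} cm   theta = {th:.2f}")
```

    With these numbers the racetrack stays nearly metallic (high ion flux) while the low-current regions oxidize toward full compound coverage, which is the qualitative radial variation the abstract discusses.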

  10. Participation in high-impact sports predicts bone mineral density in senior olympic athletes.

    Science.gov (United States)

    Leigey, Daniel; Irrgang, James; Francis, Kimberly; Cohen, Peter; Wright, Vonda

    2009-11-01

    Loss of bone mineral density (BMD) and resultant fractures increase with age in both sexes. Participation in resistance or high-impact sports is a known contributor to bone health in young athletes; however, little is known about the effect of participation in impact sports on bone density as people age. To test the hypothesis that high-impact sport participation predicts BMD in senior athletes, this study evaluated 560 athletes during the 2005 National Senior Games (the Senior Olympics). Cross-sectional design. The athletes completed a detailed health history questionnaire and underwent calcaneal quantitative ultrasound to measure BMD. Athletes were classified as participating in high-impact sports (basketball, road race [running], track and field, triathlon, and volleyball) or non-high-impact sports. Stepwise linear regression was used to determine the influence of high-impact sports on BMD. On average, participants were 65.9 years old (range, 50 to 93). There were 298 women (53.2%), and 289 athletes (51.6%) participated in high-impact sports. Average body mass index was 25.6 ± 3.9. The quantitative ultrasound-generated T scores, a quantitative measure of BMD, averaged 0.4 ± 1.3 and -0.1 ± 1.4 for the high-impact and non-high-impact groups, respectively. After age, sex, obesity, and use of osteoporosis medication were controlled for, participation in high-impact sports was a significant predictor of BMD (R² change 3.2%). Participation in high-impact sports positively influenced bone health, even in the oldest athletes. These data imply that high-impact exercise is a vital tool for maintaining healthy BMD with active aging.
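
    The reported "R² change" is simply the gain in explained variance when the high-impact indicator is added to a covariates-only regression. A hedged sketch on synthetic data (all variable names, effect sizes and noise levels are invented; the study's data are not reproduced here):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 560
age = rng.uniform(50, 93, n)
female = rng.integers(0, 2, n)
obese = rng.integers(0, 2, n)
osteo_med = rng.integers(0, 2, n)
high_impact = rng.integers(0, 2, n)
# synthetic T-score with a small positive high-impact effect
t_score = (0.5 - 0.02 * (age - 65) - 0.3 * female
           + 0.5 * high_impact + rng.normal(0, 1.2, n))

base = sm.add_constant(np.column_stack([age, female, obese, osteo_med]))
full = np.column_stack([base, high_impact])

r2_base = sm.OLS(t_score, base).fit().rsquared
r2_full = sm.OLS(t_score, full).fit().rsquared
print(f"R^2 change when adding high-impact participation: {r2_full - r2_base:.3f}")
```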

  11. Quark mass density- and temperature-dependent model for bulk strange quark matter

    OpenAIRE

    Zhang, Yun; et al.

    2002-01-01

    It is shown that the quark mass density-dependent model cannot be used to explain the process of the quark deconfinement phase transition, because quark confinement is permanent in this model. A quark mass density- and temperature-dependent model in which the quark confinement is impermanent has been suggested. We argue that the vacuum energy density B is a function of temperature. The dynamical and thermodynamical properties of bulk strange quark matter for the quark mass density- and temper...

  12. Coronary Artery Calcium Volume and Density: Potential Interactions and Overall Predictive Value: The Multi-Ethnic Study of Atherosclerosis.

    Science.gov (United States)

    Criqui, Michael H; Knox, Jessica B; Denenberg, Julie O; Forbang, Nketi I; McClelland, Robyn L; Novotny, Thomas E; Sandfort, Veit; Waalen, Jill; Blaha, Michael J; Allison, Matthew A

    2017-08-01

    This study sought to determine the possibility of interactions of coronary artery calcium (CAC) volume and CAC density with each other, and with age, sex, ethnicity, the new atherosclerotic cardiovascular disease (ASCVD) risk score, diabetes status, and renal function by estimated glomerular filtration rate, and, using differing CAC scores, to determine the improvement over the ASCVD risk score in risk prediction and reclassification. In MESA (Multi-Ethnic Study of Atherosclerosis), CAC volume was positively and CAC density inversely associated with cardiovascular disease (CVD) events. A total of 3,398 MESA participants free of clinical CVD but with prevalent CAC at baseline were followed for incident CVD events. During a median 11.0 years of follow-up, there were 390 CVD events, 264 of which were coronary heart disease (CHD). With each SD increase of ln CAC volume (1.62), risk of CHD increased 73%. In multivariable Cox models, significant interactions were present for CAC volume with age and ASCVD risk score for both CHD and CVD, and for CAC density with ASCVD risk score for CVD. Hazard ratios were generally stronger in the lower-risk groups. Receiver-operating characteristic area under the curve and Net Reclassification Index analyses showed better prediction by CAC volume than by the Agatston score, and the addition of CAC density to CAC volume further significantly improved prediction. The inverse association between CAC density and incident CHD and CVD events is robust across strata of other CVD risk factors. Added to the ASCVD risk score, CAC volume and density provided the strongest prediction for CHD and CVD events, and the highest correct reclassification. Copyright © 2017 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  13. Conifer density within lake catchments predicts fish mercury concentrations in remote subalpine lakes

    Science.gov (United States)

    Eagles-Smith, Collin A.; Herring, Garth; Johnson, Branden L.; Graw, Rick

    2016-01-01

    Remote high-elevation lakes represent unique environments for evaluating the bioaccumulation of atmospherically deposited mercury through freshwater food webs, as well as for evaluating the relative importance of mercury loading versus landscape influences on mercury bioaccumulation. The increase in mercury deposition to these systems over the past century, coupled with their limited exposure to direct anthropogenic disturbance, makes them useful indicators for estimating how changes in mercury emissions may propagate to changes in Hg bioaccumulation and ecological risk. We evaluated mercury concentrations in resident fish from 28 high-elevation, sub-alpine lakes in the Pacific Northwest region of the United States. Fish total mercury (THg) concentrations ranged from 4 to 438 ng/g wet weight, with a geometric mean concentration (±standard error) of 43 ± 2 ng/g ww. Fish THg concentrations were negatively correlated with relative condition factor, indicating that faster-growing fish in better condition have lower THg concentrations. Across the 28 study lakes, mean THg concentrations of resident salmonid fishes varied as much as 18-fold among lakes. We used a hierarchical statistical approach to evaluate the relative importance of physiological, limnological, and catchment drivers of fish Hg concentrations. Our top statistical model explained 87% of the variability in fish THg concentrations among lakes with four key landscape and limnological variables: catchment conifer density (basal area of conifers within a lake's catchment), lake surface area, aqueous dissolved sulfate, and dissolved organic carbon. Conifer density within a lake's catchment was the most important variable explaining fish THg concentrations across lakes, with THg concentrations differing by more than 400 percent across the forest density spectrum. These results illustrate the importance of landscape characteristics in controlling mercury bioaccumulation in fish.

  14. Predictability of bone density at posterior mandibular implant sites using cone-beam computed tomography intensity values

    OpenAIRE

    Alkhader, Mustafa; Hudieb, Malik; Khader, Yousef

    2017-01-01

    Objective: The aim of this study was to investigate the predictability of bone density at posterior mandibular implant sites using cone-beam computed tomography (CBCT) intensity values. Materials and Methods: CBCT cross-sectional images for 436 posterior mandibular implant sites were selected for the study. Using Invivo software (Anatomage, San Jose, California, USA), two observers classified the bone density into three categories: low, intermediate, and high, and CBCT intensity values were g...

  15. Combined effect of pulse density and grid cell size on predicting and mapping aboveground carbon in fast‑growing Eucalyptus forest plantation using airborne LiDAR data

    Science.gov (United States)

    Carlos Alberto Silva; Andrew Thomas Hudak; Carine Klauberg; Lee Alexandre Vierling; Carlos Gonzalez‑Benecke; Samuel de Padua Chaves Carvalho; Luiz Carlos Estraviz Rodriguez; Adrian Cardil

    2017-01-01

    LiDAR measurements can be used to predict and map AGC across variable-age Eucalyptus plantations with adequate levels of precision and accuracy using 5 pulses m⁻² and a grid cell size of 5 m. The promising results for AGC modeling in this study will allow for greater confidence in comparing AGC estimates with varying LiDAR sampling densities for Eucalyptus plantations...

  16. Modeling NMR chemical shift: A survey of density functional theory approaches for calculating tensor properties.

    Science.gov (United States)

    Sefzik, Travis H; Turco, Domenic; Iuliucci, Robbie J; Facelli, Julio C

    2005-02-17

    The NMR chemical shift, a six-parameter tensor property, is highly sensitive to the position of the atoms in a molecule. To extract structural parameters from chemical shifts, one must rely on theoretical models. Therefore, a high quality group of shift tensors that serve as benchmarks to test the validity of these models is warranted and necessary to highlight existing computational limitations. Here, a set of 102 13C chemical-shift tensors measured in single crystals, from a series of aromatic and saccharide molecules for which neutron diffraction data are available, is used to survey models based on the density functional (DFT) and Hartree-Fock (HF) theories. The quality of the models is assessed by their least-squares linear regression parameters. It is observed that in general DFT outperforms restricted HF theory. For instance, Becke's three-parameter exchange method and mpw1pw91 generally provide the best predicted shieldings for this group of tensors. However, this performance is not universal, as none of the DFT functionals can predict the saccharide tensors better than HF theory. Both the orientations of the principal axis system and the magnitude of the shielding were compared using the chemical-shift distance to evaluate the quality of the calculated individual tensor components in units of ppm. Systematic shortcomings in the prediction of the principal components were observed, but the theory predicts the corresponding isotropic value more accurately. This is because these systematic errors cancel, thereby indicating that the theoretical assessment of shielding predictions based on the isotropic shift should be avoided.

  17. Aerodynamic Models for the Low Density Supersonic Decelerator (LDSD) Supersonic Flight Dynamics Test (SFDT)

    Science.gov (United States)

    Van Norman, John W.; Dyakonov, Artem; Schoenenberger, Mark; Davis, Jody; Muppidi, Suman; Tang, Chun; Bose, Deepak; Mobley, Brandon; Clark, Ian

    2015-01-01

    An overview of pre-flight aerodynamic models for the Low Density Supersonic Decelerator (LDSD) Supersonic Flight Dynamics Test (SFDT) campaign is presented, with comparisons to reconstructed flight data and discussion of model updates. The SFDT campaign objective is to test Supersonic Inflatable Aerodynamic Decelerator (SIAD) and large supersonic parachute technologies at high altitude Earth conditions relevant to entry, descent, and landing (EDL) at Mars. Nominal SIAD test conditions are attained by lifting a test vehicle (TV) to 36 km altitude with a large helium balloon, then accelerating the TV to Mach 4 and 53 km altitude with a solid rocket motor. The first flight test (SFDT-1) delivered a 6 meter diameter robotic mission class decelerator (SIAD-R) to several seconds of flight on June 28, 2014, and was successful in demonstrating the SFDT flight system concept and SIAD-R. The trajectory was off-nominal, however, lofting to over 8 km higher than predicted in flight simulations. Comparisons between reconstructed flight data and aerodynamic models show that SIAD-R aerodynamic performance was in good agreement with pre-flight predictions. Similar comparisons of powered ascent phase aerodynamics show that the pre-flight model overpredicted TV pitch stability, leading to underprediction of trajectory peak altitude. Comparisons between pre-flight aerodynamic models and reconstructed flight data are shown, and changes to aerodynamic models using improved fidelity and knowledge gained from SFDT-1 are discussed.

  18. Mathematical models for indoor radon prediction

    International Nuclear Information System (INIS)

    Malanca, A.; Pessina, V.; Dallara, G.

    1995-01-01

    It is known that the indoor radon (Rn) concentration can be predicted by means of mathematical models. The simplest model relies on two variables only: the Rn source strength and the air exchange rate. In the Lawrence Berkeley Laboratory (LBL) model, several environmental parameters are combined into a complex equation; in addition, a correlation between the ventilation rate and the Rn entry rate from the soil is admitted. The measurements were carried out using activated carbon canisters. Seventy-five measurements of Rn concentration were made inside two rooms on the second floor of a building block. One of the rooms had a single-glazed window whereas the other had a double-pane window. During three different experimental protocols, the mean Rn concentration was always higher in the room with the double-glazed window. That behavior can be accounted for by the simplest model. A further set of 450 Rn measurements was collected inside a ground-floor room with a grounding well in it. This trend may be accounted for by the LBL model
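
    The "simplest model" with two variables reduces to a one-line steady-state mass balance. A sketch with invented illustrative numbers (the abstract reports no parameter values):

```python
RN_DECAY = 7.55e-3  # radon decay constant (1/h)

def indoor_radon(source_bq_per_h, volume_m3, ach):
    """Steady-state indoor Rn concentration (Bq/m^3) from the balance
    dC/dt = S/V - (ach + lambda_Rn) * C = 0.
    source_bq_per_h : Rn entry rate S (Bq/h)
    volume_m3       : room volume V
    ach             : air changes per hour (ventilation rate)
    """
    return source_bq_per_h / (volume_m3 * (ach + RN_DECAY))

# A tighter room (lower ach, e.g. a double-glazed window) accumulates more Rn
print(indoor_radon(source_bq_per_h=2000, volume_m3=50, ach=1.0))  # leaky
print(indoor_radon(source_bq_per_h=2000, volume_m3=50, ach=0.3))  # tight
```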

  19. Modelling and simulation of double chamber microbial fuel cell. Cell voltage, power density and temperature variation with process parameters

    Energy Technology Data Exchange (ETDEWEB)

    Shankar, Ravi; Mondal, Prasenjit; Chand, Shri [Indian Institute of Technology Roorkee, Uttaranchal (India). Dept. of Chemical Engineering

    2013-11-01

    In the present paper, steady-state models of a double-chamber glucose-glutamic acid microbial fuel cell (GGA-MFC) under continuous operation have been developed and solved using Matlab 2007 software. Experimental data reported in the recent literature have been used for validation of the models. The present models predict the cell voltage and cell power density with 19-44% error, which is up to 20% less than the errors in the prediction of cell voltage made in some recent literature for the same MFC, where the effects of the difference in pH and ionic conductivity between the anodic and cathodic solutions on cell voltage were not incorporated into the model equations. The paper also describes the changes in anodic- and cathodic-chamber temperature due to the increase in substrate concentration and cell current density. The temperature profile across the membrane thickness has also been studied. (orig.)

  20. A Predictive Maintenance Model for Railway Tracks

    DEFF Research Database (Denmark)

    Li, Rui; Wen, Min; Salling, Kim Bang

    2015-01-01

    This paper presents a mathematical model based on Mixed Integer Programming (MIP) which is designed to optimize predictive railway tamping activities for ballasted track over a time horizon of up to four years. The objective function is set up to minimize the actual costs for the tamping machine (measured by time). Five technical and economic aspects are taken into account in scheduling tamping: (1) track degradation of the standard deviation of the longitudinal level over time; (2) track geometrical alignment; (3) track quality thresholds based on the train speed limits; (4) the dependency of track quality recovery on the track quality after the tamping operation; and (5) tamping machine operation factors. A Danish railway track between Odense and Fredericia, 57.2 km in length, is applied for a time period of two to four years in the proposed maintenance model. The total cost can be reduced by up to 50…
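
    A heavily reduced sketch of such a tamping MIP, written with the open-source PuLP package on an invented toy instance: a single track section, monthly periods, linear degradation of the longitudinal-level standard deviation, and a fixed linearized recovery per tamping. The paper's model is richer on every one of these points (alignment, speed-dependent thresholds, recovery depending on pre-tamping quality, machine factors).

```python
import pulp

T = 24                     # monthly periods over a 2-year horizon
sigma0, rate = 1.0, 0.08   # initial longitudinal-level std dev (mm), growth/month
recovery = 1.0             # std-dev improvement per tamping (linearized)
limit = 2.0                # quality threshold tied to the line speed class

prob = pulp.LpProblem("tamping_schedule", pulp.LpMinimize)
x = [pulp.LpVariable(f"tamp_{t}", cat="Binary") for t in range(T)]

# objective: fewest interventions, a proxy for tamping-machine time cost
prob += pulp.lpSum(x)

# keep track quality within the threshold in every period
for t in range(T):
    prob += (sigma0 + rate * (t + 1)
             - recovery * pulp.lpSum(x[: t + 1])) <= limit

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("tamp in months:", [t for t in range(T) if x[t].value() == 1])
```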

  1. An Operational Model for the Prediction of Jet Blast

    Science.gov (United States)

    2012-01-09

    This paper presents an operational model for the prediction of jet blast. The model was developed based upon three modules: a jet exhaust model, a jet centerline decay model and an aircraft motion model. The final analysis was compared with d...

  2. Improving the description of collective effects within the combinatorial model of nuclear level densities

    International Nuclear Information System (INIS)

    Hilaire, S.; Girod, M.; Goriely, S.

    2011-01-01

    The combinatorial model of nuclear level densities has now reached a level of accuracy comparable to that of the best global analytical expressions, without suffering from the limits imposed by the statistical hypothesis on which the latter expressions rely. In particular, it naturally provides non-Gaussian spin distributions as well as non-equipartition of parities, which are known to have a significant impact on cross section predictions at low energies. Our first global model, developed in Ref. 1, suffered from deficiencies, in particular in the way the collective effects - both vibrational and rotational - were treated. We have recently improved this treatment by using simultaneously the single-particle levels and collective properties predicted by a newly derived Gogny interaction, thereby enabling a microscopic description of energy-dependent shell, pairing and deformation effects. In addition, for deformed nuclei, the transition to sphericity is coherently taken into account on the basis of a temperature-dependent Hartree-Fock calculation which provides, at each temperature, the structure properties needed to build the level densities. This new method is described and shown to give promising preliminary results with respect to available experimental data. (authors)

  3. Continuous-Discrete Time Prediction-Error Identification Relevant for Linear Model Predictive Control

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    A prediction-error method tailored for model-based predictive control is presented. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state-space model. The linear discrete-time stochastic state-space model is realized from a continuous-discrete-time linear stochastic system specified using transfer functions with time delays. It is argued that the prediction-error criterion should be selected such that it is compatible with the objective function of the predictive controller in which the model…

  4. Combined effect of pulse density and grid cell size on predicting and mapping aboveground carbon in fast-growing Eucalyptus forest plantation using airborne LiDAR data.

    Science.gov (United States)

    Silva, Carlos Alberto; Hudak, Andrew Thomas; Klauberg, Carine; Vierling, Lee Alexandre; Gonzalez-Benecke, Carlos; de Padua Chaves Carvalho, Samuel; Rodriguez, Luiz Carlos Estraviz; Cardil, Adrián

    2017-12-01

    LiDAR remote sensing is a rapidly evolving technology for quantifying a variety of forest attributes, including aboveground carbon (AGC). Pulse density influences the acquisition cost of LiDAR, and grid cell size influences AGC prediction using plot-based methods; however, little work has evaluated the effects of LiDAR pulse density and cell size for predicting and mapping AGC in fast-growing Eucalyptus forest plantations. The aim of this study was to evaluate the effect of LiDAR pulse density and grid cell size on AGC prediction accuracy at plot and stand levels using airborne LiDAR and field data. We used the Random Forest (RF) machine learning algorithm to model AGC using LiDAR-derived metrics from LiDAR collections of 5 and 10 pulses m⁻² (RF5 and RF10) and grid cell sizes of 5, 10, 15 and 20 m. The results show that a LiDAR pulse density of 5 pulses m⁻² provides metrics with prediction accuracy for AGC similar to that of a dataset with 10 pulses m⁻² in these fast-growing plantations. Relative root mean square errors (RMSEs) for RF5 and RF10 were 6.14% and 6.01%, respectively. Equivalence tests showed that the predicted AGC from the training and validation models was equivalent to the observed AGC measurements. Grid cell sizes for mapping ranging from 5 to 20 m also did not significantly affect the prediction accuracy of AGC at stand level in this system. LiDAR measurements can be used to predict and map AGC across variable-age Eucalyptus plantations with adequate levels of precision and accuracy using 5 pulses m⁻² and a grid cell size of 5 m. The promising results for AGC modeling in this study will allow for greater confidence in comparing AGC estimates with varying LiDAR sampling densities for Eucalyptus plantations and assist in decision making towards more cost-effective and efficient forest inventory.
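
    The plot-level workflow the abstract describes — Random Forest regression of AGC on LiDAR-derived metrics — can be sketched with scikit-learn. Metric names, values and the AGC relation below are synthetic stand-ins; the study's RMSEs came from real field plots.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_plots = 200
# invented LiDAR metrics: mean height, 95th height percentile, canopy cover
hmean = rng.uniform(5, 30, n_plots)
p95 = hmean * rng.uniform(1.1, 1.4, n_plots)
cover = rng.uniform(0.5, 1.0, n_plots)
X = np.column_stack([hmean, p95, cover])
agc = 2.5 * hmean + 1.2 * p95 + 20 * cover + rng.normal(0, 5, n_plots)

X_tr, X_te, y_tr, y_te = train_test_split(X, agc, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)

pred = rf.predict(X_te)
rmse = np.sqrt(np.mean((pred - y_te) ** 2))
print(f"relative RMSE: {100 * rmse / y_te.mean():.1f}%")
```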

  5. Metal oxide-graphene field-effect transistor: interface trap density extraction model

    Directory of Open Access Journals (Sweden)

    Faraz Najam

    2016-09-01

    A simple-to-implement model is presented to extract the interface trap density of graphene field-effect transistors. The presence of interface trap states detrimentally affects the device drain current-gate voltage relationship (Ids-Vgs). At the moment, there is no analytical method available to extract the interface trap distribution of metal-oxide-graphene field-effect transistor (MOGFET) devices. The model presented here extracts the interface trap distribution of MOGFET devices making use of available experimental capacitance-gate voltage (Ctot-Vgs) data and a basic set of equations used to describe the device physics of MOGFET devices. The model was used to extract the interface trap distribution of two experimental devices. Device parameters calculated using the interface trap distribution extracted by the model, including surface potential, interface trap charge and interface trap capacitance, compared very well with their respective experimental counterparts. The model enables accurate calculation of the surface potential as affected by trap charge. Other models ignore the effect of trap charge and only calculate the ideal surface potential; such an ideal surface potential, when used in a surface-potential-based drain current model, results in an inaccurate prediction of the drain current. Accurate calculation of a surface potential that can later be used in a drain current model is highlighted as a major advantage of the model.

  6. Inverse modeling with RZWQM2 to predict water quality

    Science.gov (United States)

    Nolan, Bernard T.; Malone, Robert W.; Ma, Liwang; Green, Christopher T.; Fienen, Michael N.; Jaynes, Dan B.

    2011-01-01

    Composite scaled sensitivities, which reflect the total information provided by the observations for a parameter, indicated that most of the RZWQM2 parameters at the California study site (CA) and Iowa study site (IA) could be reliably estimated by regression. Correlations obtained in the CA case indicated that all model parameters could be uniquely estimated by inverse modeling. Although water content at field capacity was highly correlated with bulk density (−0.94), the correlation is less than the threshold for nonuniqueness (0.95, absolute value basis). Additionally, we used truncated singular value decomposition (SVD) at CA to mitigate potential problems with highly correlated and insensitive parameters. Singular value decomposition estimates linear combinations (eigenvectors) of the original process-model parameters. Parameter confidence intervals (CIs) at CA indicated that parameters were reliably estimated, with the possible exception of an organic pool transfer coefficient (R45), which had a comparatively wide CI. However, the 95% confidence interval for R45 (0.03-0.35) is mostly within the range of values reported for this parameter. Predictive analysis at CA generated confidence intervals that were compared with independently measured annual water flux (groundwater recharge) and the median nitrate concentration in a collocated monitoring well as part of model evaluation. Both the observed recharge (42.3 cm yr−1) and nitrate concentration (24.3 mg L−1) were within their respective 90% confidence intervals, indicating that overall model error was within acceptable limits.

  7. Remote sensing of the plasmasphere mass density using ground magnetometers and the FLIP model

    Science.gov (United States)

    Zesta, Eftyhia; Chi, Peter; Moldwin, Mark; Jorgensen, Anders; Richards, Phil; Boudouridis, Athanasios; Duffy, Jared

    2012-07-01

    The SAMBA (South American Meridional B-field Array) chain is a Southern Hemisphere meridional chain of 12 magnetometers, 11 of them at L=1.1 to L=2.5 along the coast of Chile and in the Antarctic Peninsula, and one auroral station along the same meridian. SAMBA is ideal for low- and mid-latitude studies of geophysical events and ULF waves. The MEASURE (Magnetometers along the Eastern Atlantic Seaboard for Undergraduate Research and Education) and McMAC (Mid-continent Magnetoseismic Chain) chains are Northern Hemisphere meridional chains at the same local time as SAMBA, but cover low to sub-auroral latitudes. SAMBA is partly conjugate to the MEASURE and McMAC chains, offering unique opportunities for inter-hemispheric studies. We use 5 of the SAMBA stations and an even larger number of conjugate stations from the Northern Hemisphere to determine the field line resonance (FLR) frequencies of closely spaced flux tubes in the inner magnetosphere. Standard inversion techniques are used to derive the equatorial mass density of these flux tubes from the FLRs. We thus obtain the mass density distribution of the plasmasphere for specific events and compare our results with results from the FLIP thermosphere-ionosphere model. We find that for moderate activity the model-determined radial distribution is in excellent agreement with the observed distribution. During storm times, observations indicate stronger depletion than predicted by the initial model runs.

  8. A unified dislocation density-dependent physical-based constitutive model for cold metal forming

    Science.gov (United States)

    Schacht, K.; Motaman, A. H.; Prahl, U.; Bleck, W.

    2017-10-01

    Dislocation-density-dependent physical-based constitutive models of metal plasticity, while computationally efficient and history-dependent, can accurately account for varying process parameters such as strain, strain rate and temperature; different loading modes such as continuous deformation, creep and relaxation; microscopic metallurgical processes; and varying chemical composition within an alloy family. Since these models are founded on the essential phenomena dominating the deformation, they have a large range of usability and validity. They are also suitable for manufacturing-chain simulations, since they can efficiently compute the cumulative effect of the various manufacturing processes by following the material state through the entire manufacturing chain, including interpass periods, and give a realistic prediction of the material behavior and final product properties. In the physical-based constitutive model of cold metal plasticity introduced in this study, the physical processes influencing cold and warm plastic deformation in polycrystalline metals are described using physical/metallurgical internal variables such as dislocation density and effective grain size. The evolution of these internal variables is calculated using equations that describe the physical processes dominating the material behavior during cold plastic deformation. For validation, the model is numerically implemented in a general implicit isotropic elasto-viscoplasticity algorithm as a user-defined material subroutine (UMAT) in ABAQUS/Standard and used for finite element simulation of upsetting tests and a complete cold forging cycle of a case-hardenable MnCr steel family.
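
    For a feel of the internal-variable idea, below is a minimal dislocation-density evolution of the Kocks-Mecking type — one internal variable, explicit strain integration, and generic illustrative constants, not the paper's multi-variable model or its calibration: dρ/dε = k1·√ρ − k2·ρ (storage minus dynamic recovery), with the Taylor flow stress σ = σ0 + α·M·G·b·√ρ.

```python
import numpy as np

# illustrative constants for a generic steel, not the paper's calibration
K1, K2 = 3.0e8, 8.0                        # storage (m^-1) and recovery terms
ALPHA, M, G, B = 0.3, 3.06, 80e9, 2.5e-10  # Taylor-equation parameters
SIGMA0 = 150e6                             # friction stress (Pa)

def stress_strain(rho0=1e12, eps_max=0.5, n=500):
    """Integrate d(rho)/d(eps) = K1*sqrt(rho) - K2*rho explicitly and
    return (strain, flow stress) via sigma = SIGMA0 + ALPHA*M*G*B*sqrt(rho)."""
    eps = np.linspace(0.0, eps_max, n)
    d_eps = eps[1] - eps[0]
    rho = np.empty(n)
    rho[0] = rho0
    for i in range(1, n):
        rho[i] = rho[i - 1] + (K1 * np.sqrt(rho[i - 1]) - K2 * rho[i - 1]) * d_eps
    return eps, SIGMA0 + ALPHA * M * G * B * np.sqrt(rho)

strain, stress = stress_strain()
print(f"flow stress at eps = 0.5: {stress[-1] / 1e6:.0f} MPa")
```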

  9. A dislocation density based micromechanical constitutive model for Sn-Ag-Cu solder alloys

    Science.gov (United States)

    Liu, Lu; Yao, Yao; Zeng, Tao; Keer, Leon M.

    2017-10-01

    Based on a dislocation density hardening law, a micromechanical model considering the effects of precipitates is developed for Sn-Ag-Cu solder alloys. In accordance with the microstructure of Sn-3.0Ag-0.5Cu thin films, intermetallic compounds (IMCs) are assumed to be spherical particles embedded in the polycrystalline β-Sn matrix. The mechanical behavior of the polycrystalline β-Sn matrix is determined by the elastic-plastic self-consistent method. The existence of IMCs not only impedes the motion of dislocations but also increases the overall stiffness. Thus, a dislocation density based hardening law considering non-shearable precipitates is adopted locally for the single β-Sn crystal, and the Mori-Tanaka scheme is applied to describe the overall viscoplastic behavior of the solder alloys. The proposed model is incorporated into finite element analysis, and the corresponding numerical implementation method is presented. The model can describe the mechanical behavior of Sn-3.0Ag-0.5Cu and Sn-1.0Ag-0.5Cu alloys under high strain rates over a wide range of temperatures. Furthermore, the change in overall Young's modulus due to different contents of IMCs is predicted and compared with experimental data. Results show that the proposed model can describe both the elastic and inelastic behavior of solder alloys with reasonable accuracy.

  10. Predicting Local Dengue Transmission in Guangzhou, China, through the Influence of Imported Cases, Mosquito Density and Climate Variability

    Science.gov (United States)

    Sang, Shaowei; Yin, Wenwu; Bi, Peng; Zhang, Honglong; Wang, Chenggang; Liu, Xiaobo; Chen, Bin; Yang, Weizhong; Liu, Qiyong

    2014-01-01

    Introduction: Each year there are approximately 390 million dengue infections worldwide. Weather variables have a significant impact on the transmission of Dengue Fever (DF), a mosquito-borne viral disease. DF in mainland China is characterized as an imported disease. Hence it is necessary to explore the roles of imported cases, mosquito density and climate variability in dengue transmission in China. The study aimed to identify the relationship between dengue occurrence and possible risk factors and to develop a prediction model for dengue control and prevention purposes. Methodology and Principal Findings: Three traditional suburbs and one district with an international airport in Guangzhou city were selected as the study areas. Autocorrelation and cross-correlation analyses were used to perform univariate analysis to identify possible risk factors, with relevant lagged effects, associated with local dengue cases. Principal component analysis (PCA) was applied to extract principal components, and PCA scores were used to represent the original variables to reduce multi-collinearity. Combining the univariate analysis and prior knowledge, time-series Poisson regression analysis was conducted to quantify the relationship between weather variables, the Breteau Index, imported DF cases and local dengue transmission in Guangzhou, China. The goodness-of-fit of the constructed model was determined by pseudo-R2, the Akaike information criterion (AIC) and residual tests. There were a total of 707 notified local DF cases from March 2006 to December 2012, with a seasonal distribution from August to November. There were a total of 65 notified imported DF cases from 20 countries, with forty-six cases (70.8%) imported from Southeast Asia. The model showed that local DF cases were positively associated with mosquito density, imported cases, temperature, precipitation, vapour pressure and minimum relative humidity, whilst being negatively associated with air pressure, with different time lags.

  11. Predicting local dengue transmission in Guangzhou, China, through the influence of imported cases, mosquito density and climate variability.

    Directory of Open Access Journals (Sweden)

    Shaowei Sang

    Each year there are approximately 390 million dengue infections worldwide. Weather variables have a significant impact on the transmission of Dengue Fever (DF), a mosquito-borne viral disease. DF in mainland China is characterized as an imported disease. Hence it is necessary to explore the roles of imported cases, mosquito density and climate variability in dengue transmission in China. The study aimed to identify the relationship between dengue occurrence and possible risk factors and to develop a prediction model for dengue control and prevention purposes. Three traditional suburbs and one district with an international airport in Guangzhou city were selected as the study areas. Autocorrelation and cross-correlation analyses were used to perform univariate analysis to identify possible risk factors, with relevant lagged effects, associated with local dengue cases. Principal component analysis (PCA) was applied to extract principal components, and PCA scores were used to represent the original variables to reduce multi-collinearity. Combining the univariate analysis and prior knowledge, time-series Poisson regression analysis was conducted to quantify the relationship between weather variables, the Breteau Index, imported DF cases and local dengue transmission in Guangzhou, China. The goodness-of-fit of the constructed model was determined by pseudo-R2, the Akaike information criterion (AIC) and residual tests. There were a total of 707 notified local DF cases from March 2006 to December 2012, with a seasonal distribution from August to November. There were a total of 65 notified imported DF cases from 20 countries, with forty-six cases (70.8%) imported from Southeast Asia. The model showed that local DF cases were positively associated with mosquito density, imported cases, temperature, precipitation, vapour pressure and minimum relative humidity, whilst being negatively associated with air pressure, with different time lags. Imported DF cases and mosquito
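
    The core statistical step both dengue records describe — time-series Poisson regression of monthly local case counts on lagged weather, mosquito-density (Breteau Index) and imported-case covariates — looks roughly as follows in statsmodels. The data, lag choices and coefficients here are invented placeholders, not the study's fitted model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 82  # months, March 2006 - December 2012
df = pd.DataFrame({
    "temp": rng.uniform(10, 30, n),
    "breteau": rng.uniform(0, 10, n),
    "imported": rng.poisson(0.8, n),
})
# lagged covariates, as suggested by cross-correlation analysis (lags invented)
df["temp_lag2"] = df["temp"].shift(2)
df["breteau_lag1"] = df["breteau"].shift(1)
df["imported_lag1"] = df["imported"].shift(1)
# synthetic monthly case counts with a log-linear dependence on the lags
df["cases"] = rng.poisson(
    np.exp(0.05 * df["temp_lag2"].fillna(20)
           + 0.1 * df["breteau_lag1"].fillna(5)
           + 0.3 * df["imported_lag1"].fillna(1) - 1.0))

model_df = df.dropna()
X = sm.add_constant(model_df[["temp_lag2", "breteau_lag1", "imported_lag1"]])
fit = sm.GLM(model_df["cases"], X, family=sm.families.Poisson()).fit()
print(fit.summary().tables[1])
```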

  12. Predictive modeling: potential application in prevention services.

    Science.gov (United States)

    Wilson, Moira L; Tumen, Sarah; Ota, Rissa; Simmers, Anthony G

    2015-05-01

    In 2012, the New Zealand Government announced a proposal to introduce predictive risk models (PRMs) to help professionals identify and assess children at risk of abuse or neglect as part of a preventive early intervention strategy, subject to further feasibility study and trialing. The purpose of this study is to examine technical feasibility and predictive validity of the proposal, focusing on a PRM that would draw on population-wide linked administrative data to identify newborn children who are at high priority for intensive preventive services. Data analysis was conducted in 2013 based on data collected in 2000-2012. A PRM was developed using data for children born in 2010 and externally validated for children born in 2007, examining outcomes to age 5 years. Performance of the PRM in predicting administratively recorded substantiations of maltreatment was good compared to the performance of other tools reviewed in the literature, both overall, and for indigenous Māori children. Some, but not all, of the children who go on to have recorded substantiations of maltreatment could be identified early using PRMs. PRMs should be considered as a potential complement to, rather than a replacement for, professional judgment. Trials are needed to establish whether risks can be mitigated and PRMs can make a positive contribution to frontline practice, engagement in preventive services, and outcomes for children. Deciding whether to proceed to trial requires balancing a range of considerations, including ethical and privacy risks and the risk of compounding surveillance bias. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.

  13. Novel modeling of combinatorial miRNA targeting identifies SNP with potential role in bone density.

    Directory of Open Access Journals (Sweden)

    Claudia Coronnello

    MicroRNAs (miRNAs) are post-transcriptional regulators that bind to their target mRNAs through base complementarity. Predicting miRNA targets is a challenging task, and various studies have shown that existing algorithms suffer from a high number of false predictions and low to moderate overlap in their predictions. Until recently, very few algorithms considered the dynamic nature of the interactions, including the effect of less specific interactions, the miRNA expression level, and the effect of combinatorial miRNA binding. Addressing these issues can result in more accurate miRNA:mRNA modeling with many applications, including efficient miRNA-related SNP evaluation. We present a novel thermodynamic model based on the Fermi-Dirac equation that incorporates miRNA expression in the prediction of target occupancy, and we show that it improves the performance of two popular single-miRNA target finders. Modeling combinatorial miRNA targeting is a natural extension of this model. Two other algorithms show improved prediction efficiency when combinatorial binding models are considered. ComiR (Combinatorial miRNA targeting), a novel algorithm we developed, incorporates the improved predictions of the four target finders into a single probabilistic score using ensemble learning. Combining target scores of multiple miRNAs using ComiR improves predictions over the naïve method for target combination. The ComiR scoring scheme can be used for identification of SNPs affecting miRNA binding. As proof of principle, ComiR identified rs17737058 as disruptive to the miR-488-5p:NCOA1 interaction, which we confirmed in vitro. We also found rs17737058 to be significantly associated with decreased bone mineral density (BMD) in two independent cohorts, indicating that the miR-488-5p/NCOA1 regulatory axis is likely critical in maintaining BMD in women. With increasing availability of comprehensive high-throughput datasets from patients ComiR is expected to become an essential
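
    The Fermi-Dirac idea can be stated compactly: the probability that a target site is occupied follows a Fermi-Dirac form in the binding free energy, with a chemical-potential-like term that rises with miRNA expression. A schematic version (the functional form follows the abstract; RT, the parameter values and the expression-to-μ mapping are invented for illustration):

```python
import math

RT = 0.593  # kcal/mol at ~25 C

def site_occupancy(dG_kcal, mu_kcal):
    """Fermi-Dirac occupancy of a target site: strong binding (more negative
    dG) and high miRNA expression (larger mu) push occupancy toward 1."""
    return 1.0 / (1.0 + math.exp((dG_kcal - mu_kcal) / RT))

def mu_from_expression(copies, mu0=-12.0):
    """Toy chemical potential rising logarithmically with expression level."""
    return mu0 + RT * math.log(copies)

for copies in (10, 100, 1000):
    occ = site_occupancy(dG_kcal=-10.0, mu_kcal=mu_from_expression(copies))
    print(f"{copies:5d} copies -> occupancy {occ:.2f}")
```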

  14. Density-Based Multilevel Hartree-Fock Model.

    Science.gov (United States)

    Sæther, Sandra; Kjærgaard, Thomas; Koch, Henrik; Høyvik, Ida-Marie

    2017-11-14

    We introduce a density-based multilevel Hartree-Fock (HF) method where the electronic density is optimized in a given region of the molecule (the active region). Active molecular orbitals (MOs) are generated by a decomposition of a starting guess atomic orbital (AO) density, whereas the inactive MOs (which constitute the remainder of the density) are never generated or referenced. The MO formulation allows for a significant dimension reduction by transforming from the AO basis to the active MO basis. All interactions between the inactive and active regions of the molecule are retained, and an exponential parametrization of orbital rotations ensures that the active and inactive density matrices separately, and in sum, satisfy the symmetry, trace, and idempotency requirements. Thus, the orbital spaces stay orthogonal, and furthermore, the total density matrix represents a single Slater determinant. In each iteration, the (level-shifted) Newton equations in the active MO basis are solved to obtain the orbital transformation matrix. The approach is equivalent to variationally optimizing only a subset of the MOs of the total system. In this orbital space partitioning, no bonds are broken and no a priori orbital assignments are carried out. In the limit of including all orbitals in the active space, we obtain an MO density-based formulation of full HF.

  15. Automatic prediction of catalytic residues by modeling residue structural neighborhood

    Directory of Open Access Journals (Sweden)

    Passerini Andrea

    2010-03-01

    Background: Prediction of catalytic residues is a major step in characterizing the function of enzymes. In its simpler formulation, the problem can be cast into a binary classification task at the residue level, predicting whether the residue is directly involved in the catalytic process. The task is quite hard even when structural information is available, due to the rather wide range of roles a functional residue can play and to the large imbalance between the numbers of catalytic and non-catalytic residues. Results: We developed an effective representation of structural information by modeling spherical regions around candidate residues and extracting statistics on the properties of their content, such as physico-chemical properties, atomic density, flexibility, and presence of water molecules. We trained an SVM classifier combining our features with sequence-based information and previously developed 3D features, and compared its performance with the most recent state-of-the-art approaches on different benchmark datasets. We further analyzed the discriminant power of the information provided by the presence of heterogens in the residue neighborhood. Conclusions: Our structure-based method achieves consistent improvements on all tested datasets over both sequence-based and structure-based state-of-the-art approaches. Structural neighborhood information is shown to be responsible for such results, and predicting the presence of nearby heterogens seems to be a promising direction for further improvements.
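
    Two concrete points in this record — an SVM over residue-neighborhood features and the severe catalytic/non-catalytic class imbalance — can be sketched with scikit-learn. The three features and the class ratio below are synthetic stand-ins for the paper's structural-neighborhood statistics.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_cat, n_non = 60, 2000          # catalytic residues are rare (~3% here)
# invented neighborhood features: atomic density, flexibility, nearby waters
X_cat = rng.normal([0.8, 0.3, 2.0], 0.3, size=(n_cat, 3))
X_non = rng.normal([0.5, 0.5, 1.0], 0.3, size=(n_non, 3))
X = np.vstack([X_cat, X_non])
y = np.array([1] * n_cat + [0] * n_non)

# class_weight='balanced' counters the imbalance the abstract highlights
clf = SVC(kernel="rbf", class_weight="balanced")
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print("cross-validated ROC AUC:", auc.round(3))
```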

  16. Model Insensitive and Calibration Independent Method for Determination of the Downstream Neutral Hydrogen Density Through Ly-alpha Glow Observations

    Science.gov (United States)

    Gangopadhyay, P.; Judge, D. L.

    1996-01-01

    Our knowledge of the various heliospheric phenomena (location of the solar wind termination shock, heliopause configuration and very local interstellar medium parameters) is limited by uncertainties in the available heliospheric plasma models and by calibration uncertainties in the observing instruments. There is, thus, a strong motivation to develop model-insensitive and calibration-independent methods to reduce the uncertainties in the relevant heliospheric parameters. We have developed such a method to constrain the downstream neutral hydrogen density inside the heliospheric tail. In our approach we have taken advantage of the relative insensitivity of the downstream neutral hydrogen density profile to the specific plasma model adopted. We have also used the fact that the presence of an asymmetric neutral hydrogen cavity surrounding the sun, characteristic of all neutral density models, results in a higher multiple-scattering contribution to the observed glow in the downstream region than in the upstream region. This allows us to approximate the actual density profile with one which is spatially uniform for the purpose of calculating the downstream backscattered glow. Using different spatially constant density profiles, radiative transfer calculations are performed, and the radial dependence of the predicted glow is compared with the observed 1/R dependence of the Pioneer 10 UV data. Such a comparison bounds the large-distance heliospheric neutral hydrogen density in the downstream direction to a value between 0.05 and 0.1/cc.

  17. Heuristic Modeling for TRMM Lifetime Predictions

    Science.gov (United States)

    Jordan, P. S.; Sharer, P. J.; DeFazio, R. L.

    1996-01-01

    Analysis time for computing the expected mission lifetimes of proposed frequently maneuvering, tightly altitude-constrained, Earth-orbiting spacecraft has been significantly reduced by means of a heuristic modeling method implemented in a commercial off-the-shelf spreadsheet product (QuattroPro) running on a personal computer (PC). The method uses a look-up table to estimate the maneuver frequency per month as a function of the spacecraft ballistic coefficient and the solar flux index, then computes the associated fuel use with a simple engine model. Maneuver frequency data points are produced by means of a single one-month run of traditional mission analysis software for each of the 12 to 25 data points required for the table. As the data-point computations are required only at mission design start-up and on the occasion of significant mission redesigns, the dependence on time-consuming traditional modeling methods is dramatically reduced. Results to date have agreed with traditional methods to within 1 to 1.5 percent. The spreadsheet approach is applicable to a wide variety of Earth-orbiting spacecraft with tight altitude constraints. It will be particularly useful to such missions as the Tropical Rainfall Measuring Mission scheduled for launch in 1997, whose mission lifetime calculations are heavily dependent on frequently revised solar flux predictions.
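
    The heuristic amounts to a bilinear table lookup plus a constant-Δv engine model. A toy version (all table values, Isp, spacecraft mass and Δv per burn are invented; the real table entries come from one-month runs of traditional mission-analysis software):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# maneuvers per month, tabulated vs ballistic coefficient and solar flux F10.7
ballistic = np.array([50.0, 100.0, 200.0])       # kg/m^2
flux = np.array([70.0, 150.0, 230.0])            # solar flux units
freq_table = np.array([[2.0, 5.0, 9.0],          # rows follow `ballistic`
                       [1.0, 3.0, 6.0],
                       [0.5, 1.5, 3.0]])
freq = RegularGridInterpolator((ballistic, flux), freq_table)

def lifetime_months(fuel_kg, bc, f107, dv_per_burn=0.4, isp=220.0, mass=3500.0):
    """Months until the tank empties, with a simple impulsive engine model:
    fuel per burn ~ m * dv / (isp * g0), valid for small dv."""
    g0 = 9.80665
    fuel_per_burn = mass * dv_per_burn / (isp * g0)
    burns_per_month = float(freq([[bc, f107]])[0])
    return fuel_kg / (fuel_per_burn * burns_per_month)

print(f"{lifetime_months(fuel_kg=100, bc=120, f107=180):.0f} months")
```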

  18. A Computational Model for Predicting Gas Breakdown

    Science.gov (United States)

    Gill, Zachary

    2017-10-01

    Pulsed-inductive discharges are a common method of producing a plasma. They provide a mechanism for quickly and efficiently generating a large volume of plasma for rapid use and are seen in applications including propulsion, fusion power, and high-power lasers. However, some common designs see a delayed response time due to the plasma forming when the magnitude of the magnetic field in the thruster is at a minimum. New designs are difficult to evaluate due to the amount of time needed to construct a new geometry and the high monetary cost of changing the power generation circuit. To more quickly evaluate new designs and better understand the shortcomings of existing designs, a computational model is developed. This model uses a modified single-electron model as the basis for a Mathematica code to determine how the energy distribution in a system changes with regards to time and location. By analyzing this energy distribution, the approximate time and location of initial plasma breakdown can be predicted. The results from this code are then compared to existing data to show its validity and shortcomings. Missouri S&T APLab.

  19. [Establishment of an artificial neural network model for analysis of the influence of climate factors on the density of Aedes albopictus].

    Science.gov (United States)

    Yu, De-xian; Lin, Li-feng; Luo, Lei; Zhou, Wen; Gao, Lu-lu; Chen, Qing; Yu, Shou-yi

    2010-07-01

    To establish a model for predicting the density of Aedes albopictus based on climate factors. The data on Aedes albopictus density and climate from 1995 to 2001 in Guangzhou were collected and analyzed. The prediction model for Aedes albopictus density was established using the Artificial Neural Network Toolbox of the Matlab 7.0 software package. The climate factors used to establish the model included the average monthly pressure, evaporation capacity, relative humidity, sunshine hours, temperature, wind speed, and precipitation, and the established model was tested and verified. The BP network model was established from the data on mosquito density and climate factors. After 25 training iterations, the performance error decreased from 0.305539 to 2.93751×10⁻¹⁴. Verification of the model against mosquito density data showed a prediction concordance rate of 80%. The neural network model based on climate factors is effective for predicting the density of Aedes albopictus.
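
    A present-day equivalent of that BP (back-propagation) network is a small multilayer perceptron. The sketch below uses scikit-learn with invented monthly climate inputs standing in for the seven listed factors.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 84  # 7 years of monthly records
# columns: pressure, evaporation, humidity, sunshine, temperature, wind, rain
X = rng.normal(size=(n, 7))
density = 2.0 + 0.8 * X[:, 4] + 0.5 * X[:, 6] + rng.normal(0, 0.2, n)

X_std = StandardScaler().fit_transform(X)
net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
net.fit(X_std[:72], density[:72])           # train on the first 6 years
print("held-out R^2:", round(net.score(X_std[72:], density[72:]), 2))
```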

  20. Distributed model predictive control made easy

    CERN Document Server

    Negenborn, Rudy

    2014-01-01

    The rapid evolution of computer science, communication, and information technology has enabled the application of control techniques to systems beyond the possibilities of control theory just a decade ago. Critical infrastructures such as electricity, water, traffic and intermodal transport networks are now in the scope of control engineers. The sheer size of such large-scale systems requires the adoption of advanced distributed control approaches. Distributed model predictive control (MPC) is one of the promising control methodologies for control of such systems.   This book provides a state-of-the-art overview of distributed MPC approaches, while at the same time making clear directions of research that deserve more attention. The core and rationale of 35 approaches are carefully explained. Moreover, detailed step-by-step algorithmic descriptions of each approach are provided. These features make the book a comprehensive guide both for those seeking an introduction to distributed MPC as well as for those ...

  1. Regional-scale Predictions of Agricultural N Losses in an Area with a High Livestock Density

    Directory of Open Access Journals (Sweden)

    Carlo Grignani

    2011-02-01

    The quantification of N losses in territories characterised by intensive animal stocking is of primary importance. The development of simulation models coupled to a GIS, or of simple environmental indicators, is strategic for suggesting the best specific management practices. The aims of this work were: (a) to couple a GIS to a simulation model in order to predict N losses; (b) to estimate leaching and gaseous N losses from a territory with intensive livestock farming; (c) to derive a simplified empirical metamodel from the model output that could be used to rank the relative importance of the variables which influence N losses and to extend the results to homogeneous situations. The work was carried out in a 7773 ha area in the Western Po plain in Italy. This area was chosen because it is characterised by intensive animal husbandry and might soon be included in the nitrate vulnerable zones. The high N load, the shallow water table and the coarse type of sub-soil sediments contribute to the vulnerability to N leaching. The CropSyst simulation model was coupled to a GIS to account for the soil surface N budget. A linear multiple regression approach was used to describe the influence of a series of independent variables on N leaching, gaseous N losses (including volatilisation and denitrification) and the sum of the two. Although the available GIS was very detailed, a great deal of the information necessary to run the model was lacking. Further soil measurements concerning soil hydrology, soil nitrate content and water table depth proved very valuable for integrating the data contained in the GIS in order to produce reliable input for the model. The results showed that the soils greatly influence both the quantity and the pathways of the N losses. The ratio between the N losses and the N supplied varied between 20 and 38%. The metamodel shows that manure input always played the most important role in determining the N losses

  2. Moving Towards Dynamic Ocean Management: How Well Do Modeled Ocean Products Predict Species Distributions?

    Directory of Open Access Journals (Sweden)

    Elizabeth A. Becker

    2016-02-01

    Species distribution models are now widely used in conservation and management to predict suitable habitat for protected marine species. The primary sources of dynamic habitat data have been in situ and remotely sensed oceanic variables (both are considered "measured data"), but ocean models can now provide historical estimates and forecast predictions of relevant habitat variables such as temperature, salinity, and mixed layer depth. To assess the performance of modeled ocean data in species distribution models, we present a case study for cetaceans that compares models based on output from a data-assimilative implementation of the Regional Ocean Modeling System (ROMS) to those based on measured data. Specifically, we used seven years of cetacean line-transect survey data collected between 1991 and 2009 to develop predictive habitat-based models of cetacean density for 11 species in the California Current Ecosystem. Two different generalized additive models were compared: one built with a full suite of ROMS output and another built with a full suite of measured data. Model performance was assessed using the percentage of explained deviance, root mean squared error (RMSE), observed-to-predicted density ratios, and visual inspection of predicted and observed distributions. Predicted distribution patterns were similar for models using ROMS output and measured data, and showed good concordance between observed sightings and model predictions. Quantitative measures of predictive ability were also similar between model types, and RMSE values were almost identical. The overall demonstrated success of the ROMS-based models opens new opportunities for dynamic species management and biodiversity monitoring, because ROMS output is available in near real time and can be forecast.

  3. Hierarchical spatial capture-recapture models: Modeling population density from stratified populations

    Science.gov (United States)

    Royle, J. Andrew; Converse, Sarah J.

    2014-01-01

    Capture–recapture studies are often conducted on populations that are stratified by space, time or other factors. In this paper, we develop a Bayesian spatial capture–recapture (SCR) modelling framework for stratified populations – when sampling occurs within multiple distinct spatial and temporal strata. We describe a hierarchical model that integrates distinct models for both the spatial encounter history data from capture–recapture sampling, and also for modelling variation in density among strata. We use an implementation of data augmentation to parameterize the model in terms of a latent categorical stratum or group membership variable, which provides a convenient implementation in popular BUGS software packages. We provide an example application to an experimental study involving small-mammal sampling on multiple trapping grids over multiple years, where the main interest is in modelling a treatment effect on population density among the trapping grids. Many capture–recapture studies involve some aspect of spatial or temporal replication that requires some attention to modelling variation among groups or strata. We propose a hierarchical model that allows explicit modelling of group or strata effects. Because the model is formulated for individual encounter histories and is easily implemented in the BUGS language and other free software, it also provides a general framework for modelling individual effects, such as are present in SCR models.
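
    A minimal sketch of the encounter model at the core of SCR, assuming a half-normal detection function p = p0·exp(−d²/2σ²); the trap layout, p0 and σ are invented, and the paper's stratum-level density model and BUGS implementation are not reproduced here:

        # Simulate SCR encounter data for one stratum (illustrative values only).
        import numpy as np

        rng = np.random.default_rng(1)
        p0, sigma = 0.3, 0.8                        # baseline detection, spatial scale
        traps = np.array([[x, y] for x in range(5) for y in range(5)], float)
        centers = rng.uniform(0, 4, size=(20, 2))   # latent activity centres

        d2 = ((centers[:, None, :] - traps[None, :, :]) ** 2).sum(-1)
        p = p0 * np.exp(-d2 / (2 * sigma**2))       # detection prob., individual x trap
        y = rng.binomial(1, p, size=(4,) + p.shape).sum(0)   # 4 sampling occasions
        print("encounter matrix:", y.shape, "total detections:", y.sum())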

  4. A fusion model used in subsidence prediction in Taiwan

    Directory of Open Access Journals (Sweden)

    S.-J. Wang

    2015-11-01

    Full Text Available The Taiwan Water Resources Agency uses four techniques to monitor subsidence in Taiwan, namely data from leveling, global positioning system (GPS), multi-level compaction monitoring wells (MCMWs), and interferometric synthetic aperture radar (InSAR). Each data type has advantages and disadvantages and is suitable for different analysis tools. Only MCMW data provide compaction information at different depths in an aquifer system, thus they are adopted in this study. However, the cost of an MCMW is high and the number of MCMWs is relatively small. Leveling data are thus also adopted due to their high resolution and accuracy. MCMW data provide compaction information at different depths, and the experimental data from the wells provide the physical properties. These data are suitable for a physical model. Leveling data have high monitoring density in the spatial domain but low density in the temporal domain due to the heavy field work involved. These data are suitable for a black- or grey-box model. Poroelastic theory, which is known to be more rigorous than Terzaghi's consolidation theory, is adopted in this study with the use of MCMW data. Grey theory, a widely used grey-box model, is adopted in this study with the use of leveling data. A fusion technique is developed to combine the subsidence predictions from the poroelastic and grey models to obtain a spatially and temporally connected two-dimensional subsidence distribution. The fusion model is successfully applied to subsidence predictions in Changhua, Yunlin, Tainan, and Kaohsiung of Taiwan and obtains good results. A good subsidence model can help the government formulate accurate strategies for land and groundwater resource management.
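
    The abstract does not specify the fusion rule, so the following is only one plausible sketch, an inverse-error weighting of the two models' predictions, with all numbers invented:

        # Hypothetical fusion of poroelastic and grey-model subsidence predictions.
        import numpy as np

        pred_poro = np.array([2.1, 2.4, 2.8])   # cm/yr near monitoring wells
        pred_grey = np.array([1.9, 2.6, 3.1])   # cm/yr from the levelling network
        rmse_poro, rmse_grey = 0.30, 0.45       # assumed calibration-period errors

        w = (1 / rmse_poro**2) / (1 / rmse_poro**2 + 1 / rmse_grey**2)
        fused = w * pred_poro + (1 - w) * pred_grey
        print("weight on poroelastic model:", round(w, 2), "fused:", fused.round(2))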

  5. Which method predicts recidivism best?: A comparison of statistical, machine learning, and data mining predictive models

    OpenAIRE

    Tollenaar, N.; van der Heijden, P.G.M.

    2012-01-01

    Using criminal population conviction histories of recent offenders, prediction models are developed that predict three types of criminal recidivism: general recidivism, violent recidivism and sexual recidivism. The research question is whether prediction techniques from modern statistics, data mining and machine learning provide an improvement in predictive performance over classical statistical methods, namely logistic regression and linear discriminant analysis. These models are compared ...
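
    A minimal sketch of this kind of head-to-head comparison in scikit-learn, with synthetic stand-ins for conviction-history predictors; AUC is used here as one common performance measure, not necessarily the paper's:

        # Compare a classical and a machine-learning classifier by cross-validated AUC.
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
        for name, model in [("logistic", LogisticRegression(max_iter=1000)),
                            ("random forest", RandomForestClassifier(random_state=0))]:
            auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
            print(f"{name}: mean AUC = {auc:.3f}")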

  6. Accuracy of density functional theory in the prediction of carbon dioxide adsorbent materials.

    Science.gov (United States)

    Cazorla, Claudio; Shevlin, Stephen A

    2013-04-07

    Density functional theory (DFT) has become the computational method of choice for modeling and characterization of carbon dioxide adsorbents, a broad family of materials which at present are urgently sought after for environmental applications. The description of polar carbon dioxide (CO₂) molecules in low-coordinated environments like surfaces and porous materials, however, may be challenging for local and semi-local DFT approximations. Here, we present a thorough computational study in which the accuracy of DFT methods in describing the interactions of CO₂ with model alkali-earth-metal (AEM, Ca and Li) decorated carbon structures, namely anthracene (C₁₄H₁₀) molecules, is assessed. We find that gas-adsorption energies and equilibrium structures obtained with standard (i.e. LDA and GGA), hybrid (i.e. PBE0 and B3LYP) and van der Waals exchange-correlation functionals of DFT dramatically differ from the results obtained with second-order Møller-Plesset perturbation theory (MP2), an accurate computational quantum chemistry method. The major disagreements found can be mostly rationalized in terms of electron correlation errors that lead to wrong charge-transfer and electrostatic Coulomb interactions between CO₂ and AEM-decorated anthracene molecules. Nevertheless, we show that when the concentration of AEM atoms in anthracene is tuned to resemble as closely as possible the electronic structure of AEM-decorated graphene (i.e. an extended two-dimensional material), hybrid exchange-correlation DFT and MP2 methods quantitatively provide similar results.

  7. Progress on Complex Langevin simulations of a finite density matrix model for QCD

    Energy Technology Data Exchange (ETDEWEB)

    Bloch, Jacques [Univ. of Regensburg (Germany). Inst. for Theoretical Physics; Glesaan, Jonas [Swansea Univ., Swansea U.K.; Verbaarschot, Jacobus [Stony Brook Univ., NY (United States). Dept. of Physics and Astronomy; Zafeiropoulos, Savvas [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); College of William and Mary, Williamsburg, VA (United States); Heidelberg Univ. (Germany). Inst. for Theoretical Physics

    2018-04-01

    We study the Stephanov model, which is an RMT model for QCD at finite density, using the Complex Langevin algorithm. A naive implementation of the algorithm shows convergence towards the phase quenched or quenched theory rather than to the intended theory with dynamical quarks. A detailed analysis of this issue and a potential resolution of the failure of this algorithm are discussed. We study the effect of gauge cooling on the Dirac eigenvalue distribution and the time evolution of the norm for various cooling norms, which were specifically designed to remove the pathologies of the complex Langevin evolution. The cooling is further supplemented with a shifted representation for the random matrices. Unfortunately, none of these modifications generates a substantial improvement of the complex Langevin evolution, and the final results still do not agree with the analytical predictions.

  8. Multiphase modeling of channelized pyroclastic density currents and the effect of confinement on mobility and entrainment

    Science.gov (United States)

    Kubo, A. I.; Dufek, J.

    2017-12-01

    Around explosive volcanic centers such as Mount Saint Helens, pyroclastic density currents (PDCs) pose a great risk to life and property. Understanding of the mobility and dynamics of PDCs and other gravity currents is vital to mitigating hazards of future eruptions. Evidence from pyroclastic deposits at Mount Saint Helens and one-dimensional modeling suggest that channelization of flows effectively increases run out distances. Dense flows are thought to scour and erode the bed leading to confinement for subsequent flows and could result in significant changes to predicted runout distance and mobility. Here, we present the results of three-dimensional multiphase models comparing confined and unconfined flows using simplified geometries. We focus on bed stress conditions as a proxy for conditions that could influence subsequent erosion and self-channelization. We also explore the controls on gas entrainment in all scenarios to determine how confinement impacts the particle concentration gradient, granular interactions, and mobility.

  9. Conditional Density Models Integrating Fuzzy and Probabilistic Representations of Uncertainty

    NARCIS (Netherlands)

    R.J. Almeida e Santos Nogueira (Rui Jorge)

    2014-01-01

    Conditional density estimation is an important problem in a variety of areas such as system identification, machine learning, artificial intelligence, empirical economics, macroeconomic analysis, quantitative finance and risk management. This work considers the

  10. Viscosity and density models for copper electrorefining electrolytes

    OpenAIRE

    Kalliomäki Taina; Aji Arif T.; Aromaa Jari; Lundström Mari

    2016-01-01

    Viscosity and density are highly important physicochemical properties of copper electrolyte since they affect the purity of cathode copper and energy consumption [1, 2], affecting the mass and heat transfer conditions in the cell [3]. Increasing viscosity and density decreases the rate at which the anode slime falls to the bottom of the cell [4, 5] and lowers the diffusion coefficient of cupric ion (DCu2+) [6]. Decreasing the falling rate of anode slime increases movement of the slime to other...

  11. Predictive Models for Semiconductor Device Design and Processing

    Science.gov (United States)

    Meyyappan, Meyya; Arnold, James O. (Technical Monitor)

    1998-01-01

    The device feature size continues to be on a downward trend with a simultaneous upward trend in wafer size to 300 mm. Predictive models are needed more than ever before for this reason. At NASA Ames, a Device and Process Modeling effort has been initiated recently with a view to address these issues. Our activities cover sub-micron device physics, process and equipment modeling, computational chemistry and materials science. This talk will outline these efforts and emphasize the interaction among various components. The device physics component is largely based on integrating quantum effects into device simulators. We have two parallel efforts, one based on a quantum mechanics approach and the second, a semiclassical hydrodynamics approach with quantum correction terms. Under the first approach, three different quantum simulators are being developed and compared: a nonequilibrium Green's function (NEGF) approach, a Wigner function approach, and a density matrix approach. In this talk, results using various codes will be presented. Our process modeling work focuses primarily on epitaxy and etching using first-principles models coupling reactor level and wafer level features. For the latter, we are using a novel approach based on Level Set theory. Sample results from this effort will also be presented.

  12. Population density predicts outcome from out-of-hospital cardiac arrest in Victoria, Australia.

    Science.gov (United States)

    Nehme, Ziad; Andrew, Emily; Cameron, Peter A; Bray, Janet E; Bernard, Stephen A; Meredith, Ian T; Smith, Karen

    2014-05-05

    To examine the impact of population density on incidence and outcome of out-of-hospital cardiac arrest (OHCA). Data were extracted from the Victorian Ambulance Cardiac Arrest Registry for all adult OHCA cases of presumed cardiac aetiology attended by the emergency medical service (EMS) between 1 January 2003 and 31 December 2011. Cases were allocated into one of five population density groups according to their statistical local area: very low density (≤ 10 people/km²), low density (11-200 people/km²), medium density (201-1000 people/km²), high density (1001-3000 people/km²), and very high density (> 3000 people/km²). Outcome measures were survival to hospital and survival to hospital discharge. The EMS attended 27 705 adult presumed cardiac OHCA cases across 204 Victorian regions. In 12 007 of these (43.3%), resuscitation was attempted by the EMS. Incidence was lower and arrest characteristics were consistently less favourable for lower population density groups. Survival outcomes, including return of spontaneous circulation, survival to hospital and survival to hospital discharge, were significantly poorer in less densely populated groups. Compared with very low density populations, the risk-adjusted odds ratios of surviving to hospital discharge were: low density, 1.88 (95% CI, 1.15-3.07); medium density, 2.49 (95% CI, 1.55-4.02); high density, 3.47 (95% CI, 2.20-5.48); and very high density, 4.32 (95% CI, 2.67-6.99). Population density is independently associated with survival after OHCA, and significant variation in the incidence and characteristics of these events is observed across the state.

  13. Information density converges in dialogue: Towards an information-theoretic model.

    Science.gov (United States)

    Xu, Yang; Reitter, David

    2018-01-01

    The principle of entropy rate constancy (ERC) states that language users distribute information such that words tend to be equally predictable given previous contexts. We examine the applicability of this principle to spoken dialogue, as previous findings primarily rest on written text. The study takes into account the joint-activity nature of dialogue and the topic shift mechanisms that are different from monologue. It examines how the information contributions from the two dialogue partners interactively evolve as the discourse develops. The increase of local sentence-level information density (predicted by ERC) is shown to apply to dialogue overall. However, when the different roles of interlocutors in introducing new topics are identified, their contribution in information content displays a new converging pattern. We draw explanations for this pattern from multiple perspectives: First, casting dialogue as an information exchange system would mean that the pattern is the result of two interlocutors maintaining their own context rather than sharing one. Second, we present some empirical evidence that a model of Interactive Alignment may include information density to explain the effect. Third, we argue that building common ground is a process analogous to information convergence. Thus, we put forward an information-theoretic view of dialogue, under which some existing theories of human dialogue may eventually be unified. Copyright © 2017 Elsevier B.V. All rights reserved.
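
    As a toy illustration of sentence-level information density, the sketch below computes average per-word surprisal (−log₂ p) under a unigram model; real ERC studies use much larger corpora and n-gram or neural language models:

        # Per-word information content of utterances under a unigram model.
        import math
        from collections import Counter

        corpus = "the cat sat on the mat the dog sat on the rug".split()
        counts = Counter(corpus)
        total = sum(counts.values())

        def info_density(sentence):
            words = sentence.split()
            return sum(-math.log2(counts[w] / total) for w in words) / len(words)

        for turn in ["the cat sat", "the dog sat on the rug"]:
            print(turn, "->", round(info_density(turn), 2), "bits/word")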

  14. Modeling of Materials for Energy Storage: A Challenge for Density Functional Theory

    Science.gov (United States)

    Kaltak, Merzuk; Fernandez-Serra, Marivi; Hybertsen, Mark S.

    Hollandite α-MnO2 is a promising material for rechargeable batteries and is studied extensively in the community because of its interesting tunnel structure and the corresponding large capacity for lithium as well as sodium ions. However, the presence of partially reduced Mn ions due to doping with Ag or during lithiation makes hollandite a challenging system for density functional theory and the conventionally employed PBE+U method. A naive attempt to model the ternary system LixAgyMnO2 with density functionals similar to those employed for the case y = 0 fails and predicts a strong monoclinic distortion of the experimentally observed tetragonal unit cell for Ag2Mn8O16. Structure and binding energies are compared with experimental data and show the importance of van der Waals interactions as well as the necessity for an accurate description of the cooperative Jahn-Teller effects for silver hollandite AgyMnO2. Based on these observations, a ternary phase diagram is calculated, allowing one to predict the physical and chemical properties of LixAgyMnO2, such as stable stoichiometries, open circuit voltages, the formation of Ag metal and the structural change during lithiation. This work was supported by the Department of Energy (DOE) under award #DE-SC0012673.

  15. Fuzzy predictive filtering in nonlinear economic model predictive control for demand response

    DEFF Research Database (Denmark)

    Santos, Rui Mirra; Zong, Yi; Sousa, Joao M. C.

    2016-01-01

    The performance of a model predictive controller (MPC) is highly correlated with the model's accuracy. This paper introduces an economic model predictive control (EMPC) scheme based on a nonlinear model, which uses a branch-and-bound tree search for solving the inherent non-convex optimization...

  16. Photosensitizer absorption coefficient modeling and necrosis prediction during Photodynamic Therapy.

    Science.gov (United States)

    Salas-García, Irene; Fanjul-Vélez, Félix; Arce-Diego, José Luis

    2012-09-03

    The development of accurate predictive models for Photodynamic Therapy (PDT) has emerged as a valuable tool to adjust the current therapy dosimetry to get an optimal treatment response, and definitely to establish new personal protocols. Several attempts have been made in this way, although the influence of the photosensitizer depletion on the optical parameters has not been taken into account so far. We present a first approach to predict the spatio-temporal variation of the photosensitizer absorption coefficient during PDT applied to dermatological diseases, taking into account the photobleaching of a topical photosensitizer. This permits us to obtain the photon density absorbed by the photosensitizer molecules as the treatment progresses and to determine necrosis maps to estimate the short term therapeutic effects in the target tissue. The model presented also takes into account an inhomogeneous initial photosensitizer distribution, light propagation in biological media and the evolution of the molecular concentrations of different components involved in the photochemical reactions. The obtained results allow us to investigate how the photosensitizer depletion during the photochemical reactions affects light absorption by the photosensitizer molecules as the optical radiation propagates through the target tissue, and to estimate the necrotic tumor area progression under different treatment conditions. Copyright © 2012 Elsevier B.V. All rights reserved.
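
    A minimal sketch of the photobleaching idea, assuming first-order depletion of the photosensitizer concentration S proportional to the local fluence rate, dS/dt = −k·φ·S; the constants are invented, not the paper's parameters:

        # Forward-Euler integration of a hypothetical photobleaching law.
        k = 1.2e-3        # bleaching constant (cm^2/J), invented
        phi = 50.0e-3     # fluence rate (W/cm^2), invented
        dt, t_end = 1.0, 600.0   # time step and duration (s)
        S = 1.0           # normalised photosensitizer concentration
        for _ in range(int(t_end / dt)):
            S += -k * phi * S * dt
        print("remaining fraction after 10 min:", round(S, 3))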

  17. Modeling root length density of field grown potatoes under different irrigation strategies and soil textures using artificial neural networks

    DEFF Research Database (Denmark)

    Ahmadi, Seyed Hamid; Sepaskhah, Ali Reza; Andersen, Mathias Neumann

    2014-01-01

    Root length density (RLD) is a highly sought-after parameter for use in crop growth modeling but difficult to measure under field conditions. Therefore, artificial neural networks (ANNs) were implemented to predict the RLD of field grown potatoes that were subject to three irrigation strategies and three soil textures with different soil water status and soil densities. The objectives of the study were to test whether soil textural information, soil water status, and soil density might be used by ANN to simulate RLD at harvest. In the study 63 data pairs were divided into data sets of training (80…) of the eight input variables: soil layer intervals (D), percentages of sand (Sa), silt (Si), and clay (Cl), bulk density of soil layers (Bd), weighted soil moisture deficit during the irrigation strategies period (SMD), geometric mean particle size diameter (dg), and geometric standard deviation (σg…
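
    A sketch of the described ANN workflow using the eight named inputs; the data here are synthetic, so the fit statistic is meaningless and only the mechanics are shown:

        # MLP regression of RLD on the eight abstract-named soil inputs.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        n = 63                                   # the study had 63 data pairs
        X = rng.uniform(size=(n, 8))             # D, Sa, Si, Cl, Bd, SMD, dg, sigma_g
        y = rng.uniform(0.1, 2.0, size=n)        # RLD (cm/cm^3), synthetic

        ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
        ann.fit(X[:50], y[:50])                  # roughly 80% for training
        print("held-out R^2 (meaningless on random data):",
              round(ann.score(X[50:], y[50:]), 2))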

  18. Assessing the Feasibility of Low-Density LiDAR for Stand Inventory Attribute Predictions in Complex and Managed Forests of Northern Maine, USA

    Directory of Open Access Journals (Sweden)

    Rei Hayashi

    2014-02-01

    Full Text Available The objective of this study was to evaluate the applicability of using a low-density (1–3 points m−2) discrete-return LiDAR (Light Detection and Ranging) for predicting maximum tree height, stem density, basal area, quadratic mean diameter and total volume. The research was conducted at the Penobscot Experimental Forest in central Maine, where a range of stand structures and species composition is present and generally representative of northern Maine’s forests. Prediction models were developed utilizing the random forest algorithm that was calibrated using reference data collected in fixed radius circular plots. For comparison, the volume model used two sets of reference data, with one being fixed radius circular plots and the other variable radius plots. Prediction biases were evaluated with respect to five silvicultural treatments and softwood species composition based on the coefficient of determination (R²), root mean square error and mean bias, as well as residual scatter plots. Overall, this study found that LiDAR tended to underestimate maximum tree height and volume. The maximum tree height and volume models had R² values of 86.9% and 72.1%, respectively. The accuracy of volume prediction was also sensitive to the plot type used. While it was difficult to develop models with a high R², due to the complexities of Maine’s forest structures and species composition, the results suggest that low-density LiDAR can be used as a supporting tool in forest management for this region.
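
    A hedged sketch of the random forest calibration step, with typical LiDAR height metrics as predictors; the metric names and plot values are invented, not the study's variable set:

        # Random forest regression of stand volume on plot-level LiDAR metrics.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        n = 120                                        # field calibration plots
        X = np.column_stack([rng.uniform(5, 30, n),    # 95th percentile height (m)
                             rng.uniform(0.2, 0.9, n), # canopy cover fraction
                             rng.uniform(0, 5, n)])    # height standard deviation (m)
        volume = 8 * X[:, 0] * X[:, 1] + rng.normal(0, 20, n)  # m^3/ha, synthetic

        rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, volume)
        print("in-sample R^2 (use cross-validation in practice):",
              round(rf.score(X, volume), 2))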

  19. Aerodynamic Models for the Low Density Supersonic Decelerator (LDSD) Test Vehicles

    Science.gov (United States)

    Van Norman, John W.; Dyakonov, Artem; Schoenenberger, Mark; Davis, Jody; Muppidi, Suman; Tang, Chun; Bose, Deepak; Mobley, Brandon; Clark, Ian

    2016-01-01

    An overview of aerodynamic models for the Low Density Supersonic Decelerator (LDSD) Supersonic Flight Dynamics Test (SFDT) campaign test vehicle is presented, with comparisons to reconstructed flight data and discussion of model updates. The SFDT campaign objective is to test Supersonic Inflatable Aerodynamic Decelerator (SIAD) and large supersonic parachute technologies at high altitude Earth conditions relevant to entry, descent, and landing (EDL) at Mars. Nominal SIAD test conditions are attained by lifting a test vehicle (TV) to 36 km altitude with a helium balloon, then accelerating the TV to Mach 4 and 53 km altitude with a solid rocket motor. Test flights conducted in June of 2014 (SFDT-1) and 2015 (SFDT-2) each successfully delivered a 6 meter diameter decelerator (SIAD-R) to test conditions and several seconds of flight, and were successful in demonstrating the SFDT flight system concept and SIAD-R technology. Aerodynamic models and uncertainties developed for the SFDT campaign are presented, including the methods used to generate them and their implementation within an aerodynamic database (ADB) routine for flight simulations. Pre- and post-flight aerodynamic models are compared against reconstructed flight data and model changes based upon knowledge gained from the flights are discussed. The pre-flight powered phase model is shown to have a significant contribution to off-nominal SFDT trajectory lofting, while coast and SIAD phase models behaved much as predicted.

  20. Density heterogeneity of the North American upper mantle from satellite gravity and a regional crustal model

    DEFF Research Database (Denmark)

    Herceg, Matija; Artemieva, Irina; Thybo, Hans

    2014-01-01

    We present a regional model for the density structure of the North American upper mantle. The residual mantle gravity anomalies are based on gravity data derived from the GOCE geopotential models, with the crustal correction to the gravity field being calculated from a regional crustal model. We analyze how uncertainties and errors in the crustal model propagate from crustal densities to mantle residual gravity anomalies and the density model of the upper mantle. Uncertainties in the residual upper (lithospheric) mantle gravity anomalies result from several sources: (i) uncertainties in the velocity-density conversion, and (ii) uncertainties in the crustal structure, which we assess by introducing variations into the crustal structure corresponding to the uncertainty of its resolution by high-quality and low-quality seismic models. We examine the propagation of these uncertainties into determinations of lithospheric mantle density, given a relatively small range of expected density …

  1. Employing Predictive Spatial Models to Inform Conservation Planning for Seabirds in the Labrador Sea

    Directory of Open Access Journals (Sweden)

    David A. Fifield

    2017-05-01

    Full Text Available Seabirds are vulnerable to incidental harm from human activities in the ocean, and knowledge of their seasonal distribution is required to assess risk and effectively inform marine conservation planning. Significant hydrocarbon discoveries and exploration licenses in the Labrador Sea underscore the need for quantitative information on seabird seasonal distribution and abundance, as this region is known to provide important habitat for seabirds year-round. We explore the utility of density surface modeling (DSM) to improve the seabird information available for regional conservation and management decision making. We (1) develop seasonal density surface models for seabirds in the Labrador Sea using data from vessel-based surveys (2006–2014; 13,783 linear km of surveys), (2) present measures of uncertainty in model predictions, (3) discuss how density surface models can inform conservation and management decision making, and (4) explore challenges and potential pitfalls associated with using these modeling procedures. Models predicted large areas of high seabird density in fall over continental shelf waters (max. ~80 birds·km−2), driven largely by the southward migration of murres (Uria spp.) and dovekies (Alle alle) from Arctic breeding colonies. The continental shelf break was also highlighted as an important habitat feature, with predictions of high seabird densities particularly during summer (max. ~70 birds·km−2). Notable concentrations of seabirds overlapped with several significant hydrocarbon discoveries on the continental shelf and large areas in the vicinity of the southern shelf break, which are in the early stages of exploration. Some, but not all, areas of high seabird density were within current Ecologically and Biologically Significant Area (EBSA) boundaries. Building predictive spatial models required knowledge of Distance Sampling and GAMs, and significant investments of time and computational power—resource needs that are becoming more

  2. Can we use the Jackson and Pollock equations to predict body density/fat of obese individuals in the 21st century?

    Science.gov (United States)

    Nevill, A M; Metsios, G S; Jackson, A S; Wang, J; Thornton, J; Gallagher, D

    2008-09-02

    OBJECTIVE: Jackson and Pollock's (JP) ground-breaking research reporting generalized body density equations to estimate body fat was carried out in the late 1970s. Since then we have experienced an 'obesity epidemic'. Our aim was to examine whether the original quadratic equations established by Jackson and co-workers are valid in the 21st century. METHODS: Reanalyzing the original JP data, we propose an alternative, more biologically sound exponential power-function model for body density that declines monotonically, and hence predicts body fat to rise monotonically, with increasing skinfold thicknesses. The model also remains positive irrespective of the subjects' sum-of-skinfold thicknesses or age. RESULTS: Compared to the original quadratic model proposed by JP, our alternative exponential power-function model is theoretically and empirically more accurate when predicting body fat of obese subjects (sums of skinfolds >120 mm). A cross-validation study on 14 obese subjects confirmed these observations: the JP quadratic equations underestimated body fat predicted using dual-energy X-ray absorptiometry (DXA) by 2.1%, whereas our exponential power-function model was found to underestimate body fat by less than 1.0%. Otherwise, the agreement between the DXA fat (%) and the two models was found to be almost identical, with both coefficients of variation being 10.2%. CONCLUSIONS: Caution should be exercised when predicting body fat using the JP quadratic equations for subjects with sums of skinfolds >120 mm. For these subjects, we recommend estimating body fat using the tables reported in the present manuscript, based on the more biologically sound and empirically valid exponential power-function model.
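
    To make the competing model forms concrete, the sketch below contrasts the widely cited JP 3-site quadratic for men (with Siri's equation) against a generic exponential power-function of the form D = a − b·S^c; the a, b, c values are invented for illustration and are not the coefficients fitted in this study:

        # Body density and %fat from two model families; S = sum of 3 skinfolds (mm).
        def jp3_density_men(S, age):
            # Commonly cited JP 3-site (chest, abdomen, thigh) equation for men.
            return 1.10938 - 0.0008267 * S + 0.0000016 * S**2 - 0.0002574 * age

        def expo_density(S, a=1.12, b=0.00043, c=1.08):
            # Hypothetical parameters; declines monotonically with S.
            return a - b * S**c

        def siri_fat_pct(density):
            return 495.0 / density - 450.0

        for S in (60, 120, 180):   # note the divergence at large skinfold sums
            print(S, round(siri_fat_pct(jp3_density_men(S, 40)), 1),
                  round(siri_fat_pct(expo_density(S)), 1))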

  3. Predictive uncertainty analysis of a saltwater intrusion model using null-space Monte Carlo

    DEFF Research Database (Denmark)

    Herckenrath, Daan; Langevin, Christian D.; Doherty, John

    2011-01-01

    Because of the extensive computational burden and perhaps a lack of awareness of existing methods, rigorous uncertainty analyses are rarely conducted for variable-density flow and transport models. For this reason, a recently developed null-space Monte Carlo (NSMC) method for quantifying prediction...

  4. Analysis of the IMAGE RPI electron density data and CHAMP plasmasphere electron density reconstructions with focus on plasmasphere modelling

    Science.gov (United States)

    Gerzen, T.; Feltens, J.; Jakowski, N.; Galkin, I.; Reinisch, B.; Zandbergen, R.

    2016-09-01

    The electron density of the topside ionosphere and the plasmasphere contributes essentially to the overall Total Electron Content (TEC) budget affecting Global Navigation Satellite Systems (GNSS) signals. The plasmasphere can cause half or even more of the GNSS range error budget due to ionospheric propagation errors. This paper presents a comparative study of different plasmasphere and topside ionosphere data aiming at establishing an appropriate database for plasmasphere modelling. We analyze electron density profiles along the geomagnetic field lines derived from the Imager for Magnetopause-to-Aurora Global Exploration (IMAGE) satellite/Radio Plasma Imager (RPI) records of remote plasma sounding with radio waves. We compare these RPI profiles with 2D reconstructions of the topside ionosphere and plasmasphere electron density derived from GNSS based TEC measurements onboard the Challenging Minisatellite Payload (CHAMP) satellite. Most of the coincidences between IMAGE profiles and CHAMP reconstructions are detected in the region with L-shell between 2 and 5. In general the CHAMP reconstructed electron densities are below the IMAGE profile densities, with median of the CHAMP minus IMAGE residuals around -588 cm−3. Additionally, a comparison is made with electron densities derived from passive radio wave RPI measurements onboard the IMAGE satellite. Over the available 2001-2005 period of IMAGE measurements, the considered combined data from the active and passive RPI operations cover the region within a latitude range of ±60°N, all longitudes, and an L-shell ranging from 1.2 to 15. In the coincidence regions (mainly 2 ⩽ L ⩽ 4), we check the agreement between available active and passive RPI data. The comparison shows that the measurements are well correlated, with a median residual of ∼52 cm−3. The RMS and STD values of the relative residuals are around 22% and 21% respectively. In summary, the results encourage the application of IMAGE RPI data for

  5. Model for predicting mountain wave field uncertainties

    Science.gov (United States)

    Damiens, Florentin; Lott, François; Millet, Christophe; Plougonven, Riwal

    2017-04-01

    Studying the propagation of acoustic waves through the troposphere requires knowledge of wind speed and temperature gradients from the ground up to about 10-20 km. Typical planetary boundary layer flows are known to present vertical low level shears that can interact with mountain waves, thereby triggering small-scale disturbances. Resolving these fluctuations for long-range propagation problems is, however, not feasible because of computer memory/time restrictions and thus, they need to be parameterized. When the disturbances are small enough, these fluctuations can be described by linear equations. Previous works by co-authors have shown that the critical layer dynamics that occur near the ground produce large horizontal flows and buoyancy disturbances that result in intense downslope winds and gravity wave breaking. While these phenomena manifest almost systematically for high Richardson numbers and when the boundary layer depth is relatively small compared to the mountain height, the process by which static stability affects downslope winds remains unclear. In the present work, new linear mountain gravity wave solutions are tested against numerical predictions obtained with the Weather Research and Forecasting (WRF) model. For Richardson numbers typically larger than unity, the mesoscale model is used to quantify the effect of neglected nonlinear terms on downslope winds and mountain wave patterns. At these regimes, the large downslope winds transport warm air, a so-called "Foehn" effect that can impact sound propagation properties. The sensitivity of small-scale disturbances to Richardson number is quantified using two-dimensional spectral analysis. It is shown through a pilot study of subgrid scale fluctuations of boundary layer flows over realistic mountains that the cross-spectrum of the mountain wave field is made up of the same components found in WRF simulations. The impact of each individual component on acoustic wave propagation is discussed in terms of

  6. Methods for estimating population density in data-limited areas: evaluating regression and tree-based models in Peru.

    Science.gov (United States)

    Anderson, Weston; Guikema, Seth; Zaitchik, Ben; Pan, William

    2014-01-01

    Obtaining accurate small area estimates of population is essential for policy and health planning but is often difficult in countries with limited data. In lieu of available population data, small area estimate models draw information from previous time periods or from similar areas. This study focuses on model-based methods for estimating population when no direct samples are available in the area of interest. To explore the efficacy of tree-based models for estimating population density, we compare six different model structures including Random Forest and Bayesian Additive Regression Trees. Results demonstrate that without information from prior time periods, non-parametric tree-based models produced more accurate predictions than did conventional regression methods. Improving estimates of population density in non-sampled areas is important for regions with incomplete census data and has implications for economic, health and development policies.

  7. Ligand field density functional theory for the prediction of future domestic lighting.

    Science.gov (United States)

    Ramanantoanina, Harry; Urland, Werner; García-Fuente, Amador; Cimpoesu, Fanica; Daul, Claude

    2014-07-28

    We deal with the computational determination of the electronic structure and properties of lanthanide ions in complexes and extended structures having open-shell f and d configurations. Particularly, we present conceptual and methodological issues based on Density Functional Theory (DFT) enabling the reliable calculation and description of the f → d transitions in lanthanide doped phosphors. We consider here the optical properties of the Pr³⁺ ion embedded into various solid state fluoride host lattices, for the prospection and understanding of the so-called quantum cutting process, being important in the further quest for a warm-white light source in light emitting diodes (LED). We use the conceptual formulation of the revisited ligand field (LF) theory, fully compatibilized with the quantum chemistry tools: LFDFT. We present methodological advances for the calculations of the Slater-Condon parameters, the ligand field interaction and the spin-orbit coupling constants, important in the non-empirical parameterization of the effective Hamiltonian adjusted from the ligand field theory. The model follows a simple procedure using less sophisticated computational tools, which is intended to contribute to the design of modern phosphors and to help to complement the understanding of the 4fⁿ → 4fⁿ⁻¹5d¹ transitions in any lanthanide system.

  8. Predicting carnivore occurrence with noninvasive surveys and occupancy modeling

    Science.gov (United States)

    Long, Robert A.; Donovan, Therese M.; MacKay, Paula; Zielinski, William J.; Buzas, Jeffrey S.

    2011-01-01

    Terrestrial carnivores typically have large home ranges and exist at low population densities, thus presenting challenges to wildlife researchers. We employed multiple, noninvasive survey methods—scat detection dogs, remote cameras, and hair snares—to collect detection–nondetection data for elusive American black bears (Ursus americanus), fishers (Martes pennanti), and bobcats (Lynx rufus) throughout the rugged Vermont landscape. We analyzed these data using occupancy modeling that explicitly incorporated detectability as well as habitat and landscape variables. For black bears, percentage of forested land within 5 km of survey sites was an important positive predictor of occupancy, and percentage of human developed land within 5 km was a negative predictor. Although the relationship was less clear for bobcats, occupancy appeared positively related to the percentage of both mixed forest and forested wetland habitat within 1 km of survey sites. The relationship between specific covariates and fisher occupancy was unclear, with no specific habitat or landscape variables directly related to occupancy. For all species, we used model averaging to predict occurrence across the study area. Receiver operating characteristic (ROC) analyses of our black bear and fisher models suggested that occupancy modeling efforts with data from noninvasive surveys could be useful for carnivore conservation and management, as they provide insights into habitat use at the regional and landscape scale without requiring capture or direct observation of study species.
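
    A minimal sketch of the single-season occupancy likelihood underlying this approach, with constant occupancy (ψ) and detection (p) probabilities and toy detection histories; the habitat covariates used in the paper are omitted:

        # Maximum-likelihood fit of a constant-psi, constant-p occupancy model.
        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import expit

        Y = np.array([[1, 0, 1], [0, 0, 0], [0, 1, 0],
                      [0, 0, 0], [1, 1, 1]])          # sites x visits (toy data)

        def negloglik(theta):
            psi, p = expit(theta)                     # keep parameters in (0, 1)
            det, K = Y.sum(1), Y.shape[1]
            L = np.where(det > 0,
                         psi * p**det * (1 - p)**(K - det),   # detected at least once
                         psi * (1 - p)**K + (1 - psi))        # never detected
            return -np.log(L).sum()

        fit = minimize(negloglik, x0=[0.0, 0.0])
        print("psi, p =", expit(fit.x).round(2))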

  9. Electromagnetic Model Reliably Predicts Radar Scattering Characteristics of Airborne Organisms

    Science.gov (United States)

    Mirkovic, Djordje; Stepanian, Phillip M.; Kelly, Jeffrey F.; Chilson, Phillip B.

    2016-10-01

    The radar scattering characteristics of aerial animals are typically obtained from controlled laboratory measurements of a freshly harvested specimen. These measurements are tedious to perform, difficult to replicate, and typically yield only a small subset of the full azimuthal, elevational, and polarimetric radio scattering data. As an alternative, biological applications of radar often assume that the radar cross sections of flying animals are isotropic, since sophisticated computer models are required to estimate the 3D scattering properties of objects having complex shapes. Using the method of moments implemented in the WIPL-D software package, we show for the first time that such electromagnetic modeling techniques (typically applied to man-made objects) can accurately predict organismal radio scattering characteristics from an anatomical model: here the Brazilian free-tailed bat (Tadarida brasiliensis). The simulated scattering properties of the bat agree with controlled measurements and radar observations made during a field study of bats in flight. This numerical technique can produce the full angular set of quantitative polarimetric scattering characteristics, while eliminating many practical difficulties associated with physical measurements. Such a modeling framework can be applied for bird, bat, and insect species, and will help drive a shift in radar biology from a largely qualitative and phenomenological science toward quantitative estimation of animal densities and taxonomic identification.

  10. Model Predictive Control for an Industrial SAG Mill

    DEFF Research Database (Denmark)

    Ohan, Valeriu; Steinke, Florian; Metzger, Michael

    2012-01-01

    We discuss Model Predictive Control (MPC) based on ARX models and a simple lower order disturbance model. The advantage of this MPC formulation is that it has few tuning parameters and is based on an ARX prediction model that can readily be identified using standard technologies from system identification…
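
    A sketch of ARX identification by least squares and one-step-ahead prediction, the kind of prediction model such an MPC is built on; the data come from a synthetic first-order system, not mill measurements:

        # Identify an ARX(1,1) model y[k] = a*y[k-1] + b*u[k-1] from input/output data.
        import numpy as np

        rng = np.random.default_rng(0)
        N = 200
        u = rng.standard_normal(N)
        y = np.zeros(N)
        for k in range(1, N):   # "true" system with a little noise
            y[k] = 0.8 * y[k - 1] + 0.5 * u[k - 1] + 0.05 * rng.standard_normal()

        Phi = np.column_stack([y[:-1], u[:-1]])       # regressor matrix
        a, b = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]
        y_pred = a * y[:-1] + b * u[:-1]              # one-step-ahead predictions
        print("estimated a, b:", round(a, 3), round(b, 3))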

  11. Uncertainties in spatially aggregated predictions from a logistic regression model

    NARCIS (Netherlands)

    Horssen, P.W. van; Pebesma, E.J.; Schot, P.P.

    2002-01-01

    This paper presents a method to assess the uncertainty of an ecological spatial prediction model which is based on logistic regression models, using data from the interpolation of explanatory predictor variables. The spatial predictions are presented as approximate 95% prediction intervals. The

  12. Dealing with missing predictor values when applying clinical prediction models.

    NARCIS (Netherlands)

    Janssen, K.J.; Vergouwe, Y.; Donders, A.R.T.; Harrell Jr, F.E.; Chen, Q.; Grobbee, D.E.; Moons, K.G.

    2009-01-01

    BACKGROUND: Prediction models combine patient characteristics and test results to predict the presence of a disease or the occurrence of an event in the future. In the event that test results (predictor) are unavailable, a strategy is needed to help users applying a prediction model to deal with

  13. Assessing climate model software quality: a defect density analysis of three models

    Directory of Open Access Journals (Sweden)

    J. Pipitone

    2012-08-01

    Full Text Available A climate model is an executable theory of the climate; the model encapsulates climatological theories in software so that they can be simulated and their implications investigated. Thus, in order to trust a climate model, one must trust that the software it is built from is built correctly. Our study explores the nature of software quality in the context of climate modelling. We performed an analysis of defect reports and defect fixes in several versions of leading global climate models by collecting defect data from bug tracking systems and version control repository comments. We found that the climate models all have very low defect densities compared to well-known, similarly sized open-source projects. We discuss the implications of our findings for the assessment of climate model software trustworthiness.

  14. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

    Full Text Available We propose a weather prediction model in this article based on a neural network and fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the “fuzzy rule-based neural network”, which simulates sequential relations among fuzzy sets using an artificial neural network; and the second part is the “neural fuzzy inference system”, which is based on the first part, but can learn new fuzzy rules from the previous ones according to the algorithm we proposed. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. The need for accurate weather prediction is apparent when considering its benefits. However, the excessive pursuit of accuracy in weather prediction makes some of the “accurate” prediction results meaningless, and numerical prediction models are often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we make the predicted outcomes of precipitation more accurate and the prediction methods simpler than the complex numerical forecasting models that occupy large computation resources, are time-consuming, and have a low predictive accuracy rate. Accordingly, we achieve more accurate predictive precipitation results than by using traditional artificial neural networks that have low predictive accuracy.

  15. Plant physiological models of heat, water and photoinhibition stress for climate change modelling and agricultural prediction

    Science.gov (United States)

    Nicolas, B.; Gilbert, M. E.; Paw U, K. T.

    2015-12-01

    Soil-Vegetation-Atmosphere Transfer (SVAT) models are based upon well understood steady state photosynthetic physiology - the Farquhar-von Caemmerer-Berry model (FvCB). However, representations of physiological stress and damage have not been successfully integrated into SVAT models. Generally, it has been assumed that plants will strive to conserve water at higher temperatures by reducing stomatal conductance or adjusting osmotic balance, until potentially damaging temperatures and the need for evaporative cooling become more important than water conservation. A key point is that damage is the result of combined stresses: drought leads to stomatal closure, less evaporative cooling, high leaf temperature, less photosynthetic dissipation of absorbed energy, all coupled with high light (photosynthetic photon flux density; PPFD). This leads to excess absorbed energy in Photosystem II (PSII) and results in photoinhibition and damage, neither of which is included in SVAT models. Current representations of photoinhibition treat it as a function of PPFD, not as a function of photosynthesis constrained by heat or water. Thus, it seems unlikely that current models can predict responses of vegetation to climate variability and change. We propose a dynamic model of damage to Rubisco and RuBP-regeneration that accounts, mechanistically, for the interactions between high temperature, light, and constrained photosynthesis under drought. Further, these predictions are illustrated by key experiments allowing model validation. We also integrated this new framework within the Advanced Canopy-Atmosphere-Soil Algorithm (ACASA). Preliminary results show that our approach can be used to predict reasonable photosynthetic dynamics. For instance, a leaf undergoing one day of drought stress will quickly decrease its maximum quantum yield of PSII (Fv/Fm), but will not recover to unstressed levels for several days. Consequently, the cumulative effect of photoinhibition on photosynthesis can cause

  16. Calibration plots for risk prediction models in the presence of competing risks

    DEFF Research Database (Denmark)

    Gerds, Thomas A; Andersen, Per K; Kattan, Michael W

    2014-01-01

    A predicted risk of 17% can be called reliable if it can be expected that the event will occur to about 17 of 100 patients who all received a predicted risk of 17%. Statistical models can predict the absolute risk of an event such as cardiovascular death in the presence of competing risks such as death due to other causes. For personalized medicine and patient counseling, it is necessary to check that the model is calibrated in the sense that it provides reliable predictions for all subjects. There are three often encountered practical problems when the aim is to display or test if a risk prediction model is well calibrated. The first is lack of independent validation data, the second is right censoring, and the third is that when the risk scale is continuous, the estimation problem is as difficult as density estimation. To deal with these problems, we propose to estimate calibration curves…
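
    A simple binned calibration curve illustrates the basic idea; note that the estimator proposed in the paper additionally handles right censoring and competing risks, which this sketch ignores:

        # Group subjects by predicted risk and compare with observed event fractions.
        import numpy as np

        rng = np.random.default_rng(0)
        pred = rng.uniform(0, 1, 1000)            # predicted absolute risks
        event = rng.uniform(size=1000) < pred     # outcomes of a calibrated toy model

        bins = np.quantile(pred, np.linspace(0, 1, 11))
        idx = np.digitize(pred, bins[1:-1])       # ten risk groups
        for g in range(10):
            m = idx == g
            print(f"mean predicted {pred[m].mean():.2f}  observed {event[m].mean():.2f}")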

  17. Foundation Settlement Prediction Based on a Novel NGM Model

    Directory of Open Access Journals (Sweden)

    Peng-Yu Chen

    2014-01-01

    Full Text Available Prediction of foundation or subgrade settlement is very important during engineering construction. Given that there are many settlement-time sequences with a nonhomogeneous index trend, a novel grey forecasting model called the NGM(1,1,k,c) model is proposed in this paper. With an optimized whitenization differential equation, the proposed NGM(1,1,k,c) model has the property of white exponential law coincidence and can predict a pure nonhomogeneous index sequence precisely. We used two case studies to verify the predictive effect of the NGM(1,1,k,c) model for settlement prediction. The results show that this model can achieve excellent prediction accuracy; thus, the model is quite suitable for simulation and prediction of approximately nonhomogeneous index sequences and has excellent application value in settlement prediction.
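
    For orientation, the classical GM(1,1) grey model can be sketched as follows; the paper's NGM(1,1,k,c) extends this with extra terms for nonhomogeneous index trends, which are omitted here, and the settlement series is invented:

        # GM(1,1): fit a, b from the accumulated series and forecast one step ahead.
        import numpy as np

        x0 = np.array([2.8, 3.2, 3.9, 4.7, 5.8])   # settlement increments (cm), invented
        x1 = np.cumsum(x0)
        z1 = 0.5 * (x1[1:] + x1[:-1])              # background values
        B = np.column_stack([-z1, np.ones_like(z1)])
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]

        def x0_hat(k):   # restored value for index k >= 1 (0-based origin)
            return (x0[0] - b / a) * (1 - np.exp(a)) * np.exp(-a * k)

        print("one-step-ahead forecast:", round(float(x0_hat(len(x0))), 2))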

  18. Predictive capabilities of various constitutive models for arterial tissue.

    Science.gov (United States)

    Schroeder, Florian; Polzer, Stanislav; Slažanský, Martin; Man, Vojtěch; Skácel, Pavel

    2018-02-01

    The aim of this study is to validate some constitutive models by assessing their capabilities in describing and predicting uniaxial and biaxial behavior of porcine aortic tissue. 14 samples from porcine aortas were used to perform 2 uniaxial and 5 biaxial tensile tests. Transversal strains were furthermore stored for uniaxial data. The experimental data were fitted by four constitutive models: the Holzapfel-Gasser-Ogden model (HGO), a model based on the generalized structure tensor (GST), the Four-Fiber-Family model (FFF) and the Microfiber model. Fitting was performed to uniaxial and biaxial data sets separately and the descriptive capabilities of the models were compared. Their predictive capabilities were assessed in two ways. First, each model was fitted to biaxial data and its accuracy (in terms of R² and NRMSE) in predicting both uniaxial responses was evaluated. Then this procedure was performed conversely: each model was fitted to both uniaxial tests and its accuracy in predicting the 5 biaxial responses was observed. Descriptive capabilities of all models were excellent. In predicting uniaxial response from biaxial data, the Microfiber model was the most accurate, while the other models also showed reasonable accuracy. The Microfiber and FFF models were capable of reasonably predicting biaxial responses from uniaxial data, while the HGO and GST models failed completely in this task. The HGO and GST models are not capable of predicting biaxial arterial wall behavior, while the FFF model is the most robust of the investigated constitutive models. Knowledge of transversal strains in uniaxial tests improves the robustness of constitutive models. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Comparing National Water Model Inundation Predictions with Hydrodynamic Modeling

    Science.gov (United States)

    Egbert, R. J.; Shastry, A.; Aristizabal, F.; Luo, C.

    2017-12-01

    The National Water Model (NWM) simulates the hydrologic cycle and produces streamflow forecasts, runoff, and other variables for 2.7 million reaches along the National Hydrography Dataset for the continental United States. NWM applies Muskingum-Cunge channel routing, which is based on the continuity equation. However, the momentum equation also needs to be considered to obtain better estimates of streamflow and stage in rivers, especially for applications such as flood inundation mapping. The Simulation Program for River NeTworks (SPRNT) is a fully dynamic model for large scale river networks that solves the full nonlinear Saint-Venant equations for 1D flow and stage height in river channel networks with non-uniform bathymetry. For the current work, the steady-state version of the SPRNT model was leveraged. An evaluation of SPRNT's and NWM's abilities to predict inundation was conducted for the record flood of Hurricane Matthew in October 2016 along the Neuse River in North Carolina. This event was known to have been influenced by backwater effects from the Hurricane's storm surge. Retrospective NWM discharge predictions were converted to stage using synthetic rating curves. The stages from both models were utilized to produce flood inundation maps using the Height Above Nearest Drainage (HAND) method, which uses the local relative heights to provide a spatial representation of inundation depths. In order to validate the inundation produced by the models, Sentinel-1A synthetic aperture radar data in the VV and VH polarizations along with auxiliary data were used to produce a reference inundation map. A preliminary, binary comparison of the inundation maps to the reference, limited to the five HUC-12 areas of Goldsboro, NC, yielded flood inundation accuracies of 74.68% for NWM and 78.37% for SPRNT. The differences for all the relevant test statistics including accuracy, true positive rate, true negative rate, and positive predictive value were found
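
    The binary map comparison reduces to a pixelwise confusion matrix against the SAR-derived reference; a sketch with synthetic maps:

        # Accuracy, true positive rate and positive predictive value of a binary map.
        import numpy as np

        rng = np.random.default_rng(0)
        reference = rng.integers(0, 2, size=(100, 100))              # 1 = inundated
        modelled = reference ^ (rng.uniform(size=(100, 100)) < 0.2)  # 20% disagreement

        tp = np.sum((modelled == 1) & (reference == 1))
        tn = np.sum((modelled == 0) & (reference == 0))
        fp = np.sum((modelled == 1) & (reference == 0))
        fn = np.sum((modelled == 0) & (reference == 1))
        print("accuracy:", (tp + tn) / reference.size,
              "TPR:", tp / (tp + fn), "PPV:", tp / (tp + fp))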

  20. Large-strain time-temperature equivalence in high density polyethylene for prediction of extreme deformation and damage

    Directory of Open Access Journals (Sweden)

    Gray G.T.

    2012-08-01

    Full Text Available Time-temperature equivalence is a widely recognized property of many time-dependent material systems, where there is a clear predictive link relating the deformation response at a nominal temperature and a high strain-rate to an equivalent response at a depressed temperature and nominal strain-rate. It has been found that high-density polyethylene (HDPE) obeys a linear empirical formulation relating test temperature and strain-rate. This observation was extended to continuous stress-strain curves, such that material response measured in a load frame at large strains and low strain-rates (at depressed temperatures) could be translated into a temperature-dependent response at high strain-rates and validated against Taylor impact results. Time-temperature equivalence was used in conjunction with jump-rate compression tests to investigate isothermal response at high strain-rate while excluding adiabatic heating. The validated constitutive response was then applied to the analysis of Dynamic-Tensile-Extrusion of HDPE, a tensile analog to Taylor impact developed at LANL. The Dyn-Ten-Ext test results and FEA found that HDPE deformed smoothly after exiting the die, and after substantial drawing appeared to undergo a pressure-dependent shear damage mechanism at intermediate velocities, while it fragmented at high velocities. Dynamic-Tensile-Extrusion, properly coupled with a validated constitutive model, can successfully probe extreme tensile deformation and damage of polymers.

  1. Density functional theory prediction of pKa for carboxylated single-wall carbon nanotubes and graphene

    Science.gov (United States)

    Li, Hao; Fu, Aiping; Xue, Xuyan; Guo, Fengna; Huai, Wenbo; Chu, Tianshu; Wang, Zonghua

    2017-06-01

    Density functional calculations have been performed to investigate the acidities of carboxylated single-wall carbon nanotubes and graphene. The pKa values for different COOH-functionalized models with varying lengths, diameters and chirality of nanotubes and with different edges of graphene were predicted using the SMD/M05-2X/6-31G* method combined with two universal thermodynamic cycles. The effects of the following factors, such as the functionalized position of the carboxyl group and the Stone-Wales and single-vacancy defects, on the acidity of the functionalized nanotube and graphene have also been evaluated. The deprotonated species underwent decarboxylation when the hybridization mode of the carbon atom at the functionalization site changed from sp² to sp³, both for the tube and graphene. Knowledge of the pKa values of the carboxylated nanotube and graphene could be of great help for the understanding of nanocarbon materials in many diverse areas, including environmental protection, catalysis, electrochemistry and biochemistry.
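
    Once a thermodynamic cycle yields the aqueous deprotonation free energy, the pKa follows from the standard relation pKa = ΔG/(RT ln 10); the ΔG value below is invented, not a result of the paper:

        # Convert a deprotonation free energy into a pKa at 298.15 K.
        import math

        R = 8.314462618e-3   # kJ/(mol K)
        T = 298.15           # K
        dG_aq = 25.0         # kJ/mol, hypothetical computed value
        pKa = dG_aq / (R * T * math.log(10))
        print("pKa ~", round(pKa, 2))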

  2. Model-based estimators of density and connectivity to inform conservation of spatially structured populations

    Science.gov (United States)

    Morin, Dana J.; Fuller, Angela K.; Royle, J. Andrew; Sutherland, Chris

    2017-01-01

    Conservation and management of spatially structured populations is challenging because solutions must consider where individuals are located, but also differential individual space use as a result of landscape heterogeneity. A recent extension of spatial capture–recapture (SCR) models, the ecological distance model, uses spatial encounter histories of individuals (e.g., a record of where individuals are detected across space, often sequenced over multiple sampling occasions), to estimate the relationship between space use and characteristics of a landscape, allowing simultaneous estimation of both local densities of individuals across space and connectivity at the scale of individual movement. We developed two model-based estimators derived from the SCR ecological distance model to quantify connectivity over a continuous surface: (1) potential connectivity—a metric of the connectivity of areas based on resistance to individual movement; and (2) density-weighted connectivity (DWC)—potential connectivity weighted by estimated density. Estimates of potential connectivity and DWC can provide spatial representations of areas that are most important for the conservation of threatened species, or management of abundant populations (i.e., areas with high density and landscape connectivity), and thus generate predictions that have great potential to inform conservation and management actions. We used a simulation study with a stationary trap design across a range of landscape resistance scenarios to evaluate how well our model estimates resistance, potential connectivity, and DWC. Correlation between true and estimated potential connectivity was high, and there was positive correlation and high spatial accuracy between estimated DWC and true DWC. We applied our approach to data collected from a population of black bears in New York, and found that forested areas represented low levels of resistance for black bears. We demonstrate that formal inference about measures

  3. Remote sensing and spatial statistical techniques for modelling Ommatissus lybicus (Hemiptera: Tropiduchidae) habitat and population densities

    Directory of Open Access Journals (Sweden)

    Khalifa M. Al-Kindi

    2017-08-01

    Full Text Available In order to understand the distribution and prevalence of Ommatissus lybicus (Hemiptera: Tropiduchidae), as well as to analyse its current biogeographical patterns and predict its future spread, comprehensive and detailed information on environmental, climatic, and agricultural practices is essential. Spatial analytical techniques, such as remote sensing and spatial statistics tools, can help detect and model spatial links and correlations between the presence, absence and density of O. lybicus in response to climatic, environmental, and human factors. The main objective of this paper is to review remote sensing and relevant analytical techniques that can be applied in mapping and modelling the habitat and population density of O. lybicus. An exhaustive search of related literature revealed that there are very limited studies linking location-based infestation levels of pests like O. lybicus with climatic, environmental, and human-practice-related variables. This review also highlights the accumulated knowledge and addresses the gaps in this area of research. Furthermore, it makes recommendations for future studies, and gives suggestions on monitoring and surveillance methods in designing both local and regional integrated pest management strategies for palm trees and other affected cultivated crops.

  4. Predictive models for moving contact line flows

    Science.gov (United States)

    Rame, Enrique; Garoff, Stephen

    2003-01-01

    Modeling flows with moving contact lines poses the formidable challenge that the usual assumptions of Newtonian fluid and no-slip condition give rise to a well-known singularity. This singularity prevents one from satisfying the contact angle condition to compute the shape of the fluid-fluid interface, a crucial calculation without which design parameters such as the pressure drop needed to move an immiscible 2-fluid system through a solid matrix cannot be evaluated. Some progress has been made for low Capillary number spreading flows. Combining experimental measurements of fluid-fluid interfaces very near the moving contact line with an analytical expression for the interface shape, we can determine a parameter that forms a boundary condition for the macroscopic interface shape when Ca is much less than 1. This parameter, which plays the role of an "apparent" or macroscopic dynamic contact angle, is shown by the theory to depend on the system geometry through the macroscopic length scale. This theoretically established dependence on geometry allows this parameter to be "transferable" from the geometry of the measurement to any other geometry involving the same material system. Unfortunately this prediction of the theory cannot be tested on Earth.

  5. Developmental prediction model for early alcohol initiation in Dutch adolescents

    NARCIS (Netherlands)

    Geels, L.M.; Vink, J.M.; Beijsterveldt, C.E.M. van; Bartels, M.; Boomsma, D.I.

    2013-01-01

    Objective: Multiple factors predict early alcohol initiation in teenagers. Among these are genetic risk factors, childhood behavioral problems, life events, lifestyle, and family environment. We constructed a developmental prediction model for alcohol initiation below the Dutch legal drinking age

  6. Population Density Modeling for Diverse Land Use Classes: Creating a National Dasymetric Worker Population Model

    Science.gov (United States)

    Trombley, N.; Weber, E.; Moehl, J.

    2017-12-01

    Many studies invoke dasymetric mapping to make more accurate depictions of population distribution by spatially restricting populations to inhabited/inhabitable portions of observational units (e.g., census blocks) and/or by varying population density among different land classes. LandScan USA uses this approach by restricting particular population components (such as residents or workers) to building area detected from remotely sensed imagery, but also goes a step further by classifying each cell of building area in accordance with ancillary land use information from national parcel data (CoreLogic, Inc.'s ParcelPoint database). Modeling population density according to land use is critical. For instance, office buildings would have a higher density of workers than warehouses even though the latter would likely have more cells of detection. This paper presents a modeling approach by which different land uses are assigned different densities to more accurately distribute populations within them. For parts of the country where the parcel data is insufficient, an alternate methodology is developed that uses National Land Cover Database (NLCD) data to define the land use type of building detection. Furthermore, LiDAR data is incorporated for many of the largest cities across the US, allowing the independent variables to be updated from two-dimensional building detection area to total building floor space. In the end, four different regression models are created to explain the effect of different land uses on worker distribution: (1) a two-dimensional model using land use types from the parcel data; (2) a three-dimensional model using land use types from the parcel data; (3) a two-dimensional model using land use types from the NLCD data; and (4) a three-dimensional model using land use types from the NLCD data. By and large, the resultant coefficients followed intuition, but importantly allow the relationships between different land uses to be quantified. For instance, in the model
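
    A toy version of the allocation step, in Python: workers observed for a block are spread over its building cells in proportion to a land-use-specific density coefficient times each cell's building area (or floor space, in the three-dimensional variant). The coefficients, cells, and block total below are invented for illustration.

        import numpy as np

        # Hypothetical regression coefficients: relative worker density by land use.
        coef = {"office": 5.0, "retail": 2.0, "warehouse": 0.5}

        # Building-detection cells in one block: (land_use, floor_space_m2).
        cells = [("office", 1200.0), ("office", 800.0),
                 ("retail", 600.0), ("warehouse", 2500.0)]

        block_workers = 400  # observed worker total for the block

        weights = np.array([coef[lu] * area for lu, area in cells])
        allocated = block_workers * weights / weights.sum()

        for (lu, area), w in zip(cells, allocated):
            print(f"{lu:9s} {area:7.0f} m^2 -> {w:6.1f} workers")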

  7. Hadron masses at finite density from the Zimanyi-Moskowski model

    International Nuclear Information System (INIS)

    Bhattacharyya, A.; Raha, S.

    1996-01-01

    The density dependence of hadron masses has been calculated from different versions of the Zimanyi-Moskowski (ZM) model and the results have been compared with the Walecka model. The ZM model has been extended to include pions. The meson masses have been calculated self-consistently in the random phase approximation. The σ, ω, and π masses are found to increase with density, as in the Walecka model; interestingly, the abnormal increase of the pion mass with density found in the Walecka model can be avoided in the ZM models. copyright 1996 The American Physical Society

  8. Seasonal predictability of Kiremt rainfall in coupled general circulation models

    Science.gov (United States)

    Gleixner, Stephanie; Keenlyside, Noel S.; Demissie, Teferi D.; Counillon, François; Wang, Yiguo; Viste, Ellen

    2017-11-01

    The Ethiopian economy and population is strongly dependent on rainfall. Operational seasonal predictions for the main rainy season (Kiremt, June-September) are based on statistical approaches with Pacific sea surface temperatures (SST) as the main predictor. Here we analyse dynamical predictions from 11 coupled general circulation models for the Kiremt seasons from 1985-2005 with the forecasts starting from the beginning of May. We find skillful predictions from three of the 11 models, but no model beats a simple linear prediction model based on the predicted Niño3.4 indices. The skill of the individual models for dynamically predicting Kiremt rainfall depends on the strength of the teleconnection between Kiremt rainfall and concurrent Pacific SST in the models. Models that do not simulate this teleconnection fail to capture the observed relationship between Kiremt rainfall and the large-scale Walker circulation.
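
    The simple benchmark described, a linear prediction of seasonal rainfall from a predicted Niño3.4 index, amounts to ordinary least squares on two short time series. A sketch with synthetic numbers standing in for the 1985-2005 hindcasts:

        import numpy as np

        rng = np.random.default_rng(1)
        years = np.arange(1985, 2006)
        nino34_forecast = rng.normal(0.0, 1.0, size=years.size)  # predicted May-start index
        kiremt_rain = -0.6 * nino34_forecast + rng.normal(0.0, 0.8, size=years.size)

        # Leave-one-out hindcast with a one-predictor linear model.
        preds = np.empty_like(kiremt_rain)
        for i in range(years.size):
            mask = np.arange(years.size) != i
            b, a = np.polyfit(nino34_forecast[mask], kiremt_rain[mask], 1)
            preds[i] = a + b * nino34_forecast[i]

        skill = np.corrcoef(preds, kiremt_rain)[0, 1]
        print(f"anomaly correlation of the linear benchmark: {skill:.2f}")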

  9. Mathematical modelling and simulation of variable-density fluidized bed reactors with generalised nonlinear kinetics

    Science.gov (United States)

    Moradi Tafreshi, Zahra

    1999-10-01

    The fluidized bed reactor is widely used in the chemical, petroleum and biological processing industries for a variety of operations. Due to the complex fluidodynamics, conventional designs are often based on the assumption of constant reaction volume and first-order kinetics. Most industrial catalytic reactions, however, occur in a variable-density environment and follow nonmonotone kinetics. This thesis deals with those complexities. Two complex models, namely 2-phase and 3-phase models, were employed for the prediction of reactor performance. Four general types of reversible reactions with nonlinear power rate law kinetics were considered and the influence of the density parameter, ε, and reaction orders on reactor behaviour was explored for each type. Computer programs, written in Matlab, were provided for each type of reaction. The simulation results of both models showed that the reaction density parameter has a significant effect on both fluidodynamic characteristics and reaction conversion. Generally, in all types higher values of fluidodynamic variables were obtained when ε ≥ 0. Reaction conversion, however, dropped as the expansion factor increased. This trend, which was more pronounced for reaction orders higher than unity, has been attributed to the "membranous effect" of the bubble-emulsion interface that permits a continuous supply of fresh reactants from the bubble phase into the emulsion phase in contracting gas systems. In expanding reaction systems, however, the extra moles caused an increase in bubble size and velocity, which reduced the chances of good contact between the two phases. This suggests that fluidized operations are probably not optimal or applicable for certain types of reactions. Moreover, the results showed that simple first-order reactions exhibit higher conversions than complex reactions with nonlinear kinetics. The 3-phase model, on the other hand, predicted the possibility of multiple steady states for reactions with a decrease in

  10. MODELLING OF DYNAMIC SPEED LIMITS USING THE MODEL PREDICTIVE CONTROL

    Directory of Open Access Journals (Sweden)

    Andrey Borisovich Nikolaev

    2017-09-01

    Full Text Available The article considers the issues of traffic management using the intelligent “Car-Road” system (IVHS), which consists of interacting intelligent vehicles (IV) and intelligent roadside controllers. Vehicles are organized in convoys with small distances between them. All vehicles are assumed to be fully automated (throttle control, braking, steering). Approaches are proposed for determining speed limits for vehicles on the motorway using model predictive control (MPC). The article proposes an approach to dynamic speed limits that minimizes the downtime of vehicles in traffic.

  11. Forecasting the density of oil futures returns using model-free implied volatility and high-frequency data

    International Nuclear Information System (INIS)

    Ielpo, Florian; Sevi, Benoit

    2013-09-01

    Forecasting the density of returns is useful for many purposes in finance, such as risk management activities, portfolio choice or derivative security pricing. Existing methods to forecast the density of returns either use prices of the asset of interest or option prices on this same asset. The latter method needs to convert the risk-neutral estimate of the density into a physical measure, which is computationally cumbersome. In this paper, we take the view of a practitioner who observes the implied volatility in the form of an index, namely the recent OVX, to forecast the density of oil futures returns for horizons ranging from 1 to 60 days. Using the recent methodology of Maheu and McCurdy (2011) to compute density predictions, we compare the performance of time series models using implied volatility and either daily or intra-daily futures prices. Our results indicate that models based on implied volatility deliver significantly better density forecasts at all horizons, which is in line with numerous studies delivering the same evidence for volatility point forecasts. (authors)
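
    Density forecasts of this kind are commonly ranked by average predictive log-likelihood. A minimal sketch of the scoring step only (normal predictive densities and synthetic returns; this does not reproduce the Maheu-McCurdy models):

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(2)
        returns = rng.normal(0.0, 0.02, size=500)  # synthetic daily returns

        # Two competing one-day-ahead volatility paths (stand-ins for an
        # implied-volatility-based model and a returns-based model).
        sigma_iv = np.full(returns.size, 0.021)  # closer to the truth
        sigma_rb = np.full(returns.size, 0.030)  # biased upward

        print(f"avg log score, IV-based model: {norm.logpdf(returns, scale=sigma_iv).mean():.3f}")
        print(f"avg log score, returns-based model: {norm.logpdf(returns, scale=sigma_rb).mean():.3f}")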

  12. Measurements and IRI Model Predictions During the Recent Solar Minimum

    Science.gov (United States)

    Bilitza, Dieter; Brown, Steven A.; Wang, Mathew Y.; Souza, Jonas R.; Roddy, Patrick A.

    2012-01-01

    Cycle 23 was exceptional in that it lasted almost two years longer than its predecessors and in that it ended in an extended minimum period that proved all predictions wrong. Comparisons of the International Reference Ionosphere (IRI) with CHAMP and GRACE in-situ measurements of electron density during the minimum have revealed significant discrepancies at 400-500 km altitude. Our study investigates the causes for these discrepancies with the help of ionosonde and Planar Langmuir Probe (PLP) data from the Communications/Navigation Outage Forecasting System (C/NOFS) satellite. Our C/NOFS comparisons confirm the earlier CHAMP and GRACE results. But the ionosonde measurements of the F-peak plasma frequency (foF2) show generally good agreement throughout the whole solar cycle. At mid-latitude stations, yearly averages of the data-model difference are within 10%, and at low-latitude stations within 20%. The 60-70% differences found at 400-500 km altitude are not seen at the F peak. We will discuss how these seemingly contradictory results from the ionosonde and in-situ data-model comparisons can be explained and which parameters need to be corrected in the IRI model.

  13. Role of bone mineral density in predicting morphometric vertebral fractures in patients with HIV infection.

    Science.gov (United States)

    Porcelli, T; Gotti, D; Cristiano, A; Maffezzoni, F; Mazziotti, G; Focà, E; Castelli, F; Giustina, A; Quiros-Roldan, E

    2014-09-01

    This study investigated the bone of HIV patients both in terms of quantity and quality. It was found that HIV-infected patients fracture independently of the degree of bone demineralization, as in other forms of secondary osteoporosis. We aimed to determine the prevalence of vertebral fractures (VFs) in HIV patients who were screened by bone mineral density (BMD) and to explore possible factors associated with VFs. This is a cross-sectional study that included HIV-infected patients recruited in the Clinic of Infectious and Tropical Diseases who underwent BMD measurement by dual-energy X-ray absorptiometry (DXA) at the lumbar spine and hip (Lunar Prodigy, GE Healthcare). For the assessment of VFs, anteroposterior and lateral X-ray examinations of the thoracic and lumbar spines were performed and were centrally digitized. Logistic regression models were used in the statistical analysis of factors associated with VFs. One hundred thirty-one consecutive patients with HIV infection (93 M, 38 F, median age 51 years; range, 36-75) underwent BMD measurement: 25.2 % of patients showed normal BMD, while 45 % were osteopenic and 29.7 % osteoporotic. Prevalence of low BMD (osteopenia and osteoporosis) was higher in females as compared to males (90 vs 69 %), with no significant correlation with age and body mass index. VFs occurred more frequently in patients with low BMD as compared to patients with normal BMD (88.5 vs. 11.4 %), but at similar rates in osteopenia and osteoporosis (43 vs. 46 %; p = 0.073). VFs were significantly associated with older age and previous AIDS events. These results suggest that BMD alone does not identify all patients at risk of skeletal fragility, who are therefore good candidates for morphometric evaluation of spine X-rays, in line with other forms of secondary osteoporosis with impaired bone quality.

  14. Voxel-based morphometry predicts shifts in dendritic spine density and morphology with auditory fear conditioning

    Science.gov (United States)

    Keifer Jr, O. P.; Hurt, R. C.; Gutman, D. A.; Keilholz, S. D.; Gourley, S. L.; Ressler, K. J.

    2015-01-01

    Neuroimaging has provided compelling data about the brain. Yet the underlying mechanisms of many neuroimaging techniques have not been elucidated. Here we report a voxel-based morphometry (VBM) study of Thy1-YFP mice following auditory fear conditioning complemented by confocal microscopy analysis of cortical thickness, neuronal morphometric features and nuclei size/density. Significant VBM results included the nuclei of the amygdala, the insula and the auditory cortex. There were no significant VBM changes in a control brain area. Focusing on the auditory cortex, confocal analysis showed that fear conditioning led to a significantly increased density of shorter and wider dendritic spines, while there were no spine differences in the control area. Of all the morphology metrics studied, the spine density was the only one to show significant correlation with the VBM signal. These data demonstrate that learning-induced structural changes detected by VBM may be partially explained by increases in dendritic spine density. PMID:26151911

  15. Analytical thermal modelling of multilayered active embedded chips into high density electronic board

    Directory of Open Access Journals (Sweden)

    Monier-Vinard Eric

    2013-01-01

    Full Text Available The recent Printed Wiring Board embedding technology is an attractive packaging alternative that allows a very high degree of miniaturization by stacking multiple layers of embedded chips. This disruptive technology will further increase the thermal management challenges by concentrating heat dissipation at the heart of the organic substrate structure. In order to allow the electronic designer to analyze, early in the design, the limits of the power dissipation, depending on the embedded chip location inside the board, as well as the thermal interactions with other buried chips or surface-mounted electronic components, an analytical thermal modelling approach was established. The presented work describes the comparison of the analytical model results with the numerical models of various embedded chip configurations. The thermal behaviour predictions of the analytical model, found to be within ±10% relative error, demonstrate its relevance for modelling high density electronic boards. Besides, the approach promotes a practical solution to study the potential gain of conducting a part of the heat flow from the components towards a set of localized cooled board pads.

  16. Asymptotic Behavior of the Stock Price Distribution Density and Implied Volatility in Stochastic Volatility Models

    International Nuclear Information System (INIS)

    Gulisashvili, Archil; Stein, Elias M.

    2010-01-01

    We study the asymptotic behavior of distribution densities arising in stock price models with stochastic volatility. The main objects of our interest in the present paper are the density of time averages of the squared volatility process and the density of the stock price process in the Stein-Stein and the Heston model. We find explicit formulas for leading terms in asymptotic expansions of these densities and give error estimates. As an application of our results, sharp asymptotic formulas for the implied volatility in the Stein-Stein and the Heston model are obtained.

  17. Bioinorganic Chemistry Modeled with the TPSSh Density Functional

    DEFF Research Database (Denmark)

    Kepp, Kasper Planeta

    2008-01-01

    In this work, the TPSSh density functional has been benchmarked against a test set of experimental structures and bond energies for 80 transition-metal-containing diatomics. It is found that the TPSSh functional gives structures of the same quality as other commonly used hybrid and nonhybrid...... functionals such as B3LYP and BP86. TPSSh gives a slope of 0.99 upon linear fitting to experimental bond energies, whereas B3LYP and BP86, representing 20% and 0% exact exchange, respectively, give linear fits with slopes of 0.91 and 1.07. Thus, TPSSh eliminates the large systematic component of the error...... promising density functional for use and further development within the field of bioinorganic chemistry....

  18. Predictability in models of the atmospheric circulation

    NARCIS (Netherlands)

    Houtekamer, P.L.

    1992-01-01

    It will be clear from the above discussions that skill forecasts are still in their infancy. Operational skill predictions do not exist. One is still struggling to prove that skill predictions, at any range, have any quality at all. It is not clear what the statistics of the analysis error

  19. Buckled graphene: A model study based on density functional theory

    KAUST Repository

    Khan, Yasser

    2010-09-01

    We make use of ab initio calculations within density functional theory to investigate the influence of buckling on the electronic structure of single layer graphene. Our systematic study addresses a wide range of bond length and bond angle variations in order to obtain insights into the energy scale associated with the formation of ripples in a graphene sheet. © 2010 Elsevier B.V. All rights reserved.

  20. Statistical models for predicting pair dispersion and particle clustering in isotropic turbulence and their applications

    International Nuclear Information System (INIS)

    Zaichik, Leonid I; Alipchenkov, Vladimir M

    2009-01-01

    The purpose of this paper is twofold: (i) to advance and extend the statistical two-point models of pair dispersion and particle clustering in isotropic turbulence that were previously proposed by Zaichik and Alipchenkov (2003 Phys. Fluids 15 1776-87; 2007 Phys. Fluids 19 113308) and (ii) to present some applications of these models. The models developed are based on a kinetic equation for the two-point probability density function of the relative velocity distribution of two particles. These models predict the pair relative velocity statistics and the preferential accumulation of heavy particles in stationary and decaying homogeneous isotropic turbulent flows. Moreover, the models are applied to predict the effect of particle clustering on turbulent collisions, sedimentation and the intensity of microwave radiation, as well as to calculate the mean filtered subgrid stress of the particulate phase. Model predictions are compared with direct numerical simulations and experimental measurements.

  1. A dynamo theory prediction for solar cycle 22: Sunspot number, radio flux, exospheric temperature, and total density at 400 km

    Science.gov (United States)

    Schatten, K. H.; Hedin, A. E.

    1986-01-01

    Using the dynamo theory method to predict solar activity, a value for the smoothed sunspot number of 109 + or - 20 is obtained for solar cycle 22. The predicted cycle is expected to peak near December 1990 + or - 1 year. Concomitantly, F(10.7) radio flux is expected to reach a smoothed value of 158 + or - 18 flux units. Global mean exospheric temperature is expected to reach 1060 + or - 50 K and the global average total thermospheric density at 400 km is expected to reach 4.3 x 10 to the -15th gm/cu cm + or - 25 percent.

  2. A dynamo theory prediction for solar cycle 22 - Sunspot number, radio flux, exospheric temperature, and total density at 400 km

    Science.gov (United States)

    Schatten, K. H.; Hedin, A. E.

    1984-01-01

    Using the 'dynamo theory' method to predict solar activity, a value for the smoothed sunspot number of 109 + or - 20 is obtained for solar cycle 22. The predicted cycle is expected to peak near December 1990 + or - 1 year. Concomitantly, F(10.7) radio flux is expected to reach a smoothed value of 158 + or - 18 flux units. Global mean exospheric temperature is expected to reach 1060 + or - 50 K and the global average total thermospheric density at 400 km is expected to reach 4.3 x 10 to the -15th gm/cu cm + or - 25 percent.

  3. Heart Rate Variability Density Analysis (Dyx) and Prediction of Long-Term Mortality after Acute Myocardial Infarction

    DEFF Research Database (Denmark)

    Jørgensen, Rikke Mørch; Abildstrøm, Steen Z; Levitan, Jacob

    2016-01-01

    AIMS: The density HRV parameter Dyx is a new heart rate variability (HRV) measure based on multipole analysis of the Poincaré plot obtained from RR interval time series, deriving information from both the time and frequency domains. Preliminary results have suggested that the parameter may provide...... of mortality (P = 0.02). Reduced Dyx also predicted cardiovascular death (P = 0.05). In Kaplan-Meier analysis, Dyx significantly predicted mortality in patients both with and without impaired left ventricular systolic function.

  4. Stability analysis of a new lattice hydrodynamic model by considering lattice's self-anticipative density effect

    Science.gov (United States)

    Zhang, Geng; Sun, Di-Hua; Liu, Hui; Chen, Dong

    2017-11-01

    In this paper, a new lattice hydrodynamic model with consideration of the density difference between a lattice's current density and its anticipative density is proposed. The influence of the lattice's self-anticipative density on traffic stability is revealed through linear stability theory, which shows that the lattice's self-anticipative density can improve the stability of traffic flow. To describe the phase transition of traffic flow, the mKdV equation near the critical point is derived using the nonlinear analysis method. The propagating behaviour of the density wave in the unstable region can be described by the kink-antikink soliton of the mKdV equation. Numerical simulation validates the analytical results, showing that traffic jams can be suppressed efficiently by considering the lattice's self-anticipative density in the modified lattice hydrodynamic model.
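
    For reference, the mKdV equation obtained in such lattice hydrodynamic derivations and its kink-antikink soliton are typically of the standard form (the rescalings of X and T are model-specific and not reproduced here):

        \partial_T R = \partial_X^{3} R - \partial_X R^{3}, \qquad R(X, T) = \sqrt{c}\, \tanh\!\left( \sqrt{\tfrac{c}{2}}\, (X - cT) \right)

    where c is the propagation velocity of the kink, i.e., of the density wave separating the jammed and free-flow phases.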

  5. Required Collaborative Work in Online Courses: A Predictive Modeling Approach

    Science.gov (United States)

    Smith, Marlene A.; Kellogg, Deborah L.

    2015-01-01

    This article describes a predictive model that assesses whether a student will have greater perceived learning in group assignments or in individual work. The model produces correct classifications 87.5% of the time. The research is notable in that it is the first in the education literature to adopt a predictive modeling methodology using data…

  6. Models for predicting compressive strength and water absorption of ...

    African Journals Online (AJOL)

    This work presents a mathematical model for predicting the compressive strength and water absorption of laterite-quarry dust cement block using augmented Scheffe's simplex lattice design. The statistical models developed can predict the mix proportion that will yield the desired property. The models were tested for lack of ...

  7. Numerical modelling of seawater intrusion in Shenzhen (China) using a 3D density-dependent model including tidal effects

    Science.gov (United States)

    Lu, Wei; Yang, Qingchun; Martín, Jordi D.; Juncosa, Ricardo

    2013-04-01

    During the 1990s, groundwater overexploitation resulted in seawater intrusion in the coastal aquifer of the Shenzhen city, China. Although water supply facilities have been improved and have alleviated seawater intrusion in recent years, groundwater overexploitation is still of great concern in some local areas. In this work we present a three-dimensional density-dependent numerical model developed with the FEFLOW code, which is aimed at simulating the extent of seawater intrusion while including tidal effects and different groundwater pumping scenarios. Model calibration, using water heads and reported chloride concentrations, has been performed based on the data from 14 boreholes, which were monitored from May 2008 to December 2009. A fairly good fit between the observed and computed values was obtained by a manual trial-and-error method. Model prediction has been carried out 3 years forward with the calibrated model, taking into account high, medium and low tide levels and different groundwater exploitation schemes. The model results show that tide-induced seawater intrusion significantly affects the groundwater levels and concentrations near the estuary of the Dasha river, which implies that an important hydraulic connection exists between this river and the groundwater, even considering that some anti-seepage measures were taken in the river bed. Two pumping scenarios were considered in the calibrated model in order to predict the future changes in the water levels and chloride concentration. The numerical results reveal a decreased tendency of seawater intrusion if groundwater exploitation does not exceed an upper bound of about 1.32 × 10⁴ m³/d. The model results also provide insights for controlling seawater intrusion in such coastal aquifer systems.

  8. Nuclear interaction potential in a folded-Yukawa model with diffuse densities

    International Nuclear Information System (INIS)

    Randrup, J.

    1975-09-01

    The folded-Yukawa model for the nuclear interaction potential is generalized to diffuse density distributions which are generated by folding a Yukawa function into sharp generating distributions. The effect of a finite density diffuseness or of a finite interaction range is studied. The Proximity Formula corresponding to the generalized model is derived and numerical comparison is made with the exact results. (8 figures)

  9. Divisive Latent Class Modeling as a Density Estimation Method for Categorical Data

    NARCIS (Netherlands)

    van der Palm, D.W.; van der Ark, L.A.; Vermunt, J.K.

    Traditionally, latent class (LC) analysis is used by applied researchers as a tool for identifying substantively meaningful clusters. More recently, LC models have also been used as a density estimation tool for categorical variables. We introduce a divisive LC (DLC) model as a density estimation tool.

  10. Divisive latent class modeling as a density estimation method for categorical data

    NARCIS (Netherlands)

    van der Palm, D.W.; van der Ark, L.A.; Vermunt, J.K.

    2016-01-01

    Traditionally, latent class (LC) analysis is used by applied researchers as a tool for identifying substantively meaningful clusters. More recently, LC models have also been used as a density estimation tool for categorical variables. We introduce a divisive LC (DLC) model as a density estimation tool.

  11. Age as a predictive factor of mammographic breast density in Jamaican women

    Energy Technology Data Exchange (ETDEWEB)

    Soares, Deanne; Reid, Marvin; James, Michael

    2002-06-01

    AIM: We sought to determine the relationship between age and other clinical characteristics, such as parity, oestrogen use, dietary factors and menstrual history, and breast density in Jamaican women. METHODS AND MATERIALS: A retrospective study was done of 891 patients who attended the breast imaging unit. The clinical characteristics were extracted from the patient records. Mammograms were assessed independently by two radiologists who were blinded to the patient clinical characteristics. Breast densities were assigned using the American College of Radiology (ACR) classification. RESULTS: The concordance of the ACR classification of breast density between the two independent radiologists was 92% with k = 0.76 (SE = 0.02, P < 0.001). Women with low breast density were heavier (81.3 ± 15.5 kg vs 68.4 ± 14.3 kg, P < 0.0001, mean ± standard deviation (SD)) and more obese (body mass index (BMI), 30.3 ± 5.8 kg m⁻² vs 26.0 ± 5.2 kg m⁻², P < 0.0001). Mammographic breast density decreased with age. The age-adjusted odds ratios (ORs) for predictors significantly related to high breast density were parity, OR = 0.79 (95% CI: 0.71, 0.88), weight, OR = 0.92 (95% CI: 0.91, 0.95), BMI, OR = 0.83 (95% CI: 0.78, 0.89), menopause, OR = 0.51 (95% CI: 0.36, 0.74), and a history of previous breast surgery, OR = 1.6 (95% CI: 1.1, 2.3). CONCLUSION: The rate of decline of breast density with age in our population was influenced by parity and body composition. Soares, D. et al. (2002)
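
    Age-adjusted odds ratios of this kind come from ordinary logistic regression: fit the model with age alongside the predictor of interest and exponentiate the coefficients. A sketch with synthetic data (variable names and effect sizes are illustrative, not from the study):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        n = 891
        age = rng.uniform(40, 70, n)
        parity = rng.poisson(2, n).astype(float)
        bmi = rng.normal(28, 5, n)

        # Synthetic outcome: odds of high density fall with age, parity and BMI.
        logit = 2.0 - 0.02 * (age - 55) - 0.25 * parity - 0.18 * (bmi - 28)
        high_density = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

        X = sm.add_constant(np.column_stack([age, parity, bmi]))
        fit = sm.Logit(high_density, X).fit(disp=0)
        for name, beta in zip(["age", "parity", "bmi"], fit.params[1:]):
            print(f"adjusted OR, {name}: {np.exp(beta):.2f}")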

  12. Early changes of parotid density and volume predict modifications at the end of therapy and intensity of acute xerostomia

    International Nuclear Information System (INIS)

    Belli, Maria Luisa; Broggi, Sara; Scalco, Elisa; Rizzo, Giovanna; Sanguineti, Giuseppe; Fiorino, Claudio; Cattaneo, Giovanni Mauro; Dinapoli, Nicola; Valentini, Vincenzo; Ricchetti, Francesco

    2014-01-01

    To quantitatively assess the predictive power of early variations of parotid gland volume and density on final changes at the end of therapy and, possibly, on acute xerostomia during IMRT for head-neck cancer. Data of 92 parotids (46 patients) were available. Kinetics of the changes during treatment were described by the daily rates of density (rΔρ) and volume (rΔvol) variation based on weekly diagnostic kVCT images. The correlation between early and final changes was investigated, as well as the correlation with prospective toxicity data (CTCAE v3.0) collected weekly during treatment for 24/46 patients. A higher rΔρ was observed during the first compared to the last week of treatment (-0.50 vs -0.05 HU, p-value = 0.0001). Based on early variations, a good estimation of the final changes may be obtained (Δρ: AUC = 0.82, p = 0.0001; Δvol: AUC = 0.77, p = 0.0001). Both early rΔρ and rΔvol predict a higher "mean" acute xerostomia score (≥ median value, 1.57; p-value = 0.01). Median early rate changes differed between patients above and below the median xerostomia score for both rΔρ and rΔvol. Further studies are necessary to definitively assess the potential of early density/volume changes in identifying more sensitive patients at higher risk of experiencing xerostomia. (orig.)

  13. Joint density of eigenvalues in spiked multivariate models.

    Science.gov (United States)

    Dharmawansa, Prathapasinghe; Johnstone, Iain M

    2014-01-01

    The classical methods of multivariate analysis are based on the eigenvalues of one or two sample covariance matrices. In many applications of these methods, for example to high dimensional data, it is natural to consider alternative hypotheses which are a low rank departure from the null hypothesis. For rank one alternatives, this note provides a representation for the joint eigenvalue density in terms of a single contour integral. This will be of use for deriving approximate distributions for likelihood ratios and 'linear' statistics used in testing.

  14. Regression models for predicting anthropometric measurements of ...

    African Journals Online (AJOL)

    This study used easy-to-measure anthropometric dimensions to predict difficult-to-measure dimensions required for the ergonomic design of school furniture. A total of 143 students aged between 16 and 18 years from eight public secondary schools in Ogbomoso, Nigeria ...

  15. FINITE ELEMENT MODEL FOR PREDICTING RESIDUAL ...

    African Journals Online (AJOL)

    direction (σx) had a maximum value of 375 MPa (tensile) and a minimum value of ... These results show that the residual stresses obtained by prediction from the finite element method are in fair agreement with the experimental results.

  16. Probabilistic Modeling and Visualization for Bankruptcy Prediction

    DEFF Research Database (Denmark)

    Antunes, Francisco; Ribeiro, Bernardete; Pereira, Francisco Camara

    2017-01-01

    In accounting and finance domains, bankruptcy prediction is of great utility for all of the economic stakeholders. The challenge of accurate assessment of business failure prediction, especially under scenarios of financial crisis, is known to be complicated. Although there have been many successful...... Using real-world bankruptcy data, an in-depth analysis is conducted showing that, in addition to a probabilistic interpretation, the GP can effectively improve the bankruptcy prediction performance with high accuracy when compared to the other approaches. We additionally generate a complete graphical...... visualization to improve our understanding of the different attained performances, effectively compiling all the conducted experiments in a meaningful way. We complete our study with an entropy-based analysis that highlights the uncertainty-handling properties provided by the GP, crucial for prediction tasks...

  17. A novel unified dislocation density-based model for hot deformation behavior of a nickel-based superalloy under dynamic recrystallization conditions

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Y.C. [Central South University, School of Mechanical and Electrical Engineering, Changsha (China); Light Alloy Research Institute of Central South University, Changsha (China); State Key Laboratory of High Performance Complex Manufacturing, Changsha (China); Wen, Dong-Xu; Chen, Xiao-Min [Central South University, School of Mechanical and Electrical Engineering, Changsha (China); Chen, Ming-Song [Central South University, School of Mechanical and Electrical Engineering, Changsha (China); State Key Laboratory of High Performance Complex Manufacturing, Changsha (China)

    2016-09-15

    In this study, a novel unified dislocation density-based model is presented for characterizing the hot deformation behavior of a nickel-based superalloy under dynamic recrystallization (DRX) conditions. In the Kocks-Mecking model, a new softening term is proposed to represent the impact of DRX behavior on dislocation density evolution. The grain size evolution and DRX kinetics are incorporated into the developed model. Material parameters of the developed model are calibrated using a derivative-free method in MATLAB. Comparisons between experimental and predicted results confirm that the developed unified dislocation density-based model can nicely reproduce the hot deformation behavior, DRX kinetics, and grain size evolution over a wide range of initial grain sizes, strain rates, and deformation temperatures. Moreover, the developed unified dislocation density-based model is well suited to analyzing the time-variant forming processes of the studied superalloy. (orig.)

  18. Prediction for Major Adverse Outcomes in Cardiac Surgery: Comparison of Three Prediction Models

    Directory of Open Access Journals (Sweden)

    Cheng-Hung Hsieh

    2007-09-01

    Conclusion: The Parsonnet score performed as well as the logistic regression models in predicting major adverse outcomes. The Parsonnet score appears to be a very suitable model for clinicians to use in risk stratification of cardiac surgery.

  19. Product distribution modelling in the thermal pyrolysis of high density polyethylene

    International Nuclear Information System (INIS)

    Elordi, G.; Lopez, G.; Olazar, M.; Aguado, R.; Bilbao, J.

    2007-01-01

    The thermal fast pyrolysis of high density polyethylene (HDPE) has been carried out in a conical spouted bed reactor in the 450-715 deg. C range, and individual products have been monitored with the aim of obtaining kinetic data for the design and simulation of this process at large scale. Kinetic schemes have been proposed in order to explain both the results obtained in the laboratory plant and those obtained in the literature by other authors operating at laboratory and larger scale. Discrimination has been carried out based on the contribution of the variance of model parameters (stepwise regression) to the total variance explained by the model. The models based on that of Westerhout et al. [R.W.J. Westerhout, J. Waanders, W.P.M. Van Swaaij, Recycling of polyethene and polypropene in a novel bench-scale rotating cone reactor by high-temperature pyrolysis. Ind. Eng. Chem. Res. 37 (6) (1998) 2293-2300] do not adequately predict the experimental results, especially those corresponding to aromatics and char, which is probably due to the very short residence times attained in the conical spouted bed and, consequently, to the lower yields of aromatics and char. The model of best fit is the one where polyethylene degrades to give gas, liquid (oil) and wax fractions. Furthermore, the latter undergoes secondary reactions to give liquid and aromatics, which in turn produce more char
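
    A sketch of the best-fit scheme as a lumped first-order kinetic ODE system: HDPE degrades to gas, liquid (oil) and wax, and wax undergoes secondary reactions to liquid and aromatics. The rate constants below are invented placeholders, not the paper's fitted values, and the char pathway is omitted for brevity.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Invented first-order rate constants (1/s), for illustration only.
        k_gas, k_liq, k_wax = 0.02, 0.05, 0.10  # primary: HDPE -> gas, liquid, wax
        k_wl, k_wa = 0.03, 0.01                 # secondary: wax -> liquid, aromatics

        def rhs(t, y):
            hdpe, gas, liq, wax, arom = y
            return [-(k_gas + k_liq + k_wax) * hdpe,
                    k_gas * hdpe,
                    k_liq * hdpe + k_wl * wax,
                    k_wax * hdpe - (k_wl + k_wa) * wax,
                    k_wa * wax]

        sol = solve_ivp(rhs, (0.0, 120.0), [1.0, 0.0, 0.0, 0.0, 0.0])
        for name, frac in zip(["HDPE", "gas", "liquid", "wax", "aromatics"], sol.y[:, -1]):
            print(f"{name:9s} mass fraction at 120 s: {frac:.3f}")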

  20. Molecular weight/branching distribution modeling of low-density polyethylene accounting for topological scission and combination termination in continuous stirred tank reactor

    NARCIS (Netherlands)

    Yaghini, N.; Iedema, P.D.

    2014-01-01

    We present a comprehensive model to predict the molecular weight distribution (MWD) and branching distribution of low-density polyethylene (LDPE) for a free radical polymerization system in a continuous stirred tank reactor (CSTR). The model accounts for branching, by branching moment or

  1. Modeling and Validation of the Thermal Response of TDI Encapsulating Foam as a function of Initial Density

    Energy Technology Data Exchange (ETDEWEB)

    Dodd, Amanda B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Larsen, Marvin E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-09-01

    TDI foams of nominal density from 10 to 45 pounds per cubic foot were decomposed within a heated stainless steel container. The pressure in the container and the temperatures measured by thermocouples were recorded, with each test proceeding to an allowed maximum pressure before venting. Two replicate tests for each of four densities and two orientations in gravity produced very consistent pressure histories. Some thermal responses demonstrate random sudden temperature increases due to decomposition product movement. The pressurization of the container due to the generation of gaseous products is more rapid for denser foams. When heating in the inverted orientation, where gravity is in the opposite direction of the applied heat flux, the liquefied decomposition products move towards the heated plate and the pressure rises more rapidly than in the upright configuration. This effect is present at all the densities tested but becomes more pronounced as the density of the foam is decreased. A thermochemical material model, implemented in a transient conduction model solved with the finite element method, was compared to the test data. The expected uncertainty of the model was estimated using the mean value method, and importance factors for the uncertain parameters were estimated. The model that was assessed does not consider the effect of liquefaction or movement of gases. The result of the comparison is that the model uncertainty estimates do not account for the variation in orientation (no gravitational effects are in the model) and therefore the pressure predictions are not distinguishable by orientation. Temperature predictions were generally in good agreement with the experimental data. Predictions for response locations on the outside of the can benefit from reliable estimates associated with conduction in the metal. For the lighter foams, temperatures measured on the embedded component fall well within the estimated uncertainty intervals, indicating the energy transport
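
    A sketch of the mean value (first-order) uncertainty method referred to above: the model is linearized about the mean inputs, the output variance is the sum of squared sensitivity-weighted input variances, and each parameter's share of that sum is its importance factor. The response function and parameter statistics below are generic stand-ins, not the report's model.

        import numpy as np

        def f(x):
            # Generic stand-in for the model's scalar response (e.g., peak pressure).
            k, rho, q = x
            return q * rho / k

        mean = np.array([0.2, 320.0, 1.5e3])  # parameter means (made-up units)
        std = np.array([0.02, 16.0, 1.0e2])   # parameter standard deviations

        grad = np.empty(3)
        for i in range(3):  # central finite-difference sensitivities at the mean
            dx = np.zeros(3)
            dx[i] = 1e-6 * mean[i]
            grad[i] = (f(mean + dx) - f(mean - dx)) / (2 * dx[i])

        var_terms = (grad * std) ** 2
        print(f"response: {f(mean):.1f} +/- {np.sqrt(var_terms.sum()):.1f}")
        for name, imp in zip(["k", "rho", "q"], var_terms / var_terms.sum()):
            print(f"importance factor {name}: {imp:.2f}")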

  2. From Predictive Models to Instructional Policies

    Science.gov (United States)

    Rollinson, Joseph; Brunskill, Emma

    2015-01-01

    At their core, Intelligent Tutoring Systems consist of a student model and a policy. The student model captures the state of the student and the policy uses the student model to individualize instruction. Policies require different properties from the student model. For example, a mastery threshold policy requires the student model to have a way…
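
    A mastery threshold policy is easy to make concrete with a Bayesian knowledge tracing (BKT) student model: practice items are given until the estimated probability of mastery crosses a threshold. The parameter values below are illustrative, not taken from the paper.

        # Illustrative BKT parameters.
        p_learn, p_slip, p_guess = 0.15, 0.10, 0.20
        threshold = 0.95

        def bkt_update(p_mastery: float, correct: bool) -> float:
            """Posterior P(mastered) after one response, then a learning transition."""
            if correct:
                lik_m, lik_u = 1 - p_slip, p_guess
            else:
                lik_m, lik_u = p_slip, 1 - p_guess
            post = lik_m * p_mastery / (lik_m * p_mastery + lik_u * (1 - p_mastery))
            return post + (1 - post) * p_learn

        p = 0.3  # prior probability that the skill is already mastered
        for t, correct in enumerate([False, True, True, True, True, True], 1):
            p = bkt_update(p, correct)
            print(f"after item {t}: P(mastery) = {p:.3f}")
            if p >= threshold:
                print("threshold reached; the policy stops practice")
                break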

  3. Modeling effects of overstory density and competing vegetation on tree height growth

    Science.gov (United States)

    Christian Salas; Albert R. Stage; Andrew P. Robinson

    2007-01-01

    We developed and evaluated an individual-tree height growth model for Douglas-fir [Pseudotsuga menziesii (Mirbel) Franco] in the Inland Northwest United States. The model predicts growth for all tree sizes continuously, rather than requiring a transition between independent models for juvenile and mature growth phases. The model predicts the effects...

  4. ON GALACTIC DENSITY MODELING IN THE PRESENCE OF DUST EXTINCTION

    International Nuclear Information System (INIS)

    Bovy, Jo; Rix, Hans-Walter; Schlafly, Edward F.; Green, Gregory M.; Finkbeiner, Douglas P.

    2016-01-01

    Inferences about the spatial density or phase-space structure of stellar populations in the Milky Way require a precise determination of the effective survey volume. The volume observed by surveys such as Gaia or near-infrared spectroscopic surveys, which have good coverage of the Galactic midplane region, is highly complex because of the abundant small-scale structure in the three-dimensional interstellar dust extinction. We introduce a novel framework for analyzing the importance of small-scale structure in the extinction. This formalism demonstrates that the spatially complex effect of extinction on the selection function of a pencil-beam or contiguous sky survey is equivalent to a low-pass filtering of the extinction-affected selection function with the smooth density field. We find that the angular resolution of current 3D extinction maps is sufficient for analyzing Gaia sub-samples of millions of stars. However, the current distance resolution is inadequate and needs to be improved by an order of magnitude, especially in the inner Galaxy. We also present a practical and efficient method for properly taking the effect of extinction into account in analyses of Galactic structure through an effective selection function. We illustrate its use with the selection function of red-clump stars in APOGEE using and comparing a variety of current 3D extinction maps
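
    Schematically (generic notation, not copied from the paper): for a magnitude-limited survey with raw selection function S(l, b, m), a tracer of absolute magnitude M seen through extinction A(l, b, D) has apparent magnitude m = M + 5 log10(D / 10 pc) + A(l, b, D), so the effective selection function and the effective volume entering a density fit are

        S_{\mathrm{eff}}(l, b, D) = \int S\big(l, b,\, M + 5\log_{10}(D/10\,\mathrm{pc}) + A(l, b, D)\big)\, \phi(M)\, \mathrm{d}M, \qquad V_{\mathrm{eff}} = \int S_{\mathrm{eff}}(l, b, D)\, \nu(l, b, D)\, D^{2}\, \mathrm{d}D\, \mathrm{d}\Omega

    where φ(M) is the tracer's absolute-magnitude distribution and ν the density profile being fit; the low-pass-filtering result says, roughly, that only the extinction averaged over this selection kernel matters.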

  5. ON GALACTIC DENSITY MODELING IN THE PRESENCE OF DUST EXTINCTION

    Energy Technology Data Exchange (ETDEWEB)

    Bovy, Jo [Department of Astronomy and Astrophysics, University of Toronto, 50 St. George Street, Toronto, ON, M5S 3H4 (Canada); Rix, Hans-Walter; Schlafly, Edward F. [Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg (Germany); Green, Gregory M.; Finkbeiner, Douglas P., E-mail: bovy@astro.utoronto.ca [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States)

    2016-02-20

    Inferences about the spatial density or phase-space structure of stellar populations in the Milky Way require a precise determination of the effective survey volume. The volume observed by surveys such as Gaia or near-infrared spectroscopic surveys, which have good coverage of the Galactic midplane region, is highly complex because of the abundant small-scale structure in the three-dimensional interstellar dust extinction. We introduce a novel framework for analyzing the importance of small-scale structure in the extinction. This formalism demonstrates that the spatially complex effect of extinction on the selection function of a pencil-beam or contiguous sky survey is equivalent to a low-pass filtering of the extinction-affected selection function with the smooth density field. We find that the angular resolution of current 3D extinction maps is sufficient for analyzing Gaia sub-samples of millions of stars. However, the current distance resolution is inadequate and needs to be improved by an order of magnitude, especially in the inner Galaxy. We also present a practical and efficient method for properly taking the effect of extinction into account in analyses of Galactic structure through an effective selection function. We illustrate its use with the selection function of red-clump stars in APOGEE using and comparing a variety of current 3D extinction maps.

  6. On Galactic Density Modeling in the Presence of Dust Extinction

    Science.gov (United States)

    Bovy, Jo; Rix, Hans-Walter; Green, Gregory M.; Schlafly, Edward F.; Finkbeiner, Douglas P.

    2016-02-01

    Inferences about the spatial density or phase-space structure of stellar populations in the Milky Way require a precise determination of the effective survey volume. The volume observed by surveys such as Gaia or near-infrared spectroscopic surveys, which have good coverage of the Galactic midplane region, is highly complex because of the abundant small-scale structure in the three-dimensional interstellar dust extinction. We introduce a novel framework for analyzing the importance of small-scale structure in the extinction. This formalism demonstrates that the spatially complex effect of extinction on the selection function of a pencil-beam or contiguous sky survey is equivalent to a low-pass filtering of the extinction-affected selection function with the smooth density field. We find that the angular resolution of current 3D extinction maps is sufficient for analyzing Gaia sub-samples of millions of stars. However, the current distance resolution is inadequate and needs to be improved by an order of magnitude, especially in the inner Galaxy. We also present a practical and efficient method for properly taking the effect of extinction into account in analyses of Galactic structure through an effective selection function. We illustrate its use with the selection function of red-clump stars in APOGEE using and comparing a variety of current 3D extinction maps.

  7. Comparison between the double folding model and the energy density approach for calculating the ion-ion potential

    International Nuclear Information System (INIS)

    Ismail, M.; Osman, M.; Guirguis, J.W.; Ramadan, Kh.A.; Zahra, H.A.

    1989-01-01

    In the present work, we use the energy density formalism derived from both the conventional Skyrme force, with parameter sets SI, SII and SIII, and the extended Skyrme force, with parameter sets SKE1, SKE2, SKE3 and SKE4, to study the real part of the ion-ion potential between different pairs of nuclei. We first modified the Skyrme energy density to include the energy dependence of the ion-ion potential. We then calculated the interaction potential between different pairs of nuclei at the strong absorption radius. We compared our results with those deduced from experiment and with the predictions of the double folding model with the M3Y force. We found that our results, obtained using a suitable approximation of the kinetic energy density, agree satisfactorily with experiment; the agreement is better than that found in other papers. (author)

  8. Modelling the Effect of Weave Structure and Fabric Thread Density on Mechanical and Comfort Properties of Woven Fabrics

    Directory of Open Access Journals (Sweden)

    Maqsood Muhammad

    2016-09-01

    Full Text Available The paper investigates the effects of weave structure and fabric thread density on the comfort and mechanical properties of various test fabrics woven from polyester/cotton yarns. Three different weave structures (1/1 plain, 2/1 twill and 3/1 twill) and three different fabric densities were taken as input variables, whereas air permeability, overall moisture management capacity, tensile strength and tear strength of the fabrics were taken as response variables, and a comparison is made of the effect of weave structure and fabric density on the response variables. The results of the fabric samples were analysed in Minitab statistical software. The coefficients of determination (R-sq values) of the regression equations show a good predictive ability of the developed statistical models. The findings of the study may be helpful in deciding appropriate manufacturing specifications of woven fabrics to attain specific comfort and mechanical properties.

  9. Multivariate power-law models for streamflow prediction in the Mekong Basin

    Directory of Open Access Journals (Sweden)

    Guillaume Lacombe

    2014-11-01

    New hydrological insights for the region: A combination of 3–6 explanatory variables – chosen among annual rainfall, drainage area, perimeter, elevation, slope, drainage density and latitude – is sufficient to predict a range of flow metrics with a prediction R-squared ranging from 84 to 95%. The inclusion of forest or paddy percentage coverage as an additional explanatory variable led to slight improvements in the predictive power of some of the low-flow models (lowest prediction R-squared = 89%). A physical interpretation of the model structure was possible for most of the resulting relationships. Compared to regional regression models developed in other parts of the world, this new set of equations performs reasonably well.
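
    Multivariate power-law models of this type, Q = a * x1^b1 * x2^b2 * ..., are usually fit by ordinary least squares after a log transform. A sketch with synthetic catchments (the variables and exponents are invented):

        import numpy as np

        rng = np.random.default_rng(4)
        n = 60
        area = rng.lognormal(6, 1, n)     # drainage area, km^2
        rain = rng.lognormal(7, 0.3, n)   # annual rainfall, mm
        slope = rng.lognormal(1, 0.4, n)  # mean slope, percent

        # Synthetic "true" power law with multiplicative noise.
        q = np.exp(-8) * area**0.9 * rain**1.2 * slope**0.3 * rng.lognormal(0, 0.1, n)

        X = np.column_stack([np.ones(n), np.log(area), np.log(rain), np.log(slope)])
        y = np.log(q)
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        r2 = 1 - np.sum((y - X @ coef) ** 2) / np.sum((y - y.mean()) ** 2)
        print("fitted exponents:", np.round(coef[1:], 2), f"R-squared: {r2:.2f}")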

  10. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Full Text Available Faulting prediction is at the core of concrete pavement maintenance and design. Highway agencies are often faced with low prediction accuracy, which leads to costly maintenance. Although many researchers have developed performance prediction models, the accuracy of prediction has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Then three models, including a multivariate nonlinear regression (MNLR) model, an artificial neural network (ANN) model, and a Markov chain (MC) model, are tested and compared using a set of actual pavement survey data taken on interstate highways with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems a good tool for pavement performance prediction when data is limited, but it is based on visual inspections and not explicitly related to quantitative physical parameters. This paper then suggests that the further direction for developing performance prediction models is to combine the advantages of the different models to obtain better accuracy.
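
    The Markov chain approach reduces to propagating a condition-state distribution through a transition matrix. A minimal sketch (a hypothetical five-state faulting scale and invented yearly transition probabilities):

        import numpy as np

        # States 1 (no faulting) .. 5 (severe); rows are one-year transitions.
        P = np.array([
            [0.85, 0.15, 0.00, 0.00, 0.00],
            [0.00, 0.80, 0.20, 0.00, 0.00],
            [0.00, 0.00, 0.75, 0.25, 0.00],
            [0.00, 0.00, 0.00, 0.70, 0.30],
            [0.00, 0.00, 0.00, 0.00, 1.00],
        ])

        state0 = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # new pavement
        for year in (5, 10, 15):
            dist = state0 @ np.linalg.matrix_power(P, year)
            print(f"year {year:2d}: expected condition state = {dist @ np.arange(1, 6):.2f}")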

  11. Using machine learning to predict soil bulk density on the basis of visual parameters

    NARCIS (Netherlands)

    Bondi, Giulia; Creamer, Rachel; Ferrari, Alessio; Fenton, Owen; Wall, David

    2018-01-01

    Soil structure is a key factor that supports all soil functions. Extracting intact soil cores and horizon specific samples for determination of soil physical parameters (e.g. bulk density (Bd) or particle size distribution) is a common practice for assessing indicators of soil structure. However,

  12. Tooth counts do not predict bone mineral density in early postmenopausal Caucasian women. EPIC study group

    DEFF Research Database (Denmark)

    Earnshaw, S A; Keating, N; Hosking, D J

    1998-01-01

    -centre trial. METHODS: Subjects were recruited at four study centres, using population-based techniques. Bone mineral density (BMD) at the lumbar spine and proximal femur was measured by dual energy x-ray absorptiometry (DXA) (Hologic QDR 2000). A full physical examination was performed including a tooth count...

  13. Comparison of Clinical and Automated Breast Density Measurements: Implications for Risk Prediction and Supplemental Screening

    Science.gov (United States)

    Brandt, Kathleen R.; Scott, Christopher G.; Ma, Lin; Mahmoudzadeh, Amir P.; Jensen, Matthew R.; Whaley, Dana H.; Wu, Fang Fang; Malkov, Serghei; Hruska, Carrie B.; Norman, Aaron D.; Heine, John; Shepherd, John; Pankratz, V. Shane; Kerlikowske, Karla

    2016-01-01

    Purpose To compare the classification of breast density with two automated methods, Volpara (version 1.5.0; Matakina Technology, Wellington, New Zealand) and Quantra (version 2.0; Hologic, Bedford, Mass), with clinical Breast Imaging Reporting and Data System (BI-RADS) density classifications and to examine associations of these measures with breast cancer risk. Materials and Methods In this study, 1911 patients with breast cancer and 4170 control subjects matched for age, race, examination date, and mammography machine were evaluated. Participants underwent mammography at Mayo Clinic or one of four sites within the San Francisco Mammography Registry between 2006 and 2012 and provided informed consent or a waiver for research, in compliance with HIPAA regulations and institutional review board approval. Digital mammograms were retrieved a mean of 2.1 years (range, 6 months to 6 years) before cancer diagnosis, with the corresponding clinical BI-RADS density classifications, and Volpara and Quantra density estimates were generated. Agreement was assessed with weighted κ statistics among control subjects. Breast cancer associations were evaluated with conditional logistic regression, adjusted for age and body mass index. Odds ratios, C statistics, and 95% confidence intervals (CIs) were estimated. Results Agreement between clinical BI-RADS density classifications and Volpara and Quantra BI-RADS estimates was moderate, with κ values of 0.57 (95% CI: 0.55, 0.59) and 0.46 (95% CI: 0.44, 0.47), respectively. Differences of up to 14% in dense tissue classification were found, with Volpara classifying 51% of women as having dense breasts, Quantra classifying 37%, and clinical BI-RADS assessment used to classify 43%. Clinical and automated measures showed similar breast cancer associations; odds ratios for extremely dense breasts versus scattered fibroglandular densities were 1.8 (95% CI: 1.5, 2.2), 1.9 (95% CI: 1.5, 2.5), and 2.3 (95% CI: 1.9, 2.8) for Volpara, Quantra

  14. 2d Model Field Theories at Finite Temperature and Density

    OpenAIRE

    Schoen, Verena; Thies, Michael

    2000-01-01

    In certain 1+1 dimensional field theoretic toy models, one can go all the way from microscopic quarks via the hadron spectrum to the properties of hot and dense baryonic matter in an essentially analytic way. This "miracle" is illustrated through case studies of two popular large N models, the Gross-Neveu and the 't Hooft model - caricatures of the Nambu-Jona-Lasinio model and real QCD, respectively. The main emphasis will be on aspects related to spontaneous symmetry breaking (discrete or co...

  15. Models for turbulent flows with variable density and combustion

    International Nuclear Information System (INIS)

    Jones, W.P.

    1980-01-01

    Models for transport processes and combustion in turbulent flows are outlined with emphasis on the situation where the fuel and air are injected separately. Attention is restricted to relatively simple flames. The flows investigated are high Reynolds number, single-phase, turbulent high-temperature flames in which radiative heat transfer can be considered negligible. Attention is given to the lower order closure models, algebraic stress and flux models, the k-epsilon turbulence model, the diffusion flame approximation, and finite rate reaction mechanisms

  16. Accurate estimate of the relic density and the kinetic decoupling in nonthermal dark matter models

    International Nuclear Information System (INIS)

    Arcadi, Giorgio; Ullio, Piero

    2011-01-01

    Nonthermal dark matter generation is an appealing alternative to the standard paradigm of thermal WIMP dark matter. We reconsider nonthermal production mechanisms in a systematic way, and develop a numerical code for accurate computations of the dark matter relic density. We discuss, in particular, scenarios with long-lived massive states decaying into dark matter particles, appearing naturally in several beyond the standard model theories, such as supergravity and superstring frameworks. Since nonthermal production favors dark matter candidates with large pair annihilation rates, we analyze the possible connection with the anomalies detected in the lepton cosmic-ray flux by Pamela and Fermi. Concentrating on supersymmetric models, we consider the effect of these nonstandard cosmologies in selecting a preferred mass scale for the lightest supersymmetric particle as a dark matter candidate, and the consequent impact on the interpretation of new physics discovered or excluded at the LHC. Finally, we examine a rather predictive model, the G2-MSSM, investigating some of the standard assumptions usually implemented in the solution of the Boltzmann equation for the dark matter component, including coannihilations. We question the hypothesis that kinetic equilibrium holds along the whole phase of dark matter generation, and the validity of the factorization usually implemented to rewrite the system of a coupled Boltzmann equation for each coannihilating species as a single equation for the sum of all the number densities. As a byproduct we develop here a formalism to compute the kinetic decoupling temperature in case of coannihilating particles, which can also be applied to other particle physics frameworks, and also to standard thermal relics within a standard cosmology.

  17. A model to predict the beginning of the pollen season

    DEFF Research Database (Denmark)

    Toldam-Andersen, Torben Bo

    1991-01-01

    In order to predict the beginning of the pollen season, a model comprising the Utah phenoclimatography Chill Unit (CU) and ASYMCUR-Growing Degree Hour (GDH) submodels was used to predict the first bloom in Alnus, Ulmus and Betula. The model relates environmental temperatures to rest completion...... and bud development. As the phenologic parameter, 14 years of pollen counts were used. The observed dates for the beginning of the pollen seasons were defined from the pollen counts and compared with the model prediction. The CU and GDH submodels were used as: 1. A fixed day model, using only the GDH model...... for fruit trees are generally applicable, and give a reasonable description of the growth processes of other trees. This type of model can therefore be of value in predicting the start of the pollen season. The predicted dates were generally within 3-5 days of the observed. Finally the possibility of frost...
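
    As a rough illustration of the growing-degree-hour idea behind such submodels (the base temperature and heat requirement below are invented placeholders, not the paper's ASYMCUR parameters):

        # Growing Degree Hours: hourly accumulation of temperature above a base
        # threshold; bloom is predicted when the cumulative sum reaches a
        # species-specific requirement (both constants here are hypothetical).
        T_BASE = 4.0           # assumed base temperature, degrees C
        GDH_REQUIRED = 6000.0  # assumed heat requirement after rest completion

        def hours_to_bloom(hourly_temps):
            gdh = 0.0
            for hour, t in enumerate(hourly_temps, start=1):
                gdh += max(0.0, t - T_BASE)
                if gdh >= GDH_REQUIRED:
                    return hour  # first hour at which the requirement is met
            return None          # requirement not reached within the record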

  18. Risk prediction model: Statistical and artificial neural network approach

    Science.gov (United States)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand the suitable approach, development and validation process for a risk prediction model. A qualitative review of the aims, methods and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was done. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, artificial neural network approaches to developing prediction models were more accurate than statistical approaches. However, currently only limited published literature discusses which approach is more accurate for risk prediction model development.

  19. Early changes of parotid density and volume predict modifications at the end of therapy and intensity of acute xerostomia.

    Science.gov (United States)

    Belli, Maria Luisa; Scalco, Elisa; Sanguineti, Giuseppe; Fiorino, Claudio; Broggi, Sara; Dinapoli, Nicola; Ricchetti, Francesco; Valentini, Vincenzo; Rizzo, Giovanna; Cattaneo, Giovanni Mauro

    2014-10-01

    To quantitatively assess the predictive power of early variations of parotid gland volume and density on final changes at the end of therapy and, possibly, on acute xerostomia during IMRT for head-neck cancer. Data of 92 parotids (46 patients) were available. Kinetics of the changes during treatment were described by the daily rate of density (rΔρ) and volume (rΔvol) variation based on weekly diagnostic kVCT images. Correlation between early and final changes was investigated, as was the correlation with prospective toxicity data (CTCAE v3.0) collected weekly during treatment for 24/46 patients. A higher rΔρ was observed during the first week compared with the last week of treatment (-0.50 vs -0.05 HU, p = 0.0001). Based on early variations, a good estimation of the final changes may be obtained (Δρ: AUC = 0.82, p = 0.0001; Δvol: AUC = 0.77, p = 0.0001). Both early rΔρ and rΔvol predict a higher "mean" acute xerostomia score (≥ median value, 1.57; p = 0.01). Median early density rate changes for patients with mean xerostomia score ≥ ... xerostomia is well predicted by higher rΔρ and rΔvol in the first two weeks of treatment: best cut-off values were -0.50 HU/day and -380 mm(3)/day for rΔρ and rΔvol respectively. Further studies are necessary to definitively assess the potential of early density/volume changes in identifying more sensitive patients at higher risk of experiencing xerostomia.

  20. Evaluation of the US Army fallout prediction model

    International Nuclear Information System (INIS)

    Pernick, A.; Levanon, I.

    1987-01-01

    The US Army fallout prediction method was evaluated against an advanced fallout prediction model--SIMFIC (Simplified Fallout Interpretive Code). The danger zone areas of the US Army method were found to be significantly greater (up to a factor of 8) than the areas of corresponding radiation hazard as predicted by SIMFIC. Nonetheless, because the US Army's method predicts danger zone lengths that are commonly shorter than the corresponding hot line distances of SIMFIC, the US Army's method is not reliably conservative

  1. Density-temperature scaling of the fragility in a model glass-former

    DEFF Research Database (Denmark)

    Schrøder, Thomas; Sengupta, Shiladitya; Sastry, Srikanth

    2013-01-01

    Dynamical quantities, e.g. diffusivity and relaxation time, for some glass-formers may depend on density and temperature through a specific combination, rather than independently, allowing the representation of data over ranges of density and temperature as a function of a single scaling variable. Such a scaling, referred to as density-temperature (DT) scaling, is exact for liquids with inverse power law (IPL) interactions but has also been found to be approximately valid in many non-IPL liquids. We have analyzed the consequences of DT scaling on the density dependence of the fragility in a model glass-former. We find the density dependence of kinetic fragility to be weak, and show that it can be understood in terms of DT scaling and deviations from DT scaling at low densities. We also show that the Adam-Gibbs relation exhibits DT scaling and the scaling exponent computed from the density dependence...
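
    For reference, DT scaling is conventionally written by collapsing a dynamical quantity onto a single variable (a textbook identity, not quoted from this abstract):

        \tau(\rho, T) \;=\; f\!\left(\frac{\rho^{\gamma}}{T}\right)

    For an inverse power law pair potential v(r) ∝ r^{-n} the scaling is exact with exponent γ = n/3; for non-IPL liquids γ is an empirical fit parameter.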

  2. Comparative Evaluation of Some Crop Yield Prediction Models ...

    African Journals Online (AJOL)

    A computer program was adopted from the work of Hill et al. (1982) to calibrate and test three of the existing yield prediction models using tropical cowpea yield–weather data. The models tested were the Hanks Model (first and second versions), the Stewart Model (first and second versions) and the Hall–Butcher Model. Three sets of ...

  3. Comparative Evaluation of Some Crop Yield Prediction Models ...

    African Journals Online (AJOL)

    (1982) to calibrate and test three of the existing yield prediction models using tropical cowpea yield–weather data. The models tested were the Hanks Model (first and second versions), the Stewart Model (first and second versions) and the Hall–Butcher Model. Three sets of cowpea yield-water use and weather data were collected.

  4. Prediction of speech intelligibility based on an auditory preprocessing model

    DEFF Research Database (Denmark)

    Christiansen, Claus Forup Corlin; Pedersen, Michael Syskind; Dau, Torsten

    2010-01-01

    Classical speech intelligibility models, such as the speech transmission index (STI) and the speech intelligibility index (SII) are based on calculations on the physical acoustic signals. The present study predicts speech intelligibility by combining a psychoacoustically validated model of auditory...

  5. Modelling microbial interactions and food structure in predictive microbiology

    NARCIS (Netherlands)

    Malakar, P.K.

    2002-01-01

    Keywords: modelling, dynamic models, microbial interactions, diffusion, microgradients, colony growth, predictive microbiology.

    Growth response of microorganisms in foods is a complex process. Innovations in food production and preservation techniques have resulted in adoption of

  6. Ocean wave prediction using numerical and neural network models

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Prabaharan, N.

    This paper presents an overview of the development of the numerical wave prediction models and recently used neural networks for ocean wave hindcasting and forecasting. The numerical wave models express the physical concepts of the phenomena...

  7. A Prediction Model of the Capillary Pressure J-Function.

    Directory of Open Access Journals (Sweden)

    W S Xu

    Full Text Available The capillary pressure J-function is a dimensionless measure of the capillary pressure of a fluid in a porous medium. The function was derived based on a capillary bundle model. However, the dependence of the J-function on the saturation Sw is not well understood. A prediction model for it is presented based on a capillary pressure model, and the resulting J-function prediction model is a power function rather than an exponential or polynomial function. Relative permeability is calculated with the J-function prediction model, resulting in an easier calculation and results that are more representative.
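
    For context, the classical Leverett definition of the J-function (standard petrophysics background, not this paper's derivation), together with a power-law fit of the kind the abstract describes, where a and b are placeholder coefficients:

        J(S_w) \;=\; \frac{P_c(S_w)}{\sigma \cos\theta}\sqrt{\frac{k}{\phi}}, \qquad J(S_w) \approx a\, S_w^{-b}

    Here P_c is the capillary pressure, σ the interfacial tension, θ the contact angle, k the permeability, and φ the porosity.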

  8. A modified soil organic carbon density model for a forest watershed in southern China

    Science.gov (United States)

    Song, Jiangping; Li, Zhongwu; Nie, Xiaodong; Liu, Chun; Xiao, Haibing; Wang, Danyang; Zeng, Guangming

    2017-11-01

    In the context of global climate change, correctly estimating soil organic carbon (SOC) stocks is essential. Because SOC density is the basis for calculating total SOC stocks, understanding the spatial distribution of SOC density is particularly important. In this study, a typical forest watershed in southern China was analysed. An established exponential model that combined soil erosion, topography, and average annual rainfall in the region to estimate SOC density with varying soil depth was modified using simulated rainfall experiments and 137Cs (Caesium-137) tracer soil erosion techniques. Thus, a modified exponential model for SOC density in southern China was established. The results showed that the correlation coefficient (R2) reached 0.870 for the linear regression of the simulated against the measured SOC densities. The differences between the measured and simulated SOC densities in the different soil layers (0-60 cm) all passed the independent-sample t-test. Additionally, the Nash-Sutcliffe coefficient for the simulated and measured SOC densities was 0.97 in the forest watershed. Furthermore, application of the modified exponential model showed that the measured SOC densities were in good agreement with the simulated SOC densities in the different forest areas tested. These results illustrated that the modified exponential model can be effectively used to simulate the vertical distribution of SOC density in southern China. Because the parameters in the modified exponential model are easy to obtain, the model could be applied to simulate the vertical distribution of SOC density in different geomorphological areas. Therefore, the results of this study will help in understanding the global carbon cycle and provide valuable information for constructing the ecological environment of various landscapes.
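
    The Nash-Sutcliffe coefficient quoted above is a standard goodness-of-fit measure; for measured densities O_i with mean Ō and simulated densities S_i it reads

        \mathrm{NSE} \;=\; 1 - \frac{\sum_i (O_i - S_i)^2}{\sum_i (O_i - \bar{O})^2}

    with values approaching 1 indicating close agreement between model and measurement.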

  9. Statistical model based gender prediction for targeted NGS clinical panels

    Directory of Open Access Journals (Sweden)

    Palani Kannan Kandavel

    2017-12-01

    The reference test dataset is used to test the model. The sensitivity of gender prediction is increased relative to the current "genotype composition in ChrX"-based approach. In addition, the prediction score given by the model can be used to evaluate the quality of a clinical dataset: a higher prediction score towards the respective gender indicates higher-quality sequenced data.

  10. Forward modeling of gravity data using geostatistically generated subsurface density variations

    Science.gov (United States)

    Phelps, Geoffrey

    2016-01-01

    Using geostatistical models of density variations in the subsurface, constrained by geologic data, forward models of gravity anomalies can be generated by discretizing the subsurface and calculating the cumulative effect of each cell (pixel). The results of such stochastically generated forward gravity anomalies can be compared with the observed gravity anomalies to find density models that match the observed data. These models have an advantage over forward gravity anomalies generated using polygonal bodies of homogeneous density because generating numerous realizations explores a larger region of the solution space. The stochastic modeling can be thought of as dividing the forward model into two components: that due to the shape of each geologic unit and that due to the heterogeneous distribution of density within each geologic unit. The modeling demonstrates that the internally heterogeneous distribution of density within each geologic unit can contribute significantly to the resulting calculated forward gravity anomaly. Furthermore, the stochastic models match observed statistical properties of geologic units, the solution space is more broadly explored by producing a suite of successful models, and the likelihood of a particular conceptual geologic model can be compared. The Vaca Fault near Travis Air Force Base, California, can be successfully modeled as a normal or strike-slip fault, with the normal fault model being slightly more probable. It can also be modeled as a reverse fault, although this structural geologic configuration is highly unlikely given the realizations we explored.
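
    A minimal sketch of the discretized forward calculation described above, approximating each cell as a point mass (the cell geometry, density contrast, and station location are all invented for illustration):

        import numpy as np

        G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

        def forward_gravity(cells, station):
            """Vertical gravity anomaly at `station` from density-contrast cells.

            cells: iterable of (x, y, z, volume, delta_rho), z positive down;
            station: (x, y, z) observation point. Point-mass approximation.
            """
            gz = 0.0
            sx, sy, sz = station
            for x, y, z, vol, drho in cells:
                dx, dy, dz = x - sx, y - sy, z - sz
                r = np.sqrt(dx * dx + dy * dy + dz * dz)
                gz += G * drho * vol * dz / r**3  # z-component of point-mass field
            return gz  # m/s^2; multiply by 1e5 to convert to mGal

        # One 10 m cube, 500 m deep, with a +200 kg/m^3 density contrast:
        print(forward_gravity([(0.0, 0.0, 500.0, 1000.0, 200.0)], (0.0, 0.0, 0.0)))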

  11. comparative analysis of two mathematical models for prediction

    African Journals Online (AJOL)

    Abstract. Mathematical modelling for the prediction of the compressive strength of sandcrete blocks was performed using statistical analysis of sandcrete block data obtained from experimental work done in this study. The models used are Scheffe's and Osadebe's optimization theories to predict the compressive strength of ...

  12. Comparison of predictive models for the early diagnosis of diabetes

    NARCIS (Netherlands)

    M. Jahani (Meysam); M. Mahdavi (Mahdi)

    2016-01-01

    Objectives: This study develops neural network models to improve the prediction of diabetes using clinical and lifestyle characteristics. Prediction models were developed using a combination of approaches and concepts. Methods: We used memetic algorithms to update weights and to improve

  13. Testing and analysis of internal hardwood log defect prediction models

    Science.gov (United States)

    R. Edward. Thomas

    2011-01-01

    The severity and location of internal defects determine the quality and value of lumber sawn from hardwood logs. Models have been developed to predict the size and position of internal defects based on external defect indicator measurements. These models were shown to predict approximately 80% of all internal knots based on external knot indicators. However, the size...

  14. Hidden Markov Model for quantitative prediction of snowfall

    Indian Academy of Sciences (India)

    A Hidden Markov Model (HMM) has been developed for prediction of quantitative snowfall in the Pir-Panjal and Great Himalayan mountain ranges of the Indian Himalaya. The model predicts snowfall two days in advance using nine daily recorded meteorological variables from the past 20 winters, 1992–2012. There are six ...
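
    A toy sketch of the HMM machinery such a model rests on (the two states, matrices, and discretized observations below are illustrative placeholders, not the paper's formulation):

        import numpy as np

        # Toy two-state HMM: hidden "snow regime" states, discrete observations.
        pi = np.array([0.6, 0.4])            # initial state distribution
        A = np.array([[0.7, 0.3],            # state transition probabilities
                      [0.4, 0.6]])
        B = np.array([[0.8, 0.2],            # P(observation | state)
                      [0.3, 0.7]])

        def forward(obs):
            """Likelihood of an observation sequence via the forward algorithm."""
            alpha = pi * B[:, obs[0]]        # initialize with first observation
            for o in obs[1:]:
                alpha = (alpha @ A) * B[:, o]  # propagate and absorb evidence
            return alpha.sum()

        print(forward([0, 1, 1]))  # probability of observing the toy sequence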

  15. Bayesian variable order Markov models: Towards Bayesian predictive state representations

    NARCIS (Netherlands)

    Dimitrakakis, C.

    2009-01-01

    We present a Bayesian variable order Markov model that shares many similarities with predictive state representations. The resulting models are compact and much easier to specify and learn than classical predictive state representations. Moreover, we show that they significantly outperform a more

  16. Demonstrating the improvement of predictive maturity of a computational model

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois M [Los Alamos National Laboratory; Unal, Cetin [Los Alamos National Laboratory; Atamturktur, Huriye S [CLEMSON UNIV.

    2010-01-01

    We demonstrate an improvement of predictive capability brought to a non-linear material model using a combination of test data, sensitivity analysis, uncertainty quantification, and calibration. A model that captures increasingly complicated phenomena, such as plasticity, temperature and strain rate effects, is analyzed. Predictive maturity is defined, here, as the accuracy of the model to predict multiple Hopkinson bar experiments. A statistical discrepancy quantifies the systematic disagreement (bias) between measurements and predictions. Our hypothesis is that improving the predictive capability of a model should translate into better agreement between measurements and predictions. This agreement, in turn, should lead to a smaller discrepancy. We have recently proposed to use discrepancy and coverage, that is, the extent to which the physical experiments used for calibration populate the regime of applicability of the model, as the basis to define a Predictive Maturity Index (PMI). It was shown that predictive maturity could be improved when additional physical tests are made available to increase coverage of the regime of applicability. This contribution illustrates how the PMI changes as 'better' physics are implemented in the model. The application is the non-linear Preston-Tonks-Wallace (PTW) strength model applied to Beryllium metal. We demonstrate that our framework tracks the evolution of maturity of the PTW model. Robustness of the PMI with respect to the selection of coefficients needed in its definition is also studied.

  17. Refining the Committee Approach and Uncertainty Prediction in Hydrological Modelling

    NARCIS (Netherlands)

    Kayastha, N.

    2014-01-01

    Due to the complexity of hydrological systems a single model may be unable to capture the full range of a catchment response and accurately predict the streamflows. The multi-modelling approach opens up possibilities for handling such difficulties and allows improving the predictive capability of

  18. Refining the committee approach and uncertainty prediction in hydrological modelling

    NARCIS (Netherlands)

    Kayastha, N.

    2014-01-01

    Due to the complexity of hydrological systems a single model may be unable to capture the full range of a catchment response and accurately predict the streamflows. The multi-modelling approach opens up possibilities for handling such difficulties and allows improving the predictive capability of

  19. Wind turbine control and model predictive control for uncertain systems

    DEFF Research Database (Denmark)

    Thomsen, Sven Creutz

    as disturbance models for controller design. The theoretical study deals with Model Predictive Control (MPC). MPC is an optimal control method which is characterized by the use of a receding prediction horizon. MPC has risen in popularity due to its inherent ability to systematically account for time...

  20. Hidden Markov Model for quantitative prediction of snowfall and ...

    Indian Academy of Sciences (India)

    A Hidden Markov Model (HMM) has been developed for prediction of quantitative snowfall in the Pir-Panjal and Great Himalayan mountain ranges of the Indian Himalaya. The model predicts snowfall two days in advance using nine daily recorded meteorological variables from the past 20 winters, 1992–2012. There are six ...