WorldWideScience

Sample records for model predicts density

  1. A predictive model for the tokamak density limit

    International Nuclear Information System (INIS)

    Teng, Q.; Brennan, D. P.; Delgado-Aparicio, L.; Gates, D. A.; Swerdlow, J.; White, R. B.

    2016-01-01

    We reproduce the Greenwald density limit in tokamak experiments by using a phenomenologically correct model with parameters in the experimental range. A simple model of equilibrium evolution and local power balance inside the island is implemented to calculate the radiation-driven thermo-resistive tearing mode growth and thereby explain the density limit. Strong destabilization of the tearing mode by an imbalance between local Ohmic heating and radiative cooling in the island predicts the density limit to within a few percent. Furthermore, we find that the density limit is a local edge limit that depends only weakly on impurity densities. Our results are robust to substantial variation of the model parameters within the experimental range.
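
    The power-balance argument can be sketched numerically. The functional forms and parameter values below are illustrative assumptions, not the paper's model: Ohmic heating is taken as η·j² and impurity radiation as f_imp·n_e²·L_z, and a critical density follows from equating the two.

```python
import math

def ohmic_heating(eta, j):
    """Local Ohmic heating power density inside the island: P_oh = eta * j**2."""
    return eta * j ** 2

def radiative_cooling(n_e, f_imp, L_z):
    """Impurity radiative cooling: P_rad = f_imp * n_e**2 * L_z (coronal form)."""
    return f_imp * n_e ** 2 * L_z

def critical_density(eta, j, f_imp, L_z):
    """Density at which cooling balances heating: n_e = sqrt(eta*j**2/(f_imp*L_z)).
    Above this density the island cools, its resistivity rises, and the
    radiation-driven tearing mode is destabilized."""
    return math.sqrt(eta * j ** 2 / (f_imp * L_z))

# Illustrative values only (eta in ohm*m, j in A/m^2, L_z in W*m^3):
n_crit = critical_density(eta=1e-7, j=1e6, f_imp=1e-4, L_z=1e-31)
balanced = math.isclose(ohmic_heating(1e-7, 1e6),
                        radiative_cooling(n_crit, 1e-4, 1e-31))
```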

  2. Predicting mesh density for adaptive modelling of the global atmosphere.

    Science.gov (United States)

    Weller, Hilary

    2009-11-28

    The shallow water equations are solved using a mesh of polygons on the sphere, which adapts infrequently to the predicted future solution. Infrequent mesh adaptation reduces the cost of adaptation and load-balancing and will thus allow for more accurate mapping on adaptation. We simulate the growth of a barotropically unstable jet adapting the mesh every 12 h. Using an adaptation criterion based largely on the gradient of the vorticity leads to a mesh with around 20 per cent of the cells of a uniform mesh that gives equivalent results. This is a similar proportion to previous studies of the same test case with mesh adaptation every 1-20 min. The prediction of the mesh density involves solving the shallow water equations on a coarse mesh in advance of the locally refined mesh in order to estimate where features requiring higher resolution will grow, decay or move to. The adaptation criterion consists of two parts: that resolved on the coarse mesh, and that which is not resolved and so is passively advected on the coarse mesh. This combination leads to a balance between resolving features controlled by the large-scale dynamics and maintaining fine-scale features.
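
    A gradient-based refinement criterion of the kind described can be illustrated with a rough stand-in (this is not the paper's full two-part criterion): flag the cells whose vorticity-gradient magnitude falls in the top ~20% of the domain. The `refinement_flags` helper and the jet-like test field are hypothetical.

```python
import numpy as np

def refinement_flags(vorticity, dx, dy, keep_fraction=0.2):
    """Flag cells for refinement where the vorticity-gradient magnitude is
    in the top `keep_fraction` of the domain."""
    dzdy, dzdx = np.gradient(vorticity, dy, dx)   # axis 0 = y, axis 1 = x
    crit = np.hypot(dzdx, dzdy)
    threshold = np.quantile(crit, 1.0 - keep_fraction)
    return crit >= threshold

# A jet-like vorticity strip: refinement concentrates on its flanks.
x = np.linspace(0.0, 1.0, 64)
y = np.linspace(0.0, 1.0, 64)
X, Y = np.meshgrid(x, y)
zeta = np.exp(-(((Y - 0.5) / 0.05) ** 2))         # narrow zonal strip
flags = refinement_flags(zeta, x[1] - x[0], y[1] - y[0])
```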

  3. Predicting critical transitions in dynamical systems from time series using nonstationary probability density modeling.

    Science.gov (United States)

    Kwasniok, Frank

    2013-11-01

    A time series analysis method for predicting the probability density of a dynamical system is proposed. A nonstationary parametric model of the probability density is estimated from data within a maximum likelihood framework and then extrapolated to forecast the future probability density and explore the system for critical transitions or tipping points. A full systematic account of parameter uncertainty is taken. The technique is generic, independent of the underlying dynamics of the system. The method is verified on simulated data and then applied to prediction of Arctic sea-ice extent.
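
    A minimal version of the idea, assuming (purely for illustration) a Gaussian density whose mean drifts linearly in time; the paper's parametric family and its systematic treatment of parameter uncertainty are richer than this sketch.

```python
import numpy as np

def fit_nonstationary_gaussian(t, x):
    """ML fit of a Gaussian whose mean drifts linearly: x(t) ~ N(a + b*t, s2).
    For Gaussian errors the ML estimates of (a, b) coincide with least
    squares; s2 is the mean squared residual."""
    A = np.column_stack([np.ones_like(t), t])
    (a, b), *_ = np.linalg.lstsq(A, x, rcond=None)
    s2 = np.mean((x - (a + b * t)) ** 2)
    return a, b, s2

def forecast_density(a, b, s2, t_future, x_grid):
    """Probability density extrapolated to a future time t_future."""
    mu = a + b * t_future
    return np.exp(-((x_grid - mu) ** 2) / (2.0 * s2)) / np.sqrt(2.0 * np.pi * s2)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 500)
x = 2.0 - 0.1 * t + rng.normal(0.0, 0.3, t.size)   # slowly drifting observable
a, b, s2 = fit_nonstationary_gaussian(t, x)
pdf = forecast_density(a, b, s2, t_future=15.0, x_grid=np.linspace(-2.0, 3.0, 200))
```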

  4. The electron density and temperature distributions predicted by bow shock models of Herbig-Haro objects

    International Nuclear Information System (INIS)

    Noriega-Crespo, A.; Bohm, K.H.; Raga, A.C.

    1990-01-01

    The observable spatial electron density and temperature distributions for series of simple bow shock models, which are of special interest in the study of Herbig-Haro (H-H) objects are computed. The spatial electron density and temperature distributions are derived from forbidden line ratios. It should be possible to use these results to recognize whether an observed electron density or temperature distribution can be attributed to a bow shock, as is the case in some Herbig-Haro objects. As an example, the empirical and predicted distributions for H-H 1 are compared. The predicted electron temperature distributions give the correct temperature range and they show very good diagnostic possibilities if the forbidden O III (4959 + 5007)/4363 wavelength ratio is used. 44 refs

  5. Predicting stem borer density in maize using RapidEye data and generalized linear models

    Science.gov (United States)

    Abdel-Rahman, Elfatih M.; Landmann, Tobias; Kyalo, Richard; Ong'amo, George; Mwalusepo, Sizah; Sulieman, Saad; Ru, Bruno Le

    2017-05-01

    Average maize yield in eastern Africa is 2.03 t ha-1 compared to a global average of 6.06 t ha-1, due to biotic and abiotic constraints. Amongst the biotic production constraints in Africa, stem borers are the most injurious. In eastern Africa, maize yield losses due to stem borers are currently estimated at between 12% and 21% of the total production. The objective of the present study was to explore the potential of RapidEye spectral data for assessing stem borer larva densities in maize fields at two study sites in Kenya. RapidEye images were acquired for the Bomet (western Kenya) site on 9 December 2014 and 27 January 2015, and for Machakos (eastern Kenya) on 3 January 2015. Five RapidEye spectral bands as well as 30 spectral vegetation indices (SVIs) were utilized to predict per-field maize stem borer larva densities using generalized linear models (GLMs), assuming Poisson ('Po') and negative binomial ('NB') distributions. Root mean square error (RMSE) and ratio of prediction to deviation (RPD) statistics were used to assess the models' performance in a leave-one-out cross-validation approach. The zero-inflated NB ('ZINB') models outperformed the 'NB' models, and stem borer larva densities could only be predicted during the mid growing season, in December and early January, at the two study sites (RMSE = 0.69-1.06 and RPD = 8.25-19.57). Overall, all models performed similarly whether all 30 SVIs (non-nested) or only the significant (nested) SVIs were used. The models developed could improve decision making on controlling maize stem borers within integrated pest management (IPM) interventions.
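
    The count-regression setup can be sketched without any dependencies beyond numpy: a Poisson GLM with log link fitted by iteratively reweighted least squares, judged with the RMSE and RPD metrics the study uses. The index values and coefficients below are synthetic; the paper additionally fits negative binomial and zero-inflated variants.

```python
import numpy as np

def fit_poisson_glm(x, y, n_iter=50):
    """Poisson GLM with log link, fitted by iteratively reweighted least
    squares (Newton's method on the log-likelihood)."""
    X = np.column_stack([np.ones(len(x)), x])    # add intercept
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        W = mu                                   # Poisson variance = mean
        z = X @ beta + (y - mu) / mu             # working response
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

def rmse(y, yhat):
    return np.sqrt(np.mean((y - yhat) ** 2))

def rpd(y, yhat):
    """Ratio of prediction to deviation: SD of observations / RMSE."""
    return np.std(y, ddof=1) / rmse(y, yhat)

rng = np.random.default_rng(1)
svi = rng.uniform(0.2, 0.8, 200)                 # hypothetical NDVI-like index
counts = rng.poisson(np.exp(0.5 + 2.0 * svi))    # larva counts per field
beta = fit_poisson_glm(svi, counts)
pred = np.exp(np.column_stack([np.ones(200), svi]) @ beta)
```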

  6. Density-dependent microbial turnover improves soil carbon model predictions of long-term litter manipulations

    Science.gov (United States)

    Georgiou, Katerina; Abramoff, Rose; Harte, John; Riley, William; Torn, Margaret

    2017-04-01

    Climatic, atmospheric, and land-use changes all have the potential to alter soil microbial activity via abiotic effects on soil or mediated by changes in plant inputs. Recently, many promising microbial models of soil organic carbon (SOC) decomposition have been proposed to advance understanding and prediction of climate and carbon (C) feedbacks. Most of these models, however, exhibit unrealistic oscillatory behavior and SOC insensitivity to long-term changes in C inputs. Here we diagnose the sources of instability in four models that span the range of complexity of these recent microbial models, by sequentially adding complexity to a simple model to include microbial physiology, a mineral sorption isotherm, and enzyme dynamics. We propose a formulation that introduces density-dependence of microbial turnover, which acts to limit population sizes and reduce oscillations. We compare these models to results from 24 long-term C-input field manipulations, including the Detritus Input and Removal Treatment (DIRT) experiments, to show that there are clear metrics that can be used to distinguish and validate the inherent dynamics of each model structure. We find that widely used first-order models and microbial models without density-dependence cannot readily capture the range of long-term responses observed across the DIRT experiments as a direct consequence of their model structures. The proposed formulation improves predictions of long-term C-input changes, and implies greater SOC storage associated with CO2-fertilization-driven increases in C inputs over the coming century compared to common microbial models. Finally, we discuss our findings in the context of improving microbial model behavior for inclusion in Earth System Models.
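
    The density-dependence argument can be illustrated with a two-pool toy model (forward Euler integration, made-up parameters; the paper's four models also include a sorption isotherm and enzyme dynamics).

```python
def run_soc_model(inputs, beta, t_end=2000.0, dt=0.05):
    """Minimal microbial SOC model with density-dependent turnover:

        dC/dt = I - U,     U = Vmax * B * C / (K + C)      (substrate C)
        dB/dt = eps * U - kd * B**beta                      (microbes B)

    beta = 1 gives the classic microbial model, whose steady-state C is
    insensitive to inputs I; beta > 1 adds the density-dependent turnover
    proposed in the abstract.  Parameters are illustrative, not calibrated."""
    Vmax, K, eps, kd = 1.0, 100.0, 0.4, 0.02
    C, B = 50.0, 1.0
    for _ in range(int(t_end / dt)):
        uptake = Vmax * B * C / (K + C)
        C += dt * (inputs - uptake)
        B += dt * (eps * uptake - kd * B ** beta)
    return C, B

# With density-dependent turnover, doubling litter inputs raises steady-state C:
C1, _ = run_soc_model(inputs=1.0, beta=2.0)
C2, _ = run_soc_model(inputs=2.0, beta=2.0)
```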

  7. Using Clinical Factors and Mammographic Breast Density to Estimate Breast Cancer Risk: Development and Validation of a New Predictive Model

    Science.gov (United States)

    Tice, Jeffrey A.; Cummings, Steven R.; Smith-Bindman, Rebecca; Ichikawa, Laura; Barlow, William E.; Kerlikowske, Karla

    2009-01-01

    Background Current models for assessing breast cancer risk are complex and do not include breast density, a strong risk factor for breast cancer that is routinely reported with mammography. Objective To develop and validate an easy-to-use breast cancer risk prediction model that includes breast density. Design Empirical model based on Surveillance, Epidemiology, and End Results incidence, and relative hazards from a prospective cohort. Setting Screening mammography sites participating in the Breast Cancer Surveillance Consortium. Patients 1 095 484 women undergoing mammography who had no previous diagnosis of breast cancer. Measurements Self-reported age, race or ethnicity, family history of breast cancer, and history of breast biopsy. Community radiologists rated breast density by using 4 Breast Imaging Reporting and Data System categories. Results During 5.3 years of follow-up, invasive breast cancer was diagnosed in 14 766 women. The breast density model was well calibrated overall (expected–observed ratio, 1.03 [95% CI, 0.99 to 1.06]) and in racial and ethnic subgroups. It had modest discriminatory accuracy (concordance index, 0.66 [CI, 0.65 to 0.67]). Women with low-density mammograms had 5-year risks less than 1.67% unless they had a family history of breast cancer and were older than age 65 years. Limitation The model has only modest ability to discriminate between women who will develop breast cancer and those who will not. Conclusion A breast cancer prediction model that incorporates routinely reported measures of breast density can estimate 5-year risk for invasive breast cancer. Its accuracy needs to be further evaluated in independent populations before it can be recommended for clinical use. PMID:18316752

  8. Thermospheric mass density variations during geomagnetic storms and a prediction model based on the merging electric field

    Directory of Open Access Journals (Sweden)

    R. Liu

    2010-09-01

    With the help of four years (2002–2005) of CHAMP accelerometer data we have investigated the dependence of low and mid latitude thermospheric density on the merging electric field, Em, during major magnetic storms. Altogether 30 intensive storm events (Dstmin < −100 nT) are chosen for a statistical study. In order to achieve a good correlation Em is preconditioned. Contrary to general opinion, Em has to be applied without saturation effect in order to obtain good results for magnetic storms of all activity levels. The memory effect of the thermosphere is accounted for by a weighted integration of Em over the past 3 h. In addition, a lag time of the mass density response to solar wind input of 0 to 4.5 h depending on latitude and local time is considered. A linear model using the preconditioned Em as main controlling parameter for predicting mass density changes during magnetic storms is developed: ρ = 0.5 Em + ρamb, where ρamb is based on the mean density during the quiet day before the storm. We show that this simple relation predicts all storm-induced mass density variations at CHAMP altitude fairly well, especially if orbital averages are considered.
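
    The prediction recipe can be sketched directly. The linearly decaying weight profile and the fixed 1.5-h lag below are assumptions: the abstract only specifies a weighted 3-h integration of Em and a latitude/local-time-dependent lag of 0–4.5 h; units are left implicit.

```python
import numpy as np

def precondition_em(em, dt_hours, window_hours=3.0):
    """Weighted integration of the merging electric field Em over the
    preceding window, mimicking the thermosphere's memory.  The linearly
    decaying weight profile is an assumption."""
    n = int(window_hours / dt_hours)
    weights = np.linspace(1.0, 0.0, n, endpoint=False)  # newest sample weighted most
    weights /= weights.sum()
    return np.convolve(em, weights, mode="full")[: len(em)]

def predict_density(em, rho_ambient, dt_hours, lag_hours=1.5):
    """rho = 0.5 * Em_preconditioned + rho_amb, delayed by a fixed lag
    (the paper uses 0-4.5 h; 1.5 h here is an arbitrary choice)."""
    em_pre = precondition_em(em, dt_hours)
    lag = int(lag_hours / dt_hours)
    if lag > 0:
        em_pre = np.concatenate([np.full(lag, em_pre[0]), em_pre[:-lag]])
    return 0.5 * em_pre + rho_ambient

# 15-min cadence, one day of constant 2 mV/m forcing over a quiet-day density:
rho = predict_density(np.full(96, 2.0), rho_ambient=4.0, dt_hours=0.25)
```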

  9. Thermospheric mass density variations during geomagnetic storms and a prediction model based on the merging electric field

    Science.gov (United States)

    Liu, R.; Lühr, H.; Doornbos, E.; Ma, S.-Y.

    2010-09-01

    With the help of four years (2002-2005) of CHAMP accelerometer data we have investigated the dependence of low and mid latitude thermospheric density on the merging electric field, Em, during major magnetic storms. Altogether 30 intensive storm events (Dstmin < −100 nT) are chosen for a statistical study. In order to achieve a good correlation Em is preconditioned. Contrary to general opinion, Em has to be applied without saturation effect in order to obtain good results for magnetic storms of all activity levels. The memory effect of the thermosphere is accounted for by a weighted integration of Em over the past 3 h. In addition, a lag time of the mass density response to solar wind input of 0 to 4.5 h depending on latitude and local time is considered. A linear model using the preconditioned Em as main controlling parameter for predicting mass density changes during magnetic storms is developed: ρ = 0.5 Em + ρamb, where ρamb is based on the mean density during the quiet day before the storm. We show that this simple relation predicts all storm-induced mass density variations at CHAMP altitude fairly well, especially if orbital averages are considered.

  10. SRMDAP: SimRank and Density-Based Clustering Recommender Model for miRNA-Disease Association Prediction

    Directory of Open Access Journals (Sweden)

    Xiaoying Li

    2018-01-01

    Aberrant expression of microRNAs (miRNAs) can be applied for the diagnosis, prognosis, and treatment of human diseases. Identifying the relationship between miRNAs and human disease is important for further investigating the pathogenesis of human diseases. However, experimental identification of the associations between diseases and miRNAs is time-consuming and expensive. Computational methods are efficient approaches to determine the potential associations between diseases and miRNAs. This paper presents a new computational method based on SimRank and a density-based clustering recommender model for miRNA-disease association prediction (SRMDAP). The AUC of 0.8838 under leave-one-out cross-validation, together with case studies, suggests the excellent performance of SRMDAP in predicting miRNA-disease associations. SRMDAP can also predict diseases without any related miRNAs and miRNAs without any related diseases.

  11. Electron-Ion Dynamics with Time-Dependent Density Functional Theory: Towards Predictive Solar Cell Modeling: Final Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Maitra, Neepa [Hunter College City University of New York, New York, NY (United States)

    2016-07-14

    This project investigates the accuracy of currently-used functionals in time-dependent density functional theory, which is today routinely used to predict and design materials and computationally model processes in solar energy conversion. The rigorously-based electron-ion dynamics method developed here sheds light on traditional methods and overcomes challenges those methods have. The fundamental research undertaken here is important for building reliable and practical methods for materials discovery. The ultimate goal is to use these tools for the computational design of new materials for solar cell devices of high efficiency.

  12. Density prediction and dimensionality reduction of mid-term electricity demand in China: A new semiparametric-based additive model

    International Nuclear Information System (INIS)

    Shao, Zhen; Yang, Shan-Lin; Gao, Fei

    2014-01-01

    Highlights: • A new stationary time series smoothing-based semiparametric model is established. • A novel semiparametric additive model based on piecewise smoothing is proposed. • We model the uncertainty of the data distribution for mid-term electricity forecasting. • We provide efficient long-horizon simulation and extraction for external variables. • We provide stable and accurate density predictions for mid-term electricity demand. - Abstract: Accurate mid-term electricity demand forecasting is critical for efficient electric planning, budgeting and operating decisions. Mid-term electricity demand forecasting is notoriously complicated, since the demand is subject to a range of external drivers, such as climate change and economic development, which produce complex monthly, seasonal, and annual variations. Conventional models are based on the assumption that the original data are stable and normally distributed, which is generally inadequate for explaining the actual demand pattern. This paper proposes a new semiparametric additive model that, in addition to considering the uncertainty of the data distribution, includes practical discussions covering the applications of the external variables. To effectively detach the multi-dimensional volatility of mid-term demand, a novel piecewise smoothing method which allows reduction of the data dimensionality is developed. Besides, a semiparametric procedure that makes use of a bootstrap algorithm for density forecasting and model estimation is presented. Two typical cases in China are presented to verify the effectiveness of the proposed methodology. The results suggest that both meteorological and economic variables play a critical role in mid-term electricity consumption prediction in China, while the extracted economic factor is adequate to reveal the potentially complex relationship between electricity consumption and economic fluctuation. Overall, the proposed model can be easily applied to mid-term demand forecasting.

  13. Model comparison on genomic predictions using high-density markers for different groups of bulls in the Nordic Holstein population

    DEFF Research Database (Denmark)

    Gao, Hongding; Su, Guosheng; Janss, Luc

    2013-01-01

    This study compared genomic predictions based on imputed high-density markers (~777,000) in the Nordic Holstein population using a genomic BLUP (GBLUP) model, 4 Bayesian exponential power models with different shape parameters (0.3, 0.5, 0.8, and 1.0) for the exponential power distribution...... relationship with the training population. Groupsmgs had both the sire and the maternal grandsire (MGS), Groupsire only had the sire, Groupmgs only had the MGS, and Groupnon had neither the sire nor the MGS in the training population. Reliability of DGV was measured as the squared correlation between DGV...... and DRP divided by the reliability of DRP for the bulls in validation data set. Unbiasedness of DGV was measured as the regression of DRP on DGV. The results indicated that DGV were more accurate and less biased for animals that were more related to the training population. In general, the Bayesian...

  14. Kinetic modeling of low density lipoprotein oxidation in arterial wall and its application in atherosclerotic lesions prediction.

    Science.gov (United States)

    Karimi, Safoora; Dadvar, Mitra; Modarress, Hamid; Dabir, Bahram

    2013-01-01

    Oxidation of low-density lipoprotein (LDL) is one of the major factors in the atherogenic process. Trapped oxidized LDL (Ox-LDL) in the subendothelial matrix is taken up by macrophages and leads to foam cell generation, the first step in atherosclerosis development. Many researchers have studied LDL oxidation using in vitro cell-induced LDL oxidation models. The present study provides a kinetic model for LDL oxidation in the intima layer that can be used in modeling the development of atherosclerotic lesions. This is accomplished by describing lipid peroxidation kinetics in LDL through a system of elementary reactions. The characteristics of our proposed kinetic model are consistent with the results of previous experimental models from other researchers. Furthermore, our proposed LDL oxidation model is added to the mass transfer equation in order to predict the LDL concentration distribution in the intima layer, which is usually difficult to measure experimentally. According to the results, the LDL oxidation kinetic constant is an important parameter affecting LDL concentration in the intima layer: antioxidants, which reduce initiation rates and prevent radical formation, increase the concentration of LDL in the intima by reducing the LDL oxidation rate. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
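
    A toy version of a lipid-peroxidation chain mechanism, integrated with forward Euler, illustrates the elementary-reaction approach. The three-step scheme and rate constants are illustrative stand-ins for the paper's much larger reaction system; reducing the initiation constant plays the role of an antioxidant.

```python
def simulate_ldl_oxidation(k_init, t_end=10.0, dt=0.001):
    """Toy lipid-peroxidation chain:

        LH        --k_init--> L*              (initiation)
        L* + LH   --k_p-----> LOOH + L*       (propagation)
        L* + L*   --k_t-----> inert products  (termination)

    LH is unoxidised lipid, L* the radical carrier, LOOH the oxidised
    product.  Rate constants are illustrative, not fitted."""
    k_p, k_t = 0.5, 10.0
    LH, L, LOOH = 1.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        r_i = k_init * LH
        r_p = k_p * L * LH
        r_t = k_t * L * L
        LH += dt * (-r_i - r_p)
        L += dt * (r_i - 2.0 * r_t)
        LOOH += dt * r_p
    return LOOH

# A lower initiation constant stands in for antioxidant action,
# slowing the early accumulation of oxidised LDL:
high = simulate_ldl_oxidation(k_init=0.05)
low = simulate_ldl_oxidation(k_init=0.005)
```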

  15. Using dynamic energy budget modeling to predict the influence of temperature and food density on the effect of Cu on earthworm mediated litter consumption.

    NARCIS (Netherlands)

    Hobbelen, P.H.F.; van Gestel, C.A.M.

    2007-01-01

    The aim of this study was to predict the dependence on temperature and food density of effects of Cu on the litter consumption by the earthworm Lumbricus rubellus, using a dynamic energy budget model (DEB-model). As a measure of the effects of Cu on food consumption, EC50s (soil concentrations

  16. Adsorption of CH4 on nitrogen- and boron-containing carbon models of coal predicted by density-functional theory

    Science.gov (United States)

    Liu, Xiao-Qiang; Xue, Ying; Tian, Zhi-Yue; Mo, Jing-Jing; Qiu, Nian-Xiang; Chu, Wei; Xie, He-Ping

    2013-11-01

    Graphene doped with nitrogen (N) and/or boron (B) is used to represent surface models of coal with structural heterogeneity. Through density functional theory (DFT) calculations, the interactions between coalbed methane (CBM) and coal surfaces have been investigated. Several adsorption sites and orientations of methane (CH4) on the graphenes were systematically considered. Our calculations predict adsorption energies of CH4 on the graphenes of up to -0.179 eV, obtained for N-doped graphene in the strongest binding mode, in which three hydrogen atoms of CH4 point toward the graphene surface, compared with the perfect (-0.154 eV), B-doped (-0.150 eV), and NB-doped (-0.170 eV) graphenes. Doping graphene with N increases the adsorption energy of CH4, whereas slightly weaker binding is found when graphene is doped with B. Our results indicate that all of the graphenes act as weak electron acceptors with respect to CH4. The interaction between CH4 and the graphenes is physical adsorption and depends only slightly on the adsorption site, the orientation of the methane, and the electronegativity of the dopant atoms in the graphene.
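
    The energy bookkeeping behind these numbers is E_ads = E(surface + CH4) − E(surface) − E(CH4), with more negative values meaning stronger binding; ranking the values reported in the abstract reproduces its ordering.

```python
def adsorption_energy(e_complex, e_surface, e_ch4):
    """E_ads = E(surface + CH4) - E(surface) - E(CH4); negative => bound."""
    return e_complex - e_surface - e_ch4

# Adsorption energies (eV) reported in the abstract:
e_ads = {
    "N-doped":  -0.179,
    "NB-doped": -0.170,
    "perfect":  -0.154,
    "B-doped":  -0.150,
}
strongest = min(e_ads, key=e_ads.get)   # most negative = strongest binding
```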

  17. Prediction models for density and viscosity of biodiesel and their effects on fuel supply system in CI engines

    Energy Technology Data Exchange (ETDEWEB)

    Tesfa, B.; Mishra, R.; Gu, F. [Computing and Engineering, University of Huddersfield, Queensgate, Huddersfield, HD1 3DH (United Kingdom); Powles, N. [Chemistry and Forensic Science, University of Huddersfield, Queensgate, Huddersfield, HD1 3DH (United Kingdom)

    2010-12-15

    Biodiesel is a promising non-toxic and biodegradable alternative fuel for the transport sector. Nevertheless, the higher viscosity and density of biodiesel pose some acute problems when it is used in an unmodified engine. Taking this into consideration, this study focuses on two objectives. The first is to identify the effect of temperature on the density and viscosity of a variety of biodiesels and to develop a correlation between density and viscosity for these biodiesels. The second is to investigate and quantify the effects of the density and viscosity of biodiesels and their blends on various components of the engine fuel supply system, such as the fuel pump, fuel filters and fuel injector. To achieve the first objective, the density and viscosity of rapeseed oil biodiesel, corn oil biodiesel and waste oil biodiesel blends (0B, 5B, 10B, 20B, 50B, 75B, and 100B) were measured at different temperatures using the EN ISO 3675:1998 and EN ISO 3104:1996 standards. New correlations were developed for both density and viscosity and compared with published literature. A new correlation between biodiesel density and biodiesel viscosity was also developed. The second objective was achieved by using analytical models of the effects of density and viscosity on the performance of the fuel supply system. These effects were quantified over a wide range of engine operating conditions. The higher density and viscosity of biodiesel are shown to have a significant impact on the performance of fuel pumps and fuel filters as well as on the air-fuel mixing behaviour of the compression ignition (CI) engine. (author)

  18. A coupled diffusion-fluid pressure model to predict cell density distribution for cells encapsulated in a porous hydrogel scaffold under mechanical loading.

    Science.gov (United States)

    Zhao, Feihu; Vaughan, Ted J; Mc Garrigle, Myles J; McNamara, Laoise M

    2017-10-01

    Tissue formation within tissue engineering (TE) scaffolds is preceded by growth of the cells throughout the scaffold volume and attachment of cells to the scaffold substrate. It is known that mechanical stimulation, in the form of fluid perfusion or mechanical strain, enhances cell differentiation and overall tissue formation. However, due to the complex multi-physics environment of cells within TE scaffolds, cell transport under mechanical stimulation is not fully understood. Therefore, in this study, we have developed a coupled multiphysics model to predict cell density distribution in a TE scaffold. In this model, cell transport is modelled as a thermal conduction process driven by the pore fluid pressure under applied loading. As a case study, the model is used to predict the cell density patterns of pre-osteoblast MC3T3-E1 cells under a range of different loading regimes, to obtain an understanding of the mechanical stimulation that will best enhance cell density distribution within TE scaffolds. The results demonstrate that fluid perfusion can produce a higher cell density in the scaffold region close to the outlet, whereas cell density distribution under mechanical compression was similar to that under static conditions. More importantly, the study provides a novel computational approach to predict cell distribution in TE scaffolds under mechanical loading. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Comparison of several measure-correlate-predict models using support vector regression techniques to estimate wind power densities. A case study

    International Nuclear Information System (INIS)

    Díaz, Santiago; Carta, José A.; Matías, José M.

    2017-01-01

    Highlights: • Eight measure-correlate-predict (MCP) models used to estimate the wind power densities (WPDs) at a target site are compared. • Support vector regressions are used as the main prediction techniques in the proposed MCPs. • The most precise MCP uses two sub-models which predict wind speed and air density in an unlinked manner. • The most precise model allows construction of a bivariable (wind speed and air density) WPD probability density function. • MCP models trained to minimise wind speed prediction error do not minimise WPD prediction error. - Abstract: The long-term annual mean wind power density (WPD) is an important indicator of wind as a power source which is usually included in regional wind resource maps as useful prior information to identify potentially attractive sites for the installation of wind projects. In this paper, a comparison is made of eight proposed Measure-Correlate-Predict (MCP) models to estimate the WPDs at a target site. Seven of these models use the Support Vector Regression (SVR) and the eighth the Multiple Linear Regression (MLR) technique, which serves as a basis to compare the performance of the other models. In addition, a wrapper technique with 10-fold cross-validation has been used to select the optimal set of input features for the SVR and MLR models. Some of the eight models were trained to directly estimate the mean hourly WPDs at a target site. Others, however, were first trained to estimate the parameters on which the WPD depends (i.e. wind speed and air density) and then, using these parameters, the target site mean hourly WPDs. The explanatory features considered are different combinations of the mean hourly wind speeds, wind directions and air densities recorded in 2014 at ten weather stations in the Canary Archipelago (Spain). The conclusions that can be drawn from the study include the finding that the most accurate method for the long-term estimation of WPDs requires the execution of two sub-models which predict wind speed and air density in an unlinked manner.
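
    The two-sub-model MCP structure can be sketched with the MLR baseline (the paper's eighth model) in place of SVR, keeping the example dependency-light. All station records below are synthetic, and WPD = ½ρv³.

```python
import numpy as np

def fit_mlr(x, y):
    """Ordinary least squares with intercept - the MLR baseline technique."""
    A = np.column_stack([np.ones(len(x)), x])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict_mlr(coef, x):
    return np.column_stack([np.ones(len(x)), x]) @ coef

def wpd(v, rho):
    """Wind power density: WPD = 0.5 * rho * v**3 (W/m^2)."""
    return 0.5 * rho * v ** 3

# Synthetic concurrent records standing in for reference/target stations:
rng = np.random.default_rng(2)
v_ref = rng.weibull(2.0, 1000) * 8.0                     # reference wind speed (m/s)
rho_ref = 1.18 + rng.normal(0.0, 0.01, 1000)             # reference air density (kg/m^3)
v_tgt = 0.9 * v_ref + 0.5 + rng.normal(0.0, 0.4, 1000)   # target-site measurements
rho_tgt = rho_ref - 0.02 + rng.normal(0.0, 0.005, 1000)

# Two unlinked sub-models (the configuration the study found most precise):
cv = fit_mlr(v_ref, v_tgt)
cr = fit_mlr(rho_ref, rho_tgt)
wpd_est = wpd(predict_mlr(cv, v_ref), predict_mlr(cr, rho_ref))
```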

  20. A function accounting for training set size and marker density to model the average accuracy of genomic prediction.

    Science.gov (United States)

    Erbe, Malena; Gredler, Birgit; Seefried, Franz Reinhold; Bapst, Beat; Simianer, Henner

    2013-01-01

    Prediction of genomic breeding values is of major practical relevance in dairy cattle breeding. Deterministic equations have been suggested to predict the accuracy of genomic breeding values in a given design which are based on training set size, reliability of phenotypes, and the number of independent chromosome segments ([Formula: see text]). The aim of our study was to find a general deterministic equation for the average accuracy of genomic breeding values that also accounts for marker density and can be fitted empirically. Two data sets of 5'698 Holstein Friesian bulls genotyped with 50 K SNPs and 1'332 Brown Swiss bulls genotyped with 50 K SNPs and imputed to ∼600 K SNPs were available. Different k-fold (k = 2-10, 15, 20) cross-validation scenarios (50 replicates, random assignment) were performed using a genomic BLUP approach. A maximum likelihood approach was used to estimate the parameters of different prediction equations. The highest likelihood was obtained when using a modified form of the deterministic equation of Daetwyler et al. (2010), augmented by a weighting factor (w) based on the assumption that the maximum achievable accuracy is [Formula: see text]. The proportion of genetic variance captured by the complete SNP sets ([Formula: see text]) was 0.76 to 0.82 for Holstein Friesian and 0.72 to 0.75 for Brown Swiss. When modifying the number of SNPs, w was found to be proportional to the log of the marker density up to a limit which is population and trait specific and was found to be reached with ∼20'000 SNPs in the Brown Swiss population studied.
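
    The underlying deterministic equation (Daetwyler et al. 2010) can be written down directly, with the paper's weighting factor w left as a free parameter since its exact functional form is not given in the abstract.

```python
import math

def expected_accuracy(n_train, h2, me, w=1.0):
    """Expected accuracy of genomic breeding values:

        r = w * sqrt(n_train * h2 / (n_train * h2 + me))

    n_train: training set size; h2: reliability of the phenotypes;
    me: number of independent chromosome segments.  w is the marker-density
    weighting factor of the paper (proportional to the log of marker density
    up to a population- and trait-specific limit); its precise form is not
    stated in the abstract, so it is a free parameter here."""
    return w * math.sqrt(n_train * h2 / (n_train * h2 + me))

# Accuracy rises with training set size but is capped by w:
r_small = expected_accuracy(1_000, 0.9, 1_000, w=0.88)
r_large = expected_accuracy(10_000, 0.9, 1_000, w=0.88)
```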

  1. A function accounting for training set size and marker density to model the average accuracy of genomic prediction.

    Directory of Open Access Journals (Sweden)

    Malena Erbe

    Prediction of genomic breeding values is of major practical relevance in dairy cattle breeding. Deterministic equations have been suggested to predict the accuracy of genomic breeding values in a given design which are based on training set size, reliability of phenotypes, and the number of independent chromosome segments ([Formula: see text]). The aim of our study was to find a general deterministic equation for the average accuracy of genomic breeding values that also accounts for marker density and can be fitted empirically. Two data sets of 5'698 Holstein Friesian bulls genotyped with 50 K SNPs and 1'332 Brown Swiss bulls genotyped with 50 K SNPs and imputed to ∼600 K SNPs were available. Different k-fold (k = 2-10, 15, 20) cross-validation scenarios (50 replicates, random assignment) were performed using a genomic BLUP approach. A maximum likelihood approach was used to estimate the parameters of different prediction equations. The highest likelihood was obtained when using a modified form of the deterministic equation of Daetwyler et al. (2010), augmented by a weighting factor (w) based on the assumption that the maximum achievable accuracy is [Formula: see text]. The proportion of genetic variance captured by the complete SNP sets ([Formula: see text]) was 0.76 to 0.82 for Holstein Friesian and 0.72 to 0.75 for Brown Swiss. When modifying the number of SNPs, w was found to be proportional to the log of the marker density up to a limit which is population and trait specific and was found to be reached with ∼20'000 SNPs in the Brown Swiss population studied.

  2. Characterization of Mixtures. Part 2: QSPR Models for Prediction of Excess Molar Volume and Liquid Density Using Neural Networks.

    Science.gov (United States)

    Ajmani, Subhash; Rogers, Stephen C; Barley, Mark H; Burgess, Andrew N; Livingstone, David J

    2010-09-17

    In our earlier work, we have demonstrated that it is possible to characterize binary mixtures using single component descriptors by applying various mixing rules. We also showed that these methods were successful in building predictive QSPR models to study various mixture properties of interest. Herein, we developed a QSPR model of an excess thermodynamic property of binary mixtures, i.e., excess molar volume (V(E)). In the present study, we use a set of mixture descriptors which we earlier designed to specifically account for intermolecular interactions between the components of a mixture and applied successfully to the prediction of infinite-dilution activity coefficients using neural networks (part 1 of this series). We obtain a significant QSPR model for the prediction of excess molar volume (V(E)) using consensus neural networks and five mixture descriptors. We find that hydrogen bond and thermodynamic descriptors are the most important in determining excess molar volume (V(E)), which is in line with the theory of intermolecular forces governing excess mixture properties. The results also suggest that the mixture descriptors utilized herein may be sufficient to model a wide variety of properties of binary and possibly even more complex mixtures. Copyright © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
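    The mixing rules themselves are not spelled out in this record; one common choice in mixture QSPR work is a mole-fraction-weighted combination of the single-component descriptors. A minimal sketch of that idea (the descriptor names and values below are made up for illustration, not taken from the study):

```python
# Mole-fraction-weighted mixing of single-component descriptors into
# mixture descriptors: one common rule among several used in mixture
# QSPR. Descriptor names and values here are illustrative only.

def mix_descriptors(desc_a, desc_b, x_a):
    """Combine two descriptor dicts for a binary mixture.

    desc_a, desc_b: descriptor values for the pure components.
    x_a: mole fraction of component A (component B gets 1 - x_a).
    """
    x_b = 1.0 - x_a
    return {name: x_a * desc_a[name] + x_b * desc_b[name]
            for name in desc_a}

# Hypothetical descriptors for an equimolar binary mixture:
ethanol = {"h_bond_donor": 1.0, "molar_volume": 58.5}
water = {"h_bond_donor": 2.0, "molar_volume": 18.0}
mixture = mix_descriptors(ethanol, water, x_a=0.5)
# mixture["h_bond_donor"] -> 1.5, mixture["molar_volume"] -> 38.25
```

    The mixture descriptors produced this way (or by interaction-aware variants) would then be the inputs to the consensus neural networks described in the abstract.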

  3. Finite element model predicts current density distribution for clinical applications of tDCS and tACS

    Directory of Open Access Journals (Sweden)

    Toralf eNeuling

    2012-09-01

    Transcranial direct current stimulation (tDCS) has been applied in numerous scientific studies over the past decade. However, the possibility of applying tDCS in the therapy of neuropsychiatric disorders is still debated. While transcranial magnetic stimulation (TMS) has been approved for treatment of major depression in the United States by the Food and Drug Administration (FDA), tDCS is not as widely accepted. One of the criticisms against tDCS is the lack of spatial specificity. Focality is limited by the electrode size (35 cm² electrodes are commonly used) and the bipolar arrangement. However, a current flow through the head directly from anode to cathode is an outdated view. Finite element (FE) models have recently been used to predict the exact current flow during tDCS. These simulations have demonstrated that the current flow depends on tissue shape and conductivity. To face the challenge of predicting the location, magnitude and direction of the current flow induced by tDCS and transcranial alternating current stimulation (tACS), we used a refined realistic FE modeling approach. With respect to the literature on clinical tDCS and tACS, we analyzed two common setups for the location of the stimulation electrodes which target the frontal lobe and the occipital lobe, respectively. We compared lateral and medial electrode configurations with regard to their usability. We were able to demonstrate that the lateral configurations yielded more focused stimulation areas as well as higher current intensities in the target areas. The high resolution of our simulation allows one to combine the modeled current flow with the knowledge of neuronal orientation to predict the consequences of tDCS and tACS. Our results not only offer a basis for a deeper understanding of the stimulation sites currently in use for clinical applications but also offer a better interpretation of observed effects.

  4. Phalangeal bone mineral density predicts incident fractures

    DEFF Research Database (Denmark)

    Friis-Holmberg, Teresa; Brixen, Kim; Rubin, Katrine Hass

    2012-01-01

    This prospective study investigates the use of phalangeal bone mineral density (BMD) in predicting fractures in a cohort (15,542) who underwent a BMD scan. In both women and men, a decrease in BMD was associated with an increased risk of fracture when adjusted for age and prevalent fractures...

  5. Towards predicting wading bird densities from predicted prey densities in a post-barrage Severn estuary

    International Nuclear Information System (INIS)

    Goss-Custard, J.D.; McGrorty, S.; Clarke, R.T.; Pearson, B.; Rispin, W.E.; Durell, S.E.A. le V. dit; Rose, R.J.; Warwick, R.M.; Kirby, R.

    1991-01-01

    A winter survey of seven species of wading birds in six estuaries in south-west England was made to develop a method for predicting bird densities should a tidal power barrage be built on the Severn estuary. Within most estuaries, bird densities correlated with the densities of widely taken prey species. A barrage would substantially reduce the area of intertidal flats available at low water for the birds to feed, but the invertebrate density could increase in the generally more benign post-barrage environmental conditions. Wader densities would have to increase approximately twofold to allow the same overall numbers of birds to remain post-barrage as occur on the Severn at present. Provisional estimates are given of the increases in prey density required to allow bird densities to increase by this amount. With the exception of the prey of dunlin, these fall well within the ranges of densities found in other estuaries, and so could in principle be attained in the post-barrage Severn. An attempt was made to derive equations with which to predict post-barrage densities of invertebrates from easily measured, static environmental variables. The fact that a site was in the Severn had a significant additional effect on invertebrate density in seven cases. This suggests that there is a special feature of the Severn, probably one associated with its highly dynamic nature. This factor must be identified if the post-barrage densities of invertebrates are to be successfully predicted. (author)

  6. Hounsfield unit density accurately predicts ESWL success.

    Science.gov (United States)

    Magnuson, William J; Tomera, Kevin M; Lance, Raymond S

    2005-01-01

    Extracorporeal shockwave lithotripsy (ESWL) is a commonly used non-invasive treatment for urolithiasis. Helical CT scans provide much better and more detailed imaging of the patient with urolithiasis, including the ability to measure the density of urinary stones. In this study we tested the hypothesis that the density of urinary calculi as measured by CT can predict successful ESWL treatment. 198 patients were treated at Alaska Urological Associates with ESWL between January 2002 and April 2004. Of these, 101 met study inclusion with accessible CT scans and stones ranging from 5-15 mm. Follow-up imaging demonstrated stone freedom in 74.2%. The overall mean Hounsfield density values for the stone-free and residual stone groups were significantly different (93.61 vs 122.80, p ESWL for upper tract calculi between 5-15 mm.

  7. Thermospheric mass density variations during geomagnetic storms and a prediction model based on the merging electric field

    NARCIS (Netherlands)

    Liu, R.; Lühr, H.; Doornbos, E.; Ma, S.Y.

    2010-01-01

    With the help of four years (2002–2005) of CHAMP accelerometer data we have investigated the dependence of low and mid latitude thermospheric density on the merging electric field, Em, during major magnetic storms. Altogether 30 intensive storm events (Dstmin

  8. Predicting grizzly bear density in western North America.

    Science.gov (United States)

    Mowat, Garth; Heard, Douglas C; Schwarz, Carl J

    2013-01-01

    Conservation of grizzly bears (Ursus arctos) is often controversial, and the disagreement often is focused on the estimates of density used to calculate allowable kill. Many recent estimates of grizzly bear density are now available, but field-based estimates will never be available for more than a small portion of hunted populations. Current methods of predicting density in areas of management interest are subjective and untested. Objective methods have been proposed, but these statistical models are so dependent on results from individual study areas that the models do not generalize well. We built regression models to relate grizzly bear density to ultimate measures of ecosystem productivity and mortality for interior and coastal ecosystems in North America. We used 90 measures of grizzly bear density in interior ecosystems, of which 14 were currently known to be unoccupied by grizzly bears. In coastal areas, we used 17 measures of density including 2 unoccupied areas. Our best model for coastal areas included a negative relationship with tree cover and positive relationships with the proportion of salmon in the diet and topographic ruggedness, which was correlated with precipitation. Our best interior model included 3 variables that indexed terrestrial productivity, 1 describing vegetation cover, 2 indices of human use of the landscape, and an index of topographic ruggedness. We used our models to predict current population sizes across Canada and present these as alternatives to current population estimates. Our models predict fewer grizzly bears in British Columbia but more bears in Canada than in the latest status review. These predictions can be used to assess population status, set limits for total human-caused mortality, and for conservation planning, but because our predictions are static, they cannot be used to assess population trend.

  10. Prediction of bending moment resistance of screw connected joints in plywood members using regression models and comparison with commercial medium density fiberboard (MDF) and particleboard

    Directory of Open Access Journals (Sweden)

    Sadegh Maleki

    2014-11-01

    The study aimed at predicting the bending moment resistance of screwed joints (coarse and fine thread) in plywood members using regression models. The thickness of the plywood member was 19 mm, and results were compared with medium density fiberboard (MDF) and particleboard of 18 mm thickness. Two types of screws were used: coarse and fine thread drywall screws with nominal diameters of 6, 8 and 10 mm and lengths of 3.5, 4 and 5 cm, respectively, and sheet metal screws with diameters of 8 and 10 mm and a length of 4 cm. The results of the study showed that the bending moment resistance of the screwed joints increased with screw diameter and penetration depth. Screw length was found to have a larger influence on bending moment resistance than screw diameter. Bending moment resistance with coarse thread drywall screws was higher than with fine thread drywall screws. The highest bending moment resistance (71.76 N.m) was observed in joints made with coarse screws 5 mm in diameter at 28 mm depth of penetration. The lowest bending moment resistance (12.08 N.m) was observed in joints with fine screws of 3.5 mm diameter and 9 mm penetration. Furthermore, bending moment resistance in plywood was higher than in medium density fiberboard (MDF) and particleboard. Finally, it was found that the ultimate bending moment resistance of a plywood joint can be predicted from the formulas Wc = 0.189×D^0.726×P^0.577 for coarse thread drywall screws and Wf = 0.086×D^0.942×P^0.704 for fine ones, where D is the screw diameter and P the penetration depth. Analysis of variance of the experimental and predicted data showed that the developed models provide a fair approximation of the actual experimental measurements.
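    The two fitted equations can be wrapped in a small function (constants copied from the abstract; the units intended for D and P are not unambiguous from this summary, so the numeric outputs should be treated as illustrative rather than reproducing the quoted 71.76 N.m figure):

```python
def bending_moment(d, p, thread="coarse"):
    """Predicted ultimate bending moment resistance of a plywood screw
    joint, using the regression constants quoted in the abstract.

    d: screw diameter, p: penetration depth (units as in the study;
    the abstract does not state them unambiguously).
    """
    if thread == "coarse":
        return 0.189 * d ** 0.726 * p ** 0.577
    elif thread == "fine":
        return 0.086 * d ** 0.942 * p ** 0.704
    raise ValueError("thread must be 'coarse' or 'fine'")

# Resistance grows monotonically with both diameter and penetration:
# bending_moment(10, 28) > bending_moment(5, 28) > bending_moment(5, 9)
```

    Whatever the units, the power-law form makes the study's qualitative findings explicit: both exponents are positive, so resistance rises with diameter and penetration, and the coarse-thread constants give higher values over the tested range.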

  11. Referenceless Prediction of Perceptual Fog Density and Perceptual Image Defogging.

    Science.gov (United States)

    Choi, Lark Kwon; You, Jaehee; Bovik, Alan Conrad

    2015-11-01

    We propose a referenceless perceptual fog density prediction model based on natural scene statistics (NSS) and fog aware statistical features. The proposed model, called the Fog Aware Density Evaluator (FADE), predicts the visibility of a foggy scene from a single image without reference to a corresponding fog-free image, without dependence on salient objects in a scene, without side geographical camera information, without estimating a depth-dependent transmission map, and without training on human-rated judgments. FADE only makes use of measurable deviations from statistical regularities observed in natural foggy and fog-free images. The fog aware statistical features that define the perceptual fog density index derive from a space domain NSS model and the observed characteristics of foggy images. FADE not only predicts perceptual fog density for the entire image, but also provides a local fog density index for each patch. The fog density predicted by FADE correlates well with human judgments of fog density taken in a subjective study on a large foggy image database. As applications, FADE not only accurately assesses the performance of defogging algorithms designed to enhance the visibility of foggy images, but is also well suited for image defogging. A new FADE-based referenceless perceptual image defogging method, dubbed DEnsity of Fog Assessment-based DEfogger (DEFADE), achieves better results for darker, denser foggy images as well as on standard foggy images than state-of-the-art defogging methods. A software release of FADE and DEFADE is available online for public use: http://live.ece.utexas.edu/research/fog/index.html.
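    Space-domain NSS models of this kind typically start from mean-subtracted, contrast-normalized (MSCN) luminance coefficients, whose statistics deviate measurably in foggy images. A rough numpy sketch of MSCN computation (FADE's actual features use a Gaussian weighting window and further fog-aware statistics; the plain box window here is a simplification):

```python
import numpy as np

def mscn(image, win=7, c=1.0):
    """Mean-subtracted, contrast-normalized (MSCN) coefficients of a
    grayscale image. A plain box window is used here to keep the
    sketch short; NSS models such as FADE use a Gaussian window."""
    img = np.asarray(image, dtype=float)
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    h, w = img.shape
    mu = np.empty_like(img)
    sigma = np.empty_like(img)
    # Naive sliding-window local statistics (clear, not fast).
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + win, j:j + win]
            mu[i, j] = patch.mean()
            sigma[i, j] = patch.std()
    return (img - mu) / (sigma + c)
```

    Fog lowers local contrast, so the variance of the MSCN coefficients shrinks in foggy regions; statistics of this kind, pooled per patch, are the sort of feature a perceptual fog density index is built from.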

  12. Adsorption of CH{sub 4} on nitrogen- and boron-containing carbon models of coal predicted by density-functional theory

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Xiao-Qiang [College of Chemistry, Key Lab of Green Chemistry and Technology in Ministry of Education, Sichuan University, Chengdu 610064 (China); Xue, Ying, E-mail: yxue@scu.edu.cn [College of Chemistry, Key Lab of Green Chemistry and Technology in Ministry of Education, Sichuan University, Chengdu 610064 (China); Tian, Zhi-Yue; Mo, Jing-Jing; Qiu, Nian-Xiang [College of Chemistry, Key Lab of Green Chemistry and Technology in Ministry of Education, Sichuan University, Chengdu 610064 (China); Chu, Wei [Department of Chemical Engineering, Sichuan University, Chengdu 610065 (China); Xie, He-Ping [Key Laboratory of Energy Engineering Safety and Mechanics on Disasters, The Ministry of Education, Sichuan University, Chengdu 610065 (China)

    2013-11-15

    Graphene doped by nitrogen (N) and/or boron (B) is used to represent surface models of coal with structural heterogeneity. Through density functional theory (DFT) calculations, the interactions between coalbed methane (CBM) and coal surfaces have been investigated. Several adsorption sites and orientations of methane (CH{sub 4}) on the graphenes were systematically considered. Our calculations predicted adsorption energies of CH{sub 4} on the graphenes of up to −0.179 eV, with the strongest binding mode, in which three hydrogen atoms of CH{sub 4} point toward the graphene surface, observed for N-doped graphene, compared to the perfect (−0.154 eV), B-doped (−0.150 eV), and NB-doped graphenes (−0.170 eV). Doping N in graphene increases the adsorption energies of CH{sub 4}, but slightly reduced binding is found when graphene is doped by B. Our results indicate that all of the graphenes act as weak electron acceptors with respect to CH{sub 4}. The interactions between CH{sub 4} and the graphenes are physical adsorption and depend slightly upon the adsorption sites on the graphenes and the orientations of methane, as well as the electronegativity of the dopant atoms in graphene.

  13. Baryon density in alternative BBN models

    International Nuclear Information System (INIS)

    Kirilova, D.

    2002-10-01

    We present recent determinations of the cosmological baryon density ρ_b, extracted from different kinds of observational data. The baryon density range is not very wide and is usually interpreted as an indication of consistency. It is interesting to note that all other determinations give a higher baryon density than the standard big bang nucleosynthesis (BBN) model. The differences of the ρ_b values from the BBN-predicted one (the most precise today) may be due to statistical and systematic errors in the observations. However, they may be an indication of new physics. Hence, it is interesting to study alternative BBN models and the possibility of resolving the discrepancies. We discuss alternative cosmological scenarios: a BBN model with decaying particles (m ∼ MeV, τ ∼ sec) and BBN with electron-sterile neutrino oscillations, which permit relaxing the BBN constraints on the baryon content of the Universe. (author)

  14. Predictive modeling of complications.

    Science.gov (United States)

    Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P

    2016-09-01

    Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions.

  15. Archaeological predictive model set.

    Science.gov (United States)

    2015-03-01

    This report is the documentation for Task 7 of the Statewide Archaeological Predictive Model Set. The goal of this project is to : develop a set of statewide predictive models to assist the planning of transportation projects. PennDOT is developing t...

  16. Wind power prediction models

    Science.gov (United States)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.

  17. Inverse and Predictive Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Syracuse, Ellen Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-27

    The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple – one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions – to the complex – multidimensional models that are constrained by several types of data and result in more accurate predictions. While team members typically build models of geophysical characteristics of Earth and source distributions at scales of 1 to 1000s of km, the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.

  18. Multiple model cardinalized probability hypothesis density filter

    Science.gov (United States)

    Georgescu, Ramona; Willett, Peter

    2011-09-01

    The Probability Hypothesis Density (PHD) filter propagates the first-moment approximation to the multi-target Bayesian posterior distribution while the Cardinalized PHD (CPHD) filter propagates both the posterior likelihood of (an unlabeled) target state and the posterior probability mass function of the number of targets. Extensions of the PHD filter to the multiple model (MM) framework have been published and were implemented either with a Sequential Monte Carlo or a Gaussian Mixture approach. In this work, we introduce the multiple model version of the more elaborate CPHD filter. We present the derivation of the prediction and update steps of the MMCPHD particularized for the case of two target motion models and proceed to show that in the case of a single model, the new MMCPHD equations reduce to the original CPHD equations.

  19. Escherichia coli bacteria density in relation to turbidity, streamflow characteristics, and season in the Chattahoochee River near Atlanta, Georgia, October 2000 through September 2008—Description, statistical analysis, and predictive modeling

    Science.gov (United States)

    Lawrence, Stephen J.

    2012-01-01

    Water-based recreation—such as rafting, canoeing, and fishing—is popular among visitors to the Chattahoochee River National Recreation Area (CRNRA) in north Georgia. The CRNRA is a 48-mile reach of the Chattahoochee River upstream from Atlanta, Georgia, managed by the National Park Service (NPS). Historically, high densities of fecal-indicator bacteria have been documented in the Chattahoochee River and its tributaries at levels that commonly exceeded Georgia water-quality standards. In October 2000, the NPS partnered with the U.S. Geological Survey (USGS), State and local agencies, and non-governmental organizations to monitor Escherichia coli bacteria (E. coli) density and develop a system to alert river users when E. coli densities exceeded the U.S. Environmental Protection Agency (USEPA) single-sample beach criterion of 235 colonies (most probable number) per 100 milliliters (MPN/100 mL) of water. This program, called BacteriALERT, monitors E. coli density, turbidity, and water temperature at two sites on the Chattahoochee River upstream from Atlanta, Georgia. This report summarizes E. coli bacteria density and turbidity values in water samples collected between 2000 and 2008 as part of the BacteriALERT program; describes the relations between E. coli density and turbidity, streamflow characteristics, and season; and describes the regression analyses used to develop predictive models that estimate E. coli density in real time at both sampling sites.
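    Real-time predictive models of this kind are typically log-linear regressions of bacteria density on turbidity; the exact predictor set and coefficients used by BacteriALERT are not given in this summary. A generic sketch of fitting and applying such a model (the synthetic data and coefficients below are illustrative only):

```python
import numpy as np

# Fit log10(E. coli density) as a linear function of log10(turbidity),
# a common form for real-time fecal-indicator models. The data and
# coefficients here are synthetic, for illustration only.
rng = np.random.default_rng(0)
turbidity = rng.uniform(5, 500, size=50)          # NTU
true_intercept, true_slope = 0.8, 0.9
log_density = true_intercept + true_slope * np.log10(turbidity)

slope, intercept = np.polyfit(np.log10(turbidity), log_density, 1)

def predict_density(ntu):
    """Predicted E. coli density (MPN/100 mL) for a turbidity value."""
    return 10 ** (intercept + slope * np.log10(ntu))

# Flag an exceedance of the USEPA 235 MPN/100 mL beach criterion:
exceeds = predict_density(300.0) > 235.0
```

    In operation the fitted regression runs against the continuous turbidity record, so an exceedance estimate is available long before a culture-based E. coli result.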

  20. Crystal density predictions for nitramines based on quantum chemistry

    International Nuclear Information System (INIS)

    Qiu Ling; Xiao Heming; Gong Xuedong; Ju Xuehai; Zhu Weihua

    2007-01-01

    An efficient and convenient method for predicting the crystalline densities of energetic materials was established based on quantum chemical computations. Density functional theory (DFT) with four different basis sets (6-31G**, 6-311G**, 6-31+G**, and 6-311++G**) and various semiempirical molecular orbital (MO) methods were employed to predict the molecular volumes and densities of a series of energetic nitramines, including acyclic, monocyclic, and polycyclic/cage molecules. The relationships between the calculated values and experimental data were discussed in detail, and linear correlations were suggested and compared at the different levels. The calculations show that a larger basis set expends more CPU (central processing unit) time and yields a larger molecular volume and a smaller density. The densities predicted by the semiempirical MO methods are all systematically larger than the experimental data. In comparison with the other methods, B3LYP/6-31G** is the most accurate and economical for predicting the solid-state densities of energetic nitramines. This may be instructive for the molecular design and screening of novel HEDMs.
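    The conversion behind such predictions is simple: a computed average molecular volume V and the molar mass M give a density estimate ρ = M / (N_A·V). A minimal sketch (the molar mass and volume used in the example are placeholders, not values from the study):

```python
AVOGADRO = 6.02214076e23  # 1/mol

def crystal_density(molar_mass_g_mol, molecular_volume_angstrom3):
    """Density in g/cm^3 from the molar mass (g/mol) and the average
    molecular volume (A^3) taken from a quantum-chemical calculation."""
    volume_cm3 = molecular_volume_angstrom3 * 1e-24  # 1 A^3 = 1e-24 cm^3
    return molar_mass_g_mol / (AVOGADRO * volume_cm3)

# Placeholder example: a 100 g/mol molecule occupying 100 A^3
# gives about 1.66 g/cm^3.
```

    This also makes the basis-set trend in the abstract intuitive: a larger computed molecular volume enters the denominator, so it directly yields a smaller predicted density.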

  1. A cosmological model with compact space sections and low mass density

    International Nuclear Information System (INIS)

    Fagundes, H.V.

    1982-01-01

    A general relativistic cosmological model is presented, which has closed space sections and mass density below a critical density similar to that of Friedmann's models. The model may predict double images of cosmic sources. (Author)

  2. Ab Initio Predictions of Structures and Densities of Energetic Solids

    National Research Council Canada - National Science Library

    Rice, Betsy M; Sorescu, Dan C

    2004-01-01

    We have applied a powerful simulation methodology known as ab initio crystal prediction to assess the ability of a generalized model of CHNO intermolecular interactions to predict accurately crystal...

  3. Measurement of neoclassically predicted edge current density at ASDEX Upgrade

    Science.gov (United States)

    Dunne, M. G.; McCarthy, P. J.; Wolfrum, E.; Fischer, R.; Giannone, L.; Burckhart, A.; the ASDEX Upgrade Team

    2012-12-01

    Experimental confirmation of neoclassically predicted edge current density in an ELMy H-mode plasma is presented. Current density analysis using the CLISTE equilibrium code is outlined and the rationale for accuracy of the reconstructions is explained. Sample profiles and time traces from analysis of data at ASDEX Upgrade are presented. A high time resolution is possible due to the use of an ELM-synchronization technique. Additionally, the flux-surface-averaged current density is calculated using a neoclassical approach. Results from these two separate methods are then compared and are found to validate the theoretical formula. Finally, several discharges are compared as part of a fuelling study, showing that the size and width of the edge current density peak at the low-field side can be explained by the electron density and temperature drives and their respective collisionality modifications.

  5. Contrasting cue-density effects in causal and prediction judgments.

    Science.gov (United States)

    Vadillo, Miguel A; Musca, Serban C; Blanco, Fernando; Matute, Helena

    2011-02-01

    Many theories of contingency learning assume (either explicitly or implicitly) that predicting whether an outcome will occur should be easier than making a causal judgment. Previous research suggests that outcome predictions would depart from normative standards less often than causal judgments, which is consistent with the idea that the latter are based on more numerous and complex processes. However, only indirect evidence exists for this view. The experiment presented here specifically addresses this issue by allowing for a fair comparison of causal judgments and outcome predictions, both collected at the same stage with identical rating scales. Cue density, a parameter known to affect judgments, is manipulated in a contingency learning paradigm. The results show that, if anything, the cue-density bias is stronger in outcome predictions than in causal judgments. These results contradict key assumptions of many influential theories of contingency learning.
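    In the contingency learning paradigm described, causal strength is often summarized by ΔP, and cue density is the overall probability of the cue. A small sketch of both quantities from a 2×2 table of trial counts (the counts below are made up for illustration):

```python
def delta_p_and_cue_density(a, b, c, d):
    """a: cue & outcome, b: cue & no outcome,
    c: no cue & outcome, d: no cue & no outcome (trial counts).

    Returns (delta_p, cue_density), where
    delta_p = P(outcome | cue) - P(outcome | no cue) and
    cue_density = P(cue).
    """
    p_o_given_cue = a / (a + b)
    p_o_given_no_cue = c / (c + d)
    n = a + b + c + d
    return p_o_given_cue - p_o_given_no_cue, (a + b) / n

# Null contingency (delta_p = 0) with high cue density -- the setting
# in which cue-density biases inflate judgments:
dp, cd = delta_p_and_cue_density(60, 20, 15, 5)
# dp == 0.0, cd == 0.8
```

    A cue-density bias is precisely a judgment that tracks cd when the normative quantity dp is held at zero; the experiment's point is that outcome predictions show this bias at least as strongly as causal judgments.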

  6. Dual model for parton densities

    International Nuclear Information System (INIS)

    El Hassouni, A.; Napoly, O.

    1981-01-01

    We derive power-counting rules for quark densities near x=1 and x=0 from parton interpretations of one-particle inclusive dual amplitudes. Using these rules, we give explicit expressions for quark distributions (including charm) inside hadrons. We can then show the compatibility between fragmentation and recombination descriptions of low-p⊥ processes

  7. Combining Predictive Densities using Nonlinear Filtering with Applications to US Economics Data

    NARCIS (Netherlands)

    M. Billio (Monica); R. Casarin (Roberto); F. Ravazzolo (Francesco); H.K. van Dijk (Herman)

    2011-01-01

    textabstractWe propose a multivariate combination approach to prediction based on a distributional state space representation of the weights belonging to a set of Bayesian predictive densities which have been obtained from alternative models. Several specifications of multivariate time-varying

  8. Cultural Resource Predictive Modeling

    Science.gov (United States)

    2017-10-01

    Abbreviations: CR, cultural resource; CRM, cultural resource management; CRPM, Cultural Resource Predictive Modeling; DoD, Department of Defense; ESTCP, Environmental...
    ...resource management (CRM) legal obligations under NEPA and the NHPA, military installations need to demonstrate that CRM decisions are based on objective... maxim “one size does not fit all,” and demonstrate that DoD installations have many different CRM needs that can and should be met through a variety...

  9. Predicting oak density with ecological, physical, and soil indicators

    Science.gov (United States)

    Callie Jo Schweitzer; Adrian A. Lesak; Yong Wang

    2006-01-01

    We predicted density of oak species in the mid-Cumberland Plateau region of northeastern Alabama on the basis of basal area of tree associations based on light tolerances, physical site characteristics, and soil type. Tree basal area was determined for four species groups: oaks (Quercus spp.), hickories (Carya spp.), yellow-poplar...

  10. MODEL OF THE TOKAMAK EDGE DENSITY PEDESTAL INCLUDING DIFFUSIVE NEUTRALS

    International Nuclear Information System (INIS)

    BURRELL, K.H.

    2003-01-01

Several previous analytic models of the tokamak edge density pedestal have been based on diffusive transport of plasma plus free-streaming of neutrals. This latter neutral model includes only the effect of ionization and neglects charge exchange. The present work models the edge density pedestal using diffusive transport for both the plasma and the neutrals. In contrast to the free-streaming model, a diffusion model for the neutrals includes the effects of both charge exchange and ionization and is valid when charge exchange is the dominant interaction. Surprisingly, the functional forms for the electron and neutral density profiles from the present calculation are identical to the results of the previous analytic models. There are some differences in the detailed definitions of various parameters in the solution. For experimentally relevant cases where the ionization and charge exchange rates are comparable, both models predict approximately the same width for the edge density pedestal.

  11. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines...... the possibilities w.r.t. different numerical weather predictions actually available to the project....

  12. Prediction of crack density and electrical resistance changes in indium tin oxide/polymer thin films under tensile loading

    KAUST Repository

    Mora Cordova, Angel; Khan, Kamran; El Sayed, Tamer

    2014-01-01

We present unified predictions for the crack onset strain, evolution of crack density, and changes in electrical resistance in indium tin oxide/polymer thin films under tensile loading. We propose a damage mechanics model to quantify and predict...

  13. Predictive Surface Complexation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  14. Voxel-wise prostate cell density prediction using multiparametric magnetic resonance imaging and machine learning.

    Science.gov (United States)

    Sun, Yu; Reynolds, Hayley M; Wraith, Darren; Williams, Scott; Finnegan, Mary E; Mitchell, Catherine; Murphy, Declan; Haworth, Annette

    2018-04-26

There are currently no methods to estimate cell density in the prostate. This study aimed to develop predictive models to estimate prostate cell density from multiparametric magnetic resonance imaging (mpMRI) data at a voxel level using machine learning techniques. In vivo mpMRI data were collected from 30 patients before radical prostatectomy. Sequences included T2-weighted imaging, diffusion-weighted imaging and dynamic contrast-enhanced imaging. Ground truth cell density maps were computed from histology and co-registered with mpMRI. Feature extraction and selection were performed on mpMRI data. Final models were fitted using three regression algorithms: multivariate adaptive regression splines (MARS), polynomial regression (PR) and a generalised additive model (GAM). Model parameters were optimised using leave-one-out cross-validation on the training data and model performance was evaluated on test data using root mean square error (RMSE) measurements. Predictive models to estimate voxel-wise prostate cell density were successfully trained and tested using the three algorithms. The best model (GAM) achieved an RMSE of 1.06 (± 0.06) × 10³ cells/mm² and a relative deviation of 13.3 ± 0.8%. Prostate cell density can be quantitatively estimated non-invasively from mpMRI data using high-quality co-registered data at a voxel level. These cell density predictions could be used for tissue classification, treatment response evaluation and personalised radiotherapy.
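The record's polynomial-regression (PR) variant with leave-one-out cross-validation and RMSE scoring can be sketched as follows. The features and densities below are synthetic stand-ins for the study's mpMRI-derived data, not its actual variables:

```python
# Sketch: fit voxel-wise cell density from imaging features with degree-2
# polynomial regression, score with leave-one-out CV and report RMSE.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 3))          # stand-ins for e.g. T2, ADC, DCE features
# synthetic "cell density" (units of 10^3 cells/mm^2) with mild noise
y = 1.5 + X[:, 0] - 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=30)

model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
y_hat = cross_val_predict(model, X, y, cv=LeaveOneOut())
rmse = np.sqrt(np.mean((y - y_hat) ** 2))
print(f"LOO-CV RMSE: {rmse:.3f} x 10^3 cells/mm^2")
```

The same scaffold applies to the MARS and GAM variants by swapping the estimator; only the cross-validation and RMSE bookkeeping shown here carry over directly.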

  15. Prediction of density limits in tokamaks: Theory, comparison with experiment, and application to the proposed Fusion Ignition Research Experiment

    International Nuclear Information System (INIS)

    Stacey, Weston M.

    2002-01-01

    A framework for the predictive calculation of density limits in future tokamaks is proposed. Theoretical models for different density limit phenomena are summarized, and the requirements for additional models are identified. These theoretical density limit models have been incorporated into a relatively simple, but phenomenologically comprehensive, integrated numerical calculation of the core, edge, and divertor plasmas and of the recycling neutrals, in order to obtain plasma parameters needed for the evaluation of the theoretical models. A comparison of these theoretical predictions with observed density limits in current experiments is summarized. A model for the calculation of edge pedestal parameters, which is needed in order to apply the density limit predictions to future tokamaks, is summarized. An application to predict the proximity to density limits and the edge pedestal parameters of the proposed Fusion Ignition Research Experiment is described

  16. Alcator C-Mod predictive modeling

    International Nuclear Information System (INIS)

    Pankin, Alexei; Bateman, Glenn; Kritz, Arnold; Greenwald, Martin; Snipes, Joseph; Fredian, Thomas

    2001-01-01

    Predictive simulations for the Alcator C-mod tokamak [I. Hutchinson et al., Phys. Plasmas 1, 1511 (1994)] are carried out using the BALDUR integrated modeling code [C. E. Singer et al., Comput. Phys. Commun. 49, 275 (1988)]. The results are obtained for temperature and density profiles using the Multi-Mode transport model [G. Bateman et al., Phys. Plasmas 5, 1793 (1998)] as well as the mixed-Bohm/gyro-Bohm transport model [M. Erba et al., Plasma Phys. Controlled Fusion 39, 261 (1997)]. The simulated discharges are characterized by very high plasma density in both low and high modes of confinement. The predicted profiles for each of the transport models match the experimental data about equally well in spite of the fact that the two models have different dimensionless scalings. Average relative rms deviations are less than 8% for the electron density profiles and 16% for the electron and ion temperature profiles

  17. Effect of bacteria density and accumulated inert solids on the effluent pollutant concentrations predicted by the constructed wetlands model BIO_PORE

    OpenAIRE

    Samsó Campà, Roger; Blazquez, Jordi; Agullo Chaler, Nuria; Grau Barceló, Joan; Torres Cámara, Ricardo; García Serrano, Joan

    2015-01-01

Constructed wetlands are a widely adopted technology for the treatment of wastewater in small communities. The understanding of their internal functioning has increased at an unprecedented pace over recent years, in part thanks to the use of mathematical models. The BIO_PORE model is one of the most recent models developed for constructed wetlands. This model was built in the COMSOL Multiphysics™ software and implements the biokinetic expressions of Constructed Wetlands Model 1 (CWM1) to desc...

  18. Density functional theory and multiscale materials modeling

    Indian Academy of Sciences (India)

    One of the vital ingredients in the theoretical tools useful in materials modeling at all the length scales of interest is the concept of density. In the microscopic length scale, it is the electron density that has played a major role in providing a deeper understanding of chemical binding in atoms, molecules and solids.

19. Predictive Modeling of Black Spruce (Picea mariana (Mill.) B.S.P.) Wood Density Using Stand Structure Variables Derived from Airborne LiDAR Data in Boreal Forests of Ontario

    Directory of Open Access Journals (Sweden)

    Bharat Pokharel

    2016-12-01

Our objective was to model the average wood density in black spruce trees in representative stands across a boreal forest landscape based on relationships with predictor variables extracted from airborne light detection and ranging (LiDAR) point cloud data. Increment core samples were collected from dominant or co-dominant black spruce trees in a network of 400 m² plots distributed among forest stands representing the full range of species composition and stand development across a 1,231,707 ha forest management unit in northeastern Ontario, Canada. Wood quality data were generated from optical microscopy, image analysis, X-ray densitometry and diffractometry as employed in SilviScan™. Each increment core was associated with a set of field measurements at the plot level as well as a suite of LiDAR-derived variables calculated on a 20 × 20 m raster from a wall-to-wall coverage at a resolution of ~1 point m⁻². We used a multiple linear regression approach to identify important predictor variables and describe relationships between stand structure and wood density for average black spruce trees in the stands we observed. A hierarchical classification model was then fitted using random forests to make spatial predictions of mean wood density for average trees in black spruce stands. The model explained 39 percent of the variance in the response variable, with an estimated root mean square error of 38.8 kg·m⁻³. Among the predictor variables, P20 (second-decile LiDAR height in m) and quadratic mean diameter were most important. Other predictors describing canopy depth and cover were of secondary importance and differed according to the modeling approach. LiDAR-derived variables appear to capture differences in stand structure that reflect different constraints on growth rates, determining the proportion of thin-walled earlywood cells in black spruce stems, and ultimately influencing the pattern of variation in important wood quality attributes.
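The random-forest step of this record can be sketched with a regressor over LiDAR-style predictors. The predictor names (P20, quadratic mean diameter, canopy cover) echo the abstract, but the values and the generating relationship below are synthetic assumptions, not the study's data:

```python
# Sketch: predict stand-mean wood density (kg/m^3) from LiDAR height metrics
# and quadratic mean diameter with a random forest; inspect OOB fit and
# feature importances.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 200
p20 = rng.uniform(2, 15, n)        # second-decile LiDAR height (m)
qmd = rng.uniform(8, 25, n)        # quadratic mean diameter (cm)
cover = rng.uniform(0.3, 1.0, n)   # canopy cover fraction (noise predictor here)
X = np.column_stack([p20, qmd, cover])
y = 550 - 6 * p20 - 3 * qmd + 20 * rng.normal(size=n)  # synthetic wood density

rf = RandomForestRegressor(n_estimators=300, oob_score=True, random_state=1)
rf.fit(X, y)
print("OOB R^2:", round(rf.oob_score_, 2))
print("importances (p20, qmd, cover):", np.round(rf.feature_importances_, 2))
```

The out-of-bag score plays the role of the study's cross-validated variance explained, and the importance ranking mirrors how the abstract identifies P20 and quadratic mean diameter as the dominant predictors.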

  20. Compatible growth models and stand density diagrams

    International Nuclear Information System (INIS)

    Smith, N.J.; Brand, D.G.

    1988-01-01

    This paper discusses a stand average growth model based on the self-thinning rule developed and used to generate stand density diagrams. Procedures involved in testing are described and results are included

  1. Modelling of Resonantly Forced Density Waves in Dense Planetary Rings

    Science.gov (United States)

    Lehmann, M.; Schmidt, J.; Salo, H.

    2014-04-01

Density wave theory, originally proposed to explain the spiral structure of galactic disks, has been applied to explain parts of the complex sub-structure in Saturn's rings, such as the wavetrains excited at the inner Lindblad resonances (ILR) of various satellites. The linear theory for the excitation and damping of density waves in Saturn's rings is fairly well developed (e.g. Goldreich & Tremaine [1979]; Shu [1984]). However, it fails to describe certain aspects of the observed waves. The non-applicability of the linear theory is already indicated by the "cusplike" shape of many of the observed wave profiles. This is a typical nonlinear feature which is also present in overstability wavetrains (Schmidt & Salo [2003]; Latter & Ogilvie [2010]). In particular, it turns out that the detailed damping mechanism, as well as the role of different nonlinear effects on the propagation of density waves, remain unclear. First attempts are being made to investigate the excitation and propagation of nonlinear density waves within a hydrodynamical formalism, which is also the natural formalism for describing linear density waves. A simple weakly nonlinear model, derived from a multiple-scale expansion of the hydrodynamic equations, is presented. This model describes the damping of "free" spiral density waves in a vertically integrated fluid disk with density-dependent transport coefficients, where the effects of the hydrodynamic nonlinearities are included. The model predicts that density waves are linearly unstable in a ring region where the conditions for viscous overstability are met, which translates to a steep dependence of the shear viscosity with respect to the disk's surface density. The possibility that this dependence could lead to a growth of density waves with increasing distance from the resonance was already mentioned in Goldreich & Tremaine [1978]. Sufficiently far away from the ILR, the surface density perturbation caused by the wave is predicted to...

  2. Thermospheric density and satellite drag modeling

    Science.gov (United States)

    Mehta, Piyush Mukesh

    The United States depends heavily on its space infrastructure for a vast number of commercial and military applications. Space Situational Awareness (SSA) and Threat Assessment require maintaining accurate knowledge of the orbits of resident space objects (RSOs) and the associated uncertainties. Atmospheric drag is the largest source of uncertainty for low-perigee RSOs. The uncertainty stems from inaccurate modeling of neutral atmospheric mass density and inaccurate modeling of the interaction between the atmosphere and the RSO. In order to reduce the uncertainty in drag modeling, both atmospheric density and drag coefficient (CD) models need to be improved. Early atmospheric density models were developed from orbital drag data or observations of a few early compact satellites. To simplify calculations, densities derived from orbit data used a fixed CD value of 2.2 measured in a laboratory using clean surfaces. Measurements from pressure gauges obtained in the early 1990s have confirmed the adsorption of atomic oxygen on satellite surfaces. The varying levels of adsorbed oxygen along with the constantly changing atmospheric conditions cause large variations in CD with altitude and along the orbit of the satellite. Therefore, the use of a fixed CD in early development has resulted in large biases in atmospheric density models. A technique for generating corrections to empirical density models using precision orbit ephemerides (POE) as measurements in an optimal orbit determination process was recently developed. The process generates simultaneous corrections to the atmospheric density and ballistic coefficient (BC) by modeling the corrections as statistical exponentially decaying Gauss-Markov processes. The technique has been successfully implemented in generating density corrections using the CHAMP and GRACE satellites. This work examines the effectiveness, specifically the transfer of density models errors into BC estimates, of the technique using the CHAMP and

  3. Global and local level density models

    International Nuclear Information System (INIS)

    Koning, A.J.; Hilaire, S.; Goriely, S.

    2008-01-01

    Four different level density models, three phenomenological and one microscopic, are consistently parameterized using the same set of experimental observables. For each of the phenomenological models, the Constant Temperature Model, the Back-shifted Fermi gas Model and the Generalized Superfluid Model, a version without and with explicit collective enhancement is considered. Moreover, a recently published microscopic combinatorial model is compared with the phenomenological approaches and with the same set of experimental data. For each nuclide for which sufficient experimental data exists, a local level density parameterization is constructed for each model. Next, these local models have helped to construct global level density prescriptions, to be used for cases for which no experimental data exists. Altogether, this yields a collection of level density formulae and parameters that can be used with confidence in nuclear model calculations. To demonstrate this, a large-scale validation with experimental discrete level schemes and experimental cross sections and neutron emission spectra for various different reaction channels has been performed

  4. Confidence scores for prediction models

    DEFF Research Database (Denmark)

    Gerds, Thomas Alexander; van de Wiel, MA

    2011-01-01

    In medical statistics, many alternative strategies are available for building a prediction model based on training data. Prediction models are routinely compared by means of their prediction performance in independent validation data. If only one data set is available for training and validation,...

  5. Combinatorial nuclear level-density model

    International Nuclear Information System (INIS)

    Uhrenholt, H.; Åberg, S.; Dobrowolski, A.; Døssing, Th.; Ichikawa, T.; Möller, P.

    2013-01-01

    A microscopic nuclear level-density model is presented. The model is a completely combinatorial (micro-canonical) model based on the folded-Yukawa single-particle potential and includes explicit treatment of pairing, rotational and vibrational states. The microscopic character of all states enables extraction of level-distribution functions with respect to pairing gaps, parity and angular momentum. The results of the model are compared to available experimental data: level spacings at neutron separation energy, data on total level-density functions from the Oslo method, cumulative level densities from low-lying discrete states, and data on parity ratios. Spherical and deformed nuclei follow basically different coupling schemes, and we focus on deformed nuclei

  6. Measured and predicted electron density at 600 km over Tucuman and Huancayo

    International Nuclear Information System (INIS)

    Ezquer, R.G.; Cabrera, M.A.; Araoz, L.; Mosert, M.; Radicella, S.M.

    2002-01-01

The electron density at 600 km altitude (N600) predicted by IRI is compared with measurements for given particular times and places (not averages) obtained with the Japanese Hinotori satellite. The results show that the best agreement between predictions and measurements is obtained near the magnetic equator. Disagreements of about 50% were observed near the southern peak of the equatorial anomaly (EA) when the model uses the CCIR and URSI options to obtain the peak characteristics. (author)

  7. Transport critical current density in flux creep model

    International Nuclear Information System (INIS)

    Wang, J.; Taylor, K.N.R.; Russell, G.J.; Yue, Y.

    1992-01-01

The magnetic flux creep model has been used to derive the temperature dependence of the critical current density in high temperature superconductors. The generally positive curvature of the Jc-T diagram is predicted in terms of two interdependent dimensionless fitting parameters. In this paper, the results are compared with both SIS and SNS junction models of these granular materials, neither of which provides a satisfactory prediction of the experimental data. A hybrid model combining the flux creep and SNS mechanisms is shown to be able to account for the linear regions of the Jc-T behavior which are observed in some materials

  8. Applying deep bidirectional LSTM and mixture density network for basketball trajectory prediction

    NARCIS (Netherlands)

    Zhao, Yu; Yang, Rennong; Chevalier, Guillaume; Shah, Rajiv C.; Romijnders, Rob

    2018-01-01

Data analytics helps basketball teams to create tactics. However, manual data collection and analytics are costly and ineffective. Therefore, we applied a deep bidirectional long short-term memory (BLSTM) and mixture density network (MDN) approach. This model is not only capable of predicting a...
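The MDN half of this approach can be illustrated with a minimal numpy sketch: the network head emits mixture weights, means and scales, and the predictive density of the next (x, y) trajectory point is a Gaussian mixture. The parameter values below are illustrative stand-ins, not learned outputs:

```python
# Minimal sketch of a mixture-density-network output layer: evaluate the
# predictive density (and negative log-likelihood) of a 2-D point under a
# K-component isotropic Gaussian mixture parameterised by the network head.
import numpy as np

def mdn_density(point, log_pi, mu, log_sigma):
    """Density of `point` (shape (2,)) under a K-component isotropic mixture."""
    pi = np.exp(log_pi - log_pi.max())
    pi /= pi.sum()                              # softmax -> mixture weights
    sigma2 = np.exp(2 * log_sigma)              # per-component variance, (K,)
    d2 = ((point - mu) ** 2).sum(axis=1)        # squared distance to each mean
    comp = np.exp(-0.5 * d2 / sigma2) / (2 * np.pi * sigma2)  # 2-D Gaussians
    return float((pi * comp).sum())

# Three hypothetical components; the next position is most likely near (1, 2).
log_pi = np.array([2.0, 0.0, 0.0])
mu = np.array([[1.0, 2.0], [3.0, 1.0], [0.0, 0.0]])
log_sigma = np.array([-1.0, -1.0, 0.0])

nll = -np.log(mdn_density(np.array([1.1, 2.0]), log_pi, mu, log_sigma))
print("NLL of observed point:", round(nll, 3))
```

In training, this negative log-likelihood is the loss minimised over observed trajectory points, with the BLSTM supplying `log_pi`, `mu` and `log_sigma` at each time step.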

  9. The Density Functional Theory of Flies: Predicting distributions of interacting active organisms

    Science.gov (United States)

    Kinkhabwala, Yunus; Valderrama, Juan; Cohen, Itai; Arias, Tomas

On October 2nd, 2016, 52 people were crushed in a stampede when a crowd panicked at a religious gathering in Ethiopia. The ability to predict the state of a crowd and whether it is susceptible to such transitions could help prevent such catastrophes. While current techniques such as agent-based models can predict transitions in emergent behaviors of crowds, the assumptions used to describe the agents are often ad hoc and the simulations are computationally expensive, making their application to real-time crowd prediction challenging. Here, we pursue an orthogonal approach and ask whether a reduced set of variables, such as the local densities, is sufficient to describe the state of a crowd. Inspired by the theoretical framework of Density Functional Theory, we have developed a system that uses only measurements of local densities to extract two independent crowd behavior functions: (1) preferences for locations and (2) interactions between individuals. With these two functions, we have accurately predicted how a model system of walking Drosophila melanogaster distributes itself in an arbitrary 2D environment. In addition, this density-based approach measures properties of the crowd from observations of the crowd itself, without any knowledge of the detailed interactions, and thus it can make predictions about the resulting distributions of these flies in arbitrary environments in real time. This research was supported in part by ARO W911NF-16-1-0433.

  10. PREDICTED PERCENTAGE DISSATISFIED (PPD) MODEL ...

    African Journals Online (AJOL)

    HOD

... their low power requirements, are relatively cheap and are environmentally friendly. ... Predicted Percentage Dissatisfied model evaluation of evaporative cooling ... The performance of direct evaporative coolers is a...

  11. Propulsion Physics Using the Chameleon Density Model

    Science.gov (United States)

    Robertson, Glen A.

    2011-01-01

To grow as a spacefaring race, future spaceflight systems will require a new theory of propulsion, specifically one that does not require mass ejection, without limiting the high thrust necessary to accelerate within or beyond our solar system and return within a normal work period or lifetime. The Chameleon Density Model (CDM) is one such model that could provide new paths in propulsion toward this end. The CDM is based on Chameleon Cosmology, a dark matter theory introduced by Khoury and Weltman in 2004, so named because the Chameleon field is hidden within known physics: it represents a scalar field within and about an object, even in the vacuum. The CDM relates to density changes in the Chameleon field, where the density changes are related to matter accelerations within and about an object. These density changes in turn change how an object couples to its environment, so that thrust is achieved by causing a differential in the environmental coupling about an object. As a demonstration that the CDM fits within known propulsion physics, this paper uses the model to estimate the thrust from a solid rocket motor. Under the CDM, a solid rocket constitutes a two-body system, i.e., the changing density of the rocket and the changing density in the nozzle arising from the accelerated mass, and the interactions between these systems cause a differential coupling to the local gravity environment of the Earth. It is shown that the resulting differential in coupling produces a calculated thrust nearly equivalent to the conventional thrust model used in Sutton and Ross, Rocket Propulsion Elements. Embedded in the equations are the Universe energy scale factor, the reduced Planck mass and the Planck length, which relate the large Universe scale to the subatomic scale.

  12. Simplified local density model for adsorption over large pressure ranges

    International Nuclear Information System (INIS)

    Rangarajan, B.; Lira, C.T.; Subramanian, R.

    1995-01-01

Physical adsorption of high-pressure fluids onto solids is of interest in the transportation and storage of fuel and radioactive gases; the separation and purification of lower hydrocarbons; solid-phase extractions; adsorbent regenerations using supercritical fluids; supercritical fluid chromatography; and critical point drying. A mean-field model is developed that superimposes the fluid-solid potential on a fluid equation of state to predict adsorption on a flat wall from vapor, liquid, and supercritical phases. A van der Waals-type equation of state is used to represent the fluid phase, and is simplified with a local density approximation for calculating the configurational energy of the inhomogeneous fluid. The simplified local density approximation makes the model tractable for routine calculations over wide pressure ranges. The model is capable of predicting Type 2 and 3 subcritical isotherms for adsorption on a flat wall, and shows the characteristic cusplike behavior and crossovers seen experimentally near the fluid critical point.

  13. Predicting Intra-Urban Population Densities in Africa using SAR and Optical Remote Sensing Data

    Science.gov (United States)

    Linard, C.; Steele, J.; Forget, Y.; Lopez, J.; Shimoni, M.

    2017-12-01

The population of Africa is predicted to double over the next 40 years, driving profound social, environmental and epidemiological changes within rapidly growing cities. Estimates of within-city variations in population density must be improved in order to take urban heterogeneities into account and better support urban research and decision making, especially for vulnerability and health assessments. Satellite remote sensing offers an effective solution for mapping settlements and monitoring urbanization at different spatial and temporal scales. In Africa, the urban landscape is covered by slums and small houses, where heterogeneity is high and where the man-made materials are natural ones. Innovative methods that combine optical and SAR data are therefore necessary for improving settlement mapping and population density predictions. An automatic method was developed to estimate built-up densities using recent and archived optical and SAR data, and a multi-temporal database of built-up densities was produced for 48 African cities. Geostatistical methods were then used to study the relationships between census-derived population densities and satellite-derived built-up attributes. The best predictors were combined in a Random Forest framework in order to predict intra-urban variations in population density in any large African city. The models show a significant improvement in our spatial understanding of urbanization and urban population distribution in Africa in comparison to the state of the art.
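One simple way to turn census-unit totals and a satellite-derived built-up layer into intra-urban densities is dasymetric redistribution. This is a hedged sketch of that general idea, not the record's actual Random Forest pipeline; the function name, weights and values are illustrative assumptions:

```python
# Sketch of dasymetric redistribution: allocate a census-unit population
# total to grid cells in proportion to a satellite-derived built-up layer,
# then express the result as per-cell population density.
import numpy as np

def dasymetric_redistribute(pop_total, builtup, cell_area_km2):
    """Per-cell population density (people/km^2), weighted by built-up fraction."""
    w = builtup / builtup.sum()     # cell weights from built-up density
    pop = pop_total * w             # people allocated to each cell
    return pop / cell_area_km2      # convert counts to densities

builtup = np.array([0.05, 0.40, 0.30, 0.25])  # built-up fraction per cell
dens = dasymetric_redistribute(10_000, builtup, cell_area_km2=1.0)
print(np.round(dens))   # density follows the built-up pattern
```

In the record's framework, the uniform proportional weights above would be replaced by Random Forest predictions trained on the census-derived densities and built-up attributes.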

14. Temperature- and density-dependent quark mass model

    Indian Academy of Sciences (India)

Since a fair proportion of such dense proto stars are likely to be ... the temperature- and density-dependent quark mass (TDDQM) model which we had employed in ... instead of Tc ~170 MeV, which is a favoured value for the ud matter [26].

  15. Models for Experimental High Density Housing

    Science.gov (United States)

    Bradecki, Tomasz; Swoboda, Julia; Nowak, Katarzyna; Dziechciarz, Klaudia

    2017-10-01

The article presents the effects of research on models of high density housing. The authors present urban projects for experimental high density housing estates. The design was based on research performed on 38 examples of similar housing in Poland that have been built after 2003. Some of the case studies show extreme density, and that inspired the researchers to test individual virtual solutions that would answer the question: How far can we push the limits? The experimental housing projects show strengths and weaknesses of design driven only by such indexes as FAR (floor area ratio) and DPH (dwellings per hectare). Although such projects are implemented, the authors believe that there are reasons for limits, since high index values may contradict the optimal character of the housing environment. Virtual models on virtual plots presented by the authors were oriented toward maximising the DPH index and the DAI (dwellings area index), which is very often the main driver for developers. The authors also raise the question of the sustainability of such solutions. The research was carried out in the URBAN model research group (Gliwice, Poland) that consists of academic researchers and architecture students. The models reflect architectural and urban regulations that are valid in Poland. Conclusions might be helpful for urban planners, urban designers, developers, architects and architecture students.

  16. Predicting insect migration density and speed in the daytime convective boundary layer.

    Directory of Open Access Journals (Sweden)

    James R Bell

Insect migration needs to be quantified if spatial and temporal patterns in populations are to be resolved. Yet so little ecology is understood above the flight boundary layer (i.e. >10 m), where in north-west Europe an estimated 3 billion insects km⁻¹ month⁻¹, comprising pests, beneficial insects and other species that contribute to biodiversity, use the atmosphere to migrate. Consequently, we elucidate meteorological mechanisms, principally related to wind speed and temperature, that drive variation in daytime aerial density and insect displacement speeds with increasing altitude (150-1200 m above ground level). We derived average aerial densities and displacement speeds of 1.7 million insects in the daytime convective atmospheric boundary layer using vertical-looking entomological radars. We first studied patterns of insect aerial densities and displacement speeds over a decade and linked these with average temperatures and wind velocities from a numerical weather prediction model. Generalized linear mixed models showed that average insect densities decline with increasing wind speed and increase with increasing temperature, and that the relationship between displacement speed and density was negative. We then sought to determine how general these patterns were over space using a paired-site approach in which the relationship between sites was examined using simple linear regression. Both average speeds and densities were predicted remotely from a site over 100 km away, although insect densities were much noisier due to local 'spiking'. By late morning and afternoon, when insects are migrating in a well-developed convective atmosphere at high altitude, they become much more difficult to predict remotely than during the early morning and at lower altitudes. Overall, our findings suggest that predicting migrating insects at altitude at distances of ≈ 100 km is promising, but additional radars are needed to parameterise spatial covariance.

  17. Predictive densities for day-ahead electricity prices using time-adaptive quantile regression

    DEFF Research Database (Denmark)

    Jónsson, Tryggvi; Pinson, Pierre; Madsen, Henrik

    2014-01-01

    A large part of the decision-making problems that actors in the power system face on a daily basis requires scenarios for day-ahead electricity market prices. These scenarios are most likely to be generated based on marginal predictive densities for such prices, then enhanced with a temporal dependence structure. A semi-parametric methodology for generating such densities is presented. It includes: (i) a time-adaptive quantile regression model for the 5%–95% quantiles; and (ii) a description of the distribution tails with exponential distributions. The forecasting skill of the proposed model...
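The quantile regression in record 17 rests on the pinball (quantile) loss: minimising it over a constant predictor recovers the empirical quantile at that level. A minimal sketch of that property, with illustrative prices rather than the paper's data:

```python
def pinball_loss(tau, y, q):
    """Average pinball (quantile) loss of a constant predictor q at level tau."""
    return sum((tau * (yi - q)) if yi >= q else ((1 - tau) * (q - yi)) for yi in y) / len(y)

def best_constant(tau, y, grid):
    """Grid-search the constant that minimises the pinball loss at level tau."""
    return min(grid, key=lambda q: pinball_loss(tau, y, q))

# Hypothetical day-ahead prices (EUR/MWh), not taken from the paper:
prices = [31.0, 35.5, 28.2, 40.1, 33.7, 36.9, 30.4, 38.8, 29.5, 34.2]
grid = [min(prices) + i * 0.01 for i in range(int((max(prices) - min(prices)) / 0.01) + 1)]
q05 = best_constant(0.05, prices, grid)
q95 = best_constant(0.95, prices, grid)
print(q05, q95)  # land on the 5% and 95% empirical quantiles of the sample
```

In a full quantile regression the constant is replaced by a function of explanatory variables, and the time-adaptive variant refits the coefficients as new observations arrive.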

  18. Prediction of nanofluids properties: the density and the heat capacity

    Science.gov (United States)

    Zhelezny, V. P.; Motovoy, I. V.; Ustyuzhanin, E. E.

    2017-11-01

    The results given in this report show that additives of Al2O3 nanoparticles lead to an increase in the density and a decrease in the heat capacity of isopropanol. Based on the experimental data, the excess molar volume and the excess molar heat capacity were calculated. The report suggests a new method for predicting the molar volume and molar heat capacity of nanofluids. It is established that the values of the excess thermodynamic functions are determined by the properties and the volume of the structurally oriented layers of base fluid molecules near the surface of the nanoparticles. The heat capacity of these structurally oriented layers is lower than that of the bulk base fluid at the same parameters, owing to the greater ordering of their structure. It is shown that information on the geometric dimensions of the structured layers of the base fluid near nanoparticles can be obtained from data on the nanofluid density and, at ambient temperature, by the dynamic light scattering method. For calculations of the nanofluid heat capacity over a wide range of temperatures, a new correlation based on extended scaling is proposed.
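The excess molar volume mentioned in record 18 is the measured molar volume of the mixture minus the ideal, mole-fraction-weighted sum of the component molar volumes. A minimal sketch with illustrative numbers (the measured mixture value and mole fraction are hypothetical, not the paper's data):

```python
def molar_volume(molar_mass, density):
    """Molar volume in cm^3/mol from molar mass (g/mol) and density (g/cm^3)."""
    return molar_mass / density

def excess_molar_volume(x_np, v_np, v_bf, v_mix):
    """Excess molar volume: measured mixture molar volume minus the ideal
    mole-fraction-weighted sum of the nanoparticle and base-fluid volumes."""
    ideal = x_np * v_np + (1 - x_np) * v_bf
    return v_mix - ideal

v_al2o3 = molar_volume(101.96, 3.95)   # Al2O3 nanoparticles
v_ipa   = molar_volume(60.10, 0.786)   # isopropanol base fluid
v_mix_measured = 74.9                  # hypothetical measured value at x = 0.02
v_e = excess_molar_volume(0.02, v_al2o3, v_ipa, v_mix_measured)
print(round(v_e, 3))  # negative: the mixture is denser than the ideal mix
```

A negative excess volume is consistent with the report's picture of structurally ordered base-fluid layers packing more tightly around the nanoparticles.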

  19. Linking density functional and mode coupling models for supercooled liquids

    Energy Technology Data Exchange (ETDEWEB)

    Premkumar, Leishangthem; Bidhoodi, Neeta; Das, Shankar P. [School of Physical Sciences, Jawaharlal Nehru University, New Delhi 110067 (India)

    2016-03-28

    We compare predictions from two familiar models of the metastable supercooled liquid, constructed with thermodynamic and dynamic approaches, respectively. In the so-called density functional theory, the free energy F[ρ] of the liquid is a functional of the inhomogeneous density ρ(r). The metastable state is identified as a local minimum of F[ρ]. The sharp density profile characterizing ρ(r) is identified with a single-particle oscillator, whose frequency is obtained from the parameters of the optimum density function. On the other hand, a dynamic approach to supercooled liquids is taken in the mode coupling theory (MCT), which predicts a sharp ergodicity-to-non-ergodicity transition at a critical density. The single-particle dynamics in the non-ergodic state, treated approximately, represents a propagating mode whose characteristic frequency is computed from the corresponding memory function of the MCT. The mass localization parameters in the above two models (treated in their simplest forms) are obtained in terms of the corresponding natural frequencies and are shown to have comparable magnitudes.

  1. Predicting moisture content and density distribution of Scots pine by microwave scanning of sawn timber

    International Nuclear Information System (INIS)

    Johansson, J.; Hagman, O.; Fjellner, B.A.

    2003-01-01

    This study was carried out to investigate the possibility of calibrating a prediction model for the moisture content and density distribution of Scots pine (Pinus sylvestris) using microwave sensors. The material was initially at green moisture content and was thereafter dried in several steps to zero moisture content. At each step, all the pieces were weighed, scanned with a microwave sensor (Satimo, 9.4 GHz), and computed tomography (CT)-scanned with a medical CT scanner (Siemens Somatom AR.T.). The output variables from the microwave sensor were used as predictors, and CT images that correlated with known moisture content were used as response variables. Multivariate models to predict average moisture content and density were calibrated using partial least squares (PLS) regression. The models for average moisture content and density were applied at the pixel level, and the distribution was visualized. The results show that it is possible to predict both moisture content distribution and density distribution with high accuracy using microwave sensors. (author)

  2. Bootstrap prediction and Bayesian prediction under misspecified models

    OpenAIRE

    Fushiki, Tadayoshi

    2005-01-01

    We consider a statistical prediction problem under misspecified models. In a sense, Bayesian prediction is an optimal prediction method when an assumed model is true. Bootstrap prediction is obtained by applying Breiman's `bagging' method to a plug-in prediction. Bootstrap prediction can be considered to be an approximation to the Bayesian prediction under the assumption that the model is true. However, in applications, there are frequently deviations from the assumed model. In this paper, bo...

  3. Asymptotically Constant-Risk Predictive Densities When the Distributions of Data and Target Variables Are Different

    Directory of Open Access Journals (Sweden)

    Keisuke Yano

    2014-05-01

    Full Text Available We investigate the asymptotic construction of constant-risk Bayesian predictive densities under the Kullback–Leibler risk when the distributions of data and target variables are different and have a common unknown parameter. It is known that the Kullback–Leibler risk is asymptotically equal to a trace of the product of two matrices: the inverse of the Fisher information matrix for the data and the Fisher information matrix for the target variables. We assume that the trace has a unique maximum point with respect to the parameter. We construct asymptotically constant-risk Bayesian predictive densities using a prior depending on the sample size. Further, we apply the theory to the subminimax estimator problem and the prediction based on the binary regression model.

  4. MODEL PREDICTIVE CONTROL FUNDAMENTALS

    African Journals Online (AJOL)

    2012-07-02

    Jul 2, 2012 ... signal based on a process model, coping with constraints on inputs and ... paper, we will present an introduction to the theory and application of MPC with Matlab codes ... section 5 presents the simulation results and section 6.

  5. Ab initio, density functional theory, and continuum solvation model prediction of the product ratio in the S(N)2 reaction of NO2(-) with CH3CH2Cl and CH3CH2Br in DMSO solution.

    Science.gov (United States)

    Westphal, Eduard; Pliego, Josefredo R

    2007-10-11

    The reaction pathways for the interaction of the nitrite ion with ethyl chloride and ethyl bromide in DMSO solution were investigated at the ab initio level of theory, and the solvent effect was included through the polarizable continuum model. The performance of the BLYP, GLYP, XLYP, OLYP, PBE0, B3PW91, B3LYP, and X3LYP density functionals was tested. For the ethyl bromide case, our best ab initio calculations, at the CCSD(T)/aug-cc-pVTZ level, predict product ratios of 73% and 27% for nitroethane and ethyl nitrite, respectively, which can be compared with the experimental values of 67% and 33%. This translates to an error in the relative ΔG* of only 0.17 kcal mol⁻¹. No functional is accurate; the X3LYP functional presents the best performance, with a deviation of 0.82 kcal mol⁻¹. The present problem should be included in the test set used for the evaluation of new functionals.

  6. CT Measured Psoas Density Predicts Outcomes After Enterocutaneous Fistula Repair

    Science.gov (United States)

    Lo, Wilson D.; Evans, David C.; Yoo, Taehwan

    2018-01-01

    Background: Low muscle mass and quality are associated with poor surgical outcomes. We evaluated CT-measured psoas muscle density as a marker of muscle quality and physiologic reserve, and hypothesized that it would predict outcomes after enterocutaneous fistula (ECF) repair. Methods: We conducted a retrospective cohort study of patients 18–90 years old with ECF failing non-operative management, requiring elective operative repair at Ohio State University from 2005–2016, who received a pre-operative abdomen/pelvis CT with intravenous contrast within 3 months of their operation. The psoas Hounsfield Unit average calculation (HUAC) was measured at the L3 level. One-year leak rate; 90-day, 1-year, and 3-year mortality; complication risk; length of stay; dependent discharge; and 30-day readmission were compared to HUAC. Results: 100 patients met inclusion criteria. Patients were stratified into interquartile ranges (IQR) based on HUAC. The lowest HUAC quartile was our low muscle quality (LMQ) cutoff and was associated with 1-year leak (OR 3.50, p < 0.01), 1-year (OR 2.95, p < 0.04) and 3-year mortality (OR 3.76, p < 0.01), complication risk (OR 14.61, p < 0.01), and dependent discharge (OR 4.07, p < 0.01) compared to non-LMQ patients. Conclusions: Psoas muscle density is a significant predictor of poor outcomes in ECF repair. This readily available measure of physiologic reserve can identify patients with ECF on pre-operative evaluation who have significantly increased risk and may benefit from additional interventions and recovery time to mitigate risk before operative repair. PMID:29505144

  7. Melanoma Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing melanoma cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  8. Modelling bankruptcy prediction models in Slovak companies

    Directory of Open Access Journals (Sweden)

    Kovacova Maria

    2017-01-01

    Full Text Available Intensive research from academics and practitioners has been conducted on models for bankruptcy prediction and credit risk management. In spite of numerous studies focusing on forecasting bankruptcy using traditional statistical techniques (e.g. discriminant analysis and logistic regression) and early artificial intelligence models (e.g. artificial neural networks), there is a trend of transition to machine learning models (support vector machines, bagging, boosting, and random forests) to predict bankruptcy one year prior to the event. Comparing the performance of this unconventional approach with results obtained by discriminant analysis, logistic regression, and neural network applications, it has been found that bagging, boosting, and random forest models outperform the other techniques, and that prediction accuracy on the testing sample improves when additional variables are included. On the other hand, the prediction accuracy of older, well-known bankruptcy prediction models is quite high. Therefore, we aim to analyse these older models on a dataset of Slovak companies to validate their prediction ability under specific conditions. Furthermore, these models will be remodelled according to new trends by calculating the influence of eliminating selected variables on their overall prediction ability.

  9. Calculation of solar irradiation prediction intervals combining volatility and kernel density estimates

    International Nuclear Information System (INIS)

    Trapero, Juan R.

    2016-01-01

    In order to integrate solar energy into the grid, it is important to predict solar radiation accurately, as forecast errors can lead to significant costs. Recently, the growing number of statistical approaches that cope with this problem has yielded a prolific literature. In general terms, the main research discussion is centred on selecting the “best” forecasting technique in terms of accuracy. However, users of such forecasts require, apart from point forecasts, information about the variability of the forecast in order to compute prediction intervals. In this work, we analyze kernel density estimation approaches, volatility forecasting models, and combinations of the two in order to improve prediction interval performance. The results show that an optimal combination, in terms of prediction interval statistical tests, can achieve the desired confidence level with a lower average interval width. Data from a facility located in Spain are used to illustrate our methodology. - Highlights: • This work explores uncertainty forecasting models to build prediction intervals. • Kernel density estimators, exponential smoothing and GARCH models are compared. • An optimal combination of methods provides the best results. • A good compromise between coverage and average interval width is shown.
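The kernel-density half of the approach in record 9 can be sketched by fitting a Gaussian KDE to historical forecast errors and inverting its CDF to obtain interval endpoints. This is a minimal stand-alone sketch with synthetic errors and Silverman's bandwidth rule; the paper's data and its volatility (GARCH) component are not reproduced:

```python
import math, random

def kde_quantile(errors, p, grid_n=2000):
    """Quantile of a Gaussian kernel density estimate of the forecast errors,
    found by numerically inverting the KDE's CDF on a grid."""
    n = len(errors)
    mu = sum(errors) / n
    sd = math.sqrt(sum((e - mu) ** 2 for e in errors) / (n - 1))
    h = 1.06 * sd * n ** -0.2  # Silverman's rule-of-thumb bandwidth
    lo, hi = min(errors) - 4 * h, max(errors) + 4 * h
    def cdf(x):
        # KDE CDF: average of Gaussian CDFs centred on each observed error
        return sum(0.5 * (1 + math.erf((x - e) / (h * math.sqrt(2)))) for e in errors) / n
    for i in range(grid_n + 1):
        x = lo + (hi - lo) * i / grid_n
        if cdf(x) >= p:
            return x
    return hi

random.seed(0)
errors = [random.gauss(0.0, 25.0) for _ in range(500)]  # synthetic W/m^2 errors
point_forecast = 600.0                                  # hypothetical irradiation forecast
lower = point_forecast + kde_quantile(errors, 0.05)
upper = point_forecast + kde_quantile(errors, 0.95)
print(lower, upper)  # a 90% prediction interval around the point forecast
```

Combining this with a volatility model, as the paper does, amounts to rescaling the error distribution by the forecast conditional standard deviation before taking quantiles.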

  10. Prediction of Five Softwood Paper Properties from its Density using Support Vector Machine Regression Techniques

    Directory of Open Access Journals (Sweden)

    Esperanza García-Gonzalo

    2016-01-01

    Full Text Available Predicting paper properties based on a limited number of measured variables can be an important tool for the industry. Mathematical models were developed to predict mechanical and optical properties from the corresponding paper density for some softwood papers using support vector machine regression with the radial basis function kernel. A dataset of different properties of paper handsheets produced from pulps of pine (Pinus pinaster and P. sylvestris) and cypress species (Cupressus lusitanica, C. sempervirens, and C. arizonica) beaten at 1000, 4000, and 7000 revolutions was used. The results show that it is possible to obtain good models (with a high coefficient of determination) using two variables: the numerical variable density and the categorical variable species.
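The flavour of RBF-kernel regression used in record 10 can be sketched without external libraries by using kernel ridge regression, a closely related RBF method, in place of support vector regression. The (density, tensile index) pairs below are hypothetical, not the paper's measurements:

```python
import math

def rbf(x, z, gamma):
    return math.exp(-gamma * (x - z) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    a = [0.0] * n
    for r in range(n - 1, -1, -1):
        a[r] = (M[r][n] - sum(M[r][k] * a[k] for k in range(r + 1, n))) / M[r][r]
    return a

def fit_krr(xs, ys, gamma=5.0, lam=1e-3):
    """Kernel ridge regression with an RBF kernel: solve (K + lam*I) alpha = y."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j], gamma) + (lam if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    alpha = solve(K, ys)
    return lambda x: sum(a * rbf(x, xi, gamma) for a, xi in zip(alpha, xs))

# Hypothetical handsheet data: density (kg/m^3) vs tensile index
density = [450.0, 500.0, 550.0, 600.0, 650.0, 700.0]
tensile = [20.0, 28.0, 35.0, 41.0, 46.0, 50.0]
xs = [(d - 450.0) / 250.0 for d in density]  # scale inputs to [0, 1]
model = fit_krr(xs, tensile)
pred = model((575.0 - 450.0) / 250.0)
print(round(pred, 1))  # interpolates between the 550 and 600 kg/m^3 observations
```

A true SVR adds an epsilon-insensitive loss and sparse support vectors, but the RBF-kernel machinery is the same.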

  11. Predictive models of moth development

    Science.gov (United States)

    Degree-day models link ambient temperature to insect life-stages, making such models valuable tools in integrated pest management. These models increase management efficacy by predicting pest phenology. In Wisconsin, the top insect pest of cranberry production is the cranberry fruitworm, Acrobasis v...
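The degree-day models described in record 11 accumulate daily heat units above a base temperature until a threshold marking a life-stage transition is reached. A minimal sketch using the simple averaging method; the temperature series, base temperature, and threshold are illustrative (real parameters for Acrobasis vaccinii would come from published phenology studies):

```python
def daily_degree_days(t_min, t_max, t_base):
    """Simple averaging method: heat units accumulated in one day."""
    return max(0.0, (t_min + t_max) / 2.0 - t_base)

def predict_event_day(temps, t_base, threshold):
    """First day (1-indexed) on which accumulated degree-days reach the
    threshold for a life-stage transition, or None if never reached."""
    total = 0.0
    for day, (t_min, t_max) in enumerate(temps, start=1):
        total += daily_degree_days(t_min, t_max, t_base)
        if total >= threshold:
            return day
    return None

# Hypothetical spring (t_min, t_max) series in deg C:
temps = [(4, 14), (6, 16), (7, 19), (9, 21), (10, 22), (11, 23), (12, 26), (13, 27)]
day = predict_event_day(temps, t_base=10.0, threshold=20.0)
print(day)  # -> 6
```

Operational models often replace the averaging method with sine-wave approximations of the daily temperature curve, but the accumulation logic is unchanged.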

  12. Improved water density feedback model for pressurized water reactors

    International Nuclear Information System (INIS)

    Casadei, A.L.

    1976-01-01

    An improved water density feedback model has been developed for neutron diffusion calculations of PWR cores. This work addresses spectral effects on few-group cross sections due to water density changes, and water density predictions considering open-channel and subcooled boiling effects. A homogenized spectral model was also derived using the unit assembly diffusion method for use in a coarse-mesh 3D diffusion computer program. The spectral and water density evaluation models described were incorporated in a 3D diffusion code, and neutronic calculations for a typical PWR were completed for both nominal and accident conditions. Comparison of neutronic calculations employing the open versus the closed channel model for accident conditions indicates that significant safety margin increases can be obtained if subcooled boiling and open-channel effects are considered in accident calculations. This is attributed to effects on both core reactivity and power distribution, which result in increased margin to fuel degradation limits. For nominal operating conditions, negligible differences in core reactivity and power distribution exist, since flow redistribution and subcooled voids are not significant at such conditions. The results serve to confirm the conservatism of currently employed closed-channel feedback methods in accident analysis, and indicate that the model developed in this work can contribute to showing increased safety margins for certain accidents

  13. Predictive Models and Computational Embryology

    Science.gov (United States)

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  14. Teaching Chemistry with Electron Density Models

    Science.gov (United States)

    Shusterman, Gwendolyn P.; Shusterman, Alan J.

    1997-07-01

    Linus Pauling once said that a topic must satisfy two criteria before it can be taught to students. First, students must be able to assimilate the topic within a reasonable amount of time. Second, the topic must be relevant to the educational needs and interests of the students. Unfortunately, the standard general chemistry textbook presentation of "electronic structure theory", set as it is in the language of molecular orbitals, has a difficult time satisfying either criterion. Many of the quantum mechanical aspects of molecular orbitals are too difficult for most beginning students to appreciate, much less master, and the few applications that are presented in the typical textbook are too limited in scope to excite much student interest. This article describes a powerful new method for teaching students about electronic structure and its relevance to chemical phenomena. This method, which we have developed and used for several years in general chemistry (G.P.S.) and organic chemistry (A.J.S.) courses, relies on computer-generated three-dimensional models of electron density distributions, and largely satisfies Pauling's two criteria. Students find electron density models easy to understand and use, and because these models are easily applied to a broad range of topics, they successfully convey to students the importance of electronic structure. In addition, when students finally learn about orbital concepts they are better prepared because they already have a well-developed three-dimensional picture of electronic structure to fall back on. We note in this regard that the types of models we use have found widespread, rigorous application in chemical research (1, 2), so students who understand and use electron density models do not need to "unlearn" anything before progressing to more advanced theories.

  15. Predictive Modeling in Race Walking

    Directory of Open Access Journals (Sweden)

    Krzysztof Wiktorowicz

    2015-01-01

    Full Text Available This paper presents the use of linear and nonlinear multivariable models as tools to support the training process of race walkers. These models are calculated using data collected from race walkers’ training events and are used to predict the result over a 3 km race based on training loads. The material consists of 122 training plans for 21 athletes. In order to choose the best model, the leave-one-out cross-validation method is used. The main contribution of the paper is to propose nonlinear modifications of linear models in order to achieve a smaller prediction error. It is shown that the best model is a modified LASSO regression with quadratic terms in the nonlinear part. This model has the smallest prediction error and a simplified structure obtained by eliminating some of the predictors.
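The leave-one-out cross-validation used in record 15 refits the model once per observation, scoring each model family by its error on the held-out point. A minimal sketch comparing a constant (mean) model against a one-predictor linear model; the (training load, 3 km time) pairs are hypothetical, not the paper's data, and the paper's LASSO-with-quadratic-terms model is not reproduced:

```python
def fit_mean(xs, ys):
    m = sum(ys) / len(ys)
    return lambda x: m

def fit_linear(xs, ys):
    """Ordinary least squares for a single predictor, in closed form."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return lambda x: a + b * x

def loocv_error(fit, xs, ys):
    """Leave-one-out CV: mean squared error of predicting each held-out
    point from a model fitted to the remaining points."""
    err = 0.0
    for i in range(len(xs)):
        model = fit(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        err += (model(xs[i]) - ys[i]) ** 2
    return err / len(xs)

load = [40.0, 45.0, 50.0, 55.0, 60.0, 65.0, 70.0, 75.0]   # weekly training load
time = [800.0, 790.0, 778.0, 770.0, 761.0, 752.0, 745.0, 736.0]  # 3 km time (s)
e_mean = loocv_error(fit_mean, load, time)
e_lin = loocv_error(fit_linear, load, time)
print(e_mean > e_lin)  # the linear model wins on this near-linear data
```

The same loop scores any candidate family (here it would include the modified LASSO variants), and the family with the smallest held-out error is selected.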

  16. Prediction of two-phase mixture density using artificial neural networks

    International Nuclear Information System (INIS)

    Lombardi, C.; Mazzola, A.

    1997-01-01

    In nuclear power plants, the density of boiling mixtures is of significant relevance due to its influence on the neutronic balance, the power distribution and the reactor dynamics. Since the determination of the two-phase mixture density on a purely analytical basis is impractical in many situations of interest, heuristic relationships have been developed based on the parameters describing the two-phase system. However, the best, or even a good, structure for the correlation cannot be determined in advance, especially considering that it is usually desirable to represent the experimental data with the most compact equation. A possible alternative to empirical correlations is the use of artificial neural networks, which allow one to model complex systems without requiring the explicit formulation of the relationships existing among the variables. In this work, the neural network methodology was applied to predict the density of two-phase mixtures flowing upward in adiabatic channels under different experimental conditions. The trained network predicts the density data with a root-mean-square error of 5.33%, with ∼93% of the data points predicted to within 10%. When compared with two conventional, well-proven correlations, i.e. the Zuber-Findlay and the CISE correlations, the neural network performance is significantly better. In spite of the good accuracy of the neural network predictions, the 'black-box' character of the neural model does not allow an easy physical interpretation of the knowledge encoded in the network weights. Therefore, the neural network methodology has the advantage of not requiring a formal correlation structure and of giving very accurate results, but at the expense of a loss of model transparency. (author)
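Record 16 reports its accuracy as a root-mean-square relative error plus the fraction of points within 10%. Those two metrics are easy to make explicit; the measured/predicted density pairs below are hypothetical, not the paper's data:

```python
def rms_relative_error(predicted, measured):
    """Root-mean-square of the relative errors, in percent."""
    n = len(measured)
    return (sum(((p - m) / m) ** 2 for p, m in zip(predicted, measured)) / n) ** 0.5 * 100.0

def fraction_within(predicted, measured, tol_pct):
    """Fraction of points whose relative error is within tol_pct percent."""
    ok = sum(1 for p, m in zip(predicted, measured) if abs(p - m) / m * 100.0 <= tol_pct)
    return ok / len(measured)

# Hypothetical two-phase mixture densities (kg/m^3): measured vs predicted
measured  = [650.0, 540.0, 480.0, 700.0, 610.0, 520.0, 450.0, 580.0]
predicted = [660.0, 530.0, 500.0, 690.0, 640.0, 515.0, 440.0, 585.0]
print(round(rms_relative_error(predicted, measured), 2),
      fraction_within(predicted, measured, 10.0))
```

The paper's figures (5.33% RMS error, ∼93% within 10%) are these same two statistics evaluated on its experimental dataset.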

  17. Influence of mesh density, cortical thickness and material properties on human rib fracture prediction.

    Science.gov (United States)

    Li, Zuoping; Kindig, Matthew W; Subit, Damien; Kent, Richard W

    2010-11-01

    The purpose of this paper was to investigate the sensitivity of the structural responses and bone fractures of the ribs to mesh density, cortical thickness, and material properties, so as to provide guidelines for the development of finite element (FE) thorax models used in impact biomechanics. Subject-specific FE models of the second, fourth, sixth and tenth ribs were developed to reproduce dynamic failure experiments. Sensitivity studies were then conducted to quantify the effects of variations in mesh density, cortical thickness, and material parameters on the model-predicted reaction force-displacement relationship, cortical strains, and bone fracture locations for all four ribs. Overall, it was demonstrated that rib FE models consisting of 2000-3000 trabecular hexahedral elements (weighted element length 2-3 mm) and associated quadrilateral cortical shell elements with variable thickness more closely predicted the rib structural responses and bone fracture force-failure displacement relationships observed in the experiments (except the fracture locations), compared to models with constant cortical thickness. Further increases in mesh density increased computational cost but did not markedly improve model predictions. A ±30% change in the major material parameters of cortical bone led to a −16.7% to +33.3% change in fracture displacement and a −22.5% to +19.1% change in fracture force. The results of this study suggest that human rib structural responses can be modeled in an accurate and computationally efficient way using (a) a coarse mesh of 2000-3000 solid elements, (b) cortical shell elements with variable thickness distribution and (c) a rate-dependent elastic-plastic material model. Copyright © 2010 IPEM. Published by Elsevier Ltd. All rights reserved.

  18. Sparse Density, Leaf-Off Airborne Laser Scanning Data in Aboveground Biomass Component Prediction

    Directory of Open Access Journals (Sweden)

    Ville Kankare

    2015-05-01

    Full Text Available The demand for cost-efficient forest aboveground biomass (AGB) prediction methods is growing worldwide. The National Land Survey of Finland (NLS) began collecting airborne laser scanning (ALS) data throughout Finland in 2008 to provide a new highly detailed terrain elevation model. Similar data sets are being collected in an increasing number of countries worldwide and offer great potential in forest mapping related applications. The objectives of our study were (i) to evaluate the AGB component prediction accuracy at a resolution of 300 m2 using metrics derived from sparse density, leaf-off ALS data (collected by NLS) as predictor variables; (ii) to compare prediction accuracies with existing large-scale forest mapping techniques (Multi-source National Forest Inventory, MS-NFI) based on Landsat TM satellite imagery; and (iii) to evaluate the accuracy and effect of canopy height model (CHM) derived metrics on AGB component prediction when ALS data were acquired with multiple sensors and varying scanning parameters. Results showed that ALS point metrics can be used to predict component AGBs with an accuracy of 29.7%–48.3%. AGB prediction accuracy was slightly improved using CHM-derived metrics, but CHM metrics had a clearer effect on the estimated bias. Compared to the MS-NFI, the prediction accuracy was considerably higher, which was caused by differences in the remote sensing data utilized.

  19. Nuclear ``pasta'' phase within density dependent hadronic models

    Science.gov (United States)

    Avancini, S. S.; Brito, L.; Marinelli, J. R.; Menezes, D. P.; de Moraes, M. M. W.; Providência, C.; Santos, A. M.

    2009-03-01

    In the present paper, we investigate the onset of the “pasta” phase with different parametrizations of the density dependent hadronic model and compare the results with one of the usual parametrizations of the nonlinear Walecka model. The influence of the scalar-isovector virtual δ meson is shown. At zero temperature, two different methods are used, one based on coexistent phases and the other on the Thomas-Fermi approximation. At finite temperature, only the coexistence phases method is used. npe matter with fixed proton fractions and in β equilibrium are studied. We compare our results with restrictions imposed on the values of the density and pressure at the inner edge of the crust, obtained from observations of the Vela pulsar and recent isospin diffusion data from heavy-ion reactions, and with predictions from spinodal calculations.

  1. Modelling interactions of toxicants and density dependence in wildlife populations

    Science.gov (United States)

    Schipper, Aafke M.; Hendriks, Harrie W.M.; Kauffman, Matthew J.; Hendriks, A. Jan; Huijbregts, Mark A.J.

    2013-01-01

    1. A major challenge in the conservation of threatened and endangered species is to predict population decline and design appropriate recovery measures. However, anthropogenic impacts on wildlife populations are notoriously difficult to predict due to potentially nonlinear responses and interactions with natural ecological processes like density dependence. 2. Here, we incorporated both density dependence and anthropogenic stressors in a stage-based matrix population model and parameterized it for a density-dependent population of peregrine falcons Falco peregrinus exposed to two anthropogenic toxicants [dichlorodiphenyldichloroethylene (DDE) and polybrominated diphenyl ethers (PBDEs)]. Log-logistic exposure–response relationships were used to translate toxicant concentrations in peregrine falcon eggs to effects on fecundity. Density dependence was modelled as the probability of a nonbreeding bird acquiring a breeding territory as a function of the current number of breeders. 3. The equilibrium size of the population, as represented by the number of breeders, responded nonlinearly to increasing toxicant concentrations, showing a gradual decrease followed by a relatively steep decline. Initially, toxicant-induced reductions in population size were mitigated by an alleviation of the density limitation, that is, an increasing probability of territory acquisition. Once population density was no longer limiting, the toxicant impacts were no longer buffered by an increasing proportion of nonbreeders shifting to the breeding stage, resulting in a strong decrease in the equilibrium number of breeders. 4. Median critical exposure concentrations, that is, median toxicant concentrations in eggs corresponding with an equilibrium population size of zero, were 33 and 46 μg g−1 fresh weight for DDE and PBDEs, respectively. 5. Synthesis and applications. Our modelling results showed that particular life stages of a density-limited population may be relatively insensitive to
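The interaction described in record 1, a log-logistic toxicant effect on fecundity buffered by density-dependent territory acquisition, can be sketched with a two-stage (nonbreeder/breeder) model. All rates, the EC50, and the territory count below are illustrative placeholders, not the parameters fitted to the peregrine falcon data:

```python
def log_logistic(conc, ec50, slope):
    """Fraction of fecundity remaining at toxicant concentration conc."""
    if conc <= 0:
        return 1.0
    return 1.0 / (1.0 + (conc / ec50) ** slope)

def equilibrium_breeders(conc, years=500):
    """Iterate a two-stage model to its long-run state. Breeders produce
    recruits into the nonbreeder stage; nonbreeders acquire territories with
    a probability that falls as breeders approach the K available territories."""
    K = 100.0              # available territories (illustrative)
    s_n, s_b = 0.7, 0.85   # annual survival of nonbreeders and breeders
    f = 1.2                # recruits per breeder per year at zero toxicant
    ec50, slope = 30.0, 3.0
    N, B = 50.0, 50.0
    for _ in range(years):
        p_acquire = max(0.0, 1.0 - B / K)   # density-dependent territory uptake
        recruits = f * log_logistic(conc, ec50, slope) * B
        N, B = s_n * N * (1 - p_acquire) + recruits, s_b * B + s_n * N * p_acquire
    return B

b_clean = equilibrium_breeders(0.0)
b_dosed = equilibrium_breeders(25.0)
print(round(b_clean, 1), round(b_dosed, 1))  # toxicant lowers the equilibrium
```

The buffering the abstract describes appears here directly: as the toxicant trims recruitment, B falls, p_acquire rises, and more nonbreeders move into the breeding stage, which softens the decline until the nonbreeder pool can no longer compensate.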

  2. Predicting Ligand Binding Sites on Protein Surfaces by 3-Dimensional Probability Density Distributions of Interacting Atoms

    Science.gov (United States)

    Jian, Jhih-Wei; Elumalai, Pavadai; Pitti, Thejkiran; Wu, Chih Yuan; Tsai, Keng-Chang; Chang, Jeng-Yih; Peng, Hung-Pin; Yang, An-Suei

    2016-01-01

    Predicting ligand binding sites (LBSs) on protein structures, which are obtained either from experimental or computational methods, is a useful first step in functional annotation or structure-based drug design for the protein structures. In this work, the structure-based machine learning algorithm ISMBLab-LIG was developed to predict LBSs on protein surfaces with input attributes derived from the three-dimensional probability density maps of interacting atoms, which were reconstructed on the query protein surfaces and were relatively insensitive to local conformational variations of the tentative ligand binding sites. The prediction accuracy of the ISMBLab-LIG predictors is comparable to that of the best LBS predictors benchmarked on several well-established testing datasets. More importantly, the ISMBLab-LIG algorithm has substantial tolerance to the prediction uncertainties of computationally derived protein structure models. As such, the method is particularly useful for predicting LBSs not only on experimental protein structures without known LBS templates in the database but also on computationally predicted model protein structures with structural uncertainties in the tentative ligand binding sites. PMID:27513851

  3. High-Density Lipoprotein Cholesterol, Blood Urea Nitrogen, and Serum Creatinine Can Predict Severe Acute Pancreatitis.

    Science.gov (United States)

    Hong, Wandong; Lin, Suhan; Zippi, Maddalena; Geng, Wujun; Stock, Simon; Zimmer, Vincent; Xu, Chunfang; Zhou, Mengtao

    2017-01-01

    Early prediction of the severity of acute pancreatitis (AP) would be helpful for triaging patients to the appropriate level of care and intervention. The aim of the study was to develop a model able to predict Severe Acute Pancreatitis (SAP). A total of 647 patients with AP were enrolled. The demographic data, hematocrit, and High-Density Lipoprotein Cholesterol (HDL-C) determined at the time of admission, together with Blood Urea Nitrogen (BUN) and serum creatinine (Scr) determined at the time of admission and 24 hours (hrs) after hospitalization, were collected and analyzed statistically. Multivariate logistic regression indicated that HDL-C at admission and BUN and Scr at 24 hrs were independently associated with SAP. A logistic regression function (LR model) was developed to predict SAP, with linear predictor -2.25 - 0.06·HDL-C (mg/dl, admission) + 0.06·BUN (mg/dl, 24 hrs) + 0.66·Scr (mg/dl, 24 hrs). The optimism-corrected c-index for the LR model was 0.832 after bootstrap validation. The area under the receiver operating characteristic curve for the LR model for the prediction of SAP was 0.84. The LR model, consisting of HDL-C at admission and BUN and Scr at 24 hrs, represents an additional tool to stratify patients at risk of SAP.
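The logistic regression function reported in the abstract maps to a predicted probability of SAP through the standard logistic transformation. A minimal Python sketch using the published coefficients (the patient values below are invented for illustration):

```python
import math

def sap_probability(hdl_admission, bun_24h, scr_24h):
    """Probability of severe acute pancreatitis from the published LR model.

    The linear predictor (logit) uses the coefficients reported in the abstract:
    -2.25 - 0.06*HDL-C (admission) + 0.06*BUN (24 h) + 0.66*Scr (24 h),
    with all concentrations in mg/dl.
    """
    logit = -2.25 - 0.06 * hdl_admission + 0.06 * bun_24h + 0.66 * scr_24h
    return 1.0 / (1.0 + math.exp(-logit))

# Lower HDL-C and higher BUN/Scr both push the predicted risk upward.
p_low_risk = sap_probability(hdl_admission=50, bun_24h=12, scr_24h=0.9)
p_high_risk = sap_probability(hdl_admission=20, bun_24h=40, scr_24h=2.5)
```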

  4. Current density and continuity in discretized models

    International Nuclear Information System (INIS)

    Boykin, Timothy B; Luisier, Mathieu; Klimeck, Gerhard

    2010-01-01

    Discrete approaches have long been used in numerical modelling of physical systems in both research and teaching. Discrete versions of the Schroedinger equation employing either one or several basis functions per mesh point are often used by senior undergraduates and beginning graduate students in computational physics projects. In studying discrete models, students can encounter conceptual difficulties with the representation of the current and its divergence because different finite-difference expressions, all of which reduce to the current density in the continuous limit, measure different physical quantities. Understanding these different discrete currents is essential and requires a careful analysis of the current operator, the divergence of the current and the continuity equation. Here we develop point forms of the current and its divergence valid for an arbitrary mesh and basis. We show that in discrete models currents exist only along lines joining atomic sites (or mesh points). Using these results, we derive a discrete analogue of the divergence theorem and demonstrate probability conservation in a purely localized-basis approach.
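The statement that in discrete models currents exist only along lines joining mesh points can be illustrated on a one-dimensional nearest-neighbour chain. The sketch below (pure Python, with ħ = 1 and hopping t = 1; a generic tight-binding illustration, not the authors' formalism) checks the discrete continuity equation numerically: the rate of change of each site's probability equals minus the difference of the currents on its two adjacent bonds.

```python
import random

def bond_currents(psi, t=1.0):
    """Probability current on each bond (j, j+1): J = 2*t*Im(psi_j^* psi_{j+1})."""
    return [2.0 * t * (psi[j].conjugate() * psi[j + 1]).imag
            for j in range(len(psi) - 1)]

def density_rates(psi, t=1.0):
    """d|psi_j|^2/dt from the Schroedinger equation with (H psi)_j = -t(psi_{j-1} + psi_{j+1})."""
    n, rates = len(psi), []
    for j in range(n):
        hpsi = -t * (psi[j - 1] if j > 0 else 0) - t * (psi[j + 1] if j < n - 1 else 0)
        rates.append(2.0 * (psi[j].conjugate() * hpsi).imag)  # hbar = 1
    return rates

random.seed(0)
psi = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(8)]
J = bond_currents(psi)
drho = density_rates(psi)
# Discrete continuity: d rho_j/dt + (J_out - J_in) = 0, with J = 0 beyond the chain ends.
residuals = [drho[j]
             + (J[j] if j < len(psi) - 1 else 0.0)
             - (J[j - 1] if j > 0 else 0.0)
             for j in range(len(psi))]
```

Because the bond currents telescope, total probability is conserved exactly, which is the discrete analogue of the divergence theorem mentioned in the abstract.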

  5. Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.

    Science.gov (United States)

    Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F

    2013-04-01

    In biomedical research, it is often of interest to characterize biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference making as few assumptions about restrictive parametric models as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models by one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.

  6. Model SM-1 ballast density gauge

    International Nuclear Information System (INIS)

    Gao Weixiang; Fang Jidong; Zhang Xuejuan; Zhang Reilin; Gao Wanshan

    1990-05-01

    The ballast density is one of the principal parameters of roadbed operating state. It strongly affects railroad stability, the accumulation of residual railroad deformation, and the amount of work required for railroad maintenance. The SM-1 ballast density gauge is designed to determine the density of ballast from the attenuation of γ-rays passing through it. Its fundamentals, construction, specifications, application and economic benefits are described

  7. Prediction of crack density and electrical resistance changes in indium tin oxide/polymer thin films under tensile loading

    KAUST Repository

    Mora Cordova, Angel

    2014-06-11

    We present unified predictions for the crack onset strain, evolution of crack density, and changes in electrical resistance in indium tin oxide/polymer thin films under tensile loading. We propose a damage mechanics model to quantify and predict such changes as an alternative to fracture mechanics formulations. Our predictions are obtained by assuming that there are no flaws at the onset of loading as opposed to the assumptions of fracture mechanics approaches. We calibrate the crack onset strain and the damage model based on experimental data reported in the literature. We predict crack density and changes in electrical resistance as a function of the damage induced in the films. We implement our model in the commercial finite element software ABAQUS using a user subroutine UMAT. We obtain fair to good agreement with experiments.

  8. Transverse charge and magnetization densities: Improved chiral predictions down to b = 1 fm

    Energy Technology Data Exchange (ETDEWEB)

    Alarcon, Jose Manuel [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); Hiller Blin, Astrid N. [Johannes Gutenberg Univ., Mainz (Germany); Vicente Vacas, Manuel J. [Spanish National Research Council (CSIC), Valencia (Spain). Univ. of Valencia (UV), Inst. de Fisica Corpuscular; Weiss, Christian [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States)

    2018-03-01

    The transverse charge and magnetization densities provide insight into the nucleon’s inner structure. In the periphery, the isovector components are clearly dominant, and can be computed in a model-independent way by means of a combination of chiral effective field theory (χEFT) and dispersion analysis. With a novel N/D method, we incorporate the pion electromagnetic form factor data into the χEFT calculation, thus taking into account the pion-rescattering effects and the ρ-meson pole. As a consequence, we are able to reliably compute the densities down to distances b ∼ 1 fm, achieving a dramatic improvement over traditional χEFT calculations while remaining predictive and having controlled uncertainties.

  9. Nuclear symmetry energy in density dependent hadronic models

    International Nuclear Information System (INIS)

    Haddad, S.

    2008-12-01

    The density dependence of the symmetry energy and the correlation between parameters of the symmetry energy and the neutron skin thickness in the nucleus ²⁰⁸Pb are investigated in relativistic hadronic models. The dependence of the symmetry energy on density is linear around saturation density. A correlation exists between the neutron skin thickness in ²⁰⁸Pb and the value of the nuclear symmetry energy at saturation density, but not with the slope of the symmetry energy at saturation density. (author)

  10. Propulsion Physics Under the Changing Density Field Model

    Science.gov (United States)

    Robertson, Glen A.

    2011-01-01

    To grow as a spacefaring race, future spaceflight systems will require new propulsion physics: specifically, a propulsion physics model that does not require mass ejection, yet does not limit the high thrust necessary to accelerate within or beyond our solar system and return within a normal work period or lifetime. In 2004, Khoury and Weltman produced a density-dependent cosmology theory they called Chameleon Cosmology because, by its nature, it is hidden within known physics. This theory represents a scalar field within and about an object, even in vacuum. These scalar fields can be viewed as vacuum energy fields with definable densities that permeate all matter, with implications for dark matter/energy and the accelerating expansion of the universe, implying a new force mechanism for propulsion physics. Using Chameleon Cosmology, the author has developed a new propulsion physics model, called the Changing Density Field (CDF) Model. This model relates changes in these density fields to the acceleration of matter within an object; these density changes in turn alter how the object couples to the surrounding density fields. Thrust is achieved by causing a differential in the coupling to these density fields about an object. Since the model indicates that the density of the density field in an object can be changed by internal mass acceleration, even without exhausting mass, the CDF model implies a new propellant-less propulsion physics model

  11. Hybrid neural network for density limit disruption prediction and avoidance on J-TEXT tokamak

    Science.gov (United States)

    Zheng, W.; Hu, F. R.; Zhang, M.; Chen, Z. Y.; Zhao, X. Q.; Wang, X. L.; Shi, P.; Zhang, X. L.; Zhang, X. Q.; Zhou, Y. N.; Wei, Y. N.; Pan, Y.; J-TEXT team

    2018-05-01

    Increasing the plasma density is one of the key methods for achieving an efficient fusion reaction, and high-density operation is one of the hot topics in tokamak plasmas. Density limit disruptions remain an important issue for safe operation, and an effective density limit disruption prediction and avoidance system is the key to avoiding them during long pulse steady state operation. An artificial neural network has been developed for the prediction of density limit disruptions on the J-TEXT tokamak. The neural network has been improved from a simple multi-layer design to a hybrid two-stage structure. The first stage is a custom network which uses time series diagnostics as inputs to predict plasma density, and the second stage is a three-layer feedforward neural network to predict the probability of density limit disruptions. It is found that the hybrid neural network structure, combined with radiation profile information as an input, can significantly improve the prediction performance, especially the average warning time (T_warn). In particular, T_warn is eight times longer than in previous work (Wang et al 2016 Plasma Phys. Control. Fusion 58 055014) (from 5 ms to 40 ms). The success rate for density limit disruptive shots is above 90%, while the false alarm rate for other shots is below 10%. Based on the density limit disruption prediction system and the real-time density feedback control system, an on-line density limit disruption avoidance system has been implemented on the J-TEXT tokamak.

  12. Influence of thermal buoyancy on vertical tube bundle thermal density head predictions under transient conditions

    International Nuclear Information System (INIS)

    Lin, H.C.; Kasza, K.E.

    1984-01-01

    The thermal-hydraulic behavior of an LMFBR system under various types of plant transients is usually studied using one-dimensional (1-D) flow and energy transport models of the system components. Many of the transient events involve the change from a high to a low flow with an accompanying change in temperature of the fluid passing through the components, which can be conducive to significant thermal buoyancy forces. Thermal buoyancy can influence system dynamic energy transport predictions through alterations of flow and thermal distributions, which in turn can influence decay heat removal, system-response time constants, heat transport between primary and secondary systems, and thermal energy rejection at the reactor heat sink, i.e., the steam generator. In this paper, results are presented from a comparison of 1-D model predictions and experimental data for vertical tube bundle overall thermal density head and outlet temperature under transient conditions causing varying degrees of thermal buoyancy. These comparisons are being used to generate insight into how, when, and to what degree thermal buoyancy can cause departures from 1-D model predictions

  13. Improving Frozen Precipitation Density Estimation in Land Surface Modeling

    Science.gov (United States)

    Sparrow, K.; Fall, G. M.

    2017-12-01

    The Office of Water Prediction (OWP) produces high-value water supply and flood risk planning information through the use of operational land surface modeling. Improvements in diagnosing frozen precipitation density will benefit the NWS's meteorological and hydrological services by refining estimates of a significant and vital input into land surface models. A common current practice for handling the density of snow accumulation in a land surface model is to use a standard 10:1 snow-to-liquid-equivalent ratio (SLR). Our research findings suggest the possibility of a more skillful approach for assessing the spatial variability of precipitation density. We developed a 30-year SLR climatology for the coterminous US from version 3.22 of the Global Historical Climatology Network - Daily (GHCN-D) dataset. Our methods followed the approach described by Baxter (2005) to estimate mean climatological SLR values at GHCN-D sites in the US, Canada, and Mexico for the years 1986-2015. In addition to the Baxter criteria, the following refinements were made: tests were performed to eliminate SLR outliers and frequent reports of SLR = 10, a linear SLR vs. elevation trend was fitted to station SLR mean values to remove the elevation trend from the data, and detrended SLR residuals were interpolated using ordinary kriging with a spherical semivariogram model. The elevation values of each station were based on the GMTED 2010 digital elevation model, and the elevation trend in the data was established via linear least squares approximation. The ordinary kriging procedure was used to interpolate the data into gridded climatological SLR estimates for each calendar month at a 0.125 degree resolution. To assess the skill of this climatology, we compared estimates from our SLR climatology with observations from the GHCN-D dataset to consider the potential use of this climatology as a first guess of frozen precipitation density in an operational land surface model. The difference in
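The role the SLR plays as a land surface model input can be shown in two lines: the same snowfall depth maps to quite different liquid-equivalent amounts under the standard 10:1 assumption versus a climatological SLR. A trivial Python sketch (the 15:1 value is illustrative, not from the climatology):

```python
def swe_from_snowfall(snow_depth_mm, slr=10.0):
    """Snow water equivalent (mm) from fresh snowfall depth via a snow-to-liquid ratio."""
    return snow_depth_mm / slr

# 100 mm of fresh snow: standard 10:1 vs a hypothetical climatological 15:1 SLR
swe_standard = swe_from_snowfall(100.0)                  # 10.0 mm of water
swe_climatological = swe_from_snowfall(100.0, slr=15.0)  # about 6.7 mm of water
```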

  14. Model predictive control using fuzzy decision functions

    NARCIS (Netherlands)

    Kaymak, U.; Costa Sousa, da J.M.

    2001-01-01

    Fuzzy predictive control integrates conventional model predictive control with techniques from fuzzy multicriteria decision making, translating the goals and the constraints to predictive control in a transparent way. The information regarding the (fuzzy) goals and the (fuzzy) constraints of the

  15. Combining GPS measurements and IRI model predictions

    International Nuclear Information System (INIS)

    Hernandez-Pajares, M.; Juan, J.M.; Sanz, J.; Bilitza, D.

    2002-01-01

    The free electrons distributed in the ionosphere (between one hundred and thousands of km in height) produce a frequency-dependent effect on Global Positioning System (GPS) signals: a delay in the pseudorange and an advance in the carrier phase. These effects are proportional to the columnar electron density between the satellite and receiver, i.e. the integrated electron density along the ray path. Global ionospheric TEC (total electron content) maps can be obtained with GPS data from a network of ground IGS (International GPS Service) reference stations with an accuracy of a few TEC units. The comparison with the TOPEX TEC, mainly measured over the oceans far from the IGS stations, shows a mean bias and standard deviation of about 2 and 5 TECUs respectively. The discrepancies between the STEC predictions and the observed values show an RMS typically below 5 TECUs (which also includes the alignment code noise). The existence of a growing database of 2-hourly global TEC maps with a resolution of 5x2.5 degrees in longitude and latitude can be used to improve the IRI prediction capability of the TEC. When the IRI predictions and the GPS estimations are compared for a three month period around the Solar Maximum, they are in good agreement for middle latitudes. An overestimation of IRI TEC has been found at the extreme latitudes, the IRI predictions being typically two times higher than the GPS estimations. Finally, local fits of the IRI model can be done by tuning the SSN from STEC GPS observations
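The frequency dependence described above is captured, to first order, by the standard ionospheric term 40.3·TEC/f²: the pseudorange is delayed, and the carrier phase advanced, by the same magnitude. A Python sketch (the GPS L1 frequency is quoted for illustration):

```python
def ionospheric_delay_m(tec_units, freq_hz):
    """First-order ionospheric group delay in metres: 40.3 * TEC / f^2.

    One TEC unit (TECU) = 1e16 electrons/m^2; the carrier phase is advanced
    by the same magnitude that the pseudorange is delayed.
    """
    return 40.3 * (tec_units * 1e16) / freq_hz ** 2

GPS_L1_HZ = 1575.42e6
delay_10_tecu = ionospheric_delay_m(10.0, GPS_L1_HZ)  # roughly 1.6 m at L1
```

The quadratic frequency dependence is what lets dual-frequency receivers solve for TEC directly.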

  16. Predicting and Modeling RNA Architecture

    Science.gov (United States)

    Westhof, Eric; Masquida, Benoît; Jossinet, Fabrice

    2011-01-01

    SUMMARY A general approach for modeling the architecture of large and structured RNA molecules is described. The method exploits the modularity and the hierarchical folding of RNA architecture, which is viewed as the assembly of preformed double-stranded helices defined by Watson-Crick base pairs and RNA modules maintained by non-Watson-Crick base pairs. Despite the extensive molecular neutrality observed in RNA structures, specificity in RNA folding is achieved through global constraints like lengths of helices, coaxiality of helical stacks, and structures adopted at the junctions of helices. The Assemble integrated suite of computer tools allows for sequence and structure analysis as well as interactive modeling by homology or ab initio assembly, with possibilities for fitting within electron density maps. The key local role of non-Watson-Crick pairs guides RNA architecture formation and offers metrics for assessing the accuracy of three-dimensional models in a more useful way than the usual root mean square deviation (RMSD) values. PMID:20504963

  17. Modelling of density limit phenomena in toroidal helical plasmas

    International Nuclear Information System (INIS)

    Itoh, Kimitaka; Itoh, Sanae-I.

    2001-01-01

    The physics of density limit phenomena in toroidal helical plasmas is discussed on the basis of an analytic point model of toroidal plasmas. The combined mechanism of the transport and radiation loss of energy is analyzed, and the achievable density is derived. A scaling law for the density limit is discussed. The dependence of the critical density on the heating power, magnetic field, plasma size and safety factor in the case of L-mode energy confinement is explained. The dynamic evolution of the plasma energy and radiation loss is discussed. Assuming a simple model of density evolution, with a sudden loss of density if the temperature falls below a critical value, a limit cycle oscillation is shown to occur. A condition that divides the limit cycle oscillation from complete radiation collapse is discussed. This model seems to explain the density limit oscillation that has been observed on the Wendelstein 7-AS (W7-AS) stellarator. (author)

  18. Modelling of density limit phenomena in toroidal helical plasmas

    International Nuclear Information System (INIS)

    Itoh, K.; Itoh, S.-I.

    2000-03-01

    The physics of density limit phenomena in toroidal helical plasmas is discussed on the basis of an analytic point model of toroidal plasmas. The combined mechanism of the transport and radiation loss of energy is analyzed, and the achievable density is derived. A scaling law for the density limit is discussed. The dependence of the critical density on the heating power, magnetic field, plasma size and safety factor in the case of L-mode energy confinement is explained. The dynamic evolution of the plasma energy and radiation loss is discussed. Assuming a simple model of density evolution, with a sudden loss of density if the temperature falls below a critical value, a limit cycle oscillation is shown to occur. A condition that divides the limit cycle oscillation from complete radiation collapse is discussed. This model seems to explain the density limit oscillation that has been observed on the W7-AS stellarator. (author)

  19. Kernel density estimation-based real-time prediction for respiratory motion

    International Nuclear Information System (INIS)

    Ruan, Dan

    2010-01-01

    Effective delivery of adaptive radiotherapy requires locating the target with high precision in real time. System latency caused by data acquisition, streaming, processing and delivery control necessitates prediction. Prediction is particularly challenging for highly mobile targets such as thoracic and abdominal tumors undergoing respiration-induced motion. The complexity of the respiratory motion makes it difficult to build and justify explicit models. In this study, we honor the intrinsic uncertainties in respiratory motion and propose a statistical treatment of the prediction problem. Instead of asking for a deterministic covariate-response map and a unique estimated value for the future target position, we aim to obtain a distribution of the future target position (response variable) conditioned on the observed historical sample values (covariate variable). The key idea is to estimate the joint probability distribution (pdf) of the covariate and response variables using an efficient kernel density estimation method. Then, the problem of identifying the distribution of the future target position reduces to identifying the section of the joint pdf corresponding to the observed covariate. Subsequently, estimators are derived based on this estimated conditional distribution. This probabilistic perspective has some distinctive advantages over existing deterministic schemes: (1) it is compatible with potentially inconsistent training samples, i.e., when close covariate variables correspond to dramatically different response values; (2) it is not restricted by any prior structural assumption on the map between the covariate and the response; (3) the two-stage setup allows much freedom in choosing statistical estimates and provides a full nonparametric description of the uncertainty for the resulting estimate. We evaluated the prediction performance on ten patient RPM traces, using the root mean squared difference between the prediction and the observed value normalized by the
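The two-stage idea, estimating a joint pdf by kernel density estimation and then reading off the conditional distribution, can be sketched for a scalar covariate and response. With Gaussian kernels, the conditional mean collapses to a kernel-weighted (Nadaraya-Watson style) average of the training responses. A toy Python illustration; the bandwidth and the sine-wave "trace" are assumptions, not the paper's data:

```python
import math

def conditional_mean(x_train, y_train, x_query, bandwidth=0.05):
    """Conditional mean E[y | x] implied by a product-Gaussian-kernel joint KDE.

    For Gaussian kernels this reduces to a Nadaraya-Watson weighted average
    of the training responses.
    """
    weights = [math.exp(-0.5 * ((x_query - xi) / bandwidth) ** 2) for xi in x_train]
    return sum(w * yi for w, yi in zip(weights, y_train)) / sum(weights)

# Toy periodic "trace" standing in for a respiratory signal
xs = [i / 50.0 for i in range(50)]
ys = [math.sin(2 * math.pi * x) for x in xs]
prediction = conditional_mean(xs, ys, x_query=0.25)  # near the crest of the cycle
```

In the paper's setting the conditional distribution itself, not just its mean, is retained, which is what supplies the uncertainty description noted in point (3).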

  20. Applying deep bidirectional LSTM and mixture density network for basketball trajectory prediction

    Science.gov (United States)

    Zhao, Yu; Yang, Rennong; Chevalier, Guillaume; Shah, Rajiv C.; Romijnders, Rob

    2018-04-01

    Data analytics helps basketball teams to create tactics. However, manual data collection and analytics are costly and ineffective. Therefore, we applied a deep bidirectional long short-term memory (BLSTM) and mixture density network (MDN) approach. This model is not only capable of predicting a basketball trajectory based on real data, but it also can generate new trajectory samples. It is an excellent application to help coaches and players decide when and where to shoot. Its structure is particularly suitable for dealing with time series problems. BLSTM receives forward and backward information at the same time, while stacking multiple BLSTMs further increases the learning ability of the model. Combined with BLSTMs, MDN is used to generate a multi-modal distribution of outputs. Thus, the proposed model can, in principle, represent arbitrary conditional probability distributions of output variables. We tested our model with two experiments on three-pointer datasets from NBA SportVu data. In the hit-or-miss classification experiment, the proposed model outperformed other models in terms of the convergence speed and accuracy. In the trajectory generation experiment, eight model-generated trajectories at a given time closely matched real trajectories.
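The multi-modal output distribution produced by an MDN head is a mixture of Gaussians whose weights, means, and spreads are emitted by the network. A minimal Python sketch of evaluating such a predictive density; the parameter values are invented for illustration:

```python
import math

def mixture_pdf(x, weights, means, sigmas):
    """Density of a 1D Gaussian mixture, the kind of output an MDN parameterizes."""
    return sum(w * math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))
               for w, m, s in zip(weights, means, sigmas))

# A bimodal predictive distribution over, say, a trajectory coordinate
weights, means, sigmas = [0.6, 0.4], [0.0, 3.0], [0.5, 0.5]
density_at_mode = mixture_pdf(0.0, weights, means, sigmas)
density_between = mixture_pdf(1.5, weights, means, sigmas)  # valley between modes
```

Because the mixture weights sum to one, the predictive density integrates to one, and sampling from it yields the generated trajectories described in the abstract.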

  1. Toxicity prediction of ionic liquids based on Daphnia magna by using density functional theory

    Science.gov (United States)

    Nu’aim, M. N.; Bustam, M. A.

    2018-04-01

    The toxicity of ionic liquids can be predicted and forecast using density functional theory, a theory that gives researchers a substantial tool for computing the quantum state of atoms, molecules and solids, and for molecular dynamics, also known as computer simulation. The prediction is done using structure-based quantum chemical reactivity descriptors. The ionic liquids and their Log[EC50] data are taken from literature data available in Ismail Hossain’s thesis entitled “Synthesis, Characterization and Quantitative Structure Toxicity Relationship of Imidazolium, Pyridinium and Ammonium Based Ionic Liquids”. Each cation and anion of the ionic liquids was geometry-optimized and its properties calculated, yielding the energies of the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO). From the HOMO and LUMO values, the other toxicity descriptors were obtained according to their formulas. The toxicity descriptors involved are the electrophilicity index, HOMO, LUMO, energy gap, chemical potential, hardness and electronegativity. The interrelation between the descriptors was determined using multiple linear regression (MLR); all descriptors were analyzed and those that were significant were chosen. The selected significant descriptors were used to develop the finest model equation for toxicity prediction of ionic liquids. The model equation was validated with the Log[EC50] data from the literature and the final model equation was developed. Nearly 108 ionic liquids can be predicted from this model equation.
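The descriptors listed above follow from the HOMO and LUMO energies via standard conceptual-DFT (Koopmans-type) formulas. A small Python sketch; the orbital energies used below are invented for illustration, not values from the thesis:

```python
def reactivity_descriptors(e_homo, e_lumo):
    """Conceptual-DFT reactivity descriptors from frontier orbital energies (eV).

    Uses the Koopmans-type identifications I = -E_HOMO and A = -E_LUMO.
    """
    gap = e_lumo - e_homo              # HOMO-LUMO energy gap
    mu = (e_homo + e_lumo) / 2.0       # chemical potential, -(I + A)/2
    eta = gap / 2.0                    # hardness, (I - A)/2
    chi = -mu                          # electronegativity
    omega = mu ** 2 / (2.0 * eta)      # electrophilicity index
    return {"gap": gap, "mu": mu, "eta": eta, "chi": chi, "omega": omega}

# Hypothetical frontier orbital energies for some ionic-liquid cation
d = reactivity_descriptors(e_homo=-7.2, e_lumo=-2.1)
```

These derived quantities are exactly the candidate predictors that the MLR step then screens for significance.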

  2. Predictions of new ABO3 perovskite compounds by combining machine learning and density functional theory

    Science.gov (United States)

    Balachandran, Prasanna V.; Emery, Antoine A.; Gubernatis, James E.; Lookman, Turab; Wolverton, Chris; Zunger, Alex

    2018-04-01

    We apply machine learning (ML) methods to a database of 390 experimentally reported ABO3 compounds to construct two statistical models that predict possible new perovskite materials and possible new cubic perovskites. The first ML model classified the 390 compounds into 254 perovskites and 136 that are not perovskites with a 90% average cross-validation (CV) accuracy; the second ML model further classified the perovskites into 22 known cubic perovskites and 232 known noncubic perovskites with a 94% average CV accuracy. We find that the most effective chemical descriptors affecting our classification include largely geometric constructs such as the A and B Shannon ionic radii, the tolerance and octahedral factors, the A-O and B-O bond lengths, and the A and B Villars' Mendeleev numbers. We then construct an additional list of 625 ABO3 compounds assembled from charge-conserving combinations of A and B atoms absent from our list of known compounds. Then, using the two ML models constructed on the known compounds, we predict that 235 of the 625 exist in a perovskite structure with a confidence greater than 50% and among them that 20 exist in the cubic structure (albeit, the latter with only ~50% confidence). We find that the new perovskites are most likely to occur when the A and B atoms are a lanthanide or actinide, when the A atom is an alkali, alkali earth, or late transition metal atom, or when the B atom is a p-block atom. We also compare the ML findings with the density functional theory calculations and convex hull analyses in the Open Quantum Materials Database (OQMD), which predicts the T = 0 K ground-state stability of all the ABO3 compounds. We find that OQMD predicts 186 of 254 of the perovskites in the experimental database to be thermodynamically stable within 100 meV/atom of the convex hull and predicts 87 of the 235 ML-predicted perovskite compounds to be thermodynamically stable within 100 meV/atom of the convex hull, including 6 of these to
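Two of the geometric descriptors named above, the Goldschmidt tolerance factor and the octahedral factor, are simple functions of the Shannon ionic radii. A Python sketch using the commonly quoted Shannon radii for SrTiO3 as a worked example:

```python
import math

def tolerance_factor(r_a, r_b, r_o=1.40):
    """Goldschmidt tolerance factor t = (r_A + r_O) / (sqrt(2) * (r_B + r_O))."""
    return (r_a + r_o) / (math.sqrt(2.0) * (r_b + r_o))

def octahedral_factor(r_b, r_o=1.40):
    """Octahedral factor r_B / r_O; values near or below ~0.41 disfavour BO6 octahedra."""
    return r_b / r_o

# SrTiO3 with Shannon radii (angstroms): Sr2+ (XII) = 1.44, Ti4+ (VI) = 0.605, O2- = 1.40
t = tolerance_factor(r_a=1.44, r_b=0.605)  # close to 1, consistent with a cubic perovskite
mu = octahedral_factor(r_b=0.605)
```

Values of t near 1 favour the cubic structure, which is why these two ratios carry so much weight in the classifiers described above.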

  3. A Weakly Nonlinear Model for the Damping of Resonantly Forced Density Waves in Dense Planetary Rings

    Science.gov (United States)

    Lehmann, Marius; Schmidt, Jürgen; Salo, Heikki

    2016-10-01

    In this paper, we address the stability of resonantly forced density waves in dense planetary rings. Goldreich & Tremaine have already argued that density waves might be unstable, depending on the relationship between the ring’s viscosity and the surface mass density. In a recent paper (Schmidt et al.), we pointed out that when, within a fluid description of the ring dynamics, the criterion for viscous overstability is satisfied, forced spiral density waves become unstable as well. In this case, linear theory fails to describe the damping, but the nonlinearity of the underlying equations guarantees a finite amplitude and eventually a damping of the wave. We apply the multiple scale formalism to derive a weakly nonlinear damping relation from a hydrodynamical model. This relation describes the resonant excitation and nonlinear viscous damping of spiral density waves in a vertically integrated fluid disk with density-dependent transport coefficients. The model consistently predicts density waves to be (linearly) unstable in a ring region where the conditions for viscous overstability are met. Sufficiently far away from the Lindblad resonance, the surface mass density perturbation is predicted to saturate to a constant value due to nonlinear viscous damping. The wave damping lengths predicted by the model depend on certain input parameters, such as the distance to the threshold for viscous overstability in parameter space and the ground state surface mass density.

  4. Intrinsic Density Matrices of the Nuclear Shell Model

    International Nuclear Information System (INIS)

    Deveikis, A.; Kamuntavichius, G.

    1996-01-01

    A new method for calculation of shell model intrinsic density matrices, defined as two-particle density matrices integrated over the centre-of-mass position vector of two last particles and complemented with isospin variables, has been developed. The intrinsic density matrices obtained are completely antisymmetric, translation-invariant, and do not employ a group-theoretical classification of antisymmetric states. They are used for exact realistic density matrix expansion within the framework of the reduced Hamiltonian method. The procedures based on precise arithmetic for calculation of the intrinsic density matrices that involve no numerical diagonalization or orthogonalization have been developed and implemented in the computer code. (author). 11 refs., 2 tabs

  5. Bulk Density Prediction for Histosols and Soil Horizons with High Organic Matter Content

    Directory of Open Access Journals (Sweden)

    Sidinei Julio Beutler

    ABSTRACT Bulk density (Bd) can easily be predicted from other data using pedotransfer functions (PTF). The present study developed two PTFs (PTF1 and PTF2) for Bd prediction in Brazilian organic soils and horizons and compared their performance with nine previously published equations. Samples of 280 organic soil horizons used to develop the PTFs, each containing at least 80 g kg-1 total carbon content (TOC), were obtained from different regions of Brazil. The PTFs were developed with the multiple linear stepwise regression technique, and all the equations were validated using an independent data set. Data were transformed using Box-Cox to meet the assumptions of the regression models. For validation of PTF1 and PTF2, the coefficient of determination (R2) was 0.47 and 0.37, the mean error -0.04 and 0.10, and the root mean square error 0.22 and 0.26, respectively. The best performance was obtained for the PTF1, PTF2, Hollis, and Honeysett equations. The PTF1 equation is recommended when clay content data are available but, considering that such data are scarce for organic soils, the PTF2, Hollis, and Honeysett equations are the most suitable because they use TOC as a predictor variable. Considering the particular characteristics of organic soils and the environmental context in which they are formed, the equations developed showed good accuracy in predicting Bd compared with already existing equations.

  6. Genomic prediction of complex human traits: relatedness, trait architecture and predictive meta-models

    Science.gov (United States)

    Spiliopoulou, Athina; Nagy, Reka; Bermingham, Mairead L.; Huffman, Jennifer E.; Hayward, Caroline; Vitart, Veronique; Rudan, Igor; Campbell, Harry; Wright, Alan F.; Wilson, James F.; Pong-Wong, Ricardo; Agakov, Felix; Navarro, Pau; Haley, Chris S.

    2015-01-01

    We explore the prediction of individuals' phenotypes for complex traits using genomic data. We compare several widely used prediction models, including Ridge Regression, LASSO and Elastic Nets estimated from cohort data, and polygenic risk scores constructed using published summary statistics from genome-wide association meta-analyses (GWAMA). We evaluate the interplay between relatedness, trait architecture and optimal marker density, by predicting height, body mass index (BMI) and high-density lipoprotein level (HDL) in two data cohorts, originating from Croatia and Scotland. We empirically demonstrate that dense models are better when all genetic effects are small (height and BMI) and target individuals are related to the training samples, while sparse models predict better in unrelated individuals and when some effects have moderate size (HDL). For HDL sparse models achieved good across-cohort prediction, performing similarly to the GWAMA risk score and to models trained within the same cohort, which indicates that, for predicting traits with moderately sized effects, large sample sizes and familial structure become less important, though still potentially useful. Finally, we propose a novel ensemble of whole-genome predictors with GWAMA risk scores and demonstrate that the resulting meta-model achieves higher prediction accuracy than either model on its own. We conclude that although current genomic predictors are not accurate enough for diagnostic purposes, performance can be improved without requiring access to large-scale individual-level data. Our methodologically simple meta-model is a means of performing predictive meta-analysis for optimizing genomic predictions and can be easily extended to incorporate multiple population-level summary statistics or other domain knowledge. PMID:25918167
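The ensemble "meta-model" idea described above can be illustrated with a toy blend of two predictors: a cohort-trained whole-genome predictor and a GWAMA risk score, combined with a convex weight chosen on a validation set. This is a simplified sketch only; the study's actual meta-model is estimated differently, and all numbers and names below are invented:

```python
# Minimal sketch of predictive meta-modelling: pick the convex combination
# of two predictors that minimizes validation mean squared error.
def blend_weight(y, pred_a, pred_b, steps=101):
    best_w, best_mse = 0.0, float("inf")
    for i in range(steps):
        w = i / (steps - 1)
        mse = sum((yi - (w * a + (1 - w) * b)) ** 2
                  for yi, a, b in zip(y, pred_a, pred_b)) / len(y)
        if mse < best_mse:
            best_w, best_mse = w, mse
    return best_w, best_mse

y     = [1.0, 2.0, 3.0, 4.0]   # validation phenotypes (illustrative)
ridge = [1.2, 1.8, 3.1, 4.2]   # dense whole-genome predictor (illustrative)
gwama = [0.8, 2.3, 2.7, 3.8]   # risk score from summary statistics (illustrative)
w, mse = blend_weight(y, ridge, gwama)
print(w, round(mse, 4))
```

When the blended MSE is below that of either input, the meta-model improves on both, mirroring the paper's finding.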

  7. A thermodynamic model for aqueous solutions of liquid-like density

    Energy Technology Data Exchange (ETDEWEB)

    Pitzer, K.S.

    1987-06-01

    The paper describes a model for the prediction of the thermodynamic properties of multicomponent aqueous solutions and discusses its applications. The model was initially developed for solutions near room temperature, but has been found to be applicable to aqueous systems up to 300 °C or slightly higher. A liquid-like density and relatively small compressibility are assumed. A typical application is the prediction of the equilibrium between an aqueous phase (brine) and one or more solid phases (minerals). (ACR)

  8. Predictive integrated modelling for ITER scenarios

    International Nuclear Information System (INIS)

    Artaud, J.F.; Imbeaux, F.; Aniel, T.; Basiuk, V.; Eriksson, L.G.; Giruzzi, G.; Hoang, G.T.; Huysmans, G.; Joffrin, E.; Peysson, Y.; Schneider, M.; Thomas, P.

    2005-01-01

    The uncertainty in the prediction of ITER scenarios is evaluated. Two transport models that have been extensively validated against the multi-machine database are used to compute the transport coefficients. The first model is GLF23; the second, called Kiauto, is a model in which the profile of the diffusion coefficient is a gyro-Bohm-like analytical function, renormalized in order to obtain profiles consistent with a given global energy confinement scaling. The CRONOS package of codes is used; it gives access to the dynamics of the discharge and allows study of the interplay between heat transport, current diffusion and sources. The main motivation of this work is to study the influence of parameters such as plasma current and heat, density, impurity and toroidal momentum transport. We can draw the following conclusions: 1) the target Q = 10 can be obtained in the ITER hybrid scenario at Ip = 13 MA, using either the DS03 two-term scaling or the GLF23 model based on the same pedestal; 2) at Ip = 11.3 MA, Q = 10 can be reached only by assuming a very peaked pressure profile and a low pedestal; 3) at fixed Greenwald fraction, Q increases with density peaking; 4) achieving a stationary q-profile with q > 1 requires a large non-inductive current fraction (80%) that could be provided by 20 to 40 MW of LHCD; and 5) owing to the high temperature, q-profile penetration is delayed and q = 1 is reached at about 600 s in the ITER hybrid scenario at Ip = 13 MA, in the absence of active q-profile control. (A.C.)

  9. Conditional density estimation using fuzzy GARCH models

    NARCIS (Netherlands)

    Almeida, R.J.; Bastürk, N.; Kaymak, U.; Costa Sousa, da J.M.; Kruse, R.; Berthold, M.R.; Moewes, C.; Gil, M.A.; Grzegorzewski, P.; Hryniewicz, O.

    2013-01-01

    Abstract. Time series data exhibit complex behavior, including non-linearity and path-dependency. This paper proposes a flexible fuzzy GARCH model that can capture different properties of data, such as skewness, fat tails and multimodality, in one single model. Furthermore, additional information and

  10. NOx, Soot, and Fuel Consumption Predictions under Transient Operating Cycle for Common Rail High Power Density Diesel Engines

    Directory of Open Access Journals (Sweden)

    N. H. Walke

    2016-01-01

    Full Text Available Diesel engines presently face the challenge of controlling NOx and soot emissions over transient cycles, both to meet stricter emission norms and to control emissions during field operation. Development of a simulation tool for NOx and soot emissions prediction on transient operating cycles has therefore become an important objective, since it can significantly reduce the experimentation time and cost required for tuning these emissions. Hence, in this work, a comprehensive predictive 0D model has been formulated by selecting and coupling appropriate combustion and emissions models to engine cycle models. The selected combustion and emissions models were further modified to improve their prediction accuracy over the full operating zone. Responses of the combustion and emissions models have been validated against load and start-of-injection changes. Model-predicted transient fuel consumption, air handling system parameters, and NOx and soot emissions are in good agreement with measured data from a turbocharged high power density common rail engine over the nonroad transient cycle (NRTC). It can be concluded that 0D models can be used for prediction of transient emissions on modern engines. How the formulated approach can be extended to transient emissions prediction for other applications and fuels is also discussed.

  11. Prediction of Reduction Potentials of Copper Proteins with Continuum Electrostatics and Density Functional Theory.

    Science.gov (United States)

    Fowler, Nicholas J; Blanford, Christopher F; Warwicker, Jim; de Visser, Sam P

    2017-11-02

    Blue copper proteins, such as azurin, show dramatic changes in Cu2+/Cu+ reduction potential upon mutation over the full physiological range. Hence, they have important functions in electron transfer and oxidation chemistry and have applications in industrial biotechnology. The details of what determines these reduction potential changes upon mutation are still unclear. Moreover, it has been difficult to model and predict the reduction potential of azurin mutants, and currently no unique procedure or workflow pattern exists. Furthermore, high-level computational methods can be accurate but are too time consuming for practical use. In this work, a novel approach for calculating reduction potentials of azurin mutants is shown, based on a combination of continuum electrostatics, density functional theory and empirical hydrophobicity factors. Our method accurately reproduces experimental reduction potential changes of 30 mutants with respect to wild type within experimental error and highlights the factors contributing to the reduction potential change. Finally, reduction potentials are predicted for a series of 124 new mutants that have not yet been investigated experimentally. Several mutants are identified that are located well over 10 Å from the copper center yet change the reduction potential by more than 85 mV. The work shows that secondary coordination sphere mutations mostly lead to long-range electrostatic changes and hence can be modeled accurately with continuum electrostatics. © 2017 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.

  12. Predicting microRNA precursors with a generalized Gaussian components based density estimation algorithm

    Directory of Open Access Journals (Sweden)

    Wu Chi-Yeh

    2010-01-01

    Full Text Available Abstract Background MicroRNAs (miRNAs are short non-coding RNA molecules, which play an important role in post-transcriptional regulation of gene expression. There have been many efforts to discover miRNA precursors (pre-miRNAs over the years. Recently, ab initio approaches have attracted more attention because they do not depend on homology information and provide broader applications than comparative approaches. Kernel based classifiers such as support vector machine (SVM are extensively adopted in these ab initio approaches due to the prediction performance they achieved. On the other hand, logic based classifiers such as decision tree, of which the constructed model is interpretable, have attracted less attention. Results This article reports the design of a predictor of pre-miRNAs with a novel kernel based classifier named the generalized Gaussian density estimator (G2DE based classifier. The G2DE is a kernel based algorithm designed to provide interpretability by utilizing a few but representative kernels for constructing the classification model. The performance of the proposed predictor has been evaluated with 692 human pre-miRNAs and has been compared with two kernel based and two logic based classifiers. The experimental results show that the proposed predictor is capable of achieving prediction performance comparable to those delivered by the prevailing kernel based classification algorithms, while providing the user with an overall picture of the distribution of the data set. Conclusion Software predictors that identify pre-miRNAs in genomic sequences have been exploited by biologists to facilitate molecular biology research in recent years. The G2DE employed in this study can deliver prediction accuracy comparable with the state-of-the-art kernel based machine learning algorithms. 
Furthermore, biologists can obtain valuable insights about the different characteristics of the sequences of pre-miRNAs with the models generated by the G2DE.
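The interpretability argument above — representing each class by a few Gaussian components and classifying by which class density is higher — can be sketched in a greatly simplified form. This is not the G2DE algorithm itself (which fits multivariate generalized Gaussians); it is a one-dimensional, one-component-per-class stand-in on invented feature values:

```python
# Toy density-based classifier: one 1-D Gaussian per class, classify by
# comparing class-conditional densities. The feature and values are
# hypothetical, not real pre-miRNA data.
import math

def fit_gaussian(xs):
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / len(xs)
    return mu, var

def density(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

pos = [0.55, 0.60, 0.65, 0.58]   # e.g. a stem base-pairing fraction (toy)
neg = [0.30, 0.25, 0.35, 0.28]
g_pos, g_neg = fit_gaussian(pos), fit_gaussian(neg)

def classify(x):
    return "pre-miRNA" if density(x, *g_pos) > density(x, *g_neg) else "other"

print(classify(0.62), classify(0.27))
```

The fitted means and variances are directly inspectable, which is the kind of "overall picture of the data" the abstract credits to G2DE.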

  13. Whole-brain grey matter density predicts balance stability irrespective of age and protects older adults from falling.

    Science.gov (United States)

    Boisgontier, Matthieu P; Cheval, Boris; van Ruitenbeek, Peter; Levin, Oron; Renaud, Olivier; Chanal, Julien; Swinnen, Stephan P

    2016-03-01

    Functional and structural imaging studies have demonstrated the involvement of the brain in balance control. Nevertheless, how decisive grey matter density and white matter microstructural organisation are in predicting balance stability, and especially when linked to the effects of ageing, remains unclear. Standing balance was tested on a platform moving at different frequencies and amplitudes in 30 young and 30 older adults, with eyes open and with eyes closed. Centre of pressure variance was used as an indicator of balance instability. The mean density of grey matter and mean white matter microstructural organisation were measured using voxel-based morphometry and diffusion tensor imaging, respectively. Mixed-effects models were built to analyse the extent to which age, grey matter density, and white matter microstructural organisation predicted balance instability. Results showed that both grey matter density and age independently predicted balance instability. These predictions were reinforced when the level of difficulty of the conditions increased. Furthermore, grey matter predicted balance instability beyond age and at least as consistently as age across conditions. In other words, for balance stability, the level of whole-brain grey matter density is at least as decisive as being young or old. Finally, brain grey matter appeared to be protective against falls in older adults as age increased the probability of losing balance in older adults with low, but not moderate or high grey matter density. No such results were observed for white matter microstructural organisation, thereby reinforcing the specificity of our grey matter findings. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Modelling CO2-Brine Interfacial Tension using Density Gradient Theory

    KAUST Repository

    Ruslan, Mohd Fuad Anwari Che

    2018-03-01

    Knowledge of carbon dioxide (CO2)-brine interfacial tension (IFT) is important for the petroleum industry and for Carbon Capture and Storage (CCS) strategies. In the petroleum industry, CO2-brine IFT is especially important for CO2-based enhanced oil recovery, as it affects phase behavior and fluid transport in porous media. CCS, which involves storing CO2 in geological storage sites, also requires an understanding of CO2-brine IFT, as this parameter affects the quantity of CO2 that can be securely stored in a storage site. Several methods have been used to compute CO2-brine interfacial tension; one of them is the Density Gradient Theory (DGT) approach, in which IFT is computed from the component density distribution across the interface. However, the current model is only applicable to low and medium ionic strength solutions. This limitation arises because the model considers only the increase of IFT due to changes in the bulk phase properties and does not account for the ion distribution at the interface. In this study, a new modelling strategy to compute CO2-brine IFT based on DGT is proposed. In the proposed model, the ion distribution across the interface is accounted for by dividing the interface into two sections, with the saddle point of the tangent plane distance defined as the boundary between them. Electrolyte is assumed to be present only in the second section, which is connected to the bulk liquid phase. Numerical simulations were performed using the proposed approach for single and mixed salt solutions of three salts (NaCl, KCl, and CaCl2), for temperatures of 298 K to 443 K, pressures of 2 MPa to 70 MPa, and ionic strengths of 0.085 mol·kg-1 to 15 mol·kg-1. The simulation results show that the tuned model predicts CO2-brine IFT with good accuracy for all studied cases. Comparison with the current DGT model showed that the proposed approach yields a better match with the experimental data.

  15. Model Prediction Control For Water Management Using Adaptive Prediction Accuracy

    NARCIS (Netherlands)

    Tian, X.; Negenborn, R.R.; Van Overloop, P.J.A.T.M.; Mostert, E.

    2014-01-01

    In the field of operational water management, Model Predictive Control (MPC) has gained popularity owing to its versatility and flexibility. The MPC controller, which takes predictions, time delay and uncertainties into account, can be designed for multi-objective management problems and for

  16. Experimental measurements and prediction of liquid densities for n-alkane mixtures

    International Nuclear Information System (INIS)

    Ramos-Estrada, Mariana; Iglesias-Silva, Gustavo A.; Hall, Kenneth R.

    2006-01-01

    We present experimental liquid densities for n-pentane, n-hexane and n-heptane and their binary mixtures from (273.15 to 363.15) K over the entire composition range (for the mixtures) at atmospheric pressure. A vibrating tube densimeter produces the experimental densities. Also, we present a generalized correlation to predict the liquid densities of n-alkanes and their mixtures. We have combined the principle of congruence with the Tait equation to obtain an equation that uses as variables: temperature, pressure and the equivalent carbon number of the mixture. Also, we present a generalized correlation for the atmospheric liquid densities of n-alkanes. The average absolute percentage deviation of this equation from the literature experimental density values is 0.26%. The Tait equation has an average percentage deviation of 0.15% from experimental density measurements
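The deviation statistics quoted for these correlations can be reproduced with a short helper. This is an illustrative sketch of how an average absolute percentage deviation (AAD%) is computed; the density values below are made up, not the paper's data:

```python
# Average absolute percentage deviation between predicted and experimental
# liquid densities, the figure of merit quoted for the correlations above.
def aad_percent(rho_exp, rho_calc):
    return 100.0 * sum(abs((c - e) / e)
                       for e, c in zip(rho_exp, rho_calc)) / len(rho_exp)

rho_exp  = [626.2, 654.8, 679.5]   # kg m-3, illustrative n-alkane densities
rho_calc = [627.0, 653.9, 680.8]   # correlation-predicted values (illustrative)
print(round(aad_percent(rho_exp, rho_calc), 3))
```

An AAD of 0.26% as reported means the correlation is, on average, within about a quarter of a percent of the measured density.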

  17. Habitat-Based Density Models for Three Cetacean Species off Southern California Illustrate Pronounced Seasonal Differences

    Directory of Open Access Journals (Sweden)

    Elizabeth A. Becker

    2017-05-01

    Full Text Available Managing marine species effectively requires spatially and temporally explicit knowledge of their density and distribution. Habitat-based density models, a type of species distribution model (SDM) that uses habitat covariates to estimate species density and distribution patterns, are increasingly used for marine management and conservation because they provide a tool for assessing potential impacts (e.g., from fishery bycatch, ship strikes, or anthropogenic sound) over a variety of spatial and temporal scales. The abundance and distribution of many pelagic species exhibit substantial seasonal variability, highlighting the importance of predicting density specific to the season of interest. This is particularly true in dynamic regions like the California Current, where significant seasonal shifts in cetacean distribution have been documented at coarse scales. Finer scale (10 km) habitat-based density models were previously developed for many cetacean species occurring in this region, but most models were limited to summer/fall. The objectives of our study were two-fold: (1) develop spatially explicit density estimates for winter/spring to support management applications, and (2) compare model-predicted density and distribution patterns to previously developed summer/fall model results in the context of species ecology. We used a well-established Generalized Additive Modeling framework to develop cetacean SDMs based on 20 California Cooperative Oceanic Fisheries Investigations (CalCOFI) shipboard surveys conducted during winter and spring between 2005 and 2015. Models were fit for short-beaked common dolphin (Delphinus delphis delphis), Dall's porpoise (Phocoenoides dalli), and humpback whale (Megaptera novaeangliae). Model performance was evaluated based on a variety of established metrics, including the percentage of explained deviance, ratios of observed to predicted density, and visual inspection of predicted and observed distributions.
Final models were

  18. A generalized model for estimating the energy density of invertebrates

    Science.gov (United States)

    James, Daniel A.; Csargo, Isak J.; Von Eschen, Aaron; Thul, Megan D.; Baker, James M.; Hayer, Cari-Ann; Howell, Jessica; Krause, Jacob; Letvin, Alex; Chipps, Steven R.

    2012-01-01

    Invertebrate energy density (ED) values are traditionally measured using bomb calorimetry. However, many researchers rely on a few published literature sources for ED values because of the time and sampling constraints of measuring ED with bomb calorimetry. Literature values often do not account for spatial or temporal variability in invertebrate ED, so these values can be unreliable for use in models and other ecological applications. We evaluated the generality of the relationship between invertebrate ED and the proportion of dry-to-wet mass (pDM). We then developed and tested a regression model to predict ED from pDM based on a taxonomically, spatially, and temporally diverse sample of invertebrates representing 28 orders in aquatic (freshwater, estuarine, and marine) and terrestrial (temperate and arid) habitats from 4 continents and 2 oceans. Samples included invertebrates collected in all seasons over the last 19 y. Evaluation of these data revealed a significant relationship between ED and pDM (r2 = 0.96), and use of the model offers substantial cost savings compared with traditional bomb calorimetry approaches. This model should prove useful for a wide range of ecological studies because it is unaffected by taxonomic, seasonal, or spatial variability.
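The regression approach described above — predicting ED from a single easily measured predictor, pDM — can be sketched with ordinary least squares. The data points and fitted coefficients below are invented for illustration and are not the published relationship:

```python
# Illustrative OLS fit of energy density (ED) on proportion of dry-to-wet
# mass (pDM); coefficients here are NOT the paper's published values.
def ols_fit(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx   # (slope, intercept)

pdm = [0.10, 0.15, 0.20, 0.25, 0.30]   # dry/wet mass ratio (illustrative)
ed  = [2.1, 3.0, 4.2, 5.1, 6.0]        # kJ/g wet mass (illustrative)
b, a = ols_fit(pdm, ed)

def predict_ed(p):
    return a + b * p

print(round(b, 2), round(a, 2), round(predict_ed(0.22), 2))
```

Once fitted, only a drying-oven measurement of pDM is needed per sample, which is where the cost saving over bomb calorimetry comes from.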

  19. Multivariate Density Modeling for Retirement Finance

    OpenAIRE

    Rook, Christopher J.

    2017-01-01

    Prior to the financial crisis, mortgage securitization models increased in sophistication, as did products built to insure against losses. Layers of complexity formed upon a foundation that could not support them, and as the foundation crumbled the housing market followed. That foundation was the Gaussian copula, which failed to correctly model failure-time correlations of derivative securities in duress. In retirement, surveys suggest the greatest fear is running out of money and as retirement dec...

  20. Mixing-model Sensitivity to Initial Conditions in Hydrodynamic Predictions

    Science.gov (United States)

    Bigelow, Josiah; Silva, Humberto; Truman, C. Randall; Vorobieff, Peter

    2017-11-01

    Amagat and Dalton mixing models were studied to compare their thermodynamic predictions of shock states. Numerical simulations with the Sandia National Laboratories shock hydrodynamics code CTH modeled University of New Mexico (UNM) shock tube laboratory experiments shocking a 1:1 molar mixture of helium (He) and sulfur hexafluoride (SF6). Five input parameters were varied for sensitivity analysis: driver section pressure, driver section density, test section pressure, test section density, and mixture ratio (mole fraction). We show via incremental Latin hypercube sampling (LHS) analysis that significant differences exist between Amagat and Dalton mixing-model predictions. The differences observed in predicted shock speeds, temperatures, and pressures grow more pronounced with higher shock speeds. Supported by NNSA Grant DE-0002913.
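The Latin hypercube sampling used for the sensitivity study stratifies each input parameter into N equal-probability intervals and randomly pairs the strata, so every interval of every parameter is sampled exactly once. A minimal sketch (the parameter ranges are placeholders, not the UNM shock-tube values, and only two of the five parameters are shown):

```python
# Minimal Latin hypercube sampler: one random point per stratum per
# parameter, strata shuffled independently per parameter.
import random

def latin_hypercube(n_samples, bounds, seed=0):
    rng = random.Random(seed)
    columns = []
    for lo, hi in bounds:
        pts = [lo + (hi - lo) * (i + rng.random()) / n_samples
               for i in range(n_samples)]   # one point in each stratum
        rng.shuffle(pts)
        columns.append(pts)
    return list(zip(*columns))   # one tuple of parameter values per sample

bounds = [(100e3, 500e3), (1.0, 5.0)]   # e.g. driver pressure [Pa], density ratio (hypothetical)
samples = latin_hypercube(8, bounds)
print(len(samples), len(samples[0]))
```

Each of the 8 samples is then a complete input set for one CTH-style simulation run.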

  1. Prediction of density limit disruptions on the J-TEXT tokamak

    International Nuclear Information System (INIS)

    Wang, S Y; Chen, Z Y; Huang, D W; Tong, R H; Yan, W; Wei, Y N; Ma, T K; Zhang, M; Zhuang, G

    2016-01-01

    Disruption mitigation is essential for the next generation of tokamaks, and prediction of plasma disruptions is the key to disruption mitigation. A neural network combining eight input signals has been developed to predict density limit disruptions on the J-TEXT tokamak. An optimized training method is proposed that improves the prediction performance. The resulting network has been tested on 64 disruptive shots and 205 non-disruptive shots. A successful alarm rate of 82.8% with a false alarm rate of 12.3% is achieved at 4.8 ms prior to the current spike of the disruption. This indicates that physical parameters beyond the current physical scaling should be considered when predicting the density limit. It was also found that the critical density for disruption can be predicted several tens of milliseconds in advance in most cases. Furthermore, if the network is used for real-time density feedback control, more than 95% of density limit disruptions could be avoided by setting a proper threshold. (paper)
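The two quoted figures of merit can be computed directly from per-shot alarm outcomes: a disruptive shot counts as a success if the predictor fires in time, and a non-disruptive shot counts as a false alarm if the predictor fires at all. A sketch with toy stand-ins for the 64 + 205 test shots (the counts below are illustrative, not the paper's shot-by-shot results):

```python
# Successful alarm rate over disruptive shots and false alarm rate over
# non-disruptive shots, from binary per-shot alarm flags.
def alarm_rates(disruptive_alarms, nondisruptive_alarms):
    success = sum(disruptive_alarms) / len(disruptive_alarms)
    false = sum(nondisruptive_alarms) / len(nondisruptive_alarms)
    return success, false

disr = [1] * 53 + [0] * 11    # 53 of 64 disruptive shots alarmed in time (toy)
nond = [1] * 25 + [0] * 180   # 25 of 205 non-disruptive shots falsely alarmed (toy)
s, f = alarm_rates(disr, nond)
print(round(100 * s, 1), round(100 * f, 1))
```

Raising the alarm threshold trades the false alarm rate against the successful alarm rate, which is the tuning knob behind the >95% avoidance claim.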

  2. Modeling of branching density and branching distribution in low-density polyethylene polymerization

    NARCIS (Netherlands)

    Kim, D.M.; Iedema, P.D.

    2008-01-01

    Low-density polyethylene (ldPE) is a general-purpose polymer with various applications. For this reason, many publications can be found on ldPE polymerization modeling. However, scission reactions and the branching distribution have only recently been considered in modeling studies due to difficulties

  3. Iowa calibration of MEPDG performance prediction models.

    Science.gov (United States)

    2013-06-01

    This study aims to improve the accuracy of AASHTO Mechanistic-Empirical Pavement Design Guide (MEPDG) pavement performance predictions for Iowa pavement systems through local calibration of MEPDG prediction models. A total of 130 representative p...

  4. Axial asymmetry of excited heavy nuclei as essential feature for the prediction of level densities

    Energy Technology Data Exchange (ETDEWEB)

    Grosse, Eckart [Institute of Nuclear and Particle Physics, Technische Universitaet Dresden (Germany); Junghans, Arnd R. [Institute of Radiation Physics, Helmholtz-Zentrum Dresden-Rossendorf (Germany); Massarczyk, Ralph [Los Alamos National Laboratory, New Mexico (United States)

    2016-07-01

    In previous studies a considerable improvement of predictions for neutron resonance spacings by a modified back-shifted Fermi-gas model (BSFM) was found. The modifications closely follow the basic principles for a gas of weakly bound Fermions as given in text books of statistical physics: (1) Phase transition at a temperature defined by theory, (2) pairing condensation independent of A, and (3) proportionality of entropy to temperature (and thus the level density parameter) fixed by the Fermi energy. For finite nuclei we add: (4) the back-shift energy is defined by shell correction and (5) the collective enhancement is enlarged by allowing the axial symmetry to be broken. Nearly no parameter fitting is needed to arrive at a good reproduction of level density information obtained by various methods for a number of nuclei in a wide range of A and E. To that end the modified BSFM is complemented by a constant temperature approximation below the phase transition point. The axial symmetry breaking (5), which is an evidently essential feature, will also be regarded with respect to other observables for heavy nuclei.
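The back-shifted Fermi-gas expression that the modifications above start from can be sketched in its textbook form, where the excitation energy is shifted by a back-shift Δ. The parameter values used here are purely illustrative, not the authors' parametrization:

```python
# Textbook back-shifted Fermi-gas (BSFM) level density:
#   rho(E) = exp(2*sqrt(a*U)) / (12*sqrt(2) * sigma * a**(1/4) * U**(5/4)),
# with back-shifted energy U = E - delta and spin-cutoff parameter sigma.
import math

def bsfm_level_density(E, a, delta, sigma):
    U = E - delta   # back-shifted excitation energy
    if U <= 0:
        return 0.0
    return math.exp(2 * math.sqrt(a * U)) / (
        12 * math.sqrt(2) * sigma * a ** 0.25 * U ** 1.25)

rho = bsfm_level_density(E=8.0, a=12.0, delta=1.0, sigma=5.0)  # MeV-based toy values
print(rho > 0)
```

The modifications in the abstract act on a (via the Fermi energy), on Δ (via shell corrections), and on a collective enhancement factor multiplying this baseline.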

  5. Measurements and predictions of the air distribution systems in high compute density (Internet) data centers

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jinkyun [HIMEC (Hanil Mechanical Electrical Consultants) Ltd., Seoul 150-103 (Korea); Department of Architectural Engineering, Yonsei University, Seoul 120-749 (Korea); Lim, Taesub; Kim, Byungseon Sean [Department of Architectural Engineering, Yonsei University, Seoul 120-749 (Korea)

    2009-10-15

    When equipment power density increases, a critical goal of a data center cooling system is to separate the equipment exhaust air from the equipment intake air in order to prevent the IT server from overheating. Cooling systems for data centers are primarily differentiated according to the way they distribute air. The six combinations of flooded and locally ducted air distribution make up the vast majority of all installations, except fully ducted air distribution methods. Once the air distribution system (ADS) is selected, there are other elements that must be integrated into the system design. In this research, the design parameters and IT environmental aspects of the cooling system were studied with a high heat density data center. CFD simulation analysis was carried out in order to compare the heat removal efficiencies of various air distribution systems. The IT environment of an actual operating data center is measured to validate a model for predicting the effect of different air distribution systems. A method for planning and design of the appropriate air distribution system is described. IT professionals versed in precision air distribution mechanisms, components, and configurations can work more effectively with mechanical engineers to ensure the specification and design of optimized cooling solutions. (author)

  6. Model complexity control for hydrologic prediction

    NARCIS (Netherlands)

    Schoups, G.; Van de Giesen, N.C.; Savenije, H.H.G.

    2008-01-01

    A common concern in hydrologic modeling is overparameterization of complex models given limited and noisy data. This leads to problems of parameter nonuniqueness and equifinality, which may negatively affect prediction uncertainties. A systematic way of controlling model complexity is therefore

  7. Density contrast indicators in cosmological dust models

    Indian Academy of Sciences (India)

    contrast, which may or may not be monotonically increasing with time. We also find that monotonicity seems to be related to the initial conditions of the model, which may be of potential interest in connection with debates regarding gravitational entropy and the arrow of time. 1. Introduction. An important question in ...

  8. Current Density and Continuity in Discretized Models

    Science.gov (United States)

    Boykin, Timothy B.; Luisier, Mathieu; Klimeck, Gerhard

    2010-01-01

    Discrete approaches have long been used in numerical modelling of physical systems in both research and teaching. Discrete versions of the Schrodinger equation employing either one or several basis functions per mesh point are often used by senior undergraduates and beginning graduate students in computational physics projects. In studying…

  9. Model FT631 moisture/density combined gauge

    International Nuclear Information System (INIS)

    Ji Changsong; Dai Zhude; Zhang Jianguo; Zhang Enshang; Huang Jiling; Meng Qingbao

    1990-01-01

    The Model FT631 combined moisture/density gauge has been developed; it yields both parameters of the measured medium (soil), water content and density, in a single measurement. A Chinese patent has been granted for this invention.

  10. Nonparametric volatility density estimation for discrete time models

    NARCIS (Netherlands)

    Es, van Bert; Spreij, P.J.C.; Zanten, van J.H.

    2005-01-01

    We consider discrete time models for asset prices with a stationary volatility process. We aim at estimating the multivariate density of this process at a set of consecutive time instants. A Fourier-type deconvolution kernel density estimator based on the logarithm of the squared process is proposed

  11. Osteoporosis risk prediction for bone mineral density assessment of postmenopausal women using machine learning.

    Science.gov (United States)

    Yoo, Tae Keun; Kim, Sung Kean; Kim, Deok Won; Choi, Joon Yul; Lee, Wan Hyung; Oh, Ein; Park, Eun-Cheol

    2013-11-01

    A number of clinical decision tools for osteoporosis risk assessment have been developed to select postmenopausal women for the measurement of bone mineral density. We developed and validated machine learning models with the aim of more accurately identifying the risk of osteoporosis in postmenopausal women compared to the ability of conventional clinical decision tools. We collected medical records from Korean postmenopausal women based on the Korea National Health and Nutrition Examination Surveys. The training data set was used to construct models based on popular machine learning algorithms such as support vector machines (SVM), random forests, artificial neural networks (ANN), and logistic regression (LR) based on simple surveys. The machine learning models were compared to four conventional clinical decision tools: osteoporosis self-assessment tool (OST), osteoporosis risk assessment instrument (ORAI), simple calculated osteoporosis risk estimation (SCORE), and osteoporosis index of risk (OSIRIS). SVM had significantly better area under the curve (AUC) of the receiver operating characteristic than ANN, LR, OST, ORAI, SCORE, and OSIRIS for the training set. SVM predicted osteoporosis risk with an AUC of 0.827, accuracy of 76.7%, sensitivity of 77.8%, and specificity of 76.0% at total hip, femoral neck, or lumbar spine for the testing set. The significant factors selected by SVM were age, height, weight, body mass index, duration of menopause, duration of breast feeding, estrogen therapy, hyperlipidemia, hypertension, osteoarthritis, and diabetes mellitus. Considering various predictors associated with low bone density, the machine learning methods may be effective tools for identifying postmenopausal women at high risk for osteoporosis.
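The metric used above to compare the machine learning models with the clinical decision tools, the area under the ROC curve (AUC), can be computed from classifier scores with the rank-based formulation. The labels and scores below are invented, not the survey data:

```python
# Rank-based AUC: the probability that a randomly chosen positive case
# scores higher than a randomly chosen negative case (ties count half).
def auc(labels, scores):
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0, 0]                 # 1 = low bone density (toy)
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2, 0.1]   # classifier risk scores (toy)
print(round(auc(labels, scores), 3))
```

An AUC of 0.827, as reported for the SVM, means the model ranks a random at-risk woman above a random not-at-risk woman about 83% of the time.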

  12. Evaluation of the Troxler Model 4640 Thin Lift Nuclear Density Gauge. Research report (Interim)

    International Nuclear Information System (INIS)

    Solaimanian, M.; Holmgreen, R.J.; Kennedy, T.W.

    1990-07-01

The report describes the results of a research study to determine the effectiveness of the Troxler Model 4640 Thin Lift Nuclear Density Gauge. Densities obtained from cores and from the nuclear density gauge on seven construction projects were compared. The projects were either newly constructed or under construction when the tests were performed. A linear regression technique was used to investigate how well the core densities could be predicted from nuclear densities. Correlation coefficients were determined to indicate the degree of correlation between the core and nuclear densities. Using a statistical analysis technique, the range of the mean difference between core and nuclear measurements was established at specified confidence levels for each project. Analysis of the data indicated that the accuracy of the gauge is material dependent. While relatively acceptable results were obtained with limestone mixtures, the gauge did not perform satisfactorily with mixtures containing siliceous aggregate.
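The core-versus-gauge analysis in this record amounts to an ordinary least-squares fit plus a correlation coefficient and a mean difference; a minimal sketch, using made-up density readings rather than the report's measurements:

```python
import numpy as np

# Hypothetical paired readings (g/cm^3), not the report's data.
nuclear = np.array([2.20, 2.25, 2.31, 2.28, 2.35, 2.40, 2.38])  # gauge readings
core    = np.array([2.22, 2.24, 2.33, 2.30, 2.34, 2.43, 2.40])  # lab core densities

slope, intercept = np.polyfit(nuclear, core, 1)   # predict core from nuclear
r = np.corrcoef(nuclear, core)[0, 1]              # correlation coefficient
mean_diff = (core - nuclear).mean()               # bias between the two methods

print(f"core ≈ {intercept:.3f} + {slope:.3f}·nuclear, r = {r:.3f}")
```

A confidence interval on `mean_diff`, computed per project, corresponds to the study's "range of the mean difference" check.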

  13. Nonlinear chaotic model for predicting storm surges

    Directory of Open Access Journals (Sweden)

    M. Siek

    2010-09-01

Full Text Available This paper addresses the use of methods from nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables. We implemented univariate and multivariate chaotic models with direct and multi-step prediction techniques and optimized these models using an exhaustive search method. The resulting models were tested on predicting storm surge dynamics for different stormy conditions in the North Sea and compared to neural network models. The results show that the chaotic models can generally provide reliable and accurate short-term storm surge predictions.
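The phase-space reconstruction step the record relies on is a time-delay (Takens) embedding of the scalar series; a sketch with an illustrative toy signal, embedding dimension, and delay (not the paper's values):

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Stack delay vectors (x[t], x[t+tau], ..., x[t+(dim-1)*tau]) as rows."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

t = np.linspace(0, 20 * np.pi, 2000)
surge = np.sin(t) + 0.3 * np.sin(2.7 * t)   # toy stand-in for a surge record
vecs = delay_embed(surge, dim=3, tau=15)
print(vecs.shape)                            # (1970, 3)
```

The "dynamical neighbors" of a current state are then its nearest neighbors among the rows of `vecs`, on which a local model is fitted to produce the forecast.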

  14. Staying Power of Churn Prediction Models

    NARCIS (Netherlands)

    Risselada, Hans; Verhoef, Peter C.; Bijmolt, Tammo H. A.

    In this paper, we study the staying power of various churn prediction models. Staying power is defined as the predictive performance of a model in a number of periods after the estimation period. We examine two methods, logit models and classification trees, both with and without applying a bagging

  15. Predictive user modeling with actionable attributes

    NARCIS (Netherlands)

    Zliobaite, I.; Pechenizkiy, M.

    2013-01-01

    Different machine learning techniques have been proposed and used for modeling individual and group user needs, interests and preferences. In the traditional predictive modeling instances are described by observable variables, called attributes. The goal is to learn a model for predicting the target

  16. EFFICIENT PREDICTIVE MODELLING FOR ARCHAEOLOGICAL RESEARCH

    OpenAIRE

    Balla, A.; Pavlogeorgatos, G.; Tsiafakis, D.; Pavlidis, G.

    2014-01-01

    The study presents a general methodology for designing, developing and implementing predictive modelling for identifying areas of archaeological interest. The methodology is based on documented archaeological data and geographical factors, geospatial analysis and predictive modelling, and has been applied to the identification of possible Macedonian tombs’ locations in Northern Greece. The model was tested extensively and the results were validated using a commonly used predictive gain, which...

  17. Social Inclusion Predicts Lower Blood Glucose and Low-Density Lipoproteins in Healthy Adults.

    Science.gov (United States)

    Floyd, Kory; Veksler, Alice E; McEwan, Bree; Hesse, Colin; Boren, Justin P; Dinsmore, Dana R; Pavlich, Corey A

    2017-08-01

    Loneliness has been shown to have direct effects on one's personal well-being. Specifically, a greater feeling of loneliness is associated with negative mental health outcomes, negative health behaviors, and an increased likelihood of premature mortality. Using the neuroendocrine hypothesis, we expected social inclusion to predict decreases in both blood glucose levels and low-density lipoproteins (LDLs) and increases in high-density lipoproteins (HDLs). Fifty-two healthy adults provided self-report data for social inclusion and blood samples for hematological tests. Results indicated that higher social inclusion predicted lower levels of blood glucose and LDL, but had no effect on HDL. Implications for theory and practice are discussed.

  18. Unified model of nuclear mass and level density formulas

    International Nuclear Information System (INIS)

    Nakamura, Hisashi

    2001-01-01

The objective of the present work is to obtain a unified description of nuclear shell, pairing and deformation effects for both ground-state masses and level densities, and to find a new set of parameter systematics for both the mass and the level density formulas on the basis of a model for new single-particle state densities. In this model, an analytical expression is adopted for the anisotropic harmonic oscillator spectra, but the shell-pairing correlations are introduced in a new way. (author)

  19. Robust predictions of the interacting boson model

    International Nuclear Information System (INIS)

    Casten, R.F.; Koeln Univ.

    1994-01-01

While most recognized for its symmetries and algebraic structure, the IBA model has other less-well-known but equally intrinsic properties which give unavoidable, parameter-free predictions. These predictions concern central aspects of low-energy nuclear collective structure. This paper outlines these "robust" predictions and compares them with the data

  20. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a r...

  1. Spatially explicit modeling of lesser prairie-chicken lek density in Texas

    Science.gov (United States)

    Timmer, Jennifer M.; Butler, M.J.; Ballard, Warren; Boal, Clint W.; Whitlaw, Heather A.

    2014-01-01

As with many other grassland birds, lesser prairie-chickens (Tympanuchus pallidicinctus) have experienced population declines in the Southern Great Plains. Currently they are proposed for federal protection under the Endangered Species Act. In addition to a history of land-uses that have resulted in habitat loss, lesser prairie-chickens now face a new potential disturbance from energy development. We estimated lek density in the occupied lesser prairie-chicken range of Texas, USA, and modeled anthropogenic and vegetative landscape features associated with lek density. We used an aerial line-transect survey method to count lesser prairie-chicken leks in spring 2010 and 2011 and surveyed 208 randomly selected 51.84-km² blocks. We divided each survey block into 12.96-km² quadrats and summarized landscape variables within each quadrat. We then used hierarchical distance-sampling models to examine the relationship between lek density and anthropogenic and vegetative landscape features and to predict how lek density may change in response to changes on the landscape, such as an increase in energy development. Our best models indicated lek density was related to percent grassland, region (i.e., the northeast or southwest region of the Texas Panhandle), total percentage of grassland and shrubland, paved road density, and active oil and gas well density. Predicted lek density peaked at 0.39 leks/12.96 km² (SE = 0.09) and 2.05 leks/12.96 km² (SE = 0.56) in the northeast and southwest regions of the Texas Panhandle, respectively, which corresponds to approximately 88% and 44% grassland in the northeast and southwest regions. Lek density increased with an increase in total percentage of grassland and shrubland and was greatest in areas with lower densities of paved roads and lower densities of active oil and gas wells. We used the 2 most competitive models to predict lek abundance and estimated 236 leks (CV = 0.138, 95% CI = 177-306 leks) for our sampling area. Our results suggest that...

  2. Chemical theory and modelling through density across length scales

    International Nuclear Information System (INIS)

    Ghosh, Swapan K.

    2016-01-01

    One of the concepts that has played a major role in the conceptual as well as computational developments covering all the length scales of interest in a number of areas of chemistry, physics, chemical engineering and materials science is the concept of single-particle density. Density functional theory has been a versatile tool for the description of many-particle systems across length scales. Thus, in the microscopic length scale, an electron density based description has played a major role in providing a deeper understanding of chemical binding in atoms, molecules and solids. Density concept has been used in the form of single particle number density in the intermediate mesoscopic length scale to obtain an appropriate picture of the equilibrium and dynamical processes, dealing with a wide class of problems involving interfacial science and soft condensed matter. In the macroscopic length scale, however, matter is usually treated as a continuous medium and a description using local mass density, energy density and other related property density functions has been found to be quite appropriate. The basic ideas underlying the versatile uses of the concept of density in the theory and modelling of materials and phenomena, as visualized across length scales, along with selected illustrative applications to some recent areas of research on hydrogen energy, soft matter, nucleation phenomena, isotope separation, and separation of mixture in condensed phase, will form the subject matter of the talk. (author)

  3. Updated climatological model predictions of ionospheric and HF propagation parameters

    International Nuclear Information System (INIS)

    Reilly, M.H.; Rhoads, F.J.; Goodman, J.M.; Singh, M.

    1991-01-01

The prediction performances of several climatological models, including the ionospheric conductivity and electron density model, RADAR C, and the Ionospheric Communications Analysis and Predictions Program, are evaluated for different regions and sunspot number inputs. Particular attention is given to the near-real-time (NRT) predictions associated with single-station updates. It is shown that a dramatic improvement can be obtained by using single-station ionospheric data to update the driving parameters of an ionospheric model for NRT predictions of foF2 and other ionospheric and HF circuit parameters. For middle latitudes, the improvement extends out thousands of kilometers from the update point to points of comparable corrected geomagnetic latitude. 10 refs

  4. Densities of Pure Ionic Liquids and Mixtures: Modeling and Data Analysis

    DEFF Research Database (Denmark)

    Abildskov, Jens; O’Connell, John P.

    2015-01-01

    Our two-parameter corresponding states model for liquid densities and compressibilities has been extended to more pure ionic liquids and to their mixtures with one or two solvents. A total of 19 new group contributions (5 new cations and 14 new anions) have been obtained for predicting pressure...

  5. Density Forecasts of Crude-Oil Prices Using Option-Implied and ARCH-Type Models

    DEFF Research Database (Denmark)

    Tsiaras, Leonidas; Høg, Esben

      The predictive accuracy of competing crude-oil price forecast densities is investigated for the 1994-2006 period. Moving beyond standard ARCH models that rely exclusively on past returns, we examine the benefits of utilizing the forward-looking information that is embedded in the prices...... as for regions and intervals that are of special interest for the economic agent. We find that non-parametric adjustments of risk-neutral density forecasts perform significantly better than their parametric counterparts. Goodness-of-fit tests and out-of-sample likelihood comparisons favor forecast densities...

  6. Bayesian modeling of the mass and density of asteroids

    Science.gov (United States)

    Dotson, Jessie L.; Mathias, Donovan

    2017-10-01

Mass and density are two of the fundamental properties of any object. In the case of near-Earth asteroids, knowledge about the mass of an asteroid is essential for estimating the risk due to (potential) impact and planning possible mitigation options. The density of an asteroid can illuminate its structure: a low density can be indicative of a rubble-pile structure, whereas a higher density can imply a monolith and/or higher metal content. The damage resulting from an impact of an asteroid with Earth depends on its interior structure in addition to its total mass, and as a result, density is a key parameter to understanding the risk of asteroid impact. Unfortunately, measuring the mass and density of asteroids is challenging and often results in measurements with large uncertainties. In the absence of mass/density measurements for a specific object, understanding the range and distribution of likely values can facilitate probabilistic assessments of structure and impact risk. Hierarchical Bayesian models have recently been developed to investigate the mass-radius relationship of exoplanets (Wolfgang, Rogers & Ford 2016) and to probabilistically forecast the mass of bodies large enough to establish hydrostatic equilibrium over a range of 9 orders of magnitude in mass (from planemos to main-sequence stars; Chen & Kipping 2017). Here, we extend this approach to investigate the masses and densities of asteroids. Several candidate Bayesian models are presented, and their performance is assessed relative to a synthetic asteroid population. In addition, a preliminary Bayesian model for probabilistically forecasting the masses and densities of asteroids is presented. The forecasting model is conditioned on existing asteroid data and includes observational errors, hyper-parameter uncertainties and intrinsic scatter.

  7. Extracting falsifiable predictions from sloppy models.

    Science.gov (United States)

    Gutenkunst, Ryan N; Casey, Fergal P; Waterfall, Joshua J; Myers, Christopher R; Sethna, James P

    2007-12-01

    Successful predictions are among the most compelling validations of any model. Extracting falsifiable predictions from nonlinear multiparameter models is complicated by the fact that such models are commonly sloppy, possessing sensitivities to different parameter combinations that range over many decades. Here we discuss how sloppiness affects the sorts of data that best constrain model predictions, makes linear uncertainty approximations dangerous, and introduces computational difficulties in Monte-Carlo uncertainty analysis. We also present a useful test problem and suggest refinements to the standards by which models are communicated.

  8. The prediction of epidemics through mathematical modeling.

    Science.gov (United States)

    Schaus, Catherine

    2014-01-01

Mathematical models may be resorted to in an endeavor to predict the development of epidemics. The SIR model is one such application. Such models remain approximate, however, and await more data in order to come closer to reality.

  9. Calibration of PMIS pavement performance prediction models.

    Science.gov (United States)

    2012-02-01

Improve the accuracy of TxDOT's existing pavement performance prediction models through calibrating these models using actual field data obtained from the Pavement Management Information System (PMIS). : Ensure logical performance superiority patte...

  10. Low bone mineral density in noncholestatic liver cirrhosis: prevalence, severity and prediction

    Directory of Open Access Journals (Sweden)

    Figueiredo Fátima Aparecida Ferreira

    2003-01-01

Full Text Available BACKGROUND: Metabolic bone disease has long been associated with cholestatic disorders. However, data in noncholestatic cirrhosis are relatively scant. AIMS: To determine the prevalence and severity of low bone mineral density in noncholestatic cirrhosis and to investigate whether age, gender, etiology, severity of underlying liver disease, and/or laboratory tests are predictive of the diagnosis. PATIENTS/METHODS: Between March and September 1998, 89 patients with noncholestatic cirrhosis and 20 healthy controls were enrolled in a cross-sectional study. All subjects underwent standard laboratory tests and bone densitometry at the lumbar spine and femoral neck by dual X-ray absorptiometry. RESULTS: Bone mass was significantly reduced at both sites in patients compared to controls. The prevalence of low bone mineral density in noncholestatic cirrhosis, defined by the World Health Organization criteria, was 78% at the lumbar spine and 71% at the femoral neck. Bone density significantly decreased with age at both sites, especially in patients older than 50 years. Bone density was significantly lower in postmenopausal women than in premenopausal women and men at both sites. There was no significant difference in bone mineral density among noncholestatic etiologies. Lumbar spine bone density significantly decreased with the progression of liver dysfunction. No biochemical variable was significantly associated with low bone mineral density. CONCLUSIONS: Low bone mineral density is highly prevalent in patients with noncholestatic cirrhosis. Older patients, postmenopausal women and patients with severe hepatic dysfunction experienced more advanced bone disease. The laboratory tests routinely determined in patients with liver disease did not reliably predict low bone mineral density.
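The WHO criteria referenced in this record classify densitometry results by the T-score, T = (BMD − young-adult mean) / young-adult SD, with T ≤ −2.5 read as osteoporosis and −2.5 < T < −1 as osteopenia. A short sketch; the reference mean and SD below are placeholder values, not the study's densitometer norms:

```python
def t_score(bmd, ref_mean, ref_sd):
    """T-score of a BMD measurement against a young-adult reference."""
    return (bmd - ref_mean) / ref_sd

def who_class(t):
    """WHO densitometric classification from a T-score."""
    if t <= -2.5:
        return "osteoporosis"
    if t < -1.0:
        return "osteopenia"
    return "normal"

# Hypothetical lumbar-spine example (g/cm^2): T = (0.78 - 1.02) / 0.12 = -2.0
t = t_score(bmd=0.78, ref_mean=1.02, ref_sd=0.12)
print(f"T = {t:.1f} -> {who_class(t)}")  # prints "T = -2.0 -> osteopenia"
```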

  11. Viscosity and Liquid Density of Asymmetric n-Alkane Mixtures: Measurement and Modelling

    DEFF Research Database (Denmark)

    Queimada, António J.; Marrucho, Isabel M.; Coutinho, João A.P.

    2005-01-01

Viscosity and liquid density measurements were performed, at atmospheric pressure, on pure and mixed n-decane, n-eicosane, n-docosane, and n-tetracosane from 293.15 K (or above the melting point) up to 343.15 K. The viscosity was determined with a rolling ball viscometer and liquid densities...... with a vibrating U-tube densimeter. Pure component results agreed, on average, with literature values within 0.2% for liquid density and 3% for viscosity. The measured data were used to evaluate the performance of two models for their predictions: the friction theory coupled with the Peng-Robinson equation...... of state and a corresponding states model recently proposed for surface tension, viscosity, vapor pressure, and liquid densities of the series of n-alkanes. Advantages and shortcomings of these models are discussed....

  12. Evaluating Predictive Uncertainty of Hyporheic Exchange Modelling

    Science.gov (United States)

    Chow, R.; Bennett, J.; Dugge, J.; Wöhling, T.; Nowak, W.

    2017-12-01

Hyporheic exchange is the interaction of water between rivers and groundwater, and is difficult to predict. One of the largest contributions to predictive uncertainty for hyporheic fluxes has been attributed to the representation of heterogeneous subsurface properties. This research aims to evaluate which aspect of the subsurface representation - the spatial distribution of hydrofacies or the model for local-scale (within-facies) heterogeneity - most influences the predictive uncertainty. Also, we seek to identify data types that best help reduce this uncertainty. For this investigation, we conduct a modelling study of the Steinlach River meander, in Southwest Germany. The Steinlach River meander is an experimental site established in 2010 to monitor hyporheic exchange at the meander scale. We use HydroGeoSphere, a fully integrated surface water-groundwater model, to model hyporheic exchange and to assess the predictive uncertainty of hyporheic exchange transit times (HETT). A highly parameterized complex model is built and treated as 'virtual reality', which is in turn modelled with simpler subsurface parameterization schemes (Figure). Then, we conduct Monte-Carlo simulations with these models to estimate the predictive uncertainty. Results indicate that: uncertainty in HETT is relatively small for early times and increases with transit times; uncertainty from local-scale heterogeneity is negligible compared to uncertainty in the hydrofacies distribution; introducing more data to a poor model structure may reduce predictive variance, but does not reduce predictive bias; and hydraulic head observations alone cannot constrain the uncertainty of HETT, whereas an estimate of hyporheic exchange flux proves more effective at reducing this uncertainty. Figure: Approach for evaluating predictive model uncertainty. A conceptual model is first developed from the field investigations. A complex model ('virtual reality') is then developed based on that conceptual model

  13. Predicting soil particle density from clay and soil organic matter contents

    DEFF Research Database (Denmark)

    Schjønning, Per; McBride, R.A.; Keller, T.

    2017-01-01

Soil particle density (Dp) is an important soil property for calculating soil porosity expressions. However, many studies assume a constant value, typically 2.65 Mg m−3 for arable, mineral soils. Few models exist for the prediction of Dp from soil organic matter (SOM) content. We hypothesized...
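A common baseline for the idea in this record (not necessarily the fitted model of the paper) treats soil solids as a two-phase mixture of mineral matter (~2.65 Mg/m³) and organic matter (~1.4 Mg/m³), combined harmonically by mass fraction; both phase densities and the mixing rule are assumptions:

```python
def particle_density(som_fraction, d_mineral=2.65, d_som=1.4):
    """Particle density (Mg/m^3) from SOM mass fraction in [0, 1].

    Harmonic (volume-additive) mixing of assumed mineral and organic
    phase densities; an illustrative baseline, not a fitted model.
    """
    return 1.0 / (som_fraction / d_som + (1.0 - som_fraction) / d_mineral)

print(f"{particle_density(0.00):.3f}")  # pure mineral soil -> 2.650
print(f"{particle_density(0.05):.3f}")  # 5% SOM already lowers Dp noticeably
```

Mixing harmonically (adding reciprocal densities by mass) is equivalent to assuming the phase volumes add, which is why even a few percent SOM pulls Dp well below 2.65.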

  14. Case studies in archaeological predictive modelling

    NARCIS (Netherlands)

    Verhagen, Jacobus Wilhelmus Hermanus Philippus

    2007-01-01

    In this thesis, a collection of papers is put together dealing with various quantitative aspects of predictive modelling and archaeological prospection. Among the issues covered are the effects of survey bias on the archaeological data used for predictive modelling, and the complexities of testing

  15. Clinical Prediction Models for Cardiovascular Disease: Tufts Predictive Analytics and Comparative Effectiveness Clinical Prediction Model Database.

    Science.gov (United States)

    Wessler, Benjamin S; Lai Yh, Lana; Kramer, Whitney; Cangelosi, Michael; Raman, Gowri; Lutz, Jennifer S; Kent, David M

    2015-07-01

    Clinical prediction models (CPMs) estimate the probability of clinical outcomes and hold the potential to improve decision making and individualize care. For patients with cardiovascular disease, there are numerous CPMs available although the extent of this literature is not well described. We conducted a systematic review for articles containing CPMs for cardiovascular disease published between January 1990 and May 2012. Cardiovascular disease includes coronary heart disease, heart failure, arrhythmias, stroke, venous thromboembolism, and peripheral vascular disease. We created a novel database and characterized CPMs based on the stage of development, population under study, performance, covariates, and predicted outcomes. There are 796 models included in this database. The number of CPMs published each year is increasing steadily over time. Seven hundred seventeen (90%) are de novo CPMs, 21 (3%) are CPM recalibrations, and 58 (7%) are CPM adaptations. This database contains CPMs for 31 index conditions, including 215 CPMs for patients with coronary artery disease, 168 CPMs for population samples, and 79 models for patients with heart failure. There are 77 distinct index/outcome pairings. Of the de novo models in this database, 450 (63%) report a c-statistic and 259 (36%) report some information on calibration. There is an abundance of CPMs available for a wide assortment of cardiovascular disease conditions, with substantial redundancy in the literature. The comparative performance of these models, the consistency of effects and risk estimates across models and the actual and potential clinical impact of this body of literature is poorly understood. © 2015 American Heart Association, Inc.

  16. Stochastic transport models for mixing in variable-density turbulence

    Science.gov (United States)

    Bakosi, J.; Ristorcelli, J. R.

    2011-11-01

In variable-density (VD) turbulent mixing, where very-different-density materials coexist, the density fluctuations can be an order of magnitude larger than their mean. Density fluctuations are non-negligible in the inertia terms of the Navier-Stokes equation, which has both quadratic and cubic nonlinearities. Very different mixing rates of different materials give rise to large differential accelerations and some fundamentally new physics that is not seen in constant-density turbulence. In VD flows material mixing is active in a sense far stronger than that applied in the Boussinesq approximation of buoyantly-driven flows: the mass fraction fluctuations are coupled to each other and to the fluid momentum. Statistical modeling of VD mixing requires accounting for basic constraints that are not important in the small-density-fluctuation passive-scalar-mixing approximation: the unit sum of mass fractions, bounded sample space, and the highly skewed nature of the probability densities become essential. We derive a transport equation for the joint probability of mass fractions, equivalent to a system of stochastic differential equations, that is consistent with VD mixing in multi-component turbulence and consistently reduces to passive scalar mixing in constant-density flows.

  17. Populational Growth Models Proportional to Beta Densities with Allee Effect

    Science.gov (United States)

    Aleixo, Sandra M.; Rocha, J. Leonel; Pestana, Dinis D.

    2009-05-01

We consider population growth models with Allee effect, proportional to beta densities with shape parameters p and 2, where the dynamical complexity is related to the Malthusian parameter r. For p>2, these models exhibit population dynamics with a natural Allee effect. However, in the case of 1 < p ≤ 2, these models do not include this effect. In order to enforce it, we present some alternative models and investigate their dynamics, presenting some important results.
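One concrete reading of "growth proportional to a beta density with shape parameters p and 2" is the map x → r·x^(p−1)·(1−x); the map form and the parameter values below are illustrative assumptions, not the authors' exact formulation:

```python
def step(x, r, p):
    """One generation of x -> r * x**(p-1) * (1 - x),
    proportional to a Beta(p, 2) density in x."""
    return r * x ** (p - 1.0) * (1.0 - x)

def orbit(x0, r, p, n):
    """Iterate the map n times from x0 and return the trajectory."""
    xs = [x0]
    for _ in range(n):
        xs.append(step(xs[-1], r, p))
    return xs

# With p = 3 (> 2) the map shows an Allee effect: small initial densities
# collapse to extinction while larger ones persist at a positive equilibrium.
low  = orbit(0.20, r=4.5, p=3.0, n=60)[-1]
high = orbit(0.50, r=4.5, p=3.0, n=60)[-1]
print(f"x0=0.20 -> {low:.6f} (extinction), x0=0.50 -> {high:.6f} (persistence)")
```

For these parameters the positive fixed points solve 4.5·x·(1−x) = 1, giving an Allee threshold at x = 1/3 and a stable equilibrium at x = 2/3.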

  18. Moments Method for Shell-Model Level Density

    International Nuclear Information System (INIS)

    Zelevinsky, V; Horoi, M; Sen'kov, R A

    2016-01-01

    The modern form of the Moments Method applied to the calculation of the nuclear shell-model level density is explained and examples of the method at work are given. The calculated level density practically exactly coincides with the result of full diagonalization when the latter is feasible. The method provides the pure level density for given spin and parity with spurious center-of-mass excitations subtracted. The presence and interplay of all correlations leads to the results different from those obtained by the mean-field combinatorics. (paper)

19. Acoustic Velocity and Attenuation in Magnetorheological fluids based on an effective density fluid model

    Directory of Open Access Journals (Sweden)

    Shen Min

    2016-01-01

Full Text Available Magnetorheological fluids (MRFs) represent a class of smart materials whose rheological properties change in response to a magnetic field, resulting in drastic changes of the acoustic impedance. This paper presents an acoustic propagation model that approximates a fluid-saturated porous medium as a fluid with a bulk modulus and an effective density (the effective density fluid model, EDFM) to study acoustic propagation in MRF materials under a magnetic field. The effective density fluid model is derived from Biot's theory. Some minor changes to the theory had to be applied to model both the fluid-like and solid-like states of the MRF material. The attenuation and velocity variation of the MRF are numerically calculated. The calculated results show that, for the MRF material, the attenuation and velocity predicted with this effective density fluid model are in close agreement with previous predictions by Biot's theory. We demonstrate that for MRF acoustic prediction the effective density fluid model is an accurate alternative to full Biot theory and is much simpler to implement.

  20. Global asymptotic stability of density dependent integral population projection models.

    Science.gov (United States)

    Rebarber, Richard; Tenhumberg, Brigitte; Townley, Stuart

    2012-02-01

Many stage-structured density dependent populations with a continuum of stages can be naturally modeled using nonlinear integral projection models. In this paper, we study a trichotomy of global stability results for a class of density dependent systems which includes a Platte thistle model. Specifically, we identify those system parameters for which zero is globally asymptotically stable, parameters for which there is a positive asymptotically stable equilibrium, and parameters for which there is no asymptotically stable equilibrium. Copyright © 2011 Elsevier Inc. All rights reserved.
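A sketch of the model class this record studies (not the paper's Platte thistle kernel): a density dependent integral projection model n_{t+1}(y) = ∫ k(y, x, N_t) n_t(x) dx, discretized on a size mesh and iterated to equilibrium. The Gaussian redistribution kernel and Ricker-type density dependence below are illustrative assumptions:

```python
import numpy as np

m = 200
x = np.linspace(0.0, 1.0, m)             # size mesh on [0, 1]
dx = x[1] - x[0]

# Gaussian redistribution kernel: next-generation sizes center on 0.3 + 0.5*x,
# which keeps the population in the interior of the mesh (illustrative choice).
sd = 0.07
mu = 0.3 + 0.5 * x
G = np.exp(-0.5 * ((x[:, None] - mu[None, :]) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

a, b = 2.0, 2.0                          # low-density growth rate, density dependence
n = np.ones(m)                           # initial size distribution
for _ in range(200):
    N = n.sum() * dx                     # total population size
    n = a * np.exp(-b * N) * (G @ n) * dx

# Because G approximately conserves total mass, the totals follow a Ricker map
# N -> a * N * exp(-b * N) with positive fixed point ln(a)/b.
N_star = n.sum() * dx
print(f"equilibrium total N ≈ {N_star:.3f}")   # ln(2)/2 ≈ 0.347
```

Changing `a` moves the system across the record's trichotomy: for a ≤ 1 the total decays to zero, while for a > 1 (as here) a positive stable equilibrium appears.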

  1. Incorporating uncertainty in predictive species distribution modelling.

    Science.gov (United States)

    Beale, Colin M; Lennon, Jack J

    2012-01-19

    Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which is often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates.

  2. Model Predictive Control for Smart Energy Systems

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus

pumps, heat tanks, electrical vehicle battery charging/discharging, wind farms, power plants). 2. Embed forecasting methodologies for the weather (e.g. temperature, solar radiation), the electricity consumption, and the electricity price in a predictive control system. 3. Develop optimization algorithms....... Chapter 3 introduces Model Predictive Control (MPC) including state estimation, filtering and prediction for linear models. Chapter 4 simulates the models from Chapter 2 with the certainty equivalent MPC from Chapter 3. An economic MPC minimizes the costs of consumption based on real electricity prices...... that determined the flexibility of the units. A predictive control system easily handles constraints, e.g. limitations in power consumption, and predicts the future behavior of a unit by integrating predictions of electricity prices, consumption, and weather variables. The simulations demonstrate the expected...

  3. Classical density functional theory & simulations on a coarse-grained model of aromatic ionic liquids.

    Science.gov (United States)

    Turesson, Martin; Szparaga, Ryan; Ma, Ke; Woodward, Clifford E; Forsman, Jan

    2014-05-14

    A new classical density functional approach is developed to accurately treat a coarse-grained model of room temperature aromatic ionic liquids. Our major innovation is the introduction of charge-charge correlations, which are treated in a simple phenomenological way. We test this theory on a generic coarse-grained model for aromatic RTILs with oligomeric forms for both cations and anions, approximating 1-alkyl-3-methyl imidazoliums and BF₄⁻, respectively. We find that predictions by the new density functional theory for fluid structures at charged surfaces are very accurate, as compared with molecular dynamics simulations, across a range of surface charge densities and lengths of the alkyl chain. Predictions of interactions between charged surfaces are also presented.

  4. The prediction of cyclic proximal humerus fracture fixation failure by various bone density measures.

    Science.gov (United States)

    Varga, Peter; Grünwald, Leonard; Windolf, Markus

    2018-02-22

    Fixation of osteoporotic proximal humerus fractures has remained challenging, but may be improved by careful pre-operative planning. The aim of this study was to investigate how well the failure of locking plate fixation of osteoporotic proximal humerus fractures can be predicted by bone density measures assessed with currently available clinical imaging (realistic case) and with a higher-resolution, higher-quality modality (theoretical best case). Various density measures were correlated with the experimentally assessed number of cycles to construct failure of plated unstable low-density proximal humerus fractures (N = 18). The influence of the density evaluation technique was investigated by comparing local (peri-implant) versus global evaluation regions; HR-pQCT-based versus clinical QCT-based image data; the ipsilateral versus the contralateral side; and bone mineral content (BMC) versus bone mineral density (BMD). All investigated density measures were significantly correlated with the experimental cycles to failure. The best performing clinically feasible parameter was the QCT-based BMC of the contralateral articular cap region, providing significantly better correlation (R² = 0.53) than a previously proposed clinical density measure (R² = 0.30). BMC had consistently, but not significantly, stronger correlations with failure than BMD. The overall best results were obtained with the ipsilateral HR-pQCT-based local BMC (R² = 0.74), which may be used for implant optimization. Strong correlations were found between the corresponding density measures of the two CT image sources, as well as between the two sides. Future studies should investigate whether BMC of the contralateral articular cap region could provide improved prediction of clinical fixation failure compared with previously proposed measures. © 2018 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res.

  5. Evaluating the Predictive Value of Growth Prediction Models

    Science.gov (United States)

    Murphy, Daniel L.; Gaertner, Matthew N.

    2014-01-01

    This study evaluates four growth prediction models--projection, student growth percentile, trajectory, and transition table--commonly used to forecast (and give schools credit for) middle school students' future proficiency. Analyses focused on vertically scaled summative mathematics assessments, and two performance standards conditions (high…

  6. Molecular Model for HNBR with Tunable Cross-Link Density.

    Science.gov (United States)

    Molinari, N; Khawaja, M; Sutton, A P; Mostofi, A A

    2016-12-15

    We introduce a chemically inspired, all-atom model of hydrogenated nitrile butadiene rubber (HNBR) and assess its performance by computing the mass density and glass-transition temperature as a function of cross-link density in the structure. Our HNBR structures are created by a procedure that mimics the real process used to produce HNBR, that is, saturation of the carbon-carbon double bonds in NBR, either by hydrogenation or by cross-linking. The atomic interactions are described by the all-atom "Optimized Potentials for Liquid Simulations" (OPLS-AA). In this paper, first, we assess the use of OPLS-AA in our models, especially using NBR bulk properties, and second, we evaluate the validity of the proposed model for HNBR by investigating mass density and glass transition as a function of the tunable cross-link density. Experimental densities are reproduced within 3% for both elastomers, and qualitatively correct trends in the glass-transition temperature as a function of monomer composition and cross-link density are obtained.

  7. Model predictive control classical, robust and stochastic

    CERN Document Server

    Kouvaritakis, Basil

    2016-01-01

    For the first time, a textbook that brings together classical predictive control with treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered and the state of the art in computationally tractable methods based on uncertainty tubes is presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...

  8. Modeling, robust and distributed model predictive control for freeway networks

    NARCIS (Netherlands)

    Liu, S.

    2016-01-01

    In Model Predictive Control (MPC) for traffic networks, traffic models are crucial since they are used as prediction models for determining the optimal control actions. In order to reduce the computational complexity of MPC for traffic networks, macroscopic traffic models are often used instead of
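
The receding-horizon principle shared by these MPC records can be sketched with a scalar linear prediction model: at each step, a finite-horizon tracking problem is solved and only the first input is applied. All values below are illustrative; this is not the macroscopic traffic model used in the thesis.

```python
import numpy as np

def mpc_step(x0, a, b, x_ref, horizon=10, r=0.01):
    """One receding-horizon step for the scalar model x[k+1] = a*x[k] + b*u[k].

    Minimizes sum_k (x[k] - x_ref)**2 + r*u[k]**2 over the horizon and
    returns only the first input, as MPC does.
    """
    # Prediction model: x[k] = a**k * x0 + sum_{j<k} a**(k-1-j) * b * u[j]
    A = np.array([[b * a ** (k - 1 - j) if j < k else 0.0
                   for j in range(horizon)] for k in range(1, horizon + 1)])
    free = np.array([a ** k * x0 for k in range(1, horizon + 1)])
    H = A.T @ A + r * np.eye(horizon)   # normal equations of the
    g = A.T @ (x_ref - free)            # regularized least-squares problem
    return np.linalg.solve(H, g)[0]

# Closed loop: apply the first input, measure, shift the horizon, repeat.
x, a, b = 5.0, 0.9, 0.5
for _ in range(30):
    x = a * x + b * mpc_step(x, a, b, x_ref=1.0)
```

In the unconstrained case this reduces to regularized least squares; the value of the MPC formulation is that input and state constraints can be added to the same horizon problem.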

  9. Deep Predictive Models in Interactive Music

    OpenAIRE

    Martin, Charles P.; Ellefsen, Kai Olav; Torresen, Jim

    2018-01-01

    Automatic music generation is a compelling task where much recent progress has been made with deep learning models. In this paper, we ask how these models can be integrated into interactive music systems; how can they encourage or enhance the music making of human users? Musical performance requires prediction to operate instruments, and perform in groups. We argue that predictive models could help interactive systems to understand their temporal context, and ensemble behaviour. Deep learning...

  10. Void fraction prediction in two-phase flows independent of the liquid phase density changes

    International Nuclear Information System (INIS)

    Nazemi, E.; Feghhi, S.A.H.; Roshani, G.H.

    2014-01-01

    Gamma-ray densitometry is a frequently used non-invasive method to determine void fraction in two-phase gas-liquid pipe flows. The performance of flow meters using gamma-ray attenuation depends strongly on the fluid properties. Variations of fluid properties such as density, in situations where temperature and pressure fluctuate, would cause significant errors in the determination of the void fraction in two-phase flows. A conventional solution to overcome such an obstacle is periodic recalibration, which is a difficult task. This paper presents a method based on dual-modality densitometry using an Artificial Neural Network (ANN), which offers the advantage of measuring the void fraction independent of liquid phase changes. An experimental setup was implemented to generate the required input data for training the network. ANNs were trained on the registered counts of the transmission and scattering detectors at different liquid phase densities and void fractions. Void fractions were predicted by the ANNs with a mean relative error of less than 0.45% over a liquid density range of 0.735-0.98 g cm⁻³. Applying this method would improve the performance of two-phase flow meters and eliminate the necessity of periodic recalibration. - Highlights: • Void fraction was predicted independent of density changes. • Recorded counts of detectors/void fraction were used as inputs/output of ANN. • ANN eliminated the necessity of recalibration under changing density of two-phase flows
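
The dual-modality ANN idea can be sketched with synthetic data: a small network maps the two detector counts to void fraction. The count models and the network below are toy stand-ins for illustration, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the registered detector counts (not the real physics):
# transmission rises with void fraction; scattering mixes density and voidage.
void = rng.uniform(0.0, 1.0, size=(200, 1))
rho = rng.uniform(0.735, 0.98, size=(200, 1))       # liquid density, g/cm^3
trans = np.exp(-2.0 * (1.0 - void) * rho)           # "transmission" count
scat = 1.5 * rho * (1.0 - 0.5 * void)               # "scattering" count
X, y = np.hstack([trans, scat]), void

# One-hidden-layer network trained by full-batch gradient descent on MSE.
W1 = rng.normal(0.0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
lr, losses = 0.1, []
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)                        # forward pass
    err = h @ W2 + b2 - y
    losses.append(float(np.mean(err ** 2)))
    g = 2.0 * err / len(X)                          # dLoss/dPrediction
    gW2, gb2 = h.T @ g, g.sum(0)                    # backpropagation
    gh = (g @ W2.T) * (1.0 - h ** 2)
    gW1, gb1 = X.T @ gh, gh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
```

The key design point mirrors the paper: the pair of counts carries enough information to disentangle void fraction from liquid density, so one trained mapping avoids recalibration when density drifts.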

  11. A density functional theory based approach for predicting melting points of ionic liquids.

    Science.gov (United States)

    Chen, Lihua; Bryantsev, Vyacheslav S

    2017-02-01

    Accurate prediction of melting points of ILs is important both from a fundamental point of view and from a practical perspective for screening ILs with low melting points and broadening their utilization in a wider temperature range. In this work, we present an ab initio approach to calculate melting points of ILs with known crystal structures and illustrate its application for a series of 11 ILs containing imidazolium/pyrrolidinium cations and halide/polyatomic fluoro-containing anions. The melting point is determined as the temperature at which the Gibbs free energy of fusion is zero. The Gibbs free energy of fusion can be expressed through the use of the Born-Fajans-Haber cycle via the lattice free energy of forming a solid IL from gaseous phase ions and the sum of the solvation free energies of the ions comprising the IL. Dispersion-corrected density functional theory (DFT) involving (semi)local (PBE-D3) and hybrid exchange-correlation (HSE06-D3) functionals is applied to estimate the lattice enthalpy, entropy, and free energy. The ions' solvation free energies are calculated with the SMD-generic-IL solvation model at the M06-2X/6-31+G(d) level of theory under standard conditions. The melting points of ILs computed with the HSE06-D3 functional are in good agreement with the experimental data, with a mean absolute error of 30.5 K and a mean relative error of 8.5%. The model is capable of accurately reproducing the trends in melting points upon variation of alkyl substituents in organic cations and the replacement of one anion by another. The results verify that the lattice energies of ILs containing polyatomic fluoro-containing anions can be approximated reasonably well using the volume-based thermodynamic approach. However, there is no correlation of the computed lattice energies with molecular volume for ILs containing halide anions. Moreover, entropies of solid ILs follow two different linear relationships with molecular volume for halides and polyatomic fluoro-containing anions.
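
The melting-point criterion, ΔG_fus(T_m) = 0, can be sketched as a root-finding problem. The fusion enthalpy and entropy below are hypothetical placeholders standing in for the DFT lattice terms and SMD solvation free energies of the cycle.

```python
# Hypothetical enthalpy and entropy of fusion (kJ/mol and kJ/(mol K));
# in the paper these come from the Born-Fajans-Haber cycle terms.
dH_fus = 18.0
dS_fus = 0.060

def gibbs_fusion(T):
    """Gibbs free energy of fusion; the melting point is its root."""
    return dH_fus - T * dS_fus

def melting_point(lo=100.0, hi=800.0, tol=1e-6):
    """Bisection on gibbs_fusion(T) = 0 over a bracketing interval."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if gibbs_fusion(lo) * gibbs_fusion(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

Tm = melting_point()   # for this linear form, Tm = dH_fus / dS_fus
```

With temperature-dependent lattice and solvation terms the same bisection applies; only `gibbs_fusion` changes.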

  12. Integrated predictive modelling simulations of burning plasma experiment designs

    International Nuclear Information System (INIS)

    Bateman, Glenn; Onjun, Thawatchai; Kritz, Arnold H

    2003-01-01

    Models for the height of the pedestal at the edge of H-mode plasmas (Onjun T et al 2002 Phys. Plasmas 9 5018) are used together with the Multi-Mode core transport model (Bateman G et al 1998 Phys. Plasmas 5 1793) in the BALDUR integrated predictive modelling code to predict the performance of the ITER (Aymar A et al 2002 Plasma Phys. Control. Fusion 44 519), FIRE (Meade D M et al 2001 Fusion Technol. 39 336), and IGNITOR (Coppi B et al 2001 Nucl. Fusion 41 1253) fusion reactor designs. The simulation protocol used in this paper is tested by comparing predicted temperature and density profiles against experimental data from 33 H-mode discharges in the JET (Rebut P H et al 1985 Nucl. Fusion 25 1011) and DIII-D (Luxon J L et al 1985 Fusion Technol. 8 441) tokamaks. The sensitivities of the predictions are evaluated for the burning plasma experimental designs by using variations of the pedestal temperature model that are one standard deviation above and below the standard model. Simulations of the fusion reactor designs are carried out for scans in which the plasma density and auxiliary heating power are varied

  13. Density Functional Theory and Materials Modeling at Atomistic Length Scales

    Directory of Open Access Journals (Sweden)

    Swapan K. Ghosh

    2002-04-01

    We discuss the basic concepts of density functional theory (DFT) as applied to materials modeling in the microscopic, mesoscopic and macroscopic length scales. The picture that emerges is that of a single unified framework for the study of both quantum and classical systems. While for quantum DFT, the central equation is a one-particle Schrödinger-like Kohn-Sham equation, the classical DFT consists of Boltzmann type distributions, both corresponding to a system of noninteracting particles in the field of a density-dependent effective potential, the exact functional form of which is unknown. One therefore approximates the exchange-correlation potential for quantum systems and the excess free energy density functional or the direct correlation functions for classical systems. Illustrative applications of quantum DFT to microscopic modeling of molecular interaction and of classical DFT to mesoscopic modeling of soft condensed matter systems are highlighted.
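
The "one-particle Schrödinger-like Kohn-Sham equation" referred to here has the standard form (atomic units), with the exchange-correlation part of the effective potential being the term that must be approximated:

```latex
\left[-\tfrac{1}{2}\nabla^2 + v_{\mathrm{eff}}[\rho](\mathbf{r})\right]\phi_i(\mathbf{r})
  = \varepsilon_i\,\phi_i(\mathbf{r}),
\qquad
\rho(\mathbf{r}) = \sum_{i\in\mathrm{occ}} |\phi_i(\mathbf{r})|^2,
\qquad
v_{\mathrm{eff}} = v_{\mathrm{ext}} + v_{\mathrm{H}} + v_{\mathrm{xc}}.
```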

  14. Platelet density per monocyte predicts adverse events in patients after percutaneous coronary intervention.

    Science.gov (United States)

    Rutten, Bert; Roest, Mark; McClellan, Elizabeth A; Sels, Jan W; Stubbs, Andrew; Jukema, J Wouter; Doevendans, Pieter A; Waltenberger, Johannes; van Zonneveld, Anton-Jan; Pasterkamp, Gerard; De Groot, Philip G; Hoefer, Imo E

    2016-01-01

    Monocyte recruitment to damaged endothelium is enhanced by platelet binding to monocytes and contributes to vascular repair. Therefore, we studied whether the number of platelets per monocyte affects the recurrence of adverse events in patients after percutaneous coronary intervention (PCI). Platelet-monocytes complexes with high and low median fluorescence intensities (MFI) of the platelet marker CD42b were isolated using cell sorting. Microscopic analysis revealed that a high platelet marker MFI on monocytes corresponded with a high platelet density per monocyte while a low platelet marker MFI corresponded with a low platelet density per monocyte (3.4 ± 0.7 vs 1.4 ± 0.1 platelets per monocyte, P=0.01). Using real-time video microscopy, we observed increased recruitment of high platelet density monocytes to endothelial cells as compared with low platelet density monocytes (P=0.01). Next, we classified PCI scheduled patients (N=263) into groups with high, medium and low platelet densities per monocyte and assessed the recurrence of adverse events. After multivariate adjustment for potential confounders, we observed a 2.5-fold reduction in the recurrence of adverse events in patients with a high platelet density per monocyte as compared with a low platelet density per monocyte [hazard ratio=0.4 (95% confidence interval, 0.2-0.8), P=0.01]. We show that a high platelet density per monocyte increases monocyte recruitment to endothelial cells and predicts a reduction in the recurrence of adverse events in patients after PCI. These findings may imply that a high platelet density per monocyte protects against recurrence of adverse events.

  15. Calculation of the effects of pumping, divertor configuration and fueling on density limit in a tokamak model problem

    International Nuclear Information System (INIS)

    Stacey, W. M.

    2001-01-01

    Several series of model problem calculations have been performed to investigate the predicted effect of pumping, divertor configuration and fueling on the maximum achievable density in diverted tokamaks. Density limitations due to thermal instabilities (confinement degradation and multifaceted axisymmetric radiation from the edge) and to divertor choking are considered. For gas fueling the maximum achievable density is relatively insensitive to pumping (on or off), to the divertor configuration (open or closed), or to the location of the gas injection, although the gas fueling rate required to achieve this maximum achievable density is quite sensitive to these choices. Thermal instabilities are predicted to limit the density at lower values than divertor choking. Higher-density limits are predicted for pellet injection than for gas fueling

  16. Charge and transition densities of samarium isotopes in the interacting Boson model

    International Nuclear Information System (INIS)

    Moinester, M.A.; Alster, J.; Dieperink, A.E.L.

    1982-01-01

    The interacting boson approximation (IBA) model has been used to interpret the ground-state charge distributions and lowest 2⁺ transition charge densities of the even samarium isotopes for A = 144-154. Phenomenological boson transition densities associated with the nucleons comprising the s- and d-bosons of the IBA were determined via a least-squares fit analysis of charge and transition densities in the Sm isotopes. The application of these boson transition densities to higher excited 0⁺ and 2⁺ states of Sm, and to 0⁺ and 2⁺ transitions in neighboring nuclei, such as Nd and Gd, is described. IBA predictions for the transition densities of the three lowest 2⁺ levels of ¹⁵⁴Gd are given and compared to theoretical transition densities based on Hartree-Fock calculations. The deduced quadrupole boson transition densities are in fair agreement with densities derived previously from ¹⁵⁰Nd data. It is also shown how certain moments of the best-fit boson transition densities can simply and successfully describe rms radii, isomer shifts, B(E2) strengths, and transition radii for the Sm isotopes. (orig.)

  17. Osteoprotegerin autoantibodies do not predict low bone mineral density in middle-aged women.

    Science.gov (United States)

    Vaziri-Sani, Fariba; Brundin, Charlotte; Agardh, Daniel

    2017-12-01

    Autoantibodies against osteoprotegerin (OPG) have been associated with osteoporosis. The aim was to develop an immunoassay for OPG autoantibodies and to test their diagnostic usefulness in identifying women from the general population with low bone mineral density. Included were 698 women at mean age 55.1 years (range 50.4-60.6) randomly selected from the general population. Bone mineral density (g/cm²) of the non-dominant wrist was measured by dual-energy X-ray absorptiometry (DXA), and a T-score cutoff was used to define low bone mineral density. Measurements of OPG autoantibodies were carried out by radiobinding assays. Cut-off levels for a positive value were determined from the deviation from normality in the distribution of 398 healthy blood donors, representing the 99.7th percentile. Forty-five of the 698 (6.6%) women were IgG-OPG positive compared with 2 of 398 (0.5%) controls. There was no difference in bone mineral density between IgG-OPG positive (median 0.439 (range 0.315-0.547) g/cm²) and IgG-OPG negative (median 0.435 (range 0.176-0.652) g/cm²) women (p = 0.3956). Furthermore, there was neither a correlation between IgG-OPG levels and bone mineral density (rₛ = 0.1896; p = 0.2068) nor T-score (rₛ = 0.1889; p = 0.2086). Diagnostic sensitivity and specificity of IgG-OPG for low bone mineral density were 5.7% and 92.9%, and positive and negative predictive values were 7.4% and 90.8%, respectively. Elevated OPG autoantibody levels do not predict low bone mineral density in middle-aged women selected from the general population.
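
The reported diagnostic metrics follow from a 2×2 contingency table in the usual way. The counts below are hypothetical illustrations, not the study's raw table.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 contingency table
    (test result vs. true low bone mineral density)."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts for illustration only (not the study's data):
m = diagnostic_metrics(tp=4, fp=41, fn=66, tn=587)
```

A sensitivity this low means the assay misses most women with low bone density, which is why the abstract concludes it has no predictive value despite reasonable specificity.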

  18. Unreachable Setpoints in Model Predictive Control

    DEFF Research Database (Denmark)

    Rawlings, James B.; Bonné, Dennis; Jørgensen, John Bagterp

    2008-01-01

    In this work, a new model predictive controller is developed that handles unreachable setpoints better than traditional model predictive control methods. The new controller induces an interesting fast/slow asymmetry in the tracking response of the system. Nominal asymptotic stability of the optimal...... steady state is established for terminal constraint model predictive control (MPC). The region of attraction is the steerable set. Existing analysis methods for closed-loop properties of MPC are not applicable to this new formulation, and a new analysis method is developed. It is shown how to extend...

  19. Bayesian Predictive Models for Rayleigh Wind Speed

    DEFF Research Database (Denmark)

    Shahirinia, Amir; Hajizadeh, Amin; Yu, David C

    2017-01-01

    predictive model of the wind speed aggregates the non-homogeneous distributions into a single continuous distribution. Therefore, the result is able to capture the variation among the probability distributions of the wind speeds at the turbines’ locations in a wind farm. More specifically, instead of using...... a wind speed distribution whose parameters are known or estimated, the parameters are considered as random whose variations are according to probability distributions. The Bayesian predictive model for a Rayleigh which only has a single model scale parameter has been proposed. Also closed-form posterior...... and predictive inferences under different reasonable choices of prior distribution in sensitivity analysis have been presented....
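
For a Rayleigh model with a single scale parameter, closed-form Bayesian inference is available: with θ = σ², an inverse-gamma prior is conjugate to the Rayleigh likelihood. A sketch with simulated data (all settings illustrative, not the paper's wind-farm data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated wind speeds from a Rayleigh distribution with sigma = 2,
# so theta = sigma**2 = 4 is the single scale parameter to infer.
n, sigma_true = 500, 2.0
x = rng.rayleigh(scale=sigma_true, size=n)

# With theta = sigma**2, an inverse-gamma prior IG(alpha, beta) is conjugate:
# the posterior is IG(alpha + n, beta + sum(x**2) / 2).
alpha0, beta0 = 2.0, 2.0
alpha_post = alpha0 + n
beta_post = beta0 + 0.5 * np.sum(x ** 2)

theta_mean = beta_post / (alpha_post - 1)        # closed-form posterior mean
# Inverse-gamma draws via reciprocal gamma; then posterior predictive speeds.
theta_draws = 1.0 / rng.gamma(alpha_post, 1.0 / beta_post, size=5000)
x_new = rng.rayleigh(scale=np.sqrt(theta_draws))
```

Averaging the Rayleigh distribution over the posterior draws is what the abstract calls aggregating non-homogeneous distributions into a single predictive distribution.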

  1. A mass-density model can account for the size-weight illusion.

    Science.gov (United States)

    Wolf, Christian; Bergmann Tiest, Wouter M; Drewing, Knut

    2018-01-01

    When judging the heaviness of two objects with equal mass, people perceive the smaller and denser of the two as being heavier. Despite the large number of theories, covering bottom-up and top-down approaches, none of them can fully account for all aspects of this size-weight illusion and thus for human heaviness perception. Here we propose a new maximum-likelihood estimation model which describes the illusion as the weighted average of two heaviness estimates with correlated noise: One estimate derived from the object's mass, and the other from the object's density, with estimates' weights based on their relative reliabilities. While information about mass can directly be perceived, information about density will in some cases first have to be derived from mass and volume. However, according to our model at the crucial perceptual level, heaviness judgments will be biased by the objects' density, not by its size. In two magnitude estimation experiments, we tested model predictions for the visual and the haptic size-weight illusion. Participants lifted objects which varied in mass and density. We additionally varied the reliability of the density estimate by varying the quality of either visual (Experiment 1) or haptic (Experiment 2) volume information. As predicted, with increasing quality of volume information, heaviness judgments were increasingly biased towards the object's density: Objects of the same density were perceived as more similar and big objects were perceived as increasingly lighter than small (denser) objects of the same mass. This perceived difference increased with an increasing difference in density. In an additional two-alternative forced choice heaviness experiment, we replicated that the illusion strength increased with the quality of volume information (Experiment 3). 
Overall, the results highly corroborate our model, which seems promising as a starting point for a unifying framework for the size-weight illusion and human heaviness perception.
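
The reliability-weighted averaging at the heart of this model can be sketched as standard maximum-likelihood cue combination, under the simplifying assumption of independent (rather than correlated) noise; all numbers are hypothetical.

```python
def combine_estimates(mass_est, mass_var, dens_est, dens_var):
    """Reliability-weighted average of two heaviness estimates.

    Weights are the reliabilities 1/variance, as in standard maximum-
    likelihood cue integration (independent noise assumed here; the
    paper's model additionally allows correlated noise).
    """
    w_m, w_d = 1.0 / mass_var, 1.0 / dens_var
    combined = (w_m * mass_est + w_d * dens_est) / (w_m + w_d)
    combined_var = 1.0 / (w_m + w_d)
    return combined, combined_var

# Better volume information -> more reliable density estimate -> heaviness
# judgment pulled further toward the density cue (stronger illusion).
h_poor, _ = combine_estimates(1.0, 0.1, 1.6, 0.4)   # vague volume cue
h_good, _ = combine_estimates(1.0, 0.1, 1.6, 0.1)   # reliable volume cue
```

This reproduces the qualitative experimental finding: as the quality of the volume information rises, the combined heaviness estimate shifts toward the density cue.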

  2. Allometric scaling of population variance with mean body size is predicted from Taylor's law and density-mass allometry.

    Science.gov (United States)

    Cohen, Joel E; Xu, Meng; Schuster, William S F

    2012-09-25

    Two widely tested empirical patterns in ecology are combined here to predict how the variation of population density relates to the average body size of organisms. Taylor's law (TL) asserts that the variance of the population density of a set of populations is a power-law function of the mean population density. Density-mass allometry (DMA) asserts that the mean population density of a set of populations is a power-law function of the mean individual body mass. Combined, DMA and TL predict that the variance of the population density is a power-law function of mean individual body mass. We call this relationship "variance-mass allometry" (VMA). We confirmed the theoretically predicted power-law form and the theoretically predicted parameters of VMA, using detailed data on individual oak trees (Quercus spp.) of Black Rock Forest, Cornwall, New York. These results connect the variability of population density to the mean body mass of individuals.
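
The composition of the two power laws can be checked numerically: substituting DMA into TL gives a power law in body mass with exponent b·d. Parameters here are illustrative, not the Black Rock Forest estimates.

```python
import numpy as np

# Taylor's law:           var(N) = a * mean(N)**b
# Density-mass allometry: mean(N) = c * M**d
# Combined (VMA):         var(N) = (a * c**b) * M**(b * d)
a, b = 2.0, 1.8            # illustrative TL parameters
c, d = 10.0, -0.75         # illustrative DMA parameters

M = np.logspace(0, 4, 50)                 # mean individual body mass
mean_density = c * M ** d
variance = a * mean_density ** b

# On log-log axes the predicted VMA slope is b*d, intercept log(a * c**b).
slope, intercept = np.polyfit(np.log(M), np.log(variance), 1)
```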

  3. Predictive Modelling and Time: An Experiment in Temporal Archaeological Predictive Models

    OpenAIRE

    David Ebert

    2006-01-01

    One of the most common criticisms of archaeological predictive modelling is that it fails to account for temporal or functional differences in sites. However, a practical solution to temporal or functional predictive modelling has proven to be elusive. This article discusses temporal predictive modelling, focusing on the difficulties of employing temporal variables, then introduces and tests a simple methodology for the implementation of temporal modelling. The temporal models thus created ar...

  4. Modeling relaxation length and density of acacia mangium wood using gamma - ray attenuation technique

    International Nuclear Information System (INIS)

    Tamer A Tabet; Fauziah Abdul Aziz

    2009-01-01

    Wood density measurement is related to several factors that influence wood quality. In this paper, the density, relaxation length and half-thickness value of Acacia mangium wood at ages of 3, 5, 7, 10, 11, 13 and 15 years were determined using gamma radiation from a ¹³⁷Cs source. Results show that the Acacia mangium tree of age 3 years has the highest relaxation length of 83.33 cm and the lowest density of 0.43 g cm⁻³, while the tree of age 15 years has the lowest relaxation length of 28.56 cm and the highest density of 0.76 g cm⁻³. Results also show that the 3-year-old Acacia mangium wood has the highest half-thickness value of 57.75 cm and the 15-year-old tree has the lowest half-thickness value of 19.85 cm. Two mathematical models have been developed for the prediction of density variation with relaxation length and half-thickness value for trees of different ages. A good agreement (greater than 85% in most cases) was observed between the measured values and the predicted ones. A very good linear correlation was found between measured density and tree age (R² = 0.824), and between estimated density and Acacia mangium tree age (R² = 0.952). (Author)
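
The relations behind the record's numbers are the standard exponential-attenuation ones: I = I₀ exp(−x/L) with relaxation length L, so the half-thickness is x₁/₂ = L ln 2. The mass-attenuation value used below is a placeholder, not a fitted value from the paper.

```python
import math

def half_thickness(relaxation_length_cm):
    """Half-thickness from relaxation length: x_1/2 = L * ln 2,
    since I = I0 * exp(-x / L)."""
    return relaxation_length_cm * math.log(2)

def density_from_counts(I, I0, thickness_cm, mass_attenuation_cm2_per_g):
    """Invert I = I0 * exp(-mu_m * rho * x) for the wood density rho
    (g/cm^3); mu_m here is a placeholder, not a value from the paper."""
    return math.log(I0 / I) / (mass_attenuation_cm2_per_g * thickness_cm)

# Consistency check against the record's numbers:
x3 = half_thickness(83.33)    # ~57.76 cm; the record reports 57.75 cm (age 3)
x15 = half_thickness(28.56)   # ~19.80 cm; the record reports 19.85 cm (age 15)
```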

  5. Fingerprint verification prediction model in hand dermatitis.

    Science.gov (United States)

    Lee, Chew K; Chang, Choong C; Johor, Asmah; Othman, Puwira; Baba, Roshidah

    2015-07-01

    Hand dermatitis associated fingerprint changes are a significant problem and affect fingerprint verification processes. This study was done to develop a clinically useful prediction model for fingerprint verification in patients with hand dermatitis. A case-control study involving 100 patients with hand dermatitis was conducted. All patients verified their thumbprints against their identity card. Registered fingerprints were randomized into a model derivation and a model validation group. The predictive model was derived using multiple logistic regression, and validation was done using the goodness-of-fit test. The fingerprint verification prediction model consists of a major criterion (fingerprint dystrophy area of ≥ 25%) and two minor criteria (long horizontal lines and long vertical lines). The presence of the major criterion predicts almost certain verification failure, while the presence of both minor criteria or of one minor criterion predicts a high or low risk of verification failure, respectively. When none of the criteria are met, the fingerprint almost always passes verification. The area under the receiver operating characteristic curve was 0.937, and the goodness-of-fit test showed agreement between the observed and expected numbers (P = 0.26). The derived fingerprint verification failure prediction model is validated and highly discriminatory in predicting the risk of fingerprint verification failure in patients with hand dermatitis. © 2014 The International Society of Dermatology.
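
The derived criteria can be encoded directly as a decision rule. This is an illustrative re-encoding of the criteria as stated; the published model itself is a fitted logistic regression.

```python
def fingerprint_verification_risk(dystrophy_area_ge_25pct,
                                  long_horizontal_lines,
                                  long_vertical_lines):
    """Risk stratification following the abstract's major/minor criteria
    (illustrative encoding, not the fitted logistic regression)."""
    if dystrophy_area_ge_25pct:                  # major criterion
        return "almost always fails"
    minors = int(long_horizontal_lines) + int(long_vertical_lines)
    if minors == 2:
        return "high risk of failure"
    if minors == 1:
        return "low risk of failure"
    return "almost always passes"
```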

  6. Massive Predictive Modeling using Oracle R Enterprise

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    R is fast becoming the lingua franca for analyzing data via statistics, visualization, and predictive analytics. For enterprise-scale data, R users have three main concerns: scalability, performance, and production deployment. Oracle's R-based technologies - Oracle R Distribution, Oracle R Enterprise, Oracle R Connector for Hadoop, and the R package ROracle - address these concerns. In this talk, we introduce Oracle's R technologies, highlighting how each enables R users to achieve scalability and performance while making production deployment of R results a natural outcome of the data analyst/scientist efforts. The focus then turns to Oracle R Enterprise with code examples using the transparency layer and embedded R execution, targeting massive predictive modeling. One goal behind massive predictive modeling is to build models per entity, such as customers, zip codes, simulations, in an effort to understand behavior and tailor predictions at the entity level. Predictions...

  7. Re-examining Prostate-specific Antigen (PSA) Density: Defining the Optimal PSA Range and Patients for Using PSA Density to Predict Prostate Cancer Using Extended Template Biopsy.

    Science.gov (United States)

    Jue, Joshua S; Barboza, Marcelo Panizzutti; Prakash, Nachiketh S; Venkatramani, Vivek; Sinha, Varsha R; Pavan, Nicola; Nahar, Bruno; Kanabur, Pratik; Ahdoot, Michael; Dong, Yan; Satyanarayana, Ramgopal; Parekh, Dipen J; Punnen, Sanoj

    2017-07-01

    To compare the predictive accuracy of prostate-specific antigen (PSA) density vs PSA across different PSA ranges and by prior biopsy status in a prospective cohort undergoing prostate biopsy. Men from a prospective trial underwent an extended template biopsy to evaluate for prostate cancer at 26 sites throughout the United States. The area under the receiver operating characteristic curve assessed the predictive accuracy of PSA density vs PSA across 3 PSA ranges (<4, 4-10, and >10 ng/mL). We also investigated the effect of varying the PSA density cutoffs on the detection of cancer and assessed the performance of PSA density vs PSA in men with or without a prior negative biopsy. Among 1290 patients, 585 (45%) and 284 (22%) men had prostate cancer and significant prostate cancer, respectively. PSA density performed better than PSA in detecting any prostate cancer within a PSA of 4-10 ng/mL (area under the receiver operating characteristic curve [AUC]: 0.70 vs 0.53) and within a PSA >10 ng/mL (AUC: 0.84 vs 0.65). PSA density was also significantly more predictive than PSA in detecting any prostate cancer in men without a prior negative biopsy (AUC: 0.73 vs 0.67). As PSA increases, PSA density becomes a better marker for predicting prostate cancer compared with PSA alone. Additionally, PSA density performed better than PSA in men with a prior negative biopsy. Copyright © 2017 Elsevier Inc. All rights reserved.
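
    The comparison the abstract makes is simply AUC for PSA vs AUC for PSA density (PSA divided by prostate volume). A minimal simulated sketch, with all numbers invented: PSA is made to rise with both benign gland volume and cancer, so normalizing by volume removes the benign-enlargement confound.

```python
# Simulated sketch: compare PSA vs PSA density (PSA / volume) by ROC AUC.
# All values are synthetic assumptions, not the study's data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 500
volume = rng.uniform(20, 80, n)            # prostate volume, mL
cancer = rng.integers(0, 2, n).astype(bool)
# PSA rises with volume (benign enlargement) and with cancer.
psa = 0.1 * volume + 3.0 * cancer + rng.normal(scale=1.0, size=n)
psa_density = psa / volume                 # normalizes out the volume effect

auc_psa = roc_auc_score(cancer, psa)
auc_density = roc_auc_score(cancer, psa_density)
print(round(auc_psa, 2), round(auc_density, 2))
```

In this toy setup PSA density scores the higher AUC because the volume-driven component of PSA is pure noise with respect to cancer status.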

  8. Modelling the effect of autotoxicity on density-dependent phytotoxicity.

    Science.gov (United States)

    Sinkkonen, A

    2007-01-21

    An established method to separate resource competition from chemical interference is the cultivation of monospecific, even-aged stands. The stands are grown at several densities and exposed to homogeneously spread toxins; hence, the dose received by individual plants is inversely related to stand density, which results in distinguishable alterations in dose-response slopes. The method is often recommended in ecological studies of allelopathy. However, many plant species are known to release autotoxic compounds, and the probability of autotoxicity often increases as sowing density increases. Despite this, the possibility of autotoxicity is ignored when experiments including monospecific stands are designed and when their results are evaluated. In this paper, I model mathematically how autotoxicity changes the outcome of dose-response slopes as different densities of monospecific stands are grown on homogeneously phytotoxic substrata. Several ecologically reasonable relations between plant density and autotoxin exposure are considered over a range of parameter values, and similarities between different relations are sought. The models indicate that autotoxicity affects the outcome of density-dependent dose-response experiments. Autotoxicity seems to abolish the effects of other phytochemicals in certain cases, while it may augment them in others. Autotoxicity may alter the outcome of tests using the method of monospecific stands even if the dose of autotoxic compounds per plant is a fraction of the dose of non-autotoxic phytochemicals with similar allelopathic potential. Data from the literature support these conclusions. A faulty null hypothesis may be accepted if the autotoxic potential of a test species is overlooked in density-response experiments. In contrast, if test species are known to be non-autotoxic, the method of monospecific stands does not need fine-tuning. The results also suggest that the possibility of autotoxicity should be investigated in
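
    The density-dependence argument can be made concrete with a toy calculation: a fixed amount of externally applied toxin gives a per-plant dose falling as 1/density, while an autotoxin dose growing with sowing density can mask or even reverse that trend. The functional forms and parameter values below are illustrative assumptions, not the paper's models.

```python
# Toy density-dependence model: external per-plant dose falls as 1/density,
# autotoxin dose rises with density. Parameters are illustrative assumptions.
import numpy as np

density = np.array([10.0, 20.0, 40.0, 80.0])   # plants per unit area
external_total = 100.0                          # homogeneously applied toxin
external_dose = external_total / density        # per plant: falls with density
autotoxin_dose = 0.5 * density                  # per plant: rises with density

total_dose = external_dose + autotoxin_dose
print(total_dose)
```

With these numbers the total per-plant dose *increases* with density at the high end, i.e. autotoxicity flips the slope that the monospecific-stand method relies on, which is the paper's central caution.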

  9. Radiomic modeling of BI-RADS density categories

    Science.gov (United States)

    Wei, Jun; Chan, Heang-Ping; Helvie, Mark A.; Roubidoux, Marilyn A.; Zhou, Chuan; Hadjiiski, Lubomir

    2017-03-01

    Screening mammography is the most effective and lowest-cost method to date for early cancer detection. Mammographic breast density has been shown to be highly correlated with breast cancer risk. We are developing a radiomic model for BI-RADS density categorization on full-field digital mammography (FFDM) with a supervised machine learning approach. With IRB approval, we retrospectively collected 478 FFDMs from 478 women. As a gold standard, breast density was assessed by an MQSA radiologist based on the BI-RADS categories. The raw FFDMs were used for computerized density assessment. Each raw FFDM first underwent a log-transform to approximate the x-ray sensitometric response, followed by multiscale processing to enhance the fibroglandular densities and parenchymal patterns. Three ROIs were automatically identified based on the keypoint distribution, where the keypoints were obtained as the extrema in the image Gaussian scale-space. A total of 73 features, including intensity and texture features that describe the density and the parenchymal pattern, were extracted from each breast. Our BI-RADS density estimator was constructed using a random forest classifier, and we used a 10-fold cross-validation resampling approach to estimate the errors. With the random forest classifier, the computerized density categories for 412 of the 478 cases agreed with the radiologist's assessment (weighted kappa = 0.93). The machine learning method with radiomic features as predictors demonstrated high accuracy in classifying FFDMs into BI-RADS density categories. Further work is underway to improve our system performance and to perform independent testing on a large unseen FFDM set.
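
    The evaluation setup described above (random forest over radiomic features, 10-fold cross-validation, agreement scored by weighted kappa) can be sketched as follows. The features and ordinal labels are simulated stand-ins, not the 73 published radiomic features.

```python
# Sketch of ordinal-category classification scored by weighted kappa.
# Features/labels are simulated stand-ins for the radiomic features above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(3)
n = 400
X = rng.normal(size=(n, 10))
# Ordinal label (0-3) driven by one feature, as BI-RADS density categories
# are driven by fibroglandular tissue measures.
y = np.digitize(X[:, 0], [-1.0, 0.0, 1.0])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
pred = cross_val_predict(clf, X, y, cv=10)            # 10-fold cross-validation
kappa = cohen_kappa_score(y, pred, weights="linear")  # weighted kappa
print(round(kappa, 2))
```

Weighted kappa is the natural agreement metric here because BI-RADS categories are ordered: a one-category disagreement is penalized less than a three-category one.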

  10. Multi-model analysis in hydrological prediction

    Science.gov (United States)

    Lanthier, M.; Arsenault, R.; Brissette, F.

    2017-12-01

    Hydrologic modelling is, by nature, a simplification of the real-world hydrologic system. Ensemble hydrological predictions therefore do not present the full range of possible streamflow outcomes, producing ensembles with errors in variance such as under-dispersion. Past studies show that lumped models used in prediction mode can return satisfactory results, especially when there is not enough information available on the watershed to run a distributed model. But all lumped models greatly simplify the complex processes of the hydrologic cycle. To generate more spread in hydrologic ensemble predictions, multi-model ensembles have been considered. In this study, the aim is to propose and analyse a method that gives an ensemble streamflow prediction that properly represents the forecast probabilities with reduced ensemble bias. To achieve this, three simple lumped models are used to generate an ensemble. These are also combined using multi-model averaging techniques, which generally produce a more accurate hydrograph than the best of the individual models in simulation mode. This new combined predictive hydrograph is added to the ensemble, creating a larger ensemble that may improve the variability while also improving the ensemble mean bias. The quality of the predictions is then assessed over different periods (2 weeks, 1 month, 3 months and 6 months) using a PIT histogram of the percentiles of the observed flow volumes with respect to the volumes of the ensemble members. Initially, the models were run using historical weather data to generate synthetic flows. This worked for the individual models, but not for the multi-model or for the large ensemble. Consequently, by performing data assimilation at each prediction period and thus adjusting the initial states of the models, the PIT histogram could be constructed using the observed flows while allowing the use of the multi-model predictions. The under-dispersion has been

  11. Prostate Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing prostate cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  12. Colorectal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  13. Esophageal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing esophageal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  14. Bladder Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing bladder cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  15. Lung Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing lung cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  16. Breast Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing breast cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  17. Pancreatic Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing pancreatic cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  18. Ovarian Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing ovarian cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  19. Liver Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing liver cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  20. Testicular Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing testicular cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  1. Cervical Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing cervical cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  2. Modeling and Prediction Using Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Juhl, Rune; Møller, Jan Kloppenborg; Jørgensen, John Bagterp

    2016-01-01

    Pharmacokinetic/pharmacodynamic (PK/PD) modeling for a single subject is most often performed using nonlinear models based on deterministic ordinary differential equations (ODEs), and the variation between subjects in a population of subjects is described using a population (mixed effects) setup … deterministic and can predict the future perfectly. A more realistic approach would be to allow for randomness in the model due to, e.g., the model being too simple or errors in input. We describe a modeling and prediction setup which better reflects reality and suggests stochastic differential equations (SDEs) …
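
    The ODE-vs-SDE contrast in the abstract can be illustrated with a one-compartment decay model simulated by the Euler-Maruyama scheme: the deterministic path predicts the future exactly, while the SDE path admits system noise. Parameters below are illustrative assumptions, not the chapter's.

```python
# One-compartment elimination as ODE vs SDE (Euler-Maruyama).
# Rate k and diffusion sigma are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)
k, sigma = 0.5, 0.1        # elimination rate, diffusion strength
dt, n_steps = 0.01, 1000
x_ode = np.empty(n_steps + 1)
x_sde = np.empty(n_steps + 1)
x_ode[0] = x_sde[0] = 10.0  # initial concentration

for i in range(n_steps):
    x_ode[i + 1] = x_ode[i] - k * x_ode[i] * dt
    # SDE: same drift plus a scaled Wiener increment (system noise).
    x_sde[i + 1] = x_sde[i] - k * x_sde[i] * dt + sigma * np.sqrt(dt) * rng.normal()

print(round(x_ode[-1], 3))  # deterministic path decays toward zero
```

Rerunning the SDE simulation with a different seed gives a different trajectory; that spread is exactly the model randomness the authors argue a realistic prediction setup should carry.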

  3. Novel Associations between Common Breast Cancer Susceptibility Variants and Risk-Predicting Mammographic Density Measures

    OpenAIRE

    Stone, Jennifer; Thompson, Deborah J.; dos-Santos-Silva, Isabel; Scott, Christopher; Tamimi, Rulla M.; Lindstrom, Sara; Kraft, Peter; Hazra, Aditi; Li, Jingmei; Eriksson, Louise; Czene, Kamila; Hall, Per; Jensen, Matt; Cunningham, Julie; Olson, Janet E.

    2015-01-01

    Mammographic density measures adjusted for age and body mass index (BMI) are heritable predictors of breast cancer risk but few mammographic density-associated genetic variants have been identified. Using data for 10,727 women from two international consortia, we estimated associations between 77 common breast cancer susceptibility variants and absolute dense area, percent dense area and absolute non-dense area adjusted for study, age and BMI using mixed linear modeling. We found strong suppo...

  4. Predictive Model of Systemic Toxicity (SOT)

    Science.gov (United States)

    In an effort to ensure chemical safety in light of regulatory advances away from reliance on animal testing, USEPA and L’Oréal have collaborated to develop a quantitative systemic toxicity prediction model. Prediction of human systemic toxicity has proved difficult and remains a ...

  5. Assessment of Nucleation Site Density Models for CFD Simulations of Subcooled Flow Boiling

    International Nuclear Information System (INIS)

    Hoang, N. H.; Chu, I. C.; Euh, D. J.; Song, C. H.

    2015-01-01

    The framework of a CFD simulation of subcooled flow boiling basically includes a block of wall boiling models communicating with the governing equations of a two-phase flow via parameters like temperature, rate of phase change, etc. In the block of wall boiling models, a heat flux partitioning model, which describes how heat is taken away from a heated surface, is combined with models quantifying boiling parameters, i.e., nucleation site density and bubble departure diameter and frequency. The nucleation site density is an important parameter for predicting subcooled flow boiling: the number of nucleation sites per unit area determines the influence region of each heat transfer mechanism, and variation of the nucleation site density in turn changes the dynamics of the vapor bubbles formed at these sites. In addition, the nucleation site density is needed as an initial and boundary condition to solve the interfacial area transport equation. A lot of effort has been devoted to mathematically formulating the nucleation site density, and as a consequence, numerous correlations are available in the literature. These correlations commonly differ considerably in their mathematical form as well as their application range. Some correlations of the nucleation site density have been applied successfully to CFD simulations of several specific subcooled boiling flows, but in combination with different correlations of the bubble departure diameter and frequency. In addition, the values of the nucleation site density and bubble departure diameter and frequency obtained from simulations of the same problem differ appreciably depending on which models are used, even when global characteristics, e.g., void fraction and mean bubble diameter, agree well with experimental values. It is realized that a good CFD simulation of subcooled flow boiling requires detailed validation of all the models used. Owing to the importance
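
    Many of the nucleation site density correlations referenced above take a wall-superheat power-law form, N'' = (C·ΔT_sup)^n; the constants below are the commonly quoted Lemmert-Chawla values, included here as an illustrative sketch rather than a definitive recommendation for any particular flow condition.

```python
# Wall-superheat power-law correlation of the Lemmert-Chawla type.
# C and n are the commonly quoted values; treat them as illustrative.
import numpy as np

def nucleation_site_density(dT_sup, C=210.0, n=1.805):
    """Active nucleation sites per m^2 as a function of wall superheat (K)."""
    return (C * dT_sup) ** n

dT = np.array([2.0, 5.0, 10.0])   # wall superheats, K
print(nucleation_site_density(dT))
```

The steep power-law growth with superheat is why, as the abstract notes, the choice of this one closure strongly reshapes the partitioned heat fluxes and the bubble dynamics it is paired with.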

  6. PVT characterization and viscosity modeling and prediction of crude oils

    DEFF Research Database (Denmark)

    Cisneros, Eduardo Salvador P.; Dalberg, Anders; Stenby, Erling Halfdan

    2004-01-01

    In previous works, the general one-parameter friction theory (f-theory) models have been applied to the accurate viscosity modeling of reservoir fluids. As a base, the f-theory approach requires a compositional characterization procedure for the application of an equation of state (EOS), in most … pressure, is also presented. The combination of the mass characterization scheme presented in this work and the f-theory can also deliver accurate viscosity modeling results. Additionally, depending on how extensive the compositional characterization is, the approach presented in this work may also … deliver accurate viscosity predictions. The modeling approach presented in this work can deliver accurate viscosity and density modeling and prediction results over wide ranges of reservoir conditions, including the compositional changes induced by recovery processes such as gas injection.

  7. Spent fuel: prediction model development

    International Nuclear Information System (INIS)

    Almassy, M.Y.; Bosi, D.M.; Cantley, D.A.

    1979-07-01

    The need for spent fuel disposal performance modeling stems from a requirement to assess the risks involved with deep geologic disposal of spent fuel, and to support licensing and public acceptance of spent fuel repositories. Through the balanced program of analysis, diagnostic testing, and disposal demonstration tests, highlighted in this presentation, the goal of defining risks and of quantifying fuel performance during long-term disposal can be attained

  8. Navy Recruit Attrition Prediction Modeling

    Science.gov (United States)

    2014-09-01

    have high correlation with attrition, such as age, job characteristics, command climate, marital status, and behavior issues prior to recruitment. … The additive model: glm(formula = Outcome ~ Age + Gender + Marital + AFQTCat + Pay + Ed + Dep, family = binomial, data = ltraining). Null deviance: 105441 on 85221 degrees of freedom (dispersion parameter for binomial family taken to be 1); residual deviance …

  9. Theoretical prediction of low-density hexagonal ZnO hollow structures

    Energy Technology Data Exchange (ETDEWEB)

    Tuoc, Vu Ngoc, E-mail: tuoc.vungoc@hust.edu.vn [Institute of Engineering Physics, Hanoi University of Science and Technology, 1 Dai Co Viet Road, Hanoi (Viet Nam); Huan, Tran Doan [Institute of Materials Science, University of Connecticut, Storrs, Connecticut 06269-3136 (United States); Thao, Nguyen Thi [Institute of Engineering Physics, Hanoi University of Science and Technology, 1 Dai Co Viet Road, Hanoi (Viet Nam); Hong Duc University, 307 Le Lai, Thanh Hoa City (Viet Nam); Tuan, Le Manh [Hong Duc University, 307 Le Lai, Thanh Hoa City (Viet Nam)

    2016-10-14

    Along with wurtzite and zinc blende, zinc oxide (ZnO) has been found in a large number of polymorphs with substantially different properties and, hence, applications. Therefore, predicting and synthesizing new classes of ZnO polymorphs are of great significance and have been gaining considerable interest. Herein, we perform a density functional theory based tight-binding study, predicting several new series of ZnO hollow structures using the bottom-up approach. The geometry of the building blocks allows for obtaining a variety of hexagonal, low-density nanoporous, and flexible ZnO hollow structures. Their stability is discussed by means of the free energy computed within the lattice-dynamics approach. Our calculations also indicate that all the reported hollow structures are wide band gap semiconductors in the same fashion as bulk ZnO. The electronic band structures of the ZnO hollow structures are finally examined in detail.

  10. Predictive Models and Computational Toxicology (II IBAMTOX)

    Science.gov (United States)

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  11. Finding furfural hydrogenation catalysts via predictive modelling

    NARCIS (Netherlands)

    Strassberger, Z.; Mooijman, M.; Ruijter, E.; Alberts, A.H.; Maldonado, A.G.; Orru, R.V.A.; Rothenberg, G.

    2010-01-01

    We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes were synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol.

  12. FINITE ELEMENT MODEL FOR PREDICTING RESIDUAL ...

    African Journals Online (AJOL)

    FINITE ELEMENT MODEL FOR PREDICTING RESIDUAL STRESSES IN ... the transverse residual stress in the x-direction (σx) had a maximum value of 375MPa ... the finite element method are in fair agreement with the experimental results.

  13. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico; Kryshtafovych, Andriy; Tramontano, Anna

    2009-01-01

    established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic

  14. Ground-State Gas-Phase Structures of Inorganic Molecules Predicted by Density Functional Theory Methods

    KAUST Repository

    Minenkov, Yury

    2017-11-29

    We tested a battery of density functional theory (DFT) methods ranging from generalized gradient approximation (GGA) via meta-GGA to hybrid meta-GGA schemes as well as Møller–Plesset perturbation theory of the second order and a single and double excitation coupled-cluster (CCSD) theory for their ability to reproduce accurate gas-phase structures of di- and triatomic molecules derived from microwave spectroscopy. We obtained the most accurate molecular structures using the hybrid and hybrid meta-GGA approximations with B3PW91, APF, TPSSh, mPW1PW91, PBE0, mPW1PBE, B972, and B98 functionals, resulting in lowest errors. We recommend using these methods to predict accurate three-dimensional structures of inorganic molecules when intramolecular dispersion interactions play an insignificant role. The structures that the CCSD method predicts are of similar quality although at considerably larger computational cost. The structures that GGA and meta-GGA schemes predict are less accurate with the largest absolute errors detected with BLYP and M11-L, suggesting that these methods should not be used if accurate three-dimensional molecular structures are required. Because of numerical problems related to the integration of the exchange–correlation part of the functional and large scattering of errors, most of the Minnesota models tested, particularly MN12-L, M11, M06-L, SOGGA11, and VSXC, are also not recommended for geometry optimization. When maintaining a low computational budget is essential, the nonseparable gradient functional N12 might work within an acceptable range of error. As expected, the DFT-D3 dispersion correction had a negligible effect on the internuclear distances when combined with the functionals tested on nonweakly bonded di- and triatomic inorganic molecules. By contrast, the dispersion correction for the APF-D functional has been found to shorten the bonds significantly, up to 0.064 Å (AgI), in Ag halides, BaO, BaS, BaF, BaCl, Cu halides, and Li and

  15. Expected packing density allows prediction of both amyloidogenic and disordered regions in protein chains

    Energy Technology Data Exchange (ETDEWEB)

    Galzitskaya, Oxana V; Garbuzynskiy, Sergiy O; Lobanov, Michail Yu [Institute of Protein Research, Russian Academy of Sciences, 142290, Pushchino, Moscow Region (Russian Federation)

    2007-07-18

    The determination of factors that influence conformational changes in proteins is very important for the identification of potentially amyloidogenic and disordered regions in polypeptide chains. In our work we introduce a new parameter, mean packing density, to detect both amyloidogenic and disordered regions in a protein sequence. It has been shown that regions with strong expected packing density are responsible for amyloid formation. Our predictions are consistent with known disease-related amyloidogenic regions for 9 of 12 amyloid-forming proteins and peptides in which the positions of amyloidogenic regions have been revealed experimentally. Our findings support the concept that the mechanism of formation of amyloid fibrils is similar for different peptides and proteins. Moreover, we have demonstrated that regions with weak expected packing density are responsible for the appearance of disordered regions. Our method has been tested on datasets of globular proteins and long disordered protein segments, and it shows improved performance over other widely used methods. Thus, we demonstrate that the expected packing density is a useful value for predicting both disordered and amyloidogenic regions of a protein based on sequence alone. Our results are important for understanding the structural characteristics of protein folding and misfolding.
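
    The sliding-window idea described above is straightforward to sketch: average a per-residue "expected packing" scale over a window along the sequence, then flag high-packing stretches as amyloid-prone and low-packing stretches as disorder-prone. The per-residue values, sequence, and thresholds below are invented placeholders, not the published scale.

```python
# Windowed mean-packing profile over a protein sequence.
# Residue values, sequence, and thresholds are invented placeholders.
import numpy as np

packing_scale = {  # hypothetical residue -> expected packing density
    "A": 0.5, "G": 0.3, "V": 0.8, "L": 0.9, "S": 0.3, "E": 0.2, "F": 1.0,
}

def windowed_packing(seq, window=5):
    vals = np.array([packing_scale[r] for r in seq])
    kernel = np.ones(window) / window
    return np.convolve(vals, kernel, mode="valid")  # sliding-window mean

profile = windowed_packing("VLFVLGSSEEGSA")
amyloid_prone = profile > 0.7    # strong expected packing (threshold assumed)
disorder_prone = profile < 0.4   # weak expected packing (threshold assumed)
print(profile.round(2))
```

The appeal of this scheme, as the abstract emphasizes, is that a single sequence-derived quantity separates the two opposite structural regimes: the same profile that peaks over amyloidogenic stretches dips over disordered ones.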

  16. Exploring the Role of the Spatial Characteristics of Visible and Near-Infrared Reflectance in Predicting Soil Organic Carbon Density

    Directory of Open Access Journals (Sweden)

    Long Guo

    2017-10-01

    Soil organic carbon stock plays a key role in the global carbon cycle and in precision agriculture. Visible and near-infrared reflectance spectroscopy (VNIRS) can directly reflect the internal physical construction and chemical substances of soil. Partial least squares regression (PLSR) is a classical and very commonly used model for constructing soil spectral models and predicting soil properties. Nevertheless, using PLSR alone does not account for soil being characterized by strong spatial heterogeneity and dependence, whereas considering the spatial characteristics of soil can offer valuable spatial information to guarantee the prediction accuracy of soil spectral models. Thus, this study aims to construct a rapid and accurate soil spectral model for predicting soil organic carbon density (SOCD) with the aid of the spatial autocorrelation of soil spectral reflectance. A total of 231 topsoil samples (0–30 cm) were collected from the Jianghan Plain, Wuhan, China. The spectral reflectance (350–2500 nm) was used as an auxiliary variable. A geographically weighted regression (GWR) model was used to evaluate the potential improvement of SOCD prediction when the spatial information of the spectral features is considered. Results showed that: (1) the principal components extracted from PLSR have a strong relationship with the regression coefficients at the average sampling distance (300 m) based on the Moran's I values; (2) the eigenvectors of the principal components exhibited strong relationships with the absorption spectral features, and the regression coefficients of GWR varied with geographical location; and (3) GWR displayed higher accuracy than PLSR in predicting SOCD by VNIRS. This study aims to help people realize the importance of the spatial characteristics of soil properties and their spectra, and introduces guidelines for the application of GWR in predicting soil properties by VNIRS.

  17. Mental models accurately predict emotion transitions.

    Science.gov (United States)

    Thornton, Mark A; Tamir, Diana I

    2017-06-06

    Successful social interactions depend on people's ability to predict others' future actions and emotions. People possess many mechanisms for perceiving others' current emotional states, but how might they use this information to predict others' future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others' emotional dynamics. People could then use these mental models of emotion transitions to predict others' future emotions from currently observable emotions. To test this hypothesis, studies 1-3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants' ratings of emotion transitions predicted others' experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation-valence, social impact, rationality, and human mind-inform participants' mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants' accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone.
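
    The core computation implied by the abstract, estimating transition probabilities between discrete emotional states from an experience-sampling sequence, can be sketched directly. The states and sequence below are invented for illustration.

```python
# Estimate a transition-probability matrix between discrete emotional states
# from a state sequence. States and sequence are invented for illustration.
import numpy as np

states = ["calm", "happy", "sad"]
idx = {s: i for i, s in enumerate(states)}
sequence = ["calm", "happy", "happy", "sad", "calm", "calm", "happy", "sad", "sad"]

counts = np.zeros((3, 3))
for a, b in zip(sequence, sequence[1:]):
    counts[idx[a], idx[b]] += 1   # tally observed consecutive pairs

transition = counts / counts.sum(axis=1, keepdims=True)  # row-normalize
print(transition.round(2))
```

The studies' comparison is then between such empirically estimated rates and participants' rated likelihoods for the same state pairs.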

  18. Mental models accurately predict emotion transitions

    Science.gov (United States)

    Thornton, Mark A.; Tamir, Diana I.

    2017-01-01

    Successful social interactions depend on people’s ability to predict others’ future actions and emotions. People possess many mechanisms for perceiving others’ current emotional states, but how might they use this information to predict others’ future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others’ emotional dynamics. People could then use these mental models of emotion transitions to predict others’ future emotions from currently observable emotions. To test this hypothesis, studies 1–3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants’ ratings of emotion transitions predicted others’ experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation—valence, social impact, rationality, and human mind—inform participants’ mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants’ accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone. PMID:28533373

  19. Gravitational form factors and angular momentum densities in light-front quark-diquark model

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, Narinder [Indian Institute of Technology Kanpur, Department of Physics, Kanpur (India); Mondal, Chandan [Chinese Academy of Sciences, Institute of Modern Physics, Lanzhou (China); Sharma, Neetika [I K Gujral Punjab Technical University, Department of Physical Sciences, Jalandhar, Punjab (India); Panjab University, Department of Physics, Chandigarh (India)

    2017-12-15

We investigate the gravitational form factors (GFFs) and the longitudinal momentum densities (p{sup +} densities) for the proton in a light-front quark-diquark model. The light-front wave functions are constructed from the soft-wall AdS/QCD prediction. The contributions from both the scalar and the axial vector diquarks are considered. The results are compared with the consequences of a parametrization of nucleon generalized parton distributions (GPDs) in the light of recent MRST measurements of parton distribution functions (PDFs) and with a soft-wall AdS/QCD model. The spatial distribution of angular momentum for up and down quarks inside the nucleon is presented. At the density level, we illustrate different definitions of angular momentum explicitly for the up and down quarks in the light-front quark-diquark model inspired by AdS/QCD. (orig.)

  20. Density-correlation functions in Calogero-Sutherland models

    International Nuclear Information System (INIS)

    Minahan, J.A.; Polychronakos, A.P.

    1994-01-01

Using arguments from two-dimensional Yang-Mills theory and the collective coordinate formulation of the Calogero-Sutherland model, we conjecture the dynamical density-correlation function for coupling l and 1/l, where l is an integer. We present overwhelming evidence that the conjecture is indeed correct.

  1. Density correlation functions in Calogero-Sutherland models

    CERN Document Server

    Minahan, Joseph A.; Joseph A Minahan; Alexios P Polychronakos

    1994-01-01

    Using arguments from two dimensional Yang-Mills theory and the collective coordinate formulation of the Calogero-Sutherland model, we conjecture the dynamical density correlation function for coupling l and 1/l, where l is an integer. We present overwhelming evidence that the conjecture is indeed correct.

  2. Absolute densities in exoplanetary systems. Photodynamical modelling of Kepler-138.

    Science.gov (United States)

    Almenara, J. M.; Díaz, R. F.; Dorn, C.; Bonfils, X.; Udry, S.

    2018-04-01

In favourable conditions, the density of transiting planets in multiple systems can be determined from photometry data alone. Dynamical information can be extracted from light curves, provided the modelling is done self-consistently, i.e. using a photodynamical model, which simulates the individual photometric observations instead of the more generally used transit times. We apply this methodology to the Kepler-138 planetary system. The derived planetary bulk densities are a factor of two more precise than previous determinations, and we find a discrepancy in the stellar bulk density with respect to a previous study. This leads, in turn, to a discrepancy in the determination of the masses and radii of the star and the planets. In particular, we find that the interior planet, Kepler-138 b, has a size between that of Mars and the Earth. Given our mass and density estimates, we characterize the planetary interiors using a generalized Bayesian inference model. This model allows us to quantify the interior degeneracy and calculate confidence regions of interior parameters such as the thicknesses of the core, the mantle, and ocean and gas layers. We find that Kepler-138 b and Kepler-138 d have significantly thick volatile layers, and that the gas layer of Kepler-138 b is likely enriched. On the other hand, Kepler-138 c can be purely rocky.

  3. Predictability of bone density at posterior mandibular implant sites using cone-beam computed tomography intensity values.

    Science.gov (United States)

    Alkhader, Mustafa; Hudieb, Malik; Khader, Yousef

    2017-01-01

The aim of this study was to investigate the predictability of bone density at posterior mandibular implant sites using cone-beam computed tomography (CBCT) intensity values. CBCT cross-sectional images for 436 posterior mandibular implant sites were selected for the study. Using Invivo software (Anatomage, San Jose, California, USA), two observers classified the bone density into three categories (low, intermediate, and high) and CBCT intensity values were generated. Based on the consensus of the two observers, 15.6% of sites were of low bone density, 47.9% were of intermediate density, and 36.5% were of high density. Receiver-operating characteristic analysis showed that CBCT intensity values had high power for predicting high-density sites (area under the curve [AUC] = 0.94, P < 0.005) and intermediate-density sites (AUC = 0.81, P < 0.005). The best intensity cut-off value for predicting intermediate-density sites was 218 (sensitivity = 0.77, specificity = 0.76) and the best cut-off value for predicting high-density sites was 403 (sensitivity = 0.93, specificity = 0.77). CBCT intensity values are therefore considered useful for predicting bone density at posterior mandibular implant sites.
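
The reported cut-offs lend themselves to a simple three-way decision rule. The rule below is our assumption; the study only reports the two thresholds (218 and 403) with their sensitivities and specificities:

```python
# Assumed three-way classification from a CBCT intensity value, using the
# two cut-offs reported in the abstract. The >= convention is our choice.
LOW_VS_INTERMEDIATE = 218
INTERMEDIATE_VS_HIGH = 403

def classify_density(intensity: float) -> str:
    """Map a CBCT intensity value to a bone-density category."""
    if intensity >= INTERMEDIATE_VS_HIGH:
        return "high"
    if intensity >= LOW_VS_INTERMEDIATE:
        return "intermediate"
    return "low"

for value in (150, 300, 450):
    print(value, classify_density(value))
```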

  4. Return Predictability, Model Uncertainty, and Robust Investment

    DEFF Research Database (Denmark)

    Lukas, Manuel

    Stock return predictability is subject to great uncertainty. In this paper we use the model confidence set approach to quantify uncertainty about expected utility from investment, accounting for potential return predictability. For monthly US data and six representative return prediction models, we...... find that confidence sets are very wide, change significantly with the predictor variables, and frequently include expected utilities for which the investor prefers not to invest. The latter motivates a robust investment strategy maximizing the minimal element of the confidence set. The robust investor...... allocates a much lower share of wealth to stocks compared to a standard investor....

  5. Model predictive Controller for Mobile Robot

    OpenAIRE

    Alireza Rezaee

    2017-01-01

This paper proposes a Model Predictive Controller (MPC) for control of a P2AT mobile robot. MPC refers to a group of controllers that employ an explicit model of the process to predict its future behavior over an extended prediction horizon. The design of an MPC is formulated as an optimal control problem. This problem is then cast in linear quadratic regulator (LQR) form and solved using the Riccati equation. To show the effectiveness of the proposed method this controller is...

  6. Spatial Economics Model Predicting Transport Volume

    Directory of Open Access Journals (Sweden)

    Lu Bo

    2016-10-01

Full Text Available It is extremely important to predict logistics requirements in a scientific and rational way. However, in recent years, improvements to prediction methods have not been significant, and the traditional statistical methods suffer from low precision and poor interpretability: they can neither guarantee the generalization ability of the prediction model theoretically nor explain the models effectively. Therefore, in combination with the theories of spatial economics, industrial economics, and neo-classical economics, and taking the city of Zhuanghe as the research object, this study identifies the leading industries that produce large volumes of cargo, and further predicts the static logistics generation of Zhuanghe and its hinterlands. By integrating the various factors that affect regional logistics requirements, this study establishes a logistics requirements potential model based on spatial economic principles, extending logistics requirements prediction from purely statistical principles to the new area of spatial and regional economics.

  7. Accuracy assessment of landslide prediction models

    International Nuclear Information System (INIS)

    Othman, A N; Mohd, W M N W; Noraini, S

    2014-01-01

The increasing population and expansion of settlements over hilly areas have greatly increased the impact of natural disasters such as landslides. It is therefore important to develop models that can accurately predict landslide hazard zones. Over the years, various techniques and models have been developed for this purpose. The aim of this paper is to assess the accuracy of landslide prediction models developed by the authors. The methodology involved the selection of the study area, data acquisition, data processing, model development, and data analysis. The models are based on nine landslide-inducing parameters: slope, land use, lithology, soil properties, geomorphology, flow accumulation, aspect, proximity to river, and proximity to road. Rank sum, rating, pairwise comparison and AHP techniques are used to determine the weights for each of the parameters. Four different models, each considering a different parameter combination, are developed by the authors. Results are compared to the landslide history; the accuracies for Model 1, Model 2, Model 3 and Model 4 are 66.7%, 66.7%, 60% and 22.9%, respectively. From these results, rank sum, rating and pairwise comparison can be useful techniques for predicting landslide hazard zones.
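
One of the weighting techniques named above, AHP-style pairwise comparison, can be sketched as follows; the 3x3 comparison matrix over three of the nine parameters is invented for illustration:

```python
# Hypothetical pairwise-comparison matrix: matrix[i][j] says how much more
# important parameter i is than parameter j (slope, land use, lithology).
matrix = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]

def ahp_weights(m):
    """Approximate AHP priority weights: normalize columns, average rows."""
    n = len(m)
    col_sums = [sum(m[i][j] for i in range(n)) for j in range(n)]
    return [sum(m[i][j] / col_sums[j] for j in range(n)) / n for i in range(n)]

w = ahp_weights(matrix)
print([round(x, 3) for x in w])   # weights sum to 1, slope dominates here
```

A full AHP treatment would also check the consistency ratio of the matrix before trusting the weights.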

  8. A local leaky-box model for the local stellar surface density-gas surface density-gas phase metallicity relation

    Science.gov (United States)

    Zhu, Guangtun Ben; Barrera-Ballesteros, Jorge K.; Heckman, Timothy M.; Zakamska, Nadia L.; Sánchez, Sebastian F.; Yan, Renbin; Brinkmann, Jonathan

    2017-07-01

We revisit the relation between the stellar surface density, the gas surface density and the gas-phase metallicity of typical disc galaxies in the local Universe with the SDSS-IV/MaNGA survey, using the star formation rate surface density as an indicator for the gas surface density. We show that these three local parameters form a tight relationship, confirming previous works (e.g. by the PINGS and CALIFA surveys), but with a larger sample. We present a new local leaky-box model, assuming that star formation and chemical evolution are localized except for outflowing material. We derive closed-form solutions for the evolution of stellar surface density, gas surface density and gas-phase metallicity, and show that these parameters form a tight relation independent of initial gas density and time. We show that, with canonical values of the model parameters, this predicted relation matches the observed one well. In addition, we briefly describe a pathway to improving current semi-analytic models of galaxy formation by incorporating the local leaky-box model in the cosmological context, which can potentially explain multiple properties of Milky Way-type disc galaxies simultaneously, such as size growth and the global stellar mass-gas metallicity relation.
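
For intuition, a leaky-box history can be integrated numerically: gas forms stars, a multiple eta of the star-formation rate is expelled in outflows, and metals are produced with yield y. All parameter values below are illustrative; the paper instead derives closed-form solutions:

```python
# Toy leaky-box chemical evolution (illustrative parameters, not the paper's):
# dM_gas/dt = -(1+eta)*SFR, dM_Z/dt = y*SFR - Z*(1+eta)*SFR, with SFR = M_gas.
y = 0.02      # net metal yield per unit mass of stars formed (assumed)
eta = 1.0     # outflow mass-loading factor (assumed)
dt = 1e-4     # Euler time step

gas, stars, metals = 1.0, 0.0, 0.0
while gas > 0.3:                        # evolve until 70% of the gas is gone
    sfr = gas                           # simple linear star-formation law
    Z = metals / gas                    # current gas-phase metallicity
    gas += -(1.0 + eta) * sfr * dt      # locked into stars + expelled
    stars += sfr * dt
    metals += (y - (1.0 + eta) * Z) * sfr * dt  # produced minus removed

print(round(metals / gas, 4))           # final gas-phase metallicity
```

With SFR proportional to gas mass, dZ/dt = y * SFR / M_gas = y, so Z grows linearly in time regardless of the initial gas density, which is the kind of initial-condition independence the abstract describes.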

  9. Predictive validation of an influenza spread model.

    Directory of Open Access Journals (Sweden)

    Ayaz Hyder

Full Text Available BACKGROUND: Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. METHODS AND FINDINGS: We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998-1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks in advance with reasonable reliability, and performance depended on the method of forecasting (static or dynamic). CONCLUSIONS: Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve
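
Predictive validation of an epidemic curve reduces to comparing simulated and observed curves with simple error metrics, such as peak-week offset and relative peak intensity; the weekly counts below are invented, not the study's data:

```python
# Illustrative validation metrics for an epidemic forecast (made-up counts).
observed = [2, 5, 11, 24, 40, 31, 18, 9, 4]     # cases per week
predicted = [3, 7, 15, 30, 38, 42, 20, 10, 5]   # model output for same weeks

def peak_week(curve):
    """Index of the week with the highest case count."""
    return max(range(len(curve)), key=lambda w: curve[w])

peak_error = peak_week(predicted) - peak_week(observed)            # in weeks
intensity_error = (max(predicted) - max(observed)) / max(observed) # relative

print(peak_error, round(intensity_error, 3))
```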

  10. Predictive Validation of an Influenza Spread Model

    Science.gov (United States)

    Hyder, Ayaz; Buckeridge, David L.; Leung, Brian

    2013-01-01

Background Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. Methods and Findings We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998–1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks in advance with reasonable reliability, and performance depended on the method of forecasting (static or dynamic). Conclusions Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve their predictive

  11. Online traffic flow model applying dynamic flow-density relation

    International Nuclear Information System (INIS)

    Kim, Y.

    2002-01-01

This dissertation describes a new approach to online traffic flow modelling based on the hydrodynamic traffic flow model and an online process to adapt the flow-density relation dynamically. The new modelling approach was tested on real traffic situations in various homogeneous motorway sections and a motorway section with ramps, and gave encouraging simulation results. This work is composed of two parts: first, the analysis of traffic flow characteristics, and second, the development of a new online traffic flow model applying these characteristics. For homogeneous motorway sections, traffic flow is classified into six different traffic states with different characteristics. Delimitation criteria were developed to separate these states. The hysteresis phenomena were analysed during the transitions between these traffic states. The traffic states and the transitions are represented on a states diagram with the flow axis and the density axis. For motorway sections with ramps, the complicated traffic flow is simplified and classified into three traffic states depending on the propagation of congestion. The traffic states are represented on a phase diagram with the upstream demand axis and the interaction strength axis, which was defined in this research. The states diagram and the phase diagram provide a basis for the development of the dynamic flow-density relation. The first-order hydrodynamic traffic flow model was programmed according to the cell-transmission scheme, extended by the modification of flow-dependent sending/receiving functions, the classification of cells and the determination strategy for the flow-density relation in the cells. The unreasonable results of macroscopic traffic flow models, which may occur in the first and last cells in certain conditions, are alleviated by applying buffer cells between the traffic data and the model. The sending/receiving functions of the cells are determined dynamically based on the classification of the
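
The cell-transmission scheme mentioned above advances each cell by the minimum of an upstream sending function and a downstream receiving function. A minimal one-step sketch with an assumed triangular flow-density relation (all constants illustrative, closed boundaries):

```python
# Toy first-order cell-transmission update (illustrative parameters).
V_FREE = 1.0    # free-flow speed (cells per step)
W_BACK = 0.5    # backward wave speed
RHO_JAM = 1.0   # jam density
Q_MAX = 0.25    # capacity (flow cap)

def sending(rho):
    """What an upstream cell can send: demand side of the flow-density relation."""
    return min(V_FREE * rho, Q_MAX)

def receiving(rho):
    """What a downstream cell can accept: supply side of the relation."""
    return min(Q_MAX, W_BACK * (RHO_JAM - rho))

def step(densities):
    """One update: inter-cell flow is min(sending upstream, receiving downstream)."""
    flows = [min(sending(a), receiving(b))
             for a, b in zip(densities, densities[1:])]
    new = densities[:]
    for i, q in enumerate(flows):
        new[i] -= q
        new[i + 1] += q
    return new

print([round(x, 3) for x in step([0.4, 0.2, 0.9, 0.1])])
```

Note how the near-jammed third cell throttles the inflow from its upstream neighbour, which is exactly the congestion back-propagation the supply function encodes.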

  12. A kinetic approach to modeling the manufacture of high density strucutral foam: Foaming and polymerization

    Energy Technology Data Exchange (ETDEWEB)

    Rao, Rekha R. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sandia National Laboratories, Livermore, CA (United States); Mondy, Lisa Ann [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sandia National Laboratories, Livermore, CA (United States); Noble, David R. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sandia National Laboratories, Livermore, CA (United States); Brunini, Victor [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sandia National Laboratories, Livermore, CA (United States); Roberts, Christine Cardinal [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sandia National Laboratories, Livermore, CA (United States); Long, Kevin Nicholas [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sandia National Laboratories, Livermore, CA (United States); Soehnel, Melissa Marie [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sandia National Laboratories, Livermore, CA (United States); Celina, Mathias C. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sandia National Laboratories, Livermore, CA (United States); Wyatt, Nicholas B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sandia National Laboratories, Livermore, CA (United States); Thompson, Kyle R. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sandia National Laboratories, Livermore, CA (United States); Tinsley, James

    2015-09-01

We are studying PMDI polyurethane with a fast catalyst, such that filling and polymerization occur simultaneously. The foam is over-packed to twice or more of its free rise density to reach the density of interest. Our approach is to combine model development closely with experiments to discover new physics, to parameterize models and to validate the models once they have been developed. The model must be able to represent the expansion, filling, curing, and final foam properties. PMDI is a chemically blown foam, where carbon dioxide is produced via the reaction of water and isocyanate. The isocyanate also reacts with polyol in a competing reaction, which produces the polymer. A new kinetic model is developed and implemented, which follows a simplified mathematical formalism that decouples these two reactions. The model predicts the polymerization reaction via condensation chemistry, where vitrification and glass transition temperature evolution must be included to correctly predict this quantity. The foam gas generation kinetics are determined by tracking the molar concentration of both water and carbon dioxide. Understanding the thermal history and loads on the foam due to exothermicity and oven heating is very important to the results, since the kinetics and material properties are all very sensitive to temperature. The conservation equations, including the equations of motion, an energy balance, and three rate equations, are solved via a stabilized finite element method. We assume generalized-Newtonian rheology that is dependent on the cure, gas fraction, and temperature. The conservation equations are combined with a level set method to determine the location of the free surface over time. Results from the model are compared to experimental flow visualization data and post-test CT data for the density. Several geometries are investigated including a mock encapsulation part, two configurations of a mock structural part, and a bar geometry to

  13. Sleep Spindle Density Predicts the Effect of Prior Knowledge on Memory Consolidation

    Science.gov (United States)

    Lambon Ralph, Matthew A.; Kempkes, Marleen; Cousins, James N.; Lewis, Penelope A.

    2016-01-01

Information that relates to a prior knowledge schema is remembered better and consolidates more rapidly than information that does not. Another factor that influences memory consolidation is sleep, and growing evidence suggests that sleep-related processing is important for integration with existing knowledge. Here, we examine how sleep-related mechanisms interact with the schema-dependent memory advantage. Participants first established a schema over 2 weeks. Next, they encoded new facts, which were either related to the schema or completely unrelated. After a 24 h retention interval, including a night of sleep, which we monitored with polysomnography, participants encoded a second set of facts. Finally, memory for all facts was tested in a functional magnetic resonance imaging scanner. Behaviorally, sleep spindle density predicted an increase of the schema benefit to memory across the retention interval. Higher spindle densities were associated with reduced decay of schema-related memories. Functionally, spindle density predicted increased disengagement of the hippocampus across 24 h for schema-related memories only. Together, these results suggest that sleep spindle activity is associated with the effect of prior knowledge on memory consolidation. SIGNIFICANCE STATEMENT Episodic memories are gradually assimilated into long-term memory and this process is strongly influenced by sleep. The consolidation of new information is also influenced by its relationship to existing knowledge structures, or schemas, but the role of sleep in such schema-related consolidation is unknown. We show that sleep spindle density predicts the extent to which schemas influence the consolidation of related facts. This is the first evidence that sleep is associated with the interaction between prior knowledge and long-term memory formation. PMID:27030764

  14. Assessment of adsorbate density models for numerical simulations of zeolite-based heat storage applications

    International Nuclear Information System (INIS)

    Lehmann, Christoph; Beckert, Steffen; Gläser, Roger; Kolditz, Olaf; Nagel, Thomas

    2017-01-01

    Highlights: • Characteristic curves fit for binderless Zeolite 13XBFK. • Detailed comparison of adsorbate density models for Dubinin’s adsorption theory. • Predicted heat storage densities robust against choice of density model. • Use of simple linear density models sufficient. - Abstract: The study of water sorption in microporous materials is of increasing interest, particularly in the context of heat storage applications. The potential-theory of micropore volume filling pioneered by Polanyi and Dubinin is a useful tool for the description of adsorption equilibria. Based on one single characteristic curve, the system can be extensively characterised in terms of isotherms, isobars, isosteres, enthalpies etc. However, the mathematical description of the adsorbate density’s temperature dependence has a significant impact especially on the estimation of the energetically relevant adsorption enthalpies. Here, we evaluate and compare different models existing in the literature and elucidate those leading to realistic predictions of adsorption enthalpies. This is an important prerequisite for accurate simulations of heat and mass transport ranging from the laboratory scale to the reactor level of the heat store.
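
Dubinin's potential theory, on which the compared density models build, collapses adsorption equilibria onto one characteristic curve in the adsorption potential A = RT ln(p_sat/p). A hedged sketch using a Dubinin-Astakhov form with made-up parameters (not the fitted values for zeolite 13XBFK):

```python
# Illustrative Dubinin-Astakhov characteristic curve: W = W0 * exp(-(A/E)^n).
# W0, E, n below are assumed round numbers, not fitted sorbent parameters.
import math

R = 8.314                      # gas constant, J/(mol K)
W0, E, n = 0.35, 7000.0, 2.0   # limiting volume (cm^3/g), energy (J/mol), exponent

def adsorption_potential(T, p, p_sat):
    """Polanyi adsorption potential A = R*T*ln(p_sat/p), in J/mol."""
    return R * T * math.log(p_sat / p)

def filled_volume(A):
    """Adsorbed volume per unit mass at potential A (characteristic curve)."""
    return W0 * math.exp(-((A / E) ** n))

A = adsorption_potential(298.0, 1000.0, 3169.0)  # roughly water near 25 C
print(round(A, 1), round(filled_volume(A), 4))
```

The adsorbate density model then converts W into a loading (kg adsorbate per kg sorbent), which is where the compared temperature-dependent density expressions enter.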

  15. Fire spread in chaparral – a comparison of laboratory data and model predictions in burning live fuels

    Science.gov (United States)

    David R. Weise; Eunmo Koo; Xiangyang Zhou; Shankar Mahalingam; Frédéric Morandini; Jacques-Henri Balbi

    2016-01-01

    Fire behaviour data from 240 laboratory fires in high-density live chaparral fuel beds were compared with model predictions. Logistic regression was used to develop a model to predict fire spread success in the fuel beds and linear regression was used to predict rate of spread. Predictions from the Rothermel equation and three proposed changes as well as two physically...
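
The logistic-regression part of the analysis has the standard form P(spread) = 1/(1 + exp(-z)); the coefficients and predictors below are invented for illustration, whereas the paper fits its model to the 240 laboratory fires:

```python
# Hypothetical logistic model for fire-spread success in live fuel beds.
# Coefficients and predictor choice (moisture, wind) are assumptions.
import math

b0, b_moist, b_wind = 2.0, -0.08, 0.5

def p_spread(live_moisture_pct, wind_ms):
    """Probability that fire spreads through the fuel bed (toy model)."""
    z = b0 + b_moist * live_moisture_pct + b_wind * wind_ms
    return 1.0 / (1.0 + math.exp(-z))

print(round(p_spread(60.0, 1.0), 3))  # moist fuel, light wind: low probability
print(round(p_spread(30.0, 2.0), 3))  # drier fuel, more wind: higher probability
```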

  16. Modeling of nanoscale liquid mixture transport by density functional hydrodynamics

    Science.gov (United States)

    Dinariev, Oleg Yu.; Evseev, Nikolay V.

    2017-06-01

    Modeling of multiphase compositional hydrodynamics at nanoscale is performed by means of density functional hydrodynamics (DFH). DFH is the method based on density functional theory and continuum mechanics. This method has been developed by the authors over 20 years and used for modeling in various multiphase hydrodynamic applications. In this paper, DFH was further extended to encompass phenomena inherent in liquids at nanoscale. The new DFH extension is based on the introduction of external potentials for chemical components. These potentials are localized in the vicinity of solid surfaces and take account of the van der Waals forces. A set of numerical examples, including disjoining pressure, film precursors, anomalous rheology, liquid in contact with heterogeneous surface, capillary condensation, and forward and reverse osmosis, is presented to demonstrate modeling capabilities.

  17. Finding Furfural Hydrogenation Catalysts via Predictive Modelling.

    Science.gov (United States)

    Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi

    2010-09-10

We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes was synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre throughout the reaction. Deuterium-labelling studies showed a secondary isotope effect (k(H):k(D)=1.5). Further mechanistic studies showed that this transfer hydrogenation follows the so-called monohydride pathway. Using these data, we built a predictive model for 13 of the catalysts, based on 2D and 3D molecular descriptors. We tested and validated the model using the remaining five catalysts (cross-validation, R(2)=0.913). Then, with this model, the conversion and selectivity were predicted for four completely new ruthenium-carbene complexes. These four catalysts were then synthesized and tested. The results were within 3% of the model's predictions, demonstrating the validity and value of predictive modelling in catalyst optimization.

  18. Two-component scattering model and the electron density spectrum

    Science.gov (United States)

    Zhou, A. Z.; Tan, J. Y.; Esamdin, A.; Wu, X. J.

    2010-02-01

In this paper, we discuss a rigorous treatment of the refractive scintillation caused by a two-component interstellar scattering medium and a Kolmogorov form of density spectrum. It is assumed that the interstellar scattering medium is composed of a thin-screen interstellar medium (ISM) and an extended interstellar medium. We consider the case in which the scattering of the thin screen is concentrated in a thin layer represented by a δ-function distribution and the scattering density of the extended irregular medium satisfies a Gaussian distribution. We investigate and develop equations for the flux density structure function corresponding to this two-component ISM geometry in the scattering density distribution and compare our result with the observations. We conclude that the refractive scintillation caused by this two-component ISM scattering gives a more satisfactory explanation for the observed flux density variation than does the single extended medium model. The level of refractive scintillation is strongly sensitive to the distribution of scattering material along the line of sight (LOS). The theoretical modulation indices are comparatively less sensitive to the scattering strength of the thin-screen medium, but they critically depend on the distance from the observer to the thin screen. The logarithmic slope of the structure function is sensitive to the scattering strength of the thin-screen medium, but is relatively insensitive to the thin-screen location. Therefore, the proposed model can be applied to interpret the structure functions of flux density observed in the pulsars PSR B2111+46 and PSR B0136+57. The result suggests that the medium consists of a discontinuous distribution of plasma turbulence embedded in the interstellar medium. Thus our work provides some insight into the distribution of the scattering along the LOS to the pulsars PSR B2111+46 and PSR B0136+57.

  19. Corporate prediction models, ratios or regression analysis?

    NARCIS (Netherlands)

    Bijnen, E.J.; Wijn, M.F.C.M.

    1994-01-01

The models developed in the literature with respect to the prediction of a company's failure are based on ratios. It has been shown before that these models should be rejected on theoretical grounds. Our study of industrial companies in the Netherlands shows that the ratios which are used in

  20. Predicting Protein Secondary Structure with Markov Models

    DEFF Research Database (Denmark)

    Fischer, Paul; Larsen, Simon; Thomsen, Claus

    2004-01-01

    we are considering here, is to predict the secondary structure from the primary one. To this end we train a Markov model on training data and then use it to classify parts of unknown protein sequences as sheets, helices or coils. We show how to exploit the directional information contained...... in the Markov model for this task. Classifications that are purely based on statistical models might not always be biologically meaningful. We present combinatorial methods to incorporate biological background knowledge to enhance the prediction performance....
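
The approach, training one Markov model per class and classifying by likelihood, can be sketched on a toy alphabet; the two-letter sequences and the two classes below are made up and far simpler than real amino-acid data:

```python
# Toy per-class first-order Markov classifier over a 2-letter alphabet
# {H: hydrophobic, P: polar}. Training fragments are invented examples.
import math
from collections import defaultdict

def train(fragments, alpha=1.0):
    """Smoothed log-probabilities of transitions a -> b from labelled fragments."""
    counts = defaultdict(float)
    for frag in fragments:
        for a, b in zip(frag, frag[1:]):
            counts[a, b] += 1.0
    logp = {}
    for a in "HP":
        total = sum(counts[a, b] for b in "HP") + 2 * alpha  # Laplace smoothing
        for b in "HP":
            logp[a, b] = math.log((counts[a, b] + alpha) / total)
    return logp

def loglik(model, frag):
    """Log-likelihood of a fragment under one class's transition model."""
    return sum(model[a, b] for a, b in zip(frag, frag[1:]))

models = {
    "helix": train(["HHHHPHHH", "HHPHHHH"]),
    "coil": train(["HPPHPPPH", "PPHPPPP"]),
}

def classify(frag):
    """Assign the fragment to the class whose Markov model likes it best."""
    return max(models, key=lambda c: loglik(models[c], frag))

print(classify("HHHHHPH"))
print(classify("PPPHPPP"))
```

The directional information the abstract mentions lives in the asymmetry of the transition probabilities: P(H -> P) and P(P -> H) are learned separately per class.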

  1. The Indigo Molecule Revisited Again: Assessment of the Minnesota Family of Density Functionals for the Prediction of Its Maximum Absorption Wavelengths in Various Solvents

    Directory of Open Access Journals (Sweden)

    Francisco Cervantes-Navarro

    2013-01-01

    Full Text Available The Minnesota family of density functionals (M05, M05-2X, M06, M06L, M06-2X, and M06-HF were evaluated for the calculation of the UV-Vis spectra of the indigo molecule in solvents of different polarities using time-dependent density functional theory (TD-DFT and the polarized continuum model (PCM. The maximum absorption wavelengths predicted for each functional were compared with the known experimental results.

  2. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

In order to reach robust and simplified yet accurate prediction models, energy-based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA) as well as more elaborated...... principles as, e.g., wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy-based prediction models are discussed and critically reviewed. Special attention is placed...... on underlying basic assumptions, such as diffuse fields, high modal overlap, resonant field being dominant, etc., and the consequences of these in terms of limitations in the theory and in the practical use of the models....

3. Comparative Study of Bankruptcy Prediction Models

    Directory of Open Access Journals (Sweden)

    Isye Arieshanti

    2013-09-01

Early indication of bankruptcy is important for a company. If a company is aware of its potential bankruptcy, it can take preventive action. In order to detect the potential for bankruptcy, a company can utilize a bankruptcy prediction model. Such a prediction model can be built using machine learning methods. However, the choice of machine learning method should be made carefully, because the suitability of a model depends on the specific problem. Therefore, in this paper we perform a comparative study of several machine learning methods for bankruptcy prediction. Comparing the performance of several models based on machine learning methods (k-NN, fuzzy k-NN, SVM, Bagging Nearest Neighbour SVM, Multilayer Perceptron (MLP), and a hybrid of MLP + Multiple Linear Regression), we show that the fuzzy k-NN method achieves the best performance, with an accuracy of 77.5%.
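A minimal sketch of the fuzzy k-NN rule that the study found best-performing: neighbours vote with weights that decay with distance. The two-feature data and labels below are invented for illustration, not the study's dataset:

```python
import numpy as np

def fuzzy_knn_predict(X_train, y_train, x, k=3, m=2):
    """Fuzzy k-NN: the k nearest neighbours vote with weights
    proportional to 1 / distance^(2/(m-1)); returns the winning class."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / np.maximum(d[idx], 1e-12) ** (2 / (m - 1))
    classes = np.unique(y_train)
    scores = {c: w[y_train[idx] == c].sum() for c in classes}
    return max(scores, key=scores.get)

# Hypothetical toy data: two financial-ratio features, label 1 = distressed
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
y = np.array([0, 0, 1, 1])
pred = fuzzy_knn_predict(X, y, np.array([0.85, 0.85]))
```

Unlike plain k-NN, a far-away neighbour contributes almost nothing even when it lands in the top k, which is what makes the rule robust to uneven class densities.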

  4. New models for predicting thermophysical properties of ionic liquid mixtures.

    Science.gov (United States)

    Huang, Ying; Zhang, Xiangping; Zhao, Yongsheng; Zeng, Shaojuan; Dong, Haifeng; Zhang, Suojiang

    2015-10-28

Potential applications of ionic liquids (ILs) require knowledge of the physicochemical properties of IL mixtures. In this work, a series of semi-empirical models were developed to predict the density, surface tension, heat capacity and thermal conductivity of IL mixtures. Each semi-empirical model contains only one new characteristic parameter, which can be determined using a single experimental data point. In addition, as another effective tool, artificial neural network (ANN) models were also established. The two kinds of models were verified against a total of 2304 experimental data points for binary mixtures of ILs and molecular compounds. The overall average absolute relative deviations (AARDs) of both the semi-empirical and ANN models are less than 2%. Compared to previously reported models, these new semi-empirical models require fewer adjustable parameters and can be applied over a wider range of applications.
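The AARD figure of merit quoted above has a standard form, AARD = (100/N) Σ |pred − exp| / |exp|. A minimal sketch; the density values are invented for illustration, not taken from the paper's 2304-point dataset:

```python
import numpy as np

def aard_percent(predicted, experimental):
    """Average absolute relative deviation, in percent."""
    predicted = np.asarray(predicted, dtype=float)
    experimental = np.asarray(experimental, dtype=float)
    return 100.0 * np.mean(np.abs(predicted - experimental) / np.abs(experimental))

# Hypothetical densities for an IL + molecular-solvent mixture, g/cm^3
rho_exp = [1.205, 1.181, 1.158, 1.136]
rho_pred = [1.198, 1.185, 1.150, 1.140]
aard = aard_percent(rho_pred, rho_exp)  # well under the paper's 2% threshold
```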

  5. Predictive power of theoretical modelling of the nuclear mean field: examples of improving predictive capacities

    Science.gov (United States)

    Dedes, I.; Dudek, J.

    2018-03-01

We examine the effects of parametric correlations on the predictive capacities of theoretical modelling, keeping in mind nuclear structure applications. The main purpose of this work is to illustrate the method of establishing the presence and determining the form of parametric correlations within a model, as well as an algorithm of elimination by substitution (see text) of parametric correlations. We examine the effects of the elimination of the parametric correlations on the stabilisation of the model predictions further and further away from the fitting zone. It follows that the choice of the physics case and the selection of the associated model are of secondary importance in this case. Under these circumstances we give priority to the relative simplicity of the underlying mathematical algorithm, provided the model is realistic. Following such criteria, we focus specifically on an important but relatively simple case of doubly magic spherical nuclei. To profit from the algorithmic simplicity we chose to work with the phenomenological spherically symmetric Woods–Saxon mean field. We employ two variants of the underlying Hamiltonian: the traditional one involving both the central and the spin–orbit potential in the Woods–Saxon form, and a more advanced version with a self-consistent density-dependent spin–orbit interaction. We compare the effects of eliminating various types of correlations and discuss the improvement of the quality of predictions ('predictive power') under realistic parameter adjustment conditions.
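Detecting a parametric correlation of the kind discussed can be sketched with an ordinary least-squares fit whose parameter covariance is inspected; a near-unit off-diagonal correlation signals that one parameter can be eliminated by substituting its regression on the other. The linear toy model below is an illustrative assumption, not the Woods–Saxon Hamiltonian:

```python
import numpy as np

rng = np.random.default_rng(1)

# Fit y = p0 + p1*x on a narrow x-range far from the origin: the intercept
# and slope estimates then carry a strong parametric correlation.
x = np.linspace(100.0, 110.0, 50)
y = 2.0 + 0.5 * x + rng.normal(scale=0.1, size=x.size)

A = np.column_stack([np.ones_like(x), x])
p, res, *_ = np.linalg.lstsq(A, y, rcond=None)

# Parameter covariance: sigma^2 * (A^T A)^-1
sigma2 = res[0] / (x.size - 2)
cov = sigma2 * np.linalg.inv(A.T @ A)
corr = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])
# corr is close to -1: p0 and p1 are nearly redundant. "Elimination by
# substitution" would replace p0 by its regression on p1, leaving one
# free parameter and stabilising extrapolation outside the fitting zone.
```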

  6. Prediction Models for Dynamic Demand Response

    Energy Technology Data Exchange (ETDEWEB)

    Aman, Saima; Frincu, Marc; Chelmis, Charalampos; Noor, Muhammad; Simmhan, Yogesh; Prasanna, Viktor K.

    2015-11-02

As Smart Grids move closer to dynamic curtailment programs, Demand Response (DR) events will become necessary not only on fixed time intervals and weekdays predetermined by static policies, but also during changing decision periods and weekends to react to real-time demand signals. Unique challenges arise in this context vis-a-vis demand prediction and curtailment estimation and the transformation of such tasks into an automated, efficient dynamic demand response (D2R) process. While existing work has concentrated on increasing the accuracy of prediction models for DR, there is a lack of studies for prediction models for D2R, which we address in this paper. Our first contribution is the formal definition of D2R, and the description of its challenges and requirements. Our second contribution is a feasibility analysis of very-short-term prediction of electricity consumption for D2R over a diverse, large-scale dataset that includes both small residential customers and large buildings. Our third and major contribution is a set of insights into the predictability of electricity consumption in the context of D2R. Specifically, we focus on prediction models that can operate at a very small data granularity (here 15-min intervals), for both weekdays and weekends - all conditions that characterize scenarios for D2R. We find that short-term time series and simple averaging models used by Independent System Operators and utilities achieve superior prediction accuracy. We also observe that workdays are more predictable than weekends and holidays. Also, smaller customers have large variation in consumption and are less predictable than larger buildings. Key implications of our findings are that better models are required for small customers and for non-workdays, both of which are critical for D2R. Also, prediction models require just a few days' worth of data, indicating that small amounts of
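The simple averaging baseline mentioned above can be sketched as follows, assuming 96 slots of 15 minutes per day: each slot of the next day is predicted as the mean of the same slot over the previous few days. The synthetic load shape is invented for illustration:

```python
import numpy as np

def averaging_forecast(history, n_days=3):
    """Predict each 15-min slot of the next day as the mean of the same
    slot over the previous n_days. history: array of shape (days, 96)."""
    return history[-n_days:].mean(axis=0)

rng = np.random.default_rng(7)
slots = 96  # 15-min intervals per day
base = 50 + 20 * np.sin(np.linspace(0, 2 * np.pi, slots))  # daily load shape, kW
history = base + rng.normal(scale=2.0, size=(10, slots))   # 10 observed days
actual = base + rng.normal(scale=2.0, size=slots)          # the day to predict

pred = averaging_forecast(history)
mape = 100 * np.mean(np.abs(pred - actual) / actual)  # mean absolute % error
```

The appeal of the baseline is exactly what the abstract reports: it needs only a few days of history, which matters when D2R must start predicting for newly instrumented customers.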

  7. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico

    2009-01-01

    The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.

  8. Finding Furfural Hydrogenation Catalysts via Predictive Modelling

    Science.gov (United States)

    Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi

    2010-01-01

We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes was synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre throughout the reaction. Deuterium-labelling studies showed a secondary isotope effect (kH:kD = 1.5). Further mechanistic studies showed that this transfer hydrogenation follows the so-called monohydride pathway. Using these data, we built a predictive model for 13 of the catalysts, based on 2D and 3D molecular descriptors. We tested and validated the model using the remaining five catalysts (cross-validation, R2 = 0.913). Then, with this model, the conversion and selectivity were predicted for four completely new ruthenium-carbene complexes. These four catalysts were then synthesized and tested. The results were within 3% of the model's predictions, demonstrating the validity and value of predictive modelling in catalyst optimization. PMID:23193388

  9. Thermodynamic modeling of saturated liquid compositions and densities for asymmetric binary systems composed of carbon dioxide, alkanes and alkanols

    International Nuclear Information System (INIS)

    Bayestehparvin, Bita; Nourozieh, Hossein; Kariznovi, Mohammad; Abedi, Jalal

    2015-01-01

Highlights: • Phase behavior of binary systems containing largely different components. • Equation of state modeling of binary polar and non-polar systems utilizing different mixing rules. • Three different mixing rules (one-parameter, two-parameter and Wong–Sandler) coupled with the Peng–Robinson equation of state. • The two-parameter mixing rule shows promising results compared to the one-parameter mixing rule. • The Wong–Sandler mixing rule is unable to predict saturated liquid densities with sufficient accuracy. - Abstract: The present study mainly focuses on the phase behavior modeling of asymmetric binary mixtures. The capability of different mixing rules and volume shift in the prediction of solubility and saturated liquid density has been investigated. Different binary systems of (alkane + alkanol), (alkane + alkane), (carbon dioxide + alkanol), and (carbon dioxide + alkane) are considered. The composition and the density of the saturated liquid phase at equilibrium are the properties of interest. Considering the composition and saturated liquid density of different binary systems, three main objectives are investigated. First, three different mixing rules (one-parameter, two-parameter and Wong–Sandler) coupled with the Peng–Robinson equation of state were used to predict the equilibrium properties. The Wong–Sandler mixing rule was utilized with the non-random two-liquid (NRTL) model. Binary interaction coefficients and NRTL model parameters were optimized using the Levenberg–Marquardt algorithm. Second, to improve the density prediction, the volume translation technique was applied. Finally, two different approaches were considered to tune the equation of state: regression of experimental equilibrium compositions and densities separately and simultaneously. The modeling results show that there is no single superior mixing rule that can predict the equilibrium properties for all systems. The two-parameter and Wong–Sandler mixing rules show promising results.
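The Peng–Robinson equation of state with a volume translation, as used in the study, can be sketched for a pure component as follows: the cubic in the compressibility factor Z is solved, the liquid root gives the molar volume, and a constant shift c is subtracted. The CO2 test state and the unshifted c = 0 are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

R = 8.314  # J/(mol K)

def pr_liquid_density(T, P, Tc, Pc, omega, M, c=0.0):
    """Peng-Robinson liquid density with an optional volume
    translation c (m^3/mol) subtracted from the molar volume."""
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1 + kappa * (1 - np.sqrt(T / Tc))) ** 2
    a = 0.45724 * R**2 * Tc**2 / Pc * alpha
    b = 0.07780 * R * Tc / Pc
    A, B = a * P / (R * T) ** 2, b * P / (R * T)
    # Cubic: Z^3 - (1-B) Z^2 + (A - 3B^2 - 2B) Z - (AB - B^2 - B^3) = 0
    roots = np.roots([1, -(1 - B), A - 3 * B**2 - 2 * B, -(A * B - B**2 - B**3)])
    z_liq = min(z.real for z in roots if abs(z.imag) < 1e-10 and z.real > B)
    v = z_liq * R * T / P - c  # volume-translated molar volume, m^3/mol
    return M / v  # kg/m^3

# Compressed liquid CO2 (Tc = 304.13 K, Pc = 7.377 MPa, omega = 0.225)
rho = pr_liquid_density(280.0, 5.0e6, 304.13, 7.377e6, 0.225, 0.04401)
```

The volume translation shifts the liquid density without affecting the predicted phase equilibrium, which is why the study can tune compositions and densities either separately or simultaneously.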

  10. Prediction of lung density changes after radiotherapy by cone beam computed tomography response markers and pre-treatment factors for non-small cell lung cancer patients.

    Science.gov (United States)

    Bernchou, Uffe; Hansen, Olfred; Schytte, Tine; Bertelsen, Anders; Hope, Andrew; Moseley, Douglas; Brink, Carsten

    2015-10-01

    This study investigates the ability of pre-treatment factors and response markers extracted from standard cone-beam computed tomography (CBCT) images to predict the lung density changes induced by radiotherapy for non-small cell lung cancer (NSCLC) patients. Density changes in follow-up computed tomography scans were evaluated for 135 NSCLC patients treated with radiotherapy. Early response markers were obtained by analysing changes in lung density in CBCT images acquired during the treatment course. The ability of pre-treatment factors and CBCT markers to predict lung density changes induced by radiotherapy was investigated. Age and CBCT markers extracted at 10th, 20th, and 30th treatment fraction significantly predicted lung density changes in a multivariable analysis, and a set of response models based on these parameters were established. The correlation coefficient for the models was 0.35, 0.35, and 0.39, when based on the markers obtained at the 10th, 20th, and 30th fraction, respectively. The study indicates that younger patients without lung tissue reactions early into their treatment course may have minimal radiation induced lung density increase at follow-up. Further investigations are needed to examine the ability of the models to identify patients with low risk of symptomatic toxicity. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  11. Using Apparent Density of Paper from Hardwood Kraft Pulps to Predict Sheet Properties, based on Unsupervised Classification and Multivariable Regression Techniques

    Directory of Open Access Journals (Sweden)

    Ofélia Anjos

    2015-07-01

Paper properties determine the product application potential and depend on the raw material, pulping conditions, and pulp refining. The aim of this study was to construct mathematical models that predict quantitative relations between the paper density and various mechanical and optical properties of the paper. A dataset of properties of paper handsheets produced with pulps of Acacia dealbata, Acacia melanoxylon, and Eucalyptus globulus beaten at 500, 2500, and 4500 revolutions was used. Unsupervised classification techniques were combined to assess the need to perform separate prediction models for each species, and multivariable regression techniques were used to establish such prediction models. It was possible to develop models with a high goodness of fit using paper density as the independent variable (or predictor) for all variables except tear index and zero-span tensile strength, both dry and wet.

  12. Wind farm production prediction - The Zephyr model

    Energy Technology Data Exchange (ETDEWEB)

    Landberg, L. [Risoe National Lab., Wind Energy Dept., Roskilde (Denmark); Giebel, G. [Risoe National Lab., Wind Energy Dept., Roskilde (Denmark); Madsen, H. [IMM (DTU), Kgs. Lyngby (Denmark); Nielsen, T.S. [IMM (DTU), Kgs. Lyngby (Denmark); Joergensen, J.U. [Danish Meteorologisk Inst., Copenhagen (Denmark); Lauersen, L. [Danish Meteorologisk Inst., Copenhagen (Denmark); Toefting, J. [Elsam, Fredericia (DK); Christensen, H.S. [Eltra, Fredericia (Denmark); Bjerge, C. [SEAS, Haslev (Denmark)

    2002-06-01

    This report describes a project - funded by the Danish Ministry of Energy and the Environment - which developed a next generation prediction system called Zephyr. The Zephyr system is a merging between two state-of-the-art prediction systems: Prediktor of Risoe National Laboratory and WPPT of IMM at the Danish Technical University. The numerical weather predictions were generated by DMI's HIRLAM model. Due to technical difficulties programming the system, only the computational core and a very simple version of the originally very complex system were developed. The project partners were: Risoe, DMU, DMI, Elsam, Eltra, Elkraft System, SEAS and E2. (au)

  13. A note on the conditional density estimate in single functional index model

    OpenAIRE

    2010-01-01

In this paper, we consider estimation of the conditional density of a scalar response variable Y given a Hilbertian random variable X when the observations are linked with a single-index structure. We establish the pointwise and the uniform almost complete convergence (with the rate) of the kernel estimate of this model. As an application, we show how our result can be applied in the prediction problem via the conditional mode estimate. Finally, the estimation of the funct...

  14. Large urban fire environment: trends and model city predictions

    International Nuclear Information System (INIS)

    Larson, D.A.; Small, R.D.

    1983-01-01

    The urban fire environment that would result from a megaton-yield nuclear weapon burst is considered. The dependence of temperatures and velocities on fire size, burning intensity, turbulence, and radiation is explored, and specific calculations for three model urban areas are presented. In all cases, high velocity fire winds are predicted. The model-city results show the influence of building density and urban sprawl on the fire environment. Additional calculations consider large-area fires with the burning intensity reduced in a blast-damaged urban center

  15. Model predictive controller design of hydrocracker reactors

    OpenAIRE

    GÖKÇE, Dila

    2011-01-01

    This study summarizes the design of a Model Predictive Controller (MPC) in Tüpraş, İzmit Refinery Hydrocracker Unit Reactors. Hydrocracking process, in which heavy vacuum gasoil is converted into lighter and valuable products at high temperature and pressure is described briefly. Controller design description, identification and modeling studies are examined and the model variables are presented. WABT (Weighted Average Bed Temperature) equalization and conversion increase are simulate...

  16. Prediction of melanoma metastasis by the Shields index based on lymphatic vessel density

    Directory of Open Access Journals (Sweden)

    Metcalfe Chris

    2010-05-01

Background: Melanoma usually presents as an initial skin lesion without evidence of metastasis. A significant proportion of patients develop subsequent local, regional or distant metastasis, sometimes many years after the initial lesion was removed. The current most effective staging method to identify early regional metastasis is sentinel lymph node biopsy (SLNB), which is invasive, not without morbidity and, while improving staging, may not improve overall survival. Lymphatic density, Breslow's thickness and the presence or absence of lymphatic invasion, combined, has been proposed by Shields et al. as a prognostic index of metastasis in a patient group. Methods: Here we undertook a retrospective analysis of 102 malignant melanomas from patients with more than five years of follow-up to evaluate the Shields index and compare it with existing indicators. Results: The Shields index accurately predicted outcome in 90% of patients with metastases and 84% without metastases. For these, the Shields index was more predictive than thickness or lymphatic density. Alternate lymphatic measurement (hot spot analysis) was also effective when combined into the Shields index in a cohort of 24 patients. Conclusions: These results show that the Shields index, a non-invasive analysis based on immunohistochemistry of lymphatics surrounding primary lesions, can accurately predict outcome and is a simple, useful prognostic tool in malignant melanoma.

  17. Model dependence of isospin sensitive observables at high densities

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Wen-Mei [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); University of Chinese Academy of Sciences, Beijing 100049 (China); School of Science, Huzhou Teachers College, Huzhou 313000 (China); Yong, Gao-Chan, E-mail: yonggaochan@impcas.ac.cn [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190 (China); Wang, Yongjia [School of Science, Huzhou Teachers College, Huzhou 313000 (China); School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000 (China); Li, Qingfeng [School of Science, Huzhou Teachers College, Huzhou 313000 (China); Zhang, Hongfei [School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000 (China); State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190 (China); Zuo, Wei [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190 (China)

    2013-10-07

Within two different frameworks of isospin-dependent transport models, the isospin-dependent Boltzmann–Uehling–Uhlenbeck (IBUU04) and the Ultrarelativistic Quantum Molecular Dynamics (UrQMD) models, sensitive probes of the nuclear symmetry energy are simulated and compared. It is shown that the neutron-to-proton ratio of free nucleons, the π−/π+ ratio, and the isospin-sensitive transverse and elliptic flows given by the two transport models with their "best settings" all show clear differences. The discrepancy in the n/p ratio of free nucleons between the two models mainly originates from the different symmetry potentials used, while the discrepancies in the charged π−/π+ ratio and the isospin-sensitive flows mainly originate from different isospin-dependent nucleon–nucleon cross sections. These findings call for more detailed studies of the model inputs (i.e., the density- and momentum-dependent symmetry potential and the isospin-dependent in-medium nucleon–nucleon cross section) of the isospin-dependent transport models used. Studies of the model dependence of isospin-sensitive observables can help nuclear physicists pin down the density dependence of the nuclear symmetry energy through comparison between experiments and theoretical simulations.

  18. Describing a Strongly Correlated Model System with Density Functional Theory.

    Science.gov (United States)

    Kong, Jing; Proynov, Emil; Yu, Jianguo; Pachter, Ruth

    2017-07-06

The linear chain of hydrogen atoms, a basic prototype for the transition from a metal to a Mott insulator, is studied with a recent density functional theory model functional for nondynamic and strong correlation. The computed cohesive energy curve for the transition agrees well with accurate literature results. The variation of the electronic structure in this transition is characterized with a density functional descriptor that yields the atomic population of effectively localized electrons. These new methods are also applied to the study of the Peierls dimerization of the stretched, evenly spaced Mott insulator to a chain of H2 molecules, a different insulator. The transitions among the two insulating states and the metallic state of the hydrogen chain system are depicted in a semiquantitative phase diagram. Overall, we demonstrate the capability of studying strongly correlated materials with a mean-field model at the fundamental level, in contrast to the general pessimistic view on the feasibility of doing so.

  19. Improving density functional tight binding predictions of free energy surfaces for peptide condensation reactions in solution

    Science.gov (United States)

    Kroonblawd, Matthew; Goldman, Nir

    First principles molecular dynamics using highly accurate density functional theory (DFT) is a common tool for predicting chemistry, but the accessible time and space scales are often orders of magnitude beyond the resolution of experiments. Semi-empirical methods such as density functional tight binding (DFTB) offer up to a thousand-fold reduction in required CPU hours and can approach experimental scales. However, standard DFTB parameter sets lack good transferability and calibration for a particular system is usually necessary. Force matching the pairwise repulsive energy term in DFTB to short DFT trajectories can improve the former's accuracy for chemistry that is fast relative to DFT simulation times (Contract DE-AC52-07NA27344.

  20. Improving Density Functional Tight Binding Predictions of Free Energy Surfaces for Slow Chemical Reactions in Solution

    Science.gov (United States)

    Kroonblawd, Matthew; Goldman, Nir

    2017-06-01

    First principles molecular dynamics using highly accurate density functional theory (DFT) is a common tool for predicting chemistry, but the accessible time and space scales are often orders of magnitude beyond the resolution of experiments. Semi-empirical methods such as density functional tight binding (DFTB) offer up to a thousand-fold reduction in required CPU hours and can approach experimental scales. However, standard DFTB parameter sets lack good transferability and calibration for a particular system is usually necessary. Force matching the pairwise repulsive energy term in DFTB to short DFT trajectories can improve the former's accuracy for reactions that are fast relative to DFT simulation times (Contract DE-AC52-07NA27344.

  1. Predictive modeling of pedestal structure in KSTAR using EPED model

    Energy Technology Data Exchange (ETDEWEB)

    Han, Hyunsun; Kim, J. Y. [National Fusion Research Institute, Daejeon 305-806 (Korea, Republic of); Kwon, Ohjin [Department of Physics, Daegu University, Gyeongbuk 712-714 (Korea, Republic of)

    2013-10-15

    A predictive calculation is given for the structure of edge pedestal in the H-mode plasma of the KSTAR (Korea Superconducting Tokamak Advanced Research) device using the EPED model. Particularly, the dependence of pedestal width and height on various plasma parameters is studied in detail. The two codes, ELITE and HELENA, are utilized for the stability analysis of the peeling-ballooning and kinetic ballooning modes, respectively. Summarizing the main results, the pedestal slope and height have a strong dependence on plasma current, rapidly increasing with it, while the pedestal width is almost independent of it. The plasma density or collisionality gives initially a mild stabilization, increasing the pedestal slope and height, but above some threshold value its effect turns to a destabilization, reducing the pedestal width and height. Among several plasma shape parameters, the triangularity gives the most dominant effect, rapidly increasing the pedestal width and height, while the effect of elongation and squareness appears to be relatively weak. Implication of these edge results, particularly in relation to the global plasma performance, is discussed.

  2. A Trade Study of Thermosphere Empirical Neutral Density Models

    Science.gov (United States)

    2014-08-01

The solar radio F10.7 proxy and magnetic activity measurements are used to calculate the baseline orbit. This approach is applied to compare the daily... The approach is to calculate along-track errors for these models and compare them against the baseline error based on the "ground truth" neutral density data... (Approved for public release; distribution is unlimited.)

  3. Modelling high density phenomena in hydrogen fibre Z-pinches

    International Nuclear Information System (INIS)

    Chittenden, J.P.

    1990-09-01

    The application of hydrogen fibre Z-pinches to the study of the radiative collapse phenomenon is studied computationally. Two areas of difficulty, the formation of a fully ionized pinch from a cryogenic fibre and the processes leading to collapse termination, are addressed in detail. A zero-D model based on the energy equation highlights the importance of particle end losses and changes in the Coulomb logarithm upon collapse initiation and termination. A 1-D Lagrangian resistive MHD code shows the importance of the changing radial profile shapes, particularly in delaying collapse termination. A 1-D, three fluid MHD code is developed to model the ionization of the fibre by thermal conduction from a high temperature surface corona to the cold core. Rate equations for collisional ionization, 3-body recombination and equilibration are solved in tandem with fluid equations for the electrons, ions and neutrals. Continuum lowering is found to assist ionization at the corona-core interface. The high density plasma phenomena responsible for radiative collapse termination are identified as the self-trapping of radiation and free electron degeneracy. A radiation transport model and computational analogues for the effects of degeneracy upon the equation of state, transport coefficients and opacity are implemented in the 1-D, single fluid model. As opacity increases the emergent spectrum is observed to become increasingly Planckian and a fall off in radiative cooling at small radii and low frequencies occurs giving rise to collapse termination. Electron degeneracy terminates radiative collapse by supplementing the radial pressure gradient until the electromagnetic pinch force is balanced. Collapse termination is found to be a hybrid process of opacity and degeneracy effects across a wide range of line densities with opacity dominant at large line densities but with electron degeneracy becoming increasingly important at lower line densities. (author)

  4. Multi-Model Ensemble Wake Vortex Prediction

    Science.gov (United States)

    Koerner, Stephan; Holzaepfel, Frank; Ahmad, Nash'at N.

    2015-01-01

    Several multi-model ensemble methods are investigated for predicting wake vortex transport and decay. This study is a joint effort between National Aeronautics and Space Administration and Deutsches Zentrum fuer Luft- und Raumfahrt to develop a multi-model ensemble capability using their wake models. An overview of different multi-model ensemble methods and their feasibility for wake applications is presented. The methods include Reliability Ensemble Averaging, Bayesian Model Averaging, and Monte Carlo Simulations. The methodologies are evaluated using data from wake vortex field experiments.
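One simple member of the ensemble family discussed, a skill-weighted average with weights inversely proportional to each member's historical RMS error, can be sketched as follows. This is a deliberate simplification in the spirit of reliability weighting, not the Reliability Ensemble Averaging or Bayesian Model Averaging algorithms themselves, and the forecast and error figures are invented:

```python
import numpy as np

def skill_weighted_ensemble(preds, errors):
    """Combine member forecasts with weights inversely proportional to the
    square of each member's historical RMS error (simplified
    reliability-style weighting)."""
    w = 1.0 / np.asarray(errors, dtype=float) ** 2
    w /= w.sum()
    return w @ np.asarray(preds, dtype=float)

# Hypothetical vortex-descent forecasts (m) from three wake models,
# with each model's past RMS error against observations (m)
preds = np.array([120.0, 135.0, 128.0])
errors = np.array([5.0, 12.0, 8.0])
z = skill_weighted_ensemble(preds, errors)
```

The combined estimate is pulled toward the historically most reliable member, which is the basic behaviour the more elaborate ensemble methods refine with probabilistic weights.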

  5. Risk terrain modeling predicts child maltreatment.

    Science.gov (United States)

    Daley, Dyann; Bachmann, Michael; Bachmann, Brittany A; Pedigo, Christian; Bui, Minh-Thuy; Coffman, Jamye

    2016-12-01

As indicated by research on the long-term effects of adverse childhood experiences (ACEs), maltreatment has far-reaching consequences for affected children. Effective prevention measures have been elusive, partly due to difficulty in identifying vulnerable children before they are harmed. This study employs Risk Terrain Modeling (RTM), an analysis of the cumulative effect of environmental factors thought to be conducive for child maltreatment, to create a highly accurate prediction model for future substantiated child maltreatment cases in the City of Fort Worth, Texas. The model is superior to commonly used hotspot predictions and more beneficial in aiding prevention efforts in a number of ways: 1) it identifies the highest risk areas for future instances of child maltreatment with improved precision and accuracy; 2) it aids the prioritization of risk-mitigating efforts by informing about the relative importance of the most significant contributing risk factors; 3) since predictions are modeled as a function of easily obtainable data, practitioners do not have to undergo the difficult process of obtaining official child maltreatment data to apply it; 4) the inclusion of a multitude of environmental risk factors creates a more robust model with higher predictive validity; and, 5) the model does not rely on a retrospective examination of past instances of child maltreatment, but adapts predictions to changing environmental conditions. The present study introduces and examines the predictive power of this new tool to aid prevention efforts seeking to improve the safety, health, and wellbeing of vulnerable children. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
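The core of a risk terrain model, a weighted per-cell sum of rasterized environmental risk layers, can be sketched as follows. The layer names, weights and random rasters are hypothetical illustrations, not the Fort Worth data:

```python
import numpy as np

rng = np.random.default_rng(3)
grid = (30, 30)  # study area rasterized into cells

# Hypothetical binary risk layers: True where a cell is near a risk factor
layers = {
    "liquor_stores": rng.random(grid) < 0.10,
    "vacant_housing": rng.random(grid) < 0.15,
    "poverty_hotspot": rng.random(grid) < 0.20,
}
# Illustrative layer weights (in practice, derived from each factor's
# measured association with the outcome)
weights = {"liquor_stores": 1.0, "vacant_housing": 1.0, "poverty_hotspot": 1.0}

# Composite risk terrain: weighted sum of the layers in every cell
risk = sum(weights[k] * layers[k].astype(float) for k in layers)
high_risk = risk >= 2  # cells where at least two risk factors co-occur
```

Prevention resources would then be prioritized toward the `high_risk` cells, and the surface can be recomputed as the environmental layers change, which is the adaptivity the abstract highlights.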

  6. A phenomenological constitutive model for low density polyurethane foams

    International Nuclear Information System (INIS)

    Neilsen, M.K.; Morgan, H.S.; Krieg, R.D.

    1987-04-01

    Results from a series of hydrostatic and triaxial compression tests which were performed on polyurethane foams are presented in this report. These tests indicate that the volumetric and deviatoric parts of the foam behavior are strongly coupled. This coupling behavior could not be captured with any of several commonly used plasticity models. Thus, a new constitutive model was developed. This new model was based on a decomposition of the foam response into two parts: (1) the response of the polymer skeleton, and (2) the response of the air inside the cells. The air contribution was completely volumetric. The new constitutive model was implemented in two finite element codes, SANCHO and PRONTO. Results from a series of analyses completed with these codes indicated that the new constitutive model captured all of the foam behaviors that had been observed in the experiments. Finally, a typical dynamic problem was analyzed using the new constitutive model and other constitutive models to demonstrate differences between the models. Results from this series of analyses indicated that the new constitutive model generated displacement and acceleration predictions that fell between those obtained with the other models. This result was expected. 9 refs., 45 figs., 4 tabs.

  7. PREDICTIVE CAPACITY OF ARCH FAMILY MODELS

    Directory of Open Access Journals (Sweden)

    Raphael Silveira Amaro

    2016-03-01

    In recent decades, a remarkable number of models, variants of the Autoregressive Conditional Heteroscedastic family, have been developed and empirically tested, making the process of choosing a particular model extremely complex. This research aims to compare, using the Model Confidence Set procedure, the predictive capacity of five conditional heteroscedasticity models, considering eight different statistical probability distributions. The financial series used refer to the log-return series of the Bovespa index and the Dow Jones Industrial Index in the period between 27 October 2008 and 30 December 2014. The empirical evidence showed that, in general, the competing models have a great homogeneity in their predictions, whether for a stock market of a developed country or for a stock market of a developing country. An equivalent result can be inferred for the statistical probability distributions that were used.

  8. Droplet and bubble nucleation modeled by density gradient theory – cubic equation of state versus saft model

    Directory of Open Access Journals (Sweden)

    Hrubý Jan

    2012-04-01

    The study presents some preliminary results of the density gradient theory (GT) combined with two different equations of state (EoS): the classical cubic equation by van der Waals and a recent approach based on the statistical associating fluid theory (SAFT), namely its perturbed-chain (PC) modification. The results showed that, for a given surface tension, the cubic EoS predicted the density profile with a noticeable defect. Bulk densities predicted by the cubic EoS differed by as much as 100% from the reference data. On the other hand, the PC-SAFT EoS provided accurate results for the density profile and both bulk densities over a large range of temperatures. It has been shown that PC-SAFT is a promising tool for accurate modeling of nucleation using the GT. Besides the basic case of a planar phase interface, the spherical interface was analyzed to model a critical cluster occurring either for nucleation of droplets (condensation) or bubbles (boiling, cavitation). However, the general solution for the spherical interface will require some more attention due to its numerical difficulty.
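    The density gradient theory referenced here is usually written in the Cahn-Hilliard square-gradient form; the notation below is the generic textbook one (influence parameter c, equilibrium chemical potential and pressure), not taken from this paper:

```latex
% Helmholtz energy functional with a square-gradient correction
F[\rho] = \int \left[ f_0(\rho) + \tfrac{c}{2}\,|\nabla\rho|^2 \right] \mathrm{d}V
% For a planar interface, minimization yields the surface tension
\sigma = \int_{\rho_v}^{\rho_l} \sqrt{2\,c\,\Delta\omega(\rho)}\;\mathrm{d}\rho ,
\qquad
\Delta\omega(\rho) = f_0(\rho) - \rho\,\mu_{\mathrm{eq}} + p_{\mathrm{eq}}
```

The EoS enters only through the bulk free-energy density f_0(ρ), which is why swapping the cubic EoS for PC-SAFT changes the predicted profiles and bulk densities.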

  9. Empirical model for the electron density peak height disturbance in response to solar wind conditions

    Science.gov (United States)

    Blanch, E.; Altadill, D.

    2009-04-01

    Geomagnetic storms disturb the quiet behaviour of the ionosphere, its electron density and the electron density peak height, hmF2. Much work has been done to predict the variations of the electron density, but few efforts have been dedicated to predicting the variations of hmF2 under disturbed helio-geomagnetic conditions. We present the results of analyses of the F2 layer peak height disturbances that occurred during intense geomagnetic storms over one solar cycle. The results systematically show a significant peak height increase about 2 hours after the beginning of the main phase of the geomagnetic storm, independently of both the local time position of the station at the onset of the storm and the intensity of the storm. An additional uplift is observed in the post-sunset sector. The duration of the uplift and the height increase depend on the intensity of the geomagnetic storm, the season and the local time position of the station at the onset of the storm. An empirical model has been developed to predict the electron density peak height disturbances in response to solar wind conditions and local time, which can be used for nowcasting and forecasting the hmF2 disturbances for the middle latitude ionosphere. This is an important output for the operational purposes of the EURIPOS project.

  10. Ab initio derivation of model energy density functionals

    International Nuclear Information System (INIS)

    Dobaczewski, Jacek

    2016-01-01

    I propose a simple and manageable method that allows for deriving coupling constants of model energy density functionals (EDFs) directly from ab initio calculations performed for finite fermion systems. A proof-of-principle application allows for linking properties of finite nuclei, determined by using the nuclear nonlocal Gogny functional, to the coupling constants of the quasilocal Skyrme functional. The method does not rely on properties of infinite fermion systems but on the ab initio calculations in finite systems. It also allows for quantifying merits of different model EDFs in describing the ab initio results. (letter)

  11. Optimal Design of Low-Density SNP Arrays for Genomic Prediction: Algorithm and Applications.

    Directory of Open Access Journals (Sweden)

    Xiao-Lin Wu

    Low-density (LD) single nucleotide polymorphism (SNP) arrays provide a cost-effective solution for genomic prediction and selection, but algorithms and computational tools are needed for the optimal design of LD SNP chips. A multiple-objective, local optimization (MOLO) algorithm was developed for the design of optimal LD SNP chips that can be imputed accurately to medium-density (MD) or high-density (HD) SNP genotypes for genomic prediction. The objective function facilitates maximization of non-gap map length and system information for the SNP chip, and the latter is computed either as locus-averaged (LASE) or haplotype-averaged Shannon entropy (HASE) and adjusted for uniformity of the SNP distribution. HASE performed better than LASE with ≤1,000 SNPs, but required considerably more computing time. Nevertheless, the differences diminished when >5,000 SNPs were selected. Optimization was accomplished conditionally on the presence of SNPs that were obligated to each chromosome. The frame location of SNPs on a chip can be either uniform (evenly spaced) or non-uniform. For the latter design, a tunable empirical Beta distribution was used to guide the location distribution of frame SNPs such that both ends of each chromosome were enriched with SNPs. The SNP distribution on each chromosome was finalized through the objective function that was locally and empirically maximized. This MOLO algorithm was capable of selecting a set of approximately evenly-spaced and highly-informative SNPs, which in turn led to increased imputation accuracy compared with selection solely of evenly-spaced SNPs. Imputation accuracy increased with LD chip size, and imputation error rate was extremely low for chips with ≥3,000 SNPs. Assuming that genotyping or imputation error occurs at random, imputation error rate can be viewed as the upper limit for genomic prediction error. Our results show that about 25% of imputation error rate was propagated to genomic prediction in an Angus
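    Locus-averaged Shannon entropy (LASE), one of the two information measures in the objective function above, can be computed directly from allele frequencies. A small sketch under the assumption of biallelic SNPs; the function names are illustrative, not from the paper:

```python
import math

def locus_entropy(p):
    """Shannon entropy (bits) of a biallelic SNP with allele frequency p.
    Maximal (1 bit) at p = 0.5, zero for fixed loci."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))

def lase(freqs):
    """Locus-averaged Shannon entropy over a candidate SNP panel."""
    return sum(locus_entropy(p) for p in freqs) / len(freqs)
```

Maximizing this quantity favors panels of SNPs with intermediate allele frequencies, which carry the most information per genotyped locus.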

  12. Modelling the predictive performance of credit scoring

    Directory of Open Access Journals (Sweden)

    Shi-Wei Shen

    2013-07-01

    Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: A goodness-of-fit test demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI) and micro- as well as macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality of the study is that three models were developed to predict corporate firms’ defaults based on different microeconomic and macroeconomic factors such as the TCRI, asset growth rates, stock index and gross domestic product. Contribution/value-add: The study utilises different goodness-of-fit measures and receiver operating characteristics in examining the robustness of the predictive power of these factors.

  13. Protein single-model quality assessment by feature-based probability density functions.

    Science.gov (United States)

    Cao, Renzhi; Cheng, Jianlin

    2016-04-04

    Protein quality assessment (QA) has played an important role in protein structure prediction. We developed a novel single-model quality assessment method, Qprob. Qprob calculates the absolute error for each protein feature value against the true quality scores (i.e., GDT-TS scores) of protein structural models, and uses them to estimate its probability density distribution for quality assessment. Qprob was blindly tested in the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as the MULTICOM-NOVEL server. The official CASP results show that Qprob ranks as one of the top single-model QA methods. In addition, Qprob contributes to our protein tertiary structure predictor MULTICOM, which is officially ranked 3rd out of 143 predictors. This good performance shows that Qprob is good at assessing the quality of models of hard targets. These results demonstrate that this new probability-density-distribution-based method is effective for protein single-model quality assessment and is useful for protein structure prediction. The Qprob web server is available at: http://calla.rnet.missouri.edu/qprob/. The software is freely available through the Qprob web server.

  14. Comparison of two ordinal prediction models

    DEFF Research Database (Denmark)

    Kattan, Michael W; Gerds, Thomas A

    2015-01-01

    … system (i.e. old or new), such as the level of evidence for one or more factors included in the system or the general opinions of expert clinicians. However, given the major objective of estimating prognosis on an ordinal scale, we argue that the rival staging system candidates should be compared … on their ability to predict outcome. We sought to outline an algorithm that would compare two rival ordinal systems on their predictive ability. RESULTS: We devised an algorithm based largely on the concordance index, which is appropriate for comparing two models in their ability to rank observations. We … demonstrate our algorithm with a prostate cancer staging system example. CONCLUSION: We have provided an algorithm for selecting the preferred staging system based on prognostic accuracy. It appears to be useful for the purpose of selecting between two ordinal prediction models.
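    The concordance index at the core of the algorithm can be computed by checking every pair of observations with different outcomes. A minimal O(n²) illustration of the standard definition, not the authors' implementation:

```python
from itertools import combinations

def concordance_index(predictions, outcomes):
    """Fraction of usable pairs (pairs with different outcomes) that the
    predictions rank in the same order; ties in prediction count 1/2."""
    concordant = usable = 0.0
    for (p1, o1), (p2, o2) in combinations(zip(predictions, outcomes), 2):
        if o1 == o2:
            continue  # this pair carries no ordering information
        usable += 1
        if (p1 - p2) * (o1 - o2) > 0:
            concordant += 1.0      # same ordering: concordant pair
        elif p1 == p2:
            concordant += 0.5      # tied predictions get half credit
    return concordant / usable
```

A value of 1.0 means perfect ranking, 0.5 means no better than chance, so the staging system with the higher index is preferred under this criterion.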

  15. Uncertainty and Sensitivity of Alternative Rn-222 Flux Density Models Used in Performance Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Greg J. Shott, Vefa Yucel, Lloyd Desotell

    2007-06-01

    Performance assessments for the Area 5 Radioactive Waste Management Site on the Nevada Test Site have used three different mathematical models to estimate Rn-222 flux density. This study describes the performance, uncertainty, and sensitivity of the three models, which include the U.S. Nuclear Regulatory Commission Regulatory Guide 3.64 analytical method and two numerical methods. The uncertainty of each model was determined by Monte Carlo simulation using Latin hypercube sampling. The global sensitivity was investigated using the Morris one-at-a-time screening method, sample-based correlation and regression methods, the variance-based extended Fourier amplitude sensitivity test, and Sobol's sensitivity indices. The models were found to produce similar estimates of the mean and median flux density, but to have different uncertainties and sensitivities. When the Rn-222 effective diffusion coefficient was estimated using five different published predictive models, the radon flux density models were found to be most sensitive to the effective diffusion coefficient model selected, the emanation coefficient, and the radionuclide inventory. Using a site-specific measured effective diffusion coefficient significantly reduced the output uncertainty. When a site-specific effective-diffusion coefficient was used, the models were most sensitive to the emanation coefficient and the radionuclide inventory.
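    Latin hypercube sampling, used here to drive the Monte Carlo uncertainty runs, stratifies each input dimension into equal-probability bins and draws exactly one sample per bin. A small illustrative implementation on the unit cube (not the assessment's actual code; marginal distributions would be applied via inverse CDFs):

```python
import random

def latin_hypercube(n_samples, n_dims, seed=0):
    """Latin hypercube sample on the unit cube: each dimension is cut into
    n_samples equal-probability strata, exactly one point falls in each
    stratum, and strata are randomly paired across dimensions."""
    rng = random.Random(seed)
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)  # random pairing of strata across dimensions
        for i, s in enumerate(strata):
            # uniform draw within the assigned stratum [s/n, (s+1)/n)
            samples[i][d] = (s + rng.random()) / n_samples
    return samples
```

Compared with plain Monte Carlo, this guarantees that every decile (for n_samples = 10) of every input is exercised, which stabilizes output statistics at a given sample count.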

  16. Uncertainty and Sensitivity of Alternative Rn-222 Flux Density Models Used in Performance Assessment

    International Nuclear Information System (INIS)

    Greg J. Shott, Vefa Yucel, Lloyd Desotell; non-NSTec authors: G. Pyles and Jon Carilli

    2007-01-01

    Performance assessments for the Area 5 Radioactive Waste Management Site on the Nevada Test Site have used three different mathematical models to estimate Rn-222 flux density. This study describes the performance, uncertainty, and sensitivity of the three models, which include the U.S. Nuclear Regulatory Commission Regulatory Guide 3.64 analytical method and two numerical methods. The uncertainty of each model was determined by Monte Carlo simulation using Latin hypercube sampling. The global sensitivity was investigated using the Morris one-at-a-time screening method, sample-based correlation and regression methods, the variance-based extended Fourier amplitude sensitivity test, and Sobol's sensitivity indices. The models were found to produce similar estimates of the mean and median flux density, but to have different uncertainties and sensitivities. When the Rn-222 effective diffusion coefficient was estimated using five different published predictive models, the radon flux density models were found to be most sensitive to the effective diffusion coefficient model selected, the emanation coefficient, and the radionuclide inventory. Using a site-specific measured effective diffusion coefficient significantly reduced the output uncertainty. When a site-specific effective-diffusion coefficient was used, the models were most sensitive to the emanation coefficient and the radionuclide inventory.

  17. Hidden Semi-Markov Models for Predictive Maintenance

    Directory of Open Access Journals (Sweden)

    Francesco Cartella

    2015-01-01

    Realistic predictive maintenance approaches are essential for condition monitoring and predictive maintenance of industrial machines. In this work, we propose Hidden Semi-Markov Models (HSMMs) with (i) no constraints on the state duration density function and (ii) applicability to continuous or discrete observations. To deal with such a type of HSMM, we also propose modifications to the learning, inference, and prediction algorithms. Finally, automatic model selection has been made possible using the Akaike Information Criterion. This paper describes the theoretical formalization of the model as well as several experiments performed on simulated and real data with the aim of methodology validation. In all performed experiments, the model is able to correctly estimate the current state and to effectively predict the time to a predefined event with a low overall average absolute error. As a consequence, its applicability to real-world settings can be beneficial, especially where the Remaining Useful Lifetime (RUL) of the machine is calculated in real time.

  18. Simple Predictive Models for Saturated Hydraulic Conductivity of Technosands

    DEFF Research Database (Denmark)

    Arthur, Emmanuel; Razzaghi, Fatemeh; Møldrup, Per

    2012-01-01

    Accurate estimation of saturated hydraulic conductivity (Ks) of technosands (gravel-free, coarse sands with negligible organic matter content) is important for irrigation and drainage management of athletic fields and golf courses. In this study, we developed two simple models for predicting Ks … -Rammler particle size distribution (PSD) function. The Ks and PSD data of 14 golf course sands from the literature, as well as newly measured data for a size fraction of Lunar Regolith Simulant packed at three different dry bulk densities, were used for model evaluation. The pore network tortuosity…-connectivity parameter (m) obtained for pure coarse sand after fitting to measured Ks data was 1.68 for both models, in good agreement with m values obtained from recent solute and gas diffusion studies. Both the modified K-C and R-C models are easy to use and require limited parameter input, and both models gave…

  19. K-correlation power spectral density and surface scatter model

    Science.gov (United States)

    Dittman, Michael G.

    2006-08-01

    The K-Correlation or ABC model for surface power spectral density (PSD) and BRDF has been around for years. Eugene Church and John Stover, in particular, have published descriptions of its use in describing smooth surfaces. The model has, however, remained underused in the optical analysis community partially due to the lack of a clear summary tailored toward that application. This paper provides the K-Correlation PSD normalized to σ(λ) and BRDF normalized to TIS(σ,λ) in a format intended to be used by stray light analysts. It is hoped that this paper will promote use of the model by analysts and its incorporation as a standard tool into stray light modeling software.
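    The K-Correlation model named above is often written, for a one-dimensional profile PSD, as S1(f) = A / (1 + (B·f)²)^(C/2): A sets the low-frequency plateau, 1/B the roll-off frequency, and C the high-frequency fall-off slope. A sketch of that commonly cited form (the exact normalization to σ and TIS described in the paper should be taken from Church's and Stover's publications):

```python
def k_correlation_psd(f, A, B, C):
    """One-dimensional K-correlation (ABC) profile PSD:
    S1(f) = A / (1 + (B*f)**2)**(C/2)
    f: spatial frequency; A: low-frequency plateau; 1/B: roll-off
    frequency; C: asymptotic log-log slope at high frequency."""
    return A / (1.0 + (B * f) ** 2) ** (C / 2.0)
```

At frequencies well above 1/B the curve falls as f^(-C) on a log-log plot, which is the signature used to fit measured surface roughness spectra.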

  20. Modeling a nucleon system: static and dynamical properties - density fluctuations

    International Nuclear Information System (INIS)

    Idier, D.

    1997-01-01

    This thesis sets forth a quasi-particle model for the static and dynamical properties of nuclear matter. The model is based on a scale ratio of quasi-particles to nucleons and the projection of the semi-classical distribution onto a basis of coherent Gaussian states. The first chapter deals with transport equations, particularly the Vlasov equation for the Wigner distribution function. The second is devoted to the statics of nuclear matter: the sampling effect upon the nuclear density is treated, and the state equation of the Gaussian fluid is compared with that given by the Hartree-Fock approximation. We define the state equation as the relationship between the nucleon binding energy and density, for a given temperature. The curvature around the state-equation minimum of the quasi-particle system is shown to be related to the speed of propagation of density perturbations. The volume energy and the surface properties of a (semi-)infinite nucleon system are derived. For the resulting saturated, self-consistent semi-infinite system of quasi-particles, the surface coefficient appearing in the mass formula is extracted, as well as the system density profile. The third chapter treats the dynamics of two-particle residual interactions. The effect of different parameters on the relaxation of a nucleon system without a mean field is studied by means of Eulerian and Lagrangian modeling. The fourth chapter treats the volume instabilities (spinodal decomposition) in nuclear matter. Quasi-particle systems, initially prepared in the spinodal region of the interaction used, are allowed to evolve. It is shown that the scale ratio acts upon the amount of fluctuations injected into the system. An inhomogeneity degree and a proper time are defined, and the role of collisions in the spinodal decomposition, as well as that of the initial temperature and density, are investigated. Assuming different effective macroscopic interactions, the influence of quantities as

  1. Predictive analytics can support the ACO model.

    Science.gov (United States)

    Bradley, Paul

    2012-04-01

    Predictive analytics can be used to rapidly spot hard-to-identify opportunities to better manage care--a key tool in accountable care. When considering analytics models, healthcare providers should: Make value-based care a priority and act on information from analytics models. Create a road map that includes achievable steps, rather than major endeavors. Set long-term expectations and recognize that the effectiveness of an analytics program takes time, unlike revenue cycle initiatives that may show a quick return.

  2. Predictive performance models and multiple task performance

    Science.gov (United States)

    Wickens, Christopher D.; Larish, Inge; Contorer, Aaron

    1989-01-01

    Five models that predict how performance of multiple tasks will interact in complex task scenarios are discussed. The models are shown in terms of the assumptions they make about human operator divided attention. The different assumptions about attention are then empirically validated in a multitask helicopter flight simulation. It is concluded from this simulation that the most important assumption relates to the coding of demand level of different component tasks.

  3. Model Predictive Control of Sewer Networks

    DEFF Research Database (Denmark)

    Pedersen, Einar B.; Herbertsson, Hannes R.; Niemann, Henrik

    2016-01-01

    The developments in solutions for management of urban drainage are of vital importance, as the amount of sewer water from urban areas continues to increase due to the growth of the world’s population and changing climate conditions. How a sewer network is structured, monitored and cont… benchmark model. Due to the inherent constraints, the applied approach is based on Model Predictive Control.

  4. Predicting coastal cliff erosion using a Bayesian probabilistic model

    Science.gov (United States)

    Hapke, Cheryl J.; Plant, Nathaniel G.

    2010-01-01

    Regional coastal cliff retreat is difficult to model due to the episodic nature of failures and the along-shore variability of retreat events. There is a growing demand, however, for predictive models that can be used to forecast areas vulnerable to coastal erosion hazards. Increasingly, probabilistic models are being employed that require data sets of high temporal density to define the joint probability density function that relates forcing variables (e.g. wave conditions) and initial conditions (e.g. cliff geometry) to erosion events. In this study we use a multi-parameter Bayesian network to investigate correlations between key variables that control and influence variations in cliff retreat processes. The network uses Bayesian statistical methods to estimate event probabilities using existing observations. Within this framework, we forecast the spatial distribution of cliff retreat along two stretches of cliffed coast in Southern California. The input parameters are the height and slope of the cliff, a descriptor of material strength based on the dominant cliff-forming lithology, and the long-term cliff erosion rate that represents prior behavior. The model is forced using predicted wave impact hours. Results demonstrate that the Bayesian approach is well-suited to the forward modeling of coastal cliff retreat, with the correct outcomes forecast in 70–90% of the modeled transects. The model also performs well in identifying specific locations of high cliff erosion, thus providing a foundation for hazard mapping. This approach can be employed to predict cliff erosion at time-scales ranging from storm events to the impacts of sea-level rise at the century-scale.

  5. Distributed Model Predictive Control via Dual Decomposition

    DEFF Research Database (Denmark)

    Biegel, Benjamin; Stoustrup, Jakob; Andersen, Palle

    2014-01-01

    This chapter presents dual decomposition as a means to coordinate a number of subsystems coupled by state and input constraints. Each subsystem is equipped with a local model predictive controller while a centralized entity manages the subsystems via prices associated with the coupling constraints...

  6. Dynamic density functional theory of solid tumor growth: Preliminary models

    Directory of Open Access Journals (Sweden)

    Arnaud Chauviere

    2012-03-01

    Cancer is a disease that can be seen as a complex system whose dynamics and growth result from nonlinear processes coupled across wide ranges of spatio-temporal scales. The current mathematical modeling literature addresses issues at various scales, but the development of theoretical methodologies capable of bridging gaps across scales needs further study. We present a new theoretical framework based on Dynamic Density Functional Theory (DDFT), extended, for the first time, to the dynamics of living tissues by accounting for cell density correlations, different cell types, phenotypes and cell birth/death processes, in order to provide a biophysically consistent description of processes across the scales. We present an application of this approach to tumor growth.
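    DDFT for a single species is conventionally written as a conserved gradient flow on a free-energy functional; the birth/death extension described above adds non-conserved source terms. The notation below is the generic textbook form (mobility Γ, functional F, source terms B and D), not the authors':

```latex
\frac{\partial \rho(\mathbf{r},t)}{\partial t}
  = \nabla \cdot \left[ \Gamma\,\rho(\mathbf{r},t)\,
      \nabla \frac{\delta F[\rho]}{\delta \rho(\mathbf{r},t)} \right]
  \;+\; \underbrace{B(\mathbf{r},t) - D(\mathbf{r},t)}_{\text{birth/death sources}}
```

Without the source terms the total cell number is conserved; the birth/death terms are what let the framework describe net tissue growth.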

  7. The level density parameters for fermi gas model

    International Nuclear Information System (INIS)

    Zuang Youxiang; Wang Cuilan; Zhou Chunmei; Su Zongdi

    1986-01-01

    Nuclear level densities are a crucial ingredient in statistical models, for instance in the calculations of widths, cross sections, emitted particle spectra, etc. for various reaction channels. In this work, 667 sets of new and more reliable experimental data are adopted, which include the average level spacing D, the radiative capture width Γγ0 at the neutron binding energy, and the cumulative level number N0 at low excitation energy. They were published between 1973 and 1983. Based on the parameters given by Gilbert-Cameron and Cook, the physical quantities mentioned above are calculated. The calculated results deviate appreciably from the experimental values. In order to improve the fit, the parameters in the G-C formula are adjusted and a new set of level density parameters is obtained. The parameters in this work are more suitable for fitting the new measurements.
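    The Fermi-gas level density underlying the Gilbert-Cameron parameterization has the standard high-energy form ρ(U) = exp(2√(aU)) / (12√2 · σ · a^(1/4) · U^(5/4)). A direct transcription of that textbook formula (spin-cutoff and pairing conventions vary between references, so treat the constants as illustrative):

```python
import math

def fermi_gas_level_density(U, a, sigma):
    """Fermi-gas level density (Gilbert-Cameron high-energy form):
    rho(U) = exp(2*sqrt(a*U)) / (12*sqrt(2)*sigma*a**(1/4)*U**(5/4))
    U: effective excitation energy (MeV), a: level density parameter
    (MeV^-1), sigma: spin cutoff parameter."""
    return math.exp(2.0 * math.sqrt(a * U)) / (
        12.0 * math.sqrt(2.0) * sigma * a ** 0.25 * U ** 1.25
    )
```

The exponential dependence on √(aU) is what makes calculated widths and spectra so sensitive to the fitted parameter a, motivating the refit described above.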

  8. From Real Materials to Model Hamiltonians With Density Matrix Downfolding

    Directory of Open Access Journals (Sweden)

    Huihuo Zheng

    2018-05-01

    Due to advances in computer hardware and new algorithms, it is now possible to perform highly accurate many-body simulations of realistic materials with all their intrinsic complications. The success of these simulations leaves us with a conundrum: how do we extract useful physical models and insight from them? In this article, we present a formal theory of downfolding: extracting an effective Hamiltonian from first-principles calculations. The theory maps the downfolding problem into fitting information derived from wave functions sampled from a low-energy subspace of the full Hilbert space. Since this fitting process most commonly uses reduced density matrices, we term it density matrix downfolding (DMD).

  9. Bone mineral density before and after OLT: long-term follow-up and predictive factors.

    Science.gov (United States)

    Guichelaar, Maureen M J; Kendall, Rebecca; Malinchoc, Michael; Hay, J Eileen

    2006-09-01

    Fracturing after liver transplantation (OLT) occurs due to the combination of preexisting low bone mineral density (BMD) and early posttransplant bone loss, the risk factors for which are poorly defined. The prevalence and predictive factors for hepatic osteopenia and osteoporosis, posttransplant bone loss, and subsequent bone gain were studied by the long-term posttransplant follow-up of 360 consecutive adult patients with end-stage primary biliary cirrhosis (PBC) and primary sclerosing cholangitis (PSC). Only 20% of patients with advanced PBC or PSC have normal bone mass. Risk factors for low spinal BMD are low body mass index, older age, postmenopausal status, muscle wasting, high alkaline phosphatase and low serum albumin. A high rate of spinal bone loss occurred in the first 4 posttransplant months (annual rate of 16%) especially in those with younger age, PSC, higher pretransplant bone density, no inflammatory bowel disease, shorter duration of liver disease, current smoking, and ongoing cholestasis at 4 months. Factors favoring spinal bone gain from 4 to 24 months after transplantation were lower baseline and/or 4-month bone density, premenopausal status, lower cumulative glucocorticoids, no ongoing cholestasis, and higher levels of vitamin D and parathyroid hormone. Bone mass therefore improves most in patients with lowest pretransplant BMD who undergo successful transplantation with normal hepatic function and improved gonadal and nutritional status. Patients transplanted most recently have improved bone mass before OLT, and although bone loss still occurs early after OLT, these patients also have a greater recovery in BMD over the years following OLT.

  10. Spin density waves predicted in zigzag puckered phosphorene, arsenene and antimonene nanoribbons

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Xiaohua; Zhang, Xiaoli; Wang, Xianlong [Key Laboratory of Materials Physics, Institute of Solid State Physics, Chinese Academy of Sciences, Hefei 230031 (China); Zeng, Zhi, E-mail: zzeng@theory.issp.ac.cn [Key Laboratory of Materials Physics, Institute of Solid State Physics, Chinese Academy of Sciences, Hefei 230031 (China); University of Science and Technology of China, Hefei 230026 (China)

    2016-04-15

    The pursuit of controlled magnetism in semiconductors has been a persistent goal in condensed matter physics. Recently, Vene (phosphorene, arsenene and antimonene) has been predicted to be a new class of 2D semiconductor with suitable band gaps and high carrier mobility. In this work, we investigate the edge magnetism in zigzag puckered Vene nanoribbons (ZVNRs) based on density functional theory. The band structures of ZVNRs show half-filled bands crossing the Fermi level at the midpoint of the reciprocal lattice vectors, indicating a strong Peierls instability. To remove this instability, we consider two different mechanisms, namely, a spin density wave (SDW) caused by electron-electron interaction and a charge density wave (CDW) caused by electron-phonon coupling. We have found that an antiferromagnetic Mott-insulating state defined by the SDW is the ground state of ZVNRs. In particular, the SDW in ZVNRs displays several surprising characteristics: 1) compared with other nanoribbon systems, the magnetic moments are antiparallel at each zigzag edge and almost independent of the width of the nanoribbon; 2) compared with other SDW systems, the magnetic moments and the SDW band gap are unexpectedly large, indicating a higher SDW transition temperature in ZVNRs; 3) the SDW can be effectively modified by strain and charge doping, which indicates that ZVNRs have bright prospects for nanoelectronic devices.

  11. Spin density waves predicted in zigzag puckered phosphorene, arsenene and antimonene nanoribbons

    Directory of Open Access Journals (Sweden)

    Xiaohua Wu

    2016-04-01

Full Text Available The pursuit of controlled magnetism in semiconductors has been a persistent goal in condensed matter physics. Recently, Vene (phosphorene, arsenene and antimonene) has been predicted to be a new class of 2D semiconductors with suitable band gaps and high carrier mobility. In this work, we investigate the edge magnetism in zigzag puckered Vene nanoribbons (ZVNRs) based on density functional theory. The band structures of ZVNRs show half-filled bands crossing the Fermi level at the midpoint of the reciprocal lattice vectors, indicating a strong Peierls instability. To remove this instability, we consider two different mechanisms, namely, a spin density wave (SDW) caused by electron-electron interaction and a charge density wave (CDW) caused by electron-phonon coupling. We find that an antiferromagnetic Mott-insulating state defined by the SDW is the ground state of ZVNRs. In particular, the SDW in ZVNRs displays several surprising characteristics: 1) compared with other nanoribbon systems, the magnetic moments are antiparallel at each zigzag edge and almost independent of the nanoribbon width; 2) compared with other SDW systems, the magnetic moments and band gap of the SDW are unexpectedly large, indicating a higher SDW transition temperature in ZVNRs; 3) the SDW can be effectively modified by strain and charge doping, which indicates that ZVNRs have bright prospects for nanoelectronic devices.

  12. Ensemble Assimilation Using Three First-Principles Thermospheric Models as a Tool for 72-hour Density and Satellite Drag Forecasts

    Science.gov (United States)

    Hunton, D.; Pilinski, M.; Crowley, G.; Azeem, I.; Fuller-Rowell, T. J.; Matsuo, T.; Fedrizzi, M.; Solomon, S. C.; Qian, L.; Thayer, J. P.; Codrescu, M.

    2014-12-01

Much as aircraft are affected by the prevailing winds and weather conditions in which they fly, satellites are affected by variability in the density and motion of the near-Earth space environment. Drastic changes in the neutral density of the thermosphere, caused by geomagnetic storms or other phenomena, result in perturbations of satellite motions through drag on the satellite surfaces. This can lead to difficulty locating important satellites, temporary loss of satellite tracking, and errors when predicting collisions in space. As the population of satellites in Earth orbit grows, higher space-weather prediction accuracy is required for critical missions, such as accurate catalog maintenance, collision avoidance for manned and unmanned space flight, reentry prediction, satellite lifetime prediction, defining on-board fuel requirements, and satellite attitude dynamics. We describe ongoing work to build a comprehensive nowcast and forecast system for neutral density, winds, temperature, composition, and satellite drag. This modeling tool will be called the Atmospheric Density Assimilation Model (ADAM). It will be based on three state-of-the-art coupled models of the thermosphere-ionosphere running in real time, using assimilative techniques to produce a thermospheric nowcast. It will also produce, in real time, 72-hour predictions of the global thermosphere-ionosphere system using the nowcast as the initial condition. We will review the requirements for the ADAM system, the underlying full-physics models, the plethora of input options available to drive the models, and a feasibility study showing the performance of first-principles models as it pertains to satellite-drag operational needs, and we will review the challenges in designing an assimilative space-weather prediction model. The performance of the ensemble assimilative model is expected to exceed that of current empirical and assimilative density models.

  13. A stepwise model to predict monthly streamflow

    Science.gov (United States)

    Mahmood Al-Juboori, Anas; Guven, Aytac

    2016-12-01

In this study, a stepwise model empowered with genetic programming is developed to predict the monthly flows of the Hurman River in Turkey and the Diyalah and Lesser Zab Rivers in Iraq. The model divides the monthly flow data into twelve intervals representing the months of the year. The flow of a month t is considered a function of the antecedent month's flow (t - 1) and is predicted by multiplying the antecedent monthly flow by a constant value called K. The optimum value of K is obtained by a stepwise procedure which employs Gene Expression Programming (GEP) and Nonlinear Generalized Reduced Gradient Optimization (NGRGO) as an alternative to the traditional nonlinear regression technique. The coefficient of determination and the root mean squared error are used to evaluate the performance of the proposed models. The results of the proposed model are compared with the conventional Markovian and Auto Regressive Integrated Moving Average (ARIMA) models based on observed monthly flow data. The comparison, based on five different statistical measures, shows that the proposed stepwise model performed better than the Markovian and ARIMA models. The R2 values of the proposed model range between 0.81 and 0.92 for the three rivers in this study.
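The month-wise scaling step described above can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' GEP/NGRGO procedure: for one calendar month it estimates K by ordinary least squares and scores the fit with the coefficient of determination; the flow values are hypothetical.

```python
def fit_k(prev_flows, curr_flows):
    """Least-squares estimate of K in Q_t = K * Q_(t-1) for one calendar month."""
    num = sum(p * c for p, c in zip(prev_flows, curr_flows))
    den = sum(p * p for p in prev_flows)
    return num / den

def r_squared(observed, predicted):
    """Coefficient of determination between observed and predicted flows."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Hypothetical January flows paired with the preceding December flows (m^3/s)
december = [10.0, 12.0, 8.0, 11.0]
january = [8.2, 9.4, 6.5, 8.9]
k = fit_k(december, january)
predictions = [k * q for q in december]
```

In the full model this estimation would be repeated for each of the twelve month-to-month transitions, giving twelve K values.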

  14. Using Tree Detection Algorithms to Predict Stand Sapwood Area, Basal Area and Stocking Density in Eucalyptus regnans Forest

    Directory of Open Access Journals (Sweden)

    Dominik Jaskierniak

    2015-06-01

Full Text Available Managers of forested water supply catchments require efficient and accurate methods to quantify changes in forest water use due to changes in forest structure and density after disturbance. Using Light Detection and Ranging (LiDAR) data with as few as 0.9 pulses m−2, we applied a local maximum filtering (LMF) method and normalised cut (NCut) algorithm to predict stocking density (SDen) of a 69-year-old Eucalyptus regnans forest comprising 251 plots with resolution of the order of 0.04 ha. Using the NCut method we predicted basal area per hectare (BAHa) and sapwood area per hectare (SAHa), a well-established proxy for transpiration. Sapwood area was also indirectly estimated with allometric relationships dependent on LiDAR-derived SDen and BAHa using a computationally efficient procedure. The individual tree detection (ITD) rates for the LMF and NCut methods respectively had 72% and 68% of stems correctly identified, 25% and 20% of stems missed, and 2% and 12% of stems over-segmented. The significantly higher computational requirement of the NCut algorithm makes the LMF method more suitable for predicting SDen across large forested areas. Using NCut-derived ITD segments, observed versus predicted stand BAHa had R2 ranging from 0.70 to 0.98 across six catchments, whereas a generalised parsimonious model applied to all sites used the portion of hits greater than 37 m in height (PH37) to explain 68% of BAHa. For extrapolating one-ha resolution SAHa estimates across large forested catchments, we found that directly relating SAHa to NCut-derived LiDAR indices (R2 = 0.56) was slightly more accurate but computationally more demanding than indirect estimates of SAHa using allometric relationships consisting of BAHa (R2 = 0.50) or a sapwood perimeter index, defined as (BAHa·SDen)1/2 (R2 = 0.48).
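The local maximum filtering idea can be illustrated with a deliberately simplified sketch: flag every grid cell of a canopy height model that is the strict maximum of its 3x3 neighbourhood and taller than a minimum height. Real LMF implementations use variable window sizes tied to tree allometry; the grid and heights below are hypothetical.

```python
def local_maxima(chm, min_height=2.0):
    """Flag grid cells that are strict maxima of their 3x3 neighbourhood
    and taller than min_height; each flagged cell is a candidate treetop."""
    rows, cols = len(chm), len(chm[0])
    tops = []
    for i in range(rows):
        for j in range(cols):
            h = chm[i][j]
            if h < min_height:
                continue
            neighbours = [chm[r][c]
                          for r in range(max(0, i - 1), min(rows, i + 2))
                          for c in range(max(0, j - 1), min(cols, j + 2))
                          if (r, c) != (i, j)]
            if all(h > n for n in neighbours):
                tops.append((i, j))
    return tops

# Hypothetical 4 x 5 canopy height model (heights in metres)
chm = [[0.0, 0.0, 0.0, 0.0, 0.0],
       [0.0, 31.5, 0.0, 0.0, 0.0],
       [0.0, 0.0, 0.0, 38.2, 0.0],
       [0.0, 0.0, 0.0, 0.0, 0.0]]
treetops = local_maxima(chm)  # two candidate stems
```

Dividing the count of detected treetops by the mapped area then yields the stocking density estimate.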

  15. Using Prediction Markets to Generate Probability Density Functions for Climate Change Risk Assessment

    Science.gov (United States)

    Boslough, M.

    2011-12-01

Climate-related uncertainty is traditionally presented as an error bar, but it is becoming increasingly common to express it in terms of a probability density function (PDF). PDFs are a necessary component of probabilistic risk assessments, for which simple "best estimate" values are insufficient. Many groups have generated PDFs for climate sensitivity using a variety of methods. These PDFs are broadly consistent, but vary significantly in their details. One axiom of the verification and validation community is, "codes don't make predictions, people make predictions." This is a statement of the fact that subject domain experts generate results using assumptions within a range of epistemic uncertainty and interpret them according to their expert opinion. Different experts with different methods will arrive at different PDFs. For effective decision support, a single consensus PDF would be useful. We suggest that market methods can be used to aggregate an ensemble of opinions into a single distribution that expresses the consensus. Prediction markets have been shown to be highly successful at forecasting the outcome of events ranging from elections to box office returns. In prediction markets, traders can take a position on whether some future event will or will not occur. These positions are expressed as contracts that are traded in a double-auction market that aggregates price, which can be interpreted as a consensus probability that the event will take place. Since climate sensitivity cannot directly be measured, it cannot be predicted. However, the changes in global mean surface temperature are a direct consequence of climate sensitivity, changes in forcing, and internal variability. Viable prediction markets require an undisputed event outcome on a specific date. Climate-related markets exist on Intrade.com, an online trading exchange. One such contract is titled "Global Temperature Anomaly for Dec 2011 to be greater than 0.65 Degrees C." Settlement is based
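One way such contracts could be turned into a PDF is to run a ladder of binary "outcome > t" contracts at several thresholds and read each price as the consensus survival probability S(t); differencing then gives the probability mass in each interval. This is a minimal sketch of that mechanism, with hypothetical thresholds and prices, not a description of any actual exchange's product.

```python
def pdf_from_threshold_prices(thresholds, prices):
    """Convert prices of binary 'outcome > t' contracts into bin probabilities.
    prices[i] is read as the consensus P(outcome > thresholds[i]); differencing
    the survival function gives the mass in each interval between thresholds."""
    survival = [1.0] + list(prices) + [0.0]
    return [a - b for a, b in zip(survival, survival[1:])]

# Hypothetical contracts on a December temperature anomaly (degrees C)
thresholds = [0.45, 0.65, 0.85]
prices = [0.90, 0.55, 0.15]  # consensus P(anomaly > threshold)
masses = pdf_from_threshold_prices(thresholds, prices)
# masses[k] is the probability assigned to the k-th interval,
# e.g. masses[1] = P(0.45 < anomaly <= 0.65)
```

The finer the threshold ladder, the closer the binned masses come to a continuous consensus PDF.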

  16. A joint calibration model for combining predictive distributions

    Directory of Open Access Journals (Sweden)

    Patrizia Agati

    2013-05-01

Full Text Available In many research fields, as for example in probabilistic weather forecasting, valuable predictive information about a future random phenomenon may come from several, possibly heterogeneous, sources. Forecast combining methods have been developed over the years in order to deal with ensembles of sources: the aim is to combine several predictions in such a way as to improve forecast accuracy and reduce the risk of bad forecasts. In this context, we propose the use of a Bayesian approach to information combining, which consists in treating the predictive probability density functions (pdfs) from the individual ensemble members as data in a Bayesian updating problem. The likelihood function is shown to be proportional to the product of the pdfs, adjusted by a joint “calibration function” describing the predicting skill of the sources (Morris, 1977). In this paper, after rephrasing Morris’ algorithm in a predictive context, we propose to model the calibration function in terms of bias, scale and correlation and to estimate its parameters according to the least squares criterion. The performance of our method is investigated and compared with that of Bayesian Model Averaging (Raftery, 2005) on simulated data.
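The simplest special case of product-of-pdfs combining is instructive: for independent, unbiased Gaussian forecasts under a flat prior, multiplying the pdfs means precisions add and the combined mean is the precision-weighted average. The sketch below shows only this special case; the paper's calibration function additionally models bias, scale and correlation, which are omitted here.

```python
def combine_gaussian_forecasts(means, variances):
    """Combine independent, unbiased Gaussian predictive pdfs by taking their
    product (a flat-prior Bayesian update): precisions add, and the combined
    mean is the precision-weighted average of the individual means."""
    precisions = [1.0 / v for v in variances]
    combined_var = 1.0 / sum(precisions)
    combined_mean = combined_var * sum(m * p for m, p in zip(means, precisions))
    return combined_mean, combined_var

# Two hypothetical ensemble members forecasting the same quantity
mean, var = combine_gaussian_forecasts([0.0, 2.0], [1.0, 1.0])
```

Note that the combined variance (0.5) is smaller than either member's variance (1.0): agreeing sources sharpen the consensus pdf.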

  17. Estimating large carnivore populations at global scale based on spatial predictions of density and distribution – Application to the jaguar (Panthera onca)

    Science.gov (United States)

    Robinson, Hugh S.; Abarca, Maria; Zeller, Katherine A.; Velasquez, Grisel; Paemelaere, Evi A. D.; Goldberg, Joshua F.; Payan, Esteban; Hoogesteijn, Rafael; Boede, Ernesto O.; Schmidt, Krzysztof; Lampo, Margarita; Viloria, Ángel L.; Carreño, Rafael; Robinson, Nathaniel; Lukacs, Paul M.; Nowak, J. Joshua; Salom-Pérez, Roberto; Castañeda, Franklin; Boron, Valeria; Quigley, Howard

    2018-01-01

Broad-scale population estimates of declining species are desired for conservation efforts. However, for many secretive species, including large carnivores, such estimates are often difficult. Based on published density estimates obtained through camera trapping, presence/absence data, and globally available predictive variables derived from satellite imagery, we modelled density and occurrence of a large carnivore, the jaguar, across the species’ entire range. We then combined these models in a hierarchical framework to estimate the total population. Our models indicate that potential jaguar density is best predicted by measures of primary productivity, with the highest densities in the most productive tropical habitats and a clear declining gradient with distance from the equator. Jaguar distribution, in contrast, is determined by the combined effects of human impacts and environmental factors: probability of jaguar occurrence increased with forest cover, mean temperature, and annual precipitation and declined with increases in human footprint index and human density. Probability of occurrence was also significantly higher for protected areas than outside of them. We estimated the world’s jaguar population at 173,000 (95% CI: 138,000–208,000) individuals, mostly concentrated in the Amazon Basin; elsewhere, populations tend to be small and fragmented. The high number of jaguars results from the large total area still occupied (almost 9 million km2) and low human densities […] conservation actions. PMID: 29579129
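In its simplest form, the hierarchical combination described above reduces to summing, over grid cells, predicted density times the probability that the cell is occupied. The sketch below illustrates only that expected-abundance arithmetic with hypothetical cells; the actual study also propagates uncertainty through the hierarchy.

```python
def population_estimate(cell_area_km2, densities, occupancy):
    """Expected abundance: sum over grid cells of
    (density per km^2) * (probability the cell is occupied) * (cell area)."""
    return sum(cell_area_km2 * d * p for d, p in zip(densities, occupancy))

# Three hypothetical 100-km^2 cells: density (jaguars/km^2) and P(occupied)
densities = [0.03, 0.01, 0.002]
occupancy = [0.9, 0.6, 0.1]
total = population_estimate(100.0, densities, occupancy)
```

Repeating the sum over samples from the fitted density and occurrence models would yield the credible interval reported in the abstract.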

  18. Thermospheric mass density model error variance as a function of time scale

    Science.gov (United States)

    Emmert, J. T.; Sutton, E. K.

    2017-12-01

    In the increasingly crowded low-Earth orbit environment, accurate estimation of orbit prediction uncertainties is essential for collision avoidance. Poor characterization of such uncertainty can result in unnecessary and costly avoidance maneuvers (false positives) or disregard of a collision risk (false negatives). Atmospheric drag is a major source of orbit prediction uncertainty, and is particularly challenging to account for because it exerts a cumulative influence on orbital trajectories and is therefore not amenable to representation by a single uncertainty parameter. To address this challenge, we examine the variance of measured accelerometer-derived and orbit-derived mass densities with respect to predictions by thermospheric empirical models, using the data-minus-model variance as a proxy for model uncertainty. Our analysis focuses mainly on the power spectrum of the residuals, and we construct an empirical model of the variance as a function of time scale (from 1 hour to 10 years), altitude, and solar activity. We find that the power spectral density approximately follows a power-law process but with an enhancement near the 27-day solar rotation period. The residual variance increases monotonically with altitude between 250 and 550 km. There are two components to the variance dependence on solar activity: one component is 180 degrees out of phase (largest variance at solar minimum), and the other component lags 2 years behind solar maximum (largest variance in the descending phase of the solar cycle).
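Fitting a power-law process to a residual spectrum, as described above, amounts to an ordinary least-squares line fit in log-log space: log S = log A + alpha * log f. This is a minimal sketch with exact synthetic data, not the authors' spectral pipeline (which would first estimate the PSD from residual time series).

```python
import math

def fit_power_law(freqs, psd):
    """Fit S(f) = A * f**alpha by least squares on log S versus log f."""
    xs = [math.log(f) for f in freqs]
    ys = [math.log(s) for s in psd]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    alpha = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    amplitude = math.exp(my - alpha * mx)
    return amplitude, alpha

# Synthetic spectrum that follows S(f) = 2 * f**-1.5 exactly
freqs = [1.0, 2.0, 4.0, 8.0, 16.0]
psd = [2.0 * f ** -1.5 for f in freqs]
amplitude, alpha = fit_power_law(freqs, psd)
```

A localized enhancement such as the 27-day solar-rotation peak would show up as a systematic residual above this fitted line.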

  19. Intratumor microvessel density in biopsy specimens predicts local response of hypopharyngeal cancer to radiotherapy

    International Nuclear Information System (INIS)

    Zhang, Shi-Chuan; Miyamoto, Shin-ichi; Hasebe, Takahiro; Ishii, Genichiro; Ochiai, Atsushi; Kamijo, Tomoyuki; Hayashi, Ryuichi; Fukayama, Masashi

    2003-01-01

    The aim of this retrospective study was to identify reliable predictive factors for local control of hypopharyngeal cancer (HPC) treated by radiotherapy. A cohort of 38 patients with HPC treated by radical radiotherapy at the National Cancer Center Hospital East between 1992 and 1999 were selected as subjects for the present study. Paraffin-embedded pre-therapy biopsy specimens from these patients were used for immunostaining to evaluate the relationships between local tumor control and expression of the following previously reported predictive factors for local recurrence of head and neck cancer treated by radiotherapy: Ki-67, Cyclin D1, CDC25B, VEGF, p53, Bax and Bcl-2. The predictive power of microvessel density (MVD) in biopsy specimens and of clinicopathologic factors (age, gender and clinical tumor-node-metastasis stage) was also statistically analyzed. Twenty-five patients developed tumor recurrence at the primary site. Univariate analysis indicated better local control of tumors with high microvessel density [MVD≥median (39 vessels/field)] than with low MVD (< median, P=0.042). There were no significant associations between local control and expression of Ki-67 (P=0.467), Bcl-2 (P=0.127), Bax (P=0.242), p53 (P=0.262), Cyclin D1 (P=0.245), CDC25B (P=0.511) or VEGF (P=0.496). Clinicopathologic factors were also demonstrated to have no significant influence on local control (age, P=0.974; gender, P=0.372; T factor, P=0.602; N factor, P=0.530; Stage, P=0.499). MVD in biopsy specimens was closely correlated with local control of HPC treated by radiotherapy. (author)

  20. Electrostatic ion thrusters - towards predictive modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kalentev, O.; Matyash, K.; Duras, J.; Lueskow, K.F.; Schneider, R. [Ernst-Moritz-Arndt Universitaet Greifswald, D-17489 (Germany); Koch, N. [Technische Hochschule Nuernberg Georg Simon Ohm, Kesslerplatz 12, D-90489 Nuernberg (Germany); Schirra, M. [Thales Electronic Systems GmbH, Soeflinger Strasse 100, D-89077 Ulm (Germany)

    2014-02-15

The development of electrostatic ion thrusters so far has mainly been based on empirical and qualitative know-how, and on evolutionary iteration steps. This resulted in considerable effort regarding prototype design, construction and testing and therefore in significant development and qualification costs and high time demands. For future developments it is anticipated to implement simulation tools which allow for quantitative prediction of ion thruster performance, long-term behavior and spacecraft interaction prior to hardware design and construction. Based on integrated numerical models combining self-consistent kinetic plasma models with plasma-wall interaction modules, a new quality in the description of electrostatic thrusters can be reached. These open the perspective for predictive modeling in this field. This paper reviews the application of a set of predictive numerical modeling tools on an ion thruster model of the HEMP-T (High Efficiency Multi-stage Plasma Thruster) type patented by Thales Electron Devices GmbH. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  1. An Intelligent Model for Stock Market Prediction

    Directory of Open Access Journals (Sweden)

Ibrahim M. Hamed

    2012-08-01

Full Text Available This paper presents an intelligent model for stock market signal prediction using Multi-Layer Perceptron (MLP) Artificial Neural Networks (ANNs). A blind source separation technique from signal processing is integrated with the learning phase of the constructed baseline MLP ANN to overcome the problems of prediction accuracy and lack of generalization. Kullback-Leibler Divergence (KLD) is used as a learning algorithm because it converges fast and provides generalization in the learning mechanism. Both the accuracy and the efficiency of the proposed model were confirmed through the Microsoft stock, from the Wall Street market, and various data sets from different sectors of the Egyptian stock market. In addition, sensitivity analysis was conducted on the various parameters of the model to ensure coverage of the generalization issue. Finally, statistical significance was examined using an ANOVA test.

  2. Predictive Models, How good are they?

    DEFF Research Database (Denmark)

    Kasch, Helge

The WAD grading system has been used for more than 20 years by now. It has shown long-term viability, but with strengths and limitations. New bio-psychosocial assessment of the acutely whiplash-injured subject may provide better prediction of long-term disability and pain. Furthermore, the emerging […] It is important to obtain prospective identification of the relevant risk factors; underreported disability could, if we were able to expose these hidden “risk factors” during our consultations, provide us with better predictive models. New data from large clinical studies will present exciting new genetic risk markers…

  3. NONLINEAR MODEL PREDICTIVE CONTROL OF CHEMICAL PROCESSES

    Directory of Open Access Journals (Sweden)

    SILVA R. G.

    1999-01-01

Full Text Available A new algorithm for model predictive control is presented. The algorithm utilizes a simultaneous solution and optimization strategy to solve the model's differential equations. The equations are discretized by equidistant collocation and, along with the algebraic model equations, are included as constraints in a nonlinear programming (NLP) problem. This algorithm is compared with the algorithm that uses orthogonal collocation on finite elements. The equidistant collocation algorithm results in simpler equations, providing a decrease in computation time for the control moves. Simulation results are presented and show a satisfactory performance of this algorithm.

  4. A statistical model for predicting muscle performance

    Science.gov (United States)

    Byerly, Diane Leslie De Caix

    The objective of these studies was to develop a capability for predicting muscle performance and fatigue to be utilized for both space- and ground-based applications. To develop this predictive model, healthy test subjects performed a defined, repetitive dynamic exercise to failure using a Lordex spinal machine. Throughout the exercise, surface electromyography (SEMG) data were collected from the erector spinae using a Mega Electronics ME3000 muscle tester and surface electrodes placed on both sides of the back muscle. These data were analyzed using a 5th order Autoregressive (AR) model and statistical regression analysis. It was determined that an AR derived parameter, the mean average magnitude of AR poles, significantly correlated with the maximum number of repetitions (designated Rmax) that a test subject was able to perform. Using the mean average magnitude of AR poles, a test subject's performance to failure could be predicted as early as the sixth repetition of the exercise. This predictive model has the potential to provide a basis for improving post-space flight recovery, monitoring muscle atrophy in astronauts and assessing the effectiveness of countermeasures, monitoring astronaut performance and fatigue during Extravehicular Activity (EVA) operations, providing pre-flight assessment of the ability of an EVA crewmember to perform a given task, improving the design of training protocols and simulations for strenuous International Space Station assembly EVA, and enabling EVA work task sequences to be planned enhancing astronaut performance and safety. Potential ground-based, medical applications of the predictive model include monitoring muscle deterioration and performance resulting from illness, establishing safety guidelines in the industry for repetitive tasks, monitoring the stages of rehabilitation for muscle-related injuries sustained in sports and accidents, and enhancing athletic performance through improved training protocols while reducing
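The fatigue indicator described above, the mean magnitude of the AR poles, is the mean modulus of the roots of the fitted model's characteristic polynomial. The sketch below computes that quantity from already-fitted AR coefficients (the coefficients are hypothetical, and the AR fitting step itself, e.g. by Yule-Walker equations, is omitted).

```python
import numpy as np

def mean_pole_magnitude(ar_coeffs):
    """For an AR(p) model x_t = a1*x_(t-1) + ... + ap*x_(t-p) + e_t,
    the poles are the roots of z**p - a1*z**(p-1) - ... - ap.
    Return the mean modulus of those roots."""
    poly = np.concatenate(([1.0], -np.asarray(ar_coeffs, dtype=float)))
    poles = np.roots(poly)
    return float(np.mean(np.abs(poles)))

# Hypothetical AR(2) coefficients from one SEMG window
indicator = mean_pole_magnitude([0.0, -0.25])  # poles at +/- 0.5i
```

Tracking this indicator window by window over the exercise is what allows performance to failure to be projected from early repetitions.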

  5. Calibration models for density borehole logging - construction report

    International Nuclear Information System (INIS)

    Engelmann, R.E.; Lewis, R.E.; Stromswold, D.C.

    1995-10-01

Two machined blocks of magnesium and aluminum alloys form the basis for Hanford's density models. The blocks provide known densities of 1.780 ± 0.002 g/cm3 and 2.804 ± 0.002 g/cm3 for calibrating borehole logging tools that measure density based on gamma-ray scattering from a source in the tool. Each block is approximately 33 x 58 x 91 cm (13 x 23 x 36 in.) with cylindrical grooves cut into the sides of the blocks to hold steel casings of inner diameter 15 cm (6 in.) and 20 cm (8 in.). Spacers that can be inserted between the blocks and casings can create air gaps of thickness 0.64, 1.3, 1.9, and 2.5 cm (0.25, 0.5, 0.75 and 1.0 in.), simulating air gaps that can occur in actual wells from hole enlargements behind the casing.
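Two blocks of known density are exactly enough to fix a two-parameter tool response. As a simplifying assumption (not stated in the record), the sketch below takes the count rate to fall exponentially with density, C = A·exp(−b·ρ), solves for A and b from the two calibration points, and inverts the response; the count rates are hypothetical.

```python
import math

def calibrate(rho1, c1, rho2, c2):
    """Solve C = A * exp(-b * rho) for A and b from two calibration points."""
    b = math.log(c1 / c2) / (rho2 - rho1)
    a = c1 * math.exp(b * rho1)
    return a, b

def density_from_count(count, a, b):
    """Invert the calibrated response to read density from a count rate."""
    return math.log(a / count) / b

# Known block densities (g/cm3) with hypothetical tool count rates
mg_rho, al_rho = 1.780, 2.804
mg_counts, al_counts = 5200.0, 3100.0
a, b = calibrate(mg_rho, mg_counts, al_rho, al_counts)
```

Any measured count rate between the two calibration points then maps to a density between 1.780 and 2.804 g/cm3; the air-gap spacers quantify how departures from this idealized geometry bias the reading.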

6. Neutron density optimal control of A-1 reactor analogue model

    International Nuclear Information System (INIS)

    Grof, V.

    1975-01-01

    Two applications are described of the optimal control of a reactor analog model. Both cases consider the control of neutron density. Control loops containing the on-line controlled process, the reactor of the first Czechoslovak nuclear power plant A-1, are simulated on an analog computer. Two versions of the optimal control algorithm are derived using modern control theory (Pontryagin's maximum principle, the calculus of variations, and Kalman's estimation theory), the minimum time performance index, and the quadratic performance index. The results of the optimal control analysis are compared with the A-1 reactor conventional control. (author)

  7. Spin-density functional for exchange anisotropic Heisenberg model

    International Nuclear Information System (INIS)

    Prata, G.N.; Penteado, P.H.; Souza, F.C.; Libero, Valter L.

    2009-01-01

    Ground-state energies for antiferromagnetic Heisenberg models with exchange anisotropy are estimated by means of a local-spin approximation made in the context of the density functional theory. Correlation energy is obtained using the non-linear spin-wave theory for homogeneous systems from which the spin functional is built. Although applicable to chains of any size, the results are shown for small number of sites, to exhibit finite-size effects and allow comparison with exact-numerical data from direct diagonalization of small chains.

  8. Early experiences building a software quality prediction model

    Science.gov (United States)

    Agresti, W. W.; Evanco, W. M.; Smith, M. C.

    1990-01-01

    Early experiences building a software quality prediction model are discussed. The overall research objective is to establish a capability to project a software system's quality from an analysis of its design. The technical approach is to build multivariate models for estimating reliability and maintainability. Data from 21 Ada subsystems were analyzed to test hypotheses about various design structures leading to failure-prone or unmaintainable systems. Current design variables highlight the interconnectivity and visibility of compilation units. Other model variables provide for the effects of reusability and software changes. Reported results are preliminary because additional project data is being obtained and new hypotheses are being developed and tested. Current multivariate regression models are encouraging, explaining 60 to 80 percent of the variation in error density of the subsystems.

  9. Prediction models : the right tool for the right problem

    NARCIS (Netherlands)

    Kappen, Teus H.; Peelen, Linda M.

    2016-01-01

    PURPOSE OF REVIEW: Perioperative prediction models can help to improve personalized patient care by providing individual risk predictions to both patients and providers. However, the scientific literature on prediction model development and validation can be quite technical and challenging to

  10. Neuro-fuzzy modeling in bankruptcy prediction

    Directory of Open Access Journals (Sweden)

    Vlachos D.

    2003-01-01

Full Text Available For the past 30 years the problem of bankruptcy prediction has been thoroughly studied. From the paper of Altman in 1968 to the papers of the '90s, the progress in prediction accuracy was not satisfactory. This paper investigates an alternative modeling of the system (firm), combining neural networks and fuzzy controllers, i.e. using neuro-fuzzy models. Classical modeling is based on mathematical models that describe the behavior of the firm under consideration. The main idea of fuzzy control, on the other hand, is to build a model of a human control expert who is capable of controlling the process without thinking in terms of a mathematical model. This control expert specifies his control action in the form of linguistic rules. These control rules are translated into the framework of fuzzy set theory, providing a calculus which can simulate the behavior of the control expert and enhance its performance. The accuracy of the model is studied using datasets from previous research papers.

  11. Control-oriented modeling of the plasma particle density in tokamaks and application to real-time density profile reconstruction

    NARCIS (Netherlands)

    Blanken, T.C.; Felici, F.; Rapson, C.J.; de Baar, M.R.; Heemels, W.P.M.H.

    2018-01-01

    A model-based approach to real-time reconstruction of the particle density profile in tokamak plasmas is presented, based on a dynamic state estimator. Traditionally, the density profile is reconstructed in real-time by solving an ill-conditioned inversion problem using a measurement at a single

  12. A Bayesian antedependence model for whole genome prediction.

    Science.gov (United States)

    Yang, Wenzhao; Tempelman, Robert J

    2012-04-01

Hierarchical mixed effects models have been demonstrated to be powerful for predicting the genomic merit of livestock and plants on the basis of high-density single-nucleotide polymorphism (SNP) marker panels, and their use is being increasingly advocated for genomic predictions in human health. Two particularly popular approaches, labeled BayesA and BayesB, are based on specifying all SNP-associated effects to be independent of each other. BayesB extends BayesA by allowing a large proportion of SNP markers to be associated with null effects. We further extend these two models to specify SNP effects as being spatially correlated due to the chromosomally proximal effects of causal variants. These two models, which we respectively dub ante-BayesA and ante-BayesB, are based on a first-order nonstationary antedependence specification between SNP effects. In a simulation study involving 20 replicate data sets, each analyzed at six different SNP marker densities with average LD levels ranging from r2 = 0.15 to 0.31, the antedependence methods had significantly higher accuracies than their classical counterparts, with differences exceeding 3%. A cross-validation study was also conducted on the heterogeneous stock mice data resource (http://mus.well.ox.ac.uk/mouse/HS/) using 6-week body weights as the phenotype. The antedependence methods increased cross-validation prediction accuracies by up to 3.6% compared to their classical counterparts. Across these benchmark data sets, the antedependence methods were demonstrated to be more accurate than their classical counterparts for genomic predictions, even for individuals several generations beyond the training data.
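The first-order antedependence structure can be illustrated by generating SNP effects in which each effect borrows from its chromosomal neighbour, u_j = t·u_(j-1) + e_j. This is only an illustration of the correlation structure, with hypothetical t and sd, not the authors' Bayesian estimation machinery (which infers locus-specific t_j values from data).

```python
import random

def antedependent_effects(n_snps, t, sd, seed=42):
    """First-order antedependence: u_j = t * u_(j-1) + e_j, e_j ~ N(0, sd^2),
    so effects of chromosomally adjacent SNPs are correlated when t != 0,
    and the construction is nonstationary (variance grows along the chain)."""
    rng = random.Random(seed)
    effects = [rng.gauss(0.0, sd)]
    for _ in range(n_snps - 1):
        effects.append(t * effects[-1] + rng.gauss(0.0, sd))
    return effects

# Hypothetical chain of 500 SNP effects with strong local correlation
effects = antedependent_effects(500, t=0.9, sd=0.1)
```

Setting t = 0 recovers the independent-effects assumption of classical BayesA/BayesB.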

  13. Models of asthma: density-equalizing mapping and output benchmarking

    Directory of Open Access Journals (Sweden)

    Fischer Tanja C

    2008-02-01

Full Text Available Abstract Despite the large amount of experimental studies already conducted on bronchial asthma, further insights into the molecular basics of the disease are required to establish new therapeutic approaches. As a basis for this research, different animal models of asthma have been developed in the past years. However, precise bibliometric data on the use of the different models do not exist so far. Therefore the present study was conducted to establish a database of the existing experimental approaches. Density-equalizing algorithms were used and data was retrieved from a Thomson Institute for Scientific Information database. During the period from 1900 to 2006, a total of 3489 filed items were connected to animal models of asthma, the first being published in the year 1968. The studies were published by 52 countries, with the US, Japan and the UK being the most productive suppliers, participating in 55.8% of all published items. Analyzing the average citations per item as an indicator of research quality, Switzerland ranked first (30.54/item) and New Zealand ranked second among countries with more than 10 published studies. The 10 most productive journals included 4 with a main focus on allergy and immunology and 4 with a main focus on the respiratory system. Two journals focussed on pharmacology or pharmacy. In all assigned subject categories examined for a relation to animal models of asthma, immunology ranked first. Assessing numbers of published items in relation to animal species, it was found that mice were the preferred species, followed by guinea pigs. In summary, it can be concluded from density-equalizing calculations that the use of animal models of asthma is restricted to a relatively small number of countries. There are also differences in the use of species. These differences are based on variations in the research focus as assessed by subject category analysis.

  14. Predictive Models for Carcinogenicity and Mutagenicity ...

    Science.gov (United States)

    Mutagenicity and carcinogenicity are endpoints of major environmental and regulatory concern. These endpoints are also important targets for the development of alternative methods for screening and prediction, due to the large number of chemicals of potential concern and the tremendous cost (in time, money, and animals) of rodent carcinogenicity bioassays. Both mutagenicity and carcinogenicity involve complex cellular processes that are only partially understood. Advances in technologies and the generation of new data will permit a much deeper understanding. In silico methods for predicting mutagenicity and rodent carcinogenicity based on chemical structural features, along with current mutagenicity and carcinogenicity data sets, have performed well for local prediction (i.e., within specific chemical classes), but are less successful for global prediction (i.e., for a broad range of chemicals). The predictivity of in silico methods can be improved by improving the quality of the database and endpoints used for modelling. In particular, in vitro assays for clastogenicity need to be improved to reduce false positives (relative to rodent carcinogenicity) and to detect compounds that do not interact directly with DNA or have epigenetic activities. New assays emerging to complement or replace some of the standard assays include VitotoxTM, GreenScreenGC, and RadarScreen. The needs of industry and regulators to assess thousands of compounds necessitate the development of high-t

  15. High-Density Signal Interface Electromagnetic Radiation Prediction for Electromagnetic Compatibility Evaluation.

    Energy Technology Data Exchange (ETDEWEB)

    Halligan, Matthew

    2017-11-01

    Radiated power calculation approaches for practical scenarios of incomplete high-density interface characterization information and incomplete incident power information are presented. The suggested approaches build upon a method that characterizes power losses through the definition of power loss constant matrices. Potential radiated power estimates include using total power loss information, partial radiated power loss information, worst-case analysis, and statistical bounding analysis. A method is also proposed to calculate radiated power when incident power information is not fully known for non-periodic signals at the interface. Incident data signals are modeled by a two-state Markov chain from which bit-state probabilities are derived. The total spectrum for windowed signals is postulated as the superposition of spectra from individual pulses in a data sequence. Statistical bounding methods are proposed as a basis for the radiated power calculation because of the complexity of computing a radiated power probability density function directly.
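    The two-state Markov chain signal model mentioned in the abstract can be sketched as follows; the transition probabilities here are illustrative stand-ins, not values from the report:

```python
import random

def stationary_probs(p01, p10):
    """Stationary bit-state probabilities of a two-state Markov chain.

    p01 = P(next bit is 1 | current bit is 0),
    p10 = P(next bit is 0 | current bit is 1).
    """
    p1 = p01 / (p01 + p10)
    return 1.0 - p1, p1

def simulate_bits(p01, p10, n, seed=1):
    """Generate a bit stream from the chain to check the derived probabilities."""
    rng = random.Random(seed)
    bit, bits = 0, []
    for _ in range(n):
        if bit == 0:
            bit = 1 if rng.random() < p01 else 0
        else:
            bit = 0 if rng.random() < p10 else 1
        bits.append(bit)
    return bits

# illustrative transition probabilities (not from the report)
p0, p1 = stationary_probs(0.3, 0.1)
empirical_p1 = sum(simulate_bits(0.3, 0.1, 20000)) / 20000
```

    The derived stationary probability matches the long-run bit frequency, which is what would feed an expected-spectrum calculation for the windowed signal.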

  16. Predicting available water of soil from particle-size distribution and bulk density in an oasis-desert transect in northwestern China

    Science.gov (United States)

    Li, Danfeng; Gao, Guangyao; Shao, Ming'an; Fu, Bojie

    2016-07-01

    A detailed understanding of soil hydraulic properties, particularly the available water content of soil (AW, cm3 cm-3), is required for optimal water management. Direct measurement of soil hydraulic properties is impractical at large scales, but routinely available soil particle-size distribution (PSD) and bulk density data can be used as proxies to develop prediction functions. In this study, we compared the performance of the Arya and Paris (AP) model, the Mohammadi and Vanclooster (MV) model, the Arya and Heitman (AH) model, and the Rosetta program in predicting the soil water characteristic curve (SWCC) at 34 points with experimental SWCC data in an oasis-desert transect (20 × 5 km) in the middle reaches of the Heihe River basin, northwestern China. All three models build on the similarity between the shapes of the PSD and the SWCC. The AP model, MV model, and Rosetta program predicted the SWCC better than the AH model. The AW determined from the SWCCs predicted by the MV model agreed better with the experimental values than those derived from the AP model and Rosetta program. The fine-textured soils were characterized by higher AW values, while the sandy soils had lower AW values. The MV model has the advantages of having a robust physical basis, being independent of database-related parameters, and using subclasses of texture data. These features make it promising for predicting soil water retention at regional scales, serving the application of hydrological models and the optimization of soil water management.
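    As a small illustration of how an AW value is derived once a SWCC is in hand (whichever model predicted it), the sketch below evaluates a van Genuchten retention curve at field capacity and wilting point; the parameter sets are generic textbook values for a loam and a sand, not fitted to the transect soils:

```python
def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Volumetric water content at suction h (cm) from the van Genuchten curve."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

def available_water(theta_r, theta_s, alpha, n, h_fc=330.0, h_pwp=15000.0):
    """AW = content at field capacity minus content at permanent wilting point."""
    return (van_genuchten(h_fc, theta_r, theta_s, alpha, n)
            - van_genuchten(h_pwp, theta_r, theta_s, alpha, n))

# generic textbook parameter sets (loam, sand), not fitted to the transect
aw_loam = available_water(0.078, 0.43, 0.036, 1.56)
aw_sand = available_water(0.045, 0.43, 0.145, 2.68)
```

    Consistent with the abstract, the fine-textured loam retains far more plant-available water than the sand.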

  17. Anopheles atroparvus density modeling using MODIS NDVI in a former malarious area in Portugal.

    Science.gov (United States)

    Lourenço, Pedro M; Sousa, Carla A; Seixas, Júlia; Lopes, Pedro; Novo, Maria T; Almeida, A Paulo G

    2011-12-01

    Malaria is dependent on environmental factors and considered potentially re-emerging in temperate regions. Remote sensing data have been used successfully for monitoring environmental conditions that influence the patterns of such arthropod vector-borne diseases. Anopheles atroparvus density data were collected from 2002 to 2005, on a bimonthly basis, at three sites in a former malarial area in Southern Portugal. The development of the Remote Vector Model (RVM) was based upon two main variables: temperature and the Normalized Difference Vegetation Index (NDVI) from the Moderate Resolution Imaging Spectroradiometer (MODIS) Terra satellite. Temperature influences the mosquito life cycle and affects its intra-annual prevalence, and MODIS NDVI was used as a proxy for suitable habitat conditions. Mosquito data were used for calibration and validation of the model. For areas with high mosquito density, model validation demonstrated a Pearson correlation of 0.68. RVM is a satellite data-based assimilation algorithm that uses temperature fields to predict the intra- and inter-annual densities of this mosquito species using MODIS NDVI. RVM is a relevant tool for vector density estimation, contributing to the risk assessment of transmission of mosquito-borne diseases, and can be part of early warning systems and contingency plans, providing support to the decision-making process of relevant authorities. © 2011 The Society for Vector Ecology.

  18. Validated predictive modelling of the environmental resistome.

    Science.gov (United States)

    Amos, Gregory C A; Gozzard, Emma; Carter, Charlotte E; Mead, Andrew; Bowes, Mike J; Hawkey, Peter M; Zhang, Lihong; Singer, Andrew C; Gaze, William H; Wellington, Elizabeth M H

    2015-06-01

    Multi-drug-resistant bacteria pose a significant threat to public health. The role of the environment in the overall rise in antibiotic-resistant infections and risk to humans is largely unknown. This study aimed to evaluate drivers of antibiotic-resistance levels across the River Thames catchment, model key biotic, spatial and chemical variables and produce predictive models for future risk assessment. Sediment samples from 13 sites across the River Thames basin were taken at four time points across 2011 and 2012. Samples were analysed for class 1 integron prevalence and enumeration of third-generation cephalosporin-resistant bacteria. Class 1 integron prevalence was validated as a molecular marker of antibiotic resistance; levels of resistance showed significant geospatial and temporal variation. The main explanatory variables of resistance levels at each sample site were the number, proximity, size and type of surrounding wastewater-treatment plants. Model 1 revealed treatment plants accounted for 49.5% of the variance in resistance levels. Other contributing factors were extent of different surrounding land cover types (for example, Neutral Grassland), temporal patterns and prior rainfall; when modelling all variables the resulting model (Model 2) could explain 82.9% of variations in resistance levels in the whole catchment. Chemical analyses correlated with key indicators of treatment plant effluent and a model (Model 3) was generated based on water quality parameters (contaminant and macro- and micro-nutrient levels). Model 2 was beta tested on independent sites and explained over 78% of the variation in integron prevalence showing a significant predictive ability. We believe all models in this study are highly useful tools for informing and prioritising mitigation strategies to reduce the environmental resistome.
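    The figures of merit quoted above ("explain 82.9% of variations", "explained over 78% of the variation") are proportions of variance explained, which can be computed as a simple R² statistic; the observed/predicted values below are fabricated toy data, not the study's resistance levels:

```python
def r_squared(observed, predicted):
    """Proportion of variance in `observed` explained by `predicted`."""
    mean = sum(observed) / len(observed)
    ss_tot = sum((y - mean) ** 2 for y in observed)        # total variance
    ss_res = sum((y - p) ** 2 for y, p in zip(observed, predicted))  # residual
    return 1.0 - ss_res / ss_tot

# toy observed vs. model-predicted values (fabricated numbers)
r2 = r_squared([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
```

    Beta testing on independent sites, as done for Model 2, simply means computing this statistic on data withheld from fitting.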

  19. Nonlinear model predictive control theory and algorithms

    CERN Document Server

    Grüne, Lars

    2017-01-01

    This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine—the core of any nonlinear model predictive controller—works. Accompanying software in MATLAB® and C++ (downloadable from extras.springer.com/), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the possibilities and limitations of NMPC. T...
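    The receding-horizon idea at the core of NMPC can be sketched in a few lines: at each step, minimize a finite-horizon cost over candidate control sequences, apply only the first control, and repeat. The plant, cost weights, and control grid below are invented for illustration, and a brute-force search stands in for the nonlinear optimization routine the book discusses:

```python
import itertools

def f(x, u):
    """Hypothetical nonlinear plant: cubic damping plus a control input."""
    return x - 0.1 * x ** 3 + 0.8 * u

def nmpc_step(x, horizon=3, u_grid=(-1.0, -0.5, 0.0, 0.5, 1.0)):
    """One receding-horizon step: brute-force search over control sequences."""
    best_u, best_cost = 0.0, float("inf")
    for seq in itertools.product(u_grid, repeat=horizon):
        xk, cost = x, 0.0
        for u in seq:
            cost += xk ** 2 + 0.1 * u ** 2  # stage cost
            xk = f(xk, u)
        cost += xk ** 2                     # terminal cost
        if cost < best_cost:
            best_cost, best_u = cost, seq[0]  # keep only the first control
    return best_u

x = 2.0
trajectory = [x]
for _ in range(15):
    x = f(x, nmpc_step(x))
    trajectory.append(x)
```

    Even with only a three-step lookahead and no terminal constraint, the closed loop drives the state from 2.0 to a neighbourhood of the origin, the kind of behaviour the book's stability results make rigorous.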

  20. Baryogenesis model predicting antimatter in the Universe

    International Nuclear Information System (INIS)

    Kirilova, D.

    2003-01-01

    Cosmic ray and gamma-ray data do not rule out antimatter domains in the Universe, separated from us by distances larger than 10 Mpc. Hence, it is interesting to analyze the possible generation of vast antimatter structures during early Universe evolution. We discuss a SUSY-condensate baryogenesis model predicting large separated regions of matter and antimatter. The model provides generation of the small locally observed baryon asymmetry for natural initial conditions, and it predicts vast antimatter domains, separated from the matter ones by baryonically empty voids. The characteristic scale of the antimatter regions and their distance from the matter ones are in accordance with observational constraints from cosmic ray, gamma-ray and cosmic microwave background anisotropy data.

  1. Automated structure solution, density modification and model building.

    Science.gov (United States)

    Terwilliger, Thomas C

    2002-11-01

    The approaches that form the basis of automated structure solution in SOLVE and RESOLVE are described. The use of a scoring scheme to convert decision making in macromolecular structure solution to an optimization problem has proven very useful, and in many cases a single clear heavy-atom solution can be obtained and used for phasing. Statistical density modification is well suited to an automated approach to structure solution because the method is relatively insensitive to choices of numbers of cycles and solvent content. The detection of non-crystallographic symmetry (NCS) in heavy-atom sites and checking of potential NCS operations against the electron-density map has proven to be a reliable method for identification of NCS in most cases. Automated model building beginning with an FFT-based search for helices and sheets has been successful for maps with resolutions as low as 3 Å. The entire process can be carried out in a fully automatic fashion in many cases.

  2. Probabilistic predictive modelling of carbon nanocomposites for medical implants design.

    Science.gov (United States)

    Chua, Matthew; Chui, Chee-Kong

    2015-04-01

    Modelling the mechanical properties of carbon nanocomposites based on input variables like the percentage weight of carbon nanotube (CNT) inclusions is important for the design of medical implants and other structural scaffolds. Current constitutive models for the mechanical properties of nanocomposites may not predict well due to differences in conditions, fabrication techniques and inconsistencies in reagent properties used across industries and laboratories. Furthermore, the mechanical properties of the designed products are not deterministic, but exist as a probabilistic range. A predictive model based on a modified probabilistic surface response algorithm is proposed in this paper to address this issue. Tensile testing of three groups of carbon nanocomposite samples with different CNT weight fractions displays scattered stress-strain curves, with the instantaneous stresses assumed to vary according to a normal distribution at a specific strain. From the probability density function of the experimental data, a two-factor Central Composite Design (CCD) experimental matrix based on strain and CNT weight fraction inputs, with their corresponding stress distributions, was established. Monte Carlo simulation was carried out on this design matrix to generate a predictive probabilistic polynomial equation. The equation and method were subsequently validated with further tensile experiments and Finite Element (FE) studies. The method was then demonstrated in the design of an artificial tracheal implant. Our algorithm provides an effective way to accurately model the mechanical properties of implants of various compositions based on experimental data from samples. Copyright © 2015 Elsevier Ltd. All rights reserved.
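    A minimal sketch of the Monte Carlo step: treating the stress at a given strain and CNT weight fraction as normally distributed, sample it repeatedly and read off a predictive interval. The modulus calibration and spread below are hypothetical, not the paper's fitted response surface:

```python
import random
import statistics

def modulus_mean(w):
    """Hypothetical calibration: stiffness (GPa) rising with CNT weight fraction w (%)."""
    return 2.0 + 0.5 * w

def stress_samples(strain, w, sd=0.2, n=5000, seed=7):
    """Monte Carlo draws of stress, treating the modulus as normally distributed."""
    rng = random.Random(seed)
    return [strain * rng.gauss(modulus_mean(w), sd) for _ in range(n)]

samples = sorted(stress_samples(strain=0.02, w=3.0))
lo = samples[int(0.025 * len(samples))]   # 2.5th percentile of predicted stress
hi = samples[int(0.975 * len(samples))]   # 97.5th percentile
mid = statistics.mean(samples)
```

    The interval [lo, hi] is the "probabilistic range" the abstract refers to, here at a single design point rather than across the full CCD matrix.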

  3. Predicting the stability of ternary intermetallics with density functional theory and machine learning

    Science.gov (United States)

    Schmidt, Jonathan; Chen, Liming; Botti, Silvana; Marques, Miguel A. L.

    2018-06-01

    We use a combination of machine learning techniques and high-throughput density-functional theory calculations to explore ternary compounds with the AB2C2 composition. We chose the two most common intermetallic prototypes for this composition, namely, the tI10-CeAl2Ga2 and the tP10-FeMo2B2 structures. Our results suggest that there may be ~10 times more stable compounds in these phases than previously known. These are mostly metallic and non-magnetic. While the use of machine learning reduces the overall calculation cost by around 75%, some limitations of its predictive power remain, in particular for compounds involving the second row of the periodic table or magnetic elements.

  4. The efficiency of the RULES-4 classification learning algorithm in predicting the density of agents

    Directory of Open Access Journals (Sweden)

    Ziad Salem

    2014-12-01

    Full Text Available Learning is the act of acquiring new or modifying existing knowledge, behaviours, skills or preferences. The ability to learn is found in humans, other organisms and some machines. Learning is always based on some kind of observations or data, such as examples, direct experience or instruction. This paper presents a classification algorithm to learn the density of agents in an arena based on the measurements of six proximity sensors of a combined actuator-sensor unit (CASU). Rules are presented that were induced by the learning algorithm, which was trained with data-sets based on the CASU's sensor data streams collected during a number of experiments with “Bristlebots” (agents) in the arena (environment). It was found that a set of rules generated by the learning algorithm is able to predict the number of bristlebots in the arena based on the CASU's sensor readings with satisfactory accuracy.
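    RULES-4 itself is a covering algorithm; the toy learner below is a much simpler stand-in (a single best sensor/threshold rule, 1R-style) that conveys the flavour of inducing density-predicting rules from sensor readings. The sensor data are fabricated:

```python
def one_r(samples):
    """Pick the single sensor/threshold rule with the fewest training errors.

    `samples` is a list of (sensor_readings, density_label) pairs.
    Returns (errors, sensor_index, threshold, label_below, label_above).
    """
    labels = {lbl for _, lbl in samples}
    best = None
    for i in range(len(samples[0][0])):
        values = sorted({s[i] for s, _ in samples})
        for lo, hi in zip(values, values[1:]):
            thr = (lo + hi) / 2  # candidate split between adjacent readings
            for below in labels:
                for above in labels:
                    errs = sum(1 for s, lbl in samples
                               if (below if s[i] < thr else above) != lbl)
                    if best is None or errs < best[0]:
                        best = (errs, i, thr, below, above)
    return best

# fabricated readings from two proximity sensors, labelled by agent density
data = [((0.1, 5.0), "low"), ((0.2, 4.0), "low"),
        ((0.8, 5.5), "high"), ((0.9, 3.5), "high")]
errors, sensor, threshold, label_below, label_above = one_r(data)
```

    On this toy data the learner correctly discovers that only the first sensor separates low-density from high-density arenas.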

  5. Finding Furfural Hydrogenation Catalysts via Predictive Modelling

    OpenAIRE

    Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi

    2010-01-01

    Abstract We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes were synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre t...

  6. Predictive Modeling in Actinide Chemistry and Catalysis

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Ping [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-16

    These are slides from a presentation on predictive modeling in actinide chemistry and catalysis. The following topics are covered in these slides: Structures, bonding, and reactivity (bonding can be quantified by optical probes and theory, and electronic structures and reaction mechanisms of actinide complexes); Magnetic resonance properties (transition metal catalysts with multi-nuclear centers, and NMR/EPR parameters); Moving to more complex systems (surface chemistry of nanomaterials, and interactions of ligands with nanoparticles); Path forward and conclusions.

  7. Tectonic predictions with mantle convection models

    Science.gov (United States)

    Coltice, Nicolas; Shephard, Grace E.

    2018-04-01

    Over the past 15 yr, numerical models of convection in Earth's mantle have made a leap forward: they can now produce self-consistent plate-like behaviour at the surface together with deep mantle circulation. These digital tools provide a new window into the intimate connections between plate tectonics and mantle dynamics, and can therefore be used for tectonic predictions, in principle. This contribution explores this assumption. First, initial conditions at 30, 20, 10 and 0 Ma are generated by driving a convective flow with imposed plate velocities at the surface. We then compute instantaneous mantle flows in response to the guessed temperature fields without imposing any boundary conditions. Plate boundaries self-consistently emerge at correct locations with respect to reconstructions, except for small plates close to subduction zones. As already observed for other types of instantaneous flow calculations, the structure of the top boundary layer and upper-mantle slab is the dominant character that leads to accurate predictions of surface velocities. Perturbations of the rheological parameters have little impact on the resulting surface velocities. We then compute fully dynamic model evolution from 30 and 10 to 0 Ma, without imposing plate boundaries or plate velocities. Contrary to instantaneous calculations, errors in kinematic predictions are substantial, although the plate layout and kinematics in several areas remain consistent with the expectations for the Earth. For these calculations, varying the rheological parameters makes a difference for plate boundary evolution. Also, identified errors in initial conditions contribute to first-order kinematic errors. This experiment shows that the tectonic predictions of dynamic models over 10 My are highly sensitive to uncertainties of rheological parameters and initial temperature field in comparison to instantaneous flow calculations. 
Indeed, the initial conditions and the rheological parameters can be good enough

  8. Breast cancer risks and risk prediction models.

    Science.gov (United States)

    Engel, Christoph; Fischer, Christine

    2015-02-01

    BRCA1/2 mutation carriers have a considerably increased risk of developing breast and ovarian cancer. The personalized clinical management of carriers and other at-risk individuals depends on precise knowledge of the cancer risks. In this report, we give an overview of the present literature on empirical cancer risks, and we describe risk prediction models that are currently used for individual risk assessment in clinical practice. Cancer risks show large variability between studies. Breast cancer risks are at 40-87% for BRCA1 mutation carriers and 18-88% for BRCA2 mutation carriers. For ovarian cancer, the risk estimates are in the range of 22-65% for BRCA1 and 10-35% for BRCA2. The contralateral breast cancer risk is high (10-year risk after first cancer 27% for BRCA1 and 19% for BRCA2). Risk prediction models have been proposed to provide more individualized risk prediction, using additional knowledge on family history, mode of inheritance of major genes, and other genetic and non-genetic risk factors. User-friendly software tools have been developed that serve as a basis for decision-making in family counseling units. In conclusion, further assessment of cancer risks and model validation is needed, ideally based on prospective cohort studies. To obtain such data, clinical management of carriers and other at-risk individuals should always be accompanied by standardized scientific documentation.

  9. Using broad landscape level features to predict redd densities of steelhead trout (Oncorhynchus mykiss) and Chinook Salmon (Oncorhynchus tshawytscha) in the Methow River watershed, Washington

    Science.gov (United States)

    Romine, Jason G.; Perry, Russell W.; Connolly, Patrick J.

    2013-01-01

    We used broad-scale landscape feature variables to model redd densities of spring Chinook salmon (Oncorhynchus tshawytscha) and steelhead trout (Oncorhynchus mykiss) in the Methow River watershed. Redd densities were estimated from redd counts conducted from 2005 to 2007 and 2009 for steelhead trout and 2005 to 2009 for spring Chinook salmon. These densities were modeled using generalized linear mixed models. Variables examined included primary and secondary geology type, habitat type, flow type, sinuosity, and slope of stream channel. In addition, we included spring effect and hatchery effect variables to account for high densities of redds near known springs and hatchery outflows. Variables were associated with National Hydrography Database reach designations for modeling redd densities within each reach. Reaches were assigned a dominant habitat type, geology, mean slope, and sinuosity. The best fit model for spring Chinook salmon included sinuosity, critical slope, habitat type, flow type, and hatchery effect. Flow type, slope, and habitat type variables accounted for most of the variation in the data. The best fit model for steelhead trout included year, habitat type, flow type, hatchery effect, and spring effect. The spring effect, flow type, and hatchery effect variables explained most of the variation in the data. Our models illustrate how broad-scale landscape features may be used to predict spawning habitat over large areas where fine-scale data may be lacking.

  10. Phase-field-based lattice Boltzmann modeling of large-density-ratio two-phase flows

    Science.gov (United States)

    Liang, Hong; Xu, Jiangrong; Chen, Jiangxing; Wang, Huili; Chai, Zhenhua; Shi, Baochang

    2018-03-01

    In this paper, we present a simple and accurate lattice Boltzmann (LB) model for immiscible two-phase flows that is able to deal with large density contrasts. This model utilizes two LB equations, one of which is used to solve the conservative Allen-Cahn equation, while the other is adopted to solve the incompressible Navier-Stokes equations. A forcing distribution function is elaborately designed in the LB equation for the Navier-Stokes equations, which makes the model much simpler than existing LB models. In addition, the proposed model can achieve superior numerical accuracy compared with previous Allen-Cahn-type LB models. Several benchmark two-phase problems, including the static droplet, layered Poiseuille flow, and spinodal decomposition, are simulated to validate the present LB model. It is found that the present model achieves relatively small spurious velocities by the standards of the LB community, and the obtained numerical results also show good agreement with the analytical solutions or some available results. Lastly, we use the present model to investigate droplet impact on a thin liquid film with a large density ratio of 1000 and Reynolds numbers ranging from 20 to 500. The fascinating phenomenon of droplet splashing is successfully reproduced by the present model, and the numerically predicted spreading radius is found to obey the power law reported in the literature.
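    The power-law check on the spreading radius amounts to a linear fit in log-log space. A minimal sketch, using synthetic radius data generated from an assumed r = a·t^b law rather than actual simulation output:

```python
import math

def fit_power_law(ts, rs):
    """Least-squares fit of r = a * t**b via linear regression in log-log space."""
    xs, ys = [math.log(t) for t in ts], [math.log(r) for r in rs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))  # slope = exponent
    a = math.exp(my - b * mx)               # intercept = prefactor
    return a, b

# synthetic spreading-radius data following an assumed r = 1.2 * t**0.5 law
ts = [1.0, 2.0, 4.0, 8.0]
a, b = fit_power_law(ts, [1.2 * t ** 0.5 for t in ts])
```

    Recovering an exponent near 0.5 from measured radii is how agreement with the literature's spreading law would be quantified.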

  11. A predictive model for dimensional errors in fused deposition modeling

    DEFF Research Database (Denmark)

    Stolfi, A.

    2015-01-01

    This work concerns the effect of deposition angle (a) and layer thickness (L) on the dimensional performance of FDM parts using a predictive model based on the geometrical description of the FDM filament profile. An experimental validation over the whole a range from 0° to 177° at 3° steps and two values of L (0.254 mm, 0.330 mm) was produced by comparing predicted values with external face-to-face measurements. After removing outliers, the results show that the developed two-parameter model can serve as a tool for modeling the FDM dimensional behavior over a wide range of deposition angles.

  12. Quantifying confidence in density functional theory predictions of magnetic ground states

    Science.gov (United States)

    Houchins, Gregory; Viswanathan, Venkatasubramanian

    2017-10-01

    Density functional theory (DFT) simulations, at the generalized gradient approximation (GGA) level, are being routinely used for material discovery based on high-throughput descriptor-based searches. The success of descriptor-based material design relies on eliminating bad candidates and keeping good candidates for further investigation. While DFT has been widely successful for the former, oftentimes good candidates are lost due to the uncertainty associated with the DFT-predicted material properties. Uncertainty associated with DFT predictions has gained prominence and has led to the development of exchange-correlation functionals that have built-in error estimation capability. In this work, we demonstrate the use of the built-in error estimation capabilities of the BEEF-vdW exchange-correlation functional for quantifying the uncertainty associated with the magnetic ground state of solids. We demonstrate this approach by calculating the uncertainty estimate for the energy difference between the different magnetic states of solids and compare it against a range of GGA exchange-correlation functionals, as is done in many first-principles calculations of materials. We show that this estimate reasonably bounds the range of values obtained with the different GGA functionals. The estimate is determined as a postprocessing step and thus provides a computationally robust and systematic approach to estimating uncertainty associated with predictions of magnetic ground states. We define a confidence value (c-value) that incorporates all calculated magnetic states in order to quantify the concurrence of the prediction at the GGA level, and argue that predictions of magnetic ground states from GGA-level DFT are incomplete without an accompanying c-value. We demonstrate the utility of this method using a case study of Li-ion and Na-ion cathode materials; the c-value metric correctly identifies that GGA-level DFT will have low predictability for NaFePO4F. Further, there
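    The c-value can be sketched as the fraction of ensemble members that agree with the best-estimate magnetic ground state. The two-state energies below are synthetic Gaussian draws, not BEEF-vdW output:

```python
import random

def c_value(ensemble_energies):
    """Fraction of ensemble members whose lowest-energy magnetic state matches
    the best-estimate (ensemble-mean) ground state.

    `ensemble_energies` maps state name -> list of energies, one per member.
    """
    states = list(ensemble_energies)
    n = len(next(iter(ensemble_energies.values())))
    means = {s: sum(e) / n for s, e in ensemble_energies.items()}
    best = min(means, key=means.get)  # best-estimate ground state
    agree = sum(1 for i in range(n)
                if min(states, key=lambda s: ensemble_energies[s][i]) == best)
    return agree / n

# synthetic two-state ensemble: FM lower on average, with overlapping spread
rng = random.Random(0)
energies = {"FM":  [-1.00 + rng.gauss(0, 0.05) for _ in range(2000)],
            "AFM": [-0.95 + rng.gauss(0, 0.05) for _ in range(2000)]}
c = c_value(energies)
```

    When the energy distributions overlap, as here, the c-value falls well below 1 even though the mean prediction is unambiguous; that is exactly the low-confidence signal the paper argues should accompany GGA-level ground-state predictions.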

  13. Can stone density on plain radiography predict the outcome of extracorporeal shockwave lithotripsy for ureteral stones?

    Science.gov (United States)

    Lim, Ki Hong; Jung, Jin-Hee; Kwon, Jae Hyun; Lee, Yong Seok; Bae, Jungbum; Cho, Min Chul; Lee, Kwang Soo

    2015-01-01

    Purpose The objective was to determine whether stone density on plain radiography (kidney-ureter-bladder, KUB) could predict the outcome of extracorporeal shockwave lithotripsy (ESWL) for ureteral stones. Materials and Methods A total of 223 patients treated by ESWL for radio-opaque ureteral stones of 5 to 20 mm were included in this retrospective study. All patients underwent routine blood and urine analyses, plain radiography (KUB), and noncontrast computed tomography (NCCT) before ESWL. Demographic, stone, and radiological characteristics on KUB and NCCT were analyzed. The patients were categorized into two groups: a lower-density (LD) group (radiodensity less than or equal to that of the 12th rib, n=163) and a higher-density (HD) group (radiodensity greater than that of the 12th rib, n=60). Stone-free status was assessed by KUB every week after ESWL. A successful outcome was defined as stone free within 1 month after ESWL. Results Mean stone size in the LD group was significantly smaller than that in the HD group (7.5±1.4 mm compared with 9.9±2.9 mm, p=0.002). The overall success rates in the LD and HD groups were 82.1% and 60.0%, respectively (p=0.007). The mean duration to stone-free status and the average number of SWL sessions required for success in the two groups were 21.7 compared with 39.2 days and 1.8 compared with 2.3, respectively. Time to ESWL since colic and radiodensity of the stone on KUB were independent predictors of successful ESWL. Conclusions Our data suggest that larger stone size, longer time to ESWL, and ureteral stones with a radiodensity greater than that of the 12th rib may carry a relatively higher risk of ESWL failure 1 month after the procedure. PMID:25598937

  14. Two stage neural network modelling for robust model predictive control.

    Science.gov (United States)

    Patan, Krzysztof

    2018-01-01

    The paper proposes a novel robust model predictive control scheme realized by means of artificial neural networks. The neural networks are used in two ways: to design the so-called fundamental model of the plant, and to capture the uncertainty associated with that model. In order to simplify the optimization process carried out within the framework of predictive control, an instantaneous linearization is applied, which makes it possible to define the optimization problem in the form of constrained quadratic programming. Stability of the proposed control system is also investigated by showing that a cost function is monotonically decreasing with respect to time. The derived robust model predictive control is tested and validated on the example of a pneumatic servomechanism working at different operating regimes. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
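    Instantaneous linearization can be illustrated by finite-differencing a one-step model about the current operating point, yielding the local LTI model that lets the MPC problem be posed as a QP. The scalar "plant" below is a toy surrogate, not the paper's neural network:

```python
import math

def linearize(f, x0, u0, eps=1e-6):
    """Finite-difference Jacobians of a one-step model x+ = f(x, u) about an
    operating point, giving the local LTI model x+ ~= a*x + b*u + c."""
    fx = f(x0, u0)
    a = (f(x0 + eps, u0) - fx) / eps  # state sensitivity
    b = (f(x0, u0 + eps) - fx) / eps  # input sensitivity
    c = fx - a * x0 - b * u0          # affine offset at the operating point
    return a, b, c

# toy stand-in for a trained neural plant model (not the paper's network)
plant = lambda x, u: math.tanh(0.8 * x) + 0.5 * u
a, b, c = linearize(plant, x0=0.0, u0=0.0)
```

    Refreshing (a, b, c) at every sampling instant is what keeps the quadratic program tracking the nonlinear plant across operating regimes.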

  15. Predicting extinction rates in stochastic epidemic models

    International Nuclear Information System (INIS)

    Schwartz, Ira B; Billings, Lora; Dykman, Mark; Landsman, Alexandra

    2009-01-01

    We investigate the stochastic extinction processes in a class of epidemic models. Motivated by the process of natural disease extinction in epidemics, we examine the rate of extinction as a function of disease spread. We show that the effective entropic barrier for extinction in a susceptible–infected–susceptible epidemic model displays scaling with the distance to the bifurcation point, with an unusual critical exponent. We make a direct comparison between predictions and numerical simulations. We also consider the effect of non-Gaussian vaccine schedules, and show numerically how the extinction process may be enhanced when the vaccine schedules are Poisson distributed

  16. Predictive Modeling of the CDRA 4BMS

    Science.gov (United States)

    Coker, Robert F.; Knox, James C.

    2016-01-01

    As part of NASA's Advanced Exploration Systems (AES) program and the Life Support Systems Project (LSSP), fully predictive models of the Four Bed Molecular Sieve (4BMS) of the Carbon Dioxide Removal Assembly (CDRA) on the International Space Station (ISS) are being developed. This virtual laboratory will be used to help reduce mass, power, and volume requirements for future missions. In this paper we describe current and planned modeling developments in the area of carbon dioxide removal to support future crewed Mars missions as well as the resolution of anomalies observed in the ISS CDRA.

  17. Data Driven Economic Model Predictive Control

    Directory of Open Access Journals (Sweden)

    Masoud Kheradmandi

    2018-04-01

    Full Text Available This manuscript addresses the problem of data-driven, model-based economic model predictive control (MPC) design. To this end, first, a data-driven Lyapunov-based MPC is designed, and shown to be capable of stabilizing a system at an unstable equilibrium point. The data-driven Lyapunov-based MPC utilizes a linear time invariant (LTI) model, cognizant of the fact that the training data, owing to the unstable nature of the equilibrium point, has to be obtained from closed-loop operation or experiments. Simulation results are first presented demonstrating closed-loop stability under the proposed data-driven Lyapunov-based MPC. The underlying data-driven model is then utilized as the basis to design an economic MPC. The economic improvements yielded by the proposed method are illustrated through simulations on a nonlinear chemical process system example.

  18. An empirical probability model of detecting species at low densities.

    Science.gov (United States)

    Delaney, David G; Leung, Brian

    2010-06-01

    False negatives, not detecting things that are actually present, are an important but understudied problem. False negatives are the result of our inability to perfectly detect species, especially those at low density such as endangered species or newly arriving introduced species. They reduce our ability to interpret presence-absence survey data and make sound management decisions (e.g., rapid response). To reduce the probability of false negatives, we need to compare the efficacy and sensitivity of different sampling approaches and quantify an unbiased estimate of the probability of detection. We conducted field experiments in the intertidal zone of New England and New York to test the sensitivity of two sampling approaches (quadrat vs. total area search, TAS), given different target characteristics (mobile vs. sessile). Using logistic regression we built detection curves for each sampling approach that related the sampling intensity and the density of targets to the probability of detection. The TAS approach reduced the probability of false negatives and detected targets faster than the quadrat approach. Mobility of targets increased the time to detection but did not affect detection success. Finally, we interpreted two years of presence-absence data on the distribution of the Asian shore crab (Hemigrapsus sanguineus) in New England and New York, using our probability model for false negatives. The type of experimental approach in this paper can help to reduce false negatives and increase our ability to detect species at low densities by refining sampling approaches, which can guide conservation strategies and management decisions in various areas of ecology such as conservation biology and invasion ecology.
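    The detection-curve construction described above can be sketched as follows; the survey data, effect sizes, and fitting loop are invented for illustration (the study's actual data and fitted logistic models differ):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical survey data: sampling effort (minutes) and target density
# (targets per m^2); detection is a 0/1 outcome, more likely at high
# effort and high density.
n = 400
effort = rng.uniform(1, 60, n)
density = rng.uniform(0.1, 5.0, n)
true_logit = -3.0 + 0.06 * effort + 0.8 * density
detected = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

# Fit logistic regression (intercept, effort, density) by gradient descent.
X = np.column_stack([np.ones(n), effort, density])
w = np.zeros(3)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.0005 * X.T @ (p - detected) / n

def p_detect(effort_min, dens):
    # The fitted detection curve: probability of detection as a function
    # of sampling intensity and target density.
    return 1 / (1 + np.exp(-(w[0] + w[1] * effort_min + w[2] * dens)))
```

    Inverting such a curve gives the sampling effort needed to cap the false-negative rate at a chosen level for a given target density.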

  19. Prostate-specific antigen density is predictive of outcome in suboptimal prostate seed brachytherapy.

    Science.gov (United States)

    Benzaquen, David; Delouya, Guila; Ménard, Cynthia; Barkati, Maroie; Taussky, Daniel

    In prostate seed brachytherapy, a D90 of prostate-specific antigen + 2). Univariate and multivariate analyses were performed, adjusting for known prognostic factors such as D90 and prostate-specific antigen density (PSAD) of ≥0.15 ng/mL/cm³, to evaluate their ability to predict BF. Median followup for patients without BF was 72 months (interquartile range 56-96). The BF-free rate was 95% at 5 years and 88% at 8 years. In univariate analysis, PSAD and the cancer of the prostate risk assessment score were predictive of BF. On multivariate analysis, none of the factors remained significant. Patients with a low PSAD (<0.15 ng/mL/cm³) and an optimal implant at 30 days after implantation (defined as D90 ≥ 130 Gy) had the best prognosis, compared with patients with both factors unfavorable (p = 0.006). A favorable PSAD was associated with a good prognosis, independently of the D90 (<130 Gy vs. ≥130 Gy, p = 0.7). Patients with a PSAD of <0.15 ng/mL/cm³ have little risk of BF, even in the case of a suboptimal implant. These results need to be validated in other patient cohorts. Copyright © 2016 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.

  20. Toward a predictive model for elastomer seals

    Science.gov (United States)

    Molinari, Nicola; Khawaja, Musab; Sutton, Adrian; Mostofi, Arash

    Nitrile butadiene rubber (NBR) and hydrogenated-NBR (HNBR) are widely used elastomers, especially as seals in oil and gas applications. During exposure to well-hole conditions, ingress of gases causes degradation of performance, including mechanical failure. We use computer simulations to investigate this problem at two different length and time-scales. First, we study the solubility of gases in the elastomer using a chemically-inspired description of HNBR based on the OPLS all-atom force-field. Starting with a model of NBR, C=C double bonds are saturated with either hydrogen or intramolecular cross-links, mimicking the hydrogenation of NBR to form HNBR. We validate against trends for the mass density and glass transition temperature for HNBR as a function of cross-link density, and for NBR as a function of the fraction of acrylonitrile in the copolymer. Second, we study mechanical behaviour using a coarse-grained model that overcomes some of the length and time-scale limitations of an all-atom approach. Nanoparticle fillers added to the elastomer matrix to enhance mechanical response are also included. Our initial focus is on understanding the mechanical properties at the elevated temperatures and pressures experienced in well-hole conditions.

  1. Plant control using embedded predictive models

    International Nuclear Information System (INIS)

    Godbole, S.S.; Gabler, W.E.; Eschbach, S.L.

    1990-01-01

    B&W recently undertook the design of an advanced light water reactor control system. A concept new to nuclear steam system (NSS) control was developed. The concept, which is called the Predictor-Corrector, uses mathematical models of portions of the controlled NSS to calculate, at various levels within the system, demand and control element position signals necessary to satisfy electrical demand. The models give the control system the ability to reduce overcooling and undercooling of the reactor coolant system during transients and upsets. Two types of mathematical models were developed for use in designing and testing the control system. One model was a conventional, comprehensive NSS model that responds to control system outputs and calculates the resultant changes in plant variables that are then used as inputs to the control system. Two other models, embedded in the control system, were less conventional, inverse models. These models accept as inputs plant variables, equipment states, and demand signals and predict plant operating conditions and control element states that will satisfy the demands. This paper reports preliminary results of closed-loop Reactor Coolant (RC) pump trip and normal load reduction testing of the advanced concept. Results of additional transient testing and of open- and closed-loop stability analyses will be reported as they become available.

  2. MHD Modeling of Conductors at Ultra-High Current Density

    International Nuclear Information System (INIS)

    ROSENTHAL, STEPHEN E.; DESJARLAIS, MICHAEL P.; SPIELMAN, RICK B.; STYGAR, WILLIAM A.; ASAY, JAMES R.; DOUGLAS, M.R.; HALL, C.A.; FRESE, M.H.; MORSE, R.L.; REISMAN, D.B.

    2000-01-01

    In conjunction with ongoing high-current experiments on Sandia National Laboratories' Z accelerator, the authors have revisited a problem first described in detail by Heinz Knoepfel. Unlike the 1-Tesla MITLs of pulsed power accelerators used to produce intense particle beams, Z's disc transmission line (downstream of the current addition) is in a 100--1,200 Tesla regime, so its conductors cannot be modeled simply as static infinite conductivity boundaries. Using the MHD code MACH2 they have been investigating the conductor hydrodynamics, characterizing the joule heating, magnetic field diffusion, and material deformation, pressure, and velocity over a range of current densities, current rise-times, and conductor materials. Three purposes of this work are (1) to quantify power flow losses owing to ultra-high magnetic fields, (2) to model the response of VISAR diagnostic samples in various configurations on Z, and (3) to incorporate the most appropriate equation of state and conductivity models into the MHD computations. Certain features are strongly dependent on the details of the conductivity model.

  3. MHD Modeling of Conductors at Ultra-High Current Density

    International Nuclear Information System (INIS)

    Rosenthal, S.E.; Asay, J.R.; Desjarlais, M.P.; Douglas, M.R.; Frese, M.H.; Hall, C.A.; Morse, R.L.; Reisman, D.; Spielman, R.B.; Stygar, W.A.

    1999-01-01

    In conjunction with ongoing high-current experiments on Sandia National Laboratories' Z accelerator we have revisited a problem first described in detail by Heinz Knoepfel. MITLs of previous pulsed power accelerators have been in the 1-Tesla regime. Z's disc transmission line (downstream of the current addition) is in a 100-1200 Tesla regime, so its conductors cannot be modeled simply as static infinite conductivity boundaries. Using the MHD code MACH2 we have been investigating conductor hydrodynamics, characterizing the joule heating, magnetic field diffusion, and material deformation, pressure, and velocity over a range of current densities, current rise-times, and conductor materials. Three purposes of this work are (1) to quantify power flow losses owing to ultra-high magnetic fields, (2) to model the response of VISAR diagnostic samples in various configurations on Z, and (3) to incorporate the most appropriate equation of state and conductivity models into our MHD computations. Certain features are strongly dependent on the details of the conductivity model. Comparison with measurements on Z will be discussed.

  4. Model representations of kerogen structures: An insight from density functional theory calculations and spectroscopic measurements.

    Science.gov (United States)

    Weck, Philippe F; Kim, Eunja; Wang, Yifeng; Kruichak, Jessica N; Mills, Melissa M; Matteo, Edward N; Pellenq, Roland J-M

    2017-08-01

    Molecular structures of kerogen control hydrocarbon production in unconventional reservoirs. Significant progress has been made in developing model representations of various kerogen structures. These models have been widely used for the prediction of gas adsorption and migration in shale matrix. However, using density functional perturbation theory (DFPT) calculations and vibrational spectroscopic measurements, we here show that a large gap may still remain between the existing model representations and actual kerogen structures, therefore calling for new model development. Using DFPT, we calculated Fourier transform infrared (FTIR) spectra for the six most widely used kerogen structure models. The computed spectra were then systematically compared to the FTIR absorption spectra collected for kerogen samples isolated from the Mancos, Woodford, and Marcellus formations, representing a wide range of kerogen origin and maturation conditions. Limited agreement between the model predictions and the measurements highlights that the existing kerogen models may still miss some key features in structural representation. A combination of DFPT calculations with spectroscopic measurements may provide a useful diagnostic tool for assessing the adequacy of a proposed structural model as well as for future model development. This approach may eventually help develop comprehensive infrared (IR)-fingerprints for tracing kerogen evolution.

  5. Ground Motion Prediction Models for Caucasus Region

    Science.gov (United States)

    Jorjiashvili, Nato; Godoladze, Tea; Tvaradze, Nino; Tumanova, Nino

    2016-04-01

    Ground motion prediction models (GMPMs) relate ground motion intensity measures to variables describing earthquake source, path, and site effects. Estimation of expected ground motion is a fundamental part of earthquake hazard assessment. The parameters most commonly used in attenuation relations are peak ground acceleration and spectral acceleration, because they provide the information needed for seismic hazard assessment. Development of the Georgian Digital Seismic Network began in 2003. In this study, new GMPMs are obtained based on new data from the Georgian seismic network and from neighboring countries. Model parameters are estimated by classical statistical regression analysis. Site ground conditions are additionally considered, because the same earthquake recorded at the same distance may cause different damage depending on ground conditions. Empirical ground-motion prediction models (GMPMs) require adjustment to make them appropriate for site-specific scenarios. However, the process of making such adjustments remains a challenge. This work presents a holistic framework for the development of a peak ground acceleration (PGA) or spectral acceleration (SA) GMPE that is easily adjustable to different seismological conditions and does not suffer from the practical problems associated with adjustments in the response spectral domain.
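    The classical regression step described above amounts to a least-squares fit of an attenuation form such as ln(PGA) = c0 + c1·M + c2·ln(R); the synthetic records and coefficients below are assumptions for illustration, not Georgian network data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic records: magnitude M, distance R (km), and PGA generated from a
# hypothetical attenuation law with lognormal scatter.
n = 500
M = rng.uniform(4.0, 7.0, n)
R = rng.uniform(5.0, 200.0, n)
ln_pga = -2.0 + 1.1 * M - 1.3 * np.log(R) + rng.normal(0, 0.5, n)

# Classical regression: ln(PGA) = c0 + c1*M + c2*ln(R).
A = np.column_stack([np.ones(n), M, np.log(R)])
coef, *_ = np.linalg.lstsq(A, ln_pga, rcond=None)
# coef[1] recovers the magnitude scaling, coef[2] the (negative) distance decay.
```

    Site conditions enter the same framework as additional dummy-variable columns, one per site class.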

  6. Modeling and Prediction of Krueger Device Noise

    Science.gov (United States)

    Guo, Yueping; Burley, Casey L.; Thomas, Russell H.

    2016-01-01

    This paper presents the development of a noise prediction model for aircraft Krueger flap devices that are considered as alternatives to leading edge slotted slats. The prediction model decomposes the total Krueger noise into four components, generated by the unsteady flows, respectively, in the cove under the pressure side surface of the Krueger, in the gap between the Krueger trailing edge and the main wing, around the brackets supporting the Krueger device, and around the cavity on the lower side of the main wing. For each noise component, the modeling follows a physics-based approach that aims at capturing the dominant noise-generating features in the flow and developing correlations between the noise and the flow parameters that control the noise generation processes. The far field noise is modeled using each of the four noise component's respective spectral functions, far field directivities, Mach number dependencies, component amplitudes, and other parametric trends. Preliminary validations are carried out by using small scale experimental data, and two applications are discussed; one for conventional aircraft and the other for advanced configurations. The former focuses on the parametric trends of Krueger noise on design parameters, while the latter reveals its importance in relation to other airframe noise components.
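    Combining the four component spectra into a total far-field level is an energetic (incoherent) sum; a minimal sketch of that bookkeeping step, with hypothetical component levels:

```python
import numpy as np

def total_level(component_levels_db):
    # Incoherent sum of noise components: add mean-square pressures
    # (i.e. powers), not decibel values, then convert back to dB.
    return 10 * np.log10(np.sum(10 ** (np.asarray(component_levels_db) / 10)))

# Two equal 60 dB components (e.g. cove and gap noise) combine to ~63 dB.
combined = total_level([60.0, 60.0])
```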

  7. Prediction of Chemical Function: Model Development and ...

    Science.gov (United States)

    The United States Environmental Protection Agency’s Exposure Forecaster (ExpoCast) project is developing both statistical and mechanism-based computational models for predicting exposures to thousands of chemicals, including those in consumer products. The high-throughput (HT) screening-level exposures developed under ExpoCast can be combined with HT screening (HTS) bioactivity data for the risk-based prioritization of chemicals for further evaluation. The functional role (e.g. solvent, plasticizer, fragrance) that a chemical performs can drive both the types of products in which it is found and the concentration at which it is present, thereby impacting exposure potential. However, critical chemical use information (including functional role) is lacking for the majority of commercial chemicals for which exposure estimates are needed. A suite of machine-learning based models for classifying chemicals in terms of their likely functional roles in products based on structure were developed. This effort required collection, curation, and harmonization of publicly available data sources of chemical functional use information from government and industry bodies. Physicochemical and structure descriptor data were generated for chemicals with function data. Machine-learning classifier models for function were then built in a cross-validated manner from the descriptor/function data using the method of random forests. The models were applied to: 1) predict chemi
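    The random-forest classification step can be illustrated with a toy, pure-NumPy stand-in (decision stumps fit on bootstrap resamples with majority voting); the two descriptors and two functional-role classes below are invented, and a production model would use full trees over many descriptors:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy descriptor data: two features, two functional-role classes
# (0 = "solvent-like", 1 = "plasticizer-like").
n = 300
X = rng.normal(size=(n, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

def fit_stump(Xb, yb):
    # Best single-feature threshold split (a depth-1 "tree").
    best, best_err = (0, 0.0, 0, 1), 1.0
    for f in range(Xb.shape[1]):
        for t in np.quantile(Xb[:, f], [0.25, 0.5, 0.75]):
            left, right = yb[Xb[:, f] <= t], yb[Xb[:, f] > t]
            if len(left) == 0 or len(right) == 0:
                continue
            ll, rl = int(left.mean() >= 0.5), int(right.mean() >= 0.5)
            err = ((left != ll).sum() + (right != rl).sum()) / len(yb)
            if err < best_err:
                best_err, best = err, (f, t, ll, rl)
    return best

# "Forest": stumps fit on bootstrap resamples; predict by majority vote.
forest = []
for _ in range(25):
    idx = rng.integers(0, n, n)
    forest.append(fit_stump(X[idx], y[idx]))

def predict(x):
    votes = [ll if x[f] <= t else rl for f, t, ll, rl in forest]
    return int(np.mean(votes) > 0.5)
```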

  8. Evaluating Predictive Models of Software Quality

    Science.gov (United States)

    Ciaschini, V.; Canaparo, M.; Ronchieri, E.; Salomoni, D.

    2014-06-01

    Applications from the High Energy Physics scientific community are constantly growing and implemented by a large number of developers. This implies a strong churn on the code and an associated risk of faults, which is unavoidable as long as the software undergoes active evolution. However, the necessities of production systems run counter to this. Stability and predictability are of paramount importance; in addition, a short turn-around time for the defect discovery-correction-deployment cycle is required. A way to reconcile these opposite foci is to use a software quality model to obtain an approximation of the risk before releasing a program, to only deliver software with a risk lower than an agreed threshold. In this article we evaluated two quality predictive models to identify the operational risk and the quality of some software products. We applied these models to the development history of several EMI packages with the intent of discovering the risk factor of each product and comparing it with its real history. We attempted to determine if the models reasonably map reality for the applications under evaluation, and finally we conclude by suggesting directions for further studies.

  9. Predicting FLDs Using a Multiscale Modeling Scheme

    Science.gov (United States)

    Wu, Z.; Loy, C.; Wang, E.; Hegadekatte, V.

    2017-09-01

    The measurement of a single forming limit diagram (FLD) requires significant resources and is time consuming. We have developed a multiscale modeling scheme to predict FLDs using a combination of limited laboratory testing, crystal plasticity (VPSC) modeling, and dual sequential-stage finite element (ABAQUS/Explicit) modeling with the Marciniak-Kuczynski (M-K) criterion to determine the limit strain. We have established a means to work around existing limitations in ABAQUS/Explicit by using an anisotropic yield locus (e.g., BBC2008) in combination with the M-K criterion. We further apply a VPSC model to reduce the number of laboratory tests required to characterize the anisotropic yield locus. In the present work, we show that the predicted FLD is in excellent agreement with the measured FLD for AA5182 in the O temper. Instead of 13 different tests as for a traditional FLD determination within Novelis, our technique uses just four measurements: tensile properties in three orientations; plane strain tension; biaxial bulge; and the sheet crystallographic texture. The turnaround time is consequently far less than for the traditional laboratory measurement of the FLD.

  10. PREDICTION MODELS OF GRAIN YIELD AND CHARACTERIZATION

    Directory of Open Access Journals (Sweden)

    Narciso Ysac Avila Serrano

    2009-06-01

    Full Text Available With the objective of characterizing the grain yield of five cowpea cultivars and finding linear regression models to predict it, a study was developed in La Paz, Baja California Sur, Mexico. A complete randomized blocks design was used. Simple and multivariate analyses of variance were carried out using the canonical variables to characterize the cultivars. The variables clusters per plant, pods per plant, pods per cluster, seed weight per plant, seed hectoliter weight, 100-seed weight, seed length, seed width, seed thickness, pod length, pod width, pod weight, seeds per pod, and seed weight per pod showed significant differences (P≤ 0.05 among cultivars. The Paceño and IT90K-277-2 cultivars showed the higher seed weight per plant. The linear regression models showed correlation coefficients ≥0.92. In these models, the seed weight per plant, pods per cluster, pods per plant, clusters per plant, and pod length showed significant correlations (P≤ 0.05. In conclusion, the results showed that grain yield differs among cultivars and that, for its estimation, the prediction models showed highly dependable determination coefficients.

  11. Evaluating predictive models of software quality

    International Nuclear Information System (INIS)

    Ciaschini, V; Canaparo, M; Ronchieri, E; Salomoni, D

    2014-01-01

    Applications from the High Energy Physics scientific community are constantly growing and implemented by a large number of developers. This implies a strong churn on the code and an associated risk of faults, which is unavoidable as long as the software undergoes active evolution. However, the necessities of production systems run counter to this. Stability and predictability are of paramount importance; in addition, a short turn-around time for the defect discovery-correction-deployment cycle is required. A way to reconcile these opposite foci is to use a software quality model to obtain an approximation of the risk before releasing a program, to only deliver software with a risk lower than an agreed threshold. In this article we evaluated two quality predictive models to identify the operational risk and the quality of some software products. We applied these models to the development history of several EMI packages with the intent of discovering the risk factor of each product and comparing it with its real history. We attempted to determine if the models reasonably map reality for the applications under evaluation, and finally we conclude by suggesting directions for further studies.

  12. Gamma-Ray Pulsars Models and Predictions

    CERN Document Server

    Harding, A K

    2001-01-01

    Pulsed emission from gamma-ray pulsars originates inside the magnetosphere, from radiation by charged particles accelerated near the magnetic poles or in the outer gaps. In polar cap models, the high energy spectrum is cut off by magnetic pair production above an energy that is dependent on the local magnetic field strength. While most young pulsars with surface fields in the range B = 10^{12} - 10^{13} G are expected to have high energy cutoffs around several GeV, the gamma-ray spectra of old pulsars having lower surface fields may extend to 50 GeV. Although the gamma-ray emission of older pulsars is weaker, detecting pulsed emission at high energies from nearby sources would be an important confirmation of polar cap models. Outer gap models predict more gradual high-energy turnovers at around 10 GeV, but also predict an inverse Compton component extending to TeV energies. Detection of pulsed TeV emission, which would not survive attenuation at the polar caps, is thus an important test of outer gap models. N...

  13. Artificial Neural Network Model for Predicting Compressive

    Directory of Open Access Journals (Sweden)

    Salim T. Yousif

    2013-05-01

    Full Text Available Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at early time is highly desirable. This study presents the effort in applying neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. The test of the model by unused data within the range of input parameters shows that the maximum absolute error for the model is about 20%, and 88% of the output results have absolute errors less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results showed that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.
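    A minimal back-propagation sketch in the same spirit (one hidden layer, trained on a synthetic strength surrogate; the data generator, layer sizes, and learning rate are assumptions, not the paper's trained model):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for concrete mix data: inputs are w/c ratio, aggregate
# size, and slump (normalized); target is a hypothetical strength surrogate.
n = 200
X = rng.uniform(0, 1, size=(n, 3))
y = (40 - 25 * X[:, 0] + 5 * X[:, 1] + 2 * X[:, 2]).reshape(-1, 1) / 50.0

# One hidden tanh layer, trained by plain back-propagation on mean-square error.
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, out0 = forward(X)
loss0 = float(np.mean((out0 - y) ** 2))
for _ in range(2000):
    h, out = forward(X)
    err = (out - y) / n                      # output-layer error signal
    gW2 = h.T @ err; gb2 = err.sum(0)        # gradients, output layer
    dh = err @ W2.T * (1 - h ** 2)           # back-propagated through tanh
    gW1 = X.T @ dh; gb1 = dh.sum(0)          # gradients, hidden layer
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
_, out1 = forward(X)
loss1 = float(np.mean((out1 - y) ** 2))      # training loss after fitting
```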

  14. Clinical Predictive Modeling Development and Deployment through FHIR Web Services.

    Science.gov (United States)

    Khalilia, Mohammed; Choi, Myung; Henderson, Amelia; Iyengar, Sneha; Braunstein, Mark; Sun, Jimeng

    2015-01-01

    Clinical predictive modeling involves two challenging tasks: model development and model deployment. In this paper we demonstrate a software architecture for developing and deploying clinical predictive models using web services via the Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) standard. The services enable model development using electronic health records (EHRs) stored in OMOP CDM databases and model deployment for scoring individual patients through FHIR resources. The MIMIC2 ICU dataset and a synthetic outpatient dataset were transformed into OMOP CDM databases for predictive model development. The resulting predictive models are deployed as FHIR resources, which receive requests of patient information, perform prediction against the deployed predictive model and respond with prediction scores. To assess the practicality of this approach we evaluated the response and prediction time of the FHIR modeling web services. We found the system to be reasonably fast with one second total response time per patient prediction.

  15. Density-dependent electron transport and precise modeling of GaN high electron mobility transistors

    Energy Technology Data Exchange (ETDEWEB)

    Bajaj, Sanyam, E-mail: bajaj.10@osu.edu; Shoron, Omor F.; Park, Pil Sung; Krishnamoorthy, Sriram; Akyol, Fatih; Hung, Ting-Hsiang [Department of Electrical and Computer Engineering, The Ohio State University, Columbus, Ohio 43210 (United States); Reza, Shahed; Chumbes, Eduardo M. [Raytheon Integrated Defense Systems, Andover, Massachusetts 01810 (United States); Khurgin, Jacob [Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, Maryland 21218 (United States); Rajan, Siddharth [Department of Electrical and Computer Engineering, The Ohio State University, Columbus, Ohio 43210 (United States); Department of Material Science and Engineering, The Ohio State University, Columbus, Ohio 43210 (United States)

    2015-10-12

    We report on the direct measurement of two-dimensional sheet charge density dependence of electron transport in AlGaN/GaN high electron mobility transistors (HEMTs). Pulsed IV measurements established increasing electron velocities with decreasing sheet charge densities, resulting in a saturation velocity of 1.9 × 10⁷ cm/s at a low sheet charge density of 7.8 × 10¹¹ cm⁻². An optical phonon emission-based electron velocity model for GaN is also presented. It accommodates stimulated longitudinal optical (LO) phonon emission, which clamps the electron velocity given the strong electron-phonon interaction and long LO phonon lifetime in GaN. A comparison with the measured density-dependent saturation velocity shows that it captures the dependence rather well. Finally, the experimental result is applied in a TCAD-based device simulator to predict DC and small signal characteristics of a reported GaN HEMT. Good agreement between the simulated and reported experimental results validated the measurement presented in this report and established accurate modeling of GaN HEMTs.
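    The reported trend (lower saturation velocity at higher sheet charge) can be folded into a standard field-velocity expression; the mobility value and the linear interpolation of v_sat below are assumptions for illustration, anchored only by the measured 1.9 × 10⁷ cm/s at 7.8 × 10¹¹ cm⁻²:

```python
MU = 1500.0  # assumed low-field mobility, cm^2/(V*s)

def v_sat(ns):
    # Assumed linear interpolation of saturation velocity (cm/s) between the
    # reported value at ns = 7.8e11 cm^-2 and a lower value at high density.
    return 1.9e7 - 0.4e7 * (ns - 7.8e11) / (1e13 - 7.8e11)

def velocity(E, ns):
    # Standard two-parameter field-velocity relation: linear (mobility-limited)
    # at low field E (V/cm), clamped at v_sat(ns) at high field.
    return MU * E / (1.0 + MU * E / v_sat(ns))
```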

  16. Density-dependent electron transport and precise modeling of GaN high electron mobility transistors

    International Nuclear Information System (INIS)

    Bajaj, Sanyam; Shoron, Omor F.; Park, Pil Sung; Krishnamoorthy, Sriram; Akyol, Fatih; Hung, Ting-Hsiang; Reza, Shahed; Chumbes, Eduardo M.; Khurgin, Jacob; Rajan, Siddharth

    2015-01-01

    We report on the direct measurement of two-dimensional sheet charge density dependence of electron transport in AlGaN/GaN high electron mobility transistors (HEMTs). Pulsed IV measurements established increasing electron velocities with decreasing sheet charge densities, resulting in a saturation velocity of 1.9 × 10⁷ cm/s at a low sheet charge density of 7.8 × 10¹¹ cm⁻². An optical phonon emission-based electron velocity model for GaN is also presented. It accommodates stimulated longitudinal optical (LO) phonon emission, which clamps the electron velocity given the strong electron-phonon interaction and long LO phonon lifetime in GaN. A comparison with the measured density-dependent saturation velocity shows that it captures the dependence rather well. Finally, the experimental result is applied in a TCAD-based device simulator to predict DC and small signal characteristics of a reported GaN HEMT. Good agreement between the simulated and reported experimental results validated the measurement presented in this report and established accurate modeling of GaN HEMTs.

  17. How can we model selectively neutral density dependence in evolutionary games.

    Science.gov (United States)

    Argasinski, Krzysztof; Kozłowski, Jan

    2008-03-01

    The problem of density dependence appears in all approaches to the modelling of population dynamics. It is pertinent to classic models (i.e., Lotka-Volterra's), and also population genetics and game theoretical models related to the replicator dynamics. There is no density dependence in the classic formulation of replicator dynamics, which means that population size may grow to infinity. Therefore the question arises: How is unlimited population growth suppressed in frequency-dependent models? Two categories of solutions can be found in the literature. In the first, replicator dynamics is independent of background fitness. In the second type of solution, a multiplicative suppression coefficient is used, as in a logistic equation. Both approaches have disadvantages. The first one is incompatible with the methods of life history theory and basic probabilistic intuitions. The logistic type of suppression of per capita growth rate stops trajectories of selection when population size reaches the maximal value (carrying capacity); hence this method does not satisfy selective neutrality. To overcome these difficulties, we must explicitly consider turn-over of individuals dependent on mortality rate. This new approach leads to two interesting predictions. First, the equilibrium value of population size is lower than carrying capacity and depends on the mortality rate. Second, although the phase portrait of selection trajectories is the same as in density-independent replicator dynamics, pace of selection slows down when population size approaches equilibrium, and then remains constant and dependent on the rate of turn-over of individuals.
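    The proposed resolution (explicit turnover: logistic suppression of births together with an explicit mortality rate) can be illustrated numerically; the rates below are arbitrary, chosen only to show that the equilibrium population sits below the carrying capacity and depends on mortality:

```python
import numpy as np

# Two strategies with birth rates r, a common mortality rate d, and logistic
# suppression acting on births only (turnover-style density dependence):
#   dn_i/dt = n_i * (r_i * (1 - n_total/K) - d)
r = np.array([2.0, 1.5])
d, K = 0.5, 1000.0
n = np.array([10.0, 10.0])
dt = 0.01
for _ in range(20000):  # forward-Euler integration to t = 200
    total = n.sum()
    n = n + dt * n * (r * (1 - total / K) - d)

pop = n.sum()    # settles near K * (1 - d / r_max) = 750, below K = 1000
freq = n / pop   # selection still fixes the faster-reproducing strategy
```

    The equilibrium condition r_max (1 - n/K) = d makes the mortality dependence explicit: raising d lowers the equilibrium population, while the winning strategy is the same as in density-independent replicator dynamics.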

  18. Fluid and gyrokinetic modelling of particle transport in plasmas with hollow density profiles

    International Nuclear Information System (INIS)

    Tegnered, D; Oberparleiter, M; Nordman, H; Strand, P

    2016-01-01

    Hollow density profiles occur in connection with pellet fuelling and L to H transitions. A positive density gradient could potentially stabilize the turbulence or change the relation between convective and diffusive fluxes, thereby reducing the turbulent transport of particles towards the center, making the fuelling scheme inefficient. In the present work, the particle transport driven by ITG/TE mode turbulence in regions of hollow density profiles is studied by fluid as well as gyrokinetic simulations. The fluid model used, an extended version of the Weiland transport model, the Extended Drift Wave Model (EDWM), incorporates an arbitrary number of ion species in a multi-fluid description, and an extended wavelength spectrum. The fluid model, which is fast and hence suitable for use in predictive simulations, is compared to gyrokinetic simulations using the code GENE. Typical tokamak parameters are used based on the Cyclone Base Case. Parameter scans in key plasma parameters like plasma β, R/L_T, and magnetic shear are investigated. It is found that β in particular has a stabilizing effect in the negative R/L_n region; both nonlinear GENE and EDWM show a decrease in inward flux for negative R/L_n and a change of direction from inward to outward for positive R/L_n. This might have serious consequences for pellet fuelling of high β plasmas. (paper)

  19. Using soft computing techniques to predict corrected air permeability using Thomeer parameters, air porosity and grain density

    Science.gov (United States)

    Nooruddin, Hasan A.; Anifowose, Fatai; Abdulraheem, Abdulazeez

    2014-03-01

    Soft computing techniques have recently become very popular in the oil industry. A number of computational-intelligence-based predictive methods have been widely applied in the industry with high prediction capabilities. Some of the popular methods include feed-forward neural networks, radial basis function networks, generalized regression neural networks, functional networks, support vector regression and adaptive network fuzzy inference systems. A comparative study among the most popular soft computing techniques is presented using a large dataset published in the literature describing multimodal pore systems in the Arab D formation. The inputs to the models are air porosity, grain density, and Thomeer parameters obtained from mercury injection capillary pressure profiles. Corrected air permeability is the target variable. Applying the developed permeability models in a recent reservoir characterization workflow ensures consistency between micro- and macro-scale information, represented mainly by the Thomeer parameters and absolute permeability. The dataset was divided into two parts, with 80% of the data used for training and 20% for testing. As a pre-processing step, the target permeability variable was transformed to the logarithmic scale to show better correlations with the input variables. Statistical and graphical analyses of the results, including permeability cross-plots and detailed error measures, were created. In general, the comparative study showed very close results among the developed models. The feed-forward neural network permeability model showed the lowest average relative error, average absolute relative error, standard deviation of error and root mean square error, making it the best model for such problems. The adaptive network fuzzy inference system also showed very good results.
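    As a minimal sketch of the workflow described above (80/20 split, log-transformed permeability target, relative-error metric), the following substitutes ordinary least squares for the neural and fuzzy models and uses invented synthetic data in place of the Arab D dataset; the linear relation and all numbers are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the core-plug dataset (the real data are not
# reproduced here); the relation below is an invented example.
n = 200
porosity = rng.uniform(0.05, 0.30, n)          # air porosity, fraction
grain_density = rng.uniform(2.60, 2.90, n)     # g/cm3
thomeer_g = rng.uniform(0.1, 2.0, n)           # Thomeer-like shape factor
log_k = 3.0 + 8.0 * porosity - 1.5 * thomeer_g + rng.normal(0, 0.1, n)

# Pre-processing as in the paper: model the target on the log scale.
X = np.column_stack([np.ones(n), porosity, grain_density, thomeer_g])

# 80% training / 20% testing split
idx = rng.permutation(n)
train, test = idx[:160], idx[160:]
coef, *_ = np.linalg.lstsq(X[train], log_k[train], rcond=None)

# Average absolute relative error, evaluated back on the permeability scale
k_true = np.exp(log_k[test])
k_pred = np.exp(X[test] @ coef)
aare = np.mean(np.abs(k_pred - k_true) / k_true)
```

The log transform keeps the regression from being dominated by the highest-permeability plugs; the relative-error metric then compares models on the scale practitioners care about.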

  20. Modeling charged defects inside density functional theory band gaps

    International Nuclear Information System (INIS)

    Schultz, Peter A.; Edwards, Arthur H.

    2014-01-01

    Density functional theory (DFT) has emerged as an important tool to probe microscopic behavior in materials. The fundamental band gap defines the energy scale for charge transition energy levels of point defects in ionic and covalent materials. The eigenvalue gap between occupied and unoccupied states in conventional DFT, the Kohn–Sham gap, is often half or less of the experimental band gap, seemingly precluding quantitative studies of charged defects. Applying explicit and rigorous control of charge boundary conditions in supercells, we find that calculations of defect energy levels derived from total energy differences give accurate predictions of charge transition energy levels in Si and GaAs, unhampered by a band gap problem. The GaAs system provides a good theoretical laboratory for investigating band gap effects in defect level calculations: depending on the functional and pseudopotential, the Kohn–Sham gap can be as large as 1.1 eV or as small as 0.1 eV. We find that the effective defect band gap, the computed range in defect levels, is mostly insensitive to the Kohn–Sham gap, demonstrating that it is often possible to use conventional DFT for quantitative studies of defect chemistry governing interesting materials behavior in semiconductors and oxides despite a band gap problem.

  1. Element-specific density profiles in interacting biomembrane models

    International Nuclear Information System (INIS)

    Schneck, Emanuel; Rodriguez-Loureiro, Ignacio; Bertinetti, Luca; Gochev, Georgi; Marin, Egor; Novikov, Dmitri; Konovalov, Oleg

    2017-01-01

    Surface interactions involving biomembranes, such as cell–cell interactions or membrane contacts inside cells, play important roles in numerous biological processes. Structural insight into the interacting surfaces is a prerequisite for understanding the interaction characteristics as well as the underlying physical mechanisms. Here, we work with simplified planar experimental models of membrane surfaces, composed of lipids and lipopolymers. Their interaction is quantified in terms of pressure–distance curves using ellipsometry at controlled dehydrating (interaction) pressures. For selected pressures, their internal structure is investigated by standing-wave x-ray fluorescence (SWXF). This technique yields specific density profiles of the chemical elements P and S belonging to lipid headgroups and polymer chains, as well as counter-ion profiles for charged surfaces. (paper)

  2. An analytical model for climatic predictions

    International Nuclear Information System (INIS)

    Njau, E.C.

    1990-12-01

    A climatic model based upon analytical expressions is presented. This model is capable of making long-range predictions of heat energy variations on regional or global scales. These variations can then be transformed into corresponding variations of some other key climatic parameters, since weather and climatic changes are basically driven by differential heating and cooling around the earth. On the basis of the mathematical expressions upon which the model is based, it is shown that the global heat energy structure (and hence the associated climatic system) is characterized by zonally as well as latitudinally propagating fluctuations at frequencies downward of 0.5 day⁻¹. We have calculated the propagation speeds for those particular frequencies that are well documented in the literature. The calculated speeds are in excellent agreement with the measured speeds. (author). 13 refs

  3. An Anisotropic Hardening Model for Springback Prediction

    Science.gov (United States)

    Zeng, Danielle; Xia, Z. Cedric

    2005-08-01

    As more Advanced High-Strength Steels (AHSS) are heavily used for automotive body structures and closure panels, accurate springback prediction for these components becomes more challenging because of their rapid hardening characteristics and ability to sustain even higher stresses. In this paper, a modified Mroz hardening model is proposed to capture the realistic Bauschinger effect at reverse loading, such as when material passes through die radii or a drawbead during the sheet metal forming process. This model accounts for the material's anisotropic yield surface and nonlinear isotropic/kinematic hardening behavior. Material tension/compression test data are used to accurately represent the Bauschinger effect. The effectiveness of the model is demonstrated by comparison of numerical and experimental springback results for a DP600 straight U-channel test.

  4. An Anisotropic Hardening Model for Springback Prediction

    International Nuclear Information System (INIS)

    Zeng, Danielle; Xia, Z. Cedric

    2005-01-01

    As more Advanced High-Strength Steels (AHSS) are heavily used for automotive body structures and closure panels, accurate springback prediction for these components becomes more challenging because of their rapid hardening characteristics and ability to sustain even higher stresses. In this paper, a modified Mroz hardening model is proposed to capture the realistic Bauschinger effect at reverse loading, such as when material passes through die radii or a drawbead during the sheet metal forming process. This model accounts for the material's anisotropic yield surface and nonlinear isotropic/kinematic hardening behavior. Material tension/compression test data are used to accurately represent the Bauschinger effect. The effectiveness of the model is demonstrated by comparison of numerical and experimental springback results for a DP600 straight U-channel test

  5. Use of a mixture statistical model in studying malaria vectors density.

    Directory of Open Access Journals (Sweden)

    Olayidé Boussari

    Vector control is a major step in the process of malaria control and elimination. This requires vector counts and appropriate statistical analyses of these counts. However, vector counts are often overdispersed. A non-parametric mixture of Poisson model (NPMP) is proposed to allow for overdispersion and better describe the vector distribution. Mosquito collections using Human Landing Catches, as well as collection of environmental and climatic data, were carried out from January to December 2009 in 28 villages in Southern Benin. A NPMP regression model with "village" as a random effect is used to test statistical correlations between malaria vector density and environmental and climatic factors. Furthermore, the villages were ranked using the latent classes derived from the NPMP model. Based on this classification of the villages, the impacts of four vector control strategies implemented in the villages were compared. Vector counts were highly variable and overdispersed, with an important proportion of zeros (75%). The NPMP model predicted the observed values well and showed that: (i) proximity to a freshwater body, market gardening, and high levels of rain were associated with high vector density; (ii) water conveyance, cattle breeding, and vegetation index were associated with low vector density. The 28 villages could then be ranked according to the mean vector number as estimated by the random part of the model after adjustment for all covariates. The NPMP model made it possible to describe the distribution of the vector across the study area. The villages were ranked according to the mean vector density after taking into account the most important covariates. This study demonstrates the necessity and possibility of adapting methods of vector counting and sampling to each setting.
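    The core idea of a Poisson mixture for overdispersed counts can be sketched with a small EM fit on synthetic data. This toy uses only two components with invented rates; the paper's model is non-parametric and includes covariates and a village random effect, all omitted here:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic overdispersed counts: many near-zero nights plus a
# high-count component (rates 0.2 and 8.0 are invented for illustration).
counts = np.concatenate([rng.poisson(0.2, 750), rng.poisson(8.0, 250)])

def log_poisson_pmf(x, lam):
    # log Poisson pmf via a table of log-factorials (no SciPy needed)
    logfact = np.concatenate([[0.0], np.cumsum(np.log(np.arange(1, x.max() + 1)))])
    return -lam + x * np.log(lam) - logfact[x]

# EM for a two-component Poisson mixture
lam = np.array([0.5, 5.0])     # initial component rates (guesses)
w = np.array([0.5, 0.5])       # initial mixing weights
for _ in range(200):
    # E-step: posterior responsibility of each component for each count
    logp = np.stack([np.log(w[k]) + log_poisson_pmf(counts, lam[k]) for k in range(2)])
    logp -= logp.max(axis=0)
    resp = np.exp(logp)
    resp /= resp.sum(axis=0)
    # M-step: update mixing weights and component rates
    w = resp.mean(axis=1)
    lam = (resp @ counts) / resp.sum(axis=1)
```

Because the two components are well separated, the fit recovers rates near 0.2 and 8.0 with weights near 0.75/0.25, reproducing both the excess zeros and the heavy tail that a single Poisson cannot.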

  6. A mathematical model of the maximum power density attainable in an alkaline hydrogen/oxygen fuel cell

    Science.gov (United States)

    Kimble, Michael C.; White, Ralph E.

    1991-01-01

    A mathematical model of a hydrogen/oxygen alkaline fuel cell is presented that can be used to predict the polarization behavior under various power loads. The major limitations to achieving high power densities are indicated and methods to increase the maximum attainable power density are suggested. The alkaline fuel cell model describes the phenomena occurring in the solid, liquid, and gaseous phases of the anode, separator, and cathode regions based on porous electrode theory applied to three phases. Fundamental equations of chemical engineering that describe conservation of mass and charge, species transport, and kinetic phenomena are used to develop the model by treating all phases as a homogeneous continuum.

  7. Web tools for predictive toxicology model building.

    Science.gov (United States)

    Jeliazkova, Nina

    2012-07-01

    The development and use of web tools in chemistry has accumulated more than 15 years of history already. Powered by the advances in the Internet technologies, the current generation of web systems are starting to expand into areas, traditional for desktop applications. The web platforms integrate data storage, cheminformatics and data analysis tools. The ease of use and the collaborative potential of the web is compelling, despite the challenges. The topic of this review is a set of recently published web tools that facilitate predictive toxicology model building. The focus is on software platforms, offering web access to chemical structure-based methods, although some of the frameworks could also provide bioinformatics or hybrid data analysis functionalities. A number of historical and current developments are cited. In order to provide comparable assessment, the following characteristics are considered: support for workflows, descriptor calculations, visualization, modeling algorithms, data management and data sharing capabilities, availability of GUI or programmatic access and implementation details. The success of the Web is largely due to its highly decentralized, yet sufficiently interoperable model for information access. The expected future convergence between cheminformatics and bioinformatics databases provides new challenges toward management and analysis of large data sets. The web tools in predictive toxicology will likely continue to evolve toward the right mix of flexibility, performance, scalability, interoperability, sets of unique features offered, friendly user interfaces, programmatic access for advanced users, platform independence, results reproducibility, curation and crowdsourcing utilities, collaborative sharing and secure access.

  8. Predictions of models for environmental radiological assessment

    International Nuclear Information System (INIS)

    Peres, Sueli da Silva; Lauria, Dejanira da Costa; Mahler, Claudio Fernando

    2011-01-01

    In the field of environmental impact assessment, models are used for estimating the source term, environmental dispersion and transfer of radionuclides, exposure pathways, radiation doses and the risk for human beings. Although it is recognized that specific local data are important to improve the quality of dose assessment results, in practice obtaining them can be very difficult and expensive. Sources of uncertainty are numerous, among which we can cite: the subjectivity of modelers, exposure scenarios and pathways, the codes used and general parameters. The various models available utilize different mathematical approaches with different complexities that can result in different predictions. Thus, for the same inputs different models can produce very different outputs. This paper presents briefly the main advances in the field of environmental radiological assessment that aim to improve the reliability of the models used in the assessment of environmental radiological impact. A model intercomparison exercise supplied incompatible results for 137Cs and 60Co, highlighting the need to develop reference methodologies for environmental radiological assessment that allow dose estimations to be confronted on a common comparison basis. The results of the intercomparison exercise are presented briefly. (author)

  9. A Predictive Maintenance Model for Railway Tracks

    DEFF Research Database (Denmark)

    Li, Rui; Wen, Min; Salling, Kim Bang

    2015-01-01

    presents a mathematical model based on Mixed Integer Programming (MIP) which is designed to optimize the predictive railway tamping activities for ballasted track for a time horizon of up to four years. The objective function is set up to minimize the actual costs for the tamping machine (measured by time......). Five technical and economic aspects are taken into account to schedule tamping: (1) track degradation of the standard deviation of the longitudinal level over time; (2) track geometrical alignment; (3) track quality thresholds based on the train speed limits; (4) the dependency of the track quality...

  10. Predicting critical temperatures of iron(II) spin crossover materials: Density functional theory plus U approach

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Yachao, E-mail: yczhang@nano.gznc.edu.cn [Guizhou Provincial Key Laboratory of Computational Nano-Material Science, Guizhou Normal College, Guiyang 550018, Guizhou (China)

    2014-12-07

    A first-principles study of critical temperatures (T{sub c}) of spin crossover (SCO) materials requires accurate description of the strongly correlated 3d electrons as well as much computational effort. This task is still a challenge for the widely used local density or generalized gradient approximations (LDA/GGA) and hybrid functionals. One remedy, termed density functional theory plus U (DFT+U) approach, introduces a Hubbard U term to deal with the localized electrons at marginal computational cost, while treats the delocalized electrons with LDA/GGA. Here, we employ the DFT+U approach to investigate the T{sub c} of a pair of iron(II) SCO molecular crystals (α and β phase), where identical constituent molecules are packed in different ways. We first calculate the adiabatic high spin-low spin energy splitting ΔE{sub HL} and molecular vibrational frequencies in both spin states, then obtain the temperature dependent enthalpy and entropy changes (ΔH and ΔS), and finally extract T{sub c} by exploiting the ΔH/T − T and ΔS − T relationships. The results are in agreement with experiment. Analysis of geometries and electronic structures shows that the local ligand field in the α phase is slightly weakened by the H-bondings involving the ligand atoms and the specific crystal packing style. We find that this effect is largely responsible for the difference in T{sub c} of the two phases. This study shows the applicability of the DFT+U approach for predicting T{sub c} of SCO materials, and provides a clear insight into the subtle influence of the crystal packing effects on SCO behavior.
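    The last step of the workflow described above, extracting T_c from the computed thermodynamics, reduces to finding the temperature at which the high-spin and low-spin phases have equal Gibbs free energy. With temperature-independent ΔH and ΔS (a simplification; the paper uses temperature-dependent values), this is simply T_c = ΔH/ΔS. The numbers below are typical orders of magnitude for Fe(II) SCO compounds, not values from this paper:

```python
# At the spin-crossover transition the Gibbs free energy difference
# between the high-spin (HS) and low-spin (LS) phases vanishes:
#   ΔG(Tc) = ΔH - Tc * ΔS = 0   =>   Tc = ΔH / ΔS
# Illustrative magnitudes only (assumed, not from this paper):
dH = 12.0e3   # J/mol, adiabatic HS-LS enthalpy difference
dS = 60.0     # J/(mol K), entropy gain (spin degeneracy + softer HS vibrations)
Tc = dH / dS  # transition temperature in kelvin
print(Tc)     # 200.0
```

This makes the sensitivity of T_c plain: a ~1 kJ/mol error in ΔH_HL shifts T_c by roughly 17 K at this ΔS, which is why an accurate treatment of the 3d electrons (here via DFT+U) matters.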

  11. Calculation of the 3D density model of the Earth

    Science.gov (United States)

    Piskarev, A.; Butsenko, V.; Poselov, V.; Savin, V.

    2009-04-01

    The study of the Earth's crust is part of an investigation aimed at extension of the Russian Federation continental shelf in the Sea of Okhotsk. The gathered data allow the area of the Sea of Okhotsk located outside the exclusive economic zone of the Russian Federation to be considered as a natural continuation of Russian territory. The Sea of Okhotsk is an Epi-Mesozoic platform with a Pre-Cenozoic heterogeneous folded basement of polycyclic development and a sediment cover mainly composed of Paleocene - Neocene - Quaternary deposits. Results of processing and complex interpretation of seismic, gravity, and aeromagnetic data along profile 2-DV-M, as well as analysis of available geological and geophysical information on the Sea of Okhotsk region, allowed calculation of an Earth crust model. Four layers stand out (bottom-up) in the structure of the Earth's crust: granulite-basic (density 2.90 g/cm3), granite-gneiss (density limits 2.60-2.76 g/cm3), volcanogenic-sedimentary (2.45 g/cm3) and sedimentary (density 2.10 g/cm3). The last is absent on the continent; it is observed only in the water area. The density of the upper mantle is taken as 3.30 g/cm3. The observed gravity anomalies are mostly related to the surface relief of the above-mentioned layers or to density variations of the granite-metamorphic basement, so outlining of basement blocks of different constitution preceded the modeling. This operation was executed after double Fourier spectrum analysis of the gravity and magnetic anomalies and subsequent compilation of synthetic anomaly maps related to the basement density and magnetic heterogeneity. According to bathymetry data, the Sea of Okhotsk can be subdivided into three mega-blocks. Taking into consideration that the central Sea of Okhotsk area is aseismic, i.e. isostatically compensated, it is evident that the Earth crust structure of these three blocks differs. The South-Okhotsk depression is characterized by sea depths of 3200-3300 m.
Moho surface in this area is at

  12. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution of partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.

  13. Effective modelling for predictive analytics in data science ...

    African Journals Online (AJOL)

    Effective modelling for predictive analytics in data science. ... the near-absence of empirical or factual predictive analytics in the mainstream research going on ... Keywords: Predictive Analytics, Big Data, Business Intelligence, Project Planning.

  14. Long-term orbit prediction for Tiangong-1 spacecraft using the mean atmosphere model

    Science.gov (United States)

    Tang, Jingshi; Liu, Lin; Cheng, Haowen; Hu, Songjie; Duan, Jianfeng

    2015-03-01

    China is planning to complete its first space station by 2020. For long-term management and maintenance, the orbit of the space station needs to be predicted for a long period of time. Since the space station is expected to work in a low-Earth orbit, the error in the a priori atmosphere model contributes significantly to the rapid growth of the predicted orbit error. When the orbit is predicted for 20 days, the error in the a priori atmosphere model, if not properly corrected, could induce a semi-major axis error of up to a few kilometers and an overall position error of several thousand kilometers, respectively. In this work, we use a mean atmosphere model averaged from NRLMSISE00. The a priori reference mean density can be corrected during orbit determination. For the long-term orbit prediction, we use a sufficiently long period of observations and obtain a series of diurnal mean densities. This series contains the recent variation of the atmosphere density and can be analyzed for various periodic components. After being properly fitted, the mean density can be predicted and then applied in the orbit prediction. Here we carry out the test with China's Tiangong-1 spacecraft at an altitude of about 340 km, and we show that this method is simple and flexible. The densities predicted with this approach can serve in long-term orbit prediction. In several 20-day prediction tests, most predicted orbits show semi-major axis errors better than 700 m and overall position errors better than 400 km.
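    The density-forecasting step can be sketched as a harmonic least-squares fit to a series of diurnal mean densities, followed by extrapolation. The data below are synthetic, and the single ~27-day periodic component (roughly the solar rotation period) and all amplitudes are assumptions for illustration, not the paper's fitted values:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic stand-in for a series of diurnal mean densities recovered
# from orbit determination (arbitrary units, invented variation).
t = np.arange(120.0)                                # days
rho = 1.0 + 0.3 * np.sin(2 * np.pi * t / 27.0) + rng.normal(0, 0.02, t.size)

# Fit mean level plus one known periodic component by linear least
# squares, then extrapolate ("predict") the mean density 20 days ahead.
P = 27.0
def design(tt):
    return np.column_stack([np.ones_like(tt),
                            np.sin(2 * np.pi * tt / P),
                            np.cos(2 * np.pi * tt / P)])

coef, *_ = np.linalg.lstsq(design(t), rho, rcond=None)

t_fut = np.arange(120.0, 140.0)     # the 20-day prediction window
rho_pred = design(t_fut) @ coef
```

The predicted mean densities would then feed the drag model of the orbit propagator over the 20-day arc; in practice more periodic components (and their periods) would be identified from the fitted series before extrapolating.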

  15. Long-term orbit prediction for China's Tiangong-1 spacecraft based on mean atmosphere model

    Science.gov (United States)

    Tang, Jingshi; Liu, Lin; Miao, Manqian

    Tiangong-1 is China's test module for a future space station. It went through three successful rendezvous and dockings with Shenzhou spacecraft from 2011 to 2013. For long-term management and maintenance, the orbit sometimes needs to be predicted for a long period of time. As Tiangong-1 works in a low-Earth orbit with an altitude of about 300-400 km, the error in the a priori atmosphere model contributes significantly to the rapid growth of the predicted orbit error. When the orbit is predicted for 10-20 days, the error in the a priori atmosphere model, if not properly corrected, could induce a semi-major axis error and an overall position error of up to a few kilometers and several thousand kilometers, respectively. In this work, we use a mean atmosphere model averaged from NRLMSISE00. The a priori reference mean density can be corrected during precise orbit determination (POD). For applications in long-term orbit prediction, the observations are first accumulated. With a sufficiently long period of observations, we are able to obtain a series of diurnal mean densities. This series captures the recent variation of the atmosphere density and can be analyzed for various periodic components. After being properly fitted, the mean density can be predicted and then applied in the orbit prediction. We show that the densities predicted with this approach can serve to increase the accuracy of the predicted orbit. In several 20-day prediction tests, most predicted orbits show semi-major axis errors better than 700 m and overall position errors better than 600 km.

  16. Density Functional Theory Modeling of Ferrihydrite Nanoparticle Adsorption Behavior

    Science.gov (United States)

    Kubicki, J.

    2016-12-01

    Ferrihydrite is a critical substrate for the adsorption of oxyanion species in the environment [1]. The nanoparticulate nature of ferrihydrite is inherent to its formation, and hence it has been called a "nano-mineral" [2]. The nano-scale size and unusual composition of ferrihydrite have made structural determination of this phase problematic. Michel et al. [3] have proposed an atomic structure for ferrihydrite, but this model has been controversial [4,5]. Recent work has shown that the Michel et al. [3] model structure may be reasonably accurate despite some deficiencies [6-8]. An alternative model has been proposed by Manceau [9]. This work utilizes density functional theory (DFT) calculations to model the structure of ferrihydrite nanoparticles based on both the Michel et al. [3] model as refined in Hiemstra [8] and the modified akdalaite model of Manceau [9]. Adsorption energies of carbonate, phosphate, sulfate, chromate, arsenite and arsenate are calculated. Periodic projector-augmented planewave calculations were performed with the Vienna Ab-initio Simulation Package (VASP) [10] on an approximately 1.7 nm diameter Michel nanoparticle (Fe38O112H110) and on a 2 nm Manceau nanoparticle (Fe38O95H76). After energy minimization of the surface H and O atoms, the models will be used to assess the possible configurations of adsorbed oxyanions on the model nanoparticles. References: Brown G.E. Jr. and Calas G. (2012) Geochemical Perspectives, 1, 483-742; Hochella M.F. and Madden A.S. (2005) Elements, 1, 199-203; Michel F.M., Ehm L., Antao S.M., Lee P.L., Chupas P.J., Liu G., Strongin D.R., Schoonen M.A.A., Phillips B.L., and Parise J.B. (2007) Science, 316, 1726-1729; Rancourt D.G. and Meunier J.F. (2008) American Mineralogist, 93, 1412-1417; Manceau A. (2011) American Mineralogist, 96, 521-533; Maillot F., Morin G., Wang Y., Bonnin D., Ildefonse P., Chaneac C., and Calas G. (2011) Geochimica et Cosmochimica Acta, 75, 2708-2720; Pinney N., Kubicki J.D., Middlemiss D.S., Grey C.P., and Morgan D

  17. Calibration plots for risk prediction models in the presence of competing risks

    DEFF Research Database (Denmark)

    Gerds, Thomas A; Andersen, Per K; Kattan, Michael W

    2014-01-01

    A predicted risk of 17% can be called reliable if it can be expected that the event will occur to about 17 of 100 patients who all received a predicted risk of 17%. Statistical models can predict the absolute risk of an event such as cardiovascular death in the presence of competing risks...... prediction model is well calibrated. The first is lack of independent validation data, the second is right censoring, and the third is that when the risk scale is continuous, the estimation problem is as difficult as density estimation. To deal with these problems, we propose to estimate calibration curves...
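    The binning idea behind a calibration plot, echoing the 17% example above, can be sketched for the simple uncensored case. The data below are simulated so that predicted risks are perfectly calibrated by construction; right censoring and competing risks, the actual subject of the paper, are deliberately ignored in this toy:

```python
import numpy as np

rng = np.random.default_rng(3)
# Simulated cohort: each patient receives a predicted risk, and the event
# occurs with exactly that probability (perfect calibration by design).
n = 20000
pred_risk = rng.uniform(0.05, 0.95, n)
event = rng.random(n) < pred_risk

# Group patients into 10 risk bins and compare the mean predicted risk
# with the observed event frequency in each bin; points on the diagonal
# of (mean_pred, obs_freq) indicate a well-calibrated model.
bins = np.linspace(0.0, 1.0, 11)
which = np.digitize(pred_risk, bins) - 1
mean_pred = np.array([pred_risk[which == b].mean() for b in range(10)])
obs_freq = np.array([event[which == b].mean() for b in range(10)])
```

With censored survival data the naive event frequency per bin is biased, which is precisely why the paper replaces it with estimators that account for censoring and competing risks.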

  18. Conifer density within lake catchments predicts fish mercury concentrations in remote subalpine lakes

    Science.gov (United States)

    Eagles-Smith, Collin A.; Herring, Garth; Johnson, Branden L.; Graw, Rick

    2016-01-01

    Remote high-elevation lakes represent unique environments for evaluating the bioaccumulation of atmospherically deposited mercury through freshwater food webs, as well as for evaluating the relative importance of mercury loading versus landscape influences on mercury bioaccumulation. The increase in mercury deposition to these systems over the past century, coupled with their limited exposure to direct anthropogenic disturbance, makes them useful indicators for estimating how changes in mercury emissions may propagate to changes in Hg bioaccumulation and ecological risk. We evaluated mercury concentrations in resident fish from 28 high-elevation, sub-alpine lakes in the Pacific Northwest region of the United States. Fish total mercury (THg) concentrations ranged from 4 to 438 ng/g wet weight, with a geometric mean concentration (±standard error) of 43 ± 2 ng/g ww. Fish THg concentrations were negatively correlated with relative condition factor, indicating that faster-growing fish that are in better condition have lower THg concentrations. Across the 28 study lakes, mean THg concentrations of resident salmonid fishes varied as much as 18-fold among lakes. We used a hierarchical statistical approach to evaluate the relative importance of physiological, limnological, and catchment drivers of fish Hg concentrations. Our top statistical model explained 87% of the variability in fish THg concentrations among lakes with four key landscape and limnological variables: catchment conifer density (basal area of conifers within a lake's catchment), lake surface area, aqueous dissolved sulfate, and dissolved organic carbon. Conifer density within a lake's catchment was the most important variable explaining fish THg concentrations across lakes, with THg concentrations differing by more than 400 percent across the forest density spectrum. These results illustrate the importance of landscape characteristics in controlling mercury bioaccumulation in fish.

  19. A density model based on the Modified Quasichemical Model and applied to the (NaCl + KCl + ZnCl2) liquid

    International Nuclear Information System (INIS)

    Ouzilleau, Philippe; Robelin, Christian; Chartrand, Patrice

    2012-01-01

    Highlights: ► A model for the density of multicomponent inorganic liquids. ► The density model is based on the Modified Quasichemical Model. ► Application to the (NaCl + KCl + ZnCl2) ternary liquid. ► A Kohler–Toop-like asymmetric interpolation method was used. - Abstract: A theoretical model for the density of multicomponent inorganic liquids based on the Modified Quasichemical Model has been presented previously. By introducing into the Gibbs free energy of the liquid phase temperature-dependent molar volume expressions for the pure components and pressure-dependent excess parameters for the binary (and sometimes higher-order) interactions, it is possible to reproduce, and eventually predict, the molar volume and the density of the multicomponent liquid phase using standard interpolation methods. In the present article, this density model is applied to the (NaCl + KCl + ZnCl2) ternary liquid and a Kohler–Toop-like asymmetric interpolation method is used. All available density data for the (NaCl + KCl + ZnCl2) liquid were collected and critically evaluated, and optimized pressure-dependent model parameters have been found. This new volumetric model can be used with Gibbs free energy minimization software to calculate the molar volume and the density of (NaCl + KCl + ZnCl2) ternary melts.

  20. Dispersion corrected hartree-fock and density functional theory for organic crystal structure prediction.

    Science.gov (United States)

    Brandenburg, Jan Gerit; Grimme, Stefan

    2014-01-01

    We present and evaluate dispersion-corrected Hartree-Fock (HF) and Density Functional Theory (DFT) based quantum chemical methods for organic crystal structure prediction. The necessity of correcting for missing long-range electron correlation, also known as the van der Waals (vdW) interaction, is pointed out, and some methodological issues such as the inclusion of three-body dispersion terms are discussed. One of the most efficient and widely used methods is the semi-classical dispersion correction D3. Its applicability for the calculation of sublimation energies is investigated for the benchmark set X23, consisting of 23 small organic crystals. For PBE-D3 the mean absolute deviation (MAD) is below the estimated experimental uncertainty of 1.3 kcal/mol. For two larger π-systems, the equilibrium crystal geometry is investigated and very good agreement with experimental data is found. Since these calculations are carried out with huge plane-wave basis sets, they are rather time consuming and routinely applicable only to systems with fewer than about 200 atoms in the unit cell. Aiming at crystal structure prediction, which involves screening of many structures, a pre-sorting with faster methods is mandatory. Small, atom-centered basis sets can speed up the computation significantly but suffer greatly from basis set errors. We present the recently developed geometrical counterpoise correction gCP. It is a fast semi-empirical method which corrects for most of the inter- and intramolecular basis set superposition error. For HF calculations with nearly minimal basis sets, we additionally correct for short-range basis incompleteness. We combine all three terms in the scheme denoted HF-3c, which performs very well for the X23 sublimation energies with an MAD of only 1.5 kcal/mol, close to the huge-basis-set DFT-D3 result.

  1. Stratified flows with variable density: mathematical modelling and numerical challenges.

    Science.gov (United States)

    Murillo, Javier; Navas-Montilla, Adrian

    2017-04-01

    Stratified flows appear in a wide variety of fundamental problems in hydrological and geophysical sciences. They range from hyperconcentrated floods carrying sediment that cause collapse, landslides and debris flows, to suspended material in turbidity currents where turbulence is a key process. In stratified flows, variable horizontal density is also present. Depending on the case, density varies according to the volumetric concentration of different components or species that can represent transported or suspended materials or soluble substances. Multilayer approaches based on the shallow water equations provide suitable models but are not free from difficulties when moving to the numerical resolution of the governing equations. Considering the variety of temporal and spatial scales, transfer of mass and energy among layers may strongly differ from one case to another. As a consequence, in order to provide accurate solutions, very high order methods of proved quality are demanded. Under these complex scenarios it is necessary to verify that the numerical solution not only provides the expected order of accuracy but also converges to the physically based solution, which is not an easy task. To this purpose, this work focuses on the use of energy-balanced augmented solvers, in particular the Augmented Roe Flux ADER scheme. References: J. Murillo, P. García-Navarro, Wave Riemann description of friction terms in unsteady shallow flows: Application to water and mud/debris floods. J. Comput. Phys. 231 (2012) 1963-2001. J. Murillo, B. Latorre, P. García-Navarro. A Riemann solver for unsteady computation of 2D shallow flows with variable density. J. Comput. Phys. 231 (2012) 4775-4807. A. Navas-Montilla, J. Murillo, Energy balanced numerical schemes with very high order. The Augmented Roe Flux ADER scheme. Application to the shallow water equations, J. Comput. Phys. 290 (2015) 188-218. A. Navas-Montilla, J. Murillo, Asymptotically and exactly energy balanced augmented flux

  2. Mathematical models for indoor radon prediction

    International Nuclear Information System (INIS)

    Malanca, A.; Pessina, V.; Dallara, G.

    1995-01-01

    It is known that the indoor radon (Rn) concentration can be predicted by means of mathematical models. The simplest model relies on two variables only: the Rn source strength and the air exchange rate. In the Lawrence Berkeley Laboratory (LBL) model several environmental parameters are combined into a complex equation; in addition, a correlation between the ventilation rate and the Rn entry rate from the soil is assumed. The measurements were carried out using activated carbon canisters. Seventy-five measurements of Rn concentrations were made inside two rooms on the second floor of a building block. One of the rooms had a single-glazed window whereas the other had a double-pane window. During three different experimental protocols, the mean Rn concentration was always higher in the room with the double-glazed window. That behavior can be accounted for by the simplest model. A further set of 450 Rn measurements was collected inside a ground-floor room with a grounding well in it. This trend may be accounted for by the LBL model.
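The two-variable model mentioned above reduces, at steady state, to the entry rate divided by the product of air-exchange rate and room volume; a minimal sketch (function and parameter names are illustrative, and radioactive decay is neglected since ventilation normally dominates radon's decay constant):

```python
def indoor_radon_steady_state(source_bq_per_h, volume_m3, ach_per_h):
    """Steady-state indoor radon concentration (Bq/m^3) for the simplest
    two-variable model: Rn entry rate / (air exchange rate x volume).
    Illustrative sketch; decay is neglected against ventilation."""
    return source_bq_per_h / (volume_m3 * ach_per_h)

# Halving the air exchange rate (e.g. a tighter, double-glazed room)
# doubles the predicted concentration, consistent with the observation above.
```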

  3. Towards predictive models for transitionally rough surfaces

    Science.gov (United States)

    Abderrahaman-Elena, Nabil; Garcia-Mayoral, Ricardo

    2017-11-01

    We analyze and model the previously presented decomposition for flow variables in DNS of turbulence over transitionally rough surfaces. The flow is decomposed into two contributions: one produced by the overlying turbulence, which has no footprint of the surface texture, and one induced by the roughness, which is essentially the time-averaged flow around the surface obstacles, but modulated in amplitude by the first component. The roughness-induced component closely resembles the laminar steady flow around the roughness elements at the same non-dimensional roughness size. For small - yet transitionally rough - textures, the roughness-free component is essentially the same as over a smooth wall. Based on these findings, we propose predictive models for the onset of the transitionally rough regime. Project supported by the Engineering and Physical Sciences Research Council (EPSRC).

  4. Predictability of bone density at posterior mandibular implant sites using cone-beam computed tomography intensity values

    OpenAIRE

    Alkhader, Mustafa; Hudieb, Malik; Khader, Yousef

    2017-01-01

    Objective: The aim of this study was to investigate the predictability of bone density at posterior mandibular implant sites using cone-beam computed tomography (CBCT) intensity values. Materials and Methods: CBCT cross-sectional images for 436 posterior mandibular implant sites were selected for the study. Using Invivo software (Anatomage, San Jose, California, USA), two observers classified the bone density into three categories: low, intermediate, and high, and CBCT intensity values were g...

  5. A unified model of density limit in fusion plasmas

    Science.gov (United States)

    Zanca, P.; Sattin, F.; Escande, D. F.; Pucella, G.; Tudisco, O.

    2017-05-01

    In this work we identify by analytical and numerical means the conditions for the existence of a magnetic and thermal equilibrium of a cylindrical plasma, in the presence of Ohmic and/or additional power sources, heat conduction and radiation losses by light impurities. The boundary defining the space of solutions with a realistic temperature profile and small edge value mathematically takes the form of a density limit (DL). Compared to previous similar analyses, the present work benefits from dealing with a more accurate set of equations. This refinement is elementary, but decisive, since it discloses a tenuous dependence of the DL on the thermal transport for configurations with an applied electric field. Thanks to this property, the DL scaling law is recovered almost identical for two largely different devices such as the ohmic tokamak and the reversed field pinch. In particular, they have in common a Greenwald scaling, linearly depending on the plasma current, quantitatively consistent with experimental results. In the tokamak case the DL dependence on any additional heating approximately follows a 0.5 power law, which is compatible with L-mode experiments. For a purely externally heated configuration, taken as a cylindrical approximation of the stellarator, the DL dependence on transport is found to be stronger. By adopting suitable transport models, the DL takes on a Sudo-like form, in fair agreement with LHD experiments. Overall, the model provides a good zeroth-order quantitative description of the DL, applicable to widely different configurations.

  6. Coronary Artery Calcium Volume and Density: Potential Interactions and Overall Predictive Value: The Multi-Ethnic Study of Atherosclerosis.

    Science.gov (United States)

    Criqui, Michael H; Knox, Jessica B; Denenberg, Julie O; Forbang, Nketi I; McClelland, Robyn L; Novotny, Thomas E; Sandfort, Veit; Waalen, Jill; Blaha, Michael J; Allison, Matthew A

    2017-08-01

    This study sought to determine the possibility of interactions between coronary artery calcium (CAC) volume or CAC density with each other, and with age, sex, ethnicity, the new atherosclerotic cardiovascular disease (ASCVD) risk score, diabetes status, and renal function by estimated glomerular filtration rate, and, using differing CAC scores, to determine the improvement over the ASCVD risk score in risk prediction and reclassification. In MESA (Multi-Ethnic Study of Atherosclerosis), CAC volume was positively and CAC density inversely associated with cardiovascular disease (CVD) events. A total of 3,398 MESA participants free of clinical CVD but with prevalent CAC at baseline were followed for incident CVD events. During a median 11.0 years of follow-up, there were 390 CVD events, 264 of which were coronary heart disease (CHD). With each SD increase of ln CAC volume (1.62), risk of CHD increased 73%. In multivariable Cox models, significant interactions were present for CAC volume with age and ASCVD risk score for both CHD and CVD, and CAC density with ASCVD risk score for CVD. Hazard ratios were generally stronger in the lower risk groups. Receiver-operating characteristic area under the curve and Net Reclassification Index analyses showed better prediction by CAC volume than by the Agatston score, and the addition of CAC density to CAC volume further significantly improved prediction. The inverse association between CAC density and incident CHD and CVD events is robust across strata of other CVD risk factors. Added to the ASCVD risk score, CAC volume and density provided the strongest prediction for CHD and CVD events, and the highest correct reclassification.

  7. Resource-estimation models and predicted discovery

    International Nuclear Information System (INIS)

    Hill, G.W.

    1982-01-01

    Resources have been estimated by predictive extrapolation from past discovery experience, by analogy with better explored regions, or by inference from evidence of depletion of targets for exploration. Changes in technology and new insights into geological mechanisms have occurred sufficiently often in the long run to form part of the pattern of mature discovery experience. The criterion that a meaningful resource estimate needs an objective measure of its precision or degree of uncertainty excludes 'estimates' based solely on expert opinion. This is illustrated by development of error measures for several persuasive models of discovery and production of oil and gas in the USA, both annually and in terms of increasing exploration effort. Appropriate generalizations of the models resolve many points of controversy. This is illustrated using two USA data sets describing discovery of oil and of U3O8; the latter set highlights an inadequacy of available official data. Review of the oil-discovery data set provides a warrant for adjusting the time-series prediction to a higher resource figure for USA petroleum. (author)

  8. Triglycerides to High-Density Lipoprotein Cholesterol Ratio Can Predict Impaired Glucose Tolerance in Young Women with Polycystic Ovary Syndrome.

    Science.gov (United States)

    Song, Do Kyeong; Lee, Hyejin; Sung, Yeon Ah; Oh, Jee Young

    2016-11-01

    The triglycerides to high-density lipoprotein cholesterol (TG/HDL-C) ratio could be related to insulin resistance (IR). We previously reported that Korean women with polycystic ovary syndrome (PCOS) had a high prevalence of impaired glucose tolerance (IGT). We aimed to determine the cutoff value of the TG/HDL-C ratio for predicting IR and to examine whether the TG/HDL-C ratio is useful for identifying individuals at risk of IGT in young Korean women with PCOS. We recruited 450 women with PCOS (24±5 yrs) and performed a 75-g oral glucose tolerance test (OGTT). IR was assessed by a homeostasis model assessment index above the 95th percentile of regular-cycling women who served as the controls (n=450, 24±4 yrs). The cutoff value of the TG/HDL-C ratio for predicting IR was 2.5 in women with PCOS. Among the women with PCOS who had normal fasting glucose (NFG), the prevalence of IGT was significantly higher in those with a high TG/HDL-C ratio than in those with a low TG/HDL-C ratio (15.6% vs. 5.6%). Women with PCOS whose TG/HDL-C ratio exceeds 2.5 are therefore recommended to be administered an OGTT to detect IGT even if they have NFG.
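The screening rule described above is just a ratio compared against a fixed cutoff; a minimal sketch, with illustrative function names and mg/dL units (the 2.5 cutoff is the one reported in the abstract):

```python
def tg_hdl_ratio(tg_mg_dl, hdl_mg_dl):
    """Triglycerides to HDL-cholesterol ratio, both in mg/dL."""
    return tg_mg_dl / hdl_mg_dl

def flag_for_ogtt(tg_mg_dl, hdl_mg_dl, cutoff=2.5):
    """Flag a PCOS patient for an OGTT when TG/HDL-C exceeds the
    insulin-resistance cutoff of 2.5 reported in the abstract.
    Function names are illustrative, not from the paper."""
    return tg_hdl_ratio(tg_mg_dl, hdl_mg_dl) > cutoff

# e.g. TG 150 mg/dL with HDL 50 mg/dL gives a ratio of 3.0 -> flagged
```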

  9. Prediction of pipeline corrosion rate based on grey Markov models

    International Nuclear Information System (INIS)

    Chen Yonghong; Zhang Dafa; Peng Guichu; Wang Yuemin

    2009-01-01

    Based on a model combining a grey model with a Markov model, the prediction of the corrosion rate of nuclear power pipelines was studied. The grey model was improved to obtain an optimized unbiased grey model. This new model was used to predict the tendency of the corrosion rate, and the Markov model was used to predict the residual errors. To improve the prediction precision, a rolling operation method was used in these prediction processes. The results indicate that the improvement to the grey model is effective, that the prediction precision of the new model combining the optimized unbiased grey model and the Markov model is better, and that the rolling operation method may improve the prediction precision further. (authors)
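The grey half of such a combined model can be sketched as a standard GM(1,1) forecast. This is a generic implementation, not the paper's optimized unbiased variant, and the Markov residual correction and rolling operation are omitted:

```python
from math import exp

def gm11_forecast(x0, steps=1):
    """GM(1,1) grey model: fit the whitened equation dx1/dt + a*x1 = b
    to the accumulated series x1, then forecast the next `steps` values
    of the original series x0."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]               # accumulated series
    z1 = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]  # background values
    y = x0[1:]
    m = n - 1
    # least squares for y[k] = -a*z1[k] + b via the 2x2 normal equations
    szz = sum(z * z for z in z1)
    sz = sum(z1)
    sy = sum(y)
    szy = sum(z * v for z, v in zip(z1, y))
    det = szz * m - sz * sz
    a = (-m * szy + sz * sy) / det
    b = (szz * sy - sz * szy) / det
    c = b / a
    # time-response function, then de-accumulate to recover x0 forecasts
    x1_hat = lambda k: (x0[0] - c) * exp(-a * k) + c
    return [x1_hat(n + s) - x1_hat(n + s - 1) for s in range(steps)]
```

For a roughly exponential corrosion-rate series, the one-step-ahead forecast tracks the underlying growth closely.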

  10. AUC-based biomarker ensemble with an application on gene scores predicting low bone mineral density.

    Science.gov (United States)

    Zhao, X G; Dai, W; Li, Y; Tian, L

    2011-11-01

    The area under the receiver operating characteristic (ROC) curve (AUC), long regarded as a 'golden' measure for the predictiveness of a continuous score, has propelled the need to develop AUC-based predictors. However, AUC-based ensemble methods are rather scant, largely due to the fact that the associated objective function is neither continuous nor concave. Indeed, there is no reliable numerical algorithm for identifying the optimal combination of a set of biomarkers to maximize the AUC, especially when the number of biomarkers is large. We have proposed a novel AUC-based statistical ensemble method for combining multiple biomarkers to differentiate a binary response of interest. Specifically, we propose to replace the non-continuous and non-convex AUC objective function by a convex surrogate loss function, whose minimizer can be efficiently identified. Within the established framework, the lasso and other regularization techniques enable feature selection. Extensive simulations have demonstrated the superiority of the new methods over the existing methods. The proposal has been applied to a gene expression dataset to construct gene expression scores to differentiate elderly women with low bone mineral density (BMD) from those with normal BMD. The AUCs of the resulting scores in the independent test dataset have been satisfactory. Aiming directly at maximizing the AUC, the proposed AUC-based ensemble method provides an efficient means of generating a stable combination of multiple biomarkers, which is especially useful under high-dimensional settings. Supplementary data are available at Bioinformatics online.
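The core idea — replacing the discontinuous pairwise AUC objective with a convex logistic surrogate that a gradient method can minimize — can be sketched as follows. This is a generic illustration, not the authors' exact algorithm, and the lasso regularization is omitted:

```python
from math import exp

def fit_auc_surrogate(pos, neg, lr=0.5, iters=300):
    """Learn linear weights w by minimizing a smooth AUC surrogate:
    sum over (positive, negative) pairs of log(1 + exp(-(w.xp - w.xn))).
    Generic gradient-descent illustration."""
    d = len(pos[0])
    w = [0.0] * d
    for _ in range(iters):
        grad = [0.0] * d
        for xp in pos:
            for xn in neg:
                margin = sum(w[k] * (xp[k] - xn[k]) for k in range(d))
                coef = -1.0 / (1.0 + exp(margin))  # derivative of log(1+e^-m)
                for k in range(d):
                    grad[k] += coef * (xp[k] - xn[k])
        npairs = len(pos) * len(neg)
        w = [w[k] - lr * grad[k] / npairs for k in range(d)]
    return w

def empirical_auc(w, pos, neg):
    """Fraction of positive/negative pairs ranked correctly by score w.x
    (ties counted as half) -- the empirical AUC of the combined score."""
    score = lambda x: sum(wk * xk for wk, xk in zip(w, x))
    wins = sum(1.0 if score(xp) > score(xn) else 0.5 if score(xp) == score(xn)
               else 0.0 for xp in pos for xn in neg)
    return wins / (len(pos) * len(neg))
```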

  11. An Operational Model for the Prediction of Jet Blast

    Science.gov (United States)

    2012-01-09

    This paper presents an operational model for the prediction of jet blast. The model was developed based upon three modules including a jet exhaust model, jet centerline decay model and aircraft motion model. The final analysis was compared with d...

  12. Modelling of the reactive sputtering process with non-uniform discharge current density and different temperature conditions

    International Nuclear Information System (INIS)

    Vasina, P; Hytkova, T; Elias, M

    2009-01-01

    The majority of current models of reactive magnetron sputtering assume a uniform shape of the discharge current density and the same temperature near the target and the substrate. However, in a real experimental set-up, the presence of the magnetic field causes high density plasma to form in front of the cathode in the shape of a toroid. Consequently, the discharge current density is laterally non-uniform. In addition, the heating of the background gas by sputtered particles, usually referred to as gas rarefaction, plays an important role. This paper presents an extended model of reactive magnetron sputtering that assumes a non-uniform discharge current density and accommodates the gas rarefaction effect. It is devoted mainly to the study of the behaviour of the reactive sputtering rather than to the prediction of the coating properties. Outputs of this model are compared with those that assume a uniform discharge current density and a uniform temperature profile in the deposition chamber. Particular attention is paid to the modelling of the radial variation of the target composition near transitions from the metallic to the compound mode and vice versa. A study of the target utilization in the metallic and compound modes is performed for two different discharge current density profiles corresponding to typical two-pole and multipole magnet assemblies currently available on the market. Different shapes of the discharge current density were tested. Finally, hysteresis curves are plotted for various temperature conditions in the reactor.

  13. Data driven propulsion system weight prediction model

    Science.gov (United States)

    Gerth, Richard J.

    1994-10-01

    The objective of the research was to develop a method to predict the weight of paper engines, i.e., engines that are in the early stages of development. The impetus for the project was the Single Stage To Orbit (SSTO) project, where engineers need to evaluate alternative engine designs. Since the SSTO is a performance-driven project, the performance models for alternative designs were well understood. The next tradeoff is weight. Since it is known that engine weight varies with thrust levels, a model is required that allows discrimination between engines that produce the same thrust. Above all, the model had to be rooted in data, with assumptions that could be justified based on the data. The general approach was to collect data on as many existing engines as possible and build a statistical model of engine weight as a function of various component performance parameters. This was considered a reasonable level at which to begin the project because the data would be readily available, and it would be at the level of most paper engines, prior to detailed component design.

  14. Predictive modeling of emergency cesarean delivery.

    Directory of Open Access Journals (Sweden)

    Carlos Campillo-Artero

    Full Text Available To increase discriminatory accuracy (DA) for emergency cesarean sections (ECSs), we prospectively collected data on and studied all 6,157 births occurring in 2014 at four public hospitals located in three different autonomous communities of Spain. To identify risk factors (RFs) for ECS, we used likelihood ratios and logistic regression, fitted a classification tree (CTREE), and analyzed a random forest model (RFM). We used the areas under the receiver-operating-characteristic (ROC) curves (AUCs) to assess their DA. The magnitude of the LR+ for all putative individual RFs and of the ORs in the logistic regression models was low to moderate. Except for parity, all putative RFs were positively associated with ECS, including hospital fixed-effects and night-shift delivery. The DA of all logistic models ranged from 0.74 to 0.81. The most relevant RFs (pH, induction, and previous C-section) in the CTREEs showed the highest ORs in the logistic models. The DA of the RFM and its most relevant interaction terms was even higher (AUC = 0.94; 95% CI: 0.93-0.95). Putative fetal, maternal, and contextual RFs alone fail to achieve reasonable DA for ECS. It is the combination of these RFs and the interactions between them at each hospital that make it possible to improve the DA for the type of delivery and to tailor interventions through prediction to improve the appropriateness of ECS indications.

  15. Prediction of d^0 magnetism in self-interaction corrected density functional theory

    Science.gov (United States)

    Das Pemmaraju, Chaitanya

    2010-03-01

    Over the past couple of years, the phenomenon of ``d^0 magnetism'' has greatly intrigued the magnetism community [1]. Unlike conventional magnetic materials, ``d^0 magnets'' lack any magnetic ions with open d or f shells but, surprisingly, exhibit signatures of ferromagnetism, often with a Curie temperature exceeding 300 K. Current research in the field is geared towards trying to understand the mechanism underlying this observed ferromagnetism, which is difficult to explain within the conventional m-J paradigm [1]. The most widely studied class of d^0 materials are un-doped and light-element-doped wide gap oxides such as HfO2, MgO, ZnO and TiO2, all of which have been put forward as possible d^0 ferromagnets. General experimental trends suggest that the magnetism is a feature of highly defective samples, leading to the expectation that the phenomenon must be defect related. In particular, based on density functional theory (DFT) calculations, acceptor defects formed from the O-2p states in these oxides have been proposed as being responsible for the ferromagnetism [2,3]. However, predicting magnetism originating from 2p orbitals is a delicate problem, which depends on the subtle interplay between covalency and Hund's coupling. DFT calculations based on semi-local functionals such as the local spin-density approximation (LSDA) can lead to qualitative failures on several fronts. On one hand, the excessive delocalization of spin-polarized holes leads to half-metallic ground states and the expectation of room-temperature ferromagnetism. On the other hand, in some cases a magnetic ground state may not be predicted at all, as the Hund's coupling might be underestimated. Furthermore, polaronic distortions, which are often a feature of acceptor defects in oxides, are not predicted [4,5]. In this presentation, we argue that the self-interaction error (SIE) inherent to semi-local functionals is responsible for the failures of LSDA and demonstrate through various examples that beyond

  16. Model Predictive Control based on Finite Impulse Response Models

    DEFF Research Database (Denmark)

    Prasath, Guru; Jørgensen, John Bagterp

    2008-01-01

    We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations and related to the uncertainty of the impulse response coefficients. The simulations can be used to benchmark l2 MPC against FIR-based robust MPC as well as to estimate the maximum performance improvements by robust MPC.
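The FIR model structure underlying such a predictive controller is a convolution of the input with the impulse response coefficients; a minimal sketch (zero initial conditions assumed, illustrative only):

```python
def fir_predict(h, u):
    """Simulate a finite impulse response model:
    y[k] = sum_{i=1..n} h[i-1] * u[k-i], i.e. the output is a weighted
    sum of the n most recent past inputs. Zero initial conditions."""
    n = len(h)
    y = []
    for k in range(len(u)):
        y.append(sum(h[i] * u[k - 1 - i] for i in range(n) if k - 1 - i >= 0))
    return y

# A unit pulse on the input simply replays the impulse response
# coefficients, delayed by one sample.
```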

  17. Combined effect of pulse density and grid cell size on predicting and mapping aboveground carbon in fast-growing Eucalyptus forest plantation using airborne LiDAR data.

    Science.gov (United States)

    Silva, Carlos Alberto; Hudak, Andrew Thomas; Klauberg, Carine; Vierling, Lee Alexandre; Gonzalez-Benecke, Carlos; de Padua Chaves Carvalho, Samuel; Rodriguez, Luiz Carlos Estraviz; Cardil, Adrián

    2017-12-01

    LiDAR remote sensing is a rapidly evolving technology for quantifying a variety of forest attributes, including aboveground carbon (AGC). Pulse density influences the acquisition cost of LiDAR, and grid cell size influences AGC prediction using plot-based methods; however, little work has evaluated the effects of LiDAR pulse density and cell size for predicting and mapping AGC in fast-growing Eucalyptus forest plantations. The aim of this study was to evaluate the effect of LiDAR pulse density and grid cell size on AGC prediction accuracy at plot and stand levels using airborne LiDAR and field data. We used the Random Forest (RF) machine learning algorithm to model AGC using LiDAR-derived metrics from LiDAR collections of 5 and 10 pulses m⁻² (RF5 and RF10) and grid cell sizes of 5, 10, 15 and 20 m. The results show that a LiDAR pulse density of 5 pulses m⁻² provides metrics with similar prediction accuracy for AGC as a dataset with 10 pulses m⁻² in these fast-growing plantations. Relative root mean square errors (RMSEs) for RF5 and RF10 were 6.14 and 6.01%, respectively. Equivalence tests showed that the predicted AGC from the training and validation models were equivalent to the observed AGC measurements. Grid cell sizes for mapping ranging from 5 to 20 m also did not significantly affect the prediction accuracy of AGC at stand level in this system. LiDAR measurements can be used to predict and map AGC across variable-age Eucalyptus plantations with adequate levels of precision and accuracy using 5 pulses m⁻² and a grid cell size of 5 m. The promising results for AGC modeling in this study will allow for greater confidence in comparing AGC estimates with varying LiDAR sampling densities for Eucalyptus plantations and assist in decision making towards more cost-effective and efficient forest inventory.
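The relative RMSE figures quoted (6.14% and 6.01%) express RMSE as a percentage of the mean observed AGC; a small helper with hypothetical numbers, not the study's data:

```python
from math import sqrt

def relative_rmse(observed, predicted):
    """Relative RMSE (%) = RMSE / mean(observed) * 100, a scale-free
    accuracy measure commonly used to compare AGC models.
    Illustrative helper; inputs are hypothetical."""
    n = len(observed)
    rmse = sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)
    return 100.0 * rmse / (sum(observed) / n)
```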

  18. Improving the description of collective effects within the combinatorial model of nuclear level densities

    International Nuclear Information System (INIS)

    Hilaire, S.; Girod, M.; Goriely, S.

    2011-01-01

    The combinatorial model of nuclear level densities has now reached a level of accuracy comparable to that of the best global analytical expressions, without suffering from the limits imposed by the statistical hypothesis on which the latter expressions rely. In particular, it naturally provides non-Gaussian spin distributions as well as non-equipartition of parities, which are known to have a significant impact on cross section predictions at low energies. Our first global model, developed in Ref. 1, suffered from deficiencies, in particular in the way the collective effects - both vibrational and rotational - were treated. We have recently improved this treatment by simultaneously using the single-particle levels and collective properties predicted by a newly derived Gogny interaction, thereby enabling a microscopic description of energy-dependent shell, pairing and deformation effects. In addition, for deformed nuclei, the transition to sphericity is coherently taken into account on the basis of a temperature-dependent Hartree-Fock calculation which provides at each temperature the structure properties needed to build the level densities. This new method is described and shown to give promising preliminary results with respect to available experimental data. (authors)

  19. Methodology for Designing Models Predicting Success of Infertility Treatment

    OpenAIRE

    Alireza Zarinara; Mohammad Mahdi Akhondi; Hojjat Zeraati; Koorsh Kamali; Kazem Mohammad

    2016-01-01

    Abstract Background: Prediction models for infertility treatment success have been presented over the past 25 years. There are scientific principles for designing and applying prediction models, which are also used to predict the success rate of infertility treatment. The purpose of this study is to provide basic principles for designing models to predict infertility treatment success. Materials and Methods: In this paper, the principles for developing predictive models are explained and...

  20. Model-compared RGU-photometric space-densities in the direction to M 5 (l = 4°, b = +47°)

    International Nuclear Information System (INIS)

    Fenkart, R.; Karaali, S.

    1990-01-01

    In the process of rounding off the results homogeneously obtained within the model-comparison phase of the Basle Halo Program, space densities of both photometric populations, I and II, have been derived, for late-type giants and for main-sequence stars with +3 m m , in a field close to the globular cluster M 5, according to the RGU-photometric Basle method. Compared to the density gradients predicted by the standard set of five multi-component models, used since the beginning of this phase, they confirm the existence of a Galactic Thick Disk component, in this direction, too

  1. Inverse modeling with RZWQM2 to predict water quality

    Science.gov (United States)

    Nolan, Bernard T.; Malone, Robert W.; Ma, Liwang; Green, Christopher T.; Fienen, Michael N.; Jaynes, Dan B.

    2011-01-01

    reflect the total information provided by the observations for a parameter, indicated that most of the RZWQM2 parameters at the California study site (CA) and Iowa study site (IA) could be reliably estimated by regression. Correlations obtained in the CA case indicated that all model parameters could be uniquely estimated by inverse modeling. Although water content at field capacity was highly correlated with bulk density (−0.94), the correlation is less than the threshold for nonuniqueness (0.95, absolute value basis). Additionally, we used truncated singular value decomposition (SVD) at CA to mitigate potential problems with highly correlated and insensitive parameters. Singular value decomposition estimates linear combinations (eigenvectors) of the original process-model parameters. Parameter confidence intervals (CIs) at CA indicated that parameters were reliably estimated with the possible exception of an organic pool transfer coefficient (R45), which had a comparatively wide CI. However, the 95% confidence interval for R45 (0.03–0.35) is mostly within the range of values reported for this parameter. Predictive analysis at CA generated confidence intervals that were compared with independently measured annual water flux (groundwater recharge) and median nitrate concentration in a collocated monitoring well as part of model evaluation. Both the observed recharge (42.3 cm yr−1) and nitrate concentration (24.3 mg L−1) were within their respective 90% confidence intervals, indicating that overall model error was within acceptable limits.

  2. Predicting local dengue transmission in Guangzhou, China, through the influence of imported cases, mosquito density and climate variability.

    Directory of Open Access Journals (Sweden)

    Shaowei Sang

    Full Text Available Each year there are approximately 390 million dengue infections worldwide. Weather variables have a significant impact on the transmission of Dengue Fever (DF), a mosquito-borne viral disease. DF in mainland China is characterized as an imported disease. Hence it is necessary to explore the roles of imported cases, mosquito density and climate variability in dengue transmission in China. The aims of the study were to identify the relationship between dengue occurrence and possible risk factors and to develop a predictive model for dengue control and prevention purposes. Three traditional suburbs and one district with an international airport in Guangzhou city were selected as the study areas. Autocorrelation and cross-correlation analysis were used to perform univariate analysis to identify possible risk factors, with relevant lagged effects, associated with local dengue cases. Principal component analysis (PCA) was applied to extract principal components, and PCA scores were used to represent the original variables to reduce multi-collinearity. Combining the univariate analysis and prior knowledge, time-series Poisson regression analysis was conducted to quantify the relationship between weather variables, the Breteau Index, imported DF cases and local dengue transmission in Guangzhou, China. The goodness-of-fit of the constructed model was determined by pseudo-R2, the Akaike information criterion (AIC) and a residual test. There were a total of 707 notified local DF cases from March 2006 to December 2012, with a seasonal distribution from August to November. There were a total of 65 notified imported DF cases from 20 countries, with forty-six cases (70.8%) imported from Southeast Asia. The model showed that local DF cases were positively associated with mosquito density, imported cases, temperature, precipitation, vapour pressure and minimum relative humidity, whilst being negatively associated with air pressure, with different time lags. Imported DF cases and mosquito
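The lag-screening step — picking the time lag at which a candidate predictor correlates most strongly with case counts — can be sketched with a plain cross-correlation scan. This is illustrative only; the study's univariate analysis also used autocorrelation and likelihood-based methods:

```python
def lagged_corr(x, y, lag):
    """Pearson correlation between x[t-lag] and y[t]: the basic
    cross-correlation screen for choosing lagged predictors."""
    xs, ys = x[:len(x) - lag], y[lag:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    return sxy / (sxx * syy) ** 0.5

def best_lag(x, y, max_lag):
    """Lag in 0..max_lag with the strongest absolute correlation."""
    return max(range(max_lag + 1), key=lambda L: abs(lagged_corr(x, y, L)))
```

For a predictor that drives case counts two time steps later, the scan recovers lag 2.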

  3. Finite Unification: Theory, Models and Predictions

    CERN Document Server

    Heinemeyer, S; Zoupanos, G

    2011-01-01

    All-loop Finite Unified Theories (FUTs) are very interesting N=1 supersymmetric Grand Unified Theories (GUTs) realising an old field theory dream, and moreover have a remarkable predictive power due to the required reduction of couplings. The reduction of the dimensionless couplings in N=1 GUTs is achieved by searching for renormalization group invariant (RGI) relations among them holding beyond the unification scale. Finiteness results from the fact that there exist RGI relations among dimensional couplings that guarantee the vanishing of all beta-functions in certain N=1 GUTs even to all orders. Furthermore developments in the soft supersymmetry breaking sector of N=1 GUTs and FUTs lead to exact RGI relations, i.e. reduction of couplings, in this dimensionful sector of the theory, too. Based on the above theoretical framework phenomenologically consistent FUTs have been constructed. Here we review FUT models based on the SU(5) and SU(3)^3 gauge groups and their predictions. Of particular interest is the Hig...

  4. Modelling risk of tick exposure in southern Scandinavia using machine learning techniques, satellite imagery, and human population density maps

    DEFF Research Database (Denmark)

    Kjær, Lene Jung; Korslund, L.; Kjelland, V.

We surveyed 30 sites (forests and meadows) in each of Denmark, southern Norway and south-eastern Sweden. At each site we measured presence/absence of ticks, and used the data obtained along with environmental satellite images to run Boosted Regression Tree machine learning algorithms to predict overall spatial...... and Sweden), areas with high population densities tend to overlap with these zones. Machine learning techniques allow us to predict for larger areas without having to perform extensive sampling all over the region in question, and we were able to produce models and maps with high predictive value. The results...

  5. Revised predictive equations for salt intrusion modelling in estuaries

    NARCIS (Netherlands)

    Gisen, J.I.A.; Savenije, H.H.G.; Nijzink, R.C.

    2015-01-01

    For one-dimensional salt intrusion models to be predictive, we need predictive equations to link model parameters to observable hydraulic and geometric variables. The one-dimensional model of Savenije (1993b) made use of predictive equations for the Van der Burgh coefficient $K$ and the dispersion

  6. Regional-scale Predictions of Agricultural N Losses in an Area with a High Livestock Density

    Directory of Open Access Journals (Sweden)

    Carlo Grignani

    2011-02-01

The quantification of the N losses in territories characterised by intensive animal stocking is of primary importance. The development of simulation models coupled to a GIS, or of simple environmental indicators, is strategic to suggest the best specific management practices. The aims of this work were: a) to couple a GIS to a simulation model in order to predict N losses; b) to estimate leaching and gaseous N losses from a territory with intensive livestock farming; c) to derive a simplified empirical metamodel from the model output that could be used to rank the relative importance of the variables which influence N losses and to extend the results to homogeneous situations. The work was carried out in a 7773 ha area in the Western Po plain in Italy. This area was chosen because it is characterised by intensive animal husbandry and might soon be included in the nitrate vulnerable zones. The high N load, the shallow water table and the coarse type of sub-soil sediments contribute to the vulnerability to N leaching. The CropSyst simulation model was coupled to a GIS to account for the soil surface N budget. A linear multiple regression approach was used to describe the influence of a series of independent variables on the N leaching, the N gaseous losses (including volatilisation and denitrification) and the sum of the two. Despite the fact that the available GIS was very detailed, a great deal of information necessary to run the model was lacking. Further soil measurements concerning soil hydrology, soil nitrate content and water table depth proved very valuable to integrate the data contained in the GIS in order to produce reliable input for the model. The results showed that the soils influence both the quantity and the pathways of the N losses to a great extent. The ratio between the N losses and the N supplied varied between 20 and 38%. The metamodel shows that manure input always played the most important role in determining the N losses

  7. Regional-scale Predictions of Agricultural N Losses in an Area with a High Livestock Density

    Directory of Open Access Journals (Sweden)

    Dario Sacco

    2006-12-01

The quantification of the N losses in territories characterised by intensive animal stocking is of primary importance. The development of simulation models coupled to a GIS, or of simple environmental indicators, is strategic to suggest the best specific management practices. The aims of this work were: a) to couple a GIS to a simulation model in order to predict N losses; b) to estimate leaching and gaseous N losses from a territory with intensive livestock farming; c) to derive a simplified empirical metamodel from the model output that could be used to rank the relative importance of the variables which influence N losses and to extend the results to homogeneous situations. The work was carried out in a 7773 ha area in the Western Po plain in Italy. This area was chosen because it is characterised by intensive animal husbandry and might soon be included in the nitrate vulnerable zones. The high N load, the shallow water table and the coarse type of sub-soil sediments contribute to the vulnerability to N leaching. The CropSyst simulation model was coupled to a GIS to account for the soil surface N budget. A linear multiple regression approach was used to describe the influence of a series of independent variables on the N leaching, the N gaseous losses (including volatilisation and denitrification) and the sum of the two. Despite the fact that the available GIS was very detailed, a great deal of information necessary to run the model was lacking. Further soil measurements concerning soil hydrology, soil nitrate content and water table depth proved very valuable to integrate the data contained in the GIS in order to produce reliable input for the model. The results showed that the soils influence both the quantity and the pathways of the N losses to a great extent. The ratio between the N losses and the N supplied varied between 20 and 38%. The metamodel shows that manure input always played the most important role in determining the N losses
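The metamodel step described above, a linear multiple regression on simulation output, can be sketched as an ordinary least-squares fit solved through the normal equations. The two predictors and coefficients below are hypothetical, not values from the study:

```python
def fit_ols(rows, y):
    """Fit y = b0 + b1*x1 + b2*x2 by solving the normal equations X'X b = X'y."""
    X = [[1.0, x1, x2] for x1, x2 in rows]
    n, k = len(X), 3
    # Build X'X and X'y.
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)] for i in range(k)]
    b = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]
    # Gaussian elimination with partial pivoting.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution.
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

# Hypothetical data generated from losses = 5 + 0.3*manure_N + 0.1*mineral_N.
rows = [(100, 50), (150, 20), (200, 80), (120, 60), (180, 30), (90, 10)]
y = [5 + 0.3 * m + 0.1 * f for m, f in rows]
beta = fit_ols(rows, y)
```

With noise-free inputs the fit recovers the generating coefficients exactly; on real simulation output the coefficients rank predictor importance, as in the metamodel.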

  8. Neutrino nucleosynthesis in supernovae: Shell model predictions

    International Nuclear Information System (INIS)

    Haxton, W.C.

    1989-01-01

Almost all of the 3 · 10^53 ergs liberated in a core collapse supernova is radiated as neutrinos by the cooling neutron star. I will argue that these neutrinos interact with nuclei in the ejected shells of the supernovae to produce new elements. It appears that this nucleosynthesis mechanism is responsible for the galactic abundances of ^7Li, ^11B, ^19F, ^138La, and ^180Ta, and contributes significantly to the abundances of about 15 other light nuclei. I discuss shell model predictions for the charged and neutral current allowed and first-forbidden responses of the parent nuclei, as well as the spallation processes that produce the new elements. 18 refs., 1 fig., 1 tab

  9. Hierarchical Model Predictive Control for Resource Distribution

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob

    2010-01-01

This paper deals with hierarchical model predictive control (MPC) of distributed systems. A three-level hierarchical approach is proposed, consisting of a high-level MPC controller, a second level of so-called aggregators, controlled by an online MPC-like algorithm, and a lower level of autonomous units. The approach is inspired by smart-grid electric power production and consumption systems, where the flexibility of a large number of power-producing and/or power-consuming units can be exploited in a smart-grid solution. The objective is to accommodate the load variation on the grid, arising on one hand from varying consumption and on the other hand from natural variations in power production, e.g. from wind turbines. The approach presented is based on quadratic optimization and possesses the properties of low algorithmic complexity and of scalability. In particular, the proposed design methodology...
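The quadratic-optimization flavour of the aggregator level can be illustrated with a toy equality-constrained allocation problem. The closed-form Lagrange solution below is a generic sketch under invented unit numbers, not the paper's algorithm:

```python
def allocate(preferred, weights, demand):
    """Minimise sum_i w_i * (u_i - p_i)**2 subject to sum_i u_i = demand.

    Stationarity gives u_i = p_i + lam / (2 * w_i); the multiplier lam is
    then fixed by the demand constraint.
    """
    s = sum(1.0 / (2.0 * w) for w in weights)
    lam = (demand - sum(preferred)) / s
    return [p + lam / (2.0 * w) for p, w in zip(preferred, weights)]

# Three units preferring 10, 20 and 30 MW must jointly cover 66 MW;
# the stiffer (higher-weight) unit deviates least from its preference.
u = allocate([10.0, 20.0, 30.0], [1.0, 2.0, 4.0], 66.0)
```

The closed form keeps the per-step cost linear in the number of units, which is one way to obtain the low algorithmic complexity and scalability the record mentions.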

  10. Distributed model predictive control made easy

    CERN Document Server

    Negenborn, Rudy

    2014-01-01

The rapid evolution of computer science, communication, and information technology has enabled the application of control techniques to systems beyond the possibilities of control theory just a decade ago. Critical infrastructures such as electricity, water, traffic and intermodal transport networks are now in the scope of control engineers. The sheer size of such large-scale systems requires the adoption of advanced distributed control approaches. Distributed model predictive control (MPC) is one of the promising control methodologies for the control of such systems. This book provides a state-of-the-art overview of distributed MPC approaches, while at the same time making clear directions of research that deserve more attention. The core and rationale of 35 approaches are carefully explained. Moreover, detailed step-by-step algorithmic descriptions of each approach are provided. These features make the book a comprehensive guide both for those seeking an introduction to distributed MPC and for those ...

  11. Model predictive control of a wind turbine modelled in Simpack

    International Nuclear Information System (INIS)

    Jassmann, U; Matzke, D; Reiter, M; Abel, D; Berroth, J; Schelenz, R; Jacobs, G

    2014-01-01

Wind turbines (WT) are steadily growing in size to increase their power production, which also increases the loads acting on the turbine's components. At the same time, large structures such as the blades and the tower become more flexible. To minimize this impact, the classical control loops for keeping the power production in an optimum state are more and more extended by load alleviation strategies. These additional control loops can be unified by a multiple-input multiple-output (MIMO) controller to achieve better balancing of tuning parameters. An example of MIMO control, which has recently received more attention in the wind industry, is Model Predictive Control (MPC). In an MPC framework a simplified model of the WT is used to predict its controlled outputs. Based on a user-defined cost function, an online optimization calculates the optimal control sequence. Thereby MPC can intrinsically incorporate constraints, e.g. of actuators. Turbine models used for calculation within the MPC are typically simplified. For testing and verification, multi-body simulations such as FAST, BLADED or FLEX5 are usually used to model system dynamics, but they are still limited in the number of degrees of freedom (DOF). Detailed information about load distribution (e.g. inside the gearbox) cannot be provided by such models. In this paper a Model Predictive Controller is presented and tested in a co-simulation with SIMPACK, a multi-body system (MBS) simulation framework used for detailed load analysis. The analyses are performed on the basis of the IME6.0 MBS WT model, described in this paper. It is based on the rotor of the NREL 5MW WT and consists of a detailed representation of the drive train. This takes into account a flexible main shaft and its main bearings with a planetary gearbox, where all components are modelled as flexible, as well as a supporting flexible main frame. The wind loads are simulated using the NREL AERODYN v13 code, which has been implemented as a routine
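The receding-horizon idea described here (predict with a simplified model, minimise a user-defined cost, respect actuator constraints) can be sketched for a toy scalar plant. The model, cost weights, candidate set and brute-force search below are illustrative assumptions, not the paper's turbine controller:

```python
import itertools

A, B = 0.9, 0.5           # simplified prediction model: x[k+1] = A*x + B*u
H = 3                     # prediction horizon
CANDIDATES = [-1.0, -0.4, 0.0, 0.4, 1.0]   # coarse admissible inputs, |u| <= 1

def mpc_step(x, target):
    """Pick the first move of the best admissible control sequence."""
    best_cost, best_u0 = float("inf"), 0.0
    for seq in itertools.product(CANDIDATES, repeat=H):
        xp, cost = x, 0.0
        for u in seq:
            xp = A * xp + B * u
            cost += (xp - target) ** 2 + 0.01 * u ** 2  # user-defined cost
        if cost < best_cost:
            best_cost, best_u0 = cost, seq[0]
    return best_u0

# Closed loop: drive the state from 0 toward 2 while the input stays bounded.
x = 0.0
for _ in range(30):
    x = A * x + B * mpc_step(x, 2.0)
```

Only the first move of each optimal sequence is applied before re-optimising, which is exactly the receding-horizon pattern; real MPC replaces the grid search with a structured (typically quadratic-programming) solver.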

  12. Model predictive control of a wind turbine modelled in Simpack

    Science.gov (United States)

    Jassmann, U.; Berroth, J.; Matzke, D.; Schelenz, R.; Reiter, M.; Jacobs, G.; Abel, D.

    2014-06-01

Wind turbines (WT) are steadily growing in size to increase their power production, which also increases the loads acting on the turbine's components. At the same time, large structures such as the blades and the tower become more flexible. To minimize this impact, the classical control loops for keeping the power production in an optimum state are more and more extended by load alleviation strategies. These additional control loops can be unified by a multiple-input multiple-output (MIMO) controller to achieve better balancing of tuning parameters. An example of MIMO control, which has recently received more attention in the wind industry, is Model Predictive Control (MPC). In an MPC framework a simplified model of the WT is used to predict its controlled outputs. Based on a user-defined cost function, an online optimization calculates the optimal control sequence. Thereby MPC can intrinsically incorporate constraints, e.g. of actuators. Turbine models used for calculation within the MPC are typically simplified. For testing and verification, multi-body simulations such as FAST, BLADED or FLEX5 are usually used to model system dynamics, but they are still limited in the number of degrees of freedom (DOF). Detailed information about load distribution (e.g. inside the gearbox) cannot be provided by such models. In this paper a Model Predictive Controller is presented and tested in a co-simulation with SIMPACK, a multi-body system (MBS) simulation framework used for detailed load analysis. The analyses are performed on the basis of the IME6.0 MBS WT model, described in this paper. It is based on the rotor of the NREL 5MW WT and consists of a detailed representation of the drive train. This takes into account a flexible main shaft and its main bearings with a planetary gearbox, where all components are modelled as flexible, as well as a supporting flexible main frame. 
The wind loads are simulated using the NREL AERODYN v13 code, which has been implemented as a routine to

  13. A new model for prediction of dispersoid precipitation in aluminium alloys containing zirconium and scandium

    International Nuclear Information System (INIS)

    Robson, J.D.

    2004-01-01

A model has been developed to predict precipitation of ternary Al3(Sc, Zr) dispersoids in aluminium alloys containing zirconium and scandium. The model is based on the classical numerical method of Kampmann and Wagner, extended to predict precipitation of a ternary phase. The model has been applied to the precipitation of dispersoids in scandium-containing AA7050. The dispersoid precipitation kinetics and number density are predicted to be sensitive to the scandium concentration, whilst the dispersoid radius is not. The dispersoids are predicted to enrich in zirconium during precipitation. Coarsening has been investigated in detail, and it is predicted that a steady-state size distribution is only reached once coarsening is well advanced. The addition of scandium is predicted to eliminate the dispersoid-free zones observed in scandium-free 7050, greatly increasing recrystallization resistance.

  14. Poisson Mixture Regression Models for Heart Disease Prediction.

    Science.gov (United States)

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

Early heart disease control can be achieved through efficient disease prediction and diagnosis. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are addressed here under two different classes: standard and concomitant-variable mixture regression models. Results show that a two-component concomitant-variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary generalized linear Poisson regression model, due to its low Bayesian Information Criterion (BIC) value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model for heart disease prediction over all models, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using a Poisson mixture regression model.
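The clustering idea behind such models can be sketched with a plain two-component Poisson mixture fitted by expectation-maximisation. This is deliberately simpler than the paper's mixture regression models (no covariates), and the event counts below are invented:

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def em_poisson_mixture(counts, iters=200):
    """EM for a two-component Poisson mixture: estimates (pi, lam1, lam2)."""
    lam1, lam2, pi = 1.0, 10.0, 0.5   # crude initial guesses
    for _ in range(iters):
        # E-step: responsibility of component 1 for each count.
        r = []
        for k in counts:
            p1 = pi * poisson_pmf(k, lam1)
            p2 = (1 - pi) * poisson_pmf(k, lam2)
            r.append(p1 / (p1 + p2))
        # M-step: weighted mixing proportion and rate updates.
        n1 = sum(r)
        pi = n1 / len(counts)
        lam1 = sum(ri * k for ri, k in zip(r, counts)) / n1
        lam2 = sum((1 - ri) * k for ri, k in zip(r, counts)) / (len(counts) - n1)
    return pi, lam1, lam2

# Hypothetical event counts: a low-risk group near rate 2, a high-risk group near 12.
counts = [1, 2, 3, 2, 1, 0, 2, 3, 11, 13, 12, 10, 14, 12]
pi, lam1, lam2 = em_poisson_mixture(counts)
```

The responsibilities computed in the E-step are what assign individuals to high- or low-risk clusters; a mixture regression additionally makes each rate depend on covariates.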

  15. A New Approach to Modeling Densities and Equilibria of Ice and Gas Hydrate Phases

    Science.gov (United States)

    Zyvoloski, G.; Lucia, A.; Lewis, K. C.

    2011-12-01

The Gibbs-Helmholtz Constrained (GHC) equation is a new cubic equation of state that was recently derived by Lucia (2010) and Lucia et al. (2011) by constraining the energy parameter in the Soave form of the Redlich-Kwong equation to satisfy the Gibbs-Helmholtz equation. The key attributes of the GHC equation are: 1) It is a multi-scale equation because it uses the internal energy of departure, UD, as a natural bridge between the molecular and bulk phase length scales. 2) It does not require acentric factors, volume translation, regression of parameters to experimental data, binary (kij) interaction parameters, or other forms of empirical correlations. 3) It is a predictive equation of state because it uses a database of values of UD determined from NTP Monte Carlo simulations. 4) It can readily account for differences in molecular size and shape. 5) It has been successfully applied to non-electrolyte mixtures as well as weak and strong aqueous electrolyte mixtures over wide ranges of temperature, pressure and composition to predict liquid density and phase equilibrium with up to four phases. 6) It has been extensively validated with experimental data. 7) The AAD% error between predicted and experimental liquid density is 1% while the AAD% error in phase equilibrium predictions is 2.5%. 8) It has been used successfully within the subsurface flow simulation program FEHM. In this work we describe recent extensions of the multi-scale predictive GHC equation to modeling the phase densities and equilibrium behavior of hexagonal ice and gas hydrates. In particular, we show that radial distribution functions, which can be determined by NTP Monte Carlo simulations, can be used to establish correct standard state fugacities of ice Ih and gas hydrates. From this, it is straightforward to determine both the phase density of ice or gas hydrates as well as any equilibrium involving ice and/or hydrate phases. 
A number of numerical results for mixtures of N2, O2, CH4, CO2, water
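For orientation, the standard Soave-Redlich-Kwong cubic that the GHC equation constrains can be solved for the vapor compressibility factor as follows. Note this is plain SRK with a literature acentric factor for CO2, precisely the kind of empirical input the GHC equation avoids, so it illustrates the cubic-equation machinery only, not the GHC equation itself:

```python
import math

R = 8.314  # J/(mol K)

def srk_vapor_Z(T, P, Tc, Pc, omega):
    """Vapor-root compressibility from the Soave-Redlich-Kwong cubic,
    Z^3 - Z^2 + (A - B - B^2) Z - A B = 0, solved by Newton from Z = 1."""
    m = 0.480 + 1.574 * omega - 0.176 * omega ** 2
    alpha = (1 + m * (1 - math.sqrt(T / Tc))) ** 2
    a = 0.42748 * R ** 2 * Tc ** 2 / Pc * alpha   # energy parameter
    b = 0.08664 * R * Tc / Pc                     # co-volume parameter
    A = a * P / (R * T) ** 2
    B = b * P / (R * T)
    Z = 1.0
    for _ in range(50):
        f = Z ** 3 - Z ** 2 + (A - B - B ** 2) * Z - A * B
        df = 3 * Z ** 2 - 2 * Z + (A - B - B ** 2)
        Z -= f / df
    return Z

# CO2 vapor at 300 K and 10 bar (Tc = 304.13 K, Pc = 73.8 bar, omega = 0.225).
Z = srk_vapor_Z(300.0, 1.0e6, 304.13, 7.38e6, 0.225)
rho = 1.0e6 / (Z * R * 300.0)   # molar density in mol/m^3
```

In the GHC approach, the temperature dependence of the energy parameter would instead come from Monte Carlo values of the internal energy of departure rather than from the alpha(T, omega) correlation used here.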

  16. Using synchronization in multi-model ensembles to improve prediction

    Science.gov (United States)

    Hiemstra, P.; Selten, F.

    2012-04-01

In recent decades, many climate models have been developed to understand and predict the behavior of the Earth's climate system. Although these models are all based on the same basic physical principles, they still show different behavior. This is, for example, caused by the choice of how to parametrize sub-grid scale processes. One method to combine these imperfect models is to run a multi-model ensemble. The models are given identical initial conditions and are integrated forward in time. A multi-model estimate can, for example, be a weighted mean of the ensemble members. We propose to go a step further and try to obtain synchronization between the imperfect models by connecting the multi-model ensemble and exchanging information. The combined multi-model ensemble is also known as a supermodel. The supermodel has learned from observations how to optimally exchange information between the ensemble members. In this study we focused on the density and formulation of the connections within the supermodel. The main question was whether we could obtain synchronization between two climate models when connecting only a subset of their state spaces. Limiting the connected subspace has two advantages: 1) it limits the transfer of data (bytes) between the ensemble members, which can be a limiting factor in large-scale climate models, and 2) learning the optimal connection strategy from observations is easier. To answer the research question, we connected two identical quasi-geostrophic (QG) atmospheric models to each other, where the models have different initial conditions. The QG model is a qualitatively realistic simulation of the winter flow on the Northern Hemisphere, has three layers and uses a spectral implementation. We connected the models in the original spherical harmonic state space, and in linear combinations of these spherical harmonics, i.e. Empirical Orthogonal Functions (EOFs). 
We show that when connecting through spherical harmonics, we only need to connect 28% of
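The partial-coupling question can be illustrated with two chaotic toy models nudged together through a single state component. The Lorenz-63 system and the coupling strength below stand in for the QG models and are purely illustrative:

```python
def lorenz_step(s, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0, nudge=0.0, ref_x=0.0):
    """One forward-Euler step of Lorenz-63, with optional nudging on x only."""
    x, y, z = s
    dx = sigma * (y - x) + nudge * (ref_x - x)   # coupling acts on x alone
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)

# Two "ensemble members" with different initial conditions, connected only
# through a subset of the state (the x component), mutually nudged.
a, b = (1.0, 1.0, 1.0), (5.0, -5.0, 20.0)
dt, k = 0.005, 20.0
for _ in range(20000):                            # integrate 100 time units
    a_new = lorenz_step(a, dt, nudge=k, ref_x=b[0])
    b_new = lorenz_step(b, dt, nudge=k, ref_x=a[0])
    a, b = a_new, b_new

err = sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
```

Despite being coupled in only one of three components, the two trajectories lock onto a common orbit, the same qualitative effect the study seeks with partially connected QG models.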

  17. A unified dislocation density-dependent physical-based constitutive model for cold metal forming

    Science.gov (United States)

    Schacht, K.; Motaman, A. H.; Prahl, U.; Bleck, W.

    2017-10-01

Dislocation-density-dependent physical-based constitutive models of metal plasticity, while computationally efficient and history-dependent, can accurately account for varying process parameters such as strain, strain rate and temperature; different loading modes such as continuous deformation, creep and relaxation; microscopic metallurgical processes; and varying chemical composition within an alloy family. Since these models are founded on the essential phenomena dominating the deformation, they have a larger range of usability and validity. They are also suitable for manufacturing chain simulations, since they can efficiently compute the cumulative effect of the various manufacturing processes by following the material state through the entire manufacturing chain, including interpass periods, and give a realistic prediction of the material behavior and final product properties. In the physical-based constitutive model of cold metal plasticity introduced in this study, physical processes influencing cold and warm plastic deformation in polycrystalline metals are described using physical/metallurgical internal variables such as dislocation density and effective grain size. The evolution of these internal variables is calculated using adequate equations that describe the physical processes dominating the material behavior during cold plastic deformation. For validation, the model is numerically implemented in a general implicit isotropic elasto-viscoplasticity algorithm as a user-defined material subroutine (UMAT) in ABAQUS/Standard and used for finite element simulation of upsetting tests and a complete cold forging cycle of the case-hardenable MnCr steel family.

  18. Novel modeling of combinatorial miRNA targeting identifies SNP with potential role in bone density.

    Directory of Open Access Journals (Sweden)

    Claudia Coronnello

MicroRNAs (miRNAs) are post-transcriptional regulators that bind to their target mRNAs through base complementarity. Predicting miRNA targets is a challenging task, and various studies showed that existing algorithms suffer from a high number of false predictions and low to moderate overlap in their predictions. Until recently, very few algorithms considered the dynamic nature of the interactions, including the effect of less specific interactions, the miRNA expression level, and the effect of combinatorial miRNA binding. Addressing these issues can result in more accurate miRNA:mRNA modeling with many applications, including efficient miRNA-related SNP evaluation. We present a novel thermodynamic model based on the Fermi-Dirac equation that incorporates miRNA expression in the prediction of target occupancy, and we show that it improves the performance of two popular single-miRNA target finders. Modeling combinatorial miRNA targeting is a natural extension of this model. Two other algorithms show improved prediction efficiency when combinatorial binding models are considered. ComiR (Combinatorial miRNA targeting), a novel algorithm we developed, incorporates the improved predictions of the four target finders into a single probabilistic score using ensemble learning. Combining target scores of multiple miRNAs using ComiR improves predictions over the naïve method for target combination. The ComiR scoring scheme can be used for identification of SNPs affecting miRNA binding. As proof of principle, ComiR identified rs17737058 as disruptive to the miR-488-5p:NCOA1 interaction, which we confirmed in vitro. We also found rs17737058 to be significantly associated with decreased bone mineral density (BMD) in two independent cohorts, indicating that the miR-488-5p/NCOA1 regulatory axis is likely critical in maintaining BMD in women. With increasing availability of comprehensive high-throughput datasets from patients, ComiR is expected to become an essential
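The Fermi-Dirac occupancy idea can be sketched as follows. The chemical-potential form, temperature scale and free energies here are illustrative assumptions, not ComiR's fitted parameterisation:

```python
import math

def site_occupancy(dG, mirna_conc, kT=0.6):
    """Fermi-Dirac-style occupancy of one target site: binding free energy dG
    (kcal/mol) against a hypothetical concentration-dependent chemical
    potential, so higher miRNA expression raises occupancy."""
    mu = kT * math.log(mirna_conc)   # illustrative chemical potential
    return 1.0 / (1.0 + math.exp((dG - mu) / kT))

def combined_occupancy(sites):
    """Probability that at least one miRNA occupies the target, treating
    sites as independent (a naive combinatorial-binding model)."""
    p_free = 1.0
    for dG, conc in sites:
        p_free *= 1.0 - site_occupancy(dG, conc)
    return 1.0 - p_free

# One strong site of a highly expressed miRNA vs two weak, lowly expressed ones.
strong = combined_occupancy([(-8.0, 10.0)])
weak = combined_occupancy([(-1.0, 0.01), (-1.2, 0.01)])
```

A disruptive SNP would be modeled as a shift in dG for the affected site, lowering the combined occupancy in the same way.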

  19. Moving Towards Dynamic Ocean Management: How Well Do Modeled Ocean Products Predict Species Distributions?

    Directory of Open Access Journals (Sweden)

    Elizabeth A. Becker

    2016-02-01

Species distribution models are now widely used in conservation and management to predict suitable habitat for protected marine species. The primary sources of dynamic habitat data have been in situ and remotely sensed oceanic variables (both are considered “measured data”), but now ocean models can provide historical estimates and forecast predictions of relevant habitat variables such as temperature, salinity, and mixed layer depth. To assess the performance of modeled ocean data in species distribution models, we present a case study for cetaceans that compares models based on output from a data-assimilative implementation of the Regional Ocean Modeling System (ROMS) to those based on measured data. Specifically, we used seven years of cetacean line-transect survey data collected between 1991 and 2009 to develop predictive habitat-based models of cetacean density for 11 species in the California Current Ecosystem. Two different generalized additive models were compared: one built with a full suite of ROMS output and another built with a full suite of measured data. Model performance was assessed using the percentage of explained deviance, root mean squared error (RMSE), observed-to-predicted density ratios, and visual inspection of predicted and observed distributions. Predicted distribution patterns were similar for models using ROMS output and measured data, and showed good concordance between observed sightings and model predictions. Quantitative measures of predictive ability were also similar between model types, and RMSE values were almost identical. The overall demonstrated success of the ROMS-based models opens new opportunities for dynamic species management and biodiversity monitoring, because ROMS output is available in near real time and can be forecast.

  20. Numerical prediction of a dip effect in the critical current density

    International Nuclear Information System (INIS)

    Al Khawaja, U.; Benkraouda, M.; Obaidat, I.M.

    2007-01-01

We have conducted an extensive series of molecular dynamics simulations of the properties of the critical current density in systems with periodic square arrays of pinning sites. The density of the pinning sites was kept fixed while the vortex density, pinning strength, and temperature were varied. At zero temperature, we have observed a substantial dip in the critical current density that occurs only at a fixed value of the vortex density and for specific values of the pinning strength. We have found that the occurrence of the dip depends mainly on the initial positions of the vortices with respect to the positions of the pinning sites. At the dip, we have found that the interstitial vortices form moving channels, leading to the observed drop in the critical current density.

  1. Conditional Density Models Integrating Fuzzy and Probabilistic Representations of Uncertainty

    NARCIS (Netherlands)

    R.J. Almeida e Santos Nogueira (Rui Jorge)

    2014-01-01

Conditional density estimation is an important problem in a variety of areas such as system identification, machine learning, artificial intelligence, empirical economics, macroeconomic analysis, quantitative finance and risk management. This work considers the

  2. Model Insensitive and Calibration Independent Method for Determination of the Downstream Neutral Hydrogen Density Through Ly-alpha Glow Observations

    Science.gov (United States)

    Gangopadhyay, P.; Judge, D. L.

    1996-01-01

Our knowledge of the various heliospheric phenomena (location of the solar wind termination shock, heliopause configuration and very local interstellar medium parameters) is limited by uncertainties in the available heliospheric plasma models and by calibration uncertainties in the observing instruments. There is, thus, a strong motivation to develop model-insensitive and calibration-independent methods to reduce the uncertainties in the relevant heliospheric parameters. We have developed such a method to constrain the downstream neutral hydrogen density inside the heliospheric tail. In our approach we have taken advantage of the relative insensitivity of the downstream neutral hydrogen density profile to the specific plasma model adopted. We have also used the fact that the presence of an asymmetric neutral hydrogen cavity surrounding the Sun, characteristic of all neutral density models, results in a higher multiple scattering contribution to the observed glow in the downstream region than in the upstream region. This allows us to approximate the actual density profile with one which is spatially uniform for the purpose of calculating the downstream backscattered glow. Using different spatially constant density profiles, radiative transfer calculations are performed, and the radial dependence of the predicted glow is compared with the observed 1/R dependence of the Pioneer 10 UV data. Such a comparison bounds the large-distance heliospheric neutral hydrogen density in the downstream direction to a value between 0.05 and 0.1/cc.

  3. Viscosity and density models for copper electrorefining electrolytes

    OpenAIRE

    Kalliomäki Taina; Aji Arif T.; Aromaa Jari; Lundström Mari

    2016-01-01

Viscosity and density are highly important physicochemical properties of copper electrolyte, since they affect the purity of cathode copper and energy consumption [1, 2] by affecting the mass and heat transfer conditions in the cell [3]. Increasing viscosity and density decreases the rate at which the anode slime falls to the bottom of the cell [4, 5] and lowers the diffusion coefficient of the cupric ion (DCu2+) [6]. Decreasing the falling rate of anode slime increases movement of the slime to other...

  4. An LTE implementation based on a road traffic density model

    OpenAIRE

    Attaullah, Muhammad

    2013-01-01

The increase in vehicular traffic has created new challenges in determining the performance of data transmission and safety measures in traffic. Hence, traffic signals at intersections are used as cost-effective and time-saving tools for traffic management in urban areas. On the other hand, signalized intersections in congested urban areas are a key source of high traffic density and slow traffic. High traffic density slows the network data rate between vehicle and vehicle and...

  5. Progress on Complex Langevin simulations of a finite density matrix model for QCD

    Energy Technology Data Exchange (ETDEWEB)

    Bloch, Jacques [Univ. of Regensburg (Germany). Inst. for Theorectical Physics; Glesaan, Jonas [Swansea Univ., Swansea U.K.; Verbaarschot, Jacobus [Stony Brook Univ., NY (United States). Dept. of Physics and Astronomy; Zafeiropoulos, Savvas [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); College of William and Mary, Williamsburg, VA (United States); Heidelberg Univ. (Germany). Inst. for Theoretische Physik

    2018-04-01

We study the Stephanov model, which is an RMT model for QCD at finite density, using the Complex Langevin algorithm. Naive implementation of the algorithm shows convergence towards the phase-quenched or quenched theory rather than to the intended theory with dynamical quarks. A detailed analysis of this issue and a potential resolution of the failure of this algorithm are discussed. We study the effect of gauge cooling on the Dirac eigenvalue distribution and the time evolution of the norm for various cooling norms, which were specifically designed to remove the pathologies of the complex Langevin evolution. The cooling is further supplemented with a shifted representation for the random matrices. Unfortunately, none of these modifications generates a substantial improvement in the complex Langevin evolution, and the final results still do not agree with the analytical predictions.
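The complex Langevin method itself can be demonstrated on the exactly solvable Gaussian model S(z) = σz²/2 with complex σ, where the algorithm is known to converge; this toy is far simpler than the Stephanov matrix model and stands in for it only as an illustration of the algorithm:

```python
import random

def complex_langevin_z2(sigma, dt=0.01, n_steps=200000, seed=7):
    """Complex Langevin sampling for the action S(z) = sigma * z**2 / 2.
    The drift is -dS/dz = -sigma * z; the noise stays real even though z
    wanders into the complex plane. The observable <z^2> should converge
    to the analytic value 1/sigma."""
    rng = random.Random(seed)
    z = 1.0 + 0.0j
    acc, count = 0.0 + 0.0j, 0
    for step in range(n_steps):
        z += -sigma * z * dt + (2 * dt) ** 0.5 * rng.gauss(0.0, 1.0)
        if step > n_steps // 10:      # discard thermalisation
            acc += z * z
            count += 1
    return acc / count

sigma = 1.0 + 1.0j
est = complex_langevin_z2(sigma)      # analytic answer: 1/sigma = 0.5 - 0.5j
```

In the Gaussian case the stationary distribution is known and the estimate agrees with 1/σ up to statistical and discretisation errors; the pathologies discussed in the record arise only in genuinely non-trivial models.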

  6. Bone mineral density predicts posttransplant survival among hepatocellular carcinoma liver transplant recipients.

    Science.gov (United States)

    Sharma, Pratima; Parikh, Neehar D; Yu, Jessica; Barman, Pranab; Derstine, Brian A; Sonnenday, Christopher J; Wang, Stewart C; Su, Grace L

    2016-08-01

    Hepatocellular carcinoma (HCC) is a common indication for liver transplantation (LT). Recent data suggest that body composition features strongly affect post-LT mortality. We examined the impact of body composition on post-LT mortality in patients with HCC. Data on adult LT recipients who received Model for End-Stage Liver Disease exception for HCC between February 29, 2002, and December 31, 2013, and who had a computed tomography (CT) scan in the 6 months prior to LT were reviewed (n = 118). All available CT scan Digital Imaging and Communication in Medicine files were analyzed using a semiautomated high-throughput methodology with algorithms programmed in MATLAB. Analytic morphomics measurements, including dorsal muscle group (DMG) area, visceral and subcutaneous fat, and bone mineral density (BMD), were taken at the bottom of the eleventh thoracic vertebral level. Thirty-two (27%) patients died during the median follow-up of 4.4 years. The number of HCC lesions (hazard ratio [HR], 2.81; P < …) was associated with post-LT mortality; DMG area did not affect post-LT survival. In conclusion, in addition to the number of HCC lesions and pre-LT locoregional therapy, low BMD, a surrogate for bone loss, rather than DMG area, was independently associated with post-LT mortality in HCC patients. Bone loss may be an early marker of deconditioning that precedes sarcopenia and may affect transplant outcomes. Liver Transplantation 22 1092-1098 2016 AASLD. © 2016 American Association for the Study of Liver Diseases.

  7. Modeling of Materials for Energy Storage: A Challenge for Density Functional Theory

    Science.gov (United States)

    Kaltak, Merzuk; Fernandez-Serra, Marivi; Hybertsen, Mark S.

    Hollandite α-MnO2 is a promising material for rechargeable batteries and is studied extensively in the community because of its interesting tunnel structure and the corresponding large capacity for lithium as well as sodium ions. However, the presence of partially reduced Mn ions, due to doping with Ag or during lithiation, makes hollandite a challenging system for density functional theory and the conventionally employed PBE+U method. A naive attempt to model the ternary system LixAgyMnO2 with density functionals similar to those employed for the case y = 0 fails and predicts a strong monoclinic distortion of the experimentally observed tetragonal unit cell of Ag2Mn8O16. Structure and binding energies are compared with experimental data and show the importance of van der Waals interactions as well as the necessity of an accurate description of the cooperative Jahn-Teller effects for silver hollandite AgyMnO2. Based on these observations, a ternary phase diagram is calculated, allowing prediction of the physical and chemical properties of LixAgyMnO2, such as stable stoichiometries, open circuit voltages, the formation of Ag metal and the structural change during lithiation. Work supported by the Department of Energy (DOE) under award #DE-SC0012673.

  8. Information density converges in dialogue: Towards an information-theoretic model.

    Science.gov (United States)

    Xu, Yang; Reitter, David

    2018-01-01

    The principle of entropy rate constancy (ERC) states that language users distribute information such that words tend to be equally predictable given previous contexts. We examine the applicability of this principle to spoken dialogue, as previous findings primarily rest on written text. The study takes into account the joint-activity nature of dialogue and its topic shift mechanisms, which differ from those of monologue. It examines how the information contributions from the two dialogue partners interactively evolve as the discourse develops. The increase of local sentence-level information density (predicted by ERC) is shown to apply to dialogue overall. However, when the different roles of interlocutors in introducing new topics are identified, their contributions in information content display a new converging pattern. We draw explanations for this pattern from multiple perspectives: First, casting dialogue as an information exchange system would mean that the pattern results from the two interlocutors maintaining their own contexts rather than sharing one. Second, we present empirical evidence that a model of Interactive Alignment may include information density to explain the effect. Third, we argue that building common ground is a process analogous to information convergence. Thus, we put forward an information-theoretic view of dialogue, under which some existing theories of human dialogue may eventually be unified. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Integrating geophysics and hydrology for reducing the uncertainty of groundwater model predictions and improved prediction performance

    DEFF Research Database (Denmark)

    Christensen, Nikolaj Kruse; Christensen, Steen; Ferre, Ty

    A major purpose of groundwater modeling is to help decision-makers in efforts to manage the natural environment. Increasingly, it is recognized that both the predictions of interest and their associated uncertainties should be quantified to support robust decision making. In particular, decision… the integration of geophysical data in the construction of a groundwater model increases the prediction performance. We suggest that modelers should perform a hydrogeophysical “test-bench” analysis of the likely value of geophysics data for improving groundwater model prediction performance before actually… and the resulting predictions can be compared with predictions from the ‘true’ model. By performing this analysis we expect to give the modeler insight into how the uncertainty of model-based prediction can be reduced.

  10. Coupled hygrothermal, electrochemical, and mechanical modelling for deterioration prediction in reinforced cementitious materials

    DEFF Research Database (Denmark)

    Michel, Alexander; Geiker, Mette Rica; Lepech, M.

    2017-01-01

    In this paper a coupled hygrothermal, electrochemical, and mechanical modelling approach for deterioration prediction in cementitious materials is briefly outlined. Deterioration prediction is thereby based on coupled modelling of (i) chemical processes including, among others, transport of heat and matter as well as phase assemblage on the nano and micro scale, (ii) corrosion of steel including electrochemical processes at the reinforcement surface, and (iii) material performance including corrosion- and load-induced damages on the meso and macro scale. The individual FEM models are fully coupled, i.e. information such as corrosion current density, damage state of the concrete cover, etc., is constantly exchanged between the models.

  11. Increased consumer density reduces the strength of neighborhood effects in a model system.

    Science.gov (United States)

    Merwin, Andrew C; Underwood, Nora; Inouye, Brian D

    2017-11-01

    An individual's susceptibility to attack can be influenced by conspecific and heterospecific neighbors. Predicting how these neighborhood effects contribute to population-level processes such as competition and evolution requires an understanding of how the strength of neighborhood effects is modified by changes in the abundances of both consumers and neighboring resource species. We show for the first time that consumer density can interact with the density and frequency of neighboring organisms to determine the magnitude of neighborhood effects. We used the bean beetle, Callosobruchus maculatus, and two of its host beans, Vigna unguiculata and V. radiata, to perform a response-surface experiment with a range of resource densities and three consumer densities. At low beetle density, damage to beans was reduced with increasing conspecific density (i.e., resource dilution) and damage to the less preferred host, V. unguiculata, was reduced with increasing V. radiata frequency (i.e., frequency-dependent associational resistance). As beetle density increased, however, neighborhood effects were reduced; at the highest beetle densities neither focal nor neighboring resource density nor frequency influenced damage. These findings illustrate the importance of consumer density in mediating indirect effects among resources, and suggest that accounting for consumer density may improve our ability to predict population-level outcomes of neighborhood effects and our use of them in applications such as mixed-crop pest management. © 2017 by the Ecological Society of America.

  12. Protein distance constraints predicted by neural networks and probability density functions

    DEFF Research Database (Denmark)

    Lund, Ole; Frimand, Kenneth; Gorodkin, Jan

    1997-01-01

    We predict interatomic C-α distances by two independent data-driven methods. The first method uses statistically derived probability distributions of the pairwise distance between two amino acids, whilst the second consists of a neural network prediction approach equipped with windows taking … A method based on the predicted distances is presented. A homepage with software, predictions and data related to this paper is available at http://www.cbs.dtu.dk/services/CPHmodels/

  13. Predicting carnivore occurrence with noninvasive surveys and occupancy modeling

    Science.gov (United States)

    Long, Robert A.; Donovan, Therese M.; MacKay, Paula; Zielinski, William J.; Buzas, Jeffrey S.

    2011-01-01

    Terrestrial carnivores typically have large home ranges and exist at low population densities, thus presenting challenges to wildlife researchers. We employed multiple, noninvasive survey methods—scat detection dogs, remote cameras, and hair snares—to collect detection–nondetection data for elusive American black bears (Ursus americanus), fishers (Martes pennanti), and bobcats (Lynx rufus) throughout the rugged Vermont landscape. We analyzed these data using occupancy modeling that explicitly incorporated detectability as well as habitat and landscape variables. For black bears, percentage of forested land within 5 km of survey sites was an important positive predictor of occupancy, and percentage of human developed land within 5 km was a negative predictor. Although the relationship was less clear for bobcats, occupancy appeared positively related to the percentage of both mixed forest and forested wetland habitat within 1 km of survey sites. The relationship between specific covariates and fisher occupancy was unclear, with no specific habitat or landscape variables directly related to occupancy. For all species, we used model averaging to predict occurrence across the study area. Receiver operating characteristic (ROC) analyses of our black bear and fisher models suggested that occupancy modeling efforts with data from noninvasive surveys could be useful for carnivore conservation and management, as they provide insights into habitat use at the regional and landscape scale without requiring capture or direct observation of study species.
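
    The occupancy-modeling idea above (separating the probability psi that a site is occupied from the probability p of detecting the species when present) can be sketched for a single-season model. The detection histories and crude grid search below are hypothetical stand-ins for the maximum-likelihood machinery of the software actually used in such studies.

```python
import math

def site_likelihood(history, psi, p):
    """Single-season occupancy likelihood for one site's
    detection history (1 = detected on that survey)."""
    detected = sum(history)
    n = len(history)
    # Probability of the history given the site is occupied
    occ = psi * (p ** detected) * ((1 - p) ** (n - detected))
    # A site with no detections may also simply be unoccupied
    unocc = (1 - psi) if detected == 0 else 0.0
    return occ + unocc

# Hypothetical detection histories for four sites, three surveys each
histories = [[1, 0, 1], [0, 0, 0], [0, 1, 0], [0, 0, 0]]

def total_loglik(psi, p):
    return sum(math.log(site_likelihood(h, psi, p)) for h in histories)

# Crude grid search for the maximum-likelihood (psi, p)
best = max(((psi / 100, p / 100)
            for psi in range(1, 100) for p in range(1, 100)),
           key=lambda t: total_loglik(*t))
print(best)
```

Because detection is imperfect, the estimated psi exceeds the naive fraction of sites with detections (2 of 4 here): some all-zero histories are attributed to occupied-but-missed sites.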

  14. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

    Full Text Available We propose a weather prediction model based on a neural network and fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the “fuzzy rule-based neural network”, which simulates sequential relations among fuzzy sets using an artificial neural network; the second part is the “neural fuzzy inference system”, which builds on the first part but can learn new fuzzy rules from the previous ones according to the algorithm we propose. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. The need for accurate weather prediction is apparent when considering its benefits. However, the excessive pursuit of accuracy in weather prediction makes some of the “accurate” prediction results meaningless, and numerical prediction models are often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we make the predicted precipitation outcomes more accurate and the prediction methods simpler than those of complex numerical forecasting models, which occupy large computational resources, are time-consuming, and can have a low predictive accuracy rate. Accordingly, we achieve more accurate predictive precipitation results than with traditional artificial neural networks that have low predictive accuracy.

  15. Prediction-error variance in Bayesian model updating: a comparative study

    Science.gov (United States)

    Asadollahi, Parisa; Li, Jian; Huang, Yong

    2017-04-01

    In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum information entropy probability model of the prediction error variances plays an important role; it is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. This selection is therefore critical for robust updating of the structural model, especially in the presence of modeling errors. To date, three ways of treating the prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of these different strategies on model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structural model parameters as well as the uncertain prediction variances. Different levels of modeling uncertainty and complexity are represented by three FE models: a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on model updating performance is also examined. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model class level…
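
    A minimal sketch of two of the strategies listed above, on synthetic data with an assumed scalar model: strategy 1 fixes the prediction error variance (here deliberately at a wrong value), while strategy 3 maximizes over it jointly with the model parameter. For this Gaussian likelihood the parameter estimate coincides, but only the joint treatment recovers the actual noise level.

```python
import math
import random

random.seed(0)
# Synthetic "measurements": a true scalar model output plus noise
true_theta, sigma_true = 2.0, 0.3
data = [true_theta + random.gauss(0, sigma_true) for _ in range(50)]

def log_likelihood(theta, sigma2):
    """Gaussian likelihood from stochastic embedding: the
    prediction error is zero-mean with variance sigma2."""
    n = len(data)
    sse = sum((d - theta) ** 2 for d in data)
    return -0.5 * n * math.log(2 * math.pi * sigma2) - sse / (2 * sigma2)

# Strategy 1: fix sigma2 empirically (here deliberately wrong: 1.0)
fixed = max((t / 100 for t in range(100, 300)),
            key=lambda th: log_likelihood(th, 1.0))

# Strategy 3: treat sigma2 as an uncertain parameter and maximize jointly
joint = max(((t / 100, s / 1000)
             for t in range(100, 300) for s in range(10, 400, 5)),
            key=lambda ts: log_likelihood(*ts))
print(fixed, joint)
```

The grid search is a stand-in for the Transitional MCMC sampling of the paper; the point is only that the fitted variance (strategy 3) lands near the true noise variance of 0.09, information the fixed-variance strategy discards.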

  16. Foundation Settlement Prediction Based on a Novel NGM Model

    Directory of Open Access Journals (Sweden)

    Peng-Yu Chen

    2014-01-01

    Full Text Available Prediction of foundation or subgrade settlement is very important during engineering construction. Given that many settlement-time sequences exhibit a nonhomogeneous index trend, a novel grey forecasting model called the NGM(1,1,k,c) model is proposed in this paper. With an optimized whitenization differential equation, the proposed NGM(1,1,k,c) model has the property of white exponential law coincidence and can predict a pure nonhomogeneous index sequence precisely. We used two case studies to verify the predictive effect of the NGM(1,1,k,c) model for settlement prediction. The results show that this model achieves excellent prediction accuracy; thus, the model is well suited for simulation and prediction of approximately nonhomogeneous index sequences and has excellent application value in settlement prediction.
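
    For orientation, a sketch of the classic GM(1,1) grey model that the NGM(1,1,k,c) model extends; the extension terms for non-homogeneous index trends are not reproduced here, and the settlement-like series is invented.

```python
import numpy as np

def gm11_forecast(x, steps=1):
    """Classic GM(1,1) grey model sketch: accumulate the series,
    fit the grey differential equation x(k) + a z(k) = b by least
    squares, then forecast via the whitenization solution."""
    x = np.asarray(x, float)
    x1 = np.cumsum(x)                       # accumulated (1-AGO) series
    z = 0.5 * (x1[1:] + x1[:-1])            # background values
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]
    def x1_hat(k):                          # whitenization solution
        return (x[0] - b / a) * np.exp(-a * k) + b / a
    n = len(x)
    return [x1_hat(k) - x1_hat(k - 1) for k in range(n, n + steps)]

# Hypothetical settlement-like, slowly growing sequence (mm)
series = [2.67, 3.13, 3.25, 3.36, 3.56, 3.72]
preds = gm11_forecast(series, steps=2)
print([round(float(p), 2) for p in preds])
```

GM(1,1) assumes a homogeneous exponential trend; it is exactly this restriction that the paper's optimized whitenization equation relaxes.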

  17. Nonconvex model predictive control for commercial refrigeration

    Science.gov (United States)

    Gybel Hovgaard, Tobias; Boyd, Stephen; Larsen, Lars F. S.; Bagterp Jørgensen, John

    2013-08-01

    We consider the control of a commercial multi-zone refrigeration system, which consists of several cooling units sharing a common compressor and is used to cool multiple areas or rooms. In each time period we choose the cooling capacity of each unit and a common evaporation temperature. The goal is to minimise the total energy cost, using real-time electricity prices, while obeying temperature constraints on the zones. We propose a variation on model predictive control to achieve this goal. When the right variables are used, the dynamics of the system are linear and the constraints are convex. The cost function, however, is nonconvex due to the temperature dependence of thermodynamic efficiency. To handle this nonconvexity we propose a sequential convex optimisation method, which typically converges in around 5 or fewer iterations. We employ a fast convex quadratic programming solver to carry out the iterations, which is more than fast enough to run in real time. We demonstrate our method on a realistic model, with a full-year simulation and 15-minute time periods, using historical electricity prices and weather data, as well as random variations in thermal load. These simulations show substantial cost savings, on the order of 30%, compared to a standard thermostat-based control system. Perhaps more importantly, we see that the method exhibits a sophisticated response to real-time variations in electricity prices. This demand response is critical to help balance real-time uncertainties in generation capacity associated with large penetration of intermittent renewable energy sources in a future smart grid.
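
    The receding-horizon logic described above can be illustrated with a toy one-zone variant. All constants are hypothetical, and exhaustive search over a few discrete cooling levels stands in for the sequential convex QP of the paper.

```python
import itertools

# Toy one-zone refrigeration MPC sketch (hypothetical numbers):
# keep the zone temperature inside a band at minimum electricity
# cost, re-planning over the remaining horizon at each step.
DT, LOAD = 1.0, 0.8            # step [h], ambient heat load [K/h]
T_MIN, T_MAX = 2.0, 5.0        # allowed zone temperature band [K]
LEVELS = (0.0, 1.0, 2.0)       # discrete cooling power choices [K/h]

def step(T, u):
    return T + DT * (LOAD - u)

def mpc_action(T, prices):
    """Return the first move of the cheapest feasible plan."""
    best_u, best_cost = None, float("inf")
    for plan in itertools.product(LEVELS, repeat=len(prices)):
        t, cost, ok = T, 0.0, True
        for u, p in zip(plan, prices):
            t = step(t, u)
            cost += p * u
            if not (T_MIN <= t <= T_MAX):
                ok = False
                break
        if ok and cost < best_cost:
            best_u, best_cost = plan[0], cost
    return best_u

# Price spike in period 2: the controller pre-cools while cheap
prices = [1.0, 1.0, 8.0, 1.0]
T = 4.0
trajectory = []
for k in range(len(prices)):
    u = mpc_action(T, prices[k:])
    T = step(T, u)
    trajectory.append((u, round(T, 2)))
print(trajectory)
```

The planned sequence applies maximum cooling in the cheap period just before the price spike and none during it, a miniature version of the price-responsive behaviour the paper reports.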

  18. Density functional theory prediction of pKa for carboxylated single-wall carbon nanotubes and graphene

    Science.gov (United States)

    Li, Hao; Fu, Aiping; Xue, Xuyan; Guo, Fengna; Huai, Wenbo; Chu, Tianshu; Wang, Zonghua

    2017-06-01

    Density functional calculations have been performed to investigate the acidities of carboxylated single-wall carbon nanotubes and graphene. The pKa values for different COOH-functionalized models, with varying lengths, diameters and chirality of the nanotubes and with different edges of graphene, were predicted using the SMD/M05-2X/6-31G* method combined with two universal thermodynamic cycles. The effects of the functionalization position of the carboxyl group and of Stone-Wales and single-vacancy defects on the acidity of the functionalized nanotube and graphene have also been evaluated. The deprotonated species undergo decarboxylation when the hybridization mode of the carbon atom at the functionalization site changes from sp2 to sp3, both for the tube and for graphene. Knowledge of the pKa values of the carboxylated nanotube and graphene could be of great help for understanding nanocarbon materials in many diverse areas, including environmental protection, catalysis, electrochemistry and biochemistry.
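
    The thermodynamic-cycle step reduces to the formula pKa = ΔG_deprot(aq) / (RT ln 10). The free energies below are placeholders, and the aqueous proton free energy is one commonly quoted convention, not a value taken from the paper.

```python
import math

# Direct thermodynamic-cycle sketch (illustrative numbers only):
# pKa = [G_aq(A-) + G_aq(H+) - G_aq(HA)] / (RT ln 10)
R = 1.987204e-3            # gas constant [kcal mol^-1 K^-1]
T = 298.15                 # temperature [K]
G_H_PLUS_AQ = -270.28      # kcal/mol, an often-used convention

def pka_from_cycle(g_ha_aq, g_a_minus_aq):
    """pKa of HA -> A- + H+ from aqueous free energies [kcal/mol]."""
    dG = g_a_minus_aq + G_H_PLUS_AQ - g_ha_aq
    return dG / (R * T * math.log(10))

# Hypothetical SMD-level free energies for a model -COOH site
print(round(pka_from_cycle(-1000.0, -723.9), 2))
```

A 1.36 kcal/mol shift in the deprotonation free energy moves the predicted pKa by one unit, which is why the choice of cycle and proton free energy convention matters so much in such predictions.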

  19. Idea density measured in late life predicts subsequent cognitive trajectories: implications for the measurement of cognitive reserve.

    Science.gov (United States)

    Farias, Sarah Tomaszewski; Chand, Vineeta; Bonnici, Lisa; Baynes, Kathleen; Harvey, Danielle; Mungas, Dan; Simon, Christa; Reed, Bruce

    2012-11-01

    The Nun Study showed that lower linguistic ability in young adulthood, measured by idea density (ID), increased the risk of dementia in late life. The present study examined whether ID measured in late life continues to predict the trajectory of cognitive change. ID was measured in 81 older adults who were followed longitudinally for an average of 4.3 years. Changes in global cognition and 4 specific neuropsychological domains (episodic memory, semantic memory, spatial abilities, and executive function) were examined as outcomes. Separate random effects models tested the effect of ID on longitudinal change in outcomes, adjusted for age and education. Lower ID was associated with greater subsequent decline in global cognition, semantic memory, episodic memory, and spatial abilities. When analysis was restricted to only participants without dementia at the time ID was collected, results were similar. Linguistic ability in young adulthood, as measured by ID, has been previously proposed as an index of neurocognitive development and/or cognitive reserve. The present study provides evidence that even when ID is measured in old age, it continues to be associated with subsequent cognitive decline and as such may continue to provide a marker of cognitive reserve.

  20. Large-strain time-temperature equivalence in high density polyethylene for prediction of extreme deformation and damage

    Directory of Open Access Journals (Sweden)

    Gray G.T.

    2012-08-01

    Full Text Available Time-temperature equivalence is a widely recognized property of many time-dependent material systems, where there is a clear predictive link relating the deformation response at a nominal temperature and a high strain-rate to an equivalent response at a depressed temperature and nominal strain-rate. It has been found that high-density polyethylene (HDPE) obeys a linear empirical formulation relating test temperature and strain-rate. This observation was extended to continuous stress-strain curves, such that material response measured in a load frame at large strains and low strain-rates (at depressed temperatures) could be translated into a temperature-dependent response at high strain-rates and validated against Taylor impact results. Time-temperature equivalence was used in conjunction with jump-rate compression tests to investigate isothermal response at high strain-rate while excluding adiabatic heating. The validated constitutive response was then applied to the analysis of Dynamic-Tensile-Extrusion of HDPE, a tensile analog to Taylor impact developed at LANL. The Dyn-Ten-Ext test results and FEA found that HDPE deformed smoothly after exiting the die and, after substantial drawing, appeared to undergo a pressure-dependent shear damage mechanism at intermediate velocities, while it fragmented at high velocities. Dynamic-Tensile-Extrusion, properly coupled with a validated constitutive model, can successfully probe extreme tensile deformation and damage of polymers.
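
    The linear empirical formulation mentioned above can be sketched as a rate-temperature shift: each decade of strain-rate is traded against a fixed temperature offset. The slope below is an assumed placeholder, not the fitted HDPE value from the paper.

```python
import math

# Linear time-temperature equivalence sketch (hypothetical slope):
# a test at temperature T1 and strain rate r1 is equivalent to a
# test at the target rate r2 at temperature
#   T2 = T1 + s * log10(r2 / r1)
S_K_PER_DECADE = 7.0    # assumed slope [K per decade of strain rate]

def equivalent_temperature(T1, r1, r2):
    return T1 + S_K_PER_DECADE * math.log10(r2 / r1)

# A quasi-static test (1e-3 /s) at a depressed 223 K stands in
# for a high-rate test (1e3 /s) at the temperature printed below:
print(round(equivalent_temperature(223.0, 1e-3, 1e3), 1))
```

With the assumed slope, six decades of rate correspond to a 42 K shift, so a cold, slow load-frame test emulates an impact-rate test near room temperature.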

  1. Predictive Modelling of Heavy Metals in Urban Lakes

    OpenAIRE

    Lindström, Martin

    2000-01-01

    Heavy metals are well-known environmental pollutants. In this thesis predictive models for heavy metals in urban lakes are discussed and new models presented. The base of predictive modelling is empirical data from field investigations of many ecosystems covering a wide range of ecosystem characteristics. Predictive models focus on the variabilities among lakes and processes controlling the major metal fluxes. Sediment and water data for this study were collected from ten small lakes in the ...

  2. Predictive Uncertainty Estimation in Water Demand Forecasting Using the Model Conditional Processor

    Directory of Open Access Journals (Sweden)

    Amos O. Anele

    2018-04-01

    Full Text Available In a previous paper, a number of potential models for short-term water demand (STWD) prediction were analysed to find the ones with the best fit. The results obtained in Anele et al. (2017) showed that hybrid models may be considered accurate and appropriate forecasting models for STWD prediction. However, such a best single-valued forecast does not guarantee reliable and robust decisions, which can be properly obtained via model uncertainty processors (MUPs). MUPs provide an estimate of the full predictive densities and not only the single-valued expected prediction. Amongst other MUPs, the purpose of this paper is to use the multivariate version of the model conditional processor (MCP), proposed by Todini (2008), to demonstrate how estimating the predictive probability conditional on a number of relatively good predictive models may improve our knowledge, thus reducing the predictive uncertainty (PU) when forecasting into the unknown future. Through the MCP approach, the probability distribution of the future water demand can be assessed depending on the forecast provided by one or more deterministic forecasting models. Based on average weekly data of 168 h, the probability density of the future demand is built conditional on three models' predictions, namely the autoregressive-moving average (ARMA), feed-forward back propagation neural network (FFBP-NN) and a hybrid model (i.e., combined forecast from ARMA and FFBP-NN). The results obtained show that MCP may be effectively used for real-time STWD prediction since it brings out the PU connected to its forecast, and such information could help water utilities estimate the risk connected to a decision.
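
    In the Gaussian (normal-space) setting, the MCP reduces to conditioning the observed variable on the model forecasts within a joint multivariate normal. A sketch on synthetic demand data follows; the two forecast "models" and all numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic hourly demand and two imperfect model forecasts
n = 1000
y = rng.normal(50, 10, n)                  # "true" demand
f1 = y + rng.normal(0, 4, n)               # model 1: unbiased, noisy
f2 = 0.8 * y + 10 + rng.normal(0, 6, n)    # model 2: biased, noisier

def mcp_conditional(y, preds, new_preds):
    """Gaussian MCP sketch: condition the observed variable on
    the model forecasts using the joint sample covariance."""
    X = np.column_stack([y] + preds)
    mu = X.mean(axis=0)
    S = np.cov(X, rowvar=False)
    s_yy, s_yx = S[0, 0], S[0, 1:]
    s_xx = S[1:, 1:]
    w = np.linalg.solve(s_xx, s_yx)        # regression weights
    mean = mu[0] + w @ (np.asarray(new_preds) - mu[1:])
    var = s_yy - s_yx @ w                  # predictive variance
    return mean, var

mean, var = mcp_conditional(y, [f1, f2], [55.0, 54.0])
print(round(float(mean), 1), round(float(var), 2))
```

The predictive variance is smaller than both the climatological variance of demand and the variance obtained by conditioning on either model alone, which is the uncertainty reduction the MCP is designed to quantify.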

  3. Color-flavor locked strange quark matter in a mass density-dependent model

    International Nuclear Information System (INIS)

    Chen Yuede; Wen Xinjian

    2007-01-01

    Properties of color-flavor locked (CFL) strange quark matter have been studied in a mass-density-dependent model, and compared with the results in the conventional bag model. In both models, the CFL phase is more stable than the normal nuclear matter for reasonable parameters. However, the lower density behavior of the sound velocity in this model is completely opposite to that in the bag model, which makes the maximum mass of CFL quark stars in the mass-density-dependent model larger than that in the bag model. (authors)

  4. Seasonal predictability of Kiremt rainfall in coupled general circulation models

    Science.gov (United States)

    Gleixner, Stephanie; Keenlyside, Noel S.; Demissie, Teferi D.; Counillon, François; Wang, Yiguo; Viste, Ellen

    2017-11-01

    The Ethiopian economy and population is strongly dependent on rainfall. Operational seasonal predictions for the main rainy season (Kiremt, June-September) are based on statistical approaches with Pacific sea surface temperatures (SST) as the main predictor. Here we analyse dynamical predictions from 11 coupled general circulation models for the Kiremt seasons from 1985-2005 with the forecasts starting from the beginning of May. We find skillful predictions from three of the 11 models, but no model beats a simple linear prediction model based on the predicted Niño3.4 indices. The skill of the individual models for dynamically predicting Kiremt rainfall depends on the strength of the teleconnection between Kiremt rainfall and concurrent Pacific SST in the models. Models that do not simulate this teleconnection fail to capture the observed relationship between Kiremt rainfall and the large-scale Walker circulation.

  5. Knowledge-based artificial neural network model to predict the properties of alpha+ beta titanium alloys

    Energy Technology Data Exchange (ETDEWEB)

    Banu, P. S. Noori; Rani, S. Devaki [Dept. of Metallurgical Engineering, Jawaharlal Nehru Technological University, Hyderabad (India)

    2016-08-15

    In view of emerging applications of alpha+beta titanium alloys in aerospace and defense, we have aimed to develop a Back propagation neural network (BPNN) model capable of predicting the properties of these alloys as functions of alloy composition and/or thermomechanical processing parameters. The optimized BPNN model architecture was based on the sigmoid transfer function and has one hidden layer with ten nodes. The BPNN model showed excellent predictability of five properties: Tensile strength (r: 0.96), yield strength (r: 0.93), beta transus (r: 0.96), specific heat capacity (r: 1.00) and density (r: 0.99). The developed BPNN model was in agreement with the experimental data in demonstrating the individual effects of alloying elements in modulating the above properties. This model can serve as the platform for the design and development of new alpha+beta titanium alloys in order to attain desired strength, density and specific heat capacity.

  6. Serum bone alkaline phosphatase and calcaneus bone density predict fractures: a prospective study.

    Science.gov (United States)

    Ross, P D; Kress, B C; Parson, R E; Wasnich, R D; Armour, K A; Mizrahi, I A

    2000-01-01

    The aim of this study was to assess the ability of serum bone-specific alkaline phosphatase (bone ALP), creatinine-corrected urinary collagen crosslinks (CTx) and calcaneus bone mineral density (BMD) to identify postmenopausal women who have an increased risk of osteoporotic fractures. Calcaneus BMD and biochemical markers of bone turnover (serum bone ALP and urinary CTx) were measured in 512 community-dwelling postmenopausal women (mean age at baseline 69 years) participating in the Hawaii Osteoporosis Study. New spine and nonspine fractures subsequent to the BMD and biochemical bone markers measurements were recorded over an average of 2.7 years. Lateral spinal radiographs were used to identify spine fractures. Nonspine fractures were identified by self-report at the time of each examination. During the 2.7-year follow-up, at least one osteoporotic fracture occurred in 55 (10.7%) of the 512 women. Mean baseline serum bone ALP and urinary CTx were significantly higher among women who experienced an osteoporotic fracture compared with those women who did not fracture. In separate age-adjusted logistic regression models, serum bone ALP, urinary CTx and calcaneus BMD were each significantly associated with new fractures (odds ratios of 1.53, 1.54 and 1.61 per SD, respectively). Multiple variable logistic regression analysis identified BMD and serum bone ALP as significant predictors of fracture (p = 0.002 and 0.017, respectively). The results from this investigation indicate that increased bone turnover is significantly associated with an increased risk of osteoporotic fracture in postmenopausal women. This association is similar in magnitude and independent of that observed for BMD.
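
    The per-SD odds ratios reported above come from logistic models of fracture on standardized predictors. A sketch on synthetic data follows; the coefficients, sample and fitting routine are invented stand-ins, not those of the study.

```python
import math
import random

random.seed(2)
# Sketch: odds ratio per SD of a bone-turnover marker from a
# logistic model, on synthetic data (hypothetical coefficients).
n = 512
marker = [random.gauss(0, 1) for _ in range(n)]        # z-scores
frac = [1 if random.random() < 1 / (1 + math.exp(-(-2.1 + 0.43 * m)))
        else 0 for m in marker]                        # fracture (0/1)

def fit_logistic(x, y, lr=0.1, iters=3000):
    """Plain gradient ascent on the logistic log-likelihood."""
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        g0 = g1 = 0.0
        for xi, yi in zip(x, y):
            p = 1 / (1 + math.exp(-(b0 + b1 * xi)))
            g0 += yi - p
            g1 += (yi - p) * xi
        b0 += lr * g0 / len(x)
        b1 += lr * g1 / len(x)
    return b0, b1

b0, b1 = fit_logistic(marker, frac)
print("OR per SD:", round(math.exp(b1), 2))
```

Because the predictor is standardized, exp(b1) reads directly as the odds ratio per SD, the quantity quoted for bone ALP, CTx and BMD in the abstract.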

  7. Role of bone mineral density in predicting morphometric vertebral fractures in patients with HIV infection.

    Science.gov (United States)

    Porcelli, T; Gotti, D; Cristiano, A; Maffezzoni, F; Mazziotti, G; Focà, E; Castelli, F; Giustina, A; Quiros-Roldan, E

    2014-09-01

    This study investigated bone in HIV patients in terms of both quantity and quality. It was found that HIV-infected patients fracture independently of the degree of bone demineralization, as in other forms of secondary osteoporosis. We aimed to determine the prevalence of vertebral fractures (VFs) in HIV patients who were screened by bone mineral density (BMD) and to explore possible factors associated with VFs. This is a cross-sectional study that included HIV-infected patients recruited in the Clinic of Infectious and Tropical Diseases who underwent BMD measurement by dual-energy X-ray absorptiometry (DXA) at the lumbar spine and hip (Lunar Prodigy, GE Healthcare). For the assessment of VFs, anteroposterior and lateral X-ray examinations of the thoracic and lumbar spine were performed and centrally digitized. Logistic regression models were used in the statistical analysis of factors associated with VFs. One hundred thirty-one consecutive patients with HIV infection (93 M, 38 F, median age 51 years; range, 36-75) underwent BMD measurement: 25.2% of patients showed normal BMD, while 45% were osteopenic and 29.7% osteoporotic. The prevalence of low BMD (osteopenia and osteoporosis) was higher in females than in males (90 vs 69%), with no significant correlation with age or body mass index. VFs occurred more frequently in patients with low BMD than in patients with normal BMD (88.5 vs. 11.4%; p < …), with no significant difference between osteopenia and osteoporosis (43 vs. 46%; p = 0.073). VFs were significantly associated with older age and previous AIDS events. These results suggest that BMD alone does not identify all patients at risk of skeletal fragility who are, therefore, good candidates for morphometric evaluation of spine X-rays, in line with other forms of secondary osteoporosis with impaired bone quality.

  8. MODELLING OF DYNAMIC SPEED LIMITS USING THE MODEL PREDICTIVE CONTROL

    Directory of Open Access Journals (Sweden)

    Andrey Borisovich Nikolaev

    2017-09-01

    Full Text Available The article considers the issues of traffic management using the intelligent system “Car-Road” (IVHS), which consists of interacting intelligent vehicles (IV) and intelligent roadside controllers. Vehicles are organized in convoys with small distances between them. All vehicles are assumed to be fully automated (throttle control, braking, steering). Approaches are proposed for determining speed limits for vehicles on the motorway using model predictive control (MPC). The article proposes an approach to dynamic speed limits that minimizes the downtime of vehicles in traffic.
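
    The core of an MPC speed-limit scheme is a receding horizon: simulate each candidate limit forward, pick the cheapest plan, apply only its first step. A minimal sketch of that loop follows; the traffic dynamics, cost, and all parameters are invented for illustration and are not the article's model.

```python
# Toy receding-horizon ("MPC-style") chooser for a dynamic speed limit
# on one motorway segment. Model and parameters are illustrative only.

CANDIDATE_LIMITS = [60, 80, 100]   # km/h
HORIZON = 3                        # prediction steps

def step(density, limit):
    """One-step toy dynamics: below a critical density the outflow scales
    with the limit; in congestion, very high limits are counterproductive."""
    if density < 40:
        outflow = 0.5 * limit
    else:
        outflow = 0.3 * limit if limit <= 80 else 0.15 * limit
    return max(0.0, density + 30.0 - outflow)   # 30 veh/step inflow

def mpc_choice(density):
    """Simulate each candidate limit over the horizon and pick the one with
    the lowest accumulated density (a delay proxy). In a real controller
    only the first step is applied, then the horizon recedes."""
    best_cost, best_limit = None, None
    for limit in CANDIDATE_LIMITS:
        d, total = density, 0.0
        for _ in range(HORIZON):
            d = step(d, limit)
            total += d
        if best_cost is None or total < best_cost:
            best_cost, best_limit = total, limit
    return best_limit

print(mpc_choice(50.0))
```

    With this toy model, a congested segment gets an intermediate limit rather than the highest one, which is the qualitative behaviour dynamic speed limits aim for.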

  9. Densities and isothermal compressibilities of ionic liquids - Modelling and application

    DEFF Research Database (Denmark)

    Abildskov, Jens; Ellegaard, Martin Dela; O’Connell, J.P.

    2010-01-01

    Two corresponding-states forms have been developed for direct correlation function integrals in liquids to represent pressure effects on the volume of ionic liquids over wide ranges of temperature and pressure. The correlations can be analytically integrated from a chosen reference density to pro...
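
    The analytic-integration idea in the abstract can be sketched in its simplest limit: if the isothermal compressibility κ_T = (1/ρ)(∂ρ/∂P) is treated as (locally) constant, integrating from a reference state gives ρ(P) = ρ_ref·exp(κ_T·(P − P_ref)). The numbers below are illustrative magnitudes, not values from the paper.

```python
import math

# Density from a constant-compressibility integration (a simplification
# of the paper's correlation). All values are illustrative.
rho_ref = 1100.0      # kg/m^3 at P_ref (typical ionic-liquid magnitude)
kappa_T = 4.0e-10     # 1/Pa, assumed constant isothermal compressibility
P_ref = 1.0e5         # Pa (atmospheric reference)
P = 5.0e7             # Pa (50 MPa)

rho = rho_ref * math.exp(kappa_T * (P - P_ref))
print(round(rho, 2))
```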

  10. Age as a predictive factor of mammographic breast density in Jamaican women

    International Nuclear Information System (INIS)

    Soares, Deanne; Reid, Marvin; James, Michael

    2002-01-01

    AIM: We sought to determine the relationship between age, and other clinical characteristics such as parity, oestrogen use, dietary factors and menstrual history on breast density in Jamaican women. METHODS AND MATERIALS: A retrospective study was done of 891 patients who attended the breast imaging unit. The clinical characteristics were extracted from the patient records. Mammograms were assessed independently by two radiologists who were blinded to the patient clinical characteristics. Breast densities were assigned using the American College of Radiology (ACR) classification. RESULTS: The concordance between the ACR classification of breast density between the two independent radiologists was 92% with k = 0.76 (SE = 0.02, P < 0.001). Women with low breast density were heavier (81.3 ± 15.5 kg vs 68.4 ± 14.3 kg, P < 0.0001, mean ± standard deviation (SD)) and more obese (body mass index (BMI), 30.3 ± 5.8 kg m⁻² vs 26.0 ± 5.2 kg m⁻², P < 0.0001). Mammographic breast density decreased with age. The age adjusted odds ratios (ORs) for predictors significantly related to high breast density were parity, OR = 0.79 (95%CI:0.71, 0.88), weight, OR = 0.92 (95% CI:0.91, 0.95), BMI, OR = 0.83 (95% CI:0.78, 0.89), menopause, OR = 0.51 (95% CI:0.36, 0.74) and a history of previous breast surgery, OR 1.6 (95% CI:1.1, 2.3). CONCLUSION: The rate of decline of breast density with age in our population was influenced by parity and body composition. Soares, D. et al. (2002)

  11. MJO prediction skill of the subseasonal-to-seasonal (S2S) prediction models

    Science.gov (United States)

    Son, S. W.; Lim, Y.; Kim, D.

    2017-12-01

    The Madden-Julian Oscillation (MJO), the dominant mode of tropical intraseasonal variability, provides the primary source of tropical and extratropical predictability on subseasonal to seasonal timescales. To better understand its predictability, this study conducts a quantitative evaluation of MJO prediction skill in the state-of-the-art operational models participating in the subseasonal-to-seasonal (S2S) prediction project. Based on a bivariate correlation coefficient of 0.5, the S2S models exhibit MJO prediction skill ranging from 12 to 36 days. These prediction skills are affected by both the MJO amplitude and phase errors, the latter becoming more important with forecast lead times. Consistent with previous studies, the MJO events with stronger initial amplitude are typically better predicted. However, essentially no sensitivity to the initial MJO phase is observed. Overall MJO prediction skill and its inter-model spread are further related to the model mean biases in moisture fields and longwave cloud-radiation feedbacks. In most models, a dry bias quickly builds up in the deep tropics, especially across the Maritime Continent, weakening the horizontal moisture gradient. This likely dampens the organization and propagation of the MJO. Most S2S models also underestimate the longwave cloud-radiation feedbacks in the tropics, which may affect the maintenance of the MJO convective envelope. In general, the models with a smaller bias in horizontal moisture gradient and longwave cloud-radiation feedbacks show a higher MJO prediction skill, suggesting that improving those processes would enhance MJO prediction skill.
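
    The skill metric referred to above is the bivariate correlation between observed and forecast (RMM1, RMM2) index pairs, with 0.5 as the conventional skill threshold. A minimal sketch of the metric follows; the RMM values are invented for illustration.

```python
import math

def bivariate_corr(obs, fcst):
    """Bivariate correlation of (RMM1, RMM2) pairs, the standard MJO
    skill metric; obs and fcst are lists of (rmm1, rmm2) tuples."""
    num = sum(o1*f1 + o2*f2 for (o1, o2), (f1, f2) in zip(obs, fcst))
    den = math.sqrt(sum(o1*o1 + o2*o2 for o1, o2 in obs) *
                    sum(f1*f1 + f2*f2 for f1, f2 in fcst))
    return num / den

# Toy verification set (values invented for illustration).
obs  = [(1.0, 0.2), (0.8, 0.7), (0.1, 1.1), (-0.5, 0.9)]
good = [(0.9, 0.3), (0.7, 0.6), (0.2, 1.0), (-0.4, 1.0)]
poor = [(0.2, -0.9), (1.1, -0.3), (0.9, 0.4), (0.6, 0.8)]

print(round(bivariate_corr(obs, good), 2))  # near 1: skilful forecast
print(round(bivariate_corr(obs, poor), 2))  # well below 0.5: unskilful
```

    In practice the correlation is computed per forecast lead time, and the prediction skill quoted (12-36 days) is the longest lead at which it stays above 0.5.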

  12. Measurements and IRI Model Predictions During the Recent Solar Minimum

    Science.gov (United States)

    Bilitza, Dieter; Brown, Steven A.; Wang, Mathew Y.; Souza, Jonas R.; Roddy, Patrick A.

    2012-01-01

    Cycle 23 was exceptional in that it lasted almost two years longer than its predecessors and in that it ended in an extended minimum period that proved all predictions wrong. Comparisons of the International Reference Ionosphere (IRI) with CHAMP and GRACE in-situ measurements of electron density during the minimum have revealed significant discrepancies at 400-500 km altitude. Our study investigates the causes for these discrepancies with the help of ionosonde and Planar Langmuir Probe (PLP) data from the Communications/Navigation Outage Forecasting System (C/NOFS) satellite. Our C/NOFS comparisons confirm the earlier CHAMP and GRACE results. But the ionosonde measurements of the F-peak plasma frequency (foF2) show generally good agreement throughout the whole solar cycle. At mid-latitude stations yearly averages of the data-model difference are within 10% and at low latitudes stations within 20%. The 60-70% differences found at 400-500 km altitude are not seen at the F peak. We will discuss how these seemingly contradicting results from the ionosonde and in situ data-model comparisons can be explained and which parameters need to be corrected in the IRI model.

  13. Models for Strength Prediction of High-Porosity Cast-In-Situ Foamed Concrete

    Directory of Open Access Journals (Sweden)

    Wenhui Zhao

    2018-01-01

    Full Text Available A study was undertaken to develop a prediction model of compressive strength for three types of high-porosity cast-in-situ foamed concrete (cement mix, cement-fly ash mix, and cement-sand mix) with dry densities of less than 700 kg/m3. The model is an extension of Balshin’s model and takes into account the hydration ratio of the raw materials, in which the water/cement ratio was a constant for the entire construction period for a certain casting density. The results show that the measured porosity is slightly lower than the theoretical porosity due to a few inaccessible pores. The compressive strength increases exponentially with the ratio of the dry density to the solid density and increases with the curing time following the composite function A2(ln t)^B2 for all three types of foamed concrete. Based on the results that the compressive strength changes with the porosity and the curing time, a prediction model taking into account the mix constitution, curing time, and porosity is developed. A simple prediction model is put forward when no experimental data are available.
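
    The model structure described above (a Balshin-type power law in the density ratio multiplied by a logarithmic curing-time function) can be sketched as follows. All coefficients here are illustrative placeholders, not the fitted values from the study.

```python
import math

# Balshin-type strength/density relation with a curing-time factor, in the
# spirit of the extended model described above. Coefficients A, B, A2, B2
# are placeholders, not the study's fitted values.
def strength(dry_density, solid_density, t_days, A=120.0, B=3.0, A2=0.5, B2=1.2):
    """sigma = A * (rho_d/rho_s)^B * A2 * (ln t)^B2   (t in days, t > 1)."""
    density_term = (dry_density / solid_density) ** B
    curing_term = A2 * math.log(t_days) ** B2
    return A * density_term * curing_term

# Strength grows with both the density ratio and the curing time:
s7 = strength(600, 2200, 7)    # toy value at 7 days
s28 = strength(600, 2200, 28)  # higher at 28 days
print(round(s7, 2), round(s28, 2), s28 > s7)
```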

  14. Remote sensing and spatial statistical techniques for modelling Ommatissus lybicus (Hemiptera: Tropiduchidae) habitat and population densities.

    Science.gov (United States)

    Al-Kindi, Khalifa M; Kwan, Paul; R Andrew, Nigel; Welch, Mitchell

    2017-01-01

    In order to understand the distribution and prevalence of Ommatissus lybicus (Hemiptera: Tropiduchidae), as well as analyse their current biogeographical patterns and predict their future spread, comprehensive and detailed information on environmental, climatic, and agricultural practices is essential. Spatial analytical techniques, such as remote sensing and spatial statistics tools, can help detect and model spatial links and correlations between the presence, absence and density of O. lybicus in response to climatic, environmental, and human factors. The main objective of this paper is to review remote sensing and relevant analytical techniques that can be applied in mapping and modelling the habitat and population density of O. lybicus . An exhaustive search of related literature revealed that there are very limited studies linking location-based infestation levels of pests like O. lybicus with climatic, environmental, and human practice related variables. This review also highlights the accumulated knowledge and addresses the gaps in this area of research. Furthermore, it makes recommendations for future studies, and gives suggestions on monitoring and surveillance methods in designing both local and regional level integrated pest management strategies for palm trees and other affected cultivated crops.

  15. Remote sensing and spatial statistical techniques for modelling Ommatissus lybicus (Hemiptera: Tropiduchidae) habitat and population densities

    Directory of Open Access Journals (Sweden)

    Khalifa M. Al-Kindi

    2017-08-01

    Full Text Available In order to understand the distribution and prevalence of Ommatissus lybicus (Hemiptera: Tropiduchidae), as well as analyse their current biogeographical patterns and predict their future spread, comprehensive and detailed information on environmental, climatic, and agricultural practices is essential. Spatial analytical techniques, such as remote sensing and spatial statistics tools, can help detect and model spatial links and correlations between the presence, absence and density of O. lybicus in response to climatic, environmental, and human factors. The main objective of this paper is to review remote sensing and relevant analytical techniques that can be applied in mapping and modelling the habitat and population density of O. lybicus. An exhaustive search of related literature revealed that there are very limited studies linking location-based infestation levels of pests like O. lybicus with climatic, environmental, and human practice related variables. This review also highlights the accumulated knowledge and addresses the gaps in this area of research. Furthermore, it makes recommendations for future studies, and gives suggestions on monitoring and surveillance methods in designing both local and regional level integrated pest management strategies for palm trees and other affected cultivated crops.

  16. Measurement and modelling of high pressure density and interfacial tension of (gas + n-alkane) binary mixtures

    International Nuclear Information System (INIS)

    Pereira, Luís M.C.; Chapoy, Antonin; Burgass, Rod; Tohidi, Bahman

    2016-01-01

    Highlights: • (Density + IFT) measurements are performed in synthetic reservoir fluids. • Measured systems include CO_2, CH_4 and N_2 with n-decane. • Novel data are reported for temperatures up to 443 K and pressures up to 69 MPa. • Predictive models are tested in 16 (gas + n-alkane) systems. • Best modelling results are achieved with the Density Gradient Theory. - Abstract: The deployment of more efficient and economical extraction methods and processing facilities for oil and gas requires accurate knowledge of the interfacial tension (IFT) of fluid phases in contact. In this work, the capillary constant a of binary mixtures containing n-decane and common gases such as carbon dioxide, methane and nitrogen was measured. Experimental measurements were carried out at four temperatures (313, 343, 393 and 442 K) and pressures up to 69 MPa, or near the complete vaporisation of the organic phase into the gas-rich phase. To determine accurate IFT values, the capillary constants were combined with saturated phase density data measured with an Anton Paar densitometer and correlated with a model based on the Peng–Robinson 1978 equation of state (PR78 EoS). Correlated density showed an overall percentage absolute deviation (%AAD) from measured data of (0.2 to 0.5)% for the liquid phase and (1.5 to 2.5)% for the vapour phase of the studied systems and P–T conditions. The predictive capability of models to accurately describe both the temperature and pressure dependence of the saturated phase density and IFT of 16 (gas + n-alkane) binary mixtures was assessed in this work by comparison with data gathered from the literature and measured in this work. The IFT models considered include the Parachor, the Linear Gradient Theory (LGT) and the Density Gradient Theory (DGT) approaches combined with the Volume-Translated Predictive Peng–Robinson 1978 EoS (VT-PPR78 EoS). With no adjustable parameters, the VT-PPR78 EoS allowed a good description of both solubility and
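
    The conversion the paper relies on is the classical relation between the capillary constant and IFT: a² = 2γ/(g·Δρ), so γ = a²·g·Δρ/2, which is why the capillary-rise measurements had to be combined with saturated phase densities. A hedged numeric sketch (all values invented, not the paper's data):

```python
# IFT from a measured capillary constant and the density difference of
# the coexisting phases: gamma = a^2 * g * (rho_L - rho_V) / 2.
# All numbers below are illustrative, not from the paper.
g = 9.80665          # m/s^2
a = 2.3e-3           # m, hypothetical measured capillary constant
rho_liquid = 680.0   # kg/m^3, saturated liquid density (hypothetical)
rho_vapour = 45.0    # kg/m^3, saturated vapour density (hypothetical)

gamma = a**2 * g * (rho_liquid - rho_vapour) / 2.0
print(round(gamma * 1000, 2), "mN/m")
```

    The sensitivity to Δρ is why density errors of a few percent propagate directly into the reported IFT.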

  17. A dynamo theory prediction for solar cycle 22: Sunspot number, radio flux, exospheric temperature, and total density at 400 km

    Science.gov (United States)

    Schatten, K. H.; Hedin, A. E.

    1986-01-01

    Using the dynamo theory method to predict solar activity, a value for the smoothed sunspot number of 109 ± 20 is obtained for solar cycle 22. The predicted cycle is expected to peak near December 1990 ± 1 year. Concomitantly, F(10.7) radio flux is expected to reach a smoothed value of 158 ± 18 flux units. Global mean exospheric temperature is expected to reach 1060 ± 50 K, and global average total thermospheric density at 400 km is expected to reach 4.3 × 10⁻¹⁵ g/cm³ ± 25 percent.

  18. A dynamo theory prediction for solar cycle 22 - Sunspot number, radio flux, exospheric temperature, and total density at 400 km

    Science.gov (United States)

    Schatten, K. H.; Hedin, A. E.

    1984-01-01

    Using the 'dynamo theory' method to predict solar activity, a value for the smoothed sunspot number of 109 ± 20 is obtained for solar cycle 22. The predicted cycle is expected to peak near December 1990 ± 1 year. Concomitantly, F(10.7) radio flux is expected to reach a smoothed value of 158 ± 18 flux units. Global mean exospheric temperature is expected to reach 1060 ± 50 K, and global average total thermospheric density at 400 km is expected to reach 4.3 × 10⁻¹⁵ g/cm³ ± 25 percent.

  19. Population Density Modeling for Diverse Land Use Classes: Creating a National Dasymetric Worker Population Model

    Science.gov (United States)

    Trombley, N.; Weber, E.; Moehl, J.

    2017-12-01

    Many studies invoke dasymetric mapping to make more accurate depictions of population distribution by spatially restricting populations to inhabited/inhabitable portions of observational units (e.g., census blocks) and/or by varying population density among different land classes. LandScan USA uses this approach by restricting particular population components (such as residents or workers) to building area detected from remotely sensed imagery, but also goes a step further by classifying each cell of building area in accordance with ancillary land use information from national parcel data (CoreLogic, Inc.'s ParcelPoint database). Modeling population density according to land use is critical. For instance, office buildings would have a higher density of workers than warehouses even though the latter would likely have more cells of detection. This paper presents a modeling approach by which different land uses are assigned different densities to more accurately distribute populations within them. For parts of the country where the parcel data is insufficient, an alternate methodology is developed that uses National Land Cover Database (NLCD) data to define the land use type of building detection. Furthermore, LiDAR data is incorporated for many of the largest cities across the US, allowing the independent variables to be updated from two-dimensional building detection area to total building floor space. In the end, four different regression models are created to explain the effect of different land uses on worker distribution: (1) a two-dimensional model using land use types from the parcel data; (2) a three-dimensional model using land use types from the parcel data; (3) a two-dimensional model using land use types from the NLCD data; and (4) a three-dimensional model using land use types from the NLCD data. By and large, the resultant coefficients followed intuition, but importantly they allow the relationships between different land uses to be quantified. For instance, in the model
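
    The allocation step of a dasymetric model like the one described can be sketched compactly: a block's known worker total is distributed over building cells in proportion to a land-use density coefficient times cell area (or floor space, in the three-dimensional variant). The coefficients and cells below are invented for illustration, not LandScan USA's values.

```python
# Dasymetric allocation sketch: distribute a block's worker count over
# building cells, weighted by land-use coefficient x floor space.
# Coefficients and cells are illustrative inventions.

DENSITY_COEF = {"office": 5.0, "retail": 2.0, "warehouse": 0.5}

def allocate(total_workers, cells):
    """cells: list of (land_use, floor_space). Returns workers per cell."""
    weights = [DENSITY_COEF[use] * area for use, area in cells]
    total_w = sum(weights)
    return [total_workers * w / total_w for w in weights]

cells = [("office", 100.0), ("warehouse", 400.0), ("retail", 50.0)]
alloc = allocate(800, cells)
print([round(x, 1) for x in alloc])
```

    Note how the warehouse, despite four times the office's footprint, receives fewer workers: exactly the office-vs-warehouse intuition the abstract describes.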

  20. Age as a predictive factor of mammographic breast density in Jamaican women

    Energy Technology Data Exchange (ETDEWEB)

    Soares, Deanne; Reid, Marvin; James, Michael

    2002-06-01

    AIM: We sought to determine the relationship between age, and other clinical characteristics such as parity, oestrogen use, dietary factors and menstrual history on breast density in Jamaican women. METHODS AND MATERIALS: A retrospective study was done of 891 patients who attended the breast imaging unit. The clinical characteristics were extracted from the patient records. Mammograms were assessed independently by two radiologists who were blinded to the patient clinical characteristics. Breast densities were assigned using the American College of Radiology (ACR) classification. RESULTS: The concordance between the ACR classification of breast density between the two independent radiologists was 92% with k = 0.76 (SE = 0.02, P < 0.001). Women with low breast density were heavier (81.3 ± 15.5 kg vs 68.4 ± 14.3 kg, P < 0.0001, mean ± standard deviation (SD)) and more obese (body mass index (BMI), 30.3 ± 5.8 kg m⁻² vs 26.0 ± 5.2 kg m⁻², P < 0.0001). Mammographic breast density decreased with age. The age adjusted odds ratios (ORs) for predictors significantly related to high breast density were parity, OR = 0.79 (95%CI:0.71, 0.88), weight, OR = 0.92 (95% CI:0.91, 0.95), BMI, OR = 0.83 (95% CI:0.78, 0.89), menopause, OR = 0.51 (95% CI:0.36, 0.74) and a history of previous breast surgery, OR 1.6 (95% CI:1.1, 2.3). CONCLUSION: The rate of decline of breast density with age in our population was influenced by parity and body composition. Soares, D. et al. (2002)

  1. Butterfly, Recurrence, and Predictability in Lorenz Models

    Science.gov (United States)

    Shen, B. W.

    2017-12-01

    Over the span of 50 years, the original three-dimensional Lorenz model (3DLM; Lorenz, 1963) and its high-dimensional versions (e.g., Shen 2014a and references therein) have been used for improving our understanding of the predictability of weather and climate with a focus on chaotic responses. Although the Lorenz studies focus on nonlinear processes and chaotic dynamics, people often apply a "linear" conceptual model to understand the nonlinear processes in the 3DLM. In this talk, we present examples to illustrate the common misunderstandings regarding butterfly effect and discuss the importance of solutions' recurrence and boundedness in the 3DLM and high-dimensional LMs. The first example is discussed with the following folklore that has been widely used as an analogy of the butterfly effect: "For want of a nail, the shoe was lost. For want of a shoe, the horse was lost. For want of a horse, the rider was lost. For want of a rider, the battle was lost. For want of a battle, the kingdom was lost. And all for the want of a horseshoe nail." However, in 2008, Prof. Lorenz stated that he did not feel that this verse described true chaos but that it better illustrated the simpler phenomenon of instability; and that the verse implicitly suggests that subsequent small events will not reverse the outcome (Lorenz, 2008). Lorenz's comments suggest that the verse neither describes negative (nonlinear) feedback nor indicates recurrence, the latter of which is required for the appearance of a butterfly pattern. The second example is to illustrate that the divergence of two nearby trajectories should be bounded and recurrent, as shown in Figure 1. Furthermore, we will discuss how high-dimensional LMs were derived to illustrate (1) negative nonlinear feedback that stabilizes the system within the five- and seven-dimensional LMs (5D and 7D LMs; Shen 2014a; 2015a; 2016); (2) positive nonlinear feedback that destabilizes the system within the 6D and 8D LMs (Shen 2015b; 2017); and (3
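
    The divergence-yet-boundedness point in the abstract can be demonstrated directly by integrating the 1963 three-dimensional Lorenz model from two nearby initial conditions. The sketch below uses the standard parameters and a fixed-step RK4 integrator; the step size, run length, and perturbation are illustrative choices.

```python
# Classic 3-D Lorenz model (Lorenz, 1963) with standard parameters,
# integrated by RK4: nearby trajectories separate, yet both stay on the
# bounded attractor. Step size and run length are illustrative choices.
sigma, r, b = 10.0, 28.0, 8.0 / 3.0

def f(s):
    x, y, z = s
    return (sigma * (y - x), x * (r - z) - y, x * y - b * z)

def rk4(s, dt):
    add = lambda u, v, c: tuple(ui + c * vi for ui, vi in zip(u, v))
    k1 = f(s)
    k2 = f(add(s, k1, dt / 2))
    k3 = f(add(s, k2, dt / 2))
    k4 = f(add(s, k3, dt))
    return tuple(si + dt / 6 * (a1 + 2 * a2 + 2 * a3 + a4)
                 for si, a1, a2, a3, a4 in zip(s, k1, k2, k3, k4))

s1, s2 = (1.0, 1.0, 1.0), (1.0 + 1e-8, 1.0, 1.0)
for _ in range(5000):          # 50 time units with dt = 0.01
    s1, s2 = rk4(s1, 0.01), rk4(s2, 0.01)

dist = sum((p - q) ** 2 for p, q in zip(s1, s2)) ** 0.5
print(round(dist, 3), all(abs(c) < 100 for c in s1))  # separation, boundedness
```

    The separation grows from 1e-8 to the scale of the attractor, but neither trajectory escapes: sensitivity to initial conditions coexists with boundedness, which is the talk's central point.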

  2. Auditing predictive models : a case study in crop growth

    NARCIS (Netherlands)

    Metselaar, K.

    1999-01-01

    Methods were developed to assess and quantify the predictive quality of simulation models, with the intent to contribute to evaluation of model studies by non-scientists. In a case study, two models of different complexity, LINTUL and SUCROS87, were used to predict yield of forage maize

  3. Models for predicting compressive strength and water absorption of ...

    African Journals Online (AJOL)

    This work presents a mathematical model for predicting the compressive strength and water absorption of laterite-quarry dust cement block using augmented Scheffe's simplex lattice design. The statistical models developed can predict the mix proportion that will yield the desired property. The models were tested for lack of ...

  4. Bone mineral density at the hip predicts mortality in elderly men.

    Science.gov (United States)

    Trivedi, D P; Khaw, K T

    2001-01-01

    Low bone density as assessed by calcaneal ultrasound has been associated with mortality in elderly men and women. We examined the relationship between bone density measured at the hip and all-cause and cardiovascular mortality in elderly men. Men aged 65-76 years from the general community were recruited from general practices in Cambridge between 1991 and 1995. At baseline survey, data collection included health questionnaires, measures of anthropometry and cardiovascular risk factors, as well as bone mineral density (BMD) measured using dual energy X-ray absorptiometry. All men have been followed up for vital status up to December 1999. BMD was significantly inversely related to mortality from all causes and cardiovascular disease, with decreasing rates with increasing bone density quartile and an approximate halving of risk between the bottom and top quartile. The relative risk was 0.76 (95% CI 0.62-0.93) for cardiovascular disease mortality, with a similar risk reduction for all-cause mortality (95% CI 0.66-0.91). The association remained significant after adjusting for age, body mass index, cigarette smoking status, serum cholesterol, systolic blood pressure, past history of heart attack, stroke or cancer and other lifestyle factors which included use of alcohol, physical activity and general health status. Low bone density at the hip is thus a strong and independent predictor of all-cause and cardiovascular mortality in older men.
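
    The "approximate halving of risk" between quartiles is a simple rate ratio. As a hedged illustration of that comparison (counts and person-years invented, not the study's data):

```python
# Mortality rate ratio between BMD quartiles (illustrative counts only).
def rate(deaths, person_years):
    return deaths / person_years

bottom_q = rate(40, 1000.0)   # deaths per person-year, lowest BMD quartile
top_q = rate(20, 1000.0)      # highest BMD quartile

rate_ratio = top_q / bottom_q
print(rate_ratio)  # below 1: lower mortality in the high-BMD quartile
```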

  5. Buckled graphene: A model study based on density functional theory

    KAUST Repository

    Khan, Yasser

    2010-09-01

    We make use of ab initio calculations within density functional theory to investigate the influence of buckling on the electronic structure of single layer graphene. Our systematic study addresses a wide range of bond length and bond angle variations in order to obtain insights into the energy scale associated with the formation of ripples in a graphene sheet. © 2010 Elsevier B.V. All rights reserved.

  6. Buckled graphene: A model study based on density functional theory

    KAUST Repository

    Khan, Yasser; Mukaddam, Mohsin Ahmed; Schwingenschlögl, Udo

    2010-01-01

    We make use of ab initio calculations within density functional theory to investigate the influence of buckling on the electronic structure of single layer graphene. Our systematic study addresses a wide range of bond length and bond angle variations in order to obtain insights into the energy scale associated with the formation of ripples in a graphene sheet. © 2010 Elsevier B.V. All rights reserved.

  7. Statistical and Machine Learning Models to Predict Programming Performance

    OpenAIRE

    Bergin, Susan

    2006-01-01

    This thesis details a longitudinal study on factors that influence introductory programming success and on the development of machine learning models to predict incoming student performance. Although numerous studies have developed models to predict programming success, the models struggled to achieve high accuracy in predicting the likely performance of incoming students. Our approach overcomes this by providing a machine learning technique, using a set of three significant...

  8. Early changes of parotid density and volume predict modifications at the end of therapy and intensity of acute xerostomia

    International Nuclear Information System (INIS)

    Belli, Maria Luisa; Broggi, Sara; Scalco, Elisa; Rizzo, Giovanna; Sanguineti, Giuseppe; Fiorino, Claudio; Cattaneo, Giovanni Mauro; Dinapoli, Nicola; Valentini, Vincenzo; Ricchetti, Francesco

    2014-01-01

    To quantitatively assess the predictive power of early variations of parotid gland volume and density on final changes at the end of therapy and, possibly, on acute xerostomia during IMRT for head-neck cancer. Data of 92 parotids (46 patients) were available. Kinetics of the changes during treatment were described by the daily rate of density (rΔρ) and volume (rΔvol) variation based on weekly diagnostic kVCT images. Correlation between early and final changes was investigated, as was the correlation with prospective toxicity data (CTCAEv3.0) collected weekly during treatment for 24/46 patients. A higher rΔρ was observed during the first compared to the last week of treatment (-0.50 vs -0.05 HU, p-value = 0.0001). Based on early variations, a good estimation of the final changes may be obtained (Δρ: AUC = 0.82, p = 0.0001; Δvol: AUC = 0.77, p = 0.0001). Both early rΔρ and rΔvol predict a higher "mean" acute xerostomia score (≥ median value, 1.57; p-value = 0.01), with median early rate changes differing between patients above and below this score for both rΔρ and rΔvol. Further studies are necessary to definitively assess the potential of early density/volume changes in identifying more sensitive patients at higher risk of experiencing xerostomia. (orig.)

  9. Forecasting the density of oil futures returns using model-free implied volatility and high-frequency data

    International Nuclear Information System (INIS)

    Ielpo, Florian; Sevi, Benoit

    2013-09-01

    Forecasting the density of returns is useful for many purposes in finance, such as risk management activities, portfolio choice or derivative security pricing. Existing methods to forecast the density of returns either use prices of the asset of interest or option prices on this same asset. The latter method needs to convert the risk-neutral estimate of the density into a physical measure, which is computationally cumbersome. In this paper, we take the view of a practitioner who observes the implied volatility in the form of an index, namely the recent OVX, to forecast the density of oil futures returns for horizons going from 1 to 60 days. Using the recent methodology in Maheu and McCurdy (2011) to compute density predictions, we compare the performance of time series models using implied volatility and either daily or intra-daily futures prices. Our results indicate that models based on implied volatility deliver significantly better density forecasts at all horizons, which is in line with numerous studies delivering the same evidence for volatility point forecasts. (authors)
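
    Density forecasts of the kind compared above are typically scored with the log predictive score: sum the log density each competing forecast assigns to the realized returns, and the higher total wins. A minimal sketch follows, using Gaussian densities whose volatilities stand in for an implied-volatility-based and a historical estimate; all numbers are invented for illustration and this is not the Maheu and McCurdy procedure itself.

```python
import math

# Log predictive score comparison of two Gaussian density forecasts:
# one with an (illustrative) implied-vol-based sigma, one with a
# deliberately too-low historical sigma. Numbers are invented.

def gaussian_logpdf(x, mu, sigma):
    return -0.5 * math.log(2 * math.pi * sigma * sigma) \
           - (x - mu) ** 2 / (2 * sigma * sigma)

realized_returns = [0.012, -0.035, 0.004, 0.021, -0.048]

iv_sigma = 0.030    # daily vol from an OVX-like index (hypothetical)
hist_sigma = 0.015  # understated historical estimate (hypothetical)

score_iv = sum(gaussian_logpdf(r, 0.0, iv_sigma) for r in realized_returns)
score_hist = sum(gaussian_logpdf(r, 0.0, hist_sigma) for r in realized_returns)

# Higher total log score = better calibrated density forecast.
print(score_iv > score_hist)
```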

  10. Goethite surface reactivity: III. Unifying arsenate adsorption behavior through a variable crystal face - Site density model

    Science.gov (United States)

    Salazar-Camacho, Carlos; Villalobos, Mario

    2010-04-01

    We developed a model that describes quantitatively the arsenate adsorption behavior for any goethite preparation as a function of pH and ionic strength, by using one basic surface arsenate stoichiometry, with two affinity constants. The model combines a face distribution-crystallographic site density model for goethite with tenets of the Triple Layer and CD-MUSIC surface complexation models, and is self-consistent with its adsorption behavior towards protons, electrolytes, and other ions investigated previously. Five different systems of published arsenate adsorption data were used to calibrate the model spanning a wide range of chemical conditions, which included adsorption isotherms at different pH values, and adsorption pH-edges at different As(V) loadings, both at different ionic strengths and background electrolytes. Four additional goethite-arsenate systems reported with limited characterization and adsorption data were accurately described by the model developed. The adsorption reaction proposed is: ≡FeOH + ≡SOH + AsO₄³⁻ + H⁺ → ≡FeOAsO₃²⁻…SOH + H₂O, where ≡SOH is a surface site adjacent to ≡FeOH; with log K = 21.6 ± 0.7 when ≡SOH is another ≡FeOH, and log K = 18.75 ± 0.9 when ≡SOH is ≡Fe₂OH. An additional small contribution of a protonated complex was required to describe data at low pH and very high arsenate loadings. The model considered goethites above 80 m²/g as ideally composed of 70% face (1 0 1) and 30% face (0 0 1), resulting in a site density for ≡FeOH and for ≡Fe₃OH of 3.125/nm² each. Below 80 m²/g surface capacity increases progressively with decreasing area, which was modeled by considering a progressively increasing proportion of faces (0 1 0)/(1 0 1), because face (0 1 0) shows a much higher site density of ≡FeOH groups. Computation of the specific proportion of faces, and thus of the site densities for the three types of crystallographic surface groups present in
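
    A crystallographic site density like the 3.125 sites/nm² quoted above translates into an adsorption capacity once combined with the specific surface area. A small sketch of that unit conversion (the 80 m²/g area is an illustrative choice consistent with the abstract's high-area regime):

```python
# Converting a crystallographic site density (sites/nm^2) and a specific
# surface area (m^2/g) into micromoles of surface sites per gram.
AVOGADRO = 6.02214076e23

def site_concentration(sites_per_nm2, area_m2_per_g):
    sites_per_g = sites_per_nm2 * area_m2_per_g * 1e18  # 1 m^2 = 1e18 nm^2
    return sites_per_g / AVOGADRO * 1e6                 # micromol sites/g

print(round(site_concentration(3.125, 80.0), 1))  # micromol/g of =FeOH sites
```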

  11. Analytical thermal modelling of multilayered active embedded chips into high density electronic board

    Directory of Open Access Journals (Sweden)

    Monier-Vinard Eric

    2013-01-01

    Full Text Available The recent Printed Wiring Board embedding technology is an attractive packaging alternative that allows a very high degree of miniaturization by stacking multiple layers of embedded chips. This disruptive technology will further increase the thermal management challenges by concentrating heat dissipation at the heart of the organic substrate structure. In order to allow the electronic designer to analyze early the limits of the power dissipation, depending on the embedded chip location inside the board, as well as the thermal interactions with other buried chips or surface mounted electronic components, an analytical thermal modelling approach was established. The presented work describes the comparison of the analytical model results with the numerical models of various embedded chip configurations. The thermal behaviour predictions of the analytical model, found to be within ±10% relative error, demonstrate its relevance for modelling high density electronic boards. Besides, the approach promotes a practical solution to study the potential gain of conducting a part of the heat flow from the components towards a set of localized cooled board pads.
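
    The simplest analytical view of an embedded chip's thermal path is a one-dimensional series resistance through the board layers, R = t/(k·A) per layer. The sketch below illustrates that back-of-the-envelope calculation; the layer stack, footprint, and power are invented values, not the article's configuration.

```python
# 1-D series thermal-resistance sketch for a chip buried under a stack of
# board layers: R = thickness / (k * A) per layer. Values are illustrative.
LAYERS = [
    # (name, thickness_m, conductivity_W_per_mK)
    ("copper plane", 35e-6, 390.0),
    ("prepreg", 100e-6, 0.3),
    ("core", 200e-6, 0.4),
]
AREA = 25e-6   # m^2 (5 mm x 5 mm footprint, hypothetical)
POWER = 2.0    # W dissipated by the embedded chip (hypothetical)

r_total = sum(t / (k * AREA) for _, t, k in LAYERS)
delta_t = POWER * r_total
print(round(r_total, 2), "K/W;", round(delta_t, 1), "K rise")
```

    The low-conductivity dielectric layers dominate the total resistance, which is why conducting part of the heat towards cooled board pads, as the article proposes, pays off.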

  12. Probabilistic Modeling and Visualization for Bankruptcy Prediction

    DEFF Research Database (Denmark)

    Antunes, Francisco; Ribeiro, Bernardete; Pereira, Francisco Camara

    2017-01-01

    In accounting and finance domains, bankruptcy prediction is of great utility for all of the economic stakeholders. The challenge of accurate assessment of business failure prediction, especially under scenarios of financial crisis, is known to be complicated. Although there have been many successful studies on bankruptcy detection, probabilistic approaches have seldom been carried out. In this paper we assume a probabilistic point-of-view by applying Gaussian Processes (GP) in the context of bankruptcy prediction, comparing it against the Support Vector Machines (SVM) and the Logistic Regression (LR). Using real-world bankruptcy data, an in-depth analysis is conducted showing that, in addition to a probabilistic interpretation, the GP can effectively improve the bankruptcy prediction performance with high accuracy when compared to the other approaches. We additionally generate a complete graphical
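
    The probabilistic point of view the paper argues for means outputting a posterior probability of bankruptcy rather than a hard label. As a hedged, deliberately simple illustration of that idea (this is a class-conditional Gaussian classifier, not a Gaussian Process, and every number is invented):

```python
import math

# Minimal probabilistic classifier: class-conditional Gaussians over one
# financial ratio yield a posterior probability of bankruptcy via Bayes'
# rule. Parameters are invented; this illustrates probabilistic output,
# not the paper's GP model.

def gaussian_pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical fitted parameters for a solvency ratio.
mu_bankrupt, sd_bankrupt = 0.2, 0.1
mu_healthy, sd_healthy = 0.6, 0.15
prior_bankrupt = 0.1

def p_bankrupt(x):
    lb = gaussian_pdf(x, mu_bankrupt, sd_bankrupt) * prior_bankrupt
    lh = gaussian_pdf(x, mu_healthy, sd_healthy) * (1 - prior_bankrupt)
    return lb / (lb + lh)

print(round(p_bankrupt(0.25), 3), round(p_bankrupt(0.55), 3))
```

    A graded probability like this supports risk-sensitive decisions (e.g., different credit actions at 0.7 vs 0.05), which a hard SVM-style label cannot.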

  13. Accurate and dynamic predictive model for better prediction in medicine and healthcare.

    Science.gov (United States)

    Alanazi, H O; Abdullah, A H; Qureshi, K N; Ismail, A S

    2018-05-01

    Information and communication technologies (ICTs) have changed the trend into new integrated operations and methods in all fields of life. The health sector has also adopted new technologies to improve its systems and provide better services to customers. Predictive models in health care are likewise influenced by new technologies to predict different disease outcomes. However, existing predictive models still suffer from some limitations in terms of predictive performance. To improve predictive performance, this paper proposes a predictive model that classifies disease predictions into different categories. To evaluate this model, the paper uses traumatic brain injury (TBI) datasets. TBI is one of the most serious diseases worldwide and needs more attention due to its severe impact on human life. The proposed model improves the predictive performance for TBI. The TBI data set was developed, and its features approved, by neurologists. The experimental results show that the proposed model achieves significant results in terms of accuracy, sensitivity, and specificity.
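The three metrics the abstract reports are all derived from a binary confusion matrix. A short sketch, with made-up counts (not the paper's TBI results):

```python
# Accuracy, sensitivity, and specificity from confusion-matrix counts.
def classification_metrics(tp, fp, tn, fn):
    """tp/fp/tn/fn = true/false positives and negatives."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)   # overall hit rate
    sensitivity = tp / (tp + fn)                 # recall on the positive class
    specificity = tn / (tn + fp)                 # recall on the negative class
    return accuracy, sensitivity, specificity

# Hypothetical evaluation on 100 cases.
acc, sens, spec = classification_metrics(tp=40, fp=5, tn=50, fn=5)
print(f"accuracy={acc:.2f} sensitivity={sens:.2f} specificity={spec:.2f}")
```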

  14. Asymptotic Behavior of the Stock Price Distribution Density and Implied Volatility in Stochastic Volatility Models

    International Nuclear Information System (INIS)

    Gulisashvili, Archil; Stein, Elias M.

    2010-01-01

    We study the asymptotic behavior of distribution densities arising in stock price models with stochastic volatility. The main objects of our interest in the present paper are the density of time averages of the squared volatility process and the density of the stock price process in the Stein-Stein and the Heston model. We find explicit formulas for leading terms in asymptotic expansions of these densities and give error estimates. As an application of our results, sharp asymptotic formulas for the implied volatility in the Stein-Stein and the Heston model are obtained.

  15. A new ensemble model for short term wind power prediction

    DEFF Research Database (Denmark)

    Madsen, Henrik; Albu, Razvan-Daniel; Felea, Ioan

    2012-01-01

    As the objective of this study, a non-linear ensemble system is used to develop a new model for predicting wind speed on a short-term time scale. Short-term wind power prediction has become an extremely important field of research for the energy sector. Despite the recent advancements in the research...... of prediction models, it has been observed that different models have different capabilities and that no single model is suitable under all situations. The idea behind EPS (ensemble prediction systems) is to take advantage of the unique features of each subsystem to capture the diverse patterns that exist in the dataset...
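A very simple way to combine subsystems that "have different capabilities" is to weight each forecaster by its recent skill. The sketch below uses inverse-MSE weighting as a stand-in; the paper's ensemble is non-linear and the error values here are invented:

```python
# Skill-weighted ensemble sketch: weight each model by 1/MSE of its
# recent forecast errors, then combine the current forecasts.
def inverse_mse_weights(errors_per_model):
    """Normalized weights proportional to 1/MSE of each model's errors."""
    inv = [len(errs) / sum(e * e for e in errs) for errs in errors_per_model]
    total = sum(inv)
    return [w / total for w in inv]

def ensemble_forecast(forecasts, weights):
    return sum(w * f for w, f in zip(weights, forecasts))

# Hypothetical recent errors (m/s) of three subsystem models.
past_errors = [[0.5, -0.4, 0.6], [1.2, -1.0, 1.1], [0.2, 0.3, -0.25]]
weights = inverse_mse_weights(past_errors)
combined = ensemble_forecast([8.1, 7.4, 8.3], weights)
print(f"weights={[round(w, 2) for w in weights]}, forecast={combined:.2f} m/s")
```

The historically most accurate subsystem (the third) dominates the combination, which is the basic mechanism any weighted EPS exploits.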

  16. Testing the predictive power of nuclear mass models

    International Nuclear Information System (INIS)

    Mendoza-Temis, J.; Morales, I.; Barea, J.; Frank, A.; Hirsch, J.G.; Vieyra, J.C. Lopez; Van Isacker, P.; Velazquez, V.

    2008-01-01

    A number of tests are introduced which probe the ability of nuclear mass models to extrapolate. Three models are analyzed in detail: the liquid drop model, the liquid drop model plus empirical shell corrections and the Duflo-Zuker mass formula. If predicted nuclei are close to the fitted ones, average errors in predicted and fitted masses are similar. However, the challenge of predicting nuclear masses in a region stabilized by shell effects (e.g., the lead region) is far more difficult. The Duflo-Zuker mass formula emerges as a powerful predictive tool.
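The comparison the abstract describes reduces to contrasting a model's rms error on the nuclei it was fitted to with its rms error on nuclei it must extrapolate to. A sketch with made-up mass deviations (MeV), not values from the paper:

```python
# Compare rms error on the fit set vs. an extrapolation set.
import math

def rms_error(predicted, measured):
    """Root-mean-square deviation, same units as the inputs (MeV here)."""
    n = len(predicted)
    return math.sqrt(sum((p - m) ** 2 for p, m in zip(predicted, measured)) / n)

# Hypothetical mass excesses (MeV) for fitted and extrapolated nuclei.
fit_pred, fit_meas = [1.2, -0.8, 0.5], [1.0, -1.0, 0.4]
ext_pred, ext_meas = [2.5, -3.1, 1.8], [1.0, -1.5, 0.6]
print(f"rms (fitted)       = {rms_error(fit_pred, fit_meas):.2f} MeV")
print(f"rms (extrapolated) = {rms_error(ext_pred, ext_meas):.2f} MeV")
```

A large gap between the two rms values, as in this toy case, is the signature of poor extrapolation the tests are designed to expose.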

  17. A new equation of state for better liquid density prediction of natural gas systems

    Science.gov (United States)

    Nwankwo, Princess C.

    Equation of state formulations, modifications and applications have remained active research areas since the success of van der Waals' equation in 1873. The need for better reservoir fluid modeling and characterization is of great importance to petroleum engineers, who deal with thermodynamic properties of petroleum fluids at every stage of the petroleum "life span", from drilling, to production through the wellbore, to transportation, metering and storage. Equation of state methods are far less expensive (in terms of material cost and time) than laboratory or experimental forays, and their results are not too far removed from the limits of acceptable accuracy. In most cases, the degree of accuracy obtained by using various EOSs, though not exceptional, has been acceptable when considering the gain in time. The possibility of obtaining an equation of state which, though simple in form and in use, could further narrow the existing bias between experimentally determined and popular EOS-estimated results spurred the interest that resulted in this study. The chief objective of this research was to develop a new equation of state that more efficiently captures the thermodynamic properties of gas condensate fluids, especially the liquid-phase density, which is the major weakness of other established and popular cubic equations of state. This objective was satisfied by a new semi-analytical three-parameter cubic equation of state, derived by modifying the attraction-term contribution to pressure of the van der Waals EOS without compromising either structural simplicity or accuracy in estimating other vapor-liquid equilibrium properties. Applying the new EOS to single- and multi-component light hydrocarbon fluids recorded far lower error values than the popular two-parameter Peng-Robinson (PR) and three-parameter Patel-Teja (PT) equations of state. Furthermore, this research
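To make the "liquid density from a cubic EOS" step concrete, the sketch below solves the classic van der Waals equation (the starting point the thesis modifies, not its new EOS) for the liquid-like molar-volume root by bisection; the CO2-like constants and conditions are chosen only for illustration:

```python
# Liquid-root sketch with the classic van der Waals EOS (illustration only;
# the thesis derives a different three-parameter cubic EOS).
R = 8.314  # universal gas constant, J/(mol K)

def vdw_pressure(v, t, a, b):
    """van der Waals EOS: P = RT/(V - b) - a/V^2 (molar volume V, SI units)."""
    return R * t / (v - b) - a / (v * v)

def liquid_molar_volume(p, t, a, b, iters=200):
    """Liquid-like (smallest) molar-volume root of P_vdw(V) = p by bisection.

    Near the co-volume b the repulsive term dominates, so P_vdw - p > 0;
    at the ideal-gas estimate RT/p it is negative for a compressed liquid."""
    lo, hi = b * 1.0001, R * t / p
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if vdw_pressure(mid, t, a, b) - p > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical example: CO2-like vdW constants at 280 K and 6 MPa.
a, b = 0.3640, 4.267e-5            # Pa m^6/mol^2 and m^3/mol
v_liq = liquid_molar_volume(6e6, 280.0, a, b)
rho = 0.04401 / v_liq              # kg/m^3, with M(CO2) = 44.01 g/mol
print(f"vdW liquid molar volume = {v_liq:.3e} m^3/mol -> density = {rho:.0f} kg/m^3")
```

The vdW result (roughly 560 kg/m^3) falls well short of the measured liquid CO2 density (around 880 kg/m^3 near these conditions), which is exactly the liquid-density weakness of simple cubic EOSs that the thesis targets.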

  18. Coherent density fluctuation model as a local-scale limit to ATDHF

    International Nuclear Information System (INIS)

    Antonov, A.N.; Petkov, I.Zh.; Stoitsov, M.V.

    1985-04-01

    The local-scale transformation method is used to construct an Adiabatic Time-Dependent Hartree-Fock approach in terms of the local density distribution. The coherent density fluctuation model relations result as a particular case, when the ''flucton'' local density is connected with the plane-wave determinant model function by means of the local-scale coordinate transformation. The collective potential energy expression is obtained and its relation to the nuclear matter energy saturation curve is revealed. (author)

  19. From Predictive Models to Instructional Policies

    Science.gov (United States)

    Rollinson, Joseph; Brunskill, Emma

    2015-01-01

    At their core, Intelligent Tutoring Systems consist of a student model and a policy. The student model captures the state of the student and the policy uses the student model to individualize instruction. Policies require different properties from the student model. For example, a mastery threshold policy requires the student model to have a way…
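The mastery-threshold example in the abstract can be sketched with a standard Bayesian Knowledge Tracing student model: the policy only needs the model to maintain P(skill known), then compares it to a threshold. All parameter values below are illustrative, not from the paper:

```python
# Mastery-threshold policy over a Bayesian Knowledge Tracing student model.
def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """Posterior P(known) after one observed answer, then a learning step."""
    if correct:
        num = p_know * (1 - p_slip)
        den = num + (1 - p_know) * p_guess
    else:
        num = p_know * p_slip
        den = num + (1 - p_know) * (1 - p_guess)
    posterior = num / den
    return posterior + (1 - posterior) * p_learn

def mastery_policy(p_know, threshold=0.95):
    """Keep practicing a skill until the student model says it is mastered."""
    return "advance" if p_know >= threshold else "practice"

p = 0.3  # prior P(known)
for answer in [True, True, False, True, True, True]:
    p = bkt_update(p, answer)
print(f"P(known) = {p:.3f} -> {mastery_policy(p)}")
```

This shows the division of labor the abstract describes: the model (`bkt_update`) tracks student state, while the policy (`mastery_policy`) turns that state into an instructional decision.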

  20. A two-population sporadic meteoroid bulk density distribution and its implications for environment models

    Science.gov (United States)

    Moorhead, Althea V.; Blaauw, Rhiannon C.; Moser, Danielle E.; Campbell-Brown, Margaret D.; Brown, Peter G.; Cooke, William J.

    2017-12-01

    The bulk density of a meteoroid affects its dynamics in space, its ablation in the atmosphere, and the damage it does to spacecraft and lunar or planetary surfaces. Meteoroid bulk densities are also notoriously difficult to measure, and we are typically forced to assume a density or attempt to measure it via a proxy. In this paper, we construct a density distribution for sporadic meteoroids based on existing density measurements. We considered two possible proxies for density: the KB parameter introduced by Ceplecha and the Tisserand parameter with respect to Jupiter, TJ. Although KB is frequently cited as a proxy for meteoroid material properties, we find that it is poorly correlated with ablation-model-derived densities. We therefore follow the example of Kikwaya et al. in associating density with the Tisserand parameter. We fit two density distributions: one to meteoroids originating from Halley-type comets (TJ < 2) and one to those on orbits with TJ > 2; the resulting two-population density distribution is the most detailed sporadic meteoroid density distribution justified by the available data. Finally, we discuss the implications for meteoroid environment models and spacecraft risk assessments. We find that correcting for density increases the fraction of meteoroid-induced spacecraft damage produced by the helion/antihelion source.
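The Tisserand parameter used to split the two populations is a simple function of the orbital elements. A sketch of the split (the TJ = 2 threshold follows the abstract; the orbital elements for Halley's comet are standard catalog values):

```python
# Tisserand parameter with respect to Jupiter and a two-population split.
import math

A_J = 5.204  # Jupiter's semi-major axis, au

def tisserand_J(a_au, e, i_deg):
    """T_J = a_J/a + 2*sqrt((a/a_J)*(1 - e^2)) * cos(i)."""
    return (A_J / a_au
            + 2.0 * math.sqrt((a_au / A_J) * (1 - e ** 2))
            * math.cos(math.radians(i_deg)))

def density_class(tj):
    """Halley-type orbits (T_J < 2) get the low-density population."""
    return "low-density" if tj < 2.0 else "high-density"

# Halley's comet: a = 17.8 au, e = 0.967, i = 162 deg (retrograde).
tj_halley = tisserand_J(17.8, 0.967, 162.0)
print(f"T_J(Halley) = {tj_halley:.2f} -> {density_class(tj_halley)}")
```

Halley's retrograde, eccentric orbit gives TJ well below 2, so a meteoroid on such an orbit would draw its bulk density from the low-density population.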