WorldWideScience

Sample records for model predicts density

  1. A predictive model for the tokamak density limit

    International Nuclear Information System (INIS)

    Teng, Q.; Brennan, D. P.; Delgado-Aparicio, L.; Gates, D. A.; Swerdlow, J.; White, R. B.

    2016-01-01

    We reproduce the Greenwald density limit in all tokamak experiments by using a phenomenologically correct model with parameters in the range of experiments. A simple model of equilibrium evolution and local power balance inside the island has been implemented to calculate radiation-driven thermo-resistive tearing mode growth and thereby explain the density limit. Strong destabilization of the tearing mode, due to an imbalance of local Ohmic heating and radiative cooling in the island, predicts the density limit within a few percent. Furthermore, we find that the density limit is a local edge limit and depends only weakly on impurity densities. Our results are robust to substantial variation of the model parameters within the range of experiments.
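The local power-balance argument can be illustrated with a toy comparison of Ohmic heating against impurity radiation inside the island. All symbols and parameter values below are illustrative assumptions, not the paper's model:

```python
def ohmic_heating(eta, j):
    """Local Ohmic heating density eta * j^2 (W/m^3)."""
    return eta * j**2

def radiative_cooling(n_e, n_z, L_z):
    """Impurity line-radiation density n_e * n_z * L_z (W/m^3)."""
    return n_e * n_z * L_z

def island_unstable(n_e, n_z, L_z, eta, j):
    """Radiation-driven growth: cooling exceeds heating inside the island."""
    return radiative_cooling(n_e, n_z, L_z) > ohmic_heating(eta, j)

# Illustrative sweep: raise the electron density until the balance tips.
eta, j = 1e-7, 1e6       # resistivity (Ohm*m) and current density (A/m^2), assumed
L_z = 1e-33              # impurity cooling coefficient (W*m^3), assumed
f_z = 0.01               # assumed impurity fraction, n_z = f_z * n_e
densities = [1e19, 5e19, 1e20, 5e20]
flags = [island_unstable(n, f_z * n, L_z, eta, j) for n in densities]
```

Because cooling scales as the density squared while Ohmic heating does not depend on density, the balance tips at some critical density, which is the qualitative origin of a density limit in this picture.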

  2. Predicting critical transitions in dynamical systems from time series using nonstationary probability density modeling.

    Science.gov (United States)

    Kwasniok, Frank

    2013-11-01

    A time series analysis method for predicting the probability density of a dynamical system is proposed. A nonstationary parametric model of the probability density is estimated from data within a maximum likelihood framework and then extrapolated to forecast the future probability density and probe the system for critical transitions or tipping points. Parameter uncertainty is accounted for fully and systematically. The technique is generic and independent of the underlying dynamics of the system. The method is verified on simulated data and then applied to the prediction of Arctic sea-ice extent.
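As a hedged sketch of the general idea (not Kwasniok's actual parametric family), one can fit a Gaussian with a linearly drifting mean by maximum likelihood and extrapolate the density forward in time; for fixed-variance Gaussian noise the drift parameters reduce to ordinary least squares:

```python
import math

# Synthetic time series whose mean drifts linearly (stand-in data).
ts = list(range(50))
xs = [0.1 * t + 0.5 * math.sin(t) for t in ts]

# MLE of a Gaussian with nonstationary mean mu(t) = a + b*t:
# for fixed-variance Gaussian noise this is ordinary least squares.
n = len(ts)
tbar = sum(ts) / n
xbar = sum(xs) / n
b = sum((t - tbar) * (x - xbar) for t, x in zip(ts, xs)) \
    / sum((t - tbar) ** 2 for t in ts)
a = xbar - b * tbar
sigma2 = sum((x - (a + b * t)) ** 2 for t, x in zip(ts, xs)) / n  # MLE variance

def forecast_density(x, t):
    """Extrapolated probability density of the observable at a future time t."""
    mu = a + b * t
    return math.exp(-(x - mu) ** 2 / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)

p_peak = forecast_density(a + b * 60, 60)   # density at the forecast mean, t = 60
```

The extrapolated density can then be inspected for signs of an approaching transition, e.g. a mean drifting toward a known threshold or a widening variance.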

  3. Predicting grizzly bear density in western North America.

    Science.gov (United States)

    Mowat, Garth; Heard, Douglas C; Schwarz, Carl J

    2013-01-01

    Conservation of grizzly bears (Ursus arctos) is often controversial, and the disagreement is frequently focused on the estimates of density used to calculate allowable kill. Many recent estimates of grizzly bear density are now available, but field-based estimates will never be available for more than a small portion of hunted populations. Current methods of predicting density in areas of management interest are subjective and untested. Objective methods have been proposed, but these statistical models are so dependent on results from individual study areas that the models do not generalize well. We built regression models to relate grizzly bear density to ultimate measures of ecosystem productivity and mortality for interior and coastal ecosystems in North America. We used 90 measures of grizzly bear density in interior ecosystems, of which 14 were from areas currently known to be unoccupied by grizzly bears. In coastal areas, we used 17 measures of density, including 2 from unoccupied areas. Our best model for coastal areas included a negative relationship with tree cover and positive relationships with the proportion of salmon in the diet and topographic ruggedness, which was correlated with precipitation. Our best interior model included 3 variables that indexed terrestrial productivity, 1 describing vegetation cover, 2 indices of human use of the landscape, and an index of topographic ruggedness. We used our models to predict current population sizes across Canada and present these as alternatives to current population estimates. Our models predict fewer grizzly bears in British Columbia but more bears in Canada than the latest status review. These predictions can be used to assess population status, to set limits for total human-caused mortality, and for conservation planning; but because our predictions are static, they cannot be used to assess population trend.

  4. Predicting grizzly bear density in western North America.

    Directory of Open Access Journals (Sweden)

    Garth Mowat

    Full Text Available Conservation of grizzly bears (Ursus arctos) is often controversial, and the disagreement is frequently focused on the estimates of density used to calculate allowable kill. Many recent estimates of grizzly bear density are now available, but field-based estimates will never be available for more than a small portion of hunted populations. Current methods of predicting density in areas of management interest are subjective and untested. Objective methods have been proposed, but these statistical models are so dependent on results from individual study areas that the models do not generalize well. We built regression models to relate grizzly bear density to ultimate measures of ecosystem productivity and mortality for interior and coastal ecosystems in North America. We used 90 measures of grizzly bear density in interior ecosystems, of which 14 were from areas currently known to be unoccupied by grizzly bears. In coastal areas, we used 17 measures of density, including 2 from unoccupied areas. Our best model for coastal areas included a negative relationship with tree cover and positive relationships with the proportion of salmon in the diet and topographic ruggedness, which was correlated with precipitation. Our best interior model included 3 variables that indexed terrestrial productivity, 1 describing vegetation cover, 2 indices of human use of the landscape, and an index of topographic ruggedness. We used our models to predict current population sizes across Canada and present these as alternatives to current population estimates. Our models predict fewer grizzly bears in British Columbia but more bears in Canada than the latest status review. These predictions can be used to assess population status, to set limits for total human-caused mortality, and for conservation planning; but because our predictions are static, they cannot be used to assess population trend.

  5. Predicting stem borer density in maize using RapidEye data and generalized linear models

    Science.gov (United States)

    Abdel-Rahman, Elfatih M.; Landmann, Tobias; Kyalo, Richard; Ong'amo, George; Mwalusepo, Sizah; Sulieman, Saad; Ru, Bruno Le

    2017-05-01

    Average maize yield in eastern Africa is 2.03 t ha-1, compared to a global average of 6.06 t ha-1, due to biotic and abiotic constraints. Amongst the biotic production constraints in Africa, stem borers are the most injurious. In eastern Africa, maize yield losses due to stem borers are currently estimated at between 12% and 21% of total production. The objective of the present study was to explore the potential of RapidEye spectral data for assessing stem borer larva densities in maize fields at two study sites in Kenya. RapidEye images were acquired for the Bomet (western Kenya) site on 9 December 2014 and 27 January 2015, and for Machakos (eastern Kenya) on 3 January 2015. Five RapidEye spectral bands as well as 30 spectral vegetation indices (SVIs) were used to predict per-field maize stem borer larva densities using generalized linear models (GLMs), assuming Poisson ('Po') and negative binomial ('NB') distributions. Root mean square error (RMSE) and ratio of prediction to deviation (RPD) statistics were used to assess model performance with a leave-one-out cross-validation approach. The zero-inflated NB ('ZINB') models outperformed the 'NB' models, and stem borer larva densities could only be predicted during the mid growing season, in December and early January, at the two study sites (RMSE = 0.69-1.06 and RPD = 8.25-19.57). Overall, all models performed similarly whether all 30 SVIs (non-nested) or only the significant (nested) SVIs were used. The models developed could improve decision making for controlling maize stem borers within integrated pest management (IPM) interventions.
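A minimal sketch of the kind of Poisson GLM (log link) used to relate a vegetation index to larva counts, fitted by Newton-Raphson/IRLS on synthetic data; the single "SVI" feature and all values are made up, not the study's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the study's data: one vegetation index per field
# and a Poisson-distributed larva count whose log-mean depends on it.
n = 400
svi = rng.normal(size=n)
X = np.column_stack([np.ones(n), svi])        # intercept + one SVI feature
beta_true = np.array([1.0, 0.5])
y = rng.poisson(np.exp(X @ beta_true))

# Fit the Poisson GLM with log link by Newton-Raphson (equivalently IRLS).
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)                     # model mean count per field
    grad = X.T @ (y - mu)                     # score vector
    hess = X.T @ (X * mu[:, None])            # Fisher information
    beta = beta + np.linalg.solve(hess, grad)

pred = np.exp(X @ beta)
rmse = float(np.sqrt(np.mean((y - pred) ** 2)))
```

A negative binomial or zero-inflated variant replaces the Poisson likelihood but keeps the same fit-then-score-by-RMSE workflow.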

  6. The electron density and temperature distributions predicted by bow shock models of Herbig-Haro objects

    International Nuclear Information System (INIS)

    Noriega-Crespo, A.; Bohm, K.H.; Raga, A.C.

    1990-01-01

    The observable spatial electron density and temperature distributions for a series of simple bow shock models, which are of special interest in the study of Herbig-Haro (H-H) objects, are computed. The spatial electron density and temperature distributions are derived from forbidden-line ratios. It should be possible to use these results to recognize whether an observed electron density or temperature distribution can be attributed to a bow shock, as is the case in some Herbig-Haro objects. As an example, the empirical and predicted distributions for H-H 1 are compared. The predicted electron temperature distributions give the correct temperature range, and they show very good diagnostic possibilities if the forbidden O III (4959 + 5007)/4363 line ratio is used.
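For context, the [O III] (4959 + 5007)/4363 diagnostic has a standard textbook form in the low-density limit (approximate coefficients from Osterbrock's treatment; this is background material, not the paper's model), which can be inverted for the electron temperature:

```python
import math

def oiii_ratio(T):
    """[O III] (4959+5007)/4363 intensity ratio in the low-density limit.
    Approximate textbook form: R ~ 7.90 * exp(3.29e4 / T)."""
    return 7.90 * math.exp(3.29e4 / T)

def electron_temperature(R):
    """Invert the low-density ratio for the electron temperature (K)."""
    return 3.29e4 / math.log(R / 7.90)

# Round trip: a 12,000 K plasma should be recovered from its own ratio.
T_e = electron_temperature(oiii_ratio(1.2e4))
```

At finite electron density a collisional-de-excitation correction enters the denominator, which is why the ratio probes temperature best at low density.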

  7. Comparison of several measure-correlate-predict models using support vector regression techniques to estimate wind power densities. A case study

    International Nuclear Information System (INIS)

    Díaz, Santiago; Carta, José A.; Matías, José M.

    2017-01-01

    Highlights: • Eight measure-correlate-predict (MCP) models used to estimate the wind power densities (WPDs) at a target site are compared. • Support vector regressions are used as the main prediction techniques in the proposed MCPs. • The most precise MCP uses two sub-models which predict wind speed and air density in an unlinked manner. • The most precise model allows the construction of a bivariable (wind speed and air density) WPD probability density function. • MCP models trained to minimise wind speed prediction error do not minimise WPD prediction error. - Abstract: The long-term annual mean wind power density (WPD) is an important indicator of wind as a power source which is usually included in regional wind resource maps as useful prior information to identify potentially attractive sites for the installation of wind projects. In this paper, a comparison is made of eight proposed Measure-Correlate-Predict (MCP) models to estimate the WPDs at a target site. Seven of these models use the Support Vector Regression (SVR) technique and the eighth the Multiple Linear Regression (MLR) technique, which serves as a basis for comparing the performance of the other models. In addition, a wrapper technique with 10-fold cross-validation has been used to select the optimal set of input features for the SVR and MLR models. Some of the eight models were trained to directly estimate the mean hourly WPDs at a target site. Others, however, were first trained to estimate the parameters on which the WPD depends (i.e. wind speed and air density) and then, using these parameters, the target site mean hourly WPDs. The explanatory features considered are different combinations of the mean hourly wind speeds, wind directions and air densities recorded in 2014 at ten weather stations in the Canary Archipelago (Spain). The conclusions that can be drawn from the study undertaken include the argument that the most accurate method for the long-term estimation of WPDs requires the execution of a

  8. Referenceless Prediction of Perceptual Fog Density and Perceptual Image Defogging.

    Science.gov (United States)

    Choi, Lark Kwon; You, Jaehee; Bovik, Alan Conrad

    2015-11-01

    We propose a referenceless perceptual fog density prediction model based on natural scene statistics (NSS) and fog-aware statistical features. The proposed model, called Fog Aware Density Evaluator (FADE), predicts the visibility of a foggy scene from a single image without reference to a corresponding fog-free image, without dependence on salient objects in a scene, without side geographical camera information, without estimating a depth-dependent transmission map, and without training on human-rated judgments. FADE only makes use of measurable deviations from statistical regularities observed in natural foggy and fog-free images. Fog-aware statistical features that define the perceptual fog density index derive from a space-domain NSS model and the observed characteristics of foggy images. FADE not only predicts perceptual fog density for the entire image but also provides a local fog density index for each patch. The fog density predicted by FADE correlates well with human judgments of fog density obtained in a subjective study on a large foggy image database. As applications, FADE not only accurately assesses the performance of defogging algorithms designed to enhance the visibility of foggy images, but is also well suited for image defogging. A new FADE-based referenceless perceptual image defogging method, dubbed DEnsity of Fog Assessment-based DEfogger (DEFADE), achieves better results for darker, denser foggy images as well as on standard foggy images than state-of-the-art defogging methods. A software release of FADE and DEFADE is available online for public use: http://live.ece.utexas.edu/research/fog/index.html.

  9. Density-dependent microbial turnover improves soil carbon model predictions of long-term litter manipulations

    Science.gov (United States)

    Georgiou, Katerina; Abramoff, Rose; Harte, John; Riley, William; Torn, Margaret

    2017-04-01

    Climatic, atmospheric, and land-use changes all have the potential to alter soil microbial activity, either directly through abiotic effects on soil or indirectly through changes in plant inputs. Recently, many promising microbial models of soil organic carbon (SOC) decomposition have been proposed to advance understanding and prediction of climate and carbon (C) feedbacks. Most of these models, however, exhibit unrealistic oscillatory behavior and SOC insensitivity to long-term changes in C inputs. Here we diagnose the sources of instability in four models that span the range of complexity of these recent microbial models, by sequentially adding complexity to a simple model to include microbial physiology, a mineral sorption isotherm, and enzyme dynamics. We propose a formulation that introduces density-dependence of microbial turnover, which acts to limit population sizes and reduce oscillations. We compare these models with results from 24 long-term C-input field manipulations, including the Detritus Input and Removal Treatment (DIRT) experiments, to show that there are clear metrics that can be used to distinguish and validate the inherent dynamics of each model structure. We find that widely used first-order models and microbial models without density-dependence cannot readily capture the range of long-term responses observed across the DIRT experiments, as a direct consequence of their model structures. The proposed formulation improves predictions of long-term C-input changes and implies greater SOC storage from CO2-fertilization-driven increases in C inputs over the coming century than common microbial models do. Finally, we discuss our findings in the context of improving microbial model behavior for inclusion in Earth System Models.
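The stabilising role of density-dependent turnover can be sketched with a minimal two-pool model in which microbial turnover scales as k·B^β with β > 1; all parameter values are illustrative, not fitted to DIRT data:

```python
# Minimal two-pool soil C sketch: substrate S and microbial biomass B.
# Turnover k * B**beta with beta > 1 is the density-dependent term that
# limits population size and damps oscillations; values are illustrative.
I, u, eps, k = 1.0, 0.2, 0.5, 0.1   # input, uptake rate, efficiency, turnover

def run(beta, t_end=2000.0, dt=0.01):
    """Forward-Euler integration of dS/dt and dB/dt."""
    S, B = 5.0, 1.0
    for _ in range(int(t_end / dt)):
        uptake = u * B * S
        dS = I - uptake                     # substrate: inputs minus uptake
        dB = eps * uptake - k * B ** beta   # biomass: growth minus turnover
        S += dS * dt
        B += dB * dt
    return S, B

S_end, B_end = run(beta=2.0)

# Analytic steady state: eps * I = k * B**beta  =>  B* = (eps*I/k)**(1/beta)
B_star = (eps * I / k) ** (1 / 2.0)
```

With β = 1 (the common first-order microbial turnover) the same system is far more prone to persistent oscillation; the superlinear term pulls the biomass back toward its steady state.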

  10. Towards predicting wading bird densities from predicted prey densities in a post-barrage Severn estuary

    International Nuclear Information System (INIS)

    Goss-Custard, J.D.; McGrorty, S.; Clarke, R.T.; Pearson, B.; Rispin, W.E.; Durell, S.E.A. le V. dit; Rose, R.J.; Warwick, R.M.; Kirby, R.

    1991-01-01

    A winter survey of seven species of wading birds in six estuaries in south-west England was made to develop a method for predicting bird densities should a tidal power barrage be built on the Severn estuary. Within most estuaries, bird densities correlated with the densities of widely taken prey species. A barrage would substantially reduce the area of intertidal flats available at low water for the birds to feed, but invertebrate density could increase in the generally more benign post-barrage environmental conditions. Wader densities would have to increase approximately twofold to allow the same overall numbers of birds to remain post-barrage as occur on the Severn at present. Provisional estimates are given of the increases in prey density required to allow bird densities to increase by this amount. With the exception of the prey of dunlin, these fall well within the ranges of densities found in other estuaries, and so could in principle be attained in the post-barrage Severn. An attempt was made to derive equations with which to predict post-barrage densities of invertebrates from easily measured, static environmental variables. The fact that a site was in the Severn had a significant additional effect on invertebrate density in seven cases. This suggests that there is a special feature of the Severn, probably one associated with its highly dynamic nature. This factor must be identified if the post-barrage densities of invertebrates are to be successfully predicted. (author)

  11. Thermospheric mass density variations during geomagnetic storms and a prediction model based on the merging electric field

    Science.gov (United States)

    Liu, R.; Lühr, H.; Doornbos, E.; Ma, S.-Y.

    2010-09-01

    With the help of four years (2002-2005) of CHAMP accelerometer data we have investigated the dependence of low and mid latitude thermospheric density on the merging electric field, Em, during major magnetic storms. Altogether 30 intensive storm events (Dstmin < -100 nT) are chosen for a statistical study. In order to achieve a good correlation, Em is preconditioned. Contrary to general opinion, Em has to be applied without saturation effect in order to obtain good results for magnetic storms of all activity levels. The memory effect of the thermosphere is accounted for by a weighted integration of Em over the past 3 h. In addition, a lag time of the mass density response to solar wind input of 0 to 4.5 h, depending on latitude and local time, is considered. A linear model using the preconditioned Em as the main controlling parameter for predicting mass density changes during magnetic storms is developed: ρ = 0.5 Em + ρamb, where ρamb is based on the mean density during the quiet day before the storm. We show that this simple relation predicts all storm-induced mass density variations at CHAMP altitude fairly well, especially if orbital averages are considered.

  12. A coupled diffusion-fluid pressure model to predict cell density distribution for cells encapsulated in a porous hydrogel scaffold under mechanical loading.

    Science.gov (United States)

    Zhao, Feihu; Vaughan, Ted J; Mc Garrigle, Myles J; McNamara, Laoise M

    2017-10-01

    Tissue formation within tissue engineering (TE) scaffolds is preceded by growth of the cells throughout the scaffold volume and attachment of cells to the scaffold substrate. It is known that mechanical stimulation, in the form of fluid perfusion or mechanical strain, enhances cell differentiation and overall tissue formation. However, due to the complex multi-physics environment of cells within TE scaffolds, cell transport under mechanical stimulation is not fully understood. Therefore, in this study, we have developed a coupled multiphysics model to predict cell density distribution in a TE scaffold. In this model, cell transport is modelled as a thermal conduction process, driven by the pore fluid pressure under applied loading. As a case study, the model is used to predict the cell density patterns of pre-osteoblast MC3T3-E1 cells under a range of different loading regimes, to obtain an understanding of the mechanical stimulation that will enhance cell density distribution within TE scaffolds. The results of this study demonstrate that fluid perfusion can result in a higher cell density in the scaffold region close to the outlet, whereas the cell density distribution under mechanical compression was similar to that under static conditions. More importantly, the study provides a novel computational approach to predict cell distribution in TE scaffolds under mechanical loading.

  13. Using Clinical Factors and Mammographic Breast Density to Estimate Breast Cancer Risk: Development and Validation of a New Predictive Model

    Science.gov (United States)

    Tice, Jeffrey A.; Cummings, Steven R.; Smith-Bindman, Rebecca; Ichikawa, Laura; Barlow, William E.; Kerlikowske, Karla

    2009-01-01

    Background Current models for assessing breast cancer risk are complex and do not include breast density, a strong risk factor for breast cancer that is routinely reported with mammography. Objective To develop and validate an easy-to-use breast cancer risk prediction model that includes breast density. Design Empirical model based on Surveillance, Epidemiology, and End Results incidence, and relative hazards from a prospective cohort. Setting Screening mammography sites participating in the Breast Cancer Surveillance Consortium. Patients 1 095 484 women undergoing mammography who had no previous diagnosis of breast cancer. Measurements Self-reported age, race or ethnicity, family history of breast cancer, and history of breast biopsy. Community radiologists rated breast density by using 4 Breast Imaging Reporting and Data System categories. Results During 5.3 years of follow-up, invasive breast cancer was diagnosed in 14 766 women. The breast density model was well calibrated overall (expected–observed ratio, 1.03 [95% CI, 0.99 to 1.06]) and in racial and ethnic subgroups. It had modest discriminatory accuracy (concordance index, 0.66 [CI, 0.65 to 0.67]). Women with low-density mammograms had 5-year risks less than 1.67% unless they had a family history of breast cancer and were older than age 65 years. Limitation The model has only modest ability to discriminate between women who will develop breast cancer and those who will not. Conclusion A breast cancer prediction model that incorporates routinely reported measures of breast density can estimate 5-year risk for invasive breast cancer. Its accuracy needs to be further evaluated in independent populations before it can be recommended for clinical use. PMID:18316752
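The concordance index reported above (0.66) measures pairwise discrimination: the fraction of (case, non-case) pairs in which the case was assigned the higher predicted risk. A minimal sketch of its computation on toy data (risks and outcomes invented for illustration):

```python
def concordance_index(risks, events):
    """C-index: over all pairs of one case (event = 1) and one non-case
    (event = 0), the fraction in which the case has the higher predicted
    risk; ties count one half."""
    conc, pairs = 0.0, 0
    for r_i, e_i in zip(risks, events):
        for r_j, e_j in zip(risks, events):
            if e_i == 1 and e_j == 0:
                pairs += 1
                if r_i > r_j:
                    conc += 1.0
                elif r_i == r_j:
                    conc += 0.5
    return conc / pairs

# Toy cohort: predicted 5-year risks and observed cancer status (1 = case).
risks  = [0.01, 0.02, 0.05, 0.06, 0.08]
events = [0,    0,    1,    0,    1]
c = concordance_index(risks, events)
```

A value of 0.5 is chance-level ranking and 1.0 is perfect ranking; the 0.66 above sits in the "modest discrimination" range the authors describe.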

  14. Prediction of two-phase mixture density using artificial neural networks

    International Nuclear Information System (INIS)

    Lombardi, C.; Mazzola, A.

    1997-01-01

    In nuclear power plants, the density of boiling mixtures has a significant relevance due to its influence on the neutronic balance, the power distribution and the reactor dynamics. Since the determination of the two-phase mixture density on a purely analytical basis is in fact impractical in many situations of interest, heuristic relationships have been developed based on the parameters describing the two-phase system. However, the best or even a good structure for the correlation cannot be determined in advance, also considering that it is usually desired to represent the experimental data with the most compact equation. A possible alternative to empirical correlations is the use of artificial neural networks, which allow one to model complex systems without requiring the explicit formulation of the relationships existing among the variables. In this work, the neural network methodology was applied to predict the density data of two-phase mixtures up-flowing in adiabatic channels under different experimental conditions. The trained network predicts the density data with a root-mean-square error of 5.33%, with ∼93% of the data points predicted within 10%. When compared with those of two conventional well-proven correlations, i.e. the Zuber-Findlay and the CISE correlations, the neural network performances are significantly better. In spite of the good accuracy of the neural network predictions, the 'black-box' character of the neural model does not allow an easy physical interpretation of the knowledge encoded in the network weights. Therefore, the neural network methodology has the advantage of not requiring a formal correlation structure and of giving very accurate results, but at the expense of a loss of model transparency. (author)
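For reference, the Zuber-Findlay comparison correlation is a drift-flux model: a void fraction built from a distribution parameter C0 and a drift velocity Vgj gives the mixture density. A sketch with illustrative constants and fluid properties (not the paper's fitted values):

```python
def zuber_findlay_density(j_g, j_l, rho_g, rho_l, C0=1.13, V_gj=0.24):
    """Drift-flux estimate of the two-phase mixture density (kg/m^3).
    Void fraction: alpha = j_g / (C0 * (j_g + j_l) + V_gj), where j_g and
    j_l are the gas and liquid superficial velocities (m/s). C0 and V_gj
    here are generic illustrative values, not fitted constants."""
    alpha = j_g / (C0 * (j_g + j_l) + V_gj)
    return alpha * rho_g + (1.0 - alpha) * rho_l

# Steam-water at roughly 7 MPa (approximate phase densities, kg/m^3).
rho_mix = zuber_findlay_density(j_g=2.0, j_l=1.0, rho_g=36.5, rho_l=740.0)
```

The neural network in the paper replaces this fixed algebraic structure with a learned mapping from the same kind of flow parameters to density.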

  15. Thermospheric mass density variations during geomagnetic storms and a prediction model based on the merging electric field

    Directory of Open Access Journals (Sweden)

    R. Liu

    2010-09-01

    Full Text Available With the help of four years (2002-2005) of CHAMP accelerometer data we have investigated the dependence of low and mid latitude thermospheric density on the merging electric field, Em, during major magnetic storms. Altogether 30 intensive storm events (Dstmin < -100 nT) are chosen for a statistical study. In order to achieve a good correlation, Em is preconditioned. Contrary to general opinion, Em has to be applied without saturation effect in order to obtain good results for magnetic storms of all activity levels. The memory effect of the thermosphere is accounted for by a weighted integration of Em over the past 3 h. In addition, a lag time of the mass density response to solar wind input of 0 to 4.5 h, depending on latitude and local time, is considered. A linear model using the preconditioned Em as the main controlling parameter for predicting mass density changes during magnetic storms is developed: ρ = 0.5 Em + ρamb, where ρamb is based on the mean density during the quiet day before the storm. We show that this simple relation predicts all storm-induced mass density variations at CHAMP altitude fairly well, especially if orbital averages are considered.
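A hedged sketch of the prediction recipe described above. The paper's exact weighting function over the past 3 h is not reproduced here, so a linearly decaying weight is assumed purely for illustration, and units are nominal (Em in mV/m, ρ in 10^-12 kg/m^3):

```python
def weighted_em(em_history, dt_h=0.5):
    """Weighted average of Em over the past 3 h of half-hourly samples.
    The linearly decaying weight (newest sample weighted most) is an
    illustrative assumption, not the paper's weighting function."""
    window = int(3.0 / dt_h)                 # number of samples covering 3 h
    recent = em_history[-window:]
    weights = [i + 1 for i in range(len(recent))]
    return sum(w * e for w, e in zip(weights, recent)) / sum(weights)

def predict_density(em_history, rho_amb):
    """Linear storm model: rho = 0.5 * <Em> + rho_amb, with rho_amb the
    quiet-time ambient density from the day before the storm."""
    return 0.5 * weighted_em(em_history) + rho_amb

rho = predict_density([4.0] * 12, rho_amb=3.0)   # constant Em of 4 mV/m
```

With a constant Em the weighted average collapses to that constant, so the example simply returns 0.5 × 4 + 3.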

  16. Alcator C-Mod predictive modeling

    International Nuclear Information System (INIS)

    Pankin, Alexei; Bateman, Glenn; Kritz, Arnold; Greenwald, Martin; Snipes, Joseph; Fredian, Thomas

    2001-01-01

    Predictive simulations for the Alcator C-mod tokamak [I. Hutchinson et al., Phys. Plasmas 1, 1511 (1994)] are carried out using the BALDUR integrated modeling code [C. E. Singer et al., Comput. Phys. Commun. 49, 275 (1988)]. The results are obtained for temperature and density profiles using the Multi-Mode transport model [G. Bateman et al., Phys. Plasmas 5, 1793 (1998)] as well as the mixed-Bohm/gyro-Bohm transport model [M. Erba et al., Plasma Phys. Controlled Fusion 39, 261 (1997)]. The simulated discharges are characterized by very high plasma density in both low and high modes of confinement. The predicted profiles for each of the transport models match the experimental data about equally well in spite of the fact that the two models have different dimensionless scalings. Average relative rms deviations are less than 8% for the electron density profiles and 16% for the electron and ion temperature profiles

  17. Voxel-wise prostate cell density prediction using multiparametric magnetic resonance imaging and machine learning.

    Science.gov (United States)

    Sun, Yu; Reynolds, Hayley M; Wraith, Darren; Williams, Scott; Finnegan, Mary E; Mitchell, Catherine; Murphy, Declan; Haworth, Annette

    2018-04-26

    There are currently no methods to estimate cell density in the prostate. This study aimed to develop predictive models to estimate prostate cell density from multiparametric magnetic resonance imaging (mpMRI) data at a voxel level using machine learning techniques. In vivo mpMRI data were collected from 30 patients before radical prostatectomy. Sequences included T2-weighted imaging, diffusion-weighted imaging and dynamic contrast-enhanced imaging. Ground truth cell density maps were computed from histology and co-registered with mpMRI. Feature extraction and selection were performed on the mpMRI data. Final models were fitted using three regression algorithms: multivariate adaptive regression splines (MARS), polynomial regression (PR) and generalised additive models (GAM). Model parameters were optimised using leave-one-out cross-validation on the training data, and model performance was evaluated on test data using root mean square error (RMSE) measurements. Predictive models to estimate voxel-wise prostate cell density were successfully trained and tested using the three algorithms. The best model (GAM) achieved an RMSE of 1.06 (± 0.06) × 10^3 cells/mm^2 and a relative deviation of 13.3 ± 0.8%. Prostate cell density can thus be estimated quantitatively and non-invasively from high-quality co-registered mpMRI data at a voxel level. These cell density predictions could be used for tissue classification, treatment response evaluation and personalised radiotherapy.
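The leave-one-out cross-validation protocol used for model tuning can be sketched with a simple linear fit standing in for the MARS/PR/GAM regressors; the (feature, cell density) pairs below are toy values, not the study's data:

```python
import math

# Toy stand-in for (mpMRI feature, cell density) pairs, one per sample.
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1, 10.9]

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    xb, yb = sum(xs) / n, sum(ys) / n
    b = sum((x - xb) * (y - yb) for x, y in zip(xs, ys)) \
        / sum((x - xb) ** 2 for x in xs)
    return yb - b * xb, b

# Leave-one-out: hold out each sample in turn, fit on the rest, predict it.
errs = []
for i in range(len(xs)):
    a, b = fit_line(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
    errs.append(ys[i] - (a + b * xs[i]))

rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
```

Each sample is scored by a model that never saw it, so the resulting RMSE is an honest estimate of out-of-sample error even with only 30 patients.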

  18. Prediction of density limits in tokamaks: Theory, comparison with experiment, and application to the proposed Fusion Ignition Research Experiment

    International Nuclear Information System (INIS)

    Stacey, Weston M.

    2002-01-01

    A framework for the predictive calculation of density limits in future tokamaks is proposed. Theoretical models for different density limit phenomena are summarized, and the requirements for additional models are identified. These theoretical density limit models have been incorporated into a relatively simple, but phenomenologically comprehensive, integrated numerical calculation of the core, edge, and divertor plasmas and of the recycling neutrals, in order to obtain plasma parameters needed for the evaluation of the theoretical models. A comparison of these theoretical predictions with observed density limits in current experiments is summarized. A model for the calculation of edge pedestal parameters, which is needed in order to apply the density limit predictions to future tokamaks, is summarized. An application to predict the proximity to density limits and the edge pedestal parameters of the proposed Fusion Ignition Research Experiment is described

  19. SRMDAP: SimRank and Density-Based Clustering Recommender Model for miRNA-Disease Association Prediction

    Directory of Open Access Journals (Sweden)

    Xiaoying Li

    2018-01-01

    Full Text Available Aberrant expression of microRNAs (miRNAs) can be applied for the diagnosis, prognosis, and treatment of human diseases. Identifying the relationship between miRNAs and human disease is important for further investigating the pathogenesis of human diseases. However, experimental identification of the associations between diseases and miRNAs is time-consuming and expensive. Computational methods are efficient approaches to determine the potential associations between diseases and miRNAs. This paper presents a new computational method based on the SimRank and density-based clustering recommender model for miRNA-disease association prediction (SRMDAP). An AUC of 0.8838 under leave-one-out cross-validation, together with case studies, suggests the excellent performance of SRMDAP in predicting miRNA-disease associations. SRMDAP can also predict diseases without any related miRNAs and miRNAs without any related diseases.
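The SimRank ingredient can be illustrated on a toy miRNA/disease-style graph; this is the basic SimRank iteration ("two nodes are similar if their in-neighbours are similar"), not the full SRMDAP recommender, and the graph is invented:

```python
from itertools import product

# Toy directed graph: node -> set of in-neighbours.
# d1, d2 play the role of diseases; m1-m3 the role of miRNAs linked to them.
in_nbrs = {
    "d1": set(), "d2": set(),
    "m1": {"d1"}, "m2": {"d1", "d2"}, "m3": {"d2"},
}
nodes = list(in_nbrs)
C = 0.8   # SimRank decay factor

# s(a, a) = 1; s(a, b) = C / (|I(a)| * |I(b)|) * sum over in-neighbour pairs.
sim = {(a, b): 1.0 if a == b else 0.0 for a, b in product(nodes, nodes)}
for _ in range(10):
    new = {}
    for a, b in product(nodes, nodes):
        if a == b:
            new[(a, b)] = 1.0
        elif in_nbrs[a] and in_nbrs[b]:
            s = sum(sim[(i, j)] for i in in_nbrs[a] for j in in_nbrs[b])
            new[(a, b)] = C * s / (len(in_nbrs[a]) * len(in_nbrs[b]))
        else:
            new[(a, b)] = 0.0
    sim = new

s_m1_m2 = sim[("m1", "m2")]   # m1 and m2 share the in-neighbour d1
```

In SRMDAP-style pipelines, similarity scores of this kind feed a recommender that ranks candidate miRNA-disease pairs.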

  20. Combining Predictive Densities using Nonlinear Filtering with Applications to US Economics Data

    NARCIS (Netherlands)

    M. Billio (Monica); R. Casarin (Roberto); F. Ravazzolo (Francesco); H.K. van Dijk (Herman)

    2011-01-01

    We propose a multivariate combination approach to prediction based on a distributional state space representation of the weights belonging to a set of Bayesian predictive densities which have been obtained from alternative models. Several specifications of multivariate time-varying

  1. The Density Functional Theory of Flies: Predicting distributions of interacting active organisms

    Science.gov (United States)

    Kinkhabwala, Yunus; Valderrama, Juan; Cohen, Itai; Arias, Tomas

    On October 2nd, 2016, 52 people were crushed in a stampede when a crowd panicked at a religious gathering in Ethiopia. The ability to predict the state of a crowd and whether it is susceptible to such transitions could help prevent such catastrophes. While current techniques such as agent-based models can predict transitions in the emergent behaviors of crowds, the assumptions used to describe the agents are often ad hoc, and the simulations are computationally expensive, making their application to real-time crowd prediction challenging. Here, we pursue an orthogonal approach and ask whether a reduced set of variables, such as the local densities, is sufficient to describe the state of a crowd. Inspired by the theoretical framework of Density Functional Theory, we have developed a system that uses only measurements of local densities to extract two independent crowd behavior functions: (1) preferences for locations and (2) interactions between individuals. With these two functions, we have accurately predicted how a model system of walking Drosophila melanogaster distributes itself in an arbitrary 2D environment. In addition, this density-based approach measures properties of the crowd from observations of the crowd itself, without any knowledge of the detailed interactions, and thus it can make predictions about the resulting distributions of these flies in arbitrary environments, in real time. This research was supported in part by ARO W911NF-16-1-0433.
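The density-only idea can be sketched as follows: for non-interacting walkers, a steady-state density ρ(x) ∝ exp(−V(x)) lets one read a location-preference function off observed densities and re-normalise it in a new environment. This is a deliberate simplification of the paper's two-function (preference plus interaction) approach, with invented counts:

```python
import math

# Observed (toy) occupation counts in four regions of a chamber.
counts = {"corner": 40, "edge": 25, "centre": 10, "food": 80}
total = sum(counts.values())

# For non-interacting walkers, density ~ exp(-V), so the location
# preference ("vexation", up to an additive constant) is -log(density).
V = {region: -math.log(c / total) for region, c in counts.items()}

# Predict the distribution in a new arena containing only three of the
# regions: re-normalise exp(-V) over the regions that are available.
avail = ["corner", "edge", "centre"]
weights = {r: math.exp(-V[r]) for r in avail}
Z = sum(weights.values())
pred = {r: weights[r] / Z for r in avail}
```

The extracted V transfers between environments, which is what makes real-time prediction in arbitrary arenas possible; interactions between individuals add a second, density-dependent term on top of this.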

  2. Predicting moisture content and density distribution of Scots pine by microwave scanning of sawn timber

    International Nuclear Information System (INIS)

    Johansson, J.; Hagman, O.; Fjellner, B.A.

    2003-01-01

    This study was carried out to investigate the possibility of calibrating a prediction model for the moisture content and density distribution of Scots pine (Pinus sylvestris) using microwave sensors. The material was initially at green moisture content and was thereafter dried in several steps to zero moisture content. At each step, all the pieces were weighed, scanned with a microwave sensor (Satimo 9.4 GHz), and computed tomography (CT)-scanned with a medical CT scanner (Siemens Somatom AR.T.). The output variables from the microwave sensor were used as predictors, and CT images that correlated with known moisture content were used as response variables. Multivariate models to predict average moisture content and density were calibrated using partial least squares (PLS) regression. The models for average moisture content and density were applied at the pixel level, and the distribution was visualized. The results show that it is possible to predict both moisture content distribution and density distribution with high accuracy using microwave sensors. (author)
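
The calibration step described above, regressing density and moisture responses on collinear microwave sensor outputs, can be sketched with a minimal single-response PLS (PLS1, NIPALS) implementation; the synthetic data in use here stand in for the paper's measurements, and all names are illustrative.

```python
import numpy as np

def pls1_fit(X, y, n_components=2):
    """Minimal PLS1 (NIPALS with deflation); returns (coeffs, X mean, y mean)."""
    Xk = X - X.mean(axis=0)
    yk = y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk                    # direction of maximum covariance with y
        w /= np.linalg.norm(w)
        t = Xk @ w                       # latent scores
        tt = t @ t
        p = Xk.T @ t / tt                # X loadings
        q = (yk @ t) / tt                # y loading
        Xk = Xk - np.outer(t, p)         # deflate X and y
        yk = yk - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.solve(P.T @ W, Q)  # coefficients in the original X space
    return B, X.mean(axis=0), y.mean()

def pls1_predict(model, X):
    B, x_mean, y_mean = model
    return (X - x_mean) @ B + y_mean
```

With the timber data, X would hold the microwave output channels and y the CT-derived density or moisture content; a small number of latent components is typical when the predictors are strongly collinear.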

  3. Predicting insect migration density and speed in the daytime convective boundary layer.

    Directory of Open Access Journals (Sweden)

    James R Bell

    Full Text Available Insect migration needs to be quantified if spatial and temporal patterns in populations are to be resolved. Yet so little ecology is understood above the flight boundary layer (i.e. >10 m where in north-west Europe an estimated 3 billion insects km(-1 month(-1 comprising pests, beneficial insects and other species that contribute to biodiversity use the atmosphere to migrate. Consequently, we elucidate meteorological mechanisms principally related to wind speed and temperature that drive variation in daytime aerial density and insect displacements speeds with increasing altitude (150-1200 m above ground level. We derived average aerial densities and displacement speeds of 1.7 million insects in the daytime convective atmospheric boundary layer using vertical-looking entomological radars. We first studied patterns of insect aerial densities and displacements speeds over a decade and linked these with average temperatures and wind velocities from a numerical weather prediction model. Generalized linear mixed models showed that average insect densities decline with increasing wind speed and increase with increasing temperatures and that the relationship between displacement speed and density was negative. We then sought to derive how general these patterns were over space using a paired site approach in which the relationship between sites was examined using simple linear regression. Both average speeds and densities were predicted remotely from a site over 100 km away, although insect densities were much noisier due to local 'spiking'. By late morning and afternoon when insects are migrating in a well-developed convective atmosphere at high altitude, they become much more difficult to predict remotely than during the early morning and at lower altitudes. Overall, our findings suggest that predicting migrating insects at altitude at distances of ≈ 100 km is promising, but additional radars are needed to parameterise spatial covariance.

  4. Calculation of solar irradiation prediction intervals combining volatility and kernel density estimates

    International Nuclear Information System (INIS)

    Trapero, Juan R.

    2016-01-01

    In order to integrate solar energy into the grid it is important to predict the solar radiation accurately, as forecast errors can lead to significant costs. Recently, the growing number of statistical approaches that cope with this problem has yielded a prolific literature. In general terms, the main research discussion is centred on selecting the “best” forecasting technique in accuracy terms. However, users of such forecasts require, apart from point forecasts, information about the variability of those forecasts in order to compute prediction intervals. In this work, we analyze kernel density estimation approaches, volatility forecasting models, and combinations of the two in order to improve prediction interval performance. The results show that an optimal combination, in terms of prediction interval statistical tests, can achieve the desired confidence level with a lower average interval width. Data from a facility located in Spain are used to illustrate our methodology. - Highlights: • This work explores uncertainty forecasting models to build prediction intervals. • Kernel density estimators, exponential smoothing and GARCH models are compared. • An optimal combination of methods provides the best results. • A good compromise between coverage and average interval width is shown.
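
The combination described above (scale forecast errors by a volatility model, then take quantiles of the standardised errors) can be sketched as follows; the EWMA smoother is a stand-in for a fitted GARCH model, and a kernel density estimate could replace the empirical quantiles.

```python
import numpy as np

def ewma_volatility(errors, lam=0.94):
    """Recursive (EWMA) volatility proxy for a GARCH-type model."""
    var = np.empty(len(errors))
    var[0] = errors[0] ** 2
    for t in range(1, len(errors)):
        var[t] = lam * var[t - 1] + (1 - lam) * errors[t - 1] ** 2
    return np.sqrt(var)

def prediction_interval(point_forecast, past_errors, sigma_next, alpha=0.10):
    """(1 - alpha) interval from quantiles of volatility-standardised errors."""
    sigma = ewma_volatility(past_errors)
    z = past_errors / np.maximum(sigma, 1e-12)   # standardised errors
    lo, hi = np.quantile(z, [alpha / 2, 1 - alpha / 2])
    return point_forecast + sigma_next * lo, point_forecast + sigma_next * hi
```

The interval width then tracks the predicted volatility, which is the mechanism behind the narrower average widths reported in the abstract.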

  5. A cosmological model with compact space sections and low mass density

    International Nuclear Information System (INIS)

    Fagundes, H.V.

    1982-01-01

    A general relativistic cosmological model is presented, which has closed space sections and mass density below a critical density similar to that of Friedmann's models. The model may predict double images of cosmic sources. (Author) [pt

  6. Baryon density in alternative BBN models

    International Nuclear Information System (INIS)

    Kirilova, D.

    2002-10-01

    We present recent determinations of the cosmological baryon density ρ_b, extracted from different kinds of observational data. The range of baryon density values is not very wide and is usually interpreted as an indication of consistency. It is interesting to note, however, that all other determinations give a higher baryon density than the standard big bang nucleosynthesis (BBN) model. The differences of the ρ_b values from the BBN-predicted one (the most precise today) may be due to statistical and systematic errors in the observations. However, they may also be an indication of new physics. Hence, it is interesting to study alternative BBN models and the possibility of resolving the discrepancies. We discuss alternative cosmological scenarios: a BBN model with decaying particles (m ∼ MeV, τ ∼ sec) and BBN with electron-sterile neutrino oscillations, which make it possible to relax the BBN constraints on the baryon content of the Universe. (author)

  7. Crystal density predictions for nitramines based on quantum chemistry

    International Nuclear Information System (INIS)

    Qiu Ling; Xiao Heming; Gong Xuedong; Ju Xuehai; Zhu Weihua

    2007-01-01

    An efficient and convenient method for predicting the crystalline densities of energetic materials was established based on quantum chemical computations. Density functional theory (DFT) with four different basis sets (6-31G**, 6-311G**, 6-31+G**, and 6-311++G**) and various semiempirical molecular orbital (MO) methods were employed to predict the molecular volumes and densities of a series of energetic nitramines including acyclic, monocyclic, and polycyclic/cage molecules. The relationships between the calculated values and experimental data were discussed in detail, and linear correlations were suggested and compared at the different levels of theory. The calculations show that larger basis sets consume more CPU (central processing unit) time and yield larger molecular volumes and hence smaller densities. The densities predicted by the semiempirical MO methods are all systematically larger than the experimental data. In comparison with the other methods, B3LYP/6-31G** is the most accurate and economical for predicting the solid-state densities of energetic nitramines. This may be instructive for the molecular design and screening of novel high energy density materials (HEDMs).
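
The linear correlations reported above amount to a one-variable least-squares calibration of computed against experimental densities; the numbers below are hypothetical placeholders, not the paper's B3LYP/6-31G** results.

```python
import numpy as np

# Hypothetical computed densities (g/cm^3, e.g. from M / V_calc) vs. experiment
rho_calc = np.array([1.72, 1.81, 1.90, 1.96, 2.04])
rho_exp  = np.array([1.70, 1.82, 1.88, 1.98, 2.02])

slope, intercept = np.polyfit(rho_calc, rho_exp, 1)   # linear correlation
rho_corrected = slope * rho_calc + intercept          # calibrated prediction
r2 = np.corrcoef(rho_calc, rho_exp)[0, 1] ** 2        # quality of the fit
```

A slope near one with a high r² is what distinguishes the cheaper levels of theory that merely rank densities correctly from those, like B3LYP/6-31G** here, that predict them quantitatively after calibration.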

  8. Genomic prediction of complex human traits: relatedness, trait architecture and predictive meta-models

    Science.gov (United States)

    Spiliopoulou, Athina; Nagy, Reka; Bermingham, Mairead L.; Huffman, Jennifer E.; Hayward, Caroline; Vitart, Veronique; Rudan, Igor; Campbell, Harry; Wright, Alan F.; Wilson, James F.; Pong-Wong, Ricardo; Agakov, Felix; Navarro, Pau; Haley, Chris S.

    2015-01-01

    We explore the prediction of individuals' phenotypes for complex traits using genomic data. We compare several widely used prediction models, including Ridge Regression, LASSO and Elastic Nets estimated from cohort data, and polygenic risk scores constructed using published summary statistics from genome-wide association meta-analyses (GWAMA). We evaluate the interplay between relatedness, trait architecture and optimal marker density, by predicting height, body mass index (BMI) and high-density lipoprotein level (HDL) in two data cohorts, originating from Croatia and Scotland. We empirically demonstrate that dense models are better when all genetic effects are small (height and BMI) and target individuals are related to the training samples, while sparse models predict better in unrelated individuals and when some effects have moderate size (HDL). For HDL sparse models achieved good across-cohort prediction, performing similarly to the GWAMA risk score and to models trained within the same cohort, which indicates that, for predicting traits with moderately sized effects, large sample sizes and familial structure become less important, though still potentially useful. Finally, we propose a novel ensemble of whole-genome predictors with GWAMA risk scores and demonstrate that the resulting meta-model achieves higher prediction accuracy than either model on its own. We conclude that although current genomic predictors are not accurate enough for diagnostic purposes, performance can be improved without requiring access to large-scale individual-level data. Our methodologically simple meta-model is a means of performing predictive meta-analysis for optimizing genomic predictions and can be easily extended to incorporate multiple population-level summary statistics or other domain knowledge. PMID:25918167
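
The proposed meta-model, combining whole-genome predictions with a GWAMA risk score, can be illustrated as linear stacking with an intercept; this sketch uses synthetic base predictions and is not the authors' estimator.

```python
import numpy as np

def stack_fit(base_preds, y, ridge=1e-8):
    """Least-squares weights (plus intercept) for combining base predictions."""
    P = np.column_stack([base_preds, np.ones(len(y))])
    return np.linalg.solve(P.T @ P + ridge * np.eye(P.shape[1]), P.T @ y)

def stack_predict(w, base_preds):
    P = np.column_stack([base_preds, np.ones(base_preds.shape[0])])
    return P @ w
```

Because the base predictors carry partly independent errors, the fitted combination cannot do worse than either predictor alone on the training data, which mirrors the abstract's finding that the meta-model beats both components.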

  9. Predictive densities for day-ahead electricity prices using time-adaptive quantile regression

    DEFF Research Database (Denmark)

    Jónsson, Tryggvi; Pinson, Pierre; Madsen, Henrik

    2014-01-01

    A large part of the decision-making problems that actors in the power system face on a daily basis requires scenarios for day-ahead electricity market prices. These scenarios are most likely to be generated based on marginal predictive densities for such prices, then enhanced with a temporal dependence structure. A semi-parametric methodology for generating such densities is presented; it includes: (i) a time-adaptive quantile regression model for the 5%–95% quantiles; and (ii) a description of the distribution tails with exponential distributions. The forecasting skill of the proposed model...
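
Step (i), the time-adaptive quantile, can be imitated with a stochastic subgradient update on the pinball (quantile) loss; this is a generic adaptive-quantile sketch rather than the authors' exact recursive estimator.

```python
import numpy as np

def track_quantile(x, tau, step=0.05):
    """Online tracking of the tau-quantile via pinball-loss subgradient steps."""
    q = float(x[0])
    path = np.empty(len(x))
    for t, xt in enumerate(x):
        # subgradient of the pinball loss rho_tau(xt - q) with respect to q:
        # move up by step*tau when xt >= q, down by step*(1 - tau) otherwise
        q += step * (tau - (xt < q))
        path[t] = q
    return path
```

The constant step size is what makes the estimate time-adaptive: the tracked quantile forgets old observations and follows slow drifts in the price distribution.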

  10. Habitat-Based Density Models for Three Cetacean Species off Southern California Illustrate Pronounced Seasonal Differences

    Directory of Open Access Journals (Sweden)

    Elizabeth A. Becker

    2017-05-01

    Full Text Available Managing marine species effectively requires spatially and temporally explicit knowledge of their density and distribution. Habitat-based density models, a type of species distribution model (SDM) that uses habitat covariates to estimate species density and distribution patterns, are increasingly used for marine management and conservation because they provide a tool for assessing potential impacts (e.g., from fishery bycatch, ship strikes, or anthropogenic sound) over a variety of spatial and temporal scales. The abundance and distribution of many pelagic species exhibit substantial seasonal variability, highlighting the importance of predicting density specific to the season of interest. This is particularly true in dynamic regions like the California Current, where significant seasonal shifts in cetacean distribution have been documented at coarse scales. Finer-scale (10 km) habitat-based density models were previously developed for many cetacean species occurring in this region, but most models were limited to summer/fall. The objectives of our study were two-fold: (1) develop spatially explicit density estimates for winter/spring to support management applications, and (2) compare model-predicted density and distribution patterns to previously developed summer/fall model results in the context of species ecology. We used a well-established Generalized Additive Modeling framework to develop cetacean SDMs based on 20 California Cooperative Oceanic Fisheries Investigations (CalCOFI) shipboard surveys conducted during winter and spring between 2005 and 2015. Models were fit for short-beaked common dolphin (Delphinus delphis delphis), Dall's porpoise (Phocoenoides dalli), and humpback whale (Megaptera novaeangliae). Model performance was evaluated based on a variety of established metrics, including the percentage of explained deviance, ratios of observed to predicted density, and visual inspection of predicted and observed distributions. Final models were...

  11. Density prediction and dimensionality reduction of mid-term electricity demand in China: A new semiparametric-based additive model

    International Nuclear Information System (INIS)

    Shao, Zhen; Yang, Shan-Lin; Gao, Fei

    2014-01-01

    Highlights: • A new stationary time series smoothing-based semiparametric model is established. • A novel semiparametric additive model based on piecewise smoothing is proposed. • We model the uncertainty of the data distribution for mid-term electricity forecasting. • We provide efficient long-horizon simulation and extraction for external variables. • We provide stable and accurate density predictions for mid-term electricity demand. - Abstract: Accurate mid-term electricity demand forecasting is critical for efficient electric planning, budgeting and operating decisions. Mid-term electricity demand forecasting is notoriously complicated, since demand is subject to a range of external drivers, such as climate change and economic development, which exhibit monthly, seasonal, and annual complex variations. Conventional models are based on the assumption that the original data are stable and normally distributed, which generally fails to explain the actual demand pattern. This paper proposes a new semiparametric additive model that, in addition to considering the uncertainty of the data distribution, includes practical discussions covering the application of external variables. To effectively disentangle the multi-dimensional volatility of mid-term demand, a novel piecewise smoothing method which allows reduction of the data dimensionality is developed. In addition, a semi-parametric procedure that makes use of a bootstrap algorithm for density forecasting and model estimation is presented. Two typical cases in China are presented to verify the effectiveness of the proposed methodology. The results suggest that both meteorological and economic variables play a critical role in mid-term electricity consumption prediction in China, while the extracted economic factor is adequate to reveal the potentially complex relationship between electricity consumption and economic fluctuation. Overall, the proposed model can be easily applied to mid-term demand forecasting, and...

  12. Prediction of density limit disruptions on the J-TEXT tokamak

    International Nuclear Information System (INIS)

    Wang, S Y; Chen, Z Y; Huang, D W; Tong, R H; Yan, W; Wei, Y N; Ma, T K; Zhang, M; Zhuang, G

    2016-01-01

    Disruption mitigation is essential for the next generation of tokamaks. The prediction of plasma disruption is the key to disruption mitigation. A neural network combining eight input signals has been developed to predict the density limit disruptions on the J-TEXT tokamak. An optimized training method has been proposed which has improved the prediction performance. The network obtained has been tested on 64 disruption shots and 205 non-disruption shots. A successful alarm rate of 82.8% with a false alarm rate of 12.3% can be achieved at 4.8 ms prior to the current spike of the disruption. It indicates that more physical parameters than the current physical scaling should be considered for predicting the density limit. It was also found that the critical density for disruption can be predicted several tens of milliseconds in advance in most cases. Furthermore, if the network is used for real-time density feedback control, more than 95% of the density limit disruptions can be avoided by setting a proper threshold. (paper)
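
The alarm statistics quoted above follow directly from per-shot predictor outputs and a threshold; a sketch with made-up scores (the real inputs would be the network's outputs for each shot):

```python
import numpy as np

def alarm_rates(scores, disruptive, threshold):
    """Successful-alarm rate (on disruptive shots) and false-alarm rate (on safe shots)."""
    alarm = scores >= threshold
    successful = alarm[disruptive].mean()   # fraction of disruptive shots flagged
    false = alarm[~disruptive].mean()       # fraction of safe shots falsely flagged
    return successful, false

# hypothetical per-shot network outputs and ground truth
scores = np.array([0.91, 0.85, 0.40, 0.22, 0.10, 0.78])
disruptive = np.array([True, True, False, False, False, True])
sa, fa = alarm_rates(scores, disruptive, threshold=0.5)
```

Sweeping the threshold traces the trade-off between successful and false alarms that the abstract reports (82.8% vs. 12.3%), and raising it is how the authors push avoidance above 95% in feedback-control mode.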

  13. Electron-Ion Dynamics with Time-Dependent Density Functional Theory: Towards Predictive Solar Cell Modeling: Final Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Maitra, Neepa [Hunter College City University of New York, New York, NY (United States)

    2016-07-14

    This project investigates the accuracy of currently-used functionals in time-dependent density functional theory, which is today routinely used to predict and design materials and computationally model processes in solar energy conversion. The rigorously-based electron-ion dynamics method developed here sheds light on traditional methods and overcomes challenges those methods have. The fundamental research undertaken here is important for building reliable and practical methods for materials discovery. The ultimate goal is to use these tools for the computational design of new materials for solar cell devices of high efficiency.

  14. Predicting mesh density for adaptive modelling of the global atmosphere.

    Science.gov (United States)

    Weller, Hilary

    2009-11-28

    The shallow water equations are solved using a mesh of polygons on the sphere, which adapts infrequently to the predicted future solution. Infrequent mesh adaptation reduces the cost of adaptation and load-balancing and will thus allow for more accurate mapping on adaptation. We simulate the growth of a barotropically unstable jet adapting the mesh every 12 h. Using an adaptation criterion based largely on the gradient of the vorticity leads to a mesh with around 20 per cent of the cells of a uniform mesh that gives equivalent results. This is a similar proportion to previous studies of the same test case with mesh adaptation every 1-20 min. The prediction of the mesh density involves solving the shallow water equations on a coarse mesh in advance of the locally refined mesh in order to estimate where features requiring higher resolution will grow, decay or move to. The adaptation criterion consists of two parts: that resolved on the coarse mesh, and that which is not resolved and so is passively advected on the coarse mesh. This combination leads to a balance between resolving features controlled by the large-scale dynamics and maintaining fine-scale features.

  15. Linking density functional and mode coupling models for supercooled liquids.

    Science.gov (United States)

    Premkumar, Leishangthem; Bidhoodi, Neeta; Das, Shankar P

    2016-03-28

    We compare predictions from two familiar models of the metastable supercooled liquid, constructed respectively with thermodynamic and dynamic approaches. In the so-called density functional theory, the free energy F[ρ] of the liquid is a functional of the inhomogeneous density ρ(r). The metastable state is identified as a local minimum of F[ρ]. The sharp density profile characterizing ρ(r) is identified with a single-particle oscillator, whose frequency is obtained from the parameters of the optimum density function. On the other hand, a dynamic approach to supercooled liquids is taken in the mode coupling theory (MCT), which predicts a sharp ergodicity-non-ergodicity transition at a critical density. The single-particle dynamics in the non-ergodic state, treated approximately, represents a propagating mode whose characteristic frequency is computed from the corresponding memory function of the MCT. The mass localization parameters in the above two models (treated in their simplest forms) are obtained in terms of the corresponding natural frequencies and are shown to have comparable magnitudes.

  17. Applying deep bidirectional LSTM and mixture density network for basketball trajectory prediction

    NARCIS (Netherlands)

    Zhao, Yu; Yang, Rennong; Chevalier, Guillaume; Shah, Rajiv C.; Romijnders, Rob

    2018-01-01

    Data analytics helps basketball teams to create tactics. However, manual data collection and analysis are costly and ineffective. Therefore, we applied a deep bidirectional long short-term memory (BLSTM) and mixture density network (MDN) approach. This model is not only capable of predicting a...
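
The MDN head referred to above outputs mixture weights, means and scales for the next trajectory point, and is trained by minimising the Gaussian-mixture negative log-likelihood, sketched here in NumPy (the network itself is omitted; shapes are illustrative):

```python
import numpy as np

def mdn_nll(pi, mu, sigma, y):
    """Negative log-likelihood of targets y under a 1-D Gaussian mixture.

    pi, mu, sigma: (batch, K) mixture weights, means, scales; y: (batch,)."""
    y = y[:, None]
    log_comp = (-0.5 * ((y - mu) / sigma) ** 2
                - np.log(sigma) - 0.5 * np.log(2 * np.pi))
    log_mix = np.log((pi * np.exp(log_comp)).sum(axis=1))  # sum over components
    return -log_mix.mean()
```

In the full model, the BLSTM consumes the trajectory so far and emits (pi, mu, sigma) per time step; predicting a density rather than a point lets the network represent multiple plausible continuations of a shot.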

  18. Mixing-model Sensitivity to Initial Conditions in Hydrodynamic Predictions

    Science.gov (United States)

    Bigelow, Josiah; Silva, Humberto; Truman, C. Randall; Vorobieff, Peter

    2017-11-01

    Amagat and Dalton mixing models were studied to compare their thermodynamic predictions of shock states. Numerical simulations with the Sandia National Laboratories shock hydrodynamic code CTH modeled University of New Mexico (UNM) shock tube laboratory experiments shocking a 1:1 molar mixture of helium (He) and sulfur hexafluoride (SF6). Five input parameters were varied for sensitivity analysis: driver section pressure, driver section density, test section pressure, test section density, and mixture ratio (mole fraction). We show via incremental Latin hypercube sampling (LHS) analysis that significant differences exist between Amagat and Dalton mixing-model predictions. The differences observed in predicted shock speeds, temperatures, and pressures grow more pronounced with higher shock speeds. Supported by NNSA Grant DE-0002913.
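
The two mixing rules differ only for non-ideal gases: Dalton sums the partial pressures of components each filling the full volume, while Amagat sums the partial volumes of components each held at the common pressure. A sketch with a simple second-virial equation of state (coefficients are illustrative, not He/SF6 values):

```python
import numpy as np

R = 8.314  # J/(mol K)

def pressure(n, V, T, B):
    """Virial EOS truncated at the second coefficient: p = nRT/V (1 + B n/V)."""
    return n * R * T / V * (1 + B * n / V)

def volume(n, p, T, B):
    """Invert the virial EOS for V (positive root of p V^2 - nRT V - n^2 RT B = 0)."""
    a = n * R * T
    return (a + np.sqrt(a * a + 4 * p * n * n * R * T * B)) / (2 * p)

def dalton_pressure(ns, V, T, Bs):
    """Dalton: total pressure is the sum of partial pressures at full volume."""
    return sum(pressure(n, V, T, B) for n, B in zip(ns, Bs))

def amagat_pressure(ns, V, T, Bs, lo=1e3, hi=1e8):
    """Amagat: bisect for the common pressure at which partial volumes sum to V."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if sum(volume(n, mid, T, B) for n, B in zip(ns, Bs)) > V:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For an ideal gas (B = 0) the two rules coincide exactly; with non-zero virial corrections they diverge, which is the seed of the shock-state differences the abstract reports.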

  19. Influence of mesh density, cortical thickness and material properties on human rib fracture prediction.

    Science.gov (United States)

    Li, Zuoping; Kindig, Matthew W; Subit, Damien; Kent, Richard W

    2010-11-01

    The purpose of this paper was to investigate the sensitivity of the structural responses and bone fractures of the ribs to mesh density, cortical thickness, and material properties, so as to provide guidelines for the development of finite element (FE) thorax models used in impact biomechanics. Subject-specific FE models of the second, fourth, sixth and tenth ribs were developed to reproduce dynamic failure experiments. Sensitivity studies were then conducted to quantify the effects of variations in mesh density, cortical thickness, and material parameters on the model-predicted reaction force-displacement relationship, cortical strains, and bone fracture locations for all four ribs. Overall, it was demonstrated that rib FE models consisting of 2000-3000 trabecular hexahedral elements (weighted element length 2-3 mm) and associated quadrilateral cortical shell elements with variable thickness more closely predicted the rib structural responses and bone fracture force-failure displacement relationships observed in the experiments (except the fracture locations), compared to models with constant cortical thickness. Further increases in mesh density increased computational cost but did not markedly improve model predictions. A ±30% change in the major material parameters of cortical bone led to a -16.7% to +33.3% change in fracture displacement and a -22.5% to +19.1% change in the fracture force. The results of this study suggest that human rib structural responses can be modeled in an accurate and computationally efficient way using (a) a coarse mesh of 2000-3000 solid elements, (b) cortical shell elements with a variable thickness distribution and (c) a rate-dependent elastic-plastic material model.

  20. Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.

    Science.gov (United States)

    Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F

    2013-04-01

    In biomedical research, it is often of interest to characterize the biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means of carrying out Bayesian inference while making as few restrictive parametric assumptions as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models in one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in large differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.

  1. Using dynamic energy budget modeling to predict the influence of temperature and food density on the effect of Cu on earthworm mediated litter consumption.

    NARCIS (Netherlands)

    Hobbelen, P.H.F.; van Gestel, C.A.M.

    2007-01-01

    The aim of this study was to predict how temperature and food density modify the effects of Cu on litter consumption by the earthworm Lumbricus rubellus, using a dynamic energy budget model (DEB model). As a measure of the effects of Cu on food consumption, EC50s (soil concentrations

  2. Acoustic Velocity and Attenuation in Magnetorheological fluids based on an effective density fluid model

    Directory of Open Access Journals (Sweden)

    Shen Min

    2016-01-01

    Full Text Available Magnetorheological fluids (MRFs) represent a class of smart materials whose rheological properties change in response to a magnetic field, resulting in a drastic change of the acoustic impedance. This paper presents an acoustic propagation model that approximates a fluid-saturated porous medium as a fluid with a bulk modulus and an effective density (EDFM), to study acoustic propagation in MRF materials under a magnetic field. The effective density fluid model is derived from Biot's theory. Some minor changes to the theory had to be applied to model both the fluid-like and solid-like states of the MRF material. The attenuation and velocity variation of the MRF are calculated numerically. The calculated results show that for the MRF material the attenuation and velocity predicted with this effective density fluid model are in close agreement with previous predictions by Biot's theory. We demonstrate that for acoustic predictions in MRF materials the effective density fluid model is an accurate alternative to the full Biot theory and is much simpler to implement.
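
The heart of an effective density fluid model is a possibly complex effective density: the wavenumber k = ω√(ρ_eff/K_eff) then yields both phase speed and attenuation. A minimal numeric sketch (parameter values are illustrative, not MRF data):

```python
import numpy as np

def edfm_speed_attenuation(omega, K_eff, rho_eff):
    """Phase speed (m/s) and attenuation (Np/m) from effective modulus/density.

    rho_eff may be complex; its imaginary part encodes viscous/inertial loss."""
    k = omega * np.sqrt(rho_eff / K_eff)  # complex wavenumber, e^{i(kx - wt)} convention
    return omega / k.real, abs(k.imag)

# lossless sanity check: a water-like fluid with a real effective density
c, a = edfm_speed_attenuation(2 * np.pi * 1e4, 2.25e9, 1000.0)
```

With a real effective density the wave propagates without loss; giving ρ_eff an imaginary part, as the Biot-derived expressions do, immediately produces a finite attenuation.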

  3. Measured and predicted electron density at 600 km over Tucuman and Huancayo

    International Nuclear Information System (INIS)

    Ezquer, R.G.; Cabrera, M.A.; Araoz, L.; Mosert, M.; Radicella, S.M.

    2002-01-01

    The electron density at 600 km altitude (N_600) predicted by IRI is compared with measurements for given particular times and places (not averages) obtained with the Japanese Hinotori satellite. The results showed that the best agreement between predictions and measurements was obtained near the magnetic equator. Disagreements of about 50% were observed near the southern peak of the equatorial anomaly (EA) when the model uses the CCIR and URSI options to obtain the peak characteristics. (author)

  4. Transport critical current density in flux creep model

    International Nuclear Information System (INIS)

    Wang, J.; Taylor, K.N.R.; Russell, G.J.; Yue, Y.

    1992-01-01

    The magnetic flux creep model has been used to derive the temperature dependence of the critical current density in high temperature superconductors. The generally positive curvature of the J_c-T diagram is predicted in terms of two interdependent dimensionless fitting parameters. In this paper, the results are compared with both SIS and SNS junction models of these granular materials, neither of which provides a satisfactory prediction of the experimental data. A hybrid model combining the flux creep and SNS mechanisms is shown to be able to account for the linear regions of the J_c-T behavior which are observed in some materials

  5. MODEL OF THE TOKAMAK EDGE DENSITY PEDESTAL INCLUDING DIFFUSIVE NEUTRALS

    International Nuclear Information System (INIS)

    BURRELL, K.H.

    2003-01-01

    Several previous analytic models of the tokamak edge density pedestal have been based on diffusive transport of plasma plus free-streaming of neutrals. This latter neutral model includes only the effect of ionization and neglects charge exchange. The present work models the edge density pedestal using diffusive transport for both the plasma and the neutrals. In contrast to the free-streaming model, a diffusion model for the neutrals includes the effect of both charge exchange and ionization and is valid when charge exchange is the dominant interaction. Surprisingly, the functional forms for the electron and neutral density profiles from the present calculation are identical to the results of the previous analytic models. There are some differences in the detailed definition of various parameters in the solution. For experimentally relevant cases where the ionization and charge exchange rates are comparable, both models predict approximately the same width for the edge density pedestal

  6. Modelling of Resonantly Forced Density Waves in Dense Planetary Rings

    Science.gov (United States)

    Lehmann, M.; Schmidt, J.; Salo, H.

    2014-04-01

    Density wave theory, originally proposed to explain the spiral structure of galactic disks, has been applied to explain parts of the complex sub-structure in Saturn's rings, such as the wavetrains excited at the inner Lindblad resonances (ILR) of various satellites. The linear theory for the excitation and damping of density waves in Saturn's rings is fairly well developed (e.g. Goldreich & Tremaine [1979]; Shu [1984]). However, it fails to describe certain aspects of the observed waves. The non-applicability of the linear theory is already indicated by the "cusplike" shape of many of the observed wave profiles. This is a typical nonlinear feature which is also present in overstability wavetrains (Schmidt & Salo [2003]; Latter & Ogilvie [2010]). In particular, it turns out that the detailed damping mechanism, as well as the role of different nonlinear effects in the propagation of density waves, remains unclear. First attempts are being made to investigate the excitation and propagation of nonlinear density waves within a hydrodynamical formalism, which is also the natural formalism for describing linear density waves. A simple weakly nonlinear model, derived from a multiple-scale expansion of the hydrodynamic equations, is presented. This model describes the damping of "free" spiral density waves in a vertically integrated fluid disk with density-dependent transport coefficients, where the effects of the hydrodynamic nonlinearities are included. The model predicts that density waves are linearly unstable in a ring region where the conditions for viscous overstability are met, which translates to a steep dependence of the shear viscosity on the disk's surface density. The possibility that this dependence could lead to a growth of density waves with increasing distance from the resonance was already mentioned in Goldreich & Tremaine [1978]. Sufficiently far away from the ILR, the surface density perturbation caused by the wave is predicted to

  7. Prediction of Five Softwood Paper Properties from its Density using Support Vector Machine Regression Techniques

    Directory of Open Access Journals (Sweden)

    Esperanza García-Gonzalo

    2016-01-01

    Full Text Available Predicting paper properties based on a limited number of measured variables can be an important tool for the industry. Mathematical models were developed to predict mechanical and optical properties from the corresponding paper density for some softwood papers using support vector machine regression with the radial basis function kernel. A dataset of different properties of paper handsheets produced from pulps of pine (Pinus pinaster and P. sylvestris) and cypress species (Cupressus lusitanica, C. sempervirens, and C. arizonica) beaten at 1000, 4000, and 7000 revolutions was used. The results show that it is possible to obtain good models (with a high coefficient of determination) with two variables: the numerical variable density and the categorical variable species.
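The regression setup described above — an RBF-kernel support vector machine fed one numeric variable (density) and one categorical variable (species) — can be sketched as below. The data, feature names, and coefficient values here are entirely synthetic, chosen only to illustrate the pipeline; they are not the paper's dataset.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 120
species = rng.choice(["P. pinaster", "P. sylvestris", "C. lusitanica"], size=n)
density = rng.uniform(400.0, 800.0, size=n)  # handsheet density, kg/m^3 (synthetic)
offset = {"P. pinaster": 5.0, "P. sylvestris": 8.0, "C. lusitanica": 3.0}
# synthetic "tensile index" response: density effect + species offset + noise
tensile = 0.05 * density + np.array([offset[s] for s in species]) \
          + rng.normal(0.0, 1.0, size=n)

X = pd.DataFrame({"density": density, "species": species})
model = Pipeline([
    ("pre", ColumnTransformer([
        ("num", StandardScaler(), ["density"]),     # scale the numeric feature
        ("cat", OneHotEncoder(), ["species"]),      # one-hot the categorical one
    ])),
    ("svr", SVR(kernel="rbf", C=10.0)),             # RBF-kernel SVM regression
])
model.fit(X, tensile)
print(f"training R^2 = {model.score(X, tensile):.3f}")
```

In practice the fit would be validated on held-out handsheets; the training score printed here only confirms the pipeline wiring.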

  8. Spatially explicit modeling of lesser prairie-chicken lek density in Texas

    Science.gov (United States)

    Timmer, Jennifer M.; Butler, M.J.; Ballard, Warren; Boal, Clint W.; Whitlaw, Heather A.

    2014-01-01

    As with many other grassland birds, lesser prairie-chickens (Tympanuchus pallidicinctus) have experienced population declines in the Southern Great Plains. Currently, they are proposed for federal protection under the Endangered Species Act. In addition to a history of land uses that have resulted in habitat loss, lesser prairie-chickens now face a new potential disturbance from energy development. We estimated lek density in the occupied lesser prairie-chicken range of Texas, USA, and modeled anthropogenic and vegetative landscape features associated with lek density. We used an aerial line-transect survey method to count lesser prairie-chicken leks in spring 2010 and 2011 and surveyed 208 randomly selected 51.84-km² blocks. We divided each survey block into 12.96-km² quadrats and summarized landscape variables within each quadrat. We then used hierarchical distance-sampling models to examine the relationship between lek density and anthropogenic and vegetative landscape features and to predict how lek density may change in response to changes on the landscape, such as an increase in energy development. Our best models indicated lek density was related to percent grassland, region (i.e., the northeast or southwest region of the Texas Panhandle), total percentage of grassland and shrubland, paved road density, and active oil and gas well density. Predicted lek density peaked at 0.39 leks/12.96 km² (SE = 0.09) and 2.05 leks/12.96 km² (SE = 0.56) in the northeast and southwest regions of the Texas Panhandle, respectively, which corresponds to approximately 88% and 44% grassland in the northeast and southwest regions. Lek density increased with an increase in the total percentage of grassland and shrubland and was greatest in areas with lower densities of paved roads and active oil and gas wells. We used the 2 most competitive models to predict lek abundance and estimated 236 leks (CV = 0.138, 95% CI = 177-306 leks) for our sampling area. Our results suggest that

  9. Predicting Intra-Urban Population Densities in Africa using SAR and Optical Remote Sensing Data

    Science.gov (United States)

    Linard, C.; Steele, J.; Forget, Y.; Lopez, J.; Shimoni, M.

    2017-12-01

    The population of Africa is predicted to double over the next 40 years, driving profound social, environmental and epidemiological changes within rapidly growing cities. Estimates of within-city variations in population density must be improved in order to take urban heterogeneities into account and better support urban research and decision making, especially for vulnerability and health assessments. Satellite remote sensing offers an effective solution for mapping settlements and monitoring urbanization at different spatial and temporal scales. In Africa, the urban landscape is dominated by slums and small houses, where heterogeneity is high and where the man-made structures are often built from natural materials. Innovative methods that combine optical and SAR data are therefore necessary for improving settlement mapping and population density predictions. An automatic method was developed to estimate built-up densities using recent and archived optical and SAR data, and a multi-temporal database of built-up densities was produced for 48 African cities. Geo-statistical methods were then used to study the relationships between census-derived population densities and satellite-derived built-up attributes. The best predictors were combined in a Random Forest framework in order to predict intra-urban variations in population density in any large African city. The models show a significant improvement in our spatial understanding of urbanization and urban population distribution in Africa in comparison to the state of the art.
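The final modeling step described above — regressing census-derived population density on satellite-derived built-up attributes with a Random Forest — can be sketched as follows. The two predictors and the response surface are invented stand-ins, not the authors' actual features.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 500
built_up = rng.uniform(0.0, 1.0, n)   # satellite-derived built-up fraction (synthetic)
veg = rng.uniform(0.0, 1.0, n)        # vegetation-cover proxy (synthetic)
# synthetic census population density, persons/km^2, with noise
pop_density = 20000.0 * built_up * (1.0 - 0.5 * veg) + rng.normal(0.0, 500.0, n)

X = np.column_stack([built_up, veg])
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, pop_density)
print(f"training R^2 = {rf.score(X, pop_density):.3f}")
```

A real application would cross-validate across cities and use many more covariates (built-up density time series, SAR texture, etc.); the point here is only the predictor-to-density regression structure.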

  10. Prediction of crack density and electrical resistance changes in indium tin oxide/polymer thin films under tensile loading

    KAUST Repository

    Mora Cordova, Angel

    2014-06-11

    We present unified predictions for the crack onset strain, evolution of crack density, and changes in electrical resistance in indium tin oxide/polymer thin films under tensile loading. We propose a damage mechanics model to quantify and predict such changes as an alternative to fracture mechanics formulations. Our predictions are obtained by assuming that there are no flaws at the onset of loading as opposed to the assumptions of fracture mechanics approaches. We calibrate the crack onset strain and the damage model based on experimental data reported in the literature. We predict crack density and changes in electrical resistance as a function of the damage induced in the films. We implement our model in the commercial finite element software ABAQUS using a user subroutine UMAT. We obtain fair to good agreement with experiments.

  11. Asymptotically Constant-Risk Predictive Densities When the Distributions of Data and Target Variables Are Different

    Directory of Open Access Journals (Sweden)

    Keisuke Yano

    2014-05-01

    Full Text Available We investigate the asymptotic construction of constant-risk Bayesian predictive densities under the Kullback–Leibler risk when the distributions of data and target variables are different and have a common unknown parameter. It is known that the Kullback–Leibler risk is asymptotically equal to a trace of the product of two matrices: the inverse of the Fisher information matrix for the data and the Fisher information matrix for the target variables. We assume that the trace has a unique maximum point with respect to the parameter. We construct asymptotically constant-risk Bayesian predictive densities using a prior depending on the sample size. Further, we apply the theory to the subminimax estimator problem and the prediction based on the binary regression model.
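The asymptotic identity the abstract refers to can be written out as follows (our notation, not necessarily the paper's): with $I_X(\theta)$ the Fisher information matrix of the data distribution and $\tilde I_Y(\theta)$ that of the target-variable distribution, the Kullback–Leibler risk of a Bayesian predictive density $\hat q_n$ for a sample of size $n$ behaves, to leading order and up to parameter-independent terms, as

```latex
\mathrm{E}_{\theta}\!\left[ D_{\mathrm{KL}}\!\left( q_Y(\cdot \mid \theta) \,\middle\|\, \hat q_n \right) \right]
  \;\approx\; \frac{1}{2n}\, \operatorname{tr}\!\left( I_X(\theta)^{-1}\, \tilde I_Y(\theta) \right),
```

so a constant-risk construction seeks a (sample-size-dependent) prior that flattens the right-hand side, which by the stated assumption has a unique maximum in $\theta$.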

  12. Contrasting cue-density effects in causal and prediction judgments.

    Science.gov (United States)

    Vadillo, Miguel A; Musca, Serban C; Blanco, Fernando; Matute, Helena

    2011-02-01

    Many theories of contingency learning assume (either explicitly or implicitly) that predicting whether an outcome will occur should be easier than making a causal judgment. Previous research suggests that outcome predictions would depart from normative standards less often than causal judgments, which is consistent with the idea that the latter are based on more numerous and complex processes. However, only indirect evidence exists for this view. The experiment presented here specifically addresses this issue by allowing for a fair comparison of causal judgments and outcome predictions, both collected at the same stage with identical rating scales. Cue density, a parameter known to affect judgments, is manipulated in a contingency learning paradigm. The results show that, if anything, the cue-density bias is stronger in outcome predictions than in causal judgments. These results contradict key assumptions of many influential theories of contingency learning.

  13. A local leaky-box model for the local stellar surface density-gas surface density-gas phase metallicity relation

    Science.gov (United States)

    Zhu, Guangtun Ben; Barrera-Ballesteros, Jorge K.; Heckman, Timothy M.; Zakamska, Nadia L.; Sánchez, Sebastian F.; Yan, Renbin; Brinkmann, Jonathan

    2017-07-01

    We revisit the relation between the stellar surface density, the gas surface density and the gas-phase metallicity of typical disc galaxies in the local Universe with the SDSS-IV/MaNGA survey, using the star formation rate surface density as an indicator for the gas surface density. We show that these three local parameters form a tight relationship, confirming previous works (e.g. by the PINGS and CALIFA surveys), but with a larger sample. We present a new local leaky-box model, assuming that star-formation history and chemical evolution are localized except for outflowing materials. We derive closed-form solutions for the evolution of stellar surface density, gas surface density and gas-phase metallicity, and show that these parameters form a tight relation independent of initial gas density and time. We show that, with canonical values of the model parameters, this predicted relation matches the observed one well. In addition, we briefly describe a pathway to improving the current semi-analytic models of galaxy formation by incorporating the local leaky-box model in the cosmological context, which can potentially explain simultaneously multiple properties of Milky Way-type disc galaxies, such as the size growth and the global stellar mass-gas metallicity relation.

  14. A thermodynamic model for aqueous solutions of liquid-like density

    Energy Technology Data Exchange (ETDEWEB)

    Pitzer, K.S.

    1987-06-01

    The paper describes a model for the prediction of the thermodynamic properties of multicomponent aqueous solutions and discusses its applications. The model was initially developed for solutions near room temperature, but has been found to be applicable to aqueous systems up to 300°C or slightly higher. A liquid-like density and relatively small compressibility are assumed. A typical application is the prediction of the equilibrium between an aqueous phase (brine) and one or more solid phases (minerals). (ACR)

  16. A mass-density model can account for the size-weight illusion.

    Science.gov (United States)

    Wolf, Christian; Bergmann Tiest, Wouter M; Drewing, Knut

    2018-01-01

    When judging the heaviness of two objects with equal mass, people perceive the smaller and denser of the two as being heavier. Despite the large number of theories, covering bottom-up and top-down approaches, none of them can fully account for all aspects of this size-weight illusion and thus for human heaviness perception. Here we propose a new maximum-likelihood estimation model which describes the illusion as the weighted average of two heaviness estimates with correlated noise: One estimate derived from the object's mass, and the other from the object's density, with estimates' weights based on their relative reliabilities. While information about mass can directly be perceived, information about density will in some cases first have to be derived from mass and volume. However, according to our model at the crucial perceptual level, heaviness judgments will be biased by the objects' density, not by its size. In two magnitude estimation experiments, we tested model predictions for the visual and the haptic size-weight illusion. Participants lifted objects which varied in mass and density. We additionally varied the reliability of the density estimate by varying the quality of either visual (Experiment 1) or haptic (Experiment 2) volume information. As predicted, with increasing quality of volume information, heaviness judgments were increasingly biased towards the object's density: Objects of the same density were perceived as more similar and big objects were perceived as increasingly lighter than small (denser) objects of the same mass. This perceived difference increased with an increasing difference in density. In an additional two-alternative forced choice heaviness experiment, we replicated that the illusion strength increased with the quality of volume information (Experiment 3). 
Overall, the results highly corroborate our model, which seems promising as a starting point for a unifying framework for the size-weight illusion and human heaviness perception.
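The core of the model above — a weighted average of a mass-based and a density-based heaviness estimate, with weights set by their relative reliabilities — can be sketched in a few lines. This is a simplified illustration: the correlated-noise terms of the full maximum-likelihood model are omitted, and all numbers are hypothetical.

```python
def combined_heaviness(mass_est, density_est, r_mass, r_density):
    """Reliability-weighted average of a mass-based and a density-based
    heaviness estimate (correlated-noise terms omitted for brevity)."""
    w_density = r_density / (r_mass + r_density)
    return (1.0 - w_density) * mass_est + w_density * density_est

# Two equal-mass objects: the smaller one is denser, so its density-based
# heaviness estimate is higher. Better volume information makes the density
# estimate more reliable, biasing the judgment toward density -> a stronger
# size-weight illusion, as in Experiments 1-3.
low_quality = combined_heaviness(10.0, 14.0, r_mass=4.0, r_density=1.0)
high_quality = combined_heaviness(10.0, 14.0, r_mass=4.0, r_density=4.0)
print(low_quality, high_quality)  # illusion grows with volume-cue quality
```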

  17. Protein single-model quality assessment by feature-based probability density functions.

    Science.gov (United States)

    Cao, Renzhi; Cheng, Jianlin

    2016-04-04

    Protein quality assessment (QA) has played an important role in protein structure prediction. We developed a novel single-model quality assessment method, Qprob. Qprob calculates the absolute error of each protein feature value against the true quality scores (i.e. GDT-TS scores) of protein structural models, and uses these errors to estimate a probability density distribution for quality assessment. Qprob was blindly tested in the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as the MULTICOM-NOVEL server. The official CASP results show that Qprob ranks as one of the top single-model QA methods. In addition, Qprob contributes to our protein tertiary structure predictor MULTICOM, which is officially ranked 3rd out of 143 predictors. This performance shows that Qprob is effective at assessing the quality of models of hard targets. These results demonstrate that this new probability density distribution based method is effective for protein single-model quality assessment and is useful for protein structure prediction. Qprob is freely available as a web server at: http://calla.rnet.missouri.edu/qprob/.

  18. Integrated predictive modelling simulations of burning plasma experiment designs

    International Nuclear Information System (INIS)

    Bateman, Glenn; Onjun, Thawatchai; Kritz, Arnold H

    2003-01-01

    Models for the height of the pedestal at the edge of H-mode plasmas (Onjun T et al 2002 Phys. Plasmas 9 5018) are used together with the Multi-Mode core transport model (Bateman G et al 1998 Phys. Plasmas 5 1793) in the BALDUR integrated predictive modelling code to predict the performance of the ITER (Aymar A et al 2002 Plasma Phys. Control. Fusion 44 519), FIRE (Meade D M et al 2001 Fusion Technol. 39 336), and IGNITOR (Coppi B et al 2001 Nucl. Fusion 41 1253) fusion reactor designs. The simulation protocol used in this paper is tested by comparing predicted temperature and density profiles against experimental data from 33 H-mode discharges in the JET (Rebut P H et al 1985 Nucl. Fusion 25 1011) and DIII-D (Luxon J L et al 1985 Fusion Technol. 8 441) tokamaks. The sensitivities of the predictions are evaluated for the burning plasma experimental designs by using variations of the pedestal temperature model that are one standard deviation above and below the standard model. Simulations of the fusion reactor designs are carried out for scans in which the plasma density and auxiliary heating power are varied.

  19. Long-term orbit prediction for Tiangong-1 spacecraft using the mean atmosphere model

    Science.gov (United States)

    Tang, Jingshi; Liu, Lin; Cheng, Haowen; Hu, Songjie; Duan, Jianfeng

    2015-03-01

    China is planning to complete its first space station by 2020. For long-term management and maintenance, the orbit of the space station needs to be predicted far in advance. Since the space station is expected to work in a low-Earth orbit, the error in the a priori atmosphere model contributes significantly to the rapid growth of the predicted orbit error. When the orbit is predicted 20 days ahead, an uncorrected error in the a priori atmosphere model could induce a semi-major axis error of up to a few kilometers and an overall position error of several thousand kilometers. In this work, we use a mean atmosphere model averaged from NRLMSISE00. The a priori reference mean density can be corrected during the orbit determination. For the long-term orbit prediction, we use a sufficiently long period of observations and obtain a series of diurnal mean densities. This series contains the recent variation of the atmosphere density and can be analyzed for various periodic components. After being properly fitted, the mean density can be predicted and then applied in the orbit prediction. Here we carry out the test with China's Tiangong-1 spacecraft at an altitude of about 340 km and show that this method is simple and flexible. The densities predicted with this approach can serve in the long-term orbit prediction. In several 20-day prediction tests, most predicted orbits show semi-major axis errors better than 700 m and overall position errors better than 400 km.
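The fit-periodic-components-then-extrapolate step described above can be sketched with a simple least-squares harmonic fit. The density series, noise level, and the single assumed 27-day (solar-rotation-like) period are all synthetic choices for illustration; the actual method would extract several periodic components from real diurnal mean densities.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(120.0)                                           # days of history
rho_true = 3.0e-12 * (1.0 + 0.3 * np.sin(2.0 * np.pi * t / 27.0))  # kg/m^3
obs = rho_true * (1.0 + 0.02 * rng.normal(size=t.size))        # noisy diurnal means

# Least-squares fit of a mean plus one harmonic at the assumed 27-day period
P = 27.0
A = np.column_stack([np.ones_like(t),
                     np.sin(2.0 * np.pi * t / P),
                     np.cos(2.0 * np.pi * t / P)])
coef, *_ = np.linalg.lstsq(A, obs, rcond=None)

# Extrapolate the fitted mean density over a 20-day prediction window
t_fut = np.arange(120.0, 140.0)
A_fut = np.column_stack([np.ones_like(t_fut),
                         np.sin(2.0 * np.pi * t_fut / P),
                         np.cos(2.0 * np.pi * t_fut / P)])
rho_pred = A_fut @ coef
```

The predicted mean densities would then feed the drag model of the orbit propagator in place of the uncorrected a priori atmosphere model.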

  20. Droplet and bubble nucleation modeled by density gradient theory – cubic equation of state versus saft model

    Directory of Open Access Journals (Sweden)

    Hrubý Jan

    2012-04-01

    Full Text Available The study presents some preliminary results of the density gradient theory (GT combined with two different equations of state (EoS: the classical cubic equation by van der Waals and a recent approach based on the statistical associating fluid theory (SAFT, namely its perturbed-chain (PC modification. The results showed that the cubic EoS predicted for a given surface tension the density profile with a noticeable defect. Bulk densities predicted by the cubic EoS differed as much as by 100 % from the reference data. On the other hand, the PC-SAFT EoS provided accurate results for density profile and both bulk densities in the large range of temperatures. It has been shown that PC-SAFT is a promising tool for accurate modeling of nucleation using the GT. Besides the basic case of a planar phase interface, the spherical interface was analyzed to model a critical cluster occurring either for nucleation of droplets (condensation or bubbles (boiling, cavitation. However, the general solution for the spherical interface will require some more attention due to its numerical difficulty.

  1. Prediction of crack density and electrical resistance changes in indium tin oxide/polymer thin films under tensile loading

    KAUST Repository

    Mora Cordova, Angel; Khan, Kamran; El Sayed, Tamer

    2014-01-01

    We present unified predictions for the crack onset strain, evolution of crack density, and changes in electrical resistance in indium tin oxide/polymer thin films under tensile loading. We propose a damage mechanics model to quantify and predict

  2. Simplified local density model for adsorption over large pressure ranges

    International Nuclear Information System (INIS)

    Rangarajan, B.; Lira, C.T.; Subramanian, R.

    1995-01-01

    Physical adsorption of high-pressure fluids onto solids is of interest in the transportation and storage of fuel and radioactive gases; the separation and purification of lower hydrocarbons; solid-phase extractions; adsorbent regeneration using supercritical fluids; supercritical fluid chromatography; and critical point drying. A mean-field model is developed that superimposes the fluid-solid potential on a fluid equation of state to predict adsorption on a flat wall from vapor, liquid, and supercritical phases. A van der Waals-type equation of state is used to represent the fluid phase and is simplified with a local density approximation for calculating the configurational energy of the inhomogeneous fluid. The simplified local density approximation makes the model tractable for routine calculations over wide pressure ranges. The model is capable of predicting Type 2 and Type 3 subcritical isotherms for adsorption on a flat wall, and shows the characteristic cusplike behavior and crossovers seen experimentally near the fluid critical point.

  3. Improved water density feedback model for pressurized water reactors

    International Nuclear Information System (INIS)

    Casadei, A.L.

    1976-01-01

    An improved water density feedback model has been developed for neutron diffusion calculations of PWR cores. This work addresses spectral effects on few-group cross sections due to water density changes, and water density predictions considering open-channel and subcooled boiling effects. A homogenized spectral model was also derived using the unit-assembly diffusion method for use in a coarse-mesh 3D diffusion computer program. The spectral and water density evaluation models described were incorporated in a 3D diffusion code, and neutronic calculations for a typical PWR were completed for both nominal and accident conditions. Comparison of neutronic calculations employing the open versus the closed channel model for accident conditions indicates that significant safety margin increases can be obtained if subcooled boiling and open-channel effects are considered in accident calculations. This is attributed to effects on both core reactivity and power distribution, which result in increased margin to fuel degradation limits. For nominal operating conditions, negligible differences in core reactivity and power distribution exist, since flow redistribution and subcooled voids are not significant at such conditions. The results serve to confirm the conservatism of currently employed closed-channel feedback methods in accident analysis, and indicate that the model developed in this work can help demonstrate increased safety margins for certain accidents.

  4. Kernel density estimation-based real-time prediction for respiratory motion

    International Nuclear Information System (INIS)

    Ruan, Dan

    2010-01-01

    Effective delivery of adaptive radiotherapy requires locating the target with high precision in real time. System latency caused by data acquisition, streaming, processing and delivery control necessitates prediction. Prediction is particularly challenging for highly mobile targets such as thoracic and abdominal tumors undergoing respiration-induced motion. The complexity of the respiratory motion makes it difficult to build and justify explicit models. In this study, we honor the intrinsic uncertainties in respiratory motion and propose a statistical treatment of the prediction problem. Instead of asking for a deterministic covariate-response map and a unique estimated value for the future target position, we aim to obtain a distribution of the future target position (response variable) conditioned on the observed historical sample values (covariate variable). The key idea is to estimate the joint probability density function (pdf) of the covariate and response variables using an efficient kernel density estimation method. The problem of identifying the distribution of the future target position then reduces to taking the section of the joint pdf at the observed covariate. Subsequently, estimators are derived based on this estimated conditional distribution. This probabilistic perspective has some distinctive advantages over existing deterministic schemes: (1) it is compatible with potentially inconsistent training samples, i.e., when close covariate variables correspond to dramatically different response values; (2) it is not restricted by any prior structural assumption on the map between the covariate and the response; (3) the two-stage setup allows much freedom in choosing statistical estimates and provides a full nonparametric description of the uncertainty for the resulting estimate. We evaluated the prediction performance on ten patient RPM traces, using the root mean squared difference between the prediction and the observed value normalized by the
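The two-stage idea above — estimate the joint pdf of (covariate, response) by kernel density estimation, then take its section at the observed covariate and derive an estimator from the conditional density — can be sketched as follows. For brevity the covariate here is a single past sample rather than the vector of recent samples the method actually uses, and the surrogate breathing trace is synthetic.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Surrogate 1-D breathing trace; covariate = current displacement,
# response = displacement `lag` samples ahead.
rng = np.random.default_rng(3)
x = np.sin(np.linspace(0.0, 40.0 * np.pi, 4000)) + 0.02 * rng.normal(size=4000)
lag = 20
cov, resp = x[:-lag], x[lag:]

kde = gaussian_kde(np.vstack([cov, resp]))  # estimated joint pdf

def predict_mean(x0, grid=np.linspace(-1.3, 1.3, 261)):
    """Conditional-mean estimate from the section of the joint pdf at x0."""
    w = kde(np.vstack([np.full_like(grid, x0), grid]))  # unnormalized p(y | x0)
    return float(np.sum(grid * w) / np.sum(w))

print(predict_mean(0.8))
```

Note that at a covariate like x0 = 0 the conditional density is bimodal (inhaling vs. exhaling branches), which is exactly the "inconsistent training samples" case a deterministic map cannot represent; richer estimators than the conditional mean can be read off the same conditional density.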

  5. Models for Strength Prediction of High-Porosity Cast-In-Situ Foamed Concrete

    Directory of Open Access Journals (Sweden)

    Wenhui Zhao

    2018-01-01

    Full Text Available A study was undertaken to develop a compressive-strength prediction model for three types of high-porosity cast-in-situ foamed concrete (cement mix, cement-fly ash mix, and cement-sand mix) with dry densities of less than 700 kg/m3. The model is an extension of Balshin's model and takes into account the hydration ratio of the raw materials, in which the water/cement ratio was constant over the entire construction period for a given casting density. The results show that the measured porosity is slightly lower than the theoretical porosity due to a few inaccessible pores. The compressive strength increases exponentially with the ratio of the dry density to the solid density and increases with curing time following a composite function of the form A2(ln t)^B2 for all three types of foamed concrete. Based on the results that the compressive strength changes with porosity and curing time, a prediction model taking into account the mix constitution, curing time, and porosity is developed. A simple prediction model is also put forward for when no experimental data are available.
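A Balshin-type model relates strength to the density ratio through a power law, sigma = A * (rho_d/rho_s)**B, whose parameters are recovered by a log-log linear fit. The sketch below uses hypothetical values (A = 60 MPa, B = 3) and noise-free synthetic points purely to show the fitting step; it is not the paper's calibrated model, which additionally carries the curing-time factor.

```python
import numpy as np

# Synthetic strength data generated exactly from sigma = A * (rho_d/rho_s)**B
# with hypothetical A = 60 MPa and B = 3
rho_ratio = np.array([0.15, 0.20, 0.25, 0.30, 0.35])   # dry/solid density ratio
sigma = 60.0 * rho_ratio**3                             # compressive strength, MPa

# Linearize: ln(sigma) = ln(A) + B * ln(rho_ratio), then fit by least squares
B, lnA = np.polyfit(np.log(rho_ratio), np.log(sigma), 1)
A = np.exp(lnA)
print(f"A = {A:.2f} MPa, B = {B:.2f}")  # recovers the generating parameters
```

With real data, a curing-time factor of the form A2*(ln t)^B2 would multiply the density term, and its two parameters would be fitted the same way from strengths measured at several curing ages.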

  6. Hybrid neural network for density limit disruption prediction and avoidance on J-TEXT tokamak

    Science.gov (United States)

    Zheng, W.; Hu, F. R.; Zhang, M.; Chen, Z. Y.; Zhao, X. Q.; Wang, X. L.; Shi, P.; Zhang, X. L.; Zhang, X. Q.; Zhou, Y. N.; Wei, Y. N.; Pan, Y.; J-TEXT team

    2018-05-01

    Increasing the plasma density is one of the key methods for achieving an efficient fusion reaction, and high-density operation is a hot topic in tokamak plasmas. Density limit disruptions remain an important issue for safe operation. An effective density limit disruption prediction and avoidance system is the key to avoiding density limit disruptions in long-pulse steady-state operations. An artificial neural network has been developed for the prediction of density limit disruptions on the J-TEXT tokamak. The neural network has been improved from a simple multi-layer design to a hybrid two-stage structure. The first stage is a custom network which uses time-series diagnostics as inputs to predict the plasma density, and the second stage is a three-layer feedforward neural network to predict the probability of density limit disruptions. It is found that the hybrid neural network structure, combined with radiation profile information as an input, can significantly improve the prediction performance, especially the average warning time (T_warn). In particular, T_warn is eight times longer than that in previous work (Wang et al 2016 Plasma Phys. Control. Fusion 58 055014) (from 5 ms to 40 ms). The success rate for density limit disruptive shots is above 90%, while the false alarm rate for other shots is below 10%. Based on the density limit disruption prediction system and the real-time density feedback control system, an on-line density limit disruption avoidance system has been implemented on the J-TEXT tokamak.
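The two-stage architecture described above — a first network that maps time-series diagnostics to a density prediction, feeding a second feedforward network that outputs a disruption probability — can be caricatured with off-the-shelf components. Everything below is a synthetic stand-in (random "diagnostics", a linear "density", a made-up disruption rule); it shows only the staging, not the J-TEXT networks.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

rng = np.random.default_rng(5)
n = 400
diag = rng.normal(size=(n, 4))                       # stand-in diagnostic features
density = diag @ np.array([0.8, -0.5, 0.3, 0.1])     # stand-in plasma density
radiation = rng.normal(size=n)                       # radiation-profile feature
disrupt = (density + 0.5 * radiation > 1.0).astype(int)  # toy disruption label

# Stage 1: predict density from the diagnostics
stage1 = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
stage1.fit(diag, density)

# Stage 2: predict disruption from predicted density + radiation information
X2 = np.column_stack([stage1.predict(diag), radiation])
stage2 = MLPClassifier(hidden_layer_sizes=(8,), max_iter=3000, random_state=0)
stage2.fit(X2, disrupt)
print(f"training accuracy = {stage2.score(X2, disrupt):.2f}")
```

In the real system the stage-2 probability is thresholded to trigger the density feedback controller early enough to realize the reported 40 ms average warning time.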

  7. Classical density functional theory & simulations on a coarse-grained model of aromatic ionic liquids.

    Science.gov (United States)

    Turesson, Martin; Szparaga, Ryan; Ma, Ke; Woodward, Clifford E; Forsman, Jan

    2014-05-14

    A new classical density functional approach is developed to accurately treat a coarse-grained model of room temperature aromatic ionic liquids. Our major innovation is the introduction of charge-charge correlations, which are treated in a simple phenomenological way. We test this theory on a generic coarse-grained model for aromatic RTILs with oligomeric forms for both cations and anions, approximating 1-alkyl-3-methyl imidazoliums and BF₄⁻, respectively. We find that predictions by the new density functional theory for fluid structures at charged surfaces are very accurate, as compared with molecular dynamics simulations, across a range of surface charge densities and lengths of the alkyl chain. Predictions of interactions between charged surfaces are also presented.

  8. Phalangeal bone mineral density predicts incident fractures

    DEFF Research Database (Denmark)

    Friis-Holmberg, Teresa; Brixen, Kim; Rubin, Katrine Hass

    2012-01-01

    This prospective study investigates the use of phalangeal bone mineral density (BMD) for predicting fractures in a cohort of 15,542 individuals who underwent a BMD scan. In both women and men, a decrease in BMD was associated with an increased risk of fracture when adjusted for age and prevalent fractures...

  9. Hidden Semi-Markov Models for Predictive Maintenance

    Directory of Open Access Journals (Sweden)

    Francesco Cartella

    2015-01-01

    Full Text Available Realistic predictive maintenance approaches are essential for condition monitoring and predictive maintenance of industrial machines. In this work, we propose Hidden Semi-Markov Models (HSMMs) with (i) no constraints on the state duration density function and (ii) applicability to continuous or discrete observations. To deal with this type of HSMM, we also propose modifications to the learning, inference, and prediction algorithms. Finally, automatic model selection is made possible using the Akaike Information Criterion. This paper describes the theoretical formalization of the model as well as several experiments performed on simulated and real data with the aim of validating the methodology. In all performed experiments, the model is able to correctly estimate the current state and to effectively predict the time to a predefined event with a low overall average absolute error. As a consequence, its applicability to real-world settings can be beneficial, especially where the Remaining Useful Lifetime (RUL) of the machine is calculated in real time.

  10. Hounsfield unit density accurately predicts ESWL success.

    Science.gov (United States)

    Magnuson, William J; Tomera, Kevin M; Lance, Raymond S

    2005-01-01

Extracorporeal shockwave lithotripsy (ESWL) is a commonly used non-invasive treatment for urolithiasis. Helical CT scans provide detailed imaging of the patient with urolithiasis, including the ability to measure the density of urinary stones. In this study we tested the hypothesis that the density of urinary calculi as measured by CT can predict successful ESWL treatment. 198 patients were treated at Alaska Urological Associates with ESWL between January 2002 and April 2004. Of these, 101 met study inclusion criteria, with accessible CT scans and stones ranging from 5-15 mm. Follow-up imaging demonstrated stone freedom in 74.2%. The mean Hounsfield density values for the stone-free and residual stone groups differed significantly (93.61 vs 122.80). We conclude that Hounsfield unit density predicts the success of ESWL for upper tract calculi between 5 and 15 mm.

  11. Kinetic modeling of low density lipoprotein oxidation in arterial wall and its application in atherosclerotic lesions prediction.

    Science.gov (United States)

    Karimi, Safoora; Dadvar, Mitra; Modarress, Hamid; Dabir, Bahram

    2013-01-01

Oxidation of low-density lipoprotein (LDL) is one of the major factors in the atherogenic process. Oxidized LDL (Ox-LDL) trapped in the subendothelial matrix is taken up by macrophages and leads to foam cell generation, the first step in atherosclerosis development. Many researchers have studied LDL oxidation using in vitro cell-induced LDL oxidation models. The present study provides a kinetic model for LDL oxidation in the intima layer that can be used in modeling the development of atherosclerotic lesions. This is accomplished by describing lipid peroxidation kinetics in LDL through a system of elementary reactions. The characteristics of our proposed kinetic model are consistent with the results of previous experimental models from other researchers. Furthermore, the proposed LDL oxidation model is coupled to the mass transfer equation in order to predict the LDL concentration distribution in the intima layer, which is usually difficult to measure experimentally. According to the results, the LDL oxidation kinetic constant is an important parameter affecting LDL concentration in the intima layer: antioxidants, which reduce initiation rates and prevent radical formation, increase the concentration of LDL in the intima by reducing the LDL oxidation rate. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  12. Assessment of adsorbate density models for numerical simulations of zeolite-based heat storage applications

    International Nuclear Information System (INIS)

    Lehmann, Christoph; Beckert, Steffen; Gläser, Roger; Kolditz, Olaf; Nagel, Thomas

    2017-01-01

Highlights: • Characteristic curve fit for binderless zeolite 13XBFK. • Detailed comparison of adsorbate density models for Dubinin's adsorption theory. • Predicted heat storage densities robust against choice of density model. • Use of simple linear density models is sufficient. - Abstract: The study of water sorption in microporous materials is of increasing interest, particularly in the context of heat storage applications. The potential theory of micropore volume filling pioneered by Polanyi and Dubinin is a useful tool for the description of adsorption equilibria. Based on a single characteristic curve, the system can be extensively characterised in terms of isotherms, isobars, isosteres, enthalpies, etc. However, the mathematical description of the adsorbate density's temperature dependence has a significant impact especially on the estimation of the energetically relevant adsorption enthalpies. Here, we evaluate and compare different models existing in the literature and identify those leading to realistic predictions of adsorption enthalpies. This is an important prerequisite for accurate simulations of heat and mass transport ranging from the laboratory scale to the reactor level of the heat store.
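The Dubinin picture described above can be sketched numerically: a Polanyi adsorption potential A = RT·ln(p_sat/p), a characteristic curve W(A), and an adsorbate density model ρ(T) that together give the loading. All parameter values below (W0, E, n, the linear density coefficients, the state point) are illustrative placeholders, not the fitted values for zeolite 13XBFK:

```python
import math

R = 8.314  # J/(mol K)

def adsorption_potential(T, p, p_sat):
    """Polanyi adsorption potential A = R*T*ln(p_sat/p), in J/mol."""
    return R * T * math.log(p_sat / p)

def filled_volume(A, W0=3.0e-4, E=5000.0, n=2.0):
    """Dubinin-Astakhov characteristic curve W(A) = W0*exp(-(A/E)^n), m^3/kg.
    W0, E, n are assumed illustrative parameters, not a 13XBFK fit."""
    return W0 * math.exp(-((A / E) ** n))

def adsorbate_density(T, rho0=1000.0, alpha=0.6):
    """A simple linear adsorbate density model rho(T), kg/m^3 (assumed)."""
    return rho0 - alpha * (T - 293.15)

# Loading (kg adsorbate per kg adsorbent) at one illustrative state point:
T, p, p_sat = 313.15, 1000.0, 7380.0   # K, Pa, Pa (p_sat of water near 40 C)
A = adsorption_potential(T, p, p_sat)
loading = filled_volume(A) * adsorbate_density(T)
```

The point of the abstract is that the choice of ρ(T) feeds directly into the derived enthalpies; swapping `adsorbate_density` for another model is a one-line change here.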

  13. Long-term orbit prediction for China's Tiangong-1 spacecraft based on mean atmosphere model

    Science.gov (United States)

    Tang, Jingshi; Liu, Lin; Miao, Manqian

Tiangong-1 is China's test module for a future space station. It went through three successful rendezvous and dockings with Shenzhou spacecraft from 2011 to 2013. For long-term management and maintenance, the orbit sometimes needs to be predicted over a long period of time. As Tiangong-1 operates in a low-Earth orbit at an altitude of about 300-400 km, the error in the a priori atmosphere model contributes significantly to the rapid growth of the predicted orbit error. When the orbit is predicted 10-20 days ahead, the error in the a priori atmosphere model, if not properly corrected, can induce semi-major axis errors of a few kilometers and overall position errors of several thousand kilometers. In this work, we use a mean atmosphere model averaged from NRLMSIS00. The a priori reference mean density can be corrected during precise orbit determination (POD). For long-term orbit prediction, the observations are first accumulated. With a sufficiently long period of observations, we obtain a series of diurnal mean densities. This series captures the recent variation of the atmospheric density and can be analyzed for various periodicities. After being properly fitted, the mean density can be predicted and then applied in the orbit prediction. We show that densities predicted with this approach increase the accuracy of the predicted orbit. In several 20-day prediction tests, most predicted orbits show semi-major axis errors better than 700 m and overall position errors better than 600 km.
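The "fit the diurnal mean density series, then extrapolate" step can be sketched with an ordinary least-squares harmonic fit. The data, the single 27-day (solar-rotation-like) period, and the noise level are all synthetic stand-ins, not the paper's NRLMSIS00-derived series:

```python
import numpy as np

# Synthetic stand-in for a 60-day series of diurnal mean densities
# (normalized); a 27-day periodicity is one plausible component.
days = np.arange(60.0)
rho = (1.0 + 0.2 * np.sin(2 * np.pi * days / 27.0)
       + 0.02 * np.random.default_rng(1).standard_normal(days.size))

# Least-squares fit of mean + one harmonic at an assumed 27-day period.
period = 27.0
A = np.column_stack([np.ones_like(days),
                     np.sin(2 * np.pi * days / period),
                     np.cos(2 * np.pi * days / period)])
coef, *_ = np.linalg.lstsq(A, rho, rcond=None)

def predict(day):
    """Extrapolate the fitted mean density to future days."""
    return (coef[0]
            + coef[1] * np.sin(2 * np.pi * day / period)
            + coef[2] * np.cos(2 * np.pi * day / period))

forecast = predict(np.arange(60.0, 80.0))  # 20-day mean-density forecast
```

The fitted mean and harmonic amplitude recover the generating values here; in practice several periods would be screened and fitted jointly.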

  14. Nuclear ``pasta'' phase within density dependent hadronic models

    Science.gov (United States)

    Avancini, S. S.; Brito, L.; Marinelli, J. R.; Menezes, D. P.; de Moraes, M. M. W.; Providência, C.; Santos, A. M.

    2009-03-01

    In the present paper, we investigate the onset of the “pasta” phase with different parametrizations of the density dependent hadronic model and compare the results with one of the usual parametrizations of the nonlinear Walecka model. The influence of the scalar-isovector virtual δ meson is shown. At zero temperature, two different methods are used, one based on coexistent phases and the other on the Thomas-Fermi approximation. At finite temperature, only the coexistence phases method is used. npe matter with fixed proton fractions and in β equilibrium are studied. We compare our results with restrictions imposed on the values of the density and pressure at the inner edge of the crust, obtained from observations of the Vela pulsar and recent isospin diffusion data from heavy-ion reactions, and with predictions from spinodal calculations.

  16. Whole-brain grey matter density predicts balance stability irrespective of age and protects older adults from falling.

    Science.gov (United States)

    Boisgontier, Matthieu P; Cheval, Boris; van Ruitenbeek, Peter; Levin, Oron; Renaud, Olivier; Chanal, Julien; Swinnen, Stephan P

    2016-03-01

    Functional and structural imaging studies have demonstrated the involvement of the brain in balance control. Nevertheless, how decisive grey matter density and white matter microstructural organisation are in predicting balance stability, and especially when linked to the effects of ageing, remains unclear. Standing balance was tested on a platform moving at different frequencies and amplitudes in 30 young and 30 older adults, with eyes open and with eyes closed. Centre of pressure variance was used as an indicator of balance instability. The mean density of grey matter and mean white matter microstructural organisation were measured using voxel-based morphometry and diffusion tensor imaging, respectively. Mixed-effects models were built to analyse the extent to which age, grey matter density, and white matter microstructural organisation predicted balance instability. Results showed that both grey matter density and age independently predicted balance instability. These predictions were reinforced when the level of difficulty of the conditions increased. Furthermore, grey matter predicted balance instability beyond age and at least as consistently as age across conditions. In other words, for balance stability, the level of whole-brain grey matter density is at least as decisive as being young or old. Finally, brain grey matter appeared to be protective against falls in older adults as age increased the probability of losing balance in older adults with low, but not moderate or high grey matter density. No such results were observed for white matter microstructural organisation, thereby reinforcing the specificity of our grey matter findings. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Viscosity and Liquid Density of Asymmetric n-Alkane Mixtures: Measurement and Modelling

    DEFF Research Database (Denmark)

    Queimada, António J.; Marrucho, Isabel M.; Coutinho, João A.P.

    2005-01-01

Viscosity and liquid density measurements were performed at atmospheric pressure on pure and mixed n-decane, n-eicosane, n-docosane, and n-tetracosane from 293.15 K (or above the melting point) up to 343.15 K. The viscosity was determined with a rolling-ball viscometer and liquid densities with a vibrating U-tube densimeter. Pure-component results agreed, on average, with literature values within 0.2% for liquid density and 3% for viscosity. The measured data were used to evaluate the predictive performance of two models: the friction theory coupled with the Peng-Robinson equation of state, and a corresponding-states model recently proposed for surface tension, viscosity, vapor pressure, and liquid densities of the n-alkane series. Advantages and shortcomings of these models are discussed.

  18. A kinetic approach to modeling the manufacture of high density structural foam: Foaming and polymerization

    Energy Technology Data Exchange (ETDEWEB)

    Rao, Rekha R. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sandia National Laboratories, Livermore, CA (United States); Mondy, Lisa Ann [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sandia National Laboratories, Livermore, CA (United States); Noble, David R. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sandia National Laboratories, Livermore, CA (United States); Brunini, Victor [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sandia National Laboratories, Livermore, CA (United States); Roberts, Christine Cardinal [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sandia National Laboratories, Livermore, CA (United States); Long, Kevin Nicholas [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sandia National Laboratories, Livermore, CA (United States); Soehnel, Melissa Marie [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sandia National Laboratories, Livermore, CA (United States); Celina, Mathias C. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sandia National Laboratories, Livermore, CA (United States); Wyatt, Nicholas B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sandia National Laboratories, Livermore, CA (United States); Thompson, Kyle R. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sandia National Laboratories, Livermore, CA (United States); Tinsley, James

    2015-09-01

We are studying PMDI polyurethane with a fast catalyst, such that filling and polymerization occur simultaneously. The foam is over-packed to twice or more of its free-rise density to reach the density of interest. Our approach is to combine model development closely with experiments to discover new physics, to parameterize models, and to validate the models once they have been developed. The model must be able to represent the expansion, filling, curing, and final foam properties. PMDI is a chemically blown foam, where carbon dioxide is produced via the reaction of water and isocyanate. The isocyanate also reacts with polyol in a competing reaction, which produces the polymer. A new kinetic model is developed and implemented, which follows a simplified mathematical formalism that decouples these two reactions. The model predicts the polymerization reaction via condensation chemistry, where vitrification and glass transition temperature evolution must be included to correctly predict this quantity. The foam gas generation kinetics are determined by tracking the molar concentrations of both water and carbon dioxide. Understanding the thermal history and loads on the foam due to exothermicity and oven heating is very important to the results, since the kinetics and material properties are all very sensitive to temperature. The conservation equations, including the equations of motion, an energy balance, and three rate equations, are solved via a stabilized finite element method. We assume a generalized-Newtonian rheology that is dependent on the cure, gas fraction, and temperature. The conservation equations are combined with a level set method to determine the location of the free surface over time. Results from the model are compared to experimental flow visualization data and post-test CT density data. Several geometries are investigated, including a mock encapsulation part, two configurations of a mock structural part, and a bar geometry.
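The competing-reaction structure described above (isocyanate + water → CO₂ for blowing; isocyanate + polyol → polymer for gelling) can be sketched as a pair of coupled rate equations integrated explicitly. The rate constants, initial concentrations, and isothermal assumption are illustrative only; the actual model uses temperature-dependent kinetics and is solved within a finite element framework:

```python
# Competing-reaction sketch: isocyanate (NCO) reacts with water (blowing,
# producing CO2) and with polyol (gelling, producing polymer). Rate constants
# are assumed placeholders, not PMDI parameters; both would be Arrhenius
# functions of temperature in the real model.
k_blow, k_gel = 0.8, 0.5           # illustrative second-order rate constants
dt, t_end = 0.01, 20.0
nco, water, polyol, co2 = 1.0, 0.3, 0.7, 0.0

for _ in range(int(t_end / dt)):   # explicit Euler time integration
    r_blow = k_blow * nco * water
    r_gel = k_gel * nco * polyol
    nco -= (r_blow + r_gel) * dt
    water -= r_blow * dt
    polyol -= r_gel * dt
    co2 += r_blow * dt             # moles of gas generated drive expansion

cure = 1.0 - polyol / 0.7          # extent of the gelling reaction
```

The decoupled bookkeeping makes the conservation property explicit: CO₂ produced equals water consumed, while cure tracks polyol consumption independently.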

  19. Modelling interactions of toxicants and density dependence in wildlife populations

    Science.gov (United States)

    Schipper, Aafke M.; Hendriks, Harrie W.M.; Kauffman, Matthew J.; Hendriks, A. Jan; Huijbregts, Mark A.J.

    2013-01-01

1. A major challenge in the conservation of threatened and endangered species is to predict population decline and design appropriate recovery measures. However, anthropogenic impacts on wildlife populations are notoriously difficult to predict due to potentially nonlinear responses and interactions with natural ecological processes like density dependence. 2. Here, we incorporated both density dependence and anthropogenic stressors in a stage-based matrix population model and parameterized it for a density-dependent population of peregrine falcons Falco peregrinus exposed to two anthropogenic toxicants [dichlorodiphenyldichloroethylene (DDE) and polybrominated diphenyl ethers (PBDEs)]. Log-logistic exposure–response relationships were used to translate toxicant concentrations in peregrine falcon eggs to effects on fecundity. Density dependence was modelled as the probability of a nonbreeding bird acquiring a breeding territory as a function of the current number of breeders. 3. The equilibrium size of the population, as represented by the number of breeders, responded nonlinearly to increasing toxicant concentrations, showing a gradual decrease followed by a relatively steep decline. Initially, toxicant-induced reductions in population size were mitigated by an alleviation of the density limitation, that is, an increasing probability of territory acquisition. Once population density was no longer limiting, the toxicant impacts were no longer buffered by an increasing proportion of nonbreeders shifting to the breeding stage, resulting in a strong decrease in the equilibrium number of breeders. 4. Median critical exposure concentrations, that is, median toxicant concentrations in eggs corresponding with an equilibrium population size of zero, were 33 and 46 μg g⁻¹ fresh weight for DDE and PBDEs, respectively. 5. Synthesis and applications. Our modelling results showed that particular life stages of a density-limited population may be relatively insensitive to
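The model structure described in points 2-3 can be sketched as a two-stage (nonbreeder/breeder) projection with density-dependent territory acquisition and a log-logistic toxicant effect on fecundity. All parameter values are invented for illustration, not the peregrine falcon estimates from the study:

```python
# Illustrative two-stage matrix model with density dependence and a
# log-logistic exposure-response on fecundity (parameters assumed, not
# the study's peregrine falcon values).
s_nb, s_b = 0.7, 0.85        # nonbreeder / breeder annual survival
f0 = 0.9                     # fecundity (recruits per breeder) with no toxicant
K = 100.0                    # number of territories (density limitation)
ec50, beta = 30.0, 4.0       # log-logistic exposure-response parameters

def fecundity(conc):
    """Log-logistic reduction of fecundity with egg toxicant concentration."""
    return f0 / (1.0 + (conc / ec50) ** beta)

def equilibrium_breeders(conc, years=500):
    nb, b = 10.0, 10.0
    for _ in range(years):
        p_acq = max(0.0, 1.0 - b / K)   # territory-acquisition probability
        nb_new = fecundity(conc) * b + s_nb * nb * (1.0 - p_acq)
        b_new = s_b * b + s_nb * nb * p_acq
        nb, b = nb_new, b_new
    return b

b_clean = equilibrium_breeders(0.0)    # equilibrium breeders, no exposure
b_toxic = equilibrium_breeders(60.0)   # well above the assumed EC50
```

As in the paper's results, the buffering is visible in this sketch: moderate concentrations barely move the breeder equilibrium (the nonbreeder pool absorbs the loss), while concentrations well above the EC50 drive it toward zero.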

  20. Ensemble Assimilation Using Three First-Principles Thermospheric Models as a Tool for 72-hour Density and Satellite Drag Forecasts

    Science.gov (United States)

    Hunton, D.; Pilinski, M.; Crowley, G.; Azeem, I.; Fuller-Rowell, T. J.; Matsuo, T.; Fedrizzi, M.; Solomon, S. C.; Qian, L.; Thayer, J. P.; Codrescu, M.

    2014-12-01

Much as aircraft are affected by the prevailing winds and weather conditions in which they fly, satellites are affected by variability in the density and motion of the near-Earth space environment. Drastic changes in the neutral density of the thermosphere, caused by geomagnetic storms or other phenomena, result in perturbations of satellite motions through drag on the satellite surfaces. This can lead to difficulties in locating important satellites, temporarily losing track of satellites, and errors when predicting collisions in space. As the population of satellites in Earth orbit grows, higher space-weather prediction accuracy is required for critical missions, such as accurate catalog maintenance, collision avoidance for manned and unmanned space flight, reentry prediction, satellite lifetime prediction, defining on-board fuel requirements, and satellite attitude dynamics. We describe ongoing work to build a comprehensive nowcast and forecast system for neutral density, winds, temperature, composition, and satellite drag. This modeling tool will be called the Atmospheric Density Assimilation Model (ADAM). It will be based on three state-of-the-art coupled models of the thermosphere-ionosphere running in real time, using assimilative techniques to produce a thermospheric nowcast. It will also produce, in real time, 72-hour predictions of the global thermosphere-ionosphere system using the nowcast as the initial condition. We will review the requirements for the ADAM system, the underlying full-physics models, the plethora of input options available to drive the models, and a feasibility study showing the performance of first-principles models as it pertains to satellite-drag operational needs, and we will review challenges in designing an assimilative space-weather prediction model. The performance of the ensemble assimilative model is expected to exceed the performance of current empirical and assimilative density models.

  1. Predictability of bone density at posterior mandibular implant sites using cone-beam computed tomography intensity values.

    Science.gov (United States)

    Alkhader, Mustafa; Hudieb, Malik; Khader, Yousef

    2017-01-01

The aim of this study was to investigate the predictability of bone density at posterior mandibular implant sites using cone-beam computed tomography (CBCT) intensity values. CBCT cross-sectional images for 436 posterior mandibular implant sites were selected for the study. Using Invivo software (Anatomage, San Jose, California, USA), two observers classified the bone density into three categories: low, intermediate, and high, and CBCT intensity values were generated. Based on the consensus of the two observers, 15.6% of sites were of low bone density, 47.9% were of intermediate density, and 36.5% were of high density. Receiver-operating characteristic analysis showed that CBCT intensity values had a high predictive power for predicting high-density sites (area under the curve [AUC] = 0.94, P < 0.005) and intermediate-density sites (AUC = 0.81, P < 0.005). The best cut-off intensity value to predict intermediate-density sites was 218 (sensitivity = 0.77, specificity = 0.76) and the best cut-off value to predict high-density sites was 403 (sensitivity = 0.93, specificity = 0.77). CBCT intensity values are considered useful for predicting bone density at posterior mandibular implant sites.
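The ROC analysis used here (AUC plus a best cut-off balancing sensitivity and specificity) can be reproduced on synthetic data with a rank-based AUC and the Youden index. The intensity distributions below are invented stand-ins, not the CBCT data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for CBCT intensity values: 'high-density' sites tend to
# have larger intensities (distributions are illustrative, not the study data).
intensity = np.concatenate([rng.normal(250.0, 80.0, 300),   # not high density
                            rng.normal(450.0, 80.0, 150)])  # high density
label = np.concatenate([np.zeros(300), np.ones(150)])

def roc_auc_and_cutoff(score, y):
    """AUC via the rank (Mann-Whitney) statistic; cut-off via Youden's J."""
    order = np.argsort(score)
    ranks = np.empty(score.size)
    ranks[order] = np.arange(1, score.size + 1)
    n1, n0 = y.sum(), (1 - y).sum()
    auc = (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)
    best_j, best_cut = -1.0, None
    for c in np.unique(score):
        sens = np.mean(score[y == 1] >= c)   # sensitivity at threshold c
        spec = np.mean(score[y == 0] < c)    # specificity at threshold c
        if sens + spec - 1 > best_j:
            best_j, best_cut = sens + spec - 1, c
    return auc, best_cut

auc, cutoff = roc_auc_and_cutoff(intensity, label)
```

With well-separated groups the AUC comes out high and the Youden cut-off lands between the two group means, mirroring the 0.94/403 result reported for high-density sites.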

  2. Using Apparent Density of Paper from Hardwood Kraft Pulps to Predict Sheet Properties, based on Unsupervised Classification and Multivariable Regression Techniques

    Directory of Open Access Journals (Sweden)

    Ofélia Anjos

    2015-07-01

Paper properties determine the product application potential and depend on the raw material, pulping conditions, and pulp refining. The aim of this study was to construct mathematical models that predict quantitative relations between paper density and various mechanical and optical properties of the paper. A dataset of properties of paper handsheets produced with pulps of Acacia dealbata, Acacia melanoxylon, and Eucalyptus globulus beaten at 500, 2500, and 4500 revolutions was used. Unsupervised classification techniques were combined to assess the need for separate prediction models for each species, and multivariable regression techniques were used to establish such prediction models. It was possible to develop models with a high goodness of fit using paper density as the independent variable (or predictor) for all variables except tear index and zero-span tensile strength, both dry and wet.
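The single-predictor regressions described above (paper property as a function of paper density, judged by goodness of fit) can be sketched with an ordinary least-squares line and its R². The handsheet numbers below are invented for illustration, not the measured Acacia/Eucalyptus dataset:

```python
import numpy as np

# Illustrative handsheet data: paper density (kg/m^3) as the single predictor
# of tensile index (N*m/g); values are synthetic, not the study's dataset.
density = np.array([550.0, 600.0, 650.0, 700.0, 750.0, 800.0])
tensile = np.array([40.0, 52.0, 61.0, 73.0, 82.0, 95.0])

slope, intercept = np.polyfit(density, tensile, 1)
pred = slope * density + intercept

# Goodness of fit (R^2) of the single-predictor model.
ss_res = np.sum((tensile - pred) ** 2)
ss_tot = np.sum((tensile - tensile.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

A high R² here corresponds to the paper's "high goodness of fit"; properties like tear index, for which density is a poor predictor, would show a much lower R² under the same procedure.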

  3. Thermodynamic modeling of saturated liquid compositions and densities for asymmetric binary systems composed of carbon dioxide, alkanes and alkanols

    International Nuclear Information System (INIS)

    Bayestehparvin, Bita; Nourozieh, Hossein; Kariznovi, Mohammad; Abedi, Jalal

    2015-01-01

Highlights: • Phase behavior of binary systems containing largely different components. • Equation-of-state modeling of binary polar and non-polar systems utilizing different mixing rules. • Three different mixing rules (one-parameter, two-parameter, and Wong–Sandler) coupled with the Peng–Robinson equation of state. • The two-parameter mixing rule shows promising results compared to the one-parameter rule. • The Wong–Sandler mixing rule is unable to predict saturated liquid densities with sufficient accuracy. - Abstract: The present study focuses on the phase behavior modeling of asymmetric binary mixtures. The capability of different mixing rules and of volume translation in predicting solubility and saturated liquid density is investigated. Different binary systems of (alkane + alkanol), (alkane + alkane), (carbon dioxide + alkanol), and (carbon dioxide + alkane) are considered. The composition and the density of the saturated liquid phase at equilibrium are the properties of interest. Three main objectives are pursued. First, three different mixing rules (one-parameter, two-parameter, and Wong–Sandler) coupled with the Peng–Robinson equation of state were used to predict the equilibrium properties. The Wong–Sandler mixing rule was utilized with the non-random two-liquid (NRTL) model. Binary interaction coefficients and NRTL model parameters were optimized using the Levenberg–Marquardt algorithm. Second, to improve the density prediction, the volume translation technique was applied. Finally, two different approaches were considered to tune the equation of state: regression of experimental equilibrium compositions and densities separately, and simultaneously. The modeling results show that no single mixing rule is superior in predicting the equilibrium properties for all systems. The two-parameter and Wong–Sandler mixing rules show promising results.
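The building blocks of this kind of modeling — Peng–Robinson pure-component parameters and a classical one-parameter van der Waals mixing rule — can be sketched as follows. The critical constants are standard table values for CO₂ and n-decane, but the binary interaction coefficient is an illustrative placeholder, not a fitted value from the study:

```python
import math

R = 8.314  # J/(mol K)

def pr_pure(Tc, Pc, omega, T):
    """Peng-Robinson pure-component a(T) and b."""
    k = 0.37464 + 1.54226 * omega - 0.26992 * omega ** 2
    alpha = (1.0 + k * (1.0 - math.sqrt(T / Tc))) ** 2
    a = 0.45724 * R ** 2 * Tc ** 2 / Pc * alpha
    b = 0.07780 * R * Tc / Pc
    return a, b

def vdw_mix(x, a, b, kij):
    """Classical one-parameter van der Waals mixing rule."""
    n = len(x)
    a_mix = sum(x[i] * x[j] * math.sqrt(a[i] * a[j]) * (1.0 - kij[i][j])
                for i in range(n) for j in range(n))
    b_mix = sum(x[i] * b[i] for i in range(n))
    return a_mix, b_mix

# CO2 + n-decane at 344 K (critical constants from standard tables;
# the kij value is illustrative, not an optimized coefficient).
T = 344.0
a1, b1 = pr_pure(304.13, 7.377e6, 0.224, T)   # CO2
a2, b2 = pr_pure(617.7, 2.11e6, 0.490, T)     # n-decane
kij = [[0.0, 0.11], [0.11, 0.0]]
a_mix, b_mix = vdw_mix([0.6, 0.4], [a1, a2], [b1, b2], kij)
```

The two-parameter and Wong–Sandler rules replace `vdw_mix` with more flexible combining expressions; the study's point is that this choice, plus volume translation, controls how well saturated liquid densities are reproduced.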

  4. Integrated predictive modeling of high-mode tokamak plasmas using a combination of core and pedestal models

    International Nuclear Information System (INIS)

    Bateman, Glenn; Bandres, Miguel A.; Onjun, Thawatchai; Kritz, Arnold H.; Pankin, Alexei

    2003-01-01

    A new integrated modeling protocol is developed using a model for the temperature and density pedestal at the edge of high-mode (H-mode) plasmas [Onjun et al., Phys. Plasmas 9, 5018 (2002)] together with the Multi-Mode core transport model (MMM95) [Bateman et al., Phys. Plasmas 5, 1793 (1998)] in the BALDUR integrated modeling code to predict the temperature and density profiles of 33 H-mode discharges. The pedestal model is used to provide the boundary conditions in the simulations, once the heating power rises above the H-mode power threshold. Simulations are carried out for 20 discharges in the Joint European Torus and 13 discharges in the DIII-D tokamak. These discharges include systematic scans in normalized gyroradius, plasma pressure, collisionality, isotope mass, elongation, heating power, and plasma density. The average rms deviation between experimental data and the predicted profiles of temperature and density, normalized by central values, is found to be about 10%. It is found that the simulations tend to overpredict the temperature profiles in discharges with low heating power per plasma particle and to underpredict the temperature profiles in discharges with high heating power per particle. Variations of the pedestal model are used to test the sensitivity of the simulation results

  5. Multiple model cardinalized probability hypothesis density filter

    Science.gov (United States)

    Georgescu, Ramona; Willett, Peter

    2011-09-01

    The Probability Hypothesis Density (PHD) filter propagates the first-moment approximation to the multi-target Bayesian posterior distribution while the Cardinalized PHD (CPHD) filter propagates both the posterior likelihood of (an unlabeled) target state and the posterior probability mass function of the number of targets. Extensions of the PHD filter to the multiple model (MM) framework have been published and were implemented either with a Sequential Monte Carlo or a Gaussian Mixture approach. In this work, we introduce the multiple model version of the more elaborate CPHD filter. We present the derivation of the prediction and update steps of the MMCPHD particularized for the case of two target motion models and proceed to show that in the case of a single model, the new MMCPHD equations reduce to the original CPHD equations.
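The multiple-model prediction machinery underlying such filters can be illustrated for the two-motion-model case with an IMM-style mixing step: model probabilities are propagated through the Markov model-transition matrix and model-conditioned states are mixed before each model's dynamics are applied. This is a single-target Gaussian sketch of the mixing idea only, not the MMCPHD recursion itself, and all matrices are invented:

```python
import numpy as np

Pi = np.array([[0.95, 0.05],      # Markov model-transition probabilities
               [0.10, 0.90]])
mu = np.array([0.7, 0.3])         # current model probabilities

F = [np.array([[1.0, 1.0], [0.0, 1.0]]),   # near-constant-velocity model
     np.array([[1.0, 0.5], [0.0, 0.8]])]   # assumed manoeuvre model
x = [np.array([0.0, 1.0]), np.array([0.2, 0.8])]  # model-conditioned states

mu_pred = mu @ Pi                 # predicted model probabilities
# Mixing weights w[i, j] = P(model i at k-1 | model j at k).
w = (Pi * mu[:, None]) / mu_pred[None, :]
x_mixed = [w[0, j] * x[0] + w[1, j] * x[1] for j in range(2)]
x_pred = [F[j] @ x_mixed[j] for j in range(2)]
```

In the MMCPHD, the same transition-matrix mixing acts on the model-conditioned intensity components (and, for the CPHD, alongside the cardinality distribution) rather than on a single state vector.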

  6. Sparse Density, Leaf-Off Airborne Laser Scanning Data in Aboveground Biomass Component Prediction

    Directory of Open Access Journals (Sweden)

    Ville Kankare

    2015-05-01

The demand for cost-efficient forest aboveground biomass (AGB) prediction methods is growing worldwide. The National Land Survey of Finland (NLS) began collecting airborne laser scanning (ALS) data throughout Finland in 2008 to provide a new highly detailed terrain elevation model. Similar data sets are being collected in an increasing number of countries worldwide and offer great potential in forest mapping applications. The objectives of our study were (i) to evaluate the AGB component prediction accuracy at a resolution of 300 m² using metrics derived from sparse-density, leaf-off ALS data (collected by NLS) as predictor variables; (ii) to compare prediction accuracies with existing large-scale forest mapping techniques (Multi-source National Forest Inventory, MS-NFI) based on Landsat TM satellite imagery; and (iii) to evaluate the accuracy and effect of canopy height model (CHM) derived metrics on AGB component prediction when ALS data were acquired with multiple sensors and varying scanning parameters. Results showed that ALS point metrics can be used to predict component AGBs with an accuracy of 29.7%–48.3%. AGB prediction accuracy was slightly improved using CHM-derived metrics, but CHM metrics had a clearer effect on the estimated bias. Compared to the MS-NFI, the prediction accuracy was considerably higher, owing to differences in the remote sensing data utilized.
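The accuracy figures quoted (29.7%–48.3%) are relative errors of predictions against field-measured plot AGB. That evaluation loop can be sketched with a one-metric linear model and a leave-one-out relative RMSE; the height metric, plot values, and noise level are synthetic stand-ins, not the NLS/field data:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic plot data: a sparse-ALS canopy height metric (e.g. an upper
# height percentile, m) vs. measured AGB (t/ha); values are illustrative.
h80 = rng.uniform(5.0, 25.0, 50)
agb = 8.0 * h80 + rng.normal(0.0, 20.0, 50)

# Leave-one-out evaluation of a one-metric linear AGB model.
pred = np.empty_like(agb)
for i in range(agb.size):
    mask = np.arange(agb.size) != i
    s, c = np.polyfit(h80[mask], agb[mask], 1)
    pred[i] = s * h80[i] + c

rmse = np.sqrt(np.mean((pred - agb) ** 2))
rel_rmse = 100.0 * rmse / agb.mean()   # accuracy as a percent of mean AGB
```

Reporting `rel_rmse` as a percentage of the mean is what makes figures like "29.7%–48.3%" comparable across biomass components of very different magnitudes.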

  7. Updated climatological model predictions of ionospheric and HF propagation parameters

    International Nuclear Information System (INIS)

    Reilly, M.H.; Rhoads, F.J.; Goodman, J.M.; Singh, M.

    1991-01-01

The prediction performance of several climatological models, including the Ionospheric Conductivity and Electron Density model, RADAR C, and the Ionospheric Communications Analysis and Predictions Program, is evaluated for different regions and sunspot number inputs. Particular attention is given to the near-real-time (NRT) predictions associated with single-station updates. It is shown that a dramatic improvement can be obtained by using single-station ionospheric data to update the driving parameters of an ionospheric model for NRT predictions of foF2 and other ionospheric and HF circuit parameters. For middle latitudes, the improvement extends out thousands of kilometers from the update point to points of comparable corrected geomagnetic latitude. 10 refs
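One simple way to picture a single-station update is to scale the climatological foF2 field by the observed/model ratio at the update point and taper the correction with distance in latitude. Everything here is a hypothetical sketch (the toy climatology, the Gaussian taper, and its 20° scale), not the update scheme used by the evaluated models:

```python
import numpy as np

def update_fof2(fof2_model, lat_grid, obs_fof2, model_at_station,
                station_lat, scale_deg=20.0):
    """Scale a climatological foF2 field by the observed/model ratio at the
    update station, tapering the correction with latitude distance
    (the Gaussian taper and its scale are assumptions of this sketch)."""
    ratio = obs_fof2 / model_at_station
    weight = np.exp(-((lat_grid - station_lat) / scale_deg) ** 2)
    return fof2_model * (1.0 + (ratio - 1.0) * weight)

lat = np.linspace(-60.0, 60.0, 25)
clim = 8.0 + 2.0 * np.cos(np.radians(lat))   # toy climatological foF2 (MHz)
updated = update_fof2(clim, lat, obs_fof2=11.0, model_at_station=10.0,
                      station_lat=40.0)
```

At the station latitude the field is scaled by the full 10% ratio; far away the update decays to zero, consistent with the abstract's observation that the improvement extends only to points of comparable corrected geomagnetic latitude.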

  8. Calculation of the effects of pumping, divertor configuration and fueling on density limit in a tokamak model problem

    International Nuclear Information System (INIS)

    Stacey, W. M.

    2001-01-01

    Several series of model problem calculations have been performed to investigate the predicted effect of pumping, divertor configuration and fueling on the maximum achievable density in diverted tokamaks. Density limitations due to thermal instabilities (confinement degradation and multifaceted axisymmetric radiation from the edge) and to divertor choking are considered. For gas fueling the maximum achievable density is relatively insensitive to pumping (on or off), to the divertor configuration (open or closed), or to the location of the gas injection, although the gas fueling rate required to achieve this maximum achievable density is quite sensitive to these choices. Thermal instabilities are predicted to limit the density at lower values than divertor choking. Higher-density limits are predicted for pellet injection than for gas fueling

  9. Transverse charge and magnetization densities: Improved chiral predictions down to b = 1 fm

    Energy Technology Data Exchange (ETDEWEB)

    Alarcon, Jose Manuel [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); Hiller Blin, Astrid N. [Johannes Gutenberg Univ., Mainz (Germany); Vicente Vacas, Manuel J. [Spanish National Research Council (CSIC), Valencia (Spain). Univ. of Valencia (UV), Inst. de Fisica Corpuscular; Weiss, Christian [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States)

    2018-03-01

The transverse charge and magnetization densities provide insight into the nucleon's inner structure. In the periphery, the isovector components are clearly dominant and can be computed in a model-independent way by means of a combination of chiral effective field theory (χEFT) and dispersion analysis. With a novel N/D method, we incorporate the pion electromagnetic form factor data into the χEFT calculation, thus taking into account pion-rescattering effects and the ρ-meson pole. As a consequence, we are able to reliably compute the densities down to distances b ≈ 1 fm, achieving a dramatic improvement over traditional χEFT calculations while remaining predictive and having controlled uncertainties.

  10. Model comparison on genomic predictions using high-density markers for different groups of bulls in the Nordic Holstein population

    DEFF Research Database (Denmark)

    Gao, Hongding; Su, Guosheng; Janss, Luc

    2013-01-01

    This study compared genomic predictions based on imputed high-density markers (~777,000) in the Nordic Holstein population using a genomic BLUP (GBLUP) model, 4 Bayesian exponential power models with different shape parameters (0.3, 0.5, 0.8, and 1.0) for the exponential power distribution...... relationship with the training population. Group_SMGS had both the sire and the maternal grandsire (MGS), Group_Sire had only the sire, Group_MGS had only the MGS, and Group_Non had neither the sire nor the MGS in the training population. Reliability of DGV was measured as the squared correlation between DGV...... and DRP divided by the reliability of DRP for the bulls in the validation data set. Unbiasedness of DGV was measured as the regression of DRP on DGV. The results indicated that DGV were more accurate and less biased for animals that were more related to the training population. In general, the Bayesian...

  11. Predictive Uncertainty Estimation in Water Demand Forecasting Using the Model Conditional Processor

    Directory of Open Access Journals (Sweden)

    Amos O. Anele

    2018-04-01

    Full Text Available In a previous paper, a number of potential models for short-term water demand (STWD) prediction were analysed to find the ones with the best fit. The results obtained in Anele et al. (2017) showed that hybrid models may be considered accurate and appropriate forecasting models for STWD prediction. However, such a best single-valued forecast does not guarantee reliable and robust decisions, which can be properly obtained via model uncertainty processors (MUPs). MUPs provide an estimate of the full predictive densities and not only the single-valued expected prediction. Amongst other MUPs, the purpose of this paper is to use the multivariate version of the model conditional processor (MCP), proposed by Todini (2008), to demonstrate how estimating the predictive probability conditional on a number of relatively good predictive models may improve our knowledge, thus reducing the predictive uncertainty (PU) when forecasting into the unknown future. Through the MCP approach, the probability distribution of the future water demand can be assessed depending on the forecast provided by one or more deterministic forecasting models. Based on average weekly data of 168 h, the probability density of the future demand is built conditional on three models’ predictions, namely the autoregressive-moving average (ARMA), feed-forward back-propagation neural network (FFBP-NN) and a hybrid model (i.e., the combined forecast from ARMA and FFBP-NN). The results obtained show that the MCP may be effectively used for real-time STWD prediction since it brings out the PU connected to its forecast, and such information could help water utilities estimate the risk connected to a decision.
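At its core, the MCP obtains the predictive density by conditioning a joint Gaussian of the (normal-space transformed) observation and the model forecasts. A minimal numpy sketch of that conditioning step; the joint covariance, means, and forecast values below are invented for illustration:

```python
import numpy as np

def mcp_conditional(mu, cov, forecasts):
    """Condition a joint Gaussian [y, x1..xn] on model forecasts x.
    Returns the predictive mean and variance of y given x, the core
    operation of the model conditional processor after transforming
    all variates to the normal space."""
    mu_y, mu_x = mu[0], mu[1:]
    s_yy = cov[0, 0]
    s_yx = cov[0, 1:]
    s_xx = cov[1:, 1:]
    w = np.linalg.solve(s_xx, s_yx)   # regression weights on the forecasts
    mean = mu_y + w @ (np.asarray(forecasts) - mu_x)
    var = s_yy - s_yx @ w             # predictive variance, reduced vs prior
    return mean, var

# Illustrative joint covariance of demand and two model forecasts
mu = np.array([100.0, 100.0, 100.0])
cov = np.array([[25.0, 20.0, 18.0],
                [20.0, 25.0, 15.0],
                [18.0, 15.0, 25.0]])
mean, var = mcp_conditional(mu, cov, [105.0, 103.0])
```

The predictive variance is strictly smaller than the prior variance of the demand, which is exactly the reduction of predictive uncertainty the abstract describes.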

  12. A Weakly Nonlinear Model for the Damping of Resonantly Forced Density Waves in Dense Planetary Rings

    Science.gov (United States)

    Lehmann, Marius; Schmidt, Jürgen; Salo, Heikki

    2016-10-01

    In this paper, we address the stability of resonantly forced density waves in dense planetary rings. Goldreich & Tremaine had already argued that density waves might be unstable, depending on the relationship between the ring’s viscosity and the surface mass density. In the recent paper by Schmidt et al., we pointed out that when—within a fluid description of the ring dynamics—the criterion for viscous overstability is satisfied, forced spiral density waves become unstable as well. In this case, linear theory fails to describe the damping, but the nonlinearity of the underlying equations guarantees a finite amplitude and eventually a damping of the wave. We apply the multiple-scale formalism to derive a weakly nonlinear damping relation from a hydrodynamical model. This relation describes the resonant excitation and nonlinear viscous damping of spiral density waves in a vertically integrated fluid disk with density-dependent transport coefficients. The model consistently predicts density waves to be (linearly) unstable in a ring region where the conditions for viscous overstability are met. Sufficiently far away from the Lindblad resonance, the surface mass density perturbation is predicted to saturate to a constant value due to nonlinear viscous damping. The damping lengths predicted by the model depend on certain input parameters, such as the distance to the threshold for viscous overstability in parameter space and the ground state surface mass density.

  13. Densities of Pure Ionic Liquids and Mixtures: Modeling and Data Analysis

    DEFF Research Database (Denmark)

    Abildskov, Jens; O’Connell, John P.

    2015-01-01

    Our two-parameter corresponding states model for liquid densities and compressibilities has been extended to more pure ionic liquids and to their mixtures with one or two solvents. A total of 19 new group contributions (5 new cations and 14 new anions) have been obtained for predicting pressure...

  14. A density model based on the Modified Quasichemical Model and applied to the (NaCl + KCl + ZnCl2) liquid

    International Nuclear Information System (INIS)

    Ouzilleau, Philippe; Robelin, Christian; Chartrand, Patrice

    2012-01-01

    Highlights: ► A model for the density of multicomponent inorganic liquids. ► The density model is based on the Modified Quasichemical Model. ► Application to the (NaCl + KCl + ZnCl2) ternary liquid. ► A Kohler–Toop-like asymmetric interpolation method was used. - Abstract: A theoretical model for the density of multicomponent inorganic liquids based on the Modified Quasichemical Model has been presented previously. By introducing into the Gibbs free energy of the liquid phase temperature-dependent molar volume expressions for the pure components and pressure-dependent excess parameters for the binary (and sometimes higher-order) interactions, it is possible to reproduce, and eventually predict, the molar volume and the density of the multicomponent liquid phase using standard interpolation methods. In the present article, this density model is applied to the (NaCl + KCl + ZnCl2) ternary liquid and a Kohler–Toop-like asymmetric interpolation method is used. All available density data for the (NaCl + KCl + ZnCl2) liquid were collected and critically evaluated, and optimized pressure-dependent model parameters have been found. This new volumetric model can be used with Gibbs free energy minimization software to calculate the molar volume and the density of (NaCl + KCl + ZnCl2) ternary melts.

  15. High-Density Lipoprotein Cholesterol, Blood Urea Nitrogen, and Serum Creatinine Can Predict Severe Acute Pancreatitis.

    Science.gov (United States)

    Hong, Wandong; Lin, Suhan; Zippi, Maddalena; Geng, Wujun; Stock, Simon; Zimmer, Vincent; Xu, Chunfang; Zhou, Mengtao

    2017-01-01

    Early prediction of the disease severity of acute pancreatitis (AP) would be helpful for triaging patients to the appropriate level of care and intervention. The aim of the study was to develop a model able to predict Severe Acute Pancreatitis (SAP). A total of 647 patients with AP were enrolled. The demographic data, hematocrit and High-Density Lipoprotein Cholesterol (HDL-C) determined at the time of admission, and Blood Urea Nitrogen (BUN) and serum creatinine (Scr) determined at the time of admission and 24 hrs after hospitalization were collected and analyzed statistically. Multivariate logistic regression indicated that HDL-C at admission and BUN and Scr at 24 hours (hrs) were independently associated with SAP. A logistic regression function (LR model) was developed to predict SAP as follows: -2.25 - 0.06 × HDL-C (mg/dl) at admission + 0.06 × BUN (mg/dl) at 24 hours + 0.66 × Scr (mg/dl) at 24 hours. The optimism-corrected c-index for the LR model was 0.832 after bootstrap validation. The area under the receiver operating characteristic curve for the LR model for the prediction of SAP was 0.84. The LR model consists of HDL-C at admission and BUN and Scr at 24 hours, representing an additional tool to stratify patients at risk of SAP.
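The quoted LR model is a linear predictor; converting it to a risk estimate uses the standard logistic link. A sketch using the coefficients reported in the abstract (the patient values below are hypothetical):

```python
import math

def sap_score(hdl_admission, bun_24h, scr_24h):
    """Linear predictor from the abstract:
    -2.25 - 0.06*HDL-C (mg/dl, admission) + 0.06*BUN (mg/dl, 24 h)
    + 0.66*Scr (mg/dl, 24 h)."""
    return -2.25 - 0.06 * hdl_admission + 0.06 * bun_24h + 0.66 * scr_24h

def sap_probability(hdl_admission, bun_24h, scr_24h):
    """Predicted probability of SAP via the usual logistic link
    (the link itself is the standard assumption for an LR model)."""
    return 1.0 / (1.0 + math.exp(-sap_score(hdl_admission, bun_24h, scr_24h)))

# Hypothetical patients: low HDL-C and elevated BUN/Scr raise predicted risk
p_low_risk = sap_probability(hdl_admission=60, bun_24h=12, scr_24h=0.9)
p_high_risk = sap_probability(hdl_admission=25, bun_24h=40, scr_24h=2.0)
```

The signs match the clinical reading of the abstract: HDL-C is protective (negative coefficient), while BUN and Scr at 24 hours increase the predicted risk.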

  16. Combinatorial nuclear level-density model

    International Nuclear Information System (INIS)

    Uhrenholt, H.; Åberg, S.; Dobrowolski, A.; Døssing, Th.; Ichikawa, T.; Möller, P.

    2013-01-01

    A microscopic nuclear level-density model is presented. The model is a completely combinatorial (micro-canonical) model based on the folded-Yukawa single-particle potential and includes explicit treatment of pairing, rotational and vibrational states. The microscopic character of all states enables extraction of level-distribution functions with respect to pairing gaps, parity and angular momentum. The results of the model are compared to available experimental data: level spacings at the neutron separation energy, data on total level-density functions from the Oslo method, cumulative level densities from low-lying discrete states, and data on parity ratios. Spherical and deformed nuclei follow basically different coupling schemes, and we focus on deformed nuclei.

  17. Prediction of lung density changes after radiotherapy by cone beam computed tomography response markers and pre-treatment factors for non-small cell lung cancer patients.

    Science.gov (United States)

    Bernchou, Uffe; Hansen, Olfred; Schytte, Tine; Bertelsen, Anders; Hope, Andrew; Moseley, Douglas; Brink, Carsten

    2015-10-01

    This study investigates the ability of pre-treatment factors and response markers extracted from standard cone-beam computed tomography (CBCT) images to predict the lung density changes induced by radiotherapy for non-small cell lung cancer (NSCLC) patients. Density changes in follow-up computed tomography scans were evaluated for 135 NSCLC patients treated with radiotherapy. Early response markers were obtained by analysing changes in lung density in CBCT images acquired during the treatment course. The ability of pre-treatment factors and CBCT markers to predict lung density changes induced by radiotherapy was investigated. Age and CBCT markers extracted at 10th, 20th, and 30th treatment fraction significantly predicted lung density changes in a multivariable analysis, and a set of response models based on these parameters were established. The correlation coefficient for the models was 0.35, 0.35, and 0.39, when based on the markers obtained at the 10th, 20th, and 30th fraction, respectively. The study indicates that younger patients without lung tissue reactions early into their treatment course may have minimal radiation induced lung density increase at follow-up. Further investigations are needed to examine the ability of the models to identify patients with low risk of symptomatic toxicity. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  18. Predicting Ligand Binding Sites on Protein Surfaces by 3-Dimensional Probability Density Distributions of Interacting Atoms

    Science.gov (United States)

    Jian, Jhih-Wei; Elumalai, Pavadai; Pitti, Thejkiran; Wu, Chih Yuan; Tsai, Keng-Chang; Chang, Jeng-Yih; Peng, Hung-Pin; Yang, An-Suei

    2016-01-01

    Predicting ligand binding sites (LBSs) on protein structures, which are obtained either from experimental or computational methods, is a useful first step in functional annotation or structure-based drug design for the protein structures. In this work, the structure-based machine learning algorithm ISMBLab-LIG was developed to predict LBSs on protein surfaces with input attributes derived from the three-dimensional probability density maps of interacting atoms, which were reconstructed on the query protein surfaces and were relatively insensitive to local conformational variations of the tentative ligand binding sites. The prediction accuracy of the ISMBLab-LIG predictors is comparable to that of the best LBS predictors benchmarked on several well-established testing datasets. More importantly, the ISMBLab-LIG algorithm has substantial tolerance to the prediction uncertainties of computationally derived protein structure models. As such, the method is particularly useful for predicting LBSs not only on experimental protein structures without known LBS templates in the database but also on computationally predicted model protein structures with structural uncertainties in the tentative ligand binding sites. PMID:27513851

  19. A Bayesian antedependence model for whole genome prediction.

    Science.gov (United States)

    Yang, Wenzhao; Tempelman, Robert J

    2012-04-01

    Hierarchical mixed effects models have been demonstrated to be powerful for predicting the genomic merit of livestock and plants, on the basis of high-density single-nucleotide polymorphism (SNP) marker panels, and their use is being increasingly advocated for genomic predictions in human health. Two particularly popular approaches, labeled BayesA and BayesB, are based on specifying all SNP-associated effects to be independent of each other. BayesB extends BayesA by allowing a large proportion of SNP markers to be associated with null effects. We further extend these two models to specify SNP effects as being spatially correlated due to the chromosomally proximal effects of causal variants. These two models, which we respectively dub ante-BayesA and ante-BayesB, are based on a first-order nonstationary antedependence specification between SNP effects. In a simulation study involving 20 replicate data sets, each analyzed at six different SNP marker densities with average LD levels ranging from r(2) = 0.15 to 0.31, the antedependence methods had significantly higher accuracies than their classical counterparts at the higher LD levels (r(2) ≥ 0.24), with differences exceeding 3%. A cross-validation study was also conducted on the heterogeneous stock mice data resource (http://mus.well.ox.ac.uk/mouse/HS/) using 6-week body weights as the phenotype. The antedependence methods increased cross-validation prediction accuracies by up to 3.6% compared to their classical counterparts on these benchmark data sets, demonstrating that the antedependence methods were more accurate than their classical counterparts for genomic predictions, even for individuals several generations beyond the training data.
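The first-order antedependence idea is that each SNP effect is regressed on its chromosomal neighbor, which induces a correlation structure that decays with marker distance. A sketch of the implied covariance, assuming for simplicity a constant antedependence parameter t (the nonstationary version in the paper lets the parameters vary by locus):

```python
import numpy as np

def antedependence_cov(n, t, var_delta=1.0):
    """Covariance of SNP effects under a first-order antedependence model:
    g_1 = d_1, g_j = t * g_{j-1} + d_j, with independent innovations d_j.
    Writing g = L d, the covariance is L D L^T."""
    L = np.eye(n)
    for j in range(1, n):
        L[j, :] = t * L[j - 1, :]   # inherit the neighbor's dependence chain
        L[j, j] = 1.0               # plus the marker's own innovation
    D = var_delta * np.eye(n)
    return L @ D @ L.T

S = antedependence_cov(5, t=0.6)
corr = S / np.sqrt(np.outer(np.diag(S), np.diag(S)))
```

The off-diagonal correlations fall off geometrically with chromosomal distance, which is what lets neighboring markers borrow strength from each other, in contrast to the independence assumption of classical BayesA/BayesB.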

  20. Multivariate power-law models for streamflow prediction in the Mekong Basin

    Directory of Open Access Journals (Sweden)

    Guillaume Lacombe

    2014-11-01

    New hydrological insights for the region: A combination of 3–6 explanatory variables – chosen among annual rainfall, drainage area, perimeter, elevation, slope, drainage density and latitude – is sufficient to predict a range of flow metrics with a prediction R-squared ranging from 84 to 95%. The inclusion of forest or paddy percentage coverage as an additional explanatory variable led to slight improvements in the predictive power of some of the low-flow models (lowest prediction R-squared = 89%). A physical interpretation of the model structure was possible for most of the resulting relationships. Compared to regional regression models developed in other parts of the world, this new set of equations performs reasonably well.
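Multivariate power-law models of this kind are linear after a log transform, so the exponents can be estimated by ordinary least squares. A self-contained sketch on synthetic catchments (all values are invented; the variable choice mimics, but is not taken from, the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic catchments: drainage area (km^2) and annual rainfall (mm)
area = rng.uniform(100, 10000, size=200)
rain = rng.uniform(800, 2500, size=200)
# "True" power-law flow metric with multiplicative (lognormal) noise
flow = 0.01 * area**0.9 * rain**1.2 * rng.lognormal(0.0, 0.05, size=200)

# OLS on the log-transformed model: log Q = log a + b1*log A + b2*log P
X = np.column_stack([np.ones_like(area), np.log(area), np.log(rain)])
coef, *_ = np.linalg.lstsq(X, np.log(flow), rcond=None)
log_a, b1, b2 = coef
```

The recovered exponents are the physically interpretable quantities the abstract refers to: b1 near 1 means flow scales roughly proportionally with drainage area, and so on.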

  1. NOx, Soot, and Fuel Consumption Predictions under Transient Operating Cycle for Common Rail High Power Density Diesel Engines

    Directory of Open Access Journals (Sweden)

    N. H. Walke

    2016-01-01

    Full Text Available Diesel engines presently face the challenge of controlling NOx and soot emissions on transient cycles, both to meet stricter emission norms and to control emissions during field operations. Development of a simulation tool for NOx and soot emissions prediction on transient operating cycles has become an important objective, since it can significantly reduce the experimentation time and cost required for tuning these emissions. Hence, in this work, a 0D comprehensive predictive model has been formulated by selecting appropriate combustion and emissions models and coupling them to engine cycle models. The selected combustion and emissions models are further modified to improve their prediction accuracy over the full operating zone. Responses of the combustion and emissions models have been validated for load and “start of injection” changes. Model-predicted transient fuel consumption, air handling system parameters, and NOx and soot emissions are in good agreement with measured data on a turbocharged high power density common rail engine for the “nonroad transient cycle” (NRTC). It can be concluded that 0D models can be used for prediction of transient emissions on modern engines. How the formulated approach can be extended to transient emissions prediction for other applications and fuels is also discussed.

  2. Modeling high-density-plasma deposition of SiO2 in SiH4/O2/Ar

    Energy Technology Data Exchange (ETDEWEB)

    Meeks, E.; Larson, R.S. [Sandia National Labs., Livermore, CA (United States); Ho, P.; Apblett, C. [Sandia National Labs., Albuquerque, NM (United States); Han, S.M.; Edelberg, E.; Aydil, E. [Univ. of California, Santa Barbara, CA (United States)

    1997-03-01

    The authors have compiled sets of gas-phase and surface reactions for use in modeling plasma-enhanced chemical vapor deposition of silicon dioxide from silane, oxygen and argon gas mixtures in high-density-plasma reactors. They have applied the reaction mechanisms to modeling three different kinds of high-density plasma deposition chambers, and tested them by comparing model predictions to a variety of experimental measurements. The model simulates a well mixed reactor by solving global conservation equations averaged across the reactor volume. The gas-phase reaction mechanism builds from fundamental electron-impact cross section data available in the literature, and also includes neutral-molecule, ion-ion, and ion-molecule reaction paths. The surface reaction mechanism is based on insight from attenuated total-reflection Fourier-transform infrared spectroscopy experiments. This mechanism describes the adsorption of radical species on an oxide surface, ion-enhanced reactions leading to species desorption from the surface layer, radical abstractions competing for surface sites, and direct energy-dependent ion sputtering of the oxide material. Experimental measurements of total ion densities, relative radical densities as functions of plasma operating conditions, and net deposition-rate have been compared to model predictions to test and modify the chemical kinetics mechanisms. Results show good quantitative agreement between model predictions and experimental measurements.

  3. Global and local level density models

    International Nuclear Information System (INIS)

    Koning, A.J.; Hilaire, S.; Goriely, S.

    2008-01-01

    Four different level density models, three phenomenological and one microscopic, are consistently parameterized using the same set of experimental observables. For each of the phenomenological models, the Constant Temperature Model, the Back-shifted Fermi gas Model and the Generalized Superfluid Model, a version without and with explicit collective enhancement is considered. Moreover, a recently published microscopic combinatorial model is compared with the phenomenological approaches and with the same set of experimental data. For each nuclide for which sufficient experimental data exists, a local level density parameterization is constructed for each model. Next, these local models have helped to construct global level density prescriptions, to be used for cases for which no experimental data exists. Altogether, this yields a collection of level density formulae and parameters that can be used with confidence in nuclear model calculations. To demonstrate this, a large-scale validation with experimental discrete level schemes and experimental cross sections and neutron emission spectra for various different reaction channels has been performed
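As a concrete example of a phenomenological level-density formula of the kind parameterized here, the back-shifted Fermi gas model has a simple closed form. A sketch with illustrative (not fitted) parameter values:

```python
import math

def bsfg_level_density(E, a=12.0, delta=0.5, sigma=4.0):
    """Back-shifted Fermi gas total level density (MeV^-1):
    rho(E) = exp(2*sqrt(a*U)) / (12*sqrt(2) * sigma * a**(1/4) * U**(5/4)),
    with effective excitation energy U = E - delta. The level-density
    parameter a, back-shift delta, and spin cutoff sigma are the quantities
    adjusted to data in local parameterizations."""
    U = E - delta
    if U <= 0:
        raise ValueError("excitation energy must exceed the back-shift")
    return math.exp(2.0 * math.sqrt(a * U)) / (
        12.0 * math.sqrt(2.0) * sigma * a**0.25 * U**1.25)

rho_5 = bsfg_level_density(5.0)   # level density at 5 MeV
rho_8 = bsfg_level_density(8.0)   # level density at 8 MeV
```

The steep, roughly exponential growth with excitation energy is what the experimental observables (level spacings at the neutron separation energy, Oslo-method data) constrain in practice.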

  4. Measurement of neoclassically predicted edge current density at ASDEX Upgrade

    Science.gov (United States)

    Dunne, M. G.; McCarthy, P. J.; Wolfrum, E.; Fischer, R.; Giannone, L.; Burckhart, A.; the ASDEX Upgrade Team

    2012-12-01

    Experimental confirmation of neoclassically predicted edge current density in an ELMy H-mode plasma is presented. Current density analysis using the CLISTE equilibrium code is outlined and the rationale for accuracy of the reconstructions is explained. Sample profiles and time traces from analysis of data at ASDEX Upgrade are presented. A high time resolution is possible due to the use of an ELM-synchronization technique. Additionally, the flux-surface-averaged current density is calculated using a neoclassical approach. Results from these two separate methods are then compared and are found to validate the theoretical formula. Finally, several discharges are compared as part of a fuelling study, showing that the size and width of the edge current density peak at the low-field side can be explained by the electron density and temperature drives and their respective collisionality modifications.

  5. Measurement of neoclassically predicted edge current density at ASDEX Upgrade

    International Nuclear Information System (INIS)

    Dunne, M.G.; McCarthy, P.J.; Wolfrum, E.; Fischer, R.; Giannone, L.; Burckhart, A.

    2012-01-01

    Experimental confirmation of neoclassically predicted edge current density in an ELMy H-mode plasma is presented. Current density analysis using the CLISTE equilibrium code is outlined and the rationale for accuracy of the reconstructions is explained. Sample profiles and time traces from analysis of data at ASDEX Upgrade are presented. A high time resolution is possible due to the use of an ELM-synchronization technique. Additionally, the flux-surface-averaged current density is calculated using a neoclassical approach. Results from these two separate methods are then compared and are found to validate the theoretical formula. Finally, several discharges are compared as part of a fuelling study, showing that the size and width of the edge current density peak at the low-field side can be explained by the electron density and temperature drives and their respective collisionality modifications. (paper)

  6. Predicting coastal cliff erosion using a Bayesian probabilistic model

    Science.gov (United States)

    Hapke, Cheryl J.; Plant, Nathaniel G.

    2010-01-01

    Regional coastal cliff retreat is difficult to model due to the episodic nature of failures and the along-shore variability of retreat events. There is a growing demand, however, for predictive models that can be used to forecast areas vulnerable to coastal erosion hazards. Increasingly, probabilistic models are being employed that require data sets of high temporal density to define the joint probability density function that relates forcing variables (e.g. wave conditions) and initial conditions (e.g. cliff geometry) to erosion events. In this study we use a multi-parameter Bayesian network to investigate correlations between key variables that control and influence variations in cliff retreat processes. The network uses Bayesian statistical methods to estimate event probabilities using existing observations. Within this framework, we forecast the spatial distribution of cliff retreat along two stretches of cliffed coast in Southern California. The input parameters are the height and slope of the cliff, a descriptor of material strength based on the dominant cliff-forming lithology, and the long-term cliff erosion rate that represents prior behavior. The model is forced using predicted wave impact hours. Results demonstrate that the Bayesian approach is well-suited to the forward modeling of coastal cliff retreat, with the correct outcomes forecast in 70–90% of the modeled transects. The model also performs well in identifying specific locations of high cliff erosion, thus providing a foundation for hazard mapping. This approach can be employed to predict cliff erosion at time-scales ranging from storm events to the impacts of sea-level rise at the century-scale.
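The probabilistic machinery behind such a network is discrete conditional-probability bookkeeping: condition on the transect's properties and marginalize over the forcing. A toy sketch with invented probability tables (not the fitted Southern California values):

```python
# Toy discrete Bayes-net fragment for cliff retreat (all probabilities invented):
# an erosion node conditioned on wave forcing and material strength,
# with the wave node marginalized out.
p_waves_high = 0.3                      # P(high wave impact hours)
cpt_erosion = {                         # P(erosion | waves, strength)
    ("high", "weak"):   0.70,
    ("high", "strong"): 0.30,
    ("low",  "weak"):   0.20,
    ("low",  "strong"): 0.05,
}

def p_erosion(strength):
    """Marginal probability of an erosion event for a transect of the
    given lithology, summing over the wave-forcing states."""
    return (p_waves_high * cpt_erosion[("high", strength)]
            + (1.0 - p_waves_high) * cpt_erosion[("low", strength)])

p_weak, p_strong = p_erosion("weak"), p_erosion("strong")
```

In the real network the tables are estimated from observations of past retreat, and additional parent nodes (cliff height, slope, long-term erosion rate) enter the same way.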

  7. Thermospheric density and satellite drag modeling

    Science.gov (United States)

    Mehta, Piyush Mukesh

    The United States depends heavily on its space infrastructure for a vast number of commercial and military applications. Space Situational Awareness (SSA) and Threat Assessment require maintaining accurate knowledge of the orbits of resident space objects (RSOs) and the associated uncertainties. Atmospheric drag is the largest source of uncertainty for low-perigee RSOs. The uncertainty stems from inaccurate modeling of neutral atmospheric mass density and inaccurate modeling of the interaction between the atmosphere and the RSO. In order to reduce the uncertainty in drag modeling, both atmospheric density and drag coefficient (CD) models need to be improved. Early atmospheric density models were developed from orbital drag data or observations of a few early compact satellites. To simplify calculations, densities derived from orbit data used a fixed CD value of 2.2 measured in a laboratory using clean surfaces. Measurements from pressure gauges obtained in the early 1990s have confirmed the adsorption of atomic oxygen on satellite surfaces. The varying levels of adsorbed oxygen along with the constantly changing atmospheric conditions cause large variations in CD with altitude and along the orbit of the satellite. Therefore, the use of a fixed CD in early development has resulted in large biases in atmospheric density models. A technique for generating corrections to empirical density models using precision orbit ephemerides (POE) as measurements in an optimal orbit determination process was recently developed. The process generates simultaneous corrections to the atmospheric density and ballistic coefficient (BC) by modeling the corrections as statistical exponentially decaying Gauss-Markov processes. The technique has been successfully implemented in generating density corrections using the CHAMP and GRACE satellites. This work examines the effectiveness of the technique, specifically the transfer of density model errors into BC estimates, using the CHAMP and
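The exponentially decaying Gauss-Markov processes used for the corrections have a simple discrete-time recursion whose stationary variance can be checked numerically. A sketch with illustrative time constants (not the values used in the thesis):

```python
import numpy as np

def simulate_gauss_markov(n, dt, tau, q, rng):
    """First-order Gauss-Markov process x_{k+1} = phi * x_k + w_k with
    phi = exp(-dt/tau) and w_k ~ N(0, q*(1 - phi^2)), chosen so that the
    stationary variance is exactly q. This is the standard form for
    exponentially correlated correction states in orbit determination."""
    phi = np.exp(-dt / tau)
    w = rng.normal(0.0, np.sqrt(q * (1.0 - phi**2)), size=n)
    x = np.empty(n)
    x[0] = rng.normal(0.0, np.sqrt(q))   # start in the stationary distribution
    for k in range(1, n):
        x[k] = phi * x[k - 1] + w[k]
    return x

rng = np.random.default_rng(1)
# Illustrative: 60 s steps, 30 min correlation time, stationary variance 4
x = simulate_gauss_markov(200_000, dt=60.0, tau=1800.0, q=4.0, rng=rng)
```

The correlation time tau controls how quickly a density (or BC) correction is forgotten between measurement updates, which is central to how the filter separates density errors from BC errors.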

  8. Geometry optimization method versus predictive ability in QSPR modeling for ionic liquids

    Science.gov (United States)

    Rybinska, Anna; Sosnowska, Anita; Barycki, Maciej; Puzyn, Tomasz

    2016-02-01

    Computational techniques, such as Quantitative Structure-Property Relationship (QSPR) modeling, are very useful in predicting physicochemical properties of various chemicals. Building QSPR models requires calculating molecular descriptors and a proper choice of the geometry optimization method suited to the specific structure of the tested compounds. Herein, we examine the influence of the ionic liquids' (ILs) geometry optimization methods on the predictive ability of QSPR models by comparing three models. The models were developed based on the same experimental data on density collected for 66 ionic liquids, but with molecular descriptors calculated from molecular geometries optimized at three different levels of theory, namely: (1) semi-empirical (PM7), (2) ab initio (HF/6-311+G*) and (3) density functional theory (B3LYP/6-311+G*). The model in which the descriptors were calculated using the ab initio HF/6-311+G* method showed the best predictive capability (Q2_EXT = 0.87). However, the PM7-based model has comparable quality parameters (Q2_EXT = 0.84). The obtained results indicate that semi-empirical methods (faster and less expensive in CPU time) can be successfully employed for geometry optimization in QSPR studies of ionic liquids.
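The external predictivity statistic quoted for these models compares prediction errors on the validation set against the spread of the training responses. A minimal sketch of its computation (the numbers are illustrative, not from the paper):

```python
def q2_ext(y_val, y_pred, y_train):
    """External explained variance, a common QSPR validation statistic:
    Q2_EXT = 1 - sum((y_val - y_pred)^2) / sum((y_val - mean(y_train))^2)."""
    y_bar = sum(y_train) / len(y_train)
    press = sum((yv - yp) ** 2 for yv, yp in zip(y_val, y_pred))
    ss = sum((yv - y_bar) ** 2 for yv in y_val)
    return 1.0 - press / ss

# Perfect predictions give Q2_EXT = 1; noisier predictions lower it.
q_perfect = q2_ext([1.0, 2.0, 3.0], [1.0, 2.0, 3.0], [1.5, 2.5])
q_noisy = q2_ext([1.0, 2.0, 3.0], [1.2, 1.7, 3.4], [1.5, 2.5])
```

Values of 0.84 versus 0.87, as reported here, therefore correspond to only a modest difference in external prediction error between the PM7 and HF geometries.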

  9. Density Forecasts of Crude-Oil Prices Using Option-Implied and ARCH-Type Models

    DEFF Research Database (Denmark)

    Tsiaras, Leonidas; Høg, Esben

      The predictive accuracy of competing crude-oil price forecast densities is investigated for the 1994-2006 period. Moving beyond standard ARCH models that rely exclusively on past returns, we examine the benefits of utilizing the forward-looking information that is embedded in the prices...... as for regions and intervals that are of special interest for the economic agent. We find that non-parametric adjustments of risk-neutral density forecasts perform significantly better than their parametric counterparts. Goodness-of-fit tests and out-of-sample likelihood comparisons favor forecast densities...

  10. Gravitational form factors and angular momentum densities in light-front quark-diquark model

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, Narinder [Indian Institute of Technology Kanpur, Department of Physics, Kanpur (India); Mondal, Chandan [Chinese Academy of Sciences, Institute of Modern Physics, Lanzhou (China); Sharma, Neetika [I K Gujral Punjab Technical University, Department of Physical Sciences, Jalandhar, Punjab (India); Panjab University, Department of Physics, Chandigarh (India)

    2017-12-15

    We investigate the gravitational form factors (GFFs) and the longitudinal momentum densities (p+ densities) for the proton in a light-front quark-diquark model. The light-front wave functions are constructed from the soft-wall AdS/QCD prediction. The contributions from both the scalar and the axial vector diquarks are considered here. The results are compared with the consequences of a parametrization of nucleon generalized parton distributions (GPDs) in the light of recent MRST measurements of parton distribution functions (PDFs) and a soft-wall AdS/QCD model. The spatial distribution of angular momentum for up and down quarks inside the nucleon has been presented. At the density level, we illustrate different definitions of angular momentum explicitly for up and down quarks in the light-front quark-diquark model inspired by AdS/QCD. (orig.)

  11. Moving Towards Dynamic Ocean Management: How Well Do Modeled Ocean Products Predict Species Distributions?

    Directory of Open Access Journals (Sweden)

    Elizabeth A. Becker

    2016-02-01

    Full Text Available Species distribution models are now widely used in conservation and management to predict suitable habitat for protected marine species. The primary sources of dynamic habitat data have been in situ and remotely sensed oceanic variables (both are considered “measured data”, but now ocean models can provide historical estimates and forecast predictions of relevant habitat variables such as temperature, salinity, and mixed layer depth. To assess the performance of modeled ocean data in species distribution models, we present a case study for cetaceans that compares models based on output from a data assimilative implementation of the Regional Ocean Modeling System (ROMS to those based on measured data. Specifically, we used seven years of cetacean line-transect survey data collected between 1991 and 2009 to develop predictive habitat-based models of cetacean density for 11 species in the California Current Ecosystem. Two different generalized additive models were compared: one built with a full suite of ROMS output and another built with a full suite of measured data. Model performance was assessed using the percentage of explained deviance, root mean squared error (RMSE, observed to predicted density ratios, and visual inspection of predicted and observed distributions. Predicted distribution patterns were similar for models using ROMS output and measured data, and showed good concordance between observed sightings and model predictions. Quantitative measures of predictive ability were also similar between model types, and RMSE values were almost identical. The overall demonstrated success of the ROMS-based models opens new opportunities for dynamic species management and biodiversity monitoring because ROMS output is available in near real time and can be forecast.

  12. Experimental measurements and prediction of liquid densities for n-alkane mixtures

    International Nuclear Information System (INIS)

    Ramos-Estrada, Mariana; Iglesias-Silva, Gustavo A.; Hall, Kenneth R.

    2006-01-01

    We present experimental liquid densities for n-pentane, n-hexane and n-heptane and their binary mixtures from (273.15 to 363.15) K over the entire composition range (for the mixtures) at atmospheric pressure. A vibrating tube densimeter produces the experimental densities. Also, we present a generalized correlation to predict the liquid densities of n-alkanes and their mixtures. We have combined the principle of congruence with the Tait equation to obtain an equation that uses temperature, pressure and the equivalent carbon number of the mixture as variables. Also, we present a generalized correlation for the atmospheric liquid densities of n-alkanes. The average absolute percentage deviation of this equation from the literature experimental density values is 0.26%. The Tait equation has an average percentage deviation of 0.15% from experimental density measurements.
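    The Tait form underlying such correlations can be sketched as follows; the B and C parameter values below are illustrative placeholders, not the fitted constants from the paper:

```python
import math

def tait_density(rho_ref, pressure, p_ref, b, c):
    """Tait equation: rho(P) = rho_ref / (1 - C * ln((B + P) / (B + P_ref)))."""
    return rho_ref / (1.0 - c * math.log((b + pressure) / (b + p_ref)))

# Illustrative: an n-alkane-like liquid, density in kg/m3, pressure in MPa.
rho_atm = tait_density(700.0, 0.1, 0.1, 80.0, 0.09)   # at the reference pressure
rho_hp = tait_density(700.0, 50.0, 0.1, 80.0, 0.09)   # compressed liquid
```

    At the reference pressure the equation returns the reference density exactly; raising the pressure increases the predicted density, as expected for a compressed liquid.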

  13. Predicting oak density with ecological, physical, and soil indicators

    Science.gov (United States)

    Callie Jo Schweitzer; Adrian A. Lesak; Yong Wang

    2006-01-01

    We predicted density of oak species in the mid-Cumberland Plateau region of northeastern Alabama on the basis of basal area of tree associations based on light tolerances, physical site characteristics, and soil type. Tree basal area was determined for four species groups: oaks (Quercus spp.), hickories (Carya spp.), yellow-poplar...

  14. Thermospheric mass density model error variance as a function of time scale

    Science.gov (United States)

    Emmert, J. T.; Sutton, E. K.

    2017-12-01

    In the increasingly crowded low-Earth orbit environment, accurate estimation of orbit prediction uncertainties is essential for collision avoidance. Poor characterization of such uncertainty can result in unnecessary and costly avoidance maneuvers (false positives) or disregard of a collision risk (false negatives). Atmospheric drag is a major source of orbit prediction uncertainty, and is particularly challenging to account for because it exerts a cumulative influence on orbital trajectories and is therefore not amenable to representation by a single uncertainty parameter. To address this challenge, we examine the variance of measured accelerometer-derived and orbit-derived mass densities with respect to predictions by thermospheric empirical models, using the data-minus-model variance as a proxy for model uncertainty. Our analysis focuses mainly on the power spectrum of the residuals, and we construct an empirical model of the variance as a function of time scale (from 1 hour to 10 years), altitude, and solar activity. We find that the power spectral density approximately follows a power-law process but with an enhancement near the 27-day solar rotation period. The residual variance increases monotonically with altitude between 250 and 550 km. There are two components to the variance dependence on solar activity: one component is 180 degrees out of phase (largest variance at solar minimum), and the other component lags 2 years behind solar maximum (largest variance in the descending phase of the solar cycle).
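    The analysis described centers on the power spectrum of data-minus-model residuals and a power-law fit to it. A minimal sketch of that procedure on synthetic residuals (a random walk, whose low-frequency spectrum follows a power law with exponent near -2; the real thermospheric residuals and the 27-day enhancement are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
resid = np.cumsum(rng.standard_normal(4096))   # synthetic residual time series

# Periodogram of the residuals (drop the zero-frequency bin).
freqs = np.fft.rfftfreq(resid.size, d=1.0)[1:]
psd = np.abs(np.fft.rfft(resid))[1:] ** 2 / resid.size

# Fit a power-law process in the low-frequency regime:
# log(PSD) ~ slope * log(f) + const.
mask = freqs < 0.1
slope, _ = np.polyfit(np.log(freqs[mask]), np.log(psd[mask]), 1)
```

    For a random walk the fitted slope comes out near -2; in the paper's setting the analogous fit, repeated per altitude and solar-activity bin, yields the variance-versus-time-scale model.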

  15. Estimating large carnivore populations at global scale based on spatial predictions of density and distribution – Application to the jaguar (Panthera onca)

    Science.gov (United States)

    Robinson, Hugh S.; Abarca, Maria; Zeller, Katherine A.; Velasquez, Grisel; Paemelaere, Evi A. D.; Goldberg, Joshua F.; Payan, Esteban; Hoogesteijn, Rafael; Boede, Ernesto O.; Schmidt, Krzysztof; Lampo, Margarita; Viloria, Ángel L.; Carreño, Rafael; Robinson, Nathaniel; Lukacs, Paul M.; Nowak, J. Joshua; Salom-Pérez, Roberto; Castañeda, Franklin; Boron, Valeria; Quigley, Howard

    2018-01-01

    Broad-scale population estimates of declining species are desired for conservation efforts. However, for many secretive species, including large carnivores, such estimates are often difficult. Based on published density estimates obtained through camera trapping, presence/absence data, and globally available predictive variables derived from satellite imagery, we modelled density and occurrence of a large carnivore, the jaguar, across the species’ entire range. We then combined these models in a hierarchical framework to estimate the total population. Our models indicate that potential jaguar density is best predicted by measures of primary productivity, with the highest densities in the most productive tropical habitats and a clear declining gradient with distance from the equator. Jaguar distribution, in contrast, is determined by the combined effects of human impacts and environmental factors: probability of jaguar occurrence increased with forest cover, mean temperature, and annual precipitation and declined with increases in human footprint index and human density. Probability of occurrence was also significantly higher for protected areas than outside of them. We estimated the world’s jaguar population at 173,000 (95% CI: 138,000–208,000) individuals, mostly concentrated in the Amazon Basin; elsewhere, populations tend to be small and fragmented. The high number of jaguars results from the large total area still occupied (almost 9 million km²) and low human densities (…) conservation actions. PMID:29579129
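    The hierarchical combination described (predicted density weighted by probability of occurrence) reduces, in its simplest form, to summing area × occurrence probability × density over grid cells. A toy sketch with made-up numbers, not the study's covariate-driven models:

```python
def total_population(cells):
    """cells: iterable of (area_km2, occupancy_probability, density_per_km2)."""
    return sum(area * p_occ * dens for area, p_occ, dens in cells)

# Two hypothetical grid cells: a productive, well-protected cell and a
# fragmented one with lower occupancy and density.
cells = [(1000.0, 0.9, 0.02), (500.0, 0.4, 0.01)]
est = total_population(cells)
```

    In the actual framework each cell's occupancy and density are themselves model predictions with uncertainty, which is how the 95% CI on the total arises.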

  16. Calibration plots for risk prediction models in the presence of competing risks

    DEFF Research Database (Denmark)

    Gerds, Thomas A; Andersen, Per K; Kattan, Michael W

    2014-01-01

    A predicted risk of 17% can be called reliable if it can be expected that the event will occur to about 17 of 100 patients who all received a predicted risk of 17%. Statistical models can predict the absolute risk of an event such as cardiovascular death in the presence of competing risks...... prediction model is well calibrated. The first is lack of independent validation data, the second is right censoring, and the third is that when the risk scale is continuous, the estimation problem is as difficult as density estimation. To deal with these problems, we propose to estimate calibration curves...
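    A calibration curve of the kind discussed groups patients by predicted risk and compares the mean prediction with the observed event frequency in each group. The sketch below ignores right censoring and competing risks, which are precisely the complications the paper addresses, so it illustrates only the uncensored special case:

```python
import numpy as np

def calibration_curve(pred_risk, event, n_bins=10):
    """Pairs of (mean predicted risk, observed event frequency) per risk bin."""
    pred_risk = np.asarray(pred_risk, dtype=float)
    event = np.asarray(event, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.digitize(pred_risk, bins[1:-1])   # bin index for each patient
    pairs = []
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            pairs.append((pred_risk[mask].mean(), event[mask].mean()))
    return pairs
```

    For a well-calibrated model the pairs lie near the diagonal: about 17 of 100 patients given a 17% risk experience the event.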

  17. Low bone mineral density in noncholestatic liver cirrhosis: prevalence, severity and prediction

    Directory of Open Access Journals (Sweden)

    Figueiredo Fátima Aparecida Ferreira

    2003-01-01

    Full Text Available BACKGROUND: Metabolic bone disease has long been associated with cholestatic disorders. However, data in noncholestatic cirrhosis are relatively scant. AIMS: To determine prevalence and severity of low bone mineral density in noncholestatic cirrhosis and to investigate whether age, gender, etiology, severity of underlying liver disease, and/or laboratory tests are predictive of the diagnosis. PATIENTS/METHODS: Between March and September/1998, 89 patients with noncholestatic cirrhosis and 20 healthy controls were enrolled in a cross-sectional study. All subjects underwent standard laboratory tests and bone densitometry at lumbar spine and femoral neck by dual X-ray absorptiometry. RESULTS: Bone mass was significantly reduced at both sites in patients compared to controls. The prevalence of low bone mineral density in noncholestatic cirrhosis, defined by the World Health Organization criteria, was 78% at lumbar spine and 71% at femoral neck. Bone density significantly decreased with age at both sites, especially in patients older than 50 years. Bone density was significantly lower in post-menopausal women patients compared to pre-menopausal and men at both sites. There was no significant difference in bone mineral density among noncholestatic etiologies. Lumbar spine bone density significantly decreased with the progression of liver dysfunction. No biochemical variable was significantly associated with low bone mineral density. CONCLUSIONS: Low bone mineral density is highly prevalent in patients with noncholestatic cirrhosis. Older patients, post-menopausal women and patients with severe hepatic dysfunction experienced more advanced bone disease. The laboratory tests routinely determined in patients with liver disease did not reliably predict low bone mineral density.
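    The World Health Organization criteria cited here classify bone mineral density by T-score, the number of standard deviations below the young-adult reference mean (osteoporosis at T ≤ -2.5, osteopenia between -2.5 and -1). A minimal sketch; the reference mean and SD values in the test are placeholders, not densitometer reference data:

```python
def t_score(bmd, young_adult_mean, young_adult_sd):
    """T-score: standard deviations from the young-adult reference mean."""
    return (bmd - young_adult_mean) / young_adult_sd

def who_category(t):
    """WHO classification of bone mineral density by T-score."""
    if t <= -2.5:
        return "osteoporosis"
    if t < -1.0:
        return "osteopenia"
    return "normal"
```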

  18. Charge and transition densities of samarium isotopes in the interacting Boson model

    International Nuclear Information System (INIS)

    Moinester, M.A.; Alster, J.; Dieperink, A.E.L.

    1982-01-01

    The interacting boson approximation (IBA) model has been used to interpret the ground-state charge distributions and lowest 2⁺ transition charge densities of the even samarium isotopes for A = 144-154. Phenomenological boson transition densities associated with the nucleons comprising the s- and d-bosons of the IBA were determined via a least squares fit analysis of charge and transition densities in the Sm isotopes. The application of these boson transition densities to higher excited 0⁺ and 2⁺ states of Sm, and to 0⁺ and 2⁺ transitions in neighboring nuclei, such as Nd and Gd, is described. IBA predictions for the transition densities of the three lowest 2⁺ levels of ¹⁵⁴Gd are given and compared to theoretical transition densities based on Hartree-Fock calculations. The deduced quadrupole boson transition densities are in fair agreement with densities derived previously from ¹⁵⁰Nd data. It is also shown how certain moments of the best fit boson transition densities can simply and successfully describe rms radii, isomer shifts, B(E2) strengths, and transition radii for the Sm isotopes. (orig.)

  19. Insect density-plant density relationships: a modified view of insect responses to resource concentrations.

    Science.gov (United States)

    Andersson, Petter; Löfstedt, Christer; Hambäck, Peter A

    2013-12-01

    Habitat area is an important predictor of spatial variation in animal densities. However, the area often correlates with the quantity of resources within habitats, complicating our understanding of the factors shaping animal distributions. We addressed this problem by investigating densities of insect herbivores in habitat patches with a constant area but varying numbers of plants. Using a mathematical model, predictions of scale-dependent immigration and emigration rates for insects into patches with different densities of host plants were derived. Moreover, a field experiment was conducted where the scaling properties of odour-mediated attraction in relation to the number of odour sources were estimated, in order to derive a prediction of immigration rates of olfactory searchers. The theoretical model predicted that we should expect immigration rates of contact and visual searchers to be determined by patch area, with a steep scaling coefficient, μ = -1. The field experiment suggested that olfactory searchers should show a less steep scaling coefficient, with μ ≈ -0.5. A parameter estimation and analysis of published data revealed a correspondence between observations and predictions, and density-variation among groups could largely be explained by search behaviour. Aphids showed scaling coefficients corresponding to the prediction for contact/visual searchers, whereas moths, flies and beetles corresponded to the prediction for olfactory searchers. As density responses varied considerably among groups, and variation could be explained by a certain trait, we conclude that a general theory of insect responses to habitat heterogeneity should be based on shared traits, rather than a general prediction for all species.
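    The scaling coefficient μ here is the exponent relating per-patch insect density to the number of host plants, estimated on a log-log scale. A sketch using noise-free synthetic data constructed to follow the olfactory-searcher prediction (μ ≈ -0.5); the constant 5.0 is arbitrary:

```python
import numpy as np

# Synthetic patches: plant number doubles between patches, and density
# follows the olfactory-searcher prediction density ∝ N^(-0.5).
plants = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
density = 5.0 * plants ** -0.5

# Estimate the scaling coefficient mu from the log-log slope.
mu, log_const = np.polyfit(np.log(plants), np.log(density), 1)
```

    With field data the fitted slope carries sampling noise; the paper's point is that the estimate clusters near -1 for contact/visual searchers and near -0.5 for olfactory searchers.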

  20. Theoretical prediction of low-density hexagonal ZnO hollow structures

    Energy Technology Data Exchange (ETDEWEB)

    Tuoc, Vu Ngoc, E-mail: tuoc.vungoc@hust.edu.vn [Institute of Engineering Physics, Hanoi University of Science and Technology, 1 Dai Co Viet Road, Hanoi (Viet Nam); Huan, Tran Doan [Institute of Materials Science, University of Connecticut, Storrs, Connecticut 06269-3136 (United States); Thao, Nguyen Thi [Institute of Engineering Physics, Hanoi University of Science and Technology, 1 Dai Co Viet Road, Hanoi (Viet Nam); Hong Duc University, 307 Le Lai, Thanh Hoa City (Viet Nam); Tuan, Le Manh [Hong Duc University, 307 Le Lai, Thanh Hoa City (Viet Nam)

    2016-10-14

    Along with wurtzite and zinc blende, zinc oxide (ZnO) has been found in a large number of polymorphs with substantially different properties and, hence, applications. Therefore, predicting and synthesizing new classes of ZnO polymorphs are of great significance and have been gaining considerable interest. Herein, we perform a density functional theory based tight-binding study, predicting several new series of ZnO hollow structures using the bottom-up approach. The geometry of the building blocks allows for obtaining a variety of hexagonal, low-density nanoporous, and flexible ZnO hollow structures. Their stability is discussed by means of the free energy computed within the lattice-dynamics approach. Our calculations also indicate that all the reported hollow structures are wide band gap semiconductors in the same fashion as bulk ZnO. The electronic band structures of the ZnO hollow structures are finally examined in detail.

  1. Influence of thermal buoyancy on vertical tube bundle thermal density head predictions under transient conditions

    International Nuclear Information System (INIS)

    Lin, H.C.; Kasza, K.E.

    1984-01-01

    The thermal-hydraulic behavior of an LMFBR system under various types of plant transients is usually studied using one-dimensional (1-D) flow and energy transport models of the system components. Many of the transient events involve the change from a high to a low flow with an accompanying change in temperature of the fluid passing through the components, which can be conducive to significant thermal buoyancy forces. Thermal buoyancy can exert its influence on system dynamic energy transport predictions through alterations of flow and thermal distributions, which in turn can influence decay heat removal, system-response time constants, heat transport between primary and secondary systems, and thermal energy rejection at the reactor heat sink, i.e., the steam generator. In this paper the results from a comparison of a 1-D model prediction and experimental data for vertical tube bundle overall thermal density head and outlet temperature under transient conditions causing varying degrees of thermal buoyancy are presented. These comparisons are being used to generate insight into how, when, and to what degree thermal buoyancy can cause departures from 1-D model predictions.

  2. Propulsion Physics Under the Changing Density Field Model

    Science.gov (United States)

    Robertson, Glen A.

    2011-01-01

    To grow as a space-faring race, future spaceflight systems will require new propulsion physics: specifically, a propulsion physics model that does not require mass ejection, yet does not limit the high thrust necessary to accelerate within or beyond our solar system and return within a normal work period or lifetime. In 2004 Khoury and Weltman produced a density-dependent cosmology theory they called Chameleon Cosmology, as, by its nature, it is hidden within known physics. This theory represents a scalar field within and about an object, even in the vacuum. These scalar fields can be viewed as vacuum energy fields with definable densities that permeate all matter, with implications for dark matter/energy and universe-acceleration properties, implying a new force mechanism for propulsion physics. Using Chameleon Cosmology, the author has developed a new propulsion physics model, called the Changing Density Field (CDF) Model. This model relates changes in these density fields to the acceleration of matter within an object; these density changes in turn change how the object couples to the surrounding density fields. Thrust is thus achieved by causing a differential in the coupling to the density fields about an object. Since the model indicates that the density of the density field in an object can be changed by internal mass acceleration, even without exhausting mass, the CDF model implies a new propellant-less propulsion physics model.

  3. Modeling relaxation length and density of acacia mangium wood using gamma - ray attenuation technique

    International Nuclear Information System (INIS)

    Tamer A Tabet; Fauziah Abdul Aziz

    2009-01-01

    Wood density measurement is related to several factors that influence wood quality. In this paper, the density, relaxation length and half-thickness value of Acacia mangium wood aged 3, 5, 7, 10, 11, 13 and 15 years were determined using gamma radiation from a ¹³⁷Cs source. Results show that the 3 year-old Acacia mangium tree has the highest relaxation length, 83.33 cm, and the lowest density, 0.43 g cm⁻³, while the 15 year-old tree has the smallest relaxation length, 28.56 cm, and the highest density, 0.76 g cm⁻³. Results also show that the 3 year-old wood has the highest half-thickness value, 57.75 cm, and the 15 year-old tree the lowest, 19.85 cm. Two mathematical models have been developed for predicting the variation of density with relaxation length and half-thickness value for trees of different ages. A good agreement (greater than 85% in most cases) was observed between the measured and predicted values. A very good linear correlation was found between measured density and tree age (R² = 0.824), and between estimated density and tree age (R² = 0.952). (Author)
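    The quantities in this abstract follow from the exponential attenuation law I = I₀·exp(-x/λ), where λ is the relaxation length, the half-thickness is λ·ln 2, and density follows if the mass attenuation coefficient of wood at the ¹³⁷Cs energy is known. A sketch (the abstract's own λ = 83.33 cm reproduces its reported half-thickness of 57.75 cm to within rounding):

```python
import math

def relaxation_length(i0, i, thickness):
    """Relaxation length lambda from transmission: I = I0 * exp(-x / lambda)."""
    return thickness / math.log(i0 / i)

def half_thickness(relax_len):
    """Thickness that halves the beam intensity: x_half = lambda * ln(2)."""
    return relax_len * math.log(2.0)

def density_from_relaxation(relax_len, mass_atten_coeff):
    # lambda = 1 / (mu_m * rho)  =>  rho = 1 / (mu_m * lambda)
    return 1.0 / (mass_atten_coeff * relax_len)

x_half = half_thickness(83.33)   # ~57.8 cm, close to the reported 57.75 cm
```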

  4. A new model for predicting moisture uptake by packaged solid pharmaceuticals.

    Science.gov (United States)

    Chen, Y; Li, Y

    2003-04-14

    A novel mathematical model has been developed for predicting moisture uptake by packaged solid pharmaceutical products during storage. High density polyethylene (HDPE) bottles containing the tablet products of two new chemical entities and desiccants are investigated. Permeability of the bottles is determined at different temperatures using steady-state data. Moisture sorption isotherms of the two model drug products and desiccants at the same temperatures are determined and expressed in polynomial equations. The isotherms are used for modeling the time-humidity profile in the container, which enables the prediction of the moisture content of individual component during storage. Predicted moisture contents agree well with real time stability data. The current model could serve as a guide during packaging selection for moisture protection, so as to reduce the cost and cycle time of screening study.
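    The time-stepping logic of such a model can be sketched under deliberately simplified assumptions: linear sorption isotherms (m = k·RH) in place of the paper's polynomial isotherms, a single well-mixed headspace, and a hypothetical permeation constant. All parameter values are illustrative:

```python
def simulate_uptake(perm, rh_out, k_tablet, m_tablet0, k_des, m_des0, dt, steps):
    """Explicit time-stepping of moisture ingress into a bottle holding
    tablets and desiccant, each with a linear isotherm m = k * RH."""
    m_tab, m_des = m_tablet0, m_des0
    for _ in range(steps):
        # Headspace RH in equilibrium with the contents (mean inverse isotherm).
        rh_in = (m_tab / k_tablet + m_des / k_des) / 2.0
        # Moisture permeating the bottle wall this step.
        flux = perm * (rh_out - rh_in) * dt
        # Ingress partitions in proportion to the isotherm slopes.
        m_tab += flux * k_tablet / (k_tablet + k_des)
        m_des += flux * k_des / (k_tablet + k_des)
    return m_tab, m_des

m_tab, m_des = simulate_uptake(perm=0.01, rh_out=0.6, k_tablet=0.02,
                               m_tablet0=0.0, k_des=0.1, m_des0=0.0,
                               dt=1.0, steps=500)
```

    Both contents approach equilibrium with the outside humidity, and the desiccant, having the steeper isotherm, absorbs most of the ingress, which is the protective effect the screening study exploits.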

  5. Measurement and modelling of high pressure density and interfacial tension of (gas + n-alkane) binary mixtures

    International Nuclear Information System (INIS)

    Pereira, Luís M.C.; Chapoy, Antonin; Burgass, Rod; Tohidi, Bahman

    2016-01-01

    Highlights: • (Density + IFT) measurements are performed in synthetic reservoir fluids. • Measured systems include CO₂, CH₄ and N₂ with n-decane. • Novel data are reported for temperatures up to 443 K and pressures up to 69 MPa. • Predictive models are tested in 16 (gas + n-alkane) systems. • Best modelling results are achieved with the Density Gradient Theory. - Abstract: The deployment of more efficient and economical extraction methods and processing facilities for oil and gas requires accurate knowledge of the interfacial tension (IFT) of the fluid phases in contact. In this work, the capillary constant a of binary mixtures containing n-decane and common gases such as carbon dioxide, methane and nitrogen was measured. Experimental measurements were carried out at four temperatures (313, 343, 393 and 442 K) and pressures up to 69 MPa, or near the complete vaporisation of the organic phase into the gas-rich phase. To determine accurate IFT values, the capillary constants were combined with saturated phase density data measured with an Anton Paar densitometer and correlated with a model based on the Peng–Robinson 1978 equation of state (PR78 EoS). Correlated density showed an overall percentage absolute deviation (%AAD) from measured data of (0.2 to 0.5)% for the liquid phase and (1.5 to 2.5)% for the vapour phase of the studied systems and P–T conditions. The predictive capability of models to accurately describe both the temperature and pressure dependence of the saturated phase density and IFT of 16 (gas + n-alkane) binary mixtures was assessed in this work by comparison with data gathered from the literature and measured in this work. The IFT models considered include the Parachor, the Linear Gradient Theory (LGT) and the Density Gradient Theory (DGT) approaches combined with the Volume-Translated Predictive Peng–Robinson 1978 EoS (VT-PPR78 EoS). With no adjustable parameters, the VT-PPR78 EoS allowed a good description of both solubility and
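    Of the three IFT approaches tested, the Parachor (Macleod-Sugden) method is the simplest to sketch: σ^(1/4) = P_ch·(ρ_L - ρ_V) with molar densities. The parachor and density values below are illustrative order-of-magnitude numbers for an n-decane-like liquid, not data from the paper:

```python
def parachor_ift(parachor, rho_liq, rho_vap):
    """Macleod-Sugden correlation: sigma = [P_ch * (rho_L - rho_V)]**4,
    with molar densities in mol/cm3 and sigma in mN/m."""
    return (parachor * (rho_liq - rho_vap)) ** 4

# Illustrative ambient values: liquid molar density ~0.730 g/cm3 divided by
# ~142.3 g/mol gives ~0.00513 mol/cm3; vapour density taken as negligible.
sigma = parachor_ift(433.0, 0.00513, 0.0)   # a few tens of mN/m
```

    The fourth-power dependence on the density difference is why the method predicts IFT vanishing as the phases approach criticality, the regime the gradient-theory models handle more carefully.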

  6. Allometric scaling of population variance with mean body size is predicted from Taylor's law and density-mass allometry.

    Science.gov (United States)

    Cohen, Joel E; Xu, Meng; Schuster, William S F

    2012-09-25

    Two widely tested empirical patterns in ecology are combined here to predict how the variation of population density relates to the average body size of organisms. Taylor's law (TL) asserts that the variance of the population density of a set of populations is a power-law function of the mean population density. Density-mass allometry (DMA) asserts that the mean population density of a set of populations is a power-law function of the mean individual body mass. Combined, DMA and TL predict that the variance of the population density is a power-law function of mean individual body mass. We call this relationship "variance-mass allometry" (VMA). We confirmed the theoretically predicted power-law form and the theoretically predicted parameters of VMA, using detailed data on individual oak trees (Quercus spp.) of Black Rock Forest, Cornwall, New York. These results connect the variability of population density to the mean body mass of individuals.
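    The prediction is a direct composition of the two power laws: if TL gives var = a·mean^b and DMA gives mean = c·mass^d, then var = (a·c^b)·mass^(b·d). A sketch that composes the parameters and checks the identity numerically (parameter values are arbitrary illustrations, not the Black Rock Forest estimates):

```python
def vma_params(a, b, c, d):
    """Taylor's law: var = a * mean**b; density-mass allometry: mean = c * mass**d.
    Their composition is variance-mass allometry: var = A * mass**B."""
    return a * c ** b, b * d

A, B = vma_params(2.0, 2.0, 10.0, -0.75)

# Verify the composition at one body mass.
mass = 16.0
mean = 10.0 * mass ** -0.75        # DMA
var_tl = 2.0 * mean ** 2.0         # TL applied to that mean
var_vma = A * mass ** B            # composed VMA prediction
```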

  7. Bootstrap prediction and Bayesian prediction under misspecified models

    OpenAIRE

    Fushiki, Tadayoshi

    2005-01-01

    We consider a statistical prediction problem under misspecified models. In a sense, Bayesian prediction is an optimal prediction method when an assumed model is true. Bootstrap prediction is obtained by applying Breiman's `bagging' method to a plug-in prediction. Bootstrap prediction can be considered to be an approximation to the Bayesian prediction under the assumption that the model is true. However, in applications, there are frequently deviations from the assumed model. In this paper, bo...

  8. A joint calibration model for combining predictive distributions

    Directory of Open Access Journals (Sweden)

    Patrizia Agati

    2013-05-01

    Full Text Available In many research fields, as for example in probabilistic weather forecasting, valuable predictive information about a future random phenomenon may come from several, possibly heterogeneous, sources. Forecast combining methods have been developed over the years in order to deal with ensembles of sources: the aim is to combine several predictions in such a way as to improve forecast accuracy and reduce the risk of bad forecasts. In this context, we propose the use of a Bayesian approach to information combining, which consists in treating the predictive probability density functions (pdfs) from the individual ensemble members as data in a Bayesian updating problem. The likelihood function is shown to be proportional to the product of the pdfs, adjusted by a joint “calibration function” describing the predicting skill of the sources (Morris, 1977). In this paper, after rephrasing Morris’ algorithm in a predictive context, we propose to model the calibration function in terms of bias, scale and correlation and to estimate its parameters according to the least squares criterion. The performance of our method is investigated and compared with that of Bayesian Model Averaging (Raftery, 2005) on simulated data.
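    When each source issues a Gaussian predictive pdf and the calibration function is taken as trivial (unbiased, correctly scaled, uncorrelated sources), multiplying the pdfs reduces to precision weighting. A minimal sketch of that special case only; the paper's bias/scale/correlation calibration is not modeled here:

```python
def combine_gaussian_forecasts(means, sds):
    """Posterior from a product of independent Gaussian predictive pdfs
    under a flat prior: precision-weighted mean, summed precisions."""
    precisions = [1.0 / s ** 2 for s in sds]
    total = sum(precisions)
    mean = sum(m * p for m, p in zip(means, precisions)) / total
    return mean, total ** -0.5

# Two equally sharp forecasts of, say, tomorrow's temperature:
m, s = combine_gaussian_forecasts([20.0, 24.0], [2.0, 2.0])
```

    The combined pdf is centered between the two forecasts and is sharper than either one, which is exactly what an uncalibrated product of pdfs delivers (and what the calibration function is there to temper).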

  9. Models for Predicting Boundary Conditions in L-Mode Tokamak Plasma

    International Nuclear Information System (INIS)

    Siriwitpreecha, A.; Onjun, T.; Suwanna, S.; Poolyarat, N.; Picha, R.

    2009-07-01

    Full text: The models for predicting temperature and density of ions and electrons at boundary conditions in L-mode tokamak plasma are developed using an empirical approach and optimized against the experimental data obtained from the latest public version of the International Pedestal Database (version 3.2). It is assumed that the temperature and density at the boundary of L-mode plasma are functions of engineering parameters such as plasma current, toroidal magnetic field, total heating power, line averaged density, hydrogenic particle mass (A_H), major radius, minor radius, and elongation at the separatrix. Multiple regression analysis is carried out for these parameters with 86 data points in L-mode from AUG (61) and JT60U (25). The RMSE of the temperature and density at the boundary of L-mode plasma are found to be 24.41% and 18.81%, respectively. These boundary models are implemented in the BALDUR code, which will be used to simulate L-mode plasma in the tokamak.

  10. Detailed physical properties prediction of pure methyl esters for biodiesel combustion modeling

    International Nuclear Information System (INIS)

    An, H.; Yang, W.M.; Maghbouli, A.; Chou, S.K.; Chua, K.J.

    2013-01-01

    Highlights: ► Group contribution methods from molecular level have been used for the prediction. ► Complete prediction of the physical properties for 5 methyl esters has been done. ► The predicted results can be very useful for biodiesel combustion modeling. ► Various models have been compared and the best model has been identified. ► Predicted properties are over large temperature ranges with excellent accuracies. -- Abstract: In order to accurately simulate the fuel spray, atomization, combustion and emission formation processes of a diesel engine fueled with biodiesel, adequate knowledge of biodiesel’s physical properties is desired. The objective of this work is to predict in detail the physical properties of the five major methyl esters of biodiesel for combustion modeling. The physical properties considered in this study are: normal boiling point, critical properties, vapor pressure, latent heat of vaporization, liquid density, liquid viscosity, liquid thermal conductivity, gas diffusion coefficients and surface tension. For each physical property, the best prediction model has been identified, and very good agreements have been obtained between the predicted results and the published data where available. The calculated results can be used as key references for biodiesel combustion modeling.

  11. Applying deep bidirectional LSTM and mixture density network for basketball trajectory prediction

    Science.gov (United States)

    Zhao, Yu; Yang, Rennong; Chevalier, Guillaume; Shah, Rajiv C.; Romijnders, Rob

    2018-04-01

    Data analytics helps basketball teams to create tactics. However, manual data collection and analytics are costly and ineffective. Therefore, we applied a deep bidirectional long short-term memory (BLSTM) and mixture density network (MDN) approach. This model is not only capable of predicting a basketball trajectory based on real data, but it also can generate new trajectory samples. It is an excellent application to help coaches and players decide when and where to shoot. Its structure is particularly suitable for dealing with time series problems. BLSTM receives forward and backward information at the same time, while stacking multiple BLSTMs further increases the learning ability of the model. Combined with BLSTMs, MDN is used to generate a multi-modal distribution of outputs. Thus, the proposed model can, in principle, represent arbitrary conditional probability distributions of output variables. We tested our model with two experiments on three-pointer datasets from NBA SportVu data. In the hit-or-miss classification experiment, the proposed model outperformed other models in terms of the convergence speed and accuracy. In the trajectory generation experiment, eight model-generated trajectories at a given time closely matched real trajectories.
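    An MDN head of the kind described turns network outputs into a mixture distribution: softmax weights plus per-component means and scales. A minimal sketch of evaluating such a mixture density in pure Python, independent of the BLSTM or any deep-learning framework:

```python
import math

def mdn_pdf(y, logits, means, sds):
    """Density at y of a Gaussian mixture whose weights are softmax(logits)."""
    zmax = max(logits)                        # subtract max for stability
    exps = [math.exp(z - zmax) for z in logits]
    total = sum(exps)
    weights = [e / total for e in exps]
    return sum(
        w * math.exp(-0.5 * ((y - m) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))
        for w, m, s in zip(weights, means, sds)
    )
```

    Because the weights, means and scales are all network outputs, the model can represent multi-modal outcomes, such as two plausible shot trajectories at the same instant, which a single-Gaussian output layer cannot.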

  12. PVT characterization and viscosity modeling and prediction of crude oils

    DEFF Research Database (Denmark)

    Cisneros, Eduardo Salvador P.; Dalberg, Anders; Stenby, Erling Halfdan

    2004-01-01

    In previous works, the general, one-parameter friction theory (f-theory) models have been applied to the accurate viscosity modeling of reservoir fluids. As a base, the f-theory approach requires a compositional characterization procedure for the application of an equation of state (EOS), in most...... pressure, is also presented. The combination of the mass characterization scheme presented in this work and the f-theory can also deliver accurate viscosity modeling results. Additionally, depending on how extensive the compositional characterization is, the approach presented in this work may also...... deliver accurate viscosity predictions. The modeling approach presented in this work can deliver accurate viscosity and density modeling and prediction results over wide ranges of reservoir conditions, including the compositional changes induced by recovery processes such as gas injection.

  13. Re-examining Prostate-specific Antigen (PSA) Density: Defining the Optimal PSA Range and Patients for Using PSA Density to Predict Prostate Cancer Using Extended Template Biopsy.

    Science.gov (United States)

    Jue, Joshua S; Barboza, Marcelo Panizzutti; Prakash, Nachiketh S; Venkatramani, Vivek; Sinha, Varsha R; Pavan, Nicola; Nahar, Bruno; Kanabur, Pratik; Ahdoot, Michael; Dong, Yan; Satyanarayana, Ramgopal; Parekh, Dipen J; Punnen, Sanoj

    2017-07-01

    To compare the predictive accuracy of prostate-specific antigen (PSA) density vs PSA across different PSA ranges and by prior biopsy status in a prospective cohort undergoing prostate biopsy. Men from a prospective trial underwent an extended template biopsy to evaluate for prostate cancer at 26 sites throughout the United States. The area under the receiver operating curve assessed the predictive accuracy of PSA density vs PSA across 3 PSA ranges (<4, 4-10, and >10 ng/mL). We also investigated the effect of varying the PSA density cutoffs on the detection of cancer and assessed the performance of PSA density vs PSA in men with or without a prior negative biopsy. Among 1290 patients, 585 (45%) and 284 (22%) men had prostate cancer and significant prostate cancer, respectively. PSA density performed better than PSA in detecting any prostate cancer within a PSA of 4-10 ng/mL (area under the receiver operating characteristic curve [AUC]: 0.70 vs 0.53, P < …) and within a PSA >10 ng/mL (AUC: 0.84 vs 0.65, P < …). PSA density was significantly more predictive than PSA in detecting any prostate cancer in men without (AUC: 0.73 vs 0.67, P < …) … As PSA increases, PSA density becomes a better marker for predicting prostate cancer compared with PSA alone. Additionally, PSA density performed better than PSA in men with a prior negative biopsy. Copyright © 2017 Elsevier Inc. All rights reserved.
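    The AUC figures compared here have a rank interpretation: the probability that a randomly chosen cancer case has a higher marker value (PSA or PSA density) than a randomly chosen non-case, with ties counting one half. A direct sketch of that computation on toy scores:

```python
def auc(scores_pos, scores_neg):
    """Rank-based AUC: P(random positive scores above random negative),
    ties counted as 1/2 (Mann-Whitney form)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

    An AUC of 0.53 (PSA within 4-10 ng/mL) means the marker barely outperforms coin flipping in that range, which is the motivation for normalizing PSA by gland volume.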

  14. Biophysical modelling of intra-ring variations in tracheid features and wood density of Pinus pinaster trees exposed to seasonal droughts.

    Science.gov (United States)

    Wilkinson, Sarah; Ogée, Jérôme; Domec, Jean-Christophe; Rayment, Mark; Wingate, Lisa

    2015-03-01

    Process-based models that link seasonally varying environmental signals to morphological features within tree rings are essential tools to predict tree growth response and commercially important wood quality traits under future climate scenarios. This study evaluated model portrayal of radial growth and wood anatomy observations within a mature maritime pine (Pinus pinaster (L.) Aït.) stand exposed to seasonal droughts. Intra-annual variations in tracheid anatomy and wood density were identified through image analysis and X-ray densitometry on stem cores covering the growth period 1999-2010. A cambial growth model was integrated with modelled plant water status and sugar availability from the soil-plant-atmosphere transfer model MuSICA to generate estimates of cell number, cell volume, cell mass and wood density on a weekly time step. The model successfully predicted inter-annual variations in cell number, ring width and maximum wood density. The model was also able to predict the occurrence of special anatomical features such as intra-annual density fluctuations (IADFs) in growth rings. Since cell wall thickness remained surprisingly constant within and between growth rings, variations in wood density were primarily the result of variations in lumen diameter, both in the model and anatomical data. In the model, changes in plant water status were identified as the main driver of the IADFs through a direct effect on cell volume. The anatomy data also revealed that a trade-off existed between hydraulic safety and hydraulic efficiency. Although a simplified description of cambial physiology is presented, this integrated modelling approach shows potential value for identifying universal patterns of tree-ring growth and anatomical features over a broad climatic gradient. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  15. Prostate specific antigen and acinar density: a new dimension, the "Prostatocrit".

    Science.gov (United States)

    Robinson, Simon; Laniado, Marc; Montgomery, Bruce

    2017-01-01

    Prostate-specific antigen densities have limited success in diagnosing prostate cancer. We emphasise the importance of the peripheral zone when considered with its cellular constituents, the "prostatocrit". Using zonal volumes and asymmetry of glandular acini, we generate a peripheral zone acinar volume and density. With the ratio to the whole gland, we can better predict high grade and all grade cancer. We can model the gland into its acinar and stromal elements. This new "prostatocrit" model could offer more accurate nomograms for biopsy. 674 patients underwent TRUS and biopsy. Whole gland and zonal volumes were recorded. We compared ratio and acinar volumes when added to a "clinic" model using traditional PSA density. Univariate logistic regression was used to find significant predictors for all and high grade cancer. Backwards multiple logistic regression was used to generate ROC curves comparing the new model to conventional density and PSA alone. Prediction of all grades of prostate cancer: univariate analysis revealed four significant "prostatocrit" parameters: log peripheral zone acinar density; peripheral zone acinar volume/whole gland acinar volume; peripheral zone acinar density/whole gland volume; peripheral zone acinar density. Acinar model (AUC 0.774), clinic model (AUC 0.745) (P=0.0105). Prediction of high grade prostate cancer: peripheral zone acinar density ("prostatocrit") was the only significant density predictor. Acinar model (AUC 0.811), clinic model (AUC 0.769) (P=0.0005). There is renewed use for ratio and "prostatocrit" density of the peripheral zone in predicting cancer. This outperforms all traditional density measurements. Copyright © by the International Brazilian Journal of Urology.
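The AUC values quoted above can be understood through the Mann-Whitney identity: the ROC AUC of a score equals the probability that a randomly chosen case scores higher than a randomly chosen control. A minimal sketch with invented scores (not the study's data):

```python
# ROC AUC via the Mann-Whitney statistic: fraction of (case, control) pairs
# in which the case scores higher (ties count one half).

def auc(case_scores, control_scores):
    wins = 0.0
    for c in case_scores:
        for n in control_scores:
            wins += 1.0 if c > n else (0.5 if c == n else 0.0)
    return wins / (len(case_scores) * len(control_scores))

cases = [0.9, 0.8, 0.75, 0.6]       # hypothetical risk scores, cancer
controls = [0.7, 0.5, 0.4, 0.3]     # hypothetical risk scores, no cancer

a = auc(cases, controls)            # 1.0 would mean perfect separation
```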

  16. Predictive Modelling and Time: An Experiment in Temporal Archaeological Predictive Models

    OpenAIRE

    David Ebert

    2006-01-01

    One of the most common criticisms of archaeological predictive modelling is that it fails to account for temporal or functional differences in sites. However, a practical solution to temporal or functional predictive modelling has proven to be elusive. This article discusses temporal predictive modelling, focusing on the difficulties of employing temporal variables, then introduces and tests a simple methodology for the implementation of temporal modelling. The temporal models thus created ar...

  17. Assessment of Nucleation Site Density Models for CFD Simulations of Subcooled Flow Boiling

    International Nuclear Information System (INIS)

    Hoang, N. H.; Chu, I. C.; Euh, D. J.; Song, C. H.

    2015-01-01

    The framework of a CFD simulation of subcooled flow boiling basically includes a block of wall boiling models communicating with the governing equations of a two-phase flow via parameters like temperature, rate of phase change, etc. In the block of wall boiling models, a heat flux partitioning model, which describes how the heat is taken away from a heated surface, is combined with models quantifying boiling parameters, i.e. nucleation site density, and bubble departure diameter and frequency. The nucleation site density is an important parameter for predicting subcooled flow boiling. The number of nucleation sites per unit area decides the influence region of each heat transfer mechanism, and variation of the nucleation site density in turn changes the dynamics of the vapor bubbles formed at these sites. In addition, the nucleation site density is needed as an initial and boundary condition to solve the interfacial area transport equation. Much effort has been devoted to formulating the nucleation site density mathematically, and as a consequence numerous correlations are available in the literature. These correlations differ considerably in both mathematical form and application range. Some correlations of the nucleation site density have been applied successfully to CFD simulations of several specific subcooled boiling flows, but in combination with different correlations of the bubble departure diameter and frequency. In addition, the values of the nucleation site density, and bubble departure diameter and frequency, obtained from simulations of the same problem differ appreciably depending on which models are used, even when global characteristics, e.g., void fraction and mean bubble diameter, agree well with experimental values. Obtaining a good CFD simulation of subcooled flow boiling therefore requires a detailed validation of all the models used. Owing to the importance

  18. Polarizable Density Embedding

    DEFF Research Database (Denmark)

    Reinholdt, Peter; Kongsted, Jacob; Olsen, Jógvan Magnus Haugaard

    2017-01-01

    We analyze the performance of the polarizable density embedding (PDE) model-a new multiscale computational approach designed for prediction and rationalization of general molecular properties of large and complex systems. We showcase how the PDE model very effectively handles the use of large...

  19. Improving Frozen Precipitation Density Estimation in Land Surface Modeling

    Science.gov (United States)

    Sparrow, K.; Fall, G. M.

    2017-12-01

    The Office of Water Prediction (OWP) produces high-value water supply and flood risk planning information through the use of operational land surface modeling. Improvements in diagnosing frozen precipitation density will benefit the NWS's meteorological and hydrological services by refining estimates of a significant and vital input into land surface models. A current common practice for handling the density of snow accumulation in a land surface model is to use a standard 10:1 snow-to-liquid-equivalent ratio (SLR). Our research findings suggest the possibility of a more skillful approach for assessing the spatial variability of precipitation density. We developed a 30-year SLR climatology for the coterminous US from version 3.22 of the Global Historical Climatology Network - Daily (GHCN-D) dataset. Our methods followed the approach described by Baxter (2005) to estimate mean climatological SLR values at GHCN-D sites in the US, Canada, and Mexico for the years 1986-2015. In addition to the Baxter criteria, the following refinements were made: tests were performed to eliminate SLR outliers and frequent reports of SLR = 10, a linear SLR vs. elevation trend was fitted to station SLR mean values to remove the elevation trend from the data, and detrended SLR residuals were interpolated using ordinary kriging with a spherical semivariogram model. The elevation values of each station were based on the GMTED 2010 digital elevation model and the elevation trend in the data was established via linear least squares approximation. The ordinary kriging procedure was used to interpolate the data into gridded climatological SLR estimates for each calendar month at a 0.125 degree resolution. To assess the skill of this climatology, we compared estimates from our SLR climatology with observations from the GHCN-D dataset to consider the potential use of this climatology as a first guess of frozen precipitation density in an operational land surface model. The difference in
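The elevation-detrending step described above can be sketched in a few lines: fit a linear SLR-vs-elevation trend to station means by least squares, then work with the residuals (which the study interpolates by ordinary kriging with a spherical semivariogram; the kriging step is omitted here). Station values below are invented for illustration.

```python
# Minimal least-squares detrend of SLR against station elevation, the step
# preceding the kriging interpolation. Data are made up, not GHCN-D values.
import numpy as np

elev = np.array([120., 300., 650., 900., 1400., 2100.])   # station elevation (m)
slr = np.array([9.8, 10.5, 11.9, 12.6, 14.2, 16.1])        # mean SLR per station

# Fit slr = a * elev + b by linear least squares
A = np.vstack([elev, np.ones_like(elev)]).T
(a, b), *_ = np.linalg.lstsq(A, slr, rcond=None)

residuals = slr - (a * elev + b)
# The residuals are what gets kriged; the fitted trend is added back on the
# output grid using the gridded DEM elevations.
```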

  20. Combined effect of pulse density and grid cell size on predicting and mapping aboveground carbon in fast-growing Eucalyptus forest plantation using airborne LiDAR data.

    Science.gov (United States)

    Silva, Carlos Alberto; Hudak, Andrew Thomas; Klauberg, Carine; Vierling, Lee Alexandre; Gonzalez-Benecke, Carlos; de Padua Chaves Carvalho, Samuel; Rodriguez, Luiz Carlos Estraviz; Cardil, Adrián

    2017-12-01

    LiDAR remote sensing is a rapidly evolving technology for quantifying a variety of forest attributes, including aboveground carbon (AGC). Pulse density influences the acquisition cost of LiDAR, and grid cell size influences AGC prediction using plot-based methods; however, little work has evaluated the effects of LiDAR pulse density and cell size for predicting and mapping AGC in fast-growing Eucalyptus forest plantations. The aim of this study was to evaluate the effect of LiDAR pulse density and grid cell size on AGC prediction accuracy at plot and stand levels using airborne LiDAR and field data. We used the Random Forest (RF) machine learning algorithm to model AGC using LiDAR-derived metrics from LiDAR collections of 5 and 10 pulses m−2 (RF5 and RF10) and grid cell sizes of 5, 10, 15 and 20 m. The results show that a LiDAR pulse density of 5 pulses m−2 provides metrics with similar prediction accuracy for AGC as a dataset with 10 pulses m−2 in these fast-growing plantations. Relative root mean square errors (RMSEs) for RF5 and RF10 were 6.14 and 6.01%, respectively. Equivalence tests showed that the predicted AGC from the training and validation models were equivalent to the observed AGC measurements. Grid cell sizes for mapping ranging from 5 to 20 m also did not significantly affect the prediction accuracy of AGC at stand level in this system. LiDAR measurements can be used to predict and map AGC across variable-age Eucalyptus plantations with adequate levels of precision and accuracy using 5 pulses m−2 and a grid cell size of 5 m. The promising results for AGC modeling in this study will allow for greater confidence in comparing AGC estimates with varying LiDAR sampling densities for Eucalyptus plantations and assist in decision making towards more cost-effective and efficient forest inventory.
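The accuracy metric reported above, relative RMSE, is RMSE expressed as a percentage of the observed mean. A hedged numpy sketch with invented AGC values (not the study's data):

```python
# Relative RMSE of predicted vs. observed AGC, in percent of the observed
# mean. Values below are illustrative only.
import numpy as np

observed = np.array([101.2, 95.4, 110.8, 98.1, 104.5])   # AGC, e.g. Mg ha^-1
predicted = np.array([99.0, 97.1, 108.2, 100.5, 103.0])

rmse = np.sqrt(np.mean((predicted - observed) ** 2))
relative_rmse = 100.0 * rmse / observed.mean()
```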

  1. Uncertainty and Sensitivity of Alternative Rn-222 Flux Density Models Used in Performance Assessment

    International Nuclear Information System (INIS)

    Shott, Greg J.; Yucel, Vefa; Desotell, Lloyd; Pyles, G.; Carilli, Jon

    2007-01-01

    Performance assessments for the Area 5 Radioactive Waste Management Site on the Nevada Test Site have used three different mathematical models to estimate Rn-222 flux density. This study describes the performance, uncertainty, and sensitivity of the three models, which include the U.S. Nuclear Regulatory Commission Regulatory Guide 3.64 analytical method and two numerical methods. The uncertainty of each model was determined by Monte Carlo simulation using Latin hypercube sampling. The global sensitivity was investigated using the Morris one-at-a-time screening method, sample-based correlation and regression methods, the variance-based extended Fourier amplitude sensitivity test, and Sobol's sensitivity indices. The models were found to produce similar estimates of the mean and median flux density, but to have different uncertainties and sensitivities. When the Rn-222 effective diffusion coefficient was estimated using five different published predictive models, the radon flux density models were found to be most sensitive to the effective diffusion coefficient model selected, the emanation coefficient, and the radionuclide inventory. Using a site-specific measured effective diffusion coefficient significantly reduced the output uncertainty. When a site-specific effective diffusion coefficient was used, the models were most sensitive to the emanation coefficient and the radionuclide inventory.
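Latin hypercube sampling, the uncertainty-propagation scheme named above, stratifies each parameter's range into equal-probability bins and draws exactly one sample per bin, then shuffles the strata independently across parameters. A minimal numpy sketch; the parameter names and ranges below are illustrative assumptions, not the site's values.

```python
# Minimal Latin hypercube sampler on the unit cube, then scaled to
# hypothetical physical ranges for two of the inputs mentioned above.
import numpy as np

def latin_hypercube(n_samples, n_params, rng):
    """One stratified sample per equal-probability bin, per parameter."""
    u = (rng.random((n_samples, n_params)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_params):
        u[:, j] = rng.permutation(u[:, j])  # decouple the strata across params
    return u

rng = np.random.default_rng(0)
unit = latin_hypercube(100, 2, rng)

# Scale to assumed physical ranges (illustrative, not site-specific values).
emanation = 0.1 + unit[:, 0] * (0.4 - 0.1)      # emanation coeff., dimensionless
diffusion = 1e-7 + unit[:, 1] * (1e-5 - 1e-7)   # effective diffusion, m^2/s
```

Each column of `unit` hits every one of the 100 strata exactly once, which is what gives LHS better space coverage than plain Monte Carlo at the same sample size.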

  2. Coronary Artery Calcium Volume and Density: Potential Interactions and Overall Predictive Value: The Multi-Ethnic Study of Atherosclerosis.

    Science.gov (United States)

    Criqui, Michael H; Knox, Jessica B; Denenberg, Julie O; Forbang, Nketi I; McClelland, Robyn L; Novotny, Thomas E; Sandfort, Veit; Waalen, Jill; Blaha, Michael J; Allison, Matthew A

    2017-08-01

    This study sought to determine the possibility of interactions between coronary artery calcium (CAC) volume or CAC density with each other, and with age, sex, ethnicity, the new atherosclerotic cardiovascular disease (ASCVD) risk score, diabetes status, and renal function by estimated glomerular filtration rate, and, using differing CAC scores, to determine the improvement over the ASCVD risk score in risk prediction and reclassification. In MESA (Multi-Ethnic Study of Atherosclerosis), CAC volume was positively and CAC density inversely associated with cardiovascular disease (CVD) events. A total of 3,398 MESA participants free of clinical CVD but with prevalent CAC at baseline were followed for incident CVD events. During a median 11.0 years of follow-up, there were 390 CVD events, 264 of which were coronary heart disease (CHD). With each SD increase of ln CAC volume (1.62), risk of CHD increased 73%. In multivariable Cox models, significant interactions were present for CAC volume with age and ASCVD risk score for both CHD and CVD, and CAC density with ASCVD risk score for CVD. Hazard ratios were generally stronger in the lower risk groups. Receiver-operating characteristic area under the curve and Net Reclassification Index analyses showed better prediction by CAC volume than by Agatston, and the addition of CAC density to CAC volume further significantly improved prediction. The inverse association between CAC density and incident CHD and CVD events is robust across strata of other CVD risk factors. Added to the ASCVD risk score, CAC volume and density provided the strongest prediction for CHD and CVD events, and the highest correct reclassification. Copyright © 2017 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  3. Fire spread in chaparral – a comparison of laboratory data and model predictions in burning live fuels

    Science.gov (United States)

    David R. Weise; Eunmo Koo; Xiangyang Zhou; Shankar Mahalingam; Frédéric Morandini; Jacques-Henri Balbi

    2016-01-01

    Fire behaviour data from 240 laboratory fires in high-density live chaparral fuel beds were compared with model predictions. Logistic regression was used to develop a model to predict fire spread success in the fuel beds and linear regression was used to predict rate of spread. Predictions from the Rothermel equation and three proposed changes as well as two physically...

  4. Use of a mixture statistical model in studying malaria vectors density.

    Directory of Open Access Journals (Sweden)

    Olayidé Boussari

    Full Text Available Vector control is a major step in the process of malaria control and elimination. This requires vector counts and appropriate statistical analyses of these counts. However, vector counts are often overdispersed. A non-parametric mixture of Poisson model (NPMP) is proposed to allow for overdispersion and better describe the vector distribution. Mosquito collections using Human Landing Catches as well as collection of environmental and climatic data were carried out from January to December 2009 in 28 villages in Southern Benin. An NPMP regression model with "village" as random effect is used to test statistical correlations between malaria vector density and environmental and climatic factors. Furthermore, the villages were ranked using the latent classes derived from the NPMP model. Based on this classification of the villages, the impacts of four vector control strategies implemented in the villages were compared. Vector counts were highly variable and overdispersed, with an important proportion of zeros (75%). The NPMP model predicted the observed values well and showed that: (i) proximity to a freshwater body, market gardening, and high levels of rain were associated with high vector density; (ii) water conveyance, cattle breeding, and vegetation index were associated with low vector density. The 28 villages could then be ranked according to the mean vector number as estimated by the random part of the model after adjustment on all covariates. The NPMP model made it possible to describe the distribution of the vector across the study area. The villages were ranked according to the mean vector density after taking into account the most important covariates. This study demonstrates the necessity and possibility of adapting methods of vector counting and sampling to each setting.
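The NPMP model itself is nonparametric, but the core idea — that a mixture of Poisson components captures overdispersion and excess zeros that a single Poisson cannot — can be sketched with a simple two-component parametric cousin fitted by EM. The counts below are simulated, not the Benin data.

```python
# EM for a two-component Poisson mixture, as a hedged illustration of how
# mixtures of Poissons accommodate overdispersed, zero-heavy vector counts.
import numpy as np
from math import lgamma

rng = np.random.default_rng(1)
# Simulated nightly vector counts: many near-zero nights, some large catches.
counts = np.concatenate([rng.poisson(0.2, 300), rng.poisson(15.0, 100)])

def poisson_logpmf(k, lam):
    return k * np.log(lam) - lam - np.array([lgamma(x + 1) for x in k])

pi, lam = np.array([0.5, 0.5]), np.array([1.0, 5.0])   # initial guesses
for _ in range(200):
    logp = np.stack([np.log(pi[j]) + poisson_logpmf(counts, lam[j]) for j in range(2)])
    resp = np.exp(logp - logp.max(0))
    resp /= resp.sum(0)                      # E-step: responsibilities
    pi = resp.mean(1)                        # M-step: mixture weights
    lam = (resp * counts).sum(1) / resp.sum(1)   # M-step: component rates

# The fitted rates separate the low- and high-density regimes.
```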

  5. A function accounting for training set size and marker density to model the average accuracy of genomic prediction.

    Science.gov (United States)

    Erbe, Malena; Gredler, Birgit; Seefried, Franz Reinhold; Bapst, Beat; Simianer, Henner

    2013-01-01

    Prediction of genomic breeding values is of major practical relevance in dairy cattle breeding. Deterministic equations have been suggested to predict the accuracy of genomic breeding values in a given design which are based on training set size, reliability of phenotypes, and the number of independent chromosome segments ([Formula: see text]). The aim of our study was to find a general deterministic equation for the average accuracy of genomic breeding values that also accounts for marker density and can be fitted empirically. Two data sets of 5'698 Holstein Friesian bulls genotyped with 50 K SNPs and 1'332 Brown Swiss bulls genotyped with 50 K SNPs and imputed to ∼600 K SNPs were available. Different k-fold (k = 2-10, 15, 20) cross-validation scenarios (50 replicates, random assignment) were performed using a genomic BLUP approach. A maximum likelihood approach was used to estimate the parameters of different prediction equations. The highest likelihood was obtained when using a modified form of the deterministic equation of Daetwyler et al. (2010), augmented by a weighting factor (w) based on the assumption that the maximum achievable accuracy is [Formula: see text]. The proportion of genetic variance captured by the complete SNP sets ([Formula: see text]) was 0.76 to 0.82 for Holstein Friesian and 0.72 to 0.75 for Brown Swiss. When modifying the number of SNPs, w was found to be proportional to the log of the marker density up to a limit which is population and trait specific and was found to be reached with ∼20'000 SNPs in the Brown Swiss population studied.
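The deterministic starting point named above is, to my understanding, the Daetwyler et al. (2010) equation r = sqrt(N h^2 / (N h^2 + Me)), with N the training set size, h^2 the heritability (reliability of phenotypes) and Me the number of independent chromosome segments; the study's marker-density weighting factor w is a fitted modification whose exact form is not reproduced here. A minimal sketch of the base equation:

```python
# Daetwyler-type deterministic accuracy of genomic prediction (sketch).
# n_train: training set size; h2: heritability; m_e: number of independent
# chromosome segments. Parameter values below are illustrative.
import math

def daetwyler_accuracy(n_train, h2, m_e):
    return math.sqrt(n_train * h2 / (n_train * h2 + m_e))

# Accuracy rises with training set size, all else equal.
acc_small = daetwyler_accuracy(n_train=1000, h2=0.3, m_e=1000)
acc_large = daetwyler_accuracy(n_train=5000, h2=0.3, m_e=1000)
```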

  6. A function accounting for training set size and marker density to model the average accuracy of genomic prediction.

    Directory of Open Access Journals (Sweden)

    Malena Erbe

    Full Text Available Prediction of genomic breeding values is of major practical relevance in dairy cattle breeding. Deterministic equations have been suggested to predict the accuracy of genomic breeding values in a given design which are based on training set size, reliability of phenotypes, and the number of independent chromosome segments ([Formula: see text]). The aim of our study was to find a general deterministic equation for the average accuracy of genomic breeding values that also accounts for marker density and can be fitted empirically. Two data sets of 5'698 Holstein Friesian bulls genotyped with 50 K SNPs and 1'332 Brown Swiss bulls genotyped with 50 K SNPs and imputed to ∼600 K SNPs were available. Different k-fold (k = 2-10, 15, 20) cross-validation scenarios (50 replicates, random assignment) were performed using a genomic BLUP approach. A maximum likelihood approach was used to estimate the parameters of different prediction equations. The highest likelihood was obtained when using a modified form of the deterministic equation of Daetwyler et al. (2010), augmented by a weighting factor (w) based on the assumption that the maximum achievable accuracy is [Formula: see text]. The proportion of genetic variance captured by the complete SNP sets ([Formula: see text]) was 0.76 to 0.82 for Holstein Friesian and 0.72 to 0.75 for Brown Swiss. When modifying the number of SNPs, w was found to be proportional to the log of the marker density up to a limit which is population and trait specific and was found to be reached with ∼20'000 SNPs in the Brown Swiss population studied.

  7. Prediction-error variance in Bayesian model updating: a comparative study

    Science.gov (United States)

    Asadollahi, Parisa; Li, Jian; Huang, Yong

    2017-04-01

    In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum information entropy probability model of the prediction error variances plays an important role; it is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. This choice is therefore critical for the robustness of the structural model updating, especially in the presence of modeling errors. To date, three ways of considering prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of different strategies for dealing with the prediction error variances on the model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structural model parameters as well as the uncertain prediction variances. The different levels of modeling uncertainty and complexity are modeled through three FE models, including a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on the model updating performance is also examined in the study. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model
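A hedged toy version of the third strategy above: treat the prediction-error variance as an uncertain parameter and sample it jointly with a model parameter. The paper uses Transitional MCMC on a six-story shear building; plain Metropolis-Hastings on a one-parameter model with Gaussian noise of unknown variance suffices to illustrate the idea. All data here are synthetic.

```python
# Joint MH sampling of a model parameter (theta) and the log of the
# prediction-error variance, with flat priors, for data = theta + noise.
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(3.0, 0.5, size=50)            # synthetic measurements

def log_post(theta, log_var):
    var = np.exp(log_var)                        # sample log-variance: stays positive
    return -0.5 * np.sum((data - theta) ** 2) / var - 0.5 * len(data) * np.log(var)

theta, log_var = 0.0, 0.0
lp = log_post(theta, log_var)
chain = []
for _ in range(5000):
    t_new = theta + rng.normal(0, 0.1)           # random-walk proposals
    lv_new = log_var + rng.normal(0, 0.2)
    lp_new = log_post(t_new, lv_new)
    if np.log(rng.random()) < lp_new - lp:       # Metropolis accept/reject
        theta, log_var, lp = t_new, lv_new, lp_new
    chain.append((theta, log_var))

post = np.array(chain[1000:])                    # discard burn-in
```

The posterior concentrates near the data mean for theta and near the sample variance for the noise variance, without the variance being fixed a priori.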

  8. Combining GPS measurements and IRI model predictions

    International Nuclear Information System (INIS)

    Hernandez-Pajares, M.; Juan, J.M.; Sanz, J.; Bilitza, D.

    2002-01-01

    The free electrons distributed in the ionosphere (between one hundred and thousands of km in height) produce a frequency-dependent effect on Global Positioning System (GPS) signals: a delay in the pseudo-range and an advance in the carrier phase. These effects are proportional to the columnar electron density between the satellite and receiver, i.e. the integrated electron density along the ray path. Global ionospheric TEC (total electron content) maps can be obtained with GPS data from a network of ground IGS (International GPS Service) reference stations with an accuracy of a few TEC units. The comparison with the TOPEX TEC, mainly measured over the oceans far from the IGS stations, shows a mean bias and standard deviation of about 2 and 5 TECUs respectively. The discrepancies between the STEC predictions and the observed values show an RMS typically below 5 TECUs (which also includes the alignment code noise). The existence of a growing database of 2-hourly global TEC maps with a resolution of 5×2.5 degrees in longitude and latitude can be used to improve the IRI prediction capability of the TEC. When the IRI predictions and the GPS estimations are compared for a three-month period around the solar maximum, they are in good agreement for middle latitudes. An overestimation of the IRI TEC has been found at the extreme latitudes, the IRI predictions being typically two times higher than the GPS estimations. Finally, local fits of the IRI model can be done by tuning the SSN from STEC GPS observations.
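The frequency-dependent effect described above has a standard first-order form: the group delay in metres is 40.3 × TEC / f², with TEC in electrons/m² and f in Hz, and the carrier phase is advanced by the same amount. A minimal sketch:

```python
# First-order ionospheric group delay on a GPS signal. 1 TECU = 1e16 el/m^2.

def iono_delay_m(tec_units, freq_hz):
    tec = tec_units * 1e16            # convert TECU to electrons/m^2
    return 40.3 * tec / freq_hz ** 2  # delay in metres

L1 = 1575.42e6                        # GPS L1 frequency, Hz
L2 = 1227.60e6                        # GPS L2 frequency, Hz

d1 = iono_delay_m(10.0, L1)           # ~1.6 m for 10 TECU on L1
d2 = iono_delay_m(10.0, L2)           # larger on the lower frequency
```

The f⁻² dependence is precisely what lets dual-frequency receivers solve for TEC from the L1/L2 delay difference.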

  9. Social Inclusion Predicts Lower Blood Glucose and Low-Density Lipoproteins in Healthy Adults.

    Science.gov (United States)

    Floyd, Kory; Veksler, Alice E; McEwan, Bree; Hesse, Colin; Boren, Justin P; Dinsmore, Dana R; Pavlich, Corey A

    2017-08-01

    Loneliness has been shown to have direct effects on one's personal well-being. Specifically, a greater feeling of loneliness is associated with negative mental health outcomes, negative health behaviors, and an increased likelihood of premature mortality. Using the neuroendocrine hypothesis, we expected social inclusion to predict decreases in both blood glucose levels and low-density lipoproteins (LDLs) and increases in high-density lipoproteins (HDLs). Fifty-two healthy adults provided self-report data for social inclusion and blood samples for hematological tests. Results indicated that higher social inclusion predicted lower levels of blood glucose and LDL, but had no effect on HDL. Implications for theory and practice are discussed.

  10. Predicting soil particle density from clay and soil organic matter contents

    DEFF Research Database (Denmark)

    Schjønning, Per; McBride, R.A.; Keller, T.

    2017-01-01

    Soil particle density (Dp) is an important soil property for calculating soil porosity expressions. However, many studies assume a constant value, typically 2.65 Mg m−3 for arable, mineral soils. Few models exist for the prediction of Dp from soil organic matter (SOM) content. We hypothesized...

  11. Predictive power of theoretical modelling of the nuclear mean field: examples of improving predictive capacities

    Science.gov (United States)

    Dedes, I.; Dudek, J.

    2018-03-01

    We examine the effects of the parametric correlations on the predictive capacities of the theoretical modelling keeping in mind the nuclear structure applications. The main purpose of this work is to illustrate the method of establishing the presence and determining the form of parametric correlations within a model as well as an algorithm of elimination by substitution (see text) of parametric correlations. We examine the effects of eliminating the parametric correlations on the stabilisation of the model predictions further and further away from the fitting zone. It follows that the choice of the physics case and the selection of the associated model are of secondary importance in this case. Under these circumstances we give priority to the relative simplicity of the underlying mathematical algorithm, provided the model is realistic. Following such criteria, we focus specifically on an important but relatively simple case of doubly magic spherical nuclei. To profit from the algorithmic simplicity we chose working with the phenomenological spherically symmetric Woods–Saxon mean field. We employ two variants of the underlying Hamiltonian, the traditional one involving both the central and the spin–orbit potential in the Woods–Saxon form and the more advanced version with the self-consistent density-dependent spin–orbit interaction. We compare the effects of eliminating various types of correlations and discuss the improvement of the quality of predictions ('predictive power') under realistic parameter adjustment conditions.
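The Woods-Saxon central potential named above has the standard form V(r) = -V0 / (1 + exp((r - R)/a)) with R = r0 A^(1/3). A minimal sketch; the numerical parameters below are typical textbook values, not the fitted parameters of the paper:

```python
# Woods-Saxon central potential (MeV) at radius r (fm) for mass number A.
# V0, r0, a here are common illustrative values, not the study's fit.
import math

def woods_saxon(r_fm, A, V0=50.0, r0=1.25, a=0.65):
    R = r0 * A ** (1.0 / 3.0)         # nuclear radius, fm
    return -V0 / (1.0 + math.exp((r_fm - R) / a))

# Near the centre the potential is close to -V0; at r = R it is exactly -V0/2,
# and it falls off over a surface diffuseness of order a.
center = woods_saxon(0.0, A=208)
surface = woods_saxon(1.25 * 208 ** (1.0 / 3.0), A=208)
```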

  12. Evaluation of the Troxler Model 4640 Thin Lift Nuclear Density Gauge. Research report (Interim)

    International Nuclear Information System (INIS)

    Solaimanian, M.; Holmgreen, R.J.; Kennedy, T.W.

    1990-07-01

    The report describes the results of a research study to determine the effectiveness of the Troxler Model 4640 Thin Lift Nuclear Density Gauge. The densities obtained from cores and the nuclear density gauge from seven construction projects were compared. The projects were either newly constructed or under construction when the tests were performed. A linear regression technique was used to investigate how well the core densities could be predicted from nuclear densities. Correlation coefficients were determined to indicate the degree of correlation between the core and nuclear densities. Using a statistical analysis technique, the range of the mean difference between core and nuclear measurements was established for specified confidence levels for each project. Analysis of the data indicated that the accuracy of the gauge is material dependent. While relatively acceptable results were obtained with limestone mixtures, the gauge did not perform satisfactorily with mixtures containing siliceous aggregate
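The core-vs-nuclear comparison described above reduces to a simple linear regression with a correlation coefficient and a mean difference (bias). A hedged numpy sketch with invented density pairs, not the study's data:

```python
# Regress core densities on nuclear gauge readings; report slope, correlation
# and mean difference. Values are illustrative only.
import numpy as np

nuclear = np.array([2205., 2250., 2310., 2190., 2275., 2330.])  # kg/m^3
core = np.array([2220., 2245., 2325., 2200., 2260., 2345.])     # kg/m^3

slope, intercept = np.polyfit(nuclear, core, 1)
r = np.corrcoef(nuclear, core)[0, 1]        # degree of correlation
mean_diff = np.mean(core - nuclear)         # bias bounded per project in the study
```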

  13. Large urban fire environment: trends and model city predictions

    International Nuclear Information System (INIS)

    Larson, D.A.; Small, R.D.

    1983-01-01

    The urban fire environment that would result from a megaton-yield nuclear weapon burst is considered. The dependence of temperatures and velocities on fire size, burning intensity, turbulence, and radiation is explored, and specific calculations for three model urban areas are presented. In all cases, high velocity fire winds are predicted. The model-city results show the influence of building density and urban sprawl on the fire environment. Additional calculations consider large-area fires with the burning intensity reduced in a blast-damaged urban center

  14. Sleep Spindle Density Predicts the Effect of Prior Knowledge on Memory Consolidation

    Science.gov (United States)

    Lambon Ralph, Matthew A.; Kempkes, Marleen; Cousins, James N.; Lewis, Penelope A.

    2016-01-01

    Information that relates to a prior knowledge schema is remembered better and consolidates more rapidly than information that does not. Another factor that influences memory consolidation is sleep and growing evidence suggests that sleep-related processing is important for integration with existing knowledge. Here, we perform an examination of how sleep-related mechanisms interact with schema-dependent memory advantage. Participants first established a schema over 2 weeks. Next, they encoded new facts, which were either related to the schema or completely unrelated. After a 24 h retention interval, including a night of sleep, which we monitored with polysomnography, participants encoded a second set of facts. Finally, memory for all facts was tested in a functional magnetic resonance imaging scanner. Behaviorally, sleep spindle density predicted an increase of the schema benefit to memory across the retention interval. Higher spindle densities were associated with reduced decay of schema-related memories. Functionally, spindle density predicted increased disengagement of the hippocampus across 24 h for schema-related memories only. Together, these results suggest that sleep spindle activity is associated with the effect of prior knowledge on memory consolidation. SIGNIFICANCE STATEMENT Episodic memories are gradually assimilated into long-term memory and this process is strongly influenced by sleep. The consolidation of new information is also influenced by its relationship to existing knowledge structures, or schemas, but the role of sleep in such schema-related consolidation is unknown. We show that sleep spindle density predicts the extent to which schemas influence the consolidation of related facts. This is the first evidence that sleep is associated with the interaction between prior knowledge and long-term memory formation. PMID:27030764

  15. Toxicity prediction of ionic liquids based on Daphnia magna by using density functional theory

    Science.gov (United States)

    Nu’aim, M. N.; Bustam, M. A.

    2018-04-01

    The toxicity of ionic liquids can be predicted using density functional theory (DFT), a theory that gives researchers a substantial tool for computing the quantum state of atoms, molecules and solids, and for molecular dynamics, also known as a computer simulation method. This is done using structure-based quantum chemical reactivity descriptors. The ionic liquids and their Log[EC50] data are taken from the literature available in Ismail Hossain's thesis entitled "Synthesis, Characterization and Quantitative Structure Toxicity Relationship of Imidazolium, Pyridinium and Ammonium Based Ionic Liquids". Each cation and anion of the ionic liquids was optimized and calculated. The geometry optimization and calculation produce the values of the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO). From the HOMO and LUMO values, the values of the other toxicity descriptors were obtained according to their formulas. The toxicity descriptors involved are the electrophilicity index, HOMO, LUMO, energy gap, chemical potential, hardness and electronegativity. The interrelation between the descriptors was determined using multiple linear regression (MLR). From this MLR, all descriptors were analyzed and the significant ones were chosen. To develop the finest model equation for toxicity prediction of ionic liquids, the selected significant descriptors were used. The model equation was validated against the Log[EC50] data from the literature and the final model equation was developed. A larger set of nearly 108 ionic liquids can be predicted with this model equation.
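The descriptors listed above follow standard conceptual-DFT definitions from the frontier orbital energies (a Koopmans-type approximation): chemical potential μ = (E_HOMO + E_LUMO)/2, hardness η = (E_LUMO − E_HOMO)/2, electronegativity χ = −μ, and electrophilicity index ω = μ²/(2η). The HOMO/LUMO values below are illustrative, not the thesis data.

```python
# Conceptual-DFT reactivity descriptors from HOMO/LUMO energies (eV).

def reactivity_descriptors(e_homo, e_lumo):
    gap = e_lumo - e_homo              # energy gap
    mu = (e_homo + e_lumo) / 2.0       # chemical potential
    eta = (e_lumo - e_homo) / 2.0      # hardness
    chi = -mu                          # electronegativity
    omega = mu ** 2 / (2.0 * eta)      # electrophilicity index
    return {"gap": gap, "mu": mu, "eta": eta, "chi": chi, "omega": omega}

d = reactivity_descriptors(e_homo=-9.0, e_lumo=-1.0)   # illustrative energies
```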

  16. Prediction of bending moment resistance of screw-connected joints in plywood members using regression models and comparison with commercial medium density fiberboard (MDF) and particleboard

    Directory of Open Access Journals (Sweden)

    Sadegh Maleki

    2014-11-01

    The study aimed at predicting the bending moment resistance of screwed joints (coarse and fine thread) in plywood using regression models. The member thickness was 19 mm, and results were compared with medium density fiberboard (MDF) and particleboard of 18 mm thickness. Two types of screws were used: coarse and fine thread drywall screws with nominal diameters of 6, 8, and 10 mm and lengths of 3.5, 4, and 5 cm, respectively, and sheet metal screws with diameters of 8 and 10 mm and a length of 4 cm. The results showed that the bending moment resistance of a screwed joint increased with screw diameter and penetration depth. Screw length had a larger influence on bending moment resistance than screw diameter. Bending moment resistance with coarse thread drywall screws was higher than with fine thread drywall screws. The highest bending moment resistance (71.76 N.m) was observed in joints made with coarse screws 5 mm in diameter at 28 mm depth of penetration; the lowest (12.08 N.m) was observed in joints with fine screws of 3.5 mm diameter and 9 mm penetration. Furthermore, bending moment resistance in plywood was higher than in medium density fiberboard (MDF) and particleboard. Finally, it was found that the ultimate bending moment resistance of plywood joints can be predicted by the formulas Wc = 0.189×D^0.726×P^0.577 for coarse thread drywall screws and Wf = 0.086×D^0.942×P^0.704 for fine ones, as functions of diameter and penetration depth. Analysis of variance of the experimental and predicted data showed that the developed models provide a fair approximation of actual experimental measurements.
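The two fitted power-law equations reported in the abstract can be written directly as functions of screw diameter D and penetration depth P (coefficients exactly as given above; units as used by the authors):

```python
# The regression equations from the abstract, W = a * D^b * P^c,
# with D the screw diameter and P the penetration depth.

def bending_moment_coarse(D, P):
    """Predicted ultimate bending moment for coarse-thread drywall screws."""
    return 0.189 * D**0.726 * P**0.577

def bending_moment_fine(D, P):
    """Predicted ultimate bending moment for fine-thread drywall screws."""
    return 0.086 * D**0.942 * P**0.704
```

Both equations are monotonically increasing in D and P, and the coarse-thread equation predicts higher resistance than the fine-thread one at equal geometry, consistent with the experimental ranking reported above.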

  17. A mathematical model of the maximum power density attainable in an alkaline hydrogen/oxygen fuel cell

    Science.gov (United States)

    Kimble, Michael C.; White, Ralph E.

    1991-01-01

    A mathematical model of a hydrogen/oxygen alkaline fuel cell is presented that can be used to predict the polarization behavior under various power loads. The major limitations to achieving high power densities are indicated and methods to increase the maximum attainable power density are suggested. The alkaline fuel cell model describes the phenomena occurring in the solid, liquid, and gaseous phases of the anode, separator, and cathode regions based on porous electrode theory applied to three phases. Fundamental equations of chemical engineering that describe conservation of mass and charge, species transport, and kinetic phenomena are used to develop the model by treating all phases as a homogeneous continuum.

  18. Improved time series prediction with a new method for selection of model parameters

    International Nuclear Information System (INIS)

    Jade, A M; Jayaraman, V K; Kulkarni, B D

    2006-01-01

    A new method for model selection in the prediction of time series is proposed. Apart from the conventional criterion of minimizing the RMS error, the method also minimizes the error on the distribution of singularities, evaluated through the local Hoelder estimates and their probability density spectrum. Predictions of two simulated and one real time series have been made using kernel principal component regression (KPCR), and the model parameters of KPCR have been selected employing the proposed as well as the conventional method. The results obtained demonstrate that the proposed method takes into account the sharp changes in a time series and improves the generalization capability of the KPCR model for better prediction of the unseen test data. (letter to the editor)

  19. Empirical models for end-use properties prediction of LDPE: application in the flexible plastic packaging industry

    Directory of Open Access Journals (Sweden)

    Maria Carolina Burgos Costa

    2008-03-01

    The objective of this work is to develop empirical models to predict end-use properties of low density polyethylene (LDPE) resins as functions of two intrinsic properties easily measured in the polymer industry. The most important properties for application in the flexible plastic packaging industry were evaluated experimentally for seven commercial polymer grades. Statistical correlation analysis was performed for all variables and used as the basis for a proper choice of inputs to each model output. The intrinsic properties selected for resin characterization are the fluidity index (FI), which is essentially an indirect measurement of viscosity and of weight average molecular weight (MW), and density. In general, the models developed are able to reproduce and predict experimental data within experimental accuracy and show that a significant number of end-use properties improve as the MW and density increase. Optical properties are mainly determined by the polymer morphology.

  20. Uncertainty and Sensitivity of Alternative Rn-222 Flux Density Models Used in Performance Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Greg J. Shott, Vefa Yucel, Lloyd Desotell

    2007-06-01

    Performance assessments for the Area 5 Radioactive Waste Management Site on the Nevada Test Site have used three different mathematical models to estimate Rn-222 flux density. This study describes the performance, uncertainty, and sensitivity of the three models, which include the U.S. Nuclear Regulatory Commission Regulatory Guide 3.64 analytical method and two numerical methods. The uncertainty of each model was determined by Monte Carlo simulation using Latin hypercube sampling. The global sensitivity was investigated using the Morris one-at-a-time screening method, sample-based correlation and regression methods, the variance-based extended Fourier amplitude sensitivity test, and Sobol's sensitivity indices. The models were found to produce similar estimates of the mean and median flux density, but to have different uncertainties and sensitivities. When the Rn-222 effective diffusion coefficient was estimated using five different published predictive models, the radon flux density models were found to be most sensitive to the effective diffusion coefficient model selected, the emanation coefficient, and the radionuclide inventory. Using a site-specific measured effective diffusion coefficient significantly reduced the output uncertainty; when a site-specific effective diffusion coefficient was used, the models were most sensitive to the emanation coefficient and the radionuclide inventory.
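Latin hypercube sampling, as used for the Monte Carlo uncertainty runs described above, can be sketched in a few lines: each input parameter's [0, 1) range is split into n equal-probability strata, one point is drawn per stratum, and the strata are shuffled independently per dimension. A minimal stdlib-only version:

```python
import random

def latin_hypercube(n, dims, rng=random.Random(0)):
    """n stratified samples over the unit hypercube of dimension `dims`."""
    cols = []
    for _ in range(dims):
        # One point inside each of the n strata of [0, 1), then shuffle.
        col = [(i + rng.random()) / n for i in range(n)]
        rng.shuffle(col)
        cols.append(col)
    # Transpose to a list of n points, each with `dims` coordinates.
    return list(map(list, zip(*cols)))

# E.g. three uncertain inputs (diffusion coefficient, emanation coefficient,
# inventory), to be mapped onto their distributions before running the model:
points = latin_hypercube(100, 3)
```

In practice each unit-interval coordinate would then be transformed through the inverse CDF of the corresponding input distribution before evaluating the flux density model.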

  1. Evaluation of Presumed Probability-Density-Function Models in Non-Premixed Flames by using Large Eddy Simulation

    International Nuclear Information System (INIS)

    Cao Hong-Jun; Zhang Hui-Qiang; Lin Wen-Yi

    2012-01-01

    Four kinds of presumed probability-density-function (PDF) models for non-premixed turbulent combustion are evaluated in flames with various stoichiometric mixture fractions by using large eddy simulation (LES). The LES code is validated against the experimental data of a classical turbulent jet flame (Sandia flame D). The mean and rms temperatures obtained by the presumed PDF models are compared with the LES results. The β-function model achieves a good prediction for different flames. The rms temperature predicted by the double-δ function model is very small and unphysical in the vicinity of the maximum mean temperature. The clip-Gaussian model and the multi-δ function model give worse predictions on the extremely fuel-rich or fuel-lean sides due to clipping at the boundary of the mixture fraction space. The results also show that the overall prediction performance of the presumed PDF models is better at intermediate stoichiometric mixture fractions than at very small or very large ones.
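The β-function model that performs best above presumes a beta PDF for the mixture fraction Z, with shape parameters fixed by the resolved mean and variance. A minimal sketch (standard beta-PDF closure, not the paper's LES implementation; the quadrature and parameter values are illustrative):

```python
import math

def beta_pdf(z, Zm, Zv):
    """Presumed beta PDF of mixture fraction with mean Zm and variance Zv
    (requires 0 < Zv < Zm*(1 - Zm))."""
    g = Zm * (1.0 - Zm) / Zv - 1.0
    a, b = Zm * g, (1.0 - Zm) * g
    norm = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return norm * z**(a - 1.0) * (1.0 - z)**(b - 1.0)

def filtered(f, Zm, Zv, n=2000):
    """Filtered value of a scalar f(Z) (e.g. temperature) by midpoint quadrature."""
    dz = 1.0 / n
    return sum(f((i + 0.5) * dz) * beta_pdf((i + 0.5) * dz, Zm, Zv)
               for i in range(n)) * dz
```

Given a laminar flamelet relation T(Z), `filtered(T, Zm, Zv)` returns the subgrid-averaged temperature used in the LES.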

  2. How can we model selectively neutral density dependence in evolutionary games?

    Science.gov (United States)

    Argasinski, Krzysztof; Kozłowski, Jan

    2008-03-01

    The problem of density dependence appears in all approaches to the modelling of population dynamics. It is pertinent to classic models (i.e., Lotka-Volterra's), and also to population genetics and game theoretical models related to the replicator dynamics. There is no density dependence in the classic formulation of replicator dynamics, which means that population size may grow to infinity. Therefore the question arises: how is unlimited population growth suppressed in frequency-dependent models? Two categories of solutions can be found in the literature. In the first, replicator dynamics is independent of background fitness. In the second type of solution, a multiplicative suppression coefficient is used, as in a logistic equation. Both approaches have disadvantages. The first is incompatible with the methods of life history theory and basic probabilistic intuitions. The logistic type of suppression of the per capita growth rate stops the trajectories of selection when population size reaches the maximal value (carrying capacity); hence this method does not satisfy selective neutrality. To overcome these difficulties, we must explicitly consider the turn-over of individuals, which depends on the mortality rate. This new approach leads to two interesting predictions. First, the equilibrium population size is lower than the carrying capacity and depends on the mortality rate. Second, although the phase portrait of the selection trajectories is the same as in density-independent replicator dynamics, the pace of selection slows down when population size approaches equilibrium, and then remains constant and dependent on the rate of turn-over of individuals.
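The first prediction above (equilibrium below carrying capacity, set by the mortality rate) can be illustrated with a minimal one-population sketch. This is not the authors' exact system, only the qualitative mechanism under an assumed form: birth rate suppressed by crowding plus explicit per-capita mortality d, so dn/dt = n·(b·(1 − n/K) − d), giving n* = K·(1 − d/b) < K.

```python
# Minimal numerical sketch (assumed equations, not the paper's model) of a
# population with crowding-limited births and explicit mortality d:
#   dn/dt = n * (b*(1 - n/K) - d)  =>  equilibrium n* = K*(1 - d/b) < K.

def simulate(b=1.0, d=0.2, K=1000.0, n0=10.0, dt=0.01, steps=5000):
    n = n0
    for _ in range(steps):
        n += n * (b * (1.0 - n / K) - d) * dt  # explicit Euler step
    return n

n_eq = simulate()  # approaches K*(1 - d/b) = 800, below K = 1000
```

Raising the mortality rate d lowers the equilibrium further, which is exactly the dependence on turn-over the abstract describes.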

  3. Early experiences building a software quality prediction model

    Science.gov (United States)

    Agresti, W. W.; Evanco, W. M.; Smith, M. C.

    1990-01-01

    Early experiences building a software quality prediction model are discussed. The overall research objective is to establish a capability to project a software system's quality from an analysis of its design. The technical approach is to build multivariate models for estimating reliability and maintainability. Data from 21 Ada subsystems were analyzed to test hypotheses about various design structures leading to failure-prone or unmaintainable systems. Current design variables highlight the interconnectivity and visibility of compilation units. Other model variables provide for the effects of reusability and software changes. Reported results are preliminary because additional project data is being obtained and new hypotheses are being developed and tested. Current multivariate regression models are encouraging, explaining 60 to 80 percent of the variation in error density of the subsystems.

  4. Adsorption of CH4 on nitrogen- and boron-containing carbon models of coal predicted by density-functional theory

    Science.gov (United States)

    Liu, Xiao-Qiang; Xue, Ying; Tian, Zhi-Yue; Mo, Jing-Jing; Qiu, Nian-Xiang; Chu, Wei; Xie, He-Ping

    2013-11-01

    Graphene doped with nitrogen (N) and/or boron (B) is used to represent surface models of coal with structural heterogeneity. Through density functional theory (DFT) calculations, the interactions between coalbed methane (CBM) and coal surfaces have been investigated. Several adsorption sites and orientations of methane (CH4) on the graphenes were systematically considered. Our calculations predicted adsorption energies of CH4 on the graphenes of up to -0.179 eV, with the strongest binding mode, in which three hydrogen atoms of CH4 point toward the graphene surface, observed for N-doped graphene, compared to the perfect (-0.154 eV), B-doped (-0.150 eV), and NB-doped graphenes (-0.170 eV). Doping graphene with N increases the adsorption energy of CH4, but slightly reduced binding is found when graphene is doped with B. Our results indicate that all of the graphenes act as weak electron acceptors with respect to CH4. The interactions between CH4 and the graphenes are physical adsorption and depend slightly on the adsorption sites on the graphenes, the orientation of methane, and the electronegativity of the dopant atoms in graphene.

  5. Linking removal targets to the ecological effects of invaders: a predictive model and field test.

    Science.gov (United States)

    Green, Stephanie J; Dulvy, Nicholas K; Brooks, Annabelle M L; Akins, John L; Cooper, Andrew B; Miller, Skylar; Côté, Isabelle M

    Species invasions have a range of negative effects on recipient ecosystems, and many occur at a scale and magnitude that preclude complete eradication. When complete extirpation is unlikely with available management resources, an effective strategy may be to suppress invasive populations below levels predicted to cause undesirable ecological change. We illustrated this approach by developing and testing targets for the control of invasive Indo-Pacific lionfish (Pterois volitans and P. miles) on Western Atlantic coral reefs. We first developed a size-structured simulation model of predation by lionfish on native fish communities, which we used to predict threshold densities of lionfish beyond which native fish biomass should decline. We then tested our predictions by experimentally manipulating lionfish densities above or below reef-specific thresholds, and monitoring the consequences for native fish populations on 24 Bahamian patch reefs over 18 months. We found that reducing lionfish below predicted threshold densities effectively protected native fish community biomass from predation-induced declines. Reductions in density of 25-92%, depending on the reef, were required to suppress lionfish below levels predicted to overconsume prey. On reefs where lionfish were kept below threshold densities, native prey fish biomass increased by 50-70%. Gains in small (<15 cm total length) fishes, including ecologically important grazers and economically important fisheries species, had increased by 10-65% by the end of the experiment. Crucially, similar gains in prey fish biomass were realized on reefs subjected to partial and full removal of lionfish, but partial removals took 30% less time to implement. By contrast, the biomass of small native fishes declined by >50% on all reefs with lionfish densities exceeding reef-specific thresholds. Large inter-reef variation in the biomass of prey fishes at the outset of the study, which influences the threshold density of lionfish
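The threshold logic described above can be sketched as a back-of-envelope balance (the actual study used a size-structured simulation): prey biomass declines once total lionfish consumption exceeds prey biomass production, so the reef-specific threshold density is where the two rates balance. All parameter values below are invented placeholders.

```python
# Hedged sketch of the consumption/production balance behind the reef-specific
# thresholds; rates are per unit reef area, in consistent units.

def threshold_density(prey_biomass, prey_prod_rate, consumption_per_lionfish):
    """Lionfish density at which total consumption equals prey production."""
    return prey_biomass * prey_prod_rate / consumption_per_lionfish

def prey_trend(lionfish_density, prey_biomass, prey_prod_rate, cons):
    """Net prey biomass rate: positive below the threshold, negative above."""
    return prey_biomass * prey_prod_rate - lionfish_density * cons
```

Because the threshold scales with standing prey biomass, reefs that start with more prey tolerate more lionfish, matching the large inter-reef variation noted in the abstract.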

  6. Model-compared RGU-photometric space-densities in the direction to M 5 (l = 4°, b = +47°)

    International Nuclear Information System (INIS)

    Fenkart, R.; Karaali, S.

    1990-01-01

    In the process of rounding off the results homogeneously obtained within the model-comparison phase of the Basle Halo Program, space densities of both photometric populations, I and II, have been derived for late-type giants and for main-sequence stars with +3 m m, in a field close to the globular cluster M 5, according to the RGU-photometric Basle method. Compared with the density gradients predicted by the standard set of five multi-component models used since the beginning of this phase, they confirm the existence of a Galactic Thick Disk component in this direction, too.

  7. A novel unified dislocation density-based model for hot deformation behavior of a nickel-based superalloy under dynamic recrystallization conditions

    International Nuclear Information System (INIS)

    Lin, Y.C.; Wen, Dong-Xu; Chen, Xiao-Min; Chen, Ming-Song

    2016-01-01

    In this study, a novel unified dislocation density-based model is presented for characterizing the hot deformation behavior of a nickel-based superalloy under dynamic recrystallization (DRX) conditions. In the Kocks-Mecking model, a new softening term is proposed to represent the impact of DRX behavior on dislocation density evolution. The grain size evolution and DRX kinetics are incorporated into the developed model. Material parameters of the developed model are calibrated by a derivative-free method in MATLAB. Comparisons between experimental and predicted results confirm that the developed unified dislocation density-based model can nicely reproduce the hot deformation behavior, DRX kinetics, and grain size evolution over a wide range of initial grain sizes, strain rates, and deformation temperatures. Moreover, the developed unified dislocation density-based model is well suited to analyzing the time-variant forming processes of the studied superalloy. (orig.)
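The kind of model the abstract describes, a Kocks-Mecking dislocation density evolution extended with a DRX softening term, can be sketched as follows. Every coefficient and the exact form of the softening term here are illustrative assumptions, not the authors' calibrated equations: d(rho)/d(eps) = k1·sqrt(rho) − k2·rho − k3·rho·dX/d(eps), with X(eps) an Avrami-type recrystallized fraction beyond a critical strain, and flow stress following the Taylor relation sigma ∝ sqrt(rho).

```python
import math

# Illustrative Kocks-Mecking + DRX-softening sketch (assumed coefficients).

def flow_stress_curve(k1=2e8, k2=8.0, k3=60.0, eps_c=0.1, kav=5.0, m=2.0,
                      rho0=1e12, k_taylor=3.4, deps=1e-4, eps_max=0.8):
    rho, stress = rho0, []
    for i in range(int(round(eps_max / deps))):
        eps = i * deps
        if eps > eps_c:  # DRX starts beyond the critical strain
            u = eps - eps_c
            dX = kav * m * u ** (m - 1) * math.exp(-kav * u ** m)  # dX/d(eps)
        else:
            dX = 0.0
        # Storage - dynamic recovery - DRX softening, explicit Euler in strain:
        rho += (k1 * math.sqrt(rho) - k2 * rho - k3 * rho * dX) * deps
        stress.append(k_taylor * math.sqrt(rho))  # Taylor relation (Pa, illustrative)
    return stress

curve = flow_stress_curve()  # hardens to a peak, then DRX softening sets in
```

The resulting curve shows the characteristic DRX flow behavior: work hardening to a peak stress followed by softening once recrystallization consumes dislocations.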

  8. A novel unified dislocation density-based model for hot deformation behavior of a nickel-based superalloy under dynamic recrystallization conditions

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Y.C. [Central South University, School of Mechanical and Electrical Engineering, Changsha (China); Light Alloy Research Institute of Central South University, Changsha (China); State Key Laboratory of High Performance Complex Manufacturing, Changsha (China); Wen, Dong-Xu; Chen, Xiao-Min [Central South University, School of Mechanical and Electrical Engineering, Changsha (China); Chen, Ming-Song [Central South University, School of Mechanical and Electrical Engineering, Changsha (China); State Key Laboratory of High Performance Complex Manufacturing, Changsha (China)

    2016-09-15

    In this study, a novel unified dislocation density-based model is presented for characterizing the hot deformation behavior of a nickel-based superalloy under dynamic recrystallization (DRX) conditions. In the Kocks-Mecking model, a new softening term is proposed to represent the impact of DRX behavior on dislocation density evolution. The grain size evolution and DRX kinetics are incorporated into the developed model. Material parameters of the developed model are calibrated by a derivative-free method in MATLAB. Comparisons between experimental and predicted results confirm that the developed unified dislocation density-based model can nicely reproduce the hot deformation behavior, DRX kinetics, and grain size evolution over a wide range of initial grain sizes, strain rates, and deformation temperatures. Moreover, the developed unified dislocation density-based model is well suited to analyzing the time-variant forming processes of the studied superalloy. (orig.)

  9. Coupled hygrothermal, electrochemical, and mechanical modelling for deterioration prediction in reinforced cementitious materials

    DEFF Research Database (Denmark)

    Michel, Alexander; Geiker, Mette Rica; Lepech, M.

    2017-01-01

    In this paper a coupled hygrothermal, electrochemical, and mechanical modelling approach for deterioration prediction in cementitious materials is briefly outlined. Deterioration prediction is thereby based on coupled modelling of (i) chemical processes including among others transport of heat and matter as well as phase assemblage on the nano and micro scale, (ii) corrosion of steel including electrochemical processes at the reinforcement surface, and (iii) material performance including corrosion- and load-induced damages on the meso and macro scale. The individual FEM models are fully coupled, i.e. information, such as corrosion current density, damage state of the concrete cover, etc., is constantly exchanged between the models.

  10. Density-dependent electron transport and precise modeling of GaN high electron mobility transistors

    Energy Technology Data Exchange (ETDEWEB)

    Bajaj, Sanyam, E-mail: bajaj.10@osu.edu; Shoron, Omor F.; Park, Pil Sung; Krishnamoorthy, Sriram; Akyol, Fatih; Hung, Ting-Hsiang [Department of Electrical and Computer Engineering, The Ohio State University, Columbus, Ohio 43210 (United States); Reza, Shahed; Chumbes, Eduardo M. [Raytheon Integrated Defense Systems, Andover, Massachusetts 01810 (United States); Khurgin, Jacob [Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, Maryland 21218 (United States); Rajan, Siddharth [Department of Electrical and Computer Engineering, The Ohio State University, Columbus, Ohio 43210 (United States); Department of Material Science and Engineering, The Ohio State University, Columbus, Ohio 43210 (United States)

    2015-10-12

    We report on the direct measurement of the two-dimensional sheet charge density dependence of electron transport in AlGaN/GaN high electron mobility transistors (HEMTs). Pulsed IV measurements established increasing electron velocities with decreasing sheet charge densities, resulting in a saturation velocity of 1.9 × 10^7 cm/s at a low sheet charge density of 7.8 × 10^11 cm^−2. An optical phonon emission-based electron velocity model for GaN is also presented. It accommodates stimulated longitudinal optical (LO) phonon emission, which clamps the electron velocity with strong electron-phonon interaction and long LO phonon lifetime in GaN. A comparison with the measured density-dependent saturation velocity shows that it captures the dependence rather well. Finally, the experimental result is applied in a TCAD-based device simulator to predict DC and small signal characteristics of a reported GaN HEMT. Good agreement between the simulated and reported experimental results validated the measurement presented in this report and established accurate modeling of GaN HEMTs.

  11. Density-dependent electron transport and precise modeling of GaN high electron mobility transistors

    International Nuclear Information System (INIS)

    Bajaj, Sanyam; Shoron, Omor F.; Park, Pil Sung; Krishnamoorthy, Sriram; Akyol, Fatih; Hung, Ting-Hsiang; Reza, Shahed; Chumbes, Eduardo M.; Khurgin, Jacob; Rajan, Siddharth

    2015-01-01

    We report on the direct measurement of the two-dimensional sheet charge density dependence of electron transport in AlGaN/GaN high electron mobility transistors (HEMTs). Pulsed IV measurements established increasing electron velocities with decreasing sheet charge densities, resulting in a saturation velocity of 1.9 × 10^7 cm/s at a low sheet charge density of 7.8 × 10^11 cm^−2. An optical phonon emission-based electron velocity model for GaN is also presented. It accommodates stimulated longitudinal optical (LO) phonon emission, which clamps the electron velocity with strong electron-phonon interaction and long LO phonon lifetime in GaN. A comparison with the measured density-dependent saturation velocity shows that it captures the dependence rather well. Finally, the experimental result is applied in a TCAD-based device simulator to predict DC and small signal characteristics of a reported GaN HEMT. Good agreement between the simulated and reported experimental results validated the measurement presented in this report and established accurate modeling of GaN HEMTs.
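The density-dependent saturation velocity above can be folded into the simple two-parameter velocity-field relation commonly used in HEMT TCAD models, v(E) = μE / (1 + μE/v_sat). Here v_sat is an illustrative decreasing function of sheet density anchored to the reported 1.9 × 10^7 cm/s at n_s = 7.8 × 10^11 cm^−2; the power-law form, its exponent, and the mobility value are assumptions, not the authors' LO-phonon model.

```python
# Sketch of a density-dependent velocity-field relation (assumed functional
# form, anchored to the measured point reported in the abstract).

def v_sat(ns, v0=1.9e7, ns0=7.8e11, k=0.3):
    """Illustrative decreasing saturation velocity [cm/s] vs sheet density [cm^-2]."""
    return v0 * (ns0 / ns) ** k

def drift_velocity(E, ns, mu=1500.0):
    """v(E) = mu*E / (1 + mu*E/v_sat); E in V/cm, mu in cm^2/(V*s)."""
    vs = v_sat(ns)
    return mu * E / (1.0 + mu * E / vs)
```

At low fields the relation reduces to v ≈ μE, and at high fields it saturates at the density-dependent v_sat, which is the behavior the TCAD simulation exploits.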

  12. Empirical model for the electron density peak height disturbance in response to solar wind conditions

    Science.gov (United States)

    Blanch, E.; Altadill, D.

    2009-04-01

    Geomagnetic storms disturb the quiet behaviour of the ionosphere, its electron density, and the electron density peak height, hmF2. Much work has been done to predict the variations of the electron density, but few efforts have been dedicated to predicting the variations of hmF2 under disturbed helio-geomagnetic conditions. We present the results of analyses of F2 layer peak height disturbances that occurred during intense geomagnetic storms over one solar cycle. The results systematically show a significant peak height increase about 2 hours after the beginning of the main phase of the geomagnetic storm, independently of both the local time position of the station at the onset of the storm and the intensity of the storm. An additional uplift is observed in the post-sunset sector. The duration of the uplift and the height increase depend on the intensity of the geomagnetic storm, the season, and the local time position of the station at the onset of the storm. An empirical model has been developed to predict the electron density peak height disturbances in response to solar wind conditions and local time, which can be used for nowcasting and forecasting hmF2 disturbances for the middle latitude ionosphere. This is an important output for operational purposes of the EURIPOS project.

  13. Effect of turbulence models on predicting convective heat transfer to hydrocarbon fuel at supercritical pressure

    Directory of Open Access Journals (Sweden)

    Tao Zhi

    2016-10-01

    A variety of turbulence models were used to perform numerical simulations of heat transfer for hydrocarbon fuel flowing upward and downward through uniformly heated vertical pipes at supercritical pressure. Inlet temperatures varied from 373 K to 663 K, with heat flux ranging from 300 kW/m^2 to 550 kW/m^2. Comparative analyses between predicted and experimental results were used to evaluate the ability of the turbulence models to respond to the variable thermophysical properties of hydrocarbon fuel at supercritical pressure. It was found that the prediction performance of turbulence models is mainly determined by the damping function, which enables them to respond differently to local flow conditions. Although prediction accuracy varied from condition to condition, the shear stress transport (SST) and Launder-Sharma models performed better than all other models used in the study. For runs with very small buoyancy influence, the thermally induced acceleration due to variations in density leads to impairment of heat transfer in the vicinity of pseudo-critical points, and heat transfer was enhanced at higher temperatures through the combined action of four thermophysical properties: density, viscosity, thermal conductivity, and specific heat. For runs with very large buoyancy influence, the thermally induced acceleration effect was overpredicted by the LS and AB models.

  14. Forecasting the density of oil futures returns using model-free implied volatility and high-frequency data

    International Nuclear Information System (INIS)

    Ielpo, Florian; Sevi, Benoit

    2013-09-01

    Forecasting the density of returns is useful for many purposes in finance, such as risk management activities, portfolio choice, or derivative security pricing. Existing methods to forecast the density of returns use either prices of the asset of interest or option prices on this same asset. The latter method needs to convert the risk-neutral estimate of the density into a physical measure, which is computationally cumbersome. In this paper, we take the view of a practitioner who observes the implied volatility in the form of an index, namely the recent OVX, to forecast the density of oil futures returns for horizons from 1 to 60 days. Using the recent methodology of Maheu and McCurdy (2011) to compute density predictions, we compare the performance of time series models using implied volatility and either daily or intra-daily futures prices. Our results indicate that models based on implied volatility deliver significantly better density forecasts at all horizons, which is in line with numerous studies delivering the same evidence for volatility point forecasts. (authors)

  15. Osteoprotegerin autoantibodies do not predict low bone mineral density in middle-aged women.

    Science.gov (United States)

    Vaziri-Sani, Fariba; Brundin, Charlotte; Agardh, Daniel

    2017-12-01

    Autoantibodies against osteoprotegerin (OPG) have been associated with osteoporosis. The aim was to develop an immunoassay for OPG autoantibodies and test their diagnostic usefulness for identifying women in the general population with low bone mineral density. Included were 698 women at a mean age of 55.1 years (range 50.4-60.6) randomly selected from the general population. Bone mineral density (g/cm^2) of the non-dominant wrist was measured by dual-energy X-ray absorptiometry (DXA); a T-score below ... was classified as low bone mineral density. Measurements of OPG autoantibodies were carried out by radiobinding assays. Cut-off levels for a positive value were determined from the deviation from normality in the distribution of 398 healthy blood donors, representing the 99.7th percentile. Forty-five of the 698 (6.6%) women were IgG-OPG positive compared with 2 of 398 (0.5%) controls (p ...). There was no difference in bone mineral density between IgG-OPG positive (median 0.439 (range 0.315-0.547) g/cm^2) women and IgG-OPG negative (median 0.435 (range 0.176-0.652) g/cm^2) women (p = 0.3956). Furthermore, there was neither a correlation between IgG-OPG levels and bone mineral density (r_s = 0.1896; p = 0.2068) nor T-score (r_s = 0.1889; p = 0.2086). Diagnostic sensitivity and specificity of IgG-OPG for low bone mineral density were 5.7% and 92.9%, and positive and negative predictive values were 7.4% and 90.8%, respectively. Elevated OPG autoantibody levels do not predict low bone mineral density in middle-aged women selected from the general population.
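The diagnostic-accuracy figures quoted above (sensitivity 5.7%, specificity 92.9%, PPV 7.4%, NPV 90.8%) all follow from a 2x2 confusion matrix. A minimal sketch of that arithmetic, with invented example counts (the paper's full 2x2 table is not given in the abstract):

```python
# Standard diagnostic-accuracy metrics from a 2x2 table:
#   tp: test-positive with low BMD, fp: test-positive with normal BMD,
#   fn: test-negative with low BMD, tn: test-negative with normal BMD.

def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),  # positive predictive value
        "npv": tn / (tn + fn),  # negative predictive value
    }

example = diagnostic_metrics(tp=5, fp=40, fn=60, tn=500)  # invented counts
```

With a low-prevalence condition and a weak marker, sensitivity and PPV stay small even when specificity is high, which is exactly the pattern the study reports.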

  16. Predicting available water of soil from particle-size distribution and bulk density in an oasis-desert transect in northwestern China

    Science.gov (United States)

    Li, Danfeng; Gao, Guangyao; Shao, Ming'an; Fu, Bojie

    2016-07-01

    A detailed understanding of soil hydraulic properties, particularly the available water content of soil (AW, cm^3 cm^-3), is required for optimal water management. Direct measurement of soil hydraulic properties is impractical for large-scale application, but routinely available soil particle-size distribution (PSD) and bulk density can be used as proxies to develop various prediction functions. In this study, we compared the performance of the Arya and Paris (AP) model, the Mohammadi and Vanclooster (MV) model, the Arya and Heitman (AH) model, and the Rosetta program in predicting the soil water characteristic curve (SWCC) at 34 points with experimental SWCC data in an oasis-desert transect (20 × 5 km) in the middle reaches of the Heihe River basin, northwestern China. The idea behind the three models emerges from the similarity of the shapes of the PSD and the SWCC. The AP model, MV model, and Rosetta program performed better in predicting the SWCC than the AH model. The AW determined from the SWCCs predicted by the MV model agreed better with the experimental values than those derived from the AP model and the Rosetta program. The fine-textured soils were characterized by higher AW values, while the sandy soils had lower AW values. The MV model has the advantages of a robust physical basis, independence from database-related parameters, and the use of subclasses of texture data. These features make it promising for predicting soil water retention at regional scales, serving the application of hydrological models and the optimization of soil water management.
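Deriving AW from a predicted SWCC can be sketched as the difference in water content between field capacity and permanent wilting point. Here a van Genuchten retention curve stands in for the model-predicted SWCC (the AP/MV models predict retention points from particle-size data); the parameter values are illustrative textbook-style numbers, and the suction heads of 330 cm and 15000 cm for field capacity and wilting point are a common convention, not values from the study.

```python
# Available water AW = theta(field capacity) - theta(wilting point),
# using an illustrative van Genuchten retention curve for theta(h).

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Volumetric water content at suction head h [cm]."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

def available_water(theta_r, theta_s, alpha, n, h_fc=330.0, h_wp=15000.0):
    return (van_genuchten(h_fc, theta_r, theta_s, alpha, n)
            - van_genuchten(h_wp, theta_r, theta_s, alpha, n))

# Illustrative parameter sets: a loam retains more plant-available water
# than a sand, matching the texture pattern reported above.
aw_loam = available_water(0.06, 0.43, 0.015, 1.4)
aw_sand = available_water(0.045, 0.40, 0.145, 2.68)
```

The fine-textured curve holds substantially more water between the two suction limits, reproducing the higher AW of fine-textured soils noted in the abstract.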

  17. Modelling bankruptcy prediction models in Slovak companies

    Directory of Open Access Journals (Sweden)

    Kovacova Maria

    2017-01-01

    Intensive research by academics and practitioners has addressed models for bankruptcy prediction and credit risk management. Despite numerous studies focusing on forecasting bankruptcy using traditional statistical techniques (e.g. discriminant analysis and logistic regression) and early artificial intelligence models (e.g. artificial neural networks), there is a trend toward machine learning models (support vector machines, bagging, boosting, and random forest) to predict bankruptcy one year prior to the event. Comparing the performance of this unconventional approach with results obtained by discriminant analysis, logistic regression, and neural networks, it has been found that bagging, boosting, and random forest models outperform the other techniques, and that prediction accuracy in the testing sample improves when additional variables are included. On the other hand, the prediction accuracy of older, well-known bankruptcy prediction models is quite high. Therefore, we aim to analyse these older models on a dataset of Slovak companies to validate their prediction ability under specific conditions. Furthermore, these models will be remodelled according to new trends by calculating the influence of eliminating selected variables on their overall prediction ability.

  18. A multicomponent multiphase lattice Boltzmann model with large liquid–gas density ratios for simulations of wetting phenomena

    International Nuclear Information System (INIS)

    Zhang Qing-Yu; Zhu Ming-Fang; Sun Dong-Ke

    2017-01-01

    A multicomponent multiphase (MCMP) pseudopotential lattice Boltzmann (LB) model with large liquid–gas density ratios is proposed for simulating the wetting phenomena. In the proposed model, two layers of neighboring nodes are adopted to calculate the fluid–fluid cohesion force with higher isotropy order. In addition, the different-time-step method is employed to calculate the processes of particle propagation and collision for the two fluid components with a large pseudo-particle mass contrast. It is found that the spurious current is remarkably reduced by employing the higher isotropy order calculation of the fluid–fluid cohesion force. The maximum spurious current appearing at the phase interfaces is evidently influenced by the magnitudes of fluid–fluid and fluid–solid interaction strengths, but weakly affected by the time step ratio. The density ratio analyses show that the liquid–gas density ratio is dependent on both the fluid–fluid interaction strength and the time step ratio. For the liquid–gas flow simulations without solid phase, the maximum liquid–gas density ratio achieved by the present model is higher than 1000:1. However, the obtainable maximum liquid–gas density ratio in the solid–liquid–gas system is lower. Wetting phenomena of droplets contacting smooth/rough solid surfaces and the dynamic process of liquid movement in a capillary tube are simulated to validate the proposed model in different solid–liquid–gas coexisting systems. It is shown that the simulated intrinsic contact angles of droplets on smooth surfaces are in good agreement with those predicted by the constructed LB formula that is related to Young’s equation. The apparent contact angles of droplets on rough surfaces compare reasonably well with the predictions of Cassie’s law. For the simulation of liquid movement in a capillary tube, the linear relation between the liquid–gas interface position and simulation time is observed, which is identical to
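The contact-angle validations described above can be mirrored analytically. The sketch below evaluates Young's equation for the intrinsic angle and the Cassie-Baxter relation for the apparent angle on a composite (air-trapping) rough surface; the surface-tension values and solid fraction are arbitrary placeholders, and this is not the LB formula constructed in the paper.

```python
import math

def young_contact_angle(gamma_sg, gamma_sl, gamma_lg):
    """Intrinsic contact angle (deg) from Young's equation:
    cos(theta) = (gamma_sg - gamma_sl) / gamma_lg."""
    return math.degrees(math.acos((gamma_sg - gamma_sl) / gamma_lg))

def cassie_apparent_angle(theta_deg, f_solid):
    """Cassie-Baxter apparent angle (deg) with trapped air:
    cos(theta*) = f_solid * (cos(theta) + 1) - 1."""
    c = f_solid * (math.cos(math.radians(theta_deg)) + 1.0) - 1.0
    return math.degrees(math.acos(c))

# placeholder surface tensions giving a partially wetting solid
theta = young_contact_angle(1.0, 1.2, 0.8)
app = cassie_apparent_angle(theta, f_solid=0.3)
```

Trapping air under the droplet raises the apparent angle above the intrinsic one, as Cassie's law predicts.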

  19. Knowledge-based artificial neural network model to predict the properties of alpha+ beta titanium alloys

    Energy Technology Data Exchange (ETDEWEB)

Banu, P. S. Noori; Rani, S. Devaki [Dept. of Metallurgical Engineering, Jawaharlal Nehru Technological University, Hyderabad (India)

    2016-08-15

    In view of emerging applications of alpha+beta titanium alloys in aerospace and defense, we have aimed to develop a Back propagation neural network (BPNN) model capable of predicting the properties of these alloys as functions of alloy composition and/or thermomechanical processing parameters. The optimized BPNN model architecture was based on the sigmoid transfer function and has one hidden layer with ten nodes. The BPNN model showed excellent predictability of five properties: Tensile strength (r: 0.96), yield strength (r: 0.93), beta transus (r: 0.96), specific heat capacity (r: 1.00) and density (r: 0.99). The developed BPNN model was in agreement with the experimental data in demonstrating the individual effects of alloying elements in modulating the above properties. This model can serve as the platform for the design and development of new alpha+beta titanium alloys in order to attain desired strength, density and specific heat capacity.
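A one-hidden-layer sigmoid network like the optimized BPNN architecture (ten hidden nodes in the paper) has the forward pass sketched below; the tiny two-node weights here are arbitrary placeholders, and back-propagation training is omitted.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def bpnn_forward(x, w_hidden, b_hidden, w_out, b_out):
    """Forward pass: one sigmoid hidden layer, linear output node."""
    hidden = [sigmoid(sum(wi * xi for wi, xi in zip(row, x)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    return sum(wo * h for wo, h in zip(w_out, hidden)) + b_out

# toy 2-input, 2-hidden-node weights (illustrative only)
y = bpnn_forward([0.5, -0.2],
                 w_hidden=[[0.1, 0.4], [-0.3, 0.2]],
                 b_hidden=[0.0, 0.1],
                 w_out=[1.0, -1.0], b_out=0.5)
```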

  20. Models for Experimental High Density Housing

    Science.gov (United States)

    Bradecki, Tomasz; Swoboda, Julia; Nowak, Katarzyna; Dziechciarz, Klaudia

    2017-10-01

The article presents the effects of research on models of high density housing. The authors present urban projects for experimental high density housing estates. The design was based on research performed on 38 examples of similar housing in Poland built after 2003. Some of the case studies show extreme density, which inspired the researchers to test individual virtual solutions that would answer the question: how far can we push the limits? The experimental housing projects show strengths and weaknesses of design driven only by such indexes as FAR (floor area ratio) and DPH (dwellings per hectare). Although such projects are implemented, the authors believe that there are reasons for limits, since high index values may be in contradiction to the optimum character of the housing environment. Virtual models on virtual plots presented by the authors were oriented toward maximising the DPH index and the DAI (dwellings area index), which is very often the main driver for developers. The authors also raise the question of the sustainability of such solutions. The research was carried out in the URBAN model research group (Gliwice, Poland), which consists of academic researchers and architecture students. The models reflect architectural and urban regulations that are valid in Poland. Conclusions might be helpful for urban planners, urban designers, developers, architects and architecture students.
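The two indexes driving the experiments are straightforward to compute; the sketch below assumes plot areas given in square metres (1 ha = 10,000 m²).

```python
def far(total_floor_area_m2, plot_area_m2):
    """Floor area ratio: built floor area divided by plot area."""
    return total_floor_area_m2 / plot_area_m2

def dph(dwellings, plot_area_m2):
    """Dwellings per hectare."""
    return dwellings / (plot_area_m2 / 10_000.0)
```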

  1. Modelling of the reactive sputtering process with non-uniform discharge current density and different temperature conditions

    International Nuclear Information System (INIS)

    Vasina, P; Hytkova, T; Elias, M

    2009-01-01

The majority of current models of reactive magnetron sputtering assume a uniform discharge current density and the same temperature near the target and the substrate. However, in a real experimental set-up, the presence of the magnetic field causes high-density plasma to form in front of the cathode in the shape of a toroid. Consequently, the discharge current density is laterally non-uniform. In addition, the heating of the background gas by sputtered particles, usually referred to as gas rarefaction, plays an important role. This paper presents an extended model of reactive magnetron sputtering that assumes a non-uniform discharge current density and accommodates the gas rarefaction effect. It is devoted mainly to the study of the behaviour of reactive sputtering rather than to the prediction of coating properties. Outputs of this model are compared with those of a model that assumes a uniform discharge current density and a uniform temperature profile in the deposition chamber. Particular attention is paid to modelling the radial variation of the target composition near transitions from the metallic to the compound mode and vice versa. A study of the target utilization in the metallic and compound modes is performed for two discharge current density profiles corresponding to typical two-pole and multipole magnet configurations currently available on the market. Different shapes of the discharge current density were tested. Finally, hysteresis curves are plotted for various temperature conditions in the reactor.

  2. Calibration plots for risk prediction models in the presence of competing risks.

    Science.gov (United States)

    Gerds, Thomas A; Andersen, Per K; Kattan, Michael W

    2014-08-15

    A predicted risk of 17% can be called reliable if it can be expected that the event will occur to about 17 of 100 patients who all received a predicted risk of 17%. Statistical models can predict the absolute risk of an event such as cardiovascular death in the presence of competing risks such as death due to other causes. For personalized medicine and patient counseling, it is necessary to check that the model is calibrated in the sense that it provides reliable predictions for all subjects. There are three often encountered practical problems when the aim is to display or test if a risk prediction model is well calibrated. The first is lack of independent validation data, the second is right censoring, and the third is that when the risk scale is continuous, the estimation problem is as difficult as density estimation. To deal with these problems, we propose to estimate calibration curves for competing risks models based on jackknife pseudo-values that are combined with a nearest neighborhood smoother and a cross-validation approach to deal with all three problems. Copyright © 2014 John Wiley & Sons, Ltd.
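The jackknife pseudo-value idea can be illustrated in the simplest, uncensored case, taking the estimator to be the plain event proportion. With no censoring, the pseudo-value of subject i reduces exactly to that subject's event indicator, which is the intuition behind regressing pseudo-values on predicted risks; the censoring-adjusted estimators of the paper are not reproduced here.

```python
def jackknife_pseudo_values(events):
    """Pseudo-value for subject i: n*theta - (n-1)*theta_minus_i, where
    theta is taken here as the event proportion (no censoring)."""
    n = len(events)
    theta = sum(events) / n
    out = []
    for i in range(n):
        theta_minus = (sum(events) - events[i]) / (n - 1)
        out.append(n * theta - (n - 1) * theta_minus)
    return out

pseudos = jackknife_pseudo_values([1, 0, 1, 1])
```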

  3. Topside electron density at low latitudes

    International Nuclear Information System (INIS)

    Ezquer, R.G.; Cabrera, M.A.; Flores, R.F.; Mosert, M.

    2002-01-01

The validity of IRI in predicting the electron density of the topside profile over the low-latitude region is checked. The comparison with measurements obtained by the Taiyo satellite during low solar activity shows that the disagreement between prediction and measurement is lower than 40% for 70% of the considered cases. These IRI predictions are better than those obtained in a previous work at the southern peak of the equatorial anomaly for high solar activity. Additional studies for low solar activity, using ionosonde data as input parameters to the model, are needed in order to check whether the observed deviations are due to the predicted peak characteristics or to the predicted shape of the topside profile. (author)
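The reported agreement statistic (deviation below 40% in 70% of cases) is a simple computation, sketched below with made-up prediction/measurement pairs.

```python
def deviation_pct(predicted, measured):
    """Relative deviation of a model prediction from a measurement, in %."""
    return abs(predicted - measured) / measured * 100.0

def fraction_within(preds, meas, limit_pct=40.0):
    """Fraction of prediction/measurement pairs within the given deviation."""
    ok = sum(1 for p, m in zip(preds, meas) if deviation_pct(p, m) <= limit_pct)
    return ok / len(preds)

# invented electron-density pairs (predicted vs measured, arbitrary units)
frac = fraction_within([90.0, 130.0, 80.0, 200.0],
                       [100.0, 100.0, 100.0, 100.0])
```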

  4. Integrated predictive modeling simulations of the Mega-Amp Spherical Tokamak

    International Nuclear Information System (INIS)

    Nguyen, Canh N.; Bateman, Glenn; Kritz, Arnold H.; Akers, Robert; Byrom, Calum; Sykes, Alan

    2002-01-01

    Integrated predictive modeling simulations are carried out using the BALDUR transport code [Singer et al., Comput. Phys. Commun. 49, 275 (1982)] for high confinement mode (H-mode) and low confinement mode (L-mode) discharges in the Mega-Amp Spherical Tokamak (MAST) [Sykes et al., Phys. Plasmas 8, 2101 (2001)]. Simulation results, obtained using either the Multi-Mode transport model (MMM95) or, alternatively, the mixed-Bohm/gyro-Bohm transport model, are compared with experimental data. In addition to the anomalous transport, neoclassical transport is included in the simulations and the ion thermal diffusivity in the inner third of the plasma is found to be predominantly neoclassical. The sawtooth oscillations in the simulations radially spread the neutral beam injection heating profiles across a broad sawtooth mixing region. The broad sawtooth oscillations also flatten the central temperature and electron density profiles. Simulation results for the electron temperature and density profiles are compared with experimental data to test the applicability of these models and the BALDUR integrated modeling code in the limit of low aspect ratio toroidal plasmas

  5. Prediction models for density and viscosity of biodiesel and their effects on fuel supply system in CI engines

    Energy Technology Data Exchange (ETDEWEB)

    Tesfa, B.; Mishra, R.; Gu, F. [Computing and Engineering, University of Huddersfield, Queensgate, Huddersfield, HD1 3DH (United Kingdom); Powles, N. [Chemistry and Forensic Science, University of Huddersfield, Queensgate, Huddersfield, HD1 3DH (United Kingdom)

    2010-12-15

Biodiesel is a promising non-toxic and biodegradable alternative fuel for the transport sector. Nevertheless, the higher viscosity and density of biodiesel pose some acute problems when it is used in an unmodified engine. Taking this into consideration, this study focused on two objectives. The first objective is to identify the effect of temperature on density and viscosity for a variety of biodiesels and to develop a correlation between density and viscosity for these biodiesels. The second objective is to investigate and quantify the effects of the density and viscosity of the biodiesels and their blends on various components of the engine fuel supply system such as the fuel pump, fuel filters and fuel injector. To achieve the first objective, the density and viscosity of rapeseed oil biodiesel, corn oil biodiesel and waste oil biodiesel blends (0B, 5B, 10B, 20B, 50B, 75B, and 100B) were tested at different temperatures using the EN ISO 3675:1998 and EN ISO 3104:1996 standards. For both density and viscosity, new correlations were developed and compared with the published literature. A new correlation between biodiesel density and biodiesel viscosity was also developed. The second objective was achieved by using analytical models showing the effects of density and viscosity on the performance of the fuel supply system. These effects were quantified over a wide range of engine operating conditions. The higher density and viscosity of biodiesel have a significant impact on the performance of fuel pumps and fuel filters as well as on the air-fuel mixing behaviour of compression ignition (CI) engines. (author)
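Density-temperature correlations of this kind are typically linear over the tested range. The sketch below fits ρ(T) = a + bT by ordinary least squares to hypothetical, rapeseed-like data; the numbers are placeholders, not the paper's measurements.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# hypothetical biodiesel density (kg/m3) versus temperature (deg C)
T = [10, 20, 30, 40, 50]
rho = [890.1, 883.0, 875.9, 868.8, 861.7]
a, b = linear_fit(T, rho)
```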

  6. A new model for prediction of dispersoid precipitation in aluminium alloys containing zirconium and scandium

    International Nuclear Information System (INIS)

    Robson, J.D.

    2004-01-01

A model has been developed to predict the precipitation of ternary Al₃(Sc,Zr) dispersoids in aluminium alloys containing zirconium and scandium. The model is based on the classical numerical method of Kampmann and Wagner, extended to predict precipitation of a ternary phase. The model has been applied to the precipitation of dispersoids in scandium-containing AA7050. The dispersoid precipitation kinetics and number density are predicted to be sensitive to the scandium concentration, whilst the dispersoid radius is not. The dispersoids are predicted to enrich in zirconium during precipitation. Coarsening has been investigated in detail and it is predicted that a steady-state size distribution is only reached once coarsening is well advanced. The addition of scandium is predicted to eliminate the dispersoid-free zones observed in scandium-free 7050, greatly increasing recrystallization resistance.
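For the coarsening stage, the classical (LSW) limit gives cube-law growth of the mean radius; the Kampmann-Wagner bookkeeping of the full model is not reproduced here, and the rate constant below is arbitrary.

```python
def lsw_mean_radius(r0, k, t):
    """LSW coarsening law: mean radius grows as r^3 = r0^3 + k*t."""
    return (r0 ** 3 + k * t) ** (1.0 / 3.0)

# arbitrary initial radius (m) and coarsening rate constant (m^3/s)
r_early = lsw_mean_radius(1e-9, 1e-29, 100.0)
r_late = lsw_mean_radius(1e-9, 1e-29, 1000.0)
```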

  7. A note on the conditional density estimate in single functional index model

    OpenAIRE

    2010-01-01

    Abstract In this paper, we consider estimation of the conditional density of a scalar response variable Y given a Hilbertian random variable X when the observations are linked with a single-index structure. We establish the pointwise and the uniform almost complete convergence (with the rate) of the kernel estimate of this model. As an application, we show how our result can be applied in the prediction problem via the conditional mode estimate. Finally, the estimation of the funct...

  8. Predicting microRNA precursors with a generalized Gaussian components based density estimation algorithm

    Directory of Open Access Journals (Sweden)

    Wu Chi-Yeh

    2010-01-01

    Full Text Available Abstract Background MicroRNAs (miRNAs are short non-coding RNA molecules, which play an important role in post-transcriptional regulation of gene expression. There have been many efforts to discover miRNA precursors (pre-miRNAs over the years. Recently, ab initio approaches have attracted more attention because they do not depend on homology information and provide broader applications than comparative approaches. Kernel based classifiers such as support vector machine (SVM are extensively adopted in these ab initio approaches due to the prediction performance they achieved. On the other hand, logic based classifiers such as decision tree, of which the constructed model is interpretable, have attracted less attention. Results This article reports the design of a predictor of pre-miRNAs with a novel kernel based classifier named the generalized Gaussian density estimator (G2DE based classifier. The G2DE is a kernel based algorithm designed to provide interpretability by utilizing a few but representative kernels for constructing the classification model. The performance of the proposed predictor has been evaluated with 692 human pre-miRNAs and has been compared with two kernel based and two logic based classifiers. The experimental results show that the proposed predictor is capable of achieving prediction performance comparable to those delivered by the prevailing kernel based classification algorithms, while providing the user with an overall picture of the distribution of the data set. Conclusion Software predictors that identify pre-miRNAs in genomic sequences have been exploited by biologists to facilitate molecular biology research in recent years. The G2DE employed in this study can deliver prediction accuracy comparable with the state-of-the-art kernel based machine learning algorithms. 
Furthermore, biologists can obtain valuable insights about the different characteristics of the sequences of pre-miRNAs with the models generated by the G
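The idea of classifying by a density estimate built from a few interpretable kernels can be sketched in one dimension: label a feature value by whichever class's Gaussian-mixture density is larger. The components below are invented placeholders, not the trained G2DE model.

```python
import math

def gaussian_pdf(x, mu, sigma):
    return (math.exp(-0.5 * ((x - mu) / sigma) ** 2)
            / (sigma * math.sqrt(2.0 * math.pi)))

def mixture_density(x, components):
    """components: list of (weight, mu, sigma) kernels."""
    return sum(w * gaussian_pdf(x, mu, s) for w, mu, s in components)

def classify(x, pos_components, neg_components):
    """Label by the class whose few-kernel density estimate is larger."""
    if mixture_density(x, pos_components) >= mixture_density(x, neg_components):
        return "pre-miRNA"
    return "other"

# invented one-kernel-per-class placeholder models
POS = [(1.0, 0.0, 1.0)]
NEG = [(1.0, 5.0, 1.0)]
```

Because each class is described by only a few kernels, the fitted means and widths themselves summarize the data distribution, which is the interpretability argument made in the abstract.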

  9. Prediction of overpotential and effective thickness of Ni/YSZ anode for solid oxide fuel cell by improved species territory adsorption model

    Science.gov (United States)

    Nagasawa, Tsuyoshi; Hanamura, Katsunori

    2017-06-01

The reliability of the analytical model for hydrogen oxidation at the Ni/YSZ anode in solid oxide fuel cells, named the species territory adsorption model, has been improved by introducing referenced thermodynamic and kinetic parameters predicted by density functional theory calculations. The model explicitly predicts the anode overpotential using three unknowns: the quantities of state for the oxygen migration process in YSZ near a triple phase boundary (TPB), the frequency factor for hydrogen oxidation, and the effective anode thickness. The former two are determined through a careful fitting process between the predicted and experimental results for Ni/YSZ cermet and Ni-patterned anodes. This makes it possible to estimate the effective anode thickness, which tends to increase with temperature in six kinds of Ni/YSZ anodes reported in the references. In addition, a comparison between the proposed model and a published numerical simulation indicates that the model predicts a more accurate dependence of anode overpotential on steam partial pressure than the Butler-Volmer equation with an empirical exchange current density. Introducing the present model into numerical simulation instead of the Butler-Volmer equation can give a more accurate prediction of anode polarization.
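For reference, the empirical-exchange-current form that the proposed model is compared against is the Butler-Volmer relation; a minimal sketch, with a placeholder exchange current density and symmetric transfer coefficients, is:

```python
import math

F = 96485.0  # Faraday constant, C/mol
R = 8.314    # gas constant, J/(mol K)

def butler_volmer(i0, eta, T=1073.0, alpha_a=0.5, alpha_c=0.5):
    """Butler-Volmer current density (same units as i0) at overpotential
    eta (V); i0 is an empirical exchange current density (placeholder)."""
    return i0 * (math.exp(alpha_a * F * eta / (R * T))
                 - math.exp(-alpha_c * F * eta / (R * T)))
```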

  10. Improving the description of collective effects within the combinatorial model of nuclear level densities

    International Nuclear Information System (INIS)

    Hilaire, S.; Girod, M.; Goriely, S.

    2011-01-01

The combinatorial model of nuclear level densities has now reached a level of accuracy comparable to that of the best global analytical expressions, without suffering from the limits imposed by the statistical hypothesis on which the latter expressions rely. In particular, it naturally provides non-Gaussian spin distributions as well as non-equipartition of parities, which are known to have a significant impact on cross section predictions at low energies. Our first global model, developed in Ref. 1, suffered from deficiencies, in particular in the way the collective effects - both vibrational and rotational - were treated. We have recently improved this treatment using simultaneously the single-particle levels and collective properties predicted by a newly derived Gogny interaction, thereby enabling a microscopic description of energy-dependent shell, pairing and deformation effects. In addition, for deformed nuclei, the transition to sphericity is coherently taken into account on the basis of a temperature-dependent Hartree-Fock calculation which provides at each temperature the structure properties needed to build the level densities. This new method is described and shown to give promising preliminary results with respect to available experimental data. (authors)
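For contrast, the statistical-model baseline that the combinatorial approach relaxes assigns spins according to the textbook distribution f(J) = ((2J+1)/(2σ²))·exp(−(J+½)²/(2σ²)); a sketch (integer spins only, arbitrary spin-cutoff σ) is:

```python
import math

def spin_distribution(J, sigma):
    """Statistical-model spin distribution (integer spins):
    f(J) = (2J+1)/(2 sigma^2) * exp(-(J+1/2)^2 / (2 sigma^2))."""
    return ((2 * J + 1) / (2.0 * sigma ** 2)
            * math.exp(-(J + 0.5) ** 2 / (2.0 * sigma ** 2)))

# the distribution sums to approximately 1 over all spins
total = sum(spin_distribution(j, 4.0) for j in range(200))
```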

  11. Urbanization impacts on mammals across urban-forest edges and a predictive model of edge effects.

    Science.gov (United States)

    Villaseñor, Nélida R; Driscoll, Don A; Escobar, Martín A H; Gibbons, Philip; Lindenmayer, David B

    2014-01-01

With accelerating rates of urbanization worldwide, a better understanding of ecological processes at the wildland-urban interface is critical to conserve biodiversity. We explored the effects of high- and low-density housing developments on forest-dwelling mammals. Based on habitat characteristics, we expected a gradual decline in species abundance across forest-urban edges and an increased decline rate in higher contrast edges. We surveyed arboreal mammals in sites of high and low housing density along 600 m transects that spanned urban areas and adjacent native forest. We also surveyed forest controls to test whether edge effects extended beyond our edge transects. We fitted models describing richness, total abundance and individual species abundance. Low-density housing developments provided suitable habitat for most arboreal mammals. In contrast, high-density housing developments had lower species richness, total abundance and individual species abundance, but supported the highest abundances of an urban adapter (Trichosurus vulpecula). We did not find the predicted gradual decline in species abundance. Of four species analysed, three exhibited no response to the proximity of urban boundaries, but spilled over into adjacent urban habitat to differing extents. One species (Petaurus australis) had an extended negative response to urban boundaries, suggesting that urban development has impacts beyond 300 m into adjacent forest. Our empirical work demonstrates that high-density housing developments have negative effects on both community- and species-level responses, except for one urban adapter. We developed a new predictive model of edge effects based on our results and the literature. To predict animal responses across edges, our framework integrates for the first time: (1) habitat quality/preference, (2) species responses to the proximity of the adjacent habitat, and (3) spillover extent/sensitivity to adjacent habitat boundaries. 
This framework will

  12. Expected packing density allows prediction of both amyloidogenic and disordered regions in protein chains

    Energy Technology Data Exchange (ETDEWEB)

    Galzitskaya, Oxana V; Garbuzynskiy, Sergiy O; Lobanov, Michail Yu [Institute of Protein Research, Russian Academy of Sciences, 142290, Pushchino, Moscow Region (Russian Federation)

    2007-07-18

    The determination of factors that influence conformational changes in proteins is very important for the identification of potentially amyloidogenic and disordered regions in polypeptide chains. In our work we introduce a new parameter, mean packing density, to detect both amyloidogenic and disordered regions in a protein sequence. It has been shown that regions with strong expected packing density are responsible for amyloid formation. Our predictions are consistent with known disease-related amyloidogenic regions for 9 of 12 amyloid-forming proteins and peptides in which the positions of amyloidogenic regions have been revealed experimentally. Our findings support the concept that the mechanism of formation of amyloid fibrils is similar for different peptides and proteins. Moreover, we have demonstrated that regions with weak expected packing density are responsible for the appearance of disordered regions. Our method has been tested on datasets of globular proteins and long disordered protein segments, and it shows improved performance over other widely used methods. Thus, we demonstrate that the expected packing density is a useful value for predicting both disordered and amyloidogenic regions of a protein based on sequence alone. Our results are important for understanding the structural characteristics of protein folding and misfolding.
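A sliding-window mean over a per-residue packing-density scale is the core of such a predictor: windows with high mean flag candidate amyloidogenic regions, windows with low mean flag disordered ones. The scale values below are invented placeholders, not the authors' parameters.

```python
# hypothetical per-residue expected packing-density values
PACKING = {"A": 0.50, "G": 0.35, "I": 0.66, "V": 0.63, "S": 0.40,
           "F": 0.64, "L": 0.62, "E": 0.38, "K": 0.36, "P": 0.33}

def mean_packing_profile(seq, window=5):
    """Mean expected packing density in a sliding window along seq."""
    half = window // 2
    prof = []
    for i in range(half, len(seq) - half):
        win = seq[i - half:i + half + 1]
        prof.append(sum(PACKING[a] for a in win) / window)
    return prof

# hydrophobic stretch followed by a glycine-rich (disorder-prone) stretch
prof = mean_packing_profile("IIIIIGGGGG")
```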

  13. Fluid and gyrokinetic modelling of particle transport in plasmas with hollow density profiles

    International Nuclear Information System (INIS)

    Tegnered, D; Oberparleiter, M; Nordman, H; Strand, P

    2016-01-01

Hollow density profiles occur in connection with pellet fuelling and L to H transitions. A positive density gradient could potentially stabilize the turbulence or change the relation between convective and diffusive fluxes, thereby reducing the turbulent transport of particles towards the center, making the fuelling scheme inefficient. In the present work, the particle transport driven by ITG/TE mode turbulence in regions of hollow density profiles is studied by fluid as well as gyrokinetic simulations. The fluid model used, an extended version of the Weiland transport model, the Extended Drift Wave Model (EDWM), incorporates an arbitrary number of ion species in a multi-fluid description, and an extended wavelength spectrum. The fluid model, which is fast and hence suitable for use in predictive simulations, is compared to gyrokinetic simulations using the code GENE. Typical tokamak parameters are used based on the Cyclone Base Case. Parameter scans in key plasma parameters like plasma β, R/L_T, and magnetic shear are investigated. It is found that β in particular has a stabilizing effect in the negative R/L_n region; both nonlinear GENE and EDWM show a decrease in inward flux for negative R/L_n and a change of direction from inward to outward for positive R/L_n. This might have serious consequences for pellet fuelling of high β plasmas. (paper)

  14. Information density converges in dialogue: Towards an information-theoretic model.

    Science.gov (United States)

    Xu, Yang; Reitter, David

    2018-01-01

The principle of entropy rate constancy (ERC) states that language users distribute information such that words tend to be equally predictable given previous contexts. We examine the applicability of this principle to spoken dialogue, as previous findings primarily rest on written text. The study takes into account the joint-activity nature of dialogue and the topic shift mechanisms that differ from monologue. It examines how the information contributions from the two dialogue partners interactively evolve as the discourse develops. The increase of local sentence-level information density (predicted by ERC) is shown to apply to dialogue overall. However, when the different roles of interlocutors in introducing new topics are identified, their contribution in information content displays a new converging pattern. We draw explanations for this pattern from multiple perspectives: Casting dialogue as an information exchange system would mean that the pattern is the result of two interlocutors maintaining their own context rather than sharing one. Second, we present some empirical evidence that a model of Interactive Alignment may include information density to explain the effect. Third, we argue that building common ground is a process analogous to information convergence. Thus, we put forward an information-theoretic view of dialogue, under which some existing theories of human dialogue may eventually be unified. Copyright © 2017 Elsevier B.V. All rights reserved.
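Sentence-level information density is typically measured as mean per-word surprisal under a language model; the sketch below uses a crude unigram model estimated from the dialogue itself as a stand-in for the n-gram or parser-based estimates used in the ERC literature.

```python
import math
from collections import Counter

def sentence_information_density(sentences):
    """Mean per-word surprisal (bits) of each sentence under a unigram
    model estimated from the whole dialogue."""
    words = [w for s in sentences for w in s.split()]
    counts = Counter(words)
    total = len(words)
    densities = []
    for s in sentences:
        toks = s.split()
        bits = sum(-math.log2(counts[w] / total) for w in toks)
        densities.append(bits / len(toks))
    return densities

# toy "dialogue" of two utterances
dens = sentence_information_density(["a a b", "c"])
```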

  15. Void fraction prediction in two-phase flows independent of the liquid phase density changes

    International Nuclear Information System (INIS)

    Nazemi, E.; Feghhi, S.A.H.; Roshani, G.H.

    2014-01-01

Gamma-ray densitometry is a frequently used non-invasive method to determine the void fraction in two-phase gas-liquid pipe flows. The performance of flow meters using gamma-ray attenuation depends strongly on the fluid properties. Variations of fluid properties such as density, in situations where temperature and pressure fluctuate, would cause significant errors in the determination of the void fraction in two-phase flows. The conventional solution to this obstacle is periodic recalibration, which is a difficult task. This paper presents a method based on dual-modality densitometry using an Artificial Neural Network (ANN), which offers the advantage of measuring the void fraction independent of liquid phase changes. An experimental setup was implemented to generate the required input data for training the network. ANNs were trained on the registered counts of the transmission and scattering detectors for different liquid phase densities and void fractions. Void fractions were predicted by the ANNs with a mean relative error of less than 0.45% over liquid densities ranging from 0.735 to 0.98 g cm⁻³. Applying this method would improve the performance of two-phase flow meters and eliminate the necessity of periodic recalibration. - Highlights: • Void fraction was predicted independent of density changes. • Recorded counts of detectors/void fraction were used as inputs/output of the ANN. • The ANN eliminated the necessity of recalibration under changing density in two-phase flows

  16. Predictive modeling of complications.

    Science.gov (United States)

    Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P

    2016-09-01

    Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions.

  17. The prediction of cyclic proximal humerus fracture fixation failure by various bone density measures.

    Science.gov (United States)

    Varga, Peter; Grünwald, Leonard; Windolf, Markus

    2018-02-22

Fixation of osteoporotic proximal humerus fractures has remained challenging, but may be improved by careful pre-operative planning. The aim of this study was to investigate how well the failure of locking plate fixation of osteoporotic proximal humerus fractures can be predicted by bone density measures assessed with currently available clinical imaging (realistic case) and a higher resolution and quality modality (theoretical best-case). Various density measures were correlated to the experimentally assessed number of cycles to construct failure of plated unstable low-density proximal humerus fractures (N = 18). The influence of the density evaluation technique was investigated by comparing local (peri-implant) versus global evaluation regions; HR-pQCT-based versus clinical QCT-based image data; ipsilateral versus contralateral side; and bone mineral content (BMC) versus bone mineral density (BMD). All investigated density measures were significantly correlated with the experimental cycles to failure. The best performing clinically feasible parameter was the QCT-based BMC of the contralateral articular cap region, providing significantly better correlation (R² = 0.53) compared to a previously proposed clinical density measure (R² = 0.30). BMC had consistently, but not significantly, stronger correlations with failure than BMD. The overall best results were obtained with the ipsilateral HR-pQCT-based local BMC (R² = 0.74), which may be used for implant optimization. Strong correlations were found between the corresponding density measures of the two CT image sources, as well as between the two sides. Future studies should investigate whether BMC of the contralateral articular cap region could provide improved prediction of clinical fixation failure compared to previously proposed measures. © 2018 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res.
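The reported R² values come from correlating each density measure with cycles to failure; a minimal coefficient-of-determination sketch for the least-squares line (made-up data in the test, not the study's measurements) is:

```python
def r_squared(xs, ys):
    """Coefficient of determination of the least-squares line of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy)
```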

  18. Modelling of density limit phenomena in toroidal helical plasmas

    International Nuclear Information System (INIS)

    Itoh, Kimitaka; Itoh, Sanae-I.

    2001-01-01

The physics of density limit phenomena in toroidal helical plasmas is discussed based on an analytic point model of toroidal plasmas. The combined mechanism of the transport and radiation loss of energy is analyzed, and the achievable density is derived. A scaling law of the density limit is discussed. The dependence of the critical density on the heating power, magnetic field, plasma size and safety factor in the case of L-mode energy confinement is explained. The dynamic evolution of the plasma energy and radiation loss is discussed. Assuming a simple model of density evolution, in which the density is suddenly lost once the temperature falls below a critical value, a limit cycle oscillation is shown to occur. A condition that divides the limit cycle oscillation from complete radiation collapse is discussed. This model seems to explain the density limit oscillation that has been observed on the Wendelstein 7-AS (W7-AS) stellarator. (author)
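The power-balance part of such a point model can be sketched in zero dimensions: equating heating power with impurity radiation, P = n²·L_z·V, gives an achievable density n_c = sqrt(P/(L_z·V)). The numbers below are arbitrary, and the transport terms of the full model are omitted.

```python
import math

def critical_density(p_heat_w, l_z, volume_m3):
    """Zero-dimensional radiative density limit (m^-3):
    heating power = n^2 * L_z * V  =>  n_c = sqrt(P / (L_z * V))."""
    return math.sqrt(p_heat_w / (l_z * volume_m3))

# arbitrary placeholder values: 1 MW heating, cooling rate 1e-31 W m^3, 10 m^3
n_c = critical_density(1e6, 1e-31, 10.0)
```

As in the scaling-law discussion above, the critical density rises with heating power (here as the square root).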

  19. Modelling of density limit phenomena in toroidal helical plasmas

    International Nuclear Information System (INIS)

    Itoh, K.; Itoh, S.-I.

    2000-03-01

    The physics of density limit phenomena in toroidal helical plasmas is discussed based on an analytic point model of toroidal plasmas. The combined mechanism of the transport and radiation loss of energy is analyzed, and the achievable density is derived. A scaling law for the density limit is discussed. The dependence of the critical density on the heating power, magnetic field, plasma size and safety factor in the case of L-mode energy confinement is explained. The dynamic evolution of the plasma energy and radiation loss is discussed. Assuming a simple model of density evolution, in which density is suddenly lost when the temperature falls below a critical value, a limit cycle oscillation is shown to occur. A condition that divides the limit cycle oscillation from complete radiation collapse is discussed. This model seems to explain the density limit oscillation that has been observed on the W7-AS stellarator. (author)

  20. Comparison of ITER performance predicted by semi-empirical and theory-based transport models

    International Nuclear Information System (INIS)

    Mukhovatov, V.; Shimomura, Y.; Polevoi, A.

    2003-01-01

    The values of Q=(fusion power)/(auxiliary heating power) predicted for ITER by three different methods, i.e., a transport model based on empirical confinement scaling, a dimensionless scaling technique, and theory-based transport models, are compared. The energy confinement time given by the ITERH-98(y,2) scaling for an inductive scenario with a plasma current of 15 MA and a plasma density 15% below the Greenwald value is 3.6 s, with one technical standard deviation of ±14%. These data translate into a Q interval of [7-13] at an auxiliary heating power P_aux = 40 MW and [7-28] at the minimum heating power that sustains a good-confinement ELMy H-mode. Predictions of dimensionless scalings and theory-based transport models such as Weiland, MMM and IFS/PPPL overlap with the empirical scaling predictions within the margins of uncertainty. (author)

  1. Predictive integrated modelling for ITER scenarios

    International Nuclear Information System (INIS)

    Artaud, J.F.; Imbeaux, F.; Aniel, T.; Basiuk, V.; Eriksson, L.G.; Giruzzi, G.; Hoang, G.T.; Huysmans, G.; Joffrin, E.; Peysson, Y.; Schneider, M.; Thomas, P.

    2005-01-01

    The uncertainty on the prediction of ITER scenarios is evaluated. Two transport models that have been extensively validated against the multi-machine database are used for the computation of the transport coefficients. The first model is GLF23; the second, called Kiauto, is a model in which the profile of the diffusion coefficient is a gyro-Bohm-like analytical function, renormalized in order to obtain profiles consistent with a given global energy confinement scaling. The CRONOS package of codes is used; it gives access to the dynamics of the discharge and allows the study of the interplay between heat transport, current diffusion and sources. The main motivation of this work is to study the influence of parameters such as plasma current and the transport of heat, density, impurities and toroidal momentum. We can draw the following conclusions: 1) the target Q = 10 can be obtained in the ITER hybrid scenario at I_p = 13 MA, using either the DS03 two-term scaling or the GLF23 model based on the same pedestal; 2) at I_p = 11.3 MA, Q = 10 can be reached only by assuming a very peaked pressure profile and a low pedestal; 3) at fixed Greenwald fraction, Q increases with density peaking; 4) achieving a stationary q-profile with q > 1 requires a large non-inductive current fraction (80%) that could be provided by 20 to 40 MW of LHCD; and 5) owing to the high temperature, q-profile penetration is delayed and q = 1 is reached at about 600 s in the ITER hybrid scenario at I_p = 13 MA, in the absence of active q-profile control. (A.C.)

  2. Anopheles atroparvus density modeling using MODIS NDVI in a former malarious area in Portugal.

    Science.gov (United States)

    Lourenço, Pedro M; Sousa, Carla A; Seixas, Júlia; Lopes, Pedro; Novo, Maria T; Almeida, A Paulo G

    2011-12-01

    Malaria is dependent on environmental factors and considered as potentially re-emerging in temperate regions. Remote sensing data have been used successfully for monitoring the environmental conditions that influence the patterns of such arthropod vector-borne diseases. Anopheles atroparvus density data were collected from 2002 to 2005, on a bimonthly basis, at three sites in a former malarial area in Southern Portugal. The development of the Remote Vector Model (RVM) was based upon two main variables: temperature and the Normalized Difference Vegetation Index (NDVI) from the Moderate Resolution Imaging Spectroradiometer (MODIS) Terra satellite. Temperature influences the mosquito life cycle and affects its intra-annual prevalence, and MODIS NDVI was used as a proxy for suitable habitat conditions. Mosquito data were used for calibration and validation of the model. For areas with high mosquito density, model validation demonstrated a Pearson correlation of 0.68 between observed densities and predictions based on temperature and MODIS NDVI. RVM is a satellite data-based assimilation algorithm that uses temperature fields to predict the intra- and inter-annual densities of this mosquito species using MODIS NDVI. RVM is a relevant tool for vector density estimation, contributing to the risk assessment of transmission of mosquito-borne diseases, and can be part of early warning systems and contingency plans, providing support to the decision-making processes of the relevant authorities. © 2011 The Society for Vector Ecology.
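
    Validation of this kind reduces to computing the Pearson correlation between observed and model-predicted densities. A minimal sketch (the bimonthly density values below are invented for illustration, not the RVM data):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Invented bimonthly observed vs. model-predicted mosquito densities
observed = [12.0, 45.0, 80.0, 380.0, 290.0, 60.0]
predicted = [20.0, 55.0, 95.0, 330.0, 310.0, 75.0]
r = pearson_r(observed, predicted)
```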

  3. Collective estimation of multiple bivariate density functions with application to angular-sampling-based protein loop modeling

    KAUST Repository

    Maadooliat, Mehdi

    2015-10-21

    This paper develops a method for simultaneous estimation of density functions for a collection of populations of protein backbone angle pairs using a data-driven, shared basis that is constructed by bivariate spline functions defined on a triangulation of the bivariate domain. The circular nature of angular data is taken into account by imposing appropriate smoothness constraints across boundaries of the triangles. Maximum penalized likelihood is used to fit the model and an alternating blockwise Newton-type algorithm is developed for computation. A simulation study shows that the collective estimation approach is statistically more efficient than estimating the densities individually. The proposed method was used to estimate neighbor-dependent distributions of protein backbone dihedral angles (i.e., Ramachandran distributions). The estimated distributions were applied to protein loop modeling, one of the most challenging open problems in protein structure prediction, by feeding them into an angular-sampling-based loop structure prediction framework. Our estimated distributions compared favorably to the Ramachandran distributions estimated by fitting a hierarchical Dirichlet process model; and in particular, our distributions showed significant improvements on the hard cases where existing methods do not work well.
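
    The spline-based estimator of the paper is nontrivial, but the underlying task, estimating a smooth density for angle pairs that respects periodicity, can be illustrated with a much simpler product von-Mises kernel estimator. This is a stand-in for the authors' method, not a reimplementation; the concentration parameter and sample angles are arbitrary:

```python
import math

def bessel_i0(x, terms=40):
    """Modified Bessel function I0 via its power series (for normalization)."""
    return sum((x / 2.0) ** (2 * k) / math.factorial(k) ** 2 for k in range(terms))

def vonmises_kde2d(data, kappa):
    """Product von-Mises kernel density estimate on the torus.
    data: list of (phi, psi) angle pairs in radians."""
    norm = 2.0 * math.pi * bessel_i0(kappa)
    def density(phi, psi):
        return sum(
            math.exp(kappa * math.cos(phi - p)) / norm
            * math.exp(kappa * math.cos(psi - s)) / norm
            for p, s in data
        ) / len(data)
    return density

# Hypothetical (phi, psi) backbone dihedral pairs, in radians
angles = [(-1.0, 2.4), (-1.1, 2.2), (1.0, -0.5)]
f = vonmises_kde2d(angles, kappa=10.0)
```

Because each kernel factor is periodic and normalized over one period, the estimate integrates to one over the torus, which is what distinguishes it from a naive Gaussian KDE applied to angles.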

  4. Collective estimation of multiple bivariate density functions with application to angular-sampling-based protein loop modeling

    KAUST Repository

    Maadooliat, Mehdi; Zhou, Lan; Najibi, Seyed Morteza; Gao, Xin; Huang, Jianhua Z.

    2015-01-01

    This paper develops a method for simultaneous estimation of density functions for a collection of populations of protein backbone angle pairs using a data-driven, shared basis that is constructed by bivariate spline functions defined on a triangulation of the bivariate domain. The circular nature of angular data is taken into account by imposing appropriate smoothness constraints across boundaries of the triangles. Maximum penalized likelihood is used to fit the model and an alternating blockwise Newton-type algorithm is developed for computation. A simulation study shows that the collective estimation approach is statistically more efficient than estimating the densities individually. The proposed method was used to estimate neighbor-dependent distributions of protein backbone dihedral angles (i.e., Ramachandran distributions). The estimated distributions were applied to protein loop modeling, one of the most challenging open problems in protein structure prediction, by feeding them into an angular-sampling-based loop structure prediction framework. Our estimated distributions compared favorably to the Ramachandran distributions estimated by fitting a hierarchical Dirichlet process model; and in particular, our distributions showed significant improvements on the hard cases where existing methods do not work well.

  5. External intermittency prediction using AMR solutions of RANS turbulence and transported PDF models

    Science.gov (United States)

    Olivieri, D. A.; Fairweather, M.; Falle, S. A. E. G.

    2011-12-01

    External intermittency in turbulent round jets is predicted using a Reynolds-averaged Navier-Stokes modelling approach coupled to solutions of the transported probability density function (pdf) equation for scalar variables. Solutions to the descriptive equations are obtained using a finite-volume method, combined with an adaptive mesh refinement algorithm, applied in both physical and compositional space. This method contrasts with conventional approaches to solving the transported pdf equation which generally employ Monte Carlo techniques. Intermittency-modified eddy viscosity and second-moment turbulence closures are used to accommodate the effects of intermittency on the flow field, with the influence of intermittency also included, through modifications to the mixing model, in the transported pdf equation. Predictions of the overall model are compared with experimental data on the velocity and scalar fields in a round jet, as well as against measurements of intermittency profiles and scalar pdfs in a number of flows, with good agreement obtained. For the cases considered, predictions based on the second-moment turbulence closure are clearly superior, although both turbulence models give realistic predictions of the bimodal scalar pdfs observed experimentally.

  6. Bulk Density Prediction for Histosols and Soil Horizons with High Organic Matter Content

    Directory of Open Access Journals (Sweden)

    Sidinei Julio Beutler

    Full Text Available ABSTRACT Bulk density (Bd) can easily be predicted from other data using pedotransfer functions (PTF). The present study developed two PTFs (PTF1 and PTF2) for Bd prediction in Brazilian organic soils and horizons and compared their performance with nine previously published equations. Samples of 280 organic soil horizons used to develop the PTFs, containing at least 80 g kg⁻¹ total carbon content (TOC), were obtained from different regions of Brazil. The multiple linear stepwise regression technique was applied, and all the equations were validated using an independent data set. Data were transformed using Box-Cox to meet the assumptions of the regression models. For validation of PTF1 and PTF2, the coefficient of determination (R²) was 0.47 and 0.37, the mean error -0.04 and 0.10, and the root mean square error 0.22 and 0.26, respectively. The best performance was obtained for the PTF1, PTF2, Hollis, and Honeysett equations. The PTF1 equation is recommended when clay content data are available but, considering that such data are scarce for organic soils, the PTF2, Hollis, and Honeysett equations are the most suitable because they use TOC as a predictor variable. Considering the particular characteristics of organic soils and the environmental context in which they are formed, the equations developed showed good accuracy in predicting Bd compared with already existing equations.
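
    The PTF workflow, Box-Cox transformation, linear regression, and validation against an independent set via R², mean error and RMSE, can be sketched as follows. This uses a single predictor, a fixed Box-Cox exponent, and made-up TOC/Bd pairs; the published PTFs fit their own transform and use more predictors:

```python
import math

LAM = 0.5  # assumed Box-Cox exponent, for illustration only

def boxcox(y):
    return (y ** LAM - 1.0) / LAM

def inv_boxcox(z):
    return (LAM * z + 1.0) ** (1.0 / LAM)

def fit_ols(xs, ys):
    """Simple least-squares line y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Invented (TOC g/kg, bulk density Mg/m3) pairs for organic horizons
train = [(100, 0.95), (150, 0.80), (200, 0.62), (300, 0.45), (400, 0.30), (500, 0.22)]
test = [(120, 0.90), (250, 0.55), (450, 0.25)]

a, b = fit_ols([t for t, _ in train], [boxcox(d) for _, d in train])
pred = [inv_boxcox(a + b * t) for t, _ in test]
obs = [d for _, d in test]

me = sum(p - o for p, o in zip(pred, obs)) / len(obs)
rmse = math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))
mo = sum(obs) / len(obs)
r2 = 1.0 - sum((p - o) ** 2 for p, o in zip(pred, obs)) / sum((o - mo) ** 2 for o in obs)
```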

  7. Thermophysical properties of liquid UO2, ZrO2 and corium by molecular dynamics and predictive models

    International Nuclear Information System (INIS)

    Kim, Woong Kee; Shim, Ji Hoon; Kaviany Massoud

    2016-01-01

    The analysis of such severe accidents (the fate of the melt) requires accurate corium thermophysical property data up to 5000 K. In addition, the initial corium melt superheat, determined from such properties, is key in predicting fuel-coolant interactions (FCIs) and the convection and retention of corium in accident scenarios, e.g., core melt-down, corium discharge from the reactor pressure vessel and spreading in an external core catcher. Due to the high temperatures, data on molten corium and its constituents are limited, so there is much scatter in the data and mostly extrapolations (even from the solid state) have been used. Here we predict the thermophysical properties of molten UO2 and ZrO2 using classical molecular dynamics (MD) simulations (the properties of corium are predicted using mixture theories and the UO2 and ZrO2 properties). The thermophysical properties (density, compressibility, heat capacity, viscosity and surface tension) of liquid UO2 and ZrO2 are predicted using classical molecular dynamics simulations, up to 5000 K. For the atomic interactions, the CRG and Teter potential models are found most appropriate. The liquid behavior is verified with the random motion of the constituent atoms and the pair-distribution functions, starting with the solid phase and raising the temperature to realize the liquid phase. The viscosity and thermal conductivity are calculated with the Green-Kubo autocorrelation decay formulae and compared with the predictive models of Andrade and Bridgman. For liquid UO2, the CRG model gives satisfactory MD predictions. For ZrO2, the density is reliably predicted with the CRG potential model, while the compressibility and viscosity are more accurately predicted by the Teter model
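
    The Green-Kubo route used here obtains the shear viscosity from the time integral of the shear-stress autocorrelation function, η = V/(k_B T) ∫⟨P_xy(0)P_xy(t)⟩dt. A sketch with a synthetic exponentially decaying autocorrelation standing in for MD output; all magnitudes below are invented, chosen only so the result lands in the mPa·s range typical of molten oxides:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def green_kubo_viscosity(acf, dt, volume, temperature):
    """Shear viscosity eta = V/(kB*T) * integral of the shear-stress ACF,
    integrated here with the trapezoidal rule."""
    integral = dt * (0.5 * acf[0] + sum(acf[1:-1]) + 0.5 * acf[-1])
    return volume / (K_B * temperature) * integral

# Synthetic ACF A*exp(-t/tau); its exact integral is A*tau, so the
# numerical result is checkable against the analytic value.
A, TAU, DT = 4.3e16, 0.5e-12, 1.0e-14  # Pa^2, s, s (illustrative)
acf = [A * math.exp(-i * DT / TAU) for i in range(2000)]
eta = green_kubo_viscosity(acf, DT, volume=1.0e-26, temperature=3100.0)
```

In a real run the list `acf` is replaced by the sampled stress correlation, and the truncation time must be long enough for the correlation to decay.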

  8. Propulsion Physics Using the Chameleon Density Model

    Science.gov (United States)

    Robertson, Glen A.

    2011-01-01

    To grow as a spacefaring race, future spaceflight systems will require a new theory of propulsion; specifically, one that does not require mass ejection, without limiting the high thrust necessary to accelerate within or beyond our solar system and return within a normal work period or lifetime. The Chameleon Density Model (CDM) is one such model that could provide new paths in propulsion toward this end. The CDM is based on Chameleon Cosmology, a dark matter theory introduced by Khoury and Weltman in 2004, so named because the Chameleon field is hidden within known physics; it represents a scalar field within and about an object, even in vacuum. The CDM relates to density changes in the Chameleon field, where the density changes are related to matter accelerations within and about an object. These density changes in turn change how an object couples to its environment, whereby thrust is achieved by causing a differential in the environmental coupling about an object. As a demonstration that the CDM fits within known propulsion physics, this paper uses the model to estimate the thrust from a solid rocket motor. Under the CDM, a solid rocket constitutes a two-body system, i.e., the changing density of the rocket and the changing density in the nozzle arising from the accelerated mass, and the interactions between these systems cause a differential coupling to the local gravity environment of the Earth. It is shown that the resulting differential in coupling produces a calculated value for the thrust nearly equivalent to the conventional thrust model used in Sutton and Ross, Rocket Propulsion Elements. Embedded in the equations are the Universe energy scale factor, the reduced Planck mass and the Planck length, which relate the large Universe scale to the subatomic scale.

  9. Comparison of the multicomponent mass transfer models for the prediction of the concentration overpotential for solid oxide fuel cell anodes

    Energy Technology Data Exchange (ETDEWEB)

    Vural, Yasemin; Ma, Lin; Ingham, Derek B.; Pourkashanian, Mohamed [Centre for Computational Fluid Dynamics, University of Leeds, Leeds (United Kingdom)

    2010-08-01

    In this study, multicomponent mass diffusion models, namely the Stefan-Maxwell model (SMM), the Dusty Gas model (DGM) and the Binary Friction model (BFM), have been compared in terms of their capability to predict the concentration polarization of an anode-supported solid oxide fuel cell (SOFC) anode. The results show that, in addition to the pore diameter, current density and concentration of reactants, which are of high importance in concentration polarization predictions, the tortuosity (or porosity/tortuosity) term has a substantial effect on the model predictions. Contrary to previous discussions in the literature, for fitted values of the tortuosity, the SMM and DGM predictions are similar, even for an average pore radius as small as 2.6 × 10⁻⁷ and a current density as high as 1.5 A cm⁻². It is also shown that the BFM predictions are similar to those of the DGM for the case investigated in this study. Moreover, the effect of the pressure gradient term in the DGM and the BFM has been investigated by including and excluding this term from the model equations. It is shown that, for the case investigated and the model assumptions used in this study, the terms including the pressure coefficient have an insignificant effect on the predictions of both the DGM and the BFM and can therefore be neglected. (author)
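
    The reason pore size matters here is that the Dusty Gas model adds a Knudsen resistance in series with the binary diffusion resistance; at pore radii of order 10⁻⁷ m the two are comparable. A sketch of that combination, with assumed operating conditions and an assumed binary diffusivity (these are illustrative values, not the paper's inputs):

```python
import math

R_GAS = 8.314  # J/(mol K)

def knudsen_diffusivity(pore_radius, temperature, molar_mass):
    """Knudsen diffusivity D_K = (2r/3) * sqrt(8RT / (pi*M)), in m2/s."""
    return (2.0 * pore_radius / 3.0) * math.sqrt(
        8.0 * R_GAS * temperature / (math.pi * molar_mass))

def dgm_effective_diffusivity(d_binary, d_knudsen, porosity, tortuosity):
    """Series (Bosanquet-style) combination of binary and Knudsen resistances,
    scaled by the porosity/tortuosity ratio, as used in Dusty-Gas-type models."""
    return (porosity / tortuosity) / (1.0 / d_binary + 1.0 / d_knudsen)

# Assumed anode conditions (illustrative only)
T = 1073.0          # K
R_PORE = 2.6e-7     # m, the pore radius quoted in the abstract
M_H2 = 2.016e-3     # kg/mol
D_H2_H2O = 8.0e-4   # m2/s, assumed H2-H2O binary diffusivity at this T
d_k = knudsen_diffusivity(R_PORE, T, M_H2)
d_eff = dgm_effective_diffusivity(D_H2_H2O, d_k, porosity=0.35, tortuosity=3.5)
```

With these numbers the Knudsen and binary diffusivities are of the same order, which is why the pore radius and tortuosity dominate the predicted concentration polarization.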

  10. Ab Initio Predictions of Structures and Densities of Energetic Solids

    National Research Council Canada - National Science Library

    Rice, Betsy M; Sorescu, Dan C

    2004-01-01

    We have applied a powerful simulation methodology known as ab initio crystal prediction to assess the ability of a generalized model of CHNO intermolecular interactions to predict accurately crystal...

  11. Exploring the Role of the Spatial Characteristics of Visible and Near-Infrared Reflectance in Predicting Soil Organic Carbon Density

    Directory of Open Access Journals (Sweden)

    Long Guo

    2017-10-01

    Full Text Available Soil organic carbon stock plays a key role in the global carbon cycle and in precision agriculture. Visible and near-infrared reflectance spectroscopy (VNIRS) can directly reflect the internal physical construction and chemical substances of soil. Partial least squares regression (PLSR) is a classical and commonly used model for constructing soil spectral models and predicting soil properties. Nevertheless, PLSR alone may not account for the strong spatial heterogeneity and dependence that characterize soil. Considering the spatial characteristics of soil can offer valuable spatial information to guarantee the prediction accuracy of soil spectral models. Thus, this study aims to construct a rapid and accurate soil spectral model for predicting soil organic carbon density (SOCD) with the aid of the spatial autocorrelation of soil spectral reflectance. A total of 231 topsoil samples (0–30 cm) were collected from the Jianghan Plain, Wuhan, China. The spectral reflectance (350–2500 nm) was used as an auxiliary variable. A geographically weighted regression (GWR) model was used to evaluate the potential improvement of SOCD prediction when the spatial information of the spectral features is considered. Results showed that: (1) the principal components extracted from PLSR have a strong relationship with the regression coefficients at the average sampling distance (300 m) based on the Moran's I values; (2) the eigenvectors of the principal components exhibited strong relationships with the absorption spectral features, and the regression coefficients of GWR varied with geographical location; and (3) GWR displayed a higher accuracy than PLSR in predicting SOCD by VNIRS. This study is intended to help people realize the importance of the spatial characteristics of soil properties and their spectra. This work also introduces guidelines for the application of GWR in predicting soil properties by VNIRS.
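
    The Moran's I statistic used above to check spatial autocorrelation is I = (n/ΣᵢⱼWᵢⱼ)·Σᵢⱼ wᵢⱼ(xᵢ − x̄)(xⱼ − x̄) / Σᵢ(xᵢ − x̄)². A toy implementation (the transect values and adjacency weights below are invented):

```python
def morans_i(values, weights):
    """Moran's I spatial autocorrelation for a dense weight matrix
    (weights[i][j] is the spatial weight between observations i and j)."""
    n = len(values)
    m = sum(values) / n
    dev = [v - m for v in values]
    w_total = sum(sum(row) for row in weights)
    num = sum(weights[i][j] * dev[i] * dev[j] for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / w_total) * (num / den)

# Toy transect of 6 plots with adjacent-neighbour (rook) weights
vals = [1.0, 1.2, 1.1, 3.0, 3.2, 3.1]
W = [[1.0 if abs(i - j) == 1 else 0.0 for j in range(6)] for i in range(6)]
moran = morans_i(vals, W)
```

Spatially smooth values give positive I (clustering), while alternating values give negative I; values near zero indicate no spatial structure.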

  12. Representation and validation of liquid densities for pure compounds and mixtures

    DEFF Research Database (Denmark)

    Diky, Vladimir; O'Connell, John P.; Abildskov, Jens

    2015-01-01

    Reliable correlation and prediction of liquid densities are important for designing chemical processes at normal and elevated pressures. A corresponding-states model from molecular theory was extended to yield a robust method for quality testing of experimental data that also provides predicted values at unmeasured conditions. The model has been shown to successfully represent and validate the pressure and temperature dependence of liquid densities greater than 1.5 times the critical density for pure compounds, binary mixtures, and ternary mixtures from the triple to the critical temperature.
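
    The paper's own model is not reproduced here, but the flavour of corresponding-states density correlations can be shown with the classic Rackett equation for saturated-liquid molar volume, V = (RT_c/P_c)·Z_RA^(1+(1−T_r)^(2/7)). The roughly n-hexane-like constants below are assumed values for illustration:

```python
R_GAS = 8.314462  # J/(mol K)

def rackett_molar_volume(T, Tc, Pc, z_ra):
    """Saturated-liquid molar volume (m3/mol) from the Rackett
    corresponding-states correlation."""
    tr = T / Tc
    return (R_GAS * Tc / Pc) * z_ra ** (1.0 + (1.0 - tr) ** (2.0 / 7.0))

# Roughly n-hexane-like constants (assumed): Tc (K), Pc (Pa), Z_RA, M (kg/mol)
TC, PC, Z_RA, M = 507.6, 3.025e6, 0.2635, 86.18e-3
rho_300 = M / rackett_molar_volume(300.0, TC, PC, Z_RA)  # kg/m3 at 300 K
rho_400 = M / rackett_molar_volume(400.0, TC, PC, Z_RA)  # kg/m3 at 400 K
```

The single compound-specific constant Z_RA is what makes this a corresponding-states form: the temperature dependence is universal in reduced temperature T_r.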

  13. Composition-Based Prediction of Temperature-Dependent Thermophysical Food Properties: Reevaluating Component Groups and Prediction Models.

    Science.gov (United States)

    Phinney, David Martin; Frelka, John C; Heldman, Dennis Ray

    2017-01-01

    Prediction of temperature-dependent thermophysical properties (thermal conductivity, density, specific heat, and thermal diffusivity) is an important component of process design for food manufacturing. Current models for prediction of the thermophysical properties of foods are based on composition, specifically the fat, carbohydrate, protein, fiber, water, and ash contents, all of which change with temperature. The objectives of this investigation were to reevaluate and improve the prediction expressions for thermophysical properties. Previously published data were analyzed over the temperature range from 10 to 150 °C. These data were analyzed to create a series of relationships between the thermophysical properties and temperature for each food component, as well as to identify the dependence of the thermophysical properties on more specific structural properties of the fats, carbohydrates, and proteins. Results from this investigation revealed that the relationships between the thermophysical properties of the major constituents of foods and temperature can be statistically described by linear expressions, in contrast to the current polynomial models. Links between variability in thermophysical properties and structural properties were observed. Relationships for several thermophysical properties based on more specific constituents have been identified. Distinctions between simple sugars (fructose, glucose, and lactose) and complex carbohydrates (starch, pectin, and cellulose) have been proposed. The relationships between the thermophysical properties and proteins revealed a potential correlation with the molecular weight of the protein. Relating variability in constituent thermophysical properties to structural properties, such as molecular mass, could significantly improve composition-based prediction models and, consequently, the effectiveness of process design. © 2016 Institute of Food Technologists®.
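
    The central claim, that a constituent property varies linearly with temperature, can be checked by fitting a least-squares line and inspecting R². A sketch with invented thermal-conductivity readings (not the paper's data):

```python
def linear_fit(xs, ys):
    """Least-squares line y = a + b*x, returning (a, b, R^2)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1.0 - ss_res / ss_tot

# Invented thermal-conductivity readings for a water-rich constituent, W/(m K)
T = [10.0, 40.0, 70.0, 100.0, 130.0, 150.0]
k = [0.52, 0.57, 0.61, 0.66, 0.70, 0.73]
a, b, r2 = linear_fit(T, k)
```

An R² near one over the 10 to 150 °C range is what justifies replacing the polynomial forms with linear ones.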

  14. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO project ``Intelligent wind power prediction systems'' (FU4101). The main focus is on models transforming numerical weather predictions into predictions of power production. The document also outlines the possibilities with respect to the different numerical weather predictions actually available to the project.

  15. Application of Fracture Distribution Prediction Model in Xihu Depression of East China Sea

    Science.gov (United States)

    Yan, Weifeng; Duan, Feifei; Zhang, Le; Li, Ming

    2018-02-01

    Well logs respond differently to changes in formation characteristics, and the presence of fractures produces outliers in the responses. For this reason, the development of fractures in a formation can be characterized by fine analysis of the logging curves. Well logs such as resistivity, sonic transit time, density, neutron porosity and gamma ray, which are classified as conventional well logs, are sensitive to formation fractures. Because the traditional fracture prediction model, which uses a simple weighted average of different logging data to calculate a comprehensive fracture index, is susceptible to subjective factors and exhibits large deviations, a statistical method is introduced. Combining the responses of conventional logging data to the development of formation fractures, a prediction model based on membership functions is established; its essence is to analyse logging data with fuzzy mathematics theory. The fracture predictions for a well formation in the NX block of the Xihu depression obtained with the two models are compared with those of imaging logging, which shows that the accuracy of the fracture prediction model based on membership functions is better than that of the traditional model. Furthermore, the prediction results are highly consistent with the imaging logs and reflect the development of cracks much better. This can provide a reference for engineering practice.
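
    A membership-function model of this kind amounts to fuzzy scoring of per-log anomalies and combining the memberships into a fracture index. A hedged sketch: the sigmoid membership form, the background statistics, the anomaly directions and the readings are all invented, and the paper's actual membership functions may differ:

```python
import math

def membership(value, mean, std, anomaly_sign):
    """Sigmoid membership in [0, 1]: degree to which a log reading is
    'fracture-like'. anomaly_sign is +1 if fractures raise the reading,
    -1 if they lower it."""
    z = anomaly_sign * (value - mean) / std
    return 1.0 / (1.0 + math.exp(-z))

def fracture_index(readings, stats):
    """Average the per-log memberships; stats maps log name -> (mean, std, sign)."""
    ms = [membership(readings[name], *stats[name]) for name in stats]
    return sum(ms) / len(ms)

# Assumed background statistics and anomaly directions for each log
stats = {
    "resistivity": (20.0, 5.0, -1),   # fractures lower resistivity
    "sonic":       (80.0, 8.0, +1),   # fractures raise transit time
    "density":     (2.55, 0.05, -1),  # fractures lower bulk density
}
fractured = {"resistivity": 8.0, "sonic": 95.0, "density": 2.40}
intact = {"resistivity": 22.0, "sonic": 78.0, "density": 2.56}
idx_f = fracture_index(fractured, stats)
idx_i = fracture_index(intact, stats)
```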

  16. Model FT631 moisture/density combined gauge

    International Nuclear Information System (INIS)

    Ji Changsong; Dai Zhude; Zhang Jianguo; Zhang Enshang; Huang Jiling; Meng Qingbao

    1990-01-01

    Model FT631 Moisture/Density Combined Gauge has been developed, with which both water content and density, the two parameters of measured medium (soil), are obtained in one act of measurement at the same time. A China patent has been taken for this invention

  17. Phase-field-based lattice Boltzmann modeling of large-density-ratio two-phase flows

    Science.gov (United States)

    Liang, Hong; Xu, Jiangrong; Chen, Jiangxing; Wang, Huili; Chai, Zhenhua; Shi, Baochang

    2018-03-01

    In this paper, we present a simple and accurate lattice Boltzmann (LB) model for immiscible two-phase flows which is able to deal with large density contrasts. This model utilizes two LB equations, one of which is used to solve the conservative Allen-Cahn equation, while the other is adopted to solve the incompressible Navier-Stokes equations. A forcing distribution function is elaborately designed in the LB equation for the Navier-Stokes equations, which makes it much simpler than existing LB models. In addition, the proposed model can achieve superior numerical accuracy compared with previous Allen-Cahn-type LB models. Several benchmark two-phase problems, including the static droplet, layered Poiseuille flow, and spinodal decomposition, are simulated to validate the present LB model. It is found that the present model achieves relatively small spurious velocities by the standards of the LB community, and the obtained numerical results show good agreement with analytical solutions or available results. Lastly, we use the present model to investigate droplet impact on a thin liquid film at a large density ratio of 1000 with Reynolds numbers ranging from 20 to 500. The fascinating phenomenon of droplet splashing is successfully reproduced by the present model, and the numerically predicted spreading radius is found to obey the power law reported in the literature.
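
    Checking that a spreading radius obeys a power law r = c·tᵅ amounts to a least-squares fit of log r against log t. A sketch on synthetic data generated with an assumed exponent of 0.5 (the simulation data themselves are not reproduced here):

```python
import math

def fit_power_law(ts, rs):
    """Least-squares fit of r = c * t**alpha in log-log space."""
    lx = [math.log(t) for t in ts]
    ly = [math.log(r) for r in rs]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    alpha = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / sum(
        (x - mx) ** 2 for x in lx)
    c = math.exp(my - alpha * mx)
    return c, alpha

# Synthetic spreading-radius data generated with an r ~ t**0.5 law
ts = [0.5, 1.0, 2.0, 4.0, 8.0]
rs = [1.13 * t ** 0.5 for t in ts]
c, alpha = fit_power_law(ts, rs)
```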

  18. The Indigo Molecule Revisited Again: Assessment of the Minnesota Family of Density Functionals for the Prediction of Its Maximum Absorption Wavelengths in Various Solvents

    Directory of Open Access Journals (Sweden)

    Francisco Cervantes-Navarro

    2013-01-01

    Full Text Available The Minnesota family of density functionals (M05, M05-2X, M06, M06L, M06-2X, and M06-HF) was evaluated for the calculation of the UV-Vis spectra of the indigo molecule in solvents of different polarities using time-dependent density functional theory (TD-DFT) and the polarized continuum model (PCM). The maximum absorption wavelengths predicted by each functional were compared with the known experimental results.

  19. Molecular Model for HNBR with Tunable Cross-Link Density.

    Science.gov (United States)

    Molinari, N; Khawaja, M; Sutton, A P; Mostofi, A A

    2016-12-15

    We introduce a chemically inspired, all-atom model of hydrogenated nitrile butadiene rubber (HNBR) and assess its performance by computing the mass density and glass-transition temperature as a function of cross-link density in the structure. Our HNBR structures are created by a procedure that mimics the real process used to produce HNBR, that is, saturation of the carbon-carbon double bonds in NBR, either by hydrogenation or by cross-linking. The atomic interactions are described by the all-atom "Optimized Potentials for Liquid Simulations" (OPLS-AA). In this paper, first, we assess the use of OPLS-AA in our models, especially using NBR bulk properties, and second, we evaluate the validity of the proposed model for HNBR by investigating mass density and glass transition as a function of the tunable cross-link density. Experimental densities are reproduced within 3% for both elastomers, and qualitatively correct trends in the glass-transition temperature as a function of monomer composition and cross-link density are obtained.

  20. Wind power prediction models

    Science.gov (United States)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.
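
    A stochastic simulation of the kind described can be sketched by drawing wind speeds from a Weibull distribution, the standard choice for wind statistics, though the abstract does not name the distribution used, and averaging the cubic power law P = ½ρAC_p·v³ over the draws (all parameter values below are assumed):

```python
import math
import random

def sample_weibull_speed(rng, shape, scale):
    """Inverse-CDF draw from a Weibull wind-speed distribution (m/s)."""
    u = rng.random()
    return scale * (-math.log(1.0 - u)) ** (1.0 / shape)

def mean_wind_power(rng, shape, scale, rho=1.225, area=100.0, cp=0.4, n=100000):
    """Monte-Carlo mean of P = 0.5*rho*A*Cp*v**3 over Weibull wind draws (W)."""
    total = 0.0
    for _ in range(n):
        v = sample_weibull_speed(rng, shape, scale)
        total += 0.5 * rho * area * cp * v ** 3
    return total / n

rng = random.Random(42)
p_mean = mean_wind_power(rng, shape=2.0, scale=7.0)
```

Because E[v³] = scale³·Γ(1 + 3/shape) for a Weibull distribution, the Monte-Carlo mean can be checked against the closed form, which is exactly the kind of long-run statistic a power-prediction model needs.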

  1. Osteoporosis risk prediction for bone mineral density assessment of postmenopausal women using machine learning.

    Science.gov (United States)

    Yoo, Tae Keun; Kim, Sung Kean; Kim, Deok Won; Choi, Joon Yul; Lee, Wan Hyung; Oh, Ein; Park, Eun-Cheol

    2013-11-01

    A number of clinical decision tools for osteoporosis risk assessment have been developed to select postmenopausal women for the measurement of bone mineral density. We developed and validated machine learning models with the aim of more accurately identifying the risk of osteoporosis in postmenopausal women compared to the ability of conventional clinical decision tools. We collected medical records from Korean postmenopausal women based on the Korea National Health and Nutrition Examination Surveys. The training data set was used to construct models based on popular machine learning algorithms such as support vector machines (SVM), random forests, artificial neural networks (ANN), and logistic regression (LR) based on simple surveys. The machine learning models were compared to four conventional clinical decision tools: osteoporosis self-assessment tool (OST), osteoporosis risk assessment instrument (ORAI), simple calculated osteoporosis risk estimation (SCORE), and osteoporosis index of risk (OSIRIS). SVM had significantly better area under the curve (AUC) of the receiver operating characteristic than ANN, LR, OST, ORAI, SCORE, and OSIRIS for the training set. SVM predicted osteoporosis risk with an AUC of 0.827, accuracy of 76.7%, sensitivity of 77.8%, and specificity of 76.0% at total hip, femoral neck, or lumbar spine for the testing set. The significant factors selected by SVM were age, height, weight, body mass index, duration of menopause, duration of breast feeding, estrogen therapy, hyperlipidemia, hypertension, osteoarthritis, and diabetes mellitus. Considering various predictors associated with low bone density, the machine learning methods may be effective tools for identifying postmenopausal women at high risk for osteoporosis.
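
    The model comparison above rests on the ROC AUC, which equals the probability that a randomly chosen positive case receives a higher risk score than a randomly chosen negative one (the Mann-Whitney formulation). A sketch with invented scores and labels:

```python
def roc_auc(scores, labels):
    """AUC as the probability that a positive case outranks a negative one;
    ties between a positive and a negative score count 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0 for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# Invented risk scores and low-BMD outcome labels (1 = osteoporosis)
scores = [0.9, 0.8, 0.75, 0.6, 0.55, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0, 0, 1, 0]
auc = roc_auc(scores, labels)
```

An AUC of 0.5 corresponds to random ranking and 1.0 to perfect separation, which is why the paper reports AUCs rather than raw accuracies when comparing SVM against the simpler decision tools.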

  2. Inverse and Predictive Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Syracuse, Ellen Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-27

The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from simple one-dimensional models, constrained by a single dataset and suited to quick and efficient predictions, to complex multidimensional models, constrained by several types of data and yielding more accurate predictions. While team members typically build models of geophysical characteristics of Earth and source distributions at scales of 1 to 1000s of km, the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.

  3. Using broad landscape level features to predict redd densities of steelhead trout (Oncorhynchus mykiss) and Chinook Salmon (Oncorhynchus tshawytscha) in the Methow River watershed, Washington

    Science.gov (United States)

    Romine, Jason G.; Perry, Russell W.; Connolly, Patrick J.

    2013-01-01

    We used broad-scale landscape feature variables to model redd densities of spring Chinook salmon (Oncorhynchus tshawytscha) and steelhead trout (Oncorhynchus mykiss) in the Methow River watershed. Redd densities were estimated from redd counts conducted from 2005 to 2007 and 2009 for steelhead trout and 2005 to 2009 for spring Chinook salmon. These densities were modeled using generalized linear mixed models. Variables examined included primary and secondary geology type, habitat type, flow type, sinuosity, and slope of stream channel. In addition, we included spring effect and hatchery effect variables to account for high densities of redds near known springs and hatchery outflows. Variables were associated with National Hydrography Database reach designations for modeling redd densities within each reach. Reaches were assigned a dominant habitat type, geology, mean slope, and sinuosity. The best fit model for spring Chinook salmon included sinuosity, critical slope, habitat type, flow type, and hatchery effect. Flow type, slope, and habitat type variables accounted for most of the variation in the data. The best fit model for steelhead trout included year, habitat type, flow type, hatchery effect, and spring effect. The spring effect, flow type, and hatchery effect variables explained most of the variation in the data. Our models illustrate how broad-scale landscape features may be used to predict spawning habitat over large areas where fine-scale data may be lacking.
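The generalized linear mixed models used in this study are beyond a short sketch, but their fixed-effects core, a Poisson regression of redd counts on a reach covariate with a log link, can be fit by Newton/IRLS in a few lines. The data below are hypothetical, not the Methow River counts:

```python
import numpy as np

# Hypothetical reach data (not the Methow River counts): redd counts
# declining with channel slope.
slope = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
redds = np.array([8.0, 6.0, 5.0, 3.0, 2.0, 1.0])

# Design matrix: intercept plus centred slope (centring keeps Newton stable).
X = np.column_stack([np.ones_like(slope), slope - slope.mean()])
beta = np.array([np.log(redds.mean()), 0.0])

# Fit the Poisson GLM (log link) by Newton-Raphson / IRLS.
for _ in range(20):
    lam = np.exp(X @ beta)                 # fitted mean counts
    fisher = X.T @ (X * lam[:, None])      # Fisher information matrix
    beta += np.linalg.solve(fisher, X.T @ (redds - lam))

lam = np.exp(X @ beta)                     # converged fitted values
```

At the maximum-likelihood fit, the fitted counts sum exactly to the observed counts (a property of the log link with an intercept), and the negative slope coefficient reproduces the declining density pattern.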

  4. Archaeological predictive model set.

    Science.gov (United States)

    2015-03-01

    This report is the documentation for Task 7 of the Statewide Archaeological Predictive Model Set. The goal of this project is to : develop a set of statewide predictive models to assist the planning of transportation projects. PennDOT is developing t...

  5. Escherichia coli bacteria density in relation to turbidity, streamflow characteristics, and season in the Chattahoochee River near Atlanta, Georgia, October 2000 through September 2008—Description, statistical analysis, and predictive modeling

    Science.gov (United States)

    Lawrence, Stephen J.

    2012-01-01

    Water-based recreation—such as rafting, canoeing, and fishing—is popular among visitors to the Chattahoochee River National Recreation Area (CRNRA) in north Georgia. The CRNRA is a 48-mile reach of the Chattahoochee River upstream from Atlanta, Georgia, managed by the National Park Service (NPS). Historically, high densities of fecal-indicator bacteria have been documented in the Chattahoochee River and its tributaries at levels that commonly exceeded Georgia water-quality standards. In October 2000, the NPS partnered with the U.S. Geological Survey (USGS), State and local agencies, and non-governmental organizations to monitor Escherichia coli bacteria (E. coli) density and develop a system to alert river users when E. coli densities exceeded the U.S. Environmental Protection Agency (USEPA) single-sample beach criterion of 235 colonies (most probable number) per 100 milliliters (MPN/100 mL) of water. This program, called BacteriALERT, monitors E. coli density, turbidity, and water temperature at two sites on the Chattahoochee River upstream from Atlanta, Georgia. This report summarizes E. coli bacteria density and turbidity values in water samples collected between 2000 and 2008 as part of the BacteriALERT program; describes the relations between E. coli density and turbidity, streamflow characteristics, and season; and describes the regression analyses used to develop predictive models that estimate E. coli density in real time at both sampling sites.
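Predictive models of this kind are commonly log-log regressions of bacteria density on turbidity. A minimal sketch with hypothetical numbers (not the BacteriALERT data):

```python
import numpy as np

# Hypothetical paired observations (illustrative, not the BacteriALERT data).
turbidity = np.array([5.0, 10.0, 20.0, 50.0, 100.0])    # NTU
ecoli = np.array([40.0, 90.0, 180.0, 500.0, 1100.0])    # MPN/100 mL

# Fit log10(E. coli) as a linear function of log10(turbidity).
est_slope, est_intercept = np.polyfit(np.log10(turbidity), np.log10(ecoli), 1)

def predict_ecoli(turb_ntu):
    """Estimated E. coli density (MPN/100 mL) from turbidity (NTU)."""
    return 10.0 ** (est_intercept + est_slope * np.log10(turb_ntu))
```

A real-time alert system would compare `predict_ecoli` against the USEPA single-sample criterion of 235 MPN/100 mL mentioned in the abstract.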

Predictions of new ABO3 perovskite compounds by combining machine learning and density functional theory

    Science.gov (United States)

    Balachandran, Prasanna V.; Emery, Antoine A.; Gubernatis, James E.; Lookman, Turab; Wolverton, Chris; Zunger, Alex

    2018-04-01

We apply machine learning (ML) methods to a database of 390 experimentally reported ABO3 compounds to construct two statistical models that predict possible new perovskite materials and possible new cubic perovskites. The first ML model classified the 390 compounds into 254 perovskites and 136 that are not perovskites with a 90% average cross-validation (CV) accuracy; the second ML model further classified the perovskites into 22 known cubic perovskites and 232 known noncubic perovskites with a 94% average CV accuracy. We find that the most effective chemical descriptors affecting our classification are largely geometric constructs such as the A and B Shannon ionic radii, the tolerance and octahedral factors, the A-O and B-O bond lengths, and the A and B Villars' Mendeleev numbers. We then construct an additional list of 625 ABO3 compounds assembled from charge-conserving combinations of A and B atoms absent from our list of known compounds. Then, using the two ML models constructed on the known compounds, we predict that 235 of the 625 exist in a perovskite structure with a confidence greater than 50% and among them that 20 exist in the cubic structure (albeit, the latter with only ~50% confidence). We find that the new perovskites are most likely to occur when the A and B atoms are a lanthanide or actinide, when the A atom is an alkali, alkaline earth, or late transition metal atom, or when the B atom is a p-block atom. We also compare the ML findings with the density functional theory calculations and convex hull analyses in the Open Quantum Materials Database (OQMD), which predicts the T = 0 K ground-state stability of all the ABO3 compounds. We find that OQMD predicts 186 of 254 of the perovskites in the experimental database to be thermodynamically stable within 100 meV/atom of the convex hull and predicts 87 of the 235 ML-predicted perovskite compounds to be thermodynamically stable within 100 meV/atom of the convex hull, including 6 of these to
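Two of the geometric descriptors named above, the tolerance and octahedral factors, are simple functions of Shannon ionic radii. A sketch using the textbook Goldschmidt definitions (illustrative screening only, not the paper's ML models):

```python
import math

def tolerance_factor(r_a, r_b, r_o=1.40):
    """Goldschmidt tolerance factor t = (r_A + r_O) / (sqrt(2) * (r_B + r_O));
    t near 1 favors the cubic perovskite structure (radii in angstroms)."""
    return (r_a + r_o) / (math.sqrt(2) * (r_b + r_o))

def octahedral_factor(r_b, r_o=1.40):
    """Octahedral factor r_B / r_O; mid-range values favor BO6 octahedra."""
    return r_b / r_o

# SrTiO3, a textbook cubic perovskite (Shannon radii: Sr2+ XII = 1.44 A,
# Ti4+ VI = 0.605 A, O2- = 1.40 A).
t = tolerance_factor(1.44, 0.605)
```

For SrTiO3 the tolerance factor comes out very close to 1, consistent with its cubic structure; descriptor-based screens of candidate ABO3 lists follow the same arithmetic.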

  7. Nuclear symmetry energy in density dependent hadronic models

    International Nuclear Information System (INIS)

    Haddad, S.

    2008-12-01

The density dependence of the symmetry energy and the correlation between parameters of the symmetry energy and the neutron skin thickness in the nucleus 208Pb are investigated in relativistic hadronic models. The dependence of the symmetry energy on density is linear around saturation density. A correlation exists between the neutron skin thickness in the nucleus 208Pb and the value of the nuclear symmetry energy at saturation density, but not with the slope of the symmetry energy at saturation density. (author)
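The linear density dependence around saturation is conventionally parameterized by the symmetry energy J and its slope L. A sketch of that expansion, with illustrative parameter values rather than the paper's fits:

```python
def symmetry_energy(rho, J=32.0, L=60.0, rho0=0.16):
    """Linear expansion around saturation: S(rho) = J + (L/3)*(rho - rho0)/rho0.
    J (symmetry energy at saturation) and L (its slope) in MeV; densities in
    fm^-3. The numerical values here are illustrative, not the paper's."""
    return J + (L / 3.0) * (rho - rho0) / rho0
```

At saturation density the expression reduces to J; the abstract's point is that neutron skin thickness correlates with J but not with L in these models.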

  8. Multi-fidelity machine learning models for accurate bandgap predictions of solids

    International Nuclear Information System (INIS)

    Pilania, Ghanshyam; Gubernatis, James E.; Lookman, Turab

    2016-01-01

Here, we present a multi-fidelity co-kriging statistical learning framework that combines variable-fidelity quantum mechanical calculations of bandgaps to generate a machine-learned model that enables low-cost accurate predictions of the bandgaps at the highest fidelity level. Additionally, the adopted Gaussian process regression formulation allows us to predict the underlying uncertainties as a measure of our confidence in the predictions. Using a set of 600 elpasolite compounds as an example dataset, with semi-local and hybrid exchange-correlation functionals within density functional theory as the two levels of fidelity, we demonstrate the excellent learning performance of the method against actual high-fidelity quantum mechanical calculations of the bandgaps. The presented statistical learning method is not restricted to bandgaps or electronic structure methods and extends the utility of high-throughput property predictions in a significant way.
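The Gaussian process regression at the heart of co-kriging yields a predictive mean and variance in closed form. A single-fidelity toy sketch with an RBF kernel (illustrative only, not the elpasolite model):

```python
import numpy as np

def rbf(a, b, ell=1.0):
    """Squared-exponential kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

# Toy "high-fidelity" data: four exact evaluations of sin(x).
X = np.array([0.0, 1.0, 2.0, 3.0])
y = np.sin(X)

K = rbf(X, X) + 1e-8 * np.eye(len(X))       # jitter for numerical stability
Xs = np.array([1.5])                        # prediction point
mean = rbf(Xs, X) @ np.linalg.solve(K, y)
var = rbf(Xs, Xs) - rbf(Xs, X) @ np.linalg.solve(K, rbf(Xs, X).T)
```

The posterior variance is the "measure of confidence" the abstract refers to: it shrinks to near zero at the training points and grows away from them.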

Predictive Modeling of Black Spruce (Picea mariana (Mill.) B.S.P.) Wood Density Using Stand Structure Variables Derived from Airborne LiDAR Data in Boreal Forests of Ontario

    Directory of Open Access Journals (Sweden)

    Bharat Pokharel

    2016-12-01

Full Text Available Our objective was to model the average wood density in black spruce trees in representative stands across a boreal forest landscape based on relationships with predictor variables extracted from airborne light detection and ranging (LiDAR) point cloud data. Increment core samples were collected from dominant or co-dominant black spruce trees in a network of 400 m2 plots distributed among forest stands representing the full range of species composition and stand development across a 1,231,707 ha forest management unit in northeastern Ontario, Canada. Wood quality data were generated from optical microscopy, image analysis, X-ray densitometry and diffractometry as employed in SilviScan™. Each increment core was associated with a set of field measurements at the plot level as well as a suite of LiDAR-derived variables calculated on a 20 × 20 m raster from a wall-to-wall coverage at a resolution of ~1 point m−2. We used a multiple linear regression approach to identify important predictor variables and describe relationships between stand structure and wood density for average black spruce trees in the stands we observed. A hierarchical classification model was then fitted using random forests to make spatial predictions of mean wood density for average trees in black spruce stands. The model explained 39 percent of the variance in the response variable, with an estimated root mean square error of 38.8 (kg·m−3). Among the predictor variables, P20 (second decile LiDAR height in m) and quadratic mean diameter were most important. Other predictors describing canopy depth and cover were of secondary importance and differed according to the modeling approach. LiDAR-derived variables appear to capture differences in stand structure that reflect different constraints on growth rates, determining the proportion of thin-walled earlywood cells in black spruce stems, and ultimately influencing the pattern of variation in important wood quality attributes
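The multiple linear regression step can be sketched with ordinary least squares on hypothetical plot summaries (LiDAR P20 height and quadratic mean diameter as predictors of mean wood density; values are illustrative, not the Ontario data):

```python
import numpy as np

# Hypothetical plot summaries: P20 LiDAR height (m), quadratic mean
# diameter (cm), and mean wood density (kg m-3). Illustrative values only.
p20 = np.array([4.0, 6.0, 8.0, 10.0, 12.0])
qmd = np.array([10.0, 12.0, 15.0, 18.0, 20.0])
density = np.array([520.0, 500.0, 480.0, 465.0, 450.0])

A = np.column_stack([np.ones_like(p20), p20, qmd])   # design matrix
coef, *_ = np.linalg.lstsq(A, density, rcond=None)   # OLS fit
pred = A @ coef
rmse = float(np.sqrt(np.mean((density - pred) ** 2)))
```

The paper's reported RMSE of 38.8 kg·m−3 is the same statistic computed for its random forest predictions over real stands.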

  10. On The Importance of Connecting Laboratory Measurements of Ice Crystal Growth with Model Parameterizations: Predicting Ice Particle Properties

    Science.gov (United States)

    Harrington, J. Y.

    2017-12-01

Parameterizing the growth of ice particles in numerical models is at an interesting crossroads. Most parameterizations developed in the past, including some that I have developed, parse model ice into numerous categories based primarily on the growth mode of the particle. Models routinely possess smaller ice, snow crystals, aggregates, graupel, and hail. The snow and ice categories in some models are further split into subcategories to account for the various shapes of ice. There has been a relatively recent shift towards a new class of microphysical models that predict the properties of ice particles instead of using multiple categories and subcategories. Particle property models predict the physical characteristics of ice, such as aspect ratio, maximum dimension, effective density, rime density, effective area, and so forth. These models are attractive in the sense that particle characteristics evolve naturally in time and space without the need for numerous (and somewhat artificial) transitions among pre-defined classes. However, particle property models often require fundamental parameters that are typically derived from laboratory measurements. For instance, the evolution of particle shape during vapor depositional growth requires knowledge of the growth efficiencies for the various axes of the crystals, which in turn depend on surface parameters that can only be determined in the laboratory. The evolution of particle shapes and density during riming, aggregation, and melting requires data on the redistribution of mass across a crystal's axes as that crystal collects water drops, collects ice crystals, or melts. Predicting the evolution of particle properties based on laboratory-determined parameters has a substantial influence on the evolution of some cloud systems. Radiatively-driven cirrus clouds show a broader range of competition between heterogeneous nucleation and homogeneous freezing when ice crystal properties are predicted. Even strongly convective squall

  11. Adsorption of CH{sub 4} on nitrogen- and boron-containing carbon models of coal predicted by density-functional theory

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Xiao-Qiang [College of Chemistry, Key Lab of Green Chemistry and Technology in Ministry of Education, Sichuan University, Chengdu 610064 (China); Xue, Ying, E-mail: yxue@scu.edu.cn [College of Chemistry, Key Lab of Green Chemistry and Technology in Ministry of Education, Sichuan University, Chengdu 610064 (China); Tian, Zhi-Yue; Mo, Jing-Jing; Qiu, Nian-Xiang [College of Chemistry, Key Lab of Green Chemistry and Technology in Ministry of Education, Sichuan University, Chengdu 610064 (China); Chu, Wei [Department of Chemical Engineering, Sichuan University, Chengdu 610065 (China); Xie, He-Ping [Key Laboratory of Energy Engineering Safety and Mechanics on Disasters, The Ministry of Education, Sichuan University, Chengdu 610065 (China)

    2013-11-15

Graphene doped by nitrogen (N) and/or boron (B) is used to represent surface models of coal with structural heterogeneity. Through density functional theory (DFT) calculations, the interactions between coalbed methane (CBM) and coal surfaces have been investigated. Several adsorption sites and orientations of methane (CH{sub 4}) on the graphenes were systematically considered. Our calculations predicted adsorption energies of CH{sub 4} on the graphenes of up to −0.179 eV, with the strongest binding mode, in which three hydrogen atoms of CH{sub 4} point toward the graphene surface, observed for the N-doped graphene, compared to the perfect (−0.154 eV), B-doped (−0.150 eV), and NB-doped graphenes (−0.170 eV). Doping N in graphene increases the adsorption energy of CH{sub 4}, but slightly reduced binding is found when graphene is doped by B. Our results indicate that all of the graphenes act as weak electron acceptors with respect to CH{sub 4}. The interactions between CH{sub 4} and the graphenes are physical adsorption and depend slightly upon the adsorption site on graphene, the orientation of methane, and the electronegativity of the dopant atoms in graphene.
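The adsorption energies quoted here are total-energy differences of the form E_ads = E(surface+CH4) − E(surface) − E(CH4), with negative values indicating binding. A sketch of that bookkeeping with hypothetical total energies (not DFT outputs):

```python
def adsorption_energy(e_complex, e_surface, e_ch4):
    """E_ads = E(surface+CH4) - E(surface) - E(CH4), in eV.
    Negative values indicate binding."""
    return e_complex - e_surface - e_ch4

# Hypothetical total energies chosen so E_ads matches the reported
# N-doped graphene value of -0.179 eV.
e_ads = adsorption_energy(-100.179, -60.0, -40.0)
```

The same arithmetic, applied per adsorption site and methane orientation, produces the site-resolved comparison in the abstract.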

  12. Predictive Value of Early Tumor Shrinkage and Density Reduction of Lung Metastases in Patients With Metastatic Colorectal Cancer Treated With Regorafenib.

    Science.gov (United States)

    Vanwynsberghe, Hannes; Verbeke, Xander; Coolen, Johan; Van Cutsem, Eric

    2017-12-01

The benefit of regorafenib in colorectal cancer is not very pronounced. At present, there is a lack of predictive biological or radiological markers. We studied whether density reduction or small changes in the size of lung metastases could serve as a predictive marker. We retrospectively measured the density and size of lung metastases of all patients included in the CORRECT and CONSIGN trials at our center. Contrast-enhanced CT scans at baseline and at week 8 were compared. Data on progression-free survival and overall survival were collected from the CORRECT and CONSIGN trials. A significant difference in progression-free survival was seen in 3 groups: response or stable disease in size (5.36 vs. 3.96 months), response in density (6.03 vs. 2.72 months), and response in corrected density (6.14 vs. 3.08 months). No difference was seen for response in size versus stable disease or progressive disease in size. For overall survival, a difference was observed in the same 3 groups: response or stable disease in size (9.89 vs. 6.44 months), response in density (9.59 vs. 7.04 months), and response in corrected density (9.09 vs. 7.16 months). No difference was seen for response in size versus stable disease or progressive disease in size. Density reduction in lung metastases might be a good parameter to predict outcome with regorafenib. Early tumor progression might be a negative predictive factor. If further validated, density reduction and early tumor progression might be used to improve the cost-benefit profile of regorafenib. Copyright © 2017 Elsevier Inc. All rights reserved.
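Density-based CT response assessment of this kind is reminiscent of Choi-style criteria, where a modest size decrease or a density drop counts as response. A sketch of such a classification; the thresholds below are illustrative, not the trials' definitions:

```python
def pct_change(baseline, week8):
    """Percent change from baseline to the week-8 scan."""
    return 100.0 * (week8 - baseline) / baseline

def ct_response(size_change_pct, density_change_pct,
                size_cut=-10.0, density_cut=-15.0):
    """Choi-style CT response: size shrinkage of at least 10% OR density
    drop of at least 15%. Thresholds are illustrative assumptions."""
    return size_change_pct <= size_cut or density_change_pct <= density_cut
```

Applied per lesion at baseline and week 8, this splits patients into the response/no-response groups whose survival curves the study compares.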

  13. Modeling of branching density and branching distribution in low-density polyethylene polymerization

    NARCIS (Netherlands)

    Kim, D.M.; Iedema, P.D.

    2008-01-01

Low-density polyethylene (ldPE) is a general purpose polymer with various applications. For this reason, many publications can be found on ldPE polymerization modeling. However, scission reactions and branching distribution have only recently been considered in modeling studies due to difficulties

  14. High baryon density from relativistic heavy ion collisions

    Energy Technology Data Exchange (ETDEWEB)

    Pang, Y.; Kahana, S.H. [Brookhaven National Lab., Upton, NY (United States); Schlagel, T.J. [Brookhaven National Lab., Upton, NY (United States)]|[State Univ. of New York, Stony Brook, NY (United States)

    1993-10-01

    A quantitative model, based on hadronic physics, is developed and applied to heavy ion collisions at BNL-AGS energies. This model is in excellent agreement with observed particle spectra in heavy ion collisions using Si beams, where baryon densities of three and four times the normal nuclear matter density ({rho}{sub 0}) are reached. For Au on Au collisions, the authors predict the formation of matter at very high densities (up to 10 {rho}{sub 0}).

  15. Axial asymmetry of excited heavy nuclei as essential feature for the prediction of level densities

    Energy Technology Data Exchange (ETDEWEB)

    Grosse, Eckart [Institute of Nuclear and Particle Physics, Technische Universitaet Dresden (Germany); Junghans, Arnd R. [Institute of Radiation Physics, Helmholtz-Zentrum Dresden-Rossendorf (Germany); Massarczyk, Ralph [Los Alamos National Laboratory, New Mexico (United States)

    2016-07-01

In previous studies a considerable improvement of predictions for neutron resonance spacings by a modified back-shifted Fermi-gas model (BSFM) was found. The modifications closely follow the basic principles for a gas of weakly bound Fermions as given in textbooks of statistical physics: (1) phase transition at a temperature defined by theory, (2) pairing condensation independent of A, and (3) proportionality of entropy to temperature (and thus the level density parameter) fixed by the Fermi energy. For finite nuclei we add: (4) the back-shift energy is defined by the shell correction and (5) the collective enhancement is enlarged by allowing the axial symmetry to be broken. Nearly no parameter fitting is needed to arrive at a good reproduction of level density information obtained by various methods for a number of nuclei in a wide range of A and E. To that end the modified BSFM is complemented by a constant temperature approximation below the phase transition point. The axial symmetry breaking (5), which is evidently an essential feature, is also considered with respect to other observables for heavy nuclei.
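The unmodified BSFM baseline being modified here is the standard textbook formula, rho(E) = exp(2*sqrt(a*U)) / (12*sqrt(2)*sigma*a^(1/4)*U^(5/4)) with U = E − Delta. A sketch with illustrative parameters (not the paper's fits):

```python
import math

def bsfm_level_density(E, a, delta, sigma):
    """Standard back-shifted Fermi-gas level density (per MeV):
    rho(E) = exp(2*sqrt(a*U)) / (12*sqrt(2)*sigma*a**0.25*U**1.25),
    with U = E - delta. E and delta in MeV, a in 1/MeV, sigma the
    spin-cutoff parameter. Parameter values used below are illustrative."""
    U = E - delta
    return math.exp(2.0 * math.sqrt(a * U)) / (
        12.0 * math.sqrt(2.0) * sigma * a ** 0.25 * U ** 1.25)
```

The exponential dependence on sqrt(a*U) is what makes level densities grow so steeply with excitation energy, and why small changes to the back-shift or the level density parameter matter so much for resonance-spacing predictions.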

  16. Urbanization impacts on mammals across urban-forest edges and a predictive model of edge effects.

    Directory of Open Access Journals (Sweden)

    Nélida R Villaseñor

Full Text Available With accelerating rates of urbanization worldwide, a better understanding of ecological processes at the wildland-urban interface is critical to conserve biodiversity. We explored the effects of high- and low-density housing developments on forest-dwelling mammals. Based on habitat characteristics, we expected a gradual decline in species abundance across forest-urban edges and an increased decline rate in higher contrast edges. We surveyed arboreal mammals in sites of high and low housing density along 600 m transects that spanned urban areas and adjacent native forest. We also surveyed forest controls to test whether edge effects extended beyond our edge transects. We fitted models describing richness, total abundance and individual species abundance. Low-density housing developments provided suitable habitat for most arboreal mammals. In contrast, high-density housing developments had lower species richness, total abundance and individual species abundance, but supported the highest abundances of an urban adapter (Trichosurus vulpecula). We did not find the predicted gradual decline in species abundance. Of four species analysed, three exhibited no response to the proximity of urban boundaries, but spilled over into adjacent urban habitat to differing extents. One species (Petaurus australis) had an extended negative response to urban boundaries, suggesting that urban development has impacts beyond 300 m into adjacent forest. Our empirical work demonstrates that high-density housing developments have negative effects on both community and species level responses, except for one urban adapter. We developed a new predictive model of edge effects based on our results and the literature. To predict animal responses across edges, our framework integrates for the first time: (1) habitat quality/preference, (2) species response to the proximity of the adjacent habitat, and (3) spillover extent/sensitivity to adjacent habitat boundaries. This

  17. Optimal Design of Low-Density SNP Arrays for Genomic Prediction: Algorithm and Applications.

    Directory of Open Access Journals (Sweden)

    Xiao-Lin Wu

Full Text Available Low-density (LD) single nucleotide polymorphism (SNP) arrays provide a cost-effective solution for genomic prediction and selection, but algorithms and computational tools are needed for the optimal design of LD SNP chips. A multiple-objective, local optimization (MOLO) algorithm was developed for design of optimal LD SNP chips that can be imputed accurately to medium-density (MD) or high-density (HD) SNP genotypes for genomic prediction. The objective function facilitates maximization of non-gap map length and system information for the SNP chip, and the latter is computed either as locus-averaged (LASE) or haplotype-averaged Shannon entropy (HASE) and adjusted for uniformity of the SNP distribution. HASE performed better than LASE with ≤1,000 SNPs, but required considerably more computing time. Nevertheless, the differences diminished when >5,000 SNPs were selected. Optimization was accomplished conditionally on the presence of SNPs that were obligated to each chromosome. The frame location of SNPs on a chip can be either uniform (evenly spaced) or non-uniform. For the latter design, a tunable empirical Beta distribution was used to guide location distribution of frame SNPs such that both ends of each chromosome were enriched with SNPs. The SNP distribution on each chromosome was finalized through the objective function that was locally and empirically maximized. This MOLO algorithm was capable of selecting a set of approximately evenly-spaced and highly-informative SNPs, which in turn led to increased imputation accuracy compared with selection solely of evenly-spaced SNPs. Imputation accuracy increased with LD chip size, and imputation error rate was extremely low for chips with ≥3,000 SNPs. Assuming that genotyping or imputation error occurs at random, imputation error rate can be viewed as the upper limit for genomic prediction error. Our results show that about 25% of imputation error rate was propagated to genomic prediction in an Angus
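The locus-averaged Shannon entropy (LASE) term of the objective has a compact form for biallelic SNPs. A sketch with hypothetical allele frequencies (not the MOLO implementation):

```python
import math

def locus_entropy(p):
    """Shannon entropy (bits) of a biallelic SNP with allele frequency p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))

def lase(freqs):
    """Locus-averaged Shannon entropy over a list of SNP allele frequencies."""
    return sum(locus_entropy(p) for p in freqs) / len(freqs)
```

Entropy is maximal (1 bit) at a 0.5 allele frequency and falls toward zero for rare variants, which is why entropy-maximizing chip designs favor intermediate-frequency SNPs.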

  18. A New Approach to Modeling Densities and Equilibria of Ice and Gas Hydrate Phases

    Science.gov (United States)

    Zyvoloski, G.; Lucia, A.; Lewis, K. C.

    2011-12-01

The Gibbs-Helmholtz Constrained (GHC) equation is a new cubic equation of state that was recently derived by Lucia (2010) and Lucia et al. (2011) by constraining the energy parameter in the Soave form of the Redlich-Kwong equation to satisfy the Gibbs-Helmholtz equation. The key attributes of the GHC equation are: 1) It is a multi-scale equation because it uses the internal energy of departure, UD, as a natural bridge between the molecular and bulk phase length scales. 2) It does not require acentric factors, volume translation, regression of parameters to experimental data, binary (kij) interaction parameters, or other forms of empirical correlations. 3) It is a predictive equation of state because it uses a database of values of UD determined from NTP Monte Carlo simulations. 4) It can readily account for differences in molecular size and shape. 5) It has been successfully applied to non-electrolyte mixtures as well as weak and strong aqueous electrolyte mixtures over wide ranges of temperature, pressure and composition to predict liquid density and phase equilibrium with up to four phases. 6) It has been extensively validated with experimental data. 7) The AAD% error between predicted and experimental liquid density is 1% while the AAD% error in phase equilibrium predictions is 2.5%. 8) It has been used successfully within the subsurface flow simulation program FEHM. In this work we describe recent extensions of the multi-scale predictive GHC equation to modeling the phase densities and equilibrium behavior of hexagonal ice and gas hydrates. In particular, we show that radial distribution functions, which can be determined by NTP Monte Carlo simulations, can be used to establish correct standard state fugacities of Ih ice and gas hydrates. From this, it is straightforward to determine both the phase density of ice or gas hydrates as well as any equilibrium involving ice and/or hydrate phases. A number of numerical results for mixtures of N2, O2, CH4, CO2, water
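The GHC equation itself needs the UD database, but its parent form, the Soave-Redlich-Kwong (SRK) cubic, illustrates how a cubic equation of state turns a (T, P) state point into a phase density. A standard SRK sketch for methane gas (note this uses the acentric factor that the GHC equation deliberately avoids):

```python
import numpy as np

# Standard SRK cubic EOS for methane gas density (illustrative of how a
# cubic EOS yields density; not the GHC parameterization).
R = 0.08314                            # L·bar/(mol·K)
Tc, Pc, omega = 190.6, 45.99, 0.011    # methane critical constants
T, P = 300.0, 10.0                     # state point: 300 K, 10 bar

m = 0.480 + 1.574 * omega - 0.176 * omega ** 2
alpha = (1.0 + m * (1.0 - (T / Tc) ** 0.5)) ** 2
a = 0.42748 * R ** 2 * Tc ** 2 / Pc * alpha
b = 0.08664 * R * Tc / Pc
A, B = a * P / (R * T) ** 2, b * P / (R * T)

# Solve the SRK cubic in the compressibility factor Z; take the vapor root.
roots = np.roots([1.0, -1.0, A - B - B ** 2, -A * B])
Z = max(r.real for r in roots if abs(r.imag) < 1e-9)
rho = P / (Z * R * T)                  # molar density, mol/L
```

At these conditions methane is nearly ideal (Z slightly below 1); in the liquid or hydrate-forming region the smallest real root of the same cubic gives the dense-phase density.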

  19. New models for predicting thermophysical properties of ionic liquid mixtures.

    Science.gov (United States)

    Huang, Ying; Zhang, Xiangping; Zhao, Yongsheng; Zeng, Shaojuan; Dong, Haifeng; Zhang, Suojiang

    2015-10-28

Potential applications of ionic liquids (ILs) require knowledge of the physicochemical properties of IL mixtures. In this work, a series of semi-empirical models were developed to predict the density, surface tension, heat capacity and thermal conductivity of IL mixtures. Each semi-empirical model contains only one new characteristic parameter, which can be determined using one experimental data point. In addition, as another effective tool, artificial neural network (ANN) models were also established. The two kinds of models were verified by a total of 2304 experimental data points for binary mixtures of ILs and molecular compounds. The overall average absolute relative deviations (AARDs) of both the semi-empirical and ANN models are less than 2%. Compared to previously reported models, these new semi-empirical models require fewer adjustable parameters and can be applied in a wider range of applications.
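The AARD statistic used for validation is simply the mean absolute relative deviation expressed in percent; a sketch:

```python
def aard(predicted, observed):
    """Average absolute relative deviation between predictions and data,
    in percent: (100/N) * sum(|pred - obs| / |obs|)."""
    return 100.0 / len(observed) * sum(
        abs(p - o) / abs(o) for p, o in zip(predicted, observed))
```

The reported "less than 2%" means the models' property predictions deviate from the 2304 experimental points by under 2% on average in this relative sense.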

  20. Bayesian modeling of the mass and density of asteroids

    Science.gov (United States)

    Dotson, Jessie L.; Mathias, Donovan

    2017-10-01

Mass and density are two of the fundamental properties of any object. In the case of near-Earth asteroids, knowledge about the mass of an asteroid is essential for estimating the risk due to (potential) impact and planning possible mitigation options. The density of an asteroid can illuminate its structure. A low density can be indicative of a rubble-pile structure whereas a higher density can imply a monolith and/or higher metal content. The damage resulting from an impact of an asteroid with Earth depends on its interior structure in addition to its total mass, and as a result, density is a key parameter to understanding the risk of asteroid impact. Unfortunately, measuring the mass and density of asteroids is challenging and often results in measurements with large uncertainties. In the absence of mass/density measurements for a specific object, understanding the range and distribution of likely values can facilitate probabilistic assessments of structure and impact risk. Hierarchical Bayesian models have recently been developed to investigate the mass-radius relationship of exoplanets (Wolfgang, Rogers & Ford 2016) and to probabilistically forecast the mass of bodies large enough to establish hydrostatic equilibrium over a range of 9 orders of magnitude in mass (from planemos to main sequence stars; Chen & Kipping 2017). Here, we extend this approach to investigate the masses and densities of asteroids. Several candidate Bayesian models are presented, and their performance is assessed relative to a synthetic asteroid population. In addition, a preliminary Bayesian model for probabilistically forecasting the masses and densities of asteroids is presented. The forecasting model is conditioned on existing asteroid data and includes observational errors, hyper-parameter uncertainties and intrinsic scatter.
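The simplest building block of such a hierarchical model is the conjugate normal update, combining a population-level prior on density with one noisy measurement. A sketch with hypothetical numbers, not the paper's model:

```python
def posterior_normal(mu0, sd0, meas, meas_sd):
    """Posterior N(mu, sd^2) from prior N(mu0, sd0^2) and one measurement
    with Gaussian error sd meas_sd: a precision-weighted average."""
    w0, w1 = 1.0 / sd0 ** 2, 1.0 / meas_sd ** 2
    mu = (w0 * mu0 + w1 * meas) / (w0 + w1)
    return mu, (w0 + w1) ** -0.5

# Hypothetical: prior density 2.0 +/- 0.5 g/cm^3 (rubble-pile-like
# population), one radar-derived measurement of 3.0 +/- 0.5 g/cm^3.
mu, sd = posterior_normal(2.0, 0.5, 3.0, 0.5)
```

The posterior mean falls between the prior and the measurement, and its uncertainty shrinks below both; the full hierarchical models add hyper-parameter uncertainty and intrinsic population scatter on top of this update.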

  1. Predicting occupancy for pygmy rabbits in Wyoming: an independent evaluation of two species distribution models

    Science.gov (United States)

    Germaine, Stephen S.; Ignizio, Drew; Keinath, Doug; Copeland, Holly

    2014-01-01

Species distribution models are an important component of natural-resource conservation planning efforts. Independent, external evaluation of their accuracy is important before they are used in management contexts. We evaluated the classification accuracy of two species distribution models designed to predict the distribution of pygmy rabbit Brachylagus idahoensis habitat in southwestern Wyoming, USA. The Nature Conservancy model was deductive and based on published information and expert opinion, whereas the Wyoming Natural Diversity Database model was statistically derived using historical observation data. We randomly selected 187 evaluation survey points throughout southwestern Wyoming in areas predicted to be habitat and areas predicted to be nonhabitat for each model. The Nature Conservancy model correctly classified 39 of 77 (50.6%) unoccupied evaluation plots and 65 of 88 (73.9%) occupied plots for an overall classification success of 63.3%. The Wyoming Natural Diversity Database model correctly classified 53 of 95 (55.8%) unoccupied plots and 59 of 88 (67.0%) occupied plots for an overall classification success of 61.2%. Based on 95% asymptotic confidence intervals, classification success of the two models did not differ. The models jointly classified 10.8% of the area as habitat and 47.4% of the area as nonhabitat, but were discordant in classifying the remaining 41.9% of the area. To evaluate how anthropogenic development affected model predictive success, we surveyed 120 additional plots among three density levels of gas-field road networks. Classification success declined sharply for both models as road-density level increased beyond 5 km of roads per square kilometer. Both models were more effective at predicting habitat than nonhabitat in relatively undeveloped areas, and neither was effective at accounting for the effects of gas-energy-development road networks. Resource managers who wish to know the amount of pygmy rabbit habitat present in an
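The sensitivity/specificity arithmetic behind the reported percentages can be reproduced directly from the plot counts given in the abstract (here, The Nature Conservancy model's counts):

```python
def classification_summary(occ_correct, occ_total, unocc_correct, unocc_total):
    """Sensitivity (occupied plots classified as habitat), specificity
    (unoccupied plots classified as nonhabitat), and overall success."""
    sens = occ_correct / occ_total
    spec = unocc_correct / unocc_total
    overall = (occ_correct + unocc_correct) / (occ_total + unocc_total)
    return sens, spec, overall

# The Nature Conservancy model: 65/88 occupied and 39/77 unoccupied correct.
sens, spec, overall = classification_summary(65, 88, 39, 77)
```

This is the standard confusion-matrix summary; the study's 95% asymptotic confidence intervals on these proportions are what showed the two models' accuracies to be statistically indistinguishable.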

  2. Confidence scores for prediction models

    DEFF Research Database (Denmark)

    Gerds, Thomas Alexander; van de Wiel, MA

    2011-01-01

    In medical statistics, many alternative strategies are available for building a prediction model based on training data. Prediction models are routinely compared by means of their prediction performance in independent validation data. If only one data set is available for training and validation,...

  3. Simple Predictive Models for Saturated Hydraulic Conductivity of Technosands

    DEFF Research Database (Denmark)

    Arthur, Emmanuel; Razzaghi, Fatemeh; Møldrup, Per

    2012-01-01

    Accurate estimation of saturated hydraulic conductivity (Ks) of technosands (gravel-free, coarse sands with negligible organic matter content) is important for irrigation and drainage management of athletic fields and golf courses. In this study, we developed two simple models for predicting Ks......-Rammler particle size distribution (PSD) function. The Ks and PSD data of 14 golf course sands from literature as well as newly measured data for a size fraction of Lunar Regolith Simulant, packed at three different dry bulk densities, were used for model evaluation. The pore network tortuosity......-connectivity parameter (m) obtained for pure coarse sand after fitting to measured Ks data was 1.68 for both models and in good agreement with m values obtained from recent solute and gas diffusion studies. Both the modified K-C and R-C models are easy to use and require limited parameter input, and both models gave...

  4. Prediction of melanoma metastasis by the Shields index based on lymphatic vessel density

    Directory of Open Access Journals (Sweden)

    Metcalfe Chris

    2010-05-01

    Abstract Background Melanoma usually presents as an initial skin lesion without evidence of metastasis. A significant proportion of patients develop subsequent local, regional or distant metastasis, sometimes many years after the initial lesion was removed. The current most effective staging method to identify early regional metastasis is sentinel lymph node biopsy (SLNB), which is invasive, not without morbidity and, while improving staging, may not improve overall survival. Lymphatic density, Breslow's thickness and the presence or absence of lymphatic invasion combined have been proposed by Shields et al as a prognostic index of metastasis in a patient group. Methods Here we undertook a retrospective analysis of 102 malignant melanomas from patients with more than five years' follow-up to evaluate the Shields index and compare it with existing indicators. Results The Shields index accurately predicted outcome in 90% of patients with metastases and 84% without metastases. For these, the Shields index was more predictive than thickness or lymphatic density. Alternate lymphatic measurement (hot spot analysis) was also effective when combined into the Shields index in a cohort of 24 patients. Conclusions These results show the Shields index, a non-invasive analysis based on immunohistochemistry of lymphatics surrounding primary lesions that can accurately predict outcome, is a simple, useful prognostic tool in malignant melanoma.

  5. Modeling of Materials for Energy Storage: A Challenge for Density Functional Theory

    Science.gov (United States)

    Kaltak, Merzuk; Fernandez-Serra, Marivi; Hybertsen, Mark S.

    Hollandite α-MnO2 is a promising material for rechargeable batteries and is studied extensively in the community because of its interesting tunnel structure and the corresponding large capacity for lithium as well as sodium ions. However, the presence of partially reduced Mn ions due to doping with Ag or during lithiation makes hollandite a challenging system for density functional theory and the conventionally employed PBE+U method. A naive attempt to model the ternary system LixAgyMnO2 with density functionals, similar to those employed for the case y = 0, fails and predicts a strong monoclinic distortion of the experimentally observed tetragonal unit cell for Ag2Mn8O16. Structure and binding energies are compared with experimental data and show the importance of van der Waals interactions as well as the necessity for an accurate description of the cooperative Jahn-Teller effects for silver hollandite AgyMnO2. Based on these observations, a ternary phase diagram is calculated, allowing prediction of the physical and chemical properties of LixAgyMnO2, such as stable stoichiometries, open circuit voltages, the formation of Ag metal and the structural change during lithiation. Department of Energy (DOE) under award #DE-SC0012673.

  6. Density functional theory and multiscale materials modeling

    Indian Academy of Sciences (India)

    One of the vital ingredients in the theoretical tools useful in materials modeling at all the length scales of interest is the concept of density. In the microscopic length scale, it is the electron density that has played a major role in providing a deeper understanding of chemical binding in atoms, molecules and solids.

  7. A Risk Prediction Model for Sporadic CRC Based on Routine Lab Results.

    Science.gov (United States)

    Boursi, Ben; Mamtani, Ronac; Hwang, Wei-Ting; Haynes, Kevin; Yang, Yu-Xiao

    2016-07-01

    Current risk scores for colorectal cancer (CRC) are based on demographic and behavioral factors and have limited predictive values. To develop a novel risk prediction model for sporadic CRC using clinical and laboratory data in electronic medical records. We conducted a nested case-control study in a UK primary care database. Cases included those with a diagnostic code of CRC, aged 50-85. Each case was matched with four controls using incidence density sampling. CRC predictors were examined using univariate conditional logistic regression. Variables with p value CRC prediction models which included age, sex, height, obesity, ever smoking, alcohol dependence, and previous screening colonoscopy had an AUC of 0.58 (0.57-0.59) with poor goodness of fit. A laboratory-based model including hematocrit, MCV, lymphocytes, and neutrophil-lymphocyte ratio (NLR) had an AUC of 0.76 (0.76-0.77) and a McFadden's R2 of 0.21 with a NRI of 47.6 %. A combined model including sex, hemoglobin, MCV, white blood cells, platelets, NLR, and oral hypoglycemic use had an AUC of 0.80 (0.79-0.81) with a McFadden's R2 of 0.27 and a NRI of 60.7 %. Similar results were shown in an internal validation set. A laboratory-based risk model had good predictive power for sporadic CRC risk.
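
    The AUC values reported for these models (0.58, 0.76, 0.80) have a simple probabilistic reading: the chance that a randomly drawn case outscores a randomly drawn control. A minimal stdlib sketch of that Mann-Whitney formulation, on invented toy scores (not data from the study):

```python
def auc(case_scores, control_scores):
    """Mann-Whitney AUC: P(case score > control score); ties count 0.5."""
    wins = 0.0
    for c in case_scores:
        for k in control_scores:
            if c > k:
                wins += 1.0
            elif c == k:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))

# Hypothetical risk scores: cases tend to score higher than controls
cases = [0.9, 0.8, 0.7, 0.6]
controls = [0.5, 0.6, 0.4, 0.3]
toy_auc = auc(cases, controls)
```

    An AUC of 0.5 corresponds to a model no better than chance, which is why the demographic-only model at 0.58 is described as having limited predictive value.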

  8. Moments Method for Shell-Model Level Density

    International Nuclear Information System (INIS)

    Zelevinsky, V; Horoi, M; Sen'kov, R A

    2016-01-01

    The modern form of the Moments Method applied to the calculation of the nuclear shell-model level density is explained and examples of the method at work are given. The calculated level density coincides almost exactly with the result of full diagonalization when the latter is feasible. The method provides the pure level density for given spin and parity with spurious center-of-mass excitations subtracted. The presence and interplay of all correlations lead to results different from those obtained by mean-field combinatorics. (paper)

  9. Model representations of kerogen structures: An insight from density functional theory calculations and spectroscopic measurements.

    Science.gov (United States)

    Weck, Philippe F; Kim, Eunja; Wang, Yifeng; Kruichak, Jessica N; Mills, Melissa M; Matteo, Edward N; Pellenq, Roland J-M

    2017-08-01

    Molecular structures of kerogen control hydrocarbon production in unconventional reservoirs. Significant progress has been made in developing model representations of various kerogen structures. These models have been widely used for the prediction of gas adsorption and migration in shale matrix. However, using density functional perturbation theory (DFPT) calculations and vibrational spectroscopic measurements, we here show that a large gap may still remain between the existing model representations and actual kerogen structures, therefore calling for new model development. Using DFPT, we calculated Fourier transform infrared (FTIR) spectra for the six most widely used kerogen structure models. The computed spectra were then systematically compared to the FTIR absorption spectra collected for kerogen samples isolated from Mancos, Woodford and Marcellus formations representing a wide range of kerogen origin and maturation conditions. Limited agreement between the model predictions and the measurements highlights that the existing kerogen models may still miss some key features in structural representation. A combination of DFPT calculations with spectroscopic measurements may provide a useful diagnostic tool for assessing the adequacy of a proposed structural model as well as for future model development. This approach may eventually help develop comprehensive infrared (IR) fingerprints for tracing kerogen evolution.

  10. SPY: a new scission-point model based on microscopic inputs to predict fission fragment properties

    Energy Technology Data Exchange (ETDEWEB)

    Panebianco, Stefano; Lemaître, Jean-François; Sida, Jean-Luc [CEA Centre de Saclay, Gif-sur-Yvette (France); Dubray, Noël [CEA, DAM, DIF, Arpajon (France); Goriely, Stéphane [Institut d'Astronomie et d'Astrophysique, Université Libre de Bruxelles, Brussels (Belgium)

    2014-07-01

    Despite the difficulty in describing the whole fission dynamics, the main fragment characteristics can be determined in a static approach based on a so-called scission-point model. Within this framework, a new Scission-Point model for the calculations of fission fragment Yields (SPY) has been developed. This model, initially based on the approach developed by Wilkins in the late seventies, consists in performing a static energy balance at scission, where the two fragments are supposed to be completely separated so that their macroscopic properties (mass and charge) can be considered as fixed. Given the knowledge of the system state density, averaged quantities such as mass and charge yields, mean kinetic and excitation energy can then be extracted in the framework of a microcanonical statistical description. The main advantage of the SPY model is the introduction of one of the most up-to-date microscopic descriptions of the nucleus for the individual energy of each fragment and, in the future, for their state density. These quantities are obtained in the framework of HFB calculations using the Gogny nucleon-nucleon interaction, ensuring an overall coherence of the model. Starting from a description of the SPY model and its main features, a comparison between the SPY predictions and experimental data will be discussed for some specific cases, from light nuclei around mercury to major actinides. Moreover, extensive predictions over the whole chart of nuclides will be discussed, with particular attention to their implication in stellar nucleosynthesis. Finally, future developments, mainly concerning the introduction of microscopic state densities, will be briefly discussed. (author)

  11. The effect of turbulent mixing models on the predictions of subchannel codes

    International Nuclear Information System (INIS)

    Tapucu, A.; Teyssedou, A.; Tye, P.; Troche, N.

    1994-01-01

    In this paper, the predictions of the COBRA-IV and ASSERT-4 subchannel codes have been compared with experimental data on void fraction, mass flow rate, and pressure drop obtained for two interconnected subchannels. COBRA-IV is based on a one-dimensional separated flow model with the turbulent intersubchannel mixing formulated as an extension of the single-phase mixing model, i.e. fluctuating equal mass exchange. ASSERT-4 is based on a drift flux model with the turbulent mixing modelled by assuming an exchange of equal volumes with different densities thus allowing a net fluctuating transverse mass flux from one subchannel to the other. This feature is implemented in the constitutive relationship for the relative velocity required by the conservation equations. It is observed that the predictions of ASSERT-4 follow the experimental trends better than COBRA-IV; therefore the approach of equal volume exchange constitutes an improvement over that of the equal mass exchange. ((orig.))

  12. Model Insensitive and Calibration Independent Method for Determination of the Downstream Neutral Hydrogen Density Through Ly-alpha Glow Observations

    Science.gov (United States)

    Gangopadhyay, P.; Judge, D. L.

    1996-01-01

    Our knowledge of the various heliospheric phenomena (location of the solar wind termination shock, heliopause configuration and very local interstellar medium parameters) is limited by uncertainties in the available heliospheric plasma models and by calibration uncertainties in the observing instruments. There is, thus, a strong motivation to develop model insensitive and calibration independent methods to reduce the uncertainties in the relevant heliospheric parameters. We have developed such a method to constrain the downstream neutral hydrogen density inside the heliospheric tail. In our approach we have taken advantage of the relative insensitivity of the downstream neutral hydrogen density profile to the specific plasma model adopted. We have also used the fact that the presence of an asymmetric neutral hydrogen cavity surrounding the sun, characteristic of all neutral density models, results in a higher multiple scattering contribution to the observed glow in the downstream region than in the upstream region. This allows us to approximate the actual density profile with one which is spatially uniform for the purpose of calculating the downstream backscattered glow. Using different spatially constant density profiles, radiative transfer calculations are performed, and the radial dependence of the predicted glow is compared with the observed 1/R dependence of Pioneer 10 UV data. Such a comparison bounds the large distance heliospheric neutral hydrogen density in the downstream direction to a value between 0.05 and 0.1/cc.

  13. Platelet density per monocyte predicts adverse events in patients after percutaneous coronary intervention.

    Science.gov (United States)

    Rutten, Bert; Roest, Mark; McClellan, Elizabeth A; Sels, Jan W; Stubbs, Andrew; Jukema, J Wouter; Doevendans, Pieter A; Waltenberger, Johannes; van Zonneveld, Anton-Jan; Pasterkamp, Gerard; De Groot, Philip G; Hoefer, Imo E

    2016-01-01

    Monocyte recruitment to damaged endothelium is enhanced by platelet binding to monocytes and contributes to vascular repair. Therefore, we studied whether the number of platelets per monocyte affects the recurrence of adverse events in patients after percutaneous coronary intervention (PCI). Platelet-monocyte complexes with high and low median fluorescence intensities (MFI) of the platelet marker CD42b were isolated using cell sorting. Microscopic analysis revealed that a high platelet marker MFI on monocytes corresponded with a high platelet density per monocyte while a low platelet marker MFI corresponded with a low platelet density per monocyte (3.4 ± 0.7 vs 1.4 ± 0.1 platelets per monocyte, P=0.01). Using real-time video microscopy, we observed increased recruitment of high platelet density monocytes to endothelial cells as compared with low platelet density monocytes (P=0.01). Next, we classified PCI-scheduled patients (N=263) into groups with high, medium and low platelet densities per monocyte and assessed the recurrence of adverse events. After multivariate adjustment for potential confounders, we observed a 2.5-fold reduction in the recurrence of adverse events in patients with a high platelet density per monocyte as compared with a low platelet density per monocyte [hazard ratio=0.4 (95% confidence interval, 0.2-0.8), P=0.01]. We show that a high platelet density per monocyte increases monocyte recruitment to endothelial cells and predicts a reduction in the recurrence of adverse events in patients after PCI. These findings may imply that a high platelet density per monocyte protects against recurrence of adverse events.

  14. Mining for elastic constants of intermetallics from the charge density landscape

    Energy Technology Data Exchange (ETDEWEB)

    Kong, Chang Sun; Broderick, Scott R. [Department of Materials Science and Engineering, Iowa State University, Ames, IA 50011 (United States); Jones, Travis E. [Molecular Theory Group, Colorado School of Mines, Golden, CO 80401 (United States); Loyola, Claudia [Department of Materials Science and Engineering, Iowa State University, Ames, IA 50011 (United States); Eberhart, Mark E. [Molecular Theory Group, Colorado School of Mines, Golden, CO 80401 (United States); Rajan, Krishna, E-mail: krajan@iastate.edu [Department of Materials Science and Engineering, Iowa State University, Ames, IA 50011 (United States)

    2015-02-01

    There is a significant challenge in designing new materials for targeted properties based on their electronic structure. While in principle this goal can be met using knowledge of the electron charge density, the relationships between the density and properties are largely unknown. To help overcome this problem we develop a quantitative structure–property relationship (QSPR) between the charge density and the elastic constants for B2 intermetallics. Using a combination of informatics techniques for screening all the potentially relevant charge density descriptors, we find that C{sub 11} and C{sub 44} are determined solely from the magnitude of the charge density at its critical points, while C{sub 12} is determined by the shape of the charge density at its critical points. From this reduced charge density selection space, we develop models for predicting the elastic constants of an expanded number of intermetallic systems, which we then use to predict the mechanical stability of new systems. Having reduced the descriptors necessary for modeling elastic constants, statistical learning approaches may then be used to predict the reduced knowledge base required as a function of the constituent characteristics.

  15. Isobaric-Isothermal Molecular Dynamics Utilizing Density Functional Theory: An Assessment of the Structure and Density of Water at Near-Ambient Conditions

    International Nuclear Information System (INIS)

    Schmidt, J.; VandeVondele, J.; Kuo, I.W.; Sebastiani, D.; Siepmann, J.I.; Hutter, J.; Mundy, C.J.

    2009-01-01

    We present herein a comprehensive density functional theory study toward assessing the accuracy of two popular gradient-corrected exchange correlation functionals on the structure and density of liquid water at near ambient conditions in the isobaric-isothermal ensemble. Our results indicate that both the PBE and BLYP functionals under-predict the density and over-structure the liquid. Adding the dispersion correction due to Grimme (1, 2) improves the predicted densities for both BLYP and PBE in a significant manner. Moreover, the addition of the dispersion correction for BLYP yields an oxygen-oxygen radial distribution function in excellent agreement with experiment. Thus, we conclude that one can obtain a very satisfactory model for water using BLYP and a correction for dispersion.
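
    The Grimme-type correction mentioned above adds a damped pairwise -C6/r^6 term to the DFT energy; a minimal sketch of one pair contribution (the parameter values below are illustrative, not the published D2 parameter set):

```python
import math

def grimme_pair_dispersion(r, c6, r_vdw, s6=1.0, d=20.0):
    """Pairwise Grimme-type dispersion energy: -s6 * C6/r^6, damped so the
    correction switches off as r falls below the van der Waals distance
    r_vdw. s6 and d are functional-dependent constants (illustrative here)."""
    f_damp = 1.0 / (1.0 + math.exp(-d * (r / r_vdw - 1.0)))
    return -s6 * c6 / r ** 6 * f_damp
```

    At large separation the damping factor approaches one and the bare -C6/r^6 attraction is recovered; at short range the term vanishes, leaving the covalent region to the underlying functional.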

  16. Characterization of Mixtures. Part 2: QSPR Models for Prediction of Excess Molar Volume and Liquid Density Using Neural Networks.

    Science.gov (United States)

    Ajmani, Subhash; Rogers, Stephen C; Barley, Mark H; Burgess, Andrew N; Livingstone, David J

    2010-09-17

    In our earlier work, we have demonstrated that it is possible to characterize binary mixtures using single component descriptors by applying various mixing rules. We also showed that these methods were successful in building predictive QSPR models to study various mixture properties of interest. Herein, we developed a QSPR model of an excess thermodynamic property of binary mixtures, i.e. excess molar volume (V(E)). In the present study, we use a set of mixture descriptors which we earlier designed to specifically account for intermolecular interactions between the components of a mixture and applied successfully to the prediction of infinite-dilution activity coefficients using neural networks (part 1 of this series). We obtain a significant QSPR model for the prediction of excess molar volume (V(E)) using consensus neural networks and five mixture descriptors. We find that hydrogen bond and thermodynamic descriptors are the most important in determining excess molar volume (V(E)), which is in line with the theory of intermolecular forces governing excess mixture properties. The results also suggest that the mixture descriptors utilized herein may be sufficient to model a wide variety of properties of binary and possibly even more complex mixtures.

  17. Application of the Eötvos and Guggenheim empirical rules for predicting the density and surface tension of ionic liquids analogues

    Energy Technology Data Exchange (ETDEWEB)

    Mjalli, Farouq S., E-mail: farouqsm@yahoo.com [Petroleum and Chemical Engineering Department, Sultan Qaboos University, 123 Sultanate of Oman (Oman); Vakili-Nezhaad, Gholamreza; Shahbaz, Kaveh [School of Engineering, Taylor's University, 47500 Selangor (Malaysia); AlNashef, Inas M. [Chemical Engineering Department, King Saud University, Riyadh 11421 (Saudi Arabia)

    2014-01-10

    Highlights: • Critical temperatures of eight common DES were calculated using two methods. • Density and surface tension were calculated using the Rackett and Guggenheim equations. • The Rackett method should be used in the low temperature range only. • The Eötvos and Guggenheim methods gave the best density and surface tension predictions. - Abstract: The recent continuing interest in deep eutectic solvents (DES) as ionic liquids analogues and their successful applications in different areas of separation necessitates a reliable database of physical and thermodynamic properties. The scarcity of data on the physical properties of such solvents increases the need for their prediction using reliable methods. In this study, first the critical temperatures of eight DES systems have been calculated based on the Eötvos empirical equation using the experimental data of the density and surface tension at various temperatures, then the density and surface tension values of these systems were predicted from the calculated critical temperatures. For the density prediction, the Eötvos and Guggenheim equations were combined to introduce a simple power law equation using the estimated critical temperatures from the Eötvos and the Modified Lydersen–Joback–Reid group contribution methods. Finally, the estimated critical temperatures by these two methods were used in the Guggenheim empirical equation to calculate the surface tension of the DES systems. The prediction quality of the two physical properties under investigation was compared and proper recommendations were postulated.
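
    The Eötvos rule used above relates surface tension and molar volume linearly to temperature, γV^(2/3) = k(Tc − T), so the critical temperature can be read off as the zero of a linear fit; a minimal sketch on synthetic data (the constants are invented for illustration, not the DES values from the study):

```python
def critical_temp_eotvos(temps, surf_tens, molar_vols):
    """Estimate the critical temperature Tc from the Eotvos rule
    gamma * V^(2/3) = k * (Tc - T): fit y = gamma * V^(2/3) against T
    with least squares; Tc is the extrapolated zero of the line."""
    ys = [g * v ** (2.0 / 3.0) for g, v in zip(surf_tens, molar_vols)]
    n = len(temps)
    mt = sum(temps) / n
    my = sum(ys) / n
    slope = (sum((t - mt) * (y - my) for t, y in zip(temps, ys))
             / sum((t - mt) ** 2 for t in temps))
    intercept = my - slope * mt
    return -intercept / slope  # temperature at which gamma*V^(2/3) vanishes

# Synthetic data built with k = 2.1e-7 and Tc = 650 K (values invented)
temps = [298.0, 313.0, 328.0, 343.0]
vols = [1.0e-4] * 4  # molar volume, held constant for simplicity
gammas = [2.1e-7 * (650.0 - t) / (1.0e-4) ** (2.0 / 3.0) for t in temps]
tc_est = critical_temp_eotvos(temps, gammas, vols)
```

    Once Tc is in hand, the same rule can be inverted to predict surface tension at temperatures outside the measured range, which is the essence of the procedure described in the abstract.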

  19. Unified model of nuclear mass and level density formulas

    International Nuclear Information System (INIS)

    Nakamura, Hisashi

    2001-01-01

    The objective of the present work is to obtain a unified description of nuclear shell, pairing and deformation effects for both ground state masses and level densities, and to find a new set of parameter systematics for both the mass and the level density formulas on the basis of a model for new single-particle state densities. In this model, an analytical expression is adopted for the anisotropic harmonic oscillator spectra, but the shell-pairing correlations are introduced in a new way. (author)

  20. Accurate and dynamic predictive model for better prediction in medicine and healthcare.

    Science.gov (United States)

    Alanazi, H O; Abdullah, A H; Qureshi, K N; Ismail, A S

    2018-05-01

    Information and communication technologies (ICTs) have introduced new integrated operations and methods in all fields of life. The health sector has also adopted new technologies to improve its systems and provide better services to customers. Predictive models in health care are likewise shaped by new technologies for predicting different disease outcomes. However, existing predictive models still suffer from limitations in predictive performance. To improve predictive model performance, this paper proposed a predictive model that classifies disease predictions into different categories. To evaluate the model, this paper uses traumatic brain injury (TBI) datasets. TBI is one of the most serious diseases worldwide and needs attention due to its severe impact on human life. The proposed predictive model improves the predictive performance for TBI. The TBI data set was developed, and its features approved, by neurologists. The experimental results show that the proposed model achieves significant results in terms of accuracy, sensitivity, and specificity.

  1. Thermophysical properties of liquid UO{sub 2}, ZrO{sub 2} and corium by molecular dynamics and predictive models

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Woong Kee; Shim, Ji Hoon [Pohang University of Science and Technology, Pohang (Korea, Republic of); Kaviany Massoud [University of Michigan, Ann Arbor (United States)

    2016-10-15

    The analysis of such accidents (fate of the melt) requires accurate corium thermophysical properties data up to 5000 K. In addition, the initial corium melt superheat, determined from such properties, is key in predicting fuel-coolant interactions (FCIs) and the convection and retention of corium in accident scenarios, e.g., core-meltdown corium discharge from reactor pressure vessels and spreading in an external core-catcher. Due to the high temperatures, data on molten corium and its constituents are limited, so there is much data scatter and mostly extrapolations (even from the solid state) have been used. Here we predict the thermophysical properties of molten UO{sub 2} and ZrO{sub 2} using classical molecular dynamics (MD) simulations (properties of corium are predicted using the mixture theories and the UO{sub 2} and ZrO{sub 2} properties). The thermophysical properties (density, compressibility, heat capacity, viscosity and surface tension) of liquid UO{sub 2} and ZrO{sub 2} are predicted using classical molecular dynamics simulations, up to 5000 K. For atomic interactions, the CRG and the Teter potential models are found most appropriate. The liquid behavior is verified with the random motion of the constituent atoms and the pair-distribution functions, starting with the solid phase and raising the temperature to realize the liquid phase. The viscosity and thermal conductivity are calculated with the Green-Kubo autocorrelation decay formulae and compared with the predictive models of Andrade and Bridgman. For liquid UO{sub 2}, the CRG model gives satisfactory MD predictions. For ZrO{sub 2}, the density is reliably predicted with the CRG potential model, while the compressibility and viscosity are more accurately predicted by the Teter model.
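
    The Green-Kubo route mentioned above obtains viscosity by integrating the shear-stress autocorrelation function over time; a minimal sketch of that final integration step (the trapezoidal integration and inputs are illustrative; in a real calculation the ACF is accumulated from the MD trajectory):

```python
def green_kubo_viscosity(acf, dt, volume, temperature, kb=1.380649e-23):
    """Shear viscosity from the Green-Kubo relation
    eta = V/(kB*T) * integral_0^inf <P_xy(0) P_xy(t)> dt,
    with the time integral taken by the trapezoidal rule.
    acf: sampled shear-stress autocorrelation (Pa^2); dt: spacing (s)."""
    integral = dt * (sum(acf) - 0.5 * (acf[0] + acf[-1]))
    return volume / (kb * temperature) * integral
```

    Thermal conductivity follows the same pattern with the heat-flux autocorrelation in place of the shear-stress one, which is why the abstract refers to the Green-Kubo "formulae" in the plural.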

  2. Modeling and evaluating of surface roughness prediction in micro-grinding on soda-lime glass considering tool characterization

    Science.gov (United States)

    Cheng, Jun; Gong, Yadong; Wang, Jinsheng

    2013-11-01

    Current research on micro-grinding mainly focuses on the optimal processing technology for different materials. However, the material removal mechanism in micro-grinding is the basis for achieving a high-quality processed surface. Therefore, a novel method for predicting surface roughness in micro-grinding of hard brittle materials, considering the grain protrusion topography of the micro-grinding tool, is proposed in this paper. The differences in material removal mechanism between the conventional grinding process and the micro-grinding process are analyzed. Topography characterization has been performed on micro-grinding tools fabricated by electroplating. Models of grain density generation and grain interval are built, and a new model for predicting micro-grinding surface roughness is developed. To verify the precision and applicability of the proposed surface roughness prediction model, an orthogonal micro-grinding experiment on soda-lime glass is designed and conducted. A series of micro-machined surfaces of the brittle material with roughness from 78 nm to 0.98 μm is achieved. The experimental roughness results and the predicted roughness data coincide closely, and the component variable describing the size effects in the prediction model is calculated to be 1.5×10^7 by an inverse method based on the experimental results. The proposed model builds a set of distributions to describe grain densities at different protrusion heights. Finally, the micro-grinding tools used in the experiment have been characterized based on this distribution set. The significant coincidence between the surface predictions of the proposed model and the experimental measurements demonstrates the effectiveness of the model. This paper proposes a novel method for predicting surface roughness in micro-grinding of hard brittle materials considering micro-grinding tool grains protrusion

  3. Intrinsic Density Matrices of the Nuclear Shell Model

    International Nuclear Information System (INIS)

    Deveikis, A.; Kamuntavichius, G.

    1996-01-01

    A new method for calculation of shell model intrinsic density matrices, defined as two-particle density matrices integrated over the centre-of-mass position vector of the last two particles and complemented with isospin variables, has been developed. The intrinsic density matrices obtained are completely antisymmetric, translation-invariant, and do not employ a group-theoretical classification of antisymmetric states. They are used for exact realistic density matrix expansion within the framework of the reduced Hamiltonian method. Procedures based on precise arithmetic for calculation of the intrinsic density matrices, involving no numerical diagonalization or orthogonalization, have been developed and implemented in the computer code. (author). 11 refs., 2 tabs

  4. Constraining snowmelt in a temperature-index model using simulated snow densities

    KAUST Repository

    Bormann, Kathryn J.

    2014-09-01

    Current snowmelt parameterisation schemes are largely untested in warmer maritime snowfields, where physical snow properties can differ substantially from the more common colder snow environments. Physical properties such as snow density influence the thermal properties of snow layers and are likely to be important for snowmelt rates. Existing methods for incorporating physical snow properties into temperature-index models (TIMs) require frequent snow density observations. These observations are often unavailable in less monitored snow environments. In this study, previous techniques for end-of-season snow density estimation (Bormann et al., 2013) were enhanced and used as a basis for generating daily snow density data from climate inputs. When evaluated against 2970 observations, the snow density model outperforms a regionalised density-time curve, reducing biases from -0.027 g cm-3 to -0.004 g cm-3 (7%). The simulated daily densities were used at 13 sites in the warmer maritime snowfields of Australia to parameterise snowmelt estimation. With absolute snow water equivalent (SWE) errors between 100 and 136 mm, snow model performance was generally lower in the study region than that reported for colder snow environments, which may be attributed to high annual variability. Model performance was strongly dependent on both calibration and the adjustment for precipitation undercatch errors, which influenced model calibration parameters by 150-200%. Comparison of the density-based snowmelt algorithm against a typical temperature-index model revealed only minor differences between the two snowmelt schemes for estimation of SWE. However, when the model was evaluated against snow depths, the new scheme reduced errors by up to 50%, largely due to improved SWE-to-depth conversions. While this study demonstrates the use of simulated snow density in snowmelt parameterisation, the snow density model may also be of broad interest for snow depth to SWE conversion.
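
    The depth-to-SWE conversion behind the reported error reduction is simple enough to sketch in Python (a generic bulk-density conversion, not the authors' model), and it shows directly why shrinking the density bias matters:

    ```python
    # Generic snow depth -> SWE conversion via bulk density (rho_water = 1 g cm^-3)
    def swe_mm(depth_m, density_g_cm3):
        """Snow water equivalent in mm for a given depth and bulk density."""
        return depth_m * 1000.0 * density_g_cm3

    # 1 m of snow at 0.4 g cm^-3 holds 400 mm of water equivalent
    assert abs(swe_mm(1.0, 0.4) - 400.0) < 1e-9

    # The bias reduction reported above, over a 1 m pack: a -0.027 g cm^-3
    # density bias maps to a 27 mm SWE error, versus 4 mm at -0.004 g cm^-3.
    err_curve = swe_mm(1.0, 0.027)   # density-time curve bias
    err_model = swe_mm(1.0, 0.004)   # snow density model bias
    ```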

  5. Constraining snowmelt in a temperature-index model using simulated snow densities

    KAUST Repository

    Bormann, Kathryn J.; Evans, Jason P.; McCabe, Matthew

    2014-01-01

    Current snowmelt parameterisation schemes are largely untested in warmer maritime snowfields, where physical snow properties can differ substantially from the more common colder snow environments. Physical properties such as snow density influence the thermal properties of snow layers and are likely to be important for snowmelt rates. Existing methods for incorporating physical snow properties into temperature-index models (TIMs) require frequent snow density observations. These observations are often unavailable in less monitored snow environments. In this study, previous techniques for end-of-season snow density estimation (Bormann et al., 2013) were enhanced and used as a basis for generating daily snow density data from climate inputs. When evaluated against 2970 observations, the snow density model outperforms a regionalised density-time curve, reducing biases from -0.027 g cm-3 to -0.004 g cm-3 (7%). The simulated daily densities were used at 13 sites in the warmer maritime snowfields of Australia to parameterise snowmelt estimation. With absolute snow water equivalent (SWE) errors between 100 and 136 mm, snow model performance was generally lower in the study region than that reported for colder snow environments, which may be attributed to high annual variability. Model performance was strongly dependent on both calibration and the adjustment for precipitation undercatch errors, which influenced model calibration parameters by 150-200%. Comparison of the density-based snowmelt algorithm against a typical temperature-index model revealed only minor differences between the two snowmelt schemes for estimation of SWE. However, when the model was evaluated against snow depths, the new scheme reduced errors by up to 50%, largely due to improved SWE-to-depth conversions. While this study demonstrates the use of simulated snow density in snowmelt parameterisation, the snow density model may also be of broad interest for snow depth to SWE conversion.

  6. Prediction of Reduction Potentials of Copper Proteins with Continuum Electrostatics and Density Functional Theory.

    Science.gov (United States)

    Fowler, Nicholas J; Blanford, Christopher F; Warwicker, Jim; de Visser, Sam P

    2017-11-02

    Blue copper proteins, such as azurin, show dramatic changes in Cu2+/Cu+ reduction potential upon mutation over the full physiological range. Hence, they have important functions in electron transfer and oxidation chemistry and have applications in industrial biotechnology. The details of what determines these reduction potential changes upon mutation are still unclear. Moreover, it has been difficult to model and predict the reduction potential of azurin mutants and currently no unique procedure or workflow pattern exists. Furthermore, high-level computational methods can be accurate but are too time consuming for practical use. In this work, a novel approach for calculating reduction potentials of azurin mutants is shown, based on a combination of continuum electrostatics, density functional theory and empirical hydrophobicity factors. Our method accurately reproduces experimental reduction potential changes of 30 mutants with respect to wild type within experimental error and highlights the factors contributing to the reduction potential change. Finally, reduction potentials are predicted for a series of 124 new mutants that have not yet been investigated experimentally. Several mutants are identified that are located well over 10 Å from the copper center that change the reduction potential by more than 85 mV. The work shows that secondary coordination sphere mutations mostly lead to long-range electrostatic changes and hence can be modeled accurately with continuum electrostatics. © 2017 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.

  7. Big-bang nucleosynthesis and the baryon density of the universe.

    Science.gov (United States)

    Copi, C J; Schramm, D N; Turner, M S

    1995-01-13

    For almost 30 years, the predictions of big-bang nucleosynthesis have been used to test the big-bang model to within a fraction of a second of the bang. The agreement between the predicted and observed abundances of deuterium, helium-3, helium-4, and lithium-7 confirms the standard cosmology model and allows accurate determination of the baryon density, between 1.7 × 10^-31 and 4.1 × 10^-31 grams per cubic centimeter (corresponding to about 1 to 15 percent of the critical density). This measurement of the density of ordinary matter is pivotal to the establishment of two dark-matter problems: (i) most of the baryons are dark, and (ii) if the total mass density is greater than about 15 percent of the critical density, as many determinations indicate, the bulk of the dark matter must be "non-baryonic," composed of elementary particles left from the earliest moments.
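
    The quoted range can be checked by comparing the BBN baryon density to the critical density ρ_c = 3H₀²/(8πG). A short Python sketch, assuming the mid-1990s Hubble-parameter range h ≈ 0.4-1.0 (an assumption consistent with the era of this paper, not stated in the abstract):

    ```python
    import math

    G = 6.674e-8                 # gravitational constant, cm^3 g^-1 s^-2
    CM_PER_MPC = 3.086e24

    def critical_density(h):
        """Critical density (g cm^-3) for H0 = 100*h km/s/Mpc."""
        H0 = 100.0 * h * 1.0e5 / CM_PER_MPC      # H0 in s^-1
        return 3.0 * H0**2 / (8.0 * math.pi * G)

    # BBN bounds on the baryon density quoted above (g cm^-3)
    rho_lo, rho_hi = 1.7e-31, 4.1e-31

    # Assumed Hubble-parameter range: h = 0.4 ... 1.0
    omega_min = rho_lo / critical_density(1.0)   # densest critical density
    omega_max = rho_hi / critical_density(0.4)   # sparsest critical density
    # -> roughly 0.01 to 0.14: "about 1 to 15 percent of the critical density"
    ```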

  8. Increased consumer density reduces the strength of neighborhood effects in a model system.

    Science.gov (United States)

    Merwin, Andrew C; Underwood, Nora; Inouye, Brian D

    2017-11-01

    An individual's susceptibility to attack can be influenced by conspecific and heterospecific neighbors. Predicting how these neighborhood effects contribute to population-level processes such as competition and evolution requires an understanding of how the strength of neighborhood effects is modified by changes in the abundances of both consumers and neighboring resource species. We show for the first time that consumer density can interact with the density and frequency of neighboring organisms to determine the magnitude of neighborhood effects. We used the bean beetle, Callosobruchus maculatus, and two of its host beans, Vigna unguiculata and V. radiata, to perform a response-surface experiment with a range of resource densities and three consumer densities. At low beetle density, damage to beans was reduced with increasing conspecific density (i.e., resource dilution) and damage to the less preferred host, V. unguiculata, was reduced with increasing V. radiata frequency (i.e., frequency-dependent associational resistance). As beetle density increased, however, neighborhood effects were reduced; at the highest beetle densities neither focal nor neighboring resource density nor frequency influenced damage. These findings illustrate the importance of consumer density in mediating indirect effects among resources, and suggest that accounting for consumer density may improve our ability to predict population-level outcomes of neighborhood effects and our use of them in applications such as mixed-crop pest management. © 2017 by the Ecological Society of America.

  9. Teaching Chemistry with Electron Density Models

    Science.gov (United States)

    Shusterman, Gwendolyn P.; Shusterman, Alan J.

    1997-07-01

    Linus Pauling once said that a topic must satisfy two criteria before it can be taught to students. First, students must be able to assimilate the topic within a reasonable amount of time. Second, the topic must be relevant to the educational needs and interests of the students. Unfortunately, the standard general chemistry textbook presentation of "electronic structure theory", set as it is in the language of molecular orbitals, has a difficult time satisfying either criterion. Many of the quantum mechanical aspects of molecular orbitals are too difficult for most beginning students to appreciate, much less master, and the few applications that are presented in the typical textbook are too limited in scope to excite much student interest. This article describes a powerful new method for teaching students about electronic structure and its relevance to chemical phenomena. This method, which we have developed and used for several years in general chemistry (G.P.S.) and organic chemistry (A.J.S.) courses, relies on computer-generated three-dimensional models of electron density distributions, and largely satisfies Pauling's two criteria. Students find electron density models easy to understand and use, and because these models are easily applied to a broad range of topics, they successfully convey to students the importance of electronic structure. In addition, when students finally learn about orbital concepts they are better prepared because they already have a well-developed three-dimensional picture of electronic structure to fall back on. We note in this regard that the types of models we use have found widespread, rigorous application in chemical research (1, 2), so students who understand and use electron density models do not need to "unlearn" anything before progressing to more advanced theories.

  10. Finite element model predicts current density distribution for clinical applications of tDCS and tACS

    Directory of Open Access Journals (Sweden)

    Toralf eNeuling

    2012-09-01

    Transcranial direct current stimulation (tDCS) has been applied in numerous scientific studies over the past decade. However, the possibility of applying tDCS in the therapy of neuropsychiatric disorders is still debated. While transcranial magnetic stimulation (TMS) has been approved for treatment of major depression in the United States by the Food and Drug Administration (FDA), tDCS is not as widely accepted. One of the criticisms against tDCS is the lack of spatial specificity. Focality is limited by the electrode size (35 cm2 electrodes are commonly used) and the bipolar arrangement. However, a current flow through the head directly from anode to cathode is an outdated view. Finite element (FE) models have recently been used to predict the exact current flow during tDCS. These simulations have demonstrated that the current flow depends on tissue shape and conductivity. To face the challenge of predicting the location, magnitude and direction of the current flow induced by tDCS and transcranial alternating current stimulation (tACS), we used a refined realistic FE modeling approach. With respect to the literature on clinical tDCS and tACS, we analyzed two common setups for the location of the stimulation electrodes, which target the frontal lobe and the occipital lobe, respectively. We compared lateral and medial electrode configurations with regard to their usability. We were able to demonstrate that the lateral configurations yielded more focused stimulation areas as well as higher current intensities in the target areas. The high resolution of our simulation allows one to combine the modeled current flow with knowledge of neuronal orientation to predict the consequences of tDCS and tACS. Our results not only offer a basis for a deeper understanding of the stimulation sites currently in use for clinical applications but also offer a better interpretation of observed effects.

  11. Representation and Validation of Liquid Densities for Pure Compounds and Mixtures

    DEFF Research Database (Denmark)

    O'Connell, J.; Dicky, V.; Abildskov, Jens

    Reliable correlation and prediction of liquid densities are important for designing chemical processes at normal and elevated pressures. We have extended a corresponding states model from molecular theory to yield a robust method for quality testing of experimental data that also provides predicted...... values at unmeasured conditions. The model has been shown to successfully validate and represent the pressure and temperature dependence of liquid densities greater than 1.5 of the critical density for pure compounds, binary mixtures, and ternary mixtures from the triple to critical temperatures...... at pressures up to 1000 MPa. The systems include the full range of organic compounds, including complex mixtures, and ionic liquids. Minimal data are required for making predictions.The presentation will show the implementation of the method, criteria for its deployment, examples of its application to a wide...

  12. Multi-model analysis in hydrological prediction

    Science.gov (United States)

    Lanthier, M.; Arsenault, R.; Brissette, F.

    2017-12-01

    Hydrologic modelling, by nature, is a simplification of the real-world hydrologic system. Ensemble hydrological predictions thus obtained do not present the full range of possible streamflow outcomes, producing ensembles with errors in variance such as under-dispersion. Past studies show that lumped models used in prediction mode can return satisfactory results, especially when there is not enough information available on the watershed to run a distributed model. But all lumped models greatly simplify the complex processes of the hydrologic cycle. To generate more spread in the hydrologic ensemble predictions, multi-model ensembles have been considered. In this study, the aim is to propose and analyse a method that gives an ensemble streamflow prediction that properly represents the forecast probabilities and reduces ensemble bias. To achieve this, three simple lumped models are used to generate an ensemble. These are also combined using multi-model averaging techniques, which generally produce a more accurate hydrograph than the best of the individual models in simulation mode. This new combined predictive hydrograph is added to the ensemble, creating a large ensemble that may improve the variability while also improving the ensemble mean bias. The quality of the predictions is then assessed over different periods (2 weeks, 1 month, 3 months and 6 months) using a PIT histogram of the percentiles of the observed volumes with respect to the volumes of the ensemble members. Initially, the models were run using historical weather data to generate synthetic flows. This worked for the individual models, but not for the multi-model and for the large ensemble. Consequently, by performing data assimilation at each prediction period and thus adjusting the initial states of the models, the PIT histogram could be constructed using the observed flows while allowing the use of the multi-model predictions.
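
    A PIT (probability integral transform) histogram of the kind used here can be sketched in a few lines of Python with synthetic data: a well-calibrated ensemble yields a roughly flat histogram, while under-dispersion piles mass at the edges. The data below are illustrative, not streamflow volumes.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def pit_values(ensembles, observations):
        """PIT value = fraction of ensemble members below each observation."""
        ens = np.asarray(ensembles)
        obs = np.asarray(observations)[:, None]
        return (ens < obs).mean(axis=1)

    # Toy case: members and observations drawn from the same distribution,
    # i.e. a well-calibrated ensemble -> the PIT histogram is roughly flat.
    obs = rng.normal(size=2000)
    ens = rng.normal(size=(2000, 50))      # 50-member ensemble per forecast
    pit = pit_values(ens, obs)
    counts, _ = np.histogram(pit, bins=10, range=(0, 1))
    ```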

  13. A spatially-explicit count data regression for modeling the density of forest cockchafer (Melolontha hippocastani) larvae in the Hessian Ried (Germany)

    Directory of Open Access Journals (Sweden)

    Matthias Schmidt

    2014-10-01

    Background: In this paper, a regression model for predicting the spatial distribution of forest cockchafer larvae in the Hessian Ried region (Germany) is presented. The forest cockchafer, a native biotic pest, is a major cause of damage in forests in this region, particularly during the regeneration phase. The model developed in this study is based on a systematic sample inventory of forest cockchafer larvae by excavation across the Hessian Ried. These larvae data were characterized by excess zeros and overdispersion. Methods: Using generalized additive regression models, different discrete distributions, including the Poisson, negative binomial and zero-inflated Poisson distributions, were compared. The methodology employed allowed the simultaneous estimation of non-linear model effects of causal covariates and, to account for spatial autocorrelation, of a 2-dimensional spatial trend function. In the validation of the models, both the Akaike information criterion (AIC) and more detailed graphical procedures based on randomized quantile residuals were used. Results: The negative binomial distribution was superior to the Poisson and the zero-inflated Poisson distributions, providing a near perfect fit to the data, which was proven in an extensive validation process. The causal predictors found to affect larva density significantly were distance to the water table and percentage of pure clay layer in the soil to a depth of 1 m. Model predictions showed that larva density increased with distance to the water table up to almost 4 m, after which it remained constant, and decreased with the percentage of pure clay layer. However, this latter correlation was weak and requires further investigation. The 2-dimensional trend function indicated a strong spatial effect, and thus explained by far the highest proportion of variation in larva density. Conclusions: As such the model can be used to support forest
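
    The overdispersion diagnosis that favors the negative binomial over the Poisson can be illustrated with a small Python sketch (synthetic counts with invented parameters; the paper's actual models are generalized additive regressions with spatial terms):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic larva counts: structural zeros mixed with an overdispersed
    # negative binomial component (parameters invented for illustration).
    n = 5000
    structural_zero = rng.random(n) < 0.3
    counts = np.where(structural_zero, 0, rng.negative_binomial(2, 0.4, n))

    mean, var = counts.mean(), counts.var()
    # Overdispersion (variance well above the mean) rules out a plain Poisson,
    # whose variance would equal its mean.
    assert var > mean

    # Method-of-moments negative binomial fit: mean = r(1-p)/p, var = mean/p
    p_hat = mean / var
    r_hat = mean * p_hat / (1 - p_hat)
    ```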

  14. Molecular weight/branching distribution modeling of low-density polyethylene accounting for topological scission and combination termination in continuous stirred tank reactor

    NARCIS (Netherlands)

    Yaghini, N.; Iedema, P.D.

    2014-01-01

    We present a comprehensive model to predict the molecular weight distribution (MWD) and branching distribution of low-density polyethylene (LDPE) for a free-radical polymerization system in a continuous stirred tank reactor (CSTR). The model accounts for branching, by branching moment or

  15. Probabilistic predictive modelling of carbon nanocomposites for medical implants design.

    Science.gov (United States)

    Chua, Matthew; Chui, Chee-Kong

    2015-04-01

    Modelling of the mechanical properties of carbon nanocomposites based on input variables like percentage weight of Carbon Nanotube (CNT) inclusions is important for the design of medical implants and other structural scaffolds. Current constitutive models for the mechanical properties of nanocomposites may not predict well due to differences in conditions, fabrication techniques and inconsistencies in reagent properties used across industries and laboratories. Furthermore, the mechanical properties of the designed products are not deterministic, but exist as a probabilistic range. A predictive model based on a modified probabilistic surface response algorithm is proposed in this paper to address this issue. Tensile testing of three groups of different CNT weight fractions of carbon nanocomposite samples displays scattered stress-strain curves, with the instantaneous stresses assumed to vary according to a normal distribution at a specific strain. From the probabilistic density function of the experimental data, a two-factor Central Composite Design (CCD) experimental matrix based on strain and CNT weight fraction inputs with their corresponding stress distributions was established. Monte Carlo simulation was carried out on this design matrix to generate a predictive probabilistic polynomial equation. The equation and method were subsequently validated with more tensile experiments and Finite Element (FE) studies. The method was then demonstrated in the design of an artificial tracheal implant. Our algorithm provides an effective way to accurately model the mechanical properties of implants of various compositions based on experimental data of samples. Copyright © 2015 Elsevier Ltd. All rights reserved.
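
    The Monte Carlo step over a polynomial response surface can be sketched in Python. The quadratic coefficients, the 8% relative spread, and the operating point below are all hypothetical, not fitted to the paper's data:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical quadratic response surface for mean stress (MPa) as a
    # function of strain and CNT weight fraction (coefficients invented).
    def mean_stress(strain, wt_frac):
        return 200*strain + 900*strain*wt_frac - 150*strain**2

    # Monte Carlo step: stress at a given strain is assumed normally
    # distributed around the surface with an 8% relative spread (assumption).
    def stress_samples(strain, wt_frac, n=10_000, rel_sd=0.08):
        mu = mean_stress(strain, wt_frac)
        return rng.normal(mu, rel_sd * mu, size=n)

    samples = stress_samples(strain=0.05, wt_frac=0.02)
    lo, hi = np.percentile(samples, [2.5, 97.5])   # probabilistic design range
    ```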

  16. Approaching an experimental electron density model of the biologically active trans -epoxysuccinyl amide group-Substituent effects vs. crystal packing

    Energy Technology Data Exchange (ETDEWEB)

    Shi, Ming W.; Stewart, Scott G.; Sobolev, Alexandre N.; Dittrich, Birger; Schirmeister, Tanja; Luger, Peter; Hesse, Malte; Chen, Yu-Sheng; Spackman, Peter R.; Spackman, Mark A.; Grabowsky, Simon (Heinrich-Heine); (Freie); (UC); (Bremen); (JG-UM); (UWA)

    2017-01-24

    The trans-epoxysuccinyl amide group as a biologically active moiety in cysteine protease inhibitors such as loxistatin acid E64c has been used as a benchmark system for theoretical studies of environmental effects on the electron density of small active ingredients in relation to their biological activity. Here, the synthesis and the electronic properties of the smallest possible active site model compound are reported to close the gap between the unknown experimental electron density of trans-epoxysuccinyl amides and the well-known function of related drugs. Intramolecular substituent effects are separated from intermolecular crystal packing effects on the electron density, which allows us to predict the conditions under which an experimental electron density investigation on trans-epoxysuccinyl amides will be possible. In this context, the special importance of the carboxylic acid function in the model compound for both crystal packing and biological activity is revealed through the novel tool of model energy analysis.

  17. The dynamics of variable-density turbulence

    International Nuclear Information System (INIS)

    Sandoval, D.L.

    1995-11-01

    The dynamics of variable-density turbulent fluids are studied by direct numerical simulation. The flow is incompressible, so acoustic waves are decoupled from the problem, implying that density is not a thermodynamic variable. Changes in density occur due to molecular mixing. The velocity field is, in general, divergent. A pseudo-spectral numerical technique is used to solve the equations of motion. Three-dimensional simulations are performed using a grid of 128^3 points. Two types of problems are studied: (1) the decay of isotropic, variable-density turbulence, and (2) buoyancy-generated turbulence in a fluid with large density fluctuations. In the case of isotropic, variable-density turbulence, the overall statistical decay behavior, for the cases studied, is relatively unaffected by the presence of density variations when the initial density and velocity fields are statistically independent. The results for this case are in quantitative agreement with previous numerical and laboratory results. In this case, the initial density field has a bimodal probability density function (pdf) which evolves in time towards a Gaussian distribution. The pdf of the density field is symmetric about its mean value throughout its evolution. If the initial velocity and density fields are statistically dependent, however, the decay process is significantly affected by the density fluctuations. For the case of buoyancy-generated turbulence, variable-density departures from the Boussinesq approximation are studied. The results of the buoyancy-generated turbulence are compared with variable-density model predictions. Both a one-point (engineering) model and a two-point (spectral) model are tested against the numerical data. Some deficiencies in these variable-density models are discussed and modifications are suggested.

  18. Nonparametric volatility density estimation for discrete time models

    NARCIS (Netherlands)

    Es, van Bert; Spreij, P.J.C.; Zanten, van J.H.

    2005-01-01

    We consider discrete time models for asset prices with a stationary volatility process. We aim at estimating the multivariate density of this process at a set of consecutive time instants. A Fourier-type deconvolution kernel density estimator based on the logarithm of the squared process is proposed

  19. PEDO-TRANSFER FUNCTIONS FOR ESTIMATING SOIL BULK DENSITY IN CENTRAL AMAZONIA

    Directory of Open Access Journals (Sweden)

    Henrique Seixas Barros

    2015-04-01

    Under field conditions in the Amazon forest, soil bulk density is difficult to measure. Rigorous methodological criteria must be applied to obtain reliable inventories of C stocks and soil nutrients, making this process expensive and sometimes unfeasible. This study aimed to generate models to estimate soil bulk density based on parameters that can be easily and reliably measured in the field and that are available in many soil-related inventories. Stepwise regression models to predict bulk density were developed using data on soil C content, clay content and pH in water from 140 permanent plots in terra firme (upland) forests near Manaus, Amazonas State, Brazil. The model results were interpreted according to the coefficient of determination (R2) and the Akaike information criterion (AIC), and were validated with a dataset consisting of 125 plots different from those used to generate the models. The model with the best performance in estimating soil bulk density under the conditions of this study included clay content and pH in water as independent variables and had R2 = 0.73 and AIC = -250.29. The performance of this model for predicting soil density was compared with that of models from the literature. The results showed that the locally calibrated equation was the most accurate for estimating soil bulk density for upland forests in the Manaus region.
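
    A pedo-transfer function of this form is an ordinary least-squares fit, which a short Python sketch makes concrete. The data and coefficients below are invented for illustration; they are not the Amazon dataset or the paper's calibrated equation:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Synthetic calibration plots (not the Amazon dataset): bulk density
    # (g cm^-3) decreasing with clay content, increasing with pH in water.
    n = 140
    clay = rng.uniform(10, 80, n)                 # clay content, %
    ph = rng.uniform(3.5, 5.5, n)                 # pH in water
    bd = 1.6 - 0.008*clay + 0.05*ph + rng.normal(0, 0.05, n)

    # Ordinary least squares, the backbone of a pedo-transfer function
    X = np.column_stack([np.ones(n), clay, ph])
    beta, *_ = np.linalg.lstsq(X, bd, rcond=None)

    pred = X @ beta
    r2 = 1 - ((bd - pred)**2).sum() / ((bd - bd.mean())**2).sum()
    ```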

  20. Density measurements of microsecond-conduction-time POS plasmas

    International Nuclear Information System (INIS)

    Hinshelwood, D.; Goodrich, P.J.; Weber, B.V.; Commisso, R.J.; Grossmann, J.M.; Kellogg, J.C.

    1993-01-01

    Measurements of the electron density in a coaxial microsecond-conduction-time plasma opening switch during switch operation are described. Current conduction is observed to cause a radial redistribution of the switch plasma. A local reduction in axial line density of more than an order of magnitude occurs by the time opening begins. This reduction, and the scaling of conduction current with plasma density, indicate that current conduction in this experiment is limited by hydrodynamic effects. It is hypothesized that the density reduction allows the switch to open by an erosion mechanism. Initial numerical modeling efforts have reproduced the principal observed results. A model that accurately predicts the conduction current is presented.

  1. Modelling risk of tick exposure in southern Scandinavia using machine learning techniques, satellite imagery, and human population density maps

    DEFF Research Database (Denmark)

    Kjær, Lene Jung; Korslund, L.; Kjelland, V.

    30 sites (forests and meadows) in each of Denmark, southern Norway and south-eastern Sweden. At each site we measured presence/absence of ticks, and used the data obtained along with environmental satellite images to run Boosted Regression Tree machine learning algorithms to predict overall spatial...... and Sweden), areas with high population densities tend to overlap with these zones.Machine learning techniques allow us to predict for larger areas without having to perform extensive sampling all over the region in question, and we were able to produce models and maps with high predictive value. The results...
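
    A presence/absence model of the boosted-tree family can be sketched with scikit-learn. The covariates below are synthetic stand-ins for satellite-derived layers, and the fitted model is illustrative only, not the authors' Boosted Regression Tree models:

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(5)

    # Synthetic site covariates standing in for satellite-derived layers
    # (e.g. vegetation index, temperature, humidity); tick presence depends
    # nonlinearly on them. Entirely illustrative data.
    n = 1500
    X = rng.uniform(0, 1, size=(n, 3))
    logit = 4*X[:, 0] - 3*X[:, 1]**2 + 2*X[:, 2] - 1.5
    y = rng.random(n) < 1 / (1 + np.exp(-logit))

    # Boosted shallow trees, in the spirit of Boosted Regression Trees
    model = GradientBoostingClassifier(n_estimators=200, max_depth=2,
                                       random_state=0)
    model.fit(X[:1000], y[:1000])
    acc = model.score(X[1000:], y[1000:])   # hold-out accuracy
    ```

    Once fitted, such a model can be evaluated over a covariate grid to produce the kind of spatial risk map described above.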

  2. Predictive Modeling in Race Walking

    Directory of Open Access Journals (Sweden)

    Krzysztof Wiktorowicz

    2015-01-01

    This paper presents the use of linear and nonlinear multivariable models as tools to support the training process of race walkers. These models are calculated using data collected from race walkers’ training events and are used to predict the result over a 3 km race based on training loads. The material consists of 122 training plans for 21 athletes. In order to choose the best model, the leave-one-out cross-validation method is used. The main contribution of the paper is to propose nonlinear modifications of linear models in order to achieve a smaller prediction error. It is shown that the best model is a modified LASSO regression with quadratic terms in the nonlinear part. This model has the smallest prediction error and a simplified structure obtained by eliminating some of the predictors.
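
    The winning model class, LASSO over linear plus quadratic terms scored by leave-one-out cross-validation, can be sketched with scikit-learn. The training-load data and coefficients below are invented for illustration, not the athletes' data:

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso
    from sklearn.model_selection import LeaveOneOut, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures, StandardScaler

    rng = np.random.default_rng(6)

    # Synthetic training loads (three load types) vs 3 km result in seconds
    n = 60
    loads = rng.uniform(0, 10, size=(n, 3))
    time_3km = (800 - 8*loads[:, 0] - 3*loads[:, 1]
                + 0.5*loads[:, 0]**2 + rng.normal(0, 5, n))

    # LASSO over linear + quadratic terms, scored by leave-one-out CV;
    # the L1 penalty zeroes out weak predictors, simplifying the model.
    model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                          StandardScaler(),
                          Lasso(alpha=0.5))
    mse = -cross_val_score(model, loads, time_3km, cv=LeaveOneOut(),
                           scoring="neg_mean_squared_error").mean()
    ```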

  3. Predict-first experimental analysis using automated and integrated magnetohydrodynamic modeling

    Science.gov (United States)

    Lyons, B. C.; Paz-Soldan, C.; Meneghini, O.; Lao, L. L.; Weisberg, D. B.; Belli, E. A.; Evans, T. E.; Ferraro, N. M.; Snyder, P. B.

    2018-05-01

    An integrated-modeling workflow has been developed for the purpose of performing predict-first analysis of transient-stability experiments. Starting from an existing equilibrium reconstruction from a past experiment, the workflow couples together the EFIT Grad-Shafranov solver [L. Lao et al., Fusion Sci. Technol. 48, 968 (2005)], the EPED model for the pedestal structure [P. B. Snyder et al., Phys. Plasmas 16, 056118 (2009)], and the NEO drift-kinetic-equation solver [E. A. Belli and J. Candy, Plasma Phys. Controlled Fusion 54, 015015 (2012)] (for bootstrap current calculations) in order to generate equilibria with self-consistent pedestal structures as the plasma shape and various scalar parameters (e.g., normalized β, pedestal density, and edge safety factor [q95]) are changed. These equilibria are then analyzed using automated M3D-C1 extended-magnetohydrodynamic modeling [S. C. Jardin et al., Comput. Sci. Discovery 5, 014002 (2012)] to compute the plasma response to three-dimensional magnetic perturbations. This workflow was created in conjunction with a DIII-D experiment examining the effect of triangularity on the 3D plasma response. Several versions of the workflow were developed, and the initial ones were used to help guide experimental planning (e.g., determining the plasma current necessary to maintain the constant edge safety factor in various shapes). Subsequent validation with the experimental results was then used to revise the workflow, ultimately resulting in the complete model presented here. We show that quantitative agreement was achieved between the M3D-C1 plasma response calculated for equilibria generated by the final workflow and equilibria reconstructed from experimental data. A comparison of results from earlier workflows is used to show the importance of properly matching certain experimental parameters in the generated equilibria, including the normalized β, pedestal density, and q95. On the other hand, the details of the pedestal

  4. The equivalent thermal conductivity of lattice core sandwich structure: A predictive model

    International Nuclear Information System (INIS)

    Cheng, Xiangmeng; Wei, Kai; He, Rujie; Pei, Yongmao; Fang, Daining

    2016-01-01

    Highlights: • A predictive model of the equivalent thermal conductivity was established. • Both heat conduction and radiation were considered. • The predictive results were in good agreement with experiment and FEM. • Methods for improving the thermal protection performance are proposed. - Abstract: The equivalent thermal conductivity of a lattice core sandwich structure was predicted using a novel model. The predictions were in good agreement with experimental and Finite Element Method results. The thermal conductivity of the lattice core sandwich structure was attributed to both core conduction and radiation. The conduction contribution depends only on the relative density of the structure, while the radiation contribution increases linearly with the thickness of the core. The equivalent thermal conductivity of the lattice core sandwich structure was found to be highly temperature dependent: at low temperatures the structure behaves almost as a thermal insulator, while with increasing temperature its thermal conductivity rises owing to radiation. Therefore, measures such as reducing the emissivity of the core or designing multilayered structures are believed to benefit the thermal protection performance of the structure at high temperatures.
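
    The two-part conductivity description maps onto a simple additive formula, sketched below in Python. The solid conductivity, emissivity, and the 4σεT³d radiation form are assumptions for illustration, not the paper's calibrated model:

    ```python
    # Sketch of a two-part equivalent conductivity: conduction scales with the
    # core's relative density, while the radiation contribution grows linearly
    # with core thickness and roughly as T^3 (assumed grey-body form).
    SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4

    def k_equivalent(rel_density, thickness_m, T, k_solid=20.0, emissivity=0.8):
        k_cond = k_solid * rel_density                       # conduction part
        k_rad = 4 * SIGMA * emissivity * T**3 * thickness_m  # radiation part
        return k_cond + k_rad

    # Nearly insulating at room temperature, radiation-dominated when hot
    cold = k_equivalent(0.05, 0.01, 300.0)
    hot = k_equivalent(0.05, 0.01, 1200.0)
    assert hot > cold
    ```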

  5. Adding propensity scores to pure prediction models fails to improve predictive performance

    Directory of Open Access Journals (Sweden)

    Amy S. Nowacki

    2013-08-01

    Background. Propensity score usage seems to be growing in popularity, leading researchers to question the possible role of propensity scores in prediction modeling, despite the lack of a theoretical rationale. It is suspected that such requests are due to a lack of differentiation between the goals of predictive modeling and causal inference modeling. Therefore, the purpose of this study is to formally examine the effect of propensity scores on predictive performance. Our hypothesis is that a multivariable regression model that adjusts for all covariates will perform as well as or better than models utilizing propensity scores with respect to model discrimination and calibration. Methods. The most commonly encountered statistical scenarios for medical prediction (logistic and proportional hazards regression) were used to investigate this research question. Random cross-validation was performed 500 times to correct for optimism. The multivariable regression models adjusting for all covariates were compared with models that included adjustment for or weighting with the propensity scores. The methods were compared on three predictive performance measures: (1) concordance indices; (2) Brier scores; and (3) calibration curves. Results. Multivariable models adjusting for all covariates had the highest average concordance index, the lowest average Brier score, and the best calibration. Propensity score adjustment and inverse probability weighting models without adjustment for all covariates performed worse than full models and failed to improve predictive performance over full covariate adjustment. Conclusion. Propensity score techniques did not improve prediction performance measures beyond multivariable adjustment. Propensity scores are not recommended if the analytical goal is pure prediction modeling.
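Two of the performance measures used in the comparison, the Brier score and the concordance (c) index, can be computed from scratch as below. The outcome and prediction vectors are toy values for illustration, not data from the study.

```python
# Brier score: mean squared error of predicted probabilities (lower is better).
# Concordance (c) index: probability that a randomly chosen case is assigned a
# higher predicted risk than a randomly chosen non-case. Toy data only.
from itertools import combinations

def brier_score(y_true, p_pred):
    """Lower is better; 0 is a perfect probabilistic prediction."""
    return sum((y - p) ** 2 for y, p in zip(y_true, p_pred)) / len(y_true)

def concordance_index(y_true, p_pred):
    """Fraction of case/non-case pairs in which the case gets the higher risk."""
    concordant = ties = total = 0
    for (yi, pi), (yj, pj) in combinations(zip(y_true, p_pred), 2):
        if yi == yj:
            continue  # only pairs with different outcomes are informative
        total += 1
        case_p, ctrl_p = (pi, pj) if yi == 1 else (pj, pi)
        if case_p > ctrl_p:
            concordant += 1
        elif case_p == ctrl_p:
            ties += 1
    return (concordant + 0.5 * ties) / total

y = [1, 0, 1, 0]               # hypothetical observed outcomes
p_full = [0.9, 0.2, 0.7, 0.4]  # hypothetical full-covariate model predictions
```

In the study's design, these measures (together with calibration curves) would be computed for the full-covariate model and for each propensity-score variant under repeated cross-validation.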

  6. A unified dislocation density-dependent physical-based constitutive model for cold metal forming

    Science.gov (United States)

    Schacht, K.; Motaman, A. H.; Prahl, U.; Bleck, W.

    2017-10-01

    Dislocation-density-dependent physical-based constitutive models of metal plasticity, while computationally efficient and history-dependent, can accurately account for varying process parameters such as strain, strain rate, and temperature; different loading modes such as continuous deformation, creep, and relaxation; microscopic metallurgical processes; and varying chemical composition within an alloy family. Since these models are founded on the essential phenomena dominating the deformation, they have a large range of usability and validity. They are also suitable for manufacturing-chain simulations, since they can efficiently compute the cumulative effect of the various manufacturing processes by following the material state through the entire manufacturing chain, including interpass periods, and give a realistic prediction of the material behavior and final product properties. In the physical-based constitutive model of cold metal plasticity introduced in this study, the physical processes influencing cold and warm plastic deformation in polycrystalline metals are described using physical/metallurgical internal variables such as dislocation density and effective grain size. The evolution of these internal variables is calculated using equations that describe the physical processes dominating the material behavior during cold plastic deformation. For validation, the model is numerically implemented in a general implicit isotropic elasto-viscoplasticity algorithm as a user-defined material subroutine (UMAT) in ABAQUS/Standard and used for finite element simulation of upsetting tests and a complete cold forging cycle of a case-hardenable MnCr steel family.
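Dislocation-density evolution in models of this family is commonly written as a competition between storage and dynamic recovery (a Kocks-Mecking-type law), with the flow stress tied to the density through a Taylor relation. The sketch below integrates that generic form with illustrative coefficients; it is not the specific constitutive equation set of this study.

```python
import math

def evolve_dislocation_density(rho0, strain_end, n_steps, k1=1.0e8, k2=10.0):
    """Forward-Euler integration of a Kocks-Mecking-type evolution law:
    drho/d(eps) = k1 * sqrt(rho) - k2 * rho   (storage minus recovery).
    Coefficients k1, k2 are illustrative, not fitted values."""
    rho, d_eps = rho0, strain_end / n_steps
    for _ in range(n_steps):
        rho += (k1 * math.sqrt(rho) - k2 * rho) * d_eps
    return rho

def flow_stress(rho, sigma0=50.0e6, alpha=0.3, mu=80.0e9, b=2.5e-10):
    """Taylor relation: sigma = sigma0 + alpha * mu * b * sqrt(rho)."""
    return sigma0 + alpha * mu * b * math.sqrt(rho)

# Density rises from an annealed value toward the saturation (k1/k2)^2 = 1e14 m^-2
rho = evolve_dislocation_density(rho0=1.0e12, strain_end=0.5, n_steps=1000)
sigma = flow_stress(rho)
```

Because the state variable `rho` persists between loading steps, a chain of such updates naturally carries the material history through successive forming operations, which is the property the abstract highlights for manufacturing-chain simulation.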

  7. Level densities in nuclear physics

    International Nuclear Information System (INIS)

    Beckerman, M.

    1978-01-01

    In the independent-particle model, nucleons move independently in a central potential. There is a well-defined set of single-particle orbitals, each nucleon occupies one of these orbitals subject to Fermi statistics, and the total energy of the nucleus is equal to the sum of the energies of the individual nucleons. The basic question is the range of validity of this Fermi gas description and, in particular, the roles of the residual interactions and collective modes. A detailed examination of experimental level densities in light-mass systems is given to provide some insight into these questions. Level densities over the first 10 MeV or so in excitation energy, as deduced from neutron and proton resonance data and from spectra of low-lying bound levels, are discussed. To exhibit some of the salient features of these data, comparisons to independent-particle (shell) model calculations are presented. Shell structure is predicted to manifest itself through discontinuities in the single-particle level density at the Fermi energy and through variations in the occupancy of the valence orbitals. These predictions are examined through combinatorial calculations performed with the Grover [Phys. Rev. 157, 832 (1967); 185, 1303 (1969)] odometer method. Before the discussion of the experimental results, statistical mechanical level densities for spherical nuclei are reviewed. After consideration of deformed nuclei, the conclusions resulting from this work are drawn. 7 figures, 3 tables

  8. Model-free and model-based reward prediction errors in EEG.

    Science.gov (United States)

    Sambrook, Thomas D; Hardwick, Ben; Wills, Andy J; Goslin, Jeremy

    2018-05-24

    Learning theorists posit two reinforcement learning systems: model-free and model-based. Model-based learning incorporates knowledge about structure and contingencies in the world to assign candidate actions with an expected value. Model-free learning is ignorant of the world's structure; instead, actions hold a value based on prior reinforcement, with this value updated by expectancy violation in the form of a reward prediction error. Because they use such different learning mechanisms, it has been previously assumed that model-based and model-free learning are computationally dissociated in the brain. However, recent fMRI evidence suggests that the brain may compute reward prediction errors to both model-free and model-based estimates of value, signalling the possibility that these systems interact. Because of its poor temporal resolution, fMRI risks confounding reward prediction errors with other feedback-related neural activity. In the present study, EEG was used to show the presence of both model-based and model-free reward prediction errors and their place in a temporal sequence of events including state prediction errors and action value updates. This demonstration of model-based prediction errors questions a long-held assumption that model-free and model-based learning are dissociated in the brain. Copyright © 2018 Elsevier Inc. All rights reserved.
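The model-free reward prediction error described above can be illustrated with a minimal value-learning loop on a two-armed bandit. The task, learning rate, and reward probabilities are assumptions for illustration, not the paradigm used in the study.

```python
import random

random.seed(0)  # deterministic toy run

def run_bandit(p_reward, alpha=0.1, n_trials=500):
    """Model-free value learning: delta = r - Q[a] is the reward prediction
    error, and it updates the chosen action's value via Q[a] += alpha * delta."""
    q = [0.0, 0.0]
    for _ in range(n_trials):
        a = random.randrange(2)                        # sample both arms uniformly
        r = 1.0 if random.random() < p_reward[a] else 0.0
        delta = r - q[a]                               # reward prediction error
        q[a] += alpha * delta
    return q

q = run_bandit(p_reward=[0.8, 0.2])  # q[0] and q[1] drift toward 0.8 and 0.2
```

A model-based learner would instead derive expected values from an explicit model of the task structure, so its prediction errors can diverge from the model-free ones; that divergence is what allows the two error signals to be separated in the EEG analysis.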

  9. Novel modeling of combinatorial miRNA targeting identifies SNP with potential role in bone density.

    Directory of Open Access Journals (Sweden)

    Claudia Coronnello

    MicroRNAs (miRNAs) are post-transcriptional regulators that bind to their target mRNAs through base complementarity. Predicting miRNA targets is a challenging task, and various studies have shown that existing algorithms suffer from a high number of false predictions and low to moderate overlap in their predictions. Until recently, very few algorithms considered the dynamic nature of the interactions, including the effect of less specific interactions, the miRNA expression level, and the effect of combinatorial miRNA binding. Addressing these issues can result in more accurate miRNA:mRNA modeling with many applications, including efficient miRNA-related SNP evaluation. We present a novel thermodynamic model based on the Fermi-Dirac equation that incorporates miRNA expression in the prediction of target occupancy, and we show that it improves the performance of two popular single-miRNA target finders. Modeling combinatorial miRNA targeting is a natural extension of this model. Two other algorithms show improved prediction efficiency when combinatorial binding models are considered. ComiR (Combinatorial miRNA targeting), a novel algorithm we developed, incorporates the improved predictions of the four target finders into a single probabilistic score using ensemble learning. Combining target scores of multiple miRNAs using ComiR improves predictions over the naïve method for target combination. The ComiR scoring scheme can be used for identification of SNPs affecting miRNA binding. As proof of principle, ComiR identified rs17737058 as disruptive to the miR-488-5p:NCOA1 interaction, which we confirmed in vitro. We also found rs17737058 to be significantly associated with decreased bone mineral density (BMD) in two independent cohorts, indicating that the miR-488-5p/NCOA1 regulatory axis is likely critical in maintaining BMD in women. With increasing availability of comprehensive high-throughput datasets from patients ComiR is expected to become an essential
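The Fermi-Dirac form mentioned in the abstract maps a site's binding strength and the miRNA's abundance into an occupancy probability between 0 and 1, and combinatorial targeting then combines several sites. The sketch below is a generic illustration of that idea, with hypothetical energies and an expression-dependent chemical-potential-like term; it is not ComiR's actual parameterization.

```python
import math

def occupancy(binding_energy, mu, kT=0.59):
    """Fermi-Dirac occupancy: p = 1 / (1 + exp((E - mu)/kT)).
    binding_energy: site energy (more negative = stronger binding);
    mu: chemical-potential-like term encoding miRNA expression level.
    Units and kT value are illustrative assumptions."""
    return 1.0 / (1.0 + math.exp((binding_energy - mu) / kT))

def combined_occupancy(energies, mu):
    """Probability that at least one of several independent sites is occupied,
    a simple stand-in for combinatorial miRNA targeting."""
    p_none = 1.0
    for e in energies:
        p_none *= 1.0 - occupancy(e, mu)
    return 1.0 - p_none

strong = occupancy(-12.0, mu=-8.0)  # strong site, abundantly expressed miRNA
weak = occupancy(-6.0, mu=-8.0)     # weak site, same miRNA
```

A target-disrupting SNP such as rs17737058 can be thought of as shifting a site's `binding_energy` upward, lowering its occupancy.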

  10. Using Tree Detection Algorithms to Predict Stand Sapwood Area, Basal Area and Stocking Density in Eucalyptus regnans Forest

    Directory of Open Access Journals (Sweden)

    Dominik Jaskierniak

    2015-06-01

    Managers of forested water supply catchments require efficient and accurate methods to quantify changes in forest water use due to changes in forest structure and density after disturbance. Using Light Detection and Ranging (LiDAR) data with as few as 0.9 pulses m−2, we applied a local maximum filtering (LMF) method and a normalised cut (NCut) algorithm to predict stocking density (SDen) of a 69-year-old Eucalyptus regnans forest comprising 251 plots with resolution of the order of 0.04 ha. Using the NCut method we predicted basal area per hectare (BAHa) and sapwood area per hectare (SAHa), a well-established proxy for transpiration. Sapwood area was also indirectly estimated with allometric relationships dependent on LiDAR-derived SDen and BAHa using a computationally efficient procedure. The individual tree detection (ITD) rates for the LMF and NCut methods respectively had 72% and 68% of stems correctly identified, 25% and 20% of stems missed, and 2% and 12% of stems over-segmented. The significantly higher computational requirement of the NCut algorithm makes the LMF method more suitable for predicting SDen across large forested areas. Using NCut-derived ITD segments, observed versus predicted stand BAHa had R2 ranging from 0.70 to 0.98 across six catchments, whereas a generalised parsimonious model applied to all sites used the portion of hits greater than 37 m in height (PH37) to explain 68% of BAHa. For extrapolating one-ha-resolution SAHa estimates across large forested catchments, we found that directly relating SAHa to NCut-derived LiDAR indices (R2 = 0.56) was slightly more accurate but computationally more demanding than indirect estimates of SAHa using allometric relationships consisting of BAHa (R2 = 0.50) or a sapwood perimeter index, defined as (BAHa·SDen)½ (R2 = 0.48).
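Local maximum filtering treats a cell of the canopy height model as a tree top when it is the strict maximum of a moving window and exceeds a height threshold. A minimal pure-Python sketch follows; the window radius, threshold, and toy grid are illustrative, not the parameters used in the study.

```python
def local_maxima(chm, radius=1, min_height=2.0):
    """Return (row, col) cells of a canopy height model that are the strict
    maximum of their (2*radius+1)^2 neighbourhood and exceed min_height."""
    rows, cols = len(chm), len(chm[0])
    tops = []
    for r in range(rows):
        for c in range(cols):
            h = chm[r][c]
            if h < min_height:
                continue
            neighbours = [
                chm[rr][cc]
                for rr in range(max(0, r - radius), min(rows, r + radius + 1))
                for cc in range(max(0, c - radius), min(cols, c + radius + 1))
                if (rr, cc) != (r, c)
            ]
            if all(h > n for n in neighbours):
                tops.append((r, c))
    return tops

# Toy canopy height model (metres) containing two crowns
chm = [
    [0.0, 1.0, 0.5, 0.2, 0.1],
    [1.0, 30.0, 1.2, 0.3, 0.2],
    [0.5, 1.1, 0.8, 1.5, 25.0],
    [0.2, 0.4, 0.9, 1.2, 1.4],
]
tops = local_maxima(chm)
```

Stocking density then follows as the count of detected tops divided by the plot area, which is why detection rate drives the accuracy of SDen estimates.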

  11. Enhancement of a Turbulence Sub-Model for More Accurate Predictions of Vertical Stratifications in 3D Coastal and Estuarine Modeling

    Directory of Open Access Journals (Sweden)

    Wenrui Huang

    2010-03-01

    This paper presents an improvement of Mellor and Yamada's 2nd-order turbulence model in the Princeton Ocean Model (POM) for better predictions of vertical salinity stratification in estuaries. The model was evaluated in a strongly stratified estuary, the Apalachicola River, Florida, USA. The three-dimensional hydrodynamic model was applied to study the stratified flow and salinity intrusion in the estuary in response to tide, wind, and buoyancy forces. Model tests indicate that predictions overestimate the stratification when using the default turbulence parameters. Analytic studies of density-induced and wind-induced flows indicate that accurate estimation of vertical eddy viscosity plays an important role in describing vertical profiles. Initial model-revision experiments showed that the traditional approach of modifying empirical constants in the turbulence model leads to numerical instability. In order to improve the performance of the turbulence model while maintaining numerical stability, a stratification factor was introduced to allow adjustment of the vertical turbulent eddy viscosity and diffusivity. Sensitivity studies indicate that a stratification factor ranging from 1.0 to 1.2 does not cause numerical instability in the Apalachicola River. Model simulations show that increasing the turbulent eddy viscosity by a stratification factor of 1.12 results in an optimal agreement between model predictions and observations in the case study presented here. The proposed stratification factor provides a useful way for coastal modelers to improve turbulence model performance in predicting vertical turbulent mixing in stratified estuaries and coastal waters.

  12. Nonlinear chaotic model for predicting storm surges

    Directory of Open Access Journals (Sweden)

    M. Siek

    2010-09-01

    This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by adaptive local models based on the dynamical neighbours found in the reconstructed phase space of the observables. We implemented univariate and multivariate chaotic models with direct and multi-step prediction techniques and optimized these models using an exhaustive search method. The built models were tested for predicting storm surge dynamics under different stormy conditions in the North Sea, and were compared to neural network models. The results show that the chaotic models can generally provide reliable and accurate short-term storm surge predictions.
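The core of such chaotic models is a time-delay embedding of the series followed by a local prediction from the nearest dynamical neighbours in the reconstructed phase space. The sketch below uses a zeroth-order local model (averaging neighbour successors) on a toy periodic signal; the embedding dimension, delay, and neighbour count are illustrative choices, not the paper's optimized values.

```python
import math

def embed(series, dim=3, delay=1):
    """Time-delay embedding: x_t -> (x_t, x_{t-delay}, ..., x_{t-(dim-1)*delay})."""
    start = (dim - 1) * delay
    return [tuple(series[t - k * delay] for k in range(dim)) for t in range(start, len(series))]

def predict_next(series, dim=3, delay=1, n_neighbors=3):
    """Zeroth-order local model: average the successors of the nearest
    phase-space neighbours of the current state."""
    points = embed(series, dim, delay)
    current = points[-1]
    # neighbours must have a known successor, so the last point is excluded
    dists = sorted((math.dist(p, current), i) for i, p in enumerate(points[:-1]))
    nearest = [i for _, i in dists[:n_neighbors]]
    offset = (dim - 1) * delay  # series index of points[i]'s successor is offset + i + 1
    return sum(series[offset + i + 1] for i in nearest) / n_neighbors

# Toy periodic "surge" signal with an exact 25-step period
series = [math.sin(2 * math.pi * t / 25) for t in range(300)]
forecast = predict_next(series)
```

Multi-step prediction iterates this one-step forecast, appending each prediction to the series before predicting again.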

  13. High accuracy satellite drag model (HASDM)

    Science.gov (United States)

    Storz, Mark F.; Bowman, Bruce R.; Branson, Major James I.; Casali, Stephen J.; Tobiska, W. Kent

    The dominant error source in force models used to predict low-perigee satellite trajectories is atmospheric drag. Errors in operational thermospheric density models cause significant errors in predicted satellite positions, since these models do not account for dynamic changes in atmospheric drag for orbit predictions. The Air Force Space Battlelab's High Accuracy Satellite Drag Model (HASDM) estimates and predicts (out to three days) a dynamically varying global density field. HASDM includes the Dynamic Calibration Atmosphere (DCA) algorithm that solves for the phases and amplitudes of the diurnal and semidiurnal variations of thermospheric density in near real-time from the observed drag effects on a set of Low Earth Orbit (LEO) calibration satellites. The density correction is expressed as a function of latitude, local solar time and altitude. In HASDM, a time series prediction filter relates the extreme ultraviolet (EUV) energy index E10.7 and the geomagnetic storm index ap to the DCA density correction parameters. The E10.7 index is generated by the SOLAR2000 model, the first full spectrum model of solar irradiance. The estimated and predicted density fields will be used operationally to significantly improve the accuracy of predicted trajectories for all low-perigee satellites.

  14. Predicting the weathering of fuel and oil spills: A diffusion-limited evaporation model.

    Science.gov (United States)

    Kotzakoulakis, Konstantinos; George, Simon C

    2018-01-01

    The majority of the evaporation models currently available in the literature for the prediction of oil spill weathering do not take into account diffusion-limited mass transport and the formation of a concentration gradient in the oil phase. The altered surface concentration of the spill caused by diffusion-limited transport leads to a slower evaporation rate compared to the predictions of diffusion-agnostic evaporation models. The model presented in this study incorporates a diffusive layer in the oil phase and predicts the diffusion-limited evaporation rate. The information required is the composition of the fluid from gas chromatography or, alternatively, the distillation data. If the density or a single viscosity measurement is available, the accuracy of the predictions is higher. Environmental conditions such as water temperature, air pressure and wind velocity are taken into account. The model was tested with synthetic mixtures, petroleum fuels and crude oils with initial viscosities ranging from 2 to 13,000 cSt. The tested temperatures varied from 0 °C to 23.4 °C and wind velocities from 0.3 to 3.8 m/s. The average absolute deviation (AAD) of the diffusion-limited model ranged between 1.62% and 24.87%. In comparison, the AAD of a diffusion-agnostic model ranged between 2.34% and 136.62% against the same tested fluids. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Utilizing multiple scale models to improve predictions of extra-axial hemorrhage in the immature piglet.

    Science.gov (United States)

    Scott, Gregory G; Margulies, Susan S; Coats, Brittany

    2016-10-01

    Traumatic brain injury (TBI) is a leading cause of death and disability in the USA. To help understand and better predict TBI, researchers have developed complex finite element (FE) models of the head which incorporate many biological structures such as scalp, skull, meninges, brain (with gray/white matter differentiation), and vasculature. However, most models drastically simplify the membranes and substructures between the pia and arachnoid membranes. We hypothesize that substructures in the pia-arachnoid complex (PAC) contribute substantially to brain deformation following head rotation, and that when included in FE models accuracy of extra-axial hemorrhage prediction improves. To test these hypotheses, microscale FE models of the PAC were developed to span the variability of PAC substructure anatomy and regional density. The constitutive response of these models were then integrated into an existing macroscale FE model of the immature piglet brain to identify changes in cortical stress distribution and predictions of extra-axial hemorrhage (EAH). Incorporating regional variability of PAC substructures substantially altered the distribution of principal stress on the cortical surface of the brain compared to a uniform representation of the PAC. Simulations of 24 non-impact rapid head rotations in an immature piglet animal model resulted in improved accuracy of EAH prediction (to 94 % sensitivity, 100 % specificity), as well as a high accuracy in regional hemorrhage prediction (to 82-100 % sensitivity, 100 % specificity). We conclude that including a biofidelic PAC substructure variability in FE models of the head is essential for improved predictions of hemorrhage at the brain/skull interface.

  16. Extracting falsifiable predictions from sloppy models.

    Science.gov (United States)

    Gutenkunst, Ryan N; Casey, Fergal P; Waterfall, Joshua J; Myers, Christopher R; Sethna, James P

    2007-12-01

    Successful predictions are among the most compelling validations of any model. Extracting falsifiable predictions from nonlinear multiparameter models is complicated by the fact that such models are commonly sloppy, possessing sensitivities to different parameter combinations that range over many decades. Here we discuss how sloppiness affects the sorts of data that best constrain model predictions, makes linear uncertainty approximations dangerous, and introduces computational difficulties in Monte-Carlo uncertainty analysis. We also present a useful test problem and suggest refinements to the standards by which models are communicated.

  17. Predictability of bone density at posterior mandibular implant sites using cone-beam computed tomography intensity values

    OpenAIRE

    Alkhader, Mustafa; Hudieb, Malik; Khader, Yousef

    2017-01-01

    Objective: The aim of this study was to investigate the predictability of bone density at posterior mandibular implant sites using cone-beam computed tomography (CBCT) intensity values. Materials and Methods: CBCT cross-sectional images for 436 posterior mandibular implant sites were selected for the study. Using Invivo software (Anatomage, San Jose, California, USA), two observers classified the bone density into three categories: low, intermediate, and high, and CBCT intensity values were g...

  18. Predictions from a flavour GUT model combined with a SUSY breaking sector

    Science.gov (United States)

    Antusch, Stefan; Hohl, Christian

    2017-10-01

    We discuss how flavour GUT models in the context of supergravity can be completed with a simple SUSY breaking sector, such that the flavour-dependent (non-universal) soft breaking terms can be calculated. As an example, we discuss a model based on an SU(5) GUT symmetry and an A4 family symmetry, plus additional discrete "shaping symmetries" and a ℤ4R symmetry. We calculate the soft terms and identify the relevant high-scale input parameters, and investigate the resulting predictions for the low-scale observables, such as flavour violating processes, the sparticle spectrum and the dark matter relic density.

  19. Non-local energy density functionals: models plus some exact general results

    International Nuclear Information System (INIS)

    March, N.H.

    2001-02-01

    Holas and March [Phys. Rev. A 51, 2040 (1995)] gave a formally exact expression for the force −∂V_xc(r̃)/∂r̃ associated with the exchange-correlation potential V_xc(r̃) of density functional theory. This forged a precise link between first- and second-order density matrices and V_xc(r̃). Here models are presented in which these low-order matrices can be related to the ground-state electron density. This allows non-local energy density functionals to be constructed within the framework of such models. Finally, results emerging from these models have led to the derivation of some exact 'nuclear cusp' relations for exchange and correlation energy densities in molecules, clusters and condensed phases. (author)

  20. Temperature- and density-dependent quark mass model

    Indian Academy of Sciences (India)

    Since a fair proportion of such dense protostars are likely to be ... the temperature- and density-dependent quark mass (TDDQM) model which we had employed in ... instead of Tc ~170 MeV which is a favoured value for the ud matter [26].

  1. Using Prediction Markets to Generate Probability Density Functions for Climate Change Risk Assessment

    Science.gov (United States)

    Boslough, M.

    2011-12-01

    Climate-related uncertainty is traditionally presented as an error bar, but it is becoming increasingly common to express it in terms of a probability density function (PDF). PDFs are a necessary component of probabilistic risk assessments, for which simple "best estimate" values are insufficient. Many groups have generated PDFs for climate sensitivity using a variety of methods. These PDFs are broadly consistent, but vary significantly in their details. One axiom of the verification and validation community is, "codes don't make predictions, people make predictions." This is a statement of the fact that subject domain experts generate results using assumptions within a range of epistemic uncertainty and interpret them according to their expert opinion. Different experts with different methods will arrive at different PDFs. For effective decision support, a single consensus PDF would be useful. We suggest that market methods can be used to aggregate an ensemble of opinions into a single distribution that expresses the consensus. Prediction markets have been shown to be highly successful at forecasting the outcome of events ranging from elections to box office returns. In prediction markets, traders can take a position on whether some future event will or will not occur. These positions are expressed as contracts that are traded in a double-auction market that aggregates price, which can be interpreted as a consensus probability that the event will take place. Since climate sensitivity cannot directly be measured, it cannot be predicted. However, the changes in global mean surface temperature are a direct consequence of climate sensitivity, changes in forcing, and internal variability. Viable prediction markets require an undisputed event outcome on a specific date. Climate-related markets exist on Intrade.com, an online trading exchange. One such contract is titled "Global Temperature Anomaly for Dec 2011 to be greater than 0.65 Degrees C." Settlement is based

  2. Prediction of nanofluids properties: the density and the heat capacity

    Science.gov (United States)

    Zhelezny, V. P.; Motovoy, I. V.; Ustyuzhanin, E. E.

    2017-11-01

    The results given in this report show that additives of Al2O3 nanoparticles increase the density and decrease the heat capacity of isopropanol. Based on the experimental data, the excess molar volume and the excess molar heat capacity were calculated. The report suggests a new method for predicting the molar volume and molar heat capacity of nanofluids. It is established that the values of the excess thermodynamic functions are determined by the properties and the volume of the structurally oriented layers of the base fluid molecules near the surface of the nanoparticles. The heat capacity of these structurally oriented layers is less than the heat capacity of the bulk base fluid at the same conditions owing to the greater ordering of their structure. It is shown that information on the geometric dimensions of the structured layers of the base fluid near nanoparticles can be obtained from data on the nanofluid density and, at ambient temperature, by the dynamic light scattering method. For calculations of the nanofluid heat capacity over a wide range of temperatures, a new correlation based on extended scaling is proposed.
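The excess molar volume used above is the difference between the mixture's measured molar volume and the ideal mole-fraction-weighted value. The sketch below shows that bookkeeping for a binary mixture; the molar masses, densities, and mole fractions are rough illustrative numbers, not the paper's measurements.

```python
def molar_volume(molar_mass, density):
    """V = M / rho; e.g. g/mol over g/cm^3 gives cm^3/mol."""
    return molar_mass / density

def excess_molar_volume(x1, m1, rho1, m2, rho2, rho_mix):
    """V_E = V_mix - (x1*V1 + x2*V2) for a binary mixture.
    A negative V_E means the mixture is denser than the ideal mixing rule."""
    x2 = 1.0 - x1
    m_mix = x1 * m1 + x2 * m2
    v_mix = m_mix / rho_mix
    v_ideal = x1 * molar_volume(m1, rho1) + x2 * molar_volume(m2, rho2)
    return v_mix - v_ideal

# Illustrative isopropanol / Al2O3 numbers (g/mol, g/cm^3); rho_mix is assumed
v_e = excess_molar_volume(x1=0.99, m1=60.1, rho1=0.785,
                          m2=101.96, rho2=3.95, rho_mix=0.805)
```

A measured mixture density above the ideal-mixing value yields `v_e < 0`, the signature the report attributes to the structurally oriented layers around the nanoparticles.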

  3. A generalized model for estimating the energy density of invertebrates

    Science.gov (United States)

    James, Daniel A.; Csargo, Isak J.; Von Eschen, Aaron; Thul, Megan D.; Baker, James M.; Hayer, Cari-Ann; Howell, Jessica; Krause, Jacob; Letvin, Alex; Chipps, Steven R.

    2012-01-01

    Invertebrate energy density (ED) values are traditionally measured using bomb calorimetry. However, many researchers rely on a few published literature sources for ED values because of time and sampling constraints on measuring ED with bomb calorimetry. Literature values often do not account for spatial or temporal variability associated with invertebrate ED, so these values can be unreliable for use in models and other ecological applications. We evaluated the generality of the relationship between invertebrate ED and the proportion of dry-to-wet mass (pDM). We then developed and tested a regression model to predict ED from pDM based on a taxonomically, spatially, and temporally diverse sample of invertebrates representing 28 orders in aquatic (freshwater, estuarine, and marine) and terrestrial (temperate and arid) habitats from 4 continents and 2 oceans. Samples included invertebrates collected in all seasons over the last 19 y. Evaluation of these data revealed a significant relationship between ED and pDM (r2 = 0.96), indicating that pDM can be used to estimate ED at considerable cost savings compared to traditional bomb calorimetry approaches. This model should prove useful for a wide range of ecological studies because it is unaffected by taxonomic, seasonal, or spatial variability.
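The ED-pDM relationship is a simple linear regression, which can be sketched with a closed-form ordinary least-squares fit. The sample values below are hypothetical, not the study's 28-order dataset.

```python
def ols_fit(x, y):
    """Closed-form simple linear regression: returns (slope, intercept, r_squared)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1.0 - ss_res / ss_tot

# Hypothetical samples: proportion dry-to-wet mass vs energy density (J/g wet mass)
p_dm = [0.10, 0.15, 0.20, 0.25, 0.30]
ed   = [2100.0, 3200.0, 4400.0, 5400.0, 6600.0]
slope, intercept, r2 = ols_fit(p_dm, ed)
predicted = slope * 0.18 + intercept  # ED estimate for a new sample with pDM = 0.18
```

In practice only pDM (an oven-drying measurement) is needed for a new sample, which is the source of the cost savings over bomb calorimetry.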

  4. Forward modeling of gravity data using geostatistically generated subsurface density variations

    Science.gov (United States)

    Phelps, Geoffrey

    2016-01-01

    Using geostatistical models of density variations in the subsurface, constrained by geologic data, forward models of gravity anomalies can be generated by discretizing the subsurface and calculating the cumulative effect of each cell (pixel). The results of such stochastically generated forward gravity anomalies can be compared with the observed gravity anomalies to find density models that match the observed data. These models have an advantage over forward gravity anomalies generated using polygonal bodies of homogeneous density because generating numerous realizations explores a larger region of the solution space. The stochastic modeling can be thought of as dividing the forward model into two components: that due to the shape of each geologic unit and that due to the heterogeneous distribution of density within each geologic unit. The modeling demonstrates that the internally heterogeneous distribution of density within each geologic unit can contribute significantly to the resulting calculated forward gravity anomaly. Furthermore, the stochastic models match observed statistical properties of geologic units, the solution space is more broadly explored by producing a suite of successful models, and the likelihood of a particular conceptual geologic model can be compared. The Vaca Fault near Travis Air Force Base, California, can be successfully modeled as a normal or strike-slip fault, with the normal fault model being slightly more probable. It can also be modeled as a reverse fault, although this structural geologic configuration is highly unlikely given the realizations we explored.
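The forward calculation described above discretizes the subsurface and sums each cell's contribution at the observation point. Approximating every cell as a point mass at its centre gives the minimal sketch below; real modeling codes use exact prism formulas, and the body geometry and density contrast here are hypothetical.

```python
# Forward gravity anomaly from a discretized density model, with each cell
# approximated as a point mass at its centre. Coordinates in metres; z is
# depth, positive downward. Illustrative geometry, not a real model.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravity_anomaly(cells, station):
    """Vertical (downward-positive) gravity at `station` = (x, y, depth).
    Each cell is (x, y, depth, volume_m3, density_contrast_kg_m3)."""
    sx, sy, sz = station
    gz = 0.0
    for x, y, z, vol, drho in cells:
        dx, dy, dz = x - sx, y - sy, z - sz
        r = (dx * dx + dy * dy + dz * dz) ** 0.5
        gz += G * drho * vol * dz / r ** 3  # vertical component of the attraction
    return gz

# One buried cell: 100 m deep, 1e6 m^3, +300 kg/m^3 density contrast
body = [(0.0, 0.0, 100.0, 1.0e6, 300.0)]
g_above = gravity_anomaly(body, station=(0.0, 0.0, 0.0))
g_offset = gravity_anomaly(body, station=(500.0, 0.0, 0.0))
```

In the stochastic workflow, each geostatistical realization supplies a different set of `drho` values for the same cell geometry, and the resulting anomalies are compared against the observed gravity field.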

  5. A state-space modeling approach to estimating canopy conductance and associated uncertainties from sap flux density data.

    Science.gov (United States)

    Bell, David M; Ward, Eric J; Oishi, A Christopher; Oren, Ram; Flikkema, Paul G; Clark, James S

    2015-07-01

    Uncertainties in ecophysiological responses to environment, such as the impact of atmospheric and soil moisture conditions on plant water regulation, limit our ability to estimate key inputs for ecosystem models. Advanced statistical frameworks provide coherent methodologies for relating observed data, such as stem sap flux density, to unobserved processes, such as canopy conductance and transpiration. To address this need, we developed a hierarchical Bayesian State-Space Canopy Conductance (StaCC) model linking canopy conductance and transpiration to tree sap flux density from a 4-year experiment in the North Carolina Piedmont, USA. Our model builds on existing ecophysiological knowledge, but explicitly incorporates uncertainty in canopy conductance, internal tree hydraulics and observation error to improve estimation of canopy conductance responses to atmospheric drought (i.e., vapor pressure deficit), soil drought (i.e., soil moisture) and above canopy light. Our statistical framework not only predicted sap flux observations well, but it also allowed us to simultaneously gap-fill missing data as we made inference on canopy processes, marking a substantial advance over traditional methods. The predicted and observed sap flux data were highly correlated (mean sensor-level Pearson correlation coefficient = 0.88). Variations in canopy conductance and transpiration associated with environmental variation across days to years were many times greater than the variation associated with model uncertainties. Because some variables, such as vapor pressure deficit and soil moisture, were correlated at the scale of days to weeks, canopy conductance responses to individual environmental variables were difficult to interpret in isolation. Still, our results highlight the importance of accounting for uncertainty in models of ecophysiological and ecosystem function where the process of interest, canopy conductance in this case, is not observed directly. The StaCC modeling

  6. Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models 1: repeating earthquakes

    Science.gov (United States)

    Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki

    2012-01-01

    The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments, we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or a fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the time and slip of these events are predicted quite well by fixed-slip and fixed-recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.

  7. Local relative density modulates failure and strength in vertically aligned carbon nanotubes.

    Science.gov (United States)

    Pathak, Siddhartha; Mohan, Nisha; Decolvenaere, Elizabeth; Needleman, Alan; Bedewy, Mostafa; Hart, A John; Greer, Julia R

    2013-10-22

    Micromechanical experiments, image analysis, and theoretical modeling revealed that local failure events and compressive stresses of vertically aligned carbon nanotubes (VACNTs) were uniquely linked to relative density gradients. Edge detection analysis of systematically obtained scanning electron micrographs was used to quantify a microstructural figure-of-merit related to relative local density along VACNT heights. Sequential bottom-to-top buckling and hardening in stress-strain response were observed in samples with smaller relative density at the bottom. When density gradient was insubstantial or reversed, bottom regions always buckled last, and a flat stress plateau was obtained. These findings were consistent with predictions of a 2D material model based on a viscoplastic solid with plastic non-normality and a hardening-softening-hardening plastic flow relation. The hardening slope in compression generated by the model was directly related to the stiffness gradient along the sample height, and hence to the local relative density. These results demonstrate that a microstructural figure-of-merit, the effective relative density, can be used to quantify and predict the mechanical response.

  8. A maximum entropy model for predicting wild boar distribution in Spain

    Directory of Open Access Journals (Sweden)

    Jaime Bosch

    2014-09-01

    Full Text Available Wild boar (Sus scrofa) populations in many areas of the Palearctic including the Iberian Peninsula have grown continuously over the last century. This increase has led to numerous different types of conflicts due to the damage these mammals can cause to agriculture, the problems they create in the conservation of natural areas, and the threat they pose to animal health. In the context of both wildlife management and the design of health programs for disease control, it is essential to know how wild boar are distributed on a large spatial scale. Given that quantifying the distribution of wild species using census techniques is virtually impossible in the case of large-scale studies, modeling techniques have to be used instead to estimate animals’ distributions, densities, and abundances. In this study, the potential distribution of wild boar in Spain was predicted by integrating presence data and environmental variables into a MaxEnt approach. We built and tested models using 100 bootstrapped replicates. For each replicate or simulation, presence data were divided into two subsets that were used for model fitting (60% of the data) and cross-validation (40% of the data). The final model was found to be accurate, with an area under the receiver operating characteristic curve (AUC) value of 0.79. Six explanatory variables for predicting wild boar distribution were identified on the basis of the percentage of their contribution to the model. The model exhibited a high degree of predictive accuracy, which has been confirmed by its agreement with satellite images and field surveys.
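The validation procedure described in this record (bootstrapped 60/40 presence splits scored by AUC) can be sketched generically. The rank-based AUC below is the standard Mann-Whitney estimate; the suitability scores, split fraction and seed are illustrative assumptions, not the study's data.

```python
import random

def auc(scores_pos, scores_neg):
    """Mann-Whitney AUC: probability that a random presence point is
    scored above a random background point (ties count half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

def bootstrap_split(presences, train_frac=0.6, seed=0):
    """Shuffle presence records and split into fitting (60%) and
    cross-validation (40%) subsets, as in the record's protocol."""
    rng = random.Random(seed)
    data = presences[:]
    rng.shuffle(data)
    k = int(len(data) * train_frac)
    return data[:k], data[k:]

# toy habitat-suitability scores for presence vs. background locations
pos = [0.9, 0.8, 0.75, 0.6, 0.55]
neg = [0.7, 0.5, 0.4, 0.3, 0.2]
print(round(auc(pos, neg), 2))  # → 0.92
```

In a full replicate study, `auc` would be averaged over the 100 bootstrap splits produced by `bootstrap_split`.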

  9. EFFICIENT PREDICTIVE MODELLING FOR ARCHAEOLOGICAL RESEARCH

    OpenAIRE

    Balla, A.; Pavlogeorgatos, G.; Tsiafakis, D.; Pavlidis, G.

    2014-01-01

    The study presents a general methodology for designing, developing and implementing predictive modelling for identifying areas of archaeological interest. The methodology is based on documented archaeological data and geographical factors, geospatial analysis and predictive modelling, and has been applied to the identification of possible Macedonian tombs’ locations in Northern Greece. The model was tested extensively and the results were validated using a commonly used predictive gain, which...

  10. Using soft computing techniques to predict corrected air permeability using Thomeer parameters, air porosity and grain density

    Science.gov (United States)

    Nooruddin, Hasan A.; Anifowose, Fatai; Abdulraheem, Abdulazeez

    2014-03-01

    Soft computing techniques have recently become very popular in the oil industry. A number of computational intelligence-based predictive methods have been widely applied in the industry with high prediction capabilities, including feed-forward neural networks, radial basis function networks, generalized regression neural networks, functional networks, support vector regression and adaptive network fuzzy inference systems. A comparative study among the most popular soft computing techniques is presented using a large dataset published in the literature describing multimodal pore systems in the Arab D formation. The inputs to the models are air porosity, grain density, and Thomeer parameters obtained from mercury injection capillary pressure profiles. Corrected air permeability is the target variable. Applying the developed permeability models in a recent reservoir characterization workflow ensures consistency between micro- and macro-scale information, represented mainly by the Thomeer parameters and absolute permeability. The dataset was divided into two parts, with 80% of the data used for training and 20% for testing. The target permeability variable was transformed to the logarithmic scale as a pre-processing step, which also produced better correlations with the input variables. Statistical and graphical analyses of the results, including permeability cross-plots and detailed error measures, were performed. In general, the comparative study showed very close results among the developed models. The feed-forward neural network permeability model showed the lowest average relative error, average absolute relative error, standard deviation of error and root mean square error, making it the best model for such problems. The adaptive network fuzzy inference system also showed very good results.
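The workflow in this record (log-transforming permeability, an 80/20 split, and ranking models by relative-error measures) can be sketched with a toy stand-in. The synthetic porosity-permeability relation and the plain least-squares line below replace the paper's neural and fuzzy models, which are not reproduced here; all numbers are fabricated for illustration.

```python
import math
import random

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (a toy stand-in for the neural models)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def aare(actual, predicted):
    """Average absolute relative error, one of the measures used to rank models."""
    return sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

random.seed(1)
porosity = [random.uniform(0.05, 0.30) for _ in range(50)]
# synthetic permeability (mD): log-linear in porosity plus measurement noise
perm = [10 ** (8 * phi + random.gauss(0, 0.05)) for phi in porosity]

# 80/20 split; log-transform the target as a pre-processing step
k = int(0.8 * len(porosity))
a, b = fit_line(porosity[:k], [math.log10(p) for p in perm[:k]])
pred = [10 ** (a * phi + b) for phi in porosity[k:]]
print(aare(perm[k:], pred) < 0.5)
```

Fitting in log space and back-transforming is what makes a relative-error measure like AARE the natural score here.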

  11. Spatial Economics Model Predicting Transport Volume

    Directory of Open Access Journals (Sweden)

    Lu Bo

    2016-10-01

    Full Text Available It is extremely important to predict logistics requirements in a scientific and rational way. However, in recent years the prediction methods have improved little, and traditional statistical prediction methods suffer from low precision and poor interpretability: they can neither guarantee the generalization ability of the prediction model theoretically nor explain the model effectively. Therefore, combining theories from spatial economics, industrial economics, and neo-classical economics, and taking the city of Zhuanghe as the research object, this study identifies the leading industries that generate large cargo volumes and predicts the static logistics generation of Zhuanghe and its hinterland. By integrating the various factors that affect regional logistics requirements, the study establishes a logistics requirements potential model based on spatial economic principles, expanding logistics requirements prediction from purely statistical principles to the new area of spatial and regional economics.

  12. Radiomic modeling of BI-RADS density categories

    Science.gov (United States)

    Wei, Jun; Chan, Heang-Ping; Helvie, Mark A.; Roubidoux, Marilyn A.; Zhou, Chuan; Hadjiiski, Lubomir

    2017-03-01

    Screening mammography is the most effective and low-cost method to date for early cancer detection. Mammographic breast density has been shown to be highly correlated with breast cancer risk. We are developing a radiomic model for BI-RADS density categorization on digital mammography (FFDM) with a supervised machine learning approach. With IRB approval, we retrospectively collected 478 FFDMs from 478 women. As a gold standard, breast density was assessed by an MQSA radiologist based on BI-RADS categories. The raw FFDMs were used for computerized density assessment. The raw FFDM first underwent log-transform to approximate the x-ray sensitometric response, followed by multiscale processing to enhance the fibroglandular densities and parenchymal patterns. Three ROIs were automatically identified based on the keypoint distribution, where the keypoints were obtained as the extrema in the image Gaussian scale-space. A total of 73 features, including intensity and texture features that describe the density and the parenchymal pattern, were extracted from each breast. Our BI-RADS density estimator was constructed by using a random forest classifier. We used a 10-fold cross validation resampling approach to estimate the errors. With the random forest classifier, computerized density categories for 412 of the 478 cases agreed with the radiologist's assessment (weighted kappa = 0.93). The machine learning method with radiomic features as predictors demonstrated a high accuracy in classifying FFDMs into BI-RADS density categories. Further work is underway to improve our system performance as well as to perform an independent testing using a large unseen FFDM set.
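The agreement figure reported here (weighted kappa = 0.93 over 478 cases) can be illustrated with a self-contained weighted kappa. The record does not state whether linear or quadratic weights were used, so the quadratic form below, and the toy label vectors, are assumptions.

```python
def weighted_kappa(rater_a, rater_b, n_cat=4):
    """Quadratically weighted kappa for ordinal labels 0..n_cat-1
    (e.g. BI-RADS density categories a-d mapped to 0-3)."""
    n = len(rater_a)
    # observed joint distribution
    obs = [[0.0] * n_cat for _ in range(n_cat)]
    for x, y in zip(rater_a, rater_b):
        obs[x][y] += 1.0 / n
    # marginal distributions of each rater
    pa = [rater_a.count(i) / n for i in range(n_cat)]
    pb = [rater_b.count(i) / n for i in range(n_cat)]
    w = lambda i, j: ((i - j) ** 2) / (n_cat - 1) ** 2   # quadratic disagreement weight
    disagree_obs = sum(w(i, j) * obs[i][j] for i in range(n_cat) for j in range(n_cat))
    disagree_exp = sum(w(i, j) * pa[i] * pb[j] for i in range(n_cat) for j in range(n_cat))
    return 1.0 - disagree_obs / disagree_exp

# toy labels: radiologist vs. computerized assessment, one disagreement
radiologist = [0, 1, 1, 2, 2, 2, 3, 3, 1, 2]
computer    = [0, 1, 1, 2, 2, 1, 3, 3, 1, 2]
print(round(weighted_kappa(radiologist, computer), 2))  # → 0.94
```

Because the weights grow with the square of the category distance, an off-by-one disagreement is penalized far less than a two- or three-category miss.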

  13. Modelling CO2-Brine Interfacial Tension using Density Gradient Theory

    KAUST Repository

    Ruslan, Mohd Fuad Anwari Che

    2018-03-01

    Knowledge of carbon dioxide (CO2)-brine interfacial tension (IFT) is important for the petroleum industry and for Carbon Capture and Storage (CCS) strategies. In the petroleum industry, CO2-brine IFT is especially important for CO2-based enhanced oil recovery, as it affects phase behavior and fluid transport in porous media. CCS, which involves storing CO2 in geological storage sites, also requires an understanding of CO2-brine IFT, as this parameter affects the quantity of CO2 that can be securely stored in a storage site. Several methods have been used to compute CO2-brine interfacial tension; one of them is the Density Gradient Theory (DGT) approach, in which IFT is computed from the component density distribution across the interface. However, the current model is only applicable to solutions of low to medium ionic strength. This limitation arises because the model only considers the increase of IFT due to changes in bulk phase properties and does not account for the ion distribution at the interface. In this study, a new modelling strategy to compute CO2-brine IFT based on DGT was proposed. In the proposed model, the ion distribution across the interface was accounted for by separating the interface into two sections, with the saddle point of the tangent plane distance defined as the boundary between them. Electrolyte is assumed to be present only in the second section, which is connected to the bulk liquid phase. Numerical simulations were performed using the proposed approach for single and mixed salt solutions of three salts (NaCl, KCl, and CaCl2), for temperatures from 298 K to 443 K, pressures from 2 MPa to 70 MPa, and ionic strengths from 0.085 mol·kg⁻¹ to 15 mol·kg⁻¹. The simulation results show that the tuned model was able to predict CO2-brine IFT with good accuracy for all studied cases. Comparison with the current DGT model showed that the proposed approach yields a better match with the experimental data

  14. Numeric model to predict the location of market demand and economic order quantity for retailers of supply chain

    Science.gov (United States)

    Fradinata, Edy; Marli Kesuma, Zurnila

    2018-05-01

    Polynomial and spline regression are the numeric models used here to compare method performance, build distance-relationship models for cement retailers in Banda Aceh, predict the market area for retailers, and compute the economic order quantity (EOQ). The models differ in accuracy as measured by mean square error (MSE). The distance relationships between retailers identify the density of retailers in the town. The dataset was collected from cement retailers' sales records together with global positioning system (GPS) locations. The sales data were plotted to assess the goodness of fit of the quadratic, cubic, and fourth-order polynomial methods, using the relationship between the x-abscissa and y-ordinate of the real sales data. The research yields several results: four models useful for predicting a retailer's market area under competition, a comparison of the methods' performance, the distance relationships between retailers, and an inventory policy based on the economic order quantity. The high-density retail areas coincide with population growth and construction projects. The spline outperformed the quadratic, cubic, and fourth-order polynomials, yielding a smaller MSE. The inventory policy adopted is a periodic-review policy.
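The economic order quantity mentioned in this record is conventionally the classic Wilson formula, Q* = sqrt(2DS/H). A minimal sketch follows; the demand and cost figures are hypothetical, not taken from the Banda Aceh data.

```python
import math

def eoq(annual_demand, order_cost, holding_cost):
    """Classic (Wilson) economic order quantity: Q* = sqrt(2*D*S / H),
    the order size that balances ordering cost against holding cost."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost)

# hypothetical figures for one cement retailer
D = 7200   # bags demanded per year
S = 50.0   # fixed cost per order
H = 4.0    # holding cost per bag per year
q = eoq(D, S, H)
print(round(q))          # optimal order size, in bags
print(round(D / q, 1))   # implied number of orders per year
```

Under a periodic-review policy, the review interval would then be chosen so that the average order placed is close to this Q*.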

  15. Color-flavor locked strange quark matter in a mass density-dependent model

    International Nuclear Information System (INIS)

    Chen Yuede; Wen Xinjian

    2007-01-01

    Properties of color-flavor locked (CFL) strange quark matter have been studied in a mass-density-dependent model, and compared with the results in the conventional bag model. In both models, the CFL phase is more stable than the normal nuclear matter for reasonable parameters. However, the lower density behavior of the sound velocity in this model is completely opposite to that in the bag model, which makes the maximum mass of CFL quark stars in the mass-density-dependent model larger than that in the bag model. (authors)

  16. Estimation of uncertainties in predictions of environmental transfer models: evaluation of methods and application to CHERPAC

    International Nuclear Information System (INIS)

    Koch, J.; Peterson, S-R.

    1995-10-01

    Models used to simulate environmental transfer of radionuclides typically include many parameters, the values of which are uncertain. An estimation of the uncertainty associated with the predictions is therefore essential. Different methods to quantify the uncertainty in the predictions due to parameter uncertainties are reviewed. A statistical approach using random sampling techniques is recommended for complex models with many uncertain parameters. In this approach, the probability density function of the model output is obtained from multiple realizations of the model according to a multivariate random sample of the different input parameters. Sampling efficiency can be improved by using a stratified scheme (Latin Hypercube Sampling). Sample size can also be restricted when statistical tolerance limits need to be estimated. Methods to rank parameters according to their contribution to uncertainty in the model prediction are also reviewed. Recommended are measures of sensitivity, correlation and regression coefficients that can be calculated on values of input and output variables generated during the propagation of uncertainties through the model. A parameter uncertainty analysis is performed for the CHERPAC food chain model which estimates subjective confidence limits and intervals on the predictions at a 95% confidence level. A sensitivity analysis is also carried out using partial rank correlation coefficients. This identifies and ranks the parameters which are the main contributors to uncertainty in the predictions, thereby guiding further research efforts. (author). 44 refs., 2 tabs., 4 figs
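The stratified scheme recommended here, Latin Hypercube Sampling, can be sketched on the unit hypercube: each parameter's range is cut into as many equal strata as there are model runs, and each stratum is used exactly once. In practice each column would then be mapped through the parameter's inverse CDF, a step omitted in this sketch.

```python
import random

def latin_hypercube(n_samples, n_params, seed=0):
    """Latin hypercube sample on the unit hypercube: each parameter's
    range is split into n_samples equal strata and each stratum is
    sampled exactly once, improving coverage over plain Monte Carlo."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_params):
        # one random point inside each stratum, then shuffle stratum order
        col = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(col)
        columns.append(col)
    # transpose: one row of parameter values per model realization
    return list(zip(*columns))

runs = latin_hypercube(10, 3)
print(len(runs), len(runs[0]))  # → 10 3
```

Each of the 10 rows is one multivariate input sample; running the transfer model once per row yields the output distribution described in the record.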

  17. Estimation of uncertainties in predictions of environmental transfer models: evaluation of methods and application to CHERPAC

    Energy Technology Data Exchange (ETDEWEB)

    Koch, J. [Israel Atomic Energy Commission, Yavne (Israel). Soreq Nuclear Research Center; Peterson, S-R.

    1995-10-01

    Models used to simulate environmental transfer of radionuclides typically include many parameters, the values of which are uncertain. An estimation of the uncertainty associated with the predictions is therefore essential. Different methods to quantify the uncertainty in the predictions due to parameter uncertainties are reviewed. A statistical approach using random sampling techniques is recommended for complex models with many uncertain parameters. In this approach, the probability density function of the model output is obtained from multiple realizations of the model according to a multivariate random sample of the different input parameters. Sampling efficiency can be improved by using a stratified scheme (Latin Hypercube Sampling). Sample size can also be restricted when statistical tolerance limits need to be estimated. Methods to rank parameters according to their contribution to uncertainty in the model prediction are also reviewed. Recommended are measures of sensitivity, correlation and regression coefficients that can be calculated on values of input and output variables generated during the propagation of uncertainties through the model. A parameter uncertainty analysis is performed for the CHERPAC food chain model which estimates subjective confidence limits and intervals on the predictions at a 95% confidence level. A sensitivity analysis is also carried out using partial rank correlation coefficients. This identifies and ranks the parameters which are the main contributors to uncertainty in the predictions, thereby guiding further research efforts. (author). 44 refs., 2 tabs., 4 figs.

  18. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

    Full Text Available We propose a weather prediction model based on a neural network and fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the “fuzzy rule-based neural network”, which simulates sequential relations among fuzzy sets using an artificial neural network; the second part is the “neural fuzzy inference system”, which is based on the first part but can learn new fuzzy rules from the previous ones according to the algorithm we propose. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. It is well known that the need for accurate weather prediction is apparent when considering the benefits. However, the excessive pursuit of accuracy in weather prediction makes some of the “accurate” prediction results meaningless, and numerical prediction models are often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we make the predicted outcomes of precipitation more accurate and the prediction methods simpler than with a complex numerical forecasting model that occupies large computational resources, is time-consuming and has a low predictive accuracy rate. Accordingly, we achieve more accurate precipitation predictions than with traditional artificial neural networks, which have low predictive accuracy.

  19. Damping of Resonantly Forced Density Waves in Dense Planetary Rings

    Science.gov (United States)

    Lehmann, Marius; Schmidt, Jürgen; Salo, Heikki

    2016-10-01

    We address the stability of resonantly forced density waves in dense planetary rings. Goldreich and Tremaine (1978) already argued that density waves might be unstable, depending on the relationship between the ring's viscosity and the surface mass density. In a recent paper (Schmidt et al. 2016) we pointed out that when, within a fluid description of the ring dynamics, the criterion for viscous overstability is satisfied, forced spiral density waves become unstable as well. In this case, linear theory fails to describe the damping. We apply the multiple-scale formalism to derive a weakly nonlinear damping relation from a hydrodynamical model. This relation describes the resonant excitation and nonlinear viscous damping of spiral density waves in a vertically integrated fluid disk with density-dependent transport coefficients. The model consistently predicts linear instability of density waves in a ring region where the conditions for viscous overstability are met. In this case, sufficiently far away from the Lindblad resonance, the surface mass density perturbation is predicted to saturate to a constant value due to nonlinear viscous damping. In general the model wave damping lengths depend on a set of input parameters, such as the distance to the threshold for viscous overstability and the ground state surface mass density. Our new model compares reasonably well with the streamline model for nonlinear density waves of Borderies et al. (1986). Deviations become substantial in the highly nonlinear regime, corresponding to strong satellite forcing. Nevertheless, we generally observe good or at least qualitative agreement between the wave amplitude profiles of both models. The streamline approach is superior at matching the total wave profile of waves observed in Saturn's rings, while our new damping relation is a comparably handy tool to gain insight into the evolution of the wave amplitude with distance from resonance, and the different regimes of

  20. Solar radio proxies for improved satellite orbit prediction

    Science.gov (United States)

    Yaya, Philippe; Hecker, Louis; Dudok de Wit, Thierry; Fèvre, Clémence Le; Bruinsma, Sean

    2017-12-01

    Specification and forecasting of solar drivers to thermosphere density models is critical for satellite orbit prediction and debris avoidance. Satellite operators routinely forecast orbits up to 30 days into the future. This requires forecasts of the drivers to these orbit prediction models, such as the solar Extreme-UV (EUV) flux and geomagnetic activity. Most density models use the 10.7 cm radio flux (F10.7 index) as a proxy for solar EUV. However, daily measurements at other centimetric wavelengths have also been performed by the Nobeyama Radio Observatory (Japan) since the 1950's, thereby offering prospects for improving orbit modeling. Here we present a pre-operational service at the Collecte Localisation Satellites company that collects these different observations in one single homogeneous dataset and provides a 30-day forecast on a daily basis. Interpolation and preprocessing algorithms were developed to fill in missing data and remove anomalous values. We compared various empirical time series prediction techniques and selected a multi-wavelength non-recursive analogue neural network. The prediction of the 30 cm flux, and to a lesser extent that of the 10.7 cm flux, performs better than NOAA's present prediction of the 10.7 cm flux, especially during periods of high solar activity. In addition, we find that the DTM-2013 density model (Drag Temperature Model) performs better with (past and predicted) values of the 30 cm radio flux than with the 10.7 flux.

  1. Solar radio proxies for improved satellite orbit prediction

    Directory of Open Access Journals (Sweden)

    Yaya Philippe

    2017-01-01

    Full Text Available Specification and forecasting of solar drivers to thermosphere density models is critical for satellite orbit prediction and debris avoidance. Satellite operators routinely forecast orbits up to 30 days into the future. This requires forecasts of the drivers to these orbit prediction models, such as the solar Extreme-UV (EUV) flux and geomagnetic activity. Most density models use the 10.7 cm radio flux (F10.7 index) as a proxy for solar EUV. However, daily measurements at other centimetric wavelengths have also been performed by the Nobeyama Radio Observatory (Japan) since the 1950's, thereby offering prospects for improving orbit modeling. Here we present a pre-operational service at the Collecte Localisation Satellites company that collects these different observations in one single homogeneous dataset and provides a 30-day forecast on a daily basis. Interpolation and preprocessing algorithms were developed to fill in missing data and remove anomalous values. We compared various empirical time series prediction techniques and selected a multi-wavelength non-recursive analogue neural network. The prediction of the 30 cm flux, and to a lesser extent that of the 10.7 cm flux, performs better than NOAA's present prediction of the 10.7 cm flux, especially during periods of high solar activity. In addition, we find that the DTM-2013 density model (Drag Temperature Model) performs better with (past and predicted) values of the 30 cm radio flux than with the 10.7 flux.

  2. Measurements and predictions of the air distribution systems in high compute density (Internet) data centers

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jinkyun [HIMEC (Hanil Mechanical Electrical Consultants) Ltd., Seoul 150-103 (Korea); Department of Architectural Engineering, Yonsei University, Seoul 120-749 (Korea); Lim, Taesub; Kim, Byungseon Sean [Department of Architectural Engineering, Yonsei University, Seoul 120-749 (Korea)

    2009-10-15

    When equipment power density increases, a critical goal of a data center cooling system is to separate the equipment exhaust air from the equipment intake air in order to prevent the IT server from overheating. Cooling systems for data centers are primarily differentiated according to the way they distribute air. The six combinations of flooded and locally ducted air distribution make up the vast majority of all installations, except fully ducted air distribution methods. Once the air distribution system (ADS) is selected, there are other elements that must be integrated into the system design. In this research, the design parameters and IT environmental aspects of the cooling system were studied with a high heat density data center. CFD simulation analysis was carried out in order to compare the heat removal efficiencies of various air distribution systems. The IT environment of an actual operating data center is measured to validate a model for predicting the effect of different air distribution systems. A method for planning and design of the appropriate air distribution system is described. IT professionals versed in precision air distribution mechanisms, components, and configurations can work more effectively with mechanical engineers to ensure the specification and design of optimized cooling solutions. (author)

  3. Compatible growth models and stand density diagrams

    International Nuclear Information System (INIS)

    Smith, N.J.; Brand, D.G.

    1988-01-01

    This paper discusses a stand average growth model based on the self-thinning rule developed and used to generate stand density diagrams. Procedures involved in testing are described and results are included
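The self-thinning rule behind such stand density diagrams is commonly expressed as a Reineke-type power law. The index form, reference diameter (25 cm) and slope (1.605) below are the conventional values, assumed here rather than taken from this paper; the stand figures are illustrative.

```python
def stand_density_index(stems_per_ha, quad_mean_diam_cm, ref_diam_cm=25.0, slope=1.605):
    """Reineke-type stand density index: the stem count the stand would carry
    at a reference quadratic mean diameter, assuming self-thinning N ~ D**(-slope)."""
    return stems_per_ha * (quad_mean_diam_cm / ref_diam_cm) ** slope

def max_stems_at(diam_cm, sdi_max, ref_diam_cm=25.0, slope=1.605):
    """Self-thinning boundary of a stand density diagram:
    maximum stems per hectare sustainable at a given mean diameter."""
    return sdi_max / (diam_cm / ref_diam_cm) ** slope

print(round(stand_density_index(1200, 20.0)))   # SDI of a 1200 stems/ha, 20 cm stand → 839
print(round(max_stems_at(20.0, 1000)))          # self-thinning limit for SDImax = 1000 → 1431
```

A stand density diagram is essentially this boundary plotted on log-log axes, with a stand's trajectory tracked beneath it.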

  4. Incorporating uncertainty in predictive species distribution modelling.

    Science.gov (United States)

    Beale, Colin M; Lennon, Jack J

    2012-01-19

    Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which is often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates.

  5. A new approach for estimating the density of liquids.

    Science.gov (United States)

    Sakagami, T; Fuchizaki, K; Ohara, K

    2016-10-05

    We propose a novel approach with which to estimate the density of liquids. The approach is based on the assumption that the systems would be structurally similar when viewed at around the length scale (inverse wavenumber) of the first peak of the structure factor, unless their thermodynamic states differ significantly. The assumption was implemented via a similarity transformation to the radial distribution function to extract the density from the structure factor of a reference state with a known density. The method was first tested using two model liquids, and could predict the densities within an error of several percent unless the state in question differed significantly from the reference state. The method was then applied to related real liquids, and satisfactory results were obtained for predicted densities. The possibility of applying the method to amorphous materials is discussed.
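The paper's method applies a similarity transformation to the radial distribution function; a much-simplified version of the same structural-similarity idea is that, if two states of a liquid look alike once lengths are scaled by the mean interparticle spacing (~ rho^(-1/3)), the first-peak position of S(q) scales as rho^(1/3). This sketch and its numbers are illustrative assumptions, not the paper's procedure or data.

```python
def density_from_peak(q1, q1_ref, rho_ref):
    """Simplified similarity scaling: estimate an unknown density from the
    shift of the first structure-factor peak relative to a reference state,
    assuming q1 ~ rho**(1/3) so that rho = rho_ref * (q1 / q1_ref)**3."""
    return rho_ref * (q1 / q1_ref) ** 3

# illustrative reduced-unit numbers: a reference state of known density
# and a slightly compressed state of the same liquid
rho_ref = 0.85   # number density at the reference state
q1_ref = 6.80    # first-peak position of S(q) at the reference state
q1_new = 6.95    # first-peak position at the state of unknown density
print(round(density_from_peak(q1_new, q1_ref, rho_ref), 3))  # → 0.907
```

As in the paper, the estimate degrades when the unknown state differs too strongly from the reference state, since the structural-similarity assumption then breaks down.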

  6. Re-establishing the pecking order: Niche models reliably predict suitable habitats for the reintroduction of red-billed oxpeckers.

    Science.gov (United States)

    Kalle, Riddhika; Combrink, Leigh; Ramesh, Tharmalingam; Downs, Colleen T

    2017-03-01

    Distributions of avian mutualists are affected by changes in biotic interactions and environmental conditions driven directly or indirectly by human actions. The range contraction of red-billed oxpeckers (Buphagus erythrorhynchus) in South Africa is partly a result of the widespread use of acaricides (i.e., mainly cattle dips), toxic to both ticks and oxpeckers. We predicted the habitat suitability of red-billed oxpeckers in South Africa using ensemble models to assist the ongoing reintroduction efforts and to identify new reintroduction sites for population recovery. The distribution of red-billed oxpeckers was influenced by moderate to high tree cover, woodland habitats, and starling density (a proxy for cavity-nesting birds) with regard to nest-site characteristics. Consumable resources (host and tick density), bioclimate, surface water body density, and proximity to protected areas were other influential predictors. Our models estimated 42,576.88-98,506.98 km² of highly suitable habitat (0.5-1) covering the majority of Limpopo, Mpumalanga, North West, a substantial portion of northern KwaZulu-Natal (KZN) and the Gauteng Province. Niche models reliably predicted suitable habitat in 40%-61% of the reintroduction sites where breeding is currently successful. Ensemble, boosted regression trees and generalized additive models predicted few suitable areas in the Eastern Cape and south of KZN that are part of the historic range. A few southern areas in the Northern Cape, outside the historic range, also had suitable sites predicted. Our models are a promising decision support tool for guiding reintroduction programs at macroscales. Apart from active reintroductions, conservation programs should encourage farmers and/or landowners to use oxpecker-compatible agrochemicals and set up adequate nest boxes to facilitate the population recovery of the red-billed oxpecker, particularly in human-modified landscapes. To ensure long-term conservation success, we suggest that

  7. Inverse modeling with RZWQM2 to predict water quality

    Science.gov (United States)

    Nolan, Bernard T.; Malone, Robert W.; Ma, Liwang; Green, Christopher T.; Fienen, Michael N.; Jaynes, Dan B.

    2011-01-01

reflect the total information provided by the observations for a parameter, indicated that most of the RZWQM2 parameters at the California study site (CA) and Iowa study site (IA) could be reliably estimated by regression. Correlations obtained in the CA case indicated that all model parameters could be uniquely estimated by inverse modeling. Although water content at field capacity was highly correlated with bulk density (−0.94), the correlation was less than the threshold for nonuniqueness (0.95, absolute value basis). Additionally, we used truncated singular value decomposition (SVD) at CA to mitigate potential problems with highly correlated and insensitive parameters. Singular value decomposition estimates linear combinations (eigenvectors) of the original process-model parameters. Parameter confidence intervals (CIs) at CA indicated that parameters were reliably estimated, with the possible exception of an organic pool transfer coefficient (R45), which had a comparatively wide CI. However, the 95% confidence interval for R45 (0.03–0.35) is mostly within the range of values reported for this parameter. Predictive analysis at CA generated confidence intervals that were compared with independently measured annual water flux (groundwater recharge) and median nitrate concentration in a collocated monitoring well as part of model evaluation. Both the observed recharge (42.3 cm yr⁻¹) and nitrate concentration (24.3 mg L⁻¹) were within their respective 90% confidence intervals, indicating that overall model error was within acceptable limits.
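The truncated-SVD regularization described above can be sketched in a few lines: the parameter update is computed only in the subspace of the leading singular vectors of the sensitivity (Jacobian) matrix, so nearly collinear parameter combinations (such as field capacity and bulk density here) do not destabilize the step. The matrix and residuals below are hypothetical, not RZWQM2 output.

```python
import numpy as np

def truncated_svd_step(J, residuals, n_keep):
    """One Gauss-Newton update where the parameter step is computed in the
    subspace of the n_keep leading right singular vectors of the Jacobian.
    Directions with small singular values (insensitive or highly correlated
    parameter combinations) are discarded, as in truncated-SVD regularization."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    U_k, s_k, Vt_k = U[:, :n_keep], s[:n_keep], Vt[:n_keep, :]
    # pseudo-inverse step: dp = V_k diag(1/s_k) U_k^T r
    return Vt_k.T @ ((U_k.T @ residuals) / s_k)

# Hypothetical 4-observation, 3-parameter sensitivity matrix in which the
# last two parameters are nearly perfectly correlated (almost parallel columns).
J = np.array([[1.0, 2.0, 2.001],
              [0.5, 1.0, 1.001],
              [2.0, 0.1, 0.100],
              [1.0, 3.0, 3.002]])
r = np.array([0.1, 0.05, 0.2, 0.15])

dp_full = np.linalg.lstsq(J, r, rcond=None)[0]
dp_trunc = truncated_svd_step(J, r, n_keep=2)
print("full-rank step :", dp_full)
print("truncated step :", dp_trunc)
```

The truncated step is never longer than the full-rank step, because the discarded singular direction contributes an orthogonal (and here very large) component to the solution.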

  8. Predictive user modeling with actionable attributes

    NARCIS (Netherlands)

    Zliobaite, I.; Pechenizkiy, M.

    2013-01-01

Different machine learning techniques have been proposed and used for modeling individual and group user needs, interests, and preferences. In traditional predictive modeling, instances are described by observable variables, called attributes. The goal is to learn a model for predicting the target

  9. Developing and validating a new precise risk-prediction model for new-onset hypertension: The Jichi Genki hypertension prediction model (JG model).

    Science.gov (United States)

    Kanegae, Hiroshi; Oikawa, Takamitsu; Suzuki, Kenji; Okawara, Yukie; Kario, Kazuomi

    2018-03-31

No integrated risk assessment tools that include lifestyle factors and uric acid have been developed. In accordance with the Industrial Safety and Health Law in Japan, a follow-up examination of 63 495 normotensive individuals (mean age 42.8 years) who underwent a health checkup in 2010 was conducted every year for 5 years. The primary endpoint was new-onset hypertension (systolic blood pressure [SBP]/diastolic blood pressure [DBP] ≥ 140/90 mm Hg and/or the initiation of antihypertensive medications with self-reported hypertension). During the mean 3.4 years of follow-up, 7402 participants (11.7%) developed hypertension. The prediction model included age, sex, body mass index (BMI), SBP, DBP, low-density lipoprotein (LDL) cholesterol, uric acid, proteinuria, current smoking, alcohol intake, eating rate, DBP by age, and BMI by age at baseline, and was created by using Cox proportional hazards models to calculate 3-year absolute risks. The derivation analysis confirmed that the model performed well with respect to both discrimination and calibration (n = 63 495; C-statistic = 0.885, 95% confidence interval [CI], 0.865-0.903; χ² statistic = 13.6, degrees of freedom [df] = 7). In the external validation analysis, moreover, the model performed well in both its discrimination and calibration characteristics (n = 14 168; C-statistic = 0.846; 95% CI, 0.775-0.905; χ² statistic = 8.7, df = 7). Adding LDL cholesterol, uric acid, proteinuria, alcohol intake, eating rate, and BMI by age to the base model yielded a significantly higher C-statistic, net reclassification improvement (NRI), and integrated discrimination improvement, especially NRI for non-events (NRI = 0.127, 95% CI = 0.100-0.152; NRI non-event = 0.108, 95% CI = 0.102-0.117). In conclusion, a highly precise model with good performance was developed for predicting incident hypertension using the new parameters of eating rate, uric acid, proteinuria, and BMI by age. ©2018 Wiley Periodicals, Inc.
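The 3-year absolute risks mentioned above follow from the standard Cox-model relation risk(t) = 1 − S0(t)^exp(lp − lp̄), where lp is the linear predictor. A minimal sketch with entirely hypothetical coefficients and baseline survival (the published JG-model values are not reproduced here):

```python
import math

# Hypothetical Cox coefficients (NOT the published JG-model values):
# each maps a baseline covariate to a log-hazard contribution.
coefs = {
    "age_per_10yr": 0.35,
    "male": 0.20,
    "bmi_per_unit": 0.06,
    "sbp_per_10mmHg": 0.45,
    "uric_acid_per_mgdl": 0.10,
    "current_smoker": 0.15,
}
baseline_survival_3yr = 0.95   # S0(3): hypothetical baseline 3-year survival
mean_linear_predictor = 9.0    # cohort mean of lp (hypothetical)

def three_year_risk(covariates):
    """Absolute 3-year risk from a Cox model: 1 - S0(3) ** exp(lp - lp_bar)."""
    lp = sum(coefs[k] * v for k, v in covariates.items())
    return 1.0 - baseline_survival_3yr ** math.exp(lp - mean_linear_predictor)

person = {"age_per_10yr": 4.3, "male": 1, "bmi_per_unit": 24.0,
          "sbp_per_10mmHg": 12.5, "uric_acid_per_mgdl": 5.5,
          "current_smoker": 0}
print(f"predicted 3-year hypertension risk: {three_year_risk(person):.1%}")
```

For this illustrative person the risk comes out at roughly 7%; the exponential link means small shifts in SBP or BMI move the absolute risk noticeably.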

  10. Progress on Complex Langevin simulations of a finite density matrix model for QCD

    Energy Technology Data Exchange (ETDEWEB)

Bloch, Jacques [Univ. of Regensburg (Germany). Inst. for Theoretical Physics; Glesaan, Jonas [Swansea Univ., Swansea (United Kingdom); Verbaarschot, Jacobus [Stony Brook Univ., NY (United States). Dept. of Physics and Astronomy; Zafeiropoulos, Savvas [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); College of William and Mary, Williamsburg, VA (United States); Heidelberg Univ. (Germany). Inst. for Theoretical Physics

    2018-04-01

We study the Stephanov model, an RMT model for QCD at finite density, using the complex Langevin algorithm. A naive implementation of the algorithm shows convergence towards the phase-quenched or quenched theory rather than to the intended theory with dynamical quarks. A detailed analysis of this issue and a potential resolution of the failure of this algorithm are discussed. We study the effect of gauge cooling on the Dirac eigenvalue distribution and the time evolution of the norm for various cooling norms, which were specifically designed to remove the pathologies of the complex Langevin evolution. The cooling is further supplemented with a shifted representation for the random matrices. Unfortunately, none of these modifications generates a substantial improvement in the complex Langevin evolution, and the final results still do not agree with the analytical predictions.
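For readers unfamiliar with the method, the complex Langevin update itself is simple: the variables are complexified and evolved by dz = −∂S/∂z dt + dW with real noise. A toy sketch on a solvable Gaussian action S(x) = x²/2 + isx, where the algorithm does converge to the exact result ⟨x⟩ = −is (unlike the Stephanov matrix model discussed above):

```python
import math, random

# Toy complex Langevin sketch (NOT the Stephanov model): sample the
# complex-weight "distribution" exp(-S) with S(x) = x^2/2 + i*s*x by
# complexifying x -> z and evolving dz = -S'(z) dt + dW with real noise.
# For this Gaussian action the exact result is <x> = -i*s.
s = 0.5
dt = 0.01
random.seed(7)

z = complex(0.0, 0.0)
samples = []
for step in range(400_000):
    drift = -(z + 1j * s)                                     # -dS/dz
    z += drift * dt + math.sqrt(2 * dt) * random.gauss(0, 1)  # real noise only
    if step > 50_000:                                         # discard thermalization
        samples.append(z)

mean = sum(samples) / len(samples)
print(f"<x> = {mean.real:+.3f} {mean.imag:+.3f}i  (exact: +0.000 -0.500i)")
```

The imaginary part relaxes deterministically to −s, while the real part fluctuates around zero; the failures reported in the abstract arise only in theories with broad drift distributions, not in this Gaussian case.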

  11. Predictable topography simulation of SiO2 etching by C5F8 gas combined with a plasma simulation, sheath model and chemical reaction model

    International Nuclear Information System (INIS)

    Takagi, S; Onoue, S; Iyanagi, K; Nishitani, K; Shinmura, T; Kanoh, M; Itoh, H; Shioyama, Y; Akiyama, T; Kishigami, D

    2003-01-01

We have developed a simulation for predicting reactive ion etching (RIE) topography, which combines a plasma simulation, a gas reaction model, a sheath model and a surface reaction model. The simulation is applied to the SiO2 etching process of a high-aspect-ratio contact hole using C5F8 gas. A capacitively coupled plasma (CCP) reactor for 8-in. wafers was used in the etching experiments. The baseline conditions are an RF power of 1500 W and a gas pressure of 4.0 Pa in a gas mixture of Ar, O2 and C5F8. The plasma simulation reproduces the tendency that the CF2 radical density increases rapidly and the electron density decreases gradually with increasing C5F8 gas flow rate. In the RIE topography simulation, etching profiles such as bowing and the taper shape at the bottom are reproduced in deep holes with aspect ratios greater than 19. Moreover, the etching profile, the dependence of the etch depth on the etching time, and the bottom diameter can be predicted by this simulation.

  12. Online traffic flow model applying dynamic flow-density relation

    International Nuclear Information System (INIS)

    Kim, Y.

    2002-01-01

This dissertation describes a new approach to online traffic flow modelling based on the hydrodynamic traffic flow model and an online process to adapt the flow-density relation dynamically. The new modelling approach was tested on real traffic situations in various homogeneous motorway sections and a motorway section with ramps, and gave encouraging simulation results. This work is composed of two parts: first, the analysis of traffic flow characteristics, and second, the development of a new online traffic flow model applying these characteristics. For homogeneous motorway sections, traffic flow is classified into six different traffic states with different characteristics. Delimitation criteria were developed to separate these states. The hysteresis phenomena were analysed during the transitions between these traffic states. The traffic states and the transitions are represented on a states diagram with the flow axis and the density axis. For motorway sections with ramps, the complicated traffic flow is simplified and classified into three traffic states depending on the propagation of congestion. The traffic states are represented on a phase diagram with the upstream demand axis and the interaction strength axis, which was defined in this research. The states diagram and the phase diagram provide a basis for the development of the dynamic flow-density relation. The first-order hydrodynamic traffic flow model was programmed according to the cell-transmission scheme, extended by the modification of flow-dependent sending/receiving functions, the classification of cells and the determination strategy for the flow-density relation in the cells. The unreasonable results of macroscopic traffic flow models, which may occur in the first and last cells under certain conditions, are alleviated by applying buffer cells between the traffic data and the model. The sending/receiving functions of the cells are determined dynamically based on the classification of the
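The cell-transmission scheme mentioned above advances cell densities with a flux between neighbours equal to the minimum of the upstream cell's sending function and the downstream cell's receiving function. A minimal sketch with a static triangular flow-density relation and illustrative parameters (the dissertation's dynamic adaptation of that relation is not reproduced):

```python
# Minimal cell-transmission sketch of a first-order (LWR) traffic model with a
# triangular flow-density relation. Parameter values are illustrative only.
V_FREE = 100.0    # free-flow speed [km/h]
W_BACK = 20.0     # backward wave speed [km/h]
RHO_JAM = 150.0   # jam density [veh/km]
Q_MAX = 2000.0    # capacity [veh/h]
DX = 0.5          # cell length [km]
DT = DX / V_FREE  # time step chosen so the CFL condition holds [h]

def sending(rho):
    """Flow a cell can send downstream (demand)."""
    return min(V_FREE * rho, Q_MAX)

def receiving(rho):
    """Flow a cell can receive from upstream (supply)."""
    return min(Q_MAX, W_BACK * (RHO_JAM - rho))

def step(rho, inflow_demand, outflow_supply):
    """One CTM update: flux between neighbours is min(sending, receiving)."""
    demands = [inflow_demand] + [sending(r) for r in rho]
    supplies = [receiving(r) for r in rho] + [outflow_supply]
    flux = [min(d, s) for d, s in zip(demands, supplies)]
    return [r + DT / DX * (flux[i] - flux[i + 1]) for i, r in enumerate(rho)]

# Light traffic entering an empty 10-cell section.
rho = [0.0] * 10
for _ in range(50):
    rho = step(rho, inflow_demand=1200.0, outflow_supply=Q_MAX)
print([round(r, 1) for r in rho])
```

With this time step the free-flow wave advances exactly one cell per step, so after 50 steps every cell settles at the free-flow density 1200/100 = 12 veh/km.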

  13. MJO prediction skill of the subseasonal-to-seasonal (S2S) prediction models

    Science.gov (United States)

    Son, S. W.; Lim, Y.; Kim, D.

    2017-12-01

The Madden-Julian Oscillation (MJO), the dominant mode of tropical intraseasonal variability, provides the primary source of tropical and extratropical predictability on subseasonal to seasonal timescales. To better understand its predictability, this study conducts a quantitative evaluation of MJO prediction skill in the state-of-the-art operational models participating in the subseasonal-to-seasonal (S2S) prediction project. Based on a bivariate correlation coefficient threshold of 0.5, the S2S models exhibit MJO prediction skill ranging from 12 to 36 days. These prediction skills are affected by both the MJO amplitude and phase errors, the latter becoming more important with forecast lead time. Consistent with previous studies, MJO events with stronger initial amplitude are typically better predicted. However, essentially no sensitivity to the initial MJO phase is observed. Overall MJO prediction skill and its inter-model spread are further related to the model mean biases in moisture fields and longwave cloud-radiation feedbacks. In most models, a dry bias quickly builds up in the deep tropics, especially across the Maritime Continent, weakening the horizontal moisture gradient. This likely dampens the organization and propagation of the MJO. Most S2S models also underestimate the longwave cloud-radiation feedbacks in the tropics, which may affect the maintenance of the MJO convective envelope. In general, the models with smaller biases in the horizontal moisture gradient and longwave cloud-radiation feedbacks show higher MJO prediction skill, suggesting that improving those processes would enhance MJO prediction skill.
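The skill metric used here is the bivariate correlation between observed and forecast (RMM1, RMM2) index pairs, with skill defined as the lead time at which it first drops below 0.5. A sketch on synthetic indices (illustrative only, not S2S data):

```python
import math, random

def bivariate_correlation(obs, fcst):
    """Bivariate correlation between observed and forecast (RMM1, RMM2) pairs,
    the standard MJO skill metric."""
    num = sum(a1 * f1 + a2 * f2 for (a1, a2), (f1, f2) in zip(obs, fcst))
    den = math.sqrt(sum(a1 * a1 + a2 * a2 for a1, a2 in obs) *
                    sum(f1 * f1 + f2 * f2 for f1, f2 in fcst))
    return num / den

def skill_in_days(obs_by_lead, fcst_by_lead, threshold=0.5):
    """Last lead time (in days) before the correlation first drops below 0.5."""
    for lead, (obs, fcst) in enumerate(zip(obs_by_lead, fcst_by_lead), start=1):
        if bivariate_correlation(obs, fcst) < threshold:
            return lead - 1
    return len(obs_by_lead)

# Synthetic demonstration: forecast noise grows with lead time.
random.seed(1)
obs_by_lead, fcst_by_lead = [], []
for lead in range(1, 41):
    obs = [(math.cos(0.2 * n), math.sin(0.2 * n)) for n in range(60)]
    noise = 0.09 * lead
    fcst = [(a + random.gauss(0, noise), b + random.gauss(0, noise)) for a, b in obs]
    obs_by_lead.append(obs)
    fcst_by_lead.append(fcst)
print("skill:", skill_in_days(obs_by_lead, fcst_by_lead), "days")
```

Because both RMM components enter one correlation, amplitude and phase errors are penalized jointly, which is why the abstract can decompose skill into those two error sources.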

  14. Evaluating Predictive Uncertainty of Hyporheic Exchange Modelling

    Science.gov (United States)

    Chow, R.; Bennett, J.; Dugge, J.; Wöhling, T.; Nowak, W.

    2017-12-01

Hyporheic exchange is the interaction of water between rivers and groundwater, and is difficult to predict. One of the largest contributions to predictive uncertainty for hyporheic fluxes has been attributed to the representation of heterogeneous subsurface properties. This research aims to evaluate which aspect of the subsurface representation - the spatial distribution of hydrofacies or the model for local-scale (within-facies) heterogeneity - most influences the predictive uncertainty. We also seek to identify the data types that best help reduce this uncertainty. For this investigation, we conduct a modelling study of the Steinlach River meander in Southwest Germany, an experimental site established in 2010 to monitor hyporheic exchange at the meander scale. We use HydroGeoSphere, a fully integrated surface water-groundwater model, to model hyporheic exchange and to assess the predictive uncertainty of hyporheic exchange transit times (HETT). A highly parameterized complex model is built and treated as 'virtual reality', which is in turn modelled with simpler subsurface parameterization schemes (Figure). Then, we conduct Monte-Carlo simulations with these models to estimate the predictive uncertainty. Results indicate that: uncertainty in HETT is relatively small for early times and increases with transit time; uncertainty from local-scale heterogeneity is negligible compared to uncertainty in the hydrofacies distribution; introducing more data to a poor model structure may reduce predictive variance, but does not reduce predictive bias; and hydraulic head observations alone cannot constrain the uncertainty of HETT, whereas an estimate of hyporheic exchange flux proves more effective at reducing this uncertainty. Figure: Approach for evaluating predictive model uncertainty. A conceptual model is first developed from the field investigations. A complex model ('virtual reality') is then developed based on that conceptual model

  15. Density limit studies on DIII-D

    International Nuclear Information System (INIS)

    Maingi, R.; Mahdavi, M.A.; Petrie, T.W.

    1998-08-01

The authors have studied the processes limiting plasma density and successfully achieved discharges with density ∼50% above the empirical Greenwald density limit with H-mode confinement. This was accomplished by density profile control, enabled through pellet injection and divertor pumping. By examining carefully the criterion for MARFE formation, the authors have derived an edge density limit with scaling very similar to Greenwald scaling. Finally, they have looked in detail at the first and most common density limit process in DIII-D, total divertor detachment, and found that the local upstream separatrix density (n_e^sep,det) at detachment onset (partial detachment) increases with the scrape-off layer heating power P_heat, i.e., n_e^sep,det ∼ P_heat^0.76. This is in marked contrast to the line-average density at detachment, which is insensitive to the heating power. The data are in reasonable agreement with the Borass model, which predicted that the upstream density at detachment would increase as P_heat^0.7
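A scaling exponent like the quoted n_e^sep,det ∼ P_heat^0.76 is the slope of a least-squares fit in log-log space. A sketch on synthetic (not DIII-D) data:

```python
import math

def fit_power_law(powers, densities):
    """Least-squares fit of n = C * P**alpha in log-log space."""
    xs = [math.log(p) for p in powers]
    ys = [math.log(n) for n in densities]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    alpha = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
            sum((x - xbar) ** 2 for x in xs)
    c = math.exp(ybar - alpha * xbar)
    return c, alpha

# Synthetic detachment-onset data following the reported scaling n ~ P^0.76
# (values are illustrative, not DIII-D measurements).
heating_power = [2.0, 4.0, 6.0, 8.0, 12.0]              # MW
sep_density = [1.0 * p ** 0.76 for p in heating_power]  # 10^19 m^-3
c, alpha = fit_power_law(heating_power, sep_density)
print(f"fitted exponent alpha = {alpha:.2f}")  # recovers 0.76 on noiseless data
```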

  16. Modeling, robust and distributed model predictive control for freeway networks

    NARCIS (Netherlands)

    Liu, S.

    2016-01-01

    In Model Predictive Control (MPC) for traffic networks, traffic models are crucial since they are used as prediction models for determining the optimal control actions. In order to reduce the computational complexity of MPC for traffic networks, macroscopic traffic models are often used instead of

  17. A dry-spot model for the prediction of critical heat flux in water boiling in bubbly flow regime

    International Nuclear Information System (INIS)

    Ha, Sang Jun; No, Hee Cheon

    1997-01-01

This paper presents a prediction of critical heat flux (CHF) in the bubbly flow regime using the dry-spot model recently proposed by the authors for pool and flow boiling CHF, together with existing correlations for the forced convective heat transfer coefficient, active site density and bubble departure diameter in the nucleate boiling region. Without any of the empirical constants invariably present in earlier models, comparisons of the model predictions with experimental data for upward flow of water in vertical, uniformly heated round tubes show good agreement. The parametric trends of CHF have been explored with respect to variations in pressure, tube diameter and length, mass flux and inlet subcooling.

  18. Postfragmentation density function for bacterial aggregates in laminar flow.

    Science.gov (United States)

    Byrne, Erin; Dzul, Steve; Solomon, Michael; Younger, John; Bortz, David M

    2011-04-01

    The postfragmentation probability density of daughter flocs is one of the least well-understood aspects of modeling flocculation. We use three-dimensional positional data of Klebsiella pneumoniae bacterial flocs in suspension and the knowledge of hydrodynamic properties of a laminar flow field to construct a probability density function of floc volumes after a fragmentation event. We provide computational results which predict that the primary fragmentation mechanism for large flocs is erosion. The postfragmentation probability density function has a strong dependence on the size of the original floc and indicates that most fragmentation events result in clumps of one to three bacteria eroding from the original floc. We also provide numerical evidence that exhaustive fragmentation yields a limiting density inconsistent with the log-normal density predicted in the literature, most likely due to the heterogeneous nature of K. pneumoniae flocs. To support our conclusions, artificial flocs were generated and display similar postfragmentation density and exhaustive fragmentation. ©2011 American Physical Society

  19. Density dependence, density independence, and recruitment in the American shad (Alosa sapidissima) population of the Connecticut River

    International Nuclear Information System (INIS)

    Leggett, W.C.

    1977-01-01

The role of density-dependent and density-independent factors in the regulation of the stock-recruitment relationship of the American shad (Alosa sapidissima) population of the Connecticut River was investigated. Significant reductions in egg-to-adult survival and juvenile growth rates occurred in the Holyoke--Turners Falls region in response to increases in the intensity of spawning in this area. For the Connecticut River population as a whole, egg-to-adult survival was estimated to be 0.00056 percent at replacement levels, and 0.00083 percent at the point of maximum population growth. Density-independent factors result in significant annual deviations from the recruitment levels predicted by the density-dependent model. Temperature and flow regimes during spawning and early larval development are involved, but they explain only a small portion (less than 16 percent) of the total variation. In spite of an extensive database, the accuracy of predictions concerning the potential effects of additional mortality to pre-recruit stages is low. The implications of these findings for environmental impact assessment are discussed
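A common form for such a density-dependent stock-recruitment relationship is the Ricker curve, with density-independent effects (e.g. temperature and flow) entering as a multiplicative deviation. The paper does not specify this functional form, so the sketch below is an assumption, with illustrative parameters rather than the Connecticut River estimates:

```python
import math

def ricker_recruits(spawners, a, b, env_multiplier=1.0):
    """Ricker stock-recruitment: density-dependent survival declines
    exponentially with spawning stock; env_multiplier carries the
    density-independent (temperature/flow) deviation."""
    return a * spawners * math.exp(-b * spawners) * env_multiplier

# Illustrative parameters (not fitted to the Connecticut River data).
a, b = 2.0, 1e-6

# For the Ricker curve, recruitment peaks at S* = 1/b spawners.
s_peak = 1.0 / b
print(f"recruitment peaks at S = {s_peak:.0f} spawners")

# A density-independent "bad year" scales the whole curve down.
good = ricker_recruits(5e5, a, b)
bad = ricker_recruits(5e5, a, b, env_multiplier=0.6)
print(f"good year: {good:.0f} recruits, bad year: {bad:.0f} recruits")
```

The multiplier leaves the density-dependent shape untouched while shifting annual recruitment, mirroring the paper's finding that environmental factors cause deviations around the density-dependent prediction.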

  20. Staying Power of Churn Prediction Models

    NARCIS (Netherlands)

    Risselada, Hans; Verhoef, Peter C.; Bijmolt, Tammo H. A.

    In this paper, we study the staying power of various churn prediction models. Staying power is defined as the predictive performance of a model in a number of periods after the estimation period. We examine two methods, logit models and classification trees, both with and without applying a bagging

  1. Habitat features and predictive habitat modeling for the Colorado chipmunk in southern New Mexico

    Science.gov (United States)

    Rivieccio, M.; Thompson, B.C.; Gould, W.R.; Boykin, K.G.

    2003-01-01

Two subspecies of Colorado chipmunk (state threatened and federal species of concern) occur in southern New Mexico: Tamias quadrivittatus australis in the Organ Mountains and T. q. oscuraensis in the Oscura Mountains. We developed a GIS model of potentially suitable habitat based on vegetation and elevation features, evaluated site classifications of the GIS model, and determined vegetation and terrain features associated with chipmunk occurrence. We compared GIS model classifications with actual vegetation and elevation features measured at 37 sites. At 60 sites we measured 18 habitat variables regarding slope, aspect, tree species, shrub species, and ground cover. We used logistic regression to analyze habitat variables associated with chipmunk presence/absence. All 37 sample sites (28 predicted suitable, 9 predicted unsuitable) were classified correctly by the GIS model with regard to elevation and vegetation. Of the 28 sites predicted suitable by the GIS model, 18 sites (64%) appeared visually suitable based on habitat variables selected from logistic regression analyses, of which 10 sites (36%) were specifically predicted as suitable habitat via logistic regression. We detected chipmunks at 70% of sites deemed suitable via the logistic regression models. Shrub cover, tree density, plant proximity, presence of logs, and presence of rock outcrop were retained in the logistic model for the Oscura Mountains; litter, shrub cover, and grass cover were retained in the logistic model for the Organ Mountains. Evaluation of the predictive models illustrates the need for multi-stage analyses to best judge performance. Microhabitat analyses indicate prospective needs for different management strategies between the subspecies. Sensitivities of each population of the Colorado chipmunk to natural and prescribed fire suggest that partial burnings of areas inhabited by Colorado chipmunks in southern New Mexico may be beneficial. These partial burnings may later help avoid a fire
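The presence/absence analysis above is a standard logistic regression. A self-contained sketch on synthetic site data (the variable names and the shrub-cover effect are hypothetical, chosen only to mirror the retained covariates):

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Plain stochastic gradient-ascent logistic regression
    (presence = 1, absence = 0)."""
    w = [0.0] * (len(X[0]) + 1)  # intercept + one weight per habitat variable
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            err = yi - sigmoid(z)
            w[0] += lr * err
            for j, xj in enumerate(xi):
                w[j + 1] += lr * err * xj
    return w

# Synthetic sites: [shrub cover, tree density], both standardized.
# Presence is made more likely at high shrub cover (hypothetical relationship).
random.seed(0)
X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(200)]
y = [1 if sigmoid(2.0 * shrub) > random.random() else 0 for shrub, _ in X]
w = fit_logistic(X, y)
site = [1.0, 0.0]  # high shrub cover, average tree density
p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], site)))
print(f"predicted presence probability: {p:.2f}")
```

Covariates with weights near zero would be dropped in a stepwise analysis, which is how the paper arrives at different retained variable sets for the two mountain ranges.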

  2. Extension of the Nambu-Jona-Lasinio model predictions at high temperatures and strong external magnetic field

    International Nuclear Information System (INIS)

    Gomes, Karina P.; Farias, R.L.S.; Pinto, M.B.; Krein, G.

    2013-01-01

Full text: Recently, much attention has been dedicated to understanding the effects of an external magnetic field on the QCD phase diagram. There is currently a contradiction in the literature: while effective models of QCD like the Nambu-Jona-Lasinio (NJL) model and the linear sigma model predict an increase of the critical temperature of chiral symmetry restoration as a function of the magnetic field, recent lattice results show the opposite behavior. The NJL model is nonrenormalizable, so the high-momentum part of the model has to be regularized in a phenomenological way. The common practice is to regularize the divergent loop amplitudes with a three-dimensional momentum cutoff, which also sets the energy-momentum scale for the validity of the model. That is, the model cannot be used for studying phenomena involving momenta running in loops larger than the cutoff. In particular, the model cannot be used to study quark matter at high densities. One of the symptoms of this problem is the prediction of vanishing superconducting gaps at high baryon densities, a feature of the model that is solely caused by the use of a regularizing momentum cutoff in the divergent vacuum and also in the finite loop integrals. In a renormalizable theory, all the dependence on the cutoff can be removed in favor of running physical parameters, like the coupling constants of QED and QCD. The running is given by the renormalization group equations of the theory and is controlled by an energy scale that is adjusted to the scale of the experimental conditions under consideration. In a recent publication, Casalbuoni et al. introduced the concept of a running coupling constant for the NJL model to extend the applicability of the model to high density. Their arguments are based on making the cutoff density dependent, using an analogy with the natural cutoff of the Debye frequency of phonon oscillations in an ordinary solid. In the present work we follow such an approach, introducing a magnetic field

  3. A density functional theory based approach for predicting melting points of ionic liquids.

    Science.gov (United States)

    Chen, Lihua; Bryantsev, Vyacheslav S

    2017-02-01

Accurate prediction of the melting points of ILs is important both from the fundamental point of view and from the practical perspective of screening ILs with low melting points and broadening their utilization over a wider temperature range. In this work, we present an ab initio approach to calculate melting points of ILs with known crystal structures and illustrate its application for a series of 11 ILs containing imidazolium/pyrrolidinium cations and halide/polyatomic fluoro-containing anions. The melting point is determined as the temperature at which the Gibbs free energy of fusion is zero. The Gibbs free energy of fusion can be expressed through the use of the Born-Fajans-Haber cycle via the lattice free energy of forming a solid IL from gaseous phase ions and the sum of the solvation free energies of the ions comprising the IL. Dispersion-corrected density functional theory (DFT) involving (semi)local (PBE-D3) and hybrid exchange-correlation (HSE06-D3) functionals is applied to estimate the lattice enthalpy, entropy, and free energy. The solvation free energies of the ions are calculated with the SMD-generic-IL solvation model at the M06-2X/6-31+G(d) level of theory under standard conditions. The melting points of ILs computed with the HSE06-D3 functional are in good agreement with the experimental data, with a mean absolute error of 30.5 K and a mean relative error of 8.5%. The model is capable of accurately reproducing the trends in melting points upon variation of the alkyl substituents in organic cations and the replacement of one anion by another. The results verify that the lattice energies of ILs containing polyatomic fluoro-containing anions can be approximated reasonably well using the volume-based thermodynamic approach. However, there is no correlation of the computed lattice energies with molecular volume for ILs containing halide anions. Moreover, entropies of solid ILs follow two different linear relationships with molecular volume for halides and polyatomic fluoro
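The melting-point criterion, T_m such that ΔG_fus(T_m) = 0, reduces to a one-dimensional root find once the enthalpy and entropy of fusion are in hand. A sketch with placeholder numbers (not the paper's DFT values), assuming a temperature-independent ΔH and ΔS:

```python
# T_m is the temperature at which the Gibbs free energy of fusion vanishes:
# dG_fus(T) = dH_fus - T * dS_fus = 0. The numbers below are illustrative
# placeholders, not computed DFT values.
dH_fus = 21000.0   # J/mol, lattice-minus-solvation enthalpy difference
dS_fus = 60.0      # J/(mol K)

def gibbs_fusion(T):
    return dH_fus - T * dS_fus

def melting_point(lo=100.0, hi=1000.0, tol=1e-6):
    """Bisection on dG_fus(T) = 0 (dG decreases monotonically with T)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if gibbs_fusion(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(f"T_m = {melting_point():.1f} K")  # analytically dH/dS = 350 K
```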

  4. Developing Inventory Projection Models Using Empirical Net Forest Growth and Growing-Stock Density Relationships Across U.S. Regions and Species Group

    Science.gov (United States)

    Prakash Nepal; Peter J. Ince; Kenneth E. Skog; Sun J. Chang

    2012-01-01

    This paper describes a set of empirical net forest growth models based on forest growing-stock density relationships for three U.S. regions (North, South, and West) and two species groups (softwoods and hardwoods) at the regional aggregate level. The growth models accurately predict historical U.S. timber inventory trends when we incorporate historical timber harvests...

  5. Analytical thermal modelling of multilayered active embedded chips into high density electronic board

    Directory of Open Access Journals (Sweden)

    Monier-Vinard Eric

    2013-01-01

Full Text Available The recent Printed Wiring Board embedding technology is an attractive packaging alternative that allows a very high degree of miniaturization by stacking multiple layers of embedded chips. This disruptive technology will further increase the thermal management challenges by concentrating heat dissipation at the heart of the organic substrate structure. In order to allow the electronic designer to analyze early the limits of the power dissipation, depending on the embedded chip location inside the board, as well as the thermal interactions with other buried chips or surface-mounted electronic components, an analytical thermal modelling approach was established. The presented work describes the comparison of the analytical model results with numerical models of various embedded chip configurations. The thermal behaviour predictions of the analytical model, found to be within ±10% relative error, demonstrate its relevance for modelling high density electronic boards. Besides, the approach promotes a practical solution to study the potential gain of conducting a part of the heat flow from the components towards a set of localized cooled board pads.

  6. Spin density waves predicted in zigzag puckered phosphorene, arsenene and antimonene nanoribbons

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Xiaohua; Zhang, Xiaoli; Wang, Xianlong [Key Laboratory of Materials Physics, Institute of Solid State Physics, Chinese Academy of Sciences, Hefei 230031 (China); Zeng, Zhi, E-mail: zzeng@theory.issp.ac.cn [Key Laboratory of Materials Physics, Institute of Solid State Physics, Chinese Academy of Sciences, Hefei 230031 (China); University of Science and Technology of China, Hefei 230026 (China)

    2016-04-15

The pursuit of controlled magnetism in semiconductors has been a persisting goal in condensed matter physics. Recently, Vene (phosphorene, arsenene and antimonene) has been predicted to be a new class of 2D semiconductors with suitable band gaps and high carrier mobility. In this work, we investigate the edge magnetism in zigzag puckered Vene nanoribbons (ZVNRs) based on density functional theory. The band structures of ZVNRs show half-filled bands crossing the Fermi level at the midpoint of the reciprocal lattice vectors, indicating a strong Peierls instability. To remove this instability, we consider two different mechanisms, namely, a spin density wave (SDW) caused by electron-electron interaction and a charge density wave (CDW) caused by electron-phonon coupling. We have found that an antiferromagnetic Mott-insulating state defined by the SDW is the ground state of ZVNRs. In particular, the SDW in ZVNRs displays several surprising characteristics: 1) compared with other nanoribbon systems, the magnetic moments are arranged antiparallel at each zigzag edge and are almost independent of the width of the nanoribbon; 2) compared with other SDW systems, the magnetic moments and band gap of the SDW are unexpectedly large, indicating a higher SDW transition temperature in ZVNRs; 3) the SDW can be effectively modified by strain and charge doping, which indicates that ZVNRs have bright prospects for nanoelectronic devices.

  7. Spin density waves predicted in zigzag puckered phosphorene, arsenene and antimonene nanoribbons

    Directory of Open Access Journals (Sweden)

    Xiaohua Wu

    2016-04-01

Full Text Available The pursuit of controlled magnetism in semiconductors has been a persisting goal in condensed matter physics. Recently, Vene (phosphorene, arsenene and antimonene) has been predicted to be a new class of 2D semiconductors with suitable band gaps and high carrier mobility. In this work, we investigate the edge magnetism in zigzag puckered Vene nanoribbons (ZVNRs) based on density functional theory. The band structures of ZVNRs show half-filled bands crossing the Fermi level at the midpoint of the reciprocal lattice vectors, indicating a strong Peierls instability. To remove this instability, we consider two different mechanisms, namely, a spin density wave (SDW) caused by electron-electron interaction and a charge density wave (CDW) caused by electron-phonon coupling. We have found that an antiferromagnetic Mott-insulating state defined by the SDW is the ground state of ZVNRs. In particular, the SDW in ZVNRs displays several surprising characteristics: 1) compared with other nanoribbon systems, the magnetic moments are arranged antiparallel at each zigzag edge and are almost independent of the width of the nanoribbon; 2) compared with other SDW systems, the magnetic moments and band gap of the SDW are unexpectedly large, indicating a higher SDW transition temperature in ZVNRs; 3) the SDW can be effectively modified by strain and charge doping, which indicates that ZVNRs have bright prospects for nanoelectronic devices.

  8. Prediction Models for Dynamic Demand Response

    Energy Technology Data Exchange (ETDEWEB)

    Aman, Saima; Frincu, Marc; Chelmis, Charalampos; Noor, Muhammad; Simmhan, Yogesh; Prasanna, Viktor K.

    2015-11-02

As Smart Grids move closer to dynamic curtailment programs, Demand Response (DR) events will become necessary not only on fixed time intervals and weekdays predetermined by static policies, but also during changing decision periods and weekends, to react to real-time demand signals. Unique challenges arise in this context vis-a-vis demand prediction and curtailment estimation, and the transformation of such tasks into an automated, efficient dynamic demand response (D2R) process. While existing work has concentrated on increasing the accuracy of prediction models for DR, there is a lack of studies on prediction models for D2R, which we address in this paper. Our first contribution is the formal definition of D2R and the description of its challenges and requirements. Our second contribution is a feasibility analysis of very-short-term prediction of electricity consumption for D2R over a diverse, large-scale dataset that includes both small residential customers and large buildings. Our third and major contribution is a set of insights into the predictability of electricity consumption in the context of D2R. Specifically, we focus on prediction models that can operate at a very small data granularity (here, 15-min intervals), for both weekdays and weekends - all conditions that characterize scenarios for D2R. We find that short-term time series and simple averaging models used by Independent Service Operators and utilities achieve superior prediction accuracy. We also observe that workdays are more predictable than weekends and holidays. Also, smaller customers have larger variation in consumption and are less predictable than larger buildings. Key implications of our findings are that better models are required for small customers and for non-workdays, both of which are critical for D2R. Also, prediction models require just a few days' worth of data, indicating that small amounts of
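The simple averaging model the paper finds competitive predicts each 15-min interval as the mean of the same interval over the previous few days. A sketch on synthetic load data (the daily shape and noise level are invented for illustration):

```python
import math, random

INTERVALS_PER_DAY = 96  # 24 h at 15-min granularity

def averaging_forecast(history, k=3):
    """Forecast one day: interval i = mean of interval i over the last k days."""
    recent = history[-k:]
    return [sum(day[i] for day in recent) / k for i in range(INTERVALS_PER_DAY)]

random.seed(42)
def synthetic_day():
    # smooth daily load shape plus noise (arbitrary units)
    return [50 + 20 * math.sin(2 * math.pi * i / INTERVALS_PER_DAY)
            + random.gauss(0, 2) for i in range(INTERVALS_PER_DAY)]

history = [synthetic_day() for _ in range(7)]
actual = synthetic_day()
forecast = averaging_forecast(history)
mape = sum(abs(f - a) / a for f, a in zip(forecast, actual)) / INTERVALS_PER_DAY
print(f"MAPE: {mape:.1%}")
```

Note the small data requirement: only k days of history per customer, consistent with the paper's observation that a few days' worth of data suffices.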

  9. A hierarchical model for estimating density in camera-trap studies

    Science.gov (United States)

    Royle, J. Andrew; Nichols, James D.; Karanth, K.Ullas; Gopalaswamy, Arjun M.

    2009-01-01

    Estimating animal density using capture–recapture data from arrays of detection devices such as camera traps has been problematic due to the movement of individuals and heterogeneity in capture probability among them induced by differential exposure to trapping. We develop a spatial capture–recapture model for estimating density from camera-trapping data which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to and detection by traps. We adopt a Bayesian approach to analysis of the hierarchical model using the technique of data augmentation. The model is applied to photographic capture–recapture data on tigers Panthera tigris in Nagarahole reserve, India. Using this model, we estimate the density of tigers to be 14.3 animals per 100 km2 during 2004. Synthesis and applications. Our modelling framework largely overcomes several weaknesses in conventional approaches to the estimation of animal density from trap arrays. It effectively deals with key problems such as individual heterogeneity in capture probabilities, movement of individuals, presence of potential ‘holes’ in the array and ad hoc estimation of sample area. The formulation, thus, greatly enhances flexibility in the conduct of field surveys as well as in the analysis of data, from studies that may involve physical, photographic or DNA-based ‘captures’ of individual animals.
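
    The detection component of spatial capture–recapture models of this kind is commonly a half-normal function of the distance between a trap and an individual's activity centre; a minimal sketch with illustrative parameters (not the paper's fitted values):

```python
import math

def detection_prob(activity_center, trap, p0=0.8, sigma=1.5):
    """Half-normal encounter model: p = p0 * exp(-d^2 / (2*sigma^2)),
    where d is the distance from the individual's activity centre to the trap."""
    d2 = (activity_center[0] - trap[0]) ** 2 + (activity_center[1] - trap[1]) ** 2
    return p0 * math.exp(-d2 / (2 * sigma ** 2))

p_near = detection_prob((0.0, 0.0), (0.0, 0.0))   # trap at the activity centre
p_far = detection_prob((0.0, 0.0), (5.0, 5.0))    # distant trap, rarely detects
```

    Heterogeneity in exposure then arises naturally: animals centred far from the array have low detection probability at every trap, which is exactly what conventional (non-spatial) estimators fail to account for.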

  10. Model-based Optimization and Feedback Control of the Current Density Profile Evolution in NSTX-U

    Science.gov (United States)

    Ilhan, Zeki Okan

    trajectories and analyzing the resulting plasma evolution. Finally, the proposed control-oriented model is embedded in feedback control schemes based on optimal control and Model Predictive Control (MPC) approaches. Integrators are added to the standard Linear Quadratic Gaussian (LQG) and MPC formulations to provide robustness against various modeling uncertainties and external disturbances. The effectiveness of the proposed feedback controllers in regulating the current density profile in NSTX-U is demonstrated in closed-loop nonlinear simulations. Moreover, the optimal feedback control algorithm has been implemented successfully in closed-loop control simulations within TRANSP through the recently developed Expert routine. (Abstract shortened by ProQuest.).

  11. A Density-Based Ramp Metering Model Considering Multilane Context in Urban Expressways

    Directory of Open Access Journals (Sweden)

    Li Tang

    2017-01-01

    As one of the most effective intelligent transportation strategies, ramp metering is regularly discussed and applied all over the world. The classic ramp metering algorithm ALINEA dominates in practical applications due to its advantages in stabilizing traffic flow at a high throughput level. Although ALINEA chooses the traffic occupancy as the optimization parameter, the classic traffic flow variables (density, traffic volume, and travel speed) may be more easily obtained and understood by operators in practice. This paper presents a density-based ramp metering model for multilane context (MDB-RM) on urban expressways. The field data of traffic flow parameters were collected in Chengdu, China. A dynamic density model for multilane condition is developed. An error function represented by multilane dynamic density is introduced to adjust the different usage between lanes. By minimizing the error function, the density of mainstream traffic can stabilize at the set value, while realizing the maximum decrease of on-ramp queues. Also, the VISSIM Component Object Model (COM) Application Programming Interface is used for comparison of the MDB-RM model with a noncontrol, ALINEA, and density-based model, respectively. The simulation results indicate that the MDB-RM model is capable of achieving a comprehensive optimal result from both sides of the mainstream and on-ramp.
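
    The ALINEA feedback law referenced above adjusts the metering rate in proportion to the gap between the target and measured downstream occupancy; a minimal sketch with hypothetical gain and rate bounds:

```python
def alinea(rate_prev, occ_target, occ_measured,
           K_R=70.0, r_min=200.0, r_max=1800.0):
    """ALINEA update: r(k) = r(k-1) + K_R * (o_target - o(k)),
    clipped to the admissible metering range (veh/h)."""
    rate = rate_prev + K_R * (occ_target - occ_measured)
    return min(max(rate, r_min), r_max)

# occupancy above target -> the controller reduces the on-ramp inflow
r = alinea(1000.0, 25.0, 30.0)
```

    The gain K_R and the occupancy setpoint here are placeholders; in practice they are calibrated per site, which is part of what density-based reformulations such as MDB-RM aim to make more transparent.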

  12. When Theory Meets Data: Comparing Model Predictions Of Hillslope Sediment Size With Field Measurements.

    Science.gov (United States)

    Mahmoudi, M.; Sklar, L. S.; Leclere, S.; Davis, J. D.; Stine, A.

    2017-12-01

    The size distributions of sediment produced on hillslopes and supplied to river channels influence a wide range of fluvial processes, from bedrock river incision to the creation of aquatic habitats. However, the factors that control hillslope sediment size are poorly understood, limiting our ability to predict sediment size and model the evolution of sediment size distributions across landscapes. Recently, separate field and theoretical investigations have begun to address this knowledge gap. Here we compare the predictions of several emerging modeling approaches to landscapes where high quality field data are available. Our goals are to explore the sensitivity and applicability of the theoretical models in each field context, and ultimately to provide a foundation for incorporating hillslope sediment size into models of landscape evolution. The field data include published measurements of hillslope sediment size from the Kohala peninsula on the island of Hawaii and tributaries to the Feather River in the northern Sierra Nevada mountains of California, and an unpublished data set from the Inyo Creek catchment of the southern Sierra Nevada. These data are compared to predictions adapted from recently published modeling approaches that include elements of topography, geology, structure, climate and erosion rate. Predictive models for each site are built in ArcGIS using field condition datasets: DEM topography (slope, aspect, curvature), bedrock geology (lithology, mineralogy), structure (fault location, fracture density), climate data (mean annual precipitation and temperature), and estimates of erosion rates. Preliminary analysis suggests that models may be finely tuned to the calibration sites, particularly when field conditions most closely satisfy model assumptions, leading to unrealistic predictions from extrapolation. We suggest a path forward for developing a computationally tractable method for incorporating spatial variation in production of hillslope

  13. Predictive modeling of nanoscale domain morphology in solution-processed organic thin films

    Science.gov (United States)

    Schaaf, Cyrus; Jenkins, Michael; Morehouse, Robell; Stanfield, Dane; McDowall, Stephen; Johnson, Brad L.; Patrick, David L.

    2017-09-01

    The electronic and optoelectronic properties of molecular semiconductor thin films are directly linked to their extrinsic nanoscale structural characteristics such as domain size and spatial distributions. In films prepared by common solution-phase deposition techniques such as spin casting and solvent-based printing, morphology is governed by a complex interrelated set of thermodynamic and kinetic factors that classical models fail to adequately capture, leaving them unable to provide much insight, let alone predictive design guidance for tailoring films with specific nanostructural characteristics. Here we introduce a comprehensive treatment of solution-based film formation enabling quantitative prediction of domain formation rates, coverage, and spacing statistics based on a small number of experimentally measurable parameters. The model combines a mean-field rate equation treatment of monomer aggregation kinetics with classical nucleation theory and a supersaturation-dependent critical nucleus size to solve for the quasi-two-dimensional temporally and spatially varying monomer concentration, nucleation rate, and other properties. Excellent agreement is observed with measured nucleation densities and interdomain radial distribution functions in polycrystalline tetracene films. Numerical solutions lead to a set of general design rules enabling predictive morphological control in solution-processed molecular crystalline films.
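
    The supersaturation dependence at the heart of such treatments follows classical nucleation theory, in which the free-energy barrier for a three-dimensional nucleus scales as 1/(ln S)^2; a generic sketch with hypothetical prefactor and barrier constants (not the paper's fitted model):

```python
import math

def nucleation_rate(S, A=1e10, B=5.0):
    """Classical nucleation theory sketch for supersaturation S > 1:
    J = A * exp(-B / ln(S)^2), where B lumps interfacial-energy terms."""
    return A * math.exp(-B / math.log(S) ** 2)

# higher supersaturation lowers the barrier and sharply raises the rate
j_low = nucleation_rate(1.2)
j_high = nucleation_rate(2.0)
```

    The strong nonlinearity in S is what makes the spatially varying monomer concentration (and hence local supersaturation) the controlling quantity for domain density and spacing.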

  14. Accuracy assessment of landslide prediction models

    International Nuclear Information System (INIS)

    Othman, A N; Mohd, W M N W; Noraini, S

    2014-01-01

    The increasing population and expansion of settlements over hilly areas has greatly increased the impact of natural disasters such as landslides. Therefore, it is important to develop models which can accurately predict landslide hazard zones. Over the years, various techniques and models have been developed to predict landslide hazard zones. The aim of this paper is to assess the accuracy of landslide prediction models developed by the authors. The methodology involved the selection of the study area, data acquisition, data processing, model development, and data analysis. The development of these models is based on nine different landslide-inducing parameters, i.e. slope, land use, lithology, soil properties, geomorphology, flow accumulation, aspect, proximity to river and proximity to road. Rank sum, rating, pairwise comparison and AHP techniques are used to determine the weights for each of the parameters used. Four (4) different models which consider different parameter combinations are developed by the authors. Results obtained are compared to landslide history, and the accuracies for Model 1, Model 2, Model 3 and Model 4 are 66.7%, 66.7%, 60% and 22.9% respectively. From the results, rank sum, rating and pairwise comparison can be useful techniques to predict landslide hazard zones.
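
    The AHP weighting step mentioned above derives parameter weights from a pairwise comparison matrix; a common approximation computes normalized row geometric means (the comparison values below are hypothetical, not the authors' judgments):

```python
import math

def ahp_weights(M):
    """Approximate the AHP priority vector of a pairwise comparison
    matrix via normalized row geometric means."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in M]
    total = sum(gm)
    return [g / total for g in gm]

# hypothetical 3-factor example: slope judged 3x as important as land use
# and 5x as important as lithology; land use 2x as important as lithology
M = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 2.0],
     [1/5, 1/2, 1.0]]
w = ahp_weights(M)
```

    With nine parameters the matrix is 9x9, and a consistency ratio would normally be checked before using the weights in the hazard-zone overlay.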

  15. Heat Transfer Characteristics and Prediction Model of Supercritical Carbon Dioxide (SC-CO2) in a Vertical Tube

    Directory of Open Access Journals (Sweden)

    Can Cai

    2017-11-01

    Due to its distinct capability to improve the efficiency of shale gas production, supercritical carbon dioxide (SC-CO2) fracturing has attracted increased attention in recent years. Heat transfer occurs in the transportation and fracture processes. To better predict and understand the heat transfer of SC-CO2 near the critical region, numerical simulations focusing on a vertical flow pipe were performed. Various turbulence models and turbulent Prandtl numbers (Prt) were evaluated to capture the heat transfer deterioration (HTD). The simulations show that the turbulent Prandtl number (TWL) model combined with the Shear Stress Transport (SST) k-ω turbulence model accurately predicts the HTD in the critical region. It was found that Prt has a strong effect on the heat transfer prediction. The HTD occurred under larger heat flux density conditions, and an acceleration process was observed. Gravity also affects the HTD through the linkage of buoyancy, and HTD did not occur under zero-gravity conditions.

  16. Application of GIS based data driven evidential belief function model to predict groundwater potential zonation

    Science.gov (United States)

    Nampak, Haleh; Pradhan, Biswajeet; Manap, Mohammad Abd

    2014-05-01

    The objective of this paper is to exploit potential application of an evidential belief function (EBF) model for spatial prediction of groundwater productivity at Langat basin area, Malaysia using geographic information system (GIS) technique. About 125 groundwater yield data were collected from well locations. Subsequently, the groundwater yield was divided into high (⩾11 m3/h) and low (<11 m3/h) yields. The high-yield wells were divided into a training dataset of 70% (42 wells) for training the model, and the remaining 30% (18 wells) were used for validation purposes. To perform cross validation, the frequency ratio (FR) approach was applied to the remaining low-yield groundwater wells to show the spatial correlation between the low potential zones of groundwater productivity. A total of twelve groundwater conditioning factors that affect the storage of groundwater occurrences were derived from various data sources such as satellite based imagery, topographic maps and associated database. Those twelve groundwater conditioning factors are elevation, slope, curvature, stream power index (SPI), topographic wetness index (TWI), drainage density, lithology, lineament density, land use, normalized difference vegetation index (NDVI), soil and rainfall. Subsequently, the Dempster-Shafer theory of evidence model was applied to prepare the groundwater potential map. Finally, the result of the groundwater potential map derived from the belief map was validated using the testing data. Furthermore, to compare the performance of the EBF result, a logistic regression (LR) model was applied. The success-rate and prediction-rate curves were computed to estimate the efficiency of the employed EBF model compared to the LR method. The validation results demonstrated that the success-rates for the EBF and LR methods were 83% and 82% respectively. The areas under the curve for the prediction-rates of the EBF and LR methods were calculated as 78% and 72% respectively. The outputs achieved from the current research proved the efficiency of EBF in groundwater

  17. Enhanced Single Seed Trait Predictions in Soybean (Glycine max) and Robust Calibration Model Transfer with Near-Infrared Reflectance Spectroscopy.

    Science.gov (United States)

    Hacisalihoglu, Gokhan; Gustin, Jeffery L; Louisma, Jean; Armstrong, Paul; Peter, Gary F; Walker, Alejandro R; Settles, A Mark

    2016-02-10

    Single seed near-infrared reflectance (NIR) spectroscopy predicts soybean (Glycine max) seed quality traits of moisture, oil, and protein. We tested the accuracy of transferring calibrations between different single seed NIR analyzers of the same design by collecting NIR spectra and analytical trait data for globally diverse soybean germplasm. X-ray microcomputed tomography (μCT) was used to collect seed density and shape traits to enhance the number of soybean traits that can be predicted from single seed NIR. Partial least-squares (PLS) regression gave accurate predictive models for oil, weight, volume, protein, and maximal cross-sectional area of the seed. PLS models for width, length, and density were not predictive. Although principal component analysis (PCA) of the NIR spectra showed that black seed coat color had significant signal, excluding black seeds from the calibrations did not impact model accuracies. Calibrations for oil and protein developed in this study as well as earlier calibrations for a separate NIR analyzer of the same design were used to test the ability to transfer PLS regressions between platforms. PLS models built from data collected on one NIR analyzer had minimal differences in accuracy when applied to spectra collected from a sister device. Model transfer was more robust when spectra were trimmed from 910-1679 nm to 955-1635 nm due to divergence of edge wavelengths between the two devices. The ability to transfer calibrations between similar single seed NIR spectrometers facilitates broader adoption of this high-throughput, nondestructive, seed phenotyping technology.
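
    The model-transfer trimming described above amounts to restricting both devices' spectra to their shared, stable wavelength range before fitting or applying the PLS model; a minimal sketch on a hypothetical 5 nm wavelength grid:

```python
def trim_spectrum(wavelengths, absorbance, lo=955.0, hi=1635.0):
    """Keep only the wavelength channels inside the range shared
    reliably by both instruments (955-1635 nm per the study)."""
    kept = [(w, a) for w, a in zip(wavelengths, absorbance) if lo <= w <= hi]
    return [w for w, _ in kept], [a for _, a in kept]

wl = list(range(910, 1680, 5))      # hypothetical 5 nm grid spanning 910-1675 nm
spec = [0.1] * len(wl)              # placeholder absorbance values
wl_t, spec_t = trim_spectrum(wl, spec)
```

    Dropping the divergent edge channels discards the wavelengths where the two devices disagree most, which is why the trimmed calibrations transferred more robustly.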

  18. Mental models accurately predict emotion transitions.

    Science.gov (United States)

    Thornton, Mark A; Tamir, Diana I

    2017-06-06

    Successful social interactions depend on people's ability to predict others' future actions and emotions. People possess many mechanisms for perceiving others' current emotional states, but how might they use this information to predict others' future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others' emotional dynamics. People could then use these mental models of emotion transitions to predict others' future emotions from currently observable emotions. To test this hypothesis, studies 1-3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants' ratings of emotion transitions predicted others' experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation-valence, social impact, rationality, and human mind-inform participants' mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants' accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone.
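
    A mental model of emotion transitions can be caricatured as a first-order transition matrix estimated from an experience-sampling sequence, from which the most likely next emotion is read off (toy data, not the studies' datasets):

```python
from collections import Counter, defaultdict

def transition_model(sequence):
    """Estimate P(next emotion | current emotion) from observed transitions."""
    counts = defaultdict(Counter)
    for a, b in zip(sequence, sequence[1:]):
        counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

def predict_next(model, current):
    """Most likely next emotion given the current one."""
    return max(model[current], key=model[current].get)

# toy experience-sampling sequence
seq = ["calm", "happy", "calm", "happy", "excited", "happy", "calm", "sad"]
model = transition_model(seq)
```

    The studies' claim, in these terms, is that participants' rated transition likelihoods approximate the empirical matrix estimated from real experience-sampling data.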

  19. Mental models accurately predict emotion transitions

    Science.gov (United States)

    Thornton, Mark A.; Tamir, Diana I.

    2017-01-01

    Successful social interactions depend on people’s ability to predict others’ future actions and emotions. People possess many mechanisms for perceiving others’ current emotional states, but how might they use this information to predict others’ future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others’ emotional dynamics. People could then use these mental models of emotion transitions to predict others’ future emotions from currently observable emotions. To test this hypothesis, studies 1–3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants’ ratings of emotion transitions predicted others’ experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation—valence, social impact, rationality, and human mind—inform participants’ mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants’ accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone. PMID:28533373

  20. Poisson Mixture Regression Models for Heart Disease Prediction.

    Science.gov (United States)

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model for heart disease prediction over all models, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be done effectively by identifying the major risks componentwise using a Poisson mixture regression model.
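
    The componentwise clustering described above can be illustrated by the posterior component membership of a two-component Poisson mixture: given a count, Bayes' rule assigns it to the low-rate or high-rate component (the mixing weights and rates below are illustrative, not the paper's fitted values):

```python
import math

def poisson_pmf(k, lam):
    """Poisson probability mass function."""
    return lam ** k * math.exp(-lam) / math.factorial(k)

def component_posterior(k, pis=(0.7, 0.3), lams=(1.0, 6.0)):
    """P(component | count k) for a two-component Poisson mixture."""
    joint = [pi * poisson_pmf(k, lam) for pi, lam in zip(pis, lams)]
    total = sum(joint)
    return [j / total for j in joint]

low_risk = component_posterior(0)    # a count of 0 points to the low-rate component
high_risk = component_posterior(8)   # a count of 8 points to the high-rate component
```

    In the full model the rates themselves depend on covariates through regression, and a concomitant-variable model additionally lets the mixing weights depend on covariates.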

  1. Poisson Mixture Regression Models for Heart Disease Prediction

    Science.gov (United States)

    Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model for heart disease prediction over all models, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be done effectively by identifying the major risks componentwise using a Poisson mixture regression model. PMID:27999611

  2. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are always faced with the problem of lower accuracy for the prediction, which causes costly maintenance. Although many researchers have developed some performance prediction models, the accuracy of prediction has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Then three models, including a multivariate nonlinear regression (MNLR) model, an artificial neural network (ANN) model, and a Markov Chain (MC) model, are tested and compared using a set of actual pavement survey data taken on an interstate highway with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems a good tool for pavement performance prediction when the data is limited, but it is based on visual inspections and not explicitly related to quantitative physical parameters. This paper then suggests that the future direction for developing performance prediction models is to combine the advantages and disadvantages of different models to obtain better accuracy.
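
    The Markov Chain model's appeal under limited data is that prediction reduces to repeated multiplication of a condition-state distribution by a transition matrix; a sketch with a hypothetical three-state faulting-severity matrix (not the paper's calibrated probabilities):

```python
def step(dist, P):
    """One inspection-period Markov update of a condition-state distribution."""
    n = len(dist)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# hypothetical yearly transition matrix over states (none, moderate, severe);
# pavement only deteriorates, so the matrix is upper triangular
P = [[0.8, 0.2, 0.0],
     [0.0, 0.7, 0.3],
     [0.0, 0.0, 1.0]]

dist = [1.0, 0.0, 0.0]     # a newly built section starts with no faulting
for _ in range(2):         # predict two years ahead
    dist = step(dist, P)
```

    The trade-off noted in the paper is visible here: the states come from visual condition ratings, so the forecast is not tied to a physical faulting depth.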

  3. Cylinders out of a top hat: counts-in-cells for projected densities

    Science.gov (United States)

    Uhlemann, Cora; Pichon, Christophe; Codis, Sandrine; L'Huillier, Benjamin; Kim, Juhan; Bernardeau, Francis; Park, Changbom; Prunet, Simon

    2018-06-01

    Large deviation statistics is implemented to predict the statistics of cosmic densities in cylinders applicable to photometric surveys. It yields few per cent accurate analytical predictions for the one-point probability distribution function (PDF) of densities in concentric or compensated cylinders; and also captures the density dependence of their angular clustering (cylinder bias). All predictions are found to be in excellent agreement with the cosmological simulation Horizon Run 4 in the quasi-linear regime where standard perturbation theory normally breaks down. These results are combined with a simple local bias model that relates dark matter and tracer densities in cylinders and validated on simulated halo catalogues. This formalism can be used to probe cosmology with existing and upcoming photometric surveys like DES, Euclid or WFIRST containing billions of galaxies.

  4. Breeding Jatropha curcas by genomic selection: A pilot assessment of the accuracy of predictive models.

    Science.gov (United States)

    Azevedo Peixoto, Leonardo de; Laviola, Bruno Galvêas; Alves, Alexandre Alonso; Rosado, Tatiana Barbosa; Bhering, Leonardo Lopes

    2017-01-01

    Genome-wide selection is a promising approach for improving the selection accuracy in plant breeding, particularly in species with long life cycles, such as Jatropha. Therefore, the objectives of this study were to estimate the genetic parameters for grain yield (GY) and the weight of 100 seeds (W100S) using restricted maximum likelihood (REML); to compare the performance of GWS methods to predict GY and W100S; and to estimate how many markers are needed to train the GWS model to obtain the maximum accuracy. Eight GWS models were compared in terms of predictive ability. The impact that the marker density had on the predictive ability was investigated using a varying number of markers, from 2 to 1,248. Because the genetic variance between evaluated genotypes was significant, it was possible to obtain selection gain. All of the GWS methods tested in this study can be used to predict GY and W100S in Jatropha. A training model fitted using 1,000 and 800 markers is sufficient to capture the maximum genetic variance and, consequently, maximum prediction ability of GY and W100S, respectively. This study demonstrated the applicability of genome-wide prediction to identify useful genetic sources of GY and W100S for Jatropha breeding. Further research is needed to confirm the applicability of the proposed approach to other complex traits.

  5. Breeding Jatropha curcas by genomic selection: A pilot assessment of the accuracy of predictive models.

    Directory of Open Access Journals (Sweden)

    Leonardo de Azevedo Peixoto

    Genome-wide selection is a promising approach for improving the selection accuracy in plant breeding, particularly in species with long life cycles, such as Jatropha. Therefore, the objectives of this study were to estimate the genetic parameters for grain yield (GY) and the weight of 100 seeds (W100S) using restricted maximum likelihood (REML); to compare the performance of GWS methods to predict GY and W100S; and to estimate how many markers are needed to train the GWS model to obtain the maximum accuracy. Eight GWS models were compared in terms of predictive ability. The impact that the marker density had on the predictive ability was investigated using a varying number of markers, from 2 to 1,248. Because the genetic variance between evaluated genotypes was significant, it was possible to obtain selection gain. All of the GWS methods tested in this study can be used to predict GY and W100S in Jatropha. A training model fitted using 1,000 and 800 markers is sufficient to capture the maximum genetic variance and, consequently, maximum prediction ability of GY and W100S, respectively. This study demonstrated the applicability of genome-wide prediction to identify useful genetic sources of GY and W100S for Jatropha breeding. Further research is needed to confirm the applicability of the proposed approach to other complex traits.

  6. Unreachable Setpoints in Model Predictive Control

    DEFF Research Database (Denmark)

    Rawlings, James B.; Bonné, Dennis; Jørgensen, John Bagterp

    2008-01-01

    In this work, a new model predictive controller is developed that handles unreachable setpoints better than traditional model predictive control methods. The new controller induces an interesting fast/slow asymmetry in the tracking response of the system. Nominal asymptotic stability of the optimal steady state is established for terminal constraint model predictive control (MPC). The region of attraction is the steerable set. Existing analysis methods for closed-loop properties of MPC are not applicable to this new formulation, and a new analysis method is developed. It is shown how to extend…
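
    The unreachable-setpoint phenomenon can be seen even in a scalar input-constrained system (a toy illustration, not the paper's controller): with x⁺ = 0.9x + u and |u| ≤ 1, no steady state beyond x = 10 exists, so a controller asked for x* = 20 can at best settle at the closest reachable steady state.

```python
def greedy_controller(x, setpoint, a=0.9, umax=1.0):
    """One-step deadbeat input toward the setpoint, saturated at the
    input bound (a stand-in for a full receding-horizon optimization)."""
    u = setpoint - a * x
    return max(-umax, min(umax, u))

x = 0.0
for _ in range(200):
    x = 0.9 * x + greedy_controller(x, 20.0)   # input stays saturated at +1
# the closed loop converges to the best reachable steady state x = 1/(1-0.9) = 10
```

    A full MPC formulation must decide how to penalize the permanent offset along the way, which is where the fast/slow tracking asymmetry discussed in the abstract arises.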

  7. Improving snow density estimation for mapping SWE with Lidar snow depth: assessment of uncertainty in modeled density and field sampling strategies in NASA SnowEx

    Science.gov (United States)

    Raleigh, M. S.; Smyth, E.; Small, E. E.

    2017-12-01

    The spatial distribution of snow water equivalent (SWE) is not sufficiently monitored with either remotely sensed or ground-based observations for water resources management. Recent applications of airborne Lidar have yielded basin-wide mapping of SWE when combined with a snow density model. However, in the absence of snow density observations, the uncertainty in these SWE maps is dominated by uncertainty in modeled snow density rather than in Lidar measurement of snow depth. Available observations tend to have a bias in physiographic regime (e.g., flat open areas) and are often insufficient in number to support testing of models across a range of conditions. Thus, there is a need for targeted sampling strategies and controlled model experiments to understand where and why different snow density models diverge. This will enable identification of robust model structures that represent dominant processes controlling snow densification, in support of basin-scale estimation of SWE with remotely-sensed snow depth datasets. The NASA SnowEx mission is a unique opportunity to evaluate sampling strategies of snow density and to quantify and reduce uncertainty in modeled snow density. In this presentation, we present initial field data analyses and modeling results over the Colorado SnowEx domain in the 2016-2017 winter campaign. We detail a framework for spatially mapping the uncertainty in snowpack density, as represented across multiple models. Leveraging the modular SUMMA model, we construct a series of physically-based models to assess systematically the importance of specific process representations to snow density estimates. We will show how models and snow pit observations characterize snow density variations with forest cover in the SnowEx domains. Finally, we will use the spatial maps of density uncertainty to evaluate the selected locations of snow pits, thereby assessing the adequacy of the sampling strategy for targeting uncertainty in modeled snow density.

  8. Global asymptotic stability of density dependent integral population projection models.

    Science.gov (United States)

    Rebarber, Richard; Tenhumberg, Brigitte; Townley, Stuart

    2012-02-01

    Many stage-structured density dependent populations with a continuum of stages can be naturally modeled using nonlinear integral projection models. In this paper, we study a trichotomy of global stability results for a class of density dependent systems which includes a Platte thistle model. Specifically, we identify those system parameters for which zero is globally asymptotically stable, parameters for which there is a positive asymptotically stable equilibrium, and parameters for which there is no asymptotically stable equilibrium. Copyright © 2011 Elsevier Inc. All rights reserved.
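
    A density dependent integral projection model of this kind can be sketched by discretizing the kernel on a size grid and iterating n_{t+1}(y) = f(N_t) ∫ K(y, x) n_t(x) dx; the Gaussian kernel and Beverton-Holt style density dependence below are hypothetical, not the Platte thistle parameterization:

```python
import math

def iterate_ipm(n, grid, dx, steps=200):
    """Iterate the discretized density dependent IPM toward equilibrium."""
    for _ in range(steps):
        total = sum(n) * dx                       # total population size N_t
        f = 1.0 / (1.0 + 0.5 * total)             # Beverton-Holt density dependence
        n = [f * dx * sum(10.0 * math.exp(-(y - 0.6 * x - 0.2) ** 2 / 0.02) * ni
                          for x, ni in zip(grid, n))
             for y in grid]
    return n

grid = [i / 20 for i in range(1, 20)]             # size grid 0.05 .. 0.95
dx = 0.05
n_eq = iterate_ipm([1.0] * len(grid), grid, dx)
total_eq = sum(n_eq) * dx
```

    With these parameters the low-density growth rate exceeds one, so the model sits in the middle case of the trichotomy: it settles at a positive stable equilibrium rather than at zero.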

  9. Potential stream density in Mid-Atlantic US watersheds.

    Science.gov (United States)

    Elmore, Andrew J; Julian, Jason P; Guinn, Steven M; Fitzpatrick, Matthew C

    2013-01-01

    Stream network density exerts a strong influence on ecohydrologic processes in watersheds, yet existing stream maps fail to capture most headwater streams and therefore underestimate stream density. Furthermore, discrepancies between mapped and actual stream length vary between watersheds, confounding efforts to understand the impacts of land use on stream ecosystems. Here we report on research that predicts stream presence from coupled field observations of headwater stream channels and terrain variables that were calculated both locally and as an average across the watershed upstream of any location on the landscape. Our approach used maximum entropy modeling (MaxEnt), a robust method commonly implemented to model species distributions that requires information only on the presence of the entity of interest. In validation, the method correctly predicts the presence of 86% of all 10-m stream segments and errors are low. We use the model to map stream density and compare our results with the National Hydrography Dataset (NHD). We find that NHD underestimates stream density by up to 250%, with errors being greatest in the densely urbanized cities of Washington, DC and Baltimore, MD and in regions where the NHD has never been updated from its original, coarse-grain mapping. This work is the most ambitious attempt yet to map stream networks over a large region and will have lasting implications for modeling and conservation efforts.

  10. A model to predict evaporation rates in habitats used by container-dwelling mosquitoes.

    Science.gov (United States)

    Bartlett-Healy, Kristen; Healy, Sean P; Hamilton, George C

    2011-05-01

    Container-dwelling mosquitoes use a wide variety of container habitats. The bottle cap is often cited as the smallest container habitat used by container species. When containers are small, the habitat conditions can greatly affect evaporation rates, which in turn can affect the species dynamics within the container. An evaporation rate model was adapted to predict evaporation rates in mosquito container habitats. In both the laboratory and the field, our model was able to predict actual evaporation rates. Examples of how the model may be applied are provided by examining the likelihood of Aedes albopictus (Skuse), Aedes aegypti (L.), and Culex pipiens pipiens (L.) completing their development within small-volume containers under typical environmental conditions and a range of temperatures. Our model suggests that under minimal direct sunlight exposure, both Ae. aegypti and Ae. albopictus could develop within a bottle cap before complete evaporation. Our model shows that under the environmental conditions when a plastic field container was sampled, neither Ae. albopictus nor Cx. p. pipiens could complete development in that particular container before the water evaporated. Although rainfall could replenish the habitat, the effects of evaporation would increase larval density, which could in turn further decrease developmental rates.
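The comparison above reduces to a water balance: a container dries out when cumulative evaporation outruns rainfall, and development succeeds only if it finishes first. A minimal sketch, assuming a constant net evaporation rate in mm/day (the published model instead derives that rate from environmental conditions, which we take as given here):

```python
def days_until_dry(water_depth_mm, evap_rate_mm_per_day, rainfall_mm_per_day=0.0):
    """Days until the water column evaporates, under constant daily rates."""
    net_loss = evap_rate_mm_per_day - rainfall_mm_per_day
    if net_loss <= 0:
        return float("inf")  # rainfall replenishes at least as fast as evaporation
    return water_depth_mm / net_loss

# Illustrative numbers only: a ~6 mm-deep bottle cap losing 1.5 mm/day
# holds water for 4 days; development succeeds only if the species'
# larval development time is shorter than this.
print(days_until_dry(6.0, 1.5))  # -> 4.0
```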

  11. Recent experimental results on level densities for compound reaction calculations

    International Nuclear Information System (INIS)

    Voinov, A.V.

    2012-01-01

    There is a problem related to the choice of the level density input for Hauser-Feshbach model calculations. Modern computer codes have several options to choose from, but it is not clear which of them should be used in particular cases. The availability of many options helps to describe existing experimental data, but it creates problems when it comes to predictions. Traditionally, different level density systematics are based on experimental data from neutron resonance spacings, which are available for a limited spin interval and one parity only. On the other hand, reaction cross-section calculations use the total level density. This can create large uncertainties when converting the neutron resonance spacing to the total level density, which results in sizable uncertainties in cross-section calculations. It is clear now that total level densities need to be studied experimentally in a systematic manner. Such information can be obtained only from spectra of compound nuclear reactions. The question is: do level densities obtained from compound nuclear reactions exhibit the same regularities as level densities obtained from neutron resonances? Are they consistent? We measured level densities of 59-64Ni isotopes from proton evaporation spectra of 6,7Li-induced reactions. Experimental data are presented. Conclusions are drawn on how the level density depends on the neutron number and on the degree of proximity to the closed shell (56Ni). The level density parameters have been compared with parameters obtained from the analysis of neutron resonances and from model predictions.
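The total level density such analyses compare against is commonly parameterized with the back-shifted Fermi-gas (BSFG) formula. A minimal sketch of the standard textbook form; the parameter values below are rough Ni-region magnitudes chosen for illustration, not the fitted values of this work:

```python
import math

def bsfg_level_density(E, a, delta, sigma):
    """Back-shifted Fermi-gas total level density (levels per MeV).

    E: excitation energy (MeV); a: level-density parameter (1/MeV);
    delta: back-shift energy (MeV); sigma: spin-cutoff parameter.
    """
    U = E - delta  # effective excitation energy after the back-shift
    if U <= 0.0:
        return 0.0
    return math.exp(2.0 * math.sqrt(a * U)) / (
        12.0 * math.sqrt(2.0) * sigma * a ** 0.25 * U ** 1.25)

# Level density rises steeply with excitation energy:
rho_5 = bsfg_level_density(5.0, a=6.5, delta=1.0, sigma=3.5)
rho_8 = bsfg_level_density(8.0, a=6.5, delta=1.0, sigma=3.5)
```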

  12. Ability of matrix models to explain the past and predict the future of plant populations.

    Science.gov (United States)

    McEachern, Kathryn; Crone, Elizabeth E.; Ellis, Martha M.; Morris, William F.; Stanley, Amanda; Bell, Timothy; Bierzychudek, Paulette; Ehrlen, Johan; Kaye, Thomas N.; Knight, Tiffany M.; Lesica, Peter; Oostermeijer, Gerard; Quintana-Ascencio, Pedro F.; Ticktin, Tamara; Valverde, Teresa; Williams, Jennifer I.; Doak, Daniel F.; Ganesan, Rengaian; Thorpe, Andrea S.; Menges, Eric S.

    2013-01-01

    Uncertainty associated with ecological forecasts has long been recognized, but forecast accuracy is rarely quantified. We evaluated how well data on 82 populations of 20 species of plants spanning 3 continents explained and predicted plant population dynamics. We parameterized stage-based matrix models with demographic data from individually marked plants and determined how well these models forecast population sizes observed at least 5 years into the future. Simple demographic models forecasted population dynamics poorly; only 40% of observed population sizes fell within our forecasts' 95% confidence limits. However, these models explained population dynamics during the years in which data were collected; observed changes in population size during the data-collection period were strongly positively correlated with population growth rate. Thus, these models are at least a sound way to quantify population status. Poor forecasts were not associated with the number of individual plants or years of data. We tested whether vital rates were density dependent and found both positive and negative density dependence. However, density dependence was not associated with forecast error. Forecast error was significantly associated with environmental differences between the data collection and forecast periods. To forecast population fates, more detailed models, such as those that project how environments are likely to change and how these changes will affect population dynamics, may be needed. Such detailed models are not always feasible. Thus, it may be wiser to make risk-averse decisions than to expect precise forecasts from models.

  13. Ability of matrix models to explain the past and predict the future of plant populations.

    Science.gov (United States)

    Crone, Elizabeth E; Ellis, Martha M; Morris, William F; Stanley, Amanda; Bell, Timothy; Bierzychudek, Paulette; Ehrlén, Johan; Kaye, Thomas N; Knight, Tiffany M; Lesica, Peter; Oostermeijer, Gerard; Quintana-Ascencio, Pedro F; Ticktin, Tamara; Valverde, Teresa; Williams, Jennifer L; Doak, Daniel F; Ganesan, Rengaian; McEachern, Kathyrn; Thorpe, Andrea S; Menges, Eric S

    2013-10-01

    Uncertainty associated with ecological forecasts has long been recognized, but forecast accuracy is rarely quantified. We evaluated how well data on 82 populations of 20 species of plants spanning 3 continents explained and predicted plant population dynamics. We parameterized stage-based matrix models with demographic data from individually marked plants and determined how well these models forecast population sizes observed at least 5 years into the future. Simple demographic models forecasted population dynamics poorly; only 40% of observed population sizes fell within our forecasts' 95% confidence limits. However, these models explained population dynamics during the years in which data were collected; observed changes in population size during the data-collection period were strongly positively correlated with population growth rate. Thus, these models are at least a sound way to quantify population status. Poor forecasts were not associated with the number of individual plants or years of data. We tested whether vital rates were density dependent and found both positive and negative density dependence. However, density dependence was not associated with forecast error. Forecast error was significantly associated with environmental differences between the data collection and forecast periods. To forecast population fates, more detailed models, such as those that project how environments are likely to change and how these changes will affect population dynamics, may be needed. Such detailed models are not always feasible. Thus, it may be wiser to make risk-averse decisions than to expect precise forecasts from models. © 2013 Society for Conservation Biology.
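The stage-based matrix projection underlying the two records above can be sketched in a few lines: the population vector is advanced one time-step at a time, and the asymptotic growth rate is the dominant eigenvalue of the projection matrix. The 3-stage matrix below is hypothetical, chosen only for illustration:

```python
def project(A, n):
    """One time-step of the matrix model: n(t+1) = A n(t)."""
    return [sum(a * x for a, x in zip(row, n)) for row in A]

def asymptotic_growth_rate(A, iters=500):
    """Dominant eigenvalue of A via power iteration (per-step growth factor)."""
    n = [1.0] * len(A)
    for _ in range(iters):
        n = project(A, n)
        total = sum(n)
        n = [x / total for x in n]  # normalize to avoid overflow
    return sum(project(A, n))  # equals the eigenvalue since sum(n) == 1

# Hypothetical 3-stage (seedling, juvenile, adult) matrix, illustrative only:
A = [[0.0, 0.0, 5.0],   # fecundity: 5 seedlings per adult per year
     [0.3, 0.0, 0.0],   # 30% of seedlings become juveniles
     [0.0, 0.5, 0.8]]   # 50% of juveniles mature; 80% adult survival
lam = asymptotic_growth_rate(A)  # lam > 1 implies a growing population
```

The paper's point is that such a lambda summarizes status well during the data-collection window but forecasts poorly when the environment shifts between fitting and forecast periods.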

  14. Coherent density fluctuation model as a local-scale limit to ATDHF

    International Nuclear Information System (INIS)

    Antonov, A.N.; Petkov, I.Zh.; Stoitsov, M.V.

    1985-04-01

    The local scale transformation method is used for the construction of an Adiabatic Time-Dependent Hartree-Fock approach in terms of the local density distribution. The coherent density fluctuation relations of the model result in a particular case when the ''flucton'' local density is connected with the plane wave determinant model function be means of the local-scale coordinate transformation. The collective potential energy expression is obtained and its relation to the nuclear matter energy saturation curve is revealed. (author)

  15. Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?

    Science.gov (United States)

    Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander

    2016-01-01

    Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random-effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
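The squared-bias-plus-variance split behind the second criterion can be checked numerically: over an ensemble of model variants (standing in for uncertainty in structure, parameters and inputs), mean squared prediction error decomposes exactly into squared bias plus variance of the errors. The numbers below are synthetic, not the paper's crop data:

```python
import random

random.seed(42)

truth = [random.gauss(5.0, 1.0) for _ in range(50)]  # observed outcomes

# Ensemble of model variants: a shared systematic bias plus variant-specific
# shifts and per-prediction noise.
errors = []
for _ in range(30):  # model variants
    variant_shift = random.gauss(0.0, 0.5)
    for y in truth:
        pred = y + 0.8 + variant_shift + random.gauss(0.0, 0.3)
        errors.append(pred - y)

msep_uncertain = sum(e * e for e in errors) / len(errors)
bias = sum(errors) / len(errors)
model_variance = sum((e - bias) ** 2 for e in errors) / len(errors)
# Exact identity: MSEP_uncertain = bias^2 + model variance
```

In practice the bias term would come from hindcasts and the variance term from a simulation experiment, with a random-effects ANOVA separating the variance contributions.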

  16. A quasi-stationary numerical model of atomized metal droplets, II: Prediction and assessment

    DEFF Research Database (Denmark)

    Pryds, Nini H.; Hattel, Jesper Henri; Thorborg, Jesper

    1999-01-01

    A new model which extends previous studies and includes the interaction between the enveloping gas and an array of droplets has been developed and presented in a previous paper. The model incorporates the probability density function of atomized metallic droplets into the heat transfer equations. The main thrust of the model is that the gas temperature was not predetermined or calculated empirically, but calculated numerically based on heat-balance considerations. In this paper, the accuracy of the numerical model and its applicability as a predictive tool are investigated and illustrated. A comparison between the numerical model and the experimental results shows an excellent agreement and demonstrates the validity of the present model, e.g. for the calculated gas temperature, which has an important influence on the droplet solidification behaviour.

  17. Behaviors of impurity in ITER and DEMOs using BALDUR integrated predictive modeling code

    International Nuclear Information System (INIS)

    Onjun, Thawatchai; Buangam, Wannapa; Wisitsorasak, Apiwat

    2015-01-01

    The behaviors of impurity are investigated using self-consistent modeling with the 1.5D BALDUR integrated predictive modeling code, in which theory-based models are used for both the core and edge regions. In these simulations, a combination of the NCLASS neoclassical transport model and the Multi-Mode (MMM95) anomalous transport model is used to compute the core transport. The boundary is taken to be at the top of the pedestal, where the pedestal values are described using a theory-based pedestal model. This pedestal temperature model is based on a combination of a magnetic and flow shear stabilization pedestal width scaling and an infinite-n ballooning pressure gradient model. The time evolution of plasma current, temperature and density profiles is carried out for ITER and DEMO plasmas. As a result, impurity behaviors such as impurity accumulation and impurity transport can be investigated. (author)

  18. Risk terrain modeling predicts child maltreatment.

    Science.gov (United States)

    Daley, Dyann; Bachmann, Michael; Bachmann, Brittany A; Pedigo, Christian; Bui, Minh-Thuy; Coffman, Jamye

    2016-12-01

    As indicated by research on the long-term effects of adverse childhood experiences (ACEs), maltreatment has far-reaching consequences for affected children. Effective prevention measures have been elusive, partly due to difficulty in identifying vulnerable children before they are harmed. This study employs Risk Terrain Modeling (RTM), an analysis of the cumulative effect of environmental factors thought to be conducive for child maltreatment, to create a highly accurate prediction model for future substantiated child maltreatment cases in the City of Fort Worth, Texas. The model is superior to commonly used hotspot predictions and more beneficial in aiding prevention efforts in a number of ways: 1) it identifies the highest risk areas for future instances of child maltreatment with improved precision and accuracy; 2) it aids the prioritization of risk-mitigating efforts by informing about the relative importance of the most significant contributing risk factors; 3) since predictions are modeled as a function of easily obtainable data, practitioners do not have to undergo the difficult process of obtaining official child maltreatment data to apply it; 4) the inclusion of a multitude of environmental risk factors creates a more robust model with higher predictive validity; and, 5) the model does not rely on a retrospective examination of past instances of child maltreatment, but adapts predictions to changing environmental conditions. The present study introduces and examines the predictive power of this new tool to aid prevention efforts seeking to improve the safety, health, and wellbeing of vulnerable children. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
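At its core, RTM overlays binary environmental risk-factor layers on a grid and sums them with weights reflecting each factor's significance; the cells with the highest cumulative score are the predicted high-risk areas. A toy sketch with two entirely hypothetical layers and weights:

```python
# Two hypothetical binary risk layers on a 3x3 city grid (1 = factor present).
liquor_outlets = [[0, 1, 1],
                  [0, 0, 1],
                  [0, 0, 0]]
vacant_housing = [[0, 0, 1],
                  [0, 1, 1],
                  [1, 0, 0]]
layers = {"liquor_outlets": liquor_outlets, "vacant_housing": vacant_housing}
# Weights standing in for each factor's relative importance in the RTM analysis.
weights = {"liquor_outlets": 2.0, "vacant_housing": 1.0}

rows, cols = 3, 3
risk = [[sum(weights[name] * layers[name][r][c] for name in layers)
         for c in range(cols)] for r in range(rows)]

# Highest-risk cell: where the heavier-weighted factors co-occur.
best = max(((r, c) for r in range(rows) for c in range(cols)),
           key=lambda rc: risk[rc[0]][rc[1]])
```

A real application would use many significance-tested layers, distance/density operationalizations of each factor, and out-of-sample validation against future case locations.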

  19. The contributions of breast density and common genetic variation to breast cancer risk.

    Science.gov (United States)

    Vachon, Celine M; Pankratz, V Shane; Scott, Christopher G; Haeberle, Lothar; Ziv, Elad; Jensen, Matthew R; Brandt, Kathleen R; Whaley, Dana H; Olson, Janet E; Heusinger, Katharina; Hack, Carolin C; Jud, Sebastian M; Beckmann, Matthias W; Schulz-Wendtland, Ruediger; Tice, Jeffrey A; Norman, Aaron D; Cunningham, Julie M; Purrington, Kristen S; Easton, Douglas F; Sellers, Thomas A; Kerlikowske, Karla; Fasching, Peter A; Couch, Fergus J

    2015-05-01

    We evaluated whether a 76-locus polygenic risk score (PRS) and Breast Imaging Reporting and Data System (BI-RADS) breast density were independent risk factors within three studies (1643 case patients, 2397 control patients) using logistic regression models. We incorporated the PRS odds ratio (OR) into the Breast Cancer Surveillance Consortium (BCSC) risk-prediction model while accounting for its attributable risk and compared five-year absolute risk predictions between models using area under the curve (AUC) statistics. All statistical tests were two-sided. BI-RADS density and PRS were independent risk factors across all three studies (P interaction = .23). Relative to those with scattered fibroglandular densities and average PRS (2nd quartile), women with extreme density and highest-quartile PRS had 2.7-fold (95% confidence interval [CI] = 1.74 to 4.12) increased risk, while those with low density and PRS had reduced risk (OR = 0.30, 95% CI = 0.18 to 0.51). PRS added independent information to density-based risk prediction. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
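Because density and the PRS behave as independent risk factors (no significant interaction), their contributions add on the log-odds scale, i.e. their odds ratios multiply. A toy sketch of turning a combined odds ratio into absolute risk; the baseline five-year risk below is an assumed figure for illustration, only the ORs come from the abstract:

```python
def absolute_risk(baseline_risk, *odds_ratios):
    """Absolute risk after applying independent odds ratios to baseline odds."""
    odds = baseline_risk / (1.0 - baseline_risk)
    for o in odds_ratios:
        odds *= o  # independence: log-odds add, so odds ratios multiply
    return odds / (1.0 + odds)

base = 0.015  # assumed 5-year baseline risk (illustrative, not from the paper)
high = absolute_risk(base, 2.7)   # extreme density + highest-quartile PRS
low = absolute_risk(base, 0.30)   # low density + low PRS
```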

  20. A dry-spot model for the prediction of critical heat flux in water boiling in bubbly flow regime

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Sang Jun; No, Hee Cheon [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)]

    1998-12-31

    This paper presents a prediction of critical heat flux (CHF) in the bubbly flow regime using the dry-spot model recently proposed by the authors for pool and flow boiling CHF, together with existing correlations for the forced convective heat transfer coefficient, active site density and bubble departure diameter in the nucleate boiling region. Without any of the empirical constants present in earlier models, comparisons of the model predictions with experimental data for upward flow of water in vertical, uniformly heated round tubes show good agreement. The parametric trends of CHF have been explored with respect to variations in pressure, tube diameter and length, mass flux and inlet subcooling. 16 refs., 6 figs., 1 tab. (Author)

  1. A dry-spot model for the prediction of critical heat flux in water boiling in bubbly flow regime

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Sang Jun; No, Hee Cheon [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)]

    1997-12-31

    This paper presents a prediction of critical heat flux (CHF) in the bubbly flow regime using the dry-spot model recently proposed by the authors for pool and flow boiling CHF, together with existing correlations for the forced convective heat transfer coefficient, active site density and bubble departure diameter in the nucleate boiling region. Without any of the empirical constants present in earlier models, comparisons of the model predictions with experimental data for upward flow of water in vertical, uniformly heated round tubes show good agreement. The parametric trends of CHF have been explored with respect to variations in pressure, tube diameter and length, mass flux and inlet subcooling. 16 refs., 6 figs., 1 tab. (Author)

  2. Counterintuitive electron localisation from density-functional theory with polarisable solvent models

    Energy Technology Data Exchange (ETDEWEB)

    Dale, Stephen G., E-mail: sdale@ucmerced.edu [Chemistry and Chemical Biology, School of Natural Sciences, University of California, Merced, 5200 North Lake Road, Merced, California 95343 (United States); Johnson, Erin R., E-mail: erin.johnson@dal.ca [Department of Chemistry, Dalhousie University, 6274 Coburg Road, Halifax, Nova Scotia B3H 4R2 (Canada)

    2015-11-14

    Exploration of the solvated-electron phenomenon using density-functional theory (DFT) generally results in prediction of a localised electron within an induced solvent cavity. However, it is well known that DFT favours highly delocalised charges, rendering the localisation of a solvated electron unexpected. We explore the origins of this counterintuitive behaviour using a model Kevan-structure system. When a polarisable-continuum solvent model is included, it forces electron localisation by introducing a strong energetic bias that favours integer charges. This results in the formation of a large energetic barrier for charge-hopping and can cause the self-consistent field to become trapped in local minima, thus converging to stable solutions that are higher in energy than the ground electronic state. Finally, since the bias towards integer charges is caused by the polarisable continuum, these findings will also apply to other classical polarisation corrections, as in combined quantum mechanics and molecular mechanics (QM/MM) methods. The implications for systems beyond the solvated electron, including cationic DNA bases, are discussed.

  3. Case studies in archaeological predictive modelling

    NARCIS (Netherlands)

    Verhagen, Jacobus Wilhelmus Hermanus Philippus

    2007-01-01

    In this thesis, a collection of papers is put together dealing with various quantitative aspects of predictive modelling and archaeological prospection. Among the issues covered are the effects of survey bias on the archaeological data used for predictive modelling, and the complexities of testing

  4. Real-time prediction of respiratory motion based on a local dynamic model in an augmented space.

    Science.gov (United States)

    Hong, S-M; Jung, B-H; Ruan, D

    2011-03-21

    Motion-adaptive radiotherapy aims to deliver ablative radiation dose to the tumor target with minimal normal tissue exposure, by accounting for real-time target movement. In practice, prediction is usually necessary to compensate for system latency induced by measurement, communication and control. This work focuses on predicting respiratory motion, which is most dominant for thoracic and abdominal tumors. We develop and investigate the use of a local dynamic model in an augmented space, motivated by the observation that respiratory movement exhibits a locally circular pattern in a plane augmented with a delayed axis. By including the angular velocity as part of the system state, the proposed dynamic model effectively captures the natural evolution of respiratory motion. The first-order extended Kalman filter is used to propagate and update the state estimate. The target location is predicted by evaluating the local dynamic model equations at the required prediction length. This method is complementary to existing work in that (1) the local circular motion model characterizes 'turning', overcoming the limitation of linear motion models; (2) it uses a natural state representation including the local angular velocity and updates the state estimate systematically, offering explicit physical interpretations; (3) it relies on a parametric model and is much less data-satiate than the typical adaptive semiparametric or nonparametric method. We tested the performance of the proposed method with ten RPM traces, using the normalized root mean squared difference between the predicted value and the retrospective observation as the error metric. Its performance was compared with predictors based on the linear model, the interacting multiple linear models and the kernel density estimator for various combinations of prediction lengths and observation rates. 
The local dynamic model based approach provides the best performance for short to medium prediction lengths under relatively
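The geometric idea above, that respiratory motion traces a locally circular orbit in a plane augmented with a delayed copy of the signal, can be sketched without the full extended Kalman filter: delay-embed the trace, estimate the local angular velocity from recent angle increments, then rotate the latest embedded point forward by the prediction horizon. This simplified version assumes the local circle is centred at the origin, which holds exactly for the synthetic sinusoid used here; the paper's EKF instead tracks a full augmented state:

```python
import math

def predict_delay_embedded(x, tau, horizon, window=20):
    """Predict x[t + horizon] assuming locally circular motion in the
    (x(t), x(t - tau)) plane, with the circle centred at the origin."""
    n = len(x) - 1
    pts = [(x[t], x[t - tau]) for t in range(n - window, n + 1)]
    angles = [math.atan2(py, px) for px, py in pts]
    # Unwrap successive angle differences and average the angular velocity.
    diffs = []
    for a0, a1 in zip(angles, angles[1:]):
        d = a1 - a0
        while d <= -math.pi:
            d += 2.0 * math.pi
        while d > math.pi:
            d -= 2.0 * math.pi
        diffs.append(d)
    omega = sum(diffs) / len(diffs)  # mean angular velocity per sample
    px, py = pts[-1]
    ang = omega * horizon
    return math.cos(ang) * px - math.sin(ang) * py  # rotate, project onto x-axis

# Synthetic breathing trace: 0.25 Hz sinusoid sampled at 30 Hz.
trace = [math.sin(2.0 * math.pi * 0.25 * t / 30.0) for t in range(400)]
pred = predict_delay_embedded(trace, tau=30, horizon=12)  # 0.4 s ahead
true_future = math.sin(2.0 * math.pi * 0.25 * 411 / 30.0)
```

With tau equal to a quarter period, the embedded sinusoid is exactly a unit circle, so the rotation-based prediction matches the future sample; real RPM traces are only locally circular, which is why the paper pairs this model with an EKF state update.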

  5. Assessing Predictive Properties of Genome-Wide Selection in Soybeans

    Directory of Open Access Journals (Sweden)

    Alencar Xavier

    2016-08-01

    Many economically important traits in plant breeding have low heritability or are difficult to measure. For these traits, genomic selection has attractive features and may boost genetic gains. Our goal was to evaluate alternative scenarios to implement genomic selection for yield components in soybean (Glycine max L. Merr.). We used a nested association panel with cross validation to evaluate the impacts of training population size, genotyping density, and prediction model on the accuracy of genomic prediction. Our results indicate that training population size was the factor most relevant to improvement in genome-wide prediction, with greatest improvement observed in training sets up to 2000 individuals. We discuss assumptions that influence the choice of the prediction model. Although alternative models had minor impacts on prediction accuracy, the most robust prediction model was the combination of reproducing kernel Hilbert space regression and BayesB. Higher genotyping density marginally improved accuracy. Our study finds that breeding programs seeking efficient genomic selection in soybeans would best allocate resources by investing in a representative training set.

  6. Assessing Predictive Properties of Genome-Wide Selection in Soybeans.

    Science.gov (United States)

    Xavier, Alencar; Muir, William M; Rainey, Katy Martin

    2016-08-09

    Many economically important traits in plant breeding have low heritability or are difficult to measure. For these traits, genomic selection has attractive features and may boost genetic gains. Our goal was to evaluate alternative scenarios to implement genomic selection for yield components in soybean (Glycine max L. Merr.). We used a nested association panel with cross validation to evaluate the impacts of training population size, genotyping density, and prediction model on the accuracy of genomic prediction. Our results indicate that training population size was the factor most relevant to improvement in genome-wide prediction, with greatest improvement observed in training sets up to 2000 individuals. We discuss assumptions that influence the choice of the prediction model. Although alternative models had minor impacts on prediction accuracy, the most robust prediction model was the combination of reproducing kernel Hilbert space regression and BayesB. Higher genotyping density marginally improved accuracy. Our study finds that breeding programs seeking efficient genomic selection in soybeans would best allocate resources by investing in a representative training set. Copyright © 2016 Xavier et al.
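The genomic-prediction workflow evaluated in the two records above (train marker-effect model, predict unphenotyped lines, score accuracy as the correlation between predicted and observed values) can be sketched with ridge regression on markers, a simpler stand-in for the RKHS/BayesB models the paper actually compares. All genotypes and effects below are simulated:

```python
import numpy as np

rng = np.random.default_rng(1)
n_train, n_markers = 200, 50
# Simulated 0/1/2 marker genotypes and additive marker effects.
X = rng.integers(0, 3, size=(n_train, n_markers)).astype(float)
true_effects = rng.normal(0.0, 0.3, n_markers)
y = X @ true_effects + rng.normal(0.0, 1.0, n_train)

# Ridge (GBLUP-like) marker-effect estimates via the normal equations.
lam = 10.0
beta = np.linalg.solve(X.T @ X + lam * np.eye(n_markers), X.T @ y)

# Predictive accuracy on an independent "validation" set, as in cross validation.
X_new = rng.integers(0, 3, size=(100, n_markers)).astype(float)
y_new = X_new @ true_effects + rng.normal(0.0, 1.0, 100)
acc = np.corrcoef(X_new @ beta, y_new)[0, 1]
```

Repeating this while varying n_train mimics the paper's training-population-size comparison; here accuracy is limited mainly by trait heritability and training-set size, echoing the paper's conclusion.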

  7. Deep-Learning-Based Approach for Prediction of Algal Blooms

    Directory of Open Access Journals (Sweden)

    Feng Zhang

    2016-10-01

    Algal blooms have recently become a critical global environmental concern which might put economic development and sustainability at risk. However, the accurate prediction of algal blooms remains a challenging scientific problem. In this study, a novel prediction approach for algal blooms based on deep learning is presented—a powerful tool to represent and predict highly dynamic and complex phenomena. The proposed approach constructs a five-layered model to extract detailed relationships between the density of phytoplankton cells and various environmental parameters. The algal blooms can be predicted by the phytoplankton density obtained from the output layer. A case study is conducted in coastal waters of East China using both our model and a traditional back-propagation neural network for comparison. The results show that the deep-learning-based model yields better generalization and greater accuracy in predicting algal blooms than a traditional shallow neural network does.
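The layered regression from environmental parameters to phytoplankton density can be illustrated with a much smaller stand-in than the paper's five-layered network: a two-hidden-layer NumPy multilayer perceptron trained by backpropagation on synthetic data. Architecture, data, and hyperparameters here are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "environmental parameters" -> "phytoplankton density" mapping.
X = rng.uniform(-1.0, 1.0, (256, 4))
y = (np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2]).reshape(-1, 1)

sizes = [4, 16, 16, 1]
Ws = [rng.normal(0.0, 0.5, (a, b)) for a, b in zip(sizes, sizes[1:])]
bs = [np.zeros((1, b)) for b in sizes[1:]]

def forward(X):
    """Return activations of every layer; tanh hidden units, linear output."""
    acts = [X]
    for W, b in zip(Ws[:-1], bs[:-1]):
        acts.append(np.tanh(acts[-1] @ W + b))
    acts.append(acts[-1] @ Ws[-1] + bs[-1])
    return acts

def loss():
    return float(np.mean((forward(X)[-1] - y) ** 2))

lr = 0.05
loss0 = loss()
for _ in range(500):  # full-batch gradient descent with manual backprop
    acts = forward(X)
    grad = 2.0 * (acts[-1] - y) / len(X)  # dLoss/dOutput
    for i in range(len(Ws) - 1, -1, -1):
        gW = acts[i].T @ grad
        gb = grad.sum(axis=0, keepdims=True)
        grad = grad @ Ws[i].T       # propagate to previous layer's activations
        if i > 0:
            grad *= 1.0 - acts[i] ** 2  # tanh derivative
        Ws[i] -= lr * gW
        bs[i] -= lr * gb
loss1 = loss()
```

Training reduces the mean squared error from its random-initialization value; a real bloom predictor would add held-out evaluation and a classification threshold on predicted density.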

  8. Modelling the effect of autotoxicity on density-dependent phytotoxicity.

    Science.gov (United States)

    Sinkkonen, A

    2007-01-21

    An established method to separate resource competition from chemical interference is cultivation of monospecific, even-aged stands. The stands grow at several densities and they are exposed to homogenously spread toxins. Hence, the dose received by individual plants is inversely related to stand density. This results in distinguishable alterations in dose-response slopes. The method is often recommended in ecological studies of allelopathy. However, many plant species are known to release autotoxic compounds. Often, the probability of autotoxicity increases as sowing density increases. Despite this, the possibility of autotoxicity is ignored when experiments including monospecific stands are designed and when their results are evaluated. In this paper, I model mathematically how autotoxicity changes the outcome of dose-response slopes as different densities of monospecific stands are grown on homogenously phytotoxic substrata. Several ecologically reasonable relations between plant density and autotoxin exposure are considered over a range of parameter values, and similarities between different relations are searched for. The models indicate that autotoxicity affects the outcome of density-dependent dose-response experiments. Autotoxicity seems to abolish the effects of other phytochemicals in certain cases, while it may augment them in other cases. Autotoxicity may alter the outcome of tests using the method of monospecific stands even if the dose of autotoxic compounds per plant is a fraction of the dose of non-autotoxic phytochemicals with similar allelopathic potential. Data from the literature support these conclusions. A faulty null hypothesis may be accepted if the autotoxic potential of a test species is overlooked in density-response experiments. On the contrary, if test species are known to be non-autotoxic, the method of monospecific stands does not need fine-tuning. 
The results also suggest that the possibility of autotoxicity should be investigated in
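The core of the argument above is arithmetic: in a monospecific stand, the externally applied phytotoxin dose per plant falls inversely with density, while autotoxin exposure rises with density. A minimal sketch assuming a linear density-autotoxin relation, one of several relations the paper considers:

```python
def per_plant_dose(total_phytotoxin, density, autotoxin_coeff=0.0):
    """Per-plant toxin exposure in a monospecific stand.

    The applied phytotoxin is shared among plants (dose inversely related to
    density); autotoxin exposure is assumed to grow linearly with density."""
    return total_phytotoxin / density + autotoxin_coeff * density

densities = [10, 20, 40, 80]
without = [per_plant_dose(100.0, d) for d in densities]                       # monotone decline
with_auto = [per_plant_dose(100.0, d, autotoxin_coeff=0.05) for d in densities]  # U-shaped
```

Without autotoxicity the dose-density slope declines monotonically, the signature the monospecific-stand method relies on; with autotoxicity the curve turns upward at high density, which is how an overlooked autotoxin can distort the dose-response slopes and lead to a faulty null hypothesis being accepted.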

  9. Fingerprint verification prediction model in hand dermatitis.

    Science.gov (United States)

    Lee, Chew K; Chang, Choong C; Johor, Asmah; Othman, Puwira; Baba, Roshidah

    2015-07-01

    Hand dermatitis associated fingerprint changes is a significant problem and affects fingerprint verification processes. This study was done to develop a clinically useful prediction model for fingerprint verification in patients with hand dermatitis. A case-control study involving 100 patients with hand dermatitis. All patients verified their thumbprints against their identity card. Registered fingerprints were randomized into a model derivation and model validation group. Predictive model was derived using multiple logistic regression. Validation was done using the goodness-of-fit test. The fingerprint verification prediction model consists of a major criterion (fingerprint dystrophy area of ≥ 25%) and two minor criteria (long horizontal lines and long vertical lines). The presence of the major criterion predicts it will almost always fail verification, while presence of both minor criteria and presence of one minor criterion predict high and low risk of fingerprint verification failure, respectively. When none of the criteria are met, the fingerprint almost always passes the verification. The area under the receiver operating characteristic curve was 0.937, and the goodness-of-fit test showed agreement between the observed and expected number (P = 0.26). The derived fingerprint verification failure prediction model is validated and highly discriminatory in predicting risk of fingerprint verification in patients with hand dermatitis. © 2014 The International Society of Dermatology.
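The derived decision rule is simple enough to state directly in code. Only the 25% dystrophy-area threshold is given numerically in the abstract; the two minor criteria are treated as present/absent:

```python
def verification_outcome(dystrophy_area_pct, long_horizontal_lines, long_vertical_lines):
    """Fingerprint-verification outcome per the derived prediction model."""
    # Major criterion: fingerprint dystrophy area of >= 25%.
    if dystrophy_area_pct >= 25:
        return "almost always fails"
    # Minor criteria: long horizontal lines, long vertical lines.
    minors = int(bool(long_horizontal_lines)) + int(bool(long_vertical_lines))
    if minors == 2:
        return "high risk of failure"
    if minors == 1:
        return "low risk of failure"
    return "almost always passes"
```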

  10. Density limit studies on DIII-D

    Energy Technology Data Exchange (ETDEWEB)

    Maingi, R. [Oak Ridge National Lab., TN (United States); Mahdavi, M.A.; Petrie, T.W. [General Atomics, San Diego, CA (United States)] [and others]

    1998-08-01

    The authors have studied the processes limiting plasma density and successfully achieved discharges with density ~50% above the empirical Greenwald density limit with H-mode confinement. This was accomplished by density profile control, enabled through pellet injection and divertor pumping. By examining carefully the criterion for MARFE formation, the authors have derived an edge density limit with scaling very similar to Greenwald scaling. Finally, they have looked in detail at the first and most common density limit process in DIII-D, total divertor detachment, and found that the local upstream separatrix density (n_e^{sep,det}) at detachment onset (partial detachment) increases with the scrape-off layer heating power P_heat, i.e., n_e^{sep,det} ~ P_heat^{0.76}. This is in marked contrast to the line-average density at detachment, which is insensitive to the heating power. The data are in reasonable agreement with the Borass model, which predicted that the upstream density at detachment would increase as P_heat^{0.7}.

  11. Rapid model building of beta-sheets in electron-density maps.

    Science.gov (United States)

    Terwilliger, Thomas C

    2010-03-01

    A method for rapidly building beta-sheets into electron-density maps is presented. Beta-strands are identified as tubes of high density adjacent to and nearly parallel to other tubes of density. The alignment and direction of each strand are identified from the pattern of high density corresponding to carbonyl and C(beta) atoms along the strand, averaged over all repeats present in the strand. The beta-strands obtained are then assembled into a single atomic model of the beta-sheet regions. The method was tested on a set of 42 experimental electron-density maps at resolutions ranging from 1.5 to 3.8 Å. The beta-sheet regions were nearly completely built in all but two cases, the exceptions being one structure at 2.5 Å resolution in which a third of the residues in beta-sheets were built and a structure at 3.8 Å in which under 10% were built. The overall average r.m.s.d. of main-chain atoms in the residues built using this method, compared with refined models of the structures, was 1.5 Å.

  12. Finding Furfural Hydrogenation Catalysts via Predictive Modelling.

    Science.gov (United States)

    Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi

    2010-09-10

    We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes was synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre throughout the reaction. Deuterium-labelling studies showed a secondary isotope effect (k(H):k(D)=1.5). Further mechanistic studies showed that this transfer hydrogenation follows the so-called monohydride pathway. Using these data, we built a predictive model for 13 of the catalysts, based on 2D and 3D molecular descriptors. We tested and validated the model using the remaining five catalysts (cross-validation, R(2)=0.913). Then, with this model, the conversion and selectivity were predicted for four completely new ruthenium-carbene complexes. These four catalysts were then synthesized and tested. The results were within 3% of the model's predictions, demonstrating the validity and value of predictive modelling in catalyst optimization.

  13. Density functional theory prediction of pKa for carboxylated single-wall carbon nanotubes and graphene

    Science.gov (United States)

    Li, Hao; Fu, Aiping; Xue, Xuyan; Guo, Fengna; Huai, Wenbo; Chu, Tianshu; Wang, Zonghua

    2017-06-01

    Density functional calculations have been performed to investigate the acidities of carboxylated single-wall carbon nanotubes and graphene. The pKa values for different COOH-functionalized models with varying lengths, diameters and chirality of nanotubes and with different edges of graphene were predicted using the SMD/M05-2X/6-31G* method combined with two universal thermodynamic cycles. The effects of factors such as the functionalized position of the carboxyl group and the presence of Stone-Wales and single-vacancy defects on the acidity of the functionalized nanotube and graphene have also been evaluated. The deprotonated species undergo decarboxylation when the hybridization mode of the carbon atom at the functionalization site changes from sp2 to sp3, for both the tube and graphene. Knowledge of the pKa values of the carboxylated nanotube and graphene could be of great help for the understanding of nanocarbon materials in many diverse areas, including environmental protection, catalysis, electrochemistry and biochemistry.
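    Once the aqueous deprotonation free energy ΔG has been obtained from a thermodynamic cycle, the conversion to pKa itself is a one-line formula, pKa = ΔG / (RT ln 10). The ΔG value below is illustrative only, not a result from the paper:

```python
import math

R = 8.314462618e-3  # gas constant, kJ mol^-1 K^-1
T = 298.15          # standard temperature, K

def pka_from_dg(dg_deprot_kj_per_mol):
    """pKa from the aqueous deprotonation free energy: pKa = dG / (RT ln 10)."""
    return dg_deprot_kj_per_mol / (R * T * math.log(10))

# e.g. an (assumed) dG of 25 kJ/mol corresponds to pKa ~ 4.4
pka = pka_from_dg(25.0)
print(round(pka, 1))
```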

  14. Model Predictive Control for Smart Energy Systems

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus

    pumps, heat tanks, electrical vehicle battery charging/discharging, wind farms, power plants). 2.Embed forecasting methodologies for the weather (e.g. temperature, solar radiation), the electricity consumption, and the electricity price in a predictive control system. 3.Develop optimization algorithms....... Chapter 3 introduces Model Predictive Control (MPC) including state estimation, filtering and prediction for linear models. Chapter 4 simulates the models from Chapter 2 with the certainty equivalent MPC from Chapter 3. An economic MPC minimizes the costs of consumption based on real electricity prices...... that determined the flexibility of the units. A predictive control system easily handles constraints, e.g. limitations in power consumption, and predicts the future behavior of a unit by integrating predictions of electricity prices, consumption, and weather variables. The simulations demonstrate the expected...

  15. Gamma densitometer for measuring Pu density in fuel tubes

    International Nuclear Information System (INIS)

    Winn, W.G.

    1982-01-01

    A fuel-gamma-densitometer (FGD) has been developed to examine nondestructively the uniformity of plutonium in aluminum-clad fuel tubes at the Savannah River Plant (SRP). The monitoring technique is γ-ray spectroscopy with a lead-collimated Ge(Li) detector. Plutonium density is correlated with the measured intensity of the 208 keV γ-ray from 237U (7 d) of the 241Pu (15 y) decay chain. The FGD measures the plutonium density within 0.125- or 0.25-inch-diameter areas of the 0.133- to 0.183-inch-thick tube walls. Each measurement yields a density ratio that relates the plutonium density of the measured area to the plutonium density in normal regions of the tube. The technique was used to appraise a series of fuel tubes to be irradiated in an SRP reactor. High-density plutonium areas were initially identified by x-ray methods and then examined quantitatively with the FGD. The FGD reliably tested fuel tubes and yielded density ratios over a range of 0.0 to 2.5. FGD measurements examined (1) nonuniform plutonium densities or hot spots, (2) uniform high-density patches, and (3) plutonium density distribution in thin cladding regions. Measurements for tubes with known plutonium density agreed with predictions to within 2%. Attenuation measurements of the 208-keV γ-ray passage through the tube walls agreed to within 2 to 3% of calculated predictions. Collimator leakage measurements agreed with model calculations that predicted less than a 1.5% effect on plutonium density ratios. Finally, FGD measurements correlated well with x-ray transmission and fluoroscopic measurements. The data analysis for density ratios involved a small correction of about 10% for γ-shielding within the fuel tube. For hot-spot examinations, limited information for this correction dictated a density ratio uncertainty of 3 to 5%.
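    A self-shielding correction of the kind mentioned above amounts to dividing the measured γ intensity by a Beer-Lambert transmission factor exp(-μx). The attenuation coefficient and path length below are hypothetical, chosen only to reproduce a correction of roughly 10%; they are not values from the report:

```python
import math

def transmission(mu, x):
    """Beer-Lambert transmission factor exp(-mu*x) for a gamma ray
    traversing thickness x of material with attenuation coefficient mu.
    The measured intensity is divided by this to correct for self-shielding."""
    return math.exp(-mu * x)

# Hypothetical values for a 208 keV gamma: mu in 1/mm, x in mm
t = transmission(0.032, 3.3)
correction = 1.0 / t          # multiplicative correction on intensity
print(round(correction, 2))   # about a 10% correction, as in the abstract
```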

  16. Density-correlation functions in Calogero-Sutherland models

    International Nuclear Information System (INIS)

    Minahan, J.A.; Polychronakos, A.P.

    1994-01-01

    Using arguments from two-dimensional Yang-Mills theory and the collective coordinate formulation of the Calogero-Sutherland model, we conjecture the dynamical density-correlation function for coupling l and 1/l, where l is an integer. We present overwhelming evidence that the conjecture is indeed correct

  17. Density correlation functions in Calogero-Sutherland models

    CERN Document Server

    Minahan, Joseph A.; Joseph A Minahan; Alexios P Polychronakos

    1994-01-01

    Using arguments from two dimensional Yang-Mills theory and the collective coordinate formulation of the Calogero-Sutherland model, we conjecture the dynamical density correlation function for coupling l and 1/l, where l is an integer. We present overwhelming evidence that the conjecture is indeed correct.

  18. Structure-Dependent Water-Induced Linear Reduction Model for Predicting Gas Diffusivity and Tortuosity in Repacked and Intact Soil

    DEFF Research Database (Denmark)

    Møldrup, Per; Chamindu, T. K. K. Deepagoda; Hamamoto, S.

    2013-01-01

    The soil-gas diffusion is a primary driver of transport, reactions, emissions, and uptake of vadose zone gases, including oxygen, greenhouse gases, fumigants, and spilled volatile organics. The soil-gas diffusion coefficient, Dp, depends not only on soil moisture content, texture, and compaction...... but also on the local-scale variability of these. Different predictive models have been developed to estimate Dp in intact and repacked soil, but clear guidelines for model choice at a given soil state are lacking. In this study, the water-induced linear reduction (WLR) model for repacked soil is made...... air) in repacked soils containing between 0 and 54% clay. With Cm = 2.1, the SWLR model on average gave excellent predictions for 290 intact soils, performing well across soil depths, textures, and compactions (dry bulk densities). The SWLR model generally outperformed similar, simple Dp/Do models...

  19. The melt/shrink effect of low density thermoplastics insulates: Cone calorimeter tests

    Directory of Open Access Journals (Sweden)

    Xu Qiang

    2017-01-01

    The melt/shrink effects on the fire behavior of low-density thermoplastic foams have been studied in a cone calorimeter. The experiments were performed with four expanded polystyrene foams of different thicknesses and two extruded polystyrene foams. The decrease in surface area and increase in density that characterize the melt/shrink effect were measured at different incident heat fluxes. Three of the foams were also examined in burning tests at an incident heat flux of 50 kW/m². The fire behavior predictions based on current literature models gave incorrect results when the cone test results were applied directly; however, the models gave adequate results when the initial burning area and the density of the molten foam were used to correct the initial cone calorimeter data. This communication shows that both the effective burning area and the density of the molten foam affect the cone calorimeter data, which requires corresponding corrections to obtain adequate model predictions of the materials' fire behavior.

  20. Microscopic calculation of level densities: the shell model Monte Carlo approach

    International Nuclear Information System (INIS)

    Alhassid, Yoram

    2012-01-01

    The shell model Monte Carlo (SMMC) approach provides a powerful technique for the microscopic calculation of level densities in model spaces that are many orders of magnitude larger than those that can be treated by conventional methods. We discuss a number of developments: (i) Spin distribution. We used a spin projection method to calculate the exact spin distribution of energy levels as a function of excitation energy. In even-even nuclei we find an odd-even staggering effect (in spin). Our results were confirmed in recent analysis of experimental data. (ii) Heavy nuclei. The SMMC approach was extended to heavy nuclei. We have studied the crossover between vibrational and rotational collectivity in families of samarium and neodymium isotopes in model spaces of dimension ~10^29. We find good agreement with experimental results for both state densities and <J^2> (where J is the total spin). (iii) Collective enhancement factors. We have calculated microscopically the vibrational and rotational enhancement factors of level densities versus excitation energy. We find that the decay of these enhancement factors in heavy nuclei is correlated with the pairing and shape phase transitions. (iv) Odd-even and odd-odd nuclei. The projection on an odd number of particles leads to a sign problem in SMMC. We discuss a novel method to calculate state densities in odd-even and odd-odd nuclei despite the sign problem. (v) State densities versus level densities. The SMMC approach has been used extensively to calculate state densities. However, experiments often measure level densities (where levels are counted without including their spin degeneracies). A spin projection method enables us to also calculate level densities in SMMC. We have calculated the SMMC level density of 162Dy and found it to agree well with experiments.

  1. Comparative studies of the ITU-T prediction model for radiofrequency radiation emission and real time measurements at some selected mobile base transceiver stations in Accra, Ghana

    International Nuclear Information System (INIS)

    Obeng, S. O

    2014-07-01

    Recent developments in the electronics industry have led to the widespread use of radiofrequency (RF) devices in various areas, including telecommunications. The increasing number of mobile base transceiver stations (BTS), as well as their proximity to residential areas, has been accompanied by public health concerns due to the radiation exposure. The main objective of this research was to compare and modify the ITU-T predictive model for radiofrequency radiation emission for BTS with measured data at some selected cell sites in Accra, Ghana. Theoretical and experimental assessments of radiofrequency exposure due to mobile base station antennas have been carried out. The maximum and minimum average power densities measured from individual base stations in the town were 1.86 µW/m² and 0.00961 µW/m², respectively, while the ITU-T predictive model power density ranged between 6.40 mW/m² and 0.344 W/m². The results showed a variation between measured power density levels and the ITU-T predictive model: the ITU-T model power density levels decrease with increasing radial distance, while the real-time measurements do not, owing to fluctuations during measurement. The ITU-T model overestimated the power density levels by a factor of 10^5 compared to the real-time measurements, and the model was modified to reduce the level of overestimation. The results also show that radiation intensity varies from one base station to another, even at the same distance. The occupational exposure quotient ranged between 5.43E-10 and 1.89E-08, whilst the general public exposure quotient ranged between 2.72E-09 and 9.44E-08. These results show that the RF exposure levels in Accra from these mobile phone base station antennas are below the RF exposure limit for the general public recommended by the International Commission on Non-Ionizing Radiation Protection. (au)
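    Predictive models of this type typically start from the far-field free-space power density S = P·G / (4πd²), which falls off as the inverse square of distance, in line with the ITU-T model behavior described above. The transmit power and antenna gain below are assumed values for illustration, not data from the study:

```python
import math

def power_density(p_tx_watts, gain, distance_m):
    """Far-field free-space power density S = P*G / (4*pi*d^2) in W/m^2,
    a common starting point for BTS exposure predictions."""
    return p_tx_watts * gain / (4.0 * math.pi * distance_m ** 2)

# Doubling the distance quarters the predicted density (inverse-square law)
s1 = power_density(20.0, 10.0, 100.0)  # assumed 20 W, gain 10, at 100 m
s2 = power_density(20.0, 10.0, 200.0)  # same antenna at 200 m
print(round(s1 / s2, 1))  # 4.0
```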

  2. Stochastic transport models for mixing in variable-density turbulence

    Science.gov (United States)

    Bakosi, J.; Ristorcelli, J. R.

    2011-11-01

    In variable-density (VD) turbulent mixing, where very-different-density materials coexist, the density fluctuations can be an order of magnitude larger than their mean. Density fluctuations are non-negligible in the inertia terms of the Navier-Stokes equation, which has both quadratic and cubic nonlinearities. Very different mixing rates of different materials give rise to large differential accelerations and some fundamentally new physics that is not seen in constant-density turbulence. In VD flows material mixing is active in a sense far stronger than that applied in the Boussinesq approximation of buoyantly-driven flows: the mass fraction fluctuations are coupled to each other and to the fluid momentum. Statistical modeling of VD mixing requires accounting for basic constraints that are not important in the small-density-fluctuation passive-scalar-mixing approximation: the unit-sum of mass fractions, bounded sample space, and the highly skewed nature of the probability densities become essential. We derive a transport equation for the joint probability of mass fractions, equivalent to a system of stochastic differential equations, that is consistent with VD mixing in multi-component turbulence and consistently reduces to passive scalar mixing in constant-density flows.

  3. Impact of particle density and initial volume on mathematical compression models

    DEFF Research Database (Denmark)

    Sonnergaard, Jørn

    2000-01-01

    In the calculation of the coefficients of compression models for powders either the initial volume or the particle density is introduced as a normalising factor. The influence of these normalising factors is, however, widely different on coefficients derived from the Kawakita, Walker and Heckel...... equations. The problems are illustrated by investigations on compaction profiles of 17 materials with different molecular structures and particle densities. It is shown that the particle density of materials with covalent bonds in the Heckel model acts as a key parameter with a dominating influence...

  4. Developing Models to Predict the Number of Fire Hotspots from an Accumulated Fuel Dryness Index by Vegetation Type and Region in Mexico

    Directory of Open Access Journals (Sweden)

    D. J. Vega-Nieva

    2018-04-01

    Understanding the linkage between accumulated fuel dryness and temporal fire occurrence risk is key for improving decision-making in forest fire management, especially under growing conditions of vegetation stress associated with climate change. This study addresses the development of models to predict the number of 10-day observed Moderate-Resolution Imaging Spectroradiometer (MODIS) active fire hotspots, expressed as a Fire Hotspot Density index (FHD), from an Accumulated Fuel Dryness Index (AcFDI), for 17 main vegetation types and regions in Mexico, for the period 2011-2015. The AcFDI was calculated by applying vegetation-specific thresholds for fire occurrence to a satellite-based fuel dryness index (FDI), which was developed after the structure of the Fire Potential Index (FPI). Linear and non-linear models were tested for the prediction of FHD from FDI and AcFDI. Non-linear quantile regression models gave the best results for predicting FHD from AcFDI, together with auto-regression from previously observed hotspot density values. The predictions of 10-day observed FHD values were reasonably good, with R2 values of 0.5 to 0.7, suggesting the potential for use as an operational tool for predicting the expected number of fire hotspots by vegetation type and region in Mexico. The presented modeling strategy could be replicated for any fire danger index in any region, based on information from MODIS or other remote sensors.
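    Quantile-regression models of the kind the abstract describes are fitted by minimizing the pinball (quantile) loss rather than squared error. A minimal sketch with made-up count data, showing why the τ = 0.5 loss favors the median over the mean on skewed data such as hotspot counts:

```python
def pinball_loss(y_true, y_pred, tau):
    """Average pinball (quantile) loss; quantile-regression fits minimize this.
    For tau = 0.5 it reduces to half the mean absolute error."""
    return sum((tau if y > p else tau - 1.0) * (y - p)
               for y, p in zip(y_true, y_pred)) / len(y_true)

# Skewed, made-up hotspot counts: median = 1.5, mean = 3.5
counts = [1.0, 1.0, 2.0, 10.0]
loss_median = pinball_loss(counts, [1.5] * 4, 0.5)
loss_mean = pinball_loss(counts, [3.5] * 4, 0.5)
print(loss_median < loss_mean)  # True: the median is the tau=0.5 minimizer
```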

  5. Recent progress in predicting structural and electronic properties of organic solids with the van der Waals density functional

    Energy Technology Data Exchange (ETDEWEB)

    Yanagisawa, Susumu, E-mail: shou@sci.u-ryukyu.ac.jp [Department of Physics and Earth Sciences, Faculty of Science, University of the Ryukyus, 1 Senbaru, Nishihara, Okinawa 903-0213 (Japan); Okuma, Koji; Inaoka, Takeshi [Department of Physics and Earth Sciences, Faculty of Science, University of the Ryukyus, 1 Senbaru, Nishihara, Okinawa 903-0213 (Japan); Hamada, Ikutaro, E-mail: Hamada.Ikutaro@nims.go.jp [International Center for Materials Nanoarchitectonics (MANA), National Institute for Materials Science (NIMS), Tsukuba 305-0044 (Japan)

    2015-10-01

    Highlights: • Review of theoretical studies on organic solids with the density-functional methods. • van der Waals (vdW)-inclusive methods to predict cohesive properties of oligoacenes. • A variant of the vdW density functional describes the structures accurately. • The molecular configuration and conformation crucially affects the band dispersion. - Abstract: We review recent studies on electronic properties of the organic solids with the first-principles electronic structure methods, with the emphasis on the roles of the intermolecular van der Waals (vdW) interaction in electronic properties of the organic semiconductors. After a brief summary of the recent vdW inclusive first-principle theoretical methods, we discuss their performance in predicting cohesive properties of oligoacene crystals as examples of organic crystals. We show that a variant of the van der Waals density functional describes structure and energetics of organic crystals accurately. In addition, we review our recent study on the zinc phthalocyanine crystal and discuss the importance of the intermolecular distance and orientational angle in the band dispersion. Finally, we draw some general conclusions and the future perspectives.

  6. Recent progress in predicting structural and electronic properties of organic solids with the van der Waals density functional

    International Nuclear Information System (INIS)

    Yanagisawa, Susumu; Okuma, Koji; Inaoka, Takeshi; Hamada, Ikutaro

    2015-01-01

    Highlights: • Review of theoretical studies on organic solids with the density-functional methods. • van der Waals (vdW)-inclusive methods to predict cohesive properties of oligoacenes. • A variant of the vdW density functional describes the structures accurately. • The molecular configuration and conformation crucially affects the band dispersion. - Abstract: We review recent studies on electronic properties of the organic solids with the first-principles electronic structure methods, with the emphasis on the roles of the intermolecular van der Waals (vdW) interaction in electronic properties of the organic semiconductors. After a brief summary of the recent vdW inclusive first-principle theoretical methods, we discuss their performance in predicting cohesive properties of oligoacene crystals as examples of organic crystals. We show that a variant of the van der Waals density functional describes structure and energetics of organic crystals accurately. In addition, we review our recent study on the zinc phthalocyanine crystal and discuss the importance of the intermolecular distance and orientational angle in the band dispersion. Finally, we draw some general conclusions and the future perspectives.

  7. Prediction of the Thermal Conductivity of Refrigerants by Computational Methods and Artificial Neural Network.

    Science.gov (United States)

    Ghaderi, Forouzan; Ghaderi, Amir H; Ghaderi, Noushin; Najafi, Bijan

    2017-01-01

    Background: The thermal conductivity of fluids can be calculated by several computational methods. However, these methods are reliable only at confined levels of density, and there is no specific computational method for calculating thermal conductivity over wide ranges of density. Methods: In this paper, two methods, an Artificial Neural Network (ANN) approach and a computational method based on the Rainwater-Friend theory, were used to predict the value of thermal conductivity in all ranges of density. The thermal conductivity of six refrigerants, R12, R14, R32, R115, R143, and R152, was predicted by these methods, and the effectiveness of the models was quantified and compared. Results: The results show that the computational method is usable for predicting thermal conductivity at low levels of density. However, the efficiency of this model is considerably reduced in the mid-range of density, meaning that it cannot be used at density levels higher than 6. On the other hand, the ANN approach is a reliable method for thermal conductivity prediction in all ranges of density. The best accuracy of the ANN is achieved when the number of units in the hidden layer is increased. Conclusion: The results of the computational method indicate that the regular dependence between thermal conductivity and density is lost at higher densities, making the problem nonlinear. Therefore, analytical approaches are not able to predict thermal conductivity over wide ranges of density. Instead, a nonlinear approach such as an ANN is a valuable method for this purpose.
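    As an illustration of the kind of nonlinear regressor the abstract describes, a minimal one-hidden-layer network with tanh units can be trained by plain gradient descent. The data, architecture, and hyperparameters below are toy assumptions for a sketch, not the authors' ANN:

```python
import math
import random

class TinyANN:
    """Minimal one-hidden-layer network: scalar x -> scalar y, tanh hidden
    units, trained by per-sample gradient descent on squared error."""

    def __init__(self, hidden=8, seed=0):
        rng = random.Random(seed)
        self.w1 = [rng.uniform(-1, 1) for _ in range(hidden)]
        self.b1 = [0.0] * hidden
        self.w2 = [rng.uniform(-1, 1) for _ in range(hidden)]
        self.b2 = 0.0

    def forward(self, x):
        h = [math.tanh(w * x + b) for w, b in zip(self.w1, self.b1)]
        return sum(w * hi for w, hi in zip(self.w2, h)) + self.b2, h

    def train(self, xs, ys, lr=0.05, epochs=2000):
        for _ in range(epochs):
            for x, y in zip(xs, ys):
                out, h = self.forward(x)
                err = out - y
                for i, hi in enumerate(h):
                    dh = err * self.w2[i] * (1.0 - hi * hi)  # backprop via tanh'
                    self.w2[i] -= lr * err * hi
                    self.w1[i] -= lr * dh * x
                    self.b1[i] -= lr * dh
                self.b2 -= lr * err

# Toy "conductivity vs (reduced) density" curve, nonlinear by construction
xs = [i / 10 for i in range(11)]
ys = [0.2 + 0.5 * x * x for x in xs]
net = TinyANN()
net.train(xs, ys)
mse = sum((net.forward(x)[0] - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
print(mse < 1e-2)  # the network captures the nonlinear dependence
```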

  8. Prediction skill of rainstorm events over India in the TIGGE weather prediction models

    Science.gov (United States)

    Karuna Sagar, S.; Rajeevan, M.; Vijaya Bhaskara Rao, S.; Mitra, A. K.

    2017-12-01

    Extreme rainfall events pose a serious threat of leading to severe floods in many countries worldwide. Therefore, advance prediction of their occurrence and spatial distribution is essential. In this paper, an analysis has been made to assess the skill of numerical weather prediction models in predicting rainstorms over India. Using a gridded daily rainfall data set and objective criteria, 15 rainstorms were identified during the monsoon season (June to September). The analysis was made using three TIGGE (THe Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble) models: the European Centre for Medium-Range Weather Forecasts (ECMWF), the National Centre for Environmental Prediction (NCEP) and the UK Met Office (UKMO). Verification of the TIGGE models for 43 observed rainstorm days from the 15 rainstorm events has been made for the period 2007-2015. The comparison reveals that rainstorm events are predictable up to 5 days in advance, albeit with a bias in spatial distribution and intensity. The statistical parameters mean error (ME, or bias), root mean square error (RMSE) and correlation coefficient (CC) have been computed over the rainstorm region using the multi-model ensemble (MME) mean. The study reveals that the spread is large in ECMWF and UKMO, followed by the NCEP model. Though the ensemble spread is quite small in NCEP, the ensemble member averages are not well predicted. The rank histograms suggest that the forecasts are under-predicted. The modified Contiguous Rain Area (CRA) technique was used to verify the spatial as well as the quantitative skill of the TIGGE models. Overall, the contribution from the displacement and pattern errors to the total RMSE is found to be larger in magnitude, and the volume error increases from the 24 hr forecast to the 48 hr forecast in all three models.
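    The verification statistics named above (ME, RMSE, CC) are standard and can be computed directly from paired forecast/observation series. The numeric values below are invented for illustration, not data from the study:

```python
import math

def verification_stats(forecast, observed):
    """Mean error (bias), RMSE, and Pearson correlation for paired series."""
    n = len(forecast)
    me = sum(f - o for f, o in zip(forecast, observed)) / n
    rmse = math.sqrt(sum((f - o) ** 2 for f, o in zip(forecast, observed)) / n)
    mf = sum(forecast) / n
    mo = sum(observed) / n
    cov = sum((f - mf) * (o - mo) for f, o in zip(forecast, observed))
    sf = math.sqrt(sum((f - mf) ** 2 for f in forecast))
    so = math.sqrt(sum((o - mo) ** 2 for o in observed))
    return me, rmse, cov / (sf * so)

fc = [10.0, 22.0, 28.0, 41.0]  # made-up forecast rainfall
ob = [12.0, 20.0, 30.0, 38.0]  # made-up observed rainfall
me, rmse, cc = verification_stats(fc, ob)
print(round(me, 2), round(rmse, 2), round(cc, 3))  # 0.25 2.29 0.984
```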

  9. Numerical prediction of an axisymmetric turbulent mixing layer using two turbulence models

    Science.gov (United States)

    Johnson, Richard W.

    1992-01-01

    Nuclear power, once considered and then rejected (in the U.S.) for application to space vehicle propulsion, is being reconsidered for powering space rockets, especially for interplanetary travel. The gas core reactor, a high-risk, high-payoff nuclear engine concept, is one that was considered in the 1960s and 70s. As envisioned then, the gas core reactor would consist of a heavy, slow-moving core of fissioning uranium vapor surrounded by a fast-moving outer stream of hydrogen propellant. Satisfactory operation of such a configuration would require stable nuclear reaction kinetics to occur simultaneously with a stable, coflowing, probably turbulent fluid system having a dense inner stream and a light outer stream. The present study examines the behavior of two turbulence models in numerically simulating an idealized version of the above coflowing fluid system. The two models are the standard k-ε model and a thin-shear algebraic stress model (ASM). The idealized flow system can be described as an axisymmetric mixing layer of constant density. Predictions for the radial distribution of the mean streamwise velocity and shear stress at several axial stations are compared with experiment. Results for the k-ε predictions are broadly satisfactory, while those for the ASM are distinctly poorer.

  10. Modelling the Effect of Weave Structure and Fabric Thread Density on Mechanical and Comfort Properties of Woven Fabrics

    Directory of Open Access Journals (Sweden)

    Maqsood Muhammad

    2016-09-01

    The paper investigates the effects of weave structure and fabric thread density on the comfort and mechanical properties of various test fabrics woven from polyester/cotton yarns. Three different weave structures, that is, 1/1 plain, 2/1 twill and 3/1 twill, and three different fabric thread densities were taken as input variables, whereas air permeability, overall moisture management capacity, tensile strength and tear strength of the fabrics were taken as response variables, and a comparison is made of the effect of weave structure and fabric density on the response variables. The results of the fabric samples were analysed in Minitab statistical software. The coefficients of determination (R² values) of the regression equations show a good predictive ability of the developed statistical models. The findings of the study may be helpful in deciding appropriate manufacturing specifications of woven fabrics to attain specific comfort and mechanical properties.
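    The R² values reported for such regression models can be computed as below for a simple one-predictor least-squares fit. The thread-density/strength numbers are invented for illustration, not measurements from the study:

```python
def r_squared(xs, ys):
    """Coefficient of determination for a least-squares line y = a + b*x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

# Hypothetical thread density (ends/inch) vs tensile strength (N) data
density = [40.0, 50.0, 60.0, 70.0]
strength = [410.0, 492.0, 570.0, 655.0]
r2 = r_squared(density, strength)
print(round(r2, 4))  # near 1: a nearly linear relation
```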

  11. Density Functional Theory and Materials Modeling at Atomistic Length Scales

    Directory of Open Access Journals (Sweden)

    Swapan K. Ghosh

    2002-04-01

    We discuss the basic concepts of density functional theory (DFT) as applied to materials modeling at the microscopic, mesoscopic and macroscopic length scales. The picture that emerges is that of a single unified framework for the study of both quantum and classical systems. While for quantum DFT the central equation is a one-particle Schrödinger-like Kohn-Sham equation, classical DFT consists of Boltzmann-type distributions, both corresponding to a system of noninteracting particles in the field of a density-dependent effective potential, the exact functional form of which is unknown. One therefore approximates the exchange-correlation potential for quantum systems and the excess free energy density functional or the direct correlation functions for classical systems. Illustrative applications of quantum DFT to microscopic modeling of molecular interaction and of classical DFT to mesoscopic modeling of soft condensed matter systems are highlighted.

  12. Predicting climate-induced range shifts: model differences and model reliability.

    Science.gov (United States)

    Joshua J. Lawler; Denis White; Ronald P. Neilson; Andrew R. Blaustein

    2006-01-01

    Predicted changes in the global climate are likely to cause large shifts in the geographic ranges of many plant and animal species. To date, predictions of future range shifts have relied on a variety of modeling approaches with different levels of model accuracy. Using a common data set, we investigated the potential implications of alternative modeling approaches for...

  13. Refitting density dependent relativistic model parameters including Center-of-Mass corrections

    International Nuclear Information System (INIS)

    Avancini, Sidney S.; Marinelli, Jose R.; Carlson, Brett Vern

    2011-01-01

    Full text: Relativistic mean field models have become a standard approach for precise nuclear structure calculations. After the seminal work of Serot and Walecka, which introduced a model Lagrangian density where the nucleons interact through the exchange of scalar and vector mesons, several models were obtained through its generalization, including other meson degrees of freedom, non-linear meson interactions, meson-meson interactions, etc. More recently, density dependent coupling constants were incorporated into the Walecka-like models, which are then extensively used. In particular, for these models a connection with density functional theory can be established. Due to the inherent difficulties presented by field theoretical models, only the mean field approximation is used for the solution of these models. In order to calculate finite nuclei properties in the mean field approximation, a reference set has to be fixed and therefore the translational symmetry is violated. It is well known that in such a case spurious effects due to the center-of-mass (COM) motion are present, which are more pronounced for light nuclei. In a previous work we proposed a technique based on the Peierls-Yoccoz projection operator applied to the mean-field relativistic solution, in order to project out spurious COM contributions. In this work we obtain a new fit of the density dependent parameters of a density dependent hadronic model, taking into account the COM corrections. Our fit takes into account the charge radii and binding energies of 4He, 16O, 40Ca, 48Ca, 56Ni, 68Ni, 100Sn, 132Sn and 208Pb. We show that the nuclear observables calculated using our fit are of a quality comparable to others that can be found in the literature, with the advantage that a translationally invariant many-body wave function is now at our disposal. (author)

  14. Predictive Modeling of a Paradigm Mechanical Cooling Tower Model: II. Optimal Best-Estimate Results with Reduced Predicted Uncertainties

    Directory of Open Access Journals (Sweden)

    Ruixian Fang

    2016-09-01

    This work uses the adjoint sensitivity model of the counter-flow cooling tower derived in the accompanying PART I to obtain the expressions and relative numerical rankings of the sensitivities, to all model parameters, of the following model responses: (i) outlet air temperature; (ii) outlet water temperature; (iii) outlet water mass flow rate; and (iv) air outlet relative humidity. These sensitivities are subsequently used within the "predictive modeling for coupled multi-physics systems" (PM_CMPS) methodology to obtain explicit formulas for the predicted optimal nominal values for the model responses and parameters, along with reduced predicted standard deviations for the predicted model parameters and responses. These explicit formulas embody the assimilation of experimental data and the "calibration" of the model's parameters. The results presented in this work demonstrate that the PM_CMPS methodology reduces the predicted standard deviations to values that are smaller than either the computed or the experimentally measured ones, even for responses (e.g., the outlet water flow rate) for which no measurements are available. These improvements stem from the global characteristics of the PM_CMPS methodology, which combines all of the available information simultaneously in phase-space, as opposed to combining it sequentially, as in current data assimilation procedures.

  15. Model predictive control classical, robust and stochastic

    CERN Document Server

    Kouvaritakis, Basil

    2016-01-01

    For the first time, a textbook that brings together classical predictive control with a treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered, and the state of the art in computationally tractable methods based on uncertainty tubes is presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...

  16. Control-oriented modeling of the plasma particle density in tokamaks and application to real-time density profile reconstruction

    NARCIS (Netherlands)

    Blanken, T.C.; Felici, F.; Rapson, C.J.; de Baar, M.R.; Heemels, W.P.M.H.

    2018-01-01

    A model-based approach to real-time reconstruction of the particle density profile in tokamak plasmas is presented, based on a dynamic state estimator. Traditionally, the density profile is reconstructed in real-time by solving an ill-conditioned inversion problem using a measurement at a single

  17. Predicting surface fuel models and fuel metrics using lidar and CIR imagery in a dense mixed conifer forest

    Science.gov (United States)

    Marek K. Jakubowksi; Qinghua Guo; Brandon Collins; Scott Stephens; Maggi. Kelly

    2013-01-01

    We compared the ability of several classification and regression algorithms to predict forest stand structure metrics and standard surface fuel models. Our study area spans a dense, topographically complex Sierra Nevada mixed-conifer forest. We used clustering, regression trees, and support vector machine algorithms to analyze high density (average 9 pulses/m

  18. Modeling density-driven flow in porous media principles, numerics, software

    CERN Document Server

    Holzbecher, Ekkehard O

    1998-01-01

    Modeling of flow and transport in groundwater has become an important focus of scientific research in recent years. Most contributions to this subject deal with flow situations where density and viscosity changes in the fluid are neglected. This restriction may not always be justified. The models presented in the book demonstrate impressively that the flow pattern may be completely different when density changes are taken into account. The main applications of the models are: thermal and saline convection, geothermal flow, saltwater intrusion, flow through salt formations, etc. This book not only presents the basic theory; the reader can also test his knowledge by applying the included software and can set up his own models.

  19. Travelling waves of density for a fourth-gradient model of fluids

    Science.gov (United States)

    Gouin, Henri; Saccomandi, Giuseppe

    2016-09-01

    In mean-field theory, the non-local state of fluid molecules can be taken into account using a statistical method. The molecular model, combined with a fourth-order Taylor expansion of the density, yields an internal energy relevant to the fourth-gradient model, and the equation of isothermal motion then takes the spatial derivatives of the density into account for waves travelling in both liquid and vapour phases. At equilibrium, the equation for the density profile across interfaces is more precise than the Cahn and Hilliard equation, and near the fluid's critical point the density profile satisfies an extended Fisher-Kolmogorov equation, allowing kinks, which converges towards the Cahn-Hilliard equation when approaching the critical point. Nonetheless, we also obtain oscillating pulse waves that generate critical opalescence.

  20. Modelling stand biomass fractions in Galician Eucalyptus globulus plantations by use of different LiDAR pulse densities

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez-Ferreiro, E.; Miranda, D.; Barreiro-Fernandez, L.; Bujan, S.; Garcia-Gutierrez, J.; Dieguez-Aranda, U.

    2013-07-01

    Aims of study: To evaluate the potential use of canopy height and intensity distributions, determined by airborne LiDAR, for the estimation of crown, stem and aboveground biomass fractions. To assess the effects of a reduction in LiDAR pulse densities on model precision. Area of study: The study area is located in Galicia, NW Spain. The forests are representative of Eucalyptus globulus stands in NW Spain, characterized by low-intensity silvicultural treatments and by the presence of tall shrub. Material and methods: Linear, multiplicative power and exponential models were used to establish empirical relationships between field measurements and LiDAR metrics. A random selection of LiDAR returns and a comparison of the prediction errors by LiDAR pulse density factor were performed to study a possible loss of fit in these models. Main results: Models showed similar goodness-of-fit statistics to those reported in the international literature. R2 ranged from 0.52 to 0.75 for stand crown biomass, from 0.64 to 0.87 for stand stem biomass, and from 0.63 to 0.86 for stand aboveground biomass. The RMSE/MEAN × 100 of the set of fitted models ranged from 17.4% to 28.4%. Model precision was essentially maintained when the original point cloud was reduced by 87.5%, i.e. from 4 pulses m{sup -2} to 0.5 pulses m{sup -2}. Research highlights: Considering the results of this study, the low-density LiDAR data that are released by the Spanish National Geographic Institute will be an excellent source of information for reducing the cost of forest inventories. (Author)

  1. Absolute densities in exoplanetary systems. Photodynamical modelling of Kepler-138.

    Science.gov (United States)

    Almenara, J. M.; Díaz, R. F.; Dorn, C.; Bonfils, X.; Udry, S.

    2018-04-01

    In favourable conditions, the density of transiting planets in multiple systems can be determined from photometry data alone. Dynamical information can be extracted from light curves, provided modelling is done self-consistently, i.e. using a photodynamical model, which simulates the individual photometric observations instead of the more generally used transit times. We apply this methodology to the Kepler-138 planetary system. The derived planetary bulk densities are a factor of two more precise than previous determinations, and we find a discrepancy in the stellar bulk density with respect to a previous study. This leads, in turn, to a discrepancy in the determination of the masses and radii of the star and the planets. In particular, we find that the interior planet, Kepler-138 b, has a size between those of Mars and the Earth. Given our mass and density estimates, we characterize the planetary interiors using a generalized Bayesian inference model. This model allows us to quantify the interior degeneracy and calculate confidence regions of interior parameters such as the thicknesses of the core, the mantle, and the ocean and gas layers. We find that Kepler-138 b and Kepler-138 d have significantly thick volatile layers, and that the gas layer of Kepler-138 b is likely enriched. On the other hand, Kepler-138 c can be purely rocky.

  2. Model predictive Controller for Mobile Robot

    OpenAIRE

    Alireza Rezaee

    2017-01-01

    This paper proposes a Model Predictive Controller (MPC) for the control of a P2AT mobile robot. MPC refers to a group of controllers that employ an explicit model of the process to predict its future behavior over an extended prediction horizon. The design of an MPC is formulated as an optimal control problem. This problem is then cast as a linear quadratic regulator (LQR) problem and solved by making use of the Riccati equation. To show the effectiveness of the proposed method this controller is...
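
The LQR-via-Riccati step described in this abstract can be sketched in a few lines. The double-integrator dynamics, weights, and horizon below are illustrative assumptions, not the P2AT robot model from the paper:

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Solve the discrete-time algebraic Riccati equation by fixed-point
    iteration and return the LQR gain K (control law u = -K x)."""
    P = Q.copy()
    for _ in range(iters):
        # Riccati recursion: K = (R + B'PB)^-1 B'PA, then P <- Q + A'P(A - BK)
        BtP = B.T @ P
        K = np.linalg.solve(R + BtP @ B, BtP @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Toy double-integrator model (assumed stand-in, not the robot's dynamics)
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.diag([1.0, 0.1])   # state weights
R = np.array([[0.01]])    # input weight

K = dlqr(A, B, Q, R)
x = np.array([1.0, 0.0])  # initial state: 1 m offset, at rest
for _ in range(100):
    x = A @ x - B @ (K @ x)  # closed loop: x+ = (A - BK) x
# The regulated state is driven toward the origin
```

In an MPC setting this gain would typically serve as the unconstrained terminal controller; the receding-horizon optimization adds the constraint handling.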

  3. Deep Predictive Models in Interactive Music

    OpenAIRE

    Martin, Charles P.; Ellefsen, Kai Olav; Torresen, Jim

    2018-01-01

    Automatic music generation is a compelling task where much recent progress has been made with deep learning models. In this paper, we ask how these models can be integrated into interactive music systems; how can they encourage or enhance the music making of human users? Musical performance requires prediction to operate instruments, and perform in groups. We argue that predictive models could help interactive systems to understand their temporal context, and ensemble behaviour. Deep learning...

  4. Energy–density functional plus quasiparticle–phonon model theory as a powerful tool for nuclear structure and astrophysics

    Energy Technology Data Exchange (ETDEWEB)

    Tsoneva, N., E-mail: Nadia.Tsoneva@theo.physik.uni-giessen.de [Frankfurt Institute for Advanced Studies (FIAS) (Germany); Lenske, H. [Universität Gießen, Institut für Theoretische Physik (Germany)

    2016-11-15

    During the last decade, a theoretical method based on energy-density functional theory and the quasiparticle-phonon model, including up to three-phonon configurations, was developed. The main advantages of the method are that it incorporates a self-consistent mean-field and multi-configuration mixing, which are found to be of crucial importance for systematic investigations of nuclear low-energy excitations, pygmy and giant resonances in a unified way. In particular, the theoretical approach has proven very successful in the prediction of new modes of excitation, namely the pygmy quadrupole resonance, which has lately also been observed experimentally. Recently, our microscopically obtained dipole strength functions have been implemented in predictions of nucleon-capture reaction rates of astrophysical importance. A comparison to available experimental data is discussed.

  5. Risk prediction model: Statistical and artificial neural network approach

    Science.gov (United States)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand the suitable approach to, and the development and validation process of, such models. A qualitative review of the aims, methods and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was done. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, artificial neural network approaches to developing prediction models were more accurate than statistical approaches. However, currently only limited published literature discusses which approach is more accurate for risk prediction model development.

  6. Nuclear interaction potential in a folded-Yukawa model with diffuse densities

    International Nuclear Information System (INIS)

    Randrup, J.

    1975-09-01

    The folded-Yukawa model for the nuclear interaction potential is generalized to diffuse density distributions which are generated by folding a Yukawa function into sharp generating distributions. The effect of a finite density diffuseness or of a finite interaction range is studied. The Proximity Formula corresponding to the generalized model is derived and numerical comparison is made with the exact results. (8 figures)

  7. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico

    2009-01-01

    The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.

  8. Predictive models of moth development

    Science.gov (United States)

    Degree-day models link ambient temperature to insect life-stages, making such models valuable tools in integrated pest management. These models increase management efficacy by predicting pest phenology. In Wisconsin, the top insect pest of cranberry production is the cranberry fruitworm, Acrobasis v...
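
A degree-day model of the kind described here accumulates daily heat units above a base temperature until a phenological threshold is reached. The sketch below uses the simple averaging method; the base temperature and emergence threshold are hypothetical round numbers, not the published values for the cranberry fruitworm:

```python
def degree_days(t_min, t_max, base=10.0):
    """Single-day degree-day accumulation with the simple average method:
    max(0, (Tmin + Tmax)/2 - base)."""
    return max(0.0, (t_min + t_max) / 2.0 - base)

def accumulate(daily_temps, base=10.0, threshold=400.0):
    """Return the first day on which cumulative degree-days reach the
    threshold (a hypothetical pest-emergence trigger), plus the total."""
    total = 0.0
    for day, (lo, hi) in enumerate(daily_temps, start=1):
        total += degree_days(lo, hi, base)
        if total >= threshold:
            return day, total
    return None, total

# 60 days of synthetic spring temperatures warming from 8/18 to ~14/24 C
temps = [(8 + 0.1 * d, 18 + 0.1 * d) for d in range(60)]
day, total = accumulate(temps, base=10.0, threshold=200.0)
# With this synthetic series, the threshold is crossed on day 41
```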

  9. Predictive modeling of pedestal structure in KSTAR using EPED model

    Energy Technology Data Exchange (ETDEWEB)

    Han, Hyunsun; Kim, J. Y. [National Fusion Research Institute, Daejeon 305-806 (Korea, Republic of); Kwon, Ohjin [Department of Physics, Daegu University, Gyeongbuk 712-714 (Korea, Republic of)

    2013-10-15

    A predictive calculation is given for the structure of edge pedestal in the H-mode plasma of the KSTAR (Korea Superconducting Tokamak Advanced Research) device using the EPED model. Particularly, the dependence of pedestal width and height on various plasma parameters is studied in detail. The two codes, ELITE and HELENA, are utilized for the stability analysis of the peeling-ballooning and kinetic ballooning modes, respectively. Summarizing the main results, the pedestal slope and height have a strong dependence on plasma current, rapidly increasing with it, while the pedestal width is almost independent of it. The plasma density or collisionality gives initially a mild stabilization, increasing the pedestal slope and height, but above some threshold value its effect turns to a destabilization, reducing the pedestal width and height. Among several plasma shape parameters, the triangularity gives the most dominant effect, rapidly increasing the pedestal width and height, while the effect of elongation and squareness appears to be relatively weak. Implication of these edge results, particularly in relation to the global plasma performance, is discussed.

  10. Model Prediction Control For Water Management Using Adaptive Prediction Accuracy

    NARCIS (Netherlands)

    Tian, X.; Negenborn, R.R.; Van Overloop, P.J.A.T.M.; Mostert, E.

    2014-01-01

    In the field of operational water management, Model Predictive Control (MPC) has gained popularity owing to its versatility and flexibility. The MPC controller, which takes predictions, time delay and uncertainties into account, can be designed for multi-objective management problems and for

  11. Computational fluid dynamics (CFD) using porous media modeling predicts recurrence after coiling of cerebral aneurysms.

    Science.gov (United States)

    Umeda, Yasuyuki; Ishida, Fujimaro; Tsuji, Masanori; Furukawa, Kazuhiro; Shiba, Masato; Yasuda, Ryuta; Toma, Naoki; Sakaida, Hiroshi; Suzuki, Hidenori

    2017-01-01

    This study aimed to predict recurrence after coil embolization of unruptured cerebral aneurysms with computational fluid dynamics (CFD) using porous media modeling (porous media CFD). A total of 37 unruptured cerebral aneurysms treated with coiling were analyzed using follow-up angiograms, simulated CFD prior to coiling (control CFD), and porous media CFD. Coiled aneurysms were classified into stable or recurrence groups according to follow-up angiogram findings. Morphological parameters, coil packing density, and hemodynamic variables were evaluated for their correlations with aneurysmal recurrence. We also calculated residual flow volumes (RFVs), a novel hemodynamic parameter used to quantify the residual aneurysm volume after simulated coiling, which has a mean fluid domain > 1.0 cm/s. Follow-up angiograms showed 24 aneurysms in the stable group and 13 in the recurrence group. Mann-Whitney U test demonstrated that maximum size, dome volume, neck width, neck area, and coil packing density were significantly different between the two groups (P CFD and larger RFVs in the porous media CFD. Multivariate logistic regression analyses demonstrated that RFV was the only independently significant factor (odds ratio, 1.06; 95% confidence interval, 1.01-1.11; P = 0.016). The study findings suggest that RFV collected under porous media modeling predicts the recurrence of coiled aneurysms.

  12. Potential misuse of avian density as a conservation metric

    Science.gov (United States)

    Skagen, Susan K.; Yackel Adams, Amy A.

    2011-01-01

    Effective conservation metrics are needed to evaluate the success of management in a rapidly changing world. Reproductive rates and densities of breeding birds (as a surrogate for reproductive rate) have been used to indicate the quality of avian breeding habitat, but the underlying assumptions of these metrics rarely have been examined. When birds are attracted to breeding areas in part by the presence of conspecifics and when breeding in groups influences predation rates, the effectiveness of density and reproductive rate as indicators of habitat quality is reduced. It is beneficial to clearly distinguish between individual- and population-level processes when evaluating habitat quality. We use the term reproductive rate to refer to both levels and further distinguish among levels by using the terms per capita fecundity (number of female offspring per female per year, individual level) and population growth rate (the product of density and per capita fecundity, population level). We predicted how density and reproductive rate interact over time under density-independent and density-dependent scenarios, assuming the ideal free distribution model of how birds settle in breeding habitats. We predicted population density of small populations would be correlated positively with both per capita fecundity and population growth rate due to the Allee effect. For populations in the density-dependent growth phase, we predicted no relation between density and per capita fecundity (because individuals in all patches will equilibrate to the same success rate) and a positive relation between density and population growth rate. Several ecological theories collectively suggest that positive correlations between density and per capita fecundity would be difficult to detect. We constructed a decision tree to guide interpretation of positive, neutral, nonlinear, and negative relations between density and reproductive rates at individual and population levels. © 2010 Society for

  13. Multi-target QSPR modeling for simultaneous prediction of multiple gas-phase kinetic rate constants of diverse chemicals

    Science.gov (United States)

    Basant, Nikita; Gupta, Shikha

    2018-03-01

    The reactions of molecular ozone (O3), hydroxyl (•OH) and nitrate (NO3) radicals are among the major pathways of removal of volatile organic compounds (VOCs) in the atmospheric environment. The gas-phase kinetic rate constants (kO3, kOH, kNO3) are thus, important in assessing the ultimate fate and exposure risk of atmospheric VOCs. Experimental data for rate constants are not available for many emerging VOCs and the computational methods reported so far address a single target modeling only. In this study, we have developed a multi-target (mt) QSPR model for simultaneous prediction of multiple kinetic rate constants (kO3, kOH, kNO3) of diverse organic chemicals considering an experimental data set of VOCs for which values of all the three rate constants are available. The mt-QSPR model identified and used five descriptors related to the molecular size, degree of saturation and electron density in a molecule, which were mechanistically interpretable. These descriptors successfully predicted three rate constants simultaneously. The model yielded high correlations (R2 = 0.874-0.924) between the experimental and simultaneously predicted endpoint rate constant (kO3, kOH, kNO3) values in test arrays for all the three systems. The model also passed all the stringent statistical validation tests for external predictivity. The proposed multi-target QSPR model can be successfully used for predicting reactivity of new VOCs simultaneously for their exposure risk assessment.
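
The core idea of a multi-target model, one shared set of descriptors predicting several endpoints at once, can be sketched with an ordinary multi-output least-squares fit. The descriptors, weights, and data below are synthetic stand-ins, not the five descriptors or VOC data of the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: 5 molecular descriptors -> 3 log rate constants
# (conceptually log kO3, log kOH, log kNO3; values are simulated)
n, p, t = 120, 5, 3
X = rng.normal(size=(n, p))
W_true = rng.normal(size=(p, t))
Y = X @ W_true + 0.1 * rng.normal(size=(n, t))

# One shared least-squares fit predicts all three endpoints simultaneously
Xb = np.column_stack([np.ones(n), X])           # add intercept column
W, *_ = np.linalg.lstsq(Xb, Y, rcond=None)
Y_hat = Xb @ W

# Per-endpoint coefficient of determination R^2
ss_res = ((Y - Y_hat) ** 2).sum(axis=0)
ss_tot = ((Y - Y.mean(axis=0)) ** 2).sum(axis=0)
r2 = 1 - ss_res / ss_tot
```

The actual mt-QSPR model in the abstract is more sophisticated, but the shape of the problem, a single descriptor matrix mapped to a matrix of endpoints, is the same.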

  14. Predicting water main failures using Bayesian model averaging and survival modelling approach

    International Nuclear Information System (INIS)

    Kabir, Golam; Tesfamariam, Solomon; Sadiq, Rehan

    2015-01-01

    To develop an effective preventive or proactive repair and replacement action plan, water utilities often rely on water main failure prediction models. However, in predicting the failure of water mains, uncertainty is inherent regardless of the quality and quantity of data used in the model. To improve the understanding of water main failure, a Bayesian framework is developed for predicting the failure of water mains considering uncertainties. In this study, the Bayesian model averaging method (BMA) is presented to identify the influential pipe-dependent and time-dependent covariates considering model uncertainties, whereas the Bayesian Weibull Proportional Hazard Model (BWPHM) is applied to develop the survival curves and to predict the failure rates of water mains. To validate the proposed framework, it is implemented to predict the failure of cast iron (CI) and ductile iron (DI) pipes of the water distribution network of the City of Calgary, Alberta, Canada. Results indicate that the predicted 95% uncertainty bounds of the proposed BWPHMs effectively capture the observed breaks for both CI and DI water mains. Moreover, the proposed BWPHMs perform better than the Cox Proportional Hazard Model (Cox-PHM), owing to the use of a Weibull distribution for the baseline hazard function and the consideration of model uncertainties. - Highlights: • Prioritize rehabilitation and replacement (R/R) strategies for water mains. • Consider the uncertainties in failure prediction. • Improve the prediction capability of water main failure models. • Identify the influential and appropriate covariates for different models. • Determine the effects of the covariates on failure
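
The Weibull proportional-hazards form used in the BWPHM evaluates a survival curve S(t|x) = exp(-(t/scale)^shape · exp(β·x)). A minimal sketch, with coefficients and covariates that are purely illustrative (not the fitted Calgary values):

```python
import math

def weibull_ph_survival(t, shape, scale, beta, x):
    """Survival S(t|x) = exp(-(t/scale)^shape * exp(beta.x)) for a
    Weibull baseline hazard under the proportional-hazards assumption."""
    lp = sum(b * xi for b, xi in zip(beta, x))  # linear predictor beta.x
    return math.exp(-((t / scale) ** shape) * math.exp(lp))

# Hypothetical coefficients: covariates = [pipe age / 10 yr, log pipe length]
beta = [0.35, 0.20]
shape, scale = 1.4, 60.0  # assumed baseline Weibull parameters, in years

s_young = weibull_ph_survival(30, shape, scale, beta, [2.0, 0.5])
s_old = weibull_ph_survival(30, shape, scale, beta, [6.0, 0.5])
# An older pipe has a lower 30-year survival probability than a younger one
```

In the Bayesian version described in the abstract, shape, scale and β carry posterior distributions rather than point values, which is what produces the 95% uncertainty bounds.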

  15. Irruptive dynamics of introduced caribou on Adak Island, Alaska: an evaluation of Riney-Caughley model predictions

    Science.gov (United States)

    Ricca, Mark A.; Van Vuren, Dirk H.; Weckerly, Floyd W.; Williams, Jeffrey C.; Miles, A. Keith

    2014-01-01

    Large mammalian herbivores introduced to islands without predators are predicted to undergo irruptive population and spatial dynamics, but only a few well-documented case studies support this paradigm. We used the Riney-Caughley model as a framework to test predictions of irruptive population growth and spatial expansion of caribou (Rangifer tarandus granti) introduced to Adak Island in the Aleutian archipelago of Alaska in 1958 and 1959. We utilized a time series of spatially explicit counts conducted on this population intermittently over a 54-year period. Population size increased from 23 released animals to approximately 2900 animals in 2012. Population dynamics were characterized by two distinct periods of irruptive growth separated by a long time period of relative stability, and the catalyst for the initial irruption was more likely related to annual variation in hunting pressure than weather conditions. An unexpected pattern resembling logistic population growth occurred between the peak of the second irruption in 2005 and the next survey conducted seven years later in 2012. Model simulations indicated that an increase in reported harvest alone could not explain the deceleration in population growth, yet high levels of unreported harvest combined with increasing density-dependent feedbacks on fecundity and survival were the most plausible explanation for the observed population trend. No studies of introduced island Rangifer have measured a time series of spatial use to the extent described in this study. Spatial use patterns during the post-calving season strongly supported Riney-Caughley model predictions, whereby high-density core areas expanded outwardly as population size increased. During the calving season, caribou displayed marked site fidelity across the full range of population densities despite availability of other suitable habitats for calving. 
Finally, dispersal and reproduction on neighboring Kagalaska Island represented a new dispersal front

  16. A Bayesian Hierarchical Modeling Approach to Predicting Flow in Ungauged Basins

    Science.gov (United States)

    Gronewold, A.; Alameddine, I.; Anderson, R. M.

    2009-12-01

    Recent innovative approaches to identifying and applying regression-based relationships between land use patterns (such as increasing impervious surface area and decreasing vegetative cover) and rainfall-runoff model parameters represent novel and promising improvements to predicting flow from ungauged basins. In particular, these approaches allow for predicting flows under uncertain and potentially variable future conditions due to rapid land cover changes, variable climate conditions, and other factors. Despite the broad range of literature on estimating rainfall-runoff model parameters, however, the absence of a robust set of modeling tools for identifying and quantifying uncertainties in (and correlation between) rainfall-runoff model parameters represents a significant gap in current hydrological modeling research. Here, we build upon a series of recent publications promoting novel Bayesian and probabilistic modeling strategies for quantifying rainfall-runoff model parameter estimation uncertainty. Our approach applies alternative measures of rainfall-runoff model parameter joint likelihood (including Nash-Sutcliffe efficiency, among others) to simulate samples from the joint parameter posterior probability density function. We then use these correlated samples as response variables in a Bayesian hierarchical model with land use coverage data as predictor variables in order to develop a robust land use-based tool for forecasting flow in ungauged basins while accounting for, and explicitly acknowledging, parameter estimation uncertainty. We apply this modeling strategy to low-relief coastal watersheds of Eastern North Carolina, an area representative of coastal resource waters throughout the world because of its sensitive embayments and because of the abundant (but currently threatened) natural resources it hosts. 
Consequently, this area is the subject of several ongoing studies and large-scale planning initiatives, including those conducted through the United

  17. CARMA SURVEY TOWARD INFRARED-BRIGHT NEARBY GALAXIES (STING). III. THE DEPENDENCE OF ATOMIC AND MOLECULAR GAS SURFACE DENSITIES ON GALAXY PROPERTIES

    International Nuclear Information System (INIS)

    Wong, Tony; Xue, Rui; Bolatto, Alberto D.; Fisher, David B.; Vogel, Stuart N.; Leroy, Adam K.; Blitz, Leo; Rosolowsky, Erik; Bigiel, Frank; Ott, Jürgen; Rahman, Nurur; Walter, Fabian

    2013-01-01

    We investigate the correlation between CO and H I emission in 18 nearby galaxies from the CARMA Survey Toward IR-Bright Nearby Galaxies (STING) at sub-kpc and kpc scales. Our sample, spanning a wide range in stellar mass and metallicity, reveals evidence for a metallicity dependence of the H I column density measured in regions exhibiting CO emission. Such a dependence is predicted by the equilibrium model of McKee and Krumholz, which balances H 2 formation and dissociation. The observed H I column density is often smaller than predicted by the model, an effect we attribute to unresolved clumping, although values close to the model prediction are also seen. We do not observe H I column densities much larger than predicted, as might be expected were there a diffuse H I component that did not contribute to H 2 shielding. We also find that the H 2 column density inferred from CO correlates strongly with the stellar surface density, suggesting that the local supply of molecular gas is tightly regulated by the stellar disk

  18. Predicting local dengue transmission in Guangzhou, China, through the influence of imported cases, mosquito density and climate variability.

    Directory of Open Access Journals (Sweden)

    Shaowei Sang

    Full Text Available Each year there are approximately 390 million dengue infections worldwide. Weather variables have a significant impact on the transmission of Dengue Fever (DF), a mosquito-borne viral disease. DF in mainland China is characterized as an imported disease. Hence it is necessary to explore the roles of imported cases, mosquito density and climate variability in dengue transmission in China. The aims of the study were to identify the relationship between dengue occurrence and possible risk factors and to develop a predictive model for dengue control and prevention purposes. Three traditional suburbs and one district with an international airport in Guangzhou city were selected as the study areas. Autocorrelation and cross-correlation analysis were used to perform univariate analysis to identify possible risk factors, with relevant lagged effects, associated with local dengue cases. Principal component analysis (PCA) was applied to extract principal components, and PCA scores were used to represent the original variables to reduce multi-collinearity. Combining the univariate analysis and prior knowledge, time-series Poisson regression analysis was conducted to quantify the relationship between weather variables, the Breteau Index, imported DF cases and local dengue transmission in Guangzhou, China. The goodness-of-fit of the constructed model was determined by pseudo-R2, the Akaike information criterion (AIC) and a residual test. There were a total of 707 notified local DF cases from March 2006 to December 2012, with a seasonal distribution from August to November. There were a total of 65 notified imported DF cases from 20 countries, with forty-six cases (70.8%) imported from Southeast Asia.
The model showed that local DF cases were positively associated with mosquito density, imported cases, temperature, precipitation, vapour pressure and minimum relative humidity, whilst being negatively associated with air pressure, with different time lags. Imported DF cases and mosquito
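
The PCA-then-Poisson-regression pipeline described in this abstract can be sketched end to end. The data below are synthetic stand-ins for the Guangzhou covariates, and the Poisson fit uses plain iteratively reweighted least squares (IRLS) rather than a statistics package:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly covariates (assumed stand-ins, not the study's data):
# temperature, a humidity variable collinear with it, and rainfall
n = 84
temp = rng.normal(25, 4, n)
humid = 0.6 * temp + rng.normal(0, 2, n)   # deliberately collinear
rain = rng.normal(100, 30, n)
X = np.column_stack([temp, humid, rain])

# PCA on standardized covariates to remove multicollinearity
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
scores = Z @ Vt.T[:, :2]                   # first two principal components

# Poisson regression of case counts on the PC scores, fitted by IRLS
y = rng.poisson(np.exp(0.8 + 0.5 * scores[:, 0]))   # simulated counts
A = np.column_stack([np.ones(n), scores])
beta = np.zeros(A.shape[1])
for _ in range(25):
    mu = np.exp(A @ beta)                  # fitted Poisson means
    W = mu                                 # Poisson working weights
    # Newton step: beta += (A' W A)^-1 A'(y - mu)
    beta += np.linalg.solve(A.T @ (A * W[:, None]), A.T @ (y - mu))
```

Lagged covariates, the Breteau Index and imported-case terms would enter the design matrix the same way as additional columns.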

  19. Intratumor microvessel density in biopsy specimens predicts local response of hypopharyngeal cancer to radiotherapy

    International Nuclear Information System (INIS)

    Zhang, Shi-Chuan; Miyamoto, Shin-ichi; Hasebe, Takahiro; Ishii, Genichiro; Ochiai, Atsushi; Kamijo, Tomoyuki; Hayashi, Ryuichi; Fukayama, Masashi

    2003-01-01

    The aim of this retrospective study was to identify reliable predictive factors for local control of hypopharyngeal cancer (HPC) treated by radiotherapy. A cohort of 38 patients with HPC treated by radical radiotherapy at the National Cancer Center Hospital East between 1992 and 1999 were selected as subjects for the present study. Paraffin-embedded pre-therapy biopsy specimens from these patients were used for immunostaining to evaluate the relationships between local tumor control and expression of the following previously reported predictive factors for local recurrence of head and neck cancer treated by radiotherapy: Ki-67, Cyclin D1, CDC25B, VEGF, p53, Bax and Bcl-2. The predictive power of microvessel density (MVD) in biopsy specimens and of clinicopathologic factors (age, gender and clinical tumor-node-metastasis stage) was also statistically analyzed. Twenty-five patients developed tumor recurrence at the primary site. Univariate analysis indicated better local control of tumors with high microvessel density [MVD≥median (39 vessels/field)] than with low MVD (< median, P=0.042). There were no significant associations between local control and expression of Ki-67 (P=0.467), Bcl-2 (P=0.127), Bax (P=0.242), p53 (P=0.262), Cyclin D1 (P=0.245), CDC25B (P=0.511) or VEGF (P=0.496). Clinicopathologic factors were also demonstrated to have no significant influence on local control (age, P=0.974; gender, P=0.372; T factor, P=0.602; N factor, P=0.530; Stage, P=0.499). MVD in biopsy specimens was closely correlated with local control of HPC treated by radiotherapy. (author)

  20. Testing the predictive power of nuclear mass models

    International Nuclear Information System (INIS)

    Mendoza-Temis, J.; Morales, I.; Barea, J.; Frank, A.; Hirsch, J.G.; Vieyra, J.C. Lopez; Van Isacker, P.; Velazquez, V.

    2008-01-01

    A number of tests are introduced which probe the ability of nuclear mass models to extrapolate. Three models are analyzed in detail: the liquid drop model, the liquid drop model plus empirical shell corrections and the Duflo-Zuker mass formula. If predicted nuclei are close to the fitted ones, average errors in predicted and fitted masses are similar. However, the challenge of predicting nuclear masses in a region stabilized by shell effects (e.g., the lead region) is far more difficult. The Duflo-Zuker mass formula emerges as a powerful predictive tool
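
For reference, the liquid drop model tested above is the semi-empirical (Bethe-Weizsäcker) mass formula, whose smooth binding-energy part (the empirical shell corrections of the second model are added on top of this) reads:

```latex
B(Z,A) = a_V A - a_S A^{2/3} - a_C \frac{Z(Z-1)}{A^{1/3}}
         - a_{\mathrm{sym}}\frac{(A-2Z)^2}{A} + \delta(Z,A)
```

with volume, surface, Coulomb and symmetry terms, and δ the pairing term (positive for even-even nuclei, zero for odd-A, negative for odd-odd).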

  1. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a r...

  2. Foundation Settlement Prediction Based on a Novel NGM Model

    Directory of Open Access Journals (Sweden)

    Peng-Yu Chen

    2014-01-01

Full Text Available Prediction of foundation or subgrade settlement is very important during engineering construction. According to the fact that there are lots of settlement-time sequences with a nonhomogeneous index trend, a novel grey forecasting model called the NGM(1,1,k,c) model is proposed in this paper. With an optimized whitenization differential equation, the proposed NGM(1,1,k,c) model has the property of white exponential law coincidence and can predict a pure nonhomogeneous index sequence precisely. We used two case studies to verify the predictive effect of the NGM(1,1,k,c) model for settlement prediction. The results show that this model can achieve excellent prediction accuracy; thus, the model is quite suitable for simulation and prediction of approximately nonhomogeneous index sequences and has excellent application value in settlement prediction.
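
As background for the NGM(1,1,k,c) variant, the baseline GM(1,1) grey model can be sketched in a few lines: accumulate the series (AGO), fit the whitenization equation by least squares, solve it, and difference back. This is the classic model only; the paper's k and c extensions are not reproduced here.

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Classic GM(1,1) grey forecast; returns fitted values followed by
    `steps` forecasts. (The NGM(1,1,k,c) of the paper adds extra terms.)"""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                         # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])              # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # x0(k) + a*z1(k) = b
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # whitenization solution
    return np.concatenate([[x1_hat[0]], np.diff(x1_hat)])  # inverse AGO

# A homogeneous index (geometric) sequence is recovered almost exactly:
seq = [2.0, 2.2, 2.42, 2.662]          # growth factor 1.1
pred = gm11_forecast(seq, steps=1)     # last entry is the one-step forecast
```

For a purely exponential series the one-step forecast lands within a fraction of a percent of the true next value (2.9282 here); the NGM variant exists precisely because the plain model degrades on nonhomogeneous index sequences.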

  3. Quantifying confidence in density functional theory predictions of magnetic ground states

    Science.gov (United States)

    Houchins, Gregory; Viswanathan, Venkatasubramanian

    2017-10-01

Density functional theory (DFT) simulations, at the generalized gradient approximation (GGA) level, are being routinely used for material discovery based on high-throughput descriptor-based searches. The success of descriptor-based material design relies on eliminating bad candidates and keeping good candidates for further investigation. While DFT has been widely successful at the former, oftentimes good candidates are lost due to the uncertainty associated with the DFT-predicted material properties. Uncertainty associated with DFT predictions has gained prominence and has led to the development of exchange correlation functionals that have built-in error estimation capability. In this work, we demonstrate the use of built-in error estimation capabilities within the BEEF-vdW exchange correlation functional for quantifying the uncertainty associated with the magnetic ground state of solids. We demonstrate this approach by calculating the uncertainty estimate for the energy difference between the different magnetic states of solids and compare them against a range of GGA exchange correlation functionals as is done in many first-principles calculations of materials. We show that this estimate reasonably bounds the range of values obtained with the different GGA functionals. The estimate is determined as a postprocessing step and thus provides a computationally robust and systematic approach to estimating uncertainty associated with predictions of magnetic ground states. We define a confidence value (c-value) that incorporates all calculated magnetic states in order to quantify the concurrence of the prediction at the GGA level and argue that predictions of magnetic ground states from GGA level DFT is incomplete without an accompanying c-value. We demonstrate the utility of this method using a case study of Li-ion and Na-ion cathode materials and the c-value metric correctly identifies that GGA-level DFT will have low predictability for NaFePO4F. Further, there
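
The c-value idea can be illustrated with a toy ensemble. The definition below (the fraction of ensemble members agreeing with the best-estimate ground state) and the Gaussian pseudo-ensemble are illustrative assumptions, not the paper's exact prescription or data.

```python
import numpy as np

# BEEF-vdW yields an ensemble of energies per structure; here we fake one
# with Gaussians for two magnetic states of a hypothetical solid.
rng = np.random.default_rng(0)
e_fm  = rng.normal(-10.00, 0.05, 2000)   # ensemble energies, FM state (eV)
e_afm = rng.normal(-10.08, 0.05, 2000)   # ensemble energies, AFM state (eV)

# Best estimate = lower mean ensemble energy.
best = "AFM" if e_afm.mean() < e_fm.mean() else "FM"

# Toy c-value: fraction of ensemble members agreeing with the best estimate.
c_value = float(np.mean((e_afm < e_fm) == (best == "AFM")))
```

A c-value near 1 means the ensemble overwhelmingly concurs; a value near 0.5 flags a magnetic ground-state prediction that GGA-level DFT cannot resolve.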

  4. Electrostatic ion thrusters - towards predictive modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kalentev, O.; Matyash, K.; Duras, J.; Lueskow, K.F.; Schneider, R. [Ernst-Moritz-Arndt Universitaet Greifswald, D-17489 (Germany); Koch, N. [Technische Hochschule Nuernberg Georg Simon Ohm, Kesslerplatz 12, D-90489 Nuernberg (Germany); Schirra, M. [Thales Electronic Systems GmbH, Soeflinger Strasse 100, D-89077 Ulm (Germany)

    2014-02-15

    The development of electrostatic ion thrusters so far has mainly been based on empirical and qualitative know-how, and on evolutionary iteration steps. This resulted in considerable effort regarding prototype design, construction and testing and therefore in significant development and qualification costs and high time demands. For future developments it is anticipated to implement simulation tools which allow for quantitative prediction of ion thruster performance, long-term behavior and space craft interaction prior to hardware design and construction. Based on integrated numerical models combining self-consistent kinetic plasma models with plasma-wall interaction modules a new quality in the description of electrostatic thrusters can be reached. These open the perspective for predictive modeling in this field. This paper reviews the application of a set of predictive numerical modeling tools on an ion thruster model of the HEMP-T (High Efficiency Multi-stage Plasma Thruster) type patented by Thales Electron Devices GmbH. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  5. Impact of obesity on the predictive accuracy of prostate-specific antigen density and prostate-specific antigen in native Korean men undergoing prostate biopsy.

    Science.gov (United States)

    Kim, Jae Heon; Doo, Seung Whan; Yang, Won Jae; Lee, Kwang Woo; Lee, Chang Ho; Song, Yun Seob; Jeon, Yoon Su; Kim, Min Eui; Kwon, Soon-Sun

    2014-10-01

To evaluate the impact of obesity on the biopsy detection of prostate cancer. We retrospectively reviewed data of 1182 consecutive Korean patients (≥50 years) with serum prostate-specific antigen levels of 3-10 ng/mL who underwent initial extended 12-cores biopsy from September 2009 to March 2013. Patients who took medications that were likely to influence the prostate-specific antigen level were excluded. Receiver operating characteristic curves were plotted for prostate-specific antigen and prostate-specific antigen density predicting cancer status among non-obese and obese men. A total of 1062 patients (mean age 67.1 years) were enrolled in the analysis. A total of 230 men (21.7%) had a positive biopsy. In the overall study sample, the areas under the receiver operating characteristic curve of serum prostate-specific antigen for predicting prostate cancer on biopsy were 0.584 and 0.633 for non-obese and obese men, respectively (P = 0.234). However, the area under the curve for prostate-specific antigen density in predicting cancer status showed a significant difference (non-obese 0.696, obese 0.784; P = 0.017). There seems to be a significant difference in the ability of prostate-specific antigen density to predict biopsy results between non-obese and obese men. Obesity positively influenced the overall ability of prostate-specific antigen density to predict prostate cancer. © 2014 The Japanese Urological Association.
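
The comparison above rests on the area under the ROC curve, which can be computed directly from the rank (Mann-Whitney) statistic. The toy scores below are invented to mirror the qualitative finding (PSA density separates the groups better than raw PSA), not the study's data.

```python
import numpy as np

def auc(scores, labels):
    """AUC via the rank-sum identity: the probability that a randomly
    chosen positive case scores higher than a randomly chosen negative."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()   # ties count half
    return wins / (len(pos) * len(neg))

# Hypothetical cases: 1 = positive biopsy.
labels      = [0, 0, 0, 1, 1, 1]
psa         = [4.1, 6.0, 5.2, 4.8, 6.5, 5.9]            # ng/mL
psa_density = [0.08, 0.11, 0.10, 0.13, 0.19, 0.16]      # ng/mL/cc
```

Here `auc(psa, labels)` gives 0.667 while `auc(psa_density, labels)` gives 1.0, the same ordering the study reports (0.633 vs 0.784 in obese men).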

  6. Predictive simulations of radio frequency heated plasmas of Tore Supra using the Multi-Mode model

    International Nuclear Information System (INIS)

    Voitsekhovitch, Irina; Bateman, Glenn; Kritz, Arnold H.; Pankin, Alexei

    2002-01-01

    Multichannel integrated predictive simulations using the Multi-Mode transport model are carried out for radio frequency heated Tore Supra tokamak discharges in which helium is the primary ion component. Lower hybrid heated discharges in which the total current is driven noninductively [X. Litaudon et al., Plasma Phys. Controlled Fusion 43, 677 (2001)] and a discharge with ion cyclotron radio frequency heating of the hydrogen minority ions [G. T. Hoang et al., Nucl. Fusion 38, 117 (1998)] are simulated. The simulations of these discharges represent the first test of the Multi-Mode model in helium plasmas with dominant electron heating. Also for the first time, the particle transport in Tore Supra discharges is computed and the density profiles are predicted self-consistently with other transport channels. It is found in these simulations that the anomalous transport driven by trapped electron mode turbulence is dominant compared to the transport driven by the ion temperature gradient turbulence. The feature of the Multi-Mode model to calculate the impurity transport self-consistently with other transport channels is used in this study to predict the influence of carbon impurity influx on the discharge evolution

  7. Predictive validation of an influenza spread model.

    Directory of Open Access Journals (Sweden)

    Ayaz Hyder

Full Text Available BACKGROUND: Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. METHODS AND FINDINGS: We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998-1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks earlier with reasonable reliability, and depended on the method of forecasting (static or dynamic). CONCLUSIONS: Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve
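
Two of the deviation metrics this kind of validation relies on, peak-week offset and relative error in peak intensity between the expected and observed epidemic curves, can be sketched as follows; the weekly counts are invented for illustration.

```python
import numpy as np

# Hypothetical weekly laboratory-confirmed case counts (not the study's data).
weeks     = np.arange(1, 11)
observed  = np.array([2, 5, 12, 30, 55, 70, 50, 25, 10, 4], float)
simulated = np.array([3, 7, 18, 40, 65, 60, 40, 18, 7, 3], float)

# How many weeks early (negative) or late (positive) the model peaks:
peak_offset = int(weeks[simulated.argmax()] - weeks[observed.argmax()])

# Relative error in absolute peak intensity:
intensity_err = float(abs(simulated.max() - observed.max()) / observed.max())
```

Here the simulated curve peaks one week early (`peak_offset == -1`) and underestimates peak intensity by about 7%; the study aggregates deviations of this kind across the 1999-2006 seasons.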

  8. Predictive Validation of an Influenza Spread Model

    Science.gov (United States)

    Hyder, Ayaz; Buckeridge, David L.; Leung, Brian

    2013-01-01

Background Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. Methods and Findings We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998–1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks earlier with reasonable reliability, and depended on the method of forecasting (static or dynamic). Conclusions Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve their predictive

  9. Integrating geophysics and hydrology for reducing the uncertainty of groundwater model predictions and improved prediction performance

    DEFF Research Database (Denmark)

    Christensen, Nikolaj Kruse; Christensen, Steen; Ferre, Ty

A major purpose of groundwater modeling is to help decision-makers in efforts to manage the natural environment. Increasingly, it is recognized that both the predictions of interest and their associated uncertainties should be quantified to support robust decision making. In particular, decision... ...the integration of geophysical data in the construction of a groundwater model increases the prediction performance. We suggest that modelers should perform a hydrogeophysical “test-bench” analysis of the likely value of geophysics data for improving groundwater model prediction performance before actually... ...and the resulting predictions can be compared with predictions from the ‘true’ model. By performing this analysis we expect to give the modeler insight into how the uncertainty of model-based prediction can be reduced.

  10. Predictive Surface Complexation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  11. Predicting error in detecting mammographic masses among radiology trainees using statistical models based on BI-RADS features

    Energy Technology Data Exchange (ETDEWEB)

    Grimm, Lars J., E-mail: Lars.grimm@duke.edu; Ghate, Sujata V.; Yoon, Sora C.; Kim, Connie [Department of Radiology, Duke University Medical Center, Box 3808, Durham, North Carolina 27710 (United States); Kuzmiak, Cherie M. [Department of Radiology, University of North Carolina School of Medicine, 2006 Old Clinic, CB No. 7510, Chapel Hill, North Carolina 27599 (United States); Mazurowski, Maciej A. [Duke University Medical Center, Box 2731 Medical Center, Durham, North Carolina 27710 (United States)

    2014-03-15

Purpose: The purpose of this study is to explore Breast Imaging-Reporting and Data System (BI-RADS) features as predictors of individual errors made by trainees when detecting masses in mammograms. Methods: Ten radiology trainees and three expert breast imagers reviewed 100 mammograms comprising bilateral medial lateral oblique and craniocaudal views on a research workstation. The cases consisted of normal and biopsy-proven benign and malignant masses. For cases with actionable abnormalities, the experts recorded breast (density and axillary lymph nodes) and mass (shape, margin, and density) features according to the BI-RADS lexicon, as well as the abnormality location (depth and clock face). For each trainee, a user-specific multivariate model was constructed to predict the trainee's likelihood of error based on BI-RADS features. The performance of the models was assessed using the area under the receiver operating characteristic curve (AUC). Results: Despite the variability in errors between different trainees, the individual models were able to predict the likelihood of error for the trainees with a mean AUC of 0.611 (range: 0.502–0.739; 95% Confidence Interval: 0.543–0.680; p < 0.002). Conclusions: Patterns in detection errors for mammographic masses made by radiology trainees can be modeled using BI-RADS features. These findings may have potential implications for the development of future educational materials that are personalized to individual trainees.

  12. Predicting error in detecting mammographic masses among radiology trainees using statistical models based on BI-RADS features.

    Science.gov (United States)

    Grimm, Lars J; Ghate, Sujata V; Yoon, Sora C; Kuzmiak, Cherie M; Kim, Connie; Mazurowski, Maciej A

    2014-03-01

The purpose of this study is to explore Breast Imaging-Reporting and Data System (BI-RADS) features as predictors of individual errors made by trainees when detecting masses in mammograms. Ten radiology trainees and three expert breast imagers reviewed 100 mammograms comprising bilateral medial lateral oblique and craniocaudal views on a research workstation. The cases consisted of normal and biopsy-proven benign and malignant masses. For cases with actionable abnormalities, the experts recorded breast (density and axillary lymph nodes) and mass (shape, margin, and density) features according to the BI-RADS lexicon, as well as the abnormality location (depth and clock face). For each trainee, a user-specific multivariate model was constructed to predict the trainee's likelihood of error based on BI-RADS features. The performance of the models was assessed using the area under the receiver operating characteristic curve (AUC). Despite the variability in errors between different trainees, the individual models were able to predict the likelihood of error for the trainees with a mean AUC of 0.611 (range: 0.502-0.739; 95% Confidence Interval: 0.543-0.680; p < 0.002). Patterns in detection errors for mammographic masses made by radiology trainees can be modeled using BI-RADS features. These findings may have potential implications for the development of future educational materials that are personalized to individual trainees.

  13. Predicting error in detecting mammographic masses among radiology trainees using statistical models based on BI-RADS features

    International Nuclear Information System (INIS)

    Grimm, Lars J.; Ghate, Sujata V.; Yoon, Sora C.; Kim, Connie; Kuzmiak, Cherie M.; Mazurowski, Maciej A.

    2014-01-01

Purpose: The purpose of this study is to explore Breast Imaging-Reporting and Data System (BI-RADS) features as predictors of individual errors made by trainees when detecting masses in mammograms. Methods: Ten radiology trainees and three expert breast imagers reviewed 100 mammograms comprising bilateral medial lateral oblique and craniocaudal views on a research workstation. The cases consisted of normal and biopsy-proven benign and malignant masses. For cases with actionable abnormalities, the experts recorded breast (density and axillary lymph nodes) and mass (shape, margin, and density) features according to the BI-RADS lexicon, as well as the abnormality location (depth and clock face). For each trainee, a user-specific multivariate model was constructed to predict the trainee's likelihood of error based on BI-RADS features. The performance of the models was assessed using the area under the receiver operating characteristic curve (AUC). Results: Despite the variability in errors between different trainees, the individual models were able to predict the likelihood of error for the trainees with a mean AUC of 0.611 (range: 0.502–0.739; 95% Confidence Interval: 0.543–0.680; p < 0.002). Conclusions: Patterns in detection errors for mammographic masses made by radiology trainees can be modeled using BI-RADS features. These findings may have potential implications for the development of future educational materials that are personalized to individual trainees

  14. Unexpected storm-time nightside plasmaspheric density enhancement at low L shell

    Science.gov (United States)

    Chu, X.; Bortnik, J.; Denton, R. E.; Yue, C.

    2017-12-01

We have developed a three-dimensional dynamic electron density (DEN3D) model in the inner magnetosphere using a neural network approach. The DEN3D model can provide the spatiotemporal distribution of the electron density at locations and times where spacecraft observations are not available. Given DEN3D's good performance in predicting the structure and dynamic evolution of the plasma density, the salient features of the DEN3D model can be used to gain further insight into the physics. For instance, the DEN3D model can be used to find unusual phenomena that are difficult to detect in observations or simulations. We report, for the first time, an unexpected plasmaspheric density increase at low L shell regions on the nightside during the main phase of a moderate storm during 12-16 October 2004, as opposed to the expected density decrease due to storm-time plasmaspheric erosion. The unexpected density increase was first discovered in the modeled electron density distribution using the DEN3D model, and then validated using in-situ density measurements obtained from the IMAGE satellite. The density increase was likely caused by increased earthward plasma transport across the magnetic field due to enhanced nightside ExB drift, which coincided with enhanced solar wind electric field and substorm activity. This is consistent with the results of the physics-based SAMI3 simulation model, which show enhanced earthward plasma transport and an electron density increase at low L shells during the storm main phase.

  15. NOx PREDICTION FOR FBC BOILERS USING EMPIRICAL MODELS

    Directory of Open Access Journals (Sweden)

    Jiří Štefanica

    2014-02-01

    Full Text Available Reliable prediction of NOx emissions can provide useful information for boiler design and fuel selection. Recently used kinetic prediction models for FBC boilers are overly complex and require large computing capacity. Even so, there are many uncertainties in the case of FBC boilers. An empirical modeling approach for NOx prediction has been used exclusively for PCC boilers. No reference is available for modifying this method for FBC conditions. This paper presents possible advantages of empirical modeling based prediction of NOx emissions for FBC boilers, together with a discussion of its limitations. Empirical models are reviewed, and are applied to operation data from FBC boilers used for combusting Czech lignite coal or coal-biomass mixtures. Modifications to the model are proposed in accordance with theoretical knowledge and prediction accuracy.
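
The empirical-modeling approach contrasted here with kinetic models amounts to regressing measured NOx on operating variables. A minimal least-squares sketch, with invented FBC operating data and no claim to the paper's actual correlations:

```python
import numpy as np

# Hypothetical FBC operating data (not from the paper).
temp_C = np.array([820., 840., 860., 880., 900.])   # bed temperature
o2_pct = np.array([3.0, 4.0, 3.5, 5.0, 4.5])        # excess O2 at stack
nox    = np.array([150., 180., 195., 230., 245.])   # NOx emission, mg/m3

# Fit NOx = c0 + c1*T + c2*O2 by ordinary least squares.
A = np.column_stack([np.ones_like(temp_C), temp_C, o2_pct])
coef, *_ = np.linalg.lstsq(A, nox, rcond=None)
fitted = A @ coef
```

The toy data were constructed to follow NOx = T - 700 + 10·O2 exactly, so the fit recovers those coefficients; with real boiler data the residuals (and the choice of regressors) are where the modeling effort lies.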

  16. Model for the evolution of network dislocation density in irradiated metals

    International Nuclear Information System (INIS)

    Garner, F.A.; Wolfer, W.G.

    1982-01-01

    It is a well-known fact that the total dislocation density that evolves in irradiated metals is a strong function of irradiation temperature. The dislocation density comprises two components, however, and only one of these (Frank loops) retains its temperature dependence at high fluence. The network dislocation density approaches a saturation level which is relatively insensitive to starting microstructure, stress, irradiation temperature, displacement rate and helium level. The latter statement is supported in this paper by a review of published microstructural data. A model has been developed to explain the insensitivity to many variables of the saturation network dislocation density in irradiated metals. This model also explains how the rate of approach to saturation can be sensitive to displacement rate and temperature while the saturation level itself is not dependent on temperature
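
The saturation behavior described can be illustrated with a generic rate equation (an illustration of the mechanism, not the paper's specific model): if network dislocations are generated at a rate g (e.g. by loop unfaulting) and annihilate pairwise by climb,

```latex
\frac{d\rho_N}{dt} = g - k\,\rho_N^{2}
\qquad\Longrightarrow\qquad
\rho_N \;\longrightarrow\; \rho_{\mathrm{sat}} = \sqrt{g/k}\,,
```

so if g and k share the same dependence on displacement rate and temperature, the saturation level √(g/k) is insensitive to both, while the rate of approach to saturation is not.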

  17. Predicting Multicomponent Adsorption Isotherms in Open-Metal Site Materials Using Force Field Calculations Based on Energy Decomposed Density Functional Theory.

    Science.gov (United States)

    Heinen, Jurn; Burtch, Nicholas C; Walton, Krista S; Fonseca Guerra, Célia; Dubbeldam, David

    2016-12-12

For the design of adsorptive-separation units, knowledge is required of the multicomponent adsorption behavior. Ideal adsorbed solution theory (IAST) breaks down for olefin adsorption in open-metal site (OMS) materials due to non-ideal donor-acceptor interactions. Using a density-functional-theory-based energy decomposition scheme, we develop a physically justifiable classical force field that incorporates the missing orbital interactions using an appropriate functional form. Our first-principles derived force field shows greatly improved quantitative agreement with the inflection points, initial uptake, saturation capacity, and enthalpies of adsorption obtained from our in-house adsorption experiments. While IAST fails to make accurate predictions, our improved force field model is able to correctly predict the multicomponent behavior. Our approach is also transferable to other OMS structures, allowing the accurate study of their separation performances for olefins/paraffins and further mixtures involving complex donor-acceptor interactions. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Variation of level density parameter with angular momentum in 119Sb

    International Nuclear Information System (INIS)

    Aggarwal, Mamta; Kailas, S.

    2015-01-01

Nuclear level density (NLD), a basic ingredient of the statistical model, has been a subject of interest for several decades, as it plays an important role in the understanding of a wide variety of nuclear reactions. There have been various efforts towards the precise determination of NLD and the study of its dependence on excitation energy and angular momentum, as it is crucial in the determination of cross-sections. Here we report results of theoretical calculations in a microscopic framework to understand the experimental results on the inverse level density parameter (k) extracted for different angular momentum regions of 119Sb, corresponding to different γ-ray multiplicities, by comparing the experimental neutron energy spectra with statistical model predictions, where an increase in the level density with increasing angular momentum is predicted. The dependence of NLD and neutron emission spectra on temperature and spin has been studied in our earlier works, where the influence of structural transitions due to angular momentum and temperature on the level density of states and the neutron emission probability was shown
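
For concreteness, the Fermi-gas form commonly used in such statistical-model analyses (a standard textbook expression, not the paper's microscopic model) relates the level density at excitation energy U to the level density parameter a and the inverse level density parameter k:

```latex
\rho(U) \;\simeq\; \frac{\sqrt{\pi}}{12\,a^{1/4}\,U^{5/4}}\; e^{2\sqrt{aU}},
\qquad a = \frac{A}{k},
```

where A is the mass number; extracting k from neutron evaporation spectra at different γ-ray multiplicities is what probes the angular momentum dependence discussed above.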

  19. Prediction of pipeline corrosion rate based on grey Markov models

    International Nuclear Information System (INIS)

    Chen Yonghong; Zhang Dafa; Peng Guichu; Wang Yuemin

    2009-01-01

Based on a model combining the grey model and the Markov model, prediction of the corrosion rate of nuclear power pipelines was studied. Work was done to improve the grey model, and an optimized unbiased grey model was obtained. This new model was used to predict the trend of the corrosion rate, and the Markov model was used to predict the residual errors. In order to improve the prediction precision, a rolling operation method was used in these prediction processes. The results indicate that the improvement to the grey model is effective, that the prediction precision of the new model combining the optimized unbiased grey model and the Markov model is better, and that the use of the rolling operation method may improve the prediction precision further. (authors)
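
The Markov half of a grey-Markov scheme can be sketched independently of the grey model: discretize the residuals of the trend forecast into states, estimate a transition matrix, and add the expected next-state residual back onto the trend value. The two-state split, bin edges, and data below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def markov_residual_correction(residuals, edges, centers):
    """Expected next-step residual from a Markov chain over residual
    states; the grey model supplies the trend forecast separately."""
    states = np.digitize(residuals, edges)          # state index per step
    n = len(centers)
    T = np.zeros((n, n))
    for s, t in zip(states[:-1], states[1:]):       # count transitions
        T[s, t] += 1
    rows = T.sum(axis=1, keepdims=True)
    T = np.divide(T, rows, out=np.zeros_like(T), where=rows > 0)
    return float(T[states[-1]] @ centers)           # expected residual

# Toy residuals that alternate between under- and over-prediction:
residuals = np.array([-0.9, 0.8, -1.1, 1.0, -0.8, 0.9, -1.0])
edges   = np.array([0.0])               # two states: negative / positive
centers = np.array([-1.0, 1.0])         # representative residual per state
corr = markov_residual_correction(residuals, edges, centers)

trend_forecast = 10.0                   # hypothetical grey-model trend value
corrected = trend_forecast + corr       # Markov-corrected forecast
```

With the alternating residuals above, the chain has learned that a negative residual is always followed by a positive one, so `corr` is +1.0 and the corrected forecast becomes 11.0; the rolling operation in the paper simply refits both parts as each new observation arrives.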

  20. Sweat loss prediction using a multi-model approach.

    Science.gov (United States)

    Xu, Xiaojiang; Santee, William R

    2011-07-01

    A new multi-model approach (MMA) for sweat loss prediction is proposed to improve prediction accuracy. MMA was computed as the average of sweat loss predicted by two existing thermoregulation models: i.e., the rational model SCENARIO and the empirical model Heat Strain Decision Aid (HSDA). Three independent physiological datasets, a total of 44 trials, were used to compare predictions by MMA, SCENARIO, and HSDA. The observed sweat losses were collected under different combinations of uniform ensembles, environmental conditions (15-40°C, RH 25-75%), and exercise intensities (250-600 W). Root mean square deviation (RMSD), residual plots, and paired t tests were used to compare predictions with observations. Overall, MMA reduced RMSD by 30-39% in comparison with either SCENARIO or HSDA, and increased the prediction accuracy to 66% from 34% or 55%. Of the MMA predictions, 70% fell within the range of mean observed value ± SD, while only 43% of SCENARIO and 50% of HSDA predictions fell within the same range. Paired t tests showed that differences between observations and MMA predictions were not significant, but differences between observations and SCENARIO or HSDA predictions were significantly different for two datasets. Thus, MMA predicted sweat loss more accurately than either of the two single models for the three datasets used. Future work will be to evaluate MMA using additional physiological data to expand the scope of populations and conditions.
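
The multi-model average itself is just the mean of the two models' predictions, evaluated by RMSD against observations. The numbers below are invented to mirror the mechanism (one model biased high, the other low), not the study's data.

```python
import numpy as np

def rmsd(pred, obs):
    """Root mean square deviation between predictions and observations."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

# Hypothetical sweat-loss observations (g) and two biased model outputs.
obs      = np.array([400., 520., 610., 480.])
scenario = np.array([450., 560., 660., 520.])   # rational model, biased high
hsda     = np.array([360., 470., 570., 450.])   # empirical model, biased low

mma = (scenario + hsda) / 2.0                   # multi-model average
```

Because the two biases partially cancel, `rmsd(mma, obs)` is far below either single model's RMSD, which is the mechanism behind the 30-39% reduction reported above.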

  1. Increasing the Accuracy of Mapping Urban Forest Carbon Density by Combining Spatial Modeling and Spectral Unmixing Analysis

    Directory of Open Access Journals (Sweden)

    Hua Sun

    2015-11-01

Full Text Available Accurately mapping urban vegetation carbon density is challenging because of complex landscapes and mixed pixels. In this study, a novel methodology was proposed that combines a linear spectral unmixing analysis (LSUA) with a linear stepwise regression (LSR), a logistic model-based stepwise regression (LMSR) and k-Nearest Neighbors (kNN), to map the forest carbon density of Shenzhen City of China, using Landsat 8 imagery and sample plot data collected in 2014. The independent variables that contributed to statistically significantly improving the fit of a model to the data and reducing the sum of squared errors were first selected from a total of 284 spectral variables derived from the image bands. The vegetation fraction from LSUA was then added as an independent variable. The results obtained using cross-validation showed that: (1) compared to the methods without the vegetation information, adding the vegetation fraction increased the accuracy of mapping carbon density by 1%–9.3%; (2) as the observed values increased, the LSR and kNN residuals showed overestimates and underestimates for the smaller and larger observations, respectively, while LMSR improved the systematic over- and underestimations; (3) LSR resulted in illogically negative and unreasonably large estimates, while kNN produced the greatest values of root mean square error (RMSE). The results indicate that combining the spatial modeling method LMSR and the spectral unmixing analysis LSUA, coupled with Landsat imagery, is most promising for increasing the accuracy of urban forest carbon density maps. In addition, this method has considerable potential for accurate, rapid and nondestructive prediction of urban and peri-urban forest carbon stocks with an acceptable level of error and low cost.
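
The kNN component combined with the unmixed vegetation fraction can be sketched as plain k-nearest-neighbour regression with the fraction as an extra feature. The plot data and feature choices below are hypothetical.

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k=3):
    """Plain k-nearest-neighbour regression: average the targets of the
    k closest training plots in feature space."""
    X_train = np.asarray(X_train, float)
    y_train = np.asarray(y_train, float)
    preds = []
    for q in np.asarray(X_query, float):
        d = np.linalg.norm(X_train - q, axis=1)     # Euclidean distances
        idx = np.argsort(d)[:k]                     # k nearest plots
        preds.append(float(y_train[idx].mean()))
    return np.array(preds)

# Hypothetical plots: columns = (spectral index, LSUA vegetation fraction);
# target = carbon density (Mg C/ha).
X = [[0.20, 0.10], [0.30, 0.20], [0.60, 0.70], [0.70, 0.80], [0.65, 0.75]]
y = [10.0, 15.0, 60.0, 70.0, 65.0]

pred = knn_predict(X, y, [[0.68, 0.78]], k=3)   # a densely vegetated pixel
```

The query pixel's three nearest neighbours are the three high-carbon plots, so the estimate is their mean (65.0); adding the vegetation fraction as a feature is exactly what pulls mixed pixels toward the right neighbourhood.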

  2. Model dependence of isospin sensitive observables at high densities

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Wen-Mei [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); University of Chinese Academy of Sciences, Beijing 100049 (China); School of Science, Huzhou Teachers College, Huzhou 313000 (China); Yong, Gao-Chan, E-mail: yonggaochan@impcas.ac.cn [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190 (China); Wang, Yongjia [School of Science, Huzhou Teachers College, Huzhou 313000 (China); School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000 (China); Li, Qingfeng [School of Science, Huzhou Teachers College, Huzhou 313000 (China); Zhang, Hongfei [School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000 (China); State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190 (China); Zuo, Wei [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190 (China)

    2013-10-07

    Within two different frameworks of isospin-dependent transport models, i.e., the Boltzmann–Uehling–Uhlenbeck (IBUU04) and Ultrarelativistic Quantum Molecular Dynamics (UrQMD) transport models, sensitive probes of the nuclear symmetry energy are simulated and compared. It is shown that the neutron-to-proton ratio of free nucleons, the π⁻/π⁺ ratio, as well as the isospin-sensitive transverse and elliptic flows given by the two transport models with their “best settings” all show obvious differences. The discrepancy in the isospin-sensitive n/p ratio of free nucleons between the two models originates mainly from the different symmetry potentials used, while the discrepancies in the charged π⁻/π⁺ ratio and the isospin-sensitive flows originate mainly from the different isospin-dependent nucleon–nucleon cross sections. These demonstrations call for more detailed studies of the model inputs (i.e., the density- and momentum-dependent symmetry potential and the in-medium isospin-dependent nucleon–nucleon cross section) of the isospin-dependent transport models used. Studies of the model dependence of isospin-sensitive observables can help nuclear physicists pin down the density dependence of the nuclear symmetry energy through comparisons between experiments and theoretical simulations.

  3. A theoretical-electron-density databank using a model of real and virtual spherical atoms.

    Science.gov (United States)

    Nassour, Ayoub; Domagala, Slawomir; Guillot, Benoit; Leduc, Theo; Lecomte, Claude; Jelsch, Christian

    2017-08-01

    A database describing the electron density of common chemical groups using combinations of real and virtual spherical atoms is proposed, as an alternative to the multipolar atom modelling of the molecular charge density. Theoretical structure factors were computed from periodic density functional theory calculations on 38 crystal structures of small molecules and the charge density was subsequently refined using a density model based on real spherical atoms and additional dummy charges on the covalent bonds and on electron lone-pair sites. The electron-density parameters of real and dummy atoms present in a similar chemical environment were averaged on all the molecules studied to build a database of transferable spherical atoms. Compared with the now-popular databases of transferable multipolar parameters, the spherical charge modelling needs fewer parameters to describe the molecular electron density and can be more easily incorporated in molecular modelling software for the computation of electrostatic properties. The construction method of the database is described. In order to analyse to what extent this modelling method can be used to derive meaningful molecular properties, it has been applied to the urea molecule and to biotin/streptavidin, a protein/ligand complex.

  4. Finding Furfural Hydrogenation Catalysts via Predictive Modelling

    Science.gov (United States)

    Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi

    2010-01-01

    We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes was synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre throughout the reaction. Deuterium-labelling studies showed a secondary isotope effect (kH:kD=1.5). Further mechanistic studies showed that this transfer hydrogenation follows the so-called monohydride pathway. Using these data, we built a predictive model for 13 of the catalysts, based on 2D and 3D molecular descriptors. We tested and validated the model using the remaining five catalysts (cross-validation, R2=0.913). Then, with this model, the conversion and selectivity were predicted for four completely new ruthenium-carbene complexes. These four catalysts were then synthesized and tested. The results were within 3% of the model’s predictions, demonstrating the validity and value of predictive modelling in catalyst optimization. PMID:23193388

  5. Incorporating wind availability into land use regression modelling of air quality in mountainous high-density urban environment.

    Science.gov (United States)

    Shi, Yuan; Lau, Kevin Ka-Lun; Ng, Edward

    2017-08-01

    Urban air quality is an important determinant of the quality of urban life. Land use regression (LUR) modelling of air quality is essential for conducting health impact assessments but is more challenging in a mountainous, high-density urban scenario due to the complexities of the urban environment. In this study, a total of 21 LUR models are developed for seven air pollutants (the gaseous pollutants CO, NO2, NOx, O3 and SO2, and the particulate pollutants PM2.5 and PM10) with reference to three different time periods (summertime, wintertime and the annual average of 5-year long-term hourly monitoring data from the local air quality monitoring network) in Hong Kong. Under the mountainous high-density urban scenario, we improved the traditional LUR modelling method by incorporating wind availability information into LUR modelling based on surface geomorphometric analysis. As a result, 269 independent variables were examined to develop the LUR models using the "ADDRESS" independent variable selection method and stepwise multiple linear regression (MLR). Cross-validation was performed for each resultant model. The results show that wind-related variables are included in most of the resultant models as statistically significant independent variables. Compared with the traditional method, a maximum increase of 20% was achieved in the prediction performance for the annual averaged NO2 concentration level by incorporating wind-related variables into LUR model development. Copyright © 2017 Elsevier Inc. All rights reserved.
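    The stepwise MLR used in LUR studies greedily adds the predictors (such as the wind-related variables) that most improve the fit. A hedged sketch on synthetic data — the predictor layout and signal structure are assumptions, not the study's 269 candidate variables:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic candidate predictors (columns), standing in for e.g.
    # traffic, land-use and wind-availability indices. Illustrative only.
    n = 200
    X = rng.normal(size=(n, 6))
    # True signal depends on columns 0 and 4 plus a little noise.
    y = 2.0 * X[:, 0] + 1.5 * X[:, 4] + rng.normal(scale=0.1, size=n)

    def fit_rss(Xs, y):
        """Residual sum of squares of an ordinary least-squares fit."""
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        resid = y - Xs @ beta
        return resid @ resid

    def forward_stepwise(X, y, max_vars=3):
        """Greedy forward selection: at each step add the predictor
        that most reduces the residual sum of squares."""
        chosen = []
        for _ in range(max_vars):
            remaining = [j for j in range(X.shape[1]) if j not in chosen]
            best = min(remaining, key=lambda j: fit_rss(X[:, chosen + [j]], y))
            chosen.append(best)
        return chosen

    print(forward_stepwise(X, y))  # the informative columns 0 and 4 come first
    ```

    A production LUR workflow would add a stopping criterion (e.g. p-value or adjusted R²) rather than a fixed `max_vars`.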

  6. Clinical Predictive Modeling Development and Deployment through FHIR Web Services.

    Science.gov (United States)

    Khalilia, Mohammed; Choi, Myung; Henderson, Amelia; Iyengar, Sneha; Braunstein, Mark; Sun, Jimeng

    2015-01-01

    Clinical predictive modeling involves two challenging tasks: model development and model deployment. In this paper we demonstrate a software architecture for developing and deploying clinical predictive models using web services via the Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) standard. The services enable model development using electronic health records (EHRs) stored in OMOP CDM databases and model deployment for scoring individual patients through FHIR resources. The MIMIC2 ICU dataset and a synthetic outpatient dataset were transformed into OMOP CDM databases for predictive model development. The resulting predictive models are deployed as FHIR resources, which receive requests containing patient information, perform prediction against the deployed predictive model and respond with prediction scores. To assess the practicality of this approach we evaluated the response and prediction time of the FHIR modeling web services. We found the system to be reasonably fast, with a total response time of about one second per patient prediction.

  7. Goethite surface reactivity: III. Unifying arsenate adsorption behavior through a variable crystal face - Site density model

    Science.gov (United States)

    Salazar-Camacho, Carlos; Villalobos, Mario

    2010-04-01

    We developed a model that describes quantitatively the arsenate adsorption behavior for any goethite preparation as a function of pH and ionic strength, using one basic surface arsenate stoichiometry with two affinity constants. The model combines a face-distribution/crystallographic site density model for goethite with tenets of the Triple Layer and CD-MUSIC surface complexation models, and is self-consistent with goethite's adsorption behavior towards protons, electrolytes and other ions investigated previously. Five different systems of published arsenate adsorption data were used to calibrate the model, spanning a wide range of chemical conditions, including adsorption isotherms at different pH values and adsorption pH-edges at different As(V) loadings, both at different ionic strengths and background electrolytes. Four additional goethite–arsenate systems reported with limited characterization and adsorption data were accurately described by the model developed. The adsorption reaction proposed is: ≡FeOH + ≡SOH + AsO₄³⁻ + H⁺ → ≡FeOAsO₃²⁻…≡SOH + H₂O, where ≡SOH is a surface site adjacent to ≡FeOH; log K = 21.6 ± 0.7 when ≡SOH is another ≡FeOH, and log K = 18.75 ± 0.9 when ≡SOH is ≡Fe₂OH. An additional small contribution from a protonated complex was required to describe data at low pH and very high arsenate loadings. The model considered goethites above 80 m²/g as ideally composed of 70% face (1 0 1) and 30% face (0 0 1), resulting in a site density of 3.125 nm⁻² each for ≡FeOH and ≡Fe₃OH. Below 80 m²/g, surface capacity increases progressively with decreasing area, which was modeled by considering a progressively increasing proportion of faces (0 1 0)/(1 0 1), because face (0 1 0) shows a much higher site density of ≡FeOH groups. Computation of the specific proportion of faces, and thus of the site densities for the three types of crystallographic surface groups present in

  8. Modelling high density phenomena in hydrogen fibre Z-pinches

    International Nuclear Information System (INIS)

    Chittenden, J.P.

    1990-09-01

    The application of hydrogen fibre Z-pinches to the study of the radiative collapse phenomenon is studied computationally. Two areas of difficulty, the formation of a fully ionized pinch from a cryogenic fibre and the processes leading to collapse termination, are addressed in detail. A zero-D model based on the energy equation highlights the importance of particle end losses and changes in the Coulomb logarithm upon collapse initiation and termination. A 1-D Lagrangian resistive MHD code shows the importance of the changing radial profile shapes, particularly in delaying collapse termination. A 1-D, three fluid MHD code is developed to model the ionization of the fibre by thermal conduction from a high temperature surface corona to the cold core. Rate equations for collisional ionization, 3-body recombination and equilibration are solved in tandem with fluid equations for the electrons, ions and neutrals. Continuum lowering is found to assist ionization at the corona-core interface. The high density plasma phenomena responsible for radiative collapse termination are identified as the self-trapping of radiation and free electron degeneracy. A radiation transport model and computational analogues for the effects of degeneracy upon the equation of state, transport coefficients and opacity are implemented in the 1-D, single fluid model. As opacity increases the emergent spectrum is observed to become increasingly Planckian and a fall off in radiative cooling at small radii and low frequencies occurs giving rise to collapse termination. Electron degeneracy terminates radiative collapse by supplementing the radial pressure gradient until the electromagnetic pinch force is balanced. Collapse termination is found to be a hybrid process of opacity and degeneracy effects across a wide range of line densities with opacity dominant at large line densities but with electron degeneracy becoming increasingly important at lower line densities. (author)

  9. Prediction Model of the Outer Radiation Belt Developed by Chungbuk National University

    Directory of Open Access Journals (Sweden)

    Dae-Kyu Shin

    2014-12-01

    Full Text Available The Earth’s outer radiation belt often suffers from drastic changes in the electron fluxes. Since the electrons can be a potential threat to satellites, efforts have long been made to model and predict electron flux variations. In this paper, we describe a prediction model for the outer belt electrons that we have recently developed at Chungbuk National University. The model is based on a one-dimensional radial diffusion equation with observationally determined specifications of a few major ingredients in the following way. First, the boundary condition of the outer edge of the outer belt is specified by empirical functions that we determine using the THEMIS satellite observations of energetic electrons near the boundary. Second, the plasmapause locations are specified by empirical functions that we determine using the electron density data of THEMIS. Third, the model incorporates the local acceleration effect by chorus waves into the one-dimensional radial diffusion equation. We determine this chorus acceleration effect by first obtaining an empirical formula of chorus intensity as a function of drift shell parameter L*, incorporating it as a source term in the one-dimensional diffusion equation, and lastly calibrating the term to best agree with observations of a certain interval. We present a comparison of the model run results with and without the chorus acceleration effect, demonstrating that the chorus effect has been incorporated into the model to a reasonable degree.
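    The core of the model described above is a one-dimensional radial diffusion equation in the drift-shell parameter L. A minimal explicit finite-difference sketch — the power-law diffusion coefficient, grid, time step and boundary values are all illustrative assumptions, not the paper's empirically determined specifications:

    ```python
    import numpy as np

    # Drift-shell grid; D_LL = D0 * L**10 is an assumed power-law placeholder,
    # not the empirically calibrated coefficient of the paper.
    L = np.linspace(3.0, 7.0, 81)
    dL = L[1] - L[0]
    D = 1e-12 * L**10                      # diffusion coefficient (per second)

    f = np.exp(-((L - 4.0) / 0.5) ** 2)    # initial phase-space density
    f[-1] = 1.0                            # fixed outer boundary (empirical in the paper)

    dt = 1.0                               # seconds; satisfies explicit stability
    for _ in range(5000):
        # f_t = L^2 d/dL ( D / L^2 * f_L ), conservative staggered-grid form
        Dm = 0.5 * (D[1:] + D[:-1])
        Lm = 0.5 * (L[1:] + L[:-1])
        flux = Dm / Lm**2 * np.diff(f) / dL
        f[1:-1] += dt * L[1:-1] ** 2 * np.diff(flux) / dL
        f[0], f[-1] = f[1], 1.0            # zero-gradient inner, fixed outer

    print("final peak phase-space density:", round(float(f.max()), 3))
    ```

    The paper's model additionally adds a chorus-wave source term on the right-hand side; here only the pure diffusion part is sketched.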

  10. Stratified turbulent Bunsen flames: flame surface analysis and flame surface density modelling

    Science.gov (United States)

    Ramaekers, W. J. S.; van Oijen, J. A.; de Goey, L. P. H.

    2012-12-01

    In this paper it is investigated whether the Flame Surface Density (FSD) model, developed for turbulent premixed combustion, is also applicable to stratified flames. Direct Numerical Simulations (DNS) of turbulent stratified Bunsen flames have been carried out, using the Flamelet Generated Manifold (FGM) reduction method for reaction kinetics. Before examining the suitability of the FSD model, flame surfaces are characterized in terms of thickness, curvature and stratification. All flames are in the Thin Reaction Zones regime, and the maximum equivalence ratio range covers 0.1⩽φ⩽1.3. For all flames, local flame thicknesses correspond very well to those observed in stretchless, steady premixed flamelets. Extracted curvature radii and mixing length scales are significantly larger than the flame thickness, implying that the stratified flames all burn in a premixed mode. The remaining challenge is accounting for the large variation in (subfilter) mass burning rate. In this contribution, the FSD model is shown to be applicable for Large Eddy Simulations (LES) of stratified flames over the equivalence ratio range 0.1⩽φ⩽1.3. Subfilter mass burning rate variations are taken into account by a subfilter Probability Density Function (PDF) for the mixture fraction, on which the mass burning rate directly depends. A priori analyses point out that for small stratifications (0.4⩽φ⩽1.0), replacing the subfilter PDF (obtained from DNS data) by the corresponding Dirac function is appropriate. Integration of the Dirac function with the mass burning rate m=m(φ) can then adequately model the filtered mass burning rate obtained from filtered DNS data. For a larger stratification (0.1⩽φ⩽1.3) and filter widths up to ten flame thicknesses, a β-function for the subfilter PDF yields substantially better predictions than a Dirac function. Finally, inclusion of a simple algebraic model for the FSD resulted in only small additional deviations from the DNS data.
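    The β-PDF closure above filters the mass burning rate by integrating m(φ) against an assumed-shape subfilter PDF and comparing with the Dirac (no-variance) approximation. A sketch with a purely illustrative m(φ) and subfilter mean/variance (not taken from the DNS):

    ```python
    import math

    def beta_pdf(phi, mean, var, lo=0.0, hi=1.4):
        """Beta PDF on [lo, hi] parameterised by subfilter mean and variance."""
        x = (phi - lo) / (hi - lo)
        if x <= 0.0 or x >= 1.0:
            return 0.0
        m = (mean - lo) / (hi - lo)
        v = var / (hi - lo) ** 2
        a = m * (m * (1 - m) / v - 1)
        b = (1 - m) * (m * (1 - m) / v - 1)
        norm = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
        return norm * x ** (a - 1) * (1 - x) ** (b - 1) / (hi - lo)

    def m_dot(phi):
        """Assumed-shape mass burning rate m(phi), peaking near phi = 1."""
        return math.exp(-((phi - 1.0) / 0.3) ** 2)

    # Filtered burning rate: integrate m(phi) against the subfilter PDF.
    dphi = 1.4 / 500
    phis = [i * dphi for i in range(1, 500)]
    mean, var = 0.7, 0.05
    m_filtered = sum(m_dot(p) * beta_pdf(p, mean, var) * dphi for p in phis)
    m_dirac = m_dot(mean)   # Dirac-PDF (no subfilter variance) approximation
    print(round(m_filtered, 3), round(m_dirac, 3))
    ```

    The difference between the two printed values is exactly the subfilter-variance effect that the β-PDF captures and the Dirac function discards.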

  11. Predictive ability of genomic selection models for breeding value estimation on growth traits of Pacific white shrimp Litopenaeus vannamei

    Science.gov (United States)

    Wang, Quanchao; Yu, Yang; Li, Fuhua; Zhang, Xiaojun; Xiang, Jianhai

    2017-09-01

    Genomic selection (GS) can be used to accelerate genetic improvement by shortening the selection interval. The successful application of GS depends largely on the accuracy of the prediction of genomic estimated breeding value (GEBV). This study is a first attempt to understand the practicality of GS in Litopenaeus vannamei and aims to evaluate models for GS on growth traits. The performance of GS models in L. vannamei was evaluated in a population consisting of 205 individuals, which were genotyped for 6 359 single nucleotide polymorphism (SNP) markers by specific length amplified fragment sequencing (SLAF-seq) and phenotyped for body length and body weight. Three GS models (RR-BLUP, BayesA, and Bayesian LASSO) were used to obtain the GEBV, and their predictive ability was assessed by the reliability of the GEBV and the bias of the predicted phenotypes. The mean reliability of the GEBVs for body length and body weight predicted by the different models was 0.296 and 0.411, respectively. For each trait, the performances of the three models were very similar to each other with respect to predictability. The regression coefficients estimated by the three models were close to one, suggesting near to zero bias for the predictions. Therefore, when GS was applied in a L. vannamei population for the studied scenarios, all three models appeared practicable. Further analyses suggested that improved estimation of the genomic prediction could be realized by increasing the size of the training population as well as the density of SNPs.
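    RR-BLUP, the first of the three GS models, shrinks all marker effects equally via ridge regression and sums them into a GEBV. A sketch on simulated genotypes — the marker count, effect sizes and shrinkage parameter are assumptions, not the 6 359-SNP shrimp data:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Simulated genotypes: n individuals x p SNPs coded 0/1/2 (toy data);
    # 40 of the SNPs carry true effects on the trait.
    n, p = 300, 400
    Z = rng.integers(0, 3, size=(n, p)).astype(float)
    Z -= Z.mean(axis=0)                               # centre marker columns

    true_u = np.zeros(p)
    true_u[rng.choice(p, 40, replace=False)] = rng.normal(scale=0.5, size=40)
    y = Z @ true_u + rng.normal(scale=1.0, size=n)    # phenotype, e.g. body weight

    def marker_effects(Z, y, lam):
        """RR-BLUP: ridge regression shrinking all marker effects equally,
        u_hat = (Z'Z + lam*I)^-1 Z'y; an individual's GEBV is z @ u_hat."""
        return np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ y)

    u_hat = marker_effects(Z[:250], y[:250], lam=100.0)
    gebv_test = Z[250:] @ u_hat                       # predict held-out animals
    r = float(np.corrcoef(gebv_test, y[250:])[0, 1])  # predictability proxy
    print("predictive correlation:", round(r, 2))
    ```

    BayesA and Bayesian LASSO differ only in the prior placed on `true_u`; the GEBV construction is the same.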

  12. Predictive Modelling of Heavy Metals in Urban Lakes

    OpenAIRE

    Lindström, Martin

    2000-01-01

    Heavy metals are well-known environmental pollutants. In this thesis predictive models for heavy metals in urban lakes are discussed and new models presented. The base of predictive modelling is empirical data from field investigations of many ecosystems covering a wide range of ecosystem characteristics. Predictive models focus on the variabilities among lakes and processes controlling the major metal fluxes. Sediment and water data for this study were collected from ten small lakes in the ...

  13. Realistic microscopic level densities for spherical nuclei

    International Nuclear Information System (INIS)

    Cerf, N.

    1994-01-01

    Nuclear level densities play an important role in nuclear reactions such as the formation of the compound nucleus. We develop a microscopic calculation of the level density based on a combinatorial evaluation from a realistic single-particle level scheme. This calculation makes use of a fast Monte Carlo algorithm allowing us to consider large shell-model spaces which could not be treated previously in combinatorial approaches. Since our model relies on a microscopic basis, it can be applied to exotic nuclei with more confidence than the commonly used semiphenomenological formulas. An exhaustive comparison of our predicted neutron s-wave resonance spacings with experimental data for a wide range of nuclei is presented.

  14. Detecting reduced bone mineral density from dental radiographs using statistical shape models

    NARCIS (Netherlands)

    Allen, P.D.; Graham, J.; Farnell, D.J.J.; Harrison, E.J.; Jacobs, R.; Nicopoulou-Karyianni, K.; Lindh, C.; van der Stelt, P.F.; Horner, K.; Devlin, H.

    2007-01-01

    We describe a novel method of estimating reduced bone mineral density (BMD) from dental panoramic tomograms (DPTs), which show the entire mandible. Careful expert width measurement of the inferior mandibular cortex has been shown to be predictive of BMD in hip and spine osteopenia and osteoporosis.

  15. Stage-specific predictive models for breast cancer survivability.

    Science.gov (United States)

    Kate, Rohit J; Nadig, Ramya

    2017-01-01

    Survivability rates vary widely among the various stages of breast cancer. Although machine learning models built in the past to predict breast cancer survivability included stage as one of the features, they were not trained or evaluated separately for each stage. Our objective was to investigate whether models trained and evaluated separately for each stage perform differently from models trained jointly across stages. Using three different machine learning methods, we built models to predict breast cancer survivability separately for each stage and compared them with the traditional joint models built for all stages. We also evaluated the models separately for each stage and together for all stages. Our results show that the most suitable model for predicting survivability for a specific stage is the model trained on that particular stage. In our experiments, using additional examples from other stages during training did not help; in fact, it made performance worse in some cases. The most important features for predicting survivability were also found to differ between stages. By evaluating the models separately on different stages, we found that performance varied widely across them. We also demonstrate that evaluating predictive models for survivability on all stages together, as was done in the past, is misleading because it overestimates performance. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  16. Chemical theory and modelling through density across length scales

    International Nuclear Information System (INIS)

    Ghosh, Swapan K.

    2016-01-01

    One of the concepts that has played a major role in conceptual as well as computational developments, covering all the length scales of interest in a number of areas of chemistry, physics, chemical engineering and materials science, is the concept of single-particle density. Density functional theory has been a versatile tool for the description of many-particle systems across length scales. Thus, at the microscopic length scale, an electron-density-based description has played a major role in providing a deeper understanding of chemical binding in atoms, molecules and solids. The density concept has been used in the form of single-particle number density at the intermediate mesoscopic length scale to obtain an appropriate picture of equilibrium and dynamical processes, dealing with a wide class of problems involving interfacial science and soft condensed matter. At the macroscopic length scale, however, matter is usually treated as a continuous medium, and a description using local mass density, energy density and other related property density functions has been found to be quite appropriate. The basic ideas underlying the versatile uses of the concept of density in the theory and modelling of materials and phenomena, as visualized across length scales, along with selected illustrative applications to some recent areas of research on hydrogen energy, soft matter, nucleation phenomena, isotope separation, and separation of mixtures in the condensed phase, will form the subject matter of the talk. (author)

  17. A Bayesian approach for parameter estimation and prediction using a computationally intensive model

    International Nuclear Information System (INIS)

    Higdon, Dave; McDonnell, Jordan D; Schunck, Nicolas; Sarich, Jason; Wild, Stefan M

    2015-01-01

    Bayesian methods have been successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model η(θ), where θ denotes the uncertain, best input setting. Hence the statistical model is of the form y=η(θ)+ϵ, where ϵ accounts for measurement, and possibly other, error sources. When nonlinearity is present in η(⋅), the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and nonstandard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. Although generally applicable, MCMC requires thousands (or even millions) of evaluations of the physics model η(⋅). This requirement is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we present an approach adapted from Bayesian model calibration. This approach combines output from an ensemble of computational model runs with physical measurements, within a statistical formulation, to carry out inference. A key component of this approach is a statistical response surface, or emulator, estimated from the ensemble of model runs. We demonstrate this approach with a case study in estimating parameters for a density functional theory model, using experimental mass/binding energy measurements from a collection of atomic nuclei. We also demonstrate how this approach produces uncertainties in predictions for recent mass measurements obtained at Argonne National Laboratory. (paper)
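    The emulator in this approach replaces the expensive physics model with a cheap statistical response surface fitted to an ensemble of runs. A minimal Gaussian-process sketch with a toy one-parameter "model" standing in for the density functional theory code (kernel, length scale and design are assumptions):

    ```python
    import numpy as np

    def expensive_model(theta):
        """Stand-in for a slow physics code eta(theta)."""
        return np.sin(3 * theta) + 0.5 * theta

    # Ensemble of model runs at design points: the emulator's training set.
    design = np.linspace(0.0, 2.0, 12)
    runs = expensive_model(design)

    def rbf(a, b, ell=0.3):
        """Squared-exponential covariance between two 1-D point sets."""
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

    def emulate(theta_new, design, runs, nugget=1e-8):
        """GP posterior mean: a cheap response surface for eta(.)."""
        K = rbf(design, design) + nugget * np.eye(len(design))
        Ks = rbf(theta_new, design)
        return Ks @ np.linalg.solve(K, runs)

    theta_grid = np.linspace(0.0, 2.0, 50)
    pred = emulate(theta_grid, design, runs)
    err = float(np.max(np.abs(pred - expensive_model(theta_grid))))
    print("max emulation error:", round(err, 4))
    ```

    Inside MCMC, `emulate` would be called in place of `expensive_model`, turning millions of hour-long evaluations into millisecond lookups at the cost of the (quantifiable) emulation error.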

  18. Impact of modellers' decisions on hydrological a priori predictions

    Science.gov (United States)

    Holländer, H. M.; Bormann, H.; Blume, T.; Buytaert, W.; Chirico, G. B.; Exbrayat, J.-F.; Gustafsson, D.; Hölzel, H.; Krauße, T.; Kraft, P.; Stoll, S.; Blöschl, G.; Flühler, H.

    2014-06-01

    In practice, the catchment hydrologist is often confronted with the task of predicting discharge without having the needed records for calibration. Here, we report the discharge predictions of 10 modellers - using the model of their choice - for the man-made Chicken Creek catchment (6 ha, northeast Germany, Gerwin et al., 2009b) and we analyse how well they improved their predictions in three steps, each based on adding information prior to the following step. The modellers predicted the catchment's hydrological response in its initial phase without having access to the observed records. They used conceptually different physically based models and their modelling experience differed largely. Hence, they encountered two problems: (i) simulating discharge for an ungauged catchment and (ii) using models that were developed for catchments that are not in a state of landscape transformation. The prediction exercise was organized in three steps: (1) for the first prediction the modellers received a basic data set describing the catchment to a degree somewhat more complete than usually available for a priori predictions of ungauged catchments; they did not obtain information on stream flow, soil moisture or groundwater response and therefore had to guess the initial conditions; (2) before the second prediction they inspected the catchment on-site and discussed their first prediction attempt; (3) for their third prediction they were offered additional data by charging them pro forma with the costs for obtaining this additional information. Holländer et al. (2009) discussed the range of predictions obtained in step (1). Here, we detail the modellers' assumptions and decisions in accounting for the various processes. We document the prediction progress as well as the learning process resulting from the availability of added information. 
For the second and third steps, the progress in prediction quality is evaluated in relation to individual modelling experience and costs of

  19. A multivariate model for predicting segmental body composition.

    Science.gov (United States)

    Tian, Simiao; Mioche, Laurence; Denis, Jean-Baptiste; Morio, Béatrice

    2013-12-01

    The aims of the present study were to propose a multivariate model for simultaneously predicting body, trunk and appendicular fat and lean masses from easily measured variables, and to compare its predictive capacity with that of the available univariate models that predict body fat percentage (BF%). The dual-energy X-ray absorptiometry (DXA) dataset (52% men and 48% women) with White, Black and Hispanic ethnicities (1999-2004, National Health and Nutrition Examination Survey) was randomly divided into three sub-datasets: a training dataset (TRD), a test dataset (TED) and a validation dataset (VAD), comprising 3835, 1917 and 1917 subjects, respectively. For each sex, several multivariate prediction models were fitted from the TRD using age, weight, height and possibly waist circumference. The most accurate model was selected from the TED and then applied to the VAD and a French DXA dataset (French DB) (526 men and 529 women) to assess the prediction accuracy in comparison with that of five published univariate models, for which adjusted formulas were re-estimated using the TRD. Waist circumference was found to improve the prediction accuracy, especially in men. For BF%, the standard error of prediction (SEP) values were 3.26% (3.75%) for men and 3.47% (3.95%) for women in the VAD (French DB), as good as those of the adjusted univariate models. Moreover, the SEP values for the prediction of body and appendicular lean masses ranged from 1.39 to 2.75 kg for both sexes. The prediction accuracy was best for age < 65 years, BMI < 30 kg/m² and Hispanic ethnicity. The application of our multivariate model to large populations could be useful to address various public health issues.
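    A multivariate model of this kind fits all target masses in one regression on age, weight, height and waist circumference, and its accuracy can be summarized by the SEP on held-out data. A sketch on simulated data — the coefficients and noise levels are invented, not the NHANES values:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Simulated predictors: age (years), weight (kg), height (cm), waist (cm).
    n = 500
    X = np.column_stack([
        rng.uniform(20, 80, n),        # age
        rng.normal(75, 12, n),         # weight
        rng.normal(170, 9, n),         # height
        rng.normal(90, 11, n),         # waist circumference
    ])
    Xd = np.hstack([np.ones((n, 1)), X])   # add intercept column

    # Simulated targets: body fat %, trunk lean (kg), appendicular lean (kg),
    # generated from invented coefficients plus per-target noise.
    B_true = np.array([
        [5.00, 10.00, 8.00],
        [0.10,  0.00, 0.00],
        [0.15,  0.20, 0.12],
        [-0.05, 0.08, 0.09],
        [0.20,  0.05, 0.02],
    ])
    Y = Xd @ B_true + rng.normal(scale=[2.0, 1.5, 1.2], size=(n, 3))

    # One multivariate least-squares fit predicts all three outcomes at once.
    B_hat, *_ = np.linalg.lstsq(Xd[:400], Y[:400], rcond=None)
    resid = Y[400:] - Xd[400:] @ B_hat
    sep = np.sqrt((resid ** 2).mean(axis=0))   # standard error of prediction
    print("SEP per outcome:", np.round(sep, 2))
    ```

    Each SEP recovers roughly the noise level injected for that outcome, which is the same role the 3.26%/1.39-2.75 kg figures play in the abstract.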

  20. Two stage neural network modelling for robust model predictive control.

    Science.gov (United States)

    Patan, Krzysztof

    2018-01-01

    The paper proposes a novel robust model predictive control scheme realized by means of artificial neural networks. The neural networks are used in two ways: to design the so-called fundamental model of the plant and to capture the uncertainty associated with the plant model. In order to simplify the optimization process carried out within the framework of predictive control, an instantaneous linearization is applied, which makes it possible to define the optimization problem as constrained quadratic programming. Stability of the proposed control system is also investigated by showing that a cost function is monotonically decreasing with respect to time. The derived robust model predictive control is tested and validated on the example of a pneumatic servomechanism working at different operating regimes. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
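    Instantaneous linearization reduces each control step to a quadratic program over the input sequence. A minimal sketch for a scalar linearized plant — the plant coefficients, horizon and weights are assumptions, and input constraints are omitted so the QP has a closed-form solution:

    ```python
    import numpy as np

    # Linearized plant x_{k+1} = a*x_k + b*u_k around the current operating
    # point (what instantaneous linearization of a neural model would yield;
    # a and b are assumed values).
    a, b = 0.9, 0.5
    N = 10                      # prediction horizon
    q, r = 1.0, 0.1             # state and input weights

    # Stack the dynamics over the horizon: x = F*x0 + G @ u.
    F = np.array([a ** (i + 1) for i in range(N)])
    G = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1):
            G[i, j] = a ** (i - j) * b

    x0, x_ref = 2.0, 0.0
    # Quadratic cost q*||x - x_ref||^2 + r*||u||^2  ->  min 0.5 u'H u + g'u
    H = 2 * (q * G.T @ G + r * np.eye(N))
    g = 2 * q * G.T @ (F * x0 - x_ref)
    u = np.linalg.solve(H, -g)   # unconstrained QP solution

    x = F * x0 + G @ u
    print("first control move:", round(float(u[0]), 3))
    print("terminal state:", round(float(x[-1]), 4))
    ```

    With input or state constraints, `np.linalg.solve` would be replaced by a constrained QP solver; in receding-horizon fashion only `u[0]` is applied before relinearizing.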

  1. Discerning the neutron density distribution of 208Pb from nucleon elastic scattering

    International Nuclear Information System (INIS)

    Karataglidis, S.; Amos, K.; University of Melbourne, VIC; Brown, B.A.; Deb, P.K.

    2001-01-01

    We seek a measure of the neutron density of 208Pb from analyses of intermediate-energy nucleon elastic scattering. The pertinent model for such analyses is based on coordinate-space nonlocal optical potentials obtained from model nuclear ground-state densities. As a calibration of the use of Skyrme–Hartree–Fock models, elastic scattering from 40Ca was considered as well. Those potentials give predictions of integral observables and of angular distributions which show sensitivity to the neutron density. When compared with experiment, and correlated with analyses of electron scattering data, the results suggest that 208Pb has a neutron skin thickness of ∼0.17 fm.

  2. Hybrid Corporate Performance Prediction Model Considering Technical Capability

    Directory of Open Access Journals (Sweden)

    Joonhyuck Lee

    2016-07-01

    Full Text Available Many studies have tried to predict corporate performance and stock prices to enhance investment profitability using qualitative approaches such as the Delphi method. However, developments in data processing technology and machine-learning algorithms have led to efforts to develop quantitative prediction models in various managerial subject areas. We propose a quantitative corporate performance prediction model that applies the support vector regression (SVR) algorithm, which mitigates the overfitting of training data and can be applied to regression problems. The proposed model optimizes the SVR training parameters based on the training data, using a genetic algorithm, to achieve sustainable predictability in changeable markets and managerial environments. Technology-intensive companies represent an increasing share of the total economy. The performance and stock prices of these companies are affected by their financial standing and their technological capabilities. Therefore, we apply both financial indicators and technical indicators to establish the proposed prediction model. Here, we use time series data, including financial, patent, and corporate performance information of 44 electronic and IT companies. Then, we predict the performance of these companies as an empirical verification of the prediction performance of the proposed model.
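The idea of tuning a kernel regressor's hyperparameters with a genetic algorithm can be sketched as follows. To keep the sketch dependency-free, kernel ridge regression stands in for SVR, and the indicator data, fitness function and GA settings are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical indicator data (3 features); kernel ridge stands in for SVR
X = rng.normal(size=(120, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.1, 120)
X_tr, y_tr, X_va, y_va = X[:80], y[:80], X[80:], y[80:]

def rbf_kernel(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fitness(params):
    # Validation MSE of the regressor; the GA minimizes this
    lam, gamma = np.exp(params)            # search both parameters in log space
    K = rbf_kernel(X_tr, X_tr, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X_tr)), y_tr)
    pred = rbf_kernel(X_va, X_tr, gamma) @ alpha
    return np.mean((y_va - pred) ** 2)

# Minimal genetic algorithm: keep the 5 fittest, mutate them into 15 children
pop = rng.normal(0, 2, size=(20, 2))       # [log lam, log gamma] per individual
for _ in range(30):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[:5]]
    children = elite[rng.integers(0, 5, 15)] + rng.normal(0, 0.3, (15, 2))
    pop = np.vstack([elite, children])

best = min(pop, key=fitness)
best_mse = fitness(best)
```

A real implementation would add crossover and cross-validated fitness, but the elitism-plus-mutation loop already captures the parameter-search role the GA plays in the paper.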

  3. Non-invasive prediction of hemodynamically significant coronary artery stenoses by contrast density difference in coronary CT angiography

    Energy Technology Data Exchange (ETDEWEB)

    Hell, Michaela M., E-mail: michaela.hell@uk-erlangen.de [Department of Cardiology, University of Erlangen (Germany); Dey, Damini [Department of Biomedical Sciences, Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Taper Building, Room A238, 8700 Beverly Boulevard, Los Angeles, CA 90048 (United States); Marwan, Mohamed; Achenbach, Stephan; Schmid, Jasmin; Schuhbaeck, Annika [Department of Cardiology, University of Erlangen (Germany)

    2015-08-15

    Highlights: • Overestimation of coronary lesions by coronary computed tomography angiography and subsequent unnecessary invasive coronary angiography and revascularization is a concern. • Differences in plaque characteristics and contrast density difference between hemodynamically significant and non-significant stenoses, as defined by invasive fractional flow reserve, were assessed. • At a threshold of ≥24%, contrast density difference predicted hemodynamically significant lesions with a specificity of 75%, sensitivity of 33%, PPV of 35% and NPV of 73%. • The determination of contrast density difference required less time than transluminal attenuation gradient measurement. - Abstract: Objectives: Coronary computed tomography angiography (CTA) allows the detection of obstructive coronary artery disease. However, its ability to predict the hemodynamic significance of stenoses is limited. We assessed differences in plaque characteristics and contrast density difference between hemodynamically significant and non-significant stenoses, as defined by invasive fractional flow reserve (FFR). Methods: Lesion characteristics of 59 consecutive patients (72 lesions) in whom invasive FFR was performed in at least one coronary artery with moderate to high-grade stenoses in coronary CTA were evaluated by two experienced readers. Coronary CTA data sets were acquired on a second-generation dual-source CT scanner using retrospectively ECG-gated spiral acquisition or prospectively ECG-triggered axial acquisition mode. Plaque volume and composition (non-calcified, calcified), remodeling index as well as contrast density difference (defined as the percentage decline in luminal CT attenuation/cross-sectional area over the lesion) were assessed using a semi-automatic software tool (Autoplaq). Additionally, the transluminal attenuation gradient (defined as the linear regression coefficient between intraluminal CT attenuation and length from the ostium) was determined
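The contrast density difference metric defined above (percentage decline in luminal attenuation per cross-sectional area over the lesion) can be illustrated on hypothetical per-slice measurements; the ≥24% threshold is the abstract's, but the attenuation and area values are invented:

```python
import numpy as np

# Hypothetical per-slice measurements, proximal to distal, across one lesion
hu = np.array([420.0, 400.0, 300.0, 250.0, 240.0])   # luminal attenuation (HU)
area = np.array([9.0, 8.8, 7.5, 7.0, 6.8])           # lumen cross-section (mm^2)

density = hu / area                        # attenuation per unit area
cdd = 100 * (density[0] - density.min()) / density[0]  # % decline over lesion

# Abstract's operating point: CDD >= 24% flags a hemodynamically
# significant lesion (specificity 75%, sensitivity 33%)
significant = cdd >= 24
```
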

  4. Predicting electroporation of cells in an inhomogeneous electric field based on mathematical modeling and experimental CHO-cell permeabilization to propidium iodide determination.

    Science.gov (United States)

    Dermol, Janja; Miklavčič, Damijan

    2014-12-01

    High-voltage electric pulses cause electroporation of the cell membrane; consequently, the flow of molecules across the membrane increases. In our study we investigated the possibility of predicting the percentage of electroporated cells in an inhomogeneous electric field on the basis of experimental results obtained when cells were exposed to a homogeneous electric field. We compared and evaluated different mathematical models previously suggested by other authors for interpolation of the results (symmetric sigmoid, asymmetric sigmoid, hyperbolic tangent and Gompertz curve). We also investigated the effect of cell density and observed that it had the most significant effect on cell electroporation, while all four mathematical models yielded similar results. We were able to predict the electroporation of cells exposed to an inhomogeneous electric field based on mathematical modeling, using mathematical formulations of electroporation probability obtained experimentally from exposure to a homogeneous field at the same cell density. Models describing cell electroporation probability can be useful for the development and presentation of treatment planning for electrochemotherapy and non-thermal irreversible electroporation. Copyright © 2014 Elsevier B.V. All rights reserved.
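Fitting two of the formulations named above (symmetric sigmoid and Gompertz curve) to permeabilization data can be sketched with a plain grid-search least-squares fit; the field strengths, permeabilized fractions and parameter grids below are hypothetical:

```python
import numpy as np

# Hypothetical data: fraction of electroporated cells vs field strength (kV/cm)
E = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4])
p = np.array([0.02, 0.08, 0.30, 0.62, 0.85, 0.95, 0.98])

def sigmoid(E, E50, k):
    # Symmetric sigmoid: E50 = field at 50% electroporation, k = slope width
    return 1.0 / (1.0 + np.exp(-(E - E50) / k))

def gompertz(E, a, b):
    return np.exp(-a * np.exp(-b * E))

def fit(model, grid1, grid2):
    # Brute-force least squares over a 2-D parameter grid
    best, best_err = None, np.inf
    for p1 in grid1:
        for p2 in grid2:
            err = np.sum((model(E, p1, p2) - p) ** 2)
            if err < best_err:
                best, best_err = (p1, p2), err
    return best, best_err

sig_params, sig_err = fit(sigmoid, np.linspace(0.3, 1.2, 50),
                          np.linspace(0.05, 0.5, 50))
gom_params, gom_err = fit(gompertz, np.linspace(1.0, 20.0, 50),
                          np.linspace(1.0, 8.0, 50))
```

As in the study, both curve families fit such monotone dose-response data comparably well; the fitted curve can then be evaluated at the local field magnitude of each point in an inhomogeneous field map.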

  5. Dynamic Simulation of Human Gait Model With Predictive Capability.

    Science.gov (United States)

    Sun, Jinming; Wu, Shaoli; Voglewede, Philip A

    2018-03-01

    In this paper, it is proposed that the central nervous system (CNS) controls human gait using a predictive control approach in conjunction with classical feedback control instead of exclusive classical feedback control theory that controls based on past error. To validate this proposition, a dynamic model of human gait is developed using a novel predictive approach to investigate the principles of the CNS. The model developed includes two parts: a plant model that represents the dynamics of human gait and a controller that represents the CNS. The plant model is a seven-segment, six-joint model that has nine degrees-of-freedom (DOF). The plant model is validated using data collected from able-bodied human subjects. The proposed controller utilizes model predictive control (MPC). MPC uses an internal model to predict the output in advance, compare the predicted output to the reference, and optimize the control input so that the predicted error is minimal. To decrease the complexity of the model, two joints are controlled using a proportional-derivative (PD) controller. The developed predictive human gait model is validated by simulating able-bodied human gait. The simulation results show that the developed model is able to simulate the kinematic output close to experimental data.

  6. Massive Predictive Modeling using Oracle R Enterprise

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    R is fast becoming the lingua franca for analyzing data via statistics, visualization, and predictive analytics. For enterprise-scale data, R users have three main concerns: scalability, performance, and production deployment. Oracle's R-based technologies - Oracle R Distribution, Oracle R Enterprise, Oracle R Connector for Hadoop, and the R package ROracle - address these concerns. In this talk, we introduce Oracle's R technologies, highlighting how each enables R users to achieve scalability and performance while making production deployment of R results a natural outcome of the data analyst/scientist efforts. The focus then turns to Oracle R Enterprise with code examples using the transparency layer and embedded R execution, targeting massive predictive modeling. One goal behind massive predictive modeling is to build models per entity, such as customers, zip codes, simulations, in an effort to understand behavior and tailor predictions at the entity level. Predictions...

  7. Early changes of parotid density and volume predict modifications at the end of therapy and intensity of acute xerostomia.

    Science.gov (United States)

    Belli, Maria Luisa; Scalco, Elisa; Sanguineti, Giuseppe; Fiorino, Claudio; Broggi, Sara; Dinapoli, Nicola; Ricchetti, Francesco; Valentini, Vincenzo; Rizzo, Giovanna; Cattaneo, Giovanni Mauro

    2014-10-01

    To quantitatively assess the predictive power of early variations of parotid gland volume and density on final changes at the end of therapy and, possibly, on acute xerostomia during IMRT for head and neck cancer. Data of 92 parotids (46 patients) were available. Kinetics of the changes during treatment were described by the daily rate of density (rΔρ) and volume (rΔvol) variation based on weekly diagnostic kVCT images. Correlation between early and final changes was investigated, as well as the correlation with prospective toxicity data (CTCAE v3.0) collected weekly during treatment for 24/46 patients. A higher rΔρ was observed during the first compared to the last week of treatment (-0.50 vs -0.05 HU, p = 0.0001). Based on early variations, a good estimation of the final changes may be obtained (Δρ: AUC = 0.82, p = 0.0001; Δvol: AUC = 0.77, p = 0.0001). Both early rΔρ and rΔvol predict a higher "mean" acute xerostomia score (≥ median value, 1.57; p = 0.01). Median early density rate changes differed between patients with mean xerostomia score ≥ and < the median; acute xerostomia was well predicted by higher rΔρ and rΔvol in the first two weeks of treatment: best cut-off values were -0.50 HU/day and -380 mm³/day for rΔρ and rΔvol, respectively. Further studies are necessary to definitively assess the potential of early density/volume changes in identifying more sensitive patients at higher risk of experiencing xerostomia.

  8. Population Density Modeling for Diverse Land Use Classes: Creating a National Dasymetric Worker Population Model

    Science.gov (United States)

    Trombley, N.; Weber, E.; Moehl, J.

    2017-12-01

    Many studies invoke dasymetric mapping to make more accurate depictions of population distribution by spatially restricting populations to inhabited/inhabitable portions of observational units (e.g., census blocks) and/or by varying population density among different land classes. LandScan USA uses this approach by restricting particular population components (such as residents or workers) to building area detected from remotely sensed imagery, but also goes a step further by classifying each cell of building area in accordance with ancillary land use information from national parcel data (CoreLogic, Inc.'s ParcelPoint database). Modeling population density according to land use is critical. For instance, office buildings would have a higher density of workers than warehouses even though the latter would likely have more cells of detection. This paper presents a modeling approach by which different land uses are assigned different densities to more accurately distribute populations within them. For parts of the country where the parcel data is insufficient, an alternate methodology is developed that uses National Land Cover Database (NLCD) data to define the land use type of building detection. Furthermore, LiDAR data is incorporated for many of the largest cities across the US, allowing the independent variables to be updated from two-dimensional building detection area to total building floor space. In the end, four different regression models are created to explain the effect of different land uses on worker distribution: (1) a two-dimensional model using land use types from the parcel data; (2) a three-dimensional model using land use types from the parcel data; (3) a two-dimensional model using land use types from the NLCD data; and (4) a three-dimensional model using land use types from the NLCD data. By and large, the resultant coefficients followed intuition, but importantly allow the relationships between different land uses to be quantified. For instance, in the model
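The core dasymetric step, distributing a block's control total across building cells in proportion to land-use density coefficients, can be sketched as follows; the coefficients, land uses and counts are hypothetical, with the regression coefficients from the abstract playing the role of the per-land-use densities:

```python
import numpy as np

# Hypothetical per-land-use worker densities (workers per unit building area)
coef = {"office": 5.0, "retail": 2.0, "warehouse": 0.5}

# Detected building cells in one census block: (land use, building area)
cells = [("office", 120.0), ("office", 80.0),
         ("retail", 100.0), ("warehouse", 300.0)]
block_workers = 500                        # control total for the block

# Dasymetric allocation: share workers in proportion to coefficient * area,
# preserving the block total exactly (pycnophylactic property)
weights = np.array([coef[lu] * area for lu, area in cells])
alloc = block_workers * weights / weights.sum()
```

Note that the warehouse cell receives the fewest workers despite having the largest detected area, which is exactly the office-vs-warehouse point made in the abstract.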

  9. Prediction of residential radon exposure of the whole Swiss population: comparison of model-based predictions with measurement-based predictions.

    Science.gov (United States)

    Hauri, D D; Huss, A; Zimmermann, F; Kuehni, C E; Röösli, M

    2013-10-01

    Radon plays an important role for human exposure to natural sources of ionizing radiation. The aim of this article is to compare two approaches to estimate mean radon exposure in the Swiss population: model-based predictions at individual level and measurement-based predictions based on measurements aggregated at municipality level. A nationwide model was used to predict radon levels in each household and for each individual based on the corresponding tectonic unit, building age, building type, soil texture, degree of urbanization, and floor. Measurement-based predictions were carried out within a health impact assessment on residential radon and lung cancer. Mean measured radon levels were corrected for the average floor distribution and weighted with population size of each municipality. Model-based predictions yielded a mean radon exposure of the Swiss population of 84.1 Bq/m³. Measurement-based predictions yielded an average exposure of 78 Bq/m³. This study demonstrates that the model- and the measurement-based predictions provided similar results. The advantage of the measurement-based approach is its simplicity, which is sufficient for assessing exposure distribution in a population. The model-based approach allows predicting radon levels at specific sites, which is needed in an epidemiological study, and the results do not depend on how the measurement sites have been selected. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
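The measurement-based aggregation described above, correcting each municipality mean for floor mix and weighting by population, is a short computation; the radon means, correction factors and populations below are hypothetical:

```python
import numpy as np

# Hypothetical municipality-level data
muni_mean_radon = np.array([60.0, 95.0, 140.0])    # measured means (Bq/m3)
floor_correction = np.array([1.05, 0.98, 0.90])    # adjust for floor mix
population = np.array([200_000, 50_000, 10_000])

# Correct each municipality mean, then weight by population size
corrected = muni_mean_radon * floor_correction
national_mean = np.average(corrected, weights=population)
```
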

  10. A burnout prediction model based around char morphology

    Energy Technology Data Exchange (ETDEWEB)

    T. Wu; E. Lester; M. Cloke [University of Nottingham, Nottingham (United Kingdom). Nottingham Energy and Fuel Centre

    2005-07-01

    Poor burnout in a coal-fired power plant has marked penalties in the form of reduced energy efficiency and elevated waste material that cannot be utilized. The prediction of coal combustion behaviour in a furnace is of great significance in providing valuable information not only for process optimization but also for coal buyers in the international market. Coal combustion models have been developed that can make predictions about burnout behaviour and burnout potential. Most of these kinetic models require standard parameters such as volatile content, particle size and assumed char porosity in order to make a burnout prediction. This paper presents a new model called the Char Burnout Model (ChB) that also uses detailed information about char morphology in its prediction. The model can use data input from one of two sources, both derived from image analysis techniques: the first from individual analysis and characterization of real char types using an automated program, the second from predicted char types based on data collected during the automated image analysis of coal particles. Modelling results were compared with a different carbon burnout kinetic model and with burnout data from re-firing the chars in a drop tube furnace operating at 1300°C, 5% oxygen, across several residence times. The improved agreement between the ChB model and the DTF experimental data shows that the inclusion of char morphology in combustion models can improve model predictions. 27 refs., 4 figs., 4 tabs.

  11. Clinical Prediction Models for Cardiovascular Disease: Tufts Predictive Analytics and Comparative Effectiveness Clinical Prediction Model Database.

    Science.gov (United States)

    Wessler, Benjamin S; Lai Yh, Lana; Kramer, Whitney; Cangelosi, Michael; Raman, Gowri; Lutz, Jennifer S; Kent, David M

    2015-07-01

    Clinical prediction models (CPMs) estimate the probability of clinical outcomes and hold the potential to improve decision making and individualize care. For patients with cardiovascular disease, there are numerous CPMs available although the extent of this literature is not well described. We conducted a systematic review for articles containing CPMs for cardiovascular disease published between January 1990 and May 2012. Cardiovascular disease includes coronary heart disease, heart failure, arrhythmias, stroke, venous thromboembolism, and peripheral vascular disease. We created a novel database and characterized CPMs based on the stage of development, population under study, performance, covariates, and predicted outcomes. There are 796 models included in this database. The number of CPMs published each year is increasing steadily over time. Seven hundred seventeen (90%) are de novo CPMs, 21 (3%) are CPM recalibrations, and 58 (7%) are CPM adaptations. This database contains CPMs for 31 index conditions, including 215 CPMs for patients with coronary artery disease, 168 CPMs for population samples, and 79 models for patients with heart failure. There are 77 distinct index/outcome pairings. Of the de novo models in this database, 450 (63%) report a c-statistic and 259 (36%) report some information on calibration. There is an abundance of CPMs available for a wide assortment of cardiovascular disease conditions, with substantial redundancy in the literature. The comparative performance of these models, the consistency of effects and risk estimates across models and the actual and potential clinical impact of this body of literature is poorly understood. © 2015 American Heart Association, Inc.

  12. Prediction of resource volumes at untested locations using simple local prediction models

    Science.gov (United States)

    Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.

    2006-01-01

    This paper shows how local spatial nonparametric prediction models can be applied to estimate volumes of recoverable gas resources at individual undrilled sites, at multiple sites on a regional scale, and to compute confidence bounds for regional volumes based on the distribution of those estimates. An approach that combines cross-validation, the jackknife, and bootstrap procedures is used to accomplish this task. Simulation experiments show that cross-validation can be applied beneficially to select an appropriate prediction model. The cross-validation procedure worked well for a wide range of different states of nature and levels of information. Jackknife procedures are used to compute individual prediction estimation errors at undrilled locations. The jackknife replicates also are used with a bootstrap resampling procedure to compute confidence bounds for the total volume. The method was applied to data (partitioned into a training set and target set) from the Devonian Antrim Shale continuous-type gas play in the Michigan Basin in Otsego County, Michigan. The analysis showed that the model estimate of total recoverable volumes at prediction sites is within 4 percent of the total observed volume. The model predictions also provide frequency distributions of the cell volumes at the production unit scale. Such distributions are the basis for subsequent economic analyses. © Springer Science+Business Media, LLC 2007.
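The jackknife-plus-bootstrap chain described above can be sketched on synthetic cell volumes; the lognormal data and the exact resampling scheme are illustrative simplifications of the paper's procedure, not its implementation:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic "recoverable volume" observations for 40 drilled cells
volumes = rng.lognormal(mean=2.0, sigma=0.6, size=40)
n = volumes.size

# Jackknife: leave-one-out estimates of the mean cell volume
jack = np.array([np.delete(volumes, i).mean() for i in range(n)])

# Bootstrap the jackknife replicates to bound the regional total
totals = np.array([n * rng.choice(jack, n).mean() for _ in range(2000)])
lo, hi = np.percentile(totals, [2.5, 97.5])
point = volumes.sum()                      # point estimate of the total
```

In the paper the jackknife replicates come from refitting the spatial prediction model with each site held out; here the leave-one-out mean plays that role to keep the sketch self-contained.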

  13. Monte Carlo neutral density calculations for ELMO Bumpy Torus

    International Nuclear Information System (INIS)

    Davis, W.A.; Colchin, R.J.

    1986-11-01

    The steady-state nature of the ELMO Bumpy Torus (EBT) plasma implies that the neutral density at any point inside the plasma volume will determine the local particle confinement time. This paper describes a Monte Carlo calculation of three-dimensional atomic and molecular neutral density profiles in EBT. The calculation has been done using various models for neutral source points, for launching schemes, for plasma profiles, and for plasma densities and temperatures. Calculated results are compared with experimental observations - principally spectroscopic measurements - both for guidance in normalization and for overall consistency checks. Implications of the predicted neutral profiles for the fast-ion-decay measurement of neutral densities are also addressed.

  14. An analysis of the gradient-induced electric fields and current densities in human models when situated in a hybrid MRI-LINAC system

    International Nuclear Information System (INIS)

    Liu, Limei; Trakic, Adnan; Sanchez-Lopez, Hector; Liu, Feng; Crozier, Stuart

    2014-01-01

    MRI-LINAC is a new image-guided radiotherapy treatment system that combines magnetic resonance imaging (MRI) with a linear accelerator (LINAC) in a single unit. One drawback is that the pulsing of the split gradient coils of the system induces an electric field and currents in the patient which need to be predicted and evaluated for patient safety. In this novel numerical study the in situ electric fields and associated current densities were evaluated inside tissue-accurate male and female human voxel models when a number of different split-geometry gradient coils were operated. The body models were located in the MRI-LINAC system along the axial and radial directions in three different body positions. Each model had a region of interest (ROI) suitable for image-guided radiotherapy. The simulation results show that the amplitudes and distributions of the field and current density induced by different split x-gradient coils were similar with one another in the ROI of the body model, but varied outside of the region. The fields and current densities induced by a split classic coil with the surface unconnected showed the largest deviation from those given by the conventional non-split coils. Another finding indicated that the distributions of the peak current densities varied when the body position, orientation or gender changed, while the peak electric fields mainly occurred in the skin and fat tissues. (paper)

  15. Experimental evidence that density dependence strongly influences plant invasions through fragmented landscapes.

    Science.gov (United States)

    Williams, Jennifer L; Levine, Jonathan M

    2018-04-01

    Populations of range-expanding species encounter patches of both favorable and unfavorable habitat as they spread across landscapes. Theory shows that increasing patchiness slows the spread of populations modeled with continuously varying population density when dispersal is not influenced by the environment or individual behavior. However, as is found in uniformly favorable landscapes, spread remains driven by fecundity and dispersal from low-density individuals at the invasion front. In contrast, when modeled populations are composed of discrete individuals, patchiness causes populations to build up to high density before dispersing past unsuitable habitat, introducing an important influence of density dependence on spread velocity. To test the hypothesized interaction between habitat patchiness and density dependence, we simultaneously manipulated these factors in a greenhouse system of annual plants spreading through replicated experimental landscapes. We found that increasing the size of gaps and amplifying the strength of density dependence both slowed spread velocity, but contrary to predictions, the effect of amplified density dependence was similar across all landscape types. Our results demonstrate that the discrete nature of individuals in spreading populations has a strong influence on how both landscape patchiness and density dependence influence spread through demographic and dispersal stochasticity. Both finiteness and landscape structure should be critical components of theoretical predictions of future spread for range-expanding native species or invasive species colonizing new habitat. © 2018 by the Ecological Society of America.

  16. Extrapolating cetacean densities to quantitatively assess human impacts on populations in the high seas.

    Science.gov (United States)

    Mannocci, Laura; Roberts, Jason J; Miller, David L; Halpin, Patrick N

    2017-06-01

    As human activities expand beyond national jurisdictions to the high seas, there is an increasing need to consider anthropogenic impacts to species inhabiting these waters. The current scarcity of scientific observations of cetaceans in the high seas impedes the assessment of population-level impacts of these activities. We developed plausible density estimates to facilitate a quantitative assessment of anthropogenic impacts on cetacean populations in these waters. Our study region extended from a well-surveyed region within the U.S. Exclusive Economic Zone into a large region of the western North Atlantic sparsely surveyed for cetaceans. We modeled densities of 15 cetacean taxa with available line transect survey data and habitat covariates and extrapolated predictions to sparsely surveyed regions. We formulated models to reduce the extent of extrapolation beyond covariate ranges, and constrained them to model simple and generalizable relationships. To evaluate confidence in the predictions, we mapped where predictions were made outside sampled covariate ranges, examined alternate models, and compared predicted densities with maps of sightings from sources that could not be integrated into our models. Confidence levels in model results depended on the taxon and geographic area and highlighted the need for additional surveying in environmentally distinct areas. With application of necessary caution, our density estimates can inform management needs in the high seas, such as the quantification of potential cetacean interactions with military training exercises, shipping, fisheries, and deep-sea mining and be used to delineate areas of special biological significance in international waters. Our approach is generally applicable to other marine taxa and geographic regions for which management will be implemented but data are sparse. © 2016 The Authors. Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology.
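The extrapolation check described above, mapping where predictions fall outside the sampled covariate ranges, reduces to a simple range test per cell; the covariates, survey values and prediction cells below are hypothetical:

```python
import numpy as np

# Covariates observed on surveyed transects: [SST (degC), depth (m)]
surveyed = np.array([[12.0, 200.0], [18.0, 1500.0],
                     [15.0, 800.0], [20.0, 3000.0]])
lo, hi = surveyed.min(axis=0), surveyed.max(axis=0)

# Prediction grid cells in the sparsely surveyed region
prediction_cells = np.array([[14.0, 900.0],    # within sampled ranges
                             [25.0, 2000.0],   # SST beyond sampled range
                             [16.0, 5500.0]])  # depth beyond sampled range

# Flag cells where any covariate falls outside the sampled range
outside = ((prediction_cells < lo) | (prediction_cells > hi)).any(axis=1)
```

Flagged cells are where density predictions rest on extrapolation and deserve the lower confidence (and additional surveying) the abstract calls for.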

  17. A burnout prediction model based around char morphology

    Energy Technology Data Exchange (ETDEWEB)

    Tao Wu; Edward Lester; Michael Cloke [University of Nottingham, Nottingham (United Kingdom). School of Chemical, Environmental and Mining Engineering

    2006-05-15

    Several combustion models have been developed that can make predictions about coal burnout and burnout potential. Most of these kinetic models require standard parameters such as volatile content and particle size to make a burnout prediction. This article presents a new model called the char burnout (ChB) model, which also uses detailed information about char morphology in its prediction. The input data to the model is based on information derived from two different image analysis techniques. One technique generates characterization data from real char samples, and the other predicts char types based on characterization data from image analysis of coal particles. The pyrolyzed chars in this study were created in a drop tube furnace operating at 1300°C, 200 ms, and 1% oxygen. Modeling results were compared with a different carbon burnout kinetic model as well as the actual burnout data from refiring the same chars in a drop tube furnace operating at 1300°C, 5% oxygen, and residence times of 200, 400, and 600 ms. A good agreement between ChB model and experimental data indicates that the inclusion of char morphology in combustion models could well improve model predictions. 38 refs., 5 figs., 6 tabs.

  18. Comparative Study of Bancruptcy Prediction Models

    Directory of Open Access Journals (Sweden)

    Isye Arieshanti

    2013-09-01

    Full Text Available Early indication of bankruptcy is important for a company. If companies are aware of their bankruptcy potential, they can take preventive action to anticipate it. In order to detect potential bankruptcy, a company can utilize a bankruptcy prediction model. The prediction model can be built using machine learning methods. However, the choice of machine learning method should be made carefully, because the suitability of a model depends on the specific problem. Therefore, in this paper we perform a comparative study of several machine learning methods for bankruptcy prediction. Comparing the performance of several machine-learning-based models (k-NN, fuzzy k-NN, SVM, Bagging Nearest Neighbour SVM, Multilayer Perceptron (MLP), and a hybrid of MLP + multiple linear regression), the study shows that the fuzzy k-NN method achieves the best performance, with accuracy of 77.5%.
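One of the compared baselines, plain k-NN, can be sketched dependency-free on synthetic financial-ratio data; the features, labels and 150/50 split are invented, not the paper's dataset:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic two-ratio dataset; label 1 = bankrupt (invented decision rule)
n = 200
X = rng.normal(size=(n, 2))
y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(0, 0.5, n) > 0).astype(int)
X_tr, y_tr, X_te, y_te = X[:150], y[:150], X[150:], y[150:]

def knn_predict(X_tr, y_tr, X_te, k=5):
    # Majority vote among the k nearest training points (Euclidean distance)
    d = ((X_te[:, None, :] - X_tr[None, :, :]) ** 2).sum(-1)
    nearest = np.argsort(d, axis=1)[:, :k]
    return (y_tr[nearest].mean(axis=1) >= 0.5).astype(int)

acc = (knn_predict(X_tr, y_tr, X_te) == y_te).mean()
```

Fuzzy k-NN, the study's winner, replaces the hard majority vote with distance-weighted membership degrees; the comparison framework (shared split, accuracy metric) is otherwise the same.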

  19. Construction of a voxel model from CT images with density derived from CT numbers

    International Nuclear Information System (INIS)

    Cheng Mengyun; Zeng Qin; Cao Ruifen; Li Gui; Zheng Huaqing; Huang Shanqing; Song Gang; Wu Yican

    2011-01-01

    The voxel models representing human anatomy have been developed to calculate dose distribution in the human body, and density and elemental composition are the most important physical properties of a voxel model. Usually, when creating the Monte Carlo input files, the average tissue densities recommended in ICRP Publication were used to assign each voxel in the existing voxel models. As each tissue consists of many voxels with different densities, the conventional method of average tissue densities fails to account for this voxel-to-voxel variation and therefore cannot represent human anatomy faithfully. To represent human anatomy more faithfully, a method was implemented to assign each voxel a density derived from its CT number. In order to compare with the traditional method, we constructed two models from the same cadaver specimen dataset. A CT-based pelvic voxel model called the Pelvis-CT model was constructed, with densities derived from the CT numbers. A color-photograph-based pelvic voxel model called the Pelvis-Photo model was also constructed, with densities taken from ICRP Publication. The CT images and the color photographs were obtained from the same female cadaver specimen. The Pelvis-CT and Pelvis-Photo models were both ported into the Monte Carlo code MCNP to calculate the conversion coefficients from kerma free-in-air to absorbed dose for external monoenergetic photon beams with energies of 0.1, 1 and 10 MeV under anterior-posterior (AP) geometry. The results were compared with those given in ICRP Publication 74. Differences of up to 50% were observed between the conversion coefficients of the Pelvis-CT and Pelvis-Photo models; moreover, the discrepancies decreased for photon beams with higher energies. The overall trend of the conversion coefficients of the Pelvis-CT model agreed well with that of the ICRP Publication 74 data. (author)
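Deriving a per-voxel density from its CT number, as done for the Pelvis-CT model, is commonly a piecewise-linear Hounsfield-unit calibration; the breakpoints and slopes below are illustrative, not the paper's calibration:

```python
import numpy as np

def hu_to_density(hu):
    # Piecewise-linear calibration: water (0 HU) maps to 1.0 g/cm3, air
    # (-1000 HU) to ~0, with a shallower slope above 0 HU toward bone.
    # Breakpoints and slopes are illustrative, not a clinical calibration.
    hu = np.asarray(hu, dtype=float)
    return np.where(hu <= 0.0,
                    1.0 + hu / 1000.0,
                    1.0 + 0.0006 * hu)

voxels_hu = np.array([-1000.0, -500.0, 0.0, 40.0, 1200.0])
rho = hu_to_density(voxels_hu)   # one density per voxel, not one per tissue
```

The contrast with the traditional method is the last line: every voxel gets its own density rather than inheriting a single ICRP average for its tissue label.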

  20. Continuum corrections to the level density and its dependence on excitation energy, n-p asymmetry, and deformation

    International Nuclear Information System (INIS)

    Charity, R.J.; Sobotka, L.G.

    2005-01-01

    In the independent-particle model, the nuclear level density is determined from the neutron and proton single-particle level densities. The single-particle level density for the positive-energy continuum levels is important at high excitation energies for stable nuclei and at all excitation energies for nuclei near the drip lines. This single-particle level density is subdivided into compound-nucleus and gas components. Two methods are considered for this subdivision: In the subtraction method, the single-particle level density is determined from the scattering phase shifts. In the Gamov method, only the narrow Gamov states or resonances are included. The level densities calculated with these two methods are similar; both can be approximated by the backshifted Fermi-gas expression with level-density parameters that are dependent on A, but with very little dependence on the neutron or proton richness of the nucleus. However, a small decrease in the level-density parameter is predicted for some nuclei very close to the drip lines. The largest difference between the calculations using the two methods is the deformation dependence of the level density. The Gamov method predicts a very strong peaking of the level density at sphericity for high excitation energies. This leads to a suppression of deformed configurations and, consequently, the fission rate predicted by the statistical model is reduced in the Gamov method.
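The backshifted Fermi-gas expression that the calculated level densities are approximated by can be written out directly; the convention and the parameter values (level-density parameter a, backshift delta, spin cutoff sigma) are illustrative, not the paper's fits:

```python
import numpy as np

def bsfg_level_density(E, a=12.0, delta=0.5, sigma=4.0):
    # rho(E) = exp(2 sqrt(a U)) / (12 sqrt(2) sigma a^(1/4) U^(5/4)),
    # with U = E - delta the backshifted excitation energy (MeV)
    U = np.asarray(E, dtype=float) - delta
    return np.exp(2.0 * np.sqrt(a * U)) / (
        12.0 * np.sqrt(2.0) * sigma * a ** 0.25 * U ** 1.25)

E = np.array([5.0, 10.0, 20.0])            # excitation energies (MeV)
rho = bsfg_level_density(E)                # levels per MeV, rising steeply with E
```

The A-dependence discussed above enters through the parameter a; the continuum corrections shift a and delta rather than changing the functional form.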