WorldWideScience

Sample records for model predicts density

  1. Performance model to predict overall defect density

    Directory of Open Access Journals (Sweden)

    J Venkatesh

    2012-08-01

    Full Text Available Management by metrics is expected of IT service providers and serves as a differentiator. Given a project, its associated parameters and dynamics, the behaviour and outcome need to be predicted. There is a lot of focus on the end state and on minimizing defect leakage as much as possible. In most cases the actions taken are reactive and come too late in the life cycle: root cause analysis and corrective actions can then benefit only the next project. The focus has to shift left, towards the execution phase, rather than waiting for lessons to be learnt after implementation. How do we proactively predict defect metrics and put a preventive action plan in place? This paper illustrates a process performance model to predict overall defect density based on data from projects in an organization.

  2. A Model of Foam Density Prediction for Expanded Perlite Composites

    Directory of Open Access Journals (Sweden)

    Arifuzzaman Md

    2015-01-01

    Full Text Available Multiple sets of variables associated with expanded perlite particle consolidation in foam manufacturing were analyzed to develop a model for predicting perlite foam density. The consolidation of perlite particles based on the flotation method and compaction involves numerous variables leading to the final perlite foam density. The variables include binder content, compaction ratio, perlite particle size, various perlite particle densities and porosities, and various volumes of perlite at different stages of the process. The developed model was found to be useful not only for prediction of foam density but also for optimization between compaction ratio and binder content to achieve a desired density. Experimental verification was conducted using a range of foam densities (0.15 – 0.5 g/cm3) produced with a range of compaction ratios (1.5 – 3.5), a range of sodium silicate contents (0.05 – 0.35 g/ml in dilution), a range of expanded perlite particle sizes (1 – 4 mm), and various perlite densities (such as skeletal, material, bulk, and envelope densities). A close agreement between predictions and experimental results was found.

  3. Nuclear level density predictions

    Directory of Open Access Journals (Sweden)

    Bucurescu Dorel

    2015-01-01

    Full Text Available Simple formulas depending only on nuclear masses were previously proposed for the parameters of the Back-Shifted Fermi Gas (BSFG) model and of the Constant Temperature (CT) model of the nuclear level density. They are now applied to predict the level density parameters of all nuclei with available masses. Both masses from the new 2012 mass table and masses from different models are considered, and the predictions are discussed in connection with the nuclear regions most affected by shell corrections and nuclear structure effects and most relevant for nucleosynthesis.
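
    For reference, the two level-density parameterizations named in this record are commonly written in the following forms (notation and back-shift conventions here are generic and may differ in detail from the cited work):

```latex
% Back-Shifted Fermi Gas (BSFG): level density at excitation energy E, with
% level-density parameter a, back-shift E_1, and spin-cutoff parameter sigma
\rho_{\mathrm{BSFG}}(E) = \frac{\exp\!\bigl(2\sqrt{a\,(E-E_1)}\bigr)}
                               {12\sqrt{2}\,\sigma\,a^{1/4}\,(E-E_1)^{5/4}}

% Constant Temperature (CT): nuclear temperature T and energy shift E_0
\rho_{\mathrm{CT}}(E) = \frac{1}{T}\,\exp\!\left(\frac{E-E_0}{T}\right)
```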

  4. EXACT MINIMAX ESTIMATION OF THE PREDICTIVE DENSITY IN SPARSE GAUSSIAN MODELS.

    Science.gov (United States)

    Mukherjee, Gourab; Johnstone, Iain M

    We consider estimating the predictive density under Kullback-Leibler loss in an ℓ0 sparse Gaussian sequence model. Explicit expressions of the first order minimax risk along with its exact constant, asymptotically least favorable priors and optimal predictive density estimates are derived. Compared to the sparse recovery results involving point estimation of the normal mean, new decision theoretic phenomena are seen. Suboptimal performance of the class of plug-in density estimates reflects the predictive nature of the problem and optimal strategies need diversification of the future risk. We find that minimax optimal strategies lie outside the Gaussian family but can be constructed with threshold predictive density estimates. Novel minimax techniques involving simultaneous calibration of the sparsity adjustment and the risk diversification mechanisms are used to design optimal predictive density estimates.
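
    As a reminder of the decision-theoretic setup this record refers to (notation ours): predictive densities are scored by Kullback-Leibler loss, and the Bayes predictive density under a prior averages the sampling density over the posterior, in contrast to plug-in estimates of the form p(y | theta_hat(x)), which the paper shows to be suboptimal here.

```latex
% Kullback-Leibler loss of a predictive density estimate \hat{q} for future data y ~ p(.|\theta)
L(\theta,\hat{q}) = \int p(y\mid\theta)\,\log\frac{p(y\mid\theta)}{\hat{q}(y)}\,dy

% Bayes predictive density under prior \pi, given past observations x
\hat{q}_{\pi}(y\mid x) = \int p(y\mid\theta)\,\pi(\theta\mid x)\,d\theta
```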

  5. All Recent Mars Landers Have Landed Downrange - Are Mars Atmosphere Models Mis-Predicting Density?

    Science.gov (United States)

    Desai, Prasun N.

    2008-01-01

    All recent Mars landers (Mars Pathfinder, the two Mars Exploration Rovers Spirit and Opportunity, and the Mars Phoenix Lander) have landed further downrange than their pre-entry predictions. Mars Pathfinder landed 27 km downrange of its prediction [1], Spirit and Opportunity landed 13.4 km and 14.9 km, respectively, downrange from their predictions [2], and Phoenix landed 21 km downrange from its prediction [3]. Reconstruction of their entries revealed a lower density profile than the best a priori atmospheric model predictions. Do these results suggest that there is a systemic issue in present Mars atmosphere models that predict a higher density than observed on landing day? Spirit Landing: The landing location for Spirit was 13.4 km downrange of the prediction as shown in Fig. 1. The navigation errors upon Mars arrival were very small [2]. As such, the entry interface conditions were not responsible for this downrange landing. Consequently, experiencing a lower density during the entry was the underlying cause. The reconstructed density profile that Spirit experienced is shown in Fig. 2, which is plotted as a fraction of the pre-entry baseline prediction that was used for all the entry, descent, and landing (EDL) design analyses. The reconstructed density is observed to be less dense throughout the descent, reaching a maximum reduction of 15% at 21 km. This lower density corresponded to approximately a 1-sigma-low profile relative to the predicted dispersions. Nearly all the deceleration during the entry occurs within 10-50 km altitude. As such, prediction of density within this altitude band is most critical for entry flight dynamics analyses and design (e.g., aerodynamic and aerothermodynamic predictions, landing location, etc.).

  6. A model for the evolution of large density perturbations - Normalization and predictions. [In universe

    Energy Technology Data Exchange (ETDEWEB)

    Martinez-Gonzalez, E.; Sanz, J.L. (Cantabria Universidad, Santander (Spain))

    1991-01-01

    The nonlinear evolution of matter density fluctuations in the universe is studied. The Zeldovich solution is applied to the quasi-linear regime, and a model to stop the fluctuations from growing in the very nonlinear regime is considered. The model is based on the virialization of collapsing pancakes. The density contrast of a typical pancake at the time it starts to relax is given for universes with different values of Omega. With this model, it is possible to calculate the probability density of the final density fluctuations. Results on the normalization of the power spectrum of the initial density fluctuations are given as a function of Omega. Predictions of the model for the filling factor of superclusters and voids are compared with observations. 37 refs.

  7. Predicting stem borer density in maize using RapidEye data and generalized linear models

    Science.gov (United States)

    Abdel-Rahman, Elfatih M.; Landmann, Tobias; Kyalo, Richard; Ong'amo, George; Mwalusepo, Sizah; Sulieman, Saad; Ru, Bruno Le

    2017-05-01

    Average maize yield in eastern Africa is 2.03 t ha-1, compared to a global average of 6.06 t ha-1, due to biotic and abiotic constraints. Amongst the biotic production constraints in Africa, stem borers are the most injurious. In eastern Africa, maize yield losses due to stem borers are currently estimated at between 12% and 21% of the total production. The objective of the present study was to explore the potential of RapidEye spectral data for assessing stem borer larva densities in maize fields at two study sites in Kenya. RapidEye images were acquired for the Bomet (western Kenya) site on 9 December 2014 and 27 January 2015, and for Machakos (eastern Kenya) on 3 January 2015. Five RapidEye spectral bands as well as 30 spectral vegetation indices (SVIs) were used to predict per-field maize stem borer larva densities using generalized linear models (GLMs), assuming Poisson ('Po') and negative binomial ('NB') distributions. Root mean square error (RMSE) and ratio of prediction to deviation (RPD) statistics were used to assess model performance with a leave-one-out cross-validation approach. The zero-inflated NB ('ZINB') models outperformed the 'NB' models, and stem borer larva densities could only be predicted during the mid growing season, in December and early January at the two study sites, respectively (RMSE = 0.69-1.06 and RPD = 8.25-19.57). Overall, all models performed similarly whether all 30 SVIs (non-nested) or only the significant (nested) SVIs were used. The models developed could improve decision making regarding the control of maize stem borers within integrated pest management (IPM) interventions.
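
    A minimal sketch of the kind of workflow this record describes: a negative binomial GLM relating per-field larva counts to spectral vegetation indices, evaluated by leave-one-out cross-validation with RMSE and RPD (here taken as the standard deviation of the observations divided by RMSE). Column names and data are placeholder assumptions, and the zero-inflated variant reported in the study is not shown.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def loocv_rmse_rpd(df, svi_cols, count_col="larvae"):
    """Leave-one-out CV of a negative binomial GLM: larva counts ~ spectral indices."""
    preds = []
    for i in range(len(df)):
        train = df.drop(df.index[i])
        test = df.iloc[[i]]
        X_tr = sm.add_constant(train[svi_cols])
        X_te = sm.add_constant(test[svi_cols], has_constant="add")
        model = sm.GLM(train[count_col], X_tr,
                       family=sm.families.NegativeBinomial()).fit()
        preds.append(float(np.asarray(model.predict(X_te))[0]))
    y = df[count_col].to_numpy(dtype=float)
    rmse = float(np.sqrt(np.mean((y - np.array(preds)) ** 2)))
    rpd = float(np.std(y, ddof=1) / rmse)   # ratio of performance to deviation
    return rmse, rpd

# Hypothetical usage: df holds per-field larva counts and RapidEye-derived indices
# rmse, rpd = loocv_rmse_rpd(df, svi_cols=["NDVI", "NDRE", "red_edge_index"])
```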

  8. Model Predictive Control with Integral Action for Current Density Profile Tracking in NSTX-U

    Science.gov (United States)

    Ilhan, Z. O.; Wehner, W. P.; Schuster, E.; Boyer, M. D.

    2016-10-01

    Active control of the toroidal current density profile may play a critical role in non-inductively sustained long-pulse, high-beta scenarios in a spherical torus (ST) configuration, which is among the missions of the NSTX-U facility. In this work, a previously developed physics-based control-oriented model is embedded in a feedback control scheme based on a model predictive control (MPC) strategy to track a desired current density profile evolution specified indirectly by a desired rotational transform profile. An integrator is embedded into the standard MPC formulation to reject various modeling uncertainties and external disturbances. Neutral beam powers, electron density, and total plasma current are used as actuators. The proposed MPC strategy incorporates various state and actuator constraints directly into the control design process by solving a constrained optimization problem in real-time to determine the optimal actuator requests. The effectiveness of the proposed controller in regulating the current density profile in NSTX-U is demonstrated in closed-loop nonlinear simulations. Supported by the US DOE under DE-AC02-09CH11466.
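
    A schematic of linear MPC with integral action, in the spirit of (but far simpler than) the profile-control problem described above: the state is augmented with an integrator of the tracking error, and a constrained quadratic program is solved at each control step. All matrices, horizons, weights and bounds are placeholder assumptions; cvxpy is assumed available as the QP modeling layer.

```python
import numpy as np
import cvxpy as cp

def mpc_step(A, B, C, x0, z0, r, N=10, u_min=-1.0, u_max=1.0):
    """One MPC solve with integral action: z accumulates the tracking error y - r."""
    nx, nu = B.shape
    ny = C.shape[0]
    x = cp.Variable((nx, N + 1))      # predicted states
    z = cp.Variable((ny, N + 1))      # integral of tracking error
    u = cp.Variable((nu, N))          # actuator requests over the horizon
    Q, R, QI = np.eye(ny), 0.1 * np.eye(nu), 5.0 * np.eye(ny)
    cost = 0
    constr = [x[:, 0] == x0, z[:, 0] == z0]
    for k in range(N):
        y_k = C @ x[:, k]
        cost += cp.quad_form(y_k - r, Q) + cp.quad_form(u[:, k], R) \
                + cp.quad_form(z[:, k], QI)
        constr += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                   z[:, k + 1] == z[:, k] + (y_k - r),   # integral action
                   u[:, k] >= u_min, u[:, k] <= u_max]   # actuator limits
    cp.Problem(cp.Minimize(cost), constr).solve()
    return u.value[:, 0]   # receding horizon: apply only the first optimal input
```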

  9. Evaluating the effect of Tikhonov regularization schemes on predictions in a variable‐density groundwater model

    Science.gov (United States)

    White, Jeremy T.; Langevin, Christian D.; Hughes, Joseph D.

    2010-01-01

    Calibration of highly-parameterized numerical models typically requires explicit Tikhonov-type regularization to stabilize the inversion process. This regularization can take the form of a preferred parameter values scheme or preferred relations between parameters, such as the preferred equality scheme. The resulting parameter distributions calibrate the model to a user-defined acceptable level of model-to-measurement misfit, and also minimize regularization penalties on the total objective function. To evaluate the potential impact of these two regularization schemes on model predictive ability, a dataset generated from a synthetic model was used to calibrate a highly-parameterized variable-density SEAWAT model. The key prediction is the length of time a synthetic pumping well will produce potable water. A bi-objective Pareto analysis was used to explicitly characterize the relation between two competing objective function components: measurement error and regularization error. Results of the Pareto analysis indicate that both types of regularization schemes affect the predictive ability of the calibrated model.
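
    A sketch of the composite objective that PEST-style Tikhonov regularization typically minimizes (our notation, not the paper's): a measurement-misfit term plus a weighted regularization penalty; the preferred-values and preferred-equality schemes differ only in how the penalty is built.

```latex
% Measurement misfit plus Tikhonov regularization penalty, weighted by \mu
\Phi(\mathbf{p}) = \|\mathbf{W}\bigl(\mathbf{d}_{\mathrm{obs}} - \mathbf{d}_{\mathrm{sim}}(\mathbf{p})\bigr)\|^{2}
                 \;+\; \mu\,\|\mathbf{R}\bigl(\mathbf{p} - \mathbf{p}_{\mathrm{pref}}\bigr)\|^{2}
% Preferred values:   \mathbf{R} = \mathbf{I} and \mathbf{p}_{\mathrm{pref}} holds prior parameter estimates
% Preferred equality: rows of \mathbf{R} difference pairs of parameters and \mathbf{p}_{\mathrm{pref}} = \mathbf{0}
```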

  10. Predictions of Taylor's power law, density dependence and pink noise from a neutrally modeled time series.

    Science.gov (United States)

    Keil, Petr; Herben, Tomás; Rosindell, James; Storch, David

    2010-07-07

    There has recently been increasing interest in neutral models of biodiversity and their ability to reproduce the patterns observed in nature, such as species abundance distributions. Here we investigate the ability of a neutral model to predict phenomena observed in single-population time series, a study complementary to most existing work that concentrates on snapshots in time of the whole community. We consider tests for density dependence, the dominant frequencies of population fluctuation (spectral density) and a relationship between the mean and variance of a fluctuating population (Taylor's power law). We simulated an archipelago model of a set of interconnected local communities with variable mortality rate, migration rate, speciation rate, size of local community and number of local communities. Our spectral analysis showed 'pink noise': a departure from a standard random walk dynamics in favor of the higher frequency fluctuations which is partly consistent with empirical data. We detected density dependence in local community time series but not in metacommunity time series. The slope of the Taylor's power law in the model was similar to the slopes observed in natural populations, but the fit to the power law was worse. Our observations of pink noise and density dependence can be attributed to the presence of an upper limit to community sizes and to the effect of migration which distorts temporal autocorrelation in local time series. We conclude that some of the phenomena observed in natural time series can emerge from neutral processes, as a result of random zero-sum birth, death and migration. This suggests the neutral model would be a parsimonious null model for future studies of time series data.
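
    Taylor's power law, tested in this record, states that the variance of population abundance scales as a power of its mean; the slope is normally estimated by a log-log regression across populations. A minimal sketch (the simulated data in the usage line are placeholders, not the authors' neutral-model output):

```python
import numpy as np

def taylor_power_law(series_list):
    """Fit log(variance) = log(a) + b*log(mean) across population time series."""
    means = np.array([np.mean(s) for s in series_list])
    variances = np.array([np.var(s, ddof=1) for s in series_list])
    b, log_a = np.polyfit(np.log(means), np.log(variances), 1)
    return np.exp(log_a), b   # (a, slope b); empirical slopes typically fall between 1 and 2

# Hypothetical usage with simulated local-community abundance time series:
# a, b = taylor_power_law([np.random.poisson(20, 200) for _ in range(50)])
```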

  11. A simple model to predict the biodiesel blend density as simultaneous function of blend percent and temperature.

    Science.gov (United States)

    Gaonkar, Narayan; Vaidya, R G

    2016-05-01

    A simple method to estimate the density of a biodiesel blend as a simultaneous function of temperature and volume percent of biodiesel is proposed. Employing Kay's mixing rule, we developed a model and theoretically investigated the density of different vegetable oil biodiesel blends as a simultaneous function of temperature and volume percent of biodiesel. A key advantage of the proposed model is that it requires only a single set of density values for the components of the biodiesel blend at any two different temperatures. We observe that the density of the blend decreases linearly with increasing temperature and increases with increasing volume percent of biodiesel. The low values of the standard error of estimate (SEE = 0.0003-0.0022) and absolute average deviation (AAD = 0.03-0.15 %) obtained with the proposed model indicate its predictive capability. The predicted values are in good agreement with recently available experimental data.
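
    A minimal sketch of the approach as the abstract describes it: each component density is taken as linear in temperature, anchored by two measured values, and the blend density follows from a Kay-type volume-fraction mixing rule. The function names, example numbers, and exact functional form are illustrative assumptions, not the paper's equations.

```python
def linear_density(T, T1, rho1, T2, rho2):
    """Component density assumed linear in temperature between two measured points."""
    slope = (rho2 - rho1) / (T2 - T1)
    return rho1 + slope * (T - T1)

def blend_density(T, vol_frac_bio, biodiesel_pts, diesel_pts):
    """Kay-type mixing rule: volume-fraction-weighted sum of component densities."""
    rho_b = linear_density(T, *biodiesel_pts)   # (T1, rho1, T2, rho2) for biodiesel
    rho_d = linear_density(T, *diesel_pts)      # (T1, rho1, T2, rho2) for diesel
    return vol_frac_bio * rho_b + (1.0 - vol_frac_bio) * rho_d

# Hypothetical example: B20 blend (20 % biodiesel by volume) at 40 degC, densities in g/cm3
# rho = blend_density(40.0, 0.20, (15.0, 0.885, 60.0, 0.852), (15.0, 0.840, 60.0, 0.806))
```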

  12. A Novel Creep-Fatigue Life Prediction Model for P92 Steel on the Basis of Cyclic Strain Energy Density

    Science.gov (United States)

    Ji, Dongmei; Ren, Jianxing; Zhang, Lai-Chang

    2016-09-01

    A novel creep-fatigue life prediction model was deduced based on an expression of the strain energy density in this study. In order to obtain the expression of the strain energy density, the load-controlled creep-fatigue (CF) tests of P92 steel at 873 K were carried out. Cyclic strain of P92 steel under CF load was divided into elastic strain, applying and unloading plastic strain, creep strain, and anelastic strain. Analysis of cyclic strain indicates that the damage process of P92 steel under CF load consists of three stages, similar to pure creep. According to the characteristics of the strains above, an expression was defined to describe the strain energy density for each cycle. The strain energy density at stable stage is inversely proportional to the total strain energy density dissipated by P92 steel. However, the total strain energy densities under different test conditions are proportional to the fatigue life. Therefore, the expression of the strain energy density at stable stage was chosen to predict the fatigue life. The CF experimental data on P92 steel were employed to verify the rationality of the novel model. The model obtained from the load-controlled CF test of P92 steel with short holding time could predict the fatigue life of P92 steel with long holding time.

  13. Electron-Ion Dynamics with Time-Dependent Density Functional Theory: Towards Predictive Solar Cell Modeling: Final Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Maitra, Neepa [Hunter College City University of New York, New York, NY (United States)

    2016-07-14

    This project investigates the accuracy of currently-used functionals in time-dependent density functional theory, which is today routinely used to predict and design materials and computationally model processes in solar energy conversion. The rigorously-based electron-ion dynamics method developed here sheds light on traditional methods and overcomes challenges those methods have. The fundamental research undertaken here is important for building reliable and practical methods for materials discovery. The ultimate goal is to use these tools for the computational design of new materials for solar cell devices of high efficiency.

  14. DWCox: A density-weighted Cox model for outlier-robust prediction of prostate cancer survival

    OpenAIRE

    Jinfeng Xiao; Sheng Wang; Jingbo Shang; Henry Lin; Doris Xin; Xiang Ren; Jiawei Han; Jian Peng

    2016-01-01

    Reliable predictions on the risk and survival time of prostate cancer patients based on their clinical records can help guide their treatment and provide hints about the disease mechanism. The Cox regression is currently a commonly accepted approach for such tasks in clinical applications. More complex methods, like ensemble approaches, have the potential of reaching better prediction accuracy at the cost of increased training difficulty and worse result interpretability. Better performance o...

  15. Model comparison on genomic predictions using high-density markers for different groups of bulls in the Nordic Holstein population.

    Science.gov (United States)

    Gao, H; Su, G; Janss, L; Zhang, Y; Lund, M S

    2013-07-01

    This study compared genomic predictions based on imputed high-density markers (~777,000) in the Nordic Holstein population using a genomic BLUP (GBLUP) model, 4 Bayesian exponential power models with different shape parameters (0.3, 0.5, 0.8, and 1.0) for the exponential power distribution, and a Bayesian mixture model (a mixture of 4 normal distributions). Direct genomic values (DGV) were estimated for milk yield, fat yield, protein yield, fertility, and mastitis, using deregressed proofs (DRP) as response variable. The validation animals were split into 4 groups according to their genetic relationship with the training population. Groupsmgs had both the sire and the maternal grandsire (MGS), Groupsire only had the sire, Groupmgs only had the MGS, and Groupnon had neither the sire nor the MGS in the training population. Reliability of DGV was measured as the squared correlation between DGV and DRP divided by the reliability of DRP for the bulls in validation data set. Unbiasedness of DGV was measured as the regression of DRP on DGV. The results indicated that DGV were more accurate and less biased for animals that were more related to the training population. In general, the Bayesian mixture model and the exponential power model with shape parameter of 0.30 led to higher reliability of DGV than did the other models. The differences between reliabilities of DGV from the Bayesian models and the GBLUP model were statistically significant for some traits. We observed a tendency that the superiority of the Bayesian models over the GBLUP model was more profound for the groups having weaker relationships with training population. Averaged over the 5 traits, the Bayesian mixture model improved the reliability of DGV by 2.0 percentage points for Groupsmgs, 2.7 percentage points for Groupsire, 3.3 percentage points for Groupmgs, and 4.3 percentage points for Groupnon compared with GBLUP. The results showed that a Bayesian model with intense shrinkage of the explanatory
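
    The two validation statistics defined in this record are straightforward to compute; a sketch with DGV, DRP, and DRP reliabilities supplied as arrays (the array names and the use of the mean DRP reliability are our assumptions):

```python
import numpy as np

def validation_stats(dgv, drp, rel_drp):
    """Reliability and unbiasedness of direct genomic values, as defined in the record."""
    r2 = np.corrcoef(dgv, drp)[0, 1] ** 2
    reliability = r2 / np.mean(rel_drp)   # squared correlation divided by reliability of DRP
    slope = np.polyfit(dgv, drp, 1)[0]    # regression of DRP on DGV; 1.0 indicates unbiased DGV
    return reliability, slope
```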

  16. Temperature Prediction Model for Bone Drilling Based on Density Distribution and In Vivo Experiments for Minimally Invasive Robotic Cochlear Implantation.

    Science.gov (United States)

    Feldmann, Arne; Anso, Juan; Bell, Brett; Williamson, Tom; Gavaghan, Kate; Gerber, Nicolas; Rohrbach, Helene; Weber, Stefan; Zysset, Philippe

    2016-05-01

    Surgical robots have been proposed ex vivo to drill precise holes in the temporal bone for minimally invasive cochlear implantation. The main risk of the procedure is damage of the facial nerve due to mechanical interaction or due to temperature elevation during the drilling process. To evaluate the thermal risk of the drilling process, a simplified model is proposed which aims to enable an assessment of risk posed to the facial nerve for a given set of constant process parameters for different mastoid bone densities. The model uses the bone density distribution along the drilling trajectory in the mastoid bone to calculate a time dependent heat production function at the tip of the drill bit. Using a time dependent moving point source Green's function, the heat equation can be solved at a certain point in space so that the resulting temperatures can be calculated over time. The model was calibrated and initially verified with in vivo temperature data. The data was collected in minimally invasive robotic drilling of 12 holes in four different sheep. The sheep were anesthetized and the temperature elevations were measured with a thermocouple which was inserted in a previously drilled hole next to the planned drilling trajectory. Bone density distributions were extracted from pre-operative CT data by averaging Hounsfield values over the drill bit diameter. Post-operative μCT data was used to verify the drilling accuracy of the trajectories. The comparison of measured and calculated temperatures shows a very good match for both heating and cooling phases. The average prediction error of the maximum temperature was less than 0.7 °C and the average root mean square error was approximately 0.5 °C. To analyze potential thermal damage, the model was used to calculate temperature profiles and cumulative equivalent minutes at 43 °C at a minimal distance to the facial nerve. For the selected drilling parameters, temperature elevation profiles and
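
    The moving point-source Green's function solution mentioned here has a standard form (our notation; the paper's calibration terms are not reproduced): the temperature rise at a fixed observation point is the convolution of the time-dependent heat production at the advancing drill tip with the free-space heat kernel.

```latex
% Temperature rise at observation point x for heat input q(\tau) released at the moving
% tip position x_s(\tau); \alpha = thermal diffusivity, \rho c = volumetric heat capacity
\Delta T(\mathbf{x},t) = \int_{0}^{t}
   \frac{q(\tau)}{\rho c\,\bigl[4\pi\alpha\,(t-\tau)\bigr]^{3/2}}
   \exp\!\left(-\frac{\lVert \mathbf{x}-\mathbf{x}_s(\tau)\rVert^{2}}{4\alpha\,(t-\tau)}\right)\,d\tau
```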

  17. ICC density predicts bacterial overgrowth in a rat model of post-infectious IBS

    Institute of Scientific and Technical Information of China (English)

    Jee, Sam-Ryong; Morales, Walter; Low, Kimberly; Chang, Christopher; Zhu, Amy; Pokkunuri, Venkata; Chatterjee, Soumya; Soffer, Edy; Conklin, Jeffrey L; Pimentel, Mark

    2010-01-01

    AIM: To investigate the interstitial cells of Cajal (ICC) number using a new rat model. METHODS: Sprague-Dawley rats were assigned to two groups. The first group received gavage with Campylobacter jejuni (C. jejuni) 81-176. The second group was gavaged with placebo. Three months after clearance of Campylobacter from the stool, precise segments of duodenum, jejunum, and ileum were ligated in self-contained loops of bowel that were preserved in anaerobic bags. Deep muscular plexus ICC (DMP-ICC) were quantified by two blind...

  18. Anaerobic microbial transformation of halogenated aromatics and fate prediction using electron density modeling.

    Science.gov (United States)

    Cooper, Myriel; Wagner, Anke; Wondrousch, Dominik; Sonntag, Frank; Sonnabend, Andrei; Brehm, Martin; Schüürmann, Gerrit; Adrian, Lorenz

    2015-05-19

    Halogenated homo- and heterocyclic aromatics including disinfectants, pesticides and pharmaceuticals raise concern as persistent and toxic contaminants with often unknown fate. Remediation strategies and natural attenuation in anaerobic environments often build on microbial reductive dehalogenation. Here we describe the transformation of halogenated anilines, benzonitriles, phenols, methoxylated, or hydroxylated benzoic acids, pyridines, thiophenes, furoic acids, and benzenes by Dehalococcoides mccartyi strain CBDB1 and environmental fate modeling of the dehalogenation pathways. The compounds were chosen based on structural considerations to investigate the influence of functional groups present in a multitude of commercially used halogenated aromatics. Experimentally obtained growth yields were 0.1 to 5 × 10^14 cells mol^-1 of halogen released (corresponding to 0.3-15.3 g protein mol^-1 halogen), and specific enzyme activities ranged from 4.5 to 87.4 nkat mg^-1 protein. Chlorinated electron-poor pyridines were not dechlorinated in contrast to electron-rich thiophenes. Three different partial charge models demonstrated that the regioselective removal of halogens is governed by the least negative partial charge of the halogen. Microbial reaction pathways combined with computational chemistry and pertinent literature findings on Co(I) chemistry suggest that halide expulsion during reductive dehalogenation is initiated through single electron transfer from B12Co(I) to the apical halogen site.

  19. Bayesian Prediction Model Based on Attribute Weighting and Kernel Density Estimations

    Directory of Open Access Journals (Sweden)

    Zhong-Liang Xiang

    2015-01-01

    Full Text Available Although the naïve Bayes learner has been proven to show reasonable performance in machine learning, it often suffers from a few problems in handling real-world data. The first problem is the conditional independence assumption; the second is the use of the frequency estimator. Therefore, we have proposed methods to solve these two problems revolving around naïve Bayes algorithms. By using an attribute weighting method, we have been able to handle the conditional independence assumption issue, whereas, for the case of the frequency estimator, we have found a way to weaken its negative effects through our proposed smooth kernel method. In this paper, we propose a compact Bayes model in which a smooth kernel augments weights on the likelihood estimation. We have also chosen an attribute weighting method that employs a mutual information metric to cooperate with the framework. Experiments have been conducted on UCI benchmark datasets, and the accuracy of our proposed learner has been compared with that of standard naïve Bayes. The experimental results demonstrate the effectiveness and efficiency of our proposed learning algorithm.
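
    A compact sketch of the two ingredients the abstract describes: kernel density estimates replacing frequency-based likelihoods, and per-attribute weights (e.g. derived from mutual information) entering as exponents on those likelihoods. This is an illustrative reading of the abstract for continuous features, not the authors' exact formulation.

```python
import numpy as np
from scipy.stats import gaussian_kde

class WeightedKDENaiveBayes:
    """Naive Bayes with attribute-weighted, KDE-smoothed likelihoods (continuous features)."""

    def fit(self, X, y, weights):
        self.classes_ = np.unique(y)
        self.weights_ = np.asarray(weights, dtype=float)        # one weight per attribute
        self.priors_ = {c: np.mean(y == c) for c in self.classes_}
        self.kdes_ = {c: [gaussian_kde(X[y == c, j]) for j in range(X.shape[1])]
                      for c in self.classes_}
        return self

    def predict(self, X):
        scores = []
        for c in self.classes_:
            # attribute weights act as exponents, i.e. they scale the log-likelihoods
            log_lik = sum(w * np.log(kde(X[:, j]) + 1e-300)
                          for j, (w, kde) in enumerate(zip(self.weights_, self.kdes_[c])))
            scores.append(np.log(self.priors_[c]) + log_lik)
        return self.classes_[np.argmax(np.vstack(scores), axis=0)]
```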

  1. Population Density Modeling Tool

    Science.gov (United States)

    2014-02-05

    Only report cover and documentation-page fragments were captured for this record (no abstract available): Population Density Modeling Tool, by Davy Andrew Michael Knott and David Burke, report NAWCADPAX/TR-2012/194, Maryland, 26 June 2012.

  2. Prediction of gas-phase thermodynamic properties for polychlorinated naphthalenes using G3X model chemistry and density functional theory.

    Science.gov (United States)

    Wang, Liming; Lv, Guowen

    2010-01-01

    The standard gas-phase enthalpies of formation of polychlorinated naphthalenes (PCNs) have been predicted using G3X model chemistry, density functional theory (DFT), and second-order Møller-Plesset (MP2) theory. Two isodesmic reactions are used for better prediction of formation enthalpies. The first (IR1) employs chlorobenzene as a reference species and the second (IR2) employs polychlorinated benzenes as reference species. Among congeners, PCNs with simultaneous Cl-substitutions at positions 1 and 8 or 4 and 5 are the least stable, where the strong repulsion between Cl-atoms leads to non-planar structures for a few PCNs. The potential energy curves for ring-wagging motions in 1,8- or 4,5-PCNs are also extremely flat in the vicinity of equilibrium conformations, leading to extremely low harmonic frequencies for the ring-wagging modes. The contributions of these ring-wagging modes to entropy, heat capacity, and thermal corrections have been calculated using the numerically evaluated energy levels. The PCN isomer patterns are discussed based on the calculated Gibbs free energies.

  3. Disagreement, Uncertainty and the True Predictive Density

    OpenAIRE

    Fabian Krüger; Ingmar Nolte

    2011-01-01

    This paper generalizes the discussion about disagreement versus uncertainty in macroeconomic survey data by emphasizing the importance of the (unknown) true predictive density. Using a forecast combination approach, we ask whether cross sections of survey point forecasts help to approximate the true predictive density. We find that although these cross-sections perform poorly individually, their inclusion into combined predictive densities can significantly improve upon densities relying sole...

  4. Density and molar volumes of imidazolium-based ionic liquid mixtures and prediction by the Jouyban-Acree model

    Science.gov (United States)

    Ghani, Noraini Abd; Sairi, Nor Asrina; Mat, Ahmad Nazeer Che; Khoubnasabjafari, Mehry; Jouyban, Abolghasem

    2016-11-01

    The densities of binary mixtures of the imidazolium-based ionic liquid 1-ethyl-3-methylimidazolium diethylphosphate with sulfolane were measured at atmospheric pressure. The experiments were performed at T = (293-343) K over the complete mole fraction range. Physical and thermodynamic properties such as molar volumes, V0, and excess molar volumes, VE, for these binary mixtures were derived from the experimental density data. The Jouyban-Acree model was used to correlate the physicochemical properties (PCPs) of the binary mixtures at various mole fractions and temperatures.
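
    For reference, the Jouyban-Acree model mentioned here is usually written as a log-linear mixing rule with an interaction series in the component fractions; the exact fraction definition and the number of fitted terms used in the paper may differ.

```latex
% Jouyban-Acree correlation for a property (here density \rho) of a binary mixture:
% f_1, f_2 are fractions of components 1 and 2, T is absolute temperature, A_j are fitted constants
\ln \rho_{m,T} = f_1 \ln \rho_{1,T} + f_2 \ln \rho_{2,T}
               + \frac{f_1 f_2}{T}\sum_{j=0}^{2} A_j\,(f_1 - f_2)^{j}
```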

  5. Development of TLSER model and QSAR model for predicting partition coefficients of hydrophobic organic chemicals between low density polyethylene film and water.

    Science.gov (United States)

    Liu, Huihui; Wei, Mengbi; Yang, Xianhai; Yin, Cen; He, Xiao

    2017-01-01

    Partition coefficients are vital parameters for accurately measuring chemical concentrations with passive sampling devices. Given the wide use of low density polyethylene (LDPE) film in passive sampling, we developed a theoretical linear solvation energy relationship (TLSER) model and a quantitative structure-activity relationship (QSAR) model for the prediction of the partition coefficient of chemicals between LDPE and water (Kpew). For chemicals with an octanol-water partition coefficient (log KOW) not exceeding 8, the developed model showed a satisfactory determination coefficient (R^2) and cross-validated coefficient (Q^2). In order to further explore the theoretical mechanisms involved in the partition process, a QSAR model with four descriptors (MLOGP (Moriguchi octanol-water partition coeff.), P_VSA_s_3 (P_VSA-like on I-state, bin 3), Hy (hydrophilic factor) and NssO (number of atoms of type ssO)) was established, and statistical analysis indicated that the model had satisfactory goodness-of-fit, robustness and predictive ability. For chemicals with log KOW > 8, a TLSER model with Vx and a QSAR model with MLOGP as descriptor were developed. This is the first paper to explore models for highly hydrophobic chemicals. The applicability domain of the models, characterized by the Euclidean distance-based method and Williams plot, covered a large number of structurally diverse chemicals, including nearly all common hydrophobic organic compounds. Additionally, through mechanistic interpretation, we explored the structural features that govern the partition behavior of chemicals between LDPE and water.

  6. Finite element model predicts current density distribution for clinical applications of tDCS and tACS

    Directory of Open Access Journals (Sweden)

    Toralf eNeuling

    2012-09-01

    Full Text Available Transcranial direct current stimulation (tDCS) has been applied in numerous scientific studies over the past decade. However, the possibility of applying tDCS in the therapy of neuropsychiatric disorders is still debated. While transcranial magnetic stimulation (TMS) has been approved for treatment of major depression in the United States by the Food and Drug Administration (FDA), tDCS is not as widely accepted. One of the criticisms against tDCS is the lack of spatial specificity. Focality is limited by the electrode size (35 cm2 is commonly used) and the bipolar arrangement. However, a current flow through the head directly from anode to cathode is an outdated view. Finite element (FE) models have recently been used to predict the exact current flow during tDCS. These simulations have demonstrated that the current flow depends on tissue shape and conductivity. To face the challenge of predicting the location, magnitude and direction of the current flow induced by tDCS and transcranial alternating current stimulation (tACS), we used a refined, realistic FE modeling approach. With respect to the literature on clinical tDCS and tACS, we analyzed two common setups for the location of the stimulation electrodes, which target the frontal lobe and the occipital lobe, respectively. We compared lateral and medial electrode configurations with regard to their usability. We were able to demonstrate that the lateral configurations yielded more focused stimulation areas as well as higher current intensities in the target areas. The high resolution of our simulation allows one to combine the modeled current flow with knowledge of neuronal orientation to predict the consequences of tDCS and tACS. Our results not only offer a basis for a deeper understanding of the stimulation sites currently in use for clinical applications but also allow a better interpretation of observed effects.
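
    The physics underlying such FE simulations is the quasi-static volume-conductor problem, stated here in generic form rather than as the authors' specific implementation: the potential satisfies a generalized Laplace equation with tissue-specific conductivities, and the current density follows from its gradient.

```latex
% Quasi-static volume conduction in the head: \sigma(\mathbf{x}) is the (possibly anisotropic)
% tissue conductivity, \phi the electric potential, \mathbf{J} the current density
\nabla \cdot \bigl(\sigma(\mathbf{x})\,\nabla \phi(\mathbf{x})\bigr) = 0 \quad \text{in the head volume},
\qquad
\mathbf{J} = -\sigma\,\nabla\phi
% Boundary conditions: injected current density prescribed under the electrodes,
% zero normal current elsewhere on the scalp
```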

  7. Relative contributions of strain-dependent permeability and fixed charged density of proteoglycans in predicting cervical disc biomechanics: a poroelastic C5-C6 finite element model study.

    Science.gov (United States)

    Hussain, Mozammil; Natarajan, Raghu N; Chaudhary, Gulafsha; An, Howard S; Andersson, Gunnar B J

    2011-05-01

    Disc swelling pressure (P(swell)) facilitated by fixed charged density (FCD) of proteoglycans (P(fcd)) and strain-dependent permeability (P(strain)) are of critical significance in the physiological functioning of discs. FCD of proteoglycans prevents any excessive matrix deformation by tissue stiffening, whereas strain-dependent permeability limits the rate of stress transfer from fluid to solid skeleton. To date, studies involving the modeling of FCD of proteoglycans and strain-dependent permeability have not been reported for the cervical discs. The current study objective is to compare the relative contributions of strain-dependent permeability and FCD of proteoglycans in predicting cervical disc biomechanics. Three-dimensional finite element models of a C5-C6 segment with three different disc compositions were analyzed: an SPFP model (strain-dependent permeability and FCD of proteoglycans), an SP model (strain-dependent permeability alone), and an FP model (FCD of proteoglycans alone). The outcomes of the current study suggest that the relative contributions of strain-dependent permeability and FCD of proteoglycans were almost comparable in predicting the physiological behavior of the cervical discs under moment loads. However, under compression, strain-dependent permeability better predicted the in vivo disc response than that of the FCD of proteoglycans. Unlike the FP model (least stiff) in compression, motion behavior of the three models did not vary much from each other and agreed well within the standard deviations of the corresponding in vivo published data. Flexion was recorded with maximum P(fcd) and P(strain), whereas minimum values were found in extension. The study data enhance the understanding of the roles played by the FCD of proteoglycans and strain-dependent permeability and porosity in determining disc tissue swelling behavior. Degenerative changes involving strain-dependent permeability and/or loss of FCD of proteoglycans can further be

  8. Thermospheric mass density variations during geomagnetic storms and a prediction model based on the merging electric field

    NARCIS (Netherlands)

    Liu, R.; Lühr, H.; Doornbos, E.; Ma, S.Y.

    2010-01-01

    With the help of four years (2002–2005) of CHAMP accelerometer data we have investigated the dependence of low and mid latitude thermospheric density on the merging electric field, Em, during major magnetic storms. Altogether 30 intensive storm events (Dstmin <−100 nT) are chosen for a statistical s

  9. Predicting grizzly bear density in western North America.

    Directory of Open Access Journals (Sweden)

    Garth Mowat

    Full Text Available Conservation of grizzly bears (Ursus arctos) is often controversial and the disagreement often is focused on the estimates of density used to calculate allowable kill. Many recent estimates of grizzly bear density are now available but field-based estimates will never be available for more than a small portion of hunted populations. Current methods of predicting density in areas of management interest are subjective and untested. Objective methods have been proposed, but these statistical models are so dependent on results from individual study areas that the models do not generalize well. We built regression models to relate grizzly bear density to ultimate measures of ecosystem productivity and mortality for interior and coastal ecosystems in North America. We used 90 measures of grizzly bear density in interior ecosystems, of which 14 were currently known to be unoccupied by grizzly bears. In coastal areas, we used 17 measures of density including 2 unoccupied areas. Our best model for coastal areas included a negative relationship with tree cover and positive relationships with the proportion of salmon in the diet and topographic ruggedness, which was correlated with precipitation. Our best interior model included 3 variables that indexed terrestrial productivity, 1 describing vegetation cover, 2 indices of human use of the landscape and, an index of topographic ruggedness. We used our models to predict current population sizes across Canada and present these as alternatives to current population estimates. Our models predict fewer grizzly bears in British Columbia but more bears in Canada than in the latest status review. These predictions can be used to assess population status, set limits for total human-caused mortality, and for conservation planning, but because our predictions are static, they cannot be used to assess population trend.

  10. Predicting grizzly bear density in western North America.

    Science.gov (United States)

    Mowat, Garth; Heard, Douglas C; Schwarz, Carl J

    2013-01-01

    Conservation of grizzly bears (Ursus arctos) is often controversial and the disagreement often is focused on the estimates of density used to calculate allowable kill. Many recent estimates of grizzly bear density are now available but field-based estimates will never be available for more than a small portion of hunted populations. Current methods of predicting density in areas of management interest are subjective and untested. Objective methods have been proposed, but these statistical models are so dependent on results from individual study areas that the models do not generalize well. We built regression models to relate grizzly bear density to ultimate measures of ecosystem productivity and mortality for interior and coastal ecosystems in North America. We used 90 measures of grizzly bear density in interior ecosystems, of which 14 were currently known to be unoccupied by grizzly bears. In coastal areas, we used 17 measures of density including 2 unoccupied areas. Our best model for coastal areas included a negative relationship with tree cover and positive relationships with the proportion of salmon in the diet and topographic ruggedness, which was correlated with precipitation. Our best interior model included 3 variables that indexed terrestrial productivity, 1 describing vegetation cover, 2 indices of human use of the landscape and, an index of topographic ruggedness. We used our models to predict current population sizes across Canada and present these as alternatives to current population estimates. Our models predict fewer grizzly bears in British Columbia but more bears in Canada than in the latest status review. These predictions can be used to assess population status, set limits for total human-caused mortality, and for conservation planning, but because our predictions are static, they cannot be used to assess population trend.

  11. Prediction of bending moment resistance of screw connected joints in plywood members using regression models and compare with that commercial medium density fiberboard (MDF and particleboard

    Directory of Open Access Journals (Sweden)

    Sadegh Maleki

    2014-11-01

    Full Text Available The study aimed at predicting the bending moment resistance of screw (coarse and fine thread) joints in plywood members using regression models. The plywood member thickness was 19 mm, and the results were compared with medium density fiberboard (MDF) and particleboard of 18 mm thickness. Two types of screws were used: coarse and fine thread drywall screws with nominal diameters of 6, 8 and 10 mm and lengths of 3.5, 4 and 5 cm, respectively, and sheet metal screws with diameters of 8 and 10 mm and a length of 4 cm. The results showed that the bending moment resistance of the joints increased with increasing screw diameter and penetration depth. Screw length was found to have a larger influence on bending moment resistance than screw diameter. Bending moment resistance with coarse thread drywall screws was higher than with fine thread drywall screws. The highest bending moment resistance (71.76 N.m) was observed in joints made with coarse screws of 5 mm diameter and 28 mm penetration depth. The lowest bending moment resistance (12.08 N.m) was observed in joints with fine screws of 3.5 mm diameter and 9 mm penetration. Furthermore, bending moment resistance in plywood was higher than in medium density fiberboard (MDF) and particleboard. Finally, it was found that the ultimate bending moment resistance of a plywood joint can be predicted by the formulas Wc = 0.189×D^0.726×P^0.577 for coarse thread drywall screws and Wf = 0.086×D^0.942×P^0.704 for fine ones, as functions of diameter (D) and penetration depth (P). The analysis of variance of the experimental and predicted data showed that the developed models provide a fair approximation of actual experimental measurements.

  12. Referenceless Prediction of Perceptual Fog Density and Perceptual Image Defogging.

    Science.gov (United States)

    Choi, Lark Kwon; You, Jaehee; Bovik, Alan Conrad

    2015-11-01

    We propose a referenceless perceptual fog density prediction model based on natural scene statistics (NSS) and fog aware statistical features. The proposed model, called Fog Aware Density Evaluator (FADE), predicts the visibility of a foggy scene from a single image without reference to a corresponding fog-free image, without dependence on salient objects in a scene, without side geographical camera information, without estimating a depth-dependent transmission map, and without training on human-rated judgments. FADE only makes use of measurable deviations from statistical regularities observed in natural foggy and fog-free images. Fog aware statistical features that define the perceptual fog density index derive from a space domain NSS model and the observed characteristics of foggy images. FADE not only predicts perceptual fog density for the entire image, but also provides a local fog density index for each patch. The predicted fog density using FADE correlates well with human judgments of fog density taken in a subjective study on a large foggy image database. As applications, FADE not only accurately assesses the performance of defogging algorithms designed to enhance the visibility of foggy images, but also is well suited for image defogging. A new FADE-based referenceless perceptual image defogging, dubbed DEnsity of Fog Assessment-based DEfogger (DEFADE) achieves better results for darker, denser foggy images as well as on standard foggy images than the state of the art defogging methods. A software release of FADE and DEFADE is available online for public use: http://live.ece.utexas.edu/research/fog/index.html.

  13. Phalangeal bone mineral density predicts incident fractures

    DEFF Research Database (Denmark)

    Friis-Holmberg, Teresa; Brixen, Kim; Rubin, Katrine Hass

    2012-01-01

    This prospective study investigates the use of phalangeal bone mineral density (BMD) in predicting fractures in a cohort (15,542) who underwent a BMD scan. In both women and men, a decrease in BMD was associated with an increased risk of fracture when adjusted for age and prevalent fractures. PURPOSE: The aim of this study was to evaluate the ability of a compact and portable scanner using radiographic absorptiometry (RA) to predict major osteoporotic fractures. METHODS: This prospective study included a cohort of 15,542 men and women aged 18-95 years, who underwent a BMD scan in the Danish Health Examination Survey 2007-2008. BMD at the middle phalanges of the second, third and fourth digits of the non-dominant hand was measured using RA (Alara MetriScan®). These data were merged with information on incident fractures retrieved from the Danish National Patient Registry comprising the International

  14. Model comparison on genomic predictions using high-density markers for different groups of bulls in the Nordic Holstein population

    DEFF Research Database (Denmark)

    Gao, Hongding; Su, Guosheng; Janss, Luc

    2013-01-01

    that the superiority of the Bayesian models over the GBLUP model was more profound for the groups having weaker relationships with training population. Averaged over the 5 traits, the Bayesian mixture model improved the reliability of DGV by 2.0 percentage points for Groupsmgs, 2.7 percentage points for Groupsire, 3...... relationship with the training population. Groupsmgs had both the sire and the maternal grandsire (MGS), Groupsire only had the sire, Groupmgs only had the MGS, and Groupnon had neither the sire nor the MGS in the training population. Reliability of DGV was measured as the squared correlation between DGV...... and DRP divided by the reliability of DRP for the bulls in validation data set. Unbiasedness of DGV was measured as the regression of DRP on DGV. The results indicated that DGV were more accurate and less biased for animals that were more related to the training population. In general, the Bayesian...

  15. Computational lipidology: predicting lipoprotein density profiles in human blood plasma.

    Directory of Open Access Journals (Sweden)

    Katrin Hübner

    2008-05-01

    Full Text Available Monitoring cholesterol levels is strongly recommended to identify patients at risk for myocardial infarction. However, clinical markers beyond "bad" and "good" cholesterol are needed to precisely predict individual lipid disorders. Our work contributes to this aim by bringing together experiment and theory. We developed a novel computer-based model of the human plasma lipoprotein metabolism in order to simulate the blood lipid levels in high resolution. Instead of focusing on a few conventionally used predefined lipoprotein density classes (LDL, HDL), we consider the entire protein and lipid composition spectrum of individual lipoprotein complexes. Subsequently, their distribution over density (which equals the lipoprotein profile) is calculated. As our main results, we (i) successfully reproduced clinically measured lipoprotein profiles of healthy subjects; (ii) assigned lipoproteins to narrow density classes, named high-resolution density sub-fractions (hrDS), revealing heterogeneous lipoprotein distributions within the major lipoprotein classes; and (iii) present model-based predictions of changes in the lipoprotein distribution elicited by disorders in underlying molecular processes. In its present state, the model offers a platform for many future applications aimed at understanding the reasons for inter-individual variability, identifying new sub-fractions of potential clinical relevance and a patient-oriented diagnosis of the potential molecular causes for individual dyslipidemia.

  16. Modeling density segregation in granular flow

    Science.gov (United States)

    Xiao, Hongyi; Lueptow, Richard; Umbanhowar, Paul

    2015-11-01

    A recently developed continuum-based model accurately predicts segregation in flows of granular mixtures varying in particle size by considering the interplay of advection, diffusion and segregation. In this research, we extend the domain of the model to include density driven segregation. Discrete Element Method (DEM) simulations of density bidisperse flows of mono-sized particles in a quasi-2D bounded heap were performed to determine the dependence of the density driven segregation velocity on local shear rate, particle concentration, and a segregation length which scales with the particle size and the logarithm of the density ratio. With these inputs, the model yields theoretical predictions of density segregation patterns that quantitatively match the DEM simulations over a range of density ratios (1.11-3.33) and flow rates (19.2-113.6 cm3/s). Matching experiments with various combinations of glass, steel and ceramic particles were also performed which reproduced the segregation patterns obtained in both the simulations and the theory.
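
    The continuum framework referenced here is typically an advection-diffusion-segregation transport equation; a schematic form consistent with the abstract (segregation velocity set by the local shear rate, the local concentration, and a segregation length scaling with particle size and the logarithm of the density ratio), written in our notation:

```latex
% Transport of species concentration c_i in the flowing layer (z normal to the free surface)
\frac{\partial c_i}{\partial t} + \nabla\!\cdot(\mathbf{u}\,c_i)
  + \frac{\partial}{\partial z}\bigl(w_{p,i}\,c_i\bigr)
  = \frac{\partial}{\partial z}\!\left(D\,\frac{\partial c_i}{\partial z}\right),
\qquad
w_{p,i} = S_D\,\dot{\gamma}\,(1 - c_i),
\qquad
S_D \propto d\,\ln\!\left(\rho_{\mathrm{heavy}}/\rho_{\mathrm{light}}\right)
```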

  17. FleaTickRisk: a meteorological model developed to monitor and predict the activity and density of three tick species and the cat flea in Europe

    Directory of Open Access Journals (Sweden)

    Frédéric Beugnet

    2009-11-01

    Full Text Available Mathematical modelling is quite a recent tool in epidemiology. Geographical information systems (GIS) combined with remote sensing (data collection and analysis) provide valuable models, but the integration of climatological models in parasitology and epidemiology is less common. The aim of our model, called "FleaTickRisk", was to use meteorological data and forecasts to monitor the activity and density of some arthropods. Our parasitological model uses the Weather Research and Forecasting (WRF) meteorological model integrating biological parameters. The WRF model provides a temperature and humidity picture four times a day (at 6:00, 12:00, 18:00 and 24:00 hours). Its geographical resolution is 27 x 27 km over Europe (the area between longitudes 10.5° W and 30° E and latitudes 37.75° N and 62° N). The model also provides weekly forecasts. Past data were compared and revalidated using current meteorological data generated by ground stations and weather satellites. The WRF model also includes geographical information stemming from United States Geophysical Survey biotope maps with a 30'' spatial resolution (approximately 900 x 900 m). WRF takes into account specific climatic conditions due to valleys, altitudes, lakes and wind specificities. The biological parameters of Ixodes ricinus, Dermacentor reticulatus, Rhipicephalus sanguineus and Ctenocephalides felis felis were transformed into a matrix of activity. This activity matrix is expressed as a percentage, ranging from 0 to 100, for each interval of temperature x humidity. The activity of these arthropods is defined by their ability to infest hosts, take blood meals and reproduce. For each arthropod, the matrix was calculated using existing data collected under optimal temperature and humidity conditions, as well as the timing of the life cycle. The mathematical model integrating both the WRF model (meteorological data + geographical data) and the biological matrix provides two indexes: an

  18. Prediction of bone density around orthopedic implants delivering bisphosphonate.

    Science.gov (United States)

    Stadelmann, Vincent A; Terrier, Alexandre; Gauthier, O; Bouler, J-M; Pioletti, Dominique P

    2009-06-19

    The fixation of an orthopedic implant depends strongly upon its initial stability. Peri-implant bone may resorb shortly after the surgery. This resorption is directly followed by new bone formation and implants fixation strengthening, the so-called secondary fixation. If the initial stability is not reached, the resorption continues and the implant fixation weakens, which leads to implant loosening. Studies with rats and dogs have shown that a solution to prevent peri-implant resorption is to deliver bisphosphonate from the implant surface. The aims of the study were, first, to develop a model of bone remodeling around an implant delivering bisphosphonate, second, to predict the bisphosphonate dose that would induce the maximal peri-implant bone density, and third to verify in vivo that peri-implant bone density is maximal with the calculated dose. The model consists of a bone remodeling equation and a drug diffusion equation. The change in bone density is driven by a mechanical stimulus and a drug stimulus. The drug stimulus function and the other numerical parameters were identified from experimental data. The model predicted that a dose of 0.3 microg of zoledronate on the implant would induce a maximal bone density. Implants with 0.3 microg of zoledronate were then implanted in rat femurs for 3, 6 and 9 weeks. We measured that peri-implant bone density was 4% greater with the calculated dose compared to the dose empirically described as best. The approach presented in this paper could be used in the design and analysis processes of experiments in local delivery of drug such as bisphosphonate.
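
    The abstract states that the model couples a bone-remodeling equation, driven by a mechanical stimulus and a drug stimulus, with a drug-diffusion equation. A schematic rendering of that coupling is sketched below; all symbols and functional forms are our assumptions, not the authors' equations.

```latex
% Peri-implant bone density \rho evolves under a mechanical stimulus S and a drug stimulus f(c)
\frac{d\rho}{dt} = B\,\bigl(S(\mathbf{x},t) - S^{*}\bigr) + k_d\,f\!\bigl(c(\mathbf{x},t)\bigr)
% Bisphosphonate concentration c released from the implant surface diffuses and is eliminated
\frac{\partial c}{\partial t} = \nabla\!\cdot\!\bigl(D\,\nabla c\bigr) - k_e\,c
```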

  19. Evaluating Predictive Densities of US Output Growth and Inflation in a Large Macroeconomic Data Set

    OpenAIRE

    Rossi, Barbara; Sekhposyan, Tatevik

    2013-01-01

    We evaluate conditional predictive densities for U.S. output growth and inflation using a number of commonly used forecasting models that rely on a large number of macroeconomic predictors. More specifically, we evaluate how well conditional predictive densities based on the commonly used normality assumption fit actual realizations out-of-sample. Our focus on predictive densities acknowledges the possibility that, although some predictors can improve or deteriorate point forecasts, they migh...

  20. Wind power prediction models

    Science.gov (United States)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.
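
    The physical basis for such predictions is the cubic dependence of wind power on wind speed; the long-term average power then follows by integrating over the site's wind-speed distribution (often a fitted Weibull density). In generic form, with the power coefficient treated as constant:

```latex
% Instantaneous power extracted by a turbine of swept area A at air density \rho and wind speed v
P(v) = \tfrac{1}{2}\,\rho\,A\,C_p\,v^{3}
% Long-term average power from the site's wind-speed probability density f(v)
\bar{P} = \tfrac{1}{2}\,\rho\,A\,C_p \int_{0}^{\infty} v^{3}\,f(v)\,dv
```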

  1. Predicting the neutralino relic density in the MSSM more precisely

    CERN Document Server

    Harz, Julia; Klasen, Michael; Kovařík, Karol; Steppeler, Patrick

    2016-01-01

    Since the dark matter relic density is a powerful observable for constraining models of new physics, recent experimental progress calls for more precise theoretical predictions. On the particle physics side, improvements are to be made in the calculation of the (co)annihilation cross-section of the dark matter particle. We present the project DM@NLO, which aims at calculating the neutralino (co)annihilation cross-section in the MSSM including radiative corrections in QCD. In the present document, we briefly review selected results for different (co)annihilation processes. We then discuss the estimation of the associated theory uncertainty obtained by varying the renormalization scale. Finally, perspectives are discussed.

  2. True Density Prediction of Garlic Slices Dehydrated by Convection.

    Science.gov (United States)

    López-Ortiz, Anabel; Rodríguez-Ramírez, Juan; Méndez-Lagunas, Lilia

    2016-01-01

    Physicochemical parameters with constant values are employed in mass-heat transfer modeling of the air drying process. However, structural properties are not constant under drying conditions. Empirical, semi-theoretical, and theoretical models have been proposed to describe true density (ρp). These models only consider ideal behavior and assume a linear relationship between ρp and moisture content (X); nevertheless, some materials exhibit nonlinear behavior of ρp as a function of X, with a tendency toward being concave-down. This behavior, which can be observed in garlic and carrots, has been difficult to model mathematically. This work proposes a semi-theoretical model for predicting ρp values, taking into account the concave-down behavior that occurs at the end of the drying process. The model includes the dependency of the dry solid density (ρs) on external conditions (air drying temperature, Ta), the inside temperature of the garlic slices (Ti), and the moisture content (X) obtained from experimental data on the drying process. Calculations show that ρs is not a linear function of Ta, X, and Ti. An empirical correlation for ρs is proposed as a function of Ti and X. The adjustment equation for Ti is proposed as a function of Ta and X. The proposed model for ρp was validated using experimental data on the sliced garlic and was compared with theoretical and empirical models available in the scientific literature. Deviation between the experimental and predicted data was determined. An explanation of the nonlinear behavior of ρs and ρp as functions of X, taking into account second-order phase changes, is then presented. © 2015 Institute of Food Technologists®

  3. Dynamic Predictive Density Combinations for Large Data Sets in Economics and Finance

    NARCIS (Netherlands)

    R. Casarin (Roberto); S. Grassi (Stefano); F. Ravazzolo (Francesco); H.K. van Dijk (Herman)

    2015-01-01

    Abstract: A Bayesian nonparametric predictive model is introduced to construct time-varying weighted combinations of a large set of predictive densities. A clustering mechanism allocates these densities into a smaller number of mutually exclusive subsets. Using properties of Aitc

  4. Predictive models in urology.

    Science.gov (United States)

    Cestari, Andrea

    2013-01-01

    Predictive modeling is emerging as an important knowledge-based technology in healthcare. The interest in the use of predictive modeling reflects advances on different fronts, such as the availability of health information from increasingly complex databases and electronic health records, a better understanding of causal or statistical predictors of health, disease processes and multifactorial models of ill-health, and developments in nonlinear computer models using artificial intelligence or neural networks. These new computer-based forms of modeling are increasingly able to establish technical credibility in clinical contexts. The current state of knowledge is still quite young in understanding the likely future direction of how this so-called 'machine intelligence' will evolve, and therefore how current, relatively sophisticated predictive models will evolve in response to improvements in technology, which is advancing along a wide front. Predictive models in urology are gaining progressive popularity not only for academic and scientific purposes but also in clinical practice, with the introduction of several nomograms dealing with the main fields of onco-urology.

  5. Excess seawater nutrients, enlarged algal symbiont densities and bleaching sensitive reef locations: 2. A regional-scale predictive model for the Great Barrier Reef, Australia.

    Science.gov (United States)

    Wooldridge, Scott A; Heron, Scott F; Brodie, Jon E; Done, Terence J; Masiri, Itsara; Hinrichs, Saskia

    2017-01-15

    A spatial risk assessment model is developed for the Great Barrier Reef (GBR, Australia) that helps identify reef locations at higher or lower risk of coral bleaching in summer heat-wave conditions. The model confirms the considerable benefit of discriminating nutrient-enriched areas that contain corals with enlarged (suboptimal) symbiont densities for the purpose of identifying bleaching-sensitive reef locations. The benefit of the new system-level understanding is showcased in terms of: (i) improving early-warning forecasts of summer bleaching risk, (ii) explaining historical bleaching patterns, (iii) testing the bleaching-resistant quality of the current marine protected area (MPA) network, (iv) identifying routinely monitored coral health attributes, such as the tissue energy reserves and skeletal growth characteristics (viz. density and extension rates), that correlate with bleaching-resistant reef locations, and (v) targeting region-specific water quality improvement strategies that may increase reef-scale coral health and bleaching resistance. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  6. Escherichia coli bacteria density in relation to turbidity, streamflow characteristics, and season in the Chattahoochee River near Atlanta, Georgia, October 2000 through September 2008—Description, statistical analysis, and predictive modeling

    Science.gov (United States)

    Lawrence, Stephen J.

    2012-01-01

    Water-based recreation—such as rafting, canoeing, and fishing—is popular among visitors to the Chattahoochee River National Recreation Area (CRNRA) in north Georgia. The CRNRA is a 48-mile reach of the Chattahoochee River upstream from Atlanta, Georgia, managed by the National Park Service (NPS). Historically, high densities of fecal-indicator bacteria have been documented in the Chattahoochee River and its tributaries at levels that commonly exceeded Georgia water-quality standards. In October 2000, the NPS partnered with the U.S. Geological Survey (USGS), State and local agencies, and non-governmental organizations to monitor Escherichia coli bacteria (E. coli) density and develop a system to alert river users when E. coli densities exceeded the U.S. Environmental Protection Agency (USEPA) single-sample beach criterion of 235 colonies (most probable number) per 100 milliliters (MPN/100 mL) of water. This program, called BacteriALERT, monitors E. coli density, turbidity, and water temperature at two sites on the Chattahoochee River upstream from Atlanta, Georgia. This report summarizes E. coli bacteria density and turbidity values in water samples collected between 2000 and 2008 as part of the BacteriALERT program; describes the relations between E. coli density and turbidity, streamflow characteristics, and season; and describes the regression analyses used to develop predictive models that estimate E. coli density in real time at both sampling sites.
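    A hedged sketch of the type of regression used in such programs: regress log10(E. coli density) on log10(turbidity) and a season indicator by ordinary least squares. The data and coefficients below are synthetic; the actual USGS models differ in form and detail.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300
log_turb = rng.normal(1.2, 0.4, n)                 # log10 turbidity (FNU), synthetic
summer = rng.integers(0, 2, n)                     # 1 = warm season, synthetic
log_ecoli = 0.5 + 1.1 * log_turb + 0.3 * summer + rng.normal(0, 0.3, n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), log_turb, summer])
beta, *_ = np.linalg.lstsq(X, log_ecoli, rcond=None)

print("coefficients (intercept, log10 turbidity, season):", np.round(beta, 3))
est = 10 ** (beta[0] + beta[1] * np.log10(25) + beta[2])  # turbidity 25 FNU, summer
print("estimated density:", round(float(est), 1), "MPN/100 mL")
```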

  7. A Trade Study of Thermosphere Empirical Neutral Density Models

    Science.gov (United States)

    2014-08-01

    …into the ram direction, and m is the satellite mass. The velocity v equals the satellite velocity in the corotating Earth frame … drag force. In a trade study we have investigated a methodology to assess the performance of neutral density models in predicting orbit against a … assess overall errors in orbit prediction expected from empirical density models. They have also been adapted in an analysis tool, Satellite Orbital…
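    For context, the fragmentary excerpt above refers to the standard drag acceleration through which thermospheric neutral density enters orbit prediction; a common textbook form (not necessarily the report's exact notation) is

\[
\vec{a}_{\mathrm{drag}} \;=\; -\tfrac{1}{2}\,\frac{C_D A}{m}\,\rho\,\lvert \vec{v}_{\mathrm{rel}}\rvert\,\vec{v}_{\mathrm{rel}},
\qquad
\vec{v}_{\mathrm{rel}} \;=\; \vec{v} - \vec{\omega}_{\oplus} \times \vec{r},
\]

    where C_D is the drag coefficient, A the cross-sectional area into the ram direction, m the satellite mass, ρ the neutral density, and v_rel the satellite velocity relative to the corotating atmosphere.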

  8. MODEL PREDICTIVE CONTROL FUNDAMENTALS

    African Journals Online (AJOL)

    2012-07-02

    Jul 2, 2012 ... paper, we will present an introduction to the theory and application of MPC with Matlab codes written to ... model predictive control, linear systems, discrete-time systems, ... and then compute very rapidly for this open-loop con…
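    The tutorial's examples are in Matlab; the following is a hedged Python sketch of the same basic idea, an unconstrained receding-horizon MPC for a discrete-time linear system. The system matrices, weights and horizon are illustrative assumptions, and input/state constraints are omitted for brevity.

```python
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # double integrator, dt = 0.1 s (assumed)
B = np.array([[0.005], [0.1]])
Q = np.diag([10.0, 1.0])                  # state weight (assumed)
R = np.array([[0.1]])                     # input weight (assumed)
N = 15                                    # prediction horizon

nx, nu = B.shape
# Prediction matrices: X = Phi x0 + Gamma U over the horizon.
Phi = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
Gamma = np.zeros((N * nx, N * nu))
for i in range(N):
    for j in range(i + 1):
        Gamma[i*nx:(i+1)*nx, j*nu:(j+1)*nu] = np.linalg.matrix_power(A, i - j) @ B

Qbar = np.kron(np.eye(N), Q)
Rbar = np.kron(np.eye(N), R)
H = Gamma.T @ Qbar @ Gamma + Rbar         # Hessian of the quadratic cost in U

def mpc_control(x):
    """Solve the open-loop problem over the horizon and return the first input."""
    f = Gamma.T @ Qbar @ Phi @ x
    U = np.linalg.solve(H, -f)
    return U[:nu]

x = np.array([1.0, 0.0])                  # initial state: 1 m offset, at rest
for k in range(50):                       # receding-horizon loop
    u = mpc_control(x)
    x = A @ x + B @ u
print("state after 50 steps:", np.round(x, 4))
```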

  9. RHOCUBE: 3D density distributions modeling code

    Science.gov (United States)

    Nikutta, Robert; Agliozzo, Claudia

    2016-11-01

    RHOCUBE models 3D density distributions on a discrete Cartesian grid and their integrated 2D maps. It can be used for a range of applications, including modeling the electron number density in LBV shells and computing the emission measure. The RHOCUBE Python package provides several 3D density distributions, including a powerlaw shell, truncated Gaussian shell, constant-density torus, dual cones, and spiralling helical tubes, and can accept additional distributions. RHOCUBE provides convenient methods for shifts and rotations in 3D, and if necessary, an arbitrary number of density distributions can be combined into the same model cube and the integration ∫ dz performed through the joint density field.
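    A minimal sketch of the RHOCUBE idea (not its actual API): fill a 3-D Cartesian grid with a truncated Gaussian shell and integrate along z to obtain a 2-D map. Grid size, shell radius and thickness are assumed values.

```python
import numpy as np

n = 101                                   # grid cells per axis (assumed)
coords = np.linspace(-1.0, 1.0, n)
dz = coords[1] - coords[0]
X, Y, Z = np.meshgrid(coords, coords, coords, indexing="ij")
r = np.sqrt(X**2 + Y**2 + Z**2)

r0, sigma = 0.6, 0.08                     # shell radius and thickness (assumed)
rho = np.exp(-0.5 * ((r - r0) / sigma) ** 2)
rho[r > 0.9] = 0.0                        # truncate the shell

column_map = rho.sum(axis=2) * dz         # numerical integral over z
print("3-D cube shape:", rho.shape, "-> 2-D map shape:", column_map.shape)
print("peak column density:", round(float(column_map.max()), 3))
```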

  10. Nominal model predictive control

    OpenAIRE

    Grüne, Lars

    2013-01-01

    5 p., to appear in Encyclopedia of Systems and Control, Tariq Samad, John Baillieul (eds.); International audience; Model Predictive Control is a controller design method which synthesizes a sampled data feedback controller from the iterative solution of open loop optimal control problems. We describe the basic functionality of MPC controllers, their properties regarding feasibility, stability and performance and the assumptions needed in order to rigorously ensure these properties in a nomina...

  11. Nominal Model Predictive Control

    OpenAIRE

    Grüne, Lars

    2014-01-01

    5 p., to appear in Encyclopedia of Systems and Control, Tariq Samad, John Baillieul (eds.); International audience; Model Predictive Control is a controller design method which synthesizes a sampled data feedback controller from the iterative solution of open loop optimal control problems. We describe the basic functionality of MPC controllers, their properties regarding feasibility, stability and performance and the assumptions needed in order to rigorously ensure these properties in a nomina...

  12. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines ... the possibilities w.r.t. different numerical weather predictions actually available to the project ...

  13. Predictive Surface Complexation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences]

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  14. Combining Predictive Densities using Nonlinear Filtering with Applications to US Economics Data

    NARCIS (Netherlands)

    M. Billio (Monica); R. Casarin (Roberto); F. Ravazzolo (Francesco); H.K. van Dijk (Herman)

    2011-01-01

    We propose a multivariate combination approach to prediction based on a distributional state space representation of the weights belonging to a set of Bayesian predictive densities which have been obtained from alternative models. Several specifications of multivariate time-varying weigh

  15. Time-varying Combinations of Predictive Densities using Nonlinear Filtering

    NARCIS (Netherlands)

    M. Billio (Monica); R. Casarin (Roberto); F. Ravazzolo (Francesco); H.K. van Dijk (Herman)

    2012-01-01

    We propose a Bayesian combination approach for multivariate predictive densities which relies upon a distributional state space representation of the combination weights. Several specifications of multivariate time-varying weights are introduced with a particular focus on weight dynamics

  16. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines...

  17. One versus Two Breast Density Measures to Predict 5- and 10-Year Breast Cancer Risk.

    Science.gov (United States)

    Kerlikowske, Karla; Gard, Charlotte C; Sprague, Brian L; Tice, Jeffrey A; Miglioretti, Diana L

    2015-06-01

    One measure of Breast Imaging Reporting and Data System (BI-RADS) breast density improves 5-year breast cancer risk prediction, but the value of sequential measures is unknown. We determined whether two BI-RADS density measures improve the predictive accuracy of the Breast Cancer Surveillance Consortium 5-year risk model compared with one measure. We included 722,654 women of ages 35 to 74 years with two mammograms with BI-RADS density measures on average 1.8 years apart; 13,715 developed invasive breast cancer. We used Cox regression to estimate the relative hazards of breast cancer for age, race/ethnicity, family history of breast cancer, history of breast biopsy, and one or two density measures. We developed a risk prediction model by combining these estimates with 2000-2010 Surveillance, Epidemiology, and End Results incidence and 2010 vital statistics for competing risk of death. The two-measure density model had marginally greater discriminatory accuracy than the one-measure model (AUC, 0.640 vs. 0.635). Of the 18.6% of women (134,404 of 722,654) who decreased density categories, 15.4% (20,741 of 134,404) of those whose density decreased from heterogeneously or extremely dense to a lower density category, and who had one other risk factor, had a clinically meaningful increase in 5-year risk […] breast cancer risk and improves risk classification for women with risk factors and a decrease in density. A two-density model should be considered for women whose density decreases when calculating breast cancer risk. ©2015 American Association for Cancer Research.

  18. Melanoma risk prediction models

    Directory of Open Access Journals (Sweden)

    Nikolić Jelena

    2014-01-01

    Full Text Available Background/Aim. The lack of effective therapy for advanced stages of melanoma emphasizes the importance of preventive measures and screening of populations at risk. Identifying individuals at high risk should allow targeted screening and follow-up involving those who would benefit most. The aim of this study was to identify the most significant factors for melanoma prediction in our population and to create prognostic models for identification and differentiation of individuals at risk. Methods. This case-control study included 697 participants (341 patients and 356 controls) who underwent an extensive interview and skin examination in order to check risk factors for melanoma. Pairwise univariate statistical comparison was used for the coarse selection of the most significant risk factors. These factors were fed into logistic regression (LR) and alternating decision tree (ADT) prognostic models that were assessed for their usefulness in identification of patients at risk of developing melanoma. Validation of the LR model was done by the Hosmer and Lemeshow test, whereas the ADT was validated by 10-fold cross-validation. The achieved sensitivity, specificity, accuracy and AUC for both models were calculated. The melanoma risk score (MRS) based on the outcome of the LR model was presented. Results. The LR model showed that the following risk factors were associated with melanoma: sunbeds (OR = 4.018; 95% CI 1.724-9.366 for those that sometimes used sunbeds), solar damage of the skin (OR = 8.274; 95% CI 2.661-25.730 for those with severe solar damage), hair color (OR = 3.222; 95% CI 1.984-5.231 for light brown/blond hair), the number of common naevi (over 100 naevi had OR = 3.57; 95% CI 1.427-8.931), the number of dysplastic naevi (from 1 to 10 dysplastic naevi OR was 2.672; 95% CI 1.572-4.540; for more than 10 naevi OR was 6.487; 95% CI 1.993-21.119), Fitzpatrick's phototype and the presence of congenital naevi. Red hair, phototype I and large congenital naevi were
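    A hedged sketch of how a melanoma risk score of this kind can be assembled from reported odds ratios with a logistic link; the intercept and the exact category coding below are hypothetical and do not reproduce the authors' fitted model.

```python
import math

# Log odds ratios taken from the abstract; the intercept is a hypothetical baseline.
log_or = {
    "sunbeds_sometimes": math.log(4.018),
    "severe_solar_damage": math.log(8.274),
    "light_brown_or_blond_hair": math.log(3.222),
    "over_100_common_naevi": math.log(3.57),
    "1_to_10_dysplastic_naevi": math.log(2.672),
    "over_10_dysplastic_naevi": math.log(6.487),
}
intercept = -3.0   # hypothetical baseline log-odds

def melanoma_risk(features):
    """features: dict of 0/1 indicators keyed as in log_or."""
    z = intercept + sum(log_or[k] * v for k, v in features.items() if k in log_or)
    return 1.0 / (1.0 + math.exp(-z))     # logistic link

example = {"sunbeds_sometimes": 1, "over_100_common_naevi": 1}
print(f"predicted probability for the example profile: {melanoma_risk(example):.2f}")
```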

  19. Conditional Density Models for Asset Pricing

    OpenAIRE

    Filipovic, Damir; Hughston, Lane P.; Macrina, Andrea

    2010-01-01

    We model the dynamics of asset prices and associated derivatives by consideration of the dynamics of the conditional probability density process for the value of an asset at some specified time in the future. In the case where the asset is driven by Brownian motion, an associated "master equation" for the dynamics of the conditional probability density is derived and expressed in integral form. By a "model" for the conditional density process we mean a solution to the master equation along wi...

  20. Prediction Method of Safety Mud Density in Depleted Oilfields

    Directory of Open Access Journals (Sweden)

    Yuan Jun-Liang

    2013-04-01

    Full Text Available At present, many oilfields are in their middle and late development periods and reservoir pressure is usually depleted, resulting in more serious differential-pressure sticking and drilling-mud leakage in both the reservoir and the cap rock. In view of this situation, a systematic prediction method for safety mud density in depleted oilfields was established. The influence of reservoir depletion on stress and strength in the reservoir and cap formations was studied and taken into account in the prediction of safety mud density. The research showed that the risks of differential-pressure sticking and drilling-mud leakage in the reservoir and cap formations both increase, and that they are the main hazards to prevent when drilling in depleted oilfields. The results were used to guide practical drilling work, and the whole process went smoothly.

  1. Quark matter at high density based on an extended confined isospin-density-dependent mass model

    Science.gov (United States)

    Qauli, A. I.; Sulaksono, A.

    2016-01-01

    We investigate the effect of the inclusion of relativistic Coulomb terms in a confined-isospin-density-dependent-mass (CIDDM) model of strange quark matter (SQM). We found that if we include the Coulomb term in scalar density form, the SQM equation of state (EOS) at high densities is stiffer, but if we include the Coulomb term in vector density form it is softer than that of the standard CIDDM model. We also investigate systematically the role of each term of the extended CIDDM model. Compared with what was reported by Chu and Chen [Astrophys. J. 780, 135 (2014)], we found that the stiffness of the SQM EOS is controlled by the interplay among the oscillator harmonic, isospin asymmetry and Coulomb contributions, depending on the parameter range of these terms. We have found that the absolute stability condition of SQM and the mass of 2 M⊙ pulsars can constrain the oscillator harmonic parameter to κ1≈0.53 when the Coulomb term is excluded. If the Coulomb term is included, for the models whose parameters are consistent with the SQM absolute stability condition, the 2.0 M⊙ constraint favors the maximum-mass prediction of the model with the scalar Coulomb term over that of the model with the vector Coulomb term. On the contrary, the high-density EOS predicted by the model with the vector Coulomb term is more compatible with the recent perturbative quantum chromodynamics result [1] than that predicted by the model with the scalar Coulomb term. Furthermore, we also observe that the quark composition in the very high density region depends quite sensitively on the kind of Coulomb term used.

  2. The central surface density of "dark halos" predicted by MOND

    CERN Document Server

    Milgrom, Mordehai

    2009-01-01

    Prompted by the recent claim, by Donato et al., of a quasi-universal central surface density of galaxy dark matter halos, I look at what MOND has to say on the subject. MOND, indeed, predicts a quasi-universal value of this quantity for objects of all masses and of any internal structure, provided they are mostly in the Newtonian regime; i.e., that their mean acceleration is at or above a0. The predicted value is q·Σm, with Σm = a0/(2πG) = 138 solar masses per square parsec for the nominal value of a0, and q a constant of order 1 that depends only on the form of the MOND interpolating function. This gives, in the above units, log(Σm) = 2.14, which is consistent with the value of 2.15 ± 0.2 found by Donato et al. MOND predicts, on the other hand, that this quasi-universal value is not shared by objects with much lower mean accelerations. It permits halo central surface densities that are arbitrarily small, if the mean acceleration inside the object is small enough. However, for such low-surface-density objects, MOND pre...
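    A quick arithmetic check of the quoted value, computing Σm = a0/(2πG) in solar masses per square parsec for the nominal a0 ≈ 1.2 × 10⁻¹⁰ m s⁻²:

```python
import math

a0 = 1.2e-10          # m s^-2, nominal MOND acceleration constant
G = 6.674e-11         # m^3 kg^-1 s^-2
M_sun = 1.989e30      # kg
pc = 3.086e16         # m

sigma_si = a0 / (2 * math.pi * G)             # kg m^-2
sigma_solar = sigma_si * pc**2 / M_sun        # M_sun pc^-2
print(f"Sigma_m ≈ {sigma_solar:.0f} M_sun/pc^2, log10 ≈ {math.log10(sigma_solar):.2f}")
# -> roughly 137 M_sun/pc^2, log10 ≈ 2.14, matching the abstract.
```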

  3. Corrosion current density prediction in reinforced concrete by imperialist competitive algorithm.

    Science.gov (United States)

    Sadowski, Lukasz; Nikoo, Mehdi

    2014-01-01

    This study attempted to predict corrosion current density in concrete using artificial neural networks (ANN) combined with an imperialist competitive algorithm (ICA) used to optimize the weights of the ANN. For that purpose, temperature, AC resistivity over the steel bar, AC resistivity remote from the steel bar, and DC resistivity over the steel bar were considered as input parameters, and corrosion current density as the output parameter. The ICA-ANN model was compared with a genetic algorithm to evaluate its accuracy in the three phases of training, testing, and prediction. The results showed that the ICA-ANN model offers greater capability, flexibility, and accuracy.

  4. Predictive Modeling of Black Spruce (Picea mariana (Mill. B.S.P. Wood Density Using Stand Structure Variables Derived from Airborne LiDAR Data in Boreal Forests of Ontario

    Directory of Open Access Journals (Sweden)

    Bharat Pokharel

    2016-12-01

    Full Text Available Our objective was to model the average wood density in black spruce trees in representative stands across a boreal forest landscape based on relationships with predictor variables extracted from airborne light detection and ranging (LiDAR) point cloud data. Increment core samples were collected from dominant or co-dominant black spruce trees in a network of 400 m2 plots distributed among forest stands representing the full range of species composition and stand development across a 1,231,707 ha forest management unit in northeastern Ontario, Canada. Wood quality data were generated from optical microscopy, image analysis, X-ray densitometry and diffractometry as employed in SilviScan™. Each increment core was associated with a set of field measurements at the plot level as well as a suite of LiDAR-derived variables calculated on a 20 × 20 m raster from a wall-to-wall coverage at a resolution of ~1 point m−2. We used a multiple linear regression approach to identify important predictor variables and describe relationships between stand structure and wood density for average black spruce trees in the stands we observed. A hierarchical classification model was then fitted using random forests to make spatial predictions of mean wood density for average trees in black spruce stands. The model explained 39 percent of the variance in the response variable, with an estimated root mean square error of 38.8 kg·m−3. Among the predictor variables, P20 (second decile LiDAR height, in m) and quadratic mean diameter were most important. Other predictors describing canopy depth and cover were of secondary importance and differed according to the modeling approach. LiDAR-derived variables appear to capture differences in stand structure that reflect different constraints on growth rates, determining the proportion of thin-walled earlywood cells in black spruce stems, and ultimately influencing the pattern of variation in important wood quality attributes.
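    A hedged sketch of the modelling idea with synthetic data (not the Ontario dataset): predict mean wood density from LiDAR-derived metrics such as the second-decile height (P20) and quadratic mean diameter with a random forest and cross-validation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 400
p20 = rng.uniform(2, 15, n)              # m, second decile of LiDAR heights (synthetic)
qmd = rng.uniform(8, 25, n)              # cm, quadratic mean diameter (synthetic)
cover = rng.uniform(0.3, 0.95, n)        # canopy cover fraction (synthetic)
density = 480 - 6 * p20 - 2 * qmd + 30 * cover + rng.normal(0, 35, n)  # kg/m^3

X = np.column_stack([p20, qmd, cover])
model = RandomForestRegressor(n_estimators=300, random_state=0)
scores = cross_val_score(model, X, density, cv=5,
                         scoring="neg_root_mean_squared_error")
print("cross-validated RMSE (kg/m^3):", np.round(-scores.mean(), 1))
model.fit(X, density)
print("feature importances (P20, QMD, cover):", np.round(model.feature_importances_, 2))
```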

  5. Model comparison for the density structure along solar prominence threads

    CERN Document Server

    Arregui, I

    2015-01-01

    Quiescent solar prominence fine structures are typically modelled as density enhancements, called threads, which occupy a fraction of a longer magnetic flux tube. The profile of the mass density along the magnetic field is however unknown and several arbitrary alternatives are employed in prominence wave studies. We present a comparison of theoretical models for the field-aligned density along prominence fine structures. We consider Lorentzian, Gaussian, and parabolic profiles. We compare their theoretical predictions for the period ratio between the fundamental transverse kink mode and the first overtone to obtain estimates for the ratio of densities between the central part of the tube and its foot-points and to assess which one would better explain observed period ratio data. Bayesian parameter inference and model comparison techniques are developed and applied. Parameter inference requires the computation of the posterior distribution for the density gradient parameter conditional on the observable period...

  6. Solubility of chlorargyrite (AgCl(cr./l.)) in water: New experimental data and a predictive model valid for a wide range of temperatures (273-873 K) and water densities (0.01-1 g·cm-3)

    Science.gov (United States)

    Akinfiev, Nikolay N.; Zotov, Alexander V.

    2016-04-01

    The solubility of chlorargyrite, AgCl(cr./l.), in pure water at 623, 673 and 753 (±2) K as a function of pressure over a wide range of aqueous densities (0.01-0.7 g·cm-3) was determined using various experimental approaches. Theoretical quantum chemistry simulations of Ag speciation and structure, combined with a recently developed equation of state (EoS) for aqueous neutral species (Akinfiev and Diamond, 2003), were applied to describe published and newly made AgCl(cr./l.) solubility measurements in water. The employed EoS for the AgCl(H2O)(aq) cluster is found to provide a good description of the whole set of experimental measurements over a wide range of temperatures (273-753 K), water densities (0.01-0.7 g·cm-3), and pressures of 0.1-100 MPa. The proposed AgCl(H2O)(aq) thermodynamic description is also shown to be valid for a dense aqueous fluid (0.7-1 g·cm-3) at 273-623 K and saturation water pressure. Although silver obviously shows greater affinity to the dense aqueous fluid, AgCl hydration in the vapour phase is demonstrated to be significant as well. A model extrapolation to magmatic conditions predicts an appreciable silver content even in low-density fluids, thus supporting the hypothesis of metal transport with vapour.

  7. Bayesian mixture models for spectral density estimation

    OpenAIRE

    Cadonna, Annalisa

    2017-01-01

    We introduce a novel Bayesian modeling approach to spectral density estimation for multiple time series. Considering first the case of non-stationary time series, the log-periodogram of each series is modeled as a mixture of Gaussian distributions with frequency-dependent weights and mean functions. The implied model for the log-spectral density is a mixture of linear mean functions with frequency-dependent weights. The mixture weights are built through successive differences of a logit-normal di...

  8. The role of station density for predicting daily runoff by top-kriging interpolation in Austria

    Directory of Open Access Journals (Sweden)

    Parajka Juraj

    2015-09-01

    Full Text Available Direct interpolation of daily runoff observations to ungauged sites is an alternative to hydrological model regionalisation. Such estimation is particularly important in small headwater basins characterized by sparse hydrological and climate observations but often large spatial variability. The main objective of this study is to evaluate the predictive accuracy of top-kriging interpolation driven by different numbers of stations (i.e., station densities) in the input dataset. The idea is to interpolate daily runoff for different station densities in Austria and to evaluate the minimum number of stations needed for accurate runoff predictions. Top-kriging efficiency is tested for ten different random samples at ten different station densities. The predictive accuracy is evaluated by ordinary cross-validation and full-sample cross-validations. The methodology is tested using 555 gauges with daily observations in the period 1987-1997. The results of the cross-validation indicate that, in Austria, top-kriging interpolation is superior to hydrological model regionalisation if the station density exceeds approximately 2 stations per 1000 km2 (175 stations in Austria). The average median of Nash-Sutcliffe cross-validation efficiency is larger than 0.7 for densities above 2.4 stations/1000 km2. For such densities, the variability of runoff efficiency over the ten random samples is very small. Lower runoff efficiency is found for low station densities (less than 1 station/1000 km2) and in some smaller headwater basins.
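    For reference, the skill score used in the study, the Nash-Sutcliffe efficiency, is straightforward to compute; the observed and simulated series below are invented.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, values below 0 are worse than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([1.2, 3.4, 2.8, 5.1, 4.0, 2.2])   # mm/day, illustrative
sim = np.array([1.0, 3.1, 3.0, 4.7, 4.4, 2.5])
print(f"NSE = {nash_sutcliffe(obs, sim):.2f}")
```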

  9. Biotic and abiotic factors predicting the global distribution and population density of an invasive large mammal

    Science.gov (United States)

    Lewis, Jesse S.; Farnsworth, Matthew L.; Burdett, Chris L.; Theobald, David M.; Gray, Miranda; Miller, Ryan S.

    2017-01-01

    Biotic and abiotic factors are increasingly acknowledged to synergistically shape broad-scale species distributions. However, the relative importance of biotic and abiotic factors in predicting species distributions is unclear. In particular, biotic factors, such as predation and vegetation, including those resulting from anthropogenic land-use change, are underrepresented in species distribution modeling, but could improve model predictions. Using generalized linear models and model selection techniques, we used 129 estimates of population density of wild pigs (Sus scrofa) from 5 continents to evaluate the relative importance, magnitude, and direction of biotic and abiotic factors in predicting population density of an invasive large mammal with a global distribution. Incorporating diverse biotic factors, including agriculture, vegetation cover, and large carnivore richness, into species distribution modeling substantially improved model fit and predictions. Abiotic factors, including precipitation and potential evapotranspiration, were also important predictors. The predictive map of population density revealed wide-ranging potential for an invasive large mammal to expand its distribution globally. This information can be used to proactively create conservation/management plans to control future invasions. Our study demonstrates that the ongoing paradigm shift, which recognizes that both biotic and abiotic factors shape species distributions across broad scales, can be advanced by incorporating diverse biotic factors. PMID:28276519

  10. A neural network for predicting saturated liquid density using genetic algorithm for pure and mixed refrigerants

    Energy Technology Data Exchange (ETDEWEB)

    Mohebbi, Ali; Taheri, Mahboobeh; Soltani, Ataollah [Department of Chemical Engineering, College of Engineering, Shahid Bahonar University of Kerman, Kerman (Iran)]

    2008-12-15

    In this study, a new approach for the auto-design of a neural network based on a genetic algorithm (GA) has been used to predict the saturated liquid density of 19 pure and 6 mixed refrigerants. Experimental data including Pitzer's acentric factor, reduced temperature and reduced saturated liquid density have been used to create a GA-ANN model. The results from the model are compared with the experimental data, the Hankinson and Thomson and Riedel methods, and the Spencer and Danner modification of the Rackett method. The GA-ANN model gives the best prediction of liquid density, with an average absolute percent deviation of 1.46 and 3.53 for 14 pure and 6 mixed refrigerants, respectively. (author)

  11. Predictive Models for Music

    OpenAIRE

    Paiement, Jean-François; Grandvalet, Yves; Bengio, Samy

    2008-01-01

    Modeling long-term dependencies in time series has proved very difficult to achieve with traditional machine learning methods. This problem occurs when considering music data. In this paper, we introduce generative models for melodies. We decompose melodic modeling into two subtasks. We first propose a rhythm model based on the distributions of distances between subsequences. Then, we define a generative model for melodies given chords and rhythms based on modeling sequences of Narmour featur...

  12. Density Models for Velocity Analysis of Jet Impinged CEDM Missile

    Energy Technology Data Exchange (ETDEWEB)

    Jo, Won Ho; Kang, Tae Kyo; Cho, Yeon Ho; Chang, Sang Gyoon; Lee, Dae Hee [KEPCO EnC, Daejeon (Korea, Republic of)]

    2015-05-15

    A control element drive mechanism (CEDM) can become a missile in the reactor head area during one of the postulated accidents. The CEDM is propelled by the high-speed water jet discharged from a broken upper head nozzle. Jet expansion models to predict the missile velocity have been investigated by Kang et al. Their previous work showed a continuous increase in missile velocity as the CEDM missile travels. This is not physical, since the two-phase flow from the nozzle break exit tends to disperse and the thrust force on the missile decreases with travel distance. The jet flow also interacts with the surrounding air. Therefore, the density change has to be included in the estimation of the missile velocity. In this paper, two density change models for the water jet along the distance from the nozzle break location are introduced for the jet expansion models. These two density approximation models are used to predict the CEDM missile velocity; the first one is the direct approximation model. For each model, the effects of the expanded jet area were included as the ratio of the jet area to the exit nozzle area. In the direct approximation model, the results showed a rapid decrease in both density and missile velocity. In the pressure approach model, the density change is assumed to be perfectly proportional to the pressure change, and the results showed relatively smooth changes in both density and missile velocity compared to the direct approximation model. Using the model developed by Kang et al., the maximum missile velocity is about 4 times greater than with the pressure approach model, since in their model the density is held constant at the jet density at the nozzle exit. The pressure approach model has the benefit that, unlike the direct approximation model, it adopts neither curve fitting nor extrapolation, and it includes the effects of density change that are not considered in the model developed by Kang et al. So, this model is

  13. [Rapid prediction of annual ring density of Paulownia elongata standing trees using near infrared spectroscopy].

    Science.gov (United States)

    Jiang, Ze-Hui; Wang, Yu-Rong; Fei, Ben-Hua; Fu, Feng; Hse, Chung-Yun

    2007-06-01

    Rapid prediction of the annual ring density of Paulownia elongata standing trees using near infrared (NIR) spectroscopy was studied. Sample collection was non-destructive: wood cores 5 mm in diameter were extracted at breast height from standing trees rather than from felled trees. The spectral data were then collected by the NIR autoscan method. The annual ring density was determined by mercury immersion. Models were built and analyzed by partial least squares (PLS) regression with full cross-validation in the 350-2500 nm wavelength range. The results showed that high correlation coefficients were obtained between the annual ring density and the NIR-fitted data. The correlation coefficient of the prediction model was 0.88 and 0.91 for the middle-diameter and larger-diameter trees, respectively. Moreover, high correlation coefficients were also obtained between the laboratory-determined annual ring density and the NIR-fitted data for the middle-diameter Paulownia elongata standing trees: the correlation coefficients of the calibration and prediction models were 0.90 and 0.83, and the standard errors of calibration (SEC) and prediction (SEP) were 0.012 and 0.016, respectively. The method can simply, rapidly and non-destructively estimate the annual ring density of Paulownia elongata standing trees close to the cutting age.
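    A hedged sketch of a PLS calibration of this kind using synthetic spectra (not the Paulownia data): relate NIR spectra to ring density with partial least squares and full (leave-one-out) cross-validation, reporting Rcv and SECV.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(5)
n_samples, n_wavelengths = 60, 500
density = rng.uniform(0.20, 0.35, n_samples)              # g/cm^3, assumed range
basis = rng.normal(size=n_wavelengths)
spectra = np.outer(density, basis) + rng.normal(0, 0.05, (n_samples, n_wavelengths))

pls = PLSRegression(n_components=5)
pred = cross_val_predict(pls, spectra, density, cv=n_samples)   # leave-one-out
r = np.corrcoef(density, pred.ravel())[0, 1]
secv = np.sqrt(np.mean((density - pred.ravel()) ** 2))
print(f"cross-validation correlation Rcv = {r:.2f}, SECV = {secv:.3f} g/cm^3")
```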

  14. Lattice Boltzmann model with nearly constant density.

    Science.gov (United States)

    Fang, Hai-ping; Wan, Rong-zheng; Lin, Zhi-fang

    2002-09-01

    An improved lattice Boltzmann model is developed to simulate fluid flow with nearly constant fluid density. The ingredient is to incorporate an extra relaxation for fluid density, which is realized by introducing a feedback equation in the equilibrium distribution functions. The pressure is dominated by the moving particles at a node, while the fluid density is kept nearly constant and explicit mass conservation is retained as well. Numerical simulation based on the present model for the (steady) plane Poiseuille flow and the (unsteady) two-dimensional Womersley flow shows a great improvement in simulation results over the previous models. In particular, the density fluctuation has been reduced effectively while achieving a relatively large pressure gradient.

  15. Combinatorial nuclear level-density model

    Energy Technology Data Exchange (ETDEWEB)

    Moller, Peter [Los Alamos National Laboratory]; Aberg, Sven [Lund, Sweden]; Uhrenholt, Henrik [Lund, Sweden]; Ichikawa, Takatoshi [RIKEN]

    2008-01-01

    A microscopic nuclear level-density model is presented. The model is a completely combinatorial (micro-canonical) model based on the folded-Yukawa single-particle potential and includes explicit treatment of pairing, rotational and vibrational states. The microscopic character of all states enables the extraction of level distribution functions with respect to pairing gaps, parity and angular momentum. The results of the model are compared to available experimental data: level spacings at the neutron separation energy, data on total level-density functions from the Oslo method, and data on parity ratios.

  16. Density functional theory and multiscale materials modeling

    Indian Academy of Sciences (India)

    Swapan K Ghosh

    2003-01-01

    One of the vital ingredients in the theoretical tools useful in materials modeling at all the length scales of interest is the concept of density. In the microscopic length scale, it is the electron density that has played a major role in providing a deeper understanding of chemical binding in atoms, molecules and solids. In the intermediate mesoscopic length scale, an appropriate picture of the equilibrium and dynamical processes has been obtained through the single particle number density of the constituent atoms or molecules. A wide class of problems involving nanomaterials, interfacial science and soft condensed matter has been addressed using the density based theoretical formalism as well as atomistic simulation in this regime. In the macroscopic length scale, however, matter is usually treated as a continuous medium and a description using local mass density, energy density and other related density functions has been found to be quite appropriate. A unique single unified theoretical framework that emerges through the density concept at these diverse length scales and is applicable to both quantum and classical systems is the so called density functional theory (DFT) which essentially provides a vehicle to project the many-particle picture to a single particle one. Thus, the central equation for quantum DFT is a one-particle Schrödinger-like Kohn–Sham equation, while the same for classical DFT consists of Boltzmann type distributions, both corresponding to a system of noninteracting particles in the field of a density-dependent effective potential. Selected illustrative applications of quantum DFT to microscopic modeling of intermolecular interaction and that of classical DFT to a mesoscopic modeling of soft condensed matter systems are presented.

  17. Predicting the morphological characteristics and basic density of Eucalyptus wood using the NIRS technique

    Directory of Open Access Journals (Sweden)

    Lívia Cássia Viana

    2009-12-01

    Full Text Available This work aimed to apply the near infrared spectroscopy (NIRS) technique for fast prediction of basic density and morphological characteristics of wood fibers in Eucalyptus clones. Six Eucalyptus clones aged three years were used, obtained from plantations in Cocais, Guanhães, Rio Doce and Santa Bárbara, in Minas Gerais state. The morphological characteristics of the fibers and basic density of the wood were determined by conventional methods and correlated with near infrared spectra using partial least squares (PLS) regression. The best calibration correlations were obtained in basic density prediction, with values of 0.95 for the correlation coefficient of cross validation (Rcv) and 3.4 for the ratio of performance to deviation (RPD), in clone 57. Fiber length can be predicted by models with Rcv ranging from 0.61 to 0.89 and standard error (SECV) ranging from 0.037 to 0.079 mm. The prediction model for wood fiber width presented higher Rcv (0.82) and RPD (1.9) values in clone 1046. The best fits to estimate lumen diameter and fiber wall thickness were obtained with information from clone 1046. In some clones, the NIRS technique proved efficient to predict the anatomical properties and basic density of wood in Eucalyptus clones.

  18. Zephyr - the prediction models

    DEFF Research Database (Denmark)

    Nielsen, Torben Skov; Madsen, Henrik; Nielsen, Henrik Aalborg

    2001-01-01

    This paper briefly describes new models and methods for predicting the wind power output from wind farms. The system is being developed in a project which has the research organization Risø and the Department of Informatics and Mathematical Modelling (IMM) as the modelling team and all the Dani...

  19. Modelling density segregation in flowing bidisperse granular materials

    Science.gov (United States)

    Xiao, Hongyi; Umbanhowar, Paul B.; Ottino, Julio M.; Lueptow, Richard M.

    2016-07-01

    Preventing segregation in flowing granular mixtures is an ongoing challenge for industrial processes that involve the handling of bulk solids. A recent continuum-based modelling approach accurately predicts spatial concentration fields in a variety of flow geometries for mixtures varying in particle size. This approach captures the interplay between advection, diffusion and segregation using kinematic information obtained from experiments and/or discrete element method (DEM) simulations combined with an empirically determined relation for the segregation velocity. Here, we extend the model to include density-driven segregation, thereby validating the approach for the two important cases of practical interest. DEM simulations of density bidisperse flows of mono-sized particles in a quasi-two-dimensional-bounded heap were performed to determine the dependence of the density-driven segregation velocity on local shear rate and particle concentration. The model yields theoretical predictions of segregation patterns that quantitatively match the DEM simulations over a range of density ratios and flow rates. Matching experiments reproduce the segregation patterns and quantitative segregation profiles obtained in both the simulations and the model, thereby demonstrating that the modelling approach captures the essential physics of density-driven segregation in granular heap flow.

  20. Nuclear level density: Shell-model approach

    Science.gov (United States)

    Sen'kov, Roman; Zelevinsky, Vladimir

    2016-06-01

    Knowledge of the nuclear level density is necessary for understanding various reactions, including those in the stellar environment. Usually the combinatorics of a Fermi gas plus pairing is used for finding the level density. Recently a practical algorithm avoiding diagonalization of huge matrices was developed for calculating the density of many-body nuclear energy levels with certain quantum numbers for a full shell-model Hamiltonian. The underlying physics is that of quantum chaos and intrinsic thermalization in a closed system of interacting particles. We briefly explain this algorithm and, when possible, demonstrate the agreement of the results with those derived from exact diagonalization. The resulting level density is much smoother than that coming from conventional mean-field combinatorics. We study the role of various components of residual interactions in the process of thermalization, stressing the influence of incoherent collision-like processes. The shell-model results for the traditionally used parameters are also compared with standard phenomenological approaches.

  1. Density functional theory predictions of isotropic hyperfine coupling constants.

    Science.gov (United States)

    Hermosilla, L; Calle, P; García de la Vega, J M; Sieiro, C

    2005-02-17

    The reliability of density functional theory (DFT) in the determination of the isotropic hyperfine coupling constants (hfccs) of the ground electronic states of organic and inorganic radicals is examined. Predictions using several DFT methods and the 6-31G, TZVP, EPR-III and cc-pVQZ basis sets are made and compared to experimental values. The set of 75 radicals studied here was selected using a wide range of criteria. The systems studied are neutral, cationic and anionic; doublet, triplet and quartet; localized and conjugated radicals, containing 1H, 9Be, 11B, 13C, 14N, 17O, 19F, 23Na, 25Mg, 27Al, 29Si, 31P, 33S, and 35Cl nuclei. The considered radicals provide 241 theoretical hfcc values, which are compared with 174 available experimental ones. The geometries of the studied systems were obtained by theoretical optimization using the same functional and basis set with which the hfccs were calculated. Regression analysis is used as a basic and appropriate methodology for this kind of comparative study. From this analysis, we conclude that DFT predictions of the hfccs are reliable for the B3LYP/TZVP and B3LYP/EPR-III combinations. Both functional/basis-set schemes are useful theoretical tools for predicting hfccs compared with other, much more expensive methods.

  2. Predicting insect migration density and speed in the daytime convective boundary layer.

    Directory of Open Access Journals (Sweden)

    James R Bell

    Full Text Available Insect migration needs to be quantified if spatial and temporal patterns in populations are to be resolved. Yet very little ecology is understood above the flight boundary layer (i.e., >10 m), where in north-west Europe an estimated 3 billion insects km−1 month−1, comprising pests, beneficial insects and other species that contribute to biodiversity, use the atmosphere to migrate. Consequently, we elucidate meteorological mechanisms, principally related to wind speed and temperature, that drive variation in daytime aerial density and insect displacement speeds with increasing altitude (150-1200 m above ground level). We derived average aerial densities and displacement speeds of 1.7 million insects in the daytime convective atmospheric boundary layer using vertical-looking entomological radars. We first studied patterns of insect aerial densities and displacement speeds over a decade and linked these with average temperatures and wind velocities from a numerical weather prediction model. Generalized linear mixed models showed that average insect densities decline with increasing wind speed and increase with increasing temperatures, and that the relationship between displacement speed and density was negative. We then sought to determine how general these patterns were over space using a paired-site approach in which the relationship between sites was examined using simple linear regression. Both average speeds and densities were predicted remotely from a site over 100 km away, although insect densities were much noisier due to local 'spiking'. By late morning and afternoon, when insects are migrating in a well-developed convective atmosphere at high altitude, they become much more difficult to predict remotely than during the early morning and at lower altitudes. Overall, our findings suggest that predicting migrating insects at altitude at distances of ≈ 100 km is promising, but additional radars are needed to parameterise spatial covariance.

  3. Predicting gully densities at sub-continental scales: a case study for the Horn of Africa

    Science.gov (United States)

    Vanmaercke, Matthias; Pelckmans, Ignace; Poesen, Jean

    2017-04-01

    Gully erosion is a major cause of land degradation in many regions, due to its negative impacts on catchment hydrology, its associated losses of land and damage to infrastructure, as well as its often major contribution to catchment sediment yields. Mitigation and prevention of gully erosion require a good knowledge of its spatial patterns and controlling factors. However, our ability to simulate or predict this process remains very limited, especially at the regional scale. Whereas detailed case studies have provided important insights into the drivers of gully erosion at local scales, these findings are often difficult to upscale to larger regions. Here we used a simple and cheap method to predict patterns of gully density at the sub-continental scale. By means of a random sampling procedure, we mapped gully densities for over sixty study sites across the Horn of Africa, using freely available Google Earth imagery. Next, we statistically analyzed which factors best explained the observed variation in mapped gully density. Based on these findings, we constructed a multiple regression model that simulates gully density from topography (average slope), soil characteristics (percentage silt) and land use (NDVI value). Although our model could benefit from further refinement, it already succeeds fairly well in simulating the patterns of gully density at sub-continental scales. Over 75% of the predicted gully densities differ by less than 5% from the observed gully density, while over 90% of the predictions deviate by less than 10%. Exploration of our results further showed that this methodology may be highly useful to quantify total gully erosion rates at regional and continental scales as well as the contribution of gully erosion to catchment sediment yields.
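    A hedged sketch of the regression idea with synthetic values (not the mapped Horn of Africa data): explain gully density from average slope, silt percentage and NDVI with a multiple linear regression fitted by ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 60
slope = rng.uniform(1, 20, n)          # average slope (%), synthetic
silt = rng.uniform(5, 60, n)           # silt content (%), synthetic
ndvi = rng.uniform(0.1, 0.7, n)        # vegetation index, synthetic
gully_density = 0.2 + 0.05 * slope + 0.02 * silt - 1.5 * ndvi + rng.normal(0, 0.2, n)

X = np.column_stack([np.ones(n), slope, silt, ndvi])
beta, *_ = np.linalg.lstsq(X, gully_density, rcond=None)
resid = gully_density - X @ beta
r2 = 1 - resid.var() / gully_density.var()
print("coefficients (intercept, slope, silt, NDVI):", np.round(beta, 3))
print("R^2 =", round(float(r2), 2))
```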

  4. Predicting the glass transition temperature as function of crosslink density and polymer interactions in rubber compounds

    Science.gov (United States)

    D'Escamard, Gabriella; De Rosa, Claudio; Auriemma, Finizia

    2016-05-01

    Crosslink sulfur density in rubber compounds and interactions in polymer blends are two compositional factors that affect rubber compound properties and the glass transition temperature (Tg), a marker of polymer properties relevant to their applications. Natural rubber (NR), butadiene rubber (BR) and styrene-butadiene rubber (SBR) compounds were investigated using calorimetry (DSC) and dynamic mechanical analysis (DMA). The results indicate that the Di Marzio and Schneider models accurately predict the dependence of Tg on crosslink density and on composition in miscible blends, respectively, and that the two models may serve as a basis for studying the relevant "in service" properties of real rubber compounds.

  5. Anion-radical oxygen centers in small (AgO)n clusters: density functional theory predictions

    CERN Document Server

    Trushin, Egor V

    2012-01-01

    The anion-radical form of the oxygen centers, O(-), is predicted at the DFT level for small silver oxide particles having the AgO stoichiometry. Model clusters (AgO)n appear to be ferromagnetic, with appreciable spin density at the oxygen centers. In contrast to these clusters, the Ag2O model cluster has no unpaired electrons in the ground state. The increased O/Ag ratio in the oxide particles is shown to be responsible for the spin density at the oxygen centers.

  6. Confidence scores for prediction models

    DEFF Research Database (Denmark)

    Gerds, Thomas Alexander; van de Wiel, MA

    2011-01-01

    modelling strategy is applied to different training sets. For each modelling strategy we estimate a confidence score based on the same repeated bootstraps. A new decomposition of the expected Brier score is obtained, as well as the estimates of population average confidence scores. The latter can be used ... to distinguish rival prediction models with similar prediction performances. Furthermore, on the subject level a confidence score may provide useful supplementary information for new patients who want to base a medical decision on predicted risk. The ideas are illustrated and discussed using data from cancer...

  7. Fault prediction of fighter based on nonparametric density estimation

    Institute of Scientific and Technical Information of China (English)

    Zhang Zhengdao; Hu Shousong

    2005-01-01

    Fighters and other complex engineering systems have many characteristics such as being difficult to model and test, having multiple operating conditions, and high cost. To address these points, a new kind of real-time fault predictor is designed based on an improved k-nearest neighbor method, which needs neither a mathematical model of the system nor training data and prior knowledge. It can learn and predict while the system is running, so it overcomes the difficulty of data acquisition. Besides, this predictor has a fast prediction speed, and the false alarm rate and missing alarm rate can be adjusted as needed. The method is simple and generalizable. Simulation results on the F-16 fighter demonstrate its efficiency.
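    A hedged sketch of a k-nearest-neighbour novelty score of the general kind the abstract alludes to (the details of the improved method are not given): a new sensor vector whose average distance to its k nearest stored neighbours exceeds a threshold raises an alarm. All data and the threshold are invented.

```python
import numpy as np

def knn_score(history, x, k=5):
    """Average Euclidean distance from x to its k nearest points in history."""
    d = np.linalg.norm(history - x, axis=1)
    return np.sort(d)[:k].mean()

rng = np.random.default_rng(1)
history = rng.normal(0, 1, size=(500, 3))          # past sensor vectors (normal behaviour)
normal_sample = rng.normal(0, 1, 3)
faulty_sample = normal_sample + np.array([4.0, 0.0, 0.0])  # simulated sensor drift

threshold = 1.5   # alarm threshold; raising it lowers the false-alarm rate
for name, x in [("normal", normal_sample), ("faulty", faulty_sample)]:
    s = knn_score(history, x)
    print(f"{name}: score = {s:.2f}, alarm = {s > threshold}")
```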

  8. Predicting above-ground density and distribution of small mammal prey species at large spatial scales.

    Science.gov (United States)

    Olson, Lucretia E; Squires, John R; Oakleaf, Robert J; Wallace, Zachary P; Kennedy, Patricia L

    2017-01-01

    Grassland and shrub-steppe ecosystems are increasingly threatened by anthropogenic activities. Loss of native habitats may negatively impact important small mammal prey species. Little information, however, is available on the impact of habitat variability on density of small mammal prey species at broad spatial scales. We examined the relationship between small mammal density and remotely-sensed environmental covariates in shrub-steppe and grassland ecosystems in Wyoming, USA. We sampled four sciurid and leporid species groups using line transect methods, and used hierarchical distance-sampling to model density in response to variation in vegetation, climate, topographic, and anthropogenic variables, while accounting for variation in detection probability. We created spatial predictions of each species' density and distribution. Sciurid and leporid species exhibited mixed responses to vegetation, such that changes to native habitat will likely affect prey species differently. Density of white-tailed prairie dogs (Cynomys leucurus), Wyoming ground squirrels (Urocitellus elegans), and leporids correlated negatively with proportion of shrub or sagebrush cover and positively with herbaceous cover or bare ground, whereas least chipmunks showed a positive correlation with shrub cover and a negative correlation with herbaceous cover. Spatial predictions from our models provide a landscape-scale metric of above-ground prey density, which will facilitate the development of conservation plans for these taxa and their predators at spatial scales relevant to management.

  9. Density contrast indicators in cosmological dust models

    Indian Academy of Sciences (India)

    Filipe C Mena; Reza Tavakol

    2000-10-01

    We discuss ways of quantifying structuration in relativistic cosmological settings by employing a family of covariant density contrast indicators. We study the evolution of these indicators with time in the context of inhomogeneous Szekeres models. We find that different observers (having either different spatial locations or different indicators) see different evolutions of the density contrast, which may or may not be monotonically increasing with time. We also find that monotonicity seems to be related to the initial conditions of the model, which may be of potential interest in connection with debates regarding gravitational entropy and the arrow of time.

  10. Modelling, controlling, predicting blackouts

    CERN Document Server

    Wang, Chengwei; Baptista, Murilo S

    2016-01-01

    The electric power system is one of the cornerstones of modern society. One of its most serious malfunctions is the blackout, a catastrophic event that may disrupt a substantial portion of the system, playing havoc with human life and causing great economic losses. Thus, understanding the mechanisms leading to blackouts and creating a reliable and resilient power grid has been a major issue, attracting the attention of scientists, engineers and stakeholders. In this paper, we study the blackout problem in power grids by considering a practical phase-oscillator model. This model allows one to simultaneously consider different types of power sources (e.g., traditional AC power plants and renewable power sources connected by DC/AC inverters) and different types of loads (e.g., consumers connected to distribution networks and consumers directly connected to power plants). We propose two new control strategies based on our model, one for traditional power grids, and another one for smart grids. The control strategie...

  11. Melanoma Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing melanoma cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  12. Non Destructive Method for Biomass Prediction Combining TLS Derived Tree Volume and Wood Density

    Directory of Open Access Journals (Sweden)

    Jan Hackenberg

    2015-04-01

    Full Text Available This paper presents a method for predicting the above-ground leafless biomass of trees in a non-destructive way. We utilize terrestrial laser scan data to predict the volume of the trees; combining the volume estimates with wood density measurements leads to biomass predictions. Thirty-six trees of three different species are analyzed: the evergreen coniferous Pinus massoniana, the evergreen broadleaved Erythrophleum fordii and the leafless deciduous Quercus petraea. All scans include a large number of noise points; the denoising procedures are presented in detail. Density values are considered a minor source of error in the method if it is applied to stem segments, as comparison with ground-truth data reveals that the prediction errors for the tree volumes are in accordance with the biomass prediction errors. While tree compartments with a diameter larger than 10 cm can be modeled accurately, smaller ones, especially twigs with a diameter smaller than 4 cm, are often largely overestimated. Better prediction results could be achieved by applying a biomass expansion factor to the biomass of compartments with a diameter larger than 10 cm. With this second method the average prediction error for Q. petraea could be reduced from 33.84% overestimation to 3.56%. E. fordii results could also be improved, reducing the average prediction error from

  13. Predicting protein-protein interactions in the post synaptic density.

    Science.gov (United States)

    Bar-shira, Ossnat; Chechik, Gal

    2013-09-01

    The post synaptic density (PSD) is a specialization of the cytoskeleton at the synaptic junction, composed of hundreds of different proteins. Characterizing the protein components of the PSD and their interactions can help elucidate the mechanism of the long-term changes in synaptic plasticity that underlie learning and memory. Unfortunately, our knowledge of the proteome and interactome of the PSD is still partial and noisy. In this study we describe a computational framework to improve the reconstruction of the PSD network. The approach is based on learning the characteristics of PSD protein interactions from a set of trusted interactions, expanding this set with data collected from large-scale repositories, and then predicting novel interactions with proteins that are suspected to reside in the PSD. Using this method we obtained thirty predicted interactions, more than half of which have supporting evidence in the literature. We discuss two of these new interactions in detail, Lrrtm1 with PSD-95 and Src with Capg. The first may take part in a mechanism underlying glutamatergic dysfunction in schizophrenia; the second suggests an alternative mechanism for regulating dendritic spine maturation.

  14. Mammographic Breast Density and Common Genetic Variants in Breast Cancer Risk Prediction.

    Directory of Open Access Journals (Sweden)

    Charmaine Pei Ling Lee

    Full Text Available Known prediction models for breast cancer can potentially be improved by adding mammographic density and common genetic variants identified in genome-wide association studies that are known to be associated with risk of the disease. We evaluated the benefit of including mammographic density and the cumulative effect of genetic variants in breast cancer risk prediction among women in a Singapore population. We estimated the risk of breast cancer using a prospective cohort of 24,161 women aged 50 to 64 from Singapore with available mammograms and known risk factors for breast cancer who were recruited between 1994 and 1997. We measured mammographic density using the medio-lateral oblique views of both breasts. Each woman's genotype for 75 SNPs was simulated based on the genotype frequencies obtained from the Breast Cancer Association Consortium data, and the cumulative effect was summarized by a genetic risk score (GRS). Any improvement in the performance of our proposed prediction model versus one containing only variables from the Gail model was assessed by changes in the receiver-operating characteristic curve and predictive values. During 17 years of follow-up, 680 breast cancer cases were diagnosed. The multivariate-adjusted hazard ratios (95% confidence intervals) were 1.60 (1.22-2.10), 2.20 (1.65-2.92), 2.33 (1.71-3.20), 2.12 (1.43-3.14), and 3.27 (2.24-4.76) for the mammographic density categories 11-20 cm2, 21-30 cm2, 31-40 cm2, 41-50 cm2, and 51-60 cm2, respectively, and 1.10 (1.03-1.16) for the GRS. At predicted absolute 10-year risk thresholds of 2.5% and 3.0%, a model with mammographic density and GRS could correctly identify 0.9% and 0.5% more women who would develop the disease, respectively, compared with a model using only the Gail variables. Mammographic density and common genetic variants can improve the discriminatory power of an established breast cancer risk prediction model among females in Singapore.

  15. Prediction models in complex terrain

    DEFF Research Database (Denmark)

    Marti, I.; Nielsen, Torben Skov; Madsen, Henrik

    2001-01-01

    The objective of the work is to investigate the performance of HIRLAM in complex terrain when used as input to energy production forecasting models, and to develop a statistical model to adapt HIRLAM predictions to the wind farm. The features of the terrain, especially the topography, influence...... are calculated using on-line measurements of power production as well as HIRLAM predictions as input, thus taking advantage of the auto-correlation present in the power production for shorter prediction horizons. Statistical models are used to describe the relationship between observed energy production...... and HIRLAM predictions. The statistical models belong to the class of conditional parametric models. The models are estimated using local polynomial regression, but the estimation method is here extended to be adaptive in order to allow for slow changes in the system, e.g. caused by the annual variations...

  16. Propulsion Physics Using the Chameleon Density Model

    Science.gov (United States)

    Robertson, Glen A.

    2011-01-01

    To grow as a space-faring race, future spaceflight systems will require a new theory of propulsion; specifically, one that does not require mass ejection, without limiting the high thrust necessary to accelerate within or beyond our solar system and return within a normal work period or lifetime. The Chameleon Density Model (CDM) is one such model that could provide new paths in propulsion toward this end. The CDM is based on Chameleon Cosmology, a dark matter theory introduced by Khoury and Weltman in 2004 and so named because it is hidden within known physics: the Chameleon field represents a scalar field within and about an object, even in the vacuum. The CDM relates to density changes in the Chameleon field, where the density changes are related to matter accelerations within and about an object. These density changes in turn change how an object couples to its environment, so that thrust is achieved by causing a differential in the environmental coupling about an object. As a demonstration that the CDM fits within known propulsion physics, this paper uses the model to estimate the thrust from a solid rocket motor. Under the CDM, a solid rocket constitutes a two-body system, i.e., the changing density of the rocket and the changing density in the nozzle arising from the accelerated mass; the interactions between these systems cause a differential coupling to the local gravity environment of the Earth. It is shown that the resulting differential in coupling produces a calculated value for the thrust nearly equivalent to the conventional thrust model used in Sutton and Ross, Rocket Propulsion Elements, even though embedded in the equations are the Universe energy scale factor, the reduced Planck mass and the Planck length, which relate the large Universe scale to the subatomic scale.

  17. Probabilistic prediction models for aggregate quarry siting

    Science.gov (United States)

    Robinson, G.R.; Larkins, P.M.

    2007-01-01

    Weights-of-evidence (WofE) and logistic regression techniques were used in a GIS framework to predict the spatial likelihood (prospectivity) of crushed-stone aggregate quarry development. The joint conditional probability models, based on geology, transportation network, and population density variables, were defined using quarry location and time of development data for the New England States, North Carolina, and South Carolina, USA. The Quarry Operation models describe the distribution of active aggregate quarries, independent of the date of opening. The New Quarry models describe the distribution of aggregate quarries when they open. Because of the small number of new quarries developed in the study areas during the last decade, independent New Quarry models have low parameter estimate reliability. The performance of parameter estimates derived for Quarry Operation models, defined by a larger number of active quarries in the study areas, was tested and evaluated to predict the spatial likelihood of new quarry development. Population density conditions at the time of new quarry development were used to modify the population density variable in the Quarry Operation models to apply to new quarry development sites. The Quarry Operation parameters derived for the New England study area, Carolina study area, and the combined New England and Carolina study areas were all similar in magnitude and relative strength. The Quarry Operation model parameters, using the modified population density variables, were found to be a good predictor of new quarry locations. Both the aggregate industry and the land management community can use the model approach to target areas for more detailed site evaluation for quarry location. The models can be revised easily to reflect actual or anticipated changes in transportation and population features. © International Association for Mathematical Geology 2007.

  18. Modelling spatial density using continuous wavelet transforms

    Indian Academy of Sciences (India)

    D Sudheer Reddy; N Gopal Reddy; A K Anilkumar

    2013-02-01

    Due to the increase in satellite launch activities from many countries around the world, orbital debris has become a major concern for space agencies planning collision-free orbit designs. The risk of collision is calculated using in situ measurements and available models. Spatial density models are useful in understanding the long-term likelihood of a collision in a particular region of space and are also helpful in pre-launch orbit planning. In this paper, we present a method for estimating the parameters of a spatial density model, such as the number of peaks and the peak locations, using continuous wavelets. The proposed methodology was tested on two-line element (TLE) data and the results are presented.
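
    The abstract names continuous wavelets as the tool for finding the number and location of density peaks but gives no algorithm; the numpy-only sketch below (the Ricker wavelet, scale range, prominence threshold and synthetic density profile are all assumptions for illustration) shows one plausible reading of that idea.

```python
import numpy as np

def ricker(points, a):
    """Ricker (Mexican-hat) wavelet sampled on `points` points at scale `a`."""
    t = np.arange(points) - (points - 1) / 2.0
    return (1.0 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def cwt(signal, scales):
    """Continuous wavelet transform by direct convolution; one row per scale."""
    out = np.empty((len(scales), len(signal)))
    for i, a in enumerate(scales):
        w = ricker(min(10 * int(a), len(signal)), a)
        out[i] = np.convolve(signal, w, mode="same")
    return out

def density_peaks(position, density, scales=np.arange(2, 30)):
    """Estimate the number and location of peaks in a 1-D spatial density profile."""
    ridge = cwt(density, scales).mean(axis=0)            # average response over scales
    is_peak = (ridge > np.roll(ridge, 1)) & (ridge > np.roll(ridge, -1))
    is_peak &= ridge > 0.5 * ridge.max()                 # keep only prominent peaks
    return position[is_peak]

# Illustrative altitude profile with two density peaks (not real debris data).
alt = np.linspace(200.0, 2000.0, 900)
profile = np.exp(-((alt - 800.0) / 60.0) ** 2) + 0.6 * np.exp(-((alt - 1400.0) / 90.0) ** 2)
print(density_peaks(alt, profile))
```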

  19. Model comparison for the density structure along solar prominence threads

    Science.gov (United States)

    Arregui, I.; Soler, R.

    2015-06-01

    Context. Quiescent solar prominence fine structures are typically modelled as density enhancements, called threads, which occupy a fraction of a longer magnetic flux tube. This is justified from the spatial distribution of the imaged plasma emission or absorption of prominences at small spatial scales. The profile of the mass density along the magnetic field is unknown, however, and several arbitrary alternatives are employed in prominence wave studies. The identification and measurement of period ratios from multiple harmonics in standing transverse thread oscillations offer a remote diagnostics method to probe the density variation of these structures. Aims: We present a comparison of theoretical models for the field-aligned density along prominence fine structures. They aim to imitate density distributions in which the plasma is more or less concentrated around the centre of the magnetic flux tube. We consider Lorentzian, Gaussian, and parabolic profiles. We compare theoretical predictions based on these profiles for the period ratio between the fundamental transverse kink mode and the first overtone to obtain estimates for the density ratios between the central part of the tube and its foot-points and to assess which one would better explain observed period ratio data. Methods: Bayesian parameter inference and model comparison techniques were developed and applied. To infer the parameters, we computed the posterior distribution for the density gradient parameter that depends on the observable period ratio. The model comparison involved computing the marginal likelihood as a function of the period ratio to obtain the plausibility of each density model as a function of the observable. We also computed the Bayes factors to quantify the relative evidence for each model, given a period ratio observation. Results: A Lorentzian density profile, with plasma density concentrated around the centre of the tube, seems to offer the most plausible inversion result. A
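
    The model comparison described here amounts to computing a marginal likelihood for each density profile and taking their ratios (Bayes factors); the sketch below shows that bookkeeping with a flat prior on the density contrast. The two forward models are crude placeholders, not the paper's kink-mode period-ratio solutions, and the observed period ratio and its uncertainty are invented for illustration.

```python
import numpy as np

def marginal_likelihood(ratio_obs, sigma, forward, theta):
    """p(data | model): Gaussian likelihood of the observed period ratio,
    integrated over a flat prior on the density-contrast parameter theta."""
    prior = np.full_like(theta, 1.0 / (theta[-1] - theta[0]))
    lik = np.exp(-0.5 * ((ratio_obs - forward(theta)) / sigma) ** 2) \
          / (np.sqrt(2.0 * np.pi) * sigma)
    return np.trapz(lik * prior, theta)

# Placeholder period-ratio curves for two density profiles (illustrative only).
lorentzian = lambda c: 2.0 / (1.0 + 0.5 * np.log1p(c))
gaussian   = lambda c: 2.0 / (1.0 + 0.3 * np.log1p(c))

grid = np.linspace(1.0, 100.0, 2000)                 # prior range of the density contrast
bf = marginal_likelihood(1.8, 0.1, lorentzian, grid) \
     / marginal_likelihood(1.8, 0.1, gaussian, grid)
print(f"Bayes factor, Lorentzian vs Gaussian: {bf:.2f}")
```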

  20. Prediction models in complex terrain

    DEFF Research Database (Denmark)

    Marti, I.; Nielsen, Torben Skov; Madsen, Henrik

    2001-01-01

    The objective of the work is to investigate the performance of HIRLAM in complex terrain when used as input to energy production forecasting models, and to develop a statistical model to adapt HIRLAM predictions to the wind farm. The features of the terrain, especially the topography, influence...

  1. Sloppy nuclear energy density functionals: effective model reduction

    CERN Document Server

    Niksic, Tamara

    2016-01-01

    Concepts from information geometry are used to analyse parameter sensitivity for a nuclear energy density functional, representative of a class of semi-empirical functionals that start from a microscopically motivated ansatz for the density dependence of the energy of a system of protons and neutrons. It is shown that such functionals are sloppy, characterized by an exponential range of sensitivity to parameter variations. Responsive to only a few stiff parameter combinations, they exhibit an exponential decrease of sensitivity to variations of the remaining soft parameters. By interpreting the space of model predictions as a manifold embedded in the data space, with the parameters of the functional as coordinates on the manifold, it is also shown that the exponential distribution of model manifold widths corresponds to the distribution of parameter sensitivity. Using the Manifold Boundary Approximation Method, we illustrate how to systematically construct effective nuclear density functionals of successively...

  2. Predictive densities for day-ahead electricity prices using time-adaptive quantile regression

    DEFF Research Database (Denmark)

    Jónsson, Tryggvi; Pinson, Pierre; Madsen, Henrik;

    2014-01-01

    A large part of the decision-making problems that actors of the power system face on a daily basis require scenarios for day-ahead electricity market prices. These scenarios are most likely to be generated based on marginal predictive densities for such prices, then enhanced with a temporal dependence structure. A semi-parametric methodology for generating such densities is presented: it includes (i) a time-adaptive quantile regression model for the 5%–95% quantiles; and (ii) a description of the distribution tails with exponential distributions. The forecasting skill of the proposed model is compared to that of four benchmark approaches and the well-known generalized autoregressive conditional heteroskedasticity (GARCH) model over a three-year evaluation period. While all benchmarks are outperformed in terms of overall forecasting skill, the superiority of the semi-parametric model over......
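
    The two ingredients named in the abstract, a time-adaptive quantile regression for the central quantiles and exponential tails, can be illustrated compactly; the sketch below implements a linear pinball-loss regression with exponential forgetting of old observations. The forgetting factor, the single explanatory variable and the synthetic price data are assumptions for illustration, not the paper's specification.

```python
import numpy as np
from scipy.optimize import minimize

def pinball(residual, tau):
    """Quantile (pinball) loss."""
    return np.where(residual >= 0, tau * residual, (tau - 1.0) * residual)

def adaptive_quantile_fit(X, y, tau, forgetting=0.995):
    """Linear quantile regression with exponentially down-weighted past data."""
    n = len(y)
    w = forgetting ** np.arange(n - 1, -1, -1)           # newest observations weigh most
    Xb = np.column_stack([np.ones(n), X])
    obj = lambda beta: np.sum(w * pinball(y - Xb @ beta, tau))
    return minimize(obj, np.zeros(Xb.shape[1]), method="Nelder-Mead").x

# Illustrative data: skewed day-ahead prices driven by one explanatory variable.
rng = np.random.default_rng(2)
load = rng.uniform(20.0, 60.0, 500)
price = 5.0 + 0.8 * load + rng.gamma(2.0, 3.0, 500)

taus = np.arange(0.05, 0.96, 0.05)                       # the 5% ... 95% nominal levels
quantile_models = {round(t, 2): adaptive_quantile_fit(load, price, t) for t in taus}
# The tails beyond the 5% and 95% quantiles would then be closed with exponential
# distributions fitted to the exceedances, as stated in the abstract.
```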

  3. Density prediction for petroleum and derivatives by gamma-ray attenuation and artificial neural networks.

    Science.gov (United States)

    Salgado, C M; Brandão, L E B; Conti, C C; Salgado, W L

    2016-10-01

    This work presents a new methodology for density prediction of petroleum and derivatives for product-monitoring applications. The approach is based on pulse-height distribution pattern recognition by means of an artificial neural network (ANN). The detection system uses an appropriate broad-beam geometry, comprising a (137)Cs gamma-ray source and a NaI(Tl) detector positioned diametrically on the other side of the pipe in order to measure the transmitted beam. Theoretical models for different materials were developed using the MCNP-X code, which was also used to provide training, test and validation data for the ANN. Eighty-eight simulations were carried out, with density ranging from 0.55 to 1.26 g cm(-3), in order to cover the most practical situations. Validation tests included patterns different from those used in the ANN training phase. The results show that the proposed approach may be successfully applied to density prediction for these types of materials; the density can be predicted automatically without prior knowledge of the actual material composition.

  4. Improving hot region prediction by parameter optimization of density clustering in PPI.

    Science.gov (United States)

    Hu, Jing; Zhang, Xiaolong

    2016-11-01

    This paper proposes an optimized algorithm that combines density clustering with parameter selection and feature-based classification for hot region prediction. First, all residues are classified by an SVM to remove non-hot-spot residues; then density clustering with parameter selection is used to find hot regions. For the density clustering, the paper studies how to select the input parameters. Density-based incremental clustering has two parameters, radius and density. We first fix the density and enumerate the radius to find the pair of parameters that leads to the maximum number of clusters, and then fix the radius and enumerate the density to find another such pair. Experimental results show that the proposed method using both pairs of parameters provides better prediction performance than the alternative; comparing the two, the result obtained by fixing the radius and enumerating the density has slightly higher prediction accuracy than that obtained by fixing the density and enumerating the radius.
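
    A hedged sketch of the parameter enumeration described above is given below, with scikit-learn's DBSCAN standing in for the paper's density-based incremental clustering (eps plays the role of the radius, min_samples the role of the density); the search ranges and the synthetic residue coordinates are assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def n_clusters(coords, eps, min_samples):
    """Number of clusters found, ignoring the noise label (-1)."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit(coords).labels_
    return len(set(labels)) - (1 if -1 in labels else 0)

def best_radius(coords, density=4, radii=np.linspace(1.0, 10.0, 40)):
    """Fix the density, enumerate the radius, keep the radius maximizing the cluster count."""
    return max(radii, key=lambda eps: n_clusters(coords, eps, density))

def best_density(coords, radius, densities=range(2, 15)):
    """Fix the radius, enumerate the density, with the same criterion."""
    return max(densities, key=lambda m: n_clusters(coords, radius, m))

# Illustrative 3-D "hot spot residue" coordinates forming three groups.
rng = np.random.default_rng(3)
coords = np.vstack([rng.normal(c, 1.0, size=(30, 3)) for c in (0.0, 8.0, 16.0)])
eps_star = best_radius(coords)
m_star = best_density(coords, radius=eps_star)
print(eps_star, m_star)
```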

  5. Predictive models of forest dynamics.

    Science.gov (United States)

    Purves, Drew; Pacala, Stephen

    2008-06-13

    Dynamic global vegetation models (DGVMs) have shown that forest dynamics could dramatically alter the response of the global climate system to increased atmospheric carbon dioxide over the next century. But there is little agreement between different DGVMs, making forest dynamics one of the greatest sources of uncertainty in predicting future climate. DGVM predictions could be strengthened by integrating the ecological realities of biodiversity and height-structured competition for light, facilitated by recent advances in the mathematics of forest modeling, ecological understanding of diverse forest communities, and the availability of forest inventory data.

  6. Teaching Chemistry with Electron Density Models

    Science.gov (United States)

    Shusterman, Gwendolyn P.; Shusterman, Alan J.

    1997-07-01

    Linus Pauling once said that a topic must satisfy two criteria before it can be taught to students. First, students must be able to assimilate the topic within a reasonable amount of time. Second, the topic must be relevant to the educational needs and interests of the students. Unfortunately, the standard general chemistry textbook presentation of "electronic structure theory", set as it is in the language of molecular orbitals, has a difficult time satisfying either criterion. Many of the quantum mechanical aspects of molecular orbitals are too difficult for most beginning students to appreciate, much less master, and the few applications that are presented in the typical textbook are too limited in scope to excite much student interest. This article describes a powerful new method for teaching students about electronic structure and its relevance to chemical phenomena. This method, which we have developed and used for several years in general chemistry (G.P.S.) and organic chemistry (A.J.S.) courses, relies on computer-generated three-dimensional models of electron density distributions, and largely satisfies Pauling's two criteria. Students find electron density models easy to understand and use, and because these models are easily applied to a broad range of topics, they successfully convey to students the importance of electronic structure. In addition, when students finally learn about orbital concepts they are better prepared because they already have a well-developed three-dimensional picture of electronic structure to fall back on. We note in this regard that the types of models we use have found widespread, rigorous application in chemical research (1, 2), so students who understand and use electron density models do not need to "unlearn" anything before progressing to more advanced theories.

  7. Exploring a new bilateral focal density asymmetry based image marker to predict breast cancer risk

    Science.gov (United States)

    Aghaei, Faranak; Mirniaharikandehei, Seyedehnafiseh; Hollingsworth, Alan B.; Wang, Yunzhi; Qiu, Yuchen; Liu, Hong; Zheng, Bin

    2017-03-01

    Although breast density has been widely considered an important breast cancer risk factor, it is not very effective for predicting the risk of developing breast cancer in the short term or of harboring cancer in mammograms. Building on our recent studies of short-term breast cancer risk stratification models based on bilateral mammographic density asymmetry, in this study we explored a new quantitative image marker based on bilateral focal density asymmetry to predict the risk of harboring cancers in mammograms. For this purpose, we assembled a testing dataset of 100 positive and 100 negative cases. In each positive case, no solid masses are visible on the mammograms. We developed a computer-aided detection (CAD) scheme to automatically detect focal dense regions depicted on the two bilateral mammograms of the left and right breasts. The CAD scheme selects the largest focal dense region on each image and computes a bilateral asymmetry ratio. We used this focal density asymmetry as a new imaging marker to divide the testing cases into two groups of higher and lower focal density asymmetry. The first group included 70 cases, of which 62.9% were positive, while the second group included 130 cases, of which 43.1% were positive. The odds ratio is 2.24. As a result, this preliminary study supports the feasibility of applying a new focal density asymmetry based imaging marker to predict the risk of having mammography-occult cancers. The goal is to assist radiologists in more effectively and accurately detecting early subtle cancers using mammography and/or other adjunctive imaging modalities in the future.

  8. Disentangling density-dependent dynamics using full annual cycle models and Bayesian model weight updating

    Science.gov (United States)

    Robinson, Orin J.; McGowan, Conor; Devers, Patrick K.

    2017-01-01

    Density dependence regulates populations of many species across all taxonomic groups. Understanding density dependence is vital for predicting the effects of climate, habitat loss and/or management actions on wild populations. Migratory species likely experience seasonal changes in the relative influence of density dependence on population processes such as survival and recruitment throughout the annual cycle. These effects must be accounted for when characterizing migratory populations via population models. To evaluate the effects of density on seasonal survival and recruitment of a migratory species, we used an existing full annual cycle model framework for American black ducks Anas rubripes, and tested different density effects (including no effects) on survival and recruitment. We then used a Bayesian model weight updating routine to determine which population model best fit observed breeding population survey data between 1990 and 2014. The models that best fit the survey data suggested that survival and recruitment were affected by density dependence and that density effects were stronger on adult survival during the breeding season than during the non-breeding season. The analysis also suggests that regulation of survival and recruitment by density varied over time: the best-supported characterization of density regulation changed every 8-12 years (three times in the 25-year period) for our population. Synthesis and applications. Using a full annual cycle modelling framework and model weighting routine will be helpful in evaluating density dependence for migratory species in both the short and long term. We used this method to disentangle the seasonal effects of density on the continental American black duck population, which will allow managers to better evaluate the effects of habitat loss and potential habitat management actions throughout the annual cycle. The method here may allow researchers to home in on the proper form and/or strength of
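
    The Bayesian model-weight updating routine can be pictured as a simple recursive reweighting of candidate models by how well each one predicts the next survey estimate; the sketch below assumes a Gaussian observation error and invents a tiny survey series purely for illustration.

```python
import numpy as np

def update_model_weights(weights, predictions, observation, sigma):
    """One annual update of the posterior probability of each candidate model."""
    lik = np.exp(-0.5 * ((observation - np.asarray(predictions)) / sigma) ** 2)
    posterior = np.asarray(weights) * lik
    return posterior / posterior.sum()

# Three hypothetical candidates: no density effect, breeding-season effect, year-round effect.
survey_series = [1.00, 0.96, 0.93]                  # illustrative breeding-population indices
preds_by_year = [[1.02, 1.01, 0.98],                # each year's prediction from the 3 models
                 [0.99, 0.95, 0.92],
                 [0.97, 0.94, 0.90]]

w = np.ones(3) / 3                                  # equal prior weights
for obs, preds in zip(survey_series, preds_by_year):
    w = update_model_weights(w, preds, obs, sigma=0.05)
print(w)                                            # posterior support for each model
```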

  9. Assessment of two mammographic density related features in predicting near-term breast cancer risk

    Science.gov (United States)

    Zheng, Bin; Sumkin, Jules H.; Zuley, Margarita L.; Wang, Xingwei; Klym, Amy H.; Gur, David

    2012-02-01

    In order to establish a personalized breast cancer screening program, it is important to develop risk models that have high discriminatory power in predicting the likelihood of a woman developing an imaging detectable breast cancer in near-term (e.g., BIRADS), and computed mammographic density related features we compared classification performance in estimating the likelihood of detecting cancer during the subsequent examination using areas under the ROC curves (AUC). The AUCs were 0.63+/-0.03, 0.54+/-0.04, 0.57+/-0.03, 0.68+/-0.03 when using woman's age, BIRADS rating, computed mean density and difference in computed bilateral mammographic density, respectively. Performance increased to 0.62+/-0.03 and 0.72+/-0.03 when we fused mean and difference in density with woman's age. The results suggest that, in this study, bilateral mammographic tissue density is a significantly stronger (p<0.01) risk indicator than both woman's age and mean breast density.

  10. Predicting Ligand Binding Sites on Protein Surfaces by 3-Dimensional Probability Density Distributions of Interacting Atoms

    Science.gov (United States)

    Jian, Jhih-Wei; Elumalai, Pavadai; Pitti, Thejkiran; Wu, Chih Yuan; Tsai, Keng-Chang; Chang, Jeng-Yih; Peng, Hung-Pin; Yang, An-Suei

    2016-01-01

    Predicting ligand binding sites (LBSs) on protein structures, which are obtained either from experimental or computational methods, is a useful first step in functional annotation or structure-based drug design for the protein structures. In this work, the structure-based machine learning algorithm ISMBLab-LIG was developed to predict LBSs on protein surfaces with input attributes derived from the three-dimensional probability density maps of interacting atoms, which were reconstructed on the query protein surfaces and were relatively insensitive to local conformational variations of the tentative ligand binding sites. The prediction accuracy of the ISMBLab-LIG predictors is comparable to that of the best LBS predictors benchmarked on several well-established testing datasets. More importantly, the ISMBLab-LIG algorithm has substantial tolerance to the prediction uncertainties of computationally derived protein structure models. As such, the method is particularly useful for predicting LBSs not only on experimental protein structures without known LBS templates in the database but also on computationally predicted model protein structures with structural uncertainties in the tentative ligand binding sites. PMID:27513851

  11. Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.

    Science.gov (United States)

    Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F

    2013-04-01

    In biomedical research, it is often of interest to characterize biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference making as few assumptions about restrictive parametric models as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models by one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.

  12. Thermospheric density model biases at the 23rd sunspot maximum

    Science.gov (United States)

    Pardini, C.; Moe, K.; Anselmo, L.

    2012-07-01

    Uncertainties in the neutral density estimation are the major source of aerodynamic drag errors and one of the main limiting factors in the accuracy of the orbit prediction and determination process at low altitudes. Massive efforts have been made over the years to constantly improve the existing operational density models, or to create even more precise and sophisticated tools. Special attention has also been paid to research more appropriate solar and geomagnetic indices. However, the operational models still suffer from weakness. Even if a number of studies have been carried out in the last few years to define the performance improvements, further critical assessments are necessary to evaluate and compare the models at different altitudes and solar activity conditions. Taking advantage of the results of a previous study, an investigation of thermospheric density model biases during the last sunspot maximum (October 1999 - December 2002) was carried out by analyzing the semi-major axis decay of four satellites: Cosmos 2265, Cosmos 2332, SNOE and Clementine. Six thermospheric density models, widely used in spacecraft operations, were analyzed: JR-71, MSISE-90, NRLMSISE-00, GOST-2004, JB2006 and JB2008. During the time span considered, for each satellite and atmospheric density model, a fitted drag coefficient was solved for and then compared with the calculated physical drag coefficient. It was therefore possible to derive the average density biases of the thermospheric models during the maximum of the 23rd solar cycle. Below 500 km, all the models overestimated the average atmospheric density by amounts varying between +7% and +20%. This was an inevitable consequence of constructing thermospheric models from density data obtained by assuming a fixed drag coefficient, independent of altitude. Because the uncertainty affecting the drag coefficient measurements was about 3% at both 200 km and 480 km of altitude, the calculated air density biases below 500 km were

  13. The pasta phase within density dependent hadronic models

    CERN Document Server

    Avancini, S S; Marinelli, J R; Peres-Menezes, D; Watanabe de Moraes, M M; Providência, C; Santos, A M

    2008-01-01

    In the present paper we investigate the onset of the pasta phase with different parametrisations of the density-dependent hadronic model and compare the results with one of the usual parametrisations of the non-linear Walecka model. The influence of the scalar-isovector virtual delta meson is shown. At zero temperature two different methods are used, one based on coexisting phases and the other on the Thomas-Fermi approximation. At finite temperature only the coexisting-phases method is used. npe matter with fixed proton fractions and in beta-equilibrium is studied. We compare our results with restrictions imposed on the values of the density and pressure at the inner edge of the crust, obtained from observations of the Vela pulsar and recent isospin diffusion data from heavy-ion reactions, and with predictions from spinodal calculations.

  14. Measured Predictively by a Density-Salinity Refractometer

    Directory of Open Access Journals (Sweden)

    Simonetta Lorenzon

    2011-01-01

    Proteins are major contributors to hemolymph density, and the present study correlates the easy and low cost measure of hemolymph density by a density-salinity refractometer with the total protein concentration, measured with a colorimetric method. Moreover, the study evaluates the accuracy of the relationship and provides a conversion factor from hemolymph density to protein in seven species of crustaceans, representative of taxa far apart in the phylogenetic tree and characterized by different life habits. Measuring serum-protein concentration by using a refractometer can provide a non-destructive field method to assess crustacean populations/species protein-related modifications of physiological state without need of costly laboratory facilities and procedures.

  15. PREDICTION OF MAXIMUM DRY DENSITY OF LOCAL GRANULAR ...

    African Journals Online (AJOL)

    methods. A test on a soil of relatively high solid density revealed that the developed relation loses ... where Pd max is the laboratory maximum dry ... Addis-Jinima Road Rehabilitation. ... data sets that differ considerably in magnitude.

  16. Predicting soil particle density from clay and soil organic matter contents

    DEFF Research Database (Denmark)

    Schjønning, Per; McBride, R.A.; Keller, T.

    2017-01-01

    Soil particle density (Dp) is an important soil property for calculating soil porosity expressions. However, many studies assume a constant value, typically 2.65 Mg m−3 for arable, mineral soils. Few models exist for the prediction of Dp from soil organic matter (SOM) content. We hypothesized...... of clay particles was approximately 2.86 Mg m−3, while that of sand+silt particles could be estimated at ~2.65 Mg m−3. Multiple linear regression showed that a combination of clay and SOM contents could explain nearly 92% of the variation in measured Dp. The clay and SOM prediction equation was validated...... against a combined data set with 227 soil samples representing A, B, and C horizons from temperate North America and Europe. The new prediction equation performed better than two SOM-based models from the literature. Validation of the new clay and SOM model using the 227 soil samples gave a root mean
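
    The core of the new prediction equation is an ordinary multiple linear regression of particle density on clay and SOM contents; a minimal sketch of that fit is shown below. The calibration data and the resulting coefficients are invented for illustration and are not the published equation.

```python
import numpy as np

# Hypothetical calibration data: clay and SOM as mass fractions, Dp in Mg m^-3.
clay = np.array([0.05, 0.12, 0.20, 0.31, 0.08, 0.25])
som  = np.array([0.035, 0.021, 0.015, 0.012, 0.060, 0.018])
dp   = np.array([2.58, 2.63, 2.67, 2.70, 2.52, 2.68])

# Ordinary least squares for Dp = b0 + b1*clay + b2*SOM.
X = np.column_stack([np.ones_like(clay), clay, som])
beta, *_ = np.linalg.lstsq(X, dp, rcond=None)
r2 = 1.0 - np.sum((dp - X @ beta) ** 2) / np.sum((dp - dp.mean()) ** 2)
print(f"Dp = {beta[0]:.3f} + {beta[1]:.3f}*clay + {beta[2]:.3f}*SOM   (R^2 = {r2:.2f})")
```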

  17. Estimation of volumetric breast density for breast cancer risk prediction

    Science.gov (United States)

    Pawluczyk, Olga; Yaffe, Martin J.; Boyd, Norman F.; Jong, Roberta A.

    2000-04-01

    Mammographic density (MD) has been shown to be a strong risk predictor for breast cancer. Compared to subjective assessment by a radiologist, computer-aided analysis of digitized mammograms provides a quantitative and more reproducible method for assessing breast density. However, the current methods of estimating breast density based on the area of bright signal in a mammogram do not reflect the true, volumetric quantity of dense tissue in the breast. A computerized method to estimate the amount of radiographically dense tissue in the overall volume of the breast has been developed to provide an automatic, user-independent tool for breast cancer risk assessment. The procedure for volumetric density estimation consists of first correcting the image for inhomogeneity, then performing a volume density calculation. First, optical sensitometry is used to convert all images to the logarithm of relative exposure (LRE), in order to simplify the image correction operations. The field non-uniformity correction, which takes into account heel effect, inverse square law, path obliquity and intrinsic field and grid non-uniformity is obtained by imaging a spherical section PMMA phantom. The processed LRE image of the phantom is then used as a correction offset for actual mammograms. From information about the thickness and placement of the breast, as well as the parameters of a breast-like calibration step wedge placed in the mammogram, MD of the breast is calculated. Post processing and a simple calibration phantom enable user-independent, reliable and repeatable volumetric estimation of density in breast-equivalent phantoms. Initial results obtained on known density phantoms show the estimation to vary less than 5% in MD from the actual value. This can be compared to estimated mammographic density differences of 30% between the true and non-corrected values. Since a more simplistic breast density measurement based on the projected area has been shown to be a strong indicator

  18. Preliminary Studies on Predicting Models for Energy Density in Fish Body of Oreochromis niloticus

    Institute of Scientific and Technical Information of China (English)

    刘凯; 徐东坡; 段金荣; 张敏莹; 施炜纲

    2011-01-01

    A total of 93 samples of Oreochromis niloticus were selected at random from a cultured population in July and August 2009, and the energy density and related biochemical components of the muscle, liver, gonad and mesenteric fat of each sample were measured. Components significantly correlated with energy density were selected as predicting factors, and predicting equations for the energy density of the different tissues were established. The results showed highly significant linear relationships between energy density and crude fat content in both muscle and liver (P<0.01); after analysis of covariance on the regression equations, common predicting equations for the energy density of muscle and liver were established as Em = 0.196 Fm + 21.931 (r = 0.902) and El = 0.187 Fl + 19.697 (r = 0.914). A highly significant linear relationship was also found between the energy density of mesenteric fat and its dry matter content (P<0.01), described by Ef = 0.159 Df + 23.973 (r = 0.917). Statistical analysis identified crude fat content and dry matter content as the best predicting factors for ovary and testis, respectively, giving the predicting equations Eo = 0.118 Fo + 25.493 (r = 0.909) and Es = 0.268 Ds + 19.697 (r = 0.905).

  19. Prediction of crack density and electrical resistance changes in indium tin oxide/polymer thin films under tensile loading

    KAUST Repository

    Mora Cordova, Angel

    2014-06-11

    We present unified predictions for the crack onset strain, evolution of crack density, and changes in electrical resistance in indium tin oxide/polymer thin films under tensile loading. We propose a damage mechanics model to quantify and predict such changes as an alternative to fracture mechanics formulations. Our predictions are obtained by assuming that there are no flaws at the onset of loading as opposed to the assumptions of fracture mechanics approaches. We calibrate the crack onset strain and the damage model based on experimental data reported in the literature. We predict crack density and changes in electrical resistance as a function of the damage induced in the films. We implement our model in the commercial finite element software ABAQUS using a user subroutine UMAT. We obtain fair to good agreement with experiments. © The Author(s) 2014.

  20. Prediction of soil organic carbon concentration and soil bulk density of mineral soils for soil organic carbon stock estimation

    Science.gov (United States)

    Putku, Elsa; Astover, Alar; Ritz, Christian

    2016-04-01

    Soil monitoring networks provide a powerful basis for estimating and predicting a nation's soil status in many respects. The datasets of soil monitoring are often hierarchically structured, demanding sophisticated methods of data analysis. The National Soil Monitoring of Estonia is based on a hierarchical sampling scheme, with each monitoring site divided into four transects of 10 sampling points each. We hypothesized that the hierarchical structure of the Estonian Soil Monitoring network data requires a multi-level mixed model approach to achieve good prediction accuracy for soil properties. We used this database to predict the soil bulk density and soil organic carbon concentration of mineral soils in arable land using different statistical methods: a median approach, linear regression and a mixed model, and additionally random forests for SOC concentration. We compared the prediction results and selected the model with the best prediction accuracy to estimate the soil organic carbon stock. The mixed model approach achieved the best prediction accuracy for both soil organic carbon (RMSE 0.22%) and bulk density (RMSE 0.09 g cm-3). The other methods under- or overestimated the higher and lower values of the soil parameters. Using these predictions, we calculated the soil organic carbon stock of mineral arable soils and applied the model to the specific case of Tartu County in Estonia. The average estimated SOC stock of Tartu County is 54.8 t C ha-1 and the total topsoil SOC stock in the humus horizon is 1.8 Tg.

  1. Whole-brain grey matter density predicts balance stability irrespective of age and protects older adults from falling.

    Science.gov (United States)

    Boisgontier, Matthieu P; Cheval, Boris; van Ruitenbeek, Peter; Levin, Oron; Renaud, Olivier; Chanal, Julien; Swinnen, Stephan P

    2016-03-01

    Functional and structural imaging studies have demonstrated the involvement of the brain in balance control. Nevertheless, how decisive grey matter density and white matter microstructural organisation are in predicting balance stability, and especially when linked to the effects of ageing, remains unclear. Standing balance was tested on a platform moving at different frequencies and amplitudes in 30 young and 30 older adults, with eyes open and with eyes closed. Centre of pressure variance was used as an indicator of balance instability. The mean density of grey matter and mean white matter microstructural organisation were measured using voxel-based morphometry and diffusion tensor imaging, respectively. Mixed-effects models were built to analyse the extent to which age, grey matter density, and white matter microstructural organisation predicted balance instability. Results showed that both grey matter density and age independently predicted balance instability. These predictions were reinforced when the level of difficulty of the conditions increased. Furthermore, grey matter predicted balance instability beyond age and at least as consistently as age across conditions. In other words, for balance stability, the level of whole-brain grey matter density is at least as decisive as being young or old. Finally, brain grey matter appeared to be protective against falls in older adults as age increased the probability of losing balance in older adults with low, but not moderate or high grey matter density. No such results were observed for white matter microstructural organisation, thereby reinforcing the specificity of our grey matter findings.

  2. Fuzzy sets predict flexural strength and density of silicon nitride ceramics

    Science.gov (United States)

    Cios, Krzysztof J.; Sztandera, Leszek M.; Baaklini, George Y.; Vary, Alex

    1993-01-01

    In this work, we utilize fuzzy sets theory to evaluate and make predictions of flexural strength and density of NASA 6Y silicon nitride ceramic. Processing variables of milling time, sintering time, and sintering nitrogen pressure are used as an input to the fuzzy system. Flexural strength and density are the output parameters of the system. Data from 273 Si3N4 modulus of rupture bars tested at room temperature and 135 bars tested at 1370 C are used in this study. Generalized mean operator and Hamming distance are utilized to build the fuzzy predictive model. The maximum test error for density does not exceed 3.3 percent, and for flexural strength 7.1 percent, as compared with the errors of 1.72 percent and 11.34 percent obtained by using neural networks, respectively. These results demonstrate that fuzzy sets theory can be incorporated into the process of designing materials, such as ceramics, especially for assessing more complex relationships between the processing variables and parameters, like strength, which are governed by randomness of manufacturing processes.

  3. A new density model of Cryptomeria fortunei plantation

    Institute of Scientific and Technical Information of China (English)

    Jiang Xidian; Huang Langzeng; Chen Baohui

    2006-01-01

    According to the volume growth model of the average individual tree in a plant population and the theory of constant final yield, we put forward a new density model for a plant population: V^(-β) = A·N^β + B, where N is the stand density, V is the average individual tree volume, and A, B and β are parameters that change with growth stage. Using density observations from standard plots of a Cryptomeria fortunei plantation to verify the new model, it turns out that the model simulates the density effect in the C. fortunei plantation well, and it is markedly better and more accurate than the commonly used reciprocal density-effect model and the quadratic density-effect model. Setting β = 1 recovers the reciprocal density-effect model, which means the reciprocal model is only a special case of this new model.
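
    The abstract gives the model but not the fitting procedure; the sketch below fits V^(-β) = A·N^β + B to hypothetical stand data by a grid search over β with a linear least-squares solve for A and B at each candidate β. The data and the search range are assumptions for illustration.

```python
import numpy as np

# Hypothetical stand data: density N (trees per ha) and mean individual-tree volume V (m^3).
N = np.array([1000.0, 1500.0, 2000.0, 3000.0, 4500.0, 6000.0])
V = np.array([0.40, 0.29, 0.23, 0.16, 0.11, 0.09])

def fit_density_model(N, V, betas=np.linspace(0.5, 2.0, 151)):
    """Fit V**(-beta) = A*N**beta + B; for fixed beta the model is linear in (A, B)."""
    best = None
    for beta in betas:
        X = np.column_stack([N ** beta, np.ones_like(N)])
        (A, B), *_ = np.linalg.lstsq(X, V ** (-beta), rcond=None)
        V_hat = (A * N ** beta + B) ** (-1.0 / beta)
        sse = np.sum((V - V_hat) ** 2)
        if best is None or sse < best[0]:
            best = (sse, A, B, beta)
    return best[1:]

A, B, beta = fit_density_model(N, V)
print(f"V^(-beta) = {A:.3e} * N^beta + {B:.3f},  beta = {beta:.2f}")
# Setting beta = 1 reduces the model to the reciprocal density-effect law 1/V = A*N + B.
```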

  4. Prediction of crosslink density of solid propellant binders. [curing of elastomers

    Science.gov (United States)

    Marsh, H. E., Jr.

    1976-01-01

    A quantitative theory is outlined that allows calculation of the crosslink density of solid propellant binders from a small number of predetermined parameters such as the binder composition, the functionality distributions of the ingredients, and the extent of the curing reaction. The parameter that is partly dependent on process conditions is the extent of reaction. The proposed theoretical model is verified by independent measurements of effective chain concentration and sol and gel fractions in simple compositions prepared from model compounds. The model is shown to correlate tensile data with composition for a urethane-cured polyether and certain solid propellants. A formula for the branching coefficient is provided: given the functionality distributions of the ingredients, the corresponding equivalent weights, and a measured or predicted extent of reaction, the branching coefficient of such a system can be calculated for any desired composition.

  5. PREDICT : model for prediction of survival in localized prostate cancer

    NARCIS (Netherlands)

    Kerkmeijer, Linda G W; Monninkhof, Evelyn M.; van Oort, Inge M.; van der Poel, Henk G.; de Meerleer, Gert; van Vulpen, Marco

    2016-01-01

    Purpose: Current models for prediction of prostate cancer-specific survival do not incorporate all present-day interventions. In the present study, a pre-treatment prediction model for patients with localized prostate cancer was developed. Methods: From 1989 to 2008, 3383 patients were treated with I

  6. Application of Nonlinear Predictive Control Based on RBF Network Predictive Model in MCFC Plant

    Institute of Scientific and Technical Information of China (English)

    CHEN Yue-hua; CAO Guang-yi; ZHU Xin-jian

    2007-01-01

    This paper describes a nonlinear model predictive controller for regulating a molten carbonate fuel cell (MCFC). A detailed mechanistic model of the MCFC output voltage is presented first; however, this model is too complicated to be used directly in a control system. Consequently, an off-line radial basis function (RBF) network is introduced to build a nonlinear predictive model, and the optimal control sequences are then obtained by applying the golden mean method. The models and controller have been realized in the MATLAB environment. Simulation results indicate that the proposed algorithm exhibits a satisfactory control effect even when the current densities vary over a wide range.

  7. A Galvanostatic Modeling for Preparation of Electrodeposited Nanocrystalline Coatings by Control of Current Density

    Institute of Scientific and Technical Information of China (English)

    Ali Mohammad Rashidi

    2012-01-01

    The correlation between the grain size of electrodeposited coatings and the current density was modeled under galvanostatic conditions. In order to test the model against experimental results, nanocrystalline (NC) nickel samples were deposited at different current densities using a Watts bath. The grain size of the deposits was evaluated by the X-ray diffraction (XRD) technique. Model predictions were validated by fitting a best-fit curve to experimental results gathered from the literature for different NC coatings, in addition to the data measured in this research for NC nickel coatings. According to our model, the variation of grain size with the reciprocal of the current density follows a power law. Good agreement between the experimental results and the model predictions was observed, which indicates that the derived analytical model is applicable for producing nanocrystalline electrodeposits with a desired grain size by controlling the current density.
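
    The stated power law between grain size and the reciprocal of the current density can be fitted by simple linear regression in log-log space; the sketch below does this and then inverts the fit to pick a deposition current density for a target grain size. The measurements, units and target value are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Hypothetical measurements: current density j (A/dm^2) and XRD grain size d (nm).
j = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
d = np.array([62.0, 48.0, 34.0, 27.0, 21.0])

# Power law d = C * (1/j)**n  =>  log d = log C + n * log(1/j); fit by linear least squares.
n, logC = np.polyfit(np.log(1.0 / j), np.log(d), 1)
C = np.exp(logC)
print(f"d ~ {C:.1f} * (1/j)^{n:.2f}  (grain size in nm)")

# Invert the fit to choose the current density giving a desired grain size, e.g. 30 nm.
target = 30.0
j_required = 1.0 / (target / C) ** (1.0 / n)
print(f"required current density ~ {j_required:.1f} A/dm^2")
```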

  8. Predictive Modeling of Cardiac Ischemia

    Science.gov (United States)

    Anderson, Gary T.

    1996-01-01

    The goal of the Contextual Alarms Management System (CALMS) project is to develop sophisticated models to predict the onset of clinical cardiac ischemia before it occurs. The system will continuously monitor cardiac patients and set off an alarm when they appear about to suffer an ischemic episode. The models take as inputs information from patient history and combine it with continuously updated information extracted from blood pressure, oxygen saturation and ECG lines. Expert system, statistical, neural network and rough set methodologies are then used to forecast the onset of clinical ischemia before it transpires, thus allowing early intervention aimed at preventing morbid complications from occurring. The models will differ from previous attempts by including combinations of continuous and discrete inputs. A commercial medical instrumentation and software company has invested funds in the project with a goal of commercialization of the technology. The end product will be a system that analyzes physiologic parameters and produces an alarm when myocardial ischemia is present. If proven feasible, a CALMS-based system will be added to existing heart monitoring hardware.

  9. Propulsion Physics Under the Changing Density Field Model

    Science.gov (United States)

    Robertson, Glen A.

    2011-01-01

    To grow as a space-faring race, future spaceflight systems will require new propulsion physics; specifically, a propulsion physics model that does not require mass ejection, without limiting the high thrust necessary to accelerate within or beyond our solar system and return within a normal work period or lifetime. In 2004 Khoury and Weltman produced a density-dependent cosmology theory they called Chameleon Cosmology, as, by its nature, it is hidden within known physics. This theory represents a scalar field within and about an object, even in the vacuum. These scalar fields can be viewed as vacuum energy fields with definable densities that permeate all matter, having implications for dark matter/energy and universe acceleration properties, and implying a new force mechanism for propulsion physics. Using Chameleon Cosmology, the author has developed a new propulsion physics model, called the Changing Density Field (CDF) Model. This model relates to density changes in these density fields, where the changes in field density are related to the acceleration of matter within an object. These density changes in turn change how an object couples to the surrounding density fields. Thrust is achieved by causing a differential in the coupling to these density fields about an object. Since the model indicates that the density of the density field in an object can be changed by internal mass acceleration, even without exhausting mass, the CDF model implies a new propellant-less propulsion physics model.

  10. Numerical weather prediction model tuning via ensemble prediction system

    Science.gov (United States)

    Jarvinen, H.; Laine, M.; Ollinaho, P.; Solonen, A.; Haario, H.

    2011-12-01

    This paper discusses a novel approach to tune the predictive skill of numerical weather prediction (NWP) models. NWP models contain tunable parameters which appear in parameterization schemes of sub-grid scale physical processes. Currently, numerical values of these parameters are specified manually. In a recent dual manuscript (QJRMS, revised) we developed a new concept and method for on-line estimation of the NWP model parameters. The EPPES ("Ensemble prediction and parameter estimation system") method requires only minimal changes to the existing operational ensemble prediction infrastructure and it seems very cost-effective because practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating each member of the ensemble of predictions using different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In the presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to an improved forecast skill. Second, results with an ensemble prediction system based on an atmospheric general circulation model show that the NWP model tuning capacity of EPPES scales up to realistic models and ensemble prediction systems. Finally, a global top-end NWP model tuning exercise with preliminary results is presented.
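
    The two-step loop described in the abstract (draw member-specific parameters from a proposal, then feed verification-based weights back into the proposal) can be sketched as below; this is an illustrative reading of the EPPES idea, not the operational implementation, and the parameter dimensions, scores and jitter term are assumptions.

```python
import numpy as np

def draw_members(mean, cov, n_members, rng):
    """Step (i): one parameter vector per ensemble member, drawn from the proposal."""
    return rng.multivariate_normal(mean, cov, size=n_members)

def eppes_step(member_params, member_scores, dim):
    """Step (ii): update the proposal from likelihood-type weights of the members."""
    w = np.asarray(member_scores, dtype=float)
    w /= w.sum()
    new_mean = w @ member_params
    centred = member_params - new_mean
    new_cov = (w[:, None] * centred).T @ centred + 1e-6 * np.eye(dim)  # keep it positive definite
    return new_mean, new_cov

# Illustrative cycle with two tunable closure parameters and random verification scores.
rng = np.random.default_rng(0)
mean, cov = np.array([1.0, 0.5]), np.diag([0.1, 0.05])
for _ in range(5):                                   # five forecast/verification cycles
    params = draw_members(mean, cov, n_members=20, rng=rng)
    scores = rng.uniform(size=20)                    # stand-in for forecast-verification weights
    mean, cov = eppes_step(params, scores, dim=2)
print(mean)
```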

  11. Compensation in Root Water Uptake Models Combined with Three-Dimensional Root Length Density Distribution

    NARCIS (Netherlands)

    Heinen, M.

    2014-01-01

    A three-dimensional root length density distribution function is introduced that made it possible to compare two empirical uptake models with a more mechanistic uptake model. Adding a compensation component to the more empirical model resulted in predictions of root water uptake distributions

  12. Return Predictability, Model Uncertainty, and Robust Investment

    DEFF Research Database (Denmark)

    Lukas, Manuel

    Stock return predictability is subject to great uncertainty. In this paper we use the model confidence set approach to quantify uncertainty about expected utility from investment, accounting for potential return predictability. For monthly US data and six representative return prediction models, we...

  13. Modelling and prediction of non-stationary optical turbulence behaviour

    Science.gov (United States)

    Doelman, Niek; Osborn, James

    2016-07-01

    There is a strong need to model the temporal fluctuations in turbulence parameters, for instance for scheduling, simulation and prediction purposes. This paper aims at modelling the dynamic behaviour of the turbulence coherence length r0, utilising measurement data from the Stereo-SCIDAR instrument installed at the Isaac Newton Telescope at La Palma. Based on an estimate of the power spectral density function, a low order stochastic model to capture the temporal variability of r0 is proposed. The impact of this type of stochastic model on the prediction of the coherence length behaviour is shown.
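    A low-order stochastic model of this kind can be sketched as a first-order autoregressive process fitted to the log of the coherence length; the snippet below uses a synthetic stand-in series and arbitrary parameters, not the model identified from the Stereo-SCIDAR data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a measured r0 time series (metres), sampled at fixed intervals
r0 = 0.12 * np.exp(0.2 * np.sin(np.linspace(0, 20, 500)) + 0.05 * rng.standard_normal(500))
x = np.log(r0) - np.log(r0).mean()            # work with zero-mean log(r0)

# Fit AR(1): x[t+1] = phi * x[t] + noise  (phi from the lag-1 autocorrelation)
phi = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])
sigma_e = np.std(x[1:] - phi * x[:-1])

# k-step-ahead prediction and its growing uncertainty
k = 10
x_pred = phi ** k * x[-1]
var_pred = sigma_e ** 2 * (1 - phi ** (2 * k)) / (1 - phi ** 2)
r0_pred = np.exp(x_pred + np.log(r0).mean())
print(f"phi={phi:.3f}, {k}-step r0 forecast = {r0_pred:.3f} m "
      f"(log-space std {np.sqrt(var_pred):.3f})")
```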

  14. Predictive Model Assessment for Count Data

    Science.gov (United States)

    2007-09-05

    Critiques count regression models for patent data and assesses the predictive performance of Bayesian age-period-cohort models for larynx cancer counts in Germany, 1998-2002, considering a recent suggestion by Baker and... (Residual figure/table captions in the record: boxplots of various scores for the patent data count regressions; four predictive models for larynx cancer counts in Germany, 1998-2002.)

  15. Predicting the relative binding affinity of mineralocorticoid receptor antagonists by density functional methods

    Science.gov (United States)

    Roos, Katarina; Hogner, Anders; Ogg, Derek; Packer, Martin J.; Hansson, Eva; Granberg, Kenneth L.; Evertsson, Emma; Nordqvist, Anneli

    2015-12-01

    In drug discovery, prediction of binding affinity ahead of synthesis to aid compound prioritization is still hampered by the low throughput of the more accurate methods and the lack of general pertinence of one method that fits all systems. Here we show the applicability of a method based on density functional theory, using core fragments and a protein model with only the first-shell residues surrounding the core, to predict the relative binding affinity of a matched series of mineralocorticoid receptor (MR) antagonists. Antagonists of MR are used for treatment of chronic heart failure and hypertension. The marketed MR antagonists spironolactone and eplerenone are also believed to be highly efficacious in the treatment of chronic kidney disease in diabetes patients, but are contraindicated due to the increased risk of hyperkalemia. These findings and a significant unmet medical need among patients with chronic kidney disease continue to stimulate efforts in the discovery of new MR antagonists with maintained efficacy but low or no risk of hyperkalemia. Applied to a matched series of MR antagonists, the quantum mechanical method gave an R2 = 0.76 for the experimental lipophilic ligand efficiency versus the relative predicted binding affinity calculated with the M06-2X functional in gas phase, and an R2 = 0.64 for the experimental binding affinity versus the relative predicted binding affinity calculated with the M06-2X functional including an implicit solvation model. The quantum mechanical approach using core fragments was compared to free energy perturbation calculations using the full-sized compound structures.

  16. Near infrared spectroscopic calibration models for real time monitoring of powder density.

    Science.gov (United States)

    Román-Ospino, Andrés D; Singh, Ravendra; Ierapetritou, Marianthi; Ramachandran, Rohit; Méndez, Rafael; Ortega-Zuñiga, Carlos; Muzzio, Fernando J; Romañach, Rodolfo J

    2016-10-15

    Near infrared spectroscopic (NIRS) calibration models for real-time prediction of powder density (tap, bulk and consolidated) were developed for a pharmaceutical formulation. Powder density is a critical property in the manufacturing of solid oral dosages, related to critical quality attributes such as tablet mass, hardness and dissolution. The establishment of calibration techniques for powder density is highly desired for the development of control strategies. Three techniques were evaluated to obtain the required variation in powder density for the calibration sets: 1) different tap density levels (for a single component), 2) generating different strain levels in powder blends (and, as a consequence, different powder densities) through a modified shear Couette cell, and 3) applying normal forces to a pharmaceutical blend during a compressibility test with a powder rheometer. For each variation in powder density, near infrared spectra were acquired to develop partial least squares (PLS) calibration models. Test samples were predicted with relative standard errors of prediction of 0.38%, 7.65% and 0.93% for the tap density (single component), shear, and rheometer approaches, respectively. Spectra obtained in real time in a continuous manufacturing (CM) plant were compared to the spectra from the three approaches used to vary powder density. The calibration based on the application of different strain levels showed the greatest similarity with the blends produced in the CM plant.
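    A PLS calibration of the kind described can be sketched with scikit-learn; the spectra, density values, number of latent variables, and the RSEP formula used below are illustrative placeholders rather than the study's actual data or protocol.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Synthetic stand-in: 120 NIR spectra (600 wavelengths) with density encoded in the baseline
density = rng.uniform(0.45, 0.75, size=120)           # g/mL, placeholder range
spectra = (rng.standard_normal((120, 600)) * 0.01
           + np.outer(density, np.linspace(0.5, 1.5, 600)))

X_cal, X_test, y_cal, y_test = train_test_split(spectra, density, random_state=0)

pls = PLSRegression(n_components=3)                   # number of latent variables: a guess
pls.fit(X_cal, y_cal)
y_hat = pls.predict(X_test).ravel()

# Relative standard error of prediction (%), one common definition
rsep = 100 * np.sqrt(np.sum((y_hat - y_test) ** 2) / np.sum(y_test ** 2))
print(f"RSEP = {rsep:.2f} %")
```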

  17. Improving band gap prediction in density functional theory from molecules to solids.

    Science.gov (United States)

    Zheng, Xiao; Cohen, Aron J; Mori-Sánchez, Paula; Hu, Xiangqian; Yang, Weitao

    2011-07-08

    A novel nonempirical scaling correction method is developed to tackle the challenge of band gap prediction in density functional theory. For finite systems the scaling correction largely restores the straight-line behavior of electronic energy at fractional electron numbers. The scaling correction can be generally applied to a variety of mainstream density functional approximations, leading to significant improvement in the band gap prediction. In particular, the scaled version of a modified local density approximation predicts band gaps with an accuracy consistent for systems of all sizes, ranging from atoms and molecules to solids. The scaled modified local density approximation thus provides a useful tool to quantitatively characterize the size-dependent effect on the energy gaps of nanostructures.

  18. Protein distance constraints predicted by neural networks and probability density functions

    DEFF Research Database (Denmark)

    Lund, Ole; Frimand, Kenneth; Gorodkin, Jan

    1997-01-01

    We predict interatomic C-α distances by two independent data-driven methods. The first method uses statistically derived probability distributions of the pairwise distance between two amino acids, whilst the second consists of a neural network prediction approach equipped with windows taking... The predictions are based on a data set derived using a new similarity threshold. We show that distances in proteins are predicted more accurately by neural networks than by probability density functions. We show that the accuracy of the predictions can be further increased by using sequence profiles. A threading...

  19. Predicting Chemical Reactivity from the Charge Density through Gradient Bundle Analysis: Moving beyond Fukui Functions.

    Science.gov (United States)

    Morgenstern, Amanda; Wilson, Timothy R; Eberhart, M E

    2017-06-08

    Predicting chemical reactivity is a major goal of chemistry. Toward this end, atom condensed Fukui functions of conceptual density functional theory have been used to predict which atom is most likely to undergo electrophilic or nucleophilic attack, providing regioselectivity information. We show that the most probable regions for electrophilic attack within each atom can be predicted through analysis of gradient bundle volumes, a property that depends only on the charge density of the neutral molecules. We also introduce gradient bundle condensed Fukui functions to compare the stereoselectivity information obtained from gradient bundle volume analysis. We demonstrate this method using the test set of molecular fluorine, oxygen, nitrogen, carbon monoxide, and hydrogen cyanide.

  20. Modelling of density limit phenomena in toroidal helical plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Itoh, K. [National Inst. for Fusion Science, Toki, Gifu (Japan); Itoh, S.-I. [Kyushu Univ., Fukuoka (Japan). Research Inst. for Applied Mechanics; Giannone, L. [Max Planck Institut fuer Plasmaphysik, EURATOM-IPP Association, Garching (Germany)

    2000-03-01

    The physics of density limit phenomena in toroidal helical plasmas is discussed based on an analytic point model of toroidal plasmas. The combined mechanism of the transport and radiation loss of energy is analyzed, and the achievable density is derived. A scaling law for the density limit is discussed. The dependence of the critical density on the heating power, magnetic field, plasma size and safety factor in the case of L-mode energy confinement is explained. The dynamic evolution of the plasma energy and radiation loss is discussed. Assuming a simple model of density evolution, in which density is suddenly lost if the temperature falls below a critical value, a limit cycle oscillation is shown to occur. A condition that divides the limit cycle oscillation and the complete radiation collapse is discussed. This model seems to explain the density limit oscillation that has been observed on the W7-AS stellarator. (author)

  1. Modelling of density limit phenomena in toroidal helical plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Itoh, Kimitaka [National Inst. for Fusion Science, Toki, Gifu (Japan); Itoh, Sanae-I. [Kyushu Univ., Fukuoka (Japan). Research Inst. for Applied Mechanics; Giannone, Louis [EURATOM-IPP Association, Max Planck Institut fuer Plasmaphysik, Garching (Germany)

    2001-11-01

    The physics of density limit phenomena in toroidal helical plasmas is discussed based on an analytic point model of toroidal plasmas. The combined mechanism of the transport and radiation loss of energy is analyzed, and the achievable density is derived. A scaling law for the density limit is discussed. The dependence of the critical density on the heating power, magnetic field, plasma size and safety factor in the case of L-mode energy confinement is explained. The dynamic evolution of the plasma energy and radiation loss is discussed. Assuming a simple model of density evolution, in which density is suddenly lost if the temperature falls below a critical value, a limit cycle oscillation is shown to occur. A condition that divides the limit cycle oscillation and the complete radiation collapse is discussed. This model seems to explain the density limit oscillation that has been observed on the Wendelstein 7-AS (W7-AS) stellarator. (author)

  2. Predicting microRNA precursors with a generalized Gaussian components based density estimation algorithm

    Directory of Open Access Journals (Sweden)

    Wu Chi-Yeh

    2010-01-01

    Full Text Available Abstract Background MicroRNAs (miRNAs) are short non-coding RNA molecules, which play an important role in post-transcriptional regulation of gene expression. There have been many efforts to discover miRNA precursors (pre-miRNAs) over the years. Recently, ab initio approaches have attracted more attention because they do not depend on homology information and provide broader applications than comparative approaches. Kernel based classifiers such as the support vector machine (SVM) are extensively adopted in these ab initio approaches due to the prediction performance they achieve. On the other hand, logic based classifiers such as decision trees, of which the constructed model is interpretable, have attracted less attention. Results This article reports the design of a predictor of pre-miRNAs with a novel kernel based classifier named the generalized Gaussian density estimator (G2DE) based classifier. The G2DE is a kernel based algorithm designed to provide interpretability by utilizing a few but representative kernels for constructing the classification model. The performance of the proposed predictor has been evaluated with 692 human pre-miRNAs and has been compared with two kernel based and two logic based classifiers. The experimental results show that the proposed predictor is capable of achieving prediction performance comparable to those delivered by the prevailing kernel based classification algorithms, while providing the user with an overall picture of the distribution of the data set. Conclusion Software predictors that identify pre-miRNAs in genomic sequences have been exploited by biologists to facilitate molecular biology research in recent years. The G2DE employed in this study can deliver prediction accuracy comparable with the state-of-the-art kernel based machine learning algorithms. Furthermore, biologists can obtain valuable insights about the different characteristics of the sequences of pre-miRNAs with the models generated by the G
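    The flavour of classifying by class-conditional density estimation with a few representative kernels can be conveyed with a per-class Gaussian mixture; this is only a stand-in for the G2DE algorithm itself, and the feature vectors below are random placeholders rather than real pre-miRNA descriptors.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)

# Placeholder feature vectors (e.g., hairpin free energy, stem length, GC content)
X_pos = rng.normal(loc=[0.0, 1.0, 0.5], scale=0.3, size=(300, 3))   # pre-miRNA-like
X_neg = rng.normal(loc=[1.0, 0.0, 0.0], scale=0.3, size=(300, 3))   # background

# Fit a small, interpretable number of components per class
gm_pos = GaussianMixture(n_components=2, random_state=0).fit(X_pos)
gm_neg = GaussianMixture(n_components=2, random_state=0).fit(X_neg)

def predict(x):
    """Classify by comparing class-conditional log densities (equal priors assumed)."""
    x = np.atleast_2d(x)
    return np.where(gm_pos.score_samples(x) > gm_neg.score_samples(x), "pre-miRNA", "other")

print(predict([[0.1, 0.9, 0.6], [1.1, -0.1, 0.0]]))
```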

  3. Nonlinear chaotic model for predicting storm surges

    Directory of Open Access Journals (Sweden)

    M. Siek

    2010-09-01

    Full Text Available This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables. We implemented the univariate and multivariate chaotic models with direct and multi-step prediction techniques and optimized these models using an exhaustive search method. The built models were tested for predicting storm surge dynamics for different stormy conditions in the North Sea, and are compared to neural network models. The results show that the chaotic models can generally provide reliable and accurate short-term storm surge predictions.
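    The adaptive local-model idea (time-delay embedding plus prediction from dynamical neighbours) can be sketched as follows; the surge series is synthetic and the embedding parameters are arbitrary choices, not those optimised in the paper.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(4)

# Synthetic stand-in for an observed surge time series
t = np.linspace(0, 60, 3000)
series = np.sin(t) + 0.5 * np.sin(2.3 * t) + 0.05 * rng.standard_normal(t.size)

m, tau, horizon = 4, 5, 3   # embedding dimension, delay, prediction horizon (chosen ad hoc)

# Reconstruct the phase space with delay vectors and record each vector's future value
idx = np.arange((m - 1) * tau, series.size - horizon)
X = np.column_stack([series[idx - j * tau] for j in range(m)])
y = series[idx + horizon]

# Local model: average the futures of the k nearest dynamical neighbours
knn = NearestNeighbors(n_neighbors=10).fit(X[:-1])
query = X[-1].reshape(1, -1)
_, nbrs = knn.kneighbors(query)
prediction = y[nbrs[0]].mean()
print(f"{horizon}-step local-model prediction: {prediction:.3f} (actual {y[-1]:.3f})")
```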

  4. Nonlinear chaotic model for predicting storm surges

    NARCIS (Netherlands)

    Siek, M.; Solomatine, D.P.

    This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by the adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables.

  5. EFFICIENT PREDICTIVE MODELLING FOR ARCHAEOLOGICAL RESEARCH

    OpenAIRE

    Balla, A.; Pavlogeorgatos, G.; Tsiafakis, D.; Pavlidis, G.

    2014-01-01

    The study presents a general methodology for designing, developing and implementing predictive modelling for identifying areas of archaeological interest. The methodology is based on documented archaeological data and geographical factors, geospatial analysis and predictive modelling, and has been applied to the identification of possible Macedonian tombs’ locations in Northern Greece. The model was tested extensively and the results were validated using a commonly used predictive gain,...

  6. How to Establish Clinical Prediction Models

    Directory of Open Access Journals (Sweden)

    Yong-ho Lee

    2016-03-01

    Full Text Available A clinical prediction model can be applied to several challenging clinical scenarios: screening high-risk individuals for asymptomatic disease, predicting future events such as disease or death, and assisting medical decision-making and health education. Despite the impact of clinical prediction models on practice, prediction modeling is a complex process requiring careful statistical analyses and sound clinical judgement. Although there is no definite consensus on the best methodology for model development and validation, a few recommendations and checklists have been proposed. In this review, we summarize five steps for developing and validating a clinical prediction model: preparation for establishing clinical prediction models; dataset selection; handling variables; model generation; and model evaluation and validation. We also review several studies that detail methods for developing clinical prediction models with comparable examples from real practice. After model development and vigorous validation in relevant settings, possibly with evaluation of utility/usability and fine-tuning, good models can be ready for use in practice. We anticipate that this framework will revitalize the use of predictive or prognostic research in endocrinology, leading to active applications in real clinical practice.
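    The model-generation and evaluation steps can be illustrated with a minimal logistic-regression example; the predictors, data, and split strategy below are hypothetical stand-ins for a properly designed clinical study, and only discrimination (AUC) is shown.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)

# Hypothetical dataset with three candidate predictors (age, biomarker, BMI)
n = 1000
X = np.column_stack([rng.normal(55, 10, n), rng.normal(1.0, 0.3, n), rng.normal(27, 4, n)])
risk = 1 / (1 + np.exp(-(-8 + 0.08 * X[:, 0] + 2.0 * X[:, 1])))
y = rng.binomial(1, risk)

# Model generation on a development set
X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_dev, y_dev)

# Evaluation/validation: discrimination via AUC (calibration checks would follow)
auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"validation AUC = {auc:.2f}")
```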

  7. A Weakly Nonlinear Model for the Damping of Resonantly Forced Density Waves in Dense Planetary Rings

    Science.gov (United States)

    Lehmann, Marius; Schmidt, Jürgen; Salo, Heikki

    2016-10-01

    In this paper, we address the stability of resonantly forced density waves in dense planetary rings. Goldreich & Tremaine have already argued that density waves might be unstable, depending on the relationship between the ring's viscosity and the surface mass density. In a recent paper (Schmidt et al.), we pointed out that when—within a fluid description of the ring dynamics—the criterion for viscous overstability is satisfied, forced spiral density waves become unstable as well. In this case, linear theory fails to describe the damping, but the nonlinearity of the underlying equations guarantees a finite amplitude and eventually a damping of the wave. We apply the multiple scale formalism to derive a weakly nonlinear damping relation from a hydrodynamical model. This relation describes the resonant excitation and nonlinear viscous damping of spiral density waves in a vertically integrated fluid disk with density-dependent transport coefficients. The model consistently predicts density waves to be (linearly) unstable in a ring region where the conditions for viscous overstability are met. Sufficiently far away from the Lindblad resonance, the surface mass density perturbation is predicted to saturate to a constant value due to nonlinear viscous damping. The damping lengths of the waves in the model depend on certain input parameters, such as the distance to the threshold for viscous overstability in parameter space and the ground state surface mass density.

  8. Modeling reservoir density underflow and interflow from a chemical spill

    Science.gov (United States)

    Gu, R.; McCutcheon, S.C.; Wang, P.-F.

    1996-01-01

    An integral simulation model has been developed for understanding and simulating the process of a density current and the transport of spilled chemicals in a stratified reservoir. The model is capable of describing flow behavior and mixing mechanisms in different flow regimes (plunging flow, underflow, and interflow). It computes flow rate, velocity, flow thickness, mixing parameterized by entrainment and dilution, depths of plunging, separation and intrusion, and time of travel. The model was applied to the Shasta Reservoir in northern California during the July 1991 Sacramento River chemical spill. The simulations were used to assist in the emergency response, confirm remediation measures, and guide data collection. Spill data that were available after the emergency response are used to conduct a postaudit of the model results. Predicted flow parameters are presented and compared with observed interflow intrusion depth, travel time, and measured concentrations of spilled chemicals. In the reservoir, temperature difference between incoming river flow and ambient lake water played a dominant role during the processes of flow plunging, separation, and intrusion. With the integral approach, the gross flow behavior can be adequately described and information useful in the analysis of contaminated flow in a reservoir after a spill is provided.

  9. Highway traffic model-based density estimation

    OpenAIRE

    Morarescu, Irinel - Constantin; CANUDAS DE WIT, Carlos

    2011-01-01

    International audience; The travel time spent in traffic networks is one of the main concerns of societies in developed countries. A major requirement for providing traffic control and services is the continuous prediction of the traffic state for several minutes into the future. This paper focuses on an important ingredient necessary for traffic forecasting: real-time traffic state estimation using only a limited amount of data. Simulation results illustrate the performances of the proposed ...

  10. Measurements and predictions of the air distribution systems in high compute density (Internet) data centers

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jinkyun [HIMEC (Hanil Mechanical Electrical Consultants) Ltd., Seoul 150-103 (Korea); Department of Architectural Engineering, Yonsei University, Seoul 120-749 (Korea); Lim, Taesub; Kim, Byungseon Sean [Department of Architectural Engineering, Yonsei University, Seoul 120-749 (Korea)

    2009-10-15

    When equipment power density increases, a critical goal of a data center cooling system is to separate the equipment exhaust air from the equipment intake air in order to prevent the IT server from overheating. Cooling systems for data centers are primarily differentiated according to the way they distribute air. The six combinations of flooded and locally ducted air distribution make up the vast majority of all installations, except fully ducted air distribution methods. Once the air distribution system (ADS) is selected, there are other elements that must be integrated into the system design. In this research, the design parameters and IT environmental aspects of the cooling system were studied with a high heat density data center. CFD simulation analysis was carried out in order to compare the heat removal efficiencies of various air distribution systems. The IT environment of an actual operating data center is measured to validate a model for predicting the effect of different air distribution systems. A method for planning and design of the appropriate air distribution system is described. IT professionals versed in precision air distribution mechanisms, components, and configurations can work more effectively with mechanical engineers to ensure the specification and design of optimized cooling solutions. (author)

  11. Thermodynamic prediction of glass formation tendency, cluster-in-jellium model for metallic glasses, ab initio tight-binding calculations, and new density functional theory development for systems with strong electron correlation

    Energy Technology Data Exchange (ETDEWEB)

    Yao, Yongxin [Iowa State Univ., Ames, IA (United States)

    2009-01-01

    Solidification of a liquid is a very rich and complicated field, although there is always the famous homogeneous nucleation theory in a standard physics or materials science textbook. Depending on the material and processing conditions, a liquid may solidify to a single crystal, a polycrystal with different textures, a quasi-crystal, or an amorphous solid or glass (a glass is a kind of amorphous solid that in general has short-range and medium-range order). Traditional oxide glass may easily be formed since the covalent, directionally bonded network is apt to be disturbed. In other words, the energy landscape of the oxide glass is so complicated that the system needs an extremely long time to explore the whole configuration space. On the other hand, metallic liquids usually crystallize upon cooling because of the nature of metallic bonding. However, Klement et al. (1960) reported that an Au-Si liquid underwent an amorphous or "glassy" phase transformation with rapid quenching. In the recent two decades, bulk metallic glasses have also been found in several multicomponent alloys [Inoue et al. (2002)]. Both thermodynamic factors (e.g., free energies of the various competing phases, interfacial free energy, free energies of local clusters, etc.) and kinetic factors (e.g., long-range mass transport, local atomic position rearrangement, etc.) play important roles in the metallic glass formation process. Metallic glass is fundamentally different from nanocrystalline alloys. Metallic glasses have to undergo a nucleation process upon heating in order to crystallize. Thus the short-range and medium-range order of metallic glasses has to be completely different from that of a crystal. Hence a method to calculate the energetics of different local clusters in the undercooled liquid or glass becomes important for setting up a statistical model to describe metallic glass formation. Scattering techniques such as x-ray and neutron scattering have been widely used to study the structures of metallic glasses. Meanwhile, computer simulation

  12. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which ... is a realization of a continuous-discrete multivariate stochastic transfer function model. The proposed prediction-error methods are demonstrated for a SISO system parameterized by the transfer functions with time delays of a continuous-discrete-time linear stochastic system. The simulations for this case suggest ... computational resources. The identification method is suitable for predictive control.
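    The single-step prediction-error idea can be sketched for a scalar discrete-time state-space model: the Kalman filter supplies one-step predictions, and the maximum-likelihood criterion is the Gaussian log-likelihood of the resulting innovations. The model structure and numbers below are illustrative only, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

# Scalar state-space model: x[k+1] = a x[k] + w,  y[k] = x[k] + v  (illustrative values)
a_true, q, r, n = 0.9, 0.05, 0.1, 400
x = np.zeros(n); y = np.zeros(n)
for k in range(1, n):
    x[k] = a_true * x[k - 1] + np.sqrt(q) * rng.standard_normal()
    y[k] = x[k] + np.sqrt(r) * rng.standard_normal()

def neg_loglik(a):
    """Gaussian negative log-likelihood of the one-step Kalman prediction errors."""
    xhat, p, nll = 0.0, 1.0, 0.0
    for k in range(n):
        e = y[k] - xhat                      # innovation (one-step prediction error)
        s = p + r                            # innovation variance
        nll += 0.5 * (np.log(2 * np.pi * s) + e ** 2 / s)
        kgain = p / s
        xhat, p = a * (xhat + kgain * e), a ** 2 * (1 - kgain) * p + q   # time update
    return nll

grid = np.linspace(0.5, 0.99, 50)
a_ml = grid[np.argmin([neg_loglik(a) for a in grid])]
print(f"ML estimate of a: {a_ml:.2f} (true {a_true})")
```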

  13. Compressibility of water in magma and the prediction of density crossovers in mantle differentiation.

    Science.gov (United States)

    Agee, Carl B

    2008-11-28

    Hydrous silicate melts appear to have greater compressibility relative to anhydrous melts of the same composition at low pressures (... planetary differentiation. From these compression curves, crystal-liquid density crossovers are predicted for the mantles of the Earth and Mars. For the Earth, trapped dense hydrous melts may reside atop the 410 km discontinuity, and, although not required to be hydrous, atop the core-mantle boundary (CMB), in accord with seismic observations of low-velocity zones in these regions. For Mars, a density crossover at the base of the upper mantle is predicted, which would produce a low-velocity zone at a depth of approximately 1200 km. If perovskite is stable at the base of the Martian mantle, then density crossovers or trapped dense hydrous melts are unlikely to reside there, and long-lived, melt-induced, low-velocity regions atop the CMB are not predicted.

  14. Habitat-Based Density Models for Three Cetacean Species off Southern California Illustrate Pronounced Seasonal Differences

    Directory of Open Access Journals (Sweden)

    Elizabeth A. Becker

    2017-05-01

    Full Text Available Managing marine species effectively requires spatially and temporally explicit knowledge of their density and distribution. Habitat-based density models, a type of species distribution model (SDM) that uses habitat covariates to estimate species density and distribution patterns, are increasingly used for marine management and conservation because they provide a tool for assessing potential impacts (e.g., from fishery bycatch, ship strikes, anthropogenic sound) over a variety of spatial and temporal scales. The abundance and distribution of many pelagic species exhibit substantial seasonal variability, highlighting the importance of predicting density specific to the season of interest. This is particularly true in dynamic regions like the California Current, where significant seasonal shifts in cetacean distribution have been documented at coarse scales. Finer scale (10 km) habitat-based density models were previously developed for many cetacean species occurring in this region, but most models were limited to summer/fall. The objectives of our study were two-fold: (1) develop spatially-explicit density estimates for winter/spring to support management applications, and (2) compare model-predicted density and distribution patterns to previously developed summer/fall model results in the context of species ecology. We used a well-established Generalized Additive Modeling framework to develop cetacean SDMs based on 20 California Cooperative Oceanic Fisheries Investigations (CalCOFI) shipboard surveys conducted during winter and spring between 2005 and 2015. Models were fit for short-beaked common dolphin (Delphinus delphis delphis), Dall's porpoise (Phocoenoides dalli), and humpback whale (Megaptera novaeangliae). Model performance was evaluated based on a variety of established metrics, including the percentage of explained deviance, ratios of observed to predicted density, and visual inspection of predicted and observed distributions. Final models were

  15. Aqueous acidities of primary benzenesulfonamides: Quantum chemical predictions based on density functional theory and SMD.

    Science.gov (United States)

    Aidas, Kęstutis; Lanevskij, Kiril; Kubilius, Rytis; Juška, Liutauras; Petkevičius, Daumantas; Japertas, Pranas

    2015-11-05

    Aqueous pK(a) values of selected primary benzenesulfonamides are predicted in a systematic manner using density functional theory methods and the SMD solvent model together with direct and proton-exchange thermodynamic cycles. Some test calculations were also performed using the high-level composite CBS-QB3 approach. The direct scheme generally does not yield satisfactory agreement between calculated and measured acidities due to a severe overestimation of the Gibbs free energy changes of the gas-phase deprotonation reaction by the exchange-correlation functionals used. The relative pK(a) values calculated using the proton-exchange method compare very well with experimental data in both qualitative and quantitative terms, with a mean absolute error of about 0.4 pK(a) units. To achieve this accuracy, we find it mandatory to perform geometry optimization of the neutral and anionic species in the gas and solution phases separately, because different conformations are stabilized in these two cases. We have attempted to evaluate the effect of conformer-averaged free energies in the pK(a) predictions, and the general conclusion is that this procedure is far too costly compared with the very small improvement gained.
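    In a proton-exchange scheme, the pK(a) of the target acid is obtained from the calculated solution-phase free-energy change of its exchange reaction with a reference acid of known pK(a). The snippet below shows only that arithmetic; the reference pK(a) and exchange free energy are made-up numbers, not values from the study.

```python
import math

R = 1.987204e-3      # gas constant, kcal mol^-1 K^-1
T = 298.15           # K

def pka_proton_exchange(pka_ref, dG_exchange_kcal):
    """pKa(HA) = pKa(ref) + dG_exchange / (RT ln 10),
    where dG_exchange is the solution-phase free energy of
    HA + Ref(-)  ->  A(-) + RefH, in kcal/mol."""
    return pka_ref + dG_exchange_kcal / (R * T * math.log(10))

# Hypothetical numbers: reference acid pKa 10.1, computed exchange free energy +0.8 kcal/mol
print(f"predicted pKa = {pka_proton_exchange(10.1, 0.8):.2f}")
```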

  16. Precise Prediction of the Dark Matter Relic Density within the MSSM

    Science.gov (United States)

    Harz, J.; Herrmann, B.; Klasen, M.; Kovarik, K.; Steppeler, P.

    With the latest Planck results the dark matter relic density is determined to an unprecedented precision. In order to reduce current theoretical uncertainties in the dark matter relic density prediction, we have calculated next-to-leading order SUSY-QCD corrections to neutralino (co)annihilation processes including Coulomb enhancement effects. We demonstrate that these corrections can have significant impact on the cosmologically favoured MSSM parameter space and are thus of general interest for parameter studies and global fits.

  17. Precise Prediction of the Dark Matter Relic Density within the MSSM

    CERN Document Server

    Harz, Julia; Klasen, Michael; Kovarik, Karol; Steppeler, Patrick

    2015-01-01

    With the latest Planck results the dark matter relic density is determined to an unprecedented precision. In order to reduce current theoretical uncertainties in the dark matter relic density prediction, we have calculated next-to-leading order SUSY-QCD corrections to neutralino (co)annihilation processes including Coulomb enhancement effects. We demonstrate that these corrections can have significant impact on the cosmologically favoured MSSM parameter space and are thus of general interest for parameter studies and global fits.

  18. Patch size and isolation predict plant species density in a naturally fragmented forest.

    Science.gov (United States)

    Munguía-Rosas, Miguel A; Montiel, Salvador

    2014-01-01

    Studies of the effects of patch size and isolation on plant species density have yielded contrasting results. However, much of the available evidence comes from relatively recent anthropogenic forest fragments which have not reached equilibrium between extinction and immigration. This is a critical issue because the theory clearly states that only when equilibrium has been reached can the number of species be accurately predicted by habitat size and isolation. Therefore, species density could be better predicted by patch size and isolation in an ecosystem that has been fragmented for a very long time. We tested whether patch area, isolation and other spatial variables explain variation among forest patches in plant species density in an ecosystem where the forest has been naturally fragmented for long periods of time on a geological scale. Our main predictions were that plant species density will be positively correlated with patch size, and negatively correlated with isolation (distance to the nearest patch, connectivity, and distance to the continuous forest). We surveyed the vascular flora (except lianas and epiphytes) of 19 forest patches using five belt transects (50×4 m each) per patch (area sampled per patch = 0.1 ha). As predicted, plant species density was positively associated (logarithmically) with patch size and negatively associated (linearly) with patch isolation (distance to the nearest patch). Other spatial variables such as patch elevation and perimeter, did not explain among-patch variability in plant species density. The power of patch area and isolation as predictors of plant species density was moderate (together they explain 43% of the variation), however, a larger sample size may improve the explanatory power of these variables. Patch size and isolation may be suitable predictors of long-term plant species density in terrestrial ecosystems that are naturally and anthropogenically fragmented.
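    A regression of this form (species density against log patch area and isolation) can be sketched as below; the patch data are fabricated for illustration and do not reproduce the study's 43% explained variation.

```python
import numpy as np

rng = np.random.default_rng(7)

# Fabricated data for 19 patches: area (ha), distance to nearest patch (m), species density
area = rng.uniform(0.5, 50, 19)
isolation = rng.uniform(10, 500, 19)
density = 12 + 4 * np.log(area) - 0.01 * isolation + rng.normal(0, 2, 19)

# Ordinary least squares: density ~ log(area) + isolation
X = np.column_stack([np.ones(19), np.log(area), isolation])
beta, *_ = np.linalg.lstsq(X, density, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((density - pred) ** 2) / np.sum((density - density.mean()) ** 2)
print(f"coefficients (intercept, log-area, isolation): {beta.round(3)}, R^2 = {r2:.2f}")
```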

  19. Patch size and isolation predict plant species density in a naturally fragmented forest.

    Directory of Open Access Journals (Sweden)

    Miguel A Munguía-Rosas

    Full Text Available Studies of the effects of patch size and isolation on plant species density have yielded contrasting results. However, much of the available evidence comes from relatively recent anthropogenic forest fragments which have not reached equilibrium between extinction and immigration. This is a critical issue because the theory clearly states that only when equilibrium has been reached can the number of species be accurately predicted by habitat size and isolation. Therefore, species density could be better predicted by patch size and isolation in an ecosystem that has been fragmented for a very long time. We tested whether patch area, isolation and other spatial variables explain variation among forest patches in plant species density in an ecosystem where the forest has been naturally fragmented for long periods of time on a geological scale. Our main predictions were that plant species density will be positively correlated with patch size, and negatively correlated with isolation (distance to the nearest patch, connectivity, and distance to the continuous forest). We surveyed the vascular flora (except lianas and epiphytes) of 19 forest patches using five belt transects (50×4 m each) per patch (area sampled per patch = 0.1 ha). As predicted, plant species density was positively associated (logarithmically) with patch size and negatively associated (linearly) with patch isolation (distance to the nearest patch). Other spatial variables such as patch elevation and perimeter, did not explain among-patch variability in plant species density. The power of patch area and isolation as predictors of plant species density was moderate (together they explain 43% of the variation), however, a larger sample size may improve the explanatory power of these variables. Patch size and isolation may be suitable predictors of long-term plant species density in terrestrial ecosystems that are naturally and anthropogenically fragmented.

  20. Vulnerability of shallow ground water and drinking-water wells to nitrate in the United States: Model of predicted nitrate concentration in shallow, recently recharged ground water -- Input data set for population density (gwava-s_popd)

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This data set represents 1990 block group population density, in people per square kilometer, in the conterminous United States. The data set was used as an input...

  1. A generalized model for estimating the energy density of invertebrates

    Science.gov (United States)

    James, Daniel A.; Csargo, Isak J.; Von Eschen, Aaron; Thul, Megan D.; Baker, James M.; Hayer, Cari-Ann; Howell, Jessica; Krause, Jacob; Letvin, Alex; Chipps, Steven R.

    2012-01-01

    Invertebrate energy density (ED) values are traditionally measured using bomb calorimetry. However, many researchers rely on a few published literature sources to obtain ED values because of time and sampling constraints on measuring ED with bomb calorimetry. Literature values often do not account for spatial or temporal variability associated with invertebrate ED. Thus, these values can be unreliable for use in models and other ecological applications. We evaluated the generality of the relationship between invertebrate ED and proportion of dry-to-wet mass (pDM). We then developed and tested a regression model to predict ED from pDM based on a taxonomically, spatially, and temporally diverse sample of invertebrates representing 28 orders in aquatic (freshwater, estuarine, and marine) and terrestrial (temperate and arid) habitats from 4 continents and 2 oceans. Samples included invertebrates collected in all seasons over the last 19 y. Evaluation of these data revealed a significant relationship between ED and pDM (r2  =  0.96, p calorimetry approaches. This model should prove useful for a wide range of ecological studies because it is unaffected by taxonomic, seasonal, or spatial variability.

  2. Case studies in archaeological predictive modelling

    NARCIS (Netherlands)

    Verhagen, Jacobus Wilhelmus Hermanus Philippus

    2007-01-01

    In this thesis, a collection of papers is put together dealing with various quantitative aspects of predictive modelling and archaeological prospection. Among the issues covered are the effects of survey bias on the archaeological data used for predictive modelling, and the complexities of testing p

  3. Childhood asthma prediction models: a systematic review.

    Science.gov (United States)

    Smit, Henriette A; Pinart, Mariona; Antó, Josep M; Keil, Thomas; Bousquet, Jean; Carlsen, Kai H; Moons, Karel G M; Hooft, Lotty; Carlsen, Karin C Lødrup

    2015-12-01

    Early identification of children at risk of developing asthma at school age is crucial, but the usefulness of childhood asthma prediction models in clinical practice is still unclear. We systematically reviewed all existing prediction models to identify preschool children with asthma-like symptoms at risk of developing asthma at school age. Studies were included if they developed a new prediction model or updated an existing model in children aged 4 years or younger with asthma-like symptoms, with assessment of asthma done between 6 and 12 years of age. 12 prediction models were identified in four types of cohorts of preschool children: those with health-care visits, those with parent-reported symptoms, those at high risk of asthma, or children in the general population. Four basic models included non-invasive, easy-to-obtain predictors only, notably family history, allergic disease comorbidities or precursors of asthma, and severity of early symptoms. Eight extended models included additional clinical tests, mostly specific IgE determination. Some models could better predict asthma development and other models could better rule out asthma development, but the predictive performance of no single model stood out in both aspects simultaneously. This finding suggests that there is a large proportion of preschool children with wheeze for which prediction of asthma development is difficult.

  4. Strain energy density gradients in bone marrow predict osteoblast and osteoclast activity: a finite element study.

    Science.gov (United States)

    Webster, Duncan; Schulte, Friederike A; Lambers, Floor M; Kuhn, Gisela; Müller, Ralph

    2015-03-18

    Huiskes et al. hypothesized that mechanical strains sensed by osteocytes residing in trabecular bone dictate the magnitude of load-induced bone formation. More recently, the mechanical environment in bone marrow has also been implicated in bone's response to mechanical stimulation. In this study, we hypothesize that trabecular load-induced bone formation can be predicted by mechanical signals derived from an integrative µFE model incorporating a description of both the bone and marrow phases. Using the mouse tail loading model in combination with in vivo micro-computed tomography (µCT), we tracked load-induced changes in the sixth caudal vertebrae of C57BL/6 mice to quantify the amount of newly mineralized and eroded bone volume. To identify the mechanical signals responsible for adaptation, local morphometric changes were compared to micro-finite element (µFE) models of the vertebrae prior to loading. The mechanical parameters calculated were strain energy density (SED) on trabeculae at bone-forming and bone-resorbing surfaces, SED in the marrow at the boundary between bone-forming and bone-resorbing surfaces, and SED in the trabecular bone and marrow volumes. The gradients of each parameter were also calculated. Simple regression analysis showed mean SED gradients in the trabecular bone matrix to correlate significantly with newly mineralized and eroded bone volumes (R² = 0.57 and 0.41, respectively; p ...). These findings suggest that the bone marrow plays a significant role in determining osteoblast and osteoclast activity.

  5. Social Inclusion Predicts Lower Blood Glucose and Low-Density Lipoproteins in Healthy Adults.

    Science.gov (United States)

    Floyd, Kory; Veksler, Alice E; McEwan, Bree; Hesse, Colin; Boren, Justin P; Dinsmore, Dana R; Pavlich, Corey A

    2016-07-27

    Loneliness has been shown to have direct effects on one's personal well-being. Specifically, a greater feeling of loneliness is associated with negative mental health outcomes, negative health behaviors, and an increased likelihood of premature mortality. Using the neuroendocrine hypothesis, we expected social inclusion to predict decreases in both blood glucose levels and low-density lipoproteins (LDLs) and increases in high-density lipoproteins (HDLs). Fifty-two healthy adults provided self-report data for social inclusion and blood samples for hematological tests. Results indicated that higher social inclusion predicted lower levels of blood glucose and LDL, but had no effect on HDL. Implications for theory and practice are discussed.

  6. Predicting ease of perinephric fat dissection at time of open partial nephrectomy using preoperative fat density characteristics.

    Science.gov (United States)

    Zheng, Yin; Espiritu, Patrick; Hakky, Tariq; Jutras, Kristin; Spiess, Philippe E

    2014-12-01

    To predict the ease of perinephric fat surgical dissection at the time of open partial nephrectomy (OPN) using perinephric fat density characteristics measured on preoperative computed tomography (CT). In all, 41 consecutive OPN patients with available preoperative imaging and prospectively collected dissection difficulty assessments were identified. Using a scoring system adopted for the purposes of this study, the genitourinary surgeon quantified the difficulty of the perinephric fat dissection on the surface of the renal capsule at the time of surgery. On an axial CT slice centred on the renal hilum, we measured the quantity and density of perinephric fat whose absorption coefficient was between -190 and -30 Hounsfield units. Perinephric fat surface density (PnFSD) as noted on preoperative imaging and the dissection difficulty observed by the surgeon at the time of surgery were correlated in a completely 'double-blinded' fashion. Density comparisons between fat dissection difficulty groups were made using an ANOVA. Associations between covariates and perinephric fat density were evaluated by univariate and multivariate logistic regression analyses. Receiver operating characteristic (ROC) curves for six different predictive models were created to visualise the predictive enhancement of PnFSD. PnFSD was positively correlated with total surgical duration (Pearson's correlation coefficient 0.314, P = 0.04). PnFSD significantly correlated with gender (P = 0.001) and difficulty of perinephric fat surgical dissection (P ...). The PnFSD for a dissection that was not difficult (n = 19) was 5598.32 (1367.77) surface density pixel units (SDPU), and for a difficult dissection (n = 22) was 10272.23 (3804.67) SDPU. Univariate analysis showed gender (P = 0.002) and PnFSD were predictive of the presence of 'sticky' perinephric fat. A multivariate analysis model showed that PnFSD was the only variable that remained an independent predictor of perinephric fat dissection difficulty (P = 0.01). Of the six
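    The fat-density measurement step (thresholding an axial CT slice to the stated Hounsfield-unit range) can be sketched as follows; the image is a random placeholder and the composite index at the end is only indicative, since the exact PnFSD formula is not given in this record.

```python
import numpy as np

rng = np.random.default_rng(8)

# Placeholder axial CT slice in Hounsfield units (a real study would load DICOM data)
hu_slice = rng.normal(loc=-60, scale=120, size=(512, 512))

# Perinephric fat voxels: absorption coefficient between -190 and -30 HU (as in the abstract)
fat_mask = (hu_slice >= -190) & (hu_slice <= -30)

fat_pixel_count = int(fat_mask.sum())
mean_fat_hu = float(hu_slice[fat_mask].mean())

# A simple quantity-times-density style index in arbitrary "surface density pixel units";
# this is not the study's PnFSD definition, only an illustration of the idea.
pnfsd_like = fat_pixel_count * abs(mean_fat_hu) / 100.0
print(f"fat pixels: {fat_pixel_count}, mean HU: {mean_fat_hu:.1f}, index: {pnfsd_like:.1f}")
```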

  7. A Creep Model for High Density Snow

    Science.gov (United States)

    2017-04-01

    ...Station, Greenland, and that will be founded on a compacted snow surface. The deformation of snow under a constant load (creep deformation, or ... developed in this study are sufficiently similar to the generalized creep model used in the ABAQUS finite element software that the ABAQUS creep model was used

  8. Prediction of moisture content of alfalfa using density-independent functions of microwave dielectric properties

    Science.gov (United States)

    Shrestha, Bijay L.; Wood, Hugh C.; Sokhansanj, Shahab

    2005-05-01

    The use of density-independent functions of the dielectric properties of chopped alfalfa, calculated from microwave reflection coefficients from 300 MHz to 18 GHz, was studied for determining moisture content in the range from 12% to 73%, wet basis, at bulk densities from 0.139 to 0.716 g cm-3 at 20 °C. Prediction of moisture content with worst-case relative errors of about 3% or less over the range from 20% to 73% confirmed promising prospects for use of such density-independent functions for reliable moisture measurement for important plant materials.

  9. High relative density of lymphatic vessels predicts poor survival in tongue squamous cell carcinoma.

    Science.gov (United States)

    Seppälä, Miia; Pohjola, Konsta; Laranne, Jussi; Rautiainen, Markus; Huhtala, Heini; Renkonen, Risto; Lemström, Karl; Paavonen, Timo; Toppila-Salmi, Sanna

    2016-12-01

    Tongue cancer has a poor prognosis due to its early metastasis via lymphatic vessels. The present study aimed at evaluating lymphatic vessel density, the relative density of lymphatic vessels, and the diameter of lymphatic vessels, and their predictive role in tongue cancer. Paraffin-embedded tongue and lymph node specimens (n = 113) were stained immunohistochemically with a polyclonal antibody against von Willebrand factor (vWf), recognizing blood and lymphatic endothelium, and with a monoclonal antibody against podoplanin, recognizing lymphatic endothelium. The relative density of lymphatic vessels was counted by dividing the mean number of lymphatic vessels per microscopic field (podoplanin) by the mean number of all vessels (vWf) per microscopic field. A high relative density of lymphatic vessels (≥80%) was associated with poor prognosis in tongue cancer. The relative density of lymphatic vessels predicted poor prognosis in the group with primary tumor size T1-T2 and in the group with non-metastatic cancer. Lymphatic vessel density and the diameter of lymphatic vessels were not associated with tongue cancer survival. The relative density of lymphatic vessels might have a clinically relevant prognostic impact. Further studies with an increased number of patients are needed.
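    The relative density of lymphatic vessels as defined in the abstract is a simple ratio of per-field counts; a minimal helper is shown below with made-up counts for one specimen.

```python
def relative_lymphatic_density(podoplanin_counts, vwf_counts):
    """Mean lymphatic vessels per field (podoplanin+) divided by mean of all vessels (vWf+)."""
    mean_lymph = sum(podoplanin_counts) / len(podoplanin_counts)
    mean_all = sum(vwf_counts) / len(vwf_counts)
    return mean_lymph / mean_all

# Hypothetical counts from five microscopic fields of one specimen
podoplanin = [8, 10, 7, 9, 11]     # lymphatic vessels
vwf = [11, 12, 9, 10, 13]          # all vessels (blood + lymphatic)
ratio = relative_lymphatic_density(podoplanin, vwf)
print(f"relative lymphatic vessel density = {ratio:.0%}; >=80% defined the high-risk group")
```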

  10. Modeling of branching density and branching distribution in low-density polyethylene polymerization

    NARCIS (Netherlands)

    Kim, D.M.; Iedema, P.D.

    2008-01-01

    Low-density polyethylene (ldPE) is a general-purpose polymer with various applications. For this reason, many publications can be found on ldPE polymerization modeling. However, scission reactions and the branching distribution have only recently been considered in modeling studies due to difficulties i

  11. Effect of isomeric structures of branched cyclic hydrocarbons on densities and equation of state predictions at elevated temperatures and pressures.

    Science.gov (United States)

    Wu, Yue; Bamgbade, Babatunde A; Burgess, Ward A; Tapriyal, Deepak; Baled, Hseen O; Enick, Robert M; McHugh, Mark A

    2013-07-25

    The cis and trans conformation of a branched cyclic hydrocarbon affects the packing and, hence, the density, exhibited by that compound. Reported here are density data for branched cyclohexane (C6) compounds including methylcyclohexane, ethylcyclohexane (ethylcC6), cis-1,2-dimethylcyclohexane (cis-1,2), cis-1,4-dimethylcyclohexane (cis-1,4), and trans-1,4-dimethylcyclohexane (trans-1,4) determined at temperatures up to 525 K and pressures up to 275 MPa. Of the four branched C6 isomers, cis-1,2 exhibits the largest densities and the smallest densities are exhibited by trans-1,4. The densities are modeled with the Peng-Robinson (PR) equation of state (EoS), the high-temperature, high-pressure, volume-translated (HTHP VT) PREoS, and the perturbed chain, statistical associating fluid theory (PC-SAFT) EoS. Model calculations highlight the capability of these equations to account for the different densities observed for the four isomers investigated in this study. The HTHP VT-PREoS provides modest improvements over the PREoS, but neither cubic EoS is capable of accounting for the effect of isomer structural differences on the observed densities. The PC-SAFT EoS, with pure component parameters from the literature or from a group contribution method, provides improved density predictions relative to those obtained with the PREoS or HTHP VT-PREoS. However, the PC-SAFT EoS, with either set of parameters, also cannot fully account for the effect of the C6 isomer structure on the resultant density.

  12. Effect of Isomeric Structures of Branched Cyclic Hydrocarbons on Densities and Equation of State Predictions at Elevated Temperatures and Pressures

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Yue; Bamgbade, Babatunde A; Burgess, Ward A; Tapriyal, Deepak; Baled, Hseen O; Enick, Robert M; McHugh, Mark

    2013-07-25

    The cis and trans conformation of a branched cyclic hydrocarbon affects the packing and, hence, the density, exhibited by that compound. Reported here are density data for branched cyclohexane (C6) compounds including methylcyclohexane, ethylcyclohexane (ethylcC6), cis-1,2-dimethylcyclohexane (cis-1,2), cis-1,4-dimethylcyclohexane (cis-1,4), and trans-1,4-dimethylcyclohexane (trans-1,4) determined at temperatures up to 525 K and pressures up to 275 MPa. Of the four branched C6 isomers, cis-1,2 exhibits the largest densities and the smallest densities are exhibited by trans-1,4. The densities are modeled with the Peng–Robinson (PR) equation of state (EoS), the high-temperature, high-pressure, volume-translated (HTHP VT) PREoS, and the perturbed chain, statistical associating fluid theory (PC-SAFT) EoS. Model calculations highlight the capability of these equations to account for the different densities observed for the four isomers investigated in this study. The HTHP VT-PREoS provides modest improvements over the PREoS, but neither cubic EoS is capable of accounting for the effect of isomer structural differences on the observed densities. The PC-SAFT EoS, with pure component parameters from the literature or from a group contribution method, provides improved density predictions relative to those obtained with the PREoS or HTHP VT-PREoS. However, the PC-SAFT EoS, with either set of parameters, also cannot fully account for the effect of the C6 isomer structure on the resultant density.

  13. Exact Maps in Density Functional Theory for Lattice Models

    CERN Document Server

    Dimitrov, Tanja; Fuks, Johanna I; Rubio, Angel

    2015-01-01

    In the present work, we employ exact diagonalization for model systems on a real-space lattice to explicitly construct the exact density-to-potential and, for the first time, the exact density-to-wavefunction map that underlie the Hohenberg-Kohn theorem in density functional theory. Having the explicit wavefunction-to-density map at hand, we are able to construct arbitrary observables as functionals of the ground-state density. We analyze the density-to-potential map as the distance between the fragments of a system increases and the correlation in the system grows. We observe a feature that gradually develops in the density-to-potential map as well as in the density-to-wavefunction map. This feature is inherited by arbitrary expectation values as functionals of the ground-state density. We explicitly show the excited-state energies, the excited-state densities, and the correlation entropy as functionals of the ground-state density. All of them show this exact feature that sharpens as the coupling of the fragmen...

  14. Current Density and Continuity in Discretized Models

    Science.gov (United States)

    Boykin, Timothy B.; Luisier, Mathieu; Klimeck, Gerhard

    2010-01-01

    Discrete approaches have long been used in numerical modelling of physical systems in both research and teaching. Discrete versions of the Schrodinger equation employing either one or several basis functions per mesh point are often used by senior undergraduates and beginning graduate students in computational physics projects. In studying…

  15. Model predictive control classical, robust and stochastic

    CERN Document Server

    Kouvaritakis, Basil

    2016-01-01

    For the first time, a textbook that brings together classical predictive control with treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered and the state of the art in computationally tractable methods based on uncertainty tubes presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...

  16. Spectral density method to Anderson-Holstein model

    Science.gov (United States)

    Chebrolu, Narasimha Raju; Chatterjee, Ashok

    2015-06-01

    Two-parameter spectral density function of a magnetic impurity electron in a non-magnetic metal is calculated within the framework of the Anderson-Holstein model using the spectral density approximation method. The effect of electron-phonon interaction on the spectral function is investigated.

  17. Physics-Informed Machine Learning for Predictive Turbulence Modeling: A Priori Assessment of Prediction Confidence

    CERN Document Server

    Wu, Jin-Long; Xiao, Heng; Ling, Julia

    2016-01-01

    Although Reynolds-Averaged Navier-Stokes (RANS) equations are still the dominant tool for engineering design and analysis applications involving turbulent flows, standard RANS models are known to be unreliable in many flows of engineering relevance, including flows with separation, strong pressure gradients or mean flow curvature. With increasing amounts of 3-dimensional experimental data and high fidelity simulation data from Large Eddy Simulation (LES) and Direct Numerical Simulation (DNS), data-driven turbulence modeling has become a promising approach to increase the predictive capability of RANS simulations. Recently, a data-driven turbulence modeling approach via machine learning has been proposed to predict the Reynolds stress anisotropy of a given flow based on high fidelity data from closely related flows. In this work, the closeness of different flows is investigated to assess the prediction confidence a priori. Specifically, the Mahalanobis distance and the kernel density estimation (KDE) technique...
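    The a priori confidence assessment described (Mahalanobis distance and kernel density estimation of prediction-flow features relative to the training flows) can be sketched as follows; the "flow features" here are random placeholders rather than actual RANS mean-flow variables, and the low-density threshold is arbitrary.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(9)

# Placeholder feature vectors (e.g., pressure gradient, curvature, turbulence intensity)
train = rng.normal(size=(2000, 3))                 # features from the training flows
test = rng.normal(loc=0.5, size=(500, 3))          # features from the prediction flow

# Mahalanobis distance of each test point from the training distribution
mean = train.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(train, rowvar=False))
diff = test - mean
maha = np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))

# Kernel density estimate of the training features, evaluated at the test points
kde = gaussian_kde(train.T)
density_at_test = kde(test.T)

print(f"mean Mahalanobis distance: {maha.mean():.2f}")
print(f"fraction of test points in low-density regions: {(density_at_test < 1e-3).mean():.2f}")
```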

  18. Reliable Estimation of Prediction Uncertainty for Physicochemical Property Models.

    Science.gov (United States)

    Proppe, Jonny; Reiher, Markus

    2017-07-11

    One of the major challenges in computational science is to determine the uncertainty of a virtual measurement, that is, the prediction of an observable based on calculations. As highly accurate first-principles calculations are generally infeasible for most physical systems, one usually resorts to parametric property models of observables, which require calibration by incorporating reference data. The resulting predictions and their uncertainties are sensitive to systematic errors such as inconsistent reference data, parametric model assumptions, or inadequate computational methods. Here, we discuss the calibration of property models in the light of bootstrapping, a sampling method that can be employed for identifying systematic errors and for reliable estimation of the prediction uncertainty. We apply bootstrapping to assess a linear property model linking the ⁵⁷Fe Mössbauer isomer shift to the contact electron density at the iron nucleus for a diverse set of 44 molecular iron compounds. The contact electron density is calculated with 12 density functionals across Jacob's ladder (PWLDA, BP86, BLYP, PW91, PBE, M06-L, TPSS, B3LYP, B3PW91, PBE0, M06, TPSSh). We provide systematic-error diagnostics and reliable, locally resolved uncertainties for isomer-shift predictions. Pure and hybrid density functionals yield average prediction uncertainties of 0.06-0.08 mm s⁻¹ and 0.04-0.05 mm s⁻¹, respectively, the latter being close to the average experimental uncertainty of 0.02 mm s⁻¹. Furthermore, we show that both model parameters and prediction uncertainty depend significantly on the composition and number of reference data points. Accordingly, we suggest that rankings of density functionals based on performance measures (e.g., the squared coefficient of correlation, r², or the root-mean-square error, RMSE) should not be inferred from a single data set. This study presents the first statistically rigorous calibration analysis for theoretical M
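
    For readers who want to see the mechanics of the bootstrap-calibration idea described above, the sketch below fits a linear property model and bootstraps it. The data are synthetic stand-ins (not the paper's 44 iron compounds), and the assumed linear relation, noise level, and variable names are illustrative only.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic stand-ins for a reference set: computed contact electron
        # densities (x) and measured isomer shifts (y); the "true" relation
        # and noise level below are arbitrary assumptions.
        x = rng.uniform(11800.0, 11830.0, size=44)
        y = 500.0 - 0.042 * x + rng.normal(0.0, 0.05, size=44)

        n_boot = 2000
        slopes = np.empty(n_boot)
        intercepts = np.empty(n_boot)
        for i in range(n_boot):
            idx = rng.integers(0, len(x), size=len(x))    # resample with replacement
            slopes[i], intercepts[i] = np.polyfit(x[idx], y[idx], 1)

        # Locally resolved prediction uncertainty at a new contact density:
        x_new = 11815.0
        preds = intercepts + slopes * x_new
        print("predicted shift: %.3f +/- %.3f mm/s" % (preds.mean(), preds.std(ddof=1)))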

  19. Evaluation of char combustion models: measurement and analysis of variability in char particle size and density

    Energy Technology Data Exchange (ETDEWEB)

    Maloney, Daniel J; Monazam, Esmail R; Casleton, Kent H; Shaddix, Christopher R

    2008-08-01

    Char samples representing a range of combustion conditions and extents of burnout were obtained from a well-characterized laminar flow combustion experiment. Individual particles from the parent coal and char samples were characterized to determine distributions in particle volume, mass, and density at different extents of burnout. The data were then compared with predictions from a comprehensive char combustion model referred to as the char burnout kinetics model (CBK). The data clearly reflect the particle-to-particle heterogeneity of the parent coal and show a significant broadening in the size and density distributions of the chars resulting from both devolatilization and combustion. Data for chars prepared in a lower oxygen content environment (6% oxygen by vol.) are consistent with zone II type combustion behavior where most of the combustion is occurring near the particle surface. At higher oxygen contents (12% by vol.), the data show indications of more burning occurring in the particle interior. The CBK model does a good job of predicting the general nature of the development of size and density distributions during burning but the input distribution of particle size and density is critical to obtaining good predictions. A significant reduction in particle size was observed to occur as a result of devolatilization. For comprehensive combustion models to provide accurate predictions, this size reduction phenomenon needs to be included in devolatilization models so that representative char distributions are carried through the calculations.

  20. Statistical characteristics of irreversible predictability time in regional ocean models

    Directory of Open Access Journals (Sweden)

    P. C. Chu

    2005-01-01

    Full Text Available Probabilistic aspects of regional ocean model predictability are analyzed using the probability density function (PDF) of the irreversible predictability time (IPT), called the τ-PDF, computed from an unconstrained ensemble of stochastic perturbations in initial conditions, winds, and open boundary conditions. Two attractors (a chaotic attractor and a small-amplitude stable limit cycle) are found in the wind-driven circulation. The relationship between the attractors' residence times and IPT determines the τ-PDF for short (up to several weeks) and intermediate (up to two months) predictions. The τ-PDF is usually non-Gaussian but not multi-modal for red-noise perturbations in initial conditions and perturbations in the wind and open boundary conditions. Bifurcation of the τ-PDF occurs as the tolerance level varies. Generally, extremely successful predictions (corresponding to the tail of the τ-PDF toward the large-IPT domain) are not outliers and share the same statistics as the whole ensemble of predictions.

  1. Optimal design of low-density SNP arrays for genomic prediction: algorithm and applications

    Science.gov (United States)

    Low-density (LD) single nucleotide polymorphism (SNP) arrays provide a cost-effective solution for genomic prediction and selection, but algorithms and computational tools are needed for their optimal design. A multiple-objective, local optimization (MOLO) algorithm was developed for design of optim...

  2. Low bone mineral density in noncholestatic liver cirrhosis: prevalence, severity and prediction

    Directory of Open Access Journals (Sweden)

    Figueiredo Fátima Aparecida Ferreira

    2003-01-01

    Full Text Available BACKGROUND: Metabolic bone disease has long been associated with cholestatic disorders. However, data in noncholestatic cirrhosis are relatively scant. AIMS: To determine prevalence and severity of low bone mineral density in noncholestatic cirrhosis and to investigate whether age, gender, etiology, severity of underlying liver disease, and/or laboratory tests are predictive of the diagnosis. PATIENTS/METHODS: Between March and September 1998, 89 patients with noncholestatic cirrhosis and 20 healthy controls were enrolled in a cross-sectional study. All subjects underwent standard laboratory tests and bone densitometry at the lumbar spine and femoral neck by dual X-ray absorptiometry. RESULTS: Bone mass was significantly reduced at both sites in patients compared to controls. The prevalence of low bone mineral density in noncholestatic cirrhosis, defined by the World Health Organization criteria, was 78% at the lumbar spine and 71% at the femoral neck. Bone density significantly decreased with age at both sites, especially in patients older than 50 years. Bone density was significantly lower in post-menopausal women than in pre-menopausal women and men at both sites. There was no significant difference in bone mineral density among noncholestatic etiologies. Lumbar spine bone density significantly decreased with the progression of liver dysfunction. No biochemical variable was significantly associated with low bone mineral density. CONCLUSIONS: Low bone mineral density is highly prevalent in patients with noncholestatic cirrhosis. Older patients, post-menopausal women and patients with severe hepatic dysfunction experienced more advanced bone disease. The laboratory tests routinely determined in patients with liver disease did not reliably predict low bone mineral density.

  3. Optimal cytoplasmatic density and flux balance model under macromolecular crowding effects.

    Science.gov (United States)

    Vazquez, Alexei

    2010-05-21

    Macromolecules occupy between 34% and 44% of the cell cytoplasm, about half the maximum packing density of spheres in three dimensions. Yet, there is no clear understanding of what is special about this value. To address this fundamental question we investigate the effect of macromolecular crowding on cell metabolism. We develop a cell scale flux balance model capturing the main features of cell metabolism at different nutrient uptakes and macromolecular densities. Using this model we show there are two metabolic regimes at low and high nutrient uptakes. The latter regime is characterized by an optimal cytoplasmatic density where the increase of reaction rates by confinement and the decrease by diffusion slow-down balance. More importantly, the predicted optimal density is in the range of the experimentally determined density of Escherichia coli.

  4. Density Forecasts of Crude-Oil Prices Using Option-Implied and ARCH-Type Models

    DEFF Research Database (Denmark)

    Tsiaras, Leonidas; Høg, Esben

    The predictive accuracy of competing crude-oil price forecast densities is investigated for the 1994-2006 period. Moving beyond standard ARCH models that rely exclusively on past returns, we examine the benefits of utilizing the forward-looking information that is embedded in the prices of derivative contracts. Risk-neutral densities, obtained from panels of crude-oil option prices, are adjusted to reflect real-world risks using either a parametric or a non-parametric calibration approach. The relative performance of the models is evaluated for the entire support of the density, as well as for regions and intervals that are of special interest for the economic agent. We find that non-parametric adjustments of risk-neutral density forecasts perform significantly better than their parametric counterparts. Goodness-of-fit tests and out-of-sample likelihood comparisons favor forecast densities...

  5. Density Forecasts of Crude-Oil Prices Using Option-Implied and ARCH-Type Models

    DEFF Research Database (Denmark)

    Høg, Esben; Tsiaras, Leonidas

    2011-01-01

    The predictive accuracy of competing crude-oil price forecast densities is investigated for the 1994–2006 period. Moving beyond standard ARCH type models that rely exclusively on past returns, we examine the benefits of utilizing the forward-looking information that is embedded in the prices of derivative contracts. Risk-neutral densities, obtained from panels of crude-oil option prices, are adjusted to reflect real-world risks using either a parametric or a non-parametric calibration approach. The relative performance of the models is evaluated for the entire support of the density, as well as for regions and intervals that are of special interest for the economic agent. We find that non-parametric adjustments of risk-neutral density forecasts perform significantly better than their parametric counterparts. Goodness-of-fit tests and out-of-sample likelihood comparisons favor forecast densities...

  6. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

    In order to reach robust and simplified yet accurate prediction models, energy-based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA), as well as more elaborate principles such as wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy-based prediction models are discussed and critically reviewed. Special attention is placed...

  7. Prediction of lung density changes after radiotherapy by cone beam computed tomography response markers and pre-treatment factors for non-small cell lung cancer patients

    DEFF Research Database (Denmark)

    Bernchou, Uffe; Hansen, Olfred; Schytte, Tine;

    2015-01-01

    BACKGROUND AND PURPOSE: This study investigates the ability of pre-treatment factors and response markers extracted from standard cone-beam computed tomography (CBCT) images to predict the lung density changes induced by radiotherapy for non-small cell lung cancer (NSCLC) patients. METHODS AND MATERIALS: Density changes in follow-up computed tomography scans were evaluated for 135 NSCLC patients treated with radiotherapy. Early response markers were obtained by analysing changes in lung density in CBCT images acquired during the treatment course. The ability of pre-treatment factors and CBCT markers to predict lung density changes induced by radiotherapy was investigated. RESULTS: Age and CBCT markers extracted at 10th, 20th, and 30th treatment fraction significantly predicted lung density changes in a multivariable analysis, and a set of response models based on these parameters were...

  8. Massive Predictive Modeling using Oracle R Enterprise

    CERN Document Server

    CERN. Geneva

    2014-01-01

    R is fast becoming the lingua franca for analyzing data via statistics, visualization, and predictive analytics. For enterprise-scale data, R users have three main concerns: scalability, performance, and production deployment. Oracle's R-based technologies - Oracle R Distribution, Oracle R Enterprise, Oracle R Connector for Hadoop, and the R package ROracle - address these concerns. In this talk, we introduce Oracle's R technologies, highlighting how each enables R users to achieve scalability and performance while making production deployment of R results a natural outcome of the data analyst/scientist efforts. The focus then turns to Oracle R Enterprise with code examples using the transparency layer and embedded R execution, targeting massive predictive modeling. One goal behind massive predictive modeling is to build models per entity, such as customers, zip codes, simulations, in an effort to understand behavior and tailor predictions at the entity level. Predictions...

  9. Variable Density Effects in Stochastic Lagrangian Models for Turbulent Combustion

    Science.gov (United States)

    2016-07-20

    PDF methods have proven useful in modelling turbulent combustion, primarily because convection and complex reactions can be treated without the need...modelled transport equation for the joint PDF of velocity, turbulent frequency and composition (species mass fractions and enthalpy). The advantages of...PDF methods in dealing with chemical reaction and convection are preserved irrespective of density variation. Since the density variation in a typical

  10. Maximum likelihood estimation for semiparametric density ratio model.

    Science.gov (United States)

    Diao, Guoqing; Ning, Jing; Qin, Jing

    2012-06-27

    In the statistical literature, the conditional density model specification is commonly used to study regression effects. One attractive model is the semiparametric density ratio model, under which the conditional density function is the product of an unknown baseline density function and a known parametric function containing the covariate information. This model has a natural connection with generalized linear models and is closely related to biased sampling problems. Despite the attractive features and importance of this model, most existing methods are too restrictive since they are based on multi-sample data or conditional likelihood functions. The conditional likelihood approach can eliminate the unknown baseline density but cannot estimate it. We propose efficient estimation procedures based on the nonparametric likelihood. The nonparametric likelihood approach allows for general forms of covariates and estimates the regression parameters and the baseline density simultaneously. Therefore, the nonparametric likelihood approach is more versatile than the conditional likelihood approach especially when estimation of the conditional mean or other quantities of the outcome is of interest. We show that the nonparametric maximum likelihood estimators are consistent, asymptotically normal, and asymptotically efficient. Simulation studies demonstrate that the proposed methods perform well in practical settings. A real example is used for illustration.
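
    As a notational aid for the model described above (the symbols here are assumptions chosen to match the verbal description, not necessarily the paper's exact notation), the conditional density is an unknown baseline tilted by a known parametric factor,

        f(y \mid x) = f_0(y)\, \exp\{ \gamma(x, y; \theta) \},

    which reduces to the classical two-sample density ratio model g_1(y) = g_0(y)\, \exp\{\alpha + \beta y\} when x is a group indicator and \gamma is linear in y.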

  11. Liver Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing liver cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  12. Colorectal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  13. Cervical Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing cervical cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  14. Prostate Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing prostate cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  15. Pancreatic Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing pancreatic cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  17. Bladder Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing bladder cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  18. Esophageal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing esophageal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  19. Lung Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing lung cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  20. Breast Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing breast cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  1. Ovarian Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing ovarian cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  2. Testicular Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing testicular cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  3. Midrapidity inclusive densities in high energy pp collisions in additive quark model

    Science.gov (United States)

    Shabelski, Yu. M.; Shuvaev, A. G.

    2016-08-01

    High energy (CERN SPS and LHC) inelastic pp (p̄p) scattering is treated in the framework of the additive quark model together with Pomeron exchange theory. We extract the midrapidity inclusive density of the charged secondaries produced in a single quark-quark collision and investigate its energy dependence. Predictions for πp collisions are presented.

  4. Densities of Pure Ionic Liquids and Mixtures: Modeling and Data Analysis

    DEFF Research Database (Denmark)

    Abildskov, Jens; O’Connell, John P.

    2015-01-01

    Our two-parameter corresponding states model for liquid densities and compressibilities has been extended to more pure ionic liquids and to their mixtures with one or two solvents. A total of 19 new group contributions (5 new cations and 14 new anions) have been obtained for predicting pressure...

  5. Spatially explicit modeling of lesser prairie-chicken lek density in Texas

    Science.gov (United States)

    Timmer, Jennifer M.; Butler, M.J.; Ballard, Warren; Boal, Clint W.; Whitlaw, H.A.

    2014-01-01

    As with many other grassland birds, lesser prairie-chickens (Tympanuchus pallidicinctus) have experienced population declines in the Southern Great Plains. Currently they are proposed for federal protection under the Endangered Species Act. In addition to a history of land-uses that have resulted in habitat loss, lesser prairie-chickens now face a new potential disturbance from energy development. We estimated lek density in the occupied lesser prairie-chicken range of Texas, USA, and modeled anthropogenic and vegetative landscape features associated with lek density. We used an aerial line-transect survey method to count lesser prairie-chicken leks in spring 2010 and 2011 and surveyed 208 randomly selected 51.84-km² blocks. We divided each survey block into 12.96-km² quadrats and summarized landscape variables within each quadrat. We then used hierarchical distance-sampling models to examine the relationship between lek density and anthropogenic and vegetative landscape features and predict how lek density may change in response to changes on the landscape, such as an increase in energy development. Our best models indicated lek density was related to percent grassland, region (i.e., the northeast or southwest region of the Texas Panhandle), total percentage of grassland and shrubland, paved road density, and active oil and gas well density. Predicted lek density peaked at 0.39 leks/12.96 km² (SE = 0.09) and 2.05 leks/12.96 km² (SE = 0.56) in the northeast and southwest region of the Texas Panhandle, respectively, which corresponds to approximately 88% and 44% grassland in the northeast and southwest region. Lek density increased with an increase in total percentage of grassland and shrubland and was greatest in areas with lower densities of paved roads and lower densities of active oil and gas wells. We used the 2 most competitive models to predict lek abundance and estimated 236 leks (CV = 0.138, 95% CI = 177-306 leks) for our sampling area. Our results suggest that

  6. Posterior Predictive Model Checking in Bayesian Networks

    Science.gov (United States)

    Crawford, Aaron

    2014-01-01

    This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex…

  7. A Course in... Model Predictive Control.

    Science.gov (United States)

    Arkun, Yaman; And Others

    1988-01-01

    Describes a graduate engineering course which specializes in model predictive control. Lists course outline and scope. Discusses some specific topics and teaching methods. Suggests final projects for the students. (MVL)

  8. Equivalency and unbiasedness of grey prediction models

    Institute of Scientific and Technical Information of China (English)

    Bo Zeng; Chuan Li; Guo Chen; Xianjun Long

    2015-01-01

    In order to deeply research the structure discrepancy and modeling mechanism among different grey prediction models, the equivalence and unbiasedness of grey prediction models are analyzed and verified. The results show that all the grey prediction models that are strictly derived from x^(0)(k) + a·z^(1)(k) = b have the identical model structure and simulation precision. Moreover, the unbiased simulation for the homogeneous exponential sequence can be accomplished. However, the models derived from dx^(1)/dt + a·x^(1) = b are only close to those derived from x^(0)(k) + a·z^(1)(k) = b provided that |a| < 0.1; neither could the unbiased simulation for the homogeneous exponential sequence be achieved. The above conclusions are proved and verified through some theorems and examples.
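
    A minimal sketch of the standard GM(1,1) recipe behind the equations above (a textbook construction, not the authors' code; the sample data are made up):

        import numpy as np

        def gm11_fit_predict(x0, n_ahead=3):
            """Fit x^(0)(k) + a*z^(1)(k) = b by least squares and extrapolate."""
            x0 = np.asarray(x0, dtype=float)
            x1 = np.cumsum(x0)                                 # accumulated series x^(1)
            z1 = 0.5 * (x1[1:] + x1[:-1])                      # background values z^(1)
            B = np.column_stack([-z1, np.ones_like(z1)])
            a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # least-squares (a, b)

            k = np.arange(len(x0) + n_ahead)
            x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # response of dx^(1)/dt + a*x^(1) = b
            x0_hat = np.diff(x1_hat, prepend=x1_hat[0])        # inverse accumulation
            x0_hat[0] = x0[0]
            return a, b, x0_hat

        a, b, forecast = gm11_fit_predict([2.87, 3.28, 3.34, 3.57, 3.62])
        print(round(a, 4), round(b, 4), forecast[-3:].round(3))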

  9. Predictability of extreme values in geophysical models

    Directory of Open Access Journals (Sweden)

    A. E. Sterk

    2012-09-01

    Full Text Available Extreme value theory in deterministic systems is concerned with unlikely large (or small) values of an observable evaluated along evolutions of the system. In this paper we study the finite-time predictability of extreme values, such as convection, energy, and wind speeds, in three geophysical models. We study whether finite-time Lyapunov exponents are larger or smaller for initial conditions leading to extremes. General statements on whether extreme values are better or less predictable are not possible: the predictability of extreme values depends on the observable, the attractor of the system, and the prediction lead time.

  10. A density functional theory based approach for predicting melting points of ionic liquids.

    Science.gov (United States)

    Chen, Lihua; Bryantsev, Vyacheslav S

    2017-02-01

    Accurate prediction of melting points of ILs is important both from the fundamental point of view and from the practical perspective for screening ILs with low melting points and broadening their utilization in a wider temperature range. In this work, we present an ab initio approach to calculate melting points of ILs with known crystal structures and illustrate its application for a series of 11 ILs containing imidazolium/pyrrolidinium cations and halide/polyatomic fluoro-containing anions. The melting point is determined as a temperature at which the Gibbs free energy of fusion is zero. The Gibbs free energy of fusion can be expressed through the use of the Born-Fajans-Haber cycle via the lattice free energy of forming a solid IL from gaseous phase ions and the sum of the solvation free energies of ions comprising the IL. Dispersion-corrected density functional theory (DFT) involving (semi)local (PBE-D3) and hybrid exchange-correlation (HSE06-D3) functionals is applied to estimate the lattice enthalpy, entropy, and free energy. The ion solvation free energies are calculated with the SMD-generic-IL solvation model at the M06-2X/6-31+G(d) level of theory under standard conditions. The melting points of ILs computed with the HSE06-D3 functional are in good agreement with the experimental data, with a mean absolute error of 30.5 K and a mean relative error of 8.5%. The model is capable of accurately reproducing the trends in melting points upon variation of alkyl substituents in organic cations and replacement of one anion by another. The results verify that the lattice energies of ILs containing polyatomic fluoro-containing anions can be approximated reasonably well using the volume-based thermodynamic approach. However, there is no correlation of the computed lattice energies with molecular volume for ILs containing halide anions. Moreover, entropies of solid ILs follow two different linear relationships with molecular volume for halides and polyatomic fluoro
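
    The decision rule in the abstract (the melting point is the temperature at which the Gibbs free energy of fusion, assembled from a lattice term and ion solvation terms via the Born-Fajans-Haber cycle, crosses zero) can be sketched as a simple root-finding problem. Every function and number below is a placeholder assumption, not a value from the paper.

        from scipy.optimize import brentq

        # Placeholder free energies in kJ/mol as linear functions of T (K);
        # in the paper these come from dispersion-corrected DFT (lattice term)
        # and the SMD-generic-IL solvation model (ion terms).
        def g_lattice(T):        # gas-phase ions -> crystalline IL
            return -480.0 + 0.45 * T

        def g_solvation(T):      # gas-phase ions -> liquid (solvated) IL
            return -455.0 + 0.38 * T

        def g_fusion(T):         # solid -> liquid, via the thermodynamic cycle
            return g_solvation(T) - g_lattice(T)

        T_melt = brentq(g_fusion, 200.0, 600.0)   # temperature where dG_fus = 0
        print("estimated melting point: %.1f K" % T_melt)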

  11. Quark Matter at High Density based on Extended Confined-isospin-density-dependent-mass Model

    CERN Document Server

    Qauli, A I

    2016-01-01

    We investigate the effect of the inclusion of relativistic Coulomb terms in a confined-isospin-density-dependent-mass (CIDDM) model of strange quark matter (SQM). We found that if we include the Coulomb term in scalar density form, the SQM equation of state (EOS) at high densities is stiffer, but if we include the Coulomb term in vector density form it is softer than that of the standard CIDDM model. We also investigate systematically the role of each term of the extended CIDDM model. Compared with what was reported in the earlier CIDDM study [ref:isospin], we found that the stiffness of the SQM EOS is controlled by the interplay among the oscillator harmonic, isospin asymmetry, and Coulomb contributions, depending on the parameter range of these terms. We have found that the absolute stability condition of SQM and the mass of 2 M⊙ pulsars can constrain the oscillator harmonic parameter to κ1 ≈ 0.53 in the case where the Coulomb term is excluded. If the Coulomb term is included, for the models whose parameters are consistent with SQM ...

  12. Hybrid modeling and prediction of dynamical systems

    Science.gov (United States)

    Lloyd, Alun L.; Flores, Kevin B.

    2017-01-01

    Scientific analysis often relies on the ability to make accurate predictions of a system’s dynamics. Mechanistic models, parameterized by a number of unknown parameters, are often used for this purpose. Accurate estimation of the model state and parameters prior to prediction is necessary, but may be complicated by issues such as noisy data and uncertainty in parameters and initial conditions. At the other end of the spectrum exist nonparametric methods, which rely solely on data to build their predictions. While these nonparametric methods do not require a model of the system, their performance is strongly influenced by the amount and noisiness of the data. In this article, we consider a hybrid approach to modeling and prediction which merges recent advancements in nonparametric analysis with standard parametric methods. The general idea is to replace a subset of a mechanistic model’s equations with their corresponding nonparametric representations, resulting in a hybrid modeling and prediction scheme. Overall, we find that this hybrid approach allows for more robust parameter estimation and improved short-term prediction in situations where there is a large uncertainty in model parameters. We demonstrate these advantages in the classical Lorenz-63 chaotic system and in networks of Hindmarsh-Rose neurons before application to experimentally collected structured population data. PMID:28692642
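
    A compact illustration of the hybrid idea on the Lorenz-63 system mentioned above: the x and y equations stay mechanistic while the z equation's right-hand side is replaced by a regressor trained on sampled derivatives. The k-nearest-neighbour surrogate, parameter values, and training protocol are illustrative assumptions, not the authors' construction.

        import numpy as np
        from scipy.integrate import solve_ivp
        from sklearn.neighbors import KNeighborsRegressor

        sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0

        def lorenz(t, s):
            x, y, z = s
            return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

        # "Data": sample states along a trajectory and record dz/dt there, standing
        # in for a quantity whose governing equation is treated as unknown.
        sol = solve_ivp(lorenz, (0, 40), [1.0, 1.0, 1.0], max_step=0.01)
        states = sol.y.T[::20]
        dzdt = states[:, 0] * states[:, 1] - beta * states[:, 2]
        surrogate = KNeighborsRegressor(n_neighbors=5).fit(states, dzdt)

        def hybrid(t, s):
            x, y, z = s
            dz = surrogate.predict([[x, y, z]])[0]          # learned component
            return [sigma * (y - x), x * (rho - z) - y, dz]

        # Short-term prediction from the trajectory's final state
        s0 = sol.y[:, -1]
        t_eval = np.linspace(0, 2, 50)
        true = solve_ivp(lorenz, (0, 2), s0, t_eval=t_eval).y
        pred = solve_ivp(hybrid, (0, 2), s0, t_eval=t_eval).y
        print("mean absolute error over 2 time units:", np.abs(true - pred).mean())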

  13. Risk terrain modeling predicts child maltreatment.

    Science.gov (United States)

    Daley, Dyann; Bachmann, Michael; Bachmann, Brittany A; Pedigo, Christian; Bui, Minh-Thuy; Coffman, Jamye

    2016-12-01

    As indicated by research on the long-term effects of adverse childhood experiences (ACEs), maltreatment has far-reaching consequences for affected children. Effective prevention measures have been elusive, partly due to difficulty in identifying vulnerable children before they are harmed. This study employs Risk Terrain Modeling (RTM), an analysis of the cumulative effect of environmental factors thought to be conducive for child maltreatment, to create a highly accurate prediction model for future substantiated child maltreatment cases in the City of Fort Worth, Texas. The model is superior to commonly used hotspot predictions and more beneficial in aiding prevention efforts in a number of ways: 1) it identifies the highest risk areas for future instances of child maltreatment with improved precision and accuracy; 2) it aids the prioritization of risk-mitigating efforts by informing about the relative importance of the most significant contributing risk factors; 3) since predictions are modeled as a function of easily obtainable data, practitioners do not have to undergo the difficult process of obtaining official child maltreatment data to apply it; 4) the inclusion of a multitude of environmental risk factors creates a more robust model with higher predictive validity; and, 5) the model does not rely on a retrospective examination of past instances of child maltreatment, but adapts predictions to changing environmental conditions. The present study introduces and examines the predictive power of this new tool to aid prevention efforts seeking to improve the safety, health, and wellbeing of vulnerable children.

  14. Exact maps in density functional theory for lattice models

    Science.gov (United States)

    Dimitrov, Tanja; Appel, Heiko; Fuks, Johanna I.; Rubio, Angel

    2016-08-01

    In the present work, we employ exact diagonalization for model systems on a real-space lattice to explicitly construct the exact density-to-potential map and graphically illustrate the complete exact density-to-wavefunction map that underlie the Hohenberg-Kohn theorem in density functional theory. Having the explicit wavefunction-to-density map at hand, we are able to construct arbitrary observables as functionals of the ground-state density. We analyze the density-to-potential map as the distance between the fragments of a system increases and the correlation in the system grows. We observe a feature that gradually develops in the density-to-potential map as well as in the density-to-wavefunction map. This feature is inherited by arbitrary expectation values as functionals of the ground-state density. We explicitly show the excited-state energies, the excited-state densities, and the correlation entropy as functionals of the ground-state density. All of them show this exact feature that sharpens as the coupling of the fragments decreases and the correlation grows. We denominate this feature as intra-system steepening and discuss how it relates to the well-known inter-system derivative discontinuity. The inter-system derivative discontinuity is an exact concept for coupled subsystems with a degenerate ground state. However, the coupling between subsystems as in charge transfer processes can lift the degeneracy. An important conclusion is that for such systems with a near-degenerate ground state, the corresponding cut along the particle number N of the exact density functionals is differentiable with a well-defined gradient near integer particle number.

  15. Prediction models of prevalent radiographic vertebral fractures among older men.

    Science.gov (United States)

    Schousboe, John T; Rosen, Harold R; Vokes, Tamara J; Cauley, Jane A; Cummings, Steven R; Nevitt, Michael C; Black, Dennis M; Orwoll, Eric S; Kado, Deborah M; Ensrud, Kristine E

    2014-01-01

    No studies have compared how well different prediction models discriminate older men who have a radiographic prevalent vertebral fracture (PVFx) from those who do not. We used area under receiver operating characteristic curves and a net reclassification index to compare how well regression-derived prediction models and nonregression prediction tools identify PVFx among men age ≥65 yr with femoral neck T-score of -1.0 or less enrolled in the Osteoporotic Fractures in Men Study. The area under the receiver operating characteristic curve for a model with age, bone mineral density, and historical height loss (HHL) was 0.682 compared with 0.692 for a complex model with age, bone mineral density, HHL, prior non-spine fracture, body mass index, back pain, grip strength, smoking, and glucocorticoid use (p values for difference in 5 bootstrapped samples 0.14-0.92). This complex model, using a cutpoint prevalence of 5%, correctly reclassified only a net 5.7% (p = 0.13) of men as having or not having a PVFx compared with a simple criteria list (age ≥ 80 yr, HHL >4 cm, or glucocorticoid use). In conclusion, simple criteria identify older men with PVFx as well as regression-based models do. Future research to identify additional risk factors that more accurately identify older men with PVFx is needed.

  16. Evaluación de Modelos de Predicción de Densidades Líquidas de Saturación de Aldehídos, Cetonas y Alcoholes Evaluation of Predictive Models for the Liquid Saturation Density of Aldehydes, Ketones, and Alcohols

    Directory of Open Access Journals (Sweden)

    Ángel Mulero

    2006-01-01

    Full Text Available A computer program to study the validity of empirical correlations for the prediction of saturation liquid densities of pure fluids has been designed. The program includes a database with accepted values from the literature in order to compare the values calculated with the models. For each datum point the program gives the absolute and relative deviations. For each fluid and family of fluids, the temperature range is listed, the absolute and relative deviations are obtained, and the temperature at which the highest deviation is found is indicated. In addition, graphics are generated to display the data together with the model predictions. The validity and accuracy of eight analytical expressions based on the corresponding-states principle for predicting the saturation liquid density of aldehydes, ketones, and four alcohol families have been tested.
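
    One of the simplest corresponding-states correlations of the kind evaluated by such a program is the Rackett equation; the sketch below applies it to acetone using approximate critical constants (Tc ≈ 508 K, Pc ≈ 4.70 MPa, Zc ≈ 0.233), all of which are illustrative inputs rather than database values from the paper.

        R = 8.314  # J/(mol*K)

        def rackett_density(T, Tc, Pc, Zc, M):
            """Saturated liquid density (kg/m^3) from the Rackett equation."""
            Vs = (R * Tc / Pc) * Zc ** (1.0 + (1.0 - T / Tc) ** (2.0 / 7.0))  # m^3/mol
            return M / Vs

        # Acetone (approximate): Tc = 508.1 K, Pc = 4.70e6 Pa, Zc = 0.233, M = 0.05808 kg/mol.
        # Replacing Zc by a fitted Rackett parameter Z_RA usually improves accuracy.
        for T in (300.0, 350.0, 400.0):
            print(T, round(rackett_density(T, 508.1, 4.70e6, 0.233, 0.05808), 1))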

  17. Property predictions using microstructural modeling

    Energy Technology Data Exchange (ETDEWEB)

    Wang, K.G. [Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, CII 9219, 110 8th Street, Troy, NY 12180-3590 (United States)]. E-mail: wangk2@rpi.edu; Guo, Z. [Sente Software Ltd., Surrey Technology Centre, 40 Occam Road, Guildford GU2 7YG (United Kingdom); Sha, W. [Metals Research Group, School of Civil Engineering, Architecture and Planning, The Queen's University of Belfast, Belfast BT7 1NN (United Kingdom); Glicksman, M.E. [Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, CII 9219, 110 8th Street, Troy, NY 12180-3590 (United States); Rajan, K. [Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, CII 9219, 110 8th Street, Troy, NY 12180-3590 (United States)

    2005-07-15

    Precipitation hardening in an Fe-12Ni-6Mn maraging steel during overaging is quantified. First, applying our recent kinetic model of coarsening [Phys. Rev. E, 69 (2004) 061507], and incorporating the Ashby-Orowan relationship, we link quantifiable aspects of the microstructures of these steels to their mechanical properties, including especially the hardness. Specifically, hardness measurements allow calculation of the precipitate size as a function of time and temperature through the Ashby-Orowan relationship. Second, calculated precipitate sizes and thermodynamic data determined with Thermo-Calc® are used with our recent kinetic coarsening model to extract diffusion coefficients during overaging from hardness measurements. Finally, employing more accurate diffusion parameters, we determined the hardness of these alloys independently from theory, and found agreement with experimental hardness data. Diffusion coefficients determined during overaging of these steels are notably higher than those found during the aging - an observation suggesting that precipitate growth during aging and precipitate coarsening during overaging are not controlled by the same diffusion mechanism.

  18. Spatial Economics Model Predicting Transport Volume

    Directory of Open Access Journals (Sweden)

    Lu Bo

    2016-10-01

    Full Text Available It is extremely important to predict logistics requirements in a scientific and rational way. However, in recent years the improvement in prediction methods has not been significant, and the traditional statistical prediction method suffers from low precision and poor interpretability: it can neither guarantee the generalization ability of the prediction model theoretically nor explain the model effectively. Therefore, in combination with the theories of spatial economics, industrial economics, and neo-classical economics, and taking the city of Zhuanghe as the research object, the study identifies the leading industry that can produce a large volume of cargo and further predicts the static logistics generation of Zhuanghe and its hinterland. By integrating the various factors that can affect regional logistics requirements, this study establishes a logistics requirements potential model based on spatial economic principles and expands logistics requirements prediction from purely statistical principles to the new area of spatial and regional economics.

  19. Modeling and Prediction Using Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Juhl, Rune; Møller, Jan Kloppenborg; Jørgensen, John Bagterp

    2016-01-01

    Pharmacokinetic/pharmacodynamic (PK/PD) modeling for a single subject is most often performed using nonlinear models based on deterministic ordinary differential equations (ODEs), and the variation between subjects in a population of subjects is described using a population (mixed effects) setup that describes the variation between subjects. The ODE setup implies that the variation for a single subject is described by a single parameter (or vector), namely the variance (covariance) of the residuals. Furthermore, the prediction of the states is given as the solution to the ODEs and hence assumed deterministic, able to predict the future perfectly. A more realistic approach would be to allow for randomness in the model due to, e.g., the model being too simple or errors in input. We describe a modeling and prediction setup which better reflects reality and suggests stochastic differential equations (SDEs...
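
    A minimal illustration of the ODE-versus-SDE distinction for a one-compartment elimination model: the deterministic model predicts the future exactly, while the SDE version adds system noise via Euler-Maruyama simulation. The rate constant, diffusion scale, and initial concentration are arbitrary placeholders.

        import numpy as np

        k, sigma = 0.3, 0.1              # elimination rate (1/h) and diffusion scale (assumed)
        dt, n_steps, c0 = 0.1, 100, 10.0
        rng = np.random.default_rng(1)

        t = np.arange(n_steps + 1) * dt
        c_ode = c0 * np.exp(-k * t)                     # ODE: dC = -k*C dt

        n_paths = 500
        c = np.full(n_paths, c0)
        paths = [c.copy()]
        for _ in range(n_steps):                        # SDE: dC = -k*C dt + sigma*C dW
            dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
            c = c + (-k * c) * dt + sigma * c * dW
            paths.append(c.copy())
        paths = np.array(paths)

        print("t = 10 h   ODE: %.3f   SDE mean: %.3f   SDE sd: %.3f"
              % (c_ode[-1], paths[-1].mean(), paths[-1].std()))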

  20. Precision Plate Plan View Pattern Predictive Model

    Institute of Scientific and Technical Information of China (English)

    ZHAO Yang; YANG Quan; HE An-rui; WANG Xiao-chen; ZHANG Yun

    2011-01-01

    According to the rolling features of a plate mill, a 3D elastic-plastic FEM (finite element model) based on the full restart method of ANSYS/LS-DYNA was established to study the inhomogeneous plastic deformation of multipass plate rolling. By analyzing the simulation results, the difference between the head-end and tail-end predictive models was identified and the models were modified accordingly. According to the numerical simulation results of 120 different kinds of conditions, a precision plate plan view pattern predictive model was established. Based on these models, the sizing MAS (mizushima automatic plan view pattern control system) method was designed and used on a 2 800 mm plate mill. Comparing plates rolled with and without the PVPP (plan view pattern predictive) model, the reduced width deviation indicates that the plate plan view pattern predictive model is precise.

  1. Generalized Density-Corrected Model for Gas Diffusivity in Variably Saturated Soils

    DEFF Research Database (Denmark)

    Chamindu, Deepagoda; Møldrup, Per; Schjønning, Per

    2011-01-01

    Accurate predictions of the soil-gas diffusivity (Dp/Do, where Dp is the soil-gas diffusion coefficient and Do is the diffusion coefficient in free air) from easily measurable parameters like air-filled porosity (ε) and soil total porosity (φ) are valuable when predicting soil aeration and the emission of greenhouse gases and gaseous-phase contaminants from soils. Soil type (texture) and soil density (compaction) are two key factors controlling gas diffusivity in soils. We extended a recently presented density-corrected Dp(ε)/Do model by letting both model parameters (α and β) be interdependent and also functions of φ. The extension was based on literature measurements on Dutch and Danish soils ranging from sand to peat. The parameter α showed a promising linear relation to total porosity, while β also varied with α following a weak linear relation. The thus generalized density-corrected (GDC...
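
    For orientation, the sketch below evaluates two classical Dp/Do parameterizations from ε and φ; these are well-known reference models only, not the generalized density-corrected model developed in the paper.

        def buckingham(eps):
            """Buckingham model: Dp/Do = eps**2."""
            return eps ** 2

        def millington_quirk(eps, phi):
            """Millington-Quirk model: Dp/Do = eps**(10/3) / phi**2."""
            return eps ** (10.0 / 3.0) / phi ** 2

        phi = 0.45                        # total porosity (illustrative)
        for eps in (0.10, 0.20, 0.30):    # air-filled porosity
            print(eps, round(buckingham(eps), 4), round(millington_quirk(eps, phi), 4))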

  2. Probability density function modeling for sub-powered interconnects

    Science.gov (United States)

    Pater, Flavius; Amaricǎi, Alexandru

    2016-06-01

    This paper proposes three mathematical models for the reliability probability density function of interconnects supplied at sub-threshold voltages: spline curve approximations, Gaussian models, and sine interpolation. The proposed analysis aims at determining the most appropriate fit for the switching delay versus probability of correct switching for sub-powered interconnects. We compare the three mathematical models with Monte-Carlo simulations of interconnects for 45 nm CMOS technology supplied at 0.25 V.

  3. Hidden Markov Models with Factored Gaussian Mixtures Densities

    Institute of Scientific and Technical Information of China (English)

    LI Hao-zheng; LIU Zhi-qiang; ZHU Xiang-hua

    2004-01-01

    We present a factorial representation of Gaussian mixture models for observation densities in Hidden Markov Models (HMMs), which uses factorial learning in the HMM framework. We derive the reestimation formulas for estimating the factorized parameters by the Expectation Maximization (EM) algorithm. We conduct several experiments to compare the performance of this model structure with Factorial Hidden Markov Models (FHMMs) and HMMs; some conclusions and promising empirical results are presented.

  4. NBC Hazard Prediction Model Capability Analysis

    Science.gov (United States)

    1999-09-01

    Puff (SCIPUFF) Model Verification and Evaluation Study, Air Resources Laboratory, NOAA, May 1998. Based on the NOAA review, the VLSTRACK developers... TO SUBSTANTIAL DIFFERENCES IN PREDICTIONS... HPAC uses a transport and dispersion (T&D) model called SCIPUFF and an associated mean wind field model... SCIPUFF is a model for atmospheric dispersion that uses the Gaussian puff method - an arbitrary time-dependent concentration field is represented

  5. The Z3 model with the density of states method

    CERN Document Server

    Mercado, Ydalia Delgado; Gattringer, Christof

    2014-01-01

    In this contribution we apply a new variant of the density of states method to the Z3 spin model at finite density. We use restricted expectation values evaluated with Monte Carlo simulations and study their dependence on a control parameter lambda. We show that a sequence of one-parameter fits to the Monte-Carlo data as a function of lambda is sufficient to completely determine the density of states. We expect that this method has smaller statistical errors than other approaches since all generated Monte Carlo data are used in the determination of the density. We compare results for magnetization and susceptibility to a reference simulation in the dual representation of the Z3 spin model and find good agreement for a wide range of parameters.

  6. Nuclear Level Density: Shell Model vs Mean Field

    CERN Document Server

    Sen'kov, Roman

    2015-01-01

    The knowledge of the nuclear level density is necessary for understanding various reactions including those in the stellar environment. Usually the combinatorics of Fermi-gas plus pairing is used for finding the level density. Recently a practical algorithm avoiding diagonalization of huge matrices was developed for calculating the density of many-body nuclear energy levels with certain quantum numbers for a full shell-model Hamiltonian. The underlying physics is that of quantum chaos and intrinsic thermalization in a closed system of interacting particles. We briefly explain this algorithm and, when possible, demonstrate the agreement of the results with those derived from exact diagonalization. The resulting level density is much smoother than that coming from the conventional mean-field combinatorics. We study the role of various components of residual interactions in the process of thermalization, stressing the influence of incoherent collision-like processes. The shell-model results for the traditionally...

  7. Gutzwiller study of extended Hubbard models with fixed boson densities

    Energy Technology Data Exchange (ETDEWEB)

    Kimura, Takashi [Department of Information Sciences, Kanagawa University, 2946 Tsuchiya, Hiratsuka, Kanagawa 259-1293 (Japan)

    2011-12-15

    We studied all possible ground states, including supersolid (SS) phases and phase separations of hard-core- and soft-core-extended Bose-Hubbard models with fixed boson densities by using the Gutzwiller variational wave function and the linear programming method. We found that the phase diagram of the soft-core model depends strongly on its transfer integral. Furthermore, for a large transfer integral, we showed that an SS phase can be the ground state even below or at half filling against the phase separation. We also found that the density difference between nearest-neighbor sites, which indicates the density order of the SS phase, depends strongly on the boson density and transfer integral.

  8. The density wave in a new anisotropic continuum model

    Institute of Scientific and Technical Information of China (English)

    Ge Hong-Xia; Dai Shi-Qiang; Dong Li-Yun

    2008-01-01

    In this paper the new continuum traffic flow model proposed by Jiang et al. is developed based on an improved car-following model, in which the speed gradient term replaces the density gradient term in the equation of motion. It overcomes the wrong-way travel which exists in many high-order continuum models. Based on the continuum version of the car-following model, the condition for stable traffic flow is derived. Nonlinear analysis shows that the density fluctuation in traffic flow induces a variety of density waves. Near the onset of instability, a small disturbance could lead to solitons determined by the Korteweg-de Vries (KdV) equation, and the soliton solution is derived.
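
    For reference, the KdV equation mentioned above has the standard form (written here in generic variables; the reduced equation in the paper carries model-specific coefficients obtained from the expansion):

        \partial_T U + 6 U \partial_X U + \partial_X^3 U = 0,

    whose one-soliton solution is U(X, T) = (c/2)\,\mathrm{sech}^2\!\left[\tfrac{\sqrt{c}}{2}(X - cT)\right], a density hump travelling at speed c.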

  9. Optimizing finite element predictions of local subchondral bone structural stiffness using neural network-derived density-modulus relationships for proximal tibial subchondral cortical and trabecular bone.

    Science.gov (United States)

    Nazemi, S Majid; Amini, Morteza; Kontulainen, Saija A; Milner, Jaques S; Holdsworth, David W; Masri, Bassam A; Wilson, David R; Johnston, James D

    2017-01-01

    Quantitative computed tomography based subject-specific finite element modeling has potential to clarify the role of subchondral bone alterations in knee osteoarthritis initiation, progression, and pain. However, it is unclear what density-modulus equation(s) should be applied with subchondral cortical and subchondral trabecular bone when constructing finite element models of the tibia. Using a novel approach applying neural networks, optimization, and back-calculation against in situ experimental testing results, the objective of this study was to identify subchondral-specific equations that optimized finite element predictions of local structural stiffness at the proximal tibial subchondral surface. Thirteen proximal tibial compartments were imaged via quantitative computed tomography. Imaged bone mineral density was converted to elastic moduli using multiple density-modulus equations (93 total variations) then mapped to corresponding finite element models. For each variation, root mean squared error was calculated between finite element prediction and in situ measured stiffness at 47 indentation sites. Resulting errors were used to train an artificial neural network, which provided an unlimited number of model variations, with corresponding error, for predicting stiffness at the subchondral bone surface. Nelder-Mead optimization was used to identify optimum density-modulus equations for predicting stiffness. Finite element modeling predicted 81% of experimental stiffness variance (with 10.5% error) using optimized equations for subchondral cortical and trabecular bone differentiated with a 0.5 g/cm³ density. In comparison with published density-modulus relationships, optimized equations offered improved predictions of local subchondral structural stiffness. Further research is needed with anisotropy inclusion, a smaller voxel size and de-blurring algorithms to improve predictions. Copyright © 2016 Elsevier Ltd. All rights reserved.
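
    The mapping step described above has the generic form of a power-law density-modulus relationship applied element by element; the sketch below uses an assumed relation E = a*rho**b with placeholder coefficients and the 0.5 g/cm³ cortical/trabecular split from the abstract, not the optimized subchondral-specific equations reported in the study.

        import numpy as np

        def density_to_modulus(rho, a, b):
            """Power-law mapping E = a * rho**b (E in MPa, rho in g/cm^3)."""
            return a * np.power(rho, b)

        # Placeholder coefficients for the two compartments (illustrative only).
        A_CORT, B_CORT = 10500.0, 2.29    # subchondral cortical
        A_TRAB, B_TRAB = 6850.0, 1.49     # subchondral trabecular

        rho = np.array([0.25, 0.45, 0.55, 0.80])    # QCT-derived densities, g/cm^3
        E = np.where(rho >= 0.5,
                     density_to_modulus(rho, A_CORT, B_CORT),
                     density_to_modulus(rho, A_TRAB, B_TRAB))
        print(E.round(1))                            # element-wise moduli in MPa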

  10. A Joint Density Function in the Renewal Risk Model

    Institute of Scientific and Technical Information of China (English)

    XU HUAI; TANG LING; Wang De-hui

    2013-01-01

    In this paper, we consider a general expression for Φ(u, x, y), the joint density function of the surplus prior to ruin and the deficit at ruin when the initial surplus is u. In the renewal risk model, this density function is expressed in terms of the corresponding density function when the initial surplus is 0. In the compound Poisson risk process with phase-type claim size, we derive an explicit expression for Φ(u, x, y). Finally, we give a numerical example to illustrate the application of these results.

  11. Modeling of plasma density in the earth's dayside inner magnetosphere

    NARCIS (Netherlands)

    Domrachev, VV; Chugunin, DV

    2002-01-01

    The results of comparison of the model profiles of density, obtained by means of the CDPDM model, with the experimental data of the ISEE-1 satellite for the years 1977-1983 are presented. The hypothesis on the validity of the mirror mapping of the convection boundary relative to the dawn-dusk

  12. Strange matter equation of state in the quark mass-density-dependent model

    Energy Technology Data Exchange (ETDEWEB)

    Benvenuto, O.G. (Facultad de Ciencias Astronomicas y Geofisicas, Universidad Nacional de La Plata, Paseo del Bosque S/N, 1900 La Plata (Argentina)); Lugones, G. (Departamento de Fisica, Facultad de Ciencias Exactas, Universidad Nacional de La Plata, 1900 La Plata (Argentina))

    1995-02-15

    We study the properties and stability of strange matter at T = 0 in the quark mass-density-dependent model for noninteracting quarks. We found a wide "stability window" for the values of the parameters (C, M_s0), and the resulting equation of state at low densities is stiffer than that of the MIT bag model. At high densities it tends to the ultrarelativistic behavior expected because of the asymptotic freedom of quarks. The density of zero pressure is near the one predicted by the bag model and not shifted away as stated before; nevertheless, at these densities the velocity of sound is approximately 50% larger in this model than in the bag model. We have integrated the equations of stellar structure for strange stars with the present equation of state. We found that the mass-radius relation is very much the same as in the bag model, although it extends to more massive objects, due to the stiffening of the equation of state at low densities.

  13. Probabilistic Modeling of Fatigue Damage Accumulation for Reliability Prediction

    Directory of Open Access Journals (Sweden)

    Vijay Rathod

    2011-01-01

    Full Text Available A methodology for probabilistic modeling of fatigue damage accumulation for single stress level and multistress level loading is proposed in this paper. The methodology uses linear damage accumulation model of Palmgren-Miner, a probabilistic S-N curve, and an approach for a one-to-one transformation of probability density functions to achieve the objective. The damage accumulation is modeled as a nonstationary process as both the expected damage accumulation and its variability change with time. The proposed methodology is then used for reliability prediction under single stress level and multistress level loading, utilizing dynamic statistical model of cumulative fatigue damage. The reliability prediction under both types of loading is demonstrated with examples.
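
    The deterministic core of the methodology is linear (Palmgren-Miner) damage accumulation against an S-N curve; the sketch below uses a Basquin-type curve N(S) = A*S**(-m) with made-up constants, whereas the paper treats the S-N parameters and the resulting damage probabilistically.

        def cycles_to_failure(S, A=1.0e12, m=3.0):
            """Basquin-type S-N curve N(S) = A * S**(-m); A and m are placeholders."""
            return A * S ** (-m)

        def miner_damage(blocks):
            """Palmgren-Miner sum D = sum(n_i / N_i) over (stress, cycles) blocks."""
            return sum(n / cycles_to_failure(S) for S, n in blocks)

        # Multi-stress-level loading history: (stress amplitude in MPa, applied cycles)
        history = [(120.0, 2.0e5), (150.0, 5.0e4), (200.0, 1.0e4)]
        print("accumulated damage D = %.3f (failure predicted at D >= 1)"
              % miner_damage(history))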

  14. Corporate prediction models, ratios or regression analysis?

    NARCIS (Netherlands)

    Bijnen, E.J.; Wijn, M.F.C.M.

    1994-01-01

    The models developed in the literature with respect to the prediction of a company's failure are based on ratios. It has been shown before that these models should be rejected on theoretical grounds. Our study of industrial companies in the Netherlands shows that the ratios which are used in

  15. Modelling Chemical Reasoning to Predict Reactions

    CERN Document Server

    Segler, Marwin H S

    2016-01-01

    The ability to reason beyond established knowledge allows Organic Chemists to solve synthetic problems and to invent novel transformations. Here, we propose a model which mimics chemical reasoning and formalises reaction prediction as finding missing links in a knowledge graph. We have constructed a knowledge graph containing 14.4 million molecules and 8.2 million binary reactions, which represents the bulk of all chemical reactions ever published in the scientific literature. Our model outperforms a rule-based expert system in the reaction prediction task for 180,000 randomly selected binary reactions. We show that our data-driven model generalises even beyond known reaction types, and is thus capable of effectively (re-) discovering novel transformations (even including transition-metal catalysed reactions). Our model enables computers to infer hypotheses about reactivity and reactions by only considering the intrinsic local structure of the graph, and because each single reaction prediction is typically ac...

  16. Modification of free-energy density functional theory approach for prediction of high-pressure mixture adsorption

    Institute of Scientific and Technical Information of China (English)

    LIU ShuYan; YANG XiaoNing; YANG Zhen

    2008-01-01

    A modified non-local free energy density functional theory (NDFT) model, with the consideration of the nonadditivity term of solid-fluid and fluid-fluid interactions and finite pore wall thickness (≈2 layers), was developed to model the confined fluid mixtures in slit pore. This improved NDFT approach, combining with the pore size distribution (PSD) analysis of adsorbent material can be applied to predicting the adsorption equilibria of high-pressure gas mixtures on activated carbon. Compared with the conventional NDFT method, this new approach partly improves the correlation performance of adsorption equilibrium for pure species and increases the reliability of the PSD analysis. For the mixtures, CH4/N2 and CO2/N2, a relatively improved performance has been observed for the adsorption equilibrium prediction of the mixtures under high-pressure conditions, especially for the weakly adsorbed species.

  17. Use of prediction methods to estimate true density of active pharmaceutical ingredients.

    Science.gov (United States)

    Cao, Xiaoping; Leyva, Norma; Anderson, Stephen R; Hancock, Bruno C

    2008-05-01

    True density is a fundamental and important property of active pharmaceutical ingredients (APIs). Using prediction methods to estimate the API true density can be very beneficial in pharmaceutical research and development, especially when experimental measurements cannot be made due to lack of material or sample handling restrictions. In this paper, two empirical prediction methods developed by Girolami and Immirzi and Perini were used to estimate the true density of APIs, and the estimation results were compared with experimentally measured values by helium pycnometry. The Girolami method is simple and can be used for both liquids and solids. For the tested APIs, the Girolami method had a maximum error of -12.7% and an average percent error of -3.0% with a 95% CI of (-3.8, -2.3%). The Immirzi and Perini method is more involved and is mainly used for solid crystals. In general, it gives better predictions than the Girolami method. For the tested APIs, the Immirzi and Perini method had a maximum error of 9.6% and an average percent error of 0.9% with a 95% CI of (0.3, 1.6%).
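
    The error statistics quoted above (an average percent error with a 95% confidence interval) can be reproduced from a list of predicted and measured true densities as in the short sketch below; the density values used here are hypothetical, not the paper's API data.

      # Average percent error and its 95% confidence interval for predicted
      # vs. pycnometry-measured true density. Values are hypothetical.
      import numpy as np
      from scipy import stats

      measured = np.array([1.30, 1.45, 1.52, 1.21, 1.38])    # g/cm^3, hypothetical
      predicted = np.array([1.26, 1.41, 1.55, 1.17, 1.35])   # g/cm^3, hypothetical

      pct_err = 100.0 * (predicted - measured) / measured
      mean_err = pct_err.mean()
      ci = stats.t.interval(0.95, df=len(pct_err) - 1,
                            loc=mean_err, scale=stats.sem(pct_err))
      print(f"average percent error {mean_err:.1f}%, 95% CI ({ci[0]:.1f}, {ci[1]:.1f})%")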

  18. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico

    2009-01-01

    The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.

  19. Universal iso-density polarizable continuum model for molecular solvents

    CERN Document Server

    Gunceler, Deniz

    2014-01-01

    Implicit electron-density solvation models based on joint density-functional theory offer a computationally efficient solution to the problem of calculating thermodynamic quantities of solvated systems from first-principles quantum mechanics. However, despite much recent interest in such models, to date the applicability of such models to non-aqueous solvents has been limited because the determination of the model parameters requires fitting to a large database of experimental solvation energies for each new solvent considered. This work presents an alternate approach which allows development of new solvation models for a large class of protic and aprotic solvents from only simple, single-molecule ab initio calculations and readily available bulk thermodynamic data. We find that this model is accurate to nearly 1.7 kcal/mol even for solvents outside our development set.

  20. Density Functional Theory and Materials Modeling at Atomistic Length Scales

    Directory of Open Access Journals (Sweden)

    Swapan K. Ghosh

    2002-04-01

    Full Text Available Abstract: We discuss the basic concepts of density functional theory (DFT as applied to materials modeling in the microscopic, mesoscopic and macroscopic length scales. The picture that emerges is that of a single unified framework for the study of both quantum and classical systems. While for quantum DFT, the central equation is a one-particle Schrodinger-like Kohn-Sham equation, the classical DFT consists of Boltzmann type distributions, both corresponding to a system of noninteracting particles in the field of a density-dependent effective potential, the exact functional form of which is unknown. One therefore approximates the exchange-correlation potential for quantum systems and the excess free energy density functional or the direct correlation functions for classical systems. Illustrative applications of quantum DFT to microscopic modeling of molecular interaction and that of classical DFT to a mesoscopic modeling of soft condensed matter systems are highlighted.

  1. Genetic models of homosexuality: generating testable predictions

    OpenAIRE

    Gavrilets, Sergey; Rice, William R.

    2006-01-01

    Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality inclu...

  2. Wind farm production prediction - The Zephyr model

    Energy Technology Data Exchange (ETDEWEB)

    Landberg, L. [Risoe National Lab., Wind Energy Dept., Roskilde (Denmark); Giebel, G. [Risoe National Lab., Wind Energy Dept., Roskilde (Denmark); Madsen, H. [IMM (DTU), Kgs. Lyngby (Denmark); Nielsen, T.S. [IMM (DTU), Kgs. Lyngby (Denmark); Joergensen, J.U. [Danish Meteorologisk Inst., Copenhagen (Denmark); Lauersen, L. [Danish Meteorologisk Inst., Copenhagen (Denmark); Toefting, J. [Elsam, Fredericia (DK); Christensen, H.S. [Eltra, Fredericia (Denmark); Bjerge, C. [SEAS, Haslev (Denmark)

    2002-06-01

    This report describes a project - funded by the Danish Ministry of Energy and the Environment - which developed a next generation prediction system called Zephyr. The Zephyr system is a merging between two state-of-the-art prediction systems: Prediktor of Risoe National Laboratory and WPPT of IMM at the Danish Technical University. The numerical weather predictions were generated by DMI's HIRLAM model. Due to technical difficulties programming the system, only the computational core and a very simple version of the originally very complex system were developed. The project partners were: Risoe, DMU, DMI, Elsam, Eltra, Elkraft System, SEAS and E2. (au)

  3. Predictive model for segmented poly(urea)

    Directory of Open Access Journals (Sweden)

    Frankl P.

    2012-08-01

    Full Text Available Segmented poly(urea) has been shown to be of significant benefit in protecting vehicles from blast and impact and there have been several experimental studies to determine the mechanisms by which this protective function might occur. One suggested route is by mechanical activation of the glass transition. In order to enable design of protective structures using this material a constitutive model and equation of state are needed for numerical simulation hydrocodes. Determination of such a predictive model may also help elucidate the beneficial mechanisms that occur in polyurea during high rate loading. The tool deployed to do this has been Group Interaction Modelling (GIM) – a mean field technique that has been shown to predict the mechanical and physical properties of polymers from their structure alone. The structure of polyurea has been used to characterise the parameters in the GIM scheme without recourse to experimental data and the equation of state and constitutive model predicts response over a wide range of temperatures and strain rates. The shock Hugoniot has been predicted and validated against existing data. Mechanical response in tensile tests has also been predicted and validated.

  4. Radiomic modeling of BI-RADS density categories

    Science.gov (United States)

    Wei, Jun; Chan, Heang-Ping; Helvie, Mark A.; Roubidoux, Marilyn A.; Zhou, Chuan; Hadjiiski, Lubomir

    2017-03-01

    Screening mammography is the most effective and low-cost method to date for early cancer detection. Mammographic breast density has been shown to be highly correlated with breast cancer risk. We are developing a radiomic model for BI-RADS density categorization on digital mammography (FFDM) with a supervised machine learning approach. With IRB approval, we retrospectively collected 478 FFDMs from 478 women. As a gold standard, breast density was assessed by an MQSA radiologist based on BI-RADS categories. The raw FFDMs were used for computerized density assessment. The raw FFDM first underwent log-transform to approximate the x-ray sensitometric response, followed by multiscale processing to enhance the fibroglandular densities and parenchymal patterns. Three ROIs were automatically identified based on the keypoint distribution, where the keypoints were obtained as the extrema in the image Gaussian scale-space. A total of 73 features, including intensity and texture features that describe the density and the parenchymal pattern, were extracted from each breast. Our BI-RADS density estimator was constructed by using a random forest classifier. We used a 10-fold cross validation resampling approach to estimate the errors. With the random forest classifier, computerized density categories for 412 of the 478 cases agree with radiologist's assessment (weighted kappa = 0.93). The machine learning method with radiomic features as predictors demonstrated a high accuracy in classifying FFDMs into BI-RADS density categories. Further work is underway to improve our system performance as well as to perform an independent testing using a large unseen FFDM set.
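
    The validation recipe described above (a random forest over intensity and texture features, 10-fold cross-validation, and a weighted kappa against the radiologist's BI-RADS categories) looks roughly like the sketch below. The feature matrix and labels are random stand-ins, and quadratic kappa weighting is an assumption, since the abstract only says "weighted kappa".

      # Random forest + 10-fold cross-validation + weighted kappa sketch.
      # X and y are random placeholders for the 73 radiomic features and the
      # radiologist's BI-RADS density categories.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_predict
      from sklearn.metrics import cohen_kappa_score

      rng = np.random.default_rng(2)
      X = rng.normal(size=(478, 73))       # 73 intensity/texture features per case
      y = rng.integers(1, 5, size=478)     # BI-RADS categories 1-4 (hypothetical)

      clf = RandomForestClassifier(n_estimators=500, random_state=0)
      y_pred = cross_val_predict(clf, X, y, cv=10)
      print("weighted kappa:", cohen_kappa_score(y, y_pred, weights="quadratic"))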

  5. Density functionals and dimensional renormalization for an exactly solvable model

    Science.gov (United States)

    Kais, S.; Herschbach, D. R.; Handy, N. C.; Murray, C. W.; Laming, G. J.

    1993-07-01

    We treat an analytically solvable version of the "Hooke's Law" model for a two-electron atom, in which the electron-electron repulsion is Coulombic but the electron-nucleus attraction is replaced by a harmonic oscillator potential. Exact expressions are obtained for the ground-state wave function and electron density, the Hartree-Fock solution, the correlation energy, the Kohn-Sham orbital, and, by inversion, the exchange and correlation functionals. These functionals pertain to the "intermediate" density regime (r_s ≥ 1.4) for an electron gas. As a test of customary approximations employed in density functional theory, we compare our exact density, exchange, and correlation potentials and energies with results from two approximations. These use Becke's exchange functional and either the Lee-Yang-Parr or the Perdew correlation functional. Both approximations yield rather good results for the density and the exchange and correlation energies, but both deviate markedly from the exact exchange and correlation potentials. We also compare properties of the Hooke's Law model with those of two-electron atoms, including the large dimension limit. A renormalization procedure applied to this very simple limit yields correlation energies as good as those obtained from the approximate functionals, for both the model and actual atoms.
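
    For reference, the Hooke's Law (harmonium) Hamiltonian underlying this record is conventionally written, in atomic units, as below; the abstract does not spell it out, so this is a standard statement of the model rather than text from the paper:

      \hat{H} = -\tfrac{1}{2}\nabla_1^{2} - \tfrac{1}{2}\nabla_2^{2}
                + \tfrac{1}{2}\,k\,(r_1^{2} + r_2^{2}) + \frac{1}{r_{12}}

    For particular force constants (k = 1/4 is the textbook case) the ground state is known in closed form, which is what makes the exact densities and, by inversion, the exchange and correlation functionals accessible.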

  6. Ionospheric topside models compared with experimental electron density profiles

    Directory of Open Access Journals (Sweden)

    S. M. Radicella

    2005-06-01

    Full Text Available Recently an increasing number of topside electron density profiles has been made available to the scientific community on the Internet. These data are important for ionospheric modeling purposes, since the experimental information on the electron density above the ionosphere maximum of ionization is very scarce. The present work compares NeQuick and IRI models with the topside electron density profiles available in the databases of the ISIS2, IK19 and Cosmos 1809 satellites. Experimental electron content from the F2 peak up to satellite height and electron densities at fixed heights above the peak have been compared under a wide range of different conditions. The analysis performed points out the behavior of the models and the improvements needed to be assessed to have a better reproduction of the experimental results. NeQuick topside is a modified Epstein layer, with thickness parameter determined by an empirical relation. It appears that its performance is strongly affected by this parameter, indicating the need for improvements of its formulation. IRI topside is based on Booker's approach to consider two parts with constant height gradients. It appears that this formulation leads to an overestimation of the electron density in the upper part of the profiles, and overestimation of TEC.

  7. PREDICTIVE CAPACITY OF ARCH FAMILY MODELS

    Directory of Open Access Journals (Sweden)

    Raphael Silveira Amaro

    2016-03-01

    Full Text Available In the last decades, a remarkable number of models, variants from the Autoregressive Conditional Heteroscedastic family, have been developed and empirically tested, making the process of choosing a particular model extremely complex. This research aims to compare the predictive capacity, using the Model Confidence Set procedure, of five conditional heteroskedasticity models, considering eight different statistical probability distributions. The financial series which were used refer to the log-return series of the Bovespa index and the Dow Jones Industrial Index in the period between 27 October 2008 and 30 December 2014. The empirical evidence showed that, in general, competing models have a great homogeneity to make predictions, either for a stock market of a developed country or for a stock market of a developing country. An equivalent result can be inferred for the statistical probability distributions that were used.
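
    The simplest member of the family compared above is the GARCH(1,1) recursion; the sketch below runs that recursion and produces a one-step-ahead variance forecast. The parameters are fixed hypothetical values rather than maximum-likelihood estimates, the returns are simulated stand-ins for the Bovespa and Dow Jones series, and the Model Confidence Set comparison itself is not reproduced here.

      # GARCH(1,1) conditional variance recursion and one-step-ahead forecast.
      # Parameters and returns are hypothetical placeholders.
      import numpy as np

      rng = np.random.default_rng(3)
      returns = rng.normal(0.0, 0.01, size=1500)     # hypothetical log-returns

      omega, alpha, beta = 1e-6, 0.08, 0.90          # assumed GARCH(1,1) parameters
      sigma2 = np.empty_like(returns)
      sigma2[0] = returns.var()
      for t in range(1, len(returns)):
          sigma2[t] = omega + alpha * returns[t - 1]**2 + beta * sigma2[t - 1]

      # One-step-ahead conditional variance forecast.
      forecast = omega + alpha * returns[-1]**2 + beta * sigma2[-1]
      print("next-period volatility forecast:", np.sqrt(forecast))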

  8. Predictive QSAR modeling of phosphodiesterase 4 inhibitors.

    Science.gov (United States)

    Kovalishyn, Vasyl; Tanchuk, Vsevolod; Charochkina, Larisa; Semenuta, Ivan; Prokopenko, Volodymyr

    2012-02-01

    A series of diverse organic compounds, phosphodiesterase type 4 (PDE-4) inhibitors, have been modeled using a QSAR-based approach. 48 QSAR models were compared by following the same procedure with different combinations of descriptors and machine learning methods. QSAR methodologies used random forests and associative neural networks. The predictive ability of the models was tested through leave-one-out cross-validation, giving a Q² = 0.66-0.78 for regression models and total accuracies Ac=0.85-0.91 for classification models. Predictions for the external evaluation sets obtained accuracies in the range of 0.82-0.88 (for active/inactive classifications) and Q² = 0.62-0.76 for regressions. The method showed itself to be a potential tool for estimation of IC₅₀ of new drug-like candidates at early stages of drug development. Copyright © 2011 Elsevier Inc. All rights reserved.

  9. Sleep Spindle Density Predicts the Effect of Prior Knowledge on Memory Consolidation.

    Science.gov (United States)

    Hennies, Nora; Lambon Ralph, Matthew A; Kempkes, Marleen; Cousins, James N; Lewis, Penelope A

    2016-03-30

    Information that relates to a prior knowledge schema is remembered better and consolidates more rapidly than information that does not. Another factor that influences memory consolidation is sleep and growing evidence suggests that sleep-related processing is important for integration with existing knowledge. Here, we perform an examination of how sleep-related mechanisms interact with schema-dependent memory advantage. Participants first established a schema over 2 weeks. Next, they encoded new facts, which were either related to the schema or completely unrelated. After a 24 h retention interval, including a night of sleep, which we monitored with polysomnography, participants encoded a second set of facts. Finally, memory for all facts was tested in a functional magnetic resonance imaging scanner. Behaviorally, sleep spindle density predicted an increase of the schema benefit to memory across the retention interval. Higher spindle densities were associated with reduced decay of schema-related memories. Functionally, spindle density predicted increased disengagement of the hippocampus across 24 h for schema-related memories only. Together, these results suggest that sleep spindle activity is associated with the effect of prior knowledge on memory consolidation. Episodic memories are gradually assimilated into long-term memory and this process is strongly influenced by sleep. The consolidation of new information is also influenced by its relationship to existing knowledge structures, or schemas, but the role of sleep in such schema-related consolidation is unknown. We show that sleep spindle density predicts the extent to which schemas influence the consolidation of related facts. This is the first evidence that sleep is associated with the interaction between prior knowledge and long-term memory formation. Copyright © 2016 Hennies et al.

  10. Computational modeling of bone density profiles in response to gait: a subject-specific approach.

    Science.gov (United States)

    Pang, Henry; Shiwalkar, Abhishek P; Madormo, Chris M; Taylor, Rebecca E; Andriacchi, Thomas P; Kuhl, Ellen

    2012-03-01

    The goal of this study is to explore the potential of computational growth models to predict bone density profiles in the proximal tibia in response to gait-induced loading. From a modeling point of view, we design a finite element-based computational algorithm using the theory of open system thermodynamics. In this algorithm, the biological problem, the balance of mass, is solved locally on the integration point level, while the mechanical problem, the balance of linear momentum, is solved globally on the node point level. Specifically, the local bone mineral density is treated as an internal variable, which is allowed to change in response to mechanical loading. From an experimental point of view, we perform a subject-specific gait analysis to identify the relevant forces during walking using an inverse dynamics approach. These forces are directly applied as loads in the finite element simulation. To validate the model, we take a Dual-Energy X-ray Absorptiometry scan of the subject's right knee from which we create a geometric model of the proximal tibia. For qualitative validation, we compare the computationally predicted density profiles to the bone mineral density extracted from this scan. For quantitative validation, we adopt the region of interest method and determine the density values at fourteen discrete locations using standard and custom-designed image analysis tools. Qualitatively, our two- and three-dimensional density predictions are in excellent agreement with the experimental measurements. Quantitatively, errors are less than 3% for the two-dimensional analysis and less than 10% for the three-dimensional analysis. The proposed approach has the potential to ultimately improve the long-term success of possible treatment options for chronic diseases such as osteoarthritis on a patient-specific basis by accurately addressing the complex interactions between ambulatory loads and tissue changes.
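
    The split described above (a local density update solved at the integration points, driven by a mechanical stimulus) can be caricatured by the toy rule below, in which density relaxes toward a stimulus set-point. The stimulus values, exponent, rate constant and set-point are illustrative assumptions, not the paper's calibrated open-system model.

      # Toy local bone-density update: density at each integration point
      # evolves toward a mechanical set-point. All constants are assumptions.
      import numpy as np

      rho = np.full(5, 0.8)                         # g/cm^3, initial densities
      psi = np.array([0.5, 1.0, 2.0, 4.0, 8.0])     # assumed strain-energy stimulus
      psi0, c, n, dt, rho0 = 2.0, 0.1, 2.0, 0.1, 1.0

      for _ in range(400):                          # explicit Euler steps of d(rho)/dt
          drho = c * ((rho / rho0)**-n * psi - psi0)
          rho = np.clip(rho + dt * drho, 0.05, 2.0)

      print("converged densities:", np.round(rho, 3))   # higher stimulus -> denser bone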

  11. Modeling of density of aqueous solutions of amino acids with the statistical associating fluid theory

    Energy Technology Data Exchange (ETDEWEB)

    Ji Peijun [College of Chemical Engineering, Beijing University of Chemical Technology, Beijing 100029 (China); Feng Wei [College of Life Science and Technology, Beijing University of Chemical Technology, Beijing 100029 (China)]. E-mail: fengwei@mail.buct.edu.cn; Tan Tianwei [College of Life Science and Technology, Beijing University of Chemical Technology, Beijing 100029 (China)

    2007-07-15

    The density of aqueous solutions of amino acids has been modeled with the statistical associating fluid theory (SAFT) equation of state. The modeling is accomplished by extending the previously developed new method to determine the SAFT parameters for amino acids. The modeled systems include {alpha}-alanine/H{sub 2}O, {beta}-alanine/H{sub 2}O, proline/H{sub 2}O, L-asparagine/H{sub 2}O, L-glutamine/H{sub 2}O, L-histidine/H{sub 2}O, serine/H{sub 2}O, glycine/H{sub 2}O, alanine/H{sub 2}O/sucrose, DL-valine/H{sub 2}O/sucrose, arginine/H{sub 2}O/sucrose, serine/H{sub 2}O/ethylene glycol, and glycine/H{sub 2}O/ethylene glycol. The density of binary solutions of amino acids has been correlated or predicted with a high precision. And then the density of multicomponent aqueous solutions of amino acids has been modeled based on the modeling results of binary systems, and a high accuracy of density calculations has been obtained. Finally, the water activities of DL-valine/H{sub 2}O, glycine/H{sub 2}O, and proline/H{sub 2}O have been predicted without using binary interaction parameters, and good results have been obtained.

  12. Predictability of bone density at posterior mandibular implant sites using cone-beam computed tomography intensity values.

    Science.gov (United States)

    Alkhader, Mustafa; Hudieb, Malik; Khader, Yousef

    2017-01-01

    The aim of this study was to investigate the predictability of bone density at posterior mandibular implant sites using cone-beam computed tomography (CBCT) intensity values. CBCT cross-sectional images for 436 posterior mandibular implant sites were selected for the study. Using Invivo software (Anatomage, San Jose, California, USA), two observers classified the bone density into three categories: low, intermediate, and high, and CBCT intensity values were generated. Based on the consensus of the two observers, 15.6% of sites were of low bone density, 47.9% were of intermediate density, and 36.5% were of high density. Receiver-operating characteristic analysis showed that CBCT intensity values had a high predictive power for predicting high density sites (area under the curve [AUC] = 0.94) and low density sites (AUC = 0.81). The best cut-off intensity value to predict low density sites was 218 (sensitivity = 0.77 and specificity = 0.76) and the best cut-off value to predict high density sites was 403 (sensitivity = 0.93 and specificity = 0.77). CBCT intensity values are considered useful for predicting bone density at posterior mandibular implant sites.
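
    The receiver-operating characteristic analysis above amounts to computing an AUC for one density class against the rest and reading off a cut-off from the ROC curve; the sketch below does this with simulated intensity values, and choosing the cut-off by Youden's J statistic is an assumption about how the "best" value was selected.

      # ROC/AUC and a Youden's J cut-off for discriminating high-density sites.
      # Intensity values and labels are simulated placeholders.
      import numpy as np
      from sklearn.metrics import roc_curve, roc_auc_score

      rng = np.random.default_rng(4)
      is_high = rng.integers(0, 2, size=436).astype(bool)
      intensity = np.where(is_high, rng.normal(450, 120, 436), rng.normal(250, 120, 436))

      auc = roc_auc_score(is_high, intensity)
      fpr, tpr, thresholds = roc_curve(is_high, intensity)
      best = np.argmax(tpr - fpr)      # Youden's J = sensitivity + specificity - 1
      print(f"AUC = {auc:.2f}, cut-off = {thresholds[best]:.0f}, "
            f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")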

  13. The Indigo Molecule Revisited Again: Assessment of the Minnesota Family of Density Functionals for the Prediction of Its Maximum Absorption Wavelengths in Various Solvents

    Directory of Open Access Journals (Sweden)

    Francisco Cervantes-Navarro

    2013-01-01

    Full Text Available The Minnesota family of density functionals (M05, M05-2X, M06, M06L, M06-2X, and M06-HF) were evaluated for the calculation of the UV-Vis spectra of the indigo molecule in solvents of different polarities using time-dependent density functional theory (TD-DFT) and the polarized continuum model (PCM). The maximum absorption wavelengths predicted for each functional were compared with the known experimental results.

  14. Modelling the predictive performance of credit scoring

    Directory of Open Access Journals (Sweden)

    Shi-Wei Shen

    2013-02-01

    Full Text Available Orientation: The article discussed the importance of rigour in credit risk assessment. Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: A test of Goodness of fit demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI), micro- and also macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality in the study was that three models were developed to predict corporate firms’ defaults based on different microeconomic and macroeconomic factors such as the TCRI, asset growth rates, stock index and gross domestic product. Contribution/value-add: The study utilises different goodness of fits and receiver operator characteristics during the examination of the robustness of the predictive power of these factors.
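
    The core of the models compared above is a logistic regression of default on micro- and macroeconomic covariates. The sketch below fits such a model with statsmodels so that coefficients, a pseudo R-squared and a likelihood-ratio test (the usual goodness-of-fit ingredients) come out of the summary; the column names and simulated data are hypothetical placeholders.

      # Logistic default model with micro- and macroeconomic covariates.
      # Column names and data are hypothetical.
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(5)
      n = 2000
      df = pd.DataFrame({
          "tcri_score":   rng.integers(1, 10, n),      # credit risk index category
          "asset_growth": rng.normal(0.05, 0.2, n),
          "stock_index":  rng.normal(0.0, 0.1, n),
          "gdp_growth":   rng.normal(0.03, 0.02, n),
      })
      logit_p = -3.0 + 0.35 * df["tcri_score"] - 2.0 * df["gdp_growth"]
      df["default"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(float)

      model = sm.Logit(df["default"], sm.add_constant(df.drop(columns="default")))
      print(model.fit(disp=0).summary())    # coefficients, pseudo R-squared, LLR p-value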

  15. Calibrated predictions for multivariate competing risks models.

    Science.gov (United States)

    Gorfine, Malka; Hsu, Li; Zucker, David M; Parmigiani, Giovanni

    2014-04-01

    Prediction models for time-to-event data play a prominent role in assessing the individual risk of a disease, such as cancer. Accurate disease prediction models provide an efficient tool for identifying individuals at high risk, and provide the groundwork for estimating the population burden and cost of disease and for developing patient care guidelines. We focus on risk prediction of a disease in which family history is an important risk factor that reflects inherited genetic susceptibility, shared environment, and common behavior patterns. In this work family history is accommodated using frailty models, with the main novel feature being allowing for competing risks, such as other diseases or mortality. We show through a simulation study that naively treating competing risks as independent right censoring events results in non-calibrated predictions, with the expected number of events overestimated. Discrimination performance is not affected by ignoring competing risks. Our proposed prediction methodologies correctly account for competing events, are very well calibrated, and easy to implement.

  16. The importance of bank vole density and rainy winters in predicting nephropathia epidemica incidence in Northern Sweden.

    Directory of Open Access Journals (Sweden)

    Hussein Khalil

    Full Text Available Pathogenic hantaviruses (family Bunyaviridae, genus Hantavirus) are rodent-borne viruses causing hemorrhagic fever with renal syndrome (HFRS) in Eurasia. In Europe, there are more than 10,000 yearly cases of nephropathia epidemica (NE), a mild form of HFRS caused by Puumala virus (PUUV). The common and widely distributed bank vole (Myodes glareolus) is the host of PUUV. In this study, we aim to explain and predict NE incidence in boreal Sweden using bank vole densities. We tested whether the number of rainy days in winter contributed to variation in NE incidence. We forecast NE incidence in July 2013-June 2014 using projected autumn vole density, and then considering two climatic scenarios: 1) rain-free winter and 2) winter with many rainy days. Autumn vole density was a strong explanatory variable of NE incidence in boreal Sweden in 1990-2012 (R2 = 79%, p<0.001). Adding the number of rainy winter days improved the model (R2 = 84%, p<0.05). We report for the first time that risk of NE is higher in winters with many rainy days. Rain on snow and ground icing may block vole access to subnivean space. Seeking refuge from adverse conditions and shelter from predators, voles may infest buildings, increasing infection risk. In a rainy winter scenario, we predicted 812 NE cases in boreal Sweden, triple the number of cases predicted in a rain-free winter in 2013/2014. Our model enables identification of high risk years when preparedness in the public health sector is crucial, as a rainy winter would accentuate risk.

  17. The importance of bank vole density and rainy winters in predicting nephropathia epidemica incidence in Northern Sweden.

    Science.gov (United States)

    Khalil, Hussein; Olsson, Gert; Ecke, Frauke; Evander, Magnus; Hjertqvist, Marika; Magnusson, Magnus; Löfvenius, Mikaell Ottosson; Hörnfeldt, Birger

    2014-01-01

    Pathogenic hantaviruses (family Bunyaviridae, genus Hantavirus) are rodent-borne viruses causing hemorrhagic fever with renal syndrome (HFRS) in Eurasia. In Europe, there are more than 10,000 yearly cases of nephropathia epidemica (NE), a mild form of HFRS caused by Puumala virus (PUUV). The common and widely distributed bank vole (Myodes glareolus) is the host of PUUV. In this study, we aim to explain and predict NE incidence in boreal Sweden using bank vole densities. We tested whether the number of rainy days in winter contributed to variation in NE incidence. We forecast NE incidence in July 2013-June 2014 using projected autumn vole density, and then considering two climatic scenarios: 1) rain-free winter and 2) winter with many rainy days. Autumn vole density was a strong explanatory variable of NE incidence in boreal Sweden in 1990-2012 (R2 = 79%, p<0.001). Adding the number of rainy winter days improved the model (R2 = 84%, p<0.05). We report for the first time that risk of NE is higher in winters with many rainy days. Rain on snow and ground icing may block vole access to subnivean space. Seeking refuge from adverse conditions and shelter from predators, voles may infest buildings, increasing infection risk. In a rainy winter scenario, we predicted 812 NE cases in boreal Sweden, triple the number of cases predicted in a rain-free winter in 2013/2014. Our model enables identification of high risk years when preparedness in the public health sector is crucial, as a rainy winter would accentuate risk.
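
    The two regressions reported above (incidence explained by autumn vole density alone, then with rainy winter days added) follow the pattern sketched below; the yearly values are hypothetical stand-ins for the Swedish surveillance data.

      # NE incidence regressed on autumn vole density, then adding rainy
      # winter days. Yearly values are hypothetical.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(6)
      years = 23
      vole_density = rng.uniform(0, 40, years)     # autumn trapping index
      rainy_days = rng.integers(0, 30, years)      # rainy winter days
      incidence = 2.0 * vole_density + 1.5 * rainy_days + rng.normal(0, 10, years)

      m1 = sm.OLS(incidence, sm.add_constant(vole_density)).fit()
      m2 = sm.OLS(incidence,
                  sm.add_constant(np.column_stack([vole_density, rainy_days]))).fit()
      print(f"R2, vole density only: {m1.rsquared:.2f}")
      print(f"R2, adding rainy winter days: {m2.rsquared:.2f}")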

  18. Modelling language evolution: Examples and predictions.

    Science.gov (United States)

    Gong, Tao; Shuai, Lan; Zhang, Menghan

    2014-06-01

    We survey recent computer modelling research of language evolution, focusing on a rule-based model simulating the lexicon-syntax coevolution and an equation-based model quantifying the language competition dynamics. We discuss four predictions of these models: (a) correlation between domain-general abilities (e.g. sequential learning) and language-specific mechanisms (e.g. word order processing); (b) coevolution of language and relevant competences (e.g. joint attention); (c) effects of cultural transmission and social structure on linguistic understandability; and (d) commonalities between linguistic, biological, and physical phenomena. All these contribute significantly to our understanding of the evolutions of language structures, individual learning mechanisms, and relevant biological and socio-cultural factors. We conclude the survey by highlighting three future directions of modelling studies of language evolution: (a) adopting experimental approaches for model evaluation; (b) consolidating empirical foundations of models; and (c) multi-disciplinary collaboration among modelling, linguistics, and other relevant disciplines.

  19. Modelling language evolution: Examples and predictions

    Science.gov (United States)

    Gong, Tao; Shuai, Lan; Zhang, Menghan

    2014-06-01

    We survey recent computer modelling research of language evolution, focusing on a rule-based model simulating the lexicon-syntax coevolution and an equation-based model quantifying the language competition dynamics. We discuss four predictions of these models: (a) correlation between domain-general abilities (e.g. sequential learning) and language-specific mechanisms (e.g. word order processing); (b) coevolution of language and relevant competences (e.g. joint attention); (c) effects of cultural transmission and social structure on linguistic understandability; and (d) commonalities between linguistic, biological, and physical phenomena. All these contribute significantly to our understanding of the evolutions of language structures, individual learning mechanisms, and relevant biological and socio-cultural factors. We conclude the survey by highlighting three future directions of modelling studies of language evolution: (a) adopting experimental approaches for model evaluation; (b) consolidating empirical foundations of models; and (c) multi-disciplinary collaboration among modelling, linguistics, and other relevant disciplines.

  20. The High Density Region of QCD from an Effective Model

    CERN Document Server

    De Pietri, R; Seiler, E; Stamatescu, I O

    2007-01-01

    We study the high density region of QCD within an effective model obtained in the frame of the hopping parameter expansion and choosing Polyakov-type loops as the main dynamical variables representing the fermionic matter. This model still shows the so-called sign problem, a difficulty peculiar to non-zero chemical potential, but it permits the development of algorithms which ensure a good overlap of the simulated Monte Carlo ensemble with the true one. We review the main features of the model and present results concerning the dependence of various observables on the chemical potential and on the temperature, in particular of the charge density and the Polyakov loop susceptibility, which may be used to characterize the various phases expected at high baryonic density. In this way, we obtain information about the phase structure of the model and the corresponding phase transitions and crossover regions, which can be considered as hints about the behaviour of non-zero density QCD.

  1. Matter density perturbation and power spectrum in running vacuum model

    CERN Document Server

    Geng, Chao-Qiang

    2016-01-01

    We investigate the matter density perturbation $\delta_m$ and power spectrum $P(k)$ in the running vacuum model (RVM) with the cosmological constant being a function of the Hubble parameter, given by $\Lambda = \Lambda_0 + 6 \sigma H H_0 + 3\

  2. Online traffic flow model applying dynamic flow-density relation

    CERN Document Server

    Kim, Y

    2002-01-01

    This dissertation describes a new approach of the online traffic flow modelling based on the hydrodynamic traffic flow model and an online process to adapt the flow-density relation dynamically. The new modelling approach was tested based on the real traffic situations in various homogeneous motorway sections and a motorway section with ramps and gave encouraging simulation results. This work is composed of two parts: first the analysis of traffic flow characteristics and second the development of a new online traffic flow model applying these characteristics. For homogeneous motorway sections traffic flow is classified into six different traffic states with different characteristics. Delimitation criteria were developed to separate these states. The hysteresis phenomena were analysed during the transitions between these traffic states. The traffic states and the transitions are represented on a states diagram with the flow axis and the density axis. For motorway sections with ramps the complicated traffic fl...
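
    For orientation, a static flow-density relation of the Greenshields type is sketched below; the dissertation's point is precisely that this relation is adapted dynamically online from measurements, which the sketch does not attempt. The free-flow speed and jam density are hypothetical motorway values.

      # Static Greenshields-type flow-density relation (reference only; the
      # online model re-estimates this curve dynamically). Values are assumed.
      import numpy as np

      v_free = 120.0     # km/h, assumed free-flow speed
      rho_jam = 150.0    # veh/km per lane, assumed jam density

      def flow(density):
          """Flow q = rho * v(rho) with a linear speed-density relation."""
          speed = v_free * (1.0 - density / rho_jam)
          return density * np.clip(speed, 0.0, None)

      rho = np.linspace(0.0, rho_jam, 7)
      print(np.round(flow(rho), 1))    # flow peaks at the critical density rho_jam / 2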

  3. Neutralino Relic Density in a Supersymmetric U(1)' Model

    CERN Document Server

    Barger, V; Langacker, P; Lee, H S; Barger, Vernon; Kao, Chung; Langacker, Paul; Lee, Hye-Sung

    2004-01-01

    We study properties of the lightest neutralino ($\chi$) and calculate its cosmological relic density in a supersymmetric U(1)' model with a secluded U(1)' breaking sector (the S-model). The lightest neutralino mass is smaller than in the minimal supersymmetric standard model; for instance, $m_\chi < 100$ GeV in the limit that the U(1)' gaugino mass is large compared to the electroweak scale. We find that the Z-$\chi$-$\chi$ coupling can be enhanced due to the singlino components in the extended neutralino sector. Neutralino annihilation through the Z-resonance then reproduces the measured cold dark matter density over broad regions of the model parameter space.

  4. Global Solar Dynamo Models: Simulations and Predictions

    Indian Academy of Sciences (India)

    Mausumi Dikpati; Peter A. Gilman

    2008-03-01

    Flux-transport type solar dynamos have achieved considerable success in correctly simulating many solar cycle features, and are now being used for prediction of solar cycle timing and amplitude. We first define flux-transport dynamos and demonstrate how they work. The essential added ingredient in this class of models is meridional circulation, which governs the dynamo period and also plays a crucial role in determining the Sun’s memory about its past magnetic fields. We show that flux-transport dynamo models can explain many key features of solar cycles. Then we show that a predictive tool can be built from this class of dynamo that can be used to predict mean solar cycle features by assimilating magnetic field data from previous cycles.

  5. Model Predictive Control of Sewer Networks

    Science.gov (United States)

    Pedersen, Einar B.; Herbertsson, Hannes R.; Niemann, Henrik; Poulsen, Niels K.; Falk, Anne K. V.

    2017-01-01

    The developments in solutions for management of urban drainage are of vital importance, as the amount of sewer water from urban areas continues to increase due to the increase of the world’s population and the change in the climate conditions. How a sewer network is structured, monitored and controlled has thus become an essential factor for efficient performance of waste water treatment plants. This paper examines methods for simplified modelling and controlling a sewer network. A practical approach to the problem is used by analysing a simplified design model, which is based on the Barcelona benchmark model. Due to the inherent constraints, the applied approach is based on Model Predictive Control.
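
    A toy version of the model predictive control formulation used above, for a single storage volume standing in for one element of a simplified sewer network (not the Barcelona benchmark itself), is sketched below with cvxpy; the dynamics, limits and inflow forecast are hypothetical.

      # Toy MPC for one storage volume: choose outflows over a horizon to keep
      # stored volume within limits while smoothing the outflow. Values assumed.
      import cvxpy as cp
      import numpy as np

      horizon = 12
      inflow = 2.0 + np.sin(np.linspace(0, np.pi, horizon))   # forecast inflow (m^3/s)
      v_max, u_max, v0 = 30.0, 4.0, 10.0                       # limits and initial volume

      v = cp.Variable(horizon + 1)     # stored volume trajectory
      u = cp.Variable(horizon)         # controlled outflow to treatment

      constraints = [v[0] == v0, v >= 0, v <= v_max, u >= 0, u <= u_max]
      constraints += [v[t + 1] == v[t] + inflow[t] - u[t] for t in range(horizon)]

      # Penalize stored volume (overflow risk) and rapid changes in outflow.
      objective = cp.Minimize(cp.sum_squares(v) + 10 * cp.sum_squares(cp.diff(u)))
      cp.Problem(objective, constraints).solve()
      print("first outflow command:", round(float(u.value[0]), 2))

    In a receding-horizon implementation only the first command is applied, the state is measured again, and the problem is re-solved at the next step.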

  6. DKIST Polarization Modeling and Performance Predictions

    Science.gov (United States)

    Harrington, David

    2016-05-01

    Calibrating the Mueller matrices of large aperture telescopes and associated coude instrumentation requires astronomical sources and several modeling assumptions to predict the behavior of the system polarization with field of view, altitude, azimuth and wavelength. The Daniel K Inouye Solar Telescope (DKIST) polarimetric instrumentation requires very high accuracy calibration of a complex coude path with an off-axis f/2 primary mirror, time dependent optical configurations and substantial field of view. Polarization predictions across a diversity of optical configurations, tracking scenarios, slit geometries and vendor coating formulations are critical to both construction and continued operations efforts. Recent daytime sky based polarization calibrations of the 4m AEOS telescope and HiVIS spectropolarimeter on Haleakala have provided system Mueller matrices over full telescope articulation for a 15-reflection coude system. AEOS and HiVIS are a DKIST analog with a many-fold coude optical feed and similar mirror coatings creating 100% polarization cross-talk with altitude, azimuth and wavelength. Polarization modeling predictions using Zemax have successfully matched the altitude-azimuth-wavelength dependence on HiVIS with the few percent amplitude limitations of several instrument artifacts. Polarization predictions for coude beam paths depend greatly on modeling the angle-of-incidence dependences in powered optics and the mirror coating formulations. A 6 month HiVIS daytime sky calibration plan has been analyzed for accuracy under a wide range of sky conditions and data analysis algorithms. Predictions of polarimetric performance for the DKIST first-light instrumentation suite have been created under a range of configurations. These new modeling tools and polarization predictions have substantial impact for the design, fabrication and calibration process in the presence of manufacturing issues, science use-case requirements and ultimate system calibration

  7. Modelling Chemical Reasoning to Predict Reactions

    OpenAIRE

    Segler, Marwin H. S.; Waller, Mark P.

    2016-01-01

    The ability to reason beyond established knowledge allows Organic Chemists to solve synthetic problems and to invent novel transformations. Here, we propose a model which mimics chemical reasoning and formalises reaction prediction as finding missing links in a knowledge graph. We have constructed a knowledge graph containing 14.4 million molecules and 8.2 million binary reactions, which represents the bulk of all chemical reactions ever published in the scientific literature. Our model outpe...

  8. Predictive Modeling of the CDRA 4BMS

    Science.gov (United States)

    Coker, Robert; Knox, James

    2016-01-01

    Fully predictive models of the Four Bed Molecular Sieve of the Carbon Dioxide Removal Assembly on the International Space Station are being developed. This virtual laboratory will be used to help reduce mass, power, and volume requirements for future missions. In this paper we describe current and planned modeling developments in the area of carbon dioxide removal to support future crewed Mars missions as well as the resolution of anomalies observed in the ISS CDRA.

  9. Raman Model Predicting Hardness of Covalent Crystals

    OpenAIRE

    Zhou, Xiang-Feng; Qian, Quang-Rui; Sun, Jian; Tian, Yongjun; Wang, Hui-Tian

    2009-01-01

    Based on the fact that both hardness and vibrational Raman spectrum depend on the intrinsic property of chemical bonds, we propose a new theoretical model for predicting hardness of a covalent crystal. The quantitative relationship between hardness and vibrational Raman frequencies deduced from the typical zincblende covalent crystals is validated to be also applicable for the complex multicomponent crystals. This model enables us to nondestructively and indirectly characterize the hardness o...

  10. Prediction of melanoma metastasis by the Shields index based on lymphatic vessel density

    Directory of Open Access Journals (Sweden)

    Metcalfe Chris

    2010-05-01

    Full Text Available Background: Melanoma usually presents as an initial skin lesion without evidence of metastasis. A significant proportion of patients develop subsequent local, regional or distant metastasis, sometimes many years after the initial lesion was removed. The current most effective staging method to identify early regional metastasis is sentinel lymph node biopsy (SLNB), which is invasive, not without morbidity and, while improving staging, may not improve overall survival. Lymphatic density, Breslow's thickness and the presence or absence of lymphatic invasion combined has been proposed to be a prognostic index of metastasis, by Shields et al in a patient group. Methods: Here we undertook a retrospective analysis of 102 malignant melanomas from patients with more than five years follow-up to evaluate the Shields' index and compare with existing indicators. Results: The Shields' index accurately predicted outcome in 90% of patients with metastases and 84% without metastases. For these, the Shields index was more predictive than thickness or lymphatic density. Alternate lymphatic measurement (hot spot analysis) was also effective when combined into the Shields index in a cohort of 24 patients. Conclusions: These results show the Shields index, a non-invasive analysis based on immunohistochemistry of lymphatics surrounding primary lesions that can accurately predict outcome, is a simple, useful prognostic tool in malignant melanoma.

  11. Predictive Modelling of Mycotoxins in Cereals

    NARCIS (Netherlands)

    Fels, van der H.J.; Liu, C.

    2015-01-01

    This article presents the summaries of the presentations given during the 30th meeting of the Werkgroep Fusarium (Fusarium Working Group). The topics are: Predictive Modelling of Mycotoxins in Cereals; Microbial degradation of DON; Exposure to green leaf volatiles primes wheat against FHB but boosts

  12. Unreachable Setpoints in Model Predictive Control

    DEFF Research Database (Denmark)

    Rawlings, James B.; Bonné, Dennis; Jørgensen, John Bagterp

    2008-01-01

    steady state is established for terminal constraint model predictive control (MPC). The region of attraction is the steerable set. Existing analysis methods for closed-loop properties of MPC are not applicable to this new formulation, and a new analysis method is developed. It is shown how to extend...

  13. Predictive Modelling of Mycotoxins in Cereals

    NARCIS (Netherlands)

    Fels, van der H.J.; Liu, C.

    2015-01-01

    This article presents the summaries of the presentations given during the 30th meeting of the Werkgroep Fusarium (Fusarium Working Group). The topics are: Predictive Modelling of Mycotoxins in Cereals; Microbial degradation of DON; Exposure to green leaf volatiles primes wheat against FHB but boosts produ

  14. Prediction modelling for population conviction data

    NARCIS (Netherlands)

    Tollenaar, N.

    2017-01-01

    In this thesis, the possibilities of using prediction models for judicial penal case data are investigated. The development and refinement of a risk taxation scale based on these data is discussed. When false positives are weighted equally severe as false negatives, 70% can be classified correctly.

  15. A Predictive Model for MSSW Student Success

    Science.gov (United States)

    Napier, Angela Michele

    2011-01-01

    This study tested a hypothetical model for predicting both graduate GPA and graduation of University of Louisville Kent School of Social Work Master of Science in Social Work (MSSW) students entering the program during the 2001-2005 school years. The preexisting characteristics of demographics, academic preparedness and culture shock along with…

  16. Predictability of extreme values in geophysical models

    NARCIS (Netherlands)

    Sterk, A.E.; Holland, M.P.; Rabassa, P.; Broer, H.W.; Vitolo, R.

    2012-01-01

    Extreme value theory in deterministic systems is concerned with unlikely large (or small) values of an observable evaluated along evolutions of the system. In this paper we study the finite-time predictability of extreme values, such as convection, energy, and wind speeds, in three geophysical model

  17. A revised prediction model for natural conception

    NARCIS (Netherlands)

    Bensdorp, A.J.; Steeg, J.W. van der; Steures, P.; Habbema, J.D.; Hompes, P.G.; Bossuyt, P.M.; Veen, F. van der; Mol, B.W.; Eijkemans, M.J.; Kremer, J.A.M.; et al.,

    2017-01-01

    One of the aims in reproductive medicine is to differentiate between couples that have favourable chances of conceiving naturally and those that do not. Since the development of the prediction model of Hunault, characteristics of the subfertile population have changed. The objective of this analysis

  18. Distributed Model Predictive Control via Dual Decomposition

    DEFF Research Database (Denmark)

    Biegel, Benjamin; Stoustrup, Jakob; Andersen, Palle

    2014-01-01

    This chapter presents dual decomposition as a means to coordinate a number of subsystems coupled by state and input constraints. Each subsystem is equipped with a local model predictive controller while a centralized entity manages the subsystems via prices associated with the coupling constraints...
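
    The price-coordination idea can be illustrated numerically: each subsystem minimizes its own cost plus a price on the shared resource, and the central entity raises or lowers the price by subgradient ascent until the coupling constraint is satisfied. The quadratic costs and numbers below are hypothetical, and the sketch omits the dynamics handled by the local model predictive controllers.

      # Dual decomposition sketch: two local quadratic problems coordinated by
      # a price (dual variable) on the shared resource. Numbers are assumed.
      import numpy as np

      targets = np.array([6.0, 4.0])    # each subsystem's preferred input level
      capacity = 7.0                    # shared limit (coupling constraint u1 + u2 <= 7)
      price, step = 0.0, 0.2

      for _ in range(200):
          # Local problems: min_u (u - target)^2 + price * u  =>  u = target - price / 2
          u = np.maximum(targets - price / 2.0, 0.0)
          price = max(price + step * (u.sum() - capacity), 0.0)   # dual (price) update

      print("inputs:", np.round(u, 2), "price:", round(price, 2),
            "total:", round(u.sum(), 2))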

  19. Predictive Modelling of Mycotoxins in Cereals

    NARCIS (Netherlands)

    Fels, van der H.J.; Liu, C.

    2015-01-01

    This article presents the summaries of the presentations given during the 30th meeting of the Werkgroep Fusarium (Fusarium Working Group). The topics are: Predictive Modelling of Mycotoxins in Cereals; Microbial degradation of DON; Exposure to green leaf volatiles primes wheat against FHB but boosts produ

  20. Leptogenesis in minimal predictive seesaw models

    CERN Document Server

    Björkeroth, Fredrik; Varzielas, Ivo de Medeiros; King, Stephen F

    2015-01-01

    We estimate the Baryon Asymmetry of the Universe (BAU) arising from leptogenesis within a class of minimal predictive seesaw models involving two right-handed neutrinos and simple Yukawa structures with one texture zero. The two right-handed neutrinos are dominantly responsible for the "atmospheric" and "solar" neutrino masses with Yukawa couplings to $(\

  1. Application of the multivariable grey model MGM(1,n) in the prediction of Aedes albopictus density

    Institute of Scientific and Technical Information of China (English)

    黄建华; 石挺丽; 陈远源; 陈少威; 张宗昀; 尹嘉熙; 陈清; 俞守义

    2016-01-01

    Objective: To build a multivariable grey model, MGM(1,n), from mosquito ovitrap surveillance indices and meteorological data for short-term prediction of Aedes density. Methods: A rural area of Guangzhou was used as the study site; Aedes density was monitored with the larval survey method and the mosquito ovitrap method, and meteorological data for the same period were collected. Grey relational analysis was carried out between the Breteau index (BI) and the mosquito and oviposition positive index (MOI) together with five meteorological indicators for July-November 2014; the variables with the highest relational grades were used to build the MGM(1,n) grey prediction model, and data from December were used to validate its predictive performance. Results: The grey relational order of the indicators with respect to BI was MOI, relative humidity, mean maximum temperature, rainfall, mean temperature and mean minimum temperature. An MGM(1,2) model was built from BI and MOI; the mean absolute error between fitted and observed BI was 9.14, with a mean relative error of 34.73%, while for MOI the mean absolute error was 2.04, with a mean relative error of 21.44%. For the December prediction of Aedes density, the mean absolute error between predicted and observed values was 1.23 for BI and 1.43 for MOI. Conclusion: The multivariable grey prediction model MGM(1,2) can provide short-term predictions of Aedes albopictus density.
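
    The univariate GM(1,1) grey model, which the multivariable MGM(1,n) above generalizes, can be written in a few lines: accumulate the series, estimate the development coefficient by least squares, and de-accumulate the fitted response. The monthly Breteau index values below are hypothetical, and this is the single-variable building block, not the study's fitted MGM(1,2).

      # GM(1,1) grey model sketch: AGO accumulation, least-squares estimation
      # of the development coefficient, and a one-step-ahead prediction.
      # The monthly BI values are hypothetical.
      import numpy as np

      x0 = np.array([23.0, 31.0, 28.0, 19.0, 12.0])   # hypothetical monthly BI
      x1 = np.cumsum(x0)                               # accumulated generating operation

      # Grey differential form: x0(k) + a * z1(k) = b, z1 = mean of consecutive x1.
      z1 = 0.5 * (x1[1:] + x1[:-1])
      B = np.column_stack([-z1, np.ones_like(z1)])
      a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]

      # Time response x1_hat(k+1) = (x0(0) - b/a) * exp(-a k) + b/a, then de-accumulate.
      k = np.arange(len(x0) + 1)                       # one step beyond the data
      x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
      x0_hat = np.diff(np.concatenate([[0.0], x1_hat]))
      print("one-step-ahead BI prediction:", round(x0_hat[-1], 2))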

  2. Phylogenetic mixture models can reduce node-density artifacts.

    Science.gov (United States)

    Venditti, Chris; Meade, Andrew; Pagel, Mark

    2008-04-01

    We investigate the performance of phylogenetic mixture models in reducing a well-known and pervasive artifact of phylogenetic inference known as the node-density effect, comparing them to partitioned analyses of the same data. The node-density effect refers to the tendency for the amount of evolutionary change in longer branches of phylogenies to be underestimated compared to that in regions of the tree where there are more nodes and thus branches are typically shorter. Mixture models allow more than one model of sequence evolution to describe the sites in an alignment without prior knowledge of the evolutionary processes that characterize the data or how they correspond to different sites. If multiple evolutionary patterns are common in sequence evolution, mixture models may be capable of reducing node-density effects by characterizing the evolutionary processes more accurately. In gene-sequence alignments simulated to have heterogeneous patterns of evolution, we find that mixture models can reduce node-density effects to negligible levels or remove them altogether, performing as well as partitioned analyses based on the known simulated patterns. The mixture models achieve this without knowledge of the patterns that generated the data and even in some cases without specifying the full or true model of sequence evolution known to underlie the data. The latter result is especially important in real applications, as the true model of evolution is seldom known. We find the same patterns of results for two real data sets with evidence of complex patterns of sequence evolution: mixture models substantially reduced node-density effects and returned better likelihoods compared to partitioning models specifically fitted to these data. We suggest that the presence of more than one pattern of evolution in the data is a common source of error in phylogenetic inference and that mixture models can often detect these patterns even without prior knowledge of their presence in the

  3. A kinetic approach to modeling the manufacture of high density structural foam: Foaming and polymerization

    Energy Technology Data Exchange (ETDEWEB)

    Rao, Rekha R. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sandia National Laboratories, Livermore, CA (United States); Mondy, Lisa Ann [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sandia National Laboratories, Livermore, CA (United States); Noble, David R. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sandia National Laboratories, Livermore, CA (United States); Brunini, Victor [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sandia National Laboratories, Livermore, CA (United States); Roberts, Christine Cardinal [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sandia National Laboratories, Livermore, CA (United States); Long, Kevin Nicholas [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sandia National Laboratories, Livermore, CA (United States); Soehnel, Melissa Marie [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sandia National Laboratories, Livermore, CA (United States); Celina, Mathias C. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sandia National Laboratories, Livermore, CA (United States); Wyatt, Nicholas B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sandia National Laboratories, Livermore, CA (United States); Thompson, Kyle R. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sandia National Laboratories, Livermore, CA (United States); Tinsley, James

    2015-09-01

    We are studying PMDI polyurethane with a fast catalyst, such that filling and polymerization occur simultaneously. The foam is over-packed to twice or more of its free rise density to reach the density of interest. Our approach is to combine model development closely with experiments to discover new physics, to parameterize models and to validate the models once they have been developed. The model must be able to represent the expansion, filling, curing, and final foam properties. PMDI is chemically blown foam, where carbon dioxide is produced via the reaction of water and isocyanate. The isocyanate also reacts with polyol in a competing reaction, which produces the polymer. A new kinetic model is developed and implemented, which follows a simplified mathematical formalism that decouples these two reactions. The model predicts the polymerization reaction via condensation chemistry, where vitrification and glass transition temperature evolution must be included to correctly predict this quantity. The foam gas generation kinetics are determined by tracking the molar concentration of both water and carbon dioxide. Understanding the thermal history and loads on the foam due to exothermicity and oven heating is very important to the results, since the kinetics and material properties are all very sensitive to temperature. The conservation equations, including the equations of motion, an energy balance, and three rate equations are solved via a stabilized finite element method. We assume generalized-Newtonian rheology that is dependent on the cure, gas fraction, and temperature. The conservation equations are combined with a level set method to determine the location of the free surface over time. Results from the model are compared to experimental flow visualization data and post-test CT data for the density. Several geometries are investigated including a mock encapsulation part, two configurations of a mock structural part, and a bar geometry to

  4. Single crystal plasticity by modeling dislocation density rate behavior

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Benjamin L [Los Alamos National Laboratory; Bronkhorst, Curt [Los Alamos National Laboratory; Beyerlein, Irene [Los Alamos National Laboratory; Cerreta, E. K. [Los Alamos National Laboratory; Dennis-Koller, Darcie [Los Alamos National Laboratory

    2010-12-23

    The goal of this work is to formulate a constitutive model for the deformation of metals over a wide range of strain rates. Damage and failure of materials frequently occur at a variety of deformation rates within the same sample. The present state of the art in single crystal constitutive models relies on thermally-activated models which are believed to become less reliable for problems exceeding strain rates of 10^4 s^-1. This talk presents work in which we extend the applicability of the single crystal model to the strain rate region where dislocation drag is believed to dominate. The elastic model includes effects from volumetric change and pressure sensitive moduli. The plastic model transitions from the low-rate thermally-activated regime to the high-rate drag dominated regime. The direct use of dislocation density as a state parameter gives a measurable physical mechanism to strain hardening. Dislocation densities are separated according to type and given a systematic set of interaction rates adaptable by type. The form of the constitutive model is motivated by previously published dislocation dynamics work which articulated important behaviors unique to high-rate response in fcc systems. The proposed material model incorporates thermal coupling. The hardening model tracks the varying dislocation population with respect to each slip plane and computes the slip resistance based on those values. Comparisons can be made between the responses of single crystals and polycrystals at a variety of strain rates. The material model is fit to copper.
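
    As a generic illustration of the final point — computing slip resistance from per-system dislocation densities — the sketch below uses a Taylor-type hardening law with an interaction matrix. This functional form is a common choice in the crystal-plasticity literature rather than the exact law of the model above, and the roughly copper-like constants and interaction coefficients are placeholders.

    import numpy as np

    def slip_resistance(rho, interaction, mu=48.0e9, b=2.56e-10, alpha=0.3):
        """Taylor-type slip resistance (Pa) on each slip system from per-system
        dislocation densities rho (1/m^2) and an interaction matrix a_ij."""
        rho = np.asarray(rho, dtype=float)
        return alpha * mu * b * np.sqrt(interaction @ rho)

    # Example: 12 fcc slip systems with uniform density and a simple
    # self/latent interaction structure (placeholder values).
    n = 12
    rho0 = np.full(n, 1.0e12)                    # dislocation density, 1/m^2
    a = np.full((n, n), 0.1) + 0.9 * np.eye(n)   # latent = 0.1, self = 1.0
    print(slip_resistance(rho0, a))              # resistance per system, Pa

    In a rate-dependent single crystal model these resistances would enter the flow rule for each slip system, with the densities themselves evolving through storage, annihilation, and drag-controlled terms.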

  5. Hidden Semi-Markov Models for Predictive Maintenance

    Directory of Open Access Journals (Sweden)

    Francesco Cartella

    2015-01-01

    Full Text Available Realistic predictive maintenance approaches are essential for condition monitoring and predictive maintenance of industrial machines. In this work, we propose Hidden Semi-Markov Models (HSMMs) with (i) no constraints on the state duration density function and (ii) applicability to continuous or discrete observations. To deal with such a type of HSMM, we also propose modifications to the learning, inference, and prediction algorithms. Finally, automatic model selection has been made possible using the Akaike Information Criterion. This paper describes the theoretical formalization of the model as well as several experiments performed on simulated and real data with the aim of methodology validation. In all performed experiments, the model is able to correctly estimate the current state and to effectively predict the time to a predefined event with a low overall average absolute error. As a consequence, its applicability to real world settings can be beneficial, especially where the Remaining Useful Lifetime (RUL) of the machine must be calculated in real time.

  6. A local leaky-box model for the local stellar surface density-gas surface density-gas phase metallicity relation

    Science.gov (United States)

    Zhu, Guangtun Ben; Barrera-Ballesteros, Jorge K.; Heckman, Timothy M.; Zakamska, Nadia L.; Sánchez, Sebastian F.; Yan, Renbin; Brinkmann, Jonathan

    2017-07-01

    We revisit the relation between the stellar surface density, the gas surface density and the gas-phase metallicity of typical disc galaxies in the local Universe with the SDSS-IV/MaNGA survey, using the star formation rate surface density as an indicator for the gas surface density. We show that these three local parameters form a tight relationship, confirming previous works (e.g. by the PINGS and CALIFA surveys), but with a larger sample. We present a new local leaky-box model, assuming that star-formation history and chemical evolution are localized except for outflowing material. We derive closed-form solutions for the evolution of stellar surface density, gas surface density and gas-phase metallicity, and show that these parameters form a tight relation independent of initial gas density and time. We show that, with canonical values of model parameters, this predicted relation matches the observed one well. In addition, we briefly describe a pathway to improving the current semi-analytic models of galaxy formation by incorporating the local leaky-box model in the cosmological context, which can potentially explain simultaneously multiple properties of Milky Way-type disc galaxies, such as the size growth and the global stellar mass-gas metallicity relation.
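
    For context, the classic closed-form solution of a leaky-box model with outflow proportional to the star formation rate (instantaneous recycling, mass-loading factor eta) gives the gas-phase metallicity purely as a function of the local gas fraction. The sketch below implements that textbook form; it is consistent with the behaviour described above but is not necessarily the authors' exact parametrization, and the yield and mass-loading values are placeholders.

    import numpy as np

    def leaky_box_metallicity(gas_fraction, y=0.02, eta=1.0):
        """Gas-phase metallicity (mass fraction) for a leaky box with outflow
        rate = eta * SFR under instantaneous recycling; y is the yield and
        gas_fraction = M_gas / (M_gas + M_star)."""
        mu = np.asarray(gas_fraction, dtype=float)
        return (y / (1.0 + eta)) * np.log(1.0 + (1.0 + eta) * (1.0 - mu) / mu)

    # The relation depends only on the local gas fraction, not on time or the
    # initial gas density, echoing the tight relation described above.
    print(leaky_box_metallicity([0.9, 0.5, 0.2]))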

  7. Specialized Language Models using Dialogue Predictions

    CERN Document Server

    Popovici, C; Popovici, Cosmin; Baggia, Paolo

    1996-01-01

    This paper analyses language modeling in spoken dialogue systems for accessing a database. The use of several language models obtained by exploiting dialogue predictions gives better results than the use of a single model for the whole dialogue interaction. For this reason several models have been created, each one for a specific system question, such as the request or the confirmation of a parameter. The use of dialogue-dependent language models increases the performance both at the recognition and at the understanding level, especially on answers to system requests. Moreover, other methods to increase performance, like automatic clustering of vocabulary words or the use of better acoustic models during recognition, do not affect the improvements given by dialogue-dependent language models. The system used in our experiments is Dialogos, the Italian spoken dialogue system used for accessing railway timetable information over the telephone. The experiments were carried out on a large corpus of dialogues coll...

  8. Complex spectrum of spin models for finite-density QCD

    CERN Document Server

    Nishimura, Hiromichi; Pangeni, Kamal

    2016-01-01

    We consider the spectrum of transfer matrix eigenvalues associated with Polyakov loops in lattice QCD at strong coupling. The transfer matrix at finite density is non-Hermitian, and its eigenvalues become complex as a manifestation of the sign problem. We show that the symmetry under charge conjugation and complex conjugation ensures that the eigenvalues are either real or part of a complex conjugate pair, and the complex pairs lead to damped oscillatory behavior in Polyakov loop correlation functions, which also appeared in our previous phenomenological models using complex saddle points. We argue that this effect should be observable in lattice simulations of QCD at finite density.

  9. A biofilm model for prediction of pollutant transformation in sewers.

    Science.gov (United States)

    Jiang, Feng; Leung, Derek Hoi-Wai; Li, Shiyu; Chen, Guang-Hao; Okabe, Satoshi; van Loosdrecht, Mark C M

    2009-07-01

    This study developed a new sewer biofilm model to simulate the pollutant transformation and biofilm variation in sewers under aerobic, anoxic and anaerobic conditions. The biofilm model can describe the activities of heterotrophic, autotrophic, and sulfate-reducing bacteria (SRB) in the biofilm as well as the variations in biofilm thickness, the spatial profiles of SRB population and biofilm density. The model can describe dynamic biofilm growth, multiple biomass evolution and competitions among organic oxidation, denitrification, nitrification, sulfate reduction and sulfide oxidation in a heterogeneous biofilm growing in a sewer. The model has been extensively verified by three different approaches, including direct verification by measurement of the spatial concentration profiles of dissolved oxygen, nitrate, ammonia, and hydrogen sulfide in sewer biofilm. The spatial distribution profile of SRB in sewer biofilm was determined from the fluorescent in situ hybridization (FISH) images taken by a confocal laser scanning microscope (CLSM) and were predicted well by the model.

  10. Caries risk assessment models in caries prediction

    Directory of Open Access Journals (Sweden)

    Amila Zukanović

    2013-11-01

    Full Text Available Objective. The aim of this research was to assess the efficiency of different multifactor models in caries prediction. Material and methods. Data from the questionnaire and objective examination of 109 examinees was entered into the Cariogram, Previser and Caries-Risk Assessment Tool (CAT) multifactor risk assessment models. Caries risk was assessed with the help of all three models for each patient, classifying them as low, medium or high-risk patients. The development of new caries lesions over a period of three years [Decay Missing Filled Tooth (DMFT) increment = difference between Decay Missing Filled Tooth Surface (DMFTS) index at baseline and follow up], provided for examination of the predictive capacity concerning different multifactor models. Results. The data gathered showed that different multifactor risk assessment models give significantly different results (Friedman test: Chi square = 100.073, p=0.000). Cariogram is the model which identified the majority of examinees as medium risk patients (70%). The other two models were more radical in risk assessment, giving more unfavorable risk-profiles for patients. In only 12% of the patients did the three multifactor models assess the risk in the same way. Previser and CAT gave the same results in 63% of cases – the Wilcoxon test showed that there is no statistically significant difference in caries risk assessment between these two models (Z = -1.805, p=0.071). Conclusions. Evaluation of three different multifactor caries risk assessment models (Cariogram, PreViser and CAT) showed that only the Cariogram can successfully predict new caries development in 12-year-old Bosnian children.

  11. Disease prediction models and operational readiness.

    Directory of Open Access Journals (Sweden)

    Courtney D Corley

    Full Text Available The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. We define a disease event to be a biological event with focus on the One Health paradigm. These events are characterized by evidence of infection and or disease condition. We reviewed models that attempted to predict a disease event, not merely its transmission dynamics, and we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). We searched commercial and government databases and harvested Google search results for eligible models, using terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. After removal of duplications and extraneous material, a core collection of 6,524 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. As a result, we systematically reviewed 44 papers, and the results are presented in this analysis. We identified 44 models, classified as one or more of the following: event prediction (4), spatial (26), ecological niche (28), diagnostic or clinical (6), spread or response (9), and reviews (3). The model parameters (e.g., etiology, climatic, spatial, cultural) and data sources (e.g., remote sensing, non-governmental organizations, expert opinion, epidemiological) were recorded and reviewed. A component of this review is the identification of verification and validation (V&V) methods applied to each model, if any V&V method was reported. All models were classified as either having undergone Some Verification or Validation method, or No Verification or Validation. We close by outlining an initial set of operational readiness level guidelines for disease prediction models based upon established Technology

  12. Model Predictive Control based on Finite Impulse Response Models

    DEFF Research Database (Denmark)

    Prasath, Guru; Jørgensen, John Bagterp

    2008-01-01

    We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations...

  13. Predicting coastal cliff erosion using a Bayesian probabilistic model

    Science.gov (United States)

    Hapke, C.; Plant, N.

    2010-01-01

    Regional coastal cliff retreat is difficult to model due to the episodic nature of failures and the along-shore variability of retreat events. There is a growing demand, however, for predictive models that can be used to forecast areas vulnerable to coastal erosion hazards. Increasingly, probabilistic models are being employed that require data sets of high temporal density to define the joint probability density function that relates forcing variables (e.g. wave conditions) and initial conditions (e.g. cliff geometry) to erosion events. In this study we use a multi-parameter Bayesian network to investigate correlations between key variables that control and influence variations in cliff retreat processes. The network uses Bayesian statistical methods to estimate event probabilities using existing observations. Within this framework, we forecast the spatial distribution of cliff retreat along two stretches of cliffed coast in Southern California. The input parameters are the height and slope of the cliff, a descriptor of material strength based on the dominant cliff-forming lithology, and the long-term cliff erosion rate that represents prior behavior. The model is forced using predicted wave impact hours. Results demonstrate that the Bayesian approach is well-suited to the forward modeling of coastal cliff retreat, with the correct outcomes forecast in 70-90% of the modeled transects. The model also performs well in identifying specific locations of high cliff erosion, thus providing a foundation for hazard mapping. This approach can be employed to predict cliff erosion at time-scales ranging from storm events to the impacts of sea-level rise at the century-scale. © 2010.

  14. Habitat-based cetacean density models for the U.S. Atlantic and Gulf of Mexico

    Science.gov (United States)

    Roberts, Jason J.; Best, Benjamin D.; Mannocci, Laura; Fujioka, Ei; Halpin, Patrick N.; Palka, Debra L.; Garrison, Lance P.; Mullin, Keith D.; Cole, Timothy V. N.; Khan, Christin B.; McLellan, William A.; Pabst, D. Ann; Lockhart, Gwen G.

    2016-03-01

    Cetaceans are protected worldwide but vulnerable to incidental harm from an expanding array of human activities at sea. Managing potential hazards to these highly-mobile populations increasingly requires a detailed understanding of their seasonal distributions and habitats. Pursuant to the urgent need for this knowledge for the U.S. Atlantic and Gulf of Mexico, we integrated 23 years of aerial and shipboard cetacean surveys, linked them to environmental covariates obtained from remote sensing and ocean models, and built habitat-based density models for 26 species and 3 multi-species guilds using distance sampling methodology. In the Atlantic, for 11 well-known species, model predictions resembled seasonal movement patterns previously suggested in the literature. For these we produced monthly mean density maps. For lesser-known taxa, and in the Gulf of Mexico, where seasonal movements were less well described, we produced year-round mean density maps. The results revealed high regional differences in small delphinoid densities, confirmed the importance of the continental slope to large delphinoids and of canyons and seamounts to beaked and sperm whales, and quantified seasonal shifts in the densities of migratory baleen whales. The density maps, freely available online, are the first for these regions to be published in the peer-reviewed literature.

  15. Digestive efficiency mediated by serum calcium predicts bone mineral density in the common marmoset (Callithrix jacchus).

    Science.gov (United States)

    Jarcho, Michael R; Power, Michael L; Layne-Colon, Donna G; Tardif, Suzette D

    2013-02-01

    Two health problems have plagued captive common marmoset (Callithrix jacchus) colonies for nearly as long as those colonies have existed: marmoset wasting syndrome and metabolic bone disease. While marmoset wasting syndrome is explicitly linked to nutrient malabsorption, we propose metabolic bone disease is also linked to nutrient malabsorption, although indirectly. If animals experience negative nutrient balance chronically, critical nutrients may be taken from mineral stores such as the skeleton, thus leaving those stores depleted. We indirectly tested this prediction through an initial investigation of digestive efficiency, as measured by apparent energy digestibility, and serum parameters known to play a part in metabolic bone mineral density of captive common marmoset monkeys. In our initial study on 12 clinically healthy animals, we found a wide range of digestive efficiencies, and subjects with lower digestive efficiency had lower serum vitamin D despite having higher food intakes. A second experiment on 23 subjects including several with suspected bone disease was undertaken to measure digestive and serum parameters, with the addition of a measure of bone mineral density by dual-energy X-ray absorptiometry (DEXA). Bone mineral density was positively associated with apparent digestibility of energy, vitamin D, and serum calcium. Further, digestive efficiency was found to predict bone mineral density when mediated by serum calcium. These data indicate that a poor ability to digest and absorb nutrients leads to calcium and vitamin D insufficiency. Vitamin D absorption may be particularly critical for indoor-housed animals, as opposed to animals in a more natural setting, because vitamin D that would otherwise be synthesized via exposure to sunlight must be absorbed from their diet. If malabsorption persists, metabolic bone disease is a possible consequence in common marmosets. These findings support our hypothesis that both wasting syndrome and metabolic bone

  16. An exospheric temperature model from CHAMP thermospheric density

    Science.gov (United States)

    Weng, Libin; Lei, Jiuhou; Sutton, Eric; Dou, Xiankang; Fang, Hanxian

    2017-02-01

    In this study, the effective exospheric temperature, named as T∞, derived from thermospheric densities measured by the CHAMP satellite during 2002-2010 was utilized to develop an exospheric temperature model (ETM) with the aid of the NRLMSISE-00 model. In the ETM, the temperature variations are characterized as a function of latitude, local time, season, and solar and geomagnetic activities. The ETM is validated by the independent GRACE measurements, and it is found that T∞ and thermospheric densities from the ETM are in better agreement with the GRACE data than those from the NRLMSISE-00 model. In addition, the ETM captures well the thermospheric equatorial anomaly feature, seasonal variation, and the hemispheric asymmetry in the thermosphere.

  17. ENSO Prediction using Vector Autoregressive Models

    Science.gov (United States)

    Chapman, D. R.; Cane, M. A.; Henderson, N.; Lee, D.; Chen, C.

    2013-12-01

    A recent comparison (Barnston et al, 2012 BAMS) shows the ENSO forecasting skill of dynamical models now exceeds that of statistical models, but the best statistical models are comparable to all but the very best dynamical models. In this comparison the leading statistical model is the one based on the Empirical Model Reduction (EMR) method. Here we report on experiments with multilevel Vector Autoregressive models using only sea surface temperatures (SSTs) as predictors. VAR(L) models generalize Linear Inverse Models (LIM), which are a VAR(1) method, as well as multilevel univariate autoregressive models. Optimal forecast skill is achieved using 12 to 14 months of prior state information (i.e. 12-14 levels), which allows SSTs alone to capture the effects of other variables such as heat content as well as seasonality. The use of multiple levels allows the model advancing one month at a time to perform at least as well for a 6 month forecast as a model constructed to explicitly forecast 6 months ahead. We infer that the multilevel model has fully captured the linear dynamics (cf. Penland and Magorian, 1993 J. Climate). Finally, while VAR(L) is equivalent to L-level EMR, we show in a 150 year cross-validated assessment that we can increase forecast skill by improving on the EMR initialization procedure. The greatest benefit of this change is in allowing the prediction to make effective use of information over many more months.
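
    A minimal least-squares fit of a VAR(L) model to a multivariate SST-anomaly series is sketched below; the synthetic data, lag order, and index construction are placeholders, and the cross-validation and EMR-style initialization discussed above are not shown.

    import numpy as np

    def fit_var(X, L):
        """Fit x_t = sum_{k=1..L} A_k x_{t-k} + e_t by least squares.
        X has shape (T, n); returns coefficient matrices A with shape (L, n, n)."""
        T, n = X.shape
        Y = X[L:]                                                 # targets x_t
        Z = np.hstack([X[L - k:T - k] for k in range(1, L + 1)])  # lagged predictors
        B, *_ = np.linalg.lstsq(Z, Y, rcond=None)                 # (n*L, n)
        return B.T.reshape(n, L, n).transpose(1, 0, 2)

    def forecast(X, A, steps=6):
        """Advance the fitted VAR one month at a time for a multi-month forecast."""
        L = A.shape[0]
        hist = list(X[-L:])
        for _ in range(steps):
            hist.append(sum(A[k] @ hist[-1 - k] for k in range(L)))
        return np.array(hist[L:])

    # Synthetic example: 600 months of 5 SST indices (placeholder data)
    rng = np.random.default_rng(0)
    X = rng.standard_normal((600, 5)).cumsum(axis=0) * 0.01
    A = fit_var(X, L=12)
    print(forecast(X, A, steps=6).shape)   # (6, 5): six months ahead, five indices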

  18. Electrostatic ion thrusters - towards predictive modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kalentev, O.; Matyash, K.; Duras, J.; Lueskow, K.F.; Schneider, R. [Ernst-Moritz-Arndt Universitaet Greifswald, D-17489 (Germany); Koch, N. [Technische Hochschule Nuernberg Georg Simon Ohm, Kesslerplatz 12, D-90489 Nuernberg (Germany); Schirra, M. [Thales Electronic Systems GmbH, Soeflinger Strasse 100, D-89077 Ulm (Germany)

    2014-02-15

    The development of electrostatic ion thrusters so far has mainly been based on empirical and qualitative know-how, and on evolutionary iteration steps. This resulted in considerable effort regarding prototype design, construction and testing and therefore in significant development and qualification costs and high time demands. For future developments it is anticipated to implement simulation tools which allow for quantitative prediction of ion thruster performance, long-term behavior and space craft interaction prior to hardware design and construction. Based on integrated numerical models combining self-consistent kinetic plasma models with plasma-wall interaction modules a new quality in the description of electrostatic thrusters can be reached. These open the perspective for predictive modeling in this field. This paper reviews the application of a set of predictive numerical modeling tools on an ion thruster model of the HEMP-T (High Efficiency Multi-stage Plasma Thruster) type patented by Thales Electron Devices GmbH. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  19. Understanding uncertainties in model-based predictions of Aedes aegypti population dynamics.

    Directory of Open Access Journals (Sweden)

    Chonggang Xu

    2010-09-01

    Full Text Available Aedes aegypti is one of the most important mosquito vectors of human disease. The development of spatial models for Ae. aegypti provides a promising start toward model-guided vector control and risk assessment, but this will only be possible if models make reliable predictions. The reliability of model predictions is affected by specific sources of uncertainty in the model. This study quantifies uncertainties in the predicted mosquito population dynamics at the community level (a cluster of 612 houses) and the individual-house level based on Skeeter Buster, a spatial model of Ae. aegypti, for the city of Iquitos, Peru. The study considers two types of uncertainty: 1) uncertainty in the estimates of 67 parameters that describe mosquito biology and life history, and 2) uncertainty due to environmental and demographic stochasticity. Our results show that for pupal density and for female adult density at the community level, respectively, the 95% prediction confidence interval ranges from 1000 to 3000 and from 700 to 5,000 individuals. The two parameters contributing most to the uncertainties in predicted population densities at both individual-house and community levels are the female adult survival rate and a coefficient determining weight loss due to energy used in metabolism at the larval stage (i.e. metabolic weight loss). Compared to parametric uncertainty, stochastic uncertainty is relatively low for population density predictions at the community level (less than 5% of the overall uncertainty) but is substantially higher for predictions at the individual-house level (larger than 40% of the overall uncertainty). Uncertainty in mosquito spatial dispersal has little effect on population density predictions at the community level but is important for the prediction of spatial clustering at the individual-house level. This is the first systematic uncertainty analysis of a detailed Ae. aegypti population dynamics model and provides an approach for

  20. Understanding uncertainties in model-based predictions of Aedes aegypti population dynamics.

    Science.gov (United States)

    Xu, Chonggang; Legros, Mathieu; Gould, Fred; Lloyd, Alun L

    2010-09-28

    Aedes aegypti is one of the most important mosquito vectors of human disease. The development of spatial models for Ae. aegypti provides a promising start toward model-guided vector control and risk assessment, but this will only be possible if models make reliable predictions. The reliability of model predictions is affected by specific sources of uncertainty in the model. This study quantifies uncertainties in the predicted mosquito population dynamics at the community level (a cluster of 612 houses) and the individual-house level based on Skeeter Buster, a spatial model of Ae. aegypti, for the city of Iquitos, Peru. The study considers two types of uncertainty: 1) uncertainty in the estimates of 67 parameters that describe mosquito biology and life history, and 2) uncertainty due to environmental and demographic stochasticity. Our results show that for pupal density and for female adult density at the community level, respectively, the 95% prediction confidence interval ranges from 1000 to 3000 and from 700 to 5,000 individuals. The two parameters contributing most to the uncertainties in predicted population densities at both individual-house and community levels are the female adult survival rate and a coefficient determining weight loss due to energy used in metabolism at the larval stage (i.e. metabolic weight loss). Compared to parametric uncertainty, stochastic uncertainty is relatively low for population density predictions at the community level (less than 5% of the overall uncertainty) but is substantially higher for predictions at the individual-house level (larger than 40% of the overall uncertainty). Uncertainty in mosquito spatial dispersal has little effect on population density predictions at the community level but is important for the prediction of spatial clustering at the individual-house level. This is the first systematic uncertainty analysis of a detailed Ae. aegypti population dynamics model and provides an approach for identifying those
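
    The parametric-versus-stochastic decomposition described above can be illustrated with a toy stand-in for the detailed mosquito model: sample the uncertain parameters with the noise sequence fixed to isolate parametric uncertainty, and replicate runs with fixed parameters and varying seeds to isolate stochastic uncertainty. The toy logistic model, parameter distributions, and crude additive variance split below are all placeholders, not Skeeter Buster or its calibration.

    import numpy as np

    def toy_population_model(params, seed):
        """Toy stochastic logistic model returning a final female density."""
        r, K = params
        g = np.random.default_rng(seed)
        n = 50.0
        for _ in range(180):                                   # days
            n = max(n + r * n * (1.0 - n / K) + g.normal(0.0, 2.0), 0.0)
        return n

    rng = np.random.default_rng(5)
    # Parametric uncertainty: draw parameters, keep the noise sequence fixed.
    draws = np.column_stack([rng.normal(0.08, 0.02, 200),      # growth rate
                             rng.normal(3000.0, 400.0, 200)])  # carrying capacity
    parametric = [toy_population_model(p, seed=0) for p in draws]
    # Stochastic uncertainty: fixed parameters, different random seeds.
    stochastic = [toy_population_model((0.08, 3000.0), seed=s) for s in range(200)]
    total = np.var(parametric) + np.var(stochastic)            # crude split
    print(f"parametric share ~ {np.var(parametric) / total:.2f}, "
          f"stochastic share ~ {np.var(stochastic) / total:.2f}")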

  1. Gas explosion prediction using CFD models

    Energy Technology Data Exchange (ETDEWEB)

    Niemann-Delius, C.; Okafor, E. [RWTH Aachen Univ. (Germany); Buhrow, C. [TU Bergakademie Freiberg Univ. (Germany)

    2006-07-15

    A number of CFD models are currently available to model gaseous explosions in complex geometries. Some of these tools allow the representation of complex environments within hydrocarbon production plants. In certain explosion scenarios, a correction is usually made for the presence of buildings and other complexities by using crude approximations to obtain realistic estimates of explosion behaviour as can be found when predicting the strength of blast waves resulting from initial explosions. With the advance of computational technology, and greater availability of computing power, computational fluid dynamics (CFD) tools are becoming increasingly available for solving such a wide range of explosion problems. A CFD-based explosion code - FLACS can, for instance, be confidently used to understand the impact of blast overpressures in a plant environment consisting of obstacles such as buildings, structures, and pipes. With its porosity concept representing geometry details smaller than the grid, FLACS can represent geometry well, even when using coarse grid resolutions. The performance of FLACS has been evaluated using a wide range of field data. In the present paper, the concept of computational fluid dynamics (CFD) and its application to gas explosion prediction is presented. Furthermore, the predictive capabilities of CFD-based gaseous explosion simulators are demonstrated using FLACS. Details about the FLACS-code, some extensions made to FLACS, model validation exercises, application, and some results from blast load prediction within an industrial facility are presented. (orig.)

  2. Genetic models of homosexuality: generating testable predictions.

    Science.gov (United States)

    Gavrilets, Sergey; Rice, William R

    2006-12-22

    Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality including: (i) chromosomal location, (ii) dominance among segregating alleles and (iii) effect sizes that distinguish between the two major models for their polymorphism: the overdominance and sexual antagonism models. We conclude that the measurement of the genetic characteristics of quantitative trait loci (QTLs) found in genomic screens for genes influencing homosexuality can be highly informative in resolving the form of natural selection maintaining their polymorphism.

  3. Characterizing Attention with Predictive Network Models.

    Science.gov (United States)

    Rosenberg, M D; Finn, E S; Scheinost, D; Constable, R T; Chun, M M

    2017-04-01

    Recent work shows that models based on functional connectivity in large-scale brain networks can predict individuals' attentional abilities. While being some of the first generalizable neuromarkers of cognitive function, these models also inform our basic understanding of attention, providing empirical evidence that: (i) attention is a network property of brain computation; (ii) the functional architecture that underlies attention can be measured while people are not engaged in any explicit task; and (iii) this architecture supports a general attentional ability that is common to several laboratory-based tasks and is impaired in attention deficit hyperactivity disorder (ADHD). Looking ahead, connectivity-based predictive models of attention and other cognitive abilities and behaviors may potentially improve the assessment, diagnosis, and treatment of clinical dysfunction. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. A Study On Distributed Model Predictive Consensus

    CERN Document Server

    Keviczky, Tamas

    2008-01-01

    We investigate convergence properties of a proposed distributed model predictive control (DMPC) scheme, where agents negotiate to compute an optimal consensus point using an incremental subgradient method based on primal decomposition as described in Johansson et al. [2006, 2007]. The objective of the distributed control strategy is to agree upon and achieve an optimal common output value for a group of agents in the presence of constraints on the agent dynamics using local predictive controllers. Stability analysis using a receding horizon implementation of the distributed optimal consensus scheme is performed. Conditions are given under which convergence can be obtained even if the negotiations do not reach full consensus.

  5. A joint calibration model for combining predictive distributions

    Directory of Open Access Journals (Sweden)

    Patrizia Agati

    2013-05-01

    Full Text Available In many research fields, as for example in probabilistic weather forecasting, valuable predictive information about a future random phenomenon may come from several, possibly heterogeneous, sources. Forecast combining methods have been developed over the years in order to deal with ensembles of sources: the aim is to combine several predictions in such a way as to improve forecast accuracy and reduce the risk of bad forecasts. In this context, we propose the use of a Bayesian approach to information combining, which consists in treating the predictive probability density functions (pdfs) from the individual ensemble members as data in a Bayesian updating problem. The likelihood function is shown to be proportional to the product of the pdfs, adjusted by a joint “calibration function” describing the predicting skill of the sources (Morris, 1977). In this paper, after rephrasing Morris’ algorithm in a predictive context, we propose to model the calibration function in terms of bias, scale and correlation and to estimate its parameters according to the least squares criterion. The performance of our method is investigated and compared with that of Bayesian Model Averaging (Raftery, 2005) on simulated data.
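
    The core pooling step — treating the member pdfs as data and forming a posterior proportional to their product, adjusted by a calibration term — is easy to sketch for Gaussian member forecasts, where the product is again Gaussian with a precision-weighted mean. The bias/scale adjustments below stand in for a very simple calibration function; the correlation term of the paper is omitted and all numbers are illustrative.

    import numpy as np

    def combine_gaussian_forecasts(means, sds, bias=None, scale=None):
        """Combine member forecasts N(mean_i, sd_i^2) by multiplying their pdfs
        (Bayesian updating with a flat prior).  bias/scale play the role of a
        simple calibration: remove known member bias, inflate over-confident
        spreads."""
        means = np.asarray(means, dtype=float)
        sds = np.asarray(sds, dtype=float)
        if bias is not None:
            means = means - np.asarray(bias, dtype=float)
        if scale is not None:
            sds = sds * np.asarray(scale, dtype=float)
        prec = 1.0 / sds**2
        var = 1.0 / prec.sum()
        return var * (prec * means).sum(), np.sqrt(var)   # combined mean, sd

    # Three members forecasting the same quantity (illustrative numbers)
    print(combine_gaussian_forecasts([2.1, 2.6, 1.9], [0.5, 0.8, 0.6],
                                     bias=[0.1, -0.2, 0.0], scale=[1.0, 1.2, 1.1]))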

  6. Effect of energetic oxygen atoms on neutral density models.

    Science.gov (United States)

    Rohrbaugh, R. P.; Nisbet, J. S.

    1973-01-01

    The dissociative recombination of O2(+) and NO(+) in the F region results in the production of atomic oxygen and atomic nitrogen with substantially greater kinetic energy than the ambient atoms. In the exosphere these energetic atoms have long free paths. They can ascend to altitudes of several thousand kilometers and can travel horizontally to distances of the order of the earth's radius. The distribution of energetic oxygen atoms is derived by means of models of the ion and neutral densities for quiet and disturbed solar conditions. A distribution technique is used to study the motion of the atoms in the collision-dominated region. Ballistic trajectories are calculated in the spherical gravitational field of the earth. The present calculations show that the number densities of energetic oxygen atoms predominate over the ambient atomic oxygen densities above 1000 km under quiet solar conditions and above 1600 km under disturbed solar conditions.

  7. NONLINEAR MODEL PREDICTIVE CONTROL OF CHEMICAL PROCESSES

    Directory of Open Access Journals (Sweden)

    R. G. SILVA

    1999-03-01

    Full Text Available A new algorithm for model predictive control is presented. The algorithm utilizes a simultaneous solution and optimization strategy to solve the model's differential equations. The equations are discretized by equidistant collocation, and along with the algebraic model equations are included as constraints in a nonlinear programming (NLP) problem. This algorithm is compared with the algorithm that uses orthogonal collocation on finite elements. The equidistant collocation algorithm results in simpler equations, providing a decrease in computation time for the control moves. Simulation results are presented and show a satisfactory performance of this algorithm.

  8. Density Forecasts of Crude-Oil Prices Using Option-Implied and ARCH-Type Models

    DEFF Research Database (Denmark)

    Tsiaras, Leonidas; Høg, Esben

      The predictive accuracy of competing crude-oil price forecast densities is investigated for the 1994-2006 period. Moving beyond standard ARCH models that rely exclusively on past returns, we examine the benefits of utilizing the forward-looking information that is embedded in the prices of derivative contracts. Risk-neutral densities, obtained from panels of crude-oil option prices, are adjusted to reflect real-world risks using either a parametric or a non-parametric calibration approach. The relative performance of the models is evaluated for the entire support of the density, as well ... obtained by option prices and non-parametric calibration methods over those constructed using historical returns and simulated ARCH processes...

  9. Hyoid bone fusion and bone density across the lifespan: prediction of age and sex.

    Science.gov (United States)

    Fisher, Ellie; Austin, Diane; Werner, Helen M; Chuang, Ying Ji; Bersu, Edward; Vorperian, Houri K

    2016-06-01

    The hyoid bone supports the important functions of swallowing and speech. At birth, the hyoid bone consists of a central body and pairs of right and left lesser and greater cornua. Fusion of the greater cornua with the body normally occurs in adulthood, but may not occur at all in some individuals. The aim of this study was to quantify hyoid bone fusion across the lifespan, as well as assess developmental changes in hyoid bone density. Using a computed tomography imaging studies database, 136 hyoid bones (66 male, 70 female, ages 1-to-94) were examined. Fusion was ranked on each side and hyoid bones were classified into one of four fusion categories based on their bilateral ranks: bilateral distant non-fusion, bilateral non-fusion, partial or unilateral fusion, and bilateral fusion. Three-dimensional hyoid bone models were created and used to calculate bone density in Hounsfield units. Results showed a wide range of variability in the timing and degree of hyoid bone fusion, with a trend for bilateral non-fusion to decrease after age 20. Hyoid bone density was significantly lower in adult female scans than adult male scans and decreased with age in adulthood. In sex and age estimation models, bone density was a significant predictor of sex. Both fusion category and bone density were significant predictors of age group for adult females. This study provides a developmental baseline for understanding hyoid bone fusion and bone density in typically developing individuals. Findings have implications for the disciplines of forensics, anatomy, speech pathology, and anthropology.

  10. Neuro-fuzzy modeling in bankruptcy prediction

    Directory of Open Access Journals (Sweden)

    Vlachos D.

    2003-01-01

    Full Text Available For the past 30 years the problem of bankruptcy prediction has been thoroughly studied. From the paper of Altman in 1968 to the recent papers in the '90s, the progress of prediction accuracy was not satisfactory. This paper investigates an alternative modeling of the system (firm), combining neural networks and fuzzy controllers, i.e. using neuro-fuzzy models. Classical modeling is based on mathematical models that describe the behavior of the firm under consideration. The main idea of fuzzy control, on the other hand, is to build a model of a human control expert who is capable of controlling the process without thinking in terms of a mathematical model. This control expert specifies his control action in the form of linguistic rules. These control rules are translated into the framework of fuzzy set theory, providing a calculus which can simulate the behavior of the control expert and enhance its performance. The accuracy of the model is studied using datasets from previous research papers.

  11. Insights into plant size-density relationships from models and agricultural crops.

    Science.gov (United States)

    Deng, Jianming; Zuo, Wenyun; Wang, Zhiqiang; Fan, Zhexuan; Ji, Mingfei; Wang, Genxuan; Ran, Jinzhi; Zhao, Changming; Liu, Jianquan; Niklas, Karl J; Hammond, Sean T; Brown, James H

    2012-05-29

    There is general agreement that competition for resources results in a tradeoff between plant mass, M, and density, but the mathematical form of the resulting thinning relationship and the mechanisms that generate it are debated. Here, we evaluate two complementary models, one based on the space-filling properties of canopy geometry and the other on the metabolic basis of resource use. For densely packed stands, both models predict that density scales as M^(-3/4), energy use as M^0, and total biomass as M^(1/4). Compilation and analysis of data from 183 populations of herbaceous crop species, 473 stands of managed tree plantations, and 13 populations of bamboo gave four major results: (i) At low initial planting densities, crops grew at similar rates, did not come into contact, and attained similar mature sizes; (ii) at higher initial densities, crops grew until neighboring plants came into contact, growth ceased as a result of competition for limited resources, and a tradeoff between density and size resulted in critical density scaling as M^(-0.78), total resource use as M^(-0.02), and total biomass as M^(0.22); (iii) these scaling exponents are very close to the predicted values of M^(-3/4), M^0, and M^(1/4), respectively, and significantly different from the exponents suggested by some earlier studies; and (iv) our data extend previously documented scaling relationships for trees in natural forests to small herbaceous annual crops. These results provide a quantitative, predictive framework with important implications for the basic and applied plant sciences.
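
    A quick way to see the scaling argument in practice: on log-log axes, critical density versus mean plant mass should be a straight line whose slope is the thinning exponent. The data below are synthetic placeholders generated with the predicted -3/4 exponent plus scatter.

    import numpy as np

    rng = np.random.default_rng(1)
    mass = np.logspace(-3, 3, 50)                       # mean plant mass, kg
    density = 1.0e4 * mass**(-0.75) * np.exp(0.1 * rng.standard_normal(50))

    # Ordinary least squares in log-log space recovers the thinning exponent
    slope, intercept = np.polyfit(np.log(mass), np.log(density), 1)
    print(f"fitted exponent: {slope:.3f} (predicted: -0.75)")

    # Corollaries of the -3/4 scaling: biomass per area ~ M * density ~ M^(1/4),
    # and resource use per area ~ M^(3/4) * density ~ M^0 (independent of size).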

  12. Pressure prediction model for compression garment design.

    Science.gov (United States)

    Leung, W Y; Yuen, D W; Ng, Sun Pui; Shi, S Q

    2010-01-01

    Based on the application of Laplace's law to compression garments, an equation for predicting garment pressure, incorporating the body circumference, the cross-sectional area of fabric, applied strain (as a function of reduction factor), and its corresponding Young's modulus, is developed. Design procedures are presented to predict garment pressure using the aforementioned parameters for clinical applications. Compression garments have been widely used in treating burning scars. Fabricating a compression garment with a required pressure is important in the healing process. A systematic and scientific design method can enable the occupational therapist and compression garments' manufacturer to custom-make a compression garment with a specific pressure. The objectives of this study are 1) to develop a pressure prediction model incorporating different design factors to estimate the pressure exerted by the compression garments before fabrication; and 2) to propose more design procedures in clinical applications. Three kinds of fabrics cut at different bias angles were tested under uniaxial tension, as were samples made in a double-layered structure. Sets of nonlinear force-extension data were obtained for calculating the predicted pressure. Using the value at 0° bias angle as reference, the Young's modulus can vary by as much as 29% for fabric type P11117, 43% for fabric type PN2170, and even 360% for fabric type AP85120 at a reduction factor of 20%. When comparing the predicted pressure calculated from the single-layered and double-layered fabrics, the double-layered construction provides a larger range of target pressure at a particular strain. The anisotropic and nonlinear behaviors of the fabrics have thus been determined. Compression garments can be methodically designed by the proposed analytical pressure prediction model.
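
    A hedged sketch of the Laplace's-law calculation: interface pressure equals circumferential tension per unit garment height divided by the limb radius, with the tension written in terms of the strain-dependent Young's modulus, the applied strain, and the fabric cross-sectional area. This generic form follows the quantities listed in the abstract but is not necessarily the paper's exact equation; the reduction-factor-to-strain convention is one common choice, and all numbers are illustrative.

    import math

    def garment_pressure(E, strain, area, height, circumference):
        """Interface pressure (Pa) from Laplace's law P = T / r, with the
        circumferential tension per unit garment height taken as
        T = E(strain) * strain * area / height  (assumed form)."""
        radius = circumference / (2.0 * math.pi)
        tension = E * strain * area / height       # N per metre of garment height
        return tension / radius

    # A garment cut 20% smaller than the limb (reduction factor RF) is worn at
    # strain = RF / (1 - RF) under one common convention.
    rf = 0.20
    strain = rf / (1.0 - rf)
    # Illustrative inputs: secant modulus 1 MPa at this strain, fabric
    # cross-section 0.5 mm x 50 mm, 50 mm sample height, 30 cm limb circumference.
    p = garment_pressure(E=1.0e6, strain=strain, area=2.5e-5,
                         height=0.05, circumference=0.30)
    print(f"{p:.0f} Pa ~ {p / 133.322:.1f} mmHg")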

  13. Statistical assessment of predictive modeling uncertainty

    Science.gov (United States)

    Barzaghi, Riccardo; Marotta, Anna Maria

    2017-04-01

    When the results of geophysical models are compared with data, the uncertainties of the model are typically disregarded. We propose a method for defining the uncertainty of a geophysical model based on a numerical procedure that estimates the empirical auto and cross-covariances of model-estimated quantities. These empirical values are then fitted by proper covariance functions and used to compute the covariance matrix associated with the model predictions. The method is tested using a geophysical finite element model in the Mediterranean region. Using a novel χ2 analysis in which both data and model uncertainties are taken into account, the model's estimated tectonic strain pattern due to the Africa-Eurasia convergence in the area that extends from the Calabrian Arc to the Alpine domain is compared with that estimated from GPS velocities while taking into account the model uncertainty through its covariance structure and the covariance of the GPS estimates. The results indicate that including the estimated model covariance in the testing procedure leads to lower observed χ2 values that have better statistical significance and might help a sharper identification of the best-fitting geophysical models.

  14. Seasonal Predictability in a Model Atmosphere.

    Science.gov (United States)

    Lin, Hai

    2001-07-01

    The predictability of atmospheric mean-seasonal conditions in the absence of externally varying forcing is examined. A perfect-model approach is adopted, in which a global T21 three-level quasigeostrophic atmospheric model is integrated over 21 000 days to obtain a reference atmospheric orbit. The model is driven by a time-independent forcing, so that the only source of time variability is the internal dynamics. The forcing is set to perpetual winter conditions in the Northern Hemisphere (NH) and perpetual summer in the Southern Hemisphere. A significant temporal variability in the NH 90-day mean states is observed. The component of that variability associated with the higher-frequency motions, or climate noise, is estimated using a method developed by Madden. In the polar region, and to a lesser extent in the midlatitudes, the temporal variance of the winter means is significantly greater than the climate noise, suggesting some potential predictability in those regions. Forecast experiments are performed to see whether the presence of variance in the 90-day mean states that is in excess of the climate noise leads to some skill in the prediction of these states. Ensemble forecast experiments with nine members starting from slightly different initial conditions are performed for 200 different 90-day means along the reference atmospheric orbit. The serial correlation between the ensemble means and the reference orbit shows that there is skill in the 90-day mean predictions. The skill is concentrated in those regions of the NH that have the largest variance in excess of the climate noise. An EOF analysis shows that nearly all the predictive skill in the seasonal means is associated with one mode of variability with a strong axisymmetric component.

  15. STUDY OF RED TIDE PREDICTION MODEL FOR THE CHANGJIANG ESTUARY

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    This paper, based on field data (red tide water quality monitoring at the Changjiang River mouth and the Hutoudu mariculture area in Zhejiang Province from May to August 1995 and May to September 1996), presents an effective model for short-term prediction of red tide in the Changjiang Estuary. The measured parameters include depth, temperature, color diaphaneity, density, DO, COD and nutrients (PO4-P, NO2-N, NO3-N, NH4-N). The model was checked against field-test data and compared with other related models. The resulting model, Z = SAL - 3.95 DO - 2.974 PH - 5.421 PO4-P, is suitable for application to the Shengsi aquiculture area near the Changjiang Estuary.

  16. Error estimates for density-functional theory predictions of surface energy and work function

    Science.gov (United States)

    De Waele, Sam; Lejaeghere, Kurt; Sluydts, Michael; Cottenier, Stefaan

    2016-12-01

    Density-functional theory (DFT) predictions of materials properties are becoming ever more widespread. With increased use comes the demand for estimates of the accuracy of DFT results. In view of the importance of reliable surface properties, this work calculates surface energies and work functions for a large and diverse test set of crystalline solids. They are compared to experimental values by performing a linear regression, which results in a measure of the predictable and material-specific error of the theoretical result. Two of the most prevalent functionals, the local density approximation (LDA) and the Perdew-Burke-Ernzerhof parametrization of the generalized gradient approximation (PBE-GGA), are evaluated and compared. Both LDA and GGA-PBE are found to yield accurate work functions with error bars below 0.3 eV, rivaling the experimental precision. LDA also provides satisfactory estimates for the surface energy with error bars smaller than 10%, but GGA-PBE significantly underestimates the surface energy for materials with a large correlation energy.
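
    The regression-based error estimate can be sketched in a few lines: regress measured values on DFT-predicted values, use the fitted slope and intercept as the predictable (correctable) part of the error, and report the residual standard deviation as the error bar on a corrected prediction. The work-function numbers below are synthetic placeholders, not the paper's data.

    import numpy as np

    def regression_error_estimate(predicted, measured):
        """Linear regression of measured vs. predicted values; returns the
        (slope, intercept) of the correction and the residual standard
        deviation used as the error bar."""
        p = np.polyfit(predicted, measured, 1)
        residuals = measured - np.polyval(p, predicted)
        return p, float(np.sqrt(np.sum(residuals**2) / (len(measured) - 2)))

    # Synthetic work functions (eV): systematic offset plus scatter (placeholder)
    rng = np.random.default_rng(4)
    wf_dft = rng.uniform(2.5, 6.0, 40)
    wf_exp = 0.95 * wf_dft + 0.15 + rng.normal(0.0, 0.25, 40)
    coef, err = regression_error_estimate(wf_dft, wf_exp)
    print(f"slope = {coef[0]:.2f}, intercept = {coef[1]:.2f} eV, error bar ~ {err:.2f} eV")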

  17. Density-Corrected Models for Gas Diffusivity and Air Permeability in Unsaturated Soil

    DEFF Research Database (Denmark)

    Chamindu, Deepagoda; Møldrup, Per; Schjønning, Per

    2011-01-01

    Accurate prediction of gas diffusivity (Dp/Do) and air permeability (ka) and their variations with air-filled porosity (e) in soil is critical for simulating subsurface migration and emission of climate gases and organic vapors. Gas diffusivity and air permeability measurements from Danish soil ... in subsurface soil. The data were regrouped into four categories based on compaction (total porosity F 0.4 m3 m-3) and soil texture (volume-based content of clay, silt, and organic matter 15%). The results suggested that soil compaction more than soil type was the major control on gas diffusivity and to some extent also on air permeability. We developed a density-corrected (D-C) Dp(e)/Do model as a generalized form of a previous model for Dp/Do at -100 cm H2O of matric potential (Dp,100/Do). The D-C model performed well across soil types and density levels compared with existing models...

  18. A kinetic model for predicting biodegradation.

    Science.gov (United States)

    Dimitrov, S; Pavlov, T; Nedelcheva, D; Reuschenbach, P; Silvani, M; Bias, R; Comber, M; Low, L; Lee, C; Parkerton, T; Mekenyan, O

    2007-01-01

    Biodegradation plays a key role in the environmental risk assessment of organic chemicals. The need to assess biodegradability of a chemical for regulatory purposes supports the development of a model for predicting the extent of biodegradation at different time frames, in particular the extent of ultimate biodegradation within a '10 day window' criterion as well as estimating biodegradation half-lives. Conceptually this implies expressing the rate of catabolic transformations as a function of time. An attempt to correlate the kinetics of biodegradation with molecular structure of chemicals is presented. A simplified biodegradation kinetic model was formulated by combining the probabilistic approach of the original formulation of the CATABOL model with the assumption of first order kinetics of catabolic transformations. Nonlinear regression analysis was used to fit the model parameters to OECD 301F biodegradation kinetic data for a set of 208 chemicals. The new model allows the prediction of biodegradation multi-pathways, primary and ultimate half-lives and simulation of related kinetic biodegradation parameters such as biological oxygen demand (BOD), carbon dioxide production, and the nature and amount of metabolites as a function of time. The model may also be used for evaluating the OECD ready biodegradability potential of a chemical within the '10-day window' criterion.
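
    The first-order kinetics assumption and the '10-day window' check lend themselves to a small sketch: the extent of ultimate biodegradation follows 1 - exp(-k t) after a lag, and the pass level must be reached within 10 days of first reaching 10% degradation. The rate constant, lag, and 60% pass level below are illustrative of an OECD 301F-style evaluation, not outputs of the CATABOL-based model.

    import numpy as np

    def extent(t, k, lag=0.0, plateau=1.0):
        """First-order extent of ultimate biodegradation at time t (days)."""
        t = np.asarray(t, dtype=float)
        return np.where(t < lag, 0.0, plateau * (1.0 - np.exp(-k * (t - lag))))

    def passes_10day_window(k, lag=0.0, plateau=1.0, pass_level=0.60, horizon=28):
        """Simplified check: pass level reached within 10 days of the time at
        which 10% degradation is first observed."""
        days = np.arange(0.0, horizon + 0.1, 0.1)
        e = extent(days, k, lag, plateau)
        if e.max() < 0.10:
            return False
        t10 = days[np.argmax(e >= 0.10)]        # start of the 10-day window
        window = (days >= t10) & (days <= t10 + 10.0)
        return bool(e[window].max() >= pass_level)

    print(passes_10day_window(k=0.15, lag=2.0))        # illustrative rate and lag
    print(extent([2.5, 10.0, 28.0], k=0.15, lag=2.0))  # extent at key times
    # Primary/ultimate half-lives follow directly: t_half = lag + ln(2) / k.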

  19. Modelling of the internal dynamics and density in a tens of joules plasma focus device

    Energy Technology Data Exchange (ETDEWEB)

    Marquez, Ariel [CNEA and Instituto Balseiro, 8402 Bariloche (Argentina); Gonzalez, Jose [INVAP-CONICET and Instituto Balseiro, 8402 Bariloche, Argentina. (Argentina); Tarifeno-Saldivia, Ariel; Pavez, Cristian; Soto, Leopoldo [CCHEN, Comision Chilena de Energia Nuclear, Casilla 188-D, Santiago (Chile); Center for Research and Applications in Plasma Physics and Pulsed Power, P4 (Chile); Clausse, Alejandro [CNEA-CONICET and Universidad Nacional del Centro, 7000 Tandil (Argentina)

    2012-01-15

    Using MHD theory, coupled differential equations were generated using a lumped parameter model to describe the internal behaviour of the pinch compression phase in plasma focus discharges. In order to provide these equations with appropriate initial conditions, the modelling of previous phases was included by describing the plasma sheath as planar shockwaves. The equations were solved numerically, and the results were contrasted against experimental measurements performed on the device PF-50J. The model is able to predict satisfactorily the timing and the radial electron density profile at the maximum compression.

  20. Spin density waves predicted in zigzag puckered phosphorene, arsenene and antimonene nanoribbons

    Science.gov (United States)

    Wu, Xiaohua; Zhang, Xiaoli; Wang, Xianlong; Zeng, Zhi

    2016-04-01

    The pursuit of controlled magnetism in semiconductors has been a persistent goal in condensed matter physics. Recently, Vene (phosphorene, arsenene and antimonene) has been predicted as a new class of 2D semiconductors with suitable band gaps and high carrier mobility. In this work, we investigate the edge magnetism in zigzag puckered Vene nanoribbons (ZVNRs) based on density functional theory. The band structures of ZVNRs show half-filled bands crossing the Fermi level at the midpoint of reciprocal lattice vectors, indicating a strong Peierls instability. To remove this instability, we consider two different mechanisms, namely, a spin density wave (SDW) caused by electron-electron interaction and a charge density wave (CDW) caused by electron-phonon coupling. We have found that an antiferromagnetic Mott-insulating state defined by SDW is the ground state of ZVNRs. In particular, SDW in ZVNRs displays several surprising characteristics: 1) compared with other nanoribbon systems, the magnetic moments are antiparallelly arranged at each zigzag edge and are almost independent of the nanoribbon width; 2) compared with other SDW systems, the magnetic moments and band gap of the SDW state are unexpectedly large, indicating a higher SDW transition temperature in ZVNRs; 3) the SDW can be effectively modified by strain and charge doping, which indicates that ZVNRs have bright prospects in nanoelectronic devices.

  1. Using Tree Detection Algorithms to Predict Stand Sapwood Area, Basal Area and Stocking Density in Eucalyptus regnans Forest

    Directory of Open Access Journals (Sweden)

    Dominik Jaskierniak

    2015-06-01

    Full Text Available Managers of forested water supply catchments require efficient and accurate methods to quantify changes in forest water use due to changes in forest structure and density after disturbance. Using Light Detection and Ranging (LiDAR) data with as few as 0.9 pulses m−2, we applied a local maximum filtering (LMF) method and normalised cut (NCut) algorithm to predict stocking density (SDen) of a 69-year-old Eucalyptus regnans forest comprising 251 plots with resolution of the order of 0.04 ha. Using the NCut method we predicted basal area per hectare (BAHa) and sapwood area per hectare (SAHa), a well-established proxy for transpiration. Sapwood area was also indirectly estimated with allometric relationships dependent on LiDAR derived SDen and BAHa using a computationally efficient procedure. The individual tree detection (ITD) rates for the LMF and NCut methods respectively had 72% and 68% of stems correctly identified, 25% and 20% of stems missed, and 2% and 12% of stems over-segmented. The significantly higher computational requirement of the NCut algorithm makes the LMF method more suitable for predicting SDen across large forested areas. Using NCut derived ITD segments, observed versus predicted stand BAHa had R2 ranging from 0.70 to 0.98 across six catchments, whereas a generalised parsimonious model applied to all sites used the portion of hits greater than 37 m in height (PH37) to explain 68% of BAHa. For extrapolating one ha resolution SAHa estimates across large forested catchments, we found that directly relating SAHa to NCut derived LiDAR indices (R2 = 0.56) was slightly more accurate but computationally more demanding than indirect estimates of SAHa using allometric relationships consisting of BAHa (R2 = 0.50) or a sapwood perimeter index, defined as (BAHa·SDen)^½ (R2 = 0.48).
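
    The local-maximum-filtering step can be sketched on a rasterized canopy height model with scipy.ndimage: a cell is a candidate tree top if it equals the maximum within a moving window and exceeds a height threshold. The window size, threshold, and toy raster below are placeholders, and the NCut segmentation and the allometric SAHa step are not shown.

    import numpy as np
    from scipy import ndimage

    def detect_tree_tops(chm, window=5, min_height=10.0):
        """Local maximum filtering (LMF) on a canopy height model raster."""
        local_max = ndimage.maximum_filter(chm, size=window)
        tops = (chm == local_max) & (chm >= min_height)
        rows, cols = np.nonzero(tops)
        return np.column_stack([rows, cols, chm[rows, cols]])

    # Toy CHM in metres; in practice the raster would be interpolated from the
    # LiDAR point cloud before filtering.
    rng = np.random.default_rng(2)
    chm = ndimage.gaussian_filter(rng.random((200, 200)) * 60.0, sigma=4)
    tops = detect_tree_tops(chm, window=7, min_height=20.0)
    print(f"{len(tops)} candidate stems detected")   # divide by area for SDen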

  2. Disease Prediction Models and Operational Readiness

    Energy Technology Data Exchange (ETDEWEB)

    Corley, Courtney D.; Pullum, Laura L.; Hartley, David M.; Benedum, Corey M.; Noonan, Christine F.; Rabinowitz, Peter M.; Lancaster, Mary J.

    2014-03-19

    INTRODUCTION: The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. One of the primary goals of this research was to characterize the viability of biosurveillance models to provide operationally relevant information for decision makers and to identify areas for future research. Two critical characteristics differentiate this work from other infectious disease modeling reviews. First, we reviewed models that attempted to predict the disease event, not merely its transmission dynamics. Second, we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). Methods: We searched dozens of commercial and government databases and harvested Google search results for eligible models utilizing terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. The publication dates of the search results returned are bounded by the dates of coverage of each database and the date on which the search was performed; however, all searching was completed by December 31, 2010. This returned 13,767 webpages and 12,152 citations. After de-duplication and removal of extraneous material, a core collection of 6,503 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. Next, PNNL’s IN-SPIRE visual analytics software was used to cross-correlate these publications with the definition of a biosurveillance model, resulting in the selection of 54 documents that matched the criteria. Ten of these documents, however, dealt purely with disease spread models, inactivation of bacteria, or the modeling of human immune system responses to pathogens rather than predicting disease events. As a result, we systematically reviewed 44 papers and the

  3. Three-dimensional model for multi-component reactive transport with variable density groundwater flow

    Science.gov (United States)

    Mao, X.; Prommer, H.; Barry, D.A.; Langevin, C.D.; Panteleit, B.; Li, L.

    2006-01-01

    PHWAT is a new model that couples a geochemical reaction model (PHREEQC-2) with a density-dependent groundwater flow and solute transport model (SEAWAT) using the split-operator approach. PHWAT was developed to simulate multi-component reactive transport in variable density groundwater flow. Fluid density in PHWAT depends not only on the concentration of a single species, as in SEAWAT, but also on the concentrations of other dissolved chemicals that can be subject to reactive processes. Simulation results of PHWAT and PHREEQC-2 were compared in their predictions of effluent concentration from a column experiment. Both models produced identical results, showing that PHWAT has correctly coupled the sub-packages. PHWAT was then applied to the simulation of a tank experiment in which seawater intrusion was accompanied by cation exchange. The density dependence of the intrusion and the snow-plough effect in the breakthrough curves were reflected in the model simulations, which were in good agreement with the measured breakthrough data. Comparison simulations that, in turn, excluded density effects and reactions allowed us to quantify the marked effect of ignoring these processes. Next, we explored numerical issues involved in the practical application of PHWAT using the example of a dense plume flowing into a tank containing fresh water. It was shown that PHWAT could model physically unstable flow and that numerical instabilities were suppressed. Physical instability developed in the model in accordance with the increase of the modified Rayleigh number for density-dependent flow, in agreement with previous research. © 2004 Elsevier Ltd. All rights reserved.

  4. Nonlinear model predictive control theory and algorithms

    CERN Document Server

    Grüne, Lars

    2017-01-01

    This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine—the core of any nonlinear model predictive controller—works. Accompanying software in MATLAB® and C++ (downloadable from extras.springer.com/), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the possibilities and limitations of NMPC. T...
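
    The receding-horizon idea at the core of NMPC can be illustrated with a minimal sketch. This is not taken from the book's accompanying MATLAB/C++ software; the toy dynamics, the cost function, and the use of scipy.optimize.minimize are assumptions chosen only to keep the example short.

        import numpy as np
        from scipy.optimize import minimize

        def f(x, u):
            """Assumed discrete-time dynamics: a lightly damped nonlinear oscillator."""
            return np.array([x[0] + 0.1 * x[1],
                             x[1] + 0.1 * (-np.sin(x[0]) - 0.1 * x[1] + u)])

        def horizon_cost(u_seq, x0, N):
            """Sum of stage costs l(x,u) = x'x + 0.1 u^2 along the predicted trajectory."""
            x, cost = np.array(x0, dtype=float), 0.0
            for k in range(N):
                cost += x @ x + 0.1 * u_seq[k] ** 2
                x = f(x, u_seq[k])
            return cost + 10.0 * (x @ x)  # simple terminal penalty instead of a terminal constraint

        def nmpc_control(x0, N=10):
            """Solve the finite-horizon problem, return only the first input (receding horizon)."""
            res = minimize(horizon_cost, np.zeros(N), args=(x0, N), method="SLSQP")
            return res.x[0]

        # Closed loop: re-optimize at every sampling instant and apply only the first move.
        x = np.array([1.0, 0.0])
        for _ in range(20):
            u = nmpc_control(x)
            x = f(x, u)
        print(x)  # the state should be driven toward the origin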

  5. Improved forecasting of thermospheric densities using multi-model ensembles

    Science.gov (United States)

    Elvidge, Sean; Godinez, Humberto C.; Angling, Matthew J.

    2016-07-01

    This paper presents the first known application of multi-model ensembles to the forecasting of the thermosphere. A multi-model ensemble (MME) is a method for combining different, independent models. The main advantage of using an MME is to reduce the effect of model errors and bias, since it is expected that the model errors will, at least partly, cancel. The MME, with its reduced uncertainties, can then be used as the initial conditions in a physics-based thermosphere model for forecasting. This should increase the forecast skill since a reduction in the errors of the initial conditions of a model generally increases model skill. In this paper the Thermosphere-Ionosphere Electrodynamic General Circulation Model (TIE-GCM), the US Naval Research Laboratory Mass Spectrometer and Incoherent Scatter radar Exosphere 2000 (NRLMSISE-00), and Global Ionosphere-Thermosphere Model (GITM) have been used to construct the MME. As well as comparisons between the MMEs and the "standard" runs of the model, the MME densities have been propagated forward in time using the TIE-GCM. It is shown that thermospheric forecasts of up to 6 h, using the MME, have a reduction in the root mean square error of greater than 60 %. The paper also highlights differences in model performance between times of solar minimum and maximum.
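
    A minimal sketch of forming a multi-model ensemble of thermospheric densities is given below. It assumes the simplest possible construction, an equally weighted or error-weighted mean of co-located model outputs; the actual MME construction and the subsequent use as TIE-GCM initial conditions are more involved than this, and the grid values are invented.

        import numpy as np

        def multi_model_ensemble(fields, weights=None):
            """Combine co-located density fields from several models into one MME field.

            fields  : array-like of shape (n_models, ...) with co-located densities
            weights : optional per-model weights (e.g. inverse error variances);
                      defaults to an equally weighted mean
            """
            fields = np.asarray(fields, dtype=float)
            if weights is None:
                weights = np.ones(fields.shape[0])
            weights = np.asarray(weights, dtype=float)
            weights = weights / weights.sum()
            return np.tensordot(weights, fields, axes=1)

        # Illustrative example: three models on a tiny 2x2 grid (densities in kg m^-3).
        tiegcm = np.array([[3.1e-12, 3.0e-12], [2.9e-12, 3.2e-12]])
        msis   = np.array([[3.4e-12, 3.3e-12], [3.1e-12, 3.5e-12]])
        gitm   = np.array([[2.8e-12, 2.7e-12], [2.6e-12, 2.9e-12]])
        mme = multi_model_ensemble([tiegcm, msis, gitm], weights=[1.0, 0.8, 0.6])
        print(mme)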

  6. Using Prediction Markets to Generate Probability Density Functions for Climate Change Risk Assessment

    Science.gov (United States)

    Boslough, M.

    2011-12-01

    Climate-related uncertainty is traditionally presented as an error bar, but it is becoming increasingly common to express it in terms of a probability density function (PDF). PDFs are a necessary component of probabilistic risk assessments, for which simple "best estimate" values are insufficient. Many groups have generated PDFs for climate sensitivity using a variety of methods. These PDFs are broadly consistent, but vary significantly in their details. One axiom of the verification and validation community is, "codes don't make predictions, people make predictions." This is a statement of the fact that subject domain experts generate results using assumptions within a range of epistemic uncertainty and interpret them according to their expert opinion. Different experts with different methods will arrive at different PDFs. For effective decision support, a single consensus PDF would be useful. We suggest that market methods can be used to aggregate an ensemble of opinions into a single distribution that expresses the consensus. Prediction markets have been shown to be highly successful at forecasting the outcome of events ranging from elections to box office returns. In prediction markets, traders can take a position on whether some future event will or will not occur. These positions are expressed as contracts that are traded in a double-auction market; the aggregated price can be interpreted as a consensus probability that the event will take place. Since climate sensitivity cannot be directly measured, it cannot be predicted. However, the changes in global mean surface temperature are a direct consequence of climate sensitivity, changes in forcing, and internal variability. Viable prediction markets require an undisputed event outcome on a specific date. Climate-related markets exist on Intrade.com, an online trading exchange. One such contract is titled "Global Temperature Anomaly for Dec 2011 to be greater than 0.65 Degrees C." Settlement is based
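
    One way to turn a ladder of binary threshold contracts, such as the temperature-anomaly contract mentioned above, into a probability density is sketched below. The threshold values and prices are invented for illustration, and the piecewise-constant PDF is simply the crudest possible reconstruction from the implied CDF.

        import numpy as np

        # Hypothetical contracts "anomaly will exceed T degrees C", with last traded prices
        # interpreted as consensus probabilities P(X > T).
        thresholds = np.array([0.45, 0.55, 0.65, 0.75, 0.85])
        p_exceed   = np.array([0.95, 0.80, 0.45, 0.15, 0.03])

        # CDF at the thresholds, F(T) = 1 - P(X > T); it must be non-decreasing.
        cdf = 1.0 - p_exceed
        assert np.all(np.diff(cdf) >= 0), "contract prices imply an inconsistent CDF"

        # Piecewise-constant PDF between adjacent thresholds.
        pdf = np.diff(cdf) / np.diff(thresholds)
        for lo, hi, dens in zip(thresholds[:-1], thresholds[1:], pdf):
            print(f"density on [{lo:.2f}, {hi:.2f}) deg C: {dens:.2f} per deg C")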

  7. Dose prediction accuracy of anisotropic analytical algorithm and pencil beam convolution algorithm beyond high density heterogeneity interface

    Directory of Open Access Journals (Sweden)

    Suresh B Rana

    2013-01-01

    Full Text Available Purpose: It is well known that photon beam radiation therapy requires dose calculation algorithms. The objective of this study was to measure and assess the ability of the pencil beam convolution (PBC) and anisotropic analytical algorithm (AAA) to predict doses beyond a high-density heterogeneity. Materials and Methods: An inhomogeneous phantom of five layers was created in the Eclipse planning system (version 8.6.15). Each layer of the phantom was assigned as water (first, or top), air (second), water (third), bone (fourth), and water (fifth, or bottom) medium. Depth doses in water (bottom medium) were calculated for 100 monitor units (MUs) with a 6 megavoltage (MV) photon beam for different field sizes using AAA and PBC with heterogeneity correction. Combinations of solid water, Poly Vinyl Chloride (PVC), and Styrofoam were then manufactured to mimic the phantoms, and doses for 100 MUs were acquired with a cylindrical ionization chamber at selected depths beyond the high-density heterogeneity interface. The measured and calculated depth doses were then compared. Results: AAA's values had better agreement with measurements at all measured depths. Dose overestimation by AAA (up to 5.3%) and by PBC (up to 6.7%) was found to be higher in proximity to the high-density heterogeneity interface, and the dose discrepancies were more pronounced for larger field sizes. The errors in dose estimation by AAA and PBC may be due to improper beam modeling of primary beam attenuation or lateral scatter contributions, or a combination of both, in heterogeneous media that include low- and high-density materials. Conclusions: AAA is more accurate than PBC for dose calculations in treating deep-seated tumors beyond a high-density heterogeneity interface.

  8. Predictive Modeling in Actinide Chemistry and Catalysis

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Ping [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]

    2016-05-16

    These are slides from a presentation on predictive modeling in actinide chemistry and catalysis. The following topics are covered in these slides: Structures, bonding, and reactivity (bonding can be quantified by optical probes and theory, and electronic structures and reaction mechanisms of actinide complexes); Magnetic resonance properties (transition metal catalysts with multi-nuclear centers, and NMR/EPR parameters); Moving to more complex systems (surface chemistry of nanomaterials, and interactions of ligands with nanoparticles); Path forward and conclusions.

  9. Friedberg-Lee model at finite temperature and density

    Science.gov (United States)

    Mao, Hong; Yao, Minjie; Zhao, Wei-Qin

    2008-06-01

    The Friedberg-Lee model is studied at finite temperature and density. By using the finite temperature field theory, the effective potential of the Friedberg-Lee model and the bag constant B(T) and B(T,μ) have been calculated at different temperatures and densities. It is shown that there is a critical temperature TC≃106.6 MeV when μ=0 MeV and a critical chemical potential μ≃223.1 MeV for fixing the temperature at T=50 MeV. We also calculate the soliton solutions of the Friedberg-Lee model at finite temperature and density. It turns out that when T⩽TC (or μ⩽μC), there is a bag constant B(T) [or B(T,μ)] and the soliton solutions are stable. However, when T>TC (or μ>μC) the bag constant B(T)=0 MeV [or B(T,μ)=0 MeV] and there is no soliton solution anymore, therefore, the confinement of quarks disappears quickly.

  10. The Friedberg-Lee model at finite temperature and density

    CERN Document Server

    Mao, Hong; Zhao, Wei-Qin

    2007-01-01

    The Friedberg-Lee model is studied at finite temperature and density. By using the finite temperature field theory, the effective potential of the Friedberg-Lee model and the bag constant $B(T)$ and $B(T,\\mu)$ have been calculated at different temperatures and densities. It is shown that there is a critical temperature $T_{C}\\simeq 106.6 \\mathrm{MeV}$ when $\\mu=0 \\mathrm{MeV}$ and a critical chemical potential $\\mu \\simeq 223.1 \\mathrm{MeV}$ for fixing the temperature at $T=50 \\mathrm{MeV}$. We also calculate the soliton solutions of the Friedberg-Lee model at finite temperature and density. It turns out that when $T\\leq T_{C}$ (or $\\mu \\leq \\mu_C$), there is a bag constant $B(T)$ (or $B(T,\\mu)$) and the soliton solutions are stable. However, when $T>T_{C}$ (or $\\mu>\\mu_C$) the bag constant $B(T)=0 \\mathrm{MeV}$ (or $B(T,\\mu)=0 \\mathrm{MeV}$) and there is no soliton solution anymore, therefore, the confinement of quarks disappears quickly.

  11. Predicting Footbridge Response using Stochastic Load Models

    DEFF Research Database (Denmark)

    Pedersen, Lars; Frier, Christian

    2013-01-01

    Walking parameters such as step frequency, pedestrian mass, dynamic load factor, etc. are basically stochastic, although it is quite common to adapt deterministic models for these parameters. The present paper considers a stochastic approach to modeling the action of pedestrians, but when doing s...... as it pinpoints which decisions to be concerned about when the goal is to predict footbridge response. The studies involve estimating footbridge responses using Monte-Carlo simulations and focus is on estimating vertical structural response to single person loading....

  12. Nonconvex Model Predictive Control for Commercial Refrigeration

    DEFF Research Database (Denmark)

    Hovgaard, Tobias Gybel; Larsen, Lars F.S.; Jørgensen, John Bagterp

    2013-01-01

    is to minimize the total energy cost, using real-time electricity prices, while obeying temperature constraints on the zones. We propose a variation on model predictive control to achieve this goal. When the right variables are used, the dynamics of the system are linear, and the constraints are convex. The cost...... the iterations, which is more than fast enough to run in real-time. We demonstrate our method on a realistic model, with a full year simulation and 15 minute time periods, using historical electricity prices and weather data, as well as random variations in thermal load. These simulations show substantial cost...

  13. Secretome Prediction of Two M. tuberculosis Clinical Isolates Reveals Their High Antigenic Density and Potential Drug Targets

    Science.gov (United States)

    Cornejo-Granados, Fernanda; Zatarain-Barrón, Zyanya L.; Cantu-Robles, Vito A.; Mendoza-Vargas, Alfredo; Molina-Romero, Camilo; Sánchez, Filiberto; Del Pozo-Yauner, Luis; Hernández-Pando, Rogelio; Ochoa-Leyva, Adrián

    2017-01-01

    The Excreted/Secreted (ES) proteins play important roles during Mycobacterium tuberculosis invasion, virulence, and survival inside the host and they are a major source of immunogenic proteins. However, the molecular complexity of the bacillus cell wall has made the experimental isolation of the total bacterial ES proteins difficult. Here, we report the genomes of two Beijing genotype M. tuberculosis clinical isolates obtained from patients from Vietnam (isolate 46) and South Africa (isolate 48). We developed a bioinformatics pipeline to predict their secretomes and observed that ~12% of the genome-encoded proteins are ES, with PE, PE-PGRS, and PPE being the most abundant protein domains. Additionally, the Gene Ontology, KEGG pathway, and Enzyme Class annotations supported the expected functions for the secretomes. About 70% of an experimental secretome compiled from the literature was contained in our predicted secretomes, while only 34–41% of the experimental secretome was contained in the two previously reported secretomes for H37Rv. These results suggest that our bioinformatics pipeline is better suited to predicting a more complete set of ES proteins in M. tuberculosis genomes. The predicted ES proteins showed a significantly higher antigenic density, measured by the Abundance of Antigenic Regions (AAR) value, than the non-ES proteins and also compared to randomly constructed secretomes. Additionally, we predicted the secretomes for H37Rv, H37Ra, and two M. bovis BCG genomes. The antigenic density for BCG and for isolates 46 and 48 was higher than that observed for the H37Rv and H37Ra secretomes. In addition, two sets of immunogenic proteins previously reported in patients with tuberculosis also showed a high antigenic density. Interestingly, mice infected with isolate 46 showed a significantly lower survival rate than the ones infected with isolate 48, and both survival rates were lower than the one previously reported for H37Rv in the same murine model. Finally, after a

  14. High-resolution modeling of the cusp density anomaly: Response to particle and Joule heating under typical conditions

    Science.gov (United States)

    Brinkman, Douglas G.; Walterscheid, Richard L.; Clemmons, James H.; Hecht, James. H.

    2016-03-01

    An established high-resolution dynamical model is employed to understand the behavior of the thermosphere beneath the Earth's magnetic cusps, with emphasis on the factors contributing to the density structures observed by the CHAMP and Streak satellite missions. In contrast to previous modeling efforts, this approach combines first principles dynamical modeling with the high spatial resolution needed to describe accurately mesoscale features such as the cusp. The resulting density structure is shown to be consistent with observations, including regions of both enhanced and diminished neutral density along the satellite track. This agreement is shown to be the result of a straightforward application of input conditions commonly found in the cusp rather than exaggerated or extreme conditions. It is found that the magnitude of the density change is sensitive to the width of the cusp region and that models that can resolve widths on the order of 2° of latitude are required to predict density variations that are consistent with the observations.

  15. Predictive In Vivo Models for Oncology.

    Science.gov (United States)

    Behrens, Diana; Rolff, Jana; Hoffmann, Jens

    2016-01-01

    Experimental oncology research and preclinical drug development both substantially require specific, clinically relevant in vitro and in vivo tumor models. The increasing knowledge about the heterogeneity of cancer has required a substantial restructuring of the test systems for the different stages of development. To be able to cope with the complexity of the disease, larger panels of patient-derived tumor models have to be implemented and extensively characterized. Together with individual genetically engineered tumor models and supported by core functions for expression profiling and data analysis, an integrated discovery process has been generated for predictive and personalized drug development. Improved “humanized” mouse models should help to overcome current limitations imposed by the xenogeneic barrier between humans and mice. Establishment of a functional human immune system and a corresponding human microenvironment in laboratory animals will strongly support further research. Drug discovery, systems biology, and translational research are moving closer together to address all the new hallmarks of cancer, increase the success rate of drug development, and increase the predictive value of preclinical models.

  16. Constructing predictive models of human running.

    Science.gov (United States)

    Maus, Horst-Moritz; Revzen, Shai; Guckenheimer, John; Ludwig, Christian; Reger, Johann; Seyfarth, Andre

    2015-02-06

    Running is an essential mode of human locomotion, during which ballistic aerial phases alternate with phases when a single foot contacts the ground. The spring-loaded inverted pendulum (SLIP) provides a starting point for modelling running, and generates ground reaction forces that resemble those of the centre of mass (CoM) of a human runner. Here, we show that while SLIP reproduces within-step kinematics of the CoM in three dimensions, it fails to reproduce stability and predict future motions. We construct SLIP control models using data-driven Floquet analysis, and show how these models may be used to obtain predictive models of human running with six additional states comprising the position and velocity of the swing-leg ankle. Our methods are general, and may be applied to any rhythmic physical system. We provide an approach for identifying an event-driven linear controller that approximates an observed stabilization strategy, and for producing a reduced-state model which closely recovers the observed dynamics. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
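
    The data-driven step of such an approach, fitting a linear map that predicts the state at the next step event from the current one, can be sketched as an ordinary least-squares problem. The synthetic data and the two-dimensional state below are stand-ins for the apex and swing-leg ankle states used in the paper, not the paper's own data or method.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic step-to-step data generated from x_{k+1} = A_true x_k + noise.
        A_true = np.array([[0.8, 0.1],
                           [-0.2, 0.7]])
        X = rng.normal(size=(200, 2))
        Y = X @ A_true.T + 0.01 * rng.normal(size=X.shape)

        # Least-squares fit of the return map x_{k+1} ~ A x_k (a discrete, Floquet-like model).
        B, *_ = np.linalg.lstsq(X, Y, rcond=None)
        A_fit = B.T

        # Floquet multipliers: eigenvalues inside the unit circle indicate orbital stability.
        print("fitted map:\n", A_fit)
        print("multipliers:", np.linalg.eigvals(A_fit))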

  17. Statistical Seasonal Sea Surface based Prediction Model

    Science.gov (United States)

    Suarez, Roberto; Rodriguez-Fonseca, Belen; Diouf, Ibrahima

    2014-05-01

    The interannual variability of the sea surface temperature (SST) plays a key role in the strongly seasonal rainfall regime of the West African region. The predictability of the seasonal cycle of rainfall is a field widely discussed by the scientific community, with results that fail to be satisfactory due to the difficulty dynamical models have in reproducing the behavior of the Inter Tropical Convergence Zone (ITCZ). To tackle this problem, a statistical model based on oceanic predictors has been developed at the Universidad Complutense of Madrid (UCM) with the aim of complementing and enhancing the predictability of the West African Monsoon (WAM) as an alternative to the coupled models. The model, called S4CAST (SST-based Statistical Seasonal Forecast), is based on discriminant analysis techniques, specifically the Maximum Covariance Analysis (MCA) and Canonical Correlation Analysis (CCA). Beyond the application of the model to the prediction of rainfall in West Africa, its use extends to a range of different oceanic, atmospheric and health-related parameters influenced by the temperature of the sea surface as a defining factor of variability.

  18. Enceladus Plume Density Modeling and Reconstruction for Cassini Attitude Control System

    Science.gov (United States)

    Sarani, Siamak

    2010-01-01

    In 2005, Cassini detected jets composed mostly of water, spouting from a set of nearly parallel rifts in the crust of Enceladus, an icy moon of Saturn. During an Enceladus flyby, either reaction wheels or attitude control thrusters on the Cassini spacecraft are used to overcome the external torque imparted on Cassini due to Enceladus plume or jets, as well as to slew the spacecraft in order to meet the pointing needs of the on-board science instruments. If the estimated imparted torque is larger than it can be controlled by the reaction wheel control system, thrusters are used to control the spacecraft. Having an engineering model that can predict and simulate the external torque imparted on Cassini spacecraft due to the plume density during all projected low-altitude Enceladus flybys is important. Equally important is being able to reconstruct the plume density after each flyby in order to calibrate the model. This paper describes an engineering model of the Enceladus plume density, as a function of the flyby altitude, developed for the Cassini Attitude and Articulation Control Subsystem, and novel methodologies that use guidance, navigation, and control data to estimate the external torque imparted on the spacecraft due to the Enceladus plume and jets. The plume density is determined accordingly. The methodologies described have already been used to reconstruct the plume density for three low-altitude Enceladus flybys of Cassini in 2008 and will continue to be used on all remaining low-altitude Enceladus flybys in Cassini's extended missions.

  19. Predicting lower mantle heterogeneity from 4-D Earth models

    Science.gov (United States)

    Flament, Nicolas; Williams, Simon; Müller, Dietmar; Gurnis, Michael; Bower, Dan J.

    2016-04-01

    The Earth's lower mantle is characterized by two large low-shear-velocity provinces (LLSVPs), approximately 15,000 km in diameter and 500-1000 km high, located under Africa and the Pacific Ocean. The spatial stability and chemical nature of these LLSVPs are debated. Here, we compare the lower mantle structure predicted by forward global mantle flow models constrained by tectonic reconstructions (Bower et al., 2015) to an analysis of five global tomography models. In the dynamic models, spanning 230 million years, slabs subducting deep into the mantle deform an initially uniform basal layer containing 2% of the volume of the mantle. Basal density, convective vigour (Rayleigh number Ra), mantle viscosity, absolute plate motions, and relative plate motions are varied in a series of model cases. We use cluster analysis to classify a set of equally-spaced points (average separation ˜0.45°) on the Earth's surface into two groups of points with similar variations in present-day temperature between 1000-2800 km depth, for each model case. Below ˜2400 km depth, this procedure reveals a high-temperature cluster in which mantle temperature is significantly larger than ambient and a low-temperature cluster in which mantle temperature is lower than ambient. The spatial extent of the high-temperature cluster is in first-order agreement with the outlines of the African and Pacific LLSVPs revealed by a similar cluster analysis of five tomography models (Lekic et al., 2012). Model success is quantified by computing the accuracy and sensitivity of the predicted temperature clusters in predicting the low-velocity cluster obtained from tomography (Lekic et al., 2012). In these cases, the accuracy varies between 0.61 and 0.80, where a value of 0.5 represents the random case, and the sensitivity ranges between 0.18 and 0.83. The largest accuracies and sensitivities are obtained for models with Ra ≈ 5 × 10^7, no asthenosphere (or an asthenosphere restricted to the oceanic domain), and a
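
    A stripped-down version of the cluster-analysis step described above is sketched here: depth profiles of temperature (or velocity) anomalies at surface points are split into two clusters with k-means, and the predicted cluster is scored against a reference classification. The synthetic profiles and the use of scikit-learn are assumptions; they are not the models or data of the study.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.metrics import confusion_matrix

        rng = np.random.default_rng(1)

        # Synthetic anomaly profiles at 500 surface points over 20 depth levels:
        # roughly one third "hot" (LLSVP-like) points and two thirds ambient points.
        labels_ref = rng.random(500) < 0.33
        profiles = rng.normal(size=(500, 20)) + 2.0 * labels_ref[:, None]

        # Two-cluster k-means on the profiles (the paper clusters the 1000-2800 km depths).
        pred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(profiles)
        # Relabel so that cluster "1" is the warmer group.
        if profiles[pred == 1].mean() < profiles[pred == 0].mean():
            pred = 1 - pred

        tn, fp, fn, tp = confusion_matrix(labels_ref, pred.astype(bool)).ravel()
        accuracy = (tp + tn) / (tp + tn + fp + fn)
        sensitivity = tp / (tp + fn)
        print(f"accuracy = {accuracy:.2f}, sensitivity = {sensitivity:.2f}")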

  1. Density-corrected models for gas diffusivity and air permeability in unsaturated soil

    DEFF Research Database (Denmark)

    Deepagoda Thuduwe Kankanamge Kelum, Chamindu; Møldrup, Per; Schjønning, Per

    2011-01-01

    Accurate prediction of gas diffusivity (Dp/Do) and air permeability (ka) and their variations with air-filled porosity (e) in soil is critical for simulating subsurface migration and emission of climate gases and organic vapors. Gas diffusivity and air permeability measurements from Danish soil ... profile data (total of 150 undisturbed soil samples) were used to investigate soil type and density effects on the gas transport parameters and for model development. The measurements were within a given range of matric potentials (-10 to -500 cm H2O) typically representing natural field conditions ... Also, a power-law ka model with exponent 1.5 (derived from analogy with a previous gas diffusivity model) used in combination with the D-C approach for ka,100 (reference point) seemed promising for ka(e) predictions, with good accuracy and minimum parameter requirements. Finally, the new D-C model ...

  2. The velocity-density relation in the spherical model

    CERN Document Server

    Bilicki, Maciej

    2008-01-01

    We study the cosmic velocity-density relation using the spherical collapse model (SCM) as a proxy for non-linear dynamics. Although the dependence of this relation on cosmological parameters is known to be weak, we retain the density parameter Omega_m in the SCM equations, in order to study the limit Omega_m -> 0. We show that in this regime the considered relation is strictly linear, for arbitrary values of the density contrast, contrary to some claims in the literature. On the other hand, we confirm that for realistic values of Omega_m the exact relation in the SCM is well approximated by the classic formula of Bernardeau (1992), both for voids (delta<0) and for overdensities up to delta ~ 3. Inspired by this fact, we find further analytic approximations to the relation for the whole range delta from -1 to infinity. Our formula for voids accounts for the weak Omega_m-dependence of their maximal rate of expansion, which for Omega_m < 1 is slightly smaller than 3/2. For positive density contrasts, we ...
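
    For reference, a commonly quoted form of the Bernardeau (1992) approximation mentioned above is reproduced here. The normalization of the velocity divergence (scaled by H f(Omega_m), with f(Omega_m) roughly Omega_m^0.6) is one frequent convention and is an assumption on our part; the sign and scaling should be checked against the paper's own definitions:

        \theta \equiv \frac{\nabla\cdot\mathbf{v}}{H\,f(\Omega_m)}, \qquad
        \theta(\delta) \simeq \frac{3}{2}\left[1-(1+\delta)^{2/3}\right], \qquad
        f(\Omega_m) \approx \Omega_m^{0.6},

    so that theta tends to 3/2 as delta tends to -1, matching the maximal void expansion rate quoted in the abstract, while theta is negative (inflow) for overdensities.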

  3. Uncertainty and Sensitivity of Alternative Rn-222 Flux Density Models Used in Performance Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Greg J. Shott, Vefa Yucel, Lloyd Desotell

    2007-06-01

    Performance assessments for the Area 5 Radioactive Waste Management Site on the Nevada Test Site have used three different mathematical models to estimate Rn-222 flux density. This study describes the performance, uncertainty, and sensitivity of the three models, which include the U.S. Nuclear Regulatory Commission Regulatory Guide 3.64 analytical method and two numerical methods. The uncertainty of each model was determined by Monte Carlo simulation using Latin hypercube sampling. The global sensitivity was investigated using the Morris one-at-a-time screening method, sample-based correlation and regression methods, the variance-based extended Fourier amplitude sensitivity test, and Sobol's sensitivity indices. The models were found to produce similar estimates of the mean and median flux density, but to have different uncertainties and sensitivities. When the Rn-222 effective diffusion coefficient was estimated using five different published predictive models, the radon flux density models were found to be most sensitive to the effective diffusion coefficient model selected, the emanation coefficient, and the radionuclide inventory. Using a site-specific measured effective diffusion coefficient significantly reduced the output uncertainty. When a site-specific effective-diffusion coefficient was used, the models were most sensitive to the emanation coefficient and the radionuclide inventory.
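
    A minimal sketch of the uncertainty and sensitivity workflow described above (Latin hypercube sampling of the inputs, Monte Carlo propagation, and a simple correlation-based sensitivity ranking) is given below. The toy flux expression and the parameter ranges are invented, and the more advanced measures used in the study (Morris, eFAST, Sobol') are not reproduced here.

        import numpy as np
        from scipy.stats import qmc, spearmanr

        # Latin hypercube sample of three uncertain inputs (ranges are illustrative only):
        # effective diffusion coefficient D (m2/s), emanation coefficient E (-), inventory I (Bq/m3).
        sampler = qmc.LatinHypercube(d=3, seed=0)
        unit = sampler.random(n=2000)
        D, E, I = qmc.scale(unit, [1e-7, 0.1, 1e3], [5e-6, 0.4, 1e5]).T

        # Toy surrogate for a Rn-222 flux-density model (not RG 3.64 or the numerical codes).
        lam = 2.1e-6                       # Rn-222 decay constant, 1/s
        flux = E * I * np.sqrt(lam * D)    # Bq m^-2 s^-1

        print("mean flux   :", flux.mean())
        print("95% interval:", np.percentile(flux, [2.5, 97.5]))

        # Sample-based sensitivity: Spearman rank correlation of each input with the output.
        for name, x in zip(["D", "E", "I"], [D, E, I]):
            rho, _ = spearmanr(x, flux)
            print(f"rank correlation with {name}: {rho:+.2f}")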

  4. Modeling the impact of the indigenous microbial population on the maximum population density of Salmonella on alfalfa

    NARCIS (Netherlands)

    Rijgersberg, H.; Nierop Groot, M.N.; Tromp, S.O.; Franz, E.

    2013-01-01

    Within a microbial risk assessment framework, modeling the maximum population density (MPD) of a pathogenic microorganism is important but often not considered. This paper describes a model predicting the MPD of Salmonella on alfalfa as a function of the initial contamination level, the total count

  5. A predictive standard model for heavy electron systems

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Yifeng [Los Alamos National Laboratory]; Curro, N J [UC DAVIS]; Fisk, Z [UC DAVIS]; Pines, D [UC DAVIS]

    2010-01-01

    We propose a predictive standard model for heavy electron systems based on a detailed phenomenological two-fluid description of existing experimental data. It leads to a new phase diagram that replaces the Doniach picture, describes the emergent anomalous scaling behavior of the heavy electron (Kondo) liquid measured below the lattice coherence temperature, T*, seen by many different experimental probes, that marks the onset of collective hybridization, and enables one to obtain important information on quantum criticality and the superconducting/antiferromagnetic states at low temperatures. Because T* is ≈ J²ρ/2, the nearest neighbor RKKY interaction, a knowledge of the single-ion Kondo coupling, J, to the background conduction electron density of states, ρ, makes it possible to predict Kondo liquid behavior, and to estimate its maximum superconducting transition temperature in both existing and newly discovered heavy electron families.

  6. Predictive modeling by the cerebellum improves proprioception.

    Science.gov (United States)

    Bhanpuri, Nasir H; Okamura, Allison M; Bastian, Amy J

    2013-09-04

    Because sensation is delayed, real-time movement control requires not just sensing, but also predicting limb position, a function hypothesized for the cerebellum. Such cerebellar predictions could contribute to perception of limb position (i.e., proprioception), particularly when a person actively moves the limb. Here we show that human cerebellar patients have proprioceptive deficits compared with controls during active movement, but not when the arm is moved passively. Furthermore, when healthy subjects move in a force field with unpredictable dynamics, they have active proprioceptive deficits similar to cerebellar patients. Therefore, muscle activity alone is likely insufficient to enhance proprioception and predictability (i.e., an internal model of the body and environment) is important for active movement to benefit proprioception. We conclude that cerebellar patients have an active proprioceptive deficit consistent with disrupted movement prediction rather than an inability to generally enhance peripheral proprioceptive signals during action and suggest that active proprioceptive deficits should be considered a fundamental cerebellar impairment of clinical importance.

  7. Predicting plants -modeling traits as a function of environment

    Science.gov (United States)

    Franklin, Oskar

    2016-04-01

    A central problem in understanding and modeling vegetation dynamics is how to represent the variation in plant properties and function across different environments. Addressing this problem, there is a strong trend towards trait-based approaches, where vegetation properties are functions of the distributions of functional traits rather than of species. Recently there has been enormous progress in quantifying trait variability and its drivers and effects (Van Bodegom et al. 2012; Adler et al. 2014; Kunstler et al. 2015) based on wide-ranging datasets on a small number of easily measured traits, such as specific leaf area (SLA), wood density and maximum plant height. However, plant function depends on many other traits and while the commonly measured trait data are valuable, they are not sufficient for driving predictive and mechanistic models of vegetation dynamics, especially under novel climate or management conditions. For this purpose we need a model to predict functional traits, including those not easily measured, and how they depend on the plants' environment. Here I present such a mechanistic model based on fitness concepts and focused on traits related to water and light limitation of trees, including: wood density, drought response, allocation to defense, and leaf traits. The model is able to predict observed patterns of variability in these traits in relation to growth and mortality, and their responses to a gradient of water limitation. The results demonstrate that it is possible to mechanistically predict plant traits as a function of the environment based on an eco-physiological model of plant fitness. References Adler, P.B., Salguero-Gómez, R., Compagnoni, A., Hsu, J.S., Ray-Mukherjee, J., Mbeau-Ache, C. et al. (2014). Functional traits explain variation in plant life-history strategies. Proc. Natl. Acad. Sci. U. S. A., 111, 740-745. Kunstler, G., Falster, D., Coomes, D.A., Hui, F., Kooyman, R.M., Laughlin, D.C. et al. (2015). Plant functional traits

  8. Equation of state density models for hydrocarbons in ultradeep reservoirs at extreme temperature and pressure conditions

    Science.gov (United States)

    Wu, Yue; Bamgbade, Babatunde A.; Burgess, Ward A.; Tapriyal, Deepak; Baled, Hseen O.; Enick, Robert M.; McHugh, Mark A.

    2013-10-01

    The necessity of exploring ultradeep reservoirs requires the accurate prediction of hydrocarbon density data at extreme temperatures and pressures. In this study, three equations of state (EoS) models, Peng-Robinson (PR), high-temperature high-pressure volume-translated PR (HTHP VT-PR), and perturbed-chain statistical associating fluid theory (PC-SAFT) EoS are used to predict the density data for hydrocarbons in ultradeep reservoirs at temperatures to 523 K and pressures to 275 MPa. The calculated values are compared with experimental data. The results show that the HTHP VT-PR EoS and PC-SAFT EoS always perform better than the regular PR EoS for all the investigated hydrocarbons.
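
    As an illustration of the simplest of the three approaches, the sketch below evaluates the standard (untranslated) Peng-Robinson EoS density of a pure component from its critical constants and acentric factor. The n-decane property values are approximate and purely illustrative, and the volume-translated and PC-SAFT variants used in the study are not implemented here.

        import numpy as np

        R = 8.314462618  # J mol^-1 K^-1

        def pr_density(T, P, Tc, Pc, omega, M):
            """Mass density (kg/m3) from the Peng-Robinson EoS for a pure fluid.

            T, P in K and Pa; Tc, Pc critical constants; omega acentric factor;
            M molar mass in kg/mol. Returns the densest (liquid-like) root.
            """
            kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
            alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc))) ** 2
            a = 0.45724 * R**2 * Tc**2 / Pc * alpha
            b = 0.07780 * R * Tc / Pc
            A, B = a * P / (R * T) ** 2, b * P / (R * T)
            # Standard PR cubic in the compressibility factor Z.
            coeffs = [1.0, -(1.0 - B), A - 3.0 * B**2 - 2.0 * B, -(A * B - B**2 - B**3)]
            Z = np.roots(coeffs)
            Z = Z[np.abs(Z.imag) < 1e-8].real
            Z = Z[Z > B]                 # physically meaningful roots satisfy v > b
            v = Z.min() * R * T / P      # molar volume of the densest phase, m3/mol
            return M / v

        # Approximate n-decane properties: Tc ~ 617.7 K, Pc ~ 2.11 MPa, omega ~ 0.49, M = 142.28 g/mol.
        print(pr_density(T=523.0, P=275e6, Tc=617.7, Pc=2.11e6, omega=0.49, M=0.14228))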

  9. Predicting available water of soil from particle-size distribution and bulk density in an oasis-desert transect in northwestern China

    Science.gov (United States)

    Li, Danfeng; Gao, Guangyao; Shao, Ming'an; Fu, Bojie

    2016-07-01

    A detailed understanding of soil hydraulic properties, particularly the available water content of soil (AW, cm3 cm-3), is required for optimal water management. Direct measurement of soil hydraulic properties is impractical for large scale application, but routinely available soil particle-size distribution (PSD) and bulk density can be used as proxies to develop various prediction functions. In this study, we compared the performance of the Arya and Paris (AP) model, Mohammadi and Vanclooster (MV) model, Arya and Heitman (AH) model, and Rosetta program in predicting the soil water characteristic curve (SWCC) at 34 points with experimental SWCC data in an oasis-desert transect (20 × 5 km) in the middle reaches of the Heihe River basin, northwestern China. The idea of the three models emerges from the similarity of the shapes of the PSD and SWCC. The AP model, MV model, and Rosetta program performed better in predicting the SWCC than the AH model. The AW determined from the SWCCs predicted by the MV model agreed better with the experimental values than those derived from the AP model and Rosetta program. The fine-textured soils were characterized by higher AW values, while the sandy soils had lower AW values. The MV model has the advantages of having a robust physical basis, being independent of database-related parameters, and involving subclasses of texture data. These features make it promising in predicting soil water retention at regional scales, serving for the application of hydrological models and the optimization of soil water management.

  10. Vortices in gauge models at finite density with vector condensates

    CERN Document Server

    Gorbar, E V; Miransky, V A; Jia, Junji

    2006-01-01

    There exists a class of gauge models incorporating a finite density of matter in which the Higgs mechanism is provided by condensates of gauge (or gauge and scalar) fields, i.e., there are vector condensates in this case. We describe vortex solutions in the simplest model in this class, the gauged $SU(2)\\times U(1)_Y$ $\\sigma$-model with the chemical potential for hypercharge $Y$, in which the gauge symmetry is completely broken. It is shown that there are three types of topologically stable vortices in the model, connected either with photon field or hypercharge gauge field, or with both of them. Explicit vortex solutions are numerically found and their energy per unit length are calculated. The relevance of these solutions for the gluonic phase in the dense two-flavor QCD is discussed.

  11. A prediction model for Clostridium difficile recurrence

    Directory of Open Access Journals (Sweden)

    Francis D. LaBarbera

    2015-02-01

    Full Text Available Background: Clostridium difficile infection (CDI) is a growing problem in the community and hospital setting. Its incidence has been on the rise over the past two decades, and it is quickly becoming a major concern for the health care system. A high rate of recurrence is one of the major hurdles in the successful treatment of C. difficile infection. There have been few studies that have looked at patterns of recurrence. The studies currently available have shown a number of risk factors associated with C. difficile recurrence (CDR); however, there is little consensus on the impact of most of the identified risk factors. Methods: Our study was a retrospective chart review of 198 patients diagnosed with CDI via Polymerase Chain Reaction (PCR) from February 2009 to June 2013. In our study, we decided to use a machine learning algorithm called the Random Forest (RF) to analyze all of the factors proposed to be associated with CDR. This model is capable of making predictions based on a large number of variables, and has outperformed numerous other models and statistical methods. Results: We came up with a model that was able to accurately predict CDR with a sensitivity of 83.3%, specificity of 63.1%, and area under the curve of 82.6%. Like other similar studies that have used the RF model, we also had very impressive results. Conclusions: We hope that in the future, machine learning algorithms, such as the RF, will see wider application.
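
    For readers unfamiliar with the approach, a minimal version of such a Random Forest workflow is sketched below using scikit-learn. The synthetic predictors stand in for the clinical risk factors, which are not reproduced here, and the reported metrics mirror those quoted in the abstract only in kind, not in value.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import roc_auc_score, confusion_matrix
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(42)

        # Synthetic stand-in for the clinical data: 198 patients, 10 candidate risk factors,
        # binary outcome = recurrence (roughly one third of cases).
        X = rng.normal(size=(198, 10))
        y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=1.0, size=198) > 0.8).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
        clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)

        prob = clf.predict_proba(X_te)[:, 1]
        pred = (prob >= 0.5).astype(int)
        tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
        print("sensitivity:", tp / (tp + fn))
        print("specificity:", tn / (tn + fp))
        print("AUC        :", roc_auc_score(y_te, prob))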

  12. Gamma-Ray Pulsars Models and Predictions

    CERN Document Server

    Harding, A K

    2001-01-01

    Pulsed emission from gamma-ray pulsars originates inside the magnetosphere, from radiation by charged particles accelerated near the magnetic poles or in the outer gaps. In polar cap models, the high energy spectrum is cut off by magnetic pair production above an energy that is dependent on the local magnetic field strength. While most young pulsars with surface fields in the range B = 10^{12} - 10^{13} G are expected to have high energy cutoffs around several GeV, the gamma-ray spectra of old pulsars having lower surface fields may extend to 50 GeV. Although the gamma-ray emission of older pulsars is weaker, detecting pulsed emission at high energies from nearby sources would be an important confirmation of polar cap models. Outer gap models predict more gradual high-energy turnovers at around 10 GeV, but also predict an inverse Compton component extending to TeV energies. Detection of pulsed TeV emission, which would not survive attenuation at the polar caps, is thus an important test of outer gap models. N...

  13. Artificial Neural Network Model for Predicting Compressive

    Directory of Open Access Journals (Sweden)

    Salim T. Yousif

    2013-05-01

    Full Text Available Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort in applying neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. The test of the model with unused data within the range of the input parameters shows that the maximum absolute error of the model is about 20% and 88% of the output results have absolute errors less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results show that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.
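
    A back-propagation network of the kind described can be reproduced in outline with scikit-learn's MLPRegressor, as sketched below. The mix-proportion data and the toy strength law are synthetic placeholders, not the literature data sets or the network architecture used in the study.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(7)

        # Synthetic mix data: cement, water, fine agg., coarse agg. (kg/m3), MAS (mm), slump (mm).
        n = 300
        X = np.column_stack([
            rng.uniform(250, 500, n),    # cement content
            rng.uniform(140, 220, n),    # water content
            rng.uniform(600, 900, n),    # fine aggregate
            rng.uniform(900, 1200, n),   # coarse aggregate
            rng.choice([10, 20, 40], n), # maximum aggregate size
            rng.uniform(25, 200, n),     # slump
        ])
        wc = X[:, 1] / X[:, 0]
        fc = 90.0 - 80.0 * wc + rng.normal(scale=3.0, size=n)   # toy strength law (MPa)

        X_tr, X_te, y_tr, y_te = train_test_split(X, fc, test_size=0.25, random_state=0)
        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))
        model.fit(X_tr, y_tr)

        rel_err = np.abs(model.predict(X_te) - y_te) / np.abs(y_te)
        print("max relative error:", rel_err.max())
        print("share below 10%   :", (rel_err < 0.10).mean())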

  14. Ground Motion Prediction Models for Caucasus Region

    Science.gov (United States)

    Jorjiashvili, Nato; Godoladze, Tea; Tvaradze, Nino; Tumanova, Nino

    2016-04-01

    Ground motion prediction models (GMPMs) relate ground motion intensity measures to variables describing earthquake source, path, and site effects. Estimation of the expected ground motion is fundamental to earthquake hazard assessment. The most commonly used parameter for attenuation relations is peak ground acceleration or spectral acceleration, because this parameter gives useful information for seismic hazard assessment. Development of the Georgian Digital Seismic Network started in 2003. In this study, new GMP models are obtained based on new data from the Georgian seismic network and also from neighboring countries. The models are estimated in the classical statistical way, by regression analysis. Site ground conditions are additionally considered, because the same earthquake recorded at the same distance may cause different damage depending on ground conditions. Empirical ground-motion prediction models (GMPMs) require adjustment to make them appropriate for site-specific scenarios. However, the process of making such adjustments remains a challenge. This work presents a holistic framework for the development of a peak ground acceleration (PGA) or spectral acceleration (SA) GMPE that is easily adjustable to different seismological conditions and does not suffer from the practical problems associated with adjustments in the response spectral domain.
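
    A common functional form for such a prediction equation, and the corresponding regression step, are sketched below. The log-linear form, the coefficients, and the synthetic records are illustrative assumptions rather than the model actually derived from the Caucasus data.

        import numpy as np
        from scipy.optimize import curve_fit

        def gmpe(X, c0, c1, c2, c3, c4):
            """ln(PGA) = c0 + c1*M + c2*ln(sqrt(R^2 + c3^2)) + c4*S, with site flag S."""
            M, R, S = X
            return c0 + c1 * M + c2 * np.log(np.sqrt(R**2 + c3**2)) + c4 * S

        # Synthetic records: magnitude, hypocentral distance (km), site flag, observed ln(PGA).
        rng = np.random.default_rng(3)
        M = rng.uniform(4.0, 7.0, 400)
        R = rng.uniform(5.0, 150.0, 400)
        S = rng.integers(0, 2, 400)
        ln_pga = gmpe((M, R, S), -3.5, 0.9, -1.1, 8.0, 0.3) + rng.normal(scale=0.5, size=400)

        # Classical regression of the attenuation relation against the records.
        coeffs, _ = curve_fit(gmpe, (M, R, S), ln_pga, p0=[-3.0, 1.0, -1.0, 5.0, 0.2])
        print("fitted coefficients:", np.round(coeffs, 2))
        print("sigma (ln units)   :", np.std(ln_pga - gmpe((M, R, S), *coeffs)))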

  15. Modeling and Prediction of Krueger Device Noise

    Science.gov (United States)

    Guo, Yueping; Burley, Casey L.; Thomas, Russell H.

    2016-01-01

    This paper presents the development of a noise prediction model for aircraft Krueger flap devices that are considered as alternatives to leading edge slotted slats. The prediction model decomposes the total Krueger noise into four components, generated by the unsteady flows, respectively, in the cove under the pressure side surface of the Krueger, in the gap between the Krueger trailing edge and the main wing, around the brackets supporting the Krueger device, and around the cavity on the lower side of the main wing. For each noise component, the modeling follows a physics-based approach that aims at capturing the dominant noise-generating features in the flow and developing correlations between the noise and the flow parameters that control the noise generation processes. The far field noise is modeled using each of the four noise component's respective spectral functions, far field directivities, Mach number dependencies, component amplitudes, and other parametric trends. Preliminary validations are carried out by using small scale experimental data, and two applications are discussed; one for conventional aircraft and the other for advanced configurations. The former focuses on the parametric trends of Krueger noise on design parameters, while the latter reveals its importance in relation to other airframe noise components.

  16. A generative model for predicting terrorist incidents

    Science.gov (United States)

    Verma, Dinesh C.; Verma, Archit; Felmlee, Diane; Pearson, Gavin; Whitaker, Roger

    2017-05-01

    A major concern in coalition peace-support operations is the incidence of terrorist activity. In this paper, we propose a generative model for the occurrence of terrorist incidents, and illustrate that an increase in diversity, as measured by the number of different social groups to which an individual belongs, is inversely correlated with the likelihood of a terrorist incident in the society. A generative model is one that can predict the likelihood of events in new contexts, as opposed to statistical models, which are used to predict future incidents based on the history of incidents in an existing context. Generative models can be useful in planning for persistent Information Surveillance and Reconnaissance (ISR) since they allow an estimation of regions in the theater of operation where terrorist incidents may arise, and thus can be used to better allocate the assignment and deployment of ISR assets. In this paper, we present a taxonomy of terrorist incidents, identify factors related to the occurrence of terrorist incidents, and provide a mathematical analysis calculating the likelihood of occurrence of terrorist incidents in three common real-life scenarios arising in peace-keeping operations.

  17. Toward a predictive model for elastomer seals

    Science.gov (United States)

    Molinari, Nicola; Khawaja, Musab; Sutton, Adrian; Mostofi, Arash

    Nitrile butadiene rubber (NBR) and hydrogenated-NBR (HNBR) are widely used elastomers, especially as seals in oil and gas applications. During exposure to well-hole conditions, ingress of gases causes degradation of performance, including mechanical failure. We use computer simulations to investigate this problem at two different length and time-scales. First, we study the solubility of gases in the elastomer using a chemically-inspired description of HNBR based on the OPLS all-atom force-field. Starting with a model of NBR, C=C double bonds are saturated with either hydrogen or intramolecular cross-links, mimicking the hydrogenation of NBR to form HNBR. We validate against trends for the mass density and glass transition temperature for HNBR as a function of cross-link density, and for NBR as a function of the fraction of acrylonitrile in the copolymer. Second, we study mechanical behaviour using a coarse-grained model that overcomes some of the length and time-scale limitations of an all-atom approach. Nanoparticle fillers added to the elastomer matrix to enhance mechanical response are also included. Our initial focus is on understanding the mechanical properties at the elevated temperatures and pressures experienced in well-hole conditions.

  18. Nonequilibrium Anderson model made simple with density functional theory

    Science.gov (United States)

    Kurth, S.; Stefanucci, G.

    2016-12-01

    The single-impurity Anderson model is studied within the i-DFT framework, a recently proposed extension of density functional theory (DFT) for the description of electron transport in the steady state. i-DFT is designed to give both the steady current and density at the impurity, and it requires the knowledge of the exchange-correlation (xc) bias and on-site potential (gate). In this work we construct an approximation for both quantities which is accurate in a wide range of temperatures, gates, and biases, thus providing a simple and unifying framework to calculate the differential conductance at negligible computational cost in different regimes. Our results mark a substantial advance for DFT and may inform the construction of functionals applicable to other correlated systems.

  19. Dynamic density functional theory of solid tumor growth: Preliminary models

    Directory of Open Access Journals (Sweden)

    Arnaud Chauviere

    2012-03-01

    Full Text Available Cancer is a disease that can be seen as a complex system whose dynamics and growth result from nonlinear processes coupled across wide ranges of spatio-temporal scales. The current mathematical modeling literature addresses issues at various scales but the development of theoretical methodologies capable of bridging gaps across scales needs further study. We present a new theoretical framework based on Dynamic Density Functional Theory (DDFT), extended for the first time to the dynamics of living tissues by accounting for cell density correlations, different cell types, phenotypes and cell birth/death processes, in order to provide a biophysically consistent description of processes across the scales. We present an application of this approach to tumor growth.

  20. Optimal feedback scheduling of model predictive controllers

    Institute of Scientific and Technical Information of China (English)

    Pingfang ZHOU; Jianying XIE; Xiaolong DENG

    2006-01-01

    Model predictive control (MPC) could not be reliably applied to real-time control systems because its computation time is not well defined. Implemented as an anytime algorithm, an MPC task allows computation time to be traded for control performance, thus obtaining predictability in time. Optimal feedback scheduling (FS-CBS) of a set of MPC tasks is presented to maximize the global control performance subject to limited processor time. Each MPC task is assigned a constant bandwidth server (CBS), whose reserved processor time is adjusted dynamically. The constraints in the FS-CBS guarantee schedulability of the total task set and stability of each component. The FS-CBS is shown to be robust against variation in the execution time of MPC tasks at runtime. Simulation results illustrate its effectiveness.

  1. Objective calibration of numerical weather prediction models

    Science.gov (United States)

    Voudouri, A.; Khain, P.; Carmona, I.; Bellprat, O.; Grazzini, F.; Avgoustoglou, E.; Bettems, J. M.; Kaufmann, P.

    2017-07-01

    Numerical weather prediction (NWP) and climate models use parameterization schemes for physical processes, which often include free or poorly confined parameters. Model developers normally calibrate the values of these parameters subjectively to improve the agreement of forecasts with available observations, a procedure referred to as expert tuning. A practicable objective multivariate calibration method built on a quadratic meta-model (MM), which has been applied to a regional climate model (RCM), has been shown to be at least as good as expert tuning. Based on these results, an approach to implementing the methodology for an NWP model is presented in this study. The challenges in transferring the methodology from the RCM to the NWP model are not restricted to the use of higher resolution and different time scales: the sensitivity of the NWP model quality with respect to the model parameter space has to be clarified, and the overall procedure has to be optimized in terms of the amount of computing resources required for the calibration of an NWP model. Three free model parameters affecting mainly the turbulence parameterization schemes were originally selected with respect to their influence on variables associated with daily forecasts, such as daily minimum and maximum 2 m temperature as well as 24 h accumulated precipitation. Preliminary results indicate that the approach is both affordable in terms of computer resources and meaningful in terms of improved forecast quality. In addition, the proposed methodology has the advantage of being a replicable procedure that can be applied when an updated model version is launched and/or to customize the same model implementation over different climatological areas.
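
    The quadratic meta-model idea can be made concrete with a small sketch: a second-order polynomial response surface is fitted to a skill score evaluated at a handful of parameter settings, and the surface is then minimized to propose a calibrated parameter set. Everything below (the toy skill function, the three parameters, the use of numpy, scipy and scikit-learn) is an assumption for illustration; it is not the calibration software used in the study.

        import numpy as np
        from scipy.optimize import minimize
        from sklearn.preprocessing import PolynomialFeatures
        from sklearn.linear_model import LinearRegression
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(5)

        def skill(p):
            """Toy forecast-error score of three turbulence-related parameters (lower is better)."""
            return ((p[0] - 0.3) ** 2 + 2.0 * (p[1] - 0.7) ** 2 + 0.5 * (p[2] + 0.2) ** 2
                    + 0.3 * p[0] * p[1] + rng.normal(scale=0.01))

        # A small design of parameter settings; each would normally require one model run.
        design = rng.uniform(-1.0, 1.0, size=(30, 3))
        scores = np.array([skill(p) for p in design])

        # Quadratic meta-model: a second-order polynomial surface fitted to the design points.
        mm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
        mm.fit(design, scores)

        # Calibrate by minimizing the cheap meta-model instead of the expensive NWP model itself.
        res = minimize(lambda p: mm.predict(p.reshape(1, -1))[0], x0=np.zeros(3),
                       bounds=[(-1, 1)] * 3)
        print("calibrated parameters:", np.round(res.x, 2))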

  2. Addressing Conceptual Model Uncertainty in the Evaluation of Model Prediction Errors

    Science.gov (United States)

    Carrera, J.; Pool, M.

    2014-12-01

    Model predictions are uncertain because of errors in model parameters, future forcing terms, and model concepts. The latter remain the largest and most difficult to assess source of uncertainty in long term model predictions. We first review existing methods to evaluate conceptual model uncertainty. We argue that they are highly sensitive to the ingenuity of the modeler, in the sense that they rely on the modeler's ability to propose alternative model concepts. Worse, we find that the standard practice of stochastic methods leads to poor, potentially biased and often too optimistic, estimation of actual model errors. This is bad news because stochastic methods are purported to properly represent uncertainty. We contend that the problem does not lie on the stochastic approach itself, but on the way it is applied. Specifically, stochastic inversion methodologies, which demand quantitative information, tend to ignore geological understanding, which is conceptually rich. We illustrate some of these problems with an application to the Mar del Plata aquifer, where extensive data are available for nearly a century. Geologically based models, where spatial variability is handled through zonation, yield calibration fits similar to geostatistically based models, but much better predictions. In fact, the appearance of the stochastic T fields is similar to the geologically based models only in areas with a high density of data. We take this finding to illustrate the ability of stochastic models to accommodate many data, but also, ironically, their inability to address conceptual model uncertainty. In fact, stochastic model realizations tend to be too close to the "most likely" one (i.e., they do not really realize the full conceptual uncertainty). The second part of the presentation is devoted to arguing that acknowledging model uncertainty may lead to qualitatively different decisions than just working with "most likely" model predictions. Therefore, efforts should concentrate on

  3. Prediction models from CAD models of 3D objects

    Science.gov (United States)

    Camps, Octavia I.

    1992-11-01

    In this paper we present a probabilistic prediction based approach for CAD-based object recognition. Given a CAD model of an object, the PREMIO system combines techniques of analytic graphics and physical models of lights and sensors to predict how features of the object will appear in images. In nearly 4,000 experiments on analytically-generated and real images, we show that in a semi-controlled environment, predicting the detectability of features of the image can successfully guide a search procedure to make informed choices of model and image features in its search for correspondences that can be used to hypothesize the pose of the object. Furthermore, we provide a rigorous experimental protocol that can be used to determine the optimal number of correspondences to seek so that the probability of failing to find a pose and of finding an inaccurate pose are minimized.

  4. Model predictive control of MSMPR crystallizers

    Science.gov (United States)

    Moldoványi, Nóra; Lakatos, Béla G.; Szeifert, Ferenc

    2005-02-01

    A multi-input-multi-output (MIMO) control problem of isothermal continuous crystallizers is addressed in order to create an adequate model-based control system. The moment equation model of mixed suspension, mixed product removal (MSMPR) crystallizers, which forms a dynamical system, is used; its state is represented by a vector of six variables: the first four leading moments of the crystal size distribution, the solute concentration and the solvent concentration. Hence, the time evolution of the system occurs in a bounded region of the six-dimensional phase space. The controlled variables are the mean grain size and the crystal size distribution; the manipulated variables are the input concentration of the solute and the flow rate. The controllability and observability, as well as the coupling between the inputs and the outputs, were analyzed by simulation using the linearized model. It is shown that the crystallizer is a nonlinear MIMO system with strong coupling between the state variables. Considering the possibilities of model reduction, a third-order model was found to be quite adequate for the model estimation in model predictive control (MPC). The mean crystal size and the variance of the size distribution can be controlled nearly separately by the residence time and the inlet solute concentration, respectively. By seeding, the controllability of the crystallizer increases significantly, and the overshoots and oscillations become smaller. The results of the control study show that linear MPC is an adaptable and feasible controller for continuous crystallizers.
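
    The receding-horizon idea behind MPC can be illustrated with a minimal sketch: an unconstrained linear MPC loop on a toy third-order state-space model. The matrices, horizon, weights and set-point below are illustrative placeholders and are not the crystallizer moment model of the paper.

        import numpy as np

        # Toy third-order linear model standing in for a reduced process model.
        A = np.array([[0.9, 0.1, 0.0],
                      [0.0, 0.8, 0.1],
                      [0.0, 0.0, 0.7]])
        B = np.array([[0.0, 0.1],
                      [0.1, 0.0],
                      [0.0, 0.1]])
        C = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])      # two controlled outputs
        N, lam = 10, 0.01                    # prediction horizon and input penalty
        r = np.array([1.0, 0.5])             # output set-point

        # Stack the predictions y_{1..N} = Phi x_0 + Gamma [u_0 ... u_{N-1}].
        nx, nu, ny = A.shape[0], B.shape[1], C.shape[0]
        Phi = np.vstack([C @ np.linalg.matrix_power(A, k + 1) for k in range(N)])
        Gamma = np.zeros((N * ny, N * nu))
        for i in range(N):
            for j in range(i + 1):
                Gamma[i*ny:(i+1)*ny, j*nu:(j+1)*nu] = C @ np.linalg.matrix_power(A, i - j) @ B

        def mpc_step(x0):
            """One receding-horizon step: solve the unconstrained QP and return u_0."""
            H = Gamma.T @ Gamma + lam * np.eye(N * nu)
            g = Gamma.T @ (np.tile(r, N) - Phi @ x0)
            u_seq = np.linalg.solve(H, g)
            return u_seq[:nu]

        # Closed-loop simulation from an arbitrary initial state.
        x = np.array([0.2, -0.1, 0.0])
        for _ in range(30):
            u = mpc_step(x)
            x = A @ x + B @ u
        print("final outputs:", C @ x, "target:", r)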

  5. An Anisotropic Hardening Model for Springback Prediction

    Science.gov (United States)

    Zeng, Danielle; Xia, Z. Cedric

    2005-08-01

    As more Advanced High-Strength Steels (AHSS) are heavily used for automotive body structures and closure panels, accurate springback prediction for these components becomes more challenging because of their rapid hardening characteristics and ability to sustain even higher stresses. In this paper, a modified Mroz hardening model is proposed to capture the Bauschinger effect realistically under reverse loading, such as when the material passes through die radii or a drawbead during the sheet metal forming process. This model accounts for an anisotropic material yield surface and nonlinear isotropic/kinematic hardening behavior. Material tension/compression test data are used to accurately represent the Bauschinger effect. The effectiveness of the model is demonstrated by a comparison of numerical and experimental springback results for a DP600 straight U-channel test.

  6. The efficiency of the RULES-4 classification learning algorithm in predicting the density of agents

    Directory of Open Access Journals (Sweden)

    Ziad Salem

    2014-12-01

    Full Text Available Learning is the act of obtaining new or modifying existing knowledge, behaviours, skills or preferences. The ability to learn is found in humans, other organisms and some machines. Learning is always based on some sort of observations or data, such as examples, direct experience or instruction. This paper presents a classification algorithm that learns the density of agents in an arena based on the measurements of the six proximity sensors of a combined actuator-sensor unit (CASU). Rules induced by the learning algorithm are presented; the algorithm was trained with data sets based on the CASU's sensor data streams collected during a number of experiments with "Bristlebots" (agents) in the arena (environment). It was found that the set of rules generated by the learning algorithm is able to predict the number of bristlebots in the arena from the CASU's sensor readings with satisfactory accuracy.

  7. Prediction of the P-leaching potential of arable soils in areas with high livestock densities

    Institute of Scientific and Technical Information of China (English)

    WERNER Wilfried; TRIMBORN Manfred; PIHL Uwe

    2006-01-01

    Due to long-term positive P balances, many surface soils in areas with high livestock density in Germany are oversupplied with available P, creating a potential for vertical P losses by leaching. Extensive studies characterizing the risk of groundwater P pollution by means of chemical soil parameters show that the available P content and the P concentration of the soil solution in the deeper soil layers, as indicators of the P-leaching potential, cannot be satisfactorily predicted from the available P content of the topsoil. The P equilibrium concentration in the soil solution directly above the groundwater table or the pipe drainage system depends strongly on the relative saturation of the P-sorption capacity in this layer. A saturation index of <20% normally corresponds to P equilibrium concentrations of <0.2 mg P/L. Phytoremediation may reduce the P-leaching potential of P-enriched soils only over a very long period.

  8. Practical steady-state temperature prediction of active embedded chips into high density electronic board

    Science.gov (United States)

    Monier-Vinard, Eric; Rogie, Brice; Nguyen, Nhat-Minh; Laraqi, Najib; Bissuel, Valentin; Daniel, Olivier

    2016-09-01

    Printed Wiring Board die embedding technology is an innovative packaging alternative that addresses a very high degree of integration by stacking multiple core layers housing active chips. Nevertheless, this increases the thermal management challenge by concentrating heat dissipation at the heart of the substrate and exacerbates the need for adequate cooling. In order to allow electronic designers to analyse early the limits of in-layer power dissipation, depending on the chip location inside the board, various analytical thermal modelling approaches were investigated. The buried active chips can be represented using surface or volumetric heating sources, according to the expected accuracy. Moreover, the current work compares the volumetric heating source analytical model with state-of-the-art detailed numerical models of several embedded-chip configurations, and discusses whether it is necessary to simulate in full detail the embedded chips as well as the surrounding layers and micro-via structures of the substrate. The results highlight that the thermal behaviour predictions of the analytical model are within ±5% relative error, demonstrating its relevance for modelling an embedded chip and its neighbouring heating chips or components. Furthermore, the predictive model is in good agreement with an experimental characterization performed on a thermal test vehicle. In summary, the developed analytical approach offers several practical solutions to achieve a more efficient design and to identify potential board-cooling issues early.

  9. Hartree-Fock and density functional complete basis set (CBS) predicted shielding anisotropy and shielding tensor components.

    Energy Technology Data Exchange (ETDEWEB)

    Kupka, T.; Ruscic, B.; Botto, R. E.; Chemistry

    2003-05-01

    The nuclear shielding anisotropy and shielding tensor components calculated using the hybrid density functional B3PW91 are reported for a model set of compounds comprising N₂, NH₃, CH₄, C₂H₄, HCN and CH₃CN. An estimation of density functional theory (DFT) and Hartree-Fock complete basis-set limit (CBS) parameters from a 2 (3) point exact fit vs. least-squares fit was obtained with the cc-pVxZ and aug-cc-pVxZ basis sets (x = D, T, Q, 5, 6). Both Hartree-Fock- and DFT-predicted CBS shielding anisotropies and shielding tensor components of the model molecules were in reasonable agreement with available experimental data. The utility of using a limited CBS approach for calculating accurate anisotropic shielding parameters of larger molecules as a complementary method to solid-state NMR is proposed.
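
    As a worked illustration of the kind of three-point CBS extrapolation referred to above, the sketch below fits the common exponential form E(x) = E_CBS + A·exp(-Bx) to shieldings computed at cardinal numbers x = 2, 3, 4 (cc-pVDZ/TZ/QZ). The numerical shielding values are placeholders, not results from the paper, and the exponential form is one standard choice rather than the authors' specific fit.

        import math

        # Placeholder isotropic shieldings (ppm) at cardinal numbers x = 2, 3, 4
        # (cc-pVDZ, cc-pVTZ, cc-pVQZ); the values are illustrative only.
        E2, E3, E4 = -60.5, -58.2, -57.4

        # Three-point exact fit of E(x) = E_cbs + A * exp(-B * x):
        # the successive differences give B, and the limit follows in closed form.
        B = math.log((E2 - E3) / (E3 - E4))
        E_cbs = E4 - (E3 - E4) ** 2 / (E2 - 2.0 * E3 + E4)
        A = (E3 - E4) / (math.exp(-3.0 * B) - math.exp(-4.0 * B))

        print(f"exponent B = {B:.3f}, CBS limit = {E_cbs:.3f} ppm, prefactor A = {A:.3f}")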

  10. Combined Computational Approach Based on Density Functional Theory and Artificial Neural Networks for Predicting The Solubility Parameters of Fullerenes.

    Science.gov (United States)

    Perea, J Darío; Langner, Stefan; Salvador, Michael; Kontos, Janos; Jarvas, Gabor; Winkler, Florian; Machui, Florian; Görling, Andreas; Dallos, Andras; Ameri, Tayebeh; Brabec, Christoph J

    2016-05-19

    The solubility of organic semiconductors in environmentally benign solvents is an important prerequisite for the widespread adoption of organic electronic appliances. Solubility can be determined by considering the cohesive forces in a liquid via Hansen solubility parameters (HSP). We report a numerical approach to determine the HSP of fullerenes using a mathematical tool based on artificial neural networks (ANN). The ANN transforms the molecular surface charge density distribution (σ-profile), as determined by density functional theory (DFT) calculations within the framework of a continuum solvation model, into solubility parameters. We validate our model with experimentally determined HSP of the fullerenes C60, PC61BM, bisPC61BM, ICMA, ICBA, and PC71BM and through comparison with previously reported molecular dynamics calculations. Most excitingly, the ANN is able to correctly predict the dispersive contributions to the solubility parameters of the fullerenes although no explicit information on the van der Waals forces is present in the σ-profile. The presented theoretical DFT calculation in combination with the ANN mathematical tool can easily be extended to other π-conjugated electronic material classes and offers a fast and reliable toolbox for future pathways that may include the design of green ink formulations for solution-processed optoelectronic devices.
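
    A minimal sketch of the σ-profile-to-HSP mapping described above, assuming the σ-profile has been discretized into a fixed-length histogram of screening charge densities: a small feed-forward network regresses the three Hansen parameters from that histogram. The data, bin count and network size are synthetic stand-ins, not the authors' training set or architecture.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(1)

        # Placeholder training data: each input row is a sigma-profile discretized
        # into 51 bins; each target row holds the three Hansen parameters
        # (dispersive, polar, hydrogen-bonding) in MPa^0.5.
        sigma_profiles = rng.random((40, 51))
        hsp_targets = rng.uniform(10.0, 25.0, size=(40, 3))

        scaler = StandardScaler().fit(sigma_profiles)
        ann = MLPRegressor(hidden_layer_sizes=(20, 10), max_iter=5000, random_state=0)
        ann.fit(scaler.transform(sigma_profiles), hsp_targets)

        # Predict the HSP of a new (here random) sigma-profile, e.g. a fullerene derivative.
        new_profile = rng.random((1, 51))
        print("predicted [dD, dP, dH]:", ann.predict(scaler.transform(new_profile)))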

  11. Predictive modelling of ferroelectric tunnel junctions

    Science.gov (United States)

    Velev, Julian P.; Burton, John D.; Zhuravlev, Mikhail Ye; Tsymbal, Evgeny Y.

    2016-05-01

    Ferroelectric tunnel junctions combine the phenomena of quantum-mechanical tunnelling and switchable spontaneous polarisation of a nanometre-thick ferroelectric film into novel device functionality. Switching the ferroelectric barrier polarisation direction produces a sizable change in resistance of the junction—a phenomenon known as the tunnelling electroresistance effect. From a fundamental perspective, ferroelectric tunnel junctions and their version with ferromagnetic electrodes, i.e., multiferroic tunnel junctions, are testbeds for studying the underlying mechanisms of tunnelling electroresistance as well as the interplay between electric and magnetic degrees of freedom and their effect on transport. From a practical perspective, ferroelectric tunnel junctions hold promise for disruptive device applications. In a very short time, they have traversed the path from basic model predictions to prototypes for novel non-volatile ferroelectric random access memories with non-destructive readout. This remarkable progress is to a large extent driven by a productive cycle of predictive modelling and innovative experimental effort. In this review article, we outline the development of the ferroelectric tunnel junction concept and the role of theoretical modelling in guiding experimental work. We discuss a wide range of physical phenomena that control the functional properties of ferroelectric tunnel junctions and summarise the state-of-the-art achievements in the field.

  12. Simple predictions from multifield inflationary models.

    Science.gov (United States)

    Easther, Richard; Frazer, Jonathan; Peiris, Hiranya V; Price, Layne C

    2014-04-25

    We explore whether multifield inflationary models make unambiguous predictions for fundamental cosmological observables. Focusing on N-quadratic inflation, we numerically evaluate the full perturbation equations for models with 2, 3, and O(100) fields, using several distinct methods for specifying the initial values of the background fields. All scenarios are highly predictive, with the probability distribution functions of the cosmological observables becoming more sharply peaked as N increases. For N = 100 fields, 95% of our Monte Carlo samples fall in the ranges n_s ∈ (0.9455, 0.9534), α ∈ (-9.741, -7.047) × 10^-4, r ∈ (0.1445, 0.1449), and r_iso ∈ (0.02137, 3.510) × 10^-3 for the spectral index, running, tensor-to-scalar ratio, and isocurvature-to-adiabatic ratio, respectively. The expected amplitude of isocurvature perturbations grows with N, raising the possibility that many-field models may be sensitive to postinflationary physics and suggesting new avenues for testing these scenarios.

  13. Numerical modeling of oxides of nitrogen based on density of biodiesel fuels

    Directory of Open Access Journals (Sweden)

    A. Gopinath, Sukumar Puhan, G. Nagarajan

    2010-03-01

    Full Text Available Biodiesel is an alternative fuel derived from vegetable oils or animal fats. Research has shown that biodiesel-fueled engines produce less carbon monoxide, unburned hydrocarbon, and particulate emissions than mineral-based diesel fuel, but emit higher oxides of nitrogen (NOx) emissions. NOx emissions can be strongly correlated with the density or cetane number of a fuel. The objective of the present work is to predict the NOx concentration of a neat biodiesel-fueled compression ignition engine from the density of the biodiesel fuels using a regression model. Experiments were conducted at different engine loads and the results were used as inputs to develop the regression model. A single-cylinder, four-stroke, constant-speed, air-cooled, direct-injection diesel engine was used for the experiments. Five different biodiesel fuels were used and NOx emissions were measured at different engine loads. The NOx concentration was taken as the response (dependent) variable and the density values were taken as the explanatory (independent) variables. The regression model yielded R2 values between 0.918 and 0.995. The maximum prediction error was found to be 3.01%.
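
    A minimal sketch of this kind of regression, assuming a single engine load: an ordinary least-squares line of NOx concentration against fuel density, with R2 computed from the residuals. The density and NOx values are invented placeholders, not measurements from the study.

        import numpy as np

        # Placeholder data at one engine load: fuel density (kg/m^3) vs NOx (ppm).
        density = np.array([860.0, 870.0, 875.0, 880.0, 885.0])
        nox = np.array([910.0, 955.0, 980.0, 1000.0, 1025.0])

        # Ordinary least-squares line NOx = a * density + b.
        a, b = np.polyfit(density, nox, deg=1)
        predicted = a * density + b

        # Coefficient of determination R^2 of the fit.
        ss_res = np.sum((nox - predicted) ** 2)
        ss_tot = np.sum((nox - nox.mean()) ** 2)
        print(f"slope = {a:.2f} ppm per kg/m^3, R^2 = {1 - ss_res / ss_tot:.3f}")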

  14. Predictions of models for environmental radiological assessment

    Energy Technology Data Exchange (ETDEWEB)

    Peres, Sueli da Silva; Lauria, Dejanira da Costa, E-mail: suelip@ird.gov.br, E-mail: dejanira@irg.gov.br [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Servico de Avaliacao de Impacto Ambiental, Rio de Janeiro, RJ (Brazil); Mahler, Claudio Fernando [Coppe. Instituto Alberto Luiz Coimbra de Pos-Graduacao e Pesquisa de Engenharia, Universidade Federal do Rio de Janeiro (UFRJ) - Programa de Engenharia Civil, RJ (Brazil)

    2011-07-01

    In the field of environmental impact assessment, models are used for estimating the source term, the environmental dispersion and transfer of radionuclides, the exposure pathways, the radiation dose and the risk to human beings. Although it is recognized that specific local data are important to improve the quality of dose assessment results, in practice obtaining them can be very difficult and expensive. Sources of uncertainty are numerous, among which we can cite: the subjectivity of modelers, exposure scenarios and pathways, the codes used and general parameters. The various models available utilize different mathematical approaches with different complexities that can result in different predictions. Thus, for the same inputs different models can produce very different outputs. This paper briefly presents the main advances in the field of environmental radiological assessment that aim to improve the reliability of the models used in the assessment of environmental radiological impact. A model intercomparison exercise supplied incompatible results for ¹³⁷Cs and ⁶⁰Co, reinforcing the need to develop reference methodologies for environmental radiological assessment that allow dose estimates to be compared on a common basis. The results of the intercomparison exercise are presented briefly. (author)

  15. Predicting Protein Secondary Structure with Markov Models

    DEFF Research Database (Denmark)

    Fischer, Paul; Larsen, Simon; Thomsen, Claus

    2004-01-01

    The primary structure of a protein is the sequence of its amino acids. The secondary structure describes structural properties of the molecule, such as which parts of it form sheets, helices or coils. Spatial and other properties are described by the higher-order structures. The classification task we are considering here is to predict the secondary structure from the primary one. To this end we train a Markov model on training data and then use it to classify parts of unknown protein sequences as sheets, helices or coils. We show how to exploit the directional information contained...

  16. A Modified Model Predictive Control Scheme

    Institute of Scientific and Technical Information of China (English)

    Xiao-Bing Hu; Wen-Hua Chen

    2005-01-01

    In implementations of MPC (Model Predictive Control) schemes, two issues need to be addressed. One is how to enlarge the stability region as much as possible. The other is how to guarantee stability when a computational time limitation exists. In this paper, a modified MPC scheme for constrained linear systems is described. An offline LMI-based iteration process is introduced to expand the stability region. At the same time, a database of feasible control sequences is generated offline so that stability can still be guaranteed in the case of computational time limitations. Simulation results illustrate the effectiveness of this new approach.

  17. Hierarchical Model Predictive Control for Resource Distribution

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob

    2010-01-01

    This paper deals with hierarchical model predictive control (MPC) of distributed systems. A three-level hierarchical approach is proposed, consisting of a high-level MPC controller, a second level of so-called aggregators, controlled by an online MPC-like algorithm, and a lower level of autonomous... facilitates plug-and-play addition of subsystems without redesign of any controllers. The method is supported by a number of simulations featuring a three-level smart-grid power control system for a small isolated power grid.

  18. Explicit model predictive control accuracy analysis

    OpenAIRE

    Knyazev, Andrew; Zhu, Peizhen; Di Cairano, Stefano

    2015-01-01

    Model Predictive Control (MPC) can efficiently control constrained systems in real-time applications. The MPC feedback law for a linear system with linear inequality constraints can be explicitly computed off-line, which results in an off-line partition of the state space into non-overlapping convex regions, with affine control laws associated with each region of the partition. An actual implementation of this explicit MPC in low-cost micro-controllers requires the data to be "quantized", i.e. repre...

  19. Critical conceptualism in environmental modeling and prediction.

    Science.gov (United States)

    Christakos, G

    2003-10-15

    Many important problems in environmental science and engineering are of a conceptual nature. Research and development, however, often becomes so preoccupied with technical issues, which are themselves fascinating, that it neglects essential methodological elements of conceptual reasoning and theoretical inquiry. This work suggests that valuable insight into environmental modeling can be gained by means of critical conceptualism which focuses on the software of human reason and, in practical terms, leads to a powerful methodological framework of space-time modeling and prediction. A knowledge synthesis system develops the rational means for the epistemic integration of various physical knowledge bases relevant to the natural system of interest in order to obtain a realistic representation of the system, provide a rigorous assessment of the uncertainty sources, generate meaningful predictions of environmental processes in space-time, and produce science-based decisions. No restriction is imposed on the shape of the distribution model or the form of the predictor (non-Gaussian distributions, multiple-point statistics, and nonlinear models are automatically incorporated). The scientific reasoning structure underlying knowledge synthesis involves teleologic criteria and stochastic logic principles which have important advantages over the reasoning method of conventional space-time techniques. Insight is gained in terms of real world applications, including the following: the study of global ozone patterns in the atmosphere using data sets generated by instruments on board the Nimbus 7 satellite and secondary information in terms of total ozone-tropopause pressure models; the mapping of arsenic concentrations in the Bangladesh drinking water by assimilating hard and soft data from an extensive network of monitoring wells; and the dynamic imaging of probability distributions of pollutants across the Kalamazoo river.

  20. Strongly interacting matter at high densities with a soliton model

    Science.gov (United States)

    Johnson, Charles Webster

    1998-12-01

    One of the major goals of modern nuclear physics is to explore the phase diagram of strongly interacting matter. The study of these 'extreme' conditions is the primary motivation for the construction of the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory, which will accelerate nuclei to a center of mass (c.m.) energy of about 200 GeV/nucleon. From a theoretical perspective, a test of quantum chromodynamics (QCD) requires the expansion of the conditions examined from one phase point to the entire phase diagram of strongly-interacting matter. In the present work we focus attention on what happens when the density is increased, at low excitation energies. Experimental results from the Brookhaven Alternating Gradient Synchrotron (AGS) indicate that this regime may be tested in the 'full stopping' (maximum energy deposition) scenario achieved at the AGS, having a c.m. collision energy of about 2.5 GeV/nucleon for two equal-mass heavy nuclei. Since the solution of QCD on nuclear length-scales is computationally prohibitive even on today's most powerful computers, progress in the theoretical description of high densities has come through the application of models incorporating some of the essential features of the full theory. The simplest such model is the MIT bag model. We use a significantly more sophisticated model, a nonlocal confining soliton model developed in part at Kent. This model has proven its value in the calculation of the properties of individual mesons and nucleons. In the present application, the many-soliton problem is addressed with the same model. We describe nuclear matter as a lattice of solitons and apply the Wigner-Seitz approximation to the lattice. This means that we consider spherical cells with one soliton centered in each, corresponding to the average properties of the lattice. The average density is then varied by changing the size of the Wigner-Seitz cell. To arrive at a solution, we need to solve a coupled set of

  1. A Semianalytical Model Using MODIS Data to Estimate Cell Density of Red Tide Algae (Aureococcus anophagefferens)

    Directory of Open Access Journals (Sweden)

    Lingling Jiang

    2016-01-01

    Full Text Available A multiband and a single-band semianalytical model were developed to predict the algae cell density distribution. The models were based on cell density (N)-dependent parameterizations of the spectral backscattering coefficients, bb(λ), obtained from in situ measurements. There was a strong relationship between bb(λ) and N, with a minimum regression coefficient of 0.97 at 488 nm and a maximum value of 0.98 at the other bands. The cell density calculated by the multiband inversion model was similar to the field measurements in the coastal waters (the average relative error was only 8.9%), but it could not accurately discern the red tide from mixed pixels, which led to overestimation of the area affected by the red tide. While the single-band inversion model is less precise than the former model in high-chlorophyll water, it could eliminate the impact of suspended sediments and make more accurate estimates of the red tide area. We concluded that the two models both have advantages and disadvantages; these methods lay the foundation for developing a remote sensing forecasting system for red tides.
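
    A minimal sketch of a single-band inversion of the kind described above, assuming a power-law relation bb(488) = c1·N^c2 fitted in log space to in situ pairs and then inverted to map a satellite-derived backscattering value to cell density. The coefficients and data are placeholders, not those of the paper.

        import numpy as np

        # Placeholder in situ pairs: cell density N (cells/mL) and backscattering
        # coefficient bb at 488 nm (1/m); values are illustrative only.
        N = np.array([1e4, 5e4, 1e5, 5e5, 1e6])
        bb488 = np.array([0.010, 0.028, 0.045, 0.110, 0.180])

        # Fit log10(bb) = c2 * log10(N) + log10(c1), i.e. a power law bb = c1 * N**c2.
        c2, log_c1 = np.polyfit(np.log10(N), np.log10(bb488), deg=1)
        c1 = 10.0 ** log_c1

        def cell_density(bb):
            """Invert the fitted power law to estimate N from a satellite-derived bb."""
            return (bb / c1) ** (1.0 / c2)

        print("estimated N for bb = 0.06 1/m:", f"{cell_density(0.06):.3e} cells/mL")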

  2. A gravitational test of wave reinforcement versus fluid density models

    Science.gov (United States)

    Johnson, Jacqueline Umstead

    1990-10-01

    Spermatozoa, protozoa, and algae form macroscopic patterns somewhat analogous to thermally driven convection cells. These bioconvective patterns have attracted interest in the fluid dynamics community, but whether in all cases these waves were gravity driven was unknown. There are two conflicting theories, one gravity dependent (fluid density model), the other gravity independent (wave reinforcement theory). The primary objectives of the summer faculty fellows were to: (1) assist in sample collection (spermatozoa) and preparation for the KC-135 research airplane experiment; and (2) to collaborate on ground testing of bioconvective variables such as motility, concentration, morphology, etc., in relation to their macroscopic patterns. Results are very briefly given.

  3. A unified model of density limit in fusion plasmas

    CERN Document Server

    Zanca, P; Escande, D F; Pucella, G; Tudisco, O

    2016-01-01

    A limit for the edge density, ruled by radiation losses from light impurities, is established by a minimal cylindrical magneto-thermal equilibrium model. For the ohmic tokamak and the reversed-field pinch the limit scales linearly with the plasma current, like the empirical Greenwald limit. Auxiliary heating adds a further dependence, scaling with the 0.4 power of the heating power, in agreement with L-mode tokamak experiments. For a purely externally heated configuration the limit takes on a Sudo-like form, depending mainly on the input power, and is compatible with recent stellarator scalings.
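
    Schematically, one reading of the scalings described above (a paraphrase of the abstract in formula form, with the well-known empirical Greenwald density shown for comparison; the exact coefficients and exponents are those of the paper and are not reproduced here) is:

        \[
          n_{\mathrm{lim}} \propto I_p \quad (\text{ohmic tokamak, RFP}), \qquad
          n_{\mathrm{lim}} \propto I_p\, P_{\mathrm{aux}}^{0.4} \quad (\text{with auxiliary heating}),
        \]
        \[
          n_G = \frac{I_p}{\pi a^2} \quad \bigl[\, n_G\ \text{in}\ 10^{20}\,\mathrm{m^{-3}},\ I_p\ \text{in MA},\ a\ \text{in m} \,\bigr].
        \]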

  4. Predicting low bone density in children and young adults with quadriplegic cerebral palsy.

    Science.gov (United States)

    Henderson, Richard C; Kairalla, John; Abbas, Almas; Stevenson, Richard D

    2004-06-01

    Many children and young adults with cerebral palsy (CP) have diminished bone mineral density (BMD) and a propensity to fracture with minimal trauma. The aim of this study was to identify variables which are routinely assessed as part of standard clinical care and that might be used to identify those individuals with CP who are most likely to have low BMD. One hundred and seven participants (ages 2 years 1 month to 21 years 1 month; mean age 10 years 11 months, SD 4 years 2 months) with moderate to severe spastic CP were assessed in detail. This included gathering clinical data, taking anthropometric measures of growth and nutrition, as well as dual energy X-ray absorptiometry measures of BMD. Seventeen participants were ambulatory with assistance (Gross Motor Function Classification System [GMFCS] level III), and 90 were capable of little or no ambulation even with assistance (26 GMFCS level IV and 64 GMFCS level V). Weight z score proved to be the best predictor of BMD z score. Declining BMD z scores also correlated with increasing age and greater severity of involvement. It can be predicted, with reasonable reliability, that a 10-year-old non-ambulatory child with quadriplegic CP and a 'typical' weight z score of -2 will have a BMD z score that is at best -2. Prior fractures, use of anticonvulsants, and feeding difficulties further reduce predicted BMD.

  5. Can Hip Fracture Prediction in Women be Estimated beyond Bone Mineral Density Measurement Alone?

    Science.gov (United States)

    Geusens, Piet; van Geel, Tineke; van den Bergh, Joop

    2010-01-01

    The etiology of hip fractures is multifactorial and includes bone and fall-related factors. Low bone mineral density (BMD) and BMD-related and BMD-independent geometric components of bone strength, evaluated by hip strength analysis (HSA) and finite element analyses on dual-energy X-ray absorptiometry (DXA) images, and ultrasound parameters are related to the presence and incidence of hip fracture. In addition, clinical risk factors contribute to the risk of hip fractures, independent of BMD. They are included in the fracture risk assessment tool (FRAX) case-finding algorithm to estimate in the individual patient the 10-year risk of hip fracture, with and without BMD. Fall risks are not included in FRAX, but are included in other case-finding tools, such as the Garvan algorithm, to predict the 5- and 10-year hip fracture risk. Hormones, cytokines, growth factors, markers of bone resorption and genetic background have been related to hip fracture risk. Vitamin D deficiency is endemic worldwide and low serum levels of 25-hydroxyvitamin D [25(OH)D] predict hip fracture risk. In the context of hip fracture prevention, calculation of absolute fracture risk using clinical risks, BMD, bone geometry and fall-related risks is feasible, but needs further refinement by integrating bone and fall-related risk factors into a single case-finding algorithm for clinical use. PMID:22870438

  6. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six elements contributing to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.

  7. Highly correlating distance/connectivity-based topological indices 5. Accurate prediction of liquid density of organic molecules using PCR and PC-ANN.

    Science.gov (United States)

    Shamsipur, Mojtaba; Ghavami, Raouf; Sharghi, Hashem; Hemmateenejad, Bahram

    2008-11-01

    The primary goal of a quantitative structure-property relationship (QSPR) is to identify a set of structurally based numerical descriptors that can be mathematically linked to a property of interest. Recently, we proposed some new topological indices (Sh indices), based on the distance sum and connectivity of a molecular graph, that are derived directly from two-dimensional molecular topology for use in QSAR/QSPR studies. In this study, the ability of these indices to predict the liquid densities (rho) of a large and diverse set of organic liquid compounds (521 compounds) has been examined. Ten different Sh indices were calculated for each molecule. Both linear and non-linear modeling methods were implemented using principal component regression (PCR) and principal component-artificial neural network (PC-ANN) modeling with a back-propagation learning algorithm, respectively. A correlation-ranking procedure was used to rank the principal components and enter them into the models. PCR analysis of the data showed that the proposed Sh indices could explain about 91.82% of the variation in the density data, while ANN modeling explained more than 97.93%. The predictive ability of the models was evaluated using external test set molecules, and root mean square errors of prediction of 0.0308 g ml⁻¹ and 0.0248 g ml⁻¹ were obtained for the liquid densities of the external compounds by the linear and non-linear models, respectively.
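
    A minimal sketch of principal component regression of the kind used above, assuming a matrix of ten topological descriptors per molecule and measured liquid densities. The descriptors and densities below are synthetic stand-ins, not the Sh indices or data of the paper, and the correlation-ranking step is replaced by the default variance ordering of the principal components.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(2)

        # Synthetic stand-ins: 100 molecules x 10 topological descriptors, and
        # liquid densities (g/mL) loosely correlated with two of the descriptors.
        descriptors = rng.normal(size=(100, 10))
        density = (0.9 + 0.05 * descriptors[:, 0] - 0.03 * descriptors[:, 3]
                   + 0.01 * rng.normal(size=100))

        # Principal component regression: standardize, keep the leading PCs,
        # then fit an ordinary least-squares model on the PC scores.
        pcr = make_pipeline(StandardScaler(), PCA(n_components=4), LinearRegression())
        pcr.fit(descriptors[:80], density[:80])               # training set (80%)

        r2_test = pcr.score(descriptors[80:], density[80:])   # external test set (20%)
        print(f"test-set R^2 of the PCR model: {r2_test:.3f}")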

  8. A Predictive Maintenance Model for Railway Tracks

    DEFF Research Database (Denmark)

    Li, Rui; Wen, Min; Salling, Kim Bang

    2015-01-01

    For modern railways, maintenance is critical for ensuring safety, train punctuality and overall capacity utilization. The cost of railway maintenance in Europe is high, on average between 30,000 and 100,000 Euro per km per year [1]. Aiming to reduce this maintenance expenditure, this paper... presents a mathematical model based on Mixed Integer Programming (MIP) which is designed to optimize the predictive railway tamping activities for ballasted track over a time horizon of up to four years. The objective function is set up to minimize the actual costs for the tamping machine (measured by time... recovery of the track quality after the tamping operation and (5) tamping machine operation factors. A Danish railway track between Odense and Fredericia, 57.2 km in length, is used in the proposed maintenance model for a time period of two to four years. The total cost can be reduced by up to 50...

  9. Experimental assessment of presumed filtered density function models

    Science.gov (United States)

    Stetsyuk, V.; Soulopoulos, N.; Hardalupas, Y.; Taylor, A. M. K. P.

    2015-06-01

    Measured filtered density functions (FDFs) as well as the assumed beta-distribution model of the mixture fraction and its "subgrid" scale (SGS) scalar variance z″², typically used in large eddy simulations, were studied by analysing experimental data obtained from two-dimensional planar laser-induced fluorescence measurements in isothermal swirling turbulent flows at a constant Reynolds number of 29 000 for different swirl numbers (0.3, 0.58, and 1.07). Two-dimensional spatial filtering with a box filter was performed in order to obtain the filtered variables, namely the resolved mean and the "subgrid" scale scalar variance. These were used as inputs for the assumed beta-distribution and top-hat FDF shape estimates. The presumed beta-distribution model, the top-hat FDF, and the measured filtered density functions were used to integrate a laminar flamelet solution in order to calculate the corresponding resolved temperature. The experimentally measured FDFs varied with the flow swirl number and with both axial and radial position in the flow. The FDFs were unimodal in flow regions with low SGS scalar variance, z″² < 0.02. Bimodal FDFs could be observed for a filter size of approximately 1.5-2 times the Batchelor scale. Unimodal FDFs could be observed for a filter size as large as four times the Batchelor scale under well-mixed conditions. In addition, two common computational models (a gradient assumption and a scale similarity model) for the SGS scalar variance were used with the aim of evaluating their validity through comparison with the experimental data. It was found that the gradient assumption model generally performed better than the scale similarity one.
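
    A minimal sketch of how a presumed beta FDF is typically constructed from a resolved mixture-fraction mean and an SGS variance (the standard beta-PDF parameterization, not code from the paper); the mean and variance values below are illustrative only.

        import numpy as np
        from scipy.stats import beta

        # Resolved (filtered) mixture-fraction mean and SGS variance; illustrative values.
        z_mean, z_var = 0.3, 0.02

        # Standard presumed-beta parameterization: the shape parameters follow from
        # the mean and variance of a beta distribution on [0, 1].
        gamma = z_mean * (1.0 - z_mean) / z_var - 1.0   # must be > 0 (variance below its maximum)
        a, b = z_mean * gamma, (1.0 - z_mean) * gamma

        # Evaluate the presumed FDF on a mixture-fraction grid and check its normalization.
        z = np.linspace(1e-6, 1.0 - 1e-6, 501)
        fdf = beta.pdf(z, a, b)
        print(f"a = {a:.2f}, b = {b:.2f}, integral ≈ {np.trapz(fdf, z):.3f}")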

  10. Models of asthma: density-equalizing mapping and output benchmarking

    Directory of Open Access Journals (Sweden)

    Fischer Tanja C

    2008-02-01

    Full Text Available Abstract Despite the large number of experimental studies already conducted on bronchial asthma, further insights into the molecular basics of the disease are required to establish new therapeutic approaches. As a basis for this research, different animal models of asthma have been developed in past years. However, precise bibliometric data on the use of the different models have not existed so far. The present study was therefore conducted to establish a database of the existing experimental approaches. Density-equalizing algorithms were used and data were retrieved from a Thomson Institute for Scientific Information database. During the period from 1900 to 2006 a total of 3489 filed items were connected to animal models of asthma, the first being published in the year 1968. The studies were published by 52 countries, with the US, Japan and the UK being the most productive suppliers, participating in 55.8% of all published items. Analyzing the average citation per item as an indicator of research quality, Switzerland ranked first (30.54/item) and New Zealand ranked second among countries with more than 10 published studies. The 10 most productive journals included 4 with a main focus on allergy and immunology and 4 with a main focus on the respiratory system. Two journals focussed on pharmacology or pharmacy. Of all assigned subject categories examined for a relation to animal models of asthma, immunology ranked first. Assessing the numbers of published items in relation to animal species, it was found that mice were the preferred species, followed by guinea pigs. In summary, it can be concluded from the density-equalizing calculations that the use of animal models of asthma is restricted to a relatively small number of countries. There are also differences in the use of species. These differences are based on variations in the research focus as assessed by subject category analysis.

  11. Systematics of nuclear densities, deformations and excitation energies within the context of the generalized rotation-vibration model

    Energy Technology Data Exchange (ETDEWEB)

    Chamon, L.C., E-mail: luiz.chamon@dfn.if.usp.b [Departamento de Fisica Nuclear, Instituto de Fisica da Universidade de Sao Paulo, Caixa Postal 66318, 05315-970, Sao Paulo, SP (Brazil); Carlson, B.V. [Departamento de Fisica, Instituto Tecnologico de Aeronautica, Centro Tecnico Aeroespacial, Sao Jose dos Campos, SP (Brazil)

    2010-11-30

    We present a large-scale systematics of charge densities, excitation energies and deformation parameters for hundreds of heavy nuclei. The systematics is based on a generalized rotation-vibration model for the quadrupole and octupole modes and takes into account second-order contributions of the deformations as well as the effects of finite diffuseness values for the nuclear densities. We compare our results with the predictions of classical surface vibrations in the hydrodynamical approximation.

  12. A predictive fitness model for influenza

    Science.gov (United States)

    Łuksza, Marta; Lässig, Michael

    2014-03-01

    The seasonal human influenza A/H3N2 virus undergoes rapid evolution, which produces significant year-to-year sequence turnover in the population of circulating strains. Adaptive mutations respond to human immune challenge and occur primarily in antigenic epitopes, the antibody-binding domains of the viral surface protein haemagglutinin. Here we develop a fitness model for haemagglutinin that predicts the evolution of the viral population from one year to the next. Two factors are shown to determine the fitness of a strain: adaptive epitope changes and deleterious mutations outside the epitopes. We infer both fitness components for the strains circulating in a given year, using population-genetic data of all previous strains. From fitness and frequency of each strain, we predict the frequency of its descendent strains in the following year. This fitness model maps the adaptive history of influenza A and suggests a principled method for vaccine selection. Our results call for a more comprehensive epidemiology of influenza and other fast-evolving pathogens that integrates antigenic phenotypes with other viral functions coupled by genetic linkage.
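
    As a schematic of this frequency propagation (notation mine, a plausible minimal form consistent with the two fitness components named in the abstract, not necessarily the paper's exact equations), a strain i with frequency x_i(t) in year t and inferred fitness f_i would be propagated to the next season as

        \[
          x_i(t+1) = \frac{x_i(t)\, e^{f_i}}{\sum_j x_j(t)\, e^{f_j}}, \qquad
          f_i = \sigma\,\Delta^{\mathrm{ep}}_i \;-\; \lambda\,\Delta^{\mathrm{ne}}_i ,
        \]

    where Δ^ep_i and Δ^ne_i count recent amino-acid changes inside and outside the antigenic epitopes, and σ, λ > 0 are selection coefficients inferred from the population-genetic data of previous seasons.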

  13. Predictive Model of Radiative Neutrino Masses

    CERN Document Server

    Babu, K S

    2013-01-01

    We present a simple and predictive model of radiative neutrino masses. It is a special case of the Zee model which introduces two Higgs doublets and a charged singlet. We impose a family-dependent Z_4 symmetry acting on the leptons, which reduces the number of parameters describing neutrino oscillations to four. A variety of predictions follow: The hierarchy of neutrino masses must be inverted; the lightest neutrino mass is extremely small and calculable; one of the neutrino mixing angles is determined in terms of the other two; the phase parameters take CP-conserving values with δ_CP = π; and the effective mass in neutrinoless double beta decay lies in a narrow range, m_ββ = (17.6 - 18.5) meV. The ratio of vacuum expectation values of the two Higgs doublets, tan β, is determined to be either 1.9 or 0.19 from neutrino oscillation data. Flavor-conserving and flavor-changing couplings of the Higgs doublets are also determined from neutrino data. The non-standard neutral Higgs bosons, if t...

  14. Using soft computing techniques to predict corrected air permeability using Thomeer parameters, air porosity and grain density

    Science.gov (United States)

    Nooruddin, Hasan A.; Anifowose, Fatai; Abdulraheem, Abdulazeez

    2014-03-01

    Soft computing techniques have recently become very popular in the oil industry. A number of computational intelligence-based predictive methods have been widely applied in the industry with high prediction capabilities. Some of the popular methods include feed-forward neural networks, radial basis function networks, generalized regression neural networks, functional networks, support vector regression and adaptive network fuzzy inference systems. A comparative study among the most popular soft computing techniques is presented, using a large dataset published in the literature describing multimodal pore systems in the Arab D formation. The inputs to the models are air porosity, grain density, and Thomeer parameters obtained using mercury injection capillary pressure profiles. Corrected air permeability is the target variable. Applying the developed permeability models in a recent reservoir characterization workflow ensures consistency between micro- and macro-scale information, represented mainly by the Thomeer parameters and the absolute permeability. The dataset was divided into two parts, with 80% of the data used for training and 20% for testing. The target permeability variable was transformed to the logarithmic scale as a pre-processing step and to show better correlations with the input variables. Statistical and graphical analyses of the results, including permeability cross-plots and detailed error measures, were carried out. In general, the comparative study showed very close results among the developed models. The feed-forward neural network permeability model showed the lowest average relative error, average absolute relative error, standard deviation of error and root mean square error, making it the best model for such problems. The adaptive network fuzzy inference system also showed very good results.

  15. A predictive model for dimensional errors in fused deposition modeling

    DEFF Research Database (Denmark)

    Stolfi, A.

    2015-01-01

    This work concerns the effect of deposition angle (a) and layer thickness (L) on the dimensional performance of FDM parts using a predictive model based on the geometrical description of the FDM filament profile. An experimental validation over the whole a range from 0° to 177° at 3° steps and two...

  16. Critical behavior in the cubic dimer model at nonzero monomer density

    Science.gov (United States)

    Sreejith, G. J.; Powell, Stephen

    2014-01-01

    We study critical behavior in the classical cubic dimer model (CDM) in the presence of a finite density of monomers. With attractive interactions between parallel dimers, the monomer-free CDM exhibits an unconventional transition from a Coulomb phase to a dimer crystal. Monomers act as charges (or monopoles) in the Coulomb phase and, at nonzero density, lead to a standard Landau-type transition. We use large-scale Monte Carlo simulations to study the system in the neighborhood of the critical point, and find results in agreement with detailed predictions of scaling theory. Going beyond previous studies of the transition in the absence of monomers, we explicitly confirm the distinction between conventional and unconventional criticality, and quantitatively demonstrate the crossover between the two. Our results also provide additional evidence for the theoretical claim that the transition in the CDM belongs in the same universality class as the deconfined quantum critical point in the SU (2) JQ model.

  17. Continuous-Discrete Time Prediction-Error Identification Relevant for Linear Model Predictive Control

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    A prediction-error method tailored for model-based predictive control is presented. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state-space model. The linear discrete-time stochastic state-space model is realized from a continuous-discrete-time linear stochastic system specified using transfer functions with time delays. It is argued that the prediction-error criterion should be selected such that it is compatible with the objective function of the predictive controller in which the model...

  18. Predicting population survival under future climate change: density dependence, drought and extraction in an insular bighorn sheep.

    Science.gov (United States)

    Colchero, Fernando; Medellin, Rodrigo A; Clark, James S; Lee, Raymond; Katul, Gabriel G

    2009-05-01

    1. Our understanding of the interplay between density dependence, climatic perturbations, and conservation practices on the dynamics of small populations is still limited. This can result in uninformed strategies that put endangered populations at risk. Moreover, the data available for a large number of populations in such circumstances are sparse and mined with missing data. Under the current climate change scenarios, it is essential to develop appropriate inferential methods that can make use of such data sets. 2. We studied a population of desert bighorn sheep introduced to Tiburon Island, Mexico in 1975 and subjected to irregular extractions for the last 10 years. The unique attributes of this population are absence of predation and disease, thereby permitting us to explore the combined effect of density dependence, environmental variability and extraction in a 'controlled setting.' Using a combination of nonlinear discrete models with long-term field data, we constructed three basic Bayesian state space models with increasing density dependence (DD), and the same three models with the addition of summer drought effects. 3. We subsequently used Monte Carlo simulations to evaluate the combined effect of drought, DD, and increasing extractions on the probability of population survival under two climate change scenarios (based on the Intergovernmental Panel on Climate Change predictions): (i) increase in drought variability; and (ii) increase in mean drought severity. 4. The population grew from 16 individuals introduced in 1975 to close to 700 by 1993. Our results show that the population's growth was dominated by DD, with drought having a secondary but still relevant effect on its dynamics. 5. Our predictions suggest that under climate change scenario (i), extraction dominates the fate of the population, while for scenario (ii), an increase in mean drought affects the population's probability of survival in an equivalent magnitude as extractions. Thus, for the

  19. A Langevin model for low density pedestrian dynamics

    Science.gov (United States)

    Corbetta, Alessandro; Lee, Chung-Min; Benzi, Roberto; Muntean, Adrian; Toschi, Federico

    The dynamics of pedestrian crowds shares deep connections with statistical physics and fluid dynamics. Reaching a quantitative understanding, not only of the average behaviours but also of the statistics of (rare) fluctuations, would have a major impact, for instance, on the design and safety of civil infrastructures. A key feature of pedestrian dynamics is its strong intrinsic variability, which we can already observe at the level of a single individual. In this work we aim at a quantitative characterisation of this statistical variability by studying individual fluctuations. We consider experimental observations of low-density pedestrian flows in a corridor within a building at Eindhoven University of Technology. A few hundred thousand pedestrian trajectories with high space and time resolution have been collected via a Microsoft Kinect 3D-range sensor and automatic head-tracking techniques. From these observations we model pedestrians as active Brownian particles by means of a generalised Langevin equation. With this model we can quantitatively reproduce the observed dynamics, including the statistics of ordinary pedestrian fluctuations and of rarer U-turn events. Low-density, pair-wise interactions between pedestrians are also discussed.
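
    Schematically (a minimal sketch in my own notation, one common active-Brownian choice rather than necessarily the authors' exact model), a Langevin description of the velocity u of a pedestrian walking along the corridor axis takes the form

        \[
          \dot{x} = u, \qquad
          \dot{u} = F(u) + \sigma\,\xi(t), \qquad
          F(u) = -\,4\alpha\, u\,\bigl(u^{2} - u_{d}^{2}\bigr),
        \]

    where u_d is the preferred walking speed, the double-well force F(u) keeps the speed close to ±u_d (rare noise-driven jumps between the two wells correspond to U-turn events), and ξ(t) is white noise of intensity σ.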

  20. Modeling the Void H I Column Density Spectrum

    CERN Document Server

    Manning, C V

    2003-01-01

    The equivalent width distribution function (EWDF) of H I absorbers specific to the void environment has been recently derived (Manning 2002), revealing a large line density of clouds (dN/dz ~ 500 per unit redshift for log(N_HI) > 12.4). I show that the void absorbers cannot be diffuse (or so-called filamentary) clouds, expanding with the Hubble flow, as suggested by N-body/hydro simulations. Absorbers are here modeled as the baryonic remnants of sub-galactic perturbations that have expanded away from their dark halos in response to reionization at z ~ 6.5. A 1-D Lagrangian hydro/gravity code is used to follow the dynamic evolution and ionization structure of the baryonic clouds for a range of halo circular velocities. The simulation products at z=0 can be combined according to various models of the halo velocity distribution function to form a column density spectrum that can be compared with the observed one. I find that such clouds may explain the observed EWDF if the halo velocity distribution function is as steep a...

  1. Modeling fence location and density at a regional scale for use in wildlife management.

    Directory of Open Access Journals (Sweden)

    Erin E Poor

    Full Text Available Barbed and woven wire fences, common structures across western North America, act as impediments to wildlife movements. In particular, fencing influences pronghorn (Antilocapra americana) daily and seasonal movements, as well as modifying habitat selection. Because of fencing's impacts on pronghorn and other wildlife, it is a potentially important factor in both wildlife movement and habitat selection models. At this time, no geospatial fencing data are available at regional scales. Consequently, we constructed a regional fence model using a series of land tenure assumptions for the Hi-Line region of northern Montana--an area consisting of 13 counties over 103,400 km². Randomized 3.2 km long transects (n = 738) on both paved and unpaved roads were driven to collect information on habitat, fence densities and fence type. Using GIS, we constructed a fence location and a density model incorporating ownership, size, neighboring parcels, township boundaries and roads. Local knowledge of land ownership and land use assisted in improving the final models. We predict that there are more than 263,300 km of fencing in the Hi-Line region, with a maximum density of 6.8 km of fencing per km² and a mean density of 2.4 km of fencing per km². Using field data to assess model accuracy, Cohen's Kappa was measured at 0.40. On-the-ground fence modification or removal could be prioritized by identifying high fence densities in critical wildlife areas such as pronghorn migratory pathways or sage grouse lekking habitat. Such novel fence data can assist wildlife and land managers in assessing the effects of anthropogenic features on wildlife at various scales, which in turn may help conserve declining grassland species and overall ecological functionality.

  2. Modeling fence location and density at a regional scale for use in wildlife management.

    Science.gov (United States)

    Poor, Erin E; Jakes, Andrew; Loucks, Colby; Suitor, Mike

    2014-01-01

    Barbed and woven wire fences, common structures across western North America, act as impediments to wildlife movements. In particular, fencing influences pronghorn (Antilocapra americana) daily and seasonal movements, as well as modifying habitat selection. Because of fencing's impacts on pronghorn and other wildlife, it is a potentially important factor in both wildlife movement and habitat selection models. At this time, no geospatial fencing data are available at regional scales. Consequently, we constructed a regional fence model using a series of land tenure assumptions for the Hi-Line region of northern Montana--an area consisting of 13 counties over 103,400 km². Randomized 3.2 km long transects (n = 738) on both paved and unpaved roads were driven to collect information on habitat, fence densities and fence type. Using GIS, we constructed a fence location and a density model incorporating ownership, size, neighboring parcels, township boundaries and roads. Local knowledge of land ownership and land use assisted in improving the final models. We predict that there are more than 263,300 km of fencing in the Hi-Line region, with a maximum density of 6.8 km of fencing per km² and a mean density of 2.4 km of fencing per km². Using field data to assess model accuracy, Cohen's Kappa was measured at 0.40. On-the-ground fence modification or removal could be prioritized by identifying high fence densities in critical wildlife areas such as pronghorn migratory pathways or sage grouse lekking habitat. Such novel fence data can assist wildlife and land managers in assessing the effects of anthropogenic features on wildlife at various scales, which in turn may help conserve declining grassland species and overall ecological functionality.

  3. Semi-empirical thermosphere model evaluation at low altitude with GOCE densities

    Science.gov (United States)

    Bruinsma, Sean; Arnold, Daniel; Jäggi, Adrian; Sánchez-Ortiz, Noelia

    2017-02-01

    Aims: The quality of the Committee on Space Research (COSPAR) International Reference Atmosphere models NRLMSISE-00, JB2008, and DTM2013 in the 150-300 km altitude range has never been thoroughly evaluated due to a lack of good density data. This study aims to provide the model accuracies using the recent high-resolution, high-accuracy Gravity field and steady-state Ocean Circulation Explorer (GOCE) density dataset. The evaluation was performed on yearly, monthly, and daily time scales, which are important for different applications such as mission design, mission operation, or re-entry predictions. Methods: The accuracy of the models was evaluated by comparing them to the GOCE density observations of the Science Mission (1 November 2009-20 October 2013) and to new density data at the lowest altitudes, derived for the last weeks before the re-entry (22 October-8 November 2013), according to a metric which consists of computing the mean, standard deviation and root mean square (RMS) of the observed-to-model ratios, and the correlation. Mean statistics are then calculated over the three time scales. Results: The range of model biases, standard deviations, and correlations becomes larger as the time interval decreases, and this study provides COSPAR International Reference Atmosphere (CIRA) model statistics in the altitude range of 275-170 km. DTM2013 is the least biased and most accurate model on all time scales, essentially thanks to its underlying database, which notably contains two years of GOCE densities to which it was fitted. NRLMSISE-00 performs worst, with a considerable bias of about 20% in 2009 and 2013, and systematically higher standard deviations (lower correlations) than JB2008 and DTM2013. The performance of JB2008 is presently only slightly behind DTM2013, thanks to the new release 4_2g solar activity proxies. However, it still presents some weakness under the lowest solar activity conditions in 2009 and 2010. Comparison to Challenging Mini-Satellite Payload (CHAMP) density

  4. Two criteria for evaluating risk prediction models.

    Science.gov (United States)

    Pfeiffer, R M; Gail, M H

    2011-09-01

    We propose and study two criteria to assess the usefulness of models that predict risk of disease incidence for screening and prevention, or the usefulness of prognostic models for management following disease diagnosis. The first criterion, the proportion of cases followed PCF (q), is the proportion of individuals who will develop disease who are included in the proportion q of individuals in the population at highest risk. The second criterion is the proportion needed to follow-up, PNF (p), namely the proportion of the general population at highest risk that one needs to follow in order that a proportion p of those destined to become cases will be followed. PCF (q) assesses the effectiveness of a program that follows 100q% of the population at highest risk. PNF (p) assesses the feasibility of covering 100p% of cases by indicating how much of the population at highest risk must be followed. We show the relationship of those two criteria to the Lorenz curve and its inverse, and present distribution theory for estimates of PCF and PNF. We develop new methods, based on influence functions, for inference for a single risk model, and also for comparing the PCFs and PNFs of two risk models, both of which were evaluated in the same validation data.
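
    Both criteria can be computed empirically by ranking individuals on predicted risk. The sketch below is a simple empirical (non-smoothed) version under assumed inputs: an array of predicted risks and a binary case indicator, both hypothetical; it is an illustration of the definitions, not the influence-function inference developed in the paper.

    import numpy as np

    def pcf(risk, is_case, q):
        """Proportion of cases captured by following the fraction q of the
        population at highest predicted risk."""
        risk = np.asarray(risk, float)
        is_case = np.asarray(is_case, bool)
        order = np.argsort(-risk)                  # highest risk first
        n_follow = int(np.ceil(q * len(risk)))
        return is_case[order[:n_follow]].sum() / is_case.sum()

    def pnf(risk, is_case, p):
        """Smallest fraction of the population (taken from the highest risk down)
        that must be followed so that a proportion p of cases is covered."""
        risk = np.asarray(risk, float)
        is_case = np.asarray(is_case, bool)
        order = np.argsort(-risk)
        cum_cases = np.cumsum(is_case[order]) / is_case.sum()
        n_needed = np.searchsorted(cum_cases, p) + 1
        return n_needed / len(risk)

    # Hypothetical risk scores and outcomes
    rng = np.random.default_rng(1)
    risk = rng.beta(2, 8, size=5000)                        # predicted risks
    is_case = rng.random(5000) < risk                       # disease ~ Bernoulli(risk)
    print(f"PCF(0.20) = {pcf(risk, is_case, 0.20):.2f}")    # cases captured in top 20%
    print(f"PNF(0.80) = {pnf(risk, is_case, 0.80):.2f}")    # fraction to follow for 80% of cases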

  5. Can we Predict Quantum Yields Using Excited State Density Functional Theory for New Families of Fluorescent Dyes?

    Science.gov (United States)

    Kohn, Alexander W.; Lin, Zhou; Shepherd, James J.; Van Voorhis, Troy

    2016-06-01

    For a fluorescent dye, the quantum yield characterizes the efficiency of energy transfer from the absorbed light to the emitted fluorescence. When screening potential families of dyes, those with higher quantum yields are expected to be more advantageous. From the theoretician's perspective, an efficient prediction of the quantum yield using a universal excited-state electronic structure theory is in demand but still challenging. The most representative examples of such excited-state theories include time-dependent density functional theory (TDDFT) and restricted open-shell Kohn-Sham (ROKS). In the present study, we explore the possibility of predicting the quantum yields for conventional and new families of organic dyes using a combination of TDDFT and ROKS. We focus on radiative (kr) and nonradiative (knr) rates for the decay of the first singlet excited state (S_1) into the ground state (S_0) in accordance with Kasha's rule. M. Kasha, Discuss. Faraday Soc., 9, 14 (1950). For each dye compound, kr is calculated from the S_1-S_0 energy gap and the transition dipole moment, obtained using ROKS and TDDFT, respectively, at the relaxed S_1 geometry. Our predicted kr agrees well with the experimental value, so long as the order of energy levels is correctly predicted. Evaluation of knr is less straightforward, as multiple processes are involved. Our study focuses on the S_1-T_1 intersystem crossing (ISC) and the S_1-S_0 internal conversion (IC): we investigate the properties that allow us to model the knr value using a Marcus-like expression, such as the Stokes shift, the reorganization energy, and the S_1-T_1 and S_1-S_0 energy gaps. Taking these factors into consideration, we compare our results with those obtained using the actual Marcus theory and provide an explanation for the discrepancies. T. Kowalczyk, T. Tsuchimochi, L. Top, P.-T. Chen, and T. Van Voorhis, J. Chem. Phys., 138, 164101 (2013).
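
    A kr estimate of the kind described above can be obtained from the Einstein spontaneous-emission coefficient, kr = (4/3) alpha^3 (dE)^3 |mu|^2 in atomic units, given a computed S_1-S_0 gap and transition dipole. The sketch below assumes inputs in atomic units; the numerical gap and dipole are illustrative values, not results from the paper.

    # Radiative decay rate from the Einstein A coefficient (atomic units):
    #   k_r = (4/3) * alpha^3 * (dE)^3 * |mu|^2
    # dE: S1-S0 emission energy (hartree), mu: transition dipole moment (a.u.)

    ALPHA = 1.0 / 137.035999      # fine-structure constant
    AU_TIME = 2.4188843265e-17    # atomic unit of time in seconds

    def radiative_rate(delta_e_hartree, mu_au):
        """Spontaneous-emission (radiative) rate in s^-1."""
        k_r_au = (4.0 / 3.0) * ALPHA**3 * delta_e_hartree**3 * mu_au**2
        return k_r_au / AU_TIME

    # Illustrative values: ~2.5 eV gap and a ~6.4 D transition dipole
    delta_e = 2.5 / 27.2114       # eV -> hartree
    mu = 2.5                      # a.u. (1 a.u. = 2.5417 D)
    print(f"k_r ~ {radiative_rate(delta_e, mu):.2e} s^-1")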

  6. Methods for Handling Missing Variables in Risk Prediction Models

    NARCIS (Netherlands)

    Held, Ulrike; Kessels, Alfons; Aymerich, Judith Garcia; Basagana, Xavier; ter Riet, Gerben; Moons, Karel G. M.; Puhan, Milo A.

    2016-01-01

    Prediction models should be externally validated before being used in clinical practice. Many published prediction models have never been validated. Uncollected predictor variables in otherwise suitable validation cohorts are the main factor precluding external validation. We used individual patient

  7. Simple predictive electron transport models applied to sawtoothing plasmas

    Science.gov (United States)

    Kim, D.; Merle, A.; Sauter, O.; Goodman, T. P.

    2016-05-01

    In this work, we introduce two simple transport models to evaluate the time evolution of electron temperature and density profiles during sawtooth cycles (i.e. over a sawtooth period time-scale). Since the aim of these simulations is to estimate reliable profiles within a short calculation time, two simplified ad-hoc models have been developed. The goal for these models is to rely on a few easy-to-check free parameters, such as the confinement time scaling factor and the profiles’ averaged scale-lengths. Due to the simplicity and short calculation time of the models, it is expected that these models can also be applied to real-time transport simulations. We show that the approach works well for Ohmic and EC-heated L- and H-mode plasmas. The differences between these models are discussed and we show that their predictive capabilities are similar. Thus, only one model is used in simulations to reproduce the results of sawtooth control experiments on the TCV tokamak. For the sawtooth pacing, the calculated time delays between the EC power off and the sawtooth crash time agree well with the experimental results. The map of the possible locking range is also well reproduced by the simulation.

  8. Developing fracture density models using terrestrial laser scan data

    Science.gov (United States)

    Pollyea, R.; Fairley, J. P.; Podgorney, R. K.; McLing, T. L.

    2010-12-01

    Characterizing fracture heterogeneity for subsurface flow and transport modeling has been of interest to the hydrogeologic community for many years. Currently, stochastic continuum and discrete fracture representations have come to be accepted as two of the most commonly used tools for incorporating fracture heterogeneity into subsurface flow and transport models. In this research, ground-based lidar data are used to model the surface roughness of vertical basalt exposures in the East Snake River Plain, Idaho (ESRP) as a surrogate for fracture density. The surface roughness is modeled by discretizing the dataset over a regular grid and fitting a regression plane to each gridblock. The standard deviation of the distance from the block data to the regression plane is then assumed to represent a measure of roughness for each gridblock. Two-dimensional plots of surface roughness from ESRP exposures indicate discrete fractures can be quantitatively differentiated from unfractured rock at 0.25-meter resolution. This methodology may have broad applications for characterizing fracture heterogeneity. One application, demonstrated here, is to capture high-resolution (low-noise) covariance statistics for building stochastic property sets to be used in large-scale flow simulations. Additional applications may include using surface roughness datasets as training images for multiple-point geostatistics analysis and for constraining discrete fracture models.
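
    The per-gridblock roughness measure described above (fit a regression plane, take the standard deviation of the residuals) can be sketched in a few lines. The example below uses vertical residuals as a simple proxy for point-to-plane distance and synthetic points for one hypothetical 0.25 m gridblock; it illustrates the idea rather than the authors' processing chain.

    import numpy as np

    def block_roughness(points):
        """Fit a regression plane z = a*x + b*y + c to the lidar returns in one
        gridblock and return the standard deviation of the residuals as the
        roughness measure for that block."""
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        A = np.column_stack([x, y, np.ones_like(x)])
        coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)       # least-squares plane
        residuals = z - A @ coeffs
        return residuals.std(ddof=3)                         # 3 fitted parameters

    # Synthetic gridblock: a tilted plane plus small-scale relief
    rng = np.random.default_rng(2)
    xy = rng.uniform(0, 0.25, size=(500, 2))                  # 0.25 m gridblock
    z = 0.3 * xy[:, 0] - 0.1 * xy[:, 1] + rng.normal(0, 0.01, 500)
    print(f"roughness = {block_roughness(np.column_stack([xy, z])):.4f} m")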

  9. Prediction of Bulk Density of Soils in the Loess Plateau Region of China

    Science.gov (United States)

    Wang, Yunqiang; Shao, Ming'an; Liu, Zhipeng; Zhang, Chencheng

    2013-08-01

    Soil bulk density (BD) is a key soil physical property that may affect the transport of water and solutes and is essential for estimating soil carbon/nutrient reserves. However, BD data are often lacking in soil databases due to the challenge of directly measuring BD, which is labor intensive, time consuming, and expensive, especially for the lower layers of deep soils such as those of the Chinese Loess Plateau region. We determined the factors that were closely correlated with BD at the regional scale and developed a robust pedotransfer function (PTF) for BD by measuring BD and potentially related soil and environmental factors at 748 selected sites across the Loess Plateau of China (620,000 km2), at which we collected undisturbed and disturbed soil samples from two soil layers (0-5 and 20-25 cm). Regional BD values were normally distributed and demonstrated weak spatial variation (CV = 12%). Pearson's correlation and stepwise multiple linear regression analyses identified silt content, slope gradient (SG), soil organic carbon content (SOC), clay content, slope aspect (SA), and altitude as the factors that were closely correlated with BD and that explained 25.8, 6.3, 5.8, 1.4, 0.3, and 0.3% of the BD variation, respectively. Based on these closely correlated variables, a reasonably robust PTF was developed for BD using multiple linear regression, which performed comparably to the artificial neural network method in the current study. The inclusion of topographic factors significantly improved the predictive capability of the BD PTF, with SG being an important input variable that could be used in place of SA and altitude without compromising the capability to predict BD. Thus, the developed PTF with only four input variables (clay, silt, SOC, SG), including their common transformations and interactive terms, predicted BD with reasonable accuracy and is thus useful for most applications on the Loess Plateau of China. More attention should be
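
    A pedotransfer function of the form described above is essentially a multiple linear regression of BD on the selected predictors. The sketch below fits such a regression by ordinary least squares on synthetic data; the coefficients, predictor ranges, and noise level are invented for illustration, and the paper's transformations and interaction terms are omitted.

    import numpy as np

    # Hypothetical soil samples: clay (%), silt (%), SOC (g/kg), slope gradient (deg)
    rng = np.random.default_rng(3)
    n = 200
    clay = rng.uniform(5, 35, n)
    silt = rng.uniform(30, 70, n)
    soc = rng.uniform(2, 20, n)
    sg = rng.uniform(0, 30, n)
    # Synthetic "measured" bulk density in g/cm^3 (placeholder relationship)
    bd = 1.55 + 0.001 * clay - 0.004 * silt - 0.01 * soc - 0.002 * sg + rng.normal(0, 0.05, n)

    # Ordinary least-squares fit of the pedotransfer function
    X = np.column_stack([np.ones(n), clay, silt, soc, sg])
    beta, *_ = np.linalg.lstsq(X, bd, rcond=None)
    bd_hat = X @ beta
    rmse = np.sqrt(np.mean((bd - bd_hat) ** 2))
    r2 = 1 - np.sum((bd - bd_hat) ** 2) / np.sum((bd - bd.mean()) ** 2)
    print("coefficients (intercept, clay, silt, SOC, SG):", np.round(beta, 4))
    print(f"RMSE = {rmse:.3f} g/cm^3, R^2 = {r2:.2f}")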

  10. An improved statistical analysis for predicting the critical temperature and critical density with Gibbs ensemble Monte Carlo simulation.

    Science.gov (United States)

    Messerly, Richard A; Rowley, Richard L; Knotts, Thomas A; Wilding, W Vincent

    2015-09-14

    A rigorous statistical analysis is presented for Gibbs ensemble Monte Carlo simulations. This analysis reduces the uncertainty in the critical point estimate when compared with traditional methods found in the literature. Two improvements are recommended on the basis of the following results. First, the traditional propagation-of-error approach for estimating the standard deviations used in regression improperly weights the terms in the objective function due to the inherent interdependence of the vapor and liquid densities. For this reason, an error model is developed to predict the standard deviations. Second, and most importantly, a rigorous algorithm for nonlinear regression is compared to the traditional approach of linearizing the equations and propagating the error in the slope and the intercept. The traditional regression approach can yield nonphysical confidence intervals for the critical constants. By contrast, the rigorous algorithm restricts the confidence regions to values that are physically sensible. To demonstrate the effect of these conclusions, a case study is performed to enhance the reliability of molecular simulations to resolve the n-alkane family trend for the critical temperature and critical density.
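
    Critical constants are conventionally extracted from coexistence densities by fitting the scaling law for the density difference together with the law of rectilinear diameters. The sketch below performs such a fit directly as a nonlinear regression (no linearization) with scipy; the coexistence data, the uncertainties, and the simple weighting scheme are assumptions for illustration and are not the error model developed in the paper.

    import numpy as np
    from scipy.optimize import least_squares

    BETA = 0.326  # 3D Ising critical exponent

    def residuals(params, T, rho_liq, rho_vap, sigma_liq, sigma_vap):
        """Weighted residuals of the scaling law and the law of rectilinear
        diameters, fitted simultaneously:
            rho_l - rho_v = B * (1 - T/Tc)**beta
            (rho_l + rho_v)/2 = rho_c + A * (Tc - T)
        """
        Tc, rho_c, A, B = params
        t = np.clip(1.0 - T / Tc, 1e-12, None)
        r1 = (rho_liq - rho_vap) - B * t**BETA
        r2 = 0.5 * (rho_liq + rho_vap) - (rho_c + A * (Tc - T))
        # Simple propagation of (assumed independent) density uncertainties
        w1 = np.sqrt(sigma_liq**2 + sigma_vap**2)
        w2 = 0.5 * w1
        return np.concatenate([r1 / w1, r2 / w2])

    # Hypothetical coexistence data in reduced (Lennard-Jones-like) units
    T = np.array([0.85, 0.90, 0.95, 1.00, 1.05, 1.10])
    rho_liq = np.array([0.78, 0.75, 0.71, 0.67, 0.62, 0.56])
    rho_vap = np.array([0.01, 0.02, 0.03, 0.05, 0.08, 0.12])
    sigma = np.full_like(T, 0.01)                     # assumed standard deviations

    fit = least_squares(residuals, x0=[1.3, 0.32, 0.1, 1.0],
                        args=(T, rho_liq, rho_vap, sigma, sigma))
    Tc, rho_c, A, B = fit.x
    print(f"Tc ~ {Tc:.3f}, rho_c ~ {rho_c:.3f}")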

  11. Saltation-threshold model can explain aeolian features on low-air-density planetary bodies

    CERN Document Server

    Pähtz, Thomas

    2016-01-01

    Knowledge of the minimal fluid speeds at which sediment transport can be sustained is crucial for understanding whether underwater landscapes exposed to water streams and wind-blown loose planetary surfaces can be altered. It also tells us whether surface features, such as ripples and dunes, can evolve. Here, guided by state-of-the-art numerical simulations, we propose an analytical model predicting the minimal fluid speeds required to sustain sediment transport in a Newtonian fluid. The model results are consistent with measurements and estimates of the transport threshold in water and in Earth's and Mars' atmospheres. Furthermore, it predicts reasonable wind speeds to sustain aeolian sediment transport ("saltation") on the low-air-density planetary bodies Triton, Pluto, and the comet 67P/Churyumov-Gerasimenko. This offers an explanation for possible aeolian surface features photographed on these bodies during space missions.

  12. Predicting critical temperatures of iron(II) spin crossover materials: density functional theory plus U approach.

    Science.gov (United States)

    Zhang, Yachao

    2014-12-07

    A first-principles study of critical temperatures (T(c)) of spin crossover (SCO) materials requires an accurate description of the strongly correlated 3d electrons as well as much computational effort. This task is still a challenge for the widely used local density or generalized gradient approximations (LDA/GGA) and hybrid functionals. One remedy, termed the density functional theory plus U (DFT+U) approach, introduces a Hubbard U term to deal with the localized electrons at marginal computational cost, while treating the delocalized electrons with LDA/GGA. Here, we employ the DFT+U approach to investigate the T(c) of a pair of iron(II) SCO molecular crystals (α and β phase), where identical constituent molecules are packed in different ways. We first calculate the adiabatic high spin-low spin energy splitting ΔE(HL) and molecular vibrational frequencies in both spin states, then obtain the temperature-dependent enthalpy and entropy changes (ΔH and ΔS), and finally extract T(c) by exploiting the ΔH/T - T and ΔS - T relationships. The results are in agreement with experiment. Analysis of geometries and electronic structures shows that the local ligand field in the α phase is slightly weakened by the H-bondings involving the ligand atoms and the specific crystal packing style. We find that this effect is largely responsible for the difference in T(c) of the two phases. This study shows the applicability of the DFT+U approach for predicting T(c) of SCO materials, and provides a clear insight into the subtle influence of the crystal packing effects on SCO behavior.
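
    Given ΔH(T) and ΔS(T) from the vibrational analysis, T(c) is the temperature at which the high-spin/low-spin free-energy difference ΔG = ΔH - TΔS vanishes (equivalently, where the ΔH/T and ΔS curves cross). A minimal sketch of that final step follows; the temperature-dependent ΔH and ΔS used here are invented placeholders, not the computed values for the α or β phase.

    import numpy as np

    def spin_crossover_tc(T, dH, dS):
        """Locate the temperature where dG(T) = dH(T) - T*dS(T) changes sign
        (dG = 0 at T = Tc), using linear interpolation between grid points."""
        dG = dH - T * dS
        sign_change = np.where(np.diff(np.sign(dG)) != 0)[0]
        if sign_change.size == 0:
            raise ValueError("dG does not change sign on the given temperature grid")
        i = sign_change[0]
        return T[i] - dG[i] * (T[i + 1] - T[i]) / (dG[i + 1] - dG[i])

    # Illustrative temperature-dependent enthalpy/entropy differences (HS - LS)
    T = np.linspace(50, 400, 351)                 # K
    dH = 12.0e3 + 2.0 * T                         # J/mol, placeholder model
    dS = 60.0 + 0.02 * T                          # J/(mol K), placeholder model
    print(f"Tc ~ {spin_crossover_tc(T, dH, dS):.0f} K")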

  13. Predictive microbiology models vs. modeling microbial growth within Listeria monocytogenes risk assessment: what parameters matter and why.

    Science.gov (United States)

    Pouillot, Régis; Lubran, Meryl B

    2011-06-01

    Predictive microbiology models are essential tools for modeling bacterial growth in quantitative microbial risk assessments. Various predictive microbiology models and sets of parameters are available: it is of interest to understand the consequences of the choice of growth model on the risk assessment outputs. Thus, an exercise was conducted to explore the impact of using several published models to predict Listeria monocytogenes growth during food storage in a product that permits growth. Results underline a gap between the most studied factors in predictive microbiology modeling (lag, growth rate) and the parameters most influential on the estimated risk of listeriosis in this scenario (maximum population density, bacterial competition). The mathematical properties of an exponential dose-response model for Listeria account for the fact that the mean number of bacteria per serving and, as a consequence, the highest achievable concentrations in the product under study, have a strong influence on the estimated expected number of listeriosis cases in this context.
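
    The point about the exponential dose-response model can be made concrete with a small Monte Carlo sketch: because the risk per serving is essentially linear in dose at low doses, the upper tail of the concentration distribution (bounded by the maximum population density) dominates the expected number of cases. All parameter values below (r, the lognormal concentration distribution, the 1e8 cfu/g cap, the serving size) are hypothetical, not those of the cited assessment.

    import numpy as np

    def p_illness(dose_cfu, r):
        """Exponential dose-response model: probability of illness from ingesting
        dose_cfu organisms, each with independent probability r of causing illness."""
        return 1.0 - np.exp(-r * dose_cfu)

    r = 1e-12                                            # assumed per-cfu probability
    rng = np.random.default_rng(4)
    log10_conc = rng.normal(3.0, 1.5, size=100_000)      # log10 cfu/g at consumption
    conc_capped = np.minimum(10**log10_conc, 1e8)        # maximum population density cap
    serving_g = 50.0                                     # assumed serving size

    risk_capped = p_illness(conc_capped * serving_g, r).mean()
    risk_uncapped = p_illness(10**log10_conc * serving_g, r).mean()
    print(f"expected risk per serving (with 1e8 cfu/g cap): {risk_capped:.2e}")
    print(f"expected risk per serving (no cap):             {risk_uncapped:.2e}")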

  14. Estimating the magnitude of prediction uncertainties for the APLE model

    Science.gov (United States)

    Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, we conduct an uncertainty analysis for the Annual P ...

  15. Human latent inhibition and the density of predictive relationships in the context in which the target stimulus occurs.

    Science.gov (United States)

    Rodríguez, Gabriel; Hall, Geoffrey

    2017-04-01

    In two experiments, participants were exposed to a listing of actions performed by a fictitious Mr. X, over three days of his life. For most of his actions an outcome was described, but some were not followed by any outcome. On Day 3, Mr. X performed an action (the target action) that was followed by a novel outcome. For participants in the control condition, the target action that preceded the appearance of this outcome was also novel; for participants in the latent inhibition (LI) condition, Mr. X had performed the target action on repeated occasions during Days 2 and 3, without it producing any outcome. All the participants were tested on their ability to retrieve the action performed by Mr. X prior to the target outcome. In Experiment 1, retrieval of the target action (indicating a less effective target action-outcome association) was poorer in the LI than in the control condition. In Experiment 2, reducing the proportion (the density) of nontarget actions that brought outcomes during initial training was found to reduce the size of the LI effect. These results are predicted by the account of LI put forward previously [Hall, G., & Rodríguez, G. (2010). Associative and nonassociative processes in latent inhibition: An elaboration of the Pearce-Hall model. In R. E. Lubow & I. Weiner (Eds.), Latent inhibition: Data, theories, and applications to schizophrenia (pp. 114-136). Cambridge, England: Cambridge University Press]. A high density of predictive relationships ensures strong activation of the expectancy that some outcome will occur when the target action is first presented; this facilitates the formation of a target action-no-event association during training in the LI condition, thus enhancing the LI effect.

  16. Predicting accurate fluorescent spectra for high molecular weight polycyclic aromatic hydrocarbons using density functional theory

    Science.gov (United States)

    Powell, Jacob; Heider, Emily C.; Campiglia, Andres; Harper, James K.

    2016-10-01

    The ability of density functional theory (DFT) methods to predict accurate fluorescence spectra for polycyclic aromatic hydrocarbons (PAHs) is explored. Two methods, PBE0 and CAM-B3LYP, are evaluated both in the gas phase and in solution. Spectra for several of the most toxic PAHs are predicted and compared to experiment, including three isomers of C24H14 and a PAH containing heteroatoms. Unusually high-resolution experimental spectra are obtained for comparison by analyzing each PAH at 4.2 K in an n-alkane matrix. All theoretical spectra visually conform to the profiles of the experimental data but are systematically offset by a small amount. Specifically, when solvent is included the PBE0 functional overestimates peaks by 16.1 ± 6.6 nm while CAM-B3LYP underestimates the same transitions by 14.5 ± 7.6 nm. These calculated spectra can be empirically corrected to decrease the uncertainties to 6.5 ± 5.1 and 5.7 ± 5.1 nm for the PBE0 and CAM-B3LYP methods, respectively. A comparison of computed spectra in the gas phase indicates that the inclusion of n-octane shifts peaks by +11 nm on average and this change is roughly equivalent for PBE0 and CAM-B3LYP. An automated approach for comparing spectra is also described that minimizes residuals between a given theoretical spectrum and all available experimental spectra. This approach identifies the correct spectrum in all cases and excludes approximately 80% of the incorrect spectra, demonstrating that an automated search of theoretical libraries of spectra may eventually become feasible.
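
    The automated spectrum-identification step described above amounts to choosing, for each computed spectrum, the experimental spectrum with the smallest residual after allowing a small empirical wavelength offset. The sketch below implements that idea on a common wavelength grid with max-normalised intensities; the grid limits, the ±25 nm shift range, and the Gaussian demo spectra are assumptions for illustration, not the authors' settings.

    import numpy as np

    def match_spectrum(theory, experiments, shifts=np.arange(-25.0, 25.5, 0.5)):
        """Rank candidate experimental spectra by the minimum RMS residual against
        a theoretical spectrum, allowing a small empirical wavelength offset.
        Spectra are (wavelength_nm, intensity) arrays on arbitrary grids."""
        grid = np.linspace(380.0, 500.0, 600)              # common comparison grid (nm)
        th = np.interp(grid, theory[:, 0], theory[:, 1])
        th = th / th.max()
        ranking = []
        for name, exp in experiments.items():
            ex = np.interp(grid, exp[:, 0], exp[:, 1])
            ex = ex / ex.max()
            rms = min(np.sqrt(np.mean((np.interp(grid, grid + s, th) - ex) ** 2))
                      for s in shifts)
            ranking.append((rms, name))
        return sorted(ranking)                              # smallest residual first

    # Tiny synthetic demo: the "theory" spectrum is a blue-shifted copy of "PAH_A"
    grid0 = np.linspace(380.0, 500.0, 600)
    gauss = lambda mu: np.exp(-((grid0 - mu) / 6.0) ** 2)
    experiments = {"PAH_A": np.column_stack([grid0, gauss(430)]),
                   "PAH_B": np.column_stack([grid0, gauss(455)])}
    theory = np.column_stack([grid0, gauss(418)])            # systematic ~12 nm offset
    print(match_spectrum(theory, experiments)[0])            # best match: "PAH_A"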

  17. Conifer density within lake catchments predicts fish mercury concentrations in remote subalpine lakes

    Science.gov (United States)

    Eagles-Smith, Collin A.; Herring, Garth; Johnson, Branden L.; Graw, Rick

    2016-01-01

    Remote high-elevation lakes represent unique environments for evaluating the bioaccumulation of atmospherically deposited mercury through freshwater food webs, as well as for evaluating the relative importance of mercury loading versus landscape influences on mercury bioaccumulation. The increase in mercury deposition to these systems over the past century, coupled with their limited exposure to direct anthropogenic disturbance, makes them useful indicators for estimating how changes in mercury emissions may propagate to changes in Hg bioaccumulation and ecological risk. We evaluated mercury concentrations in resident fish from 28 high-elevation, sub-alpine lakes in the Pacific Northwest region of the United States. Fish total mercury (THg) concentrations ranged from 4 to 438 ng/g wet weight, with a geometric mean concentration (±standard error) of 43 ± 2 ng/g ww. Fish THg concentrations were negatively correlated with relative condition factor, indicating that faster growing fish that are in better condition have lower THg concentrations. Across the 28 study lakes, mean THg concentrations of resident salmonid fishes varied as much as 18-fold among lakes. We used a hierarchical statistical approach to evaluate the relative importance of physiological, limnological, and catchment drivers of fish Hg concentrations. Our top statistical model explained 87% of the variability in fish THg concentrations among lakes with four key landscape and limnological variables: catchment conifer density (basal area of conifers within a lake's catchment), lake surface area, aqueous dissolved sulfate, and dissolved organic carbon. Conifer density within a lake's catchment was the most important variable explaining fish THg concentrations across lakes, with THg concentrations differing by more than 400 percent across the forest density spectrum. These results illustrate the importance of landscape characteristics in controlling mercury bioaccumulation in fish.

  18. Prediction of Catastrophes: an experimental model

    CERN Document Server

    Peters, Randall D; Pomeau, Yves

    2012-01-01

    Catastrophes of all kinds can be roughly defined as short-duration, large-amplitude events preceded and followed by long periods of "ripening". Major earthquakes surely belong to the class of 'catastrophic' events. Because of the space-time scales involved, an experimental approach is often difficult, if not impossible, however desirable it may be. Described in this article is a "laboratory" setup that yields data of a type that is amenable to theoretical methods of prediction. Observations are made of a critical slowing down in the noisy signal of a solder wire creeping under constant stress. This effect is shown to be a fair signal of the forthcoming catastrophe in two dynamical models. The first is an "abstract" model in which a time-dependent quantity drifts slowly but makes quick jumps from time to time. The second is a realistic physical model for the collective motion of dislocations (the Ananthakrishna set of equations for creep). Hope thus exists that similar changes in the response to ...

  19. Density-dependent electron transport and precise modeling of GaN high electron mobility transistors

    Energy Technology Data Exchange (ETDEWEB)

    Bajaj, Sanyam, E-mail: bajaj.10@osu.edu; Shoron, Omor F.; Park, Pil Sung; Krishnamoorthy, Sriram; Akyol, Fatih; Hung, Ting-Hsiang [Department of Electrical and Computer Engineering, The Ohio State University, Columbus, Ohio 43210 (United States); Reza, Shahed; Chumbes, Eduardo M. [Raytheon Integrated Defense Systems, Andover, Massachusetts 01810 (United States); Khurgin, Jacob [Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, Maryland 21218 (United States); Rajan, Siddharth [Department of Electrical and Computer Engineering, The Ohio State University, Columbus, Ohio 43210 (United States); Department of Material Science and Engineering, The Ohio State University, Columbus, Ohio 43210 (United States)

    2015-10-12

    We report on the direct measurement of the two-dimensional sheet charge density dependence of electron transport in AlGaN/GaN high electron mobility transistors (HEMTs). Pulsed IV measurements established increasing electron velocities with decreasing sheet charge densities, resulting in a saturation velocity of 1.9 × 10^7 cm/s at a low sheet charge density of 7.8 × 10^11 cm^-2. An optical phonon emission-based electron velocity model for GaN is also presented. It accommodates stimulated longitudinal optical (LO) phonon emission, which clamps the electron velocity owing to the strong electron-phonon interaction and long LO phonon lifetime in GaN. A comparison with the measured density-dependent saturation velocity shows that it captures the dependence rather well. Finally, the experimental result is applied in a TCAD-based device simulator to predict the DC and small-signal characteristics of a reported GaN HEMT. Good agreement between the simulated and reported experimental results validated the measurement presented in this report and established accurate modeling of GaN HEMTs.

  20. Predictive modeling of low solubility semiconductor alloys

    Science.gov (United States)

    Rodriguez, Garrett V.; Millunchick, Joanna M.

    2016-09-01

    GaAsBi is of great interest for applications in high efficiency optoelectronic devices due to its highly tunable bandgap. However, the experimental growth of high Bi content films has proven difficult. Here, we model GaAsBi film growth using a kinetic Monte Carlo simulation that explicitly takes cation and anion reactions into account. The unique behavior of Bi droplets is explored, and a sharp decrease in Bi content upon Bi droplet formation is demonstrated. The high mobility of simulated Bi droplets on GaAsBi surfaces is shown to produce phase separated Ga-Bi droplets as well as depressions on the film surface. A phase diagram for a range of growth rates that predicts both Bi content and droplet formation is presented to guide the experimental growth of high Bi content GaAsBi films.

  1. Distributed model predictive control made easy

    CERN Document Server

    Negenborn, Rudy

    2014-01-01

    The rapid evolution of computer science, communication, and information technology has enabled the application of control techniques to systems beyond the possibilities of control theory just a decade ago. Critical infrastructures such as electricity, water, traffic and intermodal transport networks are now in the scope of control engineers. The sheer size of such large-scale systems requires the adoption of advanced distributed control approaches. Distributed model predictive control (MPC) is one of the promising control methodologies for control of such systems.   This book provides a state-of-the-art overview of distributed MPC approaches, while at the same time making clear directions of research that deserve more attention. The core and rationale of 35 approaches are carefully explained. Moreover, detailed step-by-step algorithmic descriptions of each approach are provided. These features make the book a comprehensive guide both for those seeking an introduction to distributed MPC as well as for those ...

  2. Leptogenesis in minimal predictive seesaw models

    Science.gov (United States)

    Björkeroth, Fredrik; de Anda, Francisco J.; de Medeiros Varzielas, Ivo; King, Stephen F.

    2015-10-01

    We estimate the Baryon Asymmetry of the Universe (BAU) arising from leptogenesis within a class of minimal predictive seesaw models involving two right-handed neutrinos and simple Yukawa structures with one texture zero. The two right-handed neutrinos are dominantly responsible for the "atmospheric" and "solar" neutrino masses, with Yukawa couplings to (ν_e, ν_μ, ν_τ) proportional to (0, 1, 1) and (1, n, n - 2), respectively, where n is a positive integer. The neutrino Yukawa matrix is therefore characterised by two proportionality constants with their relative phase providing a leptogenesis-PMNS link, enabling the lightest right-handed neutrino mass to be determined from neutrino data and the observed BAU. We discuss an SU(5) SUSY GUT example, where A_4 vacuum alignment provides the required Yukawa structures with n = 3, while a Z_9 symmetry fixes the relative phase to be a ninth root of unity.

  3. Density functional theory predictions of the composition of atomic layer deposition-grown ternary oxides.

    Science.gov (United States)

    Murray, Ciaran; Elliott, Simon D

    2013-05-01

    The surface reactivity of various metal precursors with different alkoxide, amide, and alkyl ligands during the atomic layer deposition (ALD) of ternary oxides was determined using simplified theoretical models. Quantum chemical estimations of the Brønsted reactivity of a metal complex precursor at a hydroxylated surface are made using a gas-phase hydrolysis model. The geometry optimized structures and energies for a large suite of 17 metal precursors (including cations of Mg, Ca, Sr, Sc, Y, La, Ti, Zr, Cr, Mn, Fe, Co, Ni, Cu, Zn, Al, and Ga) with five different anionic ligands (conjugate bases of tert-butanol, tetramethyl heptanedione, dimethyl amine, isopropyl amidine, and methane) and the corresponding hydrolyzed complexes are calculated using density functional theory (DFT) methods. The theoretically computed energies are used to determine the energetics of the model reactions. These DFT models of hydrolysis are used to successfully explain the reactivity and resulting stoichiometry in terms of metal cation ratios seen experimentally for a variety of ALD-grown ternary oxide systems.

  4. Fluid and gyrokinetic modelling of particle transport in plasmas with hollow density profiles

    Science.gov (United States)

    Tegnered, D.; Oberparleiter, M.; Nordman, H.; Strand, P.

    2016-11-01

    Hollow density profiles occur in connection with pellet fuelling and L to H transitions. A positive density gradient could potentially stabilize the turbulence or change the relation between convective and diffusive fluxes, thereby reducing the turbulent transport of particles towards the center, making the fuelling scheme inefficient. In the present work, the particle transport driven by ITG/TE mode turbulence in regions of hollow density profiles is studied by fluid as well as gyrokinetic simulations. The fluid model used, an extended version of the Weiland transport model, the Extended Drift Wave Model (EDWM), incorporates an arbitrary number of ion species in a multi-fluid description and an extended wavelength spectrum. The fluid model, which is fast and hence suitable for use in predictive simulations, is compared to gyrokinetic simulations using the code GENE. Typical tokamak parameters are used, based on the Cyclone Base Case. Parameter scans in key plasma parameters like plasma β, R/L_T, and magnetic shear are investigated. It is found that β in particular has a stabilizing effect in the negative R/L_n region; both nonlinear GENE and EDWM show a decrease in inward flux for negative R/L_n and a change of direction from inward to outward for positive R/L_n. This might have serious consequences for pellet fuelling of high β plasmas.

  5. Atomic density functional and diagram of structures in the phase field crystal model

    Science.gov (United States)

    Ankudinov, V. E.; Galenko, P. K.; Kropotin, N. V.; Krivilyov, M. D.

    2016-02-01

    The phase field crystal model provides a continual description of the atomic density over the diffusion time of reactions. We consider a homogeneous structure (liquid) and a perfect periodic crystal, which are constructed from the one-mode approximation of the phase field crystal model. A diagram of 2D structures is constructed from the analytic solutions of the model using atomic density functionals. The diagram predicts equilibrium atomic configurations for transitions from the metastable state and includes the domains of existence of homogeneous, triangular, and striped structures corresponding to a liquid, a body-centered cubic crystal, and a longitudinal cross section of cylindrical tubes. The method developed here is employed for constructing the diagram for the homogeneous liquid phase and the body-centered iron lattice. The expression for the free energy is derived analytically from density functional theory. The specific features of approximating the phase field crystal model are compared with the approximations and conclusions of the weak crystallization and 2D melting theories.

  6. Vertical variation of particle speed and flux density in aeolian saltation: Measurement and modeling

    Science.gov (United States)

    Rasmussen, Keld R.; Sørensen, Michael

    2008-06-01

    Particle dynamics in aeolian saltation has been studied in a boundary layer wind tunnel above beds composed of quartz grains having diameters of either 242 μm or 320 μm. The cross section of the tunnel is 600 mm × 900 mm, and its thick boundary layer allows precise estimation of the fluid friction speed. Saltation is modeled using a numerical saltation model, and predicted grain speeds agree fairly well with experimental results obtained from laser-Doppler anemometry. The use of laser-Doppler anemometry to study aeolian saltation is thoroughly discussed and some pitfalls are identified. At 80 mm height the ratio between air speed and grain speed is about 1.1, and from there it increases toward the bed, so that at 5 mm it is about 2.0. All grain speed profiles converge toward a common value of about 1 m/s at 2-3 mm height. Moreover, the estimated launch velocity distributions depend only very weakly on the friction speed, in contrast to what has often been assumed in the literature. Flux density profiles measured with a laser-Doppler anemometer appear to be similar to most other density profiles measured with vertical array compartment traps; that is, two exponential segments will fit data between heights from a few millimeters to 100-200 mm. The experimental flux density profiles are found to agree well with model predictions. Generally, validation rates are low, from 30 to 50%, except at the highest level of 80 mm, where they approach 80%. When flux density profiles based on the validated data are used to estimate the total mass transport rate, the results are in fair agreement with measured transport rates, except for conditions near threshold, where as much as a 50% difference is observed.

  7. Triglycerides to High-Density Lipoprotein Cholesterol Ratio Can Predict Impaired Glucose Tolerance in Young Women with Polycystic Ovary Syndrome.

    Science.gov (United States)

    Song, Do Kyeong; Lee, Hyejin; Sung, Yeon Ah; Oh, Jee Young

    2016-11-01

    The triglycerides to high-density lipoprotein cholesterol (TG/HDL-C) ratio could be related to insulin resistance (IR). We previously reported that Korean women with polycystic ovary syndrome (PCOS) had a high prevalence of impaired glucose tolerance (IGT). We aimed to determine the cutoff value of the TG/HDL-C ratio for predicting IR and to examine whether the TG/HDL-C ratio is useful for identifying individuals at risk of IGT in young Korean women with PCOS. We recruited 450 women with PCOS (24±5 yrs) and performed a 75-g oral glucose tolerance test (OGTT). IR was assessed by a homeostasis model assessment index above the 95th percentile of regular-cycling women who served as the controls (n=450, 24±4 yrs). The cutoff value of the TG/HDL-C ratio for predicting IR was 2.5 in women with PCOS. Among the women with PCOS who had normal fasting glucose (NFG), the prevalence of IGT was significantly higher in those with a high TG/HDL-C ratio than in those with a low TG/HDL-C ratio (15.6% vs. 5.6%). Women with PCOS whose TG/HDL-C ratio exceeds 2.5 are therefore recommended to be administered an OGTT to detect IGT even if they have NFG.

  8. Combined effect of pulse density and grid cell size on predicting and mapping aboveground carbon in fast‑growing Eucalyptus forest plantation using airborne LiDAR data

    Science.gov (United States)

    Carlos Alberto Silva; Andrew Thomas Hudak; Carine Klauberg; Lee Alexandre Vierling; Carlos Gonzalez‑Benecke; Samuel de Padua Chaves Carvalho; Luiz Carlos Estraviz Rodriguez; Adrian Cardil

    2017-01-01

    LiDAR measurements can be used to predict and map AGC across variable-age Eucalyptus plantations with adequate levels of precision and accuracy using 5 pulses m−2 and a grid cell size of 5 m. The promising results for AGC modeling in this study will allow for greater confidence in comparing AGC estimates with varying LiDAR sampling densities for Eucalyptus plantations...

  9. A mathematical model of the maximum power density attainable in an alkaline hydrogen/oxygen fuel cell

    Science.gov (United States)

    Kimble, Michael C.; White, Ralph E.

    1991-01-01

    A mathematical model of a hydrogen/oxygen alkaline fuel cell is presented that can be used to predict the polarization behavior under various power loads. The major limitations to achieving high power densities are indicated and methods to increase the maximum attainable power density are suggested. The alkaline fuel cell model describes the phenomena occurring in the solid, liquid, and gaseous phases of the anode, separator, and cathode regions based on porous electrode theory applied to three phases. Fundamental equations of chemical engineering that describe conservation of mass and charge, species transport, and kinetic phenomena are used to develop the model by treating all phases as a homogeneous continuum.

  10. Dynamic density functional theory for nucleation: Non-classical predictions of mesoscopic nucleation theory

    Science.gov (United States)

    Duran-Olivencia, Miguel A.; Yatsyshin, Peter; Lutsko, James F.; Kalliadasis, Serafim

    2016-11-01

    Classical density functional theory (DFT) for fluids and its dynamic extension (DDFT) provide an appealing mean-field framework for describing equilibrium and dynamics of complex soft matter systems. For a long time, homogeneous nucleation was considered to be outside the limits of applicability of DDFT. However, our recently developed mesoscopic nucleation theory (MeNT) based on fluctuating hydrodynamics, reconciles the inherent randomness of the nucleation process with the deterministic nature of DDFT. It turns out that in the weak-noise limit, the most likely path (MLP) for nucleation to occur is determined by the DDFT equations. We present computations of MLPs for homogeneous and heterogeneous nucleation in colloidal suspensions. For homogeneous nucleation, the MLP obtained is in excellent agreement with the reduced order-parameter description of MeNT, which predicts a multistage nucleation pathway. For heterogeneous nucleation, the presence of impurities in the fluid affects the MLP, but remarkably, the overall qualitative picture of homogeneous nucleation persists. Finally, we highlight the use of DDFT as a simulation tool, which is especially appealing as there are no known applications of MeNT to heterogeneous nucleation. We acknowledge financial support from the European Research Council via Advanced Grant No. 247031 and from EPSRC via Grants No. EP/L020564 and EP/L025159.

  11. Predicting a quaternary tungsten oxide for sustainable photovoltaic application by density functional theory

    Energy Technology Data Exchange (ETDEWEB)

    Sarker, Pranab; Huda, Muhammad N., E-mail: huda@uta.edu [Department of Physics, University of Texas at Arlington, Arlington, Texas 76019 (United States); Al-Jassim, Mowafak M. [National Renewable Energy Laboratory, Golden, Colorado 80401 (United States)

    2015-12-07

    A quaternary oxide, CuSnW2O8 (CTTO), has been predicted by density functional theory (DFT) to be a suitable material for sustainable photovoltaic applications. CTTO possesses band gaps of 1.25 eV (indirect) and 1.37 eV (direct), which were evaluated using the hybrid functional (HSE06) as a post-DFT method. The hole mobility of CTTO was higher than that of silicon. Further, optical absorption calculations demonstrate that CTTO is a better absorber of sunlight than Cu2ZnSnS4 and CuInxGa1−xSe2 (x = 0.5). In addition, CTTO exhibits rigorous thermodynamic stability comparable to WO3, as investigated by different thermodynamic approaches such as bonding cohesion, fragmentation tendency, and chemical potential analysis. Chemical potential analysis further revealed that CTTO can be synthesized at flexible experimental growth conditions, although the co-existence of at least one secondary phase is likely. Finally, like other Cu-based compounds, the formation of Cu vacancies is highly probable, even at Cu-rich growth condition, which could introduce p-type activity in CTTO.

  12. Monocyte/high-density lipoprotein ratio predicts the mortality in ischemic stroke patients.

    Science.gov (United States)

    Bolayir, Asli; Gokce, Seyda Figul; Cigdem, Burhanettin; Bolayir, Hasan Ata; Yildiz, Ozlem Kayim; Bolayir, Ertugrul; Topaktas, Suat Ahmet

    2017-08-24

    The inflammatory process is a very important stage in the development and prognosis of acute ischemic stroke (AIS). The monocyte to high-density lipoprotein (HDL) ratio (MHR) is accepted as a novel marker of inflammation. However, the role of MHR as a predictor of mortality in patients with AIS remains unclear. We retrospectively enrolled 466 patients who were referred to our clinic within the first 24 hours of symptom presentation and who were diagnosed with AIS between January 2008 and June 2016. Four hundred and eight controls of similar age and gender were also included. The patient group was classified into two groups according to 30-day mortality. The groups were compared in terms of monocyte counts, HDL, and MHR values. The patient group had significantly higher monocyte counts and lower HDL levels; therefore, this group had higher values of MHR compared to controls. Additionally, the monocyte count and MHR value were higher, and the HDL level was lower, in non-surviving patients. MHR was also observed to be a significant independent predictor of 30-day mortality in patients with AIS. The cutoff value of MHR for predicting 30-day mortality in patients with AIS was 17.52 (95% CI 0.95-0.98). Our study demonstrated that a high MHR value is an independent predictor of 30-day mortality in patients with AIS. Copyright © 2017 Polish Neurological Society. Published by Elsevier Urban & Partner Sp. z o.o. All rights reserved.

  13. Critical state model with anisotropic critical current density

    CERN Document Server

    Bhagwat, K V; Ravikumar, G

    2003-01-01

    Analytical solutions of Bean's critical state model with critical current density Jc being anisotropic are obtained for superconducting cylindrical samples of arbitrary cross section in a parallel geometry. We present a method for calculating the flux fronts and magnetization curves. Results are presented for cylinders with elliptical cross section with a specific form of the anisotropy. We find that over a certain range of the anisotropy parameter the flux fronts have shapes similar to those for an isotropic sample. However, in general, the presence of anisotropy significantly modifies the shape of the flux fronts. The field for full flux penetration also depends on the anisotropy parameter. The method is extended to the case of anisotropic Jc that also depends on the local field B, and magnetization hysteresis curves are presented for typical values of the anisotropy parameter for the case of |Jc| that decreases exponentially with |B|.

  14. Element-specific density profiles in interacting biomembrane models

    Science.gov (United States)

    Schneck, Emanuel; Rodriguez-Loureiro, Ignacio; Bertinetti, Luca; Marin, Egor; Novikov, Dmitri; Konovalov, Oleg; Gochev, Georgi

    2017-03-01

    Surface interactions involving biomembranes, such as cell–cell interactions or membrane contacts inside cells play important roles in numerous biological processes. Structural insight into the interacting surfaces is a prerequisite to understand the interaction characteristics as well as the underlying physical mechanisms. Here, we work with simplified planar experimental models of membrane surfaces, composed of lipids and lipopolymers. Their interaction is quantified in terms of pressure–distance curves using ellipsometry at controlled dehydrating (interaction) pressures. For selected pressures, their internal structure is investigated by standing-wave x-ray fluorescence (SWXF). This technique yields specific density profiles of the chemical elements P and S belonging to lipid headgroups and polymer chains, as well as counter-ion profiles for charged surfaces.

  15. Comparison of different gravity field implied density models of the topography

    Science.gov (United States)

    Sedighi, Morteza; Tabatabaee, Seied; Najafi-Alamdari, Mehdi

    2009-06-01

    Density within the Earth's crust varies between 1.0 and 3.0 g/cm3. The Bouguer gravity field measured in south Iran is analyzed using four different regional-residual separation techniques to obtain a residual map of the gravity field suitable for density modeling of topography. A density model of topography with radial and lateral distribution of density is required for an accurate determination of the geoid, e.g., in the Stokes-Helmert approach. The apparent density mapping technique is used to convert the four residual Bouguer anomaly fields into the corresponding four gravity implied subsurface density (GRADEN) models. Although all four density models showed good correlation with the geological density (GEODEN) model of the region, the GRADEN models obtained by high-pass filtering and GGM high-pass filtering show better numerical correlation with the GEODEN model than the other models.

  16. Comparing model predictions for ecosystem-based management

    DEFF Research Database (Denmark)

    Jacobsen, Nis Sand; Essington, Timothy E.; Andersen, Ken Haste

    2016-01-01

    Ecosystem modeling is becoming an integral part of fisheries management, but there is a need to identify differences between predictions derived from models employed for scientific and management purposes. Here, we compared two models: a biomass-based food-web model (Ecopath with Ecosim (EwE)) and a size-structured fish community model. The models were compared with respect to predicted ecological consequences of fishing to identify commonalities and differences in model predictions for the California Current fish community. We compared the models regarding direct and indirect responses to fishing on one or more species. The size-based model predicted a higher fishing mortality needed to reach maximum sustainable yield than EwE for most species. The size-based model also predicted stronger top-down effects of predator removals than EwE. In contrast, EwE predicted stronger bottom-up effects...

  17. Prediction of d^0 magnetism in self-interaction corrected density functional theory

    Science.gov (United States)

    Das Pemmaraju, Chaitanya

    2010-03-01

    Over the past couple of years, the phenomenon of ``d^0 magnetism'' has greatly intrigued the magnetism community [1]. Unlike conventional magnetic materials, ``d^0 magnets'' lack any magnetic ions with open d or f shells but, surprisingly, exhibit signatures of ferromagnetism, often with a Curie temperature exceeding 300 K. Current research in the field is geared towards understanding the mechanism underlying this observed ferromagnetism, which is difficult to explain within the conventional m-J paradigm [1]. The most widely studied class of d^0 materials are undoped and light-element-doped wide-gap oxides such as HfO2, MgO, ZnO, and TiO2, all of which have been put forward as possible d^0 ferromagnets. General experimental trends suggest that the magnetism is a feature of highly defective samples, leading to the expectation that the phenomenon must be defect related. In particular, based on density functional theory (DFT) calculations, acceptor defects formed from the O-2p states in these oxides have been proposed as being responsible for the ferromagnetism [2,3]. However, predicting magnetism originating from 2p orbitals is a delicate problem, which depends on the subtle interplay between covalency and Hund's coupling. DFT calculations based on semi-local functionals such as the local spin-density approximation (LSDA) can lead to qualitative failures on several fronts. On one hand, the excessive delocalization of spin-polarized holes leads to half-metallic ground states and the expectation of room-temperature ferromagnetism. On the other hand, in some cases a magnetic ground state may not be predicted at all, as the Hund's coupling might be underestimated. Furthermore, polaronic distortions, which are often a feature of acceptor defects in oxides, are not predicted [4,5]. In this presentation, we argue that the self-interaction error (SIE) inherent to semi-local functionals is responsible for the failures of LSDA and demonstrate through various examples that beyond

  18. Matter density perturbation and power spectrum in running vacuum model

    Science.gov (United States)

    Geng, Chao-Qiang; Lee, Chung-Chi

    2016-10-01

    We investigate the matter density perturbation δm and power spectrum P(k) in the running vacuum model (RVM), with the cosmological constant being a function of the Hubble parameter, given by Λ = Λ0 + 6σHH0 + 3νH^2, in which the linear and quadratic terms of H would originate from the QCD vacuum condensation and the cosmological renormalization group, respectively. Taking the dark energy perturbation into consideration, we derive the evolution equation for δm and find a specific scale dcr = 2π/kcr, which divides the evolution of the universe into the sub- and super-interaction regimes, corresponding to k ≪ kcr and k ≫ kcr, respectively. For the former, the evolution of δm has the same behavior as that in the ΛCDM model, while for the latter, the growth of δm is frozen (greatly enhanced) when ν + σ > (<) 0 due to the couplings between matter and dark energy. It is clear that the observational data rule out the cases with ν < 0 and ν + σ < 0, while the allowed window for the model parameters is extremely narrow, with ν, |σ| ≲ O(10^{-7}).

  19. Prediction of the derivative discontinuity in density functional theory from an electrostatic description of the exchange and correlation potential

    CERN Document Server

    Andrade, Xavier

    2011-01-01

    We propose a new approach to approximate the exchange and correlation (XC) functional in density functional theory. The XC potential is considered as an electrostatic potential, generated by a fictitious XC density, which is in turn a functional of the electronic density. We apply the approach to develop a correction scheme that fixes the asymptotic behavior of any approximated XC potential for finite systems. Additionally, the correction procedure gives the value of the derivative discontinuity; therefore it can directly predict the fundamental gap as a ground-state property.

  20. Remaining Useful Lifetime (RUL) - Probabilistic Predictive Model

    Directory of Open Access Journals (Sweden)

    Ephraim Suhir

    2011-01-01

    Full Text Available Reliability evaluations and assurances cannot be delayed until the device (system) is fabricated and put into operation. Reliability of an electronic product should be conceived at the early stages of its design; implemented during manufacturing; evaluated (considering customer requirements and the existing specifications) by electrical, optical and mechanical measurements and testing; checked (screened) during manufacturing (fabrication); and, if necessary and appropriate, maintained in the field during the product's operation. A simple and physically meaningful probabilistic predictive model is suggested for the evaluation of the remaining useful lifetime (RUL) of an electronic device (system) after an appreciable deviation from its normal operation conditions has been detected, and the increase in the failure rate and the change in the configuration of the wear-out portion of the bathtub curve have been assessed. The general concepts are illustrated by numerical examples. The model can be employed, along with other PHM forecasting and interfering tools and means, to evaluate and to maintain a high level of reliability (probability of non-failure) of a device (system) at the operation stage of its lifetime.
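
    The paper's own RUL expression is not reproduced in this abstract; as a generic, hedged illustration of the idea, the sketch below assumes a constant (exponential) failure rate that is re-estimated after the detected deviation and returns the time remaining until the probability of non-failure drops to a chosen target. The rate values and the reliability target are invented for the example.

    import math

    def remaining_useful_life(lam_per_hour, reliability_target):
        """Time until the probability of non-failure R(t) = exp(-lam * t) falls to
        the target, for a constant failure rate reassessed after an anomaly.
        Generic exponential-reliability sketch, not the model of the cited paper."""
        return -math.log(reliability_target) / lam_per_hour

    lam_nominal = 1.0e-6      # failures per hour before the deviation (assumed)
    lam_degraded = 5.0e-5     # failures per hour after the detected deviation (assumed)
    target = 0.99             # required probability of non-failure (assumed)

    print(f"RUL at nominal rate:  {remaining_useful_life(lam_nominal, target):,.0f} h")
    print(f"RUL at degraded rate: {remaining_useful_life(lam_degraded, target):,.0f} h")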

  1. A Predictive Model of Geosynchronous Magnetopause Crossings

    CERN Document Server

    Dmitriev, A; Chao, J -K

    2013-01-01

    We have developed a model predicting whether or not the magnetopause crosses geosynchronous orbit at a given location for a given solar wind pressure Psw, Bz component of the interplanetary magnetic field (IMF), and geomagnetic conditions characterized by the 1-min SYM-H index. The model is based on more than 300 geosynchronous magnetopause crossings (GMCs) and about 6000 minutes during which geosynchronous satellites of the GOES and LANL series were located in the magnetosheath (so-called MSh intervals) from 1994 to 2001. Minimizing the Psw required for GMCs and MSh intervals at various locations, Bz, and SYM-H allows us to describe both the effect of magnetopause dawn-dusk asymmetry and the saturation of the Bz influence for very large southward IMF. The asymmetry is strong for large negative Bz and almost disappears when Bz is positive. We found that the larger the amplitude of negative SYM-H, the lower the solar wind pressure required for GMCs. We attribute this effect to a depletion of the dayside magnetic field by a storm-time intensification of t...

  2. Predictive modeling for EBPC in EBDW

    Science.gov (United States)

    Zimmermann, Rainer; Schulz, Martin; Hoppe, Wolfgang; Stock, Hans-Jürgen; Demmerle, Wolfgang; Zepka, Alex; Isoyan, Artak; Bomholt, Lars; Manakli, Serdar; Pain, Laurent

    2009-10-01

    We demonstrate a flow for e-beam proximity correction (EBPC) in e-beam direct write (EBDW) wafer manufacturing processes, covering all steps from the generation of a test pattern for (experimental or virtual) measurement data creation, through e-beam model fitting and proximity effect correction (PEC), to verification of the results. We base our approach on a predictive, physical e-beam simulation tool, with the possibility to complement this with experimental data, and with the goal of preparing the EBPC methods for the advent of high-volume EBDW tools. As an example, we apply and compare dose correction and geometric correction for low and high electron energies on 1D and 2D test patterns. In particular, we show some results of model-based geometric correction as is typical for the optical case, but enhanced for the particularities of e-beam technology. The results are used to discuss PEC strategies, with respect to short- and long-range effects.

  3. Accurate Modeling of Organic Molecular Crystals by Dispersion-Corrected Density Functional Tight Binding (DFTB).

    Science.gov (United States)

    Brandenburg, Jan Gerit; Grimme, Stefan

    2014-06-05

    The ambitious goal of organic crystal structure prediction challenges theoretical methods regarding their accuracy and efficiency. Dispersion-corrected density functional theory (DFT-D) is in principle applicable, but the computational demands, for example, to compute a huge number of polymorphs, are too high. Here, we demonstrate that this task can be carried out by a dispersion-corrected density functional tight binding (DFTB) method. The semiempirical Hamiltonian with the D3 correction can accurately and efficiently model both solid- and gas-phase inter- and intramolecular interactions at a speed-up of two orders of magnitude compared to DFT-D. The mean absolute deviations for interaction (lattice) energies for various databases are typically 2-3 kcal/mol (10-20%), that is, only about two times larger than those for DFT-D. For zero-point phonon energies, small deviations of <0.5 kcal/mol compared to DFT-D are obtained.

  4. Moving Towards Dynamic Ocean Management: How Well Do Modeled Ocean Products Predict Species Distributions?

    Directory of Open Access Journals (Sweden)

    Elizabeth A. Becker

    2016-02-01

    Full Text Available Species distribution models are now widely used in conservation and management to predict suitable habitat for protected marine species. The primary sources of dynamic habitat data have been in situ and remotely sensed oceanic variables (both are considered “measured data”), but now ocean models can provide historical estimates and forecast predictions of relevant habitat variables such as temperature, salinity, and mixed layer depth. To assess the performance of modeled ocean data in species distribution models, we present a case study for cetaceans that compares models based on output from a data-assimilative implementation of the Regional Ocean Modeling System (ROMS) to those based on measured data. Specifically, we used seven years of cetacean line-transect survey data collected between 1991 and 2009 to develop predictive habitat-based models of cetacean density for 11 species in the California Current Ecosystem. Two different generalized additive models were compared: one built with a full suite of ROMS output and another built with a full suite of measured data. Model performance was assessed using the percentage of explained deviance, root mean squared error (RMSE), observed-to-predicted density ratios, and visual inspection of predicted and observed distributions. Predicted distribution patterns were similar for models using ROMS output and measured data, and showed good concordance between observed sightings and model predictions. Quantitative measures of predictive ability were also similar between model types, and RMSE values were almost identical. The overall demonstrated success of the ROMS-based models opens new opportunities for dynamic species management and biodiversity monitoring because ROMS output is available in near real time and can be forecast.

  5. Investigation of density-dependent gas advection of trichloroethylene: Experiment and a model validation exercise

    Science.gov (United States)

    Lenhard, R. J.; Oostrom, M.; Simmons, C. S.; White, M. D.

    1995-07-01

    An experiment was conducted to evaluate whether vapor-density effects are significant in transporting volatile organic compounds (VOCs) with high vapor pressure and molecular mass through the subsurface. Trichloroethylene (TCE) was chosen for the investigation because it is a common VOC contaminant with high vapor pressure and molecular mass. For the investigation, a 2-m-long by 1-m-high by 7.5-cm-thick flow cell was constructed with a network of sampling ports. The flow cell was packed with sand, and a water table was established near the lower boundary. Liquid TCE was placed near the upper boundary of the flow cell in a chamber from which vapors could enter and migrate through the sand. TCE concentrations in the gas phase were measured by extracting 25-μl gas samples with an air-tight syringe and analyzing them with a gas chromatograph. The evolution of the TCE gas plume in the sand was investigated by examining plots of TCE concentrations over the domain for specific times and for particular locations as a function of time. To help in this analysis, a numerical model was developed that can predict the simultaneous movements of a gas, a nonaqueous liquid, and water in porous media. The model also considers interphase mass transfer by employing the phase equilibrium assumption. The model was tested with one- and two-dimensional analytical solutions of fluid flow before it was used to simulate the experiment. Comparisons between experimental data and simulation results were very good when vapor-density effects were considered. When vapor-density effects were ignored, agreement was poor. These analyses suggest that vapor-density effects should be considered and that density-driven vapor advection may be an important mechanism for moving VOCs with high vapor pressures and molecular mass through the subsurface.

  6. Combined effect of pulse density and grid cell size on predicting and mapping aboveground carbon in fast-growing Eucalyptus forest plantation using airborne LiDAR data.

    Science.gov (United States)

    Silva, Carlos Alberto; Hudak, Andrew Thomas; Klauberg, Carine; Vierling, Lee Alexandre; Gonzalez-Benecke, Carlos; de Padua Chaves Carvalho, Samuel; Rodriguez, Luiz Carlos Estraviz; Cardil, Adrián

    2017-12-01

    LiDAR remote sensing is a rapidly evolving technology for quantifying a variety of forest attributes, including aboveground carbon (AGC). Pulse density influences the acquisition cost of LiDAR, and grid cell size influences AGC prediction using plot-based methods; however, little work has evaluated the effects of LiDAR pulse density and cell size for predicting and mapping AGC in fast-growing Eucalyptus forest plantations. The aim of this study was to evaluate the effect of LiDAR pulse density and grid cell size on AGC prediction accuracy at plot and stand levels using airborne LiDAR and field data. We used the Random Forest (RF) machine learning algorithm to model AGC using LiDAR-derived metrics from LiDAR collections of 5 and 10 pulses m(-2) (RF5 and RF10) and grid cell sizes of 5, 10, 15 and 20 m. The results show that a LiDAR pulse density of 5 pulses m(-2) provides metrics with similar prediction accuracy for AGC as a dataset with 10 pulses m(-2) in these fast-growing plantations. Relative root mean square errors (RMSEs) for RF5 and RF10 were 6.14 and 6.01%, respectively. Equivalence tests showed that the predicted AGC from the training and validation models was equivalent to the observed AGC measurements. Grid cell sizes for mapping ranging from 5 to 20 m also did not significantly affect the prediction accuracy of AGC at the stand level in this system. LiDAR measurements can be used to predict and map AGC across variable-age Eucalyptus plantations with adequate levels of precision and accuracy using 5 pulses m(-2) and a grid cell size of 5 m. The promising results for AGC modeling in this study will allow for greater confidence in comparing AGC estimates with varying LiDAR sampling densities for Eucalyptus plantations and assist in decision making towards more cost-effective and efficient forest inventory.
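
    As an illustration of the plot-level workflow described above, the following sketch fits a Random Forest regression to a few hypothetical LiDAR height metrics and reports the relative RMSE; the feature names, data, and hyperparameters are assumptions, not the study's.

    ```python
    # Illustrative sketch only: fitting a Random Forest to plot-level LiDAR
    # height metrics to predict aboveground carbon (AGC), then reporting the
    # relative RMSE quoted in the abstract. Features and data are hypothetical.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_plots = 200
    X = np.column_stack([
        rng.uniform(5, 35, n_plots),     # e.g. mean canopy height (m)
        rng.uniform(10, 45, n_plots),    # e.g. 95th height percentile (m)
        rng.uniform(0.3, 1.0, n_plots),  # e.g. canopy cover fraction
    ])
    # Synthetic AGC values loosely tied to the metrics, plus noise.
    agc = 1.8 * X[:, 0] + 0.9 * X[:, 1] + 20 * X[:, 2] + rng.normal(0, 3, n_plots)

    X_tr, X_te, y_tr, y_te = train_test_split(X, agc, test_size=0.3, random_state=0)
    rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)

    pred = rf.predict(X_te)
    rel_rmse = 100 * np.sqrt(np.mean((y_te - pred) ** 2)) / np.mean(y_te)
    print(f"relative RMSE = {rel_rmse:.1f}%")
    ```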

  7. Regional-scale Predictions of Agricultural N Losses in an Area with a High Livestock Density

    Directory of Open Access Journals (Sweden)

    Carlo Grignani

    2011-02-01

    Full Text Available The quantification of the N losses in territories characterised by intensive animal stocking is of primary importance. The development of simulation models coupled to a GIS, or of simple environmental indicators, is strategic to suggest the best specific management practices. The aims of this work were: (a) to couple a GIS to a simulation model in order to predict N losses; (b) to estimate leaching and gaseous N losses from a territory with intensive livestock farming; (c) to derive a simplified empirical metamodel from the model output that could be used to rank the relative importance of the variables which influence N losses and to extend the results to homogeneous situations. The work was carried out in a 7773 ha area in the Western Po plain in Italy. This area was chosen because it is characterised by intensive animal husbandry and might soon be included in the nitrate vulnerable zones. The high N load, the shallow water table and the coarse type of sub-soil sediments contribute to the vulnerability to N leaching. The CropSyst simulation model was coupled to a GIS to account for the soil surface N budget. A linear multiple regression approach was used to describe the influence of a series of independent variables on the N leaching, the N gaseous losses (including volatilisation and denitrification), and on the sum of the two. Despite the fact that the available GIS was very detailed, a great deal of information necessary to run the model was lacking. Further soil measurements concerning soil hydrology, soil nitrate content and water table depth proved very valuable to integrate the data contained in the GIS in order to produce reliable input for the model. The results showed that the soils influence both the quantity and the pathways of the N losses to a great extent. The ratio between the N losses and the N supplied varied between 20 and 38%. The metamodel shows that manure input always played the most important role in determining the N losses
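
    A hedged sketch of the metamodel step follows: an ordinary multiple linear regression fitted to (here simulated) process-model output, whose coefficients can be used to rank the drivers of N losses. Variable names and magnitudes are hypothetical, not taken from the study.

    ```python
    # Hedged sketch of the "metamodel" idea: a linear multiple regression fitted
    # to simulation-model output so that N losses can be ranked against a few
    # driving variables. Variable names and values are hypothetical.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 120
    manure_n  = rng.uniform(50, 400, n)    # kg N/ha applied as manure
    water_tbl = rng.uniform(0.5, 3.0, n)   # water table depth (m)
    soil_sand = rng.uniform(20, 90, n)     # sand content (%)

    # Pretend these leaching values came from the process model (e.g. CropSyst runs).
    n_leaching = 0.12 * manure_n - 8 * water_tbl + 0.3 * soil_sand + rng.normal(0, 10, n)

    X = sm.add_constant(np.column_stack([manure_n, water_tbl, soil_sand]))
    metamodel = sm.OLS(n_leaching, X).fit()
    print(metamodel.params)        # coefficients rank the relative importance of drivers
    print(f"R^2 = {metamodel.rsquared:.2f}")
    ```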

  8. Model for predicting mountain wave field uncertainties

    Science.gov (United States)

    Damiens, Florentin; Lott, François; Millet, Christophe; Plougonven, Riwal

    2017-04-01

    Studying the propagation of acoustic waves throughout the troposphere requires knowledge of wind speed and temperature gradients from the ground up to about 10-20 km. Typical planetary boundary layer flows are known to present vertical low-level shears that can interact with mountain waves, thereby triggering small-scale disturbances. Resolving these fluctuations for long-range propagation problems is, however, not feasible because of computer memory/time restrictions and thus, they need to be parameterized. When the disturbances are small enough, these fluctuations can be described by linear equations. Previous works by the co-authors have shown that the critical layer dynamics that occur near the ground produce large horizontal flows and buoyancy disturbances that result in intense downslope winds and gravity wave breaking. While these phenomena manifest almost systematically for high Richardson numbers and when the boundary layer depth is relatively small compared to the mountain height, the process by which static stability affects downslope winds remains unclear. In the present work, new linear mountain gravity wave solutions are tested against numerical predictions obtained with the Weather Research and Forecasting (WRF) model. For Richardson numbers typically larger than unity, the mesoscale model is used to quantify the effect of neglected nonlinear terms on downslope winds and mountain wave patterns. At these regimes, the large downslope winds transport warm air, a so-called "Foehn" effect that can impact sound propagation properties. The sensitivity of small-scale disturbances to the Richardson number is quantified using two-dimensional spectral analysis. It is shown through a pilot study of subgrid-scale fluctuations of boundary layer flows over realistic mountains that the cross-spectrum of the mountain wave field is made up of the same components found in WRF simulations. The impact of each individual component on acoustic wave propagation is discussed in terms of

  9. Progress towards a more predictive model for hohlraum radiation drive and symmetry

    Science.gov (United States)

    Jones, O. S.; Suter, L. J.; Scott, H. A.; Barrios, M. A.; Farmer, W. A.; Hansen, S. B.; Liedahl, D. A.; Mauche, C. W.; Moore, A. S.; Rosen, M. D.; Salmonson, J. D.; Strozzi, D. J.; Thomas, C. A.; Turnbull, D. P.

    2017-05-01

    For several years, we have been calculating the radiation drive in laser-heated gold hohlraums using flux-limited heat transport with a limiter of 0.15, tabulated values of local thermodynamic equilibrium gold opacity, and an approximate non-local-thermodynamic-equilibrium (NLTE) model for gold emissivity (DCA_2010). This model has been successful in predicting the radiation drive in vacuum hohlraums, but for gas-filled hohlraums used to drive capsule implosions, the model consistently predicts too much drive and capsule bang times earlier than measured. In this work, we introduce a new model that brings the calculated bang time into better agreement with the measured bang time. The new model employs (1) a numerical grid that is fully converged in space, energy, and time, (2) a modified approximate NLTE model that includes more physics and is in better agreement with more detailed offline emissivity models, and (3) a reduced flux limiter value of 0.03. We applied this model to gas-filled hohlraum experiments using high density carbon and plastic ablator capsules that had hohlraum He fill gas densities ranging from 0.06 to 1.6 mg/cc and hohlraum diameters of 5.75 or 6.72 mm. The new model predicts bang times to within ±100 ps for most experiments with low to intermediate fill densities (up to 0.85 mg/cc). This model predicts higher temperatures in the plasma than the old model and also predicts that, at higher gas fill densities, a significant amount of inner beam laser energy escapes the hohlraum through the opposite laser entrance hole.

  10. Predicting local dengue transmission in Guangzhou, China, through the influence of imported cases, mosquito density and climate variability.

    Directory of Open Access Journals (Sweden)

    Shaowei Sang

    Full Text Available Each year there are approximately 390 million dengue infections worldwide. Weather variables have a significant impact on the transmission of Dengue Fever (DF), a mosquito-borne viral disease. DF in mainland China is characterized as an imported disease. Hence it is necessary to explore the roles of imported cases, mosquito density and climate variability in dengue transmission in China. The study aimed to identify the relationship between dengue occurrence and possible risk factors and to develop a predictive model for dengue control and prevention purposes. Three traditional suburbs and one district with an international airport in Guangzhou city were selected as the study areas. Autocorrelation and cross-correlation analysis were used to perform univariate analysis to identify possible risk factors, with relevant lagged effects, associated with local dengue cases. Principal component analysis (PCA) was applied to extract principal components, and PCA scores were used to represent the original variables to reduce multi-collinearity. Combining the univariate analysis and prior knowledge, time-series Poisson regression analysis was conducted to quantify the relationship between weather variables, Breteau Index, imported DF cases and the local dengue transmission in Guangzhou, China. The goodness-of-fit of the constructed model was determined by pseudo-R2, Akaike information criterion (AIC) and a residual test. There were a total of 707 notified local DF cases from March 2006 to December 2012, with a seasonal distribution from August to November. There were a total of 65 notified imported DF cases from 20 countries, with forty-six cases (70.8%) imported from Southeast Asia. The model showed that local DF cases were positively associated with mosquito density, imported cases, temperature, precipitation, vapour pressure and minimum relative humidity, whilst being negatively associated with air pressure, with different time lags. Imported DF cases and mosquito
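
    The following is a minimal, self-contained sketch of a lagged time-series Poisson regression of the kind described above, fitted to simulated monthly series; the lag choices, covariates, and coefficients are placeholders rather than the fitted Guangzhou model.

    ```python
    # Minimal sketch (not the authors' model): a time-series Poisson regression of
    # monthly local dengue counts on lagged mosquito density (Breteau Index),
    # imported cases, and a weather covariate. All series here are simulated.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n_months = 84
    df = pd.DataFrame({
        "breteau":  rng.uniform(2, 20, n_months),
        "imported": rng.poisson(1.0, n_months),
        "temp":     20 + 8 * np.sin(2 * np.pi * np.arange(n_months) / 12),
    })
    # Lag predictors by one and two months, as a cross-correlation analysis might suggest.
    df["breteau_lag1"]  = df["breteau"].shift(1)
    df["imported_lag2"] = df["imported"].shift(2)
    rate = np.exp(-2 + 0.08 * df["breteau_lag1"] + 0.3 * df["imported_lag2"] + 0.05 * df["temp"])
    df["cases"] = rng.poisson(rate.fillna(rate.mean()))

    model_df = df.dropna()
    X = sm.add_constant(model_df[["breteau_lag1", "imported_lag2", "temp"]])
    fit = sm.GLM(model_df["cases"], X, family=sm.families.Poisson()).fit()
    print(fit.summary())
    print(f"AIC = {fit.aic:.1f}")
    ```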

  11. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

    Full Text Available We propose a weather prediction model in this article based on an artificial neural network and fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the "fuzzy rule-based neural network", which simulates sequential relations among fuzzy sets using an artificial neural network; and the second part is the "neural fuzzy inference system", which is based on the first part but can learn new fuzzy rules from the previous ones according to the algorithm we proposed. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. It is well known that the need for accurate weather prediction is apparent when considering the benefits. However, the excessive pursuit of accuracy in weather prediction makes some of the "accurate" prediction results meaningless, and the numerical prediction model is often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we make the predicted outcomes of precipitation more accurate and the prediction methods simpler than the complex numerical forecasting models that occupy large computation resources, are time-consuming and have a low predictive accuracy rate. Accordingly, we achieve more accurate predictive precipitation results than by using traditional artificial neural networks that have low predictive accuracy.

  12. RFI modeling and prediction approach for SATOP applications: RFI prediction models

    Science.gov (United States)

    Nguyen, Tien M.; Tran, Hien T.; Wang, Zhonghai; Coons, Amanda; Nguyen, Charles C.; Lane, Steven A.; Pham, Khanh D.; Chen, Genshe; Wang, Gang

    2016-05-01

    This paper describes a technical approach for the development of RFI prediction models using a carrier synchronization loop when calculating Bit or Carrier SNR degradation due to interference, for (i) detecting narrow-band and wideband RFI signals, and (ii) estimating and predicting the behavior of the RFI signals. The paper presents analytical and simulation models and provides both analytical and simulation results on the performance of USB (Unified S-Band) waveforms in the presence of narrow-band and wideband RFI signals. The models presented in this paper will allow future USB command systems to detect the RFI presence, estimate the RFI characteristics and predict the RFI behavior in real time for accurate assessment of the impacts of RFI on the command Bit Error Rate (BER) performance. The command BER degradation model presented in this paper also allows the ground system operator to estimate the optimum transmitted SNR to maintain a required command BER level in the presence of both friendly and un-friendly RFI sources.

  13. Stratified flows with variable density: mathematical modelling and numerical challenges.

    Science.gov (United States)

    Murillo, Javier; Navas-Montilla, Adrian

    2017-04-01

    Stratified flows appear in a wide variety of fundamental problems in the hydrological and geophysical sciences. They range from hyperconcentrated floods carrying sediment that cause collapse, landslides and debris flows, to suspended material in turbidity currents where turbulence is a key process. Stratified flows also exhibit horizontally variable density. Depending on the case, density varies according to the volumetric concentration of different components or species that can represent transported or suspended materials or soluble substances. Multilayer approaches based on the shallow water equations provide suitable models but are not free from difficulties when moving to the numerical resolution of the governing equations. Considering the variety of temporal and spatial scales, transfer of mass and energy among layers may strongly differ from one case to another. As a consequence, in order to provide accurate solutions, very high order methods of proved quality are demanded. Under these complex scenarios it is necessary to observe that the numerical solution provides the expected order of accuracy but also converges to the physically based solution, which is not an easy task. To this purpose, this work will focus on the use of energy-balanced augmented solvers, in particular the Augmented Roe Flux ADER scheme. References: J. Murillo, P. García-Navarro, Wave Riemann description of friction terms in unsteady shallow flows: Application to water and mud/debris floods. J. Comput. Phys. 231 (2012) 1963-2001. J. Murillo, B. Latorre, P. García-Navarro. A Riemann solver for unsteady computation of 2D shallow flows with variable density. J. Comput. Phys. 231 (2012) 4775-4807. A. Navas-Montilla, J. Murillo, Energy balanced numerical schemes with very high order. The Augmented Roe Flux ADER scheme. Application to the shallow water equations, J. Comput. Phys. 290 (2015) 188-218. A. Navas-Montilla, J. Murillo, Asymptotically and exactly energy balanced augmented flux

  14. Prediction models : the right tool for the right problem

    NARCIS (Netherlands)

    Kappen, Teus H.; Peelen, Linda M.

    2016-01-01

    PURPOSE OF REVIEW: Perioperative prediction models can help to improve personalized patient care by providing individual risk predictions to both patients and providers. However, the scientific literature on prediction model development and validation can be quite technical and challenging to unders

  15. Matter density perturbation and power spectrum in running vacuum model

    Science.gov (United States)

    Geng, Chao-Qiang; Lee, Chung-Chi

    2017-01-01

    We investigate the matter density perturbation δm and power spectrum P(k) in the running vacuum model, with the cosmological constant being a function of the Hubble parameter, given by Λ = Λ0 + 6σHH0 + 3νH², in which the linear and quadratic terms of H would originate from the QCD vacuum condensation and the cosmological renormalization group, respectively. Taking the dark energy perturbation into consideration, we derive the evolution equation for δm and find a specific scale dcr = 2π/kcr, which divides the evolution of the universe into the sub-interaction and super-interaction regimes, corresponding to k ≪ kcr and k ≫ kcr, respectively. For the former, the evolution of δm has the same behaviour as that in the Λ cold dark matter (ΛCDM) model, while for the latter, the growth of δm is frozen (greatly enhanced) when ν + σ > 0 (< 0), with the allowed parameter range being extremely narrow, ν, |σ| ≲ O(10^{-7}).

  16. Spatially-explicit models of global tree density

    Science.gov (United States)

    Glick, Henry B.; Bettigole, Charlie; Maynard, Daniel S.; Covey, Kristofer R.; Smith, Jeffrey R.; Crowther, Thomas W.

    2016-08-01

    Remote sensing and geographic analysis of woody vegetation provide means of evaluating the distribution of natural resources, patterns of biodiversity and ecosystem structure, and socio-economic drivers of resource utilization. While these methods bring geographic datasets with global coverage into our day-to-day analytic spheres, many of the studies that rely on these strategies do not capitalize on the extensive collection of existing field data. We present the methods and maps associated with the first spatially-explicit models of global tree density, which relied on over 420,000 forest inventory field plots from around the world. This research is the result of a collaborative effort engaging over 20 scientists and institutions, and capitalizes on an array of analytical strategies. Our spatial data products offer precise estimates of the number of trees at global and biome scales, but should not be used for local-level estimation. At larger scales, these datasets can contribute valuable insight into resource management, ecological modelling efforts, and the quantification of ecosystem services.

  17. A unified model of density limit in fusion plasmas

    Science.gov (United States)

    Zanca, P.; Sattin, F.; Escande, D. F.; Pucella, G.; Tudisco, O.

    2017-05-01

    In this work we identify by analytical and numerical means the conditions for the existence of a magnetic and thermal equilibrium of a cylindrical plasma, in the presence of Ohmic and/or additional power sources, heat conduction and radiation losses by light impurities. The boundary of the solution space that admits realistic temperature profiles with a small edge value mathematically takes the form of a density limit (DL). Compared to previous similar analyses, the present work benefits from dealing with a more accurate set of equations. This refinement is elementary, but decisive, since it discloses a tenuous dependence of the DL on the thermal transport for configurations with an applied electric field. Thanks to this property, the DL scaling law is recovered almost identically for two largely different devices such as the ohmic tokamak and the reversed field pinch. In particular, they have in common a Greenwald scaling, depending linearly on the plasma current, quantitatively consistent with experimental results. In the tokamak case the DL dependence on any additional heating approximately follows a 0.5 power law, which is compatible with L-mode experiments. For a purely externally heated configuration, taken as a cylindrical approximation of the stellarator, the DL dependence on transport is found to be stronger. By adopting suitable transport models, the DL takes on a Sudo-like form, in fair agreement with LHD experiments. Overall, the model provides a good zeroth-order quantitative description of the DL, applicable to widely different configurations.
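
    For reference, the Greenwald scaling mentioned above is usually quoted in the empirical form n_G [10^20 m^-3] = I_p [MA] / (π a² [m²]); the snippet below simply evaluates that textbook expression and is not the unified cylindrical model derived in the paper.

    ```python
    # The Greenwald limit referenced above, in its standard empirical form
    # n_G [10^20 m^-3] = I_p [MA] / (pi a^2 [m^2]); this is the textbook scaling,
    # not the unified model of the paper.
    import math

    def greenwald_density_limit(plasma_current_MA: float, minor_radius_m: float) -> float:
        """Return the Greenwald density limit in units of 10^20 m^-3."""
        return plasma_current_MA / (math.pi * minor_radius_m ** 2)

    # Example: a medium-sized tokamak with I_p = 1.0 MA and a = 0.5 m.
    print(f"n_G = {greenwald_density_limit(1.0, 0.5):.2f} x 10^20 m^-3")
    ```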

  18. Foundation Settlement Prediction Based on a Novel NGM Model

    Directory of Open Access Journals (Sweden)

    Peng-Yu Chen

    2014-01-01

    Full Text Available Prediction of foundation or subgrade settlement is very important during engineering construction. Given that there are many settlement-time sequences with a nonhomogeneous index trend, a novel grey forecasting model called the NGM (1,1,k,c) model is proposed in this paper. With an optimized whitenization differential equation, the proposed NGM (1,1,k,c) model has the property of white exponential law coincidence and can precisely predict a pure nonhomogeneous index sequence. We used two case studies to verify the predictive effect of the NGM (1,1,k,c) model for settlement prediction. The results show that this model can achieve excellent prediction accuracy; thus, the model is quite suitable for simulation and prediction of approximately nonhomogeneous index sequences and has excellent application value in settlement prediction.
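
    For orientation, the sketch below implements the classical GM(1,1) grey model on a made-up settlement series; the NGM(1,1,k,c) variant proposed in the paper adds terms for non-homogeneous index trends and is not reproduced here.

    ```python
    # Sketch of the classical GM(1,1) grey model as a baseline; the paper's
    # NGM(1,1,k,c) variant is not reproduced. Settlement values are invented.
    import numpy as np

    def gm11_forecast(x0, n_ahead=3):
        """Fit GM(1,1) to a positive series x0 and forecast n_ahead steps."""
        x0 = np.asarray(x0, dtype=float)
        x1 = np.cumsum(x0)                                  # accumulated series
        z1 = 0.5 * (x1[1:] + x1[:-1])                       # background values
        B = np.column_stack([-z1, np.ones_like(z1)])
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]    # grey parameters
        k = np.arange(len(x0) + n_ahead)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
        return np.diff(np.concatenate([[0.0], x1_hat]))     # back to original scale

    settlement_mm = [12.1, 14.8, 17.9, 21.5, 25.6, 30.3]    # hypothetical observations
    print(gm11_forecast(settlement_mm, n_ahead=2).round(1))
    ```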

  19. Automated volumetric breast density derived by shape and appearance modeling

    Science.gov (United States)

    Malkov, Serghei; Kerlikowske, Karla; Shepherd, John

    2014-03-01

    The image shape and texture (appearance) estimation designed for facial recognition is a novel and promising approach for application in breast imaging. The purpose of this study was to apply a shape and appearance model to automatically estimate percent breast fibroglandular volume (%FGV) using digital mammograms. We built a shape and appearance model using 2000 full-field digital mammograms from the San Francisco Mammography Registry with known %FGV measured by a single-energy absorptiometry method. An affine transformation was used to remove rotation, translation and scale. Principal Component Analysis (PCA) was applied to extract significant and uncorrelated components of %FGV. To build an appearance model, we transformed the breast images into the mean texture image by piecewise linear image transformation. Using PCA, the image pixel grey-scale values were converted into a reduced set of shape and texture features. Stepwise regression with forward selection and backward elimination was used to estimate the outcome %FGV from shape and appearance features and other system parameters. The shape and appearance scores were found to correlate moderately with breast %FGV, dense tissue volume, actual breast volume, body mass index (BMI) and age. The highest Pearson correlation coefficient was 0.77, between the first shape PCA component and actual breast volume. The stepwise regression method with ten-fold cross-validation to predict %FGV from shape and appearance variables and other system outcome parameters generated a model with a correlation of r2 = 0.8. In conclusion, the shape and appearance model demonstrated excellent feasibility for extracting variables useful for automatic %FGV estimation. Further exploration and testing of this approach is warranted.

  20. Predictability of the Indian Ocean Dipole in the coupled models

    Science.gov (United States)

    Liu, Huafeng; Tang, Youmin; Chen, Dake; Lian, Tao

    2017-03-01

    In this study, the Indian Ocean Dipole (IOD) predictability, measured by the Indian Dipole Mode Index (DMI), is comprehensively examined at the seasonal time scale, including its actual prediction skill and potential predictability, using the ENSEMBLES multi-model ensembles and the recently developed information-based theoretical framework of predictability. It was found that all model predictions have useful skill, normally defined as an anomaly correlation coefficient larger than 0.5, only at around 2-3 month leads. This is mainly because there are more false alarms in predictions as lead time increases. The DMI predictability has significant seasonal variation, and the predictions whose target seasons are boreal summer (JJA) and autumn (SON) are more reliable than those for other seasons. All of the models fail to predict the IOD onset before May and suffer from the winter (DJF) predictability barrier. The potential predictability study indicates that, with improvements in model development and initialization, the prediction of IOD onset is likely to improve but the winter barrier cannot be overcome. The IOD predictability also has decadal variation, with high skill during the 1960s and the early 1990s, and low skill during the early 1970s and early 1980s, which is very consistent with the potential predictability. The main factors controlling the IOD predictability, including its seasonal and decadal variations, are also analyzed in this study.
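
    The skill threshold quoted above is based on the anomaly correlation coefficient; a minimal sketch of that measure, applied to synthetic DMI series, is given below.

    ```python
    # Sketch of the skill measure referred to in the abstract: the anomaly
    # correlation coefficient (ACC) between predicted and observed Dipole Mode
    # Index anomalies, with 0.5 taken as the threshold for "useful" skill.
    # The DMI series below are synthetic.
    import numpy as np

    def anomaly_correlation(forecast, observed):
        """Pearson correlation of anomalies (climatological means removed)."""
        f = np.asarray(forecast) - np.mean(forecast)
        o = np.asarray(observed) - np.mean(observed)
        return np.sum(f * o) / np.sqrt(np.sum(f ** 2) * np.sum(o ** 2))

    rng = np.random.default_rng(3)
    observed_dmi = rng.normal(0, 0.4, 40)
    forecast_dmi = 0.6 * observed_dmi + rng.normal(0, 0.3, 40)   # imperfect forecast

    acc = anomaly_correlation(forecast_dmi, observed_dmi)
    print(f"ACC = {acc:.2f}  ->  {'useful' if acc > 0.5 else 'not useful'} skill")
    ```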

  1. Impaired High-Density Lipoprotein Anti-Oxidant Function Predicts Poor Outcome in Critically Ill Patients.

    Directory of Open Access Journals (Sweden)

    Lore Schrutka

    Full Text Available Oxidative stress affects clinical outcome in critically ill patients. Although high-density lipoprotein (HDL) particles generally possess anti-oxidant capacities, deleterious properties of HDL have been described in acutely ill patients. The impact of anti-oxidant HDL capacities on clinical outcome in critically ill patients is unknown. We therefore analyzed the predictive value of anti-oxidant HDL function on mortality in an unselected cohort of critically ill patients. We prospectively enrolled 270 consecutive patients admitted to a university-affiliated intensive care unit (ICU) and determined anti-oxidant HDL function using the HDL oxidant index (HOI). Based on their HOI, the study population was stratified into patients with impaired anti-oxidant HDL function and the residual study population. During a median follow-up time of 9.8 years (IQR: 9.2 to 10.0), 69% of patients died. Cox regression analysis revealed a significant and independent association between impaired anti-oxidant HDL function and short-term mortality with an adjusted HR of 1.65 (95% CI 1.22-2.24; p = 0.001) as well as 10-year mortality with an adjusted HR of 1.19 (95% CI 1.02-1.40; p = 0.032) when compared to the residual study population. Anti-oxidant HDL function correlated with the amount of oxidative stress as determined by Cu/Zn superoxide dismutase (r = 0.38; p < 0.001). Impaired anti-oxidant HDL function represents a strong and independent predictor of 30-day mortality as well as long-term mortality in critically ill patients.
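
    A hedged sketch of the survival analysis described above follows: a Cox proportional hazards fit relating an impaired-HDL indicator to mortality. The data are simulated, and the lifelines package is used as one possible tool rather than the authors' software.

    ```python
    # Hedged sketch of the analysis described above: a Cox proportional hazards
    # model relating impaired anti-oxidant HDL function (a binary flag derived
    # from the HDL oxidant index) to mortality. Data are simulated.
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(4)
    n = 270
    impaired_hdl = rng.integers(0, 2, n)
    age = rng.normal(65, 12, n)
    # Simulate follow-up times with a higher hazard for impaired HDL function.
    baseline = rng.exponential(8.0, n)
    time_years = baseline / np.exp(0.5 * impaired_hdl + 0.02 * (age - 65))
    event = (time_years < 10).astype(int)          # death observed within follow-up
    time_years = np.minimum(time_years, 10)        # administrative censoring at 10 y

    df = pd.DataFrame({"time": time_years, "event": event,
                       "impaired_hdl": impaired_hdl, "age": age})
    cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
    cph.print_summary()                             # hazard ratios with 95% CIs
    ```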

  2. Modelling of the reactive sputtering process with non-uniform discharge current density and different temperature conditions

    Science.gov (United States)

    Vašina, P; Hytková, T; Eliáš, M

    2009-05-01

    The majority of current models of reactive magnetron sputtering assume a uniform shape of the discharge current density and the same temperature near the target and the substrate. However, in a real experimental set-up, the presence of the magnetic field causes high-density plasma to form in front of the cathode in the shape of a toroid. Consequently, the discharge current density is laterally non-uniform. In addition to this, the heating of the background gas by sputtered particles, usually referred to as gas rarefaction, plays an important role. This paper presents an extended model of reactive magnetron sputtering that assumes a non-uniform discharge current density and accommodates the gas rarefaction effect. It is devoted mainly to the study of the behaviour of the reactive sputtering rather than to the prediction of the coating properties. Outputs of this model are compared with those that assume a uniform discharge current density and a uniform temperature profile in the deposition chamber. Particular attention is paid to the modelling of the radial variation of the target composition near transitions from the metallic to the compound mode and vice versa. A study of the target utilization in the metallic and compound modes is performed for two different discharge current density profiles corresponding to typical two-pole and multipole magnetics currently available on the market. Different shapes of the discharge current density were tested. Finally, hysteresis curves are plotted for various temperature conditions in the reactor.

  3. Modelling and simulation of double chamber microbial fuel cell. Cell voltage, power density and temperature variation with process parameters

    Energy Technology Data Exchange (ETDEWEB)

    Shankar, Ravi; Mondal, Prasenjit; Chand, Shri [Indian Institute of Technology Roorkee, Uttaranchal (India). Dept. of Chemical Engineering

    2013-11-01

    In the present paper, steady-state models of a double chamber glucose glutamic acid microbial fuel cell (GGA-MFC) under continuous operation have been developed and solved using Matlab 2007 software. Experimental data reported in recent literature have been used for the validation of the models. The present models predict the cell voltage and cell power density with 19-44% errors, which is up to 20% less than the errors in the cell voltage predictions made in some recent literature for the same MFC, where the effects of the difference in pH and ionic conductivity between the anodic and cathodic solutions on cell voltage were not incorporated in the model equations. The models also describe the changes in anodic and cathodic chamber temperature due to the increase in substrate concentration and cell current density. The temperature profile across the membrane thickness has also been studied. (orig.)

  4. Self-consistent modeling of DEMOs with 1.5D BALDUR integrated predictive modeling code

    Science.gov (United States)

    Wisitsorasak, A.; Somjinda, B.; Promping, J.; Onjun, T.

    2017-02-01

    Self-consistent simulations of four DEMO designs proposed by teams from China, Europe, India, and Korea are carried out using the BALDUR integrated predictive modeling code, in which theory-based models are used for both core transport and boundary conditions. In these simulations, a combination of the NCLASS neoclassical transport model and the multimode (MMM95) anomalous transport model is used to compute core transport. The boundary is taken to be at the top of the pedestal, where the pedestal values are described using a pedestal temperature model based on a combination of magnetic and flow shear stabilization, pedestal width scaling and an infinite-n ballooning pressure gradient model, and a pedestal density model based on a line average density. Even though an optimistic scenario is considered, the simulation results suggest that, with the exclusion of ELMs, the fusion gain Q obtained for these reactors is pessimistic compared to their original designs, i.e. 52% for the Chinese design, 63% for the European design, 22% for the Korean design, and 26% for the Indian design. In addition, the predicted bootstrap current fractions are also found to be lower than their original designs, as fractions of the original values, i.e. 0.49 (China), 0.66 (Europe), and 0.58 (India). Furthermore, in terms of sensitivity, it is found that increasing the auxiliary heating power and the electron line average density from their design values yields an enhancement of fusion performance. In addition, inclusion of sawtooth oscillation effects demonstrates positive impacts on the plasma and fusion performance in the European, Indian and Korean DEMOs, but degrades the performance in the Chinese DEMO.

  5. Electromagnetic Model Reliably Predicts Radar Scattering Characteristics of Airborne Organisms

    Science.gov (United States)

    Mirkovic, Djordje; Stepanian, Phillip M.; Kelly, Jeffrey F.; Chilson, Phillip B.

    2016-10-01

    The radar scattering characteristics of aerial animals are typically obtained from controlled laboratory measurements of a freshly harvested specimen. These measurements are tedious to perform, difficult to replicate, and typically yield only a small subset of the full azimuthal, elevational, and polarimetric radio scattering data. As an alternative, biological applications of radar often assume that the radar cross sections of flying animals are isotropic, since sophisticated computer models are required to estimate the 3D scattering properties of objects having complex shapes. Using the method of moments implemented in the WIPL-D software package, we show for the first time that such electromagnetic modeling techniques (typically applied to man-made objects) can accurately predict organismal radio scattering characteristics from an anatomical model: here the Brazilian free-tailed bat (Tadarida brasiliensis). The simulated scattering properties of the bat agree with controlled measurements and radar observations made during a field study of bats in flight. This numerical technique can produce the full angular set of quantitative polarimetric scattering characteristics, while eliminating many practical difficulties associated with physical measurements. Such a modeling framework can be applied for bird, bat, and insect species, and will help drive a shift in radar biology from a largely qualitative and phenomenological science toward quantitative estimation of animal densities and taxonomic identification.

  6. Nonconvex model predictive control for commercial refrigeration

    Science.gov (United States)

    Gybel Hovgaard, Tobias; Boyd, Stephen; Larsen, Lars F. S.; Bagterp Jørgensen, John

    2013-08-01

    We consider the control of a commercial multi-zone refrigeration system, consisting of several cooling units that share a common compressor, which is used to cool multiple areas or rooms. In each time period we choose the cooling capacity for each unit and a common evaporation temperature. The goal is to minimise the total energy cost, using real-time electricity prices, while obeying temperature constraints on the zones. We propose a variation on model predictive control to achieve this goal. When the right variables are used, the dynamics of the system are linear and the constraints are convex. The cost function, however, is nonconvex due to the temperature dependence of thermodynamic efficiency. To handle this nonconvexity we propose a sequential convex optimisation method, which typically converges within about five iterations. We employ a fast convex quadratic programming solver to carry out the iterations, which is more than fast enough to run in real time. We demonstrate our method on a realistic model, with a full-year simulation and 15-minute time periods, using historical electricity prices and weather data, as well as random variations in thermal load. These simulations show substantial cost savings, on the order of 30%, compared to a standard thermostat-based control system. Perhaps more importantly, we see that the method exhibits a sophisticated response to real-time variations in electricity prices. This demand response is critical to help balance real-time uncertainties in generation capacity associated with large penetration of intermittent renewable energy sources in a future smart grid.
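
    To make the sequential convex idea concrete, the toy problem below freezes the temperature-dependent COP at the previous iterate, so the remaining energy-cost problem over one horizon is convex; all numbers (dynamics, prices, limits) are invented for a single zone and do not correspond to the paper's model.

    ```python
    # Toy illustration of one sequential convex step: with the (nonconvex)
    # temperature-dependent COP frozen at the previous iterate, the remaining
    # energy-cost MPC problem is a convex program. Single zone, made-up numbers.
    import numpy as np
    import cvxpy as cp

    H = 24                                  # horizon (hours)
    price = 0.2 + 0.1 * np.sin(np.linspace(0, 2 * np.pi, H))   # electricity price
    T_out = 28 + 4 * np.sin(np.linspace(0, 2 * np.pi, H))      # outdoor temperature
    cop_prev = np.full(H, 3.0)              # COP from the previous SCP iterate (fixed)

    T = cp.Variable(H + 1)                  # zone temperature trajectory
    u = cp.Variable(H, nonneg=True)         # cooling power delivered to the zone

    constraints = [T[0] == 5.0]
    for t in range(H):
        constraints += [T[t + 1] == T[t] + 0.1 * (T_out[t] - T[t]) - 0.05 * u[t],
                        T[t + 1] >= 2.0, T[t + 1] <= 5.0, u[t] <= 100.0]

    # Electric energy = cooling / COP; with COP fixed, the cost is linear in u.
    cost = cp.sum(cp.multiply(price / cop_prev, u))
    cp.Problem(cp.Minimize(cost), constraints).solve()
    print(f"one SCP iteration: cost = {cost.value:.2f}, peak cooling = {u.value.max():.1f}")
    ```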

  7. Leptogenesis in minimal predictive seesaw models

    Energy Technology Data Exchange (ETDEWEB)

    Björkeroth, Fredrik [School of Physics and Astronomy, University of Southampton,Southampton, SO17 1BJ (United Kingdom); Anda, Francisco J. de [Departamento de Física, CUCEI, Universidad de Guadalajara,Guadalajara (Mexico); Varzielas, Ivo de Medeiros; King, Stephen F. [School of Physics and Astronomy, University of Southampton,Southampton, SO17 1BJ (United Kingdom)

    2015-10-15

    We estimate the Baryon Asymmetry of the Universe (BAU) arising from leptogenesis within a class of minimal predictive seesaw models involving two right-handed neutrinos and simple Yukawa structures with one texture zero. The two right-handed neutrinos are dominantly responsible for the “atmospheric” and “solar” neutrino masses with Yukawa couplings to (ν_e, ν_μ, ν_τ) proportional to (0,1,1) and (1,n,n−2), respectively, where n is a positive integer. The neutrino Yukawa matrix is therefore characterised by two proportionality constants with their relative phase providing a leptogenesis-PMNS link, enabling the lightest right-handed neutrino mass to be determined from neutrino data and the observed BAU. We discuss an SU(5) SUSY GUT example, where A_4 vacuum alignment provides the required Yukawa structures with n=3, while a ℤ_9 symmetry fixes the relative phase to be a ninth root of unity.

  8. QSPR Models for Octane Number Prediction

    Directory of Open Access Journals (Sweden)

    Jabir H. Al-Fahemi

    2014-01-01

    Full Text Available Quantitative structure-property relationship (QSPR) modelling is performed as a means to predict the octane number of hydrocarbons by correlating the property with parameters calculated from molecular structure; such parameters are molecular mass M, hydration energy EH, boiling point BP, octanol/water distribution coefficient logP, molar refractivity MR, critical pressure CP, critical volume CV, and critical temperature CT. Principal component analysis (PCA) and the multiple linear regression technique (MLR) were performed to examine the relationship between multiple variables of the above parameters and the octane number of hydrocarbons. The results of PCA explain the interrelationships between octane number and the different variables. Correlation coefficients were calculated using M.S. Excel to examine the relationship between multiple variables of the above parameters and the octane number of hydrocarbons. The data set was split into a training set of 40 hydrocarbons and a validation set of 25 hydrocarbons. The linear relationship between the selected descriptors and the octane number has a coefficient of determination (R2 = 0.932), statistical significance (F = 53.21), and standard error (s = 7.7). The obtained QSPR model was applied to the validation set of octane numbers for hydrocarbons, giving RCV2 = 0.942 and s = 6.328.
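
    A minimal sketch of the MLR step with a 40/25 train/validation split follows, using random placeholder descriptors rather than the actual hydrocarbon data set.

    ```python
    # Sketch of the QSPR workflow above: a multiple linear regression of octane
    # number on a few molecular descriptors, with a train/validation split.
    # Descriptor values are random placeholders, not real hydrocarbon data.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import r2_score, mean_squared_error
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(5)
    n = 65                                   # 40 training + 25 validation, as in the abstract
    descriptors = rng.normal(size=(n, 4))    # e.g. M, logP, boiling point, molar refractivity
    octane = 90 + descriptors @ np.array([3.0, -5.0, 2.0, 1.5]) + rng.normal(0, 3, n)

    X_tr, X_val, y_tr, y_val = train_test_split(descriptors, octane,
                                                test_size=25, random_state=0)
    mlr = LinearRegression().fit(X_tr, y_tr)
    y_hat = mlr.predict(X_val)
    print(f"validation R^2 = {r2_score(y_val, y_hat):.3f}, "
          f"s = {np.sqrt(mean_squared_error(y_val, y_hat)):.2f}")
    ```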

  9. Modeling of the transient mobility in disordered organic semiconductors with a Gaussian density of states

    Science.gov (United States)

    Germs, W. Chr.; van der Holst, J. J. M.; van Mensfoort, S. L. M.; Bobbert, P. A.; Coehoorn, R.

    2011-10-01

    The charge-carrier mobility in organic semiconductors is often studied using non-steady-state experiments. However, energetic disorder can severely hamper the analysis due to the occurrence of a strong time dependence of the mobility caused by carrier relaxation. The multiple-trapping model is known to provide an accurate description of this effect. However, the value of the conduction level energy and the hopping attempt rate, which enter the model as free parameters, are not a priori known for a given material. We show how for the case of a Gaussian density of states both parameters can be deduced from the parameter values used to describe the measured dc current-voltage characteristics within the framework of the extended Gaussian disorder model. The approach is validated using three-dimensional Monte Carlo modeling. In the analysis, the charge-density dependence of the time-dependent mobility is included. The model is shown to successfully predict the low-frequency differential capacitance of sandwich-type devices based on a polyfluorene copolymer.
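
    Two standard disorder-model relations sit behind this discussion and are sketched below: the Gaussian density of states of width σ, and the dilute-limit equilibrium carrier energy −σ²/(k_B T) toward which carriers relax. The numerical values are illustrative, not the paper's parameters.

    ```python
    # Background formulas only (standard Gaussian-disorder results, not the
    # paper's specific parameterization): a Gaussian density of states of width
    # sigma and, in the dilute (Boltzmann) limit, the equilibrium mean carrier
    # energy <E> = -sigma^2 / (k_B T) toward which carriers relax.
    import numpy as np

    K_B = 8.617e-5  # Boltzmann constant, eV/K

    def gaussian_dos(E, sigma, N_t=1.0):
        """Gaussian density of states centred at E = 0 (energies in eV)."""
        return N_t / (sigma * np.sqrt(2 * np.pi)) * np.exp(-E**2 / (2 * sigma**2))

    sigma = 0.1   # eV, a typical disorder strength assumed for illustration
    T = 300.0     # K
    print(f"DOS at E = -0.2 eV: {gaussian_dos(-0.2, sigma):.3f} (relative units)")
    print(f"equilibrium carrier energy <E> = {-sigma**2 / (K_B * T):.3f} eV "
          f"(relative to the DOS centre)")
    ```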

  10. Mammary gland density predicts the cancer inhibitory activity of the N-3 to N-6 ratio of dietary fat.

    Science.gov (United States)

    Zhu, Zongjian; Jiang, Weiqin; McGinley, John N; Prokopczyk, Bogden; Richie, John P; El Bayoumy, Karam; Manni, Andrea; Thompson, Henry J

    2011-10-01

    This study investigated the effect of a broad range of dietary ratios of n-3:n-6 fatty acids on mammary gland density and mammary cancer risk. Cancer was induced in female rats by N-methyl-N-nitrosourea. Purified diet that provided 30% of dietary kilocalories from fat was formulated to contain ratios of n-3:n-6 fatty acids from 25:1 to 1:25. Mammary gland density was determined by digital analysis, fatty acids by gas chromatography/flame ionization detection, and other plasma analytes via ELISA. Mammary gland density was reduced dose dependently at n-3:n-6 ratios from 1:1 to 25:1 (r = -0.477, P = 0.038), with a 20.3% decrease of mammary gland density between n-3:n-6 of 1:1 versus 25:1, P effect of the n-3:n-6 ratio on plasma leptin (decreased, P = 0.005) and adiponectin (increased, P tissue function was modulated. However, neither cytokine was predictive of mammary gland density. Plasma insulin-like growth factor I (IGF-I) decreased with increasing dietary n-3:n-6 ratio (P = 0.004) and was predictive of the changes in mammary gland density (r = 0.362, P effects in the presence or absence of hormonal regulation of carcinogenesis, and (iii) signaling pathways regulated by IGF-I are potential targets for further mechanistic investigation.

  11. Theoretical Uncertainties due to AGN Subgrid Models in Predictions of Galaxy Cluster Observable Properties

    CERN Document Server

    Yang, H -Y K; Ricker, P M

    2012-01-01

    Cosmological constraints derived from galaxy clusters rely on accurate predictions of cluster observable properties, in which feedback from active galactic nuclei (AGN) is a critical component. In order to model the physical effects due to supermassive black holes (SMBH) on cosmological scales, subgrid modeling is required, and a variety of implementations have been developed in the literature. However, theoretical uncertainties due to model and parameter variations are not yet well understood, limiting the predictive power of simulations including AGN feedback. By performing a detailed parameter sensitivity study in a single cluster using several commonly-adopted AGN accretion and feedback models with FLASH, we quantify the model uncertainties in predictions of cluster integrated properties. We find that quantities that are more sensitive to gas density have larger uncertainties (~20% for Mgas and a factor of ~2 for Lx at R500), whereas Tx, Ysz, and Yx are more robust (~10-20% at R500). To make predictions b...

  12. A density-functional-theory-based finite element model to study the mechanical properties of zigzag phosphorene nanotubes

    Science.gov (United States)

    Ansari, R.; Shahnazari, A.; Rouhi, S.

    2017-04-01

    In this paper, density functional theory calculations are used to obtain the elastic properties of zigzag phosphorene nanotubes. Besides, based on the similarity between phosphorene nanotubes and a space-frame structure, a three-dimensional finite element model is proposed in which the atomic bonds are simulated by beam elements. The results of density functional theory are employed to compute the properties of the beam elements. Finally, using the proposed finite element model, the elastic modulus of zigzag phosphorene nanotubes is computed. It is shown that phosphorene nanotubes with larger radii have a larger Young's modulus. Comparing the results of the finite element model with those of density functional theory, it is concluded that the proposed model can predict the elastic modulus of phosphorene nanotubes with good accuracy.

  13. Offline modeling for product quality prediction of mineral processing using modeling error PDF shaping and entropy minimization.

    Science.gov (United States)

    Ding, Jinliang; Chai, Tianyou; Wang, Hong

    2011-03-01

    This paper presents a novel offline modeling approach for product quality prediction of mineral processing, which consists of a number of unit processes in series. The prediction of the product quality of the whole mineral process (i.e., the mixed concentrate grade) plays an important role, and the establishment of its predictive model is a key issue for plantwide optimization. For this purpose, a hybrid modeling approach for mixed concentrate grade prediction is proposed, which consists of a linear model and a nonlinear model. The least-squares support vector machine is adopted to establish the nonlinear model. The inputs of the predictive model are the performance indices of each unit process, while the output is the mixed concentrate grade. In this paper, the model parameter selection is transformed into the shape control of the probability density function (PDF) of the modeling error. In this context, both PDF-control-based and minimum-entropy-based model parameter selection approaches are proposed. Indeed, this is the first time that the PDF shape control idea has been used to deal with system modeling, where the key idea is to tune model parameters so that either the modeling error PDF is controlled to follow a target PDF or the modeling error entropy is minimized. The experimental results using real plant data and a comparison of the two approaches are discussed. The results show the effectiveness of the proposed approaches.
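
    A loose sketch of the minimum-entropy idea follows: select a model parameter (here a ridge penalty standing in for the paper's LS-SVM hyperparameters) by minimizing a histogram estimate of the modeling-error entropy. This illustrates the principle only, not the paper's algorithm.

    ```python
    # Hedged sketch of the core idea: pick model parameters so that the
    # distribution of the modeling error is as peaked as possible, here by
    # minimizing a histogram estimate of the residual entropy over a grid of
    # ridge penalties. The LS-SVM and PDF-shaping machinery is not reproduced.
    import numpy as np
    from sklearn.linear_model import Ridge

    def residual_entropy(residuals, bins=30):
        """Shannon entropy (nats) of a histogram estimate of the residual PDF."""
        p, _ = np.histogram(residuals, bins=bins)
        p = p[p > 0] / p.sum()
        return -np.sum(p * np.log(p))

    rng = np.random.default_rng(6)
    X = rng.normal(size=(300, 6))
    y = X @ rng.normal(size=6) + 0.3 * rng.standard_t(df=3, size=300)   # heavy-tailed noise

    best = min(
        ((alpha, residual_entropy(y - Ridge(alpha=alpha).fit(X, y).predict(X)))
         for alpha in [0.01, 0.1, 1.0, 10.0, 100.0]),
        key=lambda t: t[1],
    )
    print(f"alpha with minimum residual entropy: {best[0]} (H = {best[1]:.3f} nats)")
    ```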

  14. Testing how geophysics can reduce the uncertainty of groundwater model predictions

    DEFF Research Database (Denmark)

    Christensen, Nikolaj Kruse; Christensen, Steen; Ferre, Ty

    2014-01-01

    present a modeling platform that can be used to examine the conditions that support the use of each inversion approach for efficient and effective use of all data to constrain hydrologic models. We have developed a synthetic “test-bench environment” to test the advantages and limitations of alternative...... density and coverage. Finally, these synthetic data sets can be interpreted using any hydrogeophysical inversion scheme and the resulting predictions can be compared with predictions from the ‘true’ model. The modular nature of this platform allows for investigations of the role of inversion approach......, data density, data quality, and uncertainty in petrophysical relationships on the accuracy of hydrologic predictions. We will demonstrate the “test-bench” by using a hydrogeological system with a buried valley eroded into an impermeable low-resistivity substratum; the buried valley is filled...

  15. Spin densities from subsystem density-functional theory: Assessment and application to a photosynthetic reaction center complex model

    Energy Technology Data Exchange (ETDEWEB)

    Solovyeva, Alisa [Gorlaeus Laboratories, Leiden Institute of Chemistry, Leiden University, P.O. Box 9502, 2300 RA Leiden (Netherlands); Technical University Braunschweig, Institute for Physical and Theoretical Chemistry, Hans-Sommer-Str. 10, 38106 Braunschweig (Germany); Pavanello, Michele [Gorlaeus Laboratories, Leiden Institute of Chemistry, Leiden University, P.O. Box 9502, 2300 RA Leiden (Netherlands); Neugebauer, Johannes [Technical University Braunschweig, Institute for Physical and Theoretical Chemistry, Hans-Sommer-Str. 10, 38106 Braunschweig (Germany)

    2012-05-21

    Subsystem density-functional theory (DFT) is a powerful and efficient alternative to Kohn-Sham DFT for large systems composed of several weakly interacting subunits. Here, we provide a systematic investigation of the spin-density distributions obtained in subsystem DFT calculations for radicals in explicit environments. This includes a small radical in a solvent shell, a π-stacked guanine-thymine radical cation, and a benchmark application to a model for the special pair radical cation, which is a dimer of bacteriochlorophyll pigments, from the photosynthetic reaction center of purple bacteria. We investigate the differences in the spin densities resulting from subsystem DFT and Kohn-Sham DFT calculations. In these comparisons, we focus on the problem of overdelocalization of spin densities due to the self-interaction error in DFT. It is demonstrated that subsystem DFT can reduce this problem, while still allowing spin-polarization effects crossing the boundaries of the subsystems to be described. In practical calculations of spin densities for radicals in a given environment, it may thus be a pragmatic alternative to Kohn-Sham DFT calculations. In our calculation on the special pair radical cation, we show that the coordinating histidine residues reduce the spin-density asymmetry between the two halves of this system, while inclusion of a larger binding pocket model increases this asymmetry. The unidirectional energy transfer in photosynthetic reaction centers is related to the asymmetry introduced by the protein environment.

  16. Plant physiological models of heat, water and photoinhibition stress for climate change modelling and agricultural prediction

    Science.gov (United States)

    Nicolas, B.; Gilbert, M. E.; Paw U, K. T.

    2015-12-01

    Soil-Vegetation-Atmosphere Transfer (SVAT) models are based upon well-understood steady-state photosynthetic physiology: the Farquhar-von Caemmerer-Berry (FvCB) model. However, representations of physiological stress and damage have not been successfully integrated into SVAT models. Generally, it has been assumed that plants will strive to conserve water at higher temperatures by reducing stomatal conductance or adjusting osmotic balance, until potentially damaging temperatures and the need for evaporative cooling become more important than water conservation. A key point is that damage is the result of combined stresses: drought leads to stomatal closure, less evaporative cooling, high leaf temperature, and less photosynthetic dissipation of absorbed energy, all coupled with high light (photosynthetic photon flux density; PPFD). This leads to excess energy absorbed by Photosystem II (PSII) and results in photoinhibition and damage, neither of which is included in SVAT models. Current representations of photoinhibition are treated as a function of PPFD, not as a function of photosynthesis constrained by heat or water. Thus, it seems unlikely that current models can predict responses of vegetation to climate variability and change. We propose a dynamic model of damage to Rubisco and RuBP regeneration that accounts, mechanistically, for the interactions between high temperature, light, and constrained photosynthesis under drought. Further, these predictions are illustrated by key experiments allowing model validation. We also integrated this new framework within the Advanced Canopy-Atmosphere-Soil Algorithm (ACASA). Preliminary results show that our approach can be used to predict reasonable photosynthetic dynamics. For instance, a leaf undergoing one day of drought stress will quickly decrease its maximum quantum yield of PSII (Fv/Fm), but will not recover to unstressed levels for several days. Consequently, the cumulative effect of photoinhibition on photosynthesis can cause

  17. Predictability in models of the atmospheric circulation.

    NARCIS (Netherlands)

    Houtekamer, P.L.

    1992-01-01

    It will be clear from the above discussions that skill forecasts are still in their infancy. Operational skill predictions do not exist. One is still struggling to prove that skill predictions, at any range, have any quality at all. It is not clear what the statistics of the analysis error are. The

  18. Metal oxide-graphene field-effect transistor: interface trap density extraction model

    Directory of Open Access Journals (Sweden)

    Faraz Najam

    2016-09-01

    Full Text Available A simple-to-implement model is presented to extract the interface trap density of graphene field-effect transistors. The presence of interface trap states detrimentally affects the device drain current–gate voltage relationship Ids–Vgs. At the moment, there is no analytical method available to extract the interface trap distribution of metal-oxide-graphene field-effect transistor (MOGFET) devices. The model presented here extracts the interface trap distribution of MOGFET devices making use of available experimental capacitance–gate voltage Ctot–Vgs data and a basic set of equations used to define the device physics of MOGFET devices. The model was used to extract the interface trap distribution of two experimental devices. Device parameters calculated using the interface trap distribution extracted from the model, including surface potential, interface trap charge and interface trap capacitance, compared very well with their respective experimental counterparts. The model enables accurate calculation of the surface potential affected by trap charge. Other models ignore the effect of trap charge and only calculate the ideal surface potential. Such an ideal surface potential, when used in a surface-potential-based drain current model, will result in an inaccurate prediction of the drain current. Accurate calculation of a surface potential that can later be used in a drain current model is highlighted as a major advantage of the model.

  19. Protein distance constraints predicted by neural networks and probability density functions

    DEFF Research Database (Denmark)

    Lund, Ole; Frimand, Kenneth; Gorodkin, Jan;

    1997-01-01

    We predict interatomic C-α distances by two independent data driven methods. The first method uses statistically derived probability distributions of the pairwise distance between two amino acids, whilst the latter method consists of a neural network prediction approach equipped with windows taki...... method based on the predicted distances is presented. A homepage with software, predictions and data related to this paper is available at http://www.cbs.dtu.dk/services/CPHmodels/...

  20. Detonability of white dwarf plasma: turbulence models at low densities

    Science.gov (United States)

    Fenn, D.; Plewa, T.

    2017-06-01

    We study the conditions required to produce self-sustained detonations in turbulent, carbon-oxygen degenerate plasma at low densities. We perform a series of three-dimensional hydrodynamic simulations of turbulence driven with various degrees of compressibility. The average conditions in the simulations are representative of models of merging binary white dwarfs. We find that material with very short ignition times is abundant when turbulence is driven compressively. This material forms contiguous structures that persist over many ignition times, and that we identify as prospective detonation kernels. Detailed analysis of prospective kernels reveals that these objects are centrally condensed and their shape is characterized by low curvature, supportive of self-sustained detonations. The key characteristic of the newly proposed detonation mechanism is thus a high degree of compressibility of the turbulent drive. The simulated detonation kernels have sizes notably smaller than the spatial resolution of any white dwarf merger simulation performed to date. The resolution required to resolve kernels is 0.1 km. Our results indicate a high probability of detonations in such well-resolved simulations of carbon-oxygen white dwarf mergers. These simulations will likely produce detonations in systems of lower total mass, thus broadening the population of white dwarf binaries capable of producing Type Ia supernovae. Consequently, we expect a downward revision of the lower limit of the total merger mass that is capable of producing a prompt detonation. We review application of the new detonation mechanism to various explosion scenarios of single, Chandrasekhar-mass white dwarfs.

  1. Modeling high-pressure adsorption of gas mixtures on activated carbon and coal using a simplified local-density model.

    Science.gov (United States)

    Fitzgerald, James E; Robinson, Robert L; Gasem, Khaled A M

    2006-11-07

    The simplified local-density (SLD) theory was investigated regarding its ability to provide accurate representations and predictions of high-pressure supercritical adsorption isotherms encountered in coalbed methane (CBM) recovery and CO2 sequestration. Attention was focused on the ability of the SLD theory to predict mixed-gas adsorption solely on the basis of information from pure gas isotherms using a modified Peng-Robinson (PR) equation of state (EOS). An extensive set of high-pressure adsorption measurements was used in this evaluation. These measurements included pure and binary mixture adsorption measurements for several gas compositions up to 14 MPa for Calgon F-400 activated carbon and three water-moistened coals. Also included were ternary measurements for the activated carbon and one coal. For the adsorption of methane, nitrogen, and CO2 on dry activated carbon, the SLD-PR can predict the component mixture adsorption within about 2.2 times the experimental uncertainty on average solely on the basis of pure-component adsorption isotherms. For the adsorption of methane, nitrogen, and CO2 on two of the three wet coals, the SLD-PR model can predict the component adsorption within the experimental uncertainties on average for all feed fractions (nominally molar compositions of 20/80, 40/60, 60/40, and 80/20) of the three binary gas mixture combinations, although predictions for some specific feed fractions are outside of their experimental uncertainties.
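
    The SLD-PR machinery itself is too involved to reproduce here; as a much simpler stand-in for the same workflow (fit pure-component isotherms, then predict mixture adsorption without any mixture data), the sketch below uses the extended Langmuir model on synthetic pure-gas data. All pressures, loadings and fitted parameters are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

# Stand-in for the SLD-PR workflow: fit pure-component Langmuir isotherms, then
# predict binary mixture adsorption with the extended Langmuir rule (no mixture
# data used in the fit).  Pressures, loadings and parameters are synthetic.

def langmuir(p, qmax, b):
    return qmax * b * p / (1.0 + b * p)

p = np.linspace(0.5, 14.0, 15)                         # pressure, MPa
q_ch4 = langmuir(p, 1.2, 0.45) + 0.01 * np.sin(p)      # "methane" loading, mmol/g
q_co2 = langmuir(p, 2.0, 0.90) + 0.01 * np.cos(p)      # "CO2" loading, mmol/g

par_ch4, _ = curve_fit(langmuir, p, q_ch4)
par_co2, _ = curve_fit(langmuir, p, q_co2)

def extended_langmuir(p_total, y1, par1, par2):
    """Component loadings for a binary feed with mole fraction y1 of component 1."""
    b1p = par1[1] * y1 * p_total
    b2p = par2[1] * (1.0 - y1) * p_total
    denom = 1.0 + b1p + b2p
    return par1[0] * b1p / denom, par2[0] * b2p / denom

q1, q2 = extended_langmuir(10.0, 0.4, par_ch4, par_co2)
print("predicted loadings at 10 MPa, 40/60 feed: CH4 %.2f, CO2 %.2f mmol/g" % (q1, q2))
```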

  2. Novel modeling of combinatorial miRNA targeting identifies SNP with potential role in bone density.

    Directory of Open Access Journals (Sweden)

    Claudia Coronnello

    Full Text Available MicroRNAs (miRNAs) are post-transcriptional regulators that bind to their target mRNAs through base complementarity. Predicting miRNA targets is a challenging task, and various studies have shown that existing algorithms suffer from a high number of false predictions and low to moderate overlap in their predictions. Until recently, very few algorithms considered the dynamic nature of the interactions, including the effect of less specific interactions, the miRNA expression level, and the effect of combinatorial miRNA binding. Addressing these issues can result in a more accurate miRNA:mRNA modeling with many applications, including efficient miRNA-related SNP evaluation. We present a novel thermodynamic model based on the Fermi-Dirac equation that incorporates miRNA expression in the prediction of target occupancy, and we show that it improves the performance of two popular single-miRNA target finders. Modeling combinatorial miRNA targeting is a natural extension of this model. Two other algorithms show improved prediction efficiency when combinatorial binding models were considered. ComiR (Combinatorial miRNA targeting), a novel algorithm we developed, incorporates the improved predictions of the four target finders into a single probabilistic score using ensemble learning. Combining target scores of multiple miRNAs using ComiR improves predictions over the naïve method for target combination. The ComiR scoring scheme can be used for identification of SNPs affecting miRNA binding. As proof of principle, ComiR identified rs17737058 as disruptive to the miR-488-5p:NCOA1 interaction, which we confirmed in vitro. We also found rs17737058 to be significantly associated with decreased bone mineral density (BMD) in two independent cohorts, indicating that the miR-488-5p/NCOA1 regulatory axis is likely critical in maintaining BMD in women. With increasing availability of comprehensive high-throughput datasets from patients, ComiR is expected to become an essential
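
    A hedged sketch of a Fermi-Dirac-style occupancy model in the spirit described above (this is not the ComiR code): each site's binding probability depends on a binding free energy and a chemical potential derived from miRNA expression, and combinatorial targeting is approximated by the probability that at least one miRNA occupies the 3'UTR. Energies, the temperature factor and the expression-to-potential mapping are illustrative assumptions.

```python
import numpy as np

# Fermi-Dirac-style site occupancy for miRNA targets (illustrative, not ComiR).
# A site with binding free energy dG is occupied with probability
# 1 / (1 + exp((dG - mu)/kT)), where mu is tied to the miRNA expression level.

KT = 0.6  # ~kcal/mol at 300 K

def site_occupancy(dG, expression, kt=KT):
    """Probability that one miRNA occupies its site."""
    mu = kt * np.log(expression)          # assumed expression-to-potential mapping
    return 1.0 / (1.0 + np.exp((dG - mu) / kt))

def combined_occupancy(dGs, expressions):
    """Probability that at least one of several miRNAs occupies the 3'UTR."""
    p = np.array([site_occupancy(g, e) for g, e in zip(dGs, expressions)])
    return 1.0 - np.prod(1.0 - p)

# Strong site / lowly expressed miRNA vs weak site / highly expressed miRNA.
print("combined occupancy: %.3f" % combined_occupancy([-3.0, -1.0], [0.05, 2.0]))
```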

  3. Allostasis: a model of predictive regulation.

    Science.gov (United States)

    Sterling, Peter

    2012-04-12

    The premise of the standard regulatory model, "homeostasis", is flawed: the goal of regulation is not to preserve constancy of the internal milieu. Rather, it is to continually adjust the milieu to promote survival and reproduction. Regulatory mechanisms need to be efficient, but homeostasis (error-correction by feedback) is inherently inefficient. Thus, although feedbacks are certainly ubiquitous, they could not possibly serve as the primary regulatory mechanism. A newer model, "allostasis", proposes that efficient regulation requires anticipating needs and preparing to satisfy them before they arise. The advantages: (i) errors are reduced in magnitude and frequency; (ii) response capacities of different components are matched -- to prevent bottlenecks and reduce safety factors; (iii) resources are shared between systems to minimize reserve capacities; (iv) errors are remembered and used to reduce future errors. This regulatory strategy requires a dedicated organ, the brain. The brain tracks multitudinous variables and integrates their values with prior knowledge to predict needs and set priorities. The brain coordinates effectors to mobilize resources from modest bodily stores and enforces a system of flexible trade-offs: from each organ according to its ability, to each organ according to its need. The brain also helps regulate the internal milieu by governing anticipatory behavior. Thus, an animal conserves energy by moving to a warmer place - before it cools, and it conserves salt and water by moving to a cooler one before it sweats. The behavioral strategy requires continuously updating a set of specific "shopping lists" that document the growing need for each key component (warmth, food, salt, water). These appetites funnel into a common pathway that employs a "stick" to drive the organism toward filling the need, plus a "carrot" to relax the organism when the need is satisfied. The stick corresponds broadly to the sense of anxiety, and the carrot broadly to

  4. Bioinorganic Chemistry Modeled with the TPSSh Density Functional

    DEFF Research Database (Denmark)

    Kepp, Kasper Planeta

    2008-01-01

    In this work, the TPSSh density functional has been benchmarked against a test set of experimental structures and bond energies for 80 transition-metal-containing diatomics. It is found that the TPSSh functional gives structures of the same quality as other commonly used hybrid and nonhybrid func...... promising density functional for use and further development within the field of bioinorganic chemistry....

  5. Knowledge-based artificial neural network model to predict the properties of alpha+beta titanium alloys

    Energy Technology Data Exchange (ETDEWEB)

    Banu, P. S. Noori; Rani, S. Devaki [Dept. of Metallurgical Engineering, Jawaharlal Nehru Technological University, Hyderabad (India)

    2016-08-15

    In view of emerging applications of alpha+beta titanium alloys in aerospace and defense, we have aimed to develop a back-propagation neural network (BPNN) model capable of predicting the properties of these alloys as functions of alloy composition and/or thermomechanical processing parameters. The optimized BPNN model architecture was based on the sigmoid transfer function and has one hidden layer with ten nodes. The BPNN model showed excellent predictability of five properties: tensile strength (r: 0.96), yield strength (r: 0.93), beta transus (r: 0.96), specific heat capacity (r: 1.00) and density (r: 0.99). The developed BPNN model was in agreement with the experimental data in demonstrating the individual effects of alloying elements in modulating the above properties. This model can serve as a platform for the design and development of new alpha+beta titanium alloys in order to attain desired strength, density and specific heat capacity.
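
    A minimal sketch of a network with the stated architecture (one hidden layer, ten nodes, sigmoid activation) mapping composition and processing inputs to multiple properties, here using scikit-learn's MLPRegressor on random placeholder data rather than real alpha+beta titanium alloy measurements.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# One hidden layer, ten nodes, sigmoid activation, multiple outputs.
# Inputs stand in for composition/processing variables; outputs mimic the five
# properties in the abstract.  All data are random placeholders.

rng = np.random.default_rng(1)
X = rng.uniform(size=(200, 8))
Y = np.column_stack([
    900 + 300 * X[:, 0] - 50 * X[:, 1],    # "tensile strength" (MPa)
    850 + 280 * X[:, 0] - 60 * X[:, 1],    # "yield strength" (MPa)
    950 + 80 * X[:, 0],                    # "beta transus" (deg C)
    0.52 + 0.05 * X[:, 2],                 # "specific heat" (J/g K)
    4.4 + 0.3 * X[:, 1],                   # "density" (g/cm^3)
])

xs, ys = StandardScaler().fit(X), StandardScaler().fit(Y)
net = MLPRegressor(hidden_layer_sizes=(10,), activation="logistic",
                   max_iter=5000, random_state=0)
net.fit(xs.transform(X), ys.transform(Y))
print("R^2 on the synthetic training set: %.2f" % net.score(xs.transform(X), ys.transform(Y)))
```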

  6. Modeling white sturgeon movement in a reservoir: The effect of water quality and sturgeon density

    Science.gov (United States)

    Sullivan, A.B.; Jager, H.I.; Myers, R.

    2003-01-01

    We developed a movement model to examine the distribution and survival of white sturgeon (Acipenser transmontanus) in a reservoir subject to large spatial and temporal variation in dissolved oxygen and temperature. Temperature and dissolved oxygen were simulated by a CE-QUAL-W2 model of Brownlee Reservoir, Idaho for a typical wet, normal, and dry hydrologic year. We compared current water quality conditions to scenarios with reduced nutrient inputs to the reservoir. White sturgeon habitat quality was modeled as a function of temperature, dissolved oxygen and, in some cases, suitability for foraging and depth. We assigned a quality index to each cell along the bottom of the reservoir. The model simulated two aspects of daily movement. Advective movement simulated the tendency for animals to move toward areas with high habitat quality, and diffusion simulated density dependent movement away from areas with high sturgeon density in areas with non-lethal habitat conditions. Mortality resulted when sturgeon were unable to leave areas with lethal temperature or dissolved oxygen conditions. Water quality was highest in winter and early spring and lowest in mid to late summer. Limiting nutrient inputs reduced the area of Brownlee Reservoir with lethal conditions for sturgeon and raised the average habitat suitability throughout the reservoir. Without movement, simulated white sturgeon survival ranged between 45 and 89%. Allowing movement raised the predicted survival of sturgeon under all conditions to above 90% as sturgeon avoided areas with low habitat quality. © 2003 Elsevier B.V. All rights reserved.
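
    A toy one-dimensional version of the movement rule described above: fish advect toward higher habitat quality, diffuse away from crowded cells, and suffer mortality where conditions are lethal. The habitat index, rate constants and mortality factor are illustrative assumptions, not values from the CE-QUAL-W2-driven model.

```python
import numpy as np

# Toy 1-D habitat/movement model: advection up the habitat-quality gradient,
# density-dependent diffusion, and mortality in lethal cells.  Periodic boundaries
# are kept only for brevity; all constants are illustrative.

N_CELLS, DAYS = 50, 30
ADVECT, DIFFUSE = 0.30, 0.10                 # daily movement fractions (assumed)

quality = np.sin(np.linspace(0.0, np.pi, N_CELLS))      # 0..1 habitat index
lethal = quality < 0.05                                  # lethal DO/temperature cells
density = np.full(N_CELLS, 1.0)                          # fish per cell

for _ in range(DAYS):
    grad = np.sign(np.gradient(quality))
    moved = ADVECT * density
    density = density - moved
    density += (np.roll(moved * (grad > 0), 1)       # move toward better habitat
                + np.roll(moved * (grad < 0), -1)
                + moved * (grad == 0))                # stay put on a flat gradient
    spread = DIFFUSE * density                        # crowding-driven diffusion
    density = density - spread + 0.5 * (np.roll(spread, 1) + np.roll(spread, -1))
    density[lethal] *= 0.5                            # mortality in lethal cells

print("survival after %d days: %.1f%%" % (DAYS, 100.0 * density.sum() / N_CELLS))
```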

  7. PREDICTIONS OF ION PRODUCTION RATES AND ION NUMBER DENSITIES WITHIN THE DIAMAGNETIC CAVITY OF COMET 67P/CHURYUMOV-GERASIMENKO AT PERIHELION

    Energy Technology Data Exchange (ETDEWEB)

    Vigren, E.; Galand, M., E-mail: e.vigren@imperial.ac.uk [Department of Physics, Imperial College London, London SW7 2AZ (United Kingdom)

    2013-07-20

    We present a one-dimensional ion chemistry model of the diamagnetic cavity of comet 67P/Churyumov-Gerasimenko, the target comet for the ESA Rosetta mission. We solve the continuity equations for ionospheric species and predict number densities of electrons and selected ions considering only gas-phase reactions. We apply the model to the subsolar direction and consider conditions expected to be encountered by Rosetta at perihelion (1.29 AU) in 2015 August. Our default simulation predicts a maximum electron number density of ~8 × 10^4 cm^-3 near the surface of the comet, while the electron number densities for cometocentric distances r > 10 km are approximately proportional to 1/r^1.23, assuming that the electron temperature is equal to the neutral temperature. We show that even a small mixing ratio (~0.3%-1%) of molecules having higher proton affinity than water is sufficient for the proton transfer from H3O+ to occur so readily that ions other than H3O+, such as NH4+ or CH3OH2+, become dominant in terms of volume mixing ratio in part of, if not throughout, the diamagnetic cavity. Finally, we test how the predicted electron and ion densities are influenced by changes of model input parameters, including the neutral background, the impinging EUV solar spectrum, the solar zenith angle, the cross sections for photo- and electron-impact processes, the electron temperature profile, and the temperature dependence of ion-neutral reactions.
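
    For orientation, a back-of-the-envelope photochemical-equilibrium estimate of the electron density in an outflowing coma, ne(r) ~ sqrt(nu * n_n(r) / alpha), is sketched below; the gas production rate, outflow speed, ionization frequency and recombination coefficient are assumed round numbers, not the inputs of the paper's model, which solves the full continuity equations.

```python
import numpy as np

# Photochemical-equilibrium estimate of the coma electron density,
# ne(r) ~ sqrt(nu * n_n(r) / alpha), with a Haser-like neutral profile.
# All rate constants and the production rate Q are assumed round numbers.

Q = 5e27          # gas production rate (molecules/s), assumed
V = 700.0         # neutral outflow speed (m/s), assumed
NU = 5e-7         # photoionization frequency near 1.3 AU (1/s), assumed
ALPHA = 7e-13     # effective dissociative-recombination coefficient (m^3/s), assumed

def neutral_density(r):
    """Neutral number density (1/m^3) at cometocentric distance r (m)."""
    return Q / (4.0 * np.pi * r**2 * V)

def electron_density(r):
    """Photochemical-equilibrium electron density (1/m^3); ne scales as 1/r here."""
    return np.sqrt(NU * neutral_density(r) / ALPHA)

for r_km in (2, 10, 50, 200):
    ne = electron_density(r_km * 1.0e3)
    print("r = %4d km  ->  ne ~ %.1e cm^-3" % (r_km, ne * 1e-6))
```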

  8. Required Collaborative Work in Online Courses: A Predictive Modeling Approach

    Science.gov (United States)

    Smith, Marlene A.; Kellogg, Deborah L.

    2015-01-01

    This article describes a predictive model that assesses whether a student will have greater perceived learning in group assignments or in individual work. The model produces correct classifications 87.5% of the time. The research is notable in that it is the first in the education literature to adopt a predictive modeling methodology using data…

  9. A prediction model for assessing residential radon concentration in Switzerland

    NARCIS (Netherlands)

    Hauri, D.D.; Huss, A.; Zimmermann, F.; Kuehni, C.E.; Roosli, M.

    2012-01-01

    Indoor radon is regularly measured in Switzerland. However, a nationwide model to predict residential radon levels has not been developed. The aim of this study was to develop a prediction model to assess indoor radon concentrations in Switzerland. The model was based on 44,631 measurements from the

  10. Distributional Analysis for Model Predictive Deferrable Load Control

    OpenAIRE

    Chen, Niangjun; Gan, Lingwen; Low, Steven H.; Wierman, Adam

    2014-01-01

    Deferrable load control is essential for handling the uncertainties associated with the increasing penetration of renewable generation. Model predictive control has emerged as an effective approach for deferrable load control, and has received considerable attention. In particular, previous work has analyzed the average-case performance of model predictive deferrable load control. However, to this point, distributional analysis of model predictive deferrable load control has been elusive. In ...

  11. Use of complex hydraulic variables to predict the distribution and density of unionids in a side channel of the Upper Mississippi River

    Science.gov (United States)

    Steuer, J.J.; Newton, T.J.; Zigler, S.J.

    2008-01-01

    Previous attempts to predict the importance of abiotic and biotic factors to unionids in large rivers have been largely unsuccessful. Many simple physical habitat descriptors (e.g., current velocity, substrate particle size, and water depth) have limited ability to predict unionid density. However, more recent studies have found that complex hydraulic variables (e.g., shear velocity, boundary shear stress, and Reynolds number) may be more useful predictors of unionid density. We performed a retrospective analysis with unionid density, current velocity, and substrate particle size data from 1987 to 1988 in a 6-km reach of the Upper Mississippi River near Prairie du Chien, Wisconsin. We used these data to model simple and complex hydraulic variables under low and high flow conditions. We then used classification and regression tree analysis to examine the relationships between hydraulic variables and unionid density. We found that boundary Reynolds number, Froude number, boundary shear stress, and grain size were the best predictors of density. Models with complex hydraulic variables were a substantial improvement over previously published discriminant models and correctly classified 65-88% of the observations for the total mussel fauna and six species. These data suggest that unionid beds may be constrained by threshold limits at both ends of the flow regime. Under low flow, mussels may require a minimum hydraulic variable (Re*, Fr) to transport nutrients, oxygen, and waste products. Under high flow, areas with relatively low boundary shear stress may provide a hydraulic refuge for mussels. Data on hydraulic preferences and identification of other conditions that constitute unionid habitat are needed to help restore and enhance habitats for unionids in rivers. © 2008 Springer Science+Business Media B.V.
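
    A schematic version of this workflow, assuming textbook approximations for the complex hydraulic variables (a log-law estimate of shear velocity, boundary shear stress, boundary Reynolds number, Froude number) and a scikit-learn regression tree fitted to synthetic mussel densities; none of the numbers correspond to the Prairie du Chien data.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Derive complex hydraulic variables from simple field measurements, then relate
# them to mussel density with a regression tree.  The hydraulics formulas are
# textbook approximations and every data point is synthetic.

G, RHO, NU = 9.81, 1000.0, 1.0e-6            # gravity, water density, kinematic viscosity

rng = np.random.default_rng(2)
depth = rng.uniform(0.5, 6.0, 300)           # m
velocity = rng.uniform(0.1, 1.2, 300)        # depth-averaged, m/s
d50 = rng.uniform(1e-4, 5e-3, 300)           # median grain size, m

shear_velocity = velocity / (5.75 * np.log10(12.0 * depth / d50))   # rough log-law
shear_stress = RHO * shear_velocity**2                               # Pa
re_star = shear_velocity * d50 / NU                                  # boundary Reynolds number
froude = velocity / np.sqrt(G * depth)                               # Froude number

# Synthetic "observed" densities: highest at intermediate Re* and low shear stress.
density = (np.exp(-(np.log10(re_star) - 1.5) ** 2) * np.exp(-shear_stress / 2.0)
           + rng.normal(0.0, 0.05, 300))

X = np.column_stack([re_star, froude, shear_stress, d50])
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, density)
print("regression-tree R^2 on synthetic data: %.2f" % tree.score(X, density))
```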

  12. Prediction for Major Adverse Outcomes in Cardiac Surgery: Comparison of Three Prediction Models

    Directory of Open Access Journals (Sweden)

    Cheng-Hung Hsieh

    2007-09-01

    Conclusion: The Parsonnet score performed as well as the logistic regression models in predicting major adverse outcomes. The Parsonnet score appears to be a very suitable model for clinicians to use in risk stratification of cardiac surgery.

  13. Density functional theory prediction of pKa for carboxylated single-wall carbon nanotubes and graphene

    Science.gov (United States)

    Li, Hao; Fu, Aiping; Xue, Xuyan; Guo, Fengna; Huai, Wenbo; Chu, Tianshu; Wang, Zonghua

    2017-06-01

    Density functional calculations have been performed to investigate the acidities of carboxylated single-wall carbon nanotubes and graphene. The pKa values for different COOH-functionalized models, with varying lengths, diameters and chirality of nanotubes and with different edges of graphene, were predicted using the SMD/M05-2X/6-31G* method combined with two universal thermodynamic cycles. The effects of factors such as the position of the carboxyl group and the presence of Stone-Wales and single-vacancy defects on the acidity of the functionalized nanotube and graphene have also been evaluated. The deprotonated species underwent decarboxylation when the hybridization mode of the carbon atom at the functionalization site changed from sp2 to sp3, for both the nanotube and graphene. Knowledge of the pKa values of the carboxylated nanotube and graphene could be of great help for understanding these nanocarbon materials in many diverse areas, including environmental protection, catalysis, electrochemistry and biochemistry.
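
    The arithmetic behind such thermodynamic-cycle pKa predictions can be sketched as follows; the gas-phase proton free energy and the proton solvation free energy are commonly used literature-style values, and the example energies stand in for the SMD/M05-2X results rather than reproducing them.

```python
import numpy as np

# pKa from a direct thermodynamic cycle: pKa = dG_aq(HA -> A- + H+) / (RT ln 10),
# with dG_aq assembled from a gas-phase deprotonation free energy and solvation
# free energies.  The example energies are placeholders of the right magnitude.

R = 1.987e-3              # kcal/(mol K)
T = 298.15
DG_GAS_PROTON = -6.28     # gas-phase free energy of H+ at 298 K (kcal/mol)
DG_SOLV_PROTON = -265.9   # assumed aqueous solvation free energy of H+ (kcal/mol)

def pka(dg_gas_deprot, dg_solv_HA, dg_solv_A):
    """All inputs in kcal/mol; dg_gas_deprot = G_gas(A-) - G_gas(HA)."""
    dg_aq = (dg_gas_deprot + DG_GAS_PROTON
             + dg_solv_A + DG_SOLV_PROTON - dg_solv_HA)
    return dg_aq / (R * T * np.log(10.0))

# Placeholder numbers of the right order for a carboxylic-acid site.
print("predicted pKa: %.1f" % pka(dg_gas_deprot=340.0, dg_solv_HA=-8.0, dg_solv_A=-72.0))
```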

  14. Large-strain time-temperature equivalence in high density polyethylene for prediction of extreme deformation and damage

    Directory of Open Access Journals (Sweden)

    Gray G.T.

    2012-08-01

    Full Text Available Time-temperature equivalence is a widely recognized property of many time-dependent material systems, where there is a clear predictive link relating the deformation response at a nominal temperature and a high strain-rate to an equivalent response at a depressed temperature and nominal strain-rate. It has been found that high-density polyethylene (HDPE) obeys a linear empirical formulation relating test temperature and strain-rate. This observation was extended to continuous stress-strain curves, such that material response measured in a load frame at large strains and low strain-rates (at depressed temperatures) could be translated into a temperature-dependent response at high strain-rates and validated against Taylor impact results. Time-temperature equivalence was used in conjunction with jump-rate compression tests to investigate isothermal response at high strain-rate while excluding adiabatic heating. The validated constitutive response was then applied to the analysis of Dynamic-Tensile-Extrusion of HDPE, a tensile analog to Taylor impact developed at LANL. The Dyn-Ten-Ext test results and FEA found that HDPE deformed smoothly after exiting the die, and after substantial drawing appeared to undergo a pressure-dependent shear damage mechanism at intermediate velocities, while it fragmented at high velocities. Dynamic-Tensile-Extrusion, properly coupled with a validated constitutive model, can successfully probe extreme tensile deformation and damage of polymers.
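
    A minimal sketch of a linear time-temperature equivalence rule of the kind described, assuming a placeholder slope A (in K per decade of strain rate) rather than the calibrated HDPE value.

```python
import numpy as np

# Linear time-temperature equivalence: a test at (T_test, rate_test) is mapped to
# an equivalent temperature at another strain rate via
#   T_eq = T_test + A * log10(rate_target / rate_test).
# The slope A below is an assumed placeholder, not the calibrated HDPE value.

A = 7.0   # K per decade of strain rate (assumption)

def equivalent_temperature(t_test_K, rate_test, rate_target):
    """Temperature at which the 'rate_target' response matches the measured test."""
    return t_test_K + A * np.log10(rate_target / rate_test)

# A quasi-static test at a depressed temperature standing in for a high-rate test:
print("equivalent temperature: %.0f K" % equivalent_temperature(233.0, 1.0e-3, 1.0e3))
```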

  15. On hydrological model complexity, its geometrical interpretations and prediction uncertainty

    NARCIS (Netherlands)

    Arkesteijn, E.C.M.M.; Pande, S.

    2013-01-01

    Knowledge of hydrological model complexity can aid selection of an optimal prediction model out of a set of available models. Optimal model selection is formalized as selection of the least complex model out of a subset of models that have lower empirical risk. This may be considered equivalent to

  16. Density functional theory based tight binding study on theoretical prediction of low-density nanoporous phases ZnO semiconductor materials

    Science.gov (United States)

    Tuoc, Vu Ngoc; Doan Huan, Tran; Viet Minh, Nguyen; Thi Thao, Nguyen

    2016-06-01

    Polymorphs or phases - different inorganic solid structures of the same composition - usually have widely differing properties and applications, so synthesizing or predicting new classes of polymorphs for a given compound is of great significance and has been gaining considerable interest. Herein, we perform a density functional theory based tight binding (DFTB) study on the theoretical prediction of several new series of nanoporous phases of the II-VI semiconductor ZnO, built from their bottom-up building blocks. Among these, three phases are reported for the first time, which could greatly expand the family of II-VI compound nanoporous phases. We also show that all of these can generally be categorized similarly to the aluminosilicate zeolites, inorganic open-framework materials. The hollow cage structure of the corresponding building block ZnkOk (k = 9, 12, 16) is well preserved in all of them, which leads to their low density, nanoporosity and high flexibility. Additionally, the wide electronic energy gap of the individual ZnkOk cages is also retained. Our study reveals that they are all semiconductor materials with a large band gap. Further, this approach is likely to be common to II-VI semiconductor compounds and will be helpful for extending their range of properties and applications.

  17. Structure of the Lithosphere in Central Europe: Integrated Density Modelling

    Science.gov (United States)

    Bielik, M.; Grinč, M.; Zeyen, H. J.; Plašienka, D.; Pasteka, R.; Krajňák, M.; Bošanský, M.; Mikuška, J.

    2014-12-01

    Firstly, we present new results related to the lithospheric structure and tectonics of Central Europe and the Western Carpathians. For the geophysical study of the lithosphere in Central Europe we calculated four original 2D lithosphere-scale transects crossing this area from the West European Platform in the north to the Aegean Sea in the south and from the Adriatic Sea in the west to the East European Platform in the east. Modelling is based on the joint interpretation of gravity, geoid, topography and surface heat flow data with temperature-dependent density. Wherever possible, crustal structure is constrained by seismic data. The thickness of the lithosphere decreases from the older and colder platforms to the younger and hotter Pannonian Basin, with a maximum thickness under the Eastern and Southern Carpathians. The thickness of the Carpathian arc lithosphere varies between 150 km in the north (the Western Carpathians) and about 300 km in the Vrancea zone (the Eastern and Southern Carpathian junction). In the platform areas it is between 120 and 150 km, and in the Pannonian Basin it is about 70 km. The models show that the Moesian Platform is overthrust from the north by the Southern Carpathians and from the south by the Balkanides and is characterized by bending of the platform. In all transects, the thickest crust is found underneath the Carpathian Mountains or, as in the case of the Vrancea area, under their immediate foreland. The thickest crust outside the orogens is modelled for the Moesian Platform, with Moho depths of up to 45 km. The thinnest crust is located under the Pannonian Basin, at about 26-27 km. Secondly, our presentation deals with the construction of a stripped gravity map of the Turiec Basin, which represents a typical intramontane Neogene depression of the Western Carpathians. Based on this new gravity map, corrected for the regional gravity effect, we were able to interpret the geological structure and tectonics of this sedimentary basin

  18. Probabilistic Modeling and Visualization for Bankruptcy Prediction

    DEFF Research Database (Denmark)

    Antunes, Francisco; Ribeiro, Bernardete; Pereira, Francisco Camara

    2017-01-01

    In accounting and finance domains, bankruptcy prediction is of great utility for all of the economic stakeholders. The challenge of accurately assessing business failure, especially under scenarios of financial crisis, is known to be complicated. Although there have been many successful...... studies on bankruptcy detection, probabilistic approaches have seldom been carried out. In this paper we assume a probabilistic point of view by applying Gaussian Processes (GP) in the context of bankruptcy prediction, comparing them against Support Vector Machines (SVM) and Logistic Regression (LR......). Using real-world bankruptcy data, an in-depth analysis is conducted showing that, in addition to a probabilistic interpretation, the GP can effectively improve bankruptcy prediction performance with high accuracy when compared to the other approaches. We additionally generate a complete graphical...
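
    A compact sketch of the comparison on synthetic data standing in for financial ratios, using scikit-learn's Gaussian process, SVM and logistic regression classifiers; it mirrors the setup only in spirit and uses none of the paper's data or tuning.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

# Synthetic, imbalanced data stand in for firm-level financial ratios
# (20% "bankrupt"); the three classifiers mirror the GP/SVM/LR comparison.

X, y = make_classification(n_samples=400, n_features=10, n_informative=5,
                           weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "GP":  GaussianProcessClassifier(random_state=0),
    "SVM": SVC(probability=True, random_state=0),
    "LR":  LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    acc = model.fit(X_tr, y_tr).score(X_te, y_te)
    p_fail = model.predict_proba(X_te)[:, 1]     # each model yields P(bankrupt)
    print("%-3s accuracy %.3f, mean P(bankrupt) %.2f" % (name, acc, p_fail.mean()))
```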

  19. Predictive modeling of dental pain using neural network.

    Science.gov (United States)

    Kim, Eun Yeob; Lim, Kun Ok; Rhee, Hyun Sill

    2009-01-01

    The mouth, used for ingesting food, is one of the most basic and important parts of the body. In this study, dental pain was predicted with a neural network model. The resulting predictive model of dental pain factors had a fitness of 80.0%. For people whom the neural network model predicts to be likely to experience dental pain, preventive measures including proper eating habits, education on oral hygiene, and stress relief must precede any dental treatment.

  20. Empirical model predicting the layer thickness and porosity of p-type mesoporous silicon

    Science.gov (United States)

    Wolter, Sascha J.; Geisler, Dennis; Hensen, Jan; Köntges, Marc; Kajari-Schröder, Sarah; Bahnemann, Detlef W.; Brendel, Rolf

    2017-04-01

    Porous silicon is a promising material for a wide range of applications because of its versatile layer properties and the convenient preparation by electrochemical etching. Nevertheless, the quantitative dependency of the layer thickness and porosity on the etching process parameters is not yet known. We have developed an empirical model to predict the porosity and layer thickness of p-type mesoporous silicon prepared by electrochemical etching. The impact of process parameters such as current density, etching time and concentration of hydrogen fluoride is evaluated by ellipsometry. The main influences on the porosity of the porous silicon are the current density, the etching time and their product, while the etch rate is dominated by the current density, the concentration of hydrogen fluoride and their product. The developed model predicts the resulting layer properties of a given porosification process and can, for example, be used to enhance the utilization of the employed chemicals.
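
    A sketch of an empirical model with the interaction terms suggested above (porosity driven by current density, etching time and their product; etch rate by current density, HF concentration and their product), fitted by least squares to synthetic data rather than to the reported ellipsometry measurements.

```python
import numpy as np

# Empirical porosity / etch-rate model with interaction terms, fitted by least
# squares to synthetic data (current density J, etching time t, HF concentration c).

rng = np.random.default_rng(3)
J = rng.uniform(5.0, 60.0, 120)      # current density, mA/cm^2
t = rng.uniform(10.0, 300.0, 120)    # etching time, s
c = rng.uniform(10.0, 30.0, 120)     # HF concentration, %

# Synthetic "measurements" with the structure described in the abstract.
porosity = 20 + 0.5 * J + 0.02 * t + 0.004 * J * t + rng.normal(0, 1.0, 120)
etch_rate = 0.2 + 0.03 * J - 0.01 * c + 0.002 * J * c + rng.normal(0, 0.05, 120)

A_por = np.column_stack([np.ones_like(J), J, t, J * t])
A_rate = np.column_stack([np.ones_like(J), J, c, J * c])
coef_por, *_ = np.linalg.lstsq(A_por, porosity, rcond=None)
coef_rate, *_ = np.linalg.lstsq(A_rate, etch_rate, rcond=None)

def predict(Jx, tx, cx):
    """Porosity (%) and layer thickness (etch rate * time, arbitrary units)."""
    por = coef_por @ np.array([1.0, Jx, tx, Jx * tx])
    rate = coef_rate @ np.array([1.0, Jx, cx, Jx * cx])
    return min(por, 100.0), rate * tx

print("porosity %.1f %%, thickness %.1f (arb. units)" % predict(30.0, 120.0, 20.0))
```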