#### Sample records for random coefficient model

1. A Structural Modeling Approach to a Multilevel Random Coefficients Model.

Science.gov (United States)

Rovine, Michael J.; Molenaar, Peter C. M.

2000-01-01

Presents a method for estimating the random coefficients model using covariance structure modeling and allowing one to estimate both fixed and random effects. The method is applied to real and simulated data, including marriage data from J. Belsky and M. Rovine (1990). (SLD)

2. A Note on the Correlated Random Coefficient Model

DEFF Research Database (Denmark)

Kolodziejczyk, Christophe

In this note we derive the bias of the OLS estimator for a correlated random coefficient model with one random coefficient that is correlated with a binary variable. We provide set identification of the parameters of interest of the model. We also show how to reduce the bias of the estimator...

3. Simulating WTP Values from Random-Coefficient Models

OpenAIRE

Maurus Rischatsch

2009-01-01

Discrete Choice Experiments (DCEs) designed to estimate willingness-to-pay (WTP) values are very popular in health economics. With increased computation power and advanced simulation techniques, random-coefficient models have gained increasing importance in applied work, as they allow for taste heterogeneity. This paper discusses the parametric derivation of WTP values from estimated random-coefficient models and shows how these values can be simulated in cases where they do not have a kn...
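The simulation step described in this abstract can be sketched numerically. Below is a minimal illustration, assuming (hypothetically) a normally distributed attribute coefficient and a lognormally distributed, strictly negative cost coefficient, so that each simulated WTP value is the ratio of two random draws; all parameter values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical estimated population parameters (illustrative values only):
# attribute coefficient beta ~ Normal(mu_b, sd_b),
# cost coefficient gamma = -exp(Normal(mu_c, sd_c)), so it is strictly negative.
mu_b, sd_b = 1.2, 0.4
mu_c, sd_c = 0.1, 0.3

n_draws = 100_000
beta = rng.normal(mu_b, sd_b, n_draws)
gamma = -np.exp(rng.normal(mu_c, sd_c, n_draws))

# WTP for one unit of the attribute: the money a respondent would give up
# to hold utility constant, i.e. -beta / gamma.
wtp = -beta / gamma

print(round(float(wtp.mean()), 3))  # simulated mean WTP
```

Because the ratio of a normal and a lognormal draw has no standard closed-form distribution, simulation of this kind is the usual way to summarize the implied WTP distribution.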

4. Least squares estimation in a simple random coefficient autoregressive model

DEFF Research Database (Denmark)

Johansen, S; Lange, T

2013-01-01

The question we discuss is whether a simple random coefficient autoregressive model with infinite variance can create the long swings, or persistence, which are observed in many macroeconomic variables. The model is defined by y_t = s_t ρ y_{t−1} + ε_t, t = 1, …, n, where s_t is an i.i.d. binary variable with p... we prove the curious result that [formula omitted]. The proof applies the notion of a tail index of sums of positive random variables with infinite variance to find the order of magnitude of [formula omitted] and [formula omitted] and hence the limit of [formula omitted]...
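The model in this abstract is easy to simulate. A minimal sketch with illustrative (not the authors') parameter values: occasional draws of s_t = 0 reset the process, while runs of s_t = 1 with ρ > 1 let it drift far from zero, producing the long swings the abstract refers to:

```python
import numpy as np

rng = np.random.default_rng(1)

# y_t = s_t * rho * y_{t-1} + eps_t, with s_t i.i.d. Bernoulli(p).
# rho > 1 combined with occasional resets (s_t = 0) produces long swings
# while keeping the process from exploding permanently.
rho, p, n = 1.05, 0.95, 2000
s = rng.binomial(1, p, n)
eps = rng.standard_normal(n)

y = np.zeros(n)
for t in range(1, n):
    y[t] = s[t] * rho * y[t - 1] + eps[t]

print(float(np.max(np.abs(y))))  # size of the largest swing
```

With these values a run of s_t = 1 lasts about 1/(1-p) = 20 periods on average, during which the conditional variance of y_t grows geometrically, so the path shows persistent excursions between resets.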

5. Random effects coefficient of determination for mixed and meta-analysis models.

Science.gov (United States)

Demidenko, Eugene; Sargent, James; Onega, Tracy

2012-01-01

The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, [Formula: see text], that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If [Formula: see text] is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. A value of [Formula: see text] away from 0 indicates evidence of the variance reduction in support of the mixed model. If the random effects coefficient of determination is close to 1, the variance of random effects is very large and random effects turn into free fixed effects; the model can then be estimated using the dummy variable approach. We derive explicit formulas for [Formula: see text] in three special cases: the random intercept model, the growth curve model, and the meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol-related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model for the combination of 13 studies on tuberculosis vaccine.
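For the random intercept special case mentioned in the abstract, the idea can be sketched with simulated data. The proportion computed below is a simple variance-components stand-in (the proportion of total variance attributable to the random intercept), not necessarily the paper's exact formula:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a balanced random intercept model: y_ij = mu + u_i + e_ij.
m, k = 200, 10               # clusters, observations per cluster
sigma_u, sigma_e = 2.0, 1.0  # illustrative variance components
u = rng.normal(0, sigma_u, m)
y = 5.0 + u[:, None] + rng.normal(0, sigma_e, (m, k))

# ANOVA (method-of-moments) estimates of the variance components.
grand = y.mean()
msb = k * ((y.mean(axis=1) - grand) ** 2).sum() / (m - 1)        # between clusters
msw = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (m * (k - 1))
var_u = (msb - msw) / k
var_e = msw

# Proportion of variance attributable to the random effect, a stand-in
# for the random effects coefficient of determination in the random
# intercept case (the paper's exact formula may differ).
r2_re = var_u / (var_u + var_e)
print(round(r2_re, 3))
```

With sigma_u = 2 and sigma_e = 1 the population value of this proportion is 4/(4+1) = 0.8; a value near 0 would suggest the random intercept can be dropped, as the abstract describes.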

6. Modeling Ontario regional electricity system demand using a mixed fixed and random coefficients approach

Energy Technology Data Exchange (ETDEWEB)

Hsiao, C.; Mountain, D.C.; Chan, M.W.L.; Tsui, K.Y. (University of Southern California, Los Angeles (USA); McMaster Univ., Hamilton, ON (Canada); Chinese Univ. of Hong Kong, Shatin)

1989-12-01

In examining the municipal peak and kilowatt-hour demand for electricity in Ontario, the issue of homogeneity across geographic regions is explored. A common model across municipalities and geographic regions cannot be supported by the data. Various procedures are considered which deal with this heterogeneity while reducing the multicollinearity problems associated with region-specific demand formulations. The recommended model controls for regional differences by assuming that the coefficients of regional-seasonal specific factors are fixed and different, while the coefficients of economic and weather variables for any one municipality are random draws from a common population; information from all municipalities is combined through a Bayes procedure. 8 tabs., 41 refs.

7. A special covariance structure for random coefficient models with both between and within covariates

International Nuclear Information System (INIS)

Riedel, K.S.

1990-07-01

We review random coefficient (RC) models in linear regression and propose a bias correction to the maximum likelihood (ML) estimator. Asymptotic expansions of the ML equations are given when the between-individual variance is much larger or smaller than the variance from within-individual fluctuations. The standard model assumes that all but one covariate vary within each individual (we denote the within covariates by the vector χ₁). We consider random coefficient models where some of the covariates do not vary in any single individual (we denote the between covariates by the vector χ₀). The regression coefficients β_k can only be estimated in the subspace X_k of X. Thus the number of individuals necessary to estimate β and the covariance matrix Δ of β increases significantly in the presence of more than one between covariate. When the number of individuals is sufficient to estimate β but not the entire matrix Δ, additional assumptions must be imposed on the structure of Δ. A simple reduced model is that the between component of β is fixed and only the within component varies randomly. This model fails because it is not invariant under linear coordinate transformations and it can significantly overestimate the variance of new observations. We propose a covariance structure for Δ without these difficulties by first projecting the within covariates onto the space perpendicular to the between covariates. (orig.)

8. A comparison of two least-squared random coefficient autoregressive models: with and without autocorrelated errors

OpenAIRE

Autcha Araveeporn

2013-01-01

This paper compares a Least-Squared Random Coefficient Autoregressive (RCA) model with a Least-Squared RCA model based on Autocorrelated Errors (RCA-AR). We consider only the first-order models, denoted RCA(1) and RCA(1)-AR(1). The efficiency of the Least-Squared method was checked by applying the models to Brownian motion and the Wiener process, and the efficiency followed closely the asymptotic properties of a normal distribution. In a simulation study, we compared the performance of RCA(1) an...
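Least-squares estimation for an RCA(1) model can be sketched in a few lines. The parameter values are illustrative, and the autocorrelated-error variant is omitted; the point is that ordinary least squares of y_t on y_{t-1} consistently estimates the mean coefficient φ, because the random part b_t has conditional mean zero:

```python
import numpy as np

rng = np.random.default_rng(4)

# RCA(1): y_t = (phi + b_t) * y_{t-1} + eps_t with b_t ~ N(0, sigma_b^2).
# Stationarity requires phi^2 + sigma_b^2 < 1 (0.25 + 0.04 here).
phi, sigma_b, n = 0.5, 0.2, 20_000
b = rng.normal(0, sigma_b, n)
eps = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = (phi + b[t]) * y[t - 1] + eps[t]

# Least-squares estimate of the mean coefficient phi.
phi_hat = (y[1:] * y[:-1]).sum() / (y[:-1] ** 2).sum()
print(round(float(phi_hat), 3))
```

The composite error b_t y_{t-1} + eps_t is uncorrelated with the regressor y_{t-1}, which is why plain least squares still works despite the random coefficient.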

9. Cost-effective degradation test plan for a nonlinear random-coefficients model

International Nuclear Information System (INIS)

Kim, Seong-Joon; Bae, Suk Joo

2013-01-01

The determination of the requisite sample size and the inspection schedule considering both testing cost and accuracy has been an important issue in degradation testing. This paper proposes a cost-effective degradation test plan in the context of a nonlinear random-coefficients model, while meeting some precision constraints for the failure-time distribution. We introduce a precision measure to quantify the information losses incurred by reducing testing resources. The precision measure is incorporated into time-varying cost functions to reflect real circumstances. We apply a hybrid genetic algorithm to a general cost optimization problem with reasonable constraints on the level of testing precision in order to determine a cost-effective inspection scheme. The proposed method is applied to the degradation data of plasma display panels (PDPs) following a bi-exponential degradation model. Finally, sensitivity analysis via simulation is provided to evaluate the robustness of the proposed degradation test plan.

10. Determination of Nonlinear Stiffness Coefficients for Finite Element Models with Application to the Random Vibration Problem

Science.gov (United States)

Muravyov, Alexander A.

1999-01-01

In this paper, a method for obtaining nonlinear stiffness coefficients in modal coordinates for geometrically nonlinear finite-element models is developed. The method requires application of a finite-element program with a geometrically nonlinear static capability. The MSC/NASTRAN code is employed for this purpose. The equations of motion of a MDOF system are formulated in modal coordinates. A set of linear eigenvectors is used to approximate the solution of the nonlinear problem. The random vibration problem of the MDOF nonlinear system is then considered. The solutions obtained by application of two different versions of a stochastic linearization technique are compared with linear and exact (analytical) solutions in terms of root-mean-square (RMS) displacements and strains for a beam structure.

11. Algebraic polynomials with random coefficients

Directory of Open Access Journals (Sweden)

K. Farahmand

2002-01-01

This paper provides an asymptotic value for the mathematical expected number of points of inflection of a random polynomial of the form a_0(ω) + a_1(ω)(n choose 1)^{1/2} x + a_2(ω)(n choose 2)^{1/2} x^2 + … + a_n(ω)(n choose n)^{1/2} x^n when n is large. The coefficients {a_j(ω)}, j = 0, …, n, are assumed to be a sequence of independent normally distributed random variables with mean zero and variance one, each defined on a fixed probability space (Ω, A, Pr). A special case of dependent coefficients is also studied.
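The expected number of inflection points for this ensemble can be checked by Monte Carlo: simulate polynomials with the binomially weighted Gaussian coefficients from the abstract and count the real zeros of the second derivative (generically all simple, hence genuine inflections). The degree n = 30 and trial count are arbitrary choices for illustration:

```python
from math import comb

import numpy as np

rng = np.random.default_rng(5)

n = 30
weights = np.sqrt([comb(n, j) for j in range(n + 1)])

def count_inflections(coeffs):
    # Inflection points = real zeros of the second derivative
    # (simple zeros, generically, so each is a sign change).
    p = np.polynomial.Polynomial(coeffs)
    roots = p.deriv(2).roots()
    real = roots[np.abs(roots.imag) < 1e-6].real
    return len(np.unique(np.round(real, 8)))

counts = [count_inflections(rng.standard_normal(n + 1) * weights)
          for _ in range(200)]
print(float(np.mean(counts)))  # Monte Carlo estimate of the expected count
```

The empirical mean can then be compared against the paper's asymptotic value for large n.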

12. Converting Sabine absorption coefficients to random incidence absorption coefficients

DEFF Research Database (Denmark)

Jeong, Cheol-Ho

2013-01-01

...coefficients to random incidence absorption coefficients are proposed. The overestimations of the Sabine absorption coefficient are investigated theoretically based on Miki's model for porous absorbers backed by a rigid wall or an air cavity, resulting in conversion factors. Additionally, three optimizations are suggested: an optimization method for the surface impedances for locally reacting absorbers, the flow resistivity for extendedly reacting absorbers, and the flow resistance for fabrics. With four porous type absorbers, the conversion methods are validated. For absorbers backed by a rigid wall, the surface impedance optimization produces the best results, while the flow resistivity optimization also yields reasonable results. The flow resistivity and flow resistance optimization for extendedly reacting absorbers are also found to be successful. However, the theoretical conversion factors based on Miki's model...

13. Gaussian Mixture Random Coefficient model based framework for SHM in structures with time-dependent dynamics under uncertainty

Science.gov (United States)

Avendaño-Valencia, Luis David; Fassois, Spilios D.

2017-12-01

The problem of vibration-based damage diagnosis in structures characterized by time-dependent dynamics under significant environmental and/or operational uncertainty is considered. A stochastic framework consisting of a Gaussian Mixture Random Coefficient model of the uncertain time-dependent dynamics under each structural health state, proper estimation methods, and Bayesian or minimum distance type decision making, is postulated. The Random Coefficient (RC) time-dependent stochastic model with coefficients following a multivariate Gaussian Mixture Model (GMM) allows for significant flexibility in uncertainty representation. Certain of the model parameters are estimated via a simple procedure which is founded on the related Multiple Model (MM) concept, while the GMM weights are explicitly estimated for optimizing damage diagnostic performance. The postulated framework is demonstrated via damage detection in a simple simulated model of a quarter-car active suspension with time-dependent dynamics and considerable uncertainty on the payload. Comparisons with a simpler Gaussian RC model based method are also presented, with the postulated framework shown to be capable of offering considerable improvement in diagnostic performance.

14. Sabine absorption coefficients to random incidence absorption coefficients

DEFF Research Database (Denmark)

Jeong, Cheol-Ho

2014-01-01

...into random incidence absorption coefficients for porous absorbers are investigated. Two optimization-based conversion methods are suggested: the surface impedance estimation for locally reacting absorbers and the flow resistivity estimation for extendedly reacting absorbers. The suggested conversion methods...

15. Random incidence absorption coefficients of porous absorbers based on local and extended reaction models

DEFF Research Database (Denmark)

Jeong, Cheol-Ho

2011-01-01

...incidence acoustical characteristics of typical building elements made of porous materials assuming extended and local reaction. For each surface reaction, five well-established wave propagation models, the Delany-Bazley, Miki, Beranek, Allard-Champoux, and Biot model, are employed. Effects of the flow resistivity and the absorber thickness on the difference between the two surface reaction models are examined and discussed. For a porous absorber backed by a rigid surface, the local reaction models give errors of less than 10% if the thickness exceeds 120 mm for a flow resistivity of 5000 Nm-4s. As the flow resistivity doubles, a decrease in the required thickness by 25 mm is observed to achieve the same amount of error. For an absorber backed by an air gap, the thickness ratio between the material and air cavity is important. If the absorber thickness is approximately 40% of the cavity depth, the local reaction...

16. Full Random Coefficients Multilevel Modeling of the Relationship between Land Use and Trip Time on Weekdays and Weekends

Directory of Open Access Journals (Sweden)

Tae-Hyoung Tommy Gim

2017-10-01

Interest in weekend trips is increasing, but few studies have examined how such trips are affected by land use. In this study, we analyze the relationship between compact land use characteristics and trip time in Seoul, Korea by comparing two research models, each of which uses the weekday and weekend data of the same travelers. To secure sufficient numbers of subjects and groups, full random coefficients multilevel models define the trip as level one and the neighborhood as level two, and find that level-two land use characteristics account for less variation in trip time than level-one individual characteristics. At level one, weekday trip time is found to be reduced by the choice of the automobile as a travel mode, but not by its ownership per se. In addition, trip time is shorter for high-income travelers and longer for travel to quality jobs. Among four land use characteristics at level two, population density, road connectivity, and subway availability are shown to be significant in the weekday model. Only subway availability has a positive relationship with trip time, and this finding is consistent with the level-one result that the choice of automobile alternatives increases trip time. The other land use characteristic, land use balance, turns out to be the single significant land use variable in the weekend model, implying that it is concerned mainly with non-work, non-mandatory travel.

17. A Two-Stage Estimation Method for Random Coefficient Differential Equation Models with Application to Longitudinal HIV Dynamic Data.

Science.gov (United States)

Fang, Yun; Wu, Hulin; Zhu, Li-Xing

2011-07-01

We propose a two-stage estimation method for random coefficient ordinary differential equation (ODE) models. A maximum pseudo-likelihood estimator (MPLE) is derived based on a mixed-effects modeling approach and its asymptotic properties for population parameters are established. The proposed method does not require repeatedly solving ODEs, and is computationally efficient although it does pay a price with the loss of some estimation efficiency. However, the method does offer an alternative approach when the exact likelihood approach fails due to model complexity and high-dimensional parameter space, and it can also serve as a method to obtain the starting estimates for more accurate estimation methods. In addition, the proposed method does not need to specify the initial values of state variables and preserves all the advantages of the mixed-effects modeling approach. The finite sample properties of the proposed estimator are studied via Monte Carlo simulations and the methodology is also illustrated with application to an AIDS clinical data set.
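The two-stage idea can be sketched on a toy random-coefficient ODE, dy/dt = -λ_i y (a hypothetical stand-in for the HIV dynamic model in the abstract): stage 1 estimates each subject's coefficient from its own trajectory without repeatedly solving the ODE, and stage 2 pools the subject-level estimates into population parameters:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical illustration: each subject follows dy/dt = -lam_i * y with
# a random rate lam_i ~ N(1.0, 0.2^2) and y_i(0) = 1, observed with
# multiplicative lognormal measurement noise.
n_subj, n_t = 50, 20
t = np.linspace(0.0, 2.0, n_t)
lam = rng.normal(1.0, 0.2, n_subj)                  # random coefficients
y = np.exp(-lam[:, None] * t) * np.exp(rng.normal(0, 0.05, (n_subj, n_t)))

# Stage 1: per-subject estimate of lam_i as minus the OLS slope of
# log y against t (the ODE solution is log-linear, so no ODE solver needed).
tc = t - t.mean()
lam_hat = -np.array([
    (tc * (np.log(yi) - np.log(yi).mean())).sum() / (tc ** 2).sum()
    for yi in y
])

# Stage 2: pool the subject-level estimates into population parameters.
print(round(float(lam_hat.mean()), 3), round(float(lam_hat.std(ddof=1)), 3))
```

The pooled mean and spread recover the population distribution of the random coefficient; the paper's MPLE refines this basic idea within a mixed-effects framework.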

18. The relationship between multilevel models and non-parametric multilevel mixture models: Discrete approximation of intraclass correlation, random coefficient distributions, and residual heteroscedasticity.

Science.gov (United States)

Rights, Jason D; Sterba, Sonya K

2016-11-01

Multilevel data structures are common in the social sciences. Often, such nested data are analysed with multilevel models (MLMs) in which heterogeneity between clusters is modelled by continuously distributed random intercepts and/or slopes. Alternatively, the non-parametric multilevel regression mixture model (NPMM) can accommodate the same nested data structures through discrete latent class variation. The purpose of this article is to delineate analytic relationships between NPMM and MLM parameters that are useful for understanding the indirect interpretation of the NPMM as a non-parametric approximation of the MLM, with relaxed distributional assumptions. We define how seven standard and non-standard MLM specifications can be indirectly approximated by particular NPMM specifications. We provide formulas showing how the NPMM can serve as an approximation of the MLM in terms of intraclass correlation, random coefficient means and (co)variances, heteroscedasticity of residuals at level 1, and heteroscedasticity of residuals at level 2. Further, we discuss how these relationships can be useful in practice. The specific relationships are illustrated with simulated graphical demonstrations, and direct and indirect interpretations of NPMM classes are contrasted. We provide an R function to aid in implementing and visualizing an indirect interpretation of NPMM classes. An empirical example is presented and future directions are discussed. © 2016 The British Psychological Society.
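The discrete-approximation idea can be illustrated exactly for the random intercept case: a small set of mass points (here Gauss-Hermite nodes, one possible choice of discretization) reproduces the continuous random-intercept variance and therefore the same intraclass correlation:

```python
import numpy as np

# A normally distributed random intercept can be approximated by discrete
# mass points (the non-parametric, latent-class view). Gauss-Hermite
# nodes/weights serve as the discrete classes; their implied variance
# recovers the continuous random-intercept variance exactly, and with it
# the intraclass correlation.
tau, sigma = 1.5, 1.0                       # intercept sd, residual sd (illustrative)
nodes, weights = np.polynomial.hermite_e.hermegauss(5)
classes = tau * nodes                       # class-specific intercepts
probs = weights / weights.sum()             # class probabilities

var_between = (probs * classes ** 2).sum() - (probs * classes).sum() ** 2
icc_discrete = var_between / (var_between + sigma ** 2)
icc_continuous = tau ** 2 / (tau ** 2 + sigma ** 2)
print(round(float(icc_discrete), 4), round(float(icc_continuous), 4))
```

Five quadrature points integrate polynomials up to degree nine exactly against the normal weight, so the second moment (and hence the ICC) matches the continuous model to machine precision, mirroring the article's point that NPMM classes can approximate MLM quantities.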

19. A Monte Carlo experiment to analyze the curse of dimensionality in estimating random coefficients models with a full variance–covariance matrix

DEFF Research Database (Denmark)

Cherchi, Elisabetta; Guevara, Cristian Angelo

2012-01-01

...of parameters increases is usually known as the "curse of dimensionality" in simulation methods. We investigate this problem in the case of the random coefficients Logit model. We compare the traditional Maximum Simulated Likelihood (MSL) method with two alternative estimation methods, the Expectation-Maximization (EM) and the Laplace Approximation (HH) methods, that do not require simulation. We use Monte Carlo experimentation to investigate systematically the performance of the methods under different circumstances, including different numbers of variables, sample sizes and structures of the variance...

20. Random errors in the magnetic field coefficients of superconducting magnets

International Nuclear Information System (INIS)

Herrera, J.; Hogue, R.; Prodell, A.; Wanderer, P.; Willen, E.

1985-01-01

Random errors in the multipole magnetic coefficients of superconducting magnets have been of continuing interest in accelerator research. The Superconducting Super Collider (SSC), with its small magnetic aperture, only emphasizes this aspect of magnet design, construction, and measurement. With this in mind, we present a magnet model which mirrors the structure of a typical superconducting magnet. By taking advantage of the basic symmetries of a dipole magnet, we use this model to fit the measured multipole rms widths. The fit parameters then allow us to predict the values of the rms multipole errors expected for the SSC dipole reference design D, SSC-C5. With the aid of first-order perturbation theory, we then give an estimate of the effect of these random errors on the emittance growth of a proton beam stored in an SSC. 10 refs., 6 figs., 2 tabs

1. Reproducibility of The Random Incidence Absorption Coefficient Converted From the Sabine Absorption Coefficient

DEFF Research Database (Denmark)

Jeong, Cheol-Ho; Chang, Ji-ho

2015-01-01

...largely depending on the test room. Several conversion methods for porous absorbers from the Sabine absorption coefficient to the random incidence absorption coefficient were suggested by considering the finite size of a test specimen and non-uniformly incident energy onto the specimen, which turned out... ...resistivity optimization outperforms the surface impedance optimization in terms of the reproducibility...

2. Stable Parameter Estimation for Autoregressive Equations with Random Coefficients

Directory of Open Access Journals (Sweden)

V. B. Goryainov

2014-01-01

In recent years there has been a growing interest in non-linear time series models. They are more flexible than traditional linear models and allow a more adequate description of real data. Among these models, the autoregressive model with random coefficients plays an important role. It is widely used in various fields of science and technology, for example, in physics, biology, economics and finance. The model parameters are the mean values of the autoregressive coefficients. Their estimation is the main task of model identification. The basic method of estimation is still the least squares method, which gives good results for Gaussian time series but is quite sensitive to even small disturbances in the assumption of Gaussian observations. In this paper we propose estimates which generalize the least squares estimate in the sense that the quadratic objective function is replaced by an arbitrary convex even function. A reasonable choice of objective function preserves the benefits of the least squares estimate while eliminating its shortcomings. In particular, the estimates can be made almost as efficient as the least squares estimate in the Gaussian case, yet lose almost no accuracy under small deviations of the probability distribution of the observations from the Gaussian distribution. The main result is the proof of consistency and asymptotic normality of the proposed estimates in the particular case of the one-parameter model describing a stationary process with finite variance. Another important result is the derivation of the asymptotic relative efficiency of the proposed estimates relative to the least squares estimate. This allows the two estimates to be compared, depending on the probability distribution of the innovation process and of the autoregressive coefficients. The results can be used to identify an autoregressive process, especially one of non-Gaussian nature, and/or autoregressive processes observed with gross...
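Replacing the quadratic objective with another convex even function can be sketched with a Huber objective (one common choice, not necessarily the authors'), fitted by iteratively reweighted least squares on simulated RCA(1) data with heavy-tailed innovations; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

# RCA(1) data: y_t = (phi + b_t) * y_{t-1} + eps_t, with heavy-tailed
# (Student-t, 3 df) innovations where least squares loses efficiency.
phi, sigma_b, n = 0.4, 0.1, 10_000
b = rng.normal(0, sigma_b, n)
eps = rng.standard_t(df=3, size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = (phi + b[t]) * y[t - 1] + eps[t]

x, z = y[:-1], y[1:]

def huber_fit(x, z, c=1.345, n_iter=50):
    """M-estimate of the slope under a Huber objective, via IRLS."""
    beta = (x * z).sum() / (x * x).sum()            # start from least squares
    for _ in range(n_iter):
        r = z - beta * x
        s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust residual scale
        w = np.minimum(1.0, c * s / np.maximum(np.abs(r), 1e-12))
        beta = (w * x * z).sum() / (w * x * x).sum()
    return beta

print(round(float(huber_fit(x, z)), 3))
```

Residuals beyond the Huber threshold get downweighted, so a few extreme innovations no longer dominate the fit, which is exactly the robustness trade-off the abstract describes.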

3. Bacteria Reduction In Ponds Under Random Coefficients ...

African Journals Online (AJOL)

4. Drag coefficient Variability and Thermospheric models

Science.gov (United States)

Moe, Kenneth

Satellite drag coefficients depend upon a variety of factors: the shape of the satellite, its altitude, the eccentricity of its orbit, the temperature and mean molecular mass of the ambient atmosphere, and the time in the sunspot cycle. At altitudes where the mean free path of the atmospheric molecules is large compared to the dimensions of the satellite, the drag coefficients can be determined from the theory of free-molecule flow. The dependence on altitude is caused by the concentration of atomic oxygen, which plays an important role through its ability to adsorb on the satellite surface and thereby affect the energy loss of molecules striking the surface. The eccentricity of the orbit determines the satellite velocity at perigee, and therefore the energy of the incident molecules relative to the energy of adsorption of atomic oxygen atoms on the surface. The temperature of the ambient atmosphere determines the extent to which the random thermal motion of the molecules influences the momentum transfer to the satellite. The time in the sunspot cycle affects the ambient temperature as well as the concentration of atomic oxygen at a particular altitude. Tables and graphs will be used to illustrate the variability of drag coefficients. Before there were any measurements of gas-surface interactions in orbit, Izakov and Cook independently made an excellent estimate that the drag coefficient of satellites of compact shape would be 2.2. That numerical value, independent of altitude, was used by Jacchia to construct his model from the early measurements of satellite drag. Consequently, there is an altitude-dependent bias in the model. From the sparse orbital experiments that have been done, we know that the molecules which strike satellite surfaces rebound in a diffuse angular distribution with an energy loss given by the energy accommodation coefficient. As more evidence accumulates on the energy loss, more realistic drag coefficients are being calculated. These improved drag...

5. Signal intensity of normal breast tissue at MR mammography on midfield: Applying a random coefficient model evaluating the effect of doubling the contrast dose

Energy Technology Data Exchange (ETDEWEB)

Marklund, Mette [Parker Institute: Imaging Unit, Frederiksberg Hospital (Denmark)], E-mail: mm@frh.regionh.dk; Christensen, Robin [Parker Institute: Musculoskeletal Statistics Unit, Frederiksberg Hospital (Denmark)], E-mail: robin.christensen@frh.regionh.dk; Torp-Pedersen, Soren [Parker Institute: Imaging Unit, Frederiksberg Hospital (Denmark)], E-mail: stp@frh.regionh.dk; Thomsen, Carsten [Department of Radiology, Rigshospitalet, University of Copenhagen (Denmark)], E-mail: carsten.thomsen@rh.regionh.dk; Nolsoe, Christian P. [Department of Radiology, Koge Hospital (Denmark)], E-mail: cnolsoe@dadlnet.dk

2009-01-15

Purpose: To prospectively investigate the effect on signal intensity (SI) of healthy breast parenchyma at magnetic resonance mammography (MRM) when doubling the contrast dose from 0.1 to 0.2 mmol/kg bodyweight. Materials and methods: Informed consent and institutional review board approval were obtained. Twenty-five healthy female volunteers (median age: 24 years (range: 21-37 years); median bodyweight: 65 kg (51-80 kg)) completed two dynamic MRM examinations on a 0.6 T open scanner. The inter-examination time was 24 h (23.5-25 h). The following sequences were applied: axial T2W TSE and an axial dynamic T1W FFED with a total of seven frames. On day 1, an i.v. gadolinium (Gd) bolus injection of 0.1 mmol/kg bodyweight (Omniscan) (low) was administered. On day 2, the contrast dose was increased to 0.2 mmol/kg (high). The injection rate was 2 mL/s (day 1) and 4 mL/s (day 2). Any use of estrogen-containing oral contraceptives (ECOC) was recorded. Post-processing with automated subtraction, manually traced ROIs (regions of interest) and recording of the SI was performed. A random coefficient model was applied. Results: We found an SI increase of 24.2% and 40% following the low and high dose, respectively (P < 0.0001), corresponding to a 65% (95% CI: 37-99%) SI increase, indicating moderate saturation. Although not statistically significant (P = 0.06), the results indicated a tendency towards lower maximal SI in the breast parenchyma of ECOC users compared to non-ECOC users. Conclusion: We conclude that the contrast dose can be increased from 0.1 to 0.2 mmol/kg bodyweight if a better contrast/noise relation is desired, but increasing the contrast dose above 0.2 mmol/kg bodyweight is not likely to improve the enhancement substantially due to the moderate saturation observed. Further research is needed to determine the impact of ECOC on the relative enhancement ratio, and further studies are needed to determine if a possible use of ECOC should be considered a compromising...

6. Signal intensity of normal breast tissue at MR mammography on midfield: Applying a random coefficient model evaluating the effect of doubling the contrast dose

International Nuclear Information System (INIS)

Marklund, Mette; Christensen, Robin; Torp-Pedersen, Soren; Thomsen, Carsten; Nolsoe, Christian P.

2009-01-01

Purpose: To prospectively investigate the effect on signal intensity (SI) of healthy breast parenchyma at magnetic resonance mammography (MRM) when doubling the contrast dose from 0.1 to 0.2 mmol/kg bodyweight. Materials and methods: Informed consent and institutional review board approval were obtained. Twenty-five healthy female volunteers (median age: 24 years (range: 21-37 years); median bodyweight: 65 kg (51-80 kg)) completed two dynamic MRM examinations on a 0.6 T open scanner. The inter-examination time was 24 h (23.5-25 h). The following sequences were applied: axial T2W TSE and an axial dynamic T1W FFED with a total of seven frames. On day 1, an i.v. gadolinium (Gd) bolus injection of 0.1 mmol/kg bodyweight (Omniscan) (low) was administered. On day 2, the contrast dose was increased to 0.2 mmol/kg (high). The injection rate was 2 mL/s (day 1) and 4 mL/s (day 2). Any use of estrogen-containing oral contraceptives (ECOC) was recorded. Post-processing with automated subtraction, manually traced ROIs (regions of interest) and recording of the SI was performed. A random coefficient model was applied. Results: We found an SI increase of 24.2% and 40% following the low and high dose, respectively (P < 0.0001), corresponding to a 65% (95% CI: 37-99%) SI increase, indicating moderate saturation. Although not statistically significant (P = 0.06), the results indicated a tendency towards lower maximal SI in the breast parenchyma of ECOC users compared to non-ECOC users. Conclusion: We conclude that the contrast dose can be increased from 0.1 to 0.2 mmol/kg bodyweight if a better contrast/noise relation is desired, but increasing the contrast dose above 0.2 mmol/kg bodyweight is not likely to improve the enhancement substantially due to the moderate saturation observed. Further research is needed to determine the impact of ECOC on the relative enhancement ratio, and further studies are needed to determine if a possible use of ECOC should be considered a compromising...

7. Comparing linear probability model coefficients across groups

DEFF Research Database (Denmark)

Holm, Anders; Ejrnæs, Mette; Karlson, Kristian Bernt

2015-01-01

This article offers a formal identification analysis of the problem in comparing coefficients from linear probability models between groups. We show that differences in coefficients from these models can result not only from genuine differences in effects, but also from differences in one or more of the following three components: outcome truncation, scale parameters and distributional shape of the predictor variable. These results point to limitations in using linear probability model coefficients for group comparisons. We also provide Monte Carlo simulations and real examples to illustrate these limitations, and we suggest a restricted approach to using linear probability model coefficients in group comparisons.
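The distributional-shape point can be demonstrated by simulation: two groups share exactly the same latent effect, but differing predictor spread alone changes the linear probability model (LPM) slopes. The setup below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(8)

# Two groups with the SAME latent slope (1.0) but different predictor
# spread: the fitted LPM slopes still differ, so an LPM coefficient gap
# need not reflect a genuine difference in effects.
def lpm_slope(x, noise_sd):
    ystar = 1.0 * x + rng.normal(0, noise_sd, x.size)  # identical latent effect
    yobs = (ystar > 0).astype(float)                   # outcome truncation
    xc = x - x.mean()
    return (xc * yobs).sum() / (xc ** 2).sum()         # OLS slope of LPM

n = 200_000
slope_a = lpm_slope(rng.normal(0, 1.0, n), noise_sd=1.0)   # narrow predictor
slope_b = lpm_slope(rng.normal(0, 3.0, n), noise_sd=1.0)   # wide predictor

print(round(float(slope_a), 3), round(float(slope_b), 3))
```

The wide-spread group's LPM slope is markedly smaller even though the latent data-generating effect is identical, which is exactly the comparison hazard the article formalizes.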

8. Modeling Ballasted Tracks for Runoff Coefficient C

Science.gov (United States)

2012-08-01

In this study, the Regional Transportation District's (RTD) light rail tracks were modeled to determine the Rational Method runoff coefficient, C, values corresponding to ballasted tracks. To accomplish this, a laboratory study utilizing a rain...

9. A nonparametric random coefficient approach for life expectancy growth using a hierarchical mixture likelihood model with application to regional data from North Rhine-Westphalia (Germany).

Science.gov (United States)

Böhning, Dankmar; Karasek, Sarah; Terschüren, Claudia; Annuß, Rolf; Fehr, Rainer

2013-03-09

Life expectancy is of prime and increasing interest for a variety of reasons. In many countries, life expectancy is growing linearly, without any indication of reaching a limit. The state of North Rhine-Westphalia (NRW) in Germany with its 54 districts is considered here, where the above-mentioned growth in life expectancy is occurring as well. However, there is also empirical evidence that life expectancy is not growing linearly at the same level for different regions. To explore this situation further, a likelihood-based cluster analysis is suggested and performed. The modelling uses a nonparametric mixture approach for the latent random effect. Maximum likelihood estimates are determined by means of the EM algorithm, and the number of components in the mixture model is found on the basis of the Bayesian Information Criterion. Regions are classified into the mixture components (clusters) using the maximum posterior allocation rule. For the data analyzed here, 7 components are found, with a spatial concentration of lower life expectancy levels in the centre of NRW, formerly an enormous conglomerate of heavy industry and still the most densely populated area, with Gelsenkirchen having the lowest level of life expectancy growth for both genders. The paper offers some explanations for this fact, including demographic and socio-economic sources. This case study shows that life expectancy growth is widely linear, but it might occur on different levels.
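The mixture/EM/BIC pipeline described in this record can be sketched in a few lines. The data and the crude one-dimensional EM below are illustrative assumptions, not the authors' implementation: synthetic district-level growth rates drawn from two latent levels, fitted with one- and two-component Gaussian mixtures, with BIC selecting the number of components.

```python
import math
import random

def normal_pdf(x, m, v):
    return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

def loglik(data, comps):
    # comps: list of (weight, mean, variance) triples
    return sum(math.log(sum(w * normal_pdf(x, m, v) for w, m, v in comps))
               for x in data)

def em_fit(data, k, iters=100):
    # Crude EM for a k-component one-dimensional Gaussian mixture.
    n = len(data)
    lo, hi = min(data), max(data)
    mean = sum(data) / n
    var0 = sum((x - mean) ** 2 for x in data) / n
    comps = [(1.0 / k, lo + (hi - lo) * (j + 0.5) / k, var0) for j in range(k)]
    for _ in range(iters):
        # E-step: posterior probability that each point belongs to each component
        resp = []
        for x in data:
            dens = [w * normal_pdf(x, m, v) for w, m, v in comps]
            s = sum(dens)
            resp.append([d / s for d in dens])
        # M-step: re-estimate weights, means and variances
        comps = []
        for j in range(k):
            nj = sum(r[j] for r in resp)
            m = sum(r[j] * x for r, x in zip(resp, data)) / nj
            v = sum(r[j] * (x - m) ** 2 for r, x in zip(resp, data)) / nj
            comps.append((nj / n, m, max(v, 1e-8)))
    return comps

def bic(data, comps):
    n_params = 3 * len(comps) - 1  # weights sum to one
    return n_params * math.log(len(data)) - 2 * loglik(data, comps)

random.seed(7)
# Hypothetical district-level growth rates, generated from two latent levels
data = ([random.gauss(0.10, 0.02) for _ in range(30)]
        + [random.gauss(0.25, 0.02) for _ in range(24)])
bic1 = bic(data, em_fit(data, 1))
bic2 = bic(data, em_fit(data, 2))
print(bic1, bic2)  # the lower BIC picks the two-cluster model
```

Regions would then be assigned to clusters by the maximum posterior allocation rule, i.e. the argmax of the final responsibilities for each region.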

10. Statistical Analysis for Multisite Trials Using Instrumental Variables with Random Coefficients

Science.gov (United States)

Raudenbush, Stephen W.; Reardon, Sean F.; Nomi, Takako

2012-01-01

Multisite trials can clarify the average impact of a new program and the heterogeneity of impacts across sites. Unfortunately, in many applications, compliance with treatment assignment is imperfect. For these applications, we propose an instrumental variable (IV) model with person-specific and site-specific random coefficients. Site-specific IV…

11. Random errors in the magnetic field coefficients of superconducting quadrupole magnets

International Nuclear Information System (INIS)

Herrera, J.; Hogue, R.; Prodell, A.; Thompson, P.; Wanderer, P.; Willen, E.

1987-01-01

The random multipole errors of superconducting quadrupoles are studied. For analyzing the multipoles which arise due to random variations in the size and locations of the current blocks, a model is outlined which gives the fractional field coefficients from the current distributions. With this approach, based on the symmetries of the quadrupole magnet, estimates are obtained of the random multipole errors for the arc quadrupoles envisioned for the Relativistic Heavy Ion Collider and for a single-layer quadrupole proposed for the Superconducting Super Collider

12. Modelling of power-reactivity coefficient measurement

International Nuclear Information System (INIS)

Strmensky, C.; Petenyi, V.; Jagrik, J.; Minarcin, M.; Hascik, R.; Toth, L.

2005-01-01

This report describes the modelling of a power-reactivity coefficient measurement at power level. We calculate the discrepancies that arise during the transient process. Such discrepancies can result from the evaluation of the experiment and can be caused by disregarding 3D effects on the neutron distribution. The results are critically discussed. (Authors)

13. Maximum Simulated Likelihood and Expectation-Maximization Methods to Estimate Random Coefficients Logit with Panel Data

DEFF Research Database (Denmark)

Cherchi, Elisabetta; Guevara, Cristian

2012-01-01

The random coefficients logit model allows a more realistic representation of agents' behavior. However, the estimation of that model may involve simulation, which may become impractical with many random coefficients because of the curse of dimensionality. In this paper, the traditional maximum simulated likelihood (MSL) method is compared with the alternative expectation-maximization (EM) method, which does not require simulation. Previous literature had shown that for cross-sectional data, MSL outperforms the EM method in the ability to recover the true parameters and estimation time ... with cross-sectional or with panel data, and (d) EM systematically attained more efficient estimators than the MSL method. The results imply that if the purpose of the estimation is only to determine the ratios of the model parameters (e.g., the value of time), the EM method should be preferred. For all...

14. Varying coefficients model with measurement error.

Science.gov (United States)

Li, Liang; Greene, Tom

2008-06-01

We propose a semiparametric partially varying coefficient model to study the relationship between serum creatinine concentration and the glomerular filtration rate (GFR) among kidney donors and patients with chronic kidney disease. A regression model is used to relate serum creatinine to GFR and demographic factors, in which the coefficient of GFR is expressed as a function of age to allow its effect to be age dependent. GFR measurements obtained from the clearance of a radioactively labeled isotope are assumed to be a surrogate for the true GFR, with the relationship between measured and true GFR expressed using an additive error model. We use locally corrected score equations to estimate parameters and coefficient functions, and propose an expected generalized cross-validation (EGCV) method to select the kernel bandwidth. The performance of the proposed methods, which avoid distributional assumptions on the true GFR and residuals, is investigated by simulation. Accounting for measurement error using the proposed model reduced apparent inconsistencies in the relationship between serum creatinine and GFR among different clinical data sets derived from kidney donor and chronic kidney disease source populations.

15. Estimating filtration coefficients for straining from percolation and random walk theories

DEFF Research Database (Denmark)

Yuan, Hao; Shapiro, Alexander; You, Zhenjiang

2012-01-01

In this paper, laboratory challenge tests are carried out under unfavorable attachment conditions, so that size exclusion or straining is the only particle capture mechanism. The experimental results show that far above the percolation threshold the filtration coefficients are not proportional ... size exclusion theory or the model of parallel tubes with mixing chambers, where the filtration coefficients are proportional to the flux through smaller pores, and the predicted penetration depths are much lower. A special capture mechanism is proposed, which makes it possible to explain the experimentally observed power law dependencies of filtration coefficients and large penetration depths of particles. Such a capture mechanism is realized in a 2D pore network model with periodical boundaries with the random walk of particles on the percolation lattice. Geometries of infinite and finite clusters...

16. Determination of coefficient matrices for ARMA model

International Nuclear Information System (INIS)

Tran Dinh Tri.

1990-10-01

A new recursive algorithm for determining the coefficient matrices of an ARMA model from measured data is presented. The Yule-Walker equations for the ARMA case are derived from the ARMA innovation equation. The recursive algorithm is based on choosing an appropriate form of the operator functions and a suitable representation of the (n+1)-th order operator functions in terms of those of lower order. Two cases are considered: when the order of the AR part equals that of the MA part, and the optimal case. (author) 5 refs
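The Yule-Walker construction mentioned here can be illustrated in the scalar AR(2) case (a simplified sketch, not the paper's matrix recursion): the autocovariances at lags 0 to 2 determine the AR coefficients through a 2x2 linear system.

```python
import random

def autocov(xs, lag):
    # Biased sample autocovariance at the given lag.
    n = len(xs)
    m = sum(xs) / n
    return sum((xs[t] - m) * (xs[t + lag] - m) for t in range(n - lag)) / n

def yule_walker_ar2(xs):
    # Solve [g0 g1; g1 g0] [a1 a2]' = [g1 g2]' for the AR(2) coefficients.
    g0, g1, g2 = autocov(xs, 0), autocov(xs, 1), autocov(xs, 2)
    det = g0 * g0 - g1 * g1
    a1 = (g0 * g1 - g1 * g2) / det
    a2 = (g0 * g2 - g1 * g1) / det
    return a1, a2

random.seed(3)
true_a1, true_a2 = 0.6, -0.3
xs = [0.0, 0.0]
for _ in range(20000):
    xs.append(true_a1 * xs[-1] + true_a2 * xs[-2] + random.gauss(0, 1))
a1, a2 = yule_walker_ar2(xs[100:])  # drop the burn-in
print(a1, a2)
```

The ARMA case replaces scalar autocovariances with covariance matrices and requires the innovation-based recursion the paper develops; the scalar system above is the degenerate special case.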

17. EXISTENCE AND UNIQUENESS OF SOLUTIONS TO STOCHASTIC DIFFERENTIAL EQUATION WITH RANDOM COEFFICIENTS

Institute of Scientific and Technical Information of China (English)

2010-01-01

This paper mainly deals with a stochastic differential equation (SDE) with random coefficients. Sufficient conditions which guarantee the existence and uniqueness of solutions to the equation are given.

18. Random Coefficient Logit Model for Large Datasets

NARCIS (Netherlands)

C. Hernández-Mireles (Carlos); D. Fok (Dennis)

2010-01-01

We present an approach for analyzing market shares and product price elasticities based on large datasets containing aggregate sales data for many products, several markets and relatively long time periods. We consider the recently proposed Bayesian approach of Jiang et al [Jiang,

19. Interpretation of diffusion coefficients in nanostructured materials from random walk numerical simulation.

Science.gov (United States)

Anta, Juan A; Mora-Seró, Iván; Dittrich, Thomas; Bisquert, Juan

2008-08-14

We make use of the numerical simulation random walk (RWNS) method to compute the "jump" diffusion coefficient of electrons in nanostructured materials via mean-square displacement. First, a summary of analytical results is given that relates the diffusion coefficient obtained from RWNS to those in the multiple-trapping (MT) and hopping models. Simulations are performed in a three-dimensional lattice of trap sites with energies distributed according to an exponential distribution and with a step-function distribution centered at the Fermi level. It is observed that once the stationary state is reached, the ensemble of particles follow Fermi-Dirac statistics with a well-defined Fermi level. In this stationary situation the diffusion coefficient obeys the theoretical predictions so that RWNS effectively reproduces the MT model. Mobilities can be also computed when an electrical bias is applied and they are observed to comply with the Einstein relation when compared with steady-state diffusion coefficients. The evolution of the system towards the stationary situation is also studied. When the diffusion coefficients are monitored along simulation time a transition from anomalous to trap-limited transport is observed. The nature of this transition is discussed in terms of the evolution of electron distribution and the Fermi level. All these results will facilitate the use of RW simulation and related methods to interpret steady-state as well as transient experimental techniques.
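As a minimal illustration of extracting a diffusion coefficient from mean-square displacement, the sketch below runs an unbiased one-dimensional lattice walk. It shows only the MSD bookkeeping, not the multiple-trapping energetics of the paper; parameters are arbitrary.

```python
import random

def msd_diffusion(n_walkers=5000, n_steps=200, seed=11):
    # Symmetric 1-D lattice walk; D = MSD / (2 t) in lattice units.
    rng = random.Random(seed)
    sq = 0.0
    for _ in range(n_walkers):
        x = 0
        for _ in range(n_steps):
            x += rng.choice((-1, 1))
        sq += x * x
    msd = sq / n_walkers
    return msd / (2 * n_steps)

D = msd_diffusion()
print(D)  # theory for an unbiased unit-step walk: D = 1/2
```

In the trap-model setting, the waiting time at each site would be drawn from the site's energy distribution, and the same MSD estimator would then deliver the "jump" diffusion coefficient discussed in the record.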

20. Shear viscosity coefficient from microscopic models

International Nuclear Information System (INIS)

Muronga, Azwinndini

2004-01-01

The transport coefficient of shear viscosity is studied for hadronic matter with a microscopic transport model, the ultrarelativistic quantum molecular dynamics (UrQMD) model, using the Green-Kubo formulas. Molecular-dynamics simulations are performed for a system of light mesons in a box with periodic boundary conditions. Starting from an initial state composed of π, η, ω, ρ, φ with a uniform phase-space distribution, the evolution takes place through elastic collisions, production, and annihilation. The system approaches a stationary state of mesons and their resonances, which is characterized by a common temperature. After equilibration, thermodynamic quantities such as the energy density, particle density, and pressure are calculated. From such an equilibrated state the shear viscosity coefficient is calculated from the fluctuations of the stress tensor around equilibrium using Green-Kubo relations. We do our simulations here at zero net baryon density, so the equilibration times depend on the energy density. We do not include hadron strings as degrees of freedom, so as to maintain detailed balance; hence we do not get the saturation of temperature, but this leads to longer equilibration times.
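The Green-Kubo relation referred to above expresses the shear viscosity as the time integral of the equilibrium autocorrelation of an off-diagonal stress-tensor component. In its commonly written form (standard notation, not taken from this abstract):

```latex
\eta \;=\; \frac{V}{k_{B} T}\int_{0}^{\infty}
\bigl\langle \pi^{xy}(0)\,\pi^{xy}(t)\bigr\rangle \,\mathrm{d}t
```

where V is the box volume, T the temperature and \pi^{xy} an off-diagonal component of the stress tensor; the angle brackets denote an equilibrium ensemble average, estimated in the simulation from time averages after equilibration.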

1. Application of QMC methods to PDEs with random coefficients : a survey of analysis and implementation

KAUST Repository

Kuo, Frances

2016-01-05

In this talk I will provide a survey of recent research efforts on the application of quasi-Monte Carlo (QMC) methods to PDEs with random coefficients. Such PDE problems occur in the area of uncertainty quantification. In recent years many papers have been written on this topic using a variety of methods. QMC methods are relatively new to this application area. I will consider different models for the randomness (uniform versus lognormal) and contrast different QMC algorithms (single-level versus multilevel, first order versus higher order, deterministic versus randomized). I will give a summary of the QMC error analysis and proof techniques in a unified view, and provide a practical guide to the software for constructing QMC points tailored to the PDE problems.

2. Partially linear varying coefficient models stratified by a functional covariate

KAUST Repository

Maity, Arnab; Huang, Jianhua Z.

2012-01-01

We consider the problem of estimation in semiparametric varying coefficient models where the covariate modifying the varying coefficients is functional and is modeled nonparametrically. We develop a kernel-based estimator of the nonparametric

3. The performance of random coefficient regression in accounting for residual confounding.

Science.gov (United States)

Gustafson, Paul; Greenland, Sander

2006-09-01

Greenland (2000, Biometrics 56, 915-921) describes the use of random coefficient regression to adjust for residual confounding in a particular setting. We examine this setting further, giving theoretical and empirical results concerning the frequentist and Bayesian performance of random coefficient regression. Particularly, we compare estimators based on this adjustment for residual confounding to estimators based on the assumption of no residual confounding. This devolves to comparing an estimator from a nonidentified but more realistic model to an estimator from a less realistic but identified model. The approach described by Gustafson (2005, Statistical Science 20, 111-140) is used to quantify the performance of a Bayesian estimator arising from a nonidentified model. From both theoretical calculations and simulations we find support for the idea that superior performance can be obtained by replacing unrealistic identifying constraints with priors that allow modest departures from those constraints. In terms of point-estimator bias this superiority arises when the extent of residual confounding is substantial, but the advantage is much broader in terms of interval estimation. The benefit from modeling residual confounding is maintained when the prior distributions employed only roughly correspond to reality, for the standard identifying constraints are equivalent to priors that typically correspond much worse.

4. Estimation of the Coefficient of Restitution of Rocking Systems by the Random Decrement Technique

DEFF Research Database (Denmark)

Brincker, Rune; Demosthenous, Milton; Manos, George C.

1994-01-01

The aim of this paper is to investigate the possibility of estimating an average damping parameter for a rocking system due to impact, the so-called coefficient of restitution, from the random response, i.e. when the loads are random and unknown, and the response is measured. The objective is to obtain an estimate of the free rocking response from the measured random response using the Random Decrement (RDD) Technique, and then estimate the coefficient of restitution from this free response estimate. In the paper this approach is investigated by simulating the response of a single degree...
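A bare-bones version of the RDD idea can be sketched as follows (all system parameters hypothetical, and a linear oscillator stands in for the rocking system, which is actually nonlinear): simulate a randomly loaded single-degree-of-freedom response, then average all segments that begin at an up-crossing of a trigger level; the average estimates the free decay.

```python
import random

def simulate_sdof(n, dt=0.01, wn=10.0, zeta=0.02, seed=5):
    # Semi-implicit Euler for x'' + 2*zeta*wn*x' + wn^2*x = f(t), white-noise f.
    rng = random.Random(seed)
    x, v, out = 0.0, 0.0, []
    for _ in range(n):
        f = rng.gauss(0.0, 1.0) / dt ** 0.5   # discrete white noise
        v += (f - 2.0 * zeta * wn * v - wn * wn * x) * dt
        x += v * dt
        out.append(x)
    return out

def random_decrement(ys, level, seg_len):
    # Average all segments starting where the response up-crosses `level`.
    segs = [ys[i:i + seg_len]
            for i in range(1, len(ys) - seg_len)
            if ys[i - 1] < level <= ys[i]]
    return [sum(s[j] for s in segs) / len(segs) for j in range(seg_len)]

ys = simulate_sdof(200_000)
sd = (sum(y * y for y in ys) / len(ys)) ** 0.5
sig = random_decrement(ys, level=sd, seg_len=400)
print(sig[0])  # the RDD signature starts near the trigger level and decays
```

For the rocking problem, the resulting signature would then be fitted for the coefficient of restitution at each impact rather than for a viscous damping ratio.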

5. Analysis and computation of the elastic wave equation with random coefficients

KAUST Repository

Motamed, Mohammad; Nobile, Fabio; Tempone, Raul

2015-01-01

We consider the stochastic initial-boundary value problem for the elastic wave equation with random coefficients and deterministic data. We propose a stochastic collocation method for computing statistical moments of the solution or statistics

6. Comparison of activity coefficient models for electrolyte systems

DEFF Research Database (Denmark)

Lin, Yi; ten Kate, Antoon; Mooijer, Miranda

2010-01-01

Three activity coefficient models for electrolyte solutions were evaluated and compared. The activity coefficient models are: The electrolyte NRTL model (ElecNRTL) by Aspentech, the mixed solvent electrolyte model (MSE) by OLI Systems Inc., and the Extended UNIQUAC model from the Technical Univer...

7. Dynamics analysis of SIR epidemic model with correlation coefficients and clustering coefficient in networks.

Science.gov (United States)

Zhang, Juping; Yang, Chan; Jin, Zhen; Li, Jia

2018-07-14

In this paper, the correlation coefficients between nodes in states are used as dynamic variables, and we construct SIR epidemic dynamic models with correlation coefficients by using the pair approximation method in static networks and dynamic networks, respectively. Considering the clustering coefficient of the network, we analytically investigate the existence and the local asymptotic stability of each equilibrium of these models and derive threshold values for the prevalence of diseases. Additionally, we obtain two equivalent epidemic thresholds in dynamic networks, which are compared with the results of the mean field equations. Copyright © 2018 Elsevier Ltd. All rights reserved.

8. Estimation of the Coefficient of Restitution of Rocking Systems by the Random Decrement Technique

DEFF Research Database (Denmark)

Brincker, Rune; Demosthenous, M.; Manos, G. C.

The aim of this paper is to investigate the possibility of estimating an average damping parameter for a rocking system due to impact, the so-called coefficient of restitution, from the random response, i.e. when the loads are random and unknown, and the response is measured. The objective is to obtain an estimate of the free rocking response from the measured random response using the Random Decrement (RDD) Technique, and then estimate the coefficient of restitution from this free response estimate. This approach is investigated by simulating the response of a single degree of freedom system loaded by white noise, estimating the coefficient of restitution as explained, and comparing the estimates with the value used in the simulations. Several estimates for the coefficient of restitution are considered, and reasonable results are achieved.

9. Adaptive Algebraic Multigrid for Finite Element Elliptic Equations with Random Coefficients

Energy Technology Data Exchange (ETDEWEB)

Kalchev, D

2012-04-02

This thesis presents a two-grid algorithm based on Smoothed Aggregation Spectral Element Agglomeration Algebraic Multigrid (SA-ρAMGe) combined with adaptation. The aim is to build an efficient solver for the linear systems arising from discretization of second-order elliptic partial differential equations (PDEs) with stochastic coefficients. Examples include PDEs that model subsurface flow with a random permeability field. During a Markov Chain Monte Carlo (MCMC) simulation process, which draws PDE coefficient samples from a certain distribution, the PDE coefficients change, and hence the resulting linear systems to be solved change. At every such step the system (discretized PDE) needs to be solved and the computed solution used to evaluate some functional(s) of interest that then determine whether the coefficient sample is acceptable. The MCMC process is hence computationally intensive and requires the solvers used to be efficient and fast. Because the linear system changes at every step of MCMC, a solver built for the old problem may no longer be efficient for the problem corresponding to the newly sampled coefficient. This motivates the main goal of our study: to adapt an already existing solver to handle the problem with the changed coefficient, with the objective of being faster and more efficient than building a completely new solver from scratch. Our approach utilizes the local element matrices (for the problem with changed coefficients) to build local problems associated with the agglomerated elements constructed by the method (a set of subdomains that cover the given computational domain). We solve a generalized eigenproblem for each set in a subspace spanned by the previous local coarse space (used for the old solver) and a vector, a component of the error, that the old solver cannot handle. A portion of the spectrum of these local eigen-problems (corresponding to eigenvalues close to zero) form the

10. Formulae of differentiation for solving differential equations with complex-valued random coefficients

International Nuclear Information System (INIS)

Kim, Ki Hong; Lee, Dong Hun

1999-01-01

Generalizing the work of Shapiro and Loginov, we derive new formulae of differentiation useful for solving differential equations with complex-valued random coefficients. We apply the formulae to the quantum-mechanical problem of noninteracting electrons moving in a correlated random potential in one dimension

11. Coefficient of restitution of model repaired car body parts

OpenAIRE

2008-01-01

Purpose: To quantify the influence of repaired model car body parts on the value of the coefficient of restitution, and to evaluate the impact energy absorption of repaired model car body parts. Design/methodology/approach: Investigation of the plastic strain and coefficient of restitution of new and repaired model car body parts using an impact test machine at different impact energies. Findings: The results of the investigations show that the value of the coefficient of restitution changes with speed (ene...

12. Entropy Characterization of Random Network Models

Directory of Open Access Journals (Sweden)

Pedro J. Zufiria

2017-06-01

This paper elaborates on the Random Network Model (RNM) as a mathematical framework for modelling and analyzing the generation of complex networks. Such a framework allows the analysis of the relationship between several network characterizing features (link density, clustering coefficient, degree distribution, connectivity, etc.) and entropy-based complexity measures, providing new insight into the generation and characterization of random networks. Some theoretical and computational results illustrate the utility of the proposed framework.

13. Modified Regression Correlation Coefficient for Poisson Regression Model

Science.gov (United States)

Kaengthong, Nattacha; Domthong, Uthumporn

2017-09-01

This study gives attention to indicators of the predictive power of the Generalized Linear Model (GLM), which are widely used but often subject to some restrictions. We are interested in the regression correlation coefficient for a Poisson regression model. This is a measure of predictive power defined by the relationship between the dependent variable (Y) and the expected value of the dependent variable given the independent variables, E(Y|X), for the Poisson regression model; the dependent variable is Poisson distributed. The purpose of this research was to modify the regression correlation coefficient for the Poisson regression model. We also compare the proposed modified regression correlation coefficient with the traditional regression correlation coefficient in the case of two or more independent variables, and in the presence of multicollinearity among the independent variables. The results show that the proposed regression correlation coefficient is better than the traditional one in terms of bias and root mean square error (RMSE).
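A sketch of the quantity being modified, under simplifying assumptions (one predictor, a Newton-Raphson fit, simulated data rather than the study's): fit the Poisson log-linear model and compute the correlation between Y and the fitted E(Y|X).

```python
import math
import random

def fit_poisson(xs, ys, iters=50):
    # Newton-Raphson for log-linear Poisson regression: E[y] = exp(b0 + b1*x).
    b0, b1 = math.log(sum(ys) / len(ys) + 1e-9), 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for x, y in zip(xs, ys):
            mu = math.exp(b0 + b1 * x)
            g0 += y - mu          # score vector
            g1 += (y - mu) * x
            h00 += mu             # Fisher information matrix
            h01 += mu * x
            h11 += mu * x * x
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    ca = sum((x - ma) ** 2 for x in a) ** 0.5
    cb = sum((x - mb) ** 2 for x in b) ** 0.5
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (ca * cb)

def poisson_sample(rng, lam):
    # Knuth's algorithm; adequate for the small rates used here.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(42)
xs = [rng.uniform(0, 2) for _ in range(300)]
ys = [poisson_sample(rng, math.exp(0.3 + 0.8 * x)) for x in xs]
b0, b1 = fit_poisson(xs, ys)
mu = [math.exp(b0 + b1 * x) for x in xs]
rho = corr(ys, mu)  # regression correlation coefficient, r(Y, E[Y|X])
print(b1, rho)
```

The modification the paper proposes addresses the behavior of this quantity under multicollinearity with several predictors; the one-predictor version above only shows the baseline definition.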

14. Estimating varying coefficients for partial differential equation models.

Science.gov (United States)

Zhang, Xinyu; Cao, Jiguo; Carroll, Raymond J

2017-09-01

Partial differential equations (PDEs) are used to model complex dynamical systems in multiple dimensions, and their parameters often have important scientific interpretations. In some applications, PDE parameters are not constant but can change depending on the values of covariates, a feature that we call varying coefficients. We propose a parameter cascading method to estimate varying coefficients in PDE models from noisy data. Our estimates of the varying coefficients are shown to be consistent and asymptotically normally distributed. The performance of our method is evaluated by a simulation study and by an empirical study estimating three varying coefficients in a PDE model arising from LIDAR data. © 2017, The International Biometric Society.

15. Stochastic Modelling of the Diffusion Coefficient for Concrete

DEFF Research Database (Denmark)

Thoft-Christensen, Palle

In the paper, a new stochastic modelling of the diffusion coefficient D is presented. The modelling is based on physical understanding of the diffusion process and on some recent experimental results. The diffusion coefficient D is strongly dependent on the w/c ratio and the temperature....

16. Diffusion coefficients for multi-step persistent random walks on lattices

International Nuclear Information System (INIS)

Gilbert, Thomas; Sanders, David P

2010-01-01

We calculate the diffusion coefficients of persistent random walks on lattices, where the direction of a walker at a given step depends on the memory of a certain number of previous steps. In particular, we describe a simple method which enables us to obtain explicit expressions for the diffusion coefficients of walks with a two-step memory on different classes of one-, two- and higher dimensional lattices.
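The simplest case, a one-step-memory persistent walk on a one-dimensional lattice, already shows how memory enters the diffusion coefficient: with unit steps and unit time and persistence probability p, the closed form is D = p / (2(1 - p)), which a short simulation can check.

```python
import random

def persistent_walk_D(p, n_walkers=5000, n_steps=400, seed=2):
    # 1-D walk that keeps its previous direction with probability p.
    rng = random.Random(seed)
    sq = 0.0
    for _ in range(n_walkers):
        x, step = 0, 1
        for _ in range(n_steps):
            if rng.random() > p:
                step = -step
            x += step
        sq += x * x
    return sq / n_walkers / (2 * n_steps)  # D = MSD / (2 t)

p = 0.75
D = persistent_walk_D(p)
print(D)  # one-step-memory theory: D = p / (2 * (1 - p)) = 1.5 here
```

The paper's contribution is the corresponding explicit expressions for walks with a two-step memory on various lattices, where the state space of directions and memories is larger but the MSD route to D is the same.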

17. Diffusion coefficient adaptive correction in Lagrangian puff model

International Nuclear Information System (INIS)

Tan Wenji; Wang Dezhong; Ma Yuanwei; Ji Zhilong

2014-01-01

The Lagrangian puff model is widely used in decision support systems for nuclear emergency management. The diffusion coefficient is one of the key parameters of the puff model. An adaptive method is proposed in this paper which corrects the diffusion coefficient in the Lagrangian puff model, with the aim of improving the accuracy of the calculated nuclide concentration distribution. The method uses detected concentration data, meteorological data and source release data to estimate the actual diffusion coefficient by least squares. The diffusion coefficient adaptive correction method was evaluated with the Kincaid data in MVK, and was compared with the traditional Pasquill-Gifford (P-G) diffusion scheme. The results indicate that this adaptive correction method can improve the accuracy of the Lagrangian puff model. (authors)

18. Comparing coefficients of nested nonlinear probability models

DEFF Research Database (Denmark)

Kohler, Ulrich; Karlson, Kristian Bernt; Holm, Anders

2011-01-01

In a series of recent articles, Karlson, Holm and Breen have developed a method for comparing the estimated coefficients of two nested nonlinear probability models. This article describes this method and the user-written program khb that implements it. The KHB method is a general decomposition method that is unaffected by the rescaling or attenuation bias that arises in cross-model comparisons in nonlinear models. It recovers the degree to which a control variable, Z, mediates or explains the relationship between X and a latent outcome variable, Y*, underlying the nonlinear probability...

19. Transmission coefficient and heat conduction of a harmonic chain with random masses

International Nuclear Information System (INIS)

Verheggen, T.

1979-01-01

We find upper and lower bounds for the transmission coefficient of a chain of random masses. Using these bounds we show that the heat conduction in such a chain does not obey Fourier's law: for different temperatures at the ends of a chain containing N particles, the energy flux falls off like N^(-1/2) rather than N^(-1). (orig.)

20. Overview of models allowing calculation of activity coefficients

Energy Technology Data Exchange (ETDEWEB)

Jaussaud, C.; Sorel, C

2004-07-01

Activity coefficients must be estimated to accurately quantify the extraction equilibrium involved in spent fuel reprocessing. For these calculations, binary data are required for each electrolyte over a concentration range sometimes exceeding the maximum solubility. The activity coefficients must be extrapolated to model the behavior of binary supersaturated aqueous solution. According to the bibliography, the most suitable models are based on the local composition concept. (authors)

1. Reactor kinetics revisited: a coefficient based model (CBM)

International Nuclear Information System (INIS)

Ratemi, W.M.

2011-01-01

In this paper, a nuclear reactor kinetics model based on Guelph expansion coefficient calculation (Coefficients Based Model, CBM) for n groups of delayed neutrons is developed. The accompanying characteristic equation is a polynomial form of the Inhour equation with the same coefficients as the CBM kinetics model. Those coefficients depend on universal abc-values, which are determined by the type of fuel fueling a nuclear reactor. Furthermore, such coefficients are linearly dependent on the inserted reactivity. In this paper, the universal abc-values are presented symbolically, for the first time, as well as with their numerical values for U-235 fueled reactors for one, two, three, and six groups of delayed neutrons. Simulation studies for constant and variable reactivity insertions are made for the CBM kinetics model, and a comparison of results with numerical solutions of classical kinetics models for one, two, three, and six groups of delayed neutrons is presented. The results show good agreement, especially for single-step insertion of reactivity, with the advantage that the CBM solution does not encounter the stiffness problem accompanying the numerical solutions of the classical kinetics model. (author)

2. Bounds and Estimates for Transport Coefficients of Random and Porous Media with High Contrasts

International Nuclear Information System (INIS)

Berryman, J G

2004-01-01

Bounds on transport coefficients of random polycrystals of laminates are presented, including the well-known Hashin-Shtrikman bounds and some newly formulated bounds involving two formation factors for a two-component porous medium. Some new types of self-consistent estimates are then formulated based on the observed analytical structure both of these bounds and also of earlier self-consistent estimates (of the CPA or coherent potential approximation type). A numerical study is made, assuming first that the internal structure (i.e., the laminated grain structure) is not known, and then that it is known. The purpose of this aspect of the study is to attempt to quantify the differences in the predictions of properties of a system being modeled when such organized internal structure is present in the medium but detailed spatial correlation information may or (more commonly) may not be available. Some methods of estimating formation factors from data are also presented and then applied to a high-contrast fluid-permeability data set. Hashin-Shtrikman bounds are found to be very accurate estimates for low contrast heterogeneous media. But formation factor lower bounds are superior estimates for high contrast situations. The new self-consistent estimators also tend to agree better with data than either the bounds or the CPA estimates, which themselves tend to overestimate values for high contrast conducting composites

3. Partially linear varying coefficient models stratified by a functional covariate

KAUST Repository

Maity, Arnab

2012-10-01

We consider the problem of estimation in semiparametric varying coefficient models where the covariate modifying the varying coefficients is functional and is modeled nonparametrically. We develop a kernel-based estimator of the nonparametric component and a profiling estimator of the parametric component of the model and derive their asymptotic properties. Specifically, we show the consistency of the nonparametric functional estimates and derive the asymptotic expansion of the estimates of the parametric component. We illustrate the performance of our methodology using a simulation study and a real data application.

4. Analysis and implementation issues for the numerical approximation of parabolic equations with random coefficients

KAUST Repository

Nobile, Fabio; Tempone, Raul

2009-01-01

We consider the problem of numerically approximating statistical moments of the solution of a time-dependent linear parabolic partial differential equation (PDE), whose coefficients and/or forcing terms are spatially correlated random fields. The stochastic coefficients of the PDE are approximated by truncated Karhunen-Loève expansions driven by a finite number of uncorrelated random variables. After approximating the stochastic coefficients, the original stochastic PDE turns into a new deterministic parametric PDE of the same type, the dimension of the parameter set being equal to the number of random variables introduced. After proving that the solution of the parametric PDE problem is analytic with respect to the parameters, we consider global polynomial approximations based on tensor product, total degree or sparse polynomial spaces and constructed by either a Stochastic Galerkin or a Stochastic Collocation approach. We derive convergence rates for the different cases and present numerical results that show how these approaches are a valid alternative to the more traditional Monte Carlo Method for this class of problems. © 2009 John Wiley & Sons, Ltd.
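
As a rough, self-contained illustration of a truncated Karhunen-Loève-type expansion of a random coefficient, the sketch below uses hypothetical eigenpairs lambda_n = 1/n² and phi_n(x) = sin(n·pi·x) (not derived from any particular covariance kernel) and i.i.d. uniform random variables Y_n; it is not the paper's discretization:

```python
import math
import random

def draw_coefficients(N, rng):
    """Draw N i.i.d. uniform Y_n on [-sqrt(3), sqrt(3)]: zero mean, unit variance."""
    s = math.sqrt(3.0)
    return [rng.uniform(-s, s) for _ in range(N)]

def sample_field(x, y_coeffs, a0=1.0):
    """Evaluate one realization of the truncated expansion
    a(x) = a0 + sum_n sqrt(lambda_n) * phi_n(x) * Y_n
    at a point x in [0, 1], with illustrative eigenpairs
    lambda_n = 1/n**2 and phi_n(x) = sin(n*pi*x)."""
    return a0 + sum(
        math.sqrt(1.0 / n**2) * math.sin(n * math.pi * x) * y
        for n, y in enumerate(y_coeffs, start=1)
    )
```

Each call to draw_coefficients fixes one realization of the field; the deterministic parametric PDE of the abstract is then posed in the N variables Y_1, ..., Y_N.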

5. Analysis and implementation issues for the numerical approximation of parabolic equations with random coefficients

KAUST Repository

Nobile, Fabio

2009-11-05

We consider the problem of numerically approximating statistical moments of the solution of a time-dependent linear parabolic partial differential equation (PDE), whose coefficients and/or forcing terms are spatially correlated random fields. The stochastic coefficients of the PDE are approximated by truncated Karhunen-Loève expansions driven by a finite number of uncorrelated random variables. After approximating the stochastic coefficients, the original stochastic PDE turns into a new deterministic parametric PDE of the same type, the dimension of the parameter set being equal to the number of random variables introduced. After proving that the solution of the parametric PDE problem is analytic with respect to the parameters, we consider global polynomial approximations based on tensor product, total degree or sparse polynomial spaces and constructed by either a Stochastic Galerkin or a Stochastic Collocation approach. We derive convergence rates for the different cases and present numerical results that show how these approaches are a valid alternative to the more traditional Monte Carlo Method for this class of problems. © 2009 John Wiley & Sons, Ltd.

6. Van der Waals coefficients beyond the classical shell model

Energy Technology Data Exchange (ETDEWEB)

Tao, Jianmin, E-mail: jianmint@sas.upenn.edu [Department of Chemistry, University of Pennsylvania, Philadelphia, Pennsylvania 19104-6323 (United States); Fang, Yuan; Hao, Pan [Department of Physics and Engineering Physics, Tulane University, New Orleans, Louisiana 70118 (United States); Scuseria, G. E. [Department of Chemistry and Department of Physics and Astronomy, Rice University, Houston, Texas 77251-1892, USA and Department of Chemistry, Faculty of Science, King Abdulaziz University, Jeddah 21589 (Saudi Arabia); Ruzsinszky, Adrienn; Perdew, John P. [Department of Physics, Temple University, Philadelphia, Pennsylvania 19122 (United States)

2015-01-14

Van der Waals (vdW) coefficients can be accurately generated and understood by modelling the dynamic multipole polarizability of each interacting object. Accurate static polarizabilities are the key to accurate dynamic polarizabilities and vdW coefficients. In this work, we present and study in detail a hollow-sphere model for the dynamic multipole polarizability proposed recently by two of the present authors (JT and JPP) to simulate the vdW coefficients for inhomogeneous systems that allow for a cavity. The inputs to this model are the accurate static multipole polarizabilities and the electron density. A simplification of the full hollow-sphere model, the single-frequency approximation (SFA), circumvents the need for a detailed electron density and for a double numerical integration over space. We find that the hollow-sphere model in SFA is not only accurate for nanoclusters and cage molecules (e.g., fullerenes) but also yields vdW coefficients among atoms, fullerenes, and small clusters in good agreement with expensive time-dependent density functional calculations. However, the classical shell model (CSM), which inputs the static dipole polarizabilities and estimates the static higher-order multipole polarizabilities therefrom, is accurate for the higher-order vdW coefficients only when the interacting objects are large. For the lowest-order vdW coefficient C6, SFA and CSM are exactly the same. The higher-order (C8 and C10) terms of the vdW expansion can be almost as important as the C6 term in molecular crystals. Application to a variety of clusters shows that there is strong non-additivity of the long-range vdW interactions between nanoclusters.

7. Rock shape, restitution coefficients and rockfall trajectory modelling

Science.gov (United States)

Glover, James; Christen, Marc; Bühler, Yves; Bartelt, Perry

2014-05-01

Restitution coefficients are used in rockfall trajectory modelling to describe the ratio between incident and rebound velocities during ground impact. They are central to the problem of rockfall hazard analysis as they link rock mass characteristics to terrain properties. Using laboratory experiments as a guide, we first show that restitution coefficients exhibit a wide range of scatter, although the material properties of the rock and ground are constant. This leads us to the conclusion that restitution coefficients are poor descriptors of rock-ground interaction. The primary problem is that "apparent" restitution coefficients are applied at the rock's centre-of-mass and do not account for rock shape. An accurate description of the rock-ground interaction requires the contact forces to be applied at the rock surface with consideration of the momentary rock position and spin. This leads to a variety of rock motions including bouncing, sliding, skipping and rolling. Depending on the impact configuration a wide range of motions is possible. This explains the large scatter of apparent restitution coefficients. We present a rockfall model based on newly developed hard-contact algorithms which include the effects of rock shape and therefore is able to reproduce the results of different impact configurations. We simulate the laboratory experiments to show that it is possible to reproduce run-out and dispersion of different rock shapes using parameters obtained from independent tests. Although this is a step forward in rockfall trajectory modelling, the problem of parameterising real terrain remains.

8. Modelling the change in the oxidation coefficient during the aerobic ...

African Journals Online (AJOL)

In this work the aerobic degradation of phenol by acclimated activated sludge was studied. Results demonstrate that while the phenol removal rate by acclimated activated sludge follows the Monod model, the oxygen uptake rate obeys a Haldane-type equation. The phenol oxidation coefficient obtained at different initial ...

9. Mechanistic model for dispersion coefficients in bubble column

CSIR Research Space (South Africa)

Skosana, PJ

2015-05-01

Full Text Available The model predicts axial and radial dispersion coefficients that are of the same order of magnitude as the reported data. Because the model is based on a description of the underlying physical phenomena, its validity and extrapolation are expected to be more reliable...

10. Modification of van Laar activity coefficient model

International Nuclear Information System (INIS)

Vakili-Nezhaad, G. R.; Modarress, H.; Mansoori, G. A.

2001-01-01

Based on statistical and mechanical arguments, the original van Laar activity coefficient model has been improved by reasonable assumptions. This modification has been made by replacing the van der Waals equation of state with the Redlich-Kwong equation of state in the formulation of van Laar, with consistent mixing rules for the energy and volume parameters of this equation of state (a_mix, b_mix). Other equations of state, such as the Soave modification of the Redlich-Kwong equation of state and the Peng-Robinson and Mohsen-Nia, Modarress and Mansoori equations of state, have been introduced in the formulation of van Laar for the activity coefficients of the components present in binary liquid mixtures, and their effects on the accuracy of the resultant activity coefficient models have been examined. The results of these revised models have been compared with the experimental data, and it was found that the Redlich-Kwong equation of state, with the van der Waals mixing rules for its volume and energy parameters, is the best choice among these equations of state. In addition, it improves the original van Laar activity coefficient model and therefore yields better agreement with the experimental data.

11. Application of QMC methods to PDEs with random coefficients : a survey of analysis and implementation

KAUST Repository

Kuo, Frances; Dick, Josef; Le Gia, Thong; Nichols, James; Sloan, Ian; Graham, Ivan; Scheichl, Robert; Nuyens, Dirk; Schwab, Christoph

2016-01-01

Many papers have been written on this topic using a variety of methods. QMC methods are relatively new to this application area. I will consider different models for the randomness (uniform versus lognormal) and contrast different QMC algorithms (single-level ...

12. Using Multisite Experiments to Study Cross-Site Variation in Treatment Effects: A Hybrid Approach with Fixed Intercepts and A Random Treatment Coefficient

Science.gov (United States)

Bloom, Howard S.; Raudenbush, Stephen W.; Weiss, Michael J.; Porter, Kristin

2017-01-01

The present article considers a fundamental question in evaluation research: "By how much do program effects vary across sites?" The article first presents a theoretical model of cross-site impact variation and a related estimation model with a random treatment coefficient and fixed site-specific intercepts. This approach eliminates…

13. The influence of numerical models on determining the drag coefficient

Directory of Open Access Journals (Sweden)

Dobeš Josef

2014-03-01

Full Text Available The paper deals with numerical modelling of the aerodynamic drag coefficient of a body in the transition from laminar to turbulent flow regimes, where the selection of a suitable numerical model is problematic. Selected computational models are tested on the basic problem of flow around a simple body, a sphere. The drag coefficient values obtained by numerical simulation with each model are compared with the curve of drag coefficient vs. Reynolds number for a sphere. Next, the dependency of Strouhal number on Reynolds number is evaluated: the vortex shedding frequencies for a given speed are obtained numerically and experimentally, and the values are then compared for each numerical model and the experiment. The aim is to identify trends for selecting an appropriate numerical model for flow-around-body problems in which a precise description of the flow field around the obstacle is used to define the acoustic noise source. Numerical modelling is performed by the finite volume method using a CFD code.
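
For orientation, the reference curve that such simulations are compared against can be approximated by the standard Schiller-Naumann empirical correlation for a smooth sphere — a textbook formula, not one of the paper's CFD models:

```python
def sphere_drag_coefficient(Re):
    """Empirical drag coefficient of a smooth sphere.

    Uses the Schiller-Naumann correlation Cd = 24/Re * (1 + 0.15*Re**0.687)
    for Re < ~1000, and the roughly constant Newton-regime value 0.44 above
    that (before the drag crisis near Re ~ 3e5)."""
    if Re <= 0:
        raise ValueError("Reynolds number must be positive")
    if Re < 1000:
        return 24.0 / Re * (1.0 + 0.15 * Re**0.687)
    return 0.44
```

In the Stokes limit (small Re) the correlation approaches the analytic 24/Re; plotting it against simulated values is a quick sanity check on a turbulence-model choice.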

14. Distributing Correlation Coefficients of Linear Structure-Activity/Property Models

Directory of Open Access Journals (Sweden)

Sorana D. BOLBOACA

2011-12-01

Full Text Available Quantitative structure-activity/property relationships are mathematical relationships linking chemical structure and activity/property in a quantitative manner. These in silico approaches are frequently used to reduce animal testing and risk assessment, as well as to increase time- and cost-effectiveness in the characterization and identification of active compounds. The aim of our study was to investigate the distribution pattern of correlation coefficients associated with simple linear relationships linking compound structure to activity. A set of the most common ordnance compounds found at naval facilities, with a limited data set spanning a range of toxicities to the aquatic ecosystem, and a set of seven properties were studied. Statistically significant models were selected and investigated. The probability density function of the correlation coefficients was investigated using a series of possible continuous distribution laws. Almost 48% of the correlation coefficients proved to fit a Beta distribution, 40% a Generalized Pareto distribution, and 12% a Pert distribution.
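
A lightweight way to fit a Beta distribution to a sample of correlation coefficients is the method of moments; this is a sketch, not necessarily the fitting criterion used in the study:

```python
import statistics

def beta_method_of_moments(xs):
    """Method-of-moments estimates (alpha, beta) for a Beta distribution
    fitted to values in (0, 1).

    Matches the sample mean m and sample variance v to the Beta moments:
    alpha = m*c, beta = (1-m)*c with c = m*(1-m)/v - 1."""
    m = statistics.fmean(xs)
    v = statistics.variance(xs)
    if not 0 < m < 1 or v <= 0 or v >= m * (1 - m):
        raise ValueError("sample moments incompatible with a Beta distribution")
    c = m * (1 - m) / v - 1
    return m * c, (1 - m) * c
```

By construction the fitted distribution reproduces the sample mean, since alpha/(alpha+beta) = m.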

15. Semi-analytical Model for Estimating Absorption Coefficients of Optically Active Constituents in Coastal Waters

Science.gov (United States)

Wang, D.; Cui, Y.

2015-12-01

The objectives of this paper are to validate the applicability of a multi-band quasi-analytical algorithm (QAA) for retrieving absorption coefficients of optically active constituents in turbid coastal waters, and to further improve the retrieval using a proposed semi-analytical model (SAA). Unlike the QAA model, in which ap(531) and ag(531) are derived from the empirical retrieval results of a(531) and a(551), the SAA model derives ap(531) and ag(531) semi-analytically. The two models are calibrated and evaluated against datasets taken from 19 independent cruises in the West Florida Shelf in 1999-2003, provided by SeaBASS. The results indicate that the SAA model outperforms the QAA model in absorption retrieval. Use of the SAA model in retrieving absorption coefficients of optically active constituents from the West Florida Shelf reduces the random uncertainty of estimation by more than 23.05% relative to the QAA model. This study demonstrates the potential of the SAA model for estimating absorption coefficients of optically active constituents even in turbid coastal waters. Keywords: Remote sensing; Coastal Water; Absorption Coefficient; Semi-analytical Model

16. Random matrix theory analysis of cross-correlations in the US stock market: Evidence from Pearson’s correlation coefficient and detrended cross-correlation coefficient

Science.gov (United States)

Wang, Gang-Jin; Xie, Chi; Chen, Shou; Yang, Jiao-Jiao; Yang, Ming-Yan

2013-09-01

In this study, we first build two empirical cross-correlation matrices in the US stock market by two different methods, namely the Pearson’s correlation coefficient and the detrended cross-correlation coefficient (DCCA coefficient). Then, combining the two matrices with the method of random matrix theory (RMT), we mainly investigate the statistical properties of cross-correlations in the US stock market. We choose the daily closing prices of 462 constituent stocks of S&P 500 index as the research objects and select the sample data from January 3, 2005 to August 31, 2012. In the empirical analysis, we examine the statistical properties of cross-correlation coefficients, the distribution of eigenvalues, the distribution of eigenvector components, and the inverse participation ratio. From the two methods, we find some new results of the cross-correlations in the US stock market in our study, which are different from the conclusions reached by previous studies. The empirical cross-correlation matrices constructed by the DCCA coefficient show several interesting properties at different time scales in the US stock market, which are useful to the risk management and optimal portfolio selection, especially to the diversity of the asset portfolio. It will be an interesting and meaningful work to find the theoretical eigenvalue distribution of a completely random matrix R for the DCCA coefficient because it does not obey the Marčenko-Pastur distribution.
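
A minimal sketch of the DCCA coefficient computation for two series, with linear detrending in non-overlapping boxes of size n (illustrative only, not the authors' code):

```python
import itertools
import math

def _detrended_residuals(profile, n):
    """Residuals after a least-squares linear detrend in each
    non-overlapping box of size n of the integrated profile."""
    out = []
    t = list(range(n))
    tbar = sum(t) / n
    stt = sum((ti - tbar) ** 2 for ti in t)
    for start in range(0, len(profile) - n + 1, n):
        seg = profile[start:start + n]
        ybar = sum(seg) / n
        slope = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, seg)) / stt
        icept = ybar - slope * tbar
        out.append([yi - (icept + slope * ti) for ti, yi in zip(t, seg)])
    return out

def dcca_coefficient(x, y, n=4):
    """DCCA cross-correlation coefficient at scale n:
    rho_DCCA = F2_xy / (F_x * F_y), computed on the cumulative-sum
    profiles of x and y with box-wise linear detrending."""
    X = list(itertools.accumulate(x))
    Y = list(itertools.accumulate(y))
    rx = _detrended_residuals(X, n)
    ry = _detrended_residuals(Y, n)
    f2_xy = sum(sum(a * b for a, b in zip(sa, sb)) for sa, sb in zip(rx, ry))
    f2_xx = sum(sum(a * a for a in sa) for sa in rx)
    f2_yy = sum(sum(b * b for b in sb) for sb in ry)
    return f2_xy / math.sqrt(f2_xx * f2_yy)
```

Like the Pearson coefficient, rho_DCCA lies in [-1, 1], but it is computed scale by scale, which is what allows the time-scale-dependent cross-correlation structure discussed in the abstract.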

17. Vertical random variability of the distribution coefficient in the soil and its effect on the migration of fallout radionuclides

International Nuclear Information System (INIS)

Bunzl, K.

2002-01-01

In the field, the distribution coefficient, K_d, for the sorption of a radionuclide by the soil cannot be expected to be constant. Even in a well-defined soil horizon, K_d will vary stochastically in the horizontal as well as the vertical direction around a mean value. While the horizontal random variability of K_d produces a pronounced tailing effect in the concentration depth profile of a fallout radionuclide, much less is known about the corresponding effect of the vertical random variability. To analyze this effect theoretically, the classical convection-dispersion model in combination with the random-walk particle method was applied. The concentration depth profile of a radionuclide was calculated one year after deposition, assuming constant values of the pore water velocity, the diffusion/dispersion coefficient, and the distribution coefficient (K_d = 100 cm³/g), and exhibiting a vertical variability of K_d according to a log-normal distribution with a geometric mean of 100 cm³/g and a coefficient of variation of CV = 0.53. The results show that these two concentration depth profiles are only slightly different: the location of the peak is shifted somewhat upwards, and the dispersion of the concentration depth profile is slightly larger. A substantial tailing effect of the concentration depth profile is not perceivable. Especially with respect to the location of the peak, a very good approximation of the concentration depth profile is obtained if the arithmetic mean of the K_d values (K_d = 113 cm³/g) and a slightly increased dispersion coefficient are used in the analytical solution of the classical convection-dispersion equation with constant K_d. The evaluation of the observed concentration depth profile with the analytical solution of the classical convection-dispersion equation with constant parameters will, within the usual experimental limits, hardly reveal the presence of a log-normal random distribution of K_d in the vertical direction.
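
The random-walk particle approach can be sketched as follows. All parameter values here are illustrative, not the paper's; a per-particle lognormal K_d enters through the retardation factor R = 1 + (rho_b/theta)·K_d:

```python
import math
import random

def simulate_depths(n_particles=500, t=1.0, dt=0.01,
                    v=10.0, D=1.0, rho_b=1.3, theta=0.3,
                    kd_gmean=100.0, cv=0.53, seed=1):
    """Random-walk particle sketch of 1-D convection-dispersion with a
    per-particle lognormal distribution coefficient Kd (geometric mean
    kd_gmean, coefficient of variation cv).  Each particle advances with
    retarded drift (v/R)*dt plus a diffusive step sqrt(2*D/R*dt)*N(0,1).
    Parameter values are hypothetical placeholders."""
    rng = random.Random(seed)
    sigma = math.sqrt(math.log(1.0 + cv**2))   # lognormal shape from the CV
    steps = int(t / dt)
    depths = []
    for _ in range(n_particles):
        kd = math.exp(math.log(kd_gmean) + sigma * rng.gauss(0.0, 1.0))
        R = 1.0 + rho_b / theta * kd           # retardation factor
        z = 0.0
        for _ in range(steps):
            z += (v / R) * dt + math.sqrt(2.0 * D / R * dt) * rng.gauss(0.0, 1.0)
        depths.append(z)
    return depths
```

A histogram of the returned depths approximates the concentration depth profile; fixing kd to a single constant reproduces the constant-K_d reference case the abstract compares against.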

18. Basis adaptation and domain decomposition for steady-state partial differential equations with random coefficients

Energy Technology Data Exchange (ETDEWEB)

Tipireddy, R.; Stinis, P.; Tartakovsky, A. M.

2017-12-01

We present a novel approach for solving steady-state stochastic partial differential equations (PDEs) with high-dimensional random parameter space. The proposed approach combines spatial domain decomposition with basis adaptation for each subdomain. The basis adaptation is used to address the curse of dimensionality by constructing an accurate low-dimensional representation of the stochastic PDE solution (probability density function and/or its leading statistical moments) in each subdomain. Restricting the basis adaptation to a specific subdomain affords finding a locally accurate solution. Then, the solutions from all of the subdomains are stitched together to provide a global solution. We support our construction with numerical experiments for a steady-state diffusion equation with a random spatially dependent coefficient. Our results show that highly accurate global solutions can be obtained with significantly reduced computational costs.

19. Convergence of quasi-optimal Stochastic Galerkin methods for a class of PDEs with random coefficients

KAUST Repository

Beck, Joakim; Nobile, Fabio; Tamellini, Lorenzo; Tempone, Raul

2014-01-01

In this work we consider quasi-optimal versions of the Stochastic Galerkin method for solving linear elliptic PDEs with stochastic coefficients. In particular, we consider the case of a finite number N of random inputs and an analytic dependence of the solution of the PDE with respect to the parameters in a polydisc of the complex plane CN. We show that a quasi-optimal approximation is given by a Galerkin projection on a weighted (anisotropic) total degree space and prove a (sub)exponential convergence rate. As a specific application we consider a thermal conduction problem with non-overlapping inclusions of random conductivity. Numerical results show the sharpness of our estimates. © 2013 Elsevier Ltd. All rights reserved.

20. Convergence of quasi-optimal Stochastic Galerkin methods for a class of PDEs with random coefficients

KAUST Repository

Beck, Joakim

2014-03-01

In this work we consider quasi-optimal versions of the Stochastic Galerkin method for solving linear elliptic PDEs with stochastic coefficients. In particular, we consider the case of a finite number N of random inputs and an analytic dependence of the solution of the PDE with respect to the parameters in a polydisc of the complex plane CN. We show that a quasi-optimal approximation is given by a Galerkin projection on a weighted (anisotropic) total degree space and prove a (sub)exponential convergence rate. As a specific application we consider a thermal conduction problem with non-overlapping inclusions of random conductivity. Numerical results show the sharpness of our estimates. © 2013 Elsevier Ltd. All rights reserved.

1. Regression Models for Predicting Force Coefficients of Aerofoils

Directory of Open Access Journals (Sweden)

Mohammed ABDUL AKBAR

2015-09-01

Full Text Available Renewable sources of energy are attractive and advantageous in many different ways. Among renewable energy sources, wind energy is the fastest growing type. Among wind energy converters, vertical axis wind turbines (VAWTs) have received renewed interest in the past decade due to some of the advantages they possess over their horizontal axis counterparts. VAWTs have evolved into complex 3-D shapes. A key component in predicting the output of VAWTs through analytical studies is obtaining the values of the lift and drag coefficients, which are functions of the aerofoil shape, the 'angle of attack' of the wind and the Reynolds number of the flow. Sandia National Laboratories have carried out extensive experiments on aerofoils for Reynolds numbers in the range experienced by VAWTs. The volume of experimental data thus obtained is huge. The current paper discusses three regression models developed so that lift and drag coefficients can be found with simple formulas, without having to deal with the bulk of the data. Drag and lift coefficients were successfully estimated by the regression models, with R2 values as high as 0.98.
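
As a minimal stand-in for such regression models, here is an ordinary least-squares fit in one predictor together with its R² (the paper's actual models involve angle of attack and Reynolds number as predictors):

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y ~ a + b*x.

    Returns (intercept a, slope b, R^2), where R^2 = 1 - SS_res/SS_tot
    is the same goodness-of-fit measure quoted in the abstract."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = ybar - b * xbar
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - ybar) ** 2 for y in ys)
    return a, b, 1.0 - ss_res / ss_tot
```

Extending this to several predictors (angle of attack, Reynolds number) means solving the corresponding normal equations, but the R² diagnostic is computed the same way.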

2. Modeling the Design Flow Coefficient of a Centrifugal Compressor Impeller

Directory of Open Access Journals (Sweden)

A. A. Drozdov

2017-01-01

3. Guideline for Adopting the Local Reaction Assumption for Porous Absorbers in Terms of Random Incidence Absorption Coefficients

DEFF Research Database (Denmark)

Jeong, Cheol-Ho

2011-01-01

The random incidence acoustical characteristics of typical building elements made of porous materials are examined, assuming extended and local reaction. For each surface reaction, five well-established wave propagation models, the Delany-Bazley, Miki, Beranek, Allard-Champoux, and Biot model, are employed. Effects of the flow resistivity and the absorber thickness on the difference between the two surface reaction models are examined and discussed. For a porous absorber backed by a rigid surface, the assumption of local reaction always underestimates the random incidence absorption coefficient, and the local reaction models give errors of less than 10% if the thickness exceeds 120 mm for a flow resistivity of 5000 Ns/m⁴. As the flow resistivity doubles, a decrease in the required thickness by 25 mm is observed to achieve the same amount of error. For an absorber backed by an air gap, the thickness ratio between the material ...

4. The coefficient of restitution of pressurized balls: a mechanistic model

Science.gov (United States)

Georgallas, Alex; Landry, Gaëtan

2016-01-01

Pressurized, inflated balls used in professional sports are regulated so that their behaviour upon impact can be anticipated and allow the game to have its distinctive character. However, the dynamics governing the impacts of such balls, even on stationary hard surfaces, can be extremely complex. The energy transformations, which arise from the compression of the gas within the ball and from the shear forces associated with the deformation of the wall, are examined in this paper. We develop a simple mechanistic model of the dependence of the coefficient of restitution, e, upon both the gauge pressure, P_G, of the gas and the shear modulus, G, of the wall. The model is validated using the results from a simple series of experiments using three different sports balls. The fits to the data are extremely good for P_G > 25 kPa and consistent values are obtained for the value of G for the wall material. As far as the authors can tell, this simple, mechanistic model of the pressure dependence of the coefficient of restitution is the first in the literature. Keywords: coefficient of restitution, dynamics, inflated balls, pressure, impact model
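
The coefficient of restitution itself is commonly measured with a drop test onto a rigid surface, where free-fall kinematics give e = v_out/v_in = sqrt(h_rebound/h_drop). A small helper for that standard measurement (not the paper's pressure-dependence model):

```python
import math

def restitution_from_drop(h_drop, h_rebound):
    """Coefficient of restitution from a drop test.

    For free fall from h_drop and rebound to h_rebound on a rigid surface,
    e = v_out/v_in = sqrt(h_rebound/h_drop), since v = sqrt(2*g*h) both ways."""
    if not 0 < h_rebound <= h_drop:
        raise ValueError("expect 0 < h_rebound <= h_drop")
    return math.sqrt(h_rebound / h_drop)
```

Measuring e this way over a range of gauge pressures P_G is how data for a model like e(P_G, G) can be collected.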

5. Measurement and modeling of interface heat transfer coefficients

International Nuclear Information System (INIS)

Rollett, A.D.; Lewis, H.D.; Dunn, P.S.

1985-01-01

The results of preliminary work on the modeling and measurement of the heat transfer coefficients of metal/mold interfaces are reported. The system investigated is the casting of uranium in graphite molds. The motivation for the work is primarily to improve the accuracy of process modeling of prototype mold designs at the Los Alamos Foundry. The evolution in design of a suitable mold for unidirectional solidification is described, illustrating the value of simulating mold designs prior to use. Experiment indicated a heat transfer coefficient of 2 kW/m²/K both with and without superheat. It was possible to distinguish between solidification due to the mold and that due to radiative heat loss. This permitted an experimental estimate of the emissivity, epsilon = 0.2, of the solidified metal.

6. Tensor models, Kronecker coefficients and permutation centralizer algebras

Science.gov (United States)

Geloun, Joseph Ben; Ramgoolam, Sanjaye

2017-11-01

We show that the counting of observables and correlators for a 3-index tensor model are organized by the structure of a family of permutation centralizer algebras. These algebras are shown to be semi-simple and their Wedderburn-Artin decompositions into matrix blocks are given in terms of Clebsch-Gordan coefficients of symmetric groups. The matrix basis for the algebras also gives an orthogonal basis for the tensor observables which diagonalizes the Gaussian two-point functions. The centres of the algebras are associated with correlators which are expressible in terms of Kronecker coefficients (Clebsch-Gordan multiplicities of symmetric groups). The color-exchange symmetry present in the Gaussian model, as well as a large class of interacting models, is used to refine the description of the permutation centralizer algebras. This discussion is extended to a general number of colors d: it is used to prove the integrality of an infinite family of number sequences related to color-symmetrizations of colored graphs, and expressible in terms of symmetric group representation theory data. Generalizing a connection between matrix models and Belyi maps, correlators in Gaussian tensor models are interpreted in terms of covers of singular 2-complexes. There is an intriguing difference, between matrix and higher rank tensor models, in the computational complexity of superficially comparable correlators of observables parametrized by Young diagrams.

7. Application of random regression models to the genetic evaluation ...

African Journals Online (AJOL)

The model included fixed regression on AM (range from 30 to 138 mo) and the effect of herd-measurement date concatenation. Random parts of the model were RRM coefficients for additive and permanent environmental effects, while residual effects were modelled to account for heterogeneity of variance by AY. Estimates ...

8. Modeling maximum daily temperature using a varying coefficient regression model

Science.gov (United States)

Han Li; Xinwei Deng; Dong-Yum Kim; Eric P. Smith

2014-01-01

Relationships between stream water and air temperatures are often modeled using linear or nonlinear regression methods. Despite a strong relationship between water and air temperatures and a variety of models that are effective for data summarized on a weekly basis, such models did not yield consistently good predictions for summaries such as daily maximum temperature...

9. Quasi optimal and adaptive sparse grids with control variates for PDEs with random diffusion coefficient

KAUST Repository

Tamellini, Lorenzo

2016-01-05

In this talk we discuss possible strategies to minimize the impact of the curse of dimensionality when building sparse-grid approximations of a multivariate function u = u(y_1, ..., y_N). More precisely, we present a knapsack approach, in which we estimate the cost and the error reduction contribution of each possible component of the sparse grid, and then choose the components with the highest error reduction/cost ratio. The estimates of the error reduction are obtained either by a mixed a-priori/a-posteriori approach, in which we first derive a theoretical bound and then tune it with some inexpensive auxiliary computations (resulting in the so-called quasi-optimal sparse grids), or by a fully a-posteriori approach (obtaining the so-called adaptive sparse grids). This framework is very general and can be used to build quasi-optimal/adaptive sparse grids on bounded and unbounded domains (e.g. u depending on uniform and normal random distributions for y_n), using both nested and non-nested families of univariate collocation points. We present some theoretical convergence results as well as numerical results showing the efficiency of the proposed approach for the approximation of the solution of elliptic PDEs with random diffusion coefficients. In this context, to treat the case of rough permeability fields in which a sparse grid approach may not be suitable, we propose to use the sparse grids as a control variate in a Monte Carlo simulation.
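
The knapsack selection described above can be sketched as a greedy ranking by error-reduction/cost ratio; the candidate estimates below are hypothetical stand-ins for the a-priori/a-posteriori estimates of the talk, not the authors' implementation:

```python
def select_components(candidates, work_budget):
    """Greedy knapsack heuristic for sparse-grid construction.

    Each candidate is a (name, error_reduction, cost) tuple, with the
    error-reduction and cost values assumed to come from some estimator.
    Candidates are ranked by error_reduction/cost and accepted while the
    work budget allows."""
    chosen, spent = [], 0.0
    ranked = sorted(candidates, key=lambda c: c[1] / c[2], reverse=True)
    for name, _gain, cost in ranked:
        if spent + cost <= work_budget:
            chosen.append(name)
            spent += cost
    return chosen, spent
```

In the quasi-optimal variant the (error_reduction, cost) pairs come from tuned theoretical bounds; in the adaptive variant they are measured a posteriori, but the selection rule is the same.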

10. Measurements and modeling of gain coefficients for neodymium laser glasses

International Nuclear Information System (INIS)

Linford, G.J.; Saroyan, R.A.; Trenholme, J.B.; Weber, M.J.

1979-01-01

Small-signal gain coefficients are reported for neodymium in silicate, phosphate, fluorophosphate, and fluoroberyllate laser glasses. Measurements were made in a disk amplifier under identical conditions. Using spectroscopic data as the input, amplifier gain is calculated as a function of flashlamp energy, pumping pulse duration, disk thickness, and Nd doping. The agreement between predicted and measured gains is generally within ±10 percent, consistent with experimental uncertainties in the model and the parameters used. The operating conditions that optimize amplifier performance and efficiency for a given laser glass may be found using spectroscopic data alone. This process can be extended to derive the most cost-effective staging of amplifier chains for fusion lasers. A discussion of the model and examples of calculations are presented.

11. Measurement of model coefficients of skin sympathetic vasoconstriction

International Nuclear Information System (INIS)

Severens, Natascha M W; Van Marken Lichtenbelt, Wouter D; Frijns, Arjan J H; Kingma, Boris R M; De Mol, Bas A J M; Van Steenhoven, Anton A

2010-01-01

Many researchers have already attempted to model vasoconstriction responses, commonly using the mathematical representation proposed by Stolwijk (1971 NASA Contractor Report CR-1855 (Washington, DC: NASA)). Model makers based the parameter values in this formulation either on estimates or on attributing the difference between their passive models and measurement data fully to thermoregulation. These methods are very sensitive to errors. This study aims to present a reliable method for determining physiological values in the vasoconstriction formulation. An experimental protocol was developed that enabled us to derive the local proportional amplification coefficients of the toe, leg and arm and the transient vasoconstrictor tone. Ten subjects participated in a cooling experiment. During the experiment, core temperature, skin temperature, skin perfusion, forearm blood flow and heart rate variability were measured. The contributions to the normalized amplification coefficient for vasoconstriction of the toe, leg and arm were 84%, 11% and 5%, respectively. Comparison with relative values in the literature showed that the estimated values of Stolwijk and the values mentioned by Tanabe et al (2002 Energy Build. 34 637-46) were comparable with our measured values, but the values of Gordon (1974 The response of a human temperature regulatory system model in the cold PhD Thesis, University of California, Santa Barbara) and Fiala et al (2001 Int. J. Biometeorol. 45 143-159) differed significantly. With the help of regression analysis, a relation was formulated between the error signal of the standardized core temperature and the vasoconstrictor tone. This relation was formulated in a generally applicable way, which means that it can be used for situations where vasoconstriction thresholds are shifted, such as under anesthesia or during motion sickness.

12. Absorption and scattering coefficient dependence of laser-Doppler flowmetry models for large tissue volumes

International Nuclear Information System (INIS)

Binzoni, T; Leung, T S; Ruefenacht, D; Delpy, D T

2006-01-01

Based on quasi-elastic scattering theory (and random walk on a lattice approach), a model of laser-Doppler flowmetry (LDF) has been derived which can be applied to measurements in large tissue volumes (e.g. when the interoptode distance is >30 mm). The model holds for a semi-infinite medium and takes into account the transport-corrected scattering coefficient and the absorption coefficient of the tissue, and the scattering coefficient of the red blood cells. The model holds for anisotropic scattering and for multiple scattering of the photons by the moving scatterers of finite size. In particular, it has also been possible to take into account the simultaneous presence of both Brownian and pure translational movements. An analytical and simplified version of the model has also been derived and its validity investigated, for the case of measurements in human skeletal muscle tissue. It is shown that at large optode spacing it is possible to use the simplified model, taking into account only a 'mean' light pathlength, to predict the blood flow related parameters. It is also demonstrated that the 'classical' blood volume parameter, derived from LDF instruments, may not represent the actual blood volume variations when the investigated tissue volume is large. The simplified model does not need knowledge of the tissue optical parameters and thus should allow the development of very simple and cost-effective LDF hardware

13. Alternative model of random surfaces

International Nuclear Information System (INIS)

Ambartzumian, R.V.; Sukiasian, G.S.; Savvidy, G.K.; Savvidy, K.G.

1992-01-01

We analyse models of triangulated random surfaces and demand that geometrically nearby configurations of these surfaces have close actions. The inclusion of this principle leads us to suggest a new action, which is a modified Steiner functional. General arguments, based on the Minkowski inequality, show that the maximal contribution to the partition function comes from surfaces close to the sphere. (orig.)

14. Randomized Item Response Theory Models

NARCIS (Netherlands)

Fox, Gerardus J.A.

2005-01-01

The randomized response (RR) technique is often used to obtain answers to sensitive questions. A new method is developed to measure latent variables using the RR technique, because direct questioning leads to biased results. Within the RR technique, the probability of the true response is modeled by

15. Analysis and computation of the elastic wave equation with random coefficients

KAUST Repository

2015-10-21

We consider the stochastic initial-boundary value problem for the elastic wave equation with random coefficients and deterministic data. We propose a stochastic collocation method for computing statistical moments of the solution or statistics of some given quantities of interest. We study the convergence rate of the error in the stochastic collocation method. In particular, we show that the rate of convergence depends on the regularity of the solution or the quantity of interest in the stochastic space, which is in turn related to the regularity of the deterministic data in the physical space and the type of the quantity of interest. We demonstrate that a fast rate of convergence is possible in two cases: for elastic wave solutions with highly regular data, and for some highly regular quantities of interest even in the presence of data of low regularity. We perform numerical examples, including a simplified earthquake, which confirm the analysis and show that the collocation method is a valid alternative to the more traditional Monte Carlo sampling method for approximating quantities with high stochastic regularity.

16. Random Intercept and Random Slope 2-Level Multilevel Models

Directory of Open Access Journals (Sweden)

2012-11-01

Full Text Available A random intercept model and a random intercept & random slope model, each carrying two levels of hierarchy in the population, are presented and compared with the traditional regression approach. The impact of students' satisfaction on their grade point average (GPA) was explored with and without controlling for teacher influence. The variation at level 1 can be controlled by introducing the higher levels of hierarchy into the model. The fanning of the fitted lines illustrates the variation of student grades around teachers.
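
A minimal numpy sketch of the two-level structure described above (all numbers hypothetical): each teacher receives a random intercept and a random slope around the fixed effects, and per-teacher least-squares fits reveal the fanning of regression lines.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-level data: students (level 1) nested within teachers (level 2).
n_teachers, n_students = 20, 30
gamma00, gamma10 = 2.5, 0.4          # fixed intercept and slope (GPA vs. satisfaction)
u0 = rng.normal(0, 0.3, n_teachers)  # random intercepts, one per teacher
u1 = rng.normal(0, 0.1, n_teachers)  # random slopes, one per teacher

x = rng.uniform(1, 5, (n_teachers, n_students))     # satisfaction scores
eps = rng.normal(0, 0.2, (n_teachers, n_students))  # level-1 residual
y = (gamma00 + u0[:, None]) + (gamma10 + u1[:, None]) * x + eps

# Per-teacher OLS fits: the spread of slopes produces the "fanning" of lines.
slopes = np.array([np.polyfit(x[j], y[j], 1)[0] for j in range(n_teachers)])
print(slopes.mean(), slopes.std())
```

In a traditional single-level regression these per-teacher differences would be absorbed into one residual term; the multilevel model separates them into level-2 variance components.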

17. Evaluation Procedures of Random Uncertainties in Theoretical Calculations of Cross Sections and Rate Coefficients

International Nuclear Information System (INIS)

Kokoouline, V.; Richardson, W.

2014-01-01

Uncertainties in theoretical calculations may include: • Systematic uncertainty: due to the applicability limits of the chosen model. • Random uncertainty: within a model, uncertainties of the model parameters result in uncertainties of the final results (such as cross sections). If the uncertainties of experimental and theoretical data are known, then for the purpose of data evaluation (to produce recommended data) one should combine the two data sets to produce best-guess data with the smallest possible uncertainty. In many situations it is possible to assess the accuracy of theoretical calculations, because theoretical models usually rely on parameters that are uncertain but not completely random, i.e. the uncertainties of the parameters of the models are approximately known. If there are one or several such parameters with corresponding uncertainties, even if some or all parameters are correlated, the above approach gives a conceptually simple way to calculate the uncertainties of the final cross sections (uncertainty propagation). Numerically, the statistical approach to uncertainty propagation can be computationally expensive. However, in situations where uncertainties are considered to be as important as the actual cross sections (for data validation or benchmark calculations, for example), such a numerical effort is justified. Having data from different sources (say, from theory and experiment), a systematic statistical approach allows one to compare the data and produce "unbiased" evaluated data with improved uncertainties, provided the uncertainties of the initial data from the different sources are available. Without uncertainties, data evaluation/validation becomes impossible. This is the reason why theoreticians should assess the accuracy of their calculations in one way or another. A statistical and systematic approach, similar to that described above, is preferable.
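
The parameter-to-result propagation described here can be sketched with a toy model. The functional form and all numbers below are illustrative (not from the record): sample the uncertain parameters, push each sample through the model, and read the output spread against a first-order linear propagation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "cross section" depending on two uncertain model parameters a and b.
def cross_section(E, a, b):
    return a / (E + b)

E = 2.0
a_samples = rng.normal(10.0, 0.5, 100_000)  # parameter a with known std. dev.
b_samples = rng.normal(1.0, 0.1, 100_000)   # parameter b with known std. dev.

# Monte Carlo uncertainty propagation: spread of the model output.
sigma = cross_section(E, a_samples, b_samples)
mean, std = sigma.mean(), sigma.std()

# First-order (linearized) propagation for comparison:
# d(sigma)/da = 1/(E+b),  d(sigma)/db = -a/(E+b)^2, evaluated at the means.
lin_std = np.sqrt((0.5 / 3.0)**2 + (10.0 * 0.1 / 3.0**2)**2)
print(mean, std, lin_std)
```

For a nearly linear model the two standard deviations agree closely; the Monte Carlo route remains valid when the model is strongly nonlinear or the parameters are correlated, at the cost of many model evaluations.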

18. Monte Carlo Finite Volume Element Methods for the Convection-Diffusion Equation with a Random Diffusion Coefficient

Directory of Open Access Journals (Sweden)

Qian Zhang

2014-01-01

Full Text Available The paper presents a framework for the construction of the Monte Carlo finite volume element method (MCFVEM) for the convection-diffusion equation with a random diffusion coefficient, which is described as a random field. We first approximate the continuous stochastic field by a finite number of random variables via the Karhunen-Loève expansion and transform the initial stochastic problem into a deterministic one with a parameter in high dimensions. Then we generate independent identically distributed approximations of the solution by sampling the coefficient of the equation and employing the finite volume element variational formulation. Finally, the Monte Carlo (MC) method is used to compute the corresponding sample averages. The statistical error is estimated analytically and experimentally. A quasi-Monte Carlo (QMC) technique with Sobol sequences is also used to accelerate convergence, and experiments indicate that it can improve the efficiency of the Monte Carlo method.
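
The Karhunen-Loève step can be illustrated in one dimension with plain numpy. The grid, correlation length, truncation order, and the lognormal transform are all assumptions for this sketch: discretize the covariance, keep the leading eigenpairs, and sample coefficient realizations.

```python
import numpy as np

rng = np.random.default_rng(2)

# 1-D sketch: truncated Karhunen-Loeve expansion of a Gaussian random field
# with exponential covariance, then Monte Carlo sampling of a (lognormal,
# hence positive) diffusion coefficient.
x = np.linspace(0.0, 1.0, 101)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)  # covariance matrix
vals, vecs = np.linalg.eigh(C)
idx = np.argsort(vals)[::-1][:10]                   # keep the 10 largest modes
lam, phi = vals[idx], vecs[:, idx]

def sample_coefficient():
    xi = rng.standard_normal(10)
    g = phi @ (np.sqrt(lam) * xi)   # truncated KL realization of the field
    return np.exp(g)                # strictly positive diffusion coefficient

samples = np.array([sample_coefficient() for _ in range(2000)])
mean_field = samples.mean(axis=0)   # MC sample average, as in the paper's last step
```

Each realization would then feed the deterministic finite volume element solver; here only the field sampling and averaging are shown.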

19. Exact solutions to a nonlinear dispersive model with variable coefficients

International Nuclear Information System (INIS)

Yin Jun; Lai Shaoyong; Qing Yin

2009-01-01

A mathematical technique based on an auxiliary differential equation and the symbolic computation system Maple is employed to investigate a prototypical and nonlinear K(n, n) equation with variable coefficients. The exact solutions to the equation are constructed analytically under various circumstances. It is shown that the variable coefficients and the exponent appearing in the equation determine the quantitative change in the physical structures of the solutions.

20. Global industrial impact coefficient based on random walk process and inter-country input-output table

Science.gov (United States)

Xing, Lizhi; Dong, Xianlei; Guan, Jun

2017-04-01

The input-output table is very comprehensive and detailed in describing a national economic system, containing supply and demand information among industrial sectors. Complex network theory, a method for measuring the structure of complex systems, can describe the internal structure of the research object by measuring structural indicators of the social and economic system, revealing the relationship between the inner hierarchy and the external economic function. This paper builds GIVCN-WIOT models based on the World Input-Output Database in order to depict the topological structure of the Global Value Chain (GVC), and assumes that the competitive advantage of a nation is equal to the overall impact of its domestic sectors on the GVC. From the perspective of econophysics, the Global Industrial Impact Coefficient (GIIC) is proposed to measure national competitiveness in gaining information superiority and intermediate interests. Analysis of the GIVCN-WIOT models yields several insights, including the following: (1) sectors with higher Random Walk Centrality contribute more to transmitting value streams within the global economic system; (2) the Half-Value Ratio can be used to measure the robustness of open-economy macroeconomics in the process of globalization; (3) the positive correlation between GIIC and GDP indicates that one country's global industrial impact can reveal its international competitive advantage.

1. Smooth random change point models.

Science.gov (United States)

van den Hout, Ardo; Muniz-Terrera, Graciela; Matthews, Fiona E

2011-03-15

Change point models are used to describe processes over time that show a change in direction. An example of such a process is cognitive ability, where a decline a few years before death is sometimes observed. A broken-stick model consists of two linear parts and a breakpoint where the two lines intersect. Alternatively, models can be formulated that imply a smooth change between the two linear parts. Change point models can be extended by adding random effects to account for variability between subjects. A new smooth change point model is introduced and examples are presented that show how change point models can be estimated using functions in R for mixed-effects models. Bayesian inference using WinBUGS is also discussed. The methods are illustrated using data from a population-based longitudinal study of ageing, the Cambridge City over 75 Cohort Study. The aim is to identify how many years before death individuals experience a change in the rate of decline of their cognitive ability. Copyright © 2010 John Wiley & Sons, Ltd.
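
The broken-stick mean and a smoothed variant can be written down directly. The smoother below uses a square-root softening of the kink, which is a common choice but only an assumption about the paper's exact formulation; the parameter values are illustrative.

```python
import numpy as np

# Broken-stick mean: two linear phases meeting at the change point tau.
# Slope before tau is b1 - b2, slope after tau is b1 + b2.
def broken_stick(t, b0, b1, b2, tau):
    return b0 + b1 * (t - tau) + b2 * np.abs(t - tau)

# Smooth variant: |t - tau| is replaced by sqrt((t - tau)^2 + eps),
# giving a differentiable transition whose width is controlled by eps.
def smooth_stick(t, b0, b1, b2, tau, eps=1.0):
    return b0 + b1 * (t - tau) + b2 * np.sqrt((t - tau)**2 + eps)

t = np.linspace(-10.0, 10.0, 201)   # e.g. years relative to the change point
y_sharp = broken_stick(t, 25.0, -0.2, -0.3, 0.0)   # slope +0.1 before, -0.5 after
y_smooth = smooth_stick(t, 25.0, -0.2, -0.3, 0.0, eps=0.5)
```

Far from the change point the two curves coincide; near it the smooth version rounds the corner. Random effects would then let b0, b1, b2 and tau vary across subjects.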

2. Backward Stochastic Riccati Equations and Infinite Horizon L-Q Optimal Control with Infinite Dimensional State Space and Random Coefficients

International Nuclear Information System (INIS)

Guatteri, Giuseppina; Tessitore, Gianmario

2008-01-01

We study the Riccati equation arising in a class of quadratic optimal control problems with an infinite dimensional stochastic differential state equation and an infinite horizon cost functional. We allow the coefficients, both in the state equation and in the cost, to be random. In such a context, backward stochastic Riccati equations are backward stochastic differential equations on the whole positive real axis that involve quadratic non-linearities and take values in a non-Hilbertian space. We prove existence of a minimal non-negative solution and, under additional assumptions, its uniqueness. We show that such a solution allows one to perform the synthesis of the optimal control and investigate its attractivity properties. Finally, the case where the coefficients are stationary is addressed and an example concerning a controlled wave equation in random media is proposed

3. Modeling Concordance Correlation Coefficient for Longitudinal Study Data

Science.gov (United States)

Ma, Yan; Tang, Wan; Yu, Qin; Tu, X. M.

2010-01-01

Measures of agreement are used in a wide range of behavioral, biomedical, psychosocial, and health-care related research to assess the reliability of diagnostic tests, psychometric properties of instruments, fidelity of psychosocial interventions, and accuracy of proxy outcomes. The concordance correlation coefficient (CCC) is a popular measure of…
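
For reference, Lin's concordance correlation coefficient has a compact closed form. A minimal numpy sketch on toy data (population/biased variances are used, a standard convention for this formula):

```python
import numpy as np

# Lin's concordance correlation coefficient between two sets of measurements:
# CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2)
def ccc(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, bias=True)[0, 1]
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean())**2)

a = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(ccc(a, a))        # → 1.0 (perfect agreement)
print(ccc(a, a + 2.0))  # → 0.5 (perfect correlation, but a constant shift)
```

Unlike Pearson's correlation, the CCC penalizes both location and scale shifts, which is why the shifted series scores below 1 despite being perfectly correlated.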

4. Optimized Finite-Difference Coefficients for Hydroacoustic Modeling

Science.gov (United States)

Preston, L. A.

2014-12-01

Responsible utilization of marine renewable energy sources through the use of current energy converter (CEC) and wave energy converter (WEC) devices requires an understanding of the noise generation and propagation from these systems in the marine environment. Acoustic noise produced by rotating turbines, for example, could adversely affect marine animals and human-related marine activities if not properly understood and mitigated. We are utilizing a 3-D finite-difference acoustic simulation code developed at Sandia that can accurately propagate noise in the complex bathymetry in the near-shore to open ocean environment. As part of our efforts to improve computation efficiency in the large, high-resolution domains required in this project, we investigate the effects of using optimized finite-difference coefficients on the accuracy of the simulations. We compare accuracy and runtime of various finite-difference coefficients optimized via criteria such as maximum numerical phase speed error, maximum numerical group speed error, and L-1 and L-2 norms of weighted numerical group and phase speed errors over a given spectral bandwidth. We find that those coefficients optimized for L-1 and L-2 norms are superior in accuracy to those based on maximal error and can produce runtimes of 10% of the baseline case, which uses Taylor Series finite-difference coefficients at the Courant time step limit. We will present comparisons of the results for the various cases evaluated as well as recommendations for utilization of the cases studied. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

5. The cluster model and the generalized Brody-Moshinsky coefficients

International Nuclear Information System (INIS)

Silvestre-Brac, B.

1985-01-01

Cluster theories, which rigorously eliminate the centre of mass motion, need intrinsic cluster coordinates. It is shown that the Jacobi coordinates of the various clusters are related by an orthogonal transformation and that the use of generalized Brody-Moshinsky coefficients allows an exact calculation of the exchange kernels. This procedure is illustrated by the description of nucleon-nucleon interaction in terms of constituent quarks

6. Comparison of Experimental Methods for Estimating Matrix Diffusion Coefficients for Contaminant Transport Modeling

Energy Technology Data Exchange (ETDEWEB)

Telfeyan, Katherine Christina [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ware, Stuart Douglas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Reimus, Paul William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Birdsell, Kay Hanson [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

2017-11-06

Diffusion cell and diffusion wafer experiments were conducted to compare methods for estimating matrix diffusion coefficients in rock core samples from Pahute Mesa at the Nevada Nuclear Security Site (NNSS). A diffusion wafer method, in which a solute diffuses out of a rock matrix that is pre-saturated with water containing the solute, is presented as a simpler alternative to the traditional through-diffusion (diffusion cell) method. Both methods yielded estimates of matrix diffusion coefficients that were within the range of values previously reported for NNSS volcanic rocks. The difference between the estimates of the two methods ranged from 14 to 30%, and there was no systematic high or low bias of one method relative to the other. From a transport modeling perspective, these differences are relatively minor when one considers that other variables (e.g., fracture apertures, fracture spacings) influence matrix diffusion to a greater degree and tend to have greater uncertainty than diffusion coefficients. For the same relative random errors in concentration measurements, the diffusion cell method yields diffusion coefficient estimates that have less uncertainty than the wafer method. However, the wafer method is easier and less costly to implement and yields estimates more quickly, thus allowing a greater number of samples to be analyzed for the same cost and time. Given the relatively good agreement between the methods, and the lack of any apparent bias between the methods, the diffusion wafer method appears to offer advantages over the diffusion cell method if better statistical representation of a given set of rock samples is desired.

7. Comparison of experimental methods for estimating matrix diffusion coefficients for contaminant transport modeling

Science.gov (United States)

Telfeyan, Katherine; Ware, S. Doug; Reimus, Paul W.; Birdsell, Kay H.

2018-02-01

Diffusion cell and diffusion wafer experiments were conducted to compare methods for estimating effective matrix diffusion coefficients in rock core samples from Pahute Mesa at the Nevada Nuclear Security Site (NNSS). A diffusion wafer method, in which a solute diffuses out of a rock matrix that is pre-saturated with water containing the solute, is presented as a simpler alternative to the traditional through-diffusion (diffusion cell) method. Both methods yielded estimates of effective matrix diffusion coefficients that were within the range of values previously reported for NNSS volcanic rocks. The difference between the estimates of the two methods ranged from 14 to 30%, and there was no systematic high or low bias of one method relative to the other. From a transport modeling perspective, these differences are relatively minor when one considers that other variables (e.g., fracture apertures, fracture spacings) influence matrix diffusion to a greater degree and tend to have greater uncertainty than effective matrix diffusion coefficients. For the same relative random errors in concentration measurements, the diffusion cell method yields effective matrix diffusion coefficient estimates that have less uncertainty than the wafer method. However, the wafer method is easier and less costly to implement and yields estimates more quickly, thus allowing a greater number of samples to be analyzed for the same cost and time. Given the relatively good agreement between the methods, and the lack of any apparent bias between the methods, the diffusion wafer method appears to offer advantages over the diffusion cell method if better statistical representation of a given set of rock samples is desired.

8. Comparing Regression Coefficients between Nested Linear Models for Clustered Data with Generalized Estimating Equations

Science.gov (United States)

Yan, Jun; Aseltine, Robert H., Jr.; Harel, Ofer

2013-01-01

Comparing regression coefficients between models when one model is nested within another is of great practical interest when two explanations of a given phenomenon are specified as linear models. The statistical problem is whether the coefficients associated with a given set of covariates change significantly when other covariates are added into…

9. Generalization of Random Intercept Multilevel Models

Directory of Open Access Journals (Sweden)

2013-10-01

Full Text Available The concept of random intercept models in a multilevel model developed by Goldstein (1986) has been extended to k levels. The random variation in intercepts at the individual level is marginally split into components by incorporating higher levels of hierarchy in the single-level model. Thus, one can control the random variation in intercepts by incorporating the higher levels in the model.

10. Permeability of model porous medium formed by random discs

Science.gov (United States)

Gubaidullin, A. A.; Gubkin, A. S.; Igoshin, D. E.; Ignatev, P. A.

2018-03-01

A two-dimensional model of a porous medium with a skeleton of randomly located overlapping discs is proposed. The geometry and computational grid are built in the open package Salome. The flow of a Newtonian liquid in the longitudinal and transverse directions is calculated and its flow rate is determined. The numerical solution of the Navier-Stokes equations for a given pressure drop at the boundaries of the domain is obtained in the open package OpenFOAM. The calculated value of the flow rate is used to define the permeability coefficient on the basis of Darcy's law. To evaluate the representativeness of the computational domain, the permeability coefficients in the longitudinal and transverse directions are compared.
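
The last step, converting a computed flow rate into a permeability via Darcy's law, is a one-liner. All numbers below are illustrative, not taken from the record:

```python
# Darcy's law for the setup in the record: a pressure drop dp over sample
# length L drives a volumetric flow rate Q through cross-section A, so the
# permeability follows as k = Q * mu * L / (A * dp).
mu = 1.0e-3   # dynamic viscosity of water, Pa*s
L = 0.01      # sample length, m
A = 1.0e-4    # cross-sectional area, m^2
dp = 100.0    # imposed pressure drop, Pa
Q = 2.0e-9    # flow rate from the Navier-Stokes solution, m^3/s

k = Q * mu * L / (A * dp)
print(k)      # permeability in m^2
```

Running the same computation for the longitudinal and transverse flow directions and comparing the two k values gives the representativeness check mentioned in the record.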

11. Polynomial Chaos Expansion of Random Coefficients and the Solution of Stochastic Partial Differential Equations in the Tensor Train Format

KAUST Repository

Dolgov, Sergey

2015-11-03

We apply the tensor train (TT) decomposition to construct the tensor product polynomial chaos expansion (PCE) of a random field, to solve the stochastic elliptic diffusion PDE with the stochastic Galerkin discretization, and to compute some quantities of interest (mean, variance, and exceedance probabilities). We assume that the random diffusion coefficient is given as a smooth transformation of a Gaussian random field. In this case, the PCE is delivered by a complicated formula, which lacks an analytic TT representation. To construct its TT approximation numerically, we develop the new block TT cross algorithm, a method that computes the whole TT decomposition from a few evaluations of the PCE formula. The new method is conceptually similar to the adaptive cross approximation in the TT format but is more efficient when several tensors must be stored in the same TT representation, which is the case for the PCE. In addition, we demonstrate how to assemble the stochastic Galerkin matrix and to compute the solution of the elliptic equation and its postprocessing, staying in the TT format. We compare our technique with the traditional sparse polynomial chaos and the Monte Carlo approaches. In the tensor product polynomial chaos, the polynomial degree is bounded for each random variable independently. This provides higher accuracy than the sparse polynomial set or the Monte Carlo method, but the cardinality of the tensor product set grows exponentially with the number of random variables. However, when the PCE coefficients are implicitly approximated in the TT format, the computations with the full tensor product polynomial set become possible. In the numerical experiments, we confirm that the new methodology is competitive in a wide range of parameters, especially where high accuracy and high polynomial degrees are required.

12. Modelling research on determining shape coefficients for subdivision interpretation in γ-ray spectral logging

International Nuclear Information System (INIS)

Yin Wangming; She Guanjun; Tang Bin

2011-01-01

This paper first describes the physical meaning of the shape coefficients in the subdivision interpretation of γ-ray logging; it then discusses the theory and method for determining the practical shape coefficients with a logging model and defines the formula for approximate calculation of the coefficients. A great deal of experimental work has been performed with an HPGe γ-ray spectrometer, reaching satisfactory results that validate the efficiency of the modelling method. (authors)

13. A Correction of Random Incidence Absorption Coefficients for the Angular Distribution of Acoustic Energy under Measurement Conditions

DEFF Research Database (Denmark)

Jeong, Cheol-Ho

2009-01-01

Most acoustic measurements are based on an assumption of ideal conditions. One such ideal condition is a diffuse and reverberant field. In practice, a perfectly diffuse sound field cannot be achieved in a reverberation chamber. Uneven incident energy density under measurement conditions can cause...... discrepancies between the measured value and the theoretical random incidence absorption coefficient. Therefore the angular distribution of the incident acoustic energy onto an absorber sample should be taken into account. The angular distribution of the incident energy density was simulated using the beam...... tracing method for various room shapes and source positions. The averaged angular distribution is found to be similar to a Gaussian distribution. As a result, an angle-weighted absorption coefficient was proposed by considering the angular energy distribution to improve the agreement between...

14. Implementation of optimal Galerkin and Collocation approximations of PDEs with Random Coefficients

KAUST Repository

Beck, Joakim

2011-12-22

In this work we first focus on the Stochastic Galerkin approximation of the solution u of an elliptic stochastic PDE. We rely on sharp estimates for the decay of the coefficients of the spectral expansion of u on orthogonal polynomials to build a sequence of polynomial subspaces that features better convergence properties compared to standard polynomial subspaces such as Total Degree or Tensor Product. We then consider the Stochastic Collocation method, and use the previous estimates to introduce a new effective class of Sparse Grids, based on the idea of selecting a priori the most profitable hierarchical surpluses, that, again, features better convergence properties compared to standard Smolyak or tensor product grids.

15. Dynamics Coefficient for Two-Phase Soil Model

Directory of Open Access Journals (Sweden)

Wrana Bogumił

2015-02-01

Full Text Available The paper investigates a description of energy dissipation within saturated soils: the diffusion of pore water. Soils are assumed to be two-phase poro-elastic materials, the grain skeleton of which exhibits no irreversible behavior or structural hysteretic damping. The description of the motion and deformation of soil is introduced as a system of equations consisting of governing dynamic consolidation equations based on Biot theory. Selected constitutive and kinematic relations for small strains and rotations are used. The paper derives a closed-form analytical solution that characterizes the energy dissipation during steady-state vibrations of nearly and fully saturated poro-elastic columns. Moreover, the paper examines the influence of various physical factors on the fundamental period, maximum amplitude and fraction of critical damping of the Biot column. The so-called dynamic coefficient, which shows the amplification or attenuation of the dynamic response, is also considered.

16. Infinite Random Graphs as Statistical Mechanical Models

DEFF Research Database (Denmark)

Durhuus, Bergfinnur Jøgvan; Napolitano, George Maria

2011-01-01

We discuss two examples of infinite random graphs obtained as limits of finite statistical mechanical systems: a model of two-dimensional discretized quantum gravity defined in terms of causal triangulated surfaces, and the Ising model on generic random trees. For the former model we describe a ...

17. Modeling diffusion coefficients in binary mixtures of polar and non-polar compounds

DEFF Research Database (Denmark)

Medvedev, Oleg; Shapiro, Alexander

2005-01-01

The theory of transport coefficients in liquids, developed previously, is tested on a description of the diffusion coefficients in binary polar/non-polar mixtures, by applying advanced thermodynamic models. Comparison to a large set of experimental data shows good performance of the model. Only f...

18. Evaluation of random errors in Williams’ series coefficients obtained with digital image correlation

International Nuclear Information System (INIS)

Lychak, Oleh V; Holyns’kiy, Ivan S

2016-01-01

The use of the Williams’ series parameters for fracture analysis requires valid information about their error values. The aim of this investigation is the development of a method for estimating the standard deviation of random errors of the Williams’ series parameters obtained from the measured components of the stress field. A criterion for choosing the optimal number of terms in the truncated Williams’ series, so that their parameters are derived with minimal errors, is also proposed. The method was used for the evaluation of the Williams’ parameters obtained from data measured by the digital image correlation technique on a three-point bending specimen. (paper)

19. Uncertainty Quantification of Turbulence Model Closure Coefficients for Transonic Wall-Bounded Flows

Science.gov (United States)

Schaefer, John; West, Thomas; Hosder, Serhat; Rumsey, Christopher; Carlson, Jan-Renee; Kleb, William

2015-01-01

The goal of this work was to quantify the uncertainty and sensitivity of commonly used turbulence models in Reynolds-Averaged Navier-Stokes codes due to uncertainty in the values of closure coefficients for transonic, wall-bounded flows and to rank the contribution of each coefficient to uncertainty in various output flow quantities of interest. Specifically, uncertainty quantification of turbulence model closure coefficients was performed for transonic flow over an axisymmetric bump at zero degrees angle of attack and the RAE 2822 transonic airfoil at a lift coefficient of 0.744. Three turbulence models were considered: the Spalart-Allmaras Model, Wilcox (2006) k-ω Model, and the Menter Shear-Stress Transport Model. The FUN3D code developed by NASA Langley Research Center was used as the flow solver. The uncertainty quantification analysis employed stochastic expansions based on non-intrusive polynomial chaos as an efficient means of uncertainty propagation. Several integrated and point-quantities are considered as uncertain outputs for both CFD problems. All closure coefficients were treated as epistemic uncertain variables represented with intervals. Sobol indices were used to rank the relative contributions of each closure coefficient to the total uncertainty in the output quantities of interest. This study identified a number of closure coefficients for each turbulence model for which more information will reduce the amount of uncertainty in the output significantly for transonic, wall-bounded flows.

20. Model-supported selection of distribution coefficients for performance assessment

International Nuclear Information System (INIS)

Ochs, M.; Lothenbach, B.; Shibata, Hirokazu; Yui, Mikazu

1999-01-01

A thermodynamic speciation/sorption model is used to illustrate typical problems encountered in the extrapolation of batch-type Kd values to repository conditions. For different bentonite-groundwater systems, the composition of the corresponding equilibrium solutions and the surface speciation of the bentonite is calculated by treating simultaneously solution equilibria of soluble components of the bentonite as well as ion exchange and acid/base reactions at the bentonite surface. Kd values for Cs, Ra, and Ni are calculated by implementing the appropriate ion exchange and surface complexation equilibria in the bentonite model. Based on this approach, hypothetical batch experiments are contrasted with expected conditions in compacted backfill. For each of these scenarios, the variation of Kd values as a function of groundwater composition is illustrated for Cs, Ra, and Ni. The applicability of measured, batch-type Kd values to repository conditions is discussed. (author)

1. Modelling the light absorption coefficients of oceanic waters: Implications for underwater optical applications

Science.gov (United States)

Prabhakaran, Sai Shri; Sahu, Sanjay Kumar; Dev, Pravin Jeba; Shanmugam, Palanisamy

2018-05-01

Spectral absorption coefficients of particulate (algal and non-algal components) and dissolved substances are modelled and combined with the pure seawater component to determine the total light absorption coefficients of seawater in the Bay of Bengal. Two parameters, namely chlorophyll-a (Chl) concentration and turbidity, were measured using commercially available instruments with high sampling rates. For modelling the light absorption coefficients of oceanic waters, the measured data are classified into two broad groups - algal dominant and non-algal particle (NAP) dominant. With these criteria the individual absorption coefficients of phytoplankton and NAP were established based on their concentrations using an iterative method. To account for the spectral dependence of absorption by phytoplankton, the wavelength-dependent coefficients were introduced into the model. The CDOM absorption was determined by subtracting the individual absorption coefficients of phytoplankton and NAP from the measured total absorption data and then related to the Chl concentration. Validity of the model is assessed based on independent in-situ data from certain discrete locations in the Bay of Bengal. The total absorption coefficients estimated using the new model by considering the contributions of algal, non-algal and CDOM components show good agreement with the measured total absorption data, with an error range of 6.9 to 28.3%. Results obtained by the present model are important for predicting the propagation of the radiant energy within the ocean and interpreting remote sensing observation data.

2. Modeling and experiments for the time-dependent diffusion coefficient during methane desorption from coal

Science.gov (United States)

Cheng-Wu, Li; Hong-Lai, Xue; Cheng, Guan; Wen-biao, Liu

2018-04-01

Statistical analysis shows that in the coal matrix, the diffusion coefficient for methane is time-varying, and its integral satisfies the formula μt^κ/(1 + βt^κ). Therefore, a so-called dynamic diffusion coefficient model (DDC model) is developed. To verify the suitability and accuracy of the DDC model, a series of gas diffusion experiments were conducted using coal particles of different sizes. The results show that the experimental data can be accurately described by the DDC and bidisperse models, but the fit to the DDC model is slightly better. For all coal samples, as time increases, the effective diffusion coefficient first shows a sudden drop, followed by a gradual decrease before stabilizing at longer times. The effective diffusion coefficient has a negative relationship with the size of the coal particle. Finally, the relationship between the constants of the DDC model and the effective diffusion coefficient is discussed. The constant α (μ/R^2) denotes the effective diffusion coefficient at the initial time, and the constants κ and β control the attenuation characteristic of the effective diffusion coefficient.
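The qualitative behaviour this record reports (a sharp early drop in the effective diffusion coefficient, then a slow decay to a plateau) can be reproduced numerically. The cumulative form Q(t)/Q_inf = μt^κ/(1 + βt^κ) is an assumption read off the abstract, and the constants are illustrative, not the authors' fitted values:

```python
import numpy as np

# Sketch of the dynamic-diffusion-coefficient (DDC) trend; assumed
# cumulative-desorption form and illustrative constants.
mu, kappa, beta = 0.08, 0.6, 0.05

t = np.linspace(0.1, 1000.0, 5000)            # time, arbitrary units
Q = mu * t**kappa / (1.0 + beta * t**kappa)   # cumulative desorbed fraction

# An effective diffusion coefficient proportional to dQ/dt shows the
# reported behaviour: rapid early drop, then gradual decay to a plateau.
D_eff = np.gradient(Q, t)
print(float(D_eff[0]), float(D_eff[-1]))
```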

3. Monte Carlo tests of the Rasch model based on scalability coefficients

DEFF Research Database (Denmark)

Christensen, Karl Bang; Kreiner, Svend

2010-01-01

For item responses fitting the Rasch model, the assumptions underlying the Mokken model of double monotonicity are met. This makes non-parametric item response theory a natural starting-point for Rasch item analysis. This paper studies scalability coefficients based on Loevinger's H coefficient that summarize the number of Guttman errors in the data matrix. These coefficients are shown to yield efficient tests of the Rasch model using p-values computed using Markov chain Monte Carlo methods. The power of the tests of unequal item discrimination, and their ability to distinguish between local dependence ...

4. Transport coefficients from SU(3) Polyakov linear-σ model

International Nuclear Information System (INIS)

Tawfik, A.; Diab, A.

2015-01-01

In the mean field approximation, the grand potential of the SU(3) Polyakov linear-σ model (PLSM) is analyzed for the order parameters of the light and strange chiral phase-transitions, σ_l and σ_s, respectively, and for the deconfinement order parameters φ and φ*. Furthermore, the subtracted condensate Δ_{l,s} and the chiral order-parameters M_b are compared with lattice QCD calculations. By using the dynamical quasiparticle model (DQPM), which can be considered as a system of noninteracting massive quasiparticles, we have evaluated the decay width and the relaxation time of quarks and gluons. In the framework of the LSM and with Polyakov loop corrections included, the interaction measure Δ/T^4, the specific heat c_v and the speed of sound squared c_s^2 have been determined, as well as the temperature dependence of the normalized quark number density n_q/T^3 and the quark number susceptibilities χ_q/T^2 at various values of the baryon chemical potential. The electric and heat conductivities, σ_e and κ, and the bulk and shear viscosities normalized to the thermal entropy, ζ/s and η/s, are compared with available results of lattice QCD calculations.

5. Prediction of Sliding Friction Coefficient Based on a Novel Hybrid Molecular-Mechanical Model.

Science.gov (United States)

Zhang, Xiaogang; Zhang, Yali; Wang, Jianmei; Sheng, Chenxing; Li, Zhixiong

2018-08-01

Sliding friction is a complex phenomenon which arises from the mechanical and molecular interactions of asperities when examined at the microscale. To reveal and further understand the effects of the microscale mechanical and molecular components of the friction coefficient on overall frictional behavior, a hybrid molecular-mechanical model is developed to investigate the effects of the main factors, including different loads and surface roughness values, on the sliding friction coefficient under boundary lubrication conditions. Numerical modelling was conducted using a deterministic contact model and based on the molecular-mechanical theory of friction. In the contact model, with given external loads and surface topographies, the pressure distribution, real contact area, and elastic/plastic deformation of each single asperity contact were calculated. The asperity friction coefficient was then predicted as the sum of the mechanical and molecular components of the friction coefficient. The mechanical component was mainly determined by the contact width and elastic/plastic deformation, and the molecular component was estimated as a function of the contact area and interfacial shear stress. Numerical results were compared with experimental results and a good agreement was obtained. The model was then used to predict friction coefficients in different operating and surface conditions. Numerical results explain why the applied load has a minimal effect on the friction coefficient. They also provide insight into the effect of surface roughness on the mechanical and molecular components of the friction coefficient. It is revealed that the mechanical component dominates the friction coefficient when the surface roughness is large (Rq > 0.2 μm), while the friction coefficient is mainly determined by the molecular component when the surface is relatively smooth (Rq < 0.2 μm). Furthermore, optimal roughness values for minimizing the friction coefficient are recommended.
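The two-component decomposition this record describes, f = f_molecular + f_mechanical, can be caricatured with assumed scalings (real contact area shrinking with roughness, ploughing growing with it). Every constant and functional form below is an illustration, not the paper's deterministic asperity model:

```python
import numpy as np

# Cartoon of the molecular/mechanical friction split; all values assumed.
W = 10.0            # normal load, N (illustrative)
tau0 = 2.0e7        # interfacial shear strength, Pa (illustrative)
H = 2.0e9           # hardness, Pa (illustrative)

Rq = np.linspace(0.02, 1.0, 50)          # RMS roughness, micrometres
A_real = (W / H) / (1.0 + 5.0 * Rq)      # m^2, assumed roughness penalty
f_mol = tau0 * A_real / W                # molecular (adhesive) component
f_mech = 0.05 * Rq                       # mechanical (ploughing) component
f = f_mol + f_mech

# crossover roughness where the mechanical term overtakes the molecular one
cross = Rq[np.argmin(np.abs(f_mol - f_mech))]
print(round(float(cross), 2))
```

The point of the cartoon is structural: a smooth-surface regime dominated by adhesion and a rough-surface regime dominated by deformation, with a crossover in between, matching the Rq ≈ 0.2 μm split reported qualitatively.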

6. A One Line Derivation of DCC: Application of a Vector Random Coefficient Moving Average Process

NARCIS (Netherlands)

C.M. Hafner (Christian); M.J. McAleer (Michael)

2014-01-01

One of the most widely-used multivariate conditional volatility models is the dynamic conditional correlation (or DCC) specification. However, the underlying stochastic process to derive DCC has not yet been established, which has made problematic the derivation of

7. A Parameterized Inversion Model for Soil Moisture and Biomass from Polarimetric Backscattering Coefficients

Science.gov (United States)

Truong-Loi, My-Linh; Saatchi, Sassan; Jaruwatanadilok, Sermsak

2012-01-01

A semi-empirical algorithm for the retrieval of soil moisture, root mean square (RMS) height and biomass from polarimetric SAR data is explained and analyzed in this paper. The algorithm is a simplification of the distorted Born model. It takes into account the physical scattering phenomenon and has three major components: volume, double-bounce and surface. This simplified model uses the three backscattering coefficients (σ_HH, σ_HV and σ_VV) at low-frequency (P-band). The inversion process uses the Levenberg-Marquardt non-linear least-squares method to estimate the structural parameters. The estimation process is entirely explained in this paper, from initialization of the unknowns to retrievals. A sensitivity analysis is also done where the initial values in the inversion process are varying randomly. The results show that the inversion process is not really sensitive to initial values and a major part of the retrievals has a root-mean-square error lower than 5% for soil moisture, 24 Mg/ha for biomass and 0.49 cm for roughness, considering a soil moisture of 40%, roughness equal to 3 cm and biomass varying from 0 to 500 Mg/ha with a mean of 161 Mg/ha.
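The inversion step can be sketched with a toy forward model and SciPy's Levenberg-Marquardt driver. The three-channel function below is a hypothetical linear stand-in for the simplified distorted Born model, chosen only to show the mechanics of the fit:

```python
import numpy as np
from scipy.optimize import least_squares

# Sketch: recover (soil moisture, biomass) from three backscattering
# coefficients via Levenberg-Marquardt. forward() is a made-up stand-in.
def forward(x):
    m, b = x                        # soil moisture (fraction), biomass (Mg/ha)
    hh = 0.3 * m + 0.002 * b        # hypothetical channel responses
    hv = 0.1 * m + 0.004 * b
    vv = 0.5 * m + 0.001 * b
    return np.array([hh, hv, vv])

truth = np.array([0.40, 161.0])     # values echoing the abstract's scenario
obs = forward(truth)                # synthetic "measured" backscatter

fit = least_squares(lambda x: forward(x) - obs, x0=[0.2, 50.0], method="lm")
print(fit.x)
```

The paper's sensitivity finding (retrievals largely insensitive to initialization) can be probed by re-running the fit from randomized `x0` values.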

8. Random-growth urban model with geographical fitness

Science.gov (United States)

Kii, Masanobu; Akimoto, Keigo; Doi, Kenji

2012-12-01

This paper formulates a random-growth urban model with a notion of geographical fitness. Using techniques of complex-network theory, we study our system as a type of preferential-attachment model with fitness, and we analyze its macro behavior to clarify the properties of the city-size distributions it predicts. First, restricting the geographical fitness to take positive values and using a continuum approach, we show that the city-size distributions predicted by our model asymptotically approach Pareto distributions with coefficients greater than unity. Then, allowing the geographical fitness to take negative values, we perform local coefficient analysis to show that the predicted city-size distributions can deviate from Pareto distributions, as is often observed in actual city-size distributions. As a result, the model we propose can generate a generic class of city-size distributions, including but not limited to Pareto distributions. For applications to city-population projections, our simple model requires randomness only when new cities are created, not during their subsequent growth. This property leads to smooth trajectories of city population growth, in contrast to other models using Gibrat’s law. In addition, a discrete form of our dynamical equations can be used to estimate past city populations based on present-day data; this fact allows quantitative assessment of the performance of our model. Further study is needed to determine appropriate formulas for the geographical fitness.
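A minimal simulation of a preferential-attachment-with-fitness growth rule of the kind described above. The city-creation rate and the fitness distribution are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np

# Sketch: new inhabitants join city i with probability proportional to
# (population_i + fitness_i); new cities appear at random times.
rng = np.random.default_rng(1)

pops = [1.0]
fit = [rng.random()]
for _ in range(20000):
    if rng.random() < 0.01:              # occasionally found a new city
        pops.append(1.0)
        fit.append(rng.random())
    else:                                # otherwise: preferential attachment
        w = np.array(pops) + np.array(fit)
        i = rng.choice(len(pops), p=w / w.sum())
        pops[i] += 1.0

sizes = np.sort(np.array(pops))[::-1]
print(len(sizes), float(sizes[0]))
```

The rank-size curve of `sizes` is strongly heavy-tailed, consistent with the Pareto-like distributions the continuum analysis predicts; note that randomness enters only at city creation and attachment, not as per-city growth noise.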

9. Evolution of the concentration PDF in random environments modeled by global random walk

Science.gov (United States)

Suciu, Nicolae; Vamos, Calin; Attinger, Sabine; Knabner, Peter

2013-04-01

The evolution of the probability density function (PDF) of concentrations of chemical species transported in random environments is often modeled by ensembles of notional particles. The particles move in physical space along stochastic-Lagrangian trajectories governed by Ito equations, with drift coefficients given by the local values of the resolved velocity field and diffusion coefficients obtained by stochastic or space-filtering upscaling procedures. A general model for the sub-grid mixing also can be formulated as a system of Ito equations solving for trajectories in the composition space. The PDF is finally estimated by the number of particles in space-concentration control volumes. In spite of their efficiency, Lagrangian approaches suffer from two severe limitations. Since the particle trajectories are constructed sequentially, the demanded computing resources increase linearly with the number of particles. Moreover, the need to gather particles at the center of computational cells to perform the mixing step and to estimate statistical parameters, as well as the interpolation of various terms to particle positions, inevitably produce numerical diffusion in either particle-mesh or grid-free particle methods. To overcome these limitations, we introduce a global random walk method to solve the system of Ito equations in physical and composition spaces, which models the evolution of the random concentration's PDF. The algorithm consists of a superposition on a regular lattice of many weak Euler schemes for the set of Ito equations. Since all particles starting from a site of the space-concentration lattice are spread in a single numerical procedure, one obtains PDF estimates at the lattice sites at computational costs comparable with those for solving the system of Ito equations associated to a single particle. The new method avoids the limitations concerning the number of particles in Lagrangian approaches, completely removes the numerical diffusion, and
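The global random walk idea can be sketched in one dimension for pure diffusion (a minimal assumed setting, not the authors' PDF solver): all particles sitting at a lattice site are scattered with a single binomial draw per site and step, so the cost per step scales with the number of sites rather than the number of particles:

```python
import numpy as np

# 1-D global random walk sketch: unit jumps left/right with probability 1/2.
rng = np.random.default_rng(2)

L, steps = 201, 100
n = np.zeros(L, dtype=np.int64)
n[L // 2] = 1_000_000                  # all particles start at the centre

for _ in range(steps):
    right = rng.binomial(n, 0.5)       # one draw per occupied site
    left = n - right
    n = np.roll(right, 1) + np.roll(left, -1)

total = int(n.sum())                   # particle number is conserved exactly
x = np.arange(L) - L // 2
var = float((n * x**2).sum() / total)  # should approach steps * (jump^2)
print(total, var)
```

Because whole site populations move in one numerical operation, there is no per-particle interpolation step, which is the mechanism by which the method avoids the numerical diffusion of sequential Lagrangian schemes.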

10. New Inference Procedures for Semiparametric Varying-Coefficient Partially Linear Cox Models

Directory of Open Access Journals (Sweden)

Yunbei Ma

2014-01-01

Full Text Available In biomedical research, one major objective is to identify risk factors and study their risk impacts, as this identification can help clinicians to both properly make a decision and increase efficiency of treatments and resource allocation. A two-step penalized-based procedure is proposed to select linear regression coefficients for linear components and to identify significant nonparametric varying-coefficient functions for semiparametric varying-coefficient partially linear Cox models. It is shown that the penalized-based resulting estimators of the linear regression coefficients are asymptotically normal and have oracle properties, and the resulting estimators of the varying-coefficient functions have optimal convergence rates. A simulation study and an empirical example are presented for illustration.

11. Random matrix model for disordered conductors

In the interpretation of transport properties of mesoscopic systems, the multichannel ... One defines the random matrix model with N eigenvalues 0. λТ ..... With heuristic arguments, using the ideas pertaining to Dyson Coulomb gas analogy,.

12. The random walk model of intrafraction movement

International Nuclear Information System (INIS)

Ballhausen, H; Reiner, M; Kantz, S; Belka, C; Söhn, M

2013-01-01

The purpose of this paper is to understand intrafraction movement as a stochastic process driven by random external forces. The hypothetically proposed three-dimensional random walk model has significant impact on optimal PTV margins and offers a quantitatively correct explanation of experimental findings. Properties of the random walk are calculated from first principles, in particular fraction-average population density distributions for displacements along the principal axes. When substituted into the established optimal margin recipes these fraction-average distributions yield safety margins about 30% smaller as compared to the suggested values from end-of-fraction Gaussian fits. Stylized facts of a random walk are identified in clinical data, such as the increase of the standard deviation of displacements with the square root of time. Least squares errors in the comparison to experimental results are reduced by about 50% when accounting for non-Gaussian corrections from the random walk model. (paper)

13. The random walk model of intrafraction movement.

Science.gov (United States)

Ballhausen, H; Reiner, M; Kantz, S; Belka, C; Söhn, M

2013-04-07

The purpose of this paper is to understand intrafraction movement as a stochastic process driven by random external forces. The hypothetically proposed three-dimensional random walk model has significant impact on optimal PTV margins and offers a quantitatively correct explanation of experimental findings. Properties of the random walk are calculated from first principles, in particular fraction-average population density distributions for displacements along the principal axes. When substituted into the established optimal margin recipes these fraction-average distributions yield safety margins about 30% smaller as compared to the suggested values from end-of-fraction Gaussian fits. Stylized facts of a random walk are identified in clinical data, such as the increase of the standard deviation of displacements with the square root of time. Least squares errors in the comparison to experimental results are reduced by about 50% when accounting for non-Gaussian corrections from the random walk model.
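The "stylized fact" cited in this abstract, that the standard deviation of displacements grows with the square root of time, is easy to verify on synthetic trajectories:

```python
import numpy as np

# Check sd(t) ~ sqrt(t) for a random walk on simulated trajectories.
rng = np.random.default_rng(3)

n_traj, n_steps = 5000, 400
steps = rng.normal(0.0, 1.0, size=(n_traj, n_steps))
paths = np.cumsum(steps, axis=1)        # displacement along one axis

t = np.arange(1, n_steps + 1)
sd = paths.std(axis=0)

# slope of log(sd) versus log(t) should be close to 1/2
slope = float(np.polyfit(np.log(t), np.log(sd), 1)[0])
print(round(slope, 2))
```

A drift-free stationary process would instead show a flat sd(t) at large t, which is why this scaling distinguishes the random walk picture from end-of-fraction Gaussian fits.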

14. A Generalized Random Regret Minimization Model

NARCIS (Netherlands)

Chorus, C.G.

2013-01-01

This paper presents, discusses and tests a generalized Random Regret Minimization (G-RRM) model. The G-RRM model is created by replacing a fixed constant in the attribute-specific regret functions of the RRM model, by a regret-weight variable. Depending on the value of the regret-weights, the G-RRM

15. Computer simulations of the random barrier model

DEFF Research Database (Denmark)

Schrøder, Thomas; Dyre, Jeppe

2002-01-01

A brief review of experimental facts regarding ac electronic and ionic conduction in disordered solids is given followed by a discussion of what is perhaps the simplest realistic model, the random barrier model (symmetric hopping model). Results from large scale computer simulations are presented...

16. The development from kinetic coefficients of a predictive model for the growth of Eichhornia crassipes in the field. I. Generating kinetic coefficients for the model in greenhouse culture

Directory of Open Access Journals (Sweden)

C. F. Musil

1984-12-01

The kinetics of N- and P-limited growth of Eichhornia crassipes (Mart.) Solms were investigated in greenhouse culture with the object of developing a model for predicting population sizes, yields, growth rates and frequencies and amounts of harvest, under varying conditions of nutrient loading and climate, to control both nutrient inputs and excessive growth in eutrophied aquatic systems. The kinetic coefficients, maximum specific growth rate (Umax), half-saturation coefficient (Ks) and yield coefficient (Yc), were measured under N and P limitation in replicated batch culture experiments. Umax values and Ks concentrations derived under N limitation ranged from 5.37 to 8.86% d⁻¹ and from 400 to 1 506 µg N ℓ⁻¹, respectively. Those derived under P limitation ranged from 4.51 to 10.89% d⁻¹ and from 41 to 162 µg P ℓ⁻¹, respectively. Yc values (fresh mass basis) determined ranged from 1 660 to 1 981 (87 to 98 dry mass basis) for N and from 16 431 to 18 671 (867 to 980 dry mass basis) for P. The reciprocals of Yc values (dry mass basis), expressed as percentages, adequately estimated the minimum limiting concentrations of N and P (% dry mass) in the plant tissues. Kinetic coefficients determined are compared with those reported for algae. The experimental method used and results obtained are critically assessed.
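The coefficients Umax, Ks and Yc reported above are the parameters of a Monod-type growth law, μ = Umax·S/(Ks + S). A minimal batch-culture integration shows how they interact; parameter values are picked from the quoted N-limitation ranges, while the initial conditions, units and nutrient coupling are illustrative assumptions:

```python
# Sketch: Monod-type batch growth with yield-linked nutrient drawdown.
mu_max = 0.07    # 1/day (~7% per day, within the quoted Umax range for N)
Ks = 800.0       # half-saturation, micrograms N per litre (quoted range)
Y = 90.0         # dry-mass yield per unit N (quoted dry-mass-basis range)

dt, days = 0.1, 120
X, S = 100.0, 2000.0     # assumed initial biomass and N concentration
for _ in range(int(days / dt)):
    mu = mu_max * S / (Ks + S)       # specific growth rate
    growth = mu * X * dt
    X += growth                      # biomass gain
    S = max(S - growth / Y, 0.0)     # nutrient drawdown via the yield

print(round(X), round(S, 1))
```

As S falls toward Ks the growth rate collapses, which is the mechanism the field model exploits to predict harvest timing under varying nutrient loading.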

17. New limb-darkening coefficients for modeling binary star light curves

Science.gov (United States)

Van Hamme, W.

1993-01-01

We present monochromatic, passband-specific, and bolometric limb-darkening coefficients for the linear as well as the nonlinear logarithmic and square-root limb-darkening laws. These coefficients, including the bolometric ones, are needed when modeling binary star light curves with the latest version of the Wilson-Devinney light curve program. We base our calculations on the most recent ATLAS stellar atmosphere models for solar chemical composition stars with a wide range of effective temperatures and surface gravities. We examine how well various limb-darkening approximations represent the variation of the emerging specific intensity across a stellar surface as computed according to the model. For binary star light curve modeling purposes, we propose the use of a logarithmic or a square-root law. We design our tables in such a manner that the relative quality of either law with respect to another can be easily compared. Since the computation of bolometric limb-darkening coefficients first requires monochromatic coefficients, we also offer tables of these coefficients (at 1221 wavelength values between 9.09 nm and 160 micrometer) and tables of passband-specific coefficients for commonly used photometric filters.
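The three limb-darkening laws compared in this record have simple closed forms, with μ the cosine of the angle from the surface normal and all laws normalized to unity at disk centre (μ = 1). The coefficient values below are illustrative, not values from the paper's tables:

```python
import numpy as np

# The linear, logarithmic and square-root limb-darkening laws.
def linear(mu, x):
    return 1.0 - x * (1.0 - mu)

def logarithmic(mu, x, y):
    return 1.0 - x * (1.0 - mu) - y * mu * np.log(mu)

def square_root(mu, x, y):
    return 1.0 - x * (1.0 - mu) - y * (1.0 - np.sqrt(mu))

mu = np.linspace(0.01, 1.0, 100)
for law in (linear(mu, 0.6), logarithmic(mu, 0.6, 0.2), square_root(mu, 0.5, 0.3)):
    # intensity ratio near the limb versus at disk centre
    print(round(float(law[0]), 3), round(float(law[-1]), 3))
```

Fitting any of these to model intensities at a grid of μ values is how the tabulated coefficients are produced; the two-parameter laws track the atmosphere models better near the limb than the linear one.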

18. Extracting surface diffusion coefficients from batch adsorption measurement data: application of the classic Langmuir kinetics model.

Science.gov (United States)

Chu, Khim Hoong

2017-11-09

Surface diffusion coefficients may be estimated by fitting solutions of a diffusion model to batch kinetic data. For non-linear systems, a numerical solution of the diffusion model's governing equations is generally required. We report here the application of the classic Langmuir kinetics model to extract surface diffusion coefficients from batch kinetic data. The use of the Langmuir kinetics model in lieu of the conventional surface diffusion model allows derivation of an analytical expression. The parameter estimation procedure requires determining the Langmuir rate coefficient from which the pertinent surface diffusion coefficient is calculated. Surface diffusion coefficients within the 10⁻⁹ to 10⁻⁶ cm²/s range obtained by fitting the Langmuir kinetics model to experimental kinetic data taken from the literature are found to be consistent with the corresponding values obtained from the traditional surface diffusion model. The virtue of this simplified parameter estimation method is that it reduces the computational complexity as the analytical expression involves only an algebraic equation in closed form which is easily evaluated by spreadsheet computation.
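The workflow this record describes, fitting a rate law to batch uptake data and converting the rate coefficient into a diffusion coefficient, can be sketched as follows. The conversion k = 15·D/R² is the classic Glueckauf linear-driving-force approximation, used here as an assumed stand-in for the paper's analytical expression:

```python
import numpy as np

# Sketch: fit a first-order uptake curve, then convert rate -> diffusivity
# via the Glueckauf LDF relation k = 15 D / R^2 (an assumption, not
# necessarily the paper's derived expression).
R = 0.05                        # particle radius, cm (illustrative)
D_true = 5.0e-7                 # cm^2/s, used to synthesise "data"
k_true = 15.0 * D_true / R**2   # 1/s

t = np.linspace(0.0, 2000.0, 200)
uptake = 1.0 - np.exp(-k_true * t)      # fractional approach to equilibrium

# estimate k as the slope of -ln(1 - uptake) versus t (skip t = 0)
k_fit = np.polyfit(t[1:], -np.log(1.0 - uptake[1:]), 1)[0]
D_fit = k_fit * R**2 / 15.0
print(D_fit)
```

With real data the log-linearisation doubles as a diagnostic: curvature in the plot signals that a single first-order rate coefficient is not adequate.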

19. Measurement and modelling of mean activity coefficients of aqueous mixed electrolyte solution containing glycine

Energy Technology Data Exchange (ETDEWEB)

Dehghani, M.R. [Department of Chemical Engineering, Amirkabir University of Technology, Tehran (Iran, Islamic Republic of) ; Modarress, H. [Department of Chemical Engineering, Amirkabir University of Technology, Tehran (Iran, Islamic Republic of) ]. E-mail: hmodares@aut.ac.ir; Monirfar, M. [Department of Chemical Engineering, Amirkabir University of Technology, Tehran (Iran, Islamic Republic of)

2006-08-15

Electrochemical measurements were made on (H₂O + NaBr + K₃PO₄ + glycine) mixtures at T = 298.15 K by using ion selective electrodes. The mean ionic activity coefficients of NaBr at molality 0.1 were determined at five K₃PO₄ molalities (0.01, 0.03, 0.05, 0.07, and 0.1) mol·kg⁻¹. The activity coefficients of glycine were evaluated from mean ionic activity coefficients of NaBr. The modified Pitzer equation was used to model the experimental data.

20. Robust Control for the Segway with Unknown Control Coefficient and Model Uncertainties

Directory of Open Access Journals (Sweden)

Byung Woo Kim

2016-06-01

The Segway, which is a popular vehicle nowadays, is an uncertain nonlinear system and has an unknown time-varying control coefficient. Thus, we should consider the unknown time-varying control coefficient and model uncertainties to design the controller. Motivated by this observation, we propose a robust control for the Segway with unknown control coefficient and model uncertainties. To deal with the time-varying unknown control coefficient, we employ the Nussbaum gain technique. We introduce an auxiliary variable to solve the underactuated problem. Due to the prescribed performance control technique, the proposed controller does not require adaptive techniques, neural networks, or fuzzy logic to compensate for the uncertainties. Therefore, it can be simple. From the Lyapunov stability theory, we prove that all signals in the closed-loop system are bounded. Finally, we provide the simulation results to demonstrate the effectiveness of the proposed control scheme.

1. Friction coefficient and limiter load test analysis by flexibility coefficient model of Hold-Down Spring of nuclear reactor vessel internals

Energy Technology Data Exchange (ETDEWEB)

Xie, Linjun [Zhejiang Univ. of Technology, Hangzhou (China). College of Mechanical Engineering; Xue, Guohong; Zhang, Ming [Shanghai Nuclear Engineering Research and Design Institute, Shanghai (China)

2017-11-15

The friction force between the contact surfaces of a reactor internal hold-down spring (HDS) and core barrel flanges can directly influence the axial stiffness of an HDS. However, friction coefficient cannot be obtained through theoretical analysis. This study performs a mathematical deduction of the physical model of an HDS. Moreover, a mathematical model of axial load P, displacement δ, and flexibility coefficient is established, and a set of test apparatuses is designed to simulate the preloading process of the HDS. According to the experimental research and theoretical analysis, P-δ curves and the flexibility coefficient λ are obtained in the loading processes of the HDS. The friction coefficient f of the M1000 HDS is further calculated as 0.224. The displacement limit load value (4,638 kN) can be obtained through a displacement limit experiment. With the friction coefficient considered, the theoretical load is 4,271 kN, which is relatively close to the experimental result. Thus, the friction coefficient exerts an influence on the displacement limit load P. The friction coefficient should be considered in the design analysis for HDS.

2. Friction coefficient and limiter load test analysis by flexibility coefficient model of Hold-Down Spring of nuclear reactor vessel internals

International Nuclear Information System (INIS)

Xie, Linjun

2017-01-01

The friction force between the contact surfaces of a reactor internal hold-down spring (HDS) and core barrel flanges can directly influence the axial stiffness of an HDS. However, friction coefficient cannot be obtained through theoretical analysis. This study performs a mathematical deduction of the physical model of an HDS. Moreover, a mathematical model of axial load P, displacement δ, and flexibility coefficient is established, and a set of test apparatuses is designed to simulate the preloading process of the HDS. According to the experimental research and theoretical analysis, P-δ curves and the flexibility coefficient λ are obtained in the loading processes of the HDS. The friction coefficient f of the M1000 HDS is further calculated as 0.224. The displacement limit load value (4,638 kN) can be obtained through a displacement limit experiment. With the friction coefficient considered, the theoretical load is 4,271 kN, which is relatively close to the experimental result. Thus, the friction coefficient exerts an influence on the displacement limit load P. The friction coefficient should be considered in the design analysis for HDS.

3. RMBNToolbox: random models for biochemical networks

Directory of Open Access Journals (Sweden)

Niemi Jari

2007-05-01

Abstract Background There is an increasing interest in modelling biochemical and cell biological networks, as well as in the computational analysis of these models. The development of analysis methodologies and related software is rapid in the field. However, the number of available models is still relatively small and the model sizes remain limited. The lack of kinetic information is usually the limiting factor for the construction of detailed simulation models. Results We present a computational toolbox for generating random biochemical network models which mimic real biochemical networks. The toolbox is called Random Models for Biochemical Networks. The toolbox works in the Matlab environment, and it makes it possible to generate various network structures, stoichiometries, kinetic laws for reactions, and parameters therein. The generation can be based on statistical rules and distributions, and more detailed information of real biochemical networks can be used in situations where it is known. The toolbox can be easily extended. The resulting network models can be exported in the format of Systems Biology Markup Language. Conclusion While more information is accumulating on biochemical networks, random networks can be used as an intermediate step towards their better understanding. Random networks make it possible to study the effects of various network characteristics on the overall behavior of the network. Moreover, the construction of artificial network models provides the ground truth data needed in the validation of various computational methods in the fields of parameter estimation and data analysis.

4. Surplus thermal energy model of greenhouses and coefficient analysis for effective utilization

Energy Technology Data Exchange (ETDEWEB)

Yang, S.H.; Son, J.E.; Lee, S.D.; Cho, S.I.; Ashtiani-Araghi, A.; Rhee, J.Y.

2016-11-01

If a greenhouse in the temperate and subtropical regions is maintained in a closed condition, the indoor temperature commonly exceeds that required for optimal plant growth, even in the cold season. This study considered this excess energy as surplus thermal energy (STE), which can be recovered, stored and used when heating is necessary. To use the STE economically and effectively, the amount of STE must be estimated before designing a utilization system. Therefore, this study proposed an STE model using energy balance equations for the three steps of the STE generation process. The coefficients in the model were determined by the results of previous research and experiments using the test greenhouse. The proposed STE model produced monthly errors of 17.9%, 10.4% and 7.4% for December, January and February, respectively. Furthermore, the effects of the coefficients on the model accuracy were revealed by the estimation error assessment and linear regression analysis through fixing dynamic coefficients. A sensitivity analysis of the model coefficients indicated that the coefficients have to be determined carefully. This study also provides effective ways to increase the amount of STE. (Author)

5. Surplus thermal energy model of greenhouses and coefficient analysis for effective utilization

Directory of Open Access Journals (Sweden)

Seung-Hwan Yang

2016-03-01

If a greenhouse in the temperate and subtropical regions is maintained in a closed condition, the indoor temperature commonly exceeds that required for optimal plant growth, even in the cold season. This study considered this excess energy as surplus thermal energy (STE), which can be recovered, stored and used when heating is necessary. To use the STE economically and effectively, the amount of STE must be estimated before designing a utilization system. Therefore, this study proposed an STE model using energy balance equations for the three steps of the STE generation process. The coefficients in the model were determined by the results of previous research and experiments using the test greenhouse. The proposed STE model produced monthly errors of 17.9%, 10.4% and 7.4% for December, January and February, respectively. Furthermore, the effects of the coefficients on the model accuracy were revealed by the estimation error assessment and linear regression analysis through fixing dynamic coefficients. A sensitivity analysis of the model coefficients indicated that the coefficients have to be determined carefully. This study also provides effective ways to increase the amount of STE.

6. Block Empirical Likelihood for Longitudinal Single-Index Varying-Coefficient Model

Directory of Open Access Journals (Sweden)

Yunquan Song

2013-01-01

In this paper, we consider a single-index varying-coefficient model with application to longitudinal data. In order to accommodate the within-group correlation, we apply the block empirical likelihood procedure to the longitudinal single-index varying-coefficient model, and prove a nonparametric version of Wilks' theorem which can be used to construct the block empirical likelihood confidence region with asymptotically correct coverage probability for the parametric component. In comparison with normal approximations, the proposed method does not require a consistent estimator for the asymptotic covariance matrix, making it easier to conduct inference for the model's parametric component. Simulations demonstrate how the proposed method works.

7. Variable-coefficient higher-order nonlinear Schroedinger model in optical fibers: Variable-coefficient bilinear form, Baecklund transformation, brightons and symbolic computation

International Nuclear Information System (INIS)

Tian Bo; Gao Yitian; Zhu Hongwu

2007-01-01

Symbolically investigated in this Letter is a variable-coefficient higher-order nonlinear Schroedinger (vcHNLS) model for ultrafast signal-routing, fiber laser systems and optical communication systems with distributed dispersion and nonlinearity management. Of physical and optical interest, with the bilinear method extended, the vcHNLS model is transformed into a variable-coefficient bilinear form, and then an auto-Baecklund transformation is constructed. Constraints on the coefficient functions are analyzed. Potentially observable in future optical-fiber experiments, variable-coefficient brightons are illustrated. Relevant properties and features are discussed as well. The Baecklund transformation and other results of this Letter will be of value to studies on inhomogeneous fiber media, the core of dispersion-managed brightons, fiber amplifiers, laser systems and optical communication links with distributed dispersion and nonlinearity management.

8. A stochastic model for density-dependent microwave Snow- and Graupel scattering coefficients of the NOAA JCSDA community radiative transfer model

Science.gov (United States)

Stegmann, Patrick G.; Tang, Guanglin; Yang, Ping; Johnson, Benjamin T.

2018-05-01

A structural model is developed for the single-scattering properties of snow and graupel particles with a strongly heterogeneous morphology and an arbitrarily variable mass density. This effort aims to provide a mechanism to consider particle mass density variation in the microwave scattering coefficients implemented in the Community Radiative Transfer Model (CRTM). The stochastic model applies a bicontinuous random medium algorithm to a simple base shape and uses the Finite-Difference-Time-Domain (FDTD) method to compute the single-scattering properties of the resulting complex morphology.

9. Transfer coefficients to terrestrial food products in equilibrium assessment models for nuclear installations

International Nuclear Information System (INIS)

Zach, R.

1980-09-01

Transfer coefficients have become virtually indispensable in the study of the fate of radioisotopes released from nuclear installations. These coefficients are used in equilibrium assessment models where they specify the degree of transfer in food chains of individual radioisotopes from soil to plant products and from feed or forage and drinking water to animal products and ultimately to man. Information on transfer coefficients for terrestrial food chain models is very piecemeal and occurs in a wide variety of journals and reports. To enable us to choose or determine suitable values for assessments, we have addressed the following aspects of transfer coefficients on a very broad scale: (1) definitions, (2) the equilibrium assumption, which stipulates that transfer coefficients be restricted to equilibrium or steady-state conditions, (3) the assumption of linearity, that is, the idea that radioisotope concentrations in food products increase linearly with contamination levels in the soil or animal feed, (4) methods of determination, (5) variability, (6) generic versus site-specific values, (7) statistical aspects, (8) use, (9) sources of currently used values, (10) criteria for revising values, (11) establishment and maintenance of files on transfer coefficients, and (12) future developments. (auth)

10. Spatially varying coefficient models in real estate: Eigenvector spatial filtering and alternative approaches

NARCIS (Netherlands)

Helbich, M; Griffith, D

2016-01-01

Real estate policies in urban areas require the recognition of spatial heterogeneity in housing prices to account for local settings. In response to the growing number of spatially varying coefficient models in housing applications, this study evaluated four models in terms of their spatial patterns

11. Predictive QSPR Modelling for the Second Virial Coefficient of the Pure Organic Compounds.

Science.gov (United States)

Mokshyna, E; Polishchuk, P G; Nedostup, V I; Kuzmin, V E

2015-01-01

In this article we developed a system of predictive models for the second virial coefficients of pure compounds. The second virial coefficient is derived from the virial equation of state and is of particular interest as it describes pairwise intermolecular interactions. Two-layer QSPR models were developed, which exploit well-known physical equations and allow this information to be included in traditional QSPR methodology. This opens some new perspectives for work with temperature-dependent properties. It was shown that 2D descriptors can be successfully used for modeling complex thermodynamic properties like virial coefficients. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

12. Data assimilation within the Advanced Circulation (ADCIRC) modeling framework for the estimation of Manning's friction coefficient

KAUST Repository

Mayo, Talea

2014-04-01

Coastal ocean models play a major role in forecasting coastal inundation due to extreme events such as hurricanes and tsunamis. Additionally, they are used to model tides and currents under more moderate conditions. The models numerically solve the shallow water equations, which describe conservation of mass and momentum for processes with large horizontal length scales relative to the vertical length scales. The bottom stress terms that arise in the momentum equations can be defined through the Manning's n formulation, utilizing the Manning's n coefficient. The Manning's n coefficient is an empirically derived, spatially varying parameter, and depends on many factors such as the bottom surface roughness. It is critical to the accuracy of coastal ocean models, however, the coefficient is often unknown or highly uncertain. In this work we reformulate a statistical data assimilation method generally used in the estimation of model state variables to estimate this model parameter. We show that low-dimensional representations of Manning's n coefficients can be recovered by assimilating water elevation data. This is a promising approach to parameter estimation in coastal ocean modeling. © 2014 Elsevier Ltd.
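
The parameter-estimation idea in this record can be illustrated with a toy ensemble Kalman update: treat Manning's n as the quantity to be estimated and assimilate a synthetic water-elevation observation. The forward map `observe` below is a made-up stand-in for the shallow-water solver, and all numbers are illustrative; this is a minimal sketch of the statistical machinery, not the ADCIRC formulation.

```python
import random
import statistics

random.seed(3)

def observe(n):
    # Hypothetical forward map from Manning's n to a water-elevation
    # observation; a stand-in for the expensive solver, not ADCIRC itself.
    return 2.0 / (1.0 + 5.0 * n)

TRUE_N = 0.03
OBS_SD = 0.01
y_obs = observe(TRUE_N) + random.gauss(0.0, OBS_SD)

# Ensemble of candidate Manning's n values: the parameter plays the role
# of the state vector in the assimilation.
ens = [random.gauss(0.05, 0.02) for _ in range(200)]

for _ in range(5):                       # a few assimilation cycles
    hx = [observe(n) for n in ens]
    n_bar, h_bar = statistics.fmean(ens), statistics.fmean(hx)
    cov_nh = statistics.fmean((n - n_bar) * (h - h_bar) for n, h in zip(ens, hx))
    var_h = statistics.fmean((h - h_bar) ** 2 for h in hx)
    gain = cov_nh / (var_h + OBS_SD ** 2)          # scalar Kalman gain
    # Perturbed-observation EnKF update of every ensemble member.
    ens = [n + gain * (y_obs + random.gauss(0.0, OBS_SD) - h)
           for n, h in zip(ens, hx)]

n_est = statistics.fmean(ens)            # recovered friction coefficient
```

With a monotone one-dimensional forward map, a few cycles pull the ensemble mean close to the true coefficient while the ensemble spread collapses.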

13. Data assimilation within the Advanced Circulation (ADCIRC) modeling framework for the estimation of Manning's friction coefficient

KAUST Repository

Mayo, Talea; Butler, Troy; Dawson, Clint N.; Hoteit, Ibrahim

2014-01-01

Coastal ocean models play a major role in forecasting coastal inundation due to extreme events such as hurricanes and tsunamis. Additionally, they are used to model tides and currents under more moderate conditions. The models numerically solve the shallow water equations, which describe conservation of mass and momentum for processes with large horizontal length scales relative to the vertical length scales. The bottom stress terms that arise in the momentum equations can be defined through the Manning's n formulation, utilizing the Manning's n coefficient. The Manning's n coefficient is an empirically derived, spatially varying parameter, and depends on many factors such as the bottom surface roughness. It is critical to the accuracy of coastal ocean models, however, the coefficient is often unknown or highly uncertain. In this work we reformulate a statistical data assimilation method generally used in the estimation of model state variables to estimate this model parameter. We show that low-dimensional representations of Manning's n coefficients can be recovered by assimilating water elevation data. This is a promising approach to parameter estimation in coastal ocean modeling. © 2014 Elsevier Ltd.

14. Recent developments in biokinetic models and the calculation of internal dose coefficients

International Nuclear Information System (INIS)

Fell, T.P.; Phipps, A.W.; Kendall, G.M.; Stradling, G.N.

1997-01-01

In most cases the measurement of radioactivity in an environmental or biological sample will be followed by some estimation of dose and possibly risk, either to a population or an individual. This will normally involve the use of a dose coefficient (dose per unit intake value) taken from a compendium. In recent years the calculation of dose coefficients has seen many developments in both biokinetic modelling and computational capabilities. ICRP has recommended new models for the respiratory tract and for the systemic behavior of many of the more important elements. As well as this, a general age-dependent calculation method has been developed which involves an effectively continuous variation of both biokinetic and dosimetric parameters, facilitating more realistic estimation of doses to young people. These new developments were used in work for recent ICRP, IAEA and CEC compendia of dose coefficients for both members of the public (including children) and workers. This paper presents a general overview of the method of calculation of internal doses with particular reference to the actinides. Some of the implications for dose coefficients of the new models are discussed. For example it is shown that compared with data in ICRP Publications 30 and 54: the new respiratory tract model generally predicts lower deposition in systemic tissues per unit intake; the new biokinetic models for actinides allow for burial of material deposited on bone surfaces; age-dependent models generally feature faster turnover of material in young people. All of these factors can lead to substantially different estimates of dose and examples of the new dose coefficients are given to illustrate these differences. During the development of the new models for actinides, human bioassay data were used to validate the model. Thus, one would expect the new models to give reasonable predictions of bioassay quantities. Some examples of the bioassay applications, e.g., excretion data for the

15. Compartment modelling in nuclear medicine: a new program for the determination of transfer coefficients

International Nuclear Information System (INIS)

1986-01-01

In many investigations concerning transport/exchange of matter in a natural system, e.g. functional studies in nuclear medicine, it is advantageous to relate experimental results to a model of the system. A new computer program is presented for the determination of linear transfer coefficients in a compartment model from experimentally observed time-compartment content curves. The program performs a least-squares fit with the specified precision of the observed values as weight factors. The resulting uncertainty in the calculated transfer coefficients may also be assessed. The application of the program in nuclear medicine is demonstrated and discussed. (author)
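
The fitting step described here can be sketched for the simplest case, a single compartment emptying at rate k, where the transfer coefficient is recovered by a precision-weighted least-squares fit to simulated observations. The model, weights and numbers are illustrative assumptions, not the program described in the record.

```python
import math
import random

random.seed(7)

K_TRUE, Q0 = 0.25, 100.0           # transfer coefficient (1/h) and initial content
times = [1.0, 2.0, 4.0, 8.0, 12.0, 24.0]
sigma = [0.05 * Q0 * math.exp(-K_TRUE * t) for t in times]   # 5 % precision
obs = [Q0 * math.exp(-K_TRUE * t) + random.gauss(0.0, s)
       for t, s in zip(times, sigma)]

# Weighted least squares on the linearised model ln q = ln q0 - k t,
# with weights 1/sigma_ln^2 where sigma_ln ~ sigma/q (delta method).
w = [(q / s) ** 2 for q, s in zip(obs, sigma)]
y = [math.log(q) for q in obs]
sw = sum(w)
tbar = sum(wi * t for wi, t in zip(w, times)) / sw
ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
stt = sum(wi * (t - tbar) ** 2 for wi, t in zip(w, times))
k_hat = -sum(wi * (t - tbar) * (yi - ybar)
             for wi, t, yi in zip(w, times, y)) / stt

# Approximate standard error of k from the weighted regression formula,
# mirroring the uncertainty assessment mentioned in the abstract.
se_k = math.sqrt(1.0 / stt)
```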

16. Influence on dose coefficients for workers of the new metabolic models

International Nuclear Information System (INIS)

1998-01-01

The International Commission on Radiological Protection (ICRP) has recently reviewed the biokinetic models used in internal contamination dose assessment. ICRP has adopted a new model for the human respiratory tract and has updated, in ICRP Publications 56, 67 and 69, some of the biokinetic models of ICRP Publication 30. In this paper, the dose coefficients for some selected radionuclides issued in ICRP Publication 68 are compared with those obtained using the software LUPED (LUng Dose Evaluation Program). The former were calculated using the new systemic models, while the latter are based on the old metabolic models. The aim is to determine to what extent the new models for systemic retention influence the dose coefficients for workers. (author)

17. QSPR modeling of octanol/water partition coefficient of antineoplastic agents by balance of correlations.

Science.gov (United States)

Toropov, Andrey A; Toropova, Alla P; Raska, Ivan; Benfenati, Emilio

2010-04-01

Three different splits into the subtraining set (n = 22), the calibration set (n = 21), and the test set (n = 12) of 55 antineoplastic agents have been examined. Using the correlation balance of SMILES-based optimal descriptors, quite satisfactory models for the octanol/water partition coefficient have been obtained on all three splits. The correlation balance is the optimization of a one-variable model with a target function that provides both maximal values of the correlation coefficient for the subtraining and calibration sets and a minimal difference between the above-mentioned correlation coefficients. Thus, the calibration set is a preliminary test set. Copyright (c) 2009 Elsevier Masson SAS. All rights reserved.

18. Hybrid Modeling of Intra-DCT Coefficients for Real-Time Video Encoding

Directory of Open Access Journals (Sweden)

Li Jin

2008-01-01

Full Text Available The two-dimensional discrete cosine transform (2-D DCT) and its subsequent quantization are widely used in standard video encoders. However, since most DCT coefficients become zeros after quantization, a number of redundant computations are performed. This paper proposes a hybrid statistical model used to predict the zero-quantized DCT (ZQDCT) coefficients for the intra transform and to achieve better real-time performance. First, each pixel block at the input of the DCT is decomposed into a series of mean values and a residual block. Subsequently, a statistical model based on the Gaussian distribution is used to predict the ZQDCT coefficients of the residual block. Then, a sufficient condition under which each quantized coefficient becomes zero is derived from the mean values. Finally, a hybrid model to speed up the DCT and quantization calculations is proposed. Experimental results show that the proposed model can reduce more redundant computations and achieve better real-time performance than the reference in the literature at the cost of negligible video quality degradation. Experiments also show that the proposed model significantly reduces multiplications for DCT and quantization. This is particularly suitable for processors in portable devices, where multiplications consume more power than additions. Computational reduction implies longer battery lifetime and energy economy.

19. Simulating intrafraction prostate motion with a random walk model.

Science.gov (United States)

Pommer, Tobias; Oh, Jung Hun; Munck Af Rosenschöld, Per; Deasy, Joseph O

2017-01-01

Prostate motion during radiation therapy (ie, intrafraction motion) can cause unwanted loss of radiation dose to the prostate and increased dose to the surrounding organs at risk. A compact but general statistical description of this motion could be useful for simulation of radiation therapy delivery or margin calculations. We investigated whether prostate motion could be modeled with a random walk model. Prostate motion recorded during 548 radiation therapy fractions in 17 patients was analyzed and used as input for a random walk prostate motion model. The recorded motion was categorized on the basis of whether any transient excursions (ie, rapid prostate motion in the anterior and superior direction followed by a return) occurred in the trace, and transient motion was separately modeled as a large step in the anterior/superior direction followed by a returning large step. Random walk simulations were conducted with and without added artificial transient motion, using either motion data from all observed traces or only traces without transient excursions as model input, respectively. A general estimate of motion was derived with reasonable agreement between simulated and observed traces, especially during the first 5 minutes of the excursion-free simulations. Simulated and observed diffusion coefficients agreed within 0.03, 0.2 and 0.3 mm²/min in the left/right, superior/inferior, and anterior/posterior directions, respectively. A rapid increase in variance at the start of observed traces was difficult to reproduce and seemed to represent the patient's need to adjust before treatment. This could be estimated somewhat using artificial transient motion. Random walk modeling is feasible and recreated the characteristics of the observed prostate motion. Introducing artificial transient motion did not improve the overall agreement, although the first 30 seconds of the traces were better reproduced. The model provides a simple estimate of prostate motion during
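
A minimal version of such a random walk model can be sketched as follows: simulate Gaussian-step traces with a chosen per-axis diffusion coefficient and recover that coefficient from the variance growth of the simulated traces. The time step, trace length and D value are illustrative, not the study's fitted values.

```python
import random

random.seed(1)

D_TRUE = 0.2       # per-axis diffusion coefficient, mm^2/min (illustrative)
DT = 1.0 / 60.0    # 1 s sampling expressed in minutes
N_STEPS = 300      # a five-minute trace
N_TRACES = 1000

# For 1-D Brownian motion, Var[x(t)] = 2*D*t, so each sampled
# increment is Gaussian with standard deviation sqrt(2*D*DT).
step_sd = (2.0 * D_TRUE * DT) ** 0.5

final_positions = []
for _ in range(N_TRACES):
    x = 0.0
    for _ in range(N_STEPS):
        x += random.gauss(0.0, step_sd)
    final_positions.append(x)

t_end = N_STEPS * DT
var_end = sum(x * x for x in final_positions) / N_TRACES
D_hat = var_end / (2.0 * t_end)   # diffusion coefficient recovered from the spread
```

Transient excursions would be added on top of this as occasional large paired steps; the sketch covers only the diffusive baseline.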

20. A Semiparametric Time Trend Varying Coefficients Model: With An Application to Evaluate Credit Rationing in U.S. Credit Market

OpenAIRE

Jingping Gu; Paula Hernandez-Verme

2009-01-01

In this paper, we propose a new semiparametric varying coefficient model which extends the existing semi-parametric varying coefficient models to allow for a time trend regressor with smooth coefficient function. We propose to use the local linear method to estimate the coefficient functions and we provide the asymptotic theory to describe the asymptotic distribution of the local linear estimator. We present an application to evaluate credit rationing in the U.S. credit market. Using U.S. mon...

1. A Semiparametric Time Trend Varying Coefficients Model: With An Application to Evaluate Credit Rationing in U.S. Credit Market

OpenAIRE

Qi Gao; Jingping Gu; Paula Hernandez-Verme

2012-01-01

In this paper, we propose a new semiparametric varying coefficient model which extends the existing semi-parametric varying coefficient models to allow for a time trend regressor with smooth coefficient function. We propose to use the local linear method to estimate the coefficient functions and we provide the asymptotic theory to describe the asymptotic distribution of the local linear estimator. We present an application to evaluate credit rationing in the U.S. credit market. Using U.S. mon...

2. Dynamic modeling of the horizontal eddy viscosity coefficient for quasigeostrophic ocean circulation problems

Directory of Open Access Journals (Sweden)

Romit Maulik

2016-12-01

Full Text Available This paper puts forth a simplified dynamic modeling strategy for the eddy viscosity coefficient parameterized in space and time. The eddy viscosity coefficient is dynamically adjusted to the local structure of the flow using two different nonlinear eddy viscosity functional forms to capture anisotropic dissipation mechanisms, namely, (i) the Smagorinsky model using the local strain rate field, and (ii) the Leith model using the gradient of the vorticity field. The proposed models are applied to the one-layer and two-layer wind-driven quasigeostrophic ocean circulation problems, which are standard prototypes of more realistic ocean dynamics. Results show that both models capture the quasi-stationary ocean dynamics and provide the physical level of eddy viscosity distribution without using any a priori estimation. However, it is found that slightly less dissipative results can be obtained by using the dynamic Leith model. Two-layer numerical experiments also reveal that the proposed dynamic models automatically parameterize the subgrid-scale stress terms in each active layer. Furthermore, the proposed scale-aware models dynamically provide higher values of the eddy viscosity for smaller resolutions taking into account the local resolved flow information, and addressing the intimate relationship between the eddy viscosity coefficients and the numerical resolution employed by the quasigeostrophic models.
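
The two eddy-viscosity closures named above can be sketched pointwise on a uniform grid with centred differences. The coefficient values `cs` and `cl` below are illustrative defaults, and derivatives on the boundary are simply set to zero; this shows only the functional forms, not the paper's dynamic procedure.

```python
import math

def derivatives(f, dx):
    """Centred x- and y-derivatives of a 2-D field (zero on the boundary)."""
    ny, nx = len(f), len(f[0])
    fx = [[0.0] * nx for _ in range(ny)]
    fy = [[0.0] * nx for _ in range(ny)]
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            fx[j][i] = (f[j][i + 1] - f[j][i - 1]) / (2 * dx)
            fy[j][i] = (f[j + 1][i] - f[j - 1][i]) / (2 * dx)
    return fx, fy

def eddy_viscosities(u, v, dx, cs=0.2, cl=0.3):
    """Pointwise Smagorinsky and Leith eddy viscosities; cs, cl illustrative."""
    ux, uy = derivatives(u, dx)
    vx, vy = derivatives(v, dx)
    ny, nx = len(u), len(u[0])
    # Vorticity omega = dv/dx - du/dy, then its gradient for the Leith form.
    omega = [[vx[j][i] - uy[j][i] for i in range(nx)] for j in range(ny)]
    wx, wy = derivatives(omega, dx)
    nu_s, nu_l = [], []
    for j in range(ny):
        row_s, row_l = [], []
        for i in range(nx):
            # |S| = sqrt(2 S_ij S_ij) for the 2-D strain-rate tensor.
            strain = math.sqrt(2 * (ux[j][i] ** 2 + vy[j][i] ** 2
                                    + 2 * (0.5 * (uy[j][i] + vx[j][i])) ** 2))
            row_s.append((cs * dx) ** 2 * strain)               # Smagorinsky
            row_l.append((cl * dx) ** 3 * math.hypot(wx[j][i], wy[j][i]))  # Leith
        nu_s.append(row_s)
        nu_l.append(row_l)
    return nu_s, nu_l

# Plane shear u = y, v = 0: uniform strain, spatially constant vorticity.
n, dx = 9, 1.0
u = [[j * dx for _ in range(n)] for j in range(n)]
v = [[0.0] * n for _ in range(n)]
nu_s, nu_l = eddy_viscosities(u, v, dx)
```

For this shear flow the Smagorinsky viscosity is uniform in the interior, while the Leith viscosity vanishes there because the vorticity has no gradient.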

3. Method of model reduction and multifidelity models for solute transport in random layered porous media

Energy Technology Data Exchange (ETDEWEB)

Xu, Zhijie; Tartakovsky, Alexandre M.

2017-09-01

This work presents a hierarchical model for solute transport in bounded layered porous media with random permeability. The model generalizes the Taylor-Aris dispersion theory to stochastic transport in random layered porous media with a known velocity covariance function. In the hierarchical model, we represent (random) concentration in terms of its cross-sectional average and a variation function. We derive a one-dimensional stochastic advection-dispersion-type equation for the average concentration and a stochastic Poisson equation for the variation function, as well as expressions for the effective velocity and dispersion coefficient. We observe that velocity fluctuations enhance dispersion in a non-monotonic fashion: the dispersion initially increases with correlation length λ, reaches a maximum, and decreases to zero at infinity. Maximum enhancement is obtained at a correlation length of about 0.25 times the size of the porous medium perpendicular to the flow.

4. Random effect selection in generalised linear models

DEFF Research Database (Denmark)

Denwood, Matt; Houe, Hans; Forkman, Björn

We analysed abattoir recordings of meat inspection codes with possible relevance to on-farm animal welfare in cattle. Random effects logistic regression models were used to describe individual-level data obtained from 461,406 cattle slaughtered in Denmark. Our results demonstrate that the largest

5. Development of a New Drag Coefficient Model for Oil and Gas ...

African Journals Online (AJOL)

6. Modelling water evaporation during frying with an evaporation dependent heat transfer coefficient

NARCIS (Netherlands)

Koerten, van K.N.; Somsen, D.; Boom, R.M.; Schutyser, M.A.I.

2017-01-01

In this study a cylindrical crust-core frying model was developed including an evaporation rate dependent heat transfer coefficient. For this, we applied a Nusselt relation for cylindrical bodies and view the release of vapour bubbles during the frying process as a reversed fluidised bed. The

7. Time-varying coefficient estimation in SURE models. Application to portfolio management

DEFF Research Database (Denmark)

Casas, Isabel; Ferreira, Eva; Orbe, Susan

This paper provides a detailed analysis of the asymptotic properties of a kernel estimator for a Seemingly Unrelated Regression Equations model with time-varying coefficients (tv-SURE) under very general conditions. Theoretical results together with a simulation study differentiate the cases

8. Practical methods to define scattering coefficients in a room acoustics computer model

DEFF Research Database (Denmark)

Zeng, Xiangyang; Christensen, Claus Lynge; Rindel, Jens Holger

2006-01-01

of obtaining the data becomes quite time consuming thus increasing the cost of design. In this paper, practical methods to define scattering coefficients, which is based on an approach of modeling surface scattering and scattering caused by limited size of surface as well as edge diffraction are presented...

9. Development of a New Drag Coefficient Model for Oil and Gas ...

African Journals Online (AJOL)

Development of a New Drag Coefficient Model for Oil and Gas Multiphase Fluid Systems. ... suspensions of solid particles are frequently encountered in many industrial processes including oil & gas production. ...

10. Intraclass Correlation Coefficients in Hierarchical Designs: Evaluation Using Latent Variable Modeling

Science.gov (United States)

Raykov, Tenko

2011-01-01

Interval estimation of intraclass correlation coefficients in hierarchical designs is discussed within a latent variable modeling framework. A method accomplishing this aim is outlined, which is applicable in two-level studies where participants (or generally lower-order units) are clustered within higher-order units. The procedure can also be…

11. Pedestrian Walking Behavior Revealed through a Random Walk Model

Directory of Open Access Journals (Sweden)

Hui Xiong

2012-01-01

Full Text Available This paper applies the method of continuous-time random walks to pedestrian flow simulation. In the model, pedestrians can walk forward or backward and turn left or right if there is no block. The velocities of pedestrian flow moving forward or diffusing are governed by model coefficients. The waiting time preceding each jump is assumed to follow an exponential distribution. To solve the model, which takes the form of a second-order two-dimensional partial differential equation, a high-order compact scheme with the alternating direction implicit method is employed. In the numerical experiments, the first walking domain is two-dimensional with two entrances and one exit, and the second is two-dimensional with one entrance and one exit. The flows in both scenarios are one way. Numerical results show that the model can be used for pedestrian flow simulation.
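
The waiting-time mechanism of a continuous-time random walk is easy to sketch: draw exponential waiting times between jumps and biased jump directions (mostly forward). The rate and the direction probabilities below are invented for illustration and are not taken from the paper.

```python
import random

random.seed(11)

LAM = 2.0      # jump rate: waiting times ~ Exp(LAM), mean 1/LAM (illustrative)
STEPS = 20000
# Biased jump distribution: mostly forward, occasionally back or sideways.
moves = (((1, 0), 0.7), ((-1, 0), 0.1), ((0, 1), 0.1), ((0, -1), 0.1))

t = 0.0
x = y = 0
total_wait = 0.0
for _ in range(STEPS):
    w = random.expovariate(LAM)     # exponential waiting time before each jump
    t += w
    total_wait += w
    r, acc = random.random(), 0.0   # sample a jump direction from `moves`
    for (dx, dy), p in moves:
        acc += p
        if r < acc:
            x, y = x + dx, y + dy
            break

mean_wait = total_wait / STEPS      # should approach 1/LAM
drift_speed = x / t                 # net forward speed = rate * mean step
```

With these numbers the expected forward displacement per jump is 0.6 and jumps occur at rate 2, so the net forward speed settles near 1.2.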

12. Estimating a graphical intra-class correlation coefficient (GICC) using multivariate probit-linear mixed models.

Science.gov (United States)

Yue, Chen; Chen, Shaojie; Sair, Haris I; Airan, Raag; Caffo, Brian S

2015-09-01

Data reproducibility is a critical issue in all scientific experiments. In this manuscript, the problem of quantifying the reproducibility of graphical measurements is considered. The image intra-class correlation coefficient (I2C2) is generalized and the graphical intra-class correlation coefficient (GICC) is proposed for this purpose. The concept of the GICC is based on multivariate probit-linear mixed effect models. A Markov chain Monte Carlo EM (MCMC-EM) algorithm is used for estimating the GICC. Simulation results under varied settings are presented, and our method is applied to the KIRBY21 test-retest dataset.

13. Evaluation of Systematic and Random Error in the Measurement of Equilibrium Solubility and Diffusion Coefficient for Liquids in Polymers

National Research Council Canada - National Science Library

Shuely, Wendel

2001-01-01

A standardized thermogravimetric analyzer (TGA) desorption method for measuring the equilibrium solubility and diffusion coefficient of toxic contaminants with polymers was further developed and evaluated...

14. A methodology for the parametric modelling of the flow coefficients and flow rate in hydraulic valves

International Nuclear Information System (INIS)

Valdés, José R.; Rodríguez, José M.; Saumell, Javier; Pütz, Thomas

2014-01-01

Highlights: • We develop a methodology for the parametric modelling of flow in hydraulic valves. • We characterize the flow coefficients with a generic function with two parameters. • The parameters are derived from CFD simulations of the generic geometry. • We apply the methodology to two cases from the automotive brake industry. • We validate by comparing with CFD results varying the original dimensions. - Abstract: The main objective of this work is to develop a methodology for the parametric modelling of the flow rate in hydraulic valve systems. This methodology is based on the derivation, from CFD simulations, of the flow coefficient of the critical restrictions as a function of the Reynolds number, using a generalized square root function with two parameters. The methodology is then demonstrated by applying it to two completely different hydraulic systems: a brake master cylinder and an ABS valve. This type of parametric valve models facilitates their implementation in dynamic simulation models of complex hydraulic systems
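
A generic two-parameter square-root law for a Reynolds-dependent flow coefficient, and the resulting fixed-point iteration for the flow rate, can be sketched as follows. The functional form, fluid properties and dimensions are assumptions for illustration; the paper's exact correlation is not reproduced here.

```python
import math

def flow_coefficient(re, cq_inf=0.7, re_t=500.0):
    """Hypothetical two-parameter square-root law for the discharge
    coefficient: grows like sqrt(Re) for small Re and saturates at
    cq_inf for large Re. Parameter values are illustrative."""
    return cq_inf * math.sqrt(re / (re + re_t))

def flow_rate(delta_p, area, rho=850.0, mu=0.05, d_h=2e-3):
    """Orifice-type flow rate with a Reynolds-dependent coefficient.

    Iterates because Re depends on the velocity being solved for.
    Fluid properties are generic hydraulic-oil values (assumed).
    """
    v = math.sqrt(2.0 * delta_p / rho)          # initial guess: ideal velocity
    for _ in range(50):                          # fixed-point iteration
        re = rho * v * d_h / mu
        v = flow_coefficient(re) * math.sqrt(2.0 * delta_p / rho)
    return area * v

q1 = flow_rate(1e5, 1e-5)   # flow rate at 1 bar pressure drop
q2 = flow_rate(4e5, 1e-5)   # larger drop -> larger flow rate
```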

15. Random Modeling of Daily Rainfall and Runoff Using a Seasonal Model and Wavelet Denoising

Directory of Open Access Journals (Sweden)

Chien-ming Chou

2014-01-01

Full Text Available Instead of Fourier smoothing, this study applied wavelet denoising to acquire the smooth seasonal mean and corresponding perturbation term from daily rainfall and runoff data in traditional seasonal models, which use seasonal means for hydrological time series forecasting. The denoised rainfall and runoff time series data were regarded as the smooth seasonal mean. The probability distribution of the percentage coefficients can be obtained from calibrated daily rainfall and runoff data. For validated daily rainfall and runoff data, percentage coefficients were randomly generated according to the probability distribution and the law of linear proportion. Multiplying the generated percentage coefficient by the smooth seasonal mean resulted in the corresponding perturbation term. Random modeling of daily rainfall and runoff can be obtained by adding the perturbation term to the smooth seasonal mean. To verify the accuracy of the proposed method, daily rainfall and runoff data for the Wu-Tu watershed were analyzed. The analytical results demonstrate that wavelet denoising enhances the precision of daily rainfall and runoff modeling of the seasonal model. In addition, the wavelet denoising technique proposed in this study can obtain the smooth seasonal mean of rainfall and runoff processes and is suitable for modeling actual daily rainfall and runoff processes.
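
The wavelet-denoising step can be illustrated with a plain Haar transform and hard thresholding, standing in for whichever wavelet family the study actually used; the seasonal signal, noise level and threshold below are invented for the example.

```python
import math
import random

def haar_forward(x):
    """Full-depth orthonormal Haar DWT; len(x) must be a power of two."""
    details, approx = [], list(x)
    while len(approx) > 1:
        pairs = [(approx[2 * i], approx[2 * i + 1])
                 for i in range(len(approx) // 2)]
        details.append([(a - b) / math.sqrt(2) for a, b in pairs])
        approx = [(a + b) / math.sqrt(2) for a, b in pairs]
    return approx, details          # coarsest mean + detail bands (finest first)

def haar_inverse(approx, details):
    a = list(approx)
    for d in reversed(details):     # rebuild from the coarsest band outwards
        nxt = []
        for ai, di in zip(a, d):
            nxt += [(ai + di) / math.sqrt(2), (ai - di) / math.sqrt(2)]
        a = nxt
    return a

def denoise(x, thresh):
    """Hard-threshold the detail coefficients, keeping the smooth mean."""
    approx, details = haar_forward(x)
    kept = [[di if abs(di) > thresh else 0.0 for di in d] for d in details]
    return haar_inverse(approx, kept)

random.seed(5)
n = 256
clean = [10.0 + 8.0 * math.sin(2.0 * math.pi * i / 64.0) for i in range(n)]
noisy = [c + random.gauss(0.0, 1.0) for c in clean]
smooth = denoise(noisy, thresh=3.0)     # the "smooth seasonal mean"
mse_noisy = sum((a - b) ** 2 for a, b in zip(noisy, clean)) / n
mse_smooth = sum((a - b) ** 2 for a, b in zip(smooth, clean)) / n
```

The residual `noisy[i] - smooth[i]` then plays the role of the perturbation term in the seasonal model described above.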

16. A random walk model to evaluate autism

Science.gov (United States)

Moura, T. R. S.; Fulco, U. L.; Albuquerque, E. L.

2018-02-01

A common test administered during neurological examination in children is the analysis of their social communication and interaction across multiple contexts, including repetitive patterns of behavior. Poor performance may be associated with neurological conditions characterized by impairments in executive function, such as the so-called pervasive developmental disorders (PDDs), a particular condition of the autism spectrum disorders (ASDs). Inspired by these diagnosis tools, mainly those related to repetitive movements and behaviors, we studied here how the diffusion regimes of two discrete-time random walkers, mimicking the lack of social interaction and the restricted interests characteristic of children with PDDs, are affected. Our model, which is based on the so-called elephant random walk (ERW) approach, considers that one of the random walkers can learn and imitate the microscopic behavior of the other with probability f (1 - f otherwise). The diffusion regime, measured by the Hurst exponent (H), is then obtained, whose changes may indicate a different degree of autism.
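
The imitation mechanism can be sketched in stripped-down form: two ±1 walkers where the second repeats the first walker's current step with probability f. This ignores the ERW memory kernel (steps drawn from the walker's own history), so it is only a minimal reading of the coupling, not the paper's model.

```python
import random

random.seed(9)

def simulate(f, n_steps=2000):
    """Walker B repeats walker A's current step with probability f,
    otherwise steps at random (a simplified imitation rule)."""
    a_steps, b_steps = [], []
    for _ in range(n_steps):
        sa = random.choice((-1, 1))
        sb = sa if random.random() < f else random.choice((-1, 1))
        a_steps.append(sa)
        b_steps.append(sb)
    return a_steps, b_steps

a_full, b_full = simulate(1.0)    # full imitation: identical step sequences
a_half, b_half = simulate(0.5)
# Mean product of simultaneous steps estimates the step correlation,
# whose expectation under this rule is simply f.
step_corr = sum(x * y for x, y in zip(a_half, b_half)) / len(a_half)
```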

17. A comparison of confidence interval methods for the intraclass correlation coefficient in community-based cluster randomization trials with a binary outcome.

Science.gov (United States)

Braschel, Melissa C; Svec, Ivana; Darlington, Gerarda A; Donner, Allan

2016-04-01

Many investigators rely on previously published point estimates of the intraclass correlation coefficient rather than on their associated confidence intervals to determine the required size of a newly planned cluster randomized trial. Although confidence interval methods for the intraclass correlation coefficient that can be applied to community-based trials have been developed for a continuous outcome variable, fewer methods exist for a binary outcome variable. The aim of this study is to evaluate confidence interval methods for the intraclass correlation coefficient applied to binary outcomes in community intervention trials enrolling a small number of large clusters. Existing methods for confidence interval construction are examined and compared to a new ad hoc approach based on dividing clusters into a large number of smaller sub-clusters and subsequently applying existing methods to the resulting data. Monte Carlo simulation is used to assess the width and coverage of confidence intervals for the intraclass correlation coefficient based on Smith's large sample approximation of the standard error of the one-way analysis of variance estimator, an inverted modified Wald test for the Fleiss-Cuzick estimator, and intervals constructed using a bootstrap-t applied to a variance-stabilizing transformation of the intraclass correlation coefficient estimate. In addition, a new approach is applied in which clusters are randomly divided into a large number of smaller sub-clusters with the same methods applied to these data (with the exception of the bootstrap-t interval, which assumes large cluster sizes). These methods are also applied to a cluster randomized trial on adolescent tobacco use for illustration. When applied to a binary outcome variable in a small number of large clusters, existing confidence interval methods for the intraclass correlation coefficient provide poor coverage. However, confidence intervals constructed using the new approach combined with Smith
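
The one-way ANOVA point estimator of the intraclass correlation coefficient, together with the ad hoc sub-cluster splitting described above, can be sketched as follows (equal cluster sizes assumed; the confidence-interval constructions themselves are omitted).

```python
def icc_anova(clusters):
    """One-way ANOVA estimator of the ICC for a binary (0/1) outcome.
    Assumes equal cluster sizes, so the simple n0 = n formula applies."""
    k = len(clusters)
    n = len(clusters[0])
    grand = sum(sum(c) for c in clusters) / (k * n)
    msb = n * sum((sum(c) / n - grand) ** 2 for c in clusters) / (k - 1)
    msw = sum(sum((x - sum(c) / n) ** 2 for x in c)
              for c in clusters) / (k * (n - 1))
    return (msb - msw) / (msb + (n - 1) * msw)

def split_clusters(clusters, size):
    """The ad hoc device from the abstract: divide each large cluster
    into smaller pseudo-clusters before applying the estimator."""
    out = []
    for c in clusters:
        out.extend(c[i:i + size] for i in range(0, len(c), size))
    return out

# Perfect within-cluster agreement -> ICC estimate of exactly 1,
# with or without the sub-cluster splitting.
clusters = [[1] * 10, [0] * 10, [1] * 10, [0] * 10]
icc_full = icc_anova(clusters)
icc_split = icc_anova(split_clusters(clusters, 5))
```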

18. Influence of inhomogeneous surface heat capacity on the estimation of radiative response coefficients in a two-zone energy balance model

Science.gov (United States)

Park, Jungmin; Choi, Yong-Sang

2018-04-01

Observationally constrained values of the global radiative response coefficient are pivotal for assessing the reliability of modeled climate feedbacks. A widely used approach is to relate the transient global radiative imbalance to surface temperature changes. However, in this approach, a potential error in the estimate of radiative response coefficients may arise from surface inhomogeneity in the climate system. We examined this issue theoretically using a simple two-zone energy balance model. Here, we quantified the potential error by subtracting the prescribed radiative response coefficient from those calculated within the two-zone framework. Each zone was characterized by a different magnitude of the radiative response coefficient and the surface heat capacity, and the dynamical heat transport in the atmosphere between the zones was parameterized as a linear function of the temperature difference between the zones. The model system was then driven by randomly generated, monthly varying forcing mimicking observed time-varying forcing. The repeated simulations showed that inhomogeneous surface heat capacity causes considerable miscalculation (down to -1.4 W m-2 K-1, equivalent to 31.3% of the prescribed value) in the global radiative response coefficient. The dynamical heat transport, however, reduced the miscalculation caused by inhomogeneous surface heat capacity. Therefore, the estimation of radiative response coefficients from the surface temperature-radiation relation is appropriate for homogeneous surface areas least affected by the exterior.

19. QSPR modeling of octanol/water partition coefficient for vitamins by optimal descriptors calculated with SMILES.

Science.gov (United States)

Toropov, A A; Toropova, A P; Raska, I

2008-04-01

Simplified molecular input line entry system (SMILES) has been utilized in constructing quantitative structure-property relationships (QSPR) for octanol/water partition coefficient of vitamins and organic compounds of different classes by optimal descriptors. Statistical characteristics of the best model (vitamins) are the following: n=17, R(2)=0.9841, s=0.634, F=931 (training set); n=7, R(2)=0.9928, s=0.773, F=690 (test set). Using this approach for modeling octanol/water partition coefficient for a set of organic compounds gives a model that is statistically characterized by n=69, R(2)=0.9872, s=0.156, F=5184 (training set) and n=70, R(2)=0.9841, s=0.179, F=4195 (test set).

20. Determination of Partition Coefficients of Selected Model Migrants between Polyethylene and Polypropylene and Nanocomposite Polypropylene

Directory of Open Access Journals (Sweden)

Pablo Otero-Pazos

2016-01-01

Studies on nanoparticles have attracted the attention of researchers because they can produce nanocomposites that exhibit unexpected hybrid properties. Polymeric materials are commonly used in food packaging, but from the standpoint of food safety, one of the main concerns with these materials is the potential migration of low-molecular-weight substances from the packaging into the food. The key parameters of this phenomenon are the diffusion and partition coefficients. Studies on migration from food packaging with nanomaterials are very scarce. This study focuses on the determination of partition coefficients of different model migrants between low-density polyethylene (LDPE) and polypropylene (PP), and between LDPE and nanocomposite polypropylene (naPP). The results show that the incorporation of nanoparticles in polypropylene increases the mass transport of model migrants from LDPE to naPP. The quantity of migrants absorbed into PP and naPP depends partially on the nature of the polymer and slightly on the chemical features of the migrant. The ratio (RPP/naPP) between the partition coefficient KLDPE/PP and the partition coefficient KLDPE/naPP at 60°C and 80°C shows that only BHT at 60°C has an RPP/naPP less than 1. On the other hand, bisphenol A has the highest RPP/naPP, approximately 50 times higher.

1. A numerical model for boiling heat transfer coefficient of zeotropic mixtures

Science.gov (United States)

Barraza Vicencio, Rodrigo; Caviedes Aedo, Eduardo

2017-12-01

Zeotropic mixtures never have the same liquid and vapor composition in liquid-vapor equilibrium. In addition, the bubble and dew points are separated; this gap is called the glide temperature (Tglide). These characteristics have made such mixtures suitable for cryogenic Joule-Thomson (JT) refrigeration cycles, where zeotropic working fluids improve performance by an order of magnitude. Optimization of JT cycles has gained substantial importance for cryogenic applications (e.g., gas liquefaction, cryosurgery probes, cooling of infrared sensors, cryopreservation, and biomedical samples). Heat exchanger design in these cycles is a critical point; consequently, the heat transfer coefficient and pressure drop of two-phase zeotropic mixtures are relevant. In this work, a methodology is applied to calculate the local convective heat transfer coefficients based on the law-of-the-wall approach for turbulent flows. The flow and heat transfer characteristics of zeotropic mixtures in a heated horizontal tube are investigated numerically. The temperature profile and heat transfer coefficient for zeotropic mixtures of different bulk compositions are analysed. The numerical model has been developed and applied locally to fully developed, constant-wall-temperature, two-phase annular flow in a duct. Numerical results have been obtained with this model, taking into account the continuity, momentum, and energy equations. The local heat transfer coefficient results are compared with experimental data published by Barraza et al. (2016) and show good agreement.

2. Enhancement of heat transfer coefficient multi-metallic nanofluid with ANFIS modeling for thermophysical properties

Directory of Open Access Journals (Sweden)

Balla Hyder H.

2015-01-01

Cu- and Zn-water nanofluids are suspensions of 50 nm Cu and Zn nanoparticles in a water base fluid at different volume fractions, prepared to enhance the base fluid's thermophysical properties. Determining and measuring this enhancement is subject to many limitations. Nanoparticles were suspended in the base fluid to prepare each nanofluid. A coated transient hot-wire apparatus was calibrated after the whole system was built, and a vibro-viscometer was used to measure the dynamic viscosity. The measured dynamic viscosity and thermal conductivity, together with all parameters affecting the measurements, such as the base fluid's thermal conductivity, the volume fractions, and the base fluid temperatures, were used as input to an adaptive neuro-fuzzy inference system (ANFIS) to model both the dynamic viscosity and the thermal conductivity of the nanofluids. The ANFIS model equations were then used to calculate the enhancement in heat transfer coefficient using CFD software. The heat transfer coefficient was determined for flow in a circular pipe at constant heat flux. It was found that the thermal conductivity of the nanofluid was strongly affected by the volume fraction of nanoparticles, and a comparison of the thermal conductivity ratio for different volume fractions was undertaken. The heat transfer coefficient of each nanofluid was found to be higher than that of its base fluid. Comparisons of convective heat transfer coefficients for Cu and Zn nanofluids with other correlations for nanofluid heat transfer enhancement are presented. Moreover, the flow demonstrates anomalous heat transfer enhancement in nanofluids.

3. Correlation and prediction of osmotic coefficient and water activity of aqueous electrolyte solutions by a two-ionic parameter model

International Nuclear Information System (INIS)

Pazuki, G.R.

2005-01-01

In this study, osmotic coefficients and water activities of aqueous electrolyte solutions have been modeled using a new approach based on the Pitzer model. The model contains two physically significant ionic parameters, concerning ionic solvation and the closest distance of approach between ions in solution. The proposed model was evaluated by estimating the osmotic coefficients of nine electrolytes in aqueous solution. The results showed that the model is suitable for predicting osmotic coefficients in aqueous electrolyte solutions. Using the adjustable parameters obtained by regression between the experimental osmotic coefficients and the model results, the water activities of the aqueous solutions were calculated. The average absolute relative deviations between the experimental and calculated osmotic coefficients indicated good agreement.

4. Spatial modeling of HIV and HSV-2 among women in Kenya with spatially varying coefficients

Directory of Open Access Journals (Sweden)

Elphas Okango

2016-04-01

Background: Disease mapping has become popular in statistics as a method to explain the spatial distribution of disease outcomes and as a tool to help design targeted intervention strategies. Most of these models, however, have been implemented with assumptions that may be limiting or altogether lead to less meaningful results and interpretations. These include the linearity, stationarity and normality assumptions. Studies have shown that the linearity assumption is not necessarily true for all covariates; age, for example, has been found to have a non-linear relationship with HIV and HSV-2 prevalence. Other studies have made the stationarity assumption that one stimulus, e.g. education, provokes the same response in all regions under study, which is also quite restrictive: responses to stimuli may vary from region to region owing to aspects like culture, preferences and attitudes. Methods: We perform a spatial modeling of HIV and HSV-2 among women in Kenya while relaxing these assumptions: the linearity assumption, by allowing the covariate age to have a non-linear effect on HIV and HSV-2 prevalence using a random walk model of order 2; and the stationarity assumption, by allowing the remaining covariates to vary spatially using a conditional autoregressive model. The women's data used in this study were derived from the 2007 Kenya AIDS Indicator Survey, in which women aged 15–49 years were surveyed. A full Bayesian approach was used and the models were implemented in the R-INLA software. Results: Age was found to have a non-linear relationship with both HIV and HSV-2 prevalence, and the spatially varying coefficient model provided a significantly better fit for HSV-2. Age at first sex also had a greater effect on HSV-2 prevalence in the Coastal and some parts of the North Eastern regions, suggesting either early marriages or child prostitution. The effect of education on HIV prevalence among women was more

5. Influence of Boussinesq coefficient on depth-averaged modelling of rapid flows

Science.gov (United States)

Yang, Fan; Liang, Dongfang; Xiao, Yang

2018-04-01

6. Random matrix models for phase diagrams

International Nuclear Information System (INIS)

Vanderheyden, B; Jackson, A D

2011-01-01

We describe a random matrix approach that can provide generic and readily soluble mean-field descriptions of the phase diagram for a variety of systems ranging from quantum chromodynamics to high-Tc materials. Instead of working from specific models, phase diagrams are constructed by averaging over the ensemble of theories that possesses the relevant symmetries of the problem. Although approximate in nature, this approach has a number of advantages. First, it can be useful in distinguishing generic features from model-dependent details. Second, it can help in understanding the 'minimal' number of symmetry constraints required to reproduce specific phase structures. Third, the robustness of predictions can be checked with respect to variations in the detailed description of the interactions. Finally, near critical points, random matrix models bear strong similarities to Ginzburg-Landau theories, with the advantage of additional constraints inherited from the symmetries of the underlying interaction. These constraints can be helpful in ruling out certain topologies in the phase diagram. In this Key Issues Review, we illustrate the basic structure of random matrix models, discuss their strengths and weaknesses, and consider the kinds of system to which they can be applied.
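To make the "generic spectra from symmetry alone" idea concrete, the fragment below (an illustrative sketch not taken from the review) draws a large random real symmetric matrix and checks that its eigenvalue spectrum approaches the Wigner semicircle on [-2, 2], independent of the distributional details of the entries:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 400
A = rng.normal(0.0, 1.0, (N, N))
H = (A + A.T) / np.sqrt(2.0 * N)   # GOE-like symmetric matrix, entries O(1/sqrt(N))
eig = np.linalg.eigvalsh(H)
# Wigner semicircle law: the spectrum fills [-2, 2] with density (1/2pi) sqrt(4 - x^2),
# a property of the symmetry class, not of the Gaussian entries specifically
```

The same limiting spectrum appears for many entry distributions, which is the ensemble-averaging universality the review builds on.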

7. Low degree Earth's gravity coefficients determined from different space geodetic observations and climate models

Science.gov (United States)

Wińska, Małgorzata; Nastula, Jolanta

2017-04-01

Large-scale mass redistribution and its transport within the Earth system cause changes in the Earth's rotation in space, the gravity field, and the Earth's ellipsoidal shape. These changes are observed in the ΔC21, ΔS21, and ΔC20 spherical harmonic gravity coefficients, which are proportional to the mass-load-induced Earth rotational excitations. In this study, linear trends and decadal, inter-annual, and seasonal variations of the low-degree spherical harmonic coefficients of the Earth's gravity field, determined from different space geodetic techniques, Gravity Recovery and Climate Experiment (GRACE), satellite laser ranging (SLR), Global Navigation Satellite System (GNSS), Earth rotation, and climate models, are examined. In this way, the contribution of each measurement technique to interpreting the low-degree surface mass density of the Earth is shown. In particular, we evaluate the usefulness of several climate models from the Coupled Model Intercomparison Project phase 5 (CMIP5) for determining the low-degree Earth's gravity coefficients using GRACE satellite observations. To do so, Terrestrial Water Storage (TWS) changes from several CMIP5 climate models are determined, and these simulated data are compared with the GRACE observations. Spherical harmonic ΔC21, ΔS21, and ΔC20 changes are calculated as the sum of the atmosphere and ocean mass effect (GAC values) taken from GRACE and a land surface hydrological estimate from the selected CMIP5 climate models. Low-degree Stokes coefficients of the surface mass density determined from GRACE, SLR, GNSS, Earth rotation measurements, and climate models are compared to each other in order to assess their consistency. The comparison is done using different types of statistical and signal processing methods.

8. Determination and importance of temperature dependence of retention coefficient (RPHPLC) in QSAR model of nitrazepams' partition coefficient in bile acid micelles.

Science.gov (United States)

Posa, Mihalj; Pilipović, Ana; Lalić, Mladena; Popović, Jovan

2011-02-15

A linear dependence between temperature (t) and the retention coefficient (k, reversed-phase HPLC) of bile acids is obtained. The parameters (a, intercept and b, slope) of the linear function k=f(t) correlate strongly with the bile acids' structures. The investigated bile acids form linear congeneric groups on a principal component score plot (calculated from k=f(t)) that are in accordance with the conformations of the hydroxyl and oxo groups on the bile acid steroid skeleton. The partition coefficient (K(p)) of nitrazepam in bile acid micelles is investigated. Nitrazepam molecules incorporated in micelles show modified bioavailability (depot effect, higher permeability, etc.). Using the multiple linear regression (MLR) method, QSAR models of nitrazepam's partition coefficient K(p) are derived at temperatures of 25°C and 37°C. To derive the linear regression models at both temperatures, experimentally obtained lipophilicity parameters (PC1 from the k=f(t) data) and in silico descriptors of molecular shape are included, while at the higher temperature molecular polarisation is introduced as well. This indicates that the mechanism of incorporation of nitrazepam into bile acid micelles changes at higher temperatures. QSAR models are also derived using the partial least squares (PLS) method. The experimental parameters k=f(t) are shown to be significant predictive variables. Both QSAR models are validated using cross-validation and internal validation. The PLS models have slightly higher predictive capability than the MLR models. Copyright © 2010 Elsevier B.V. All rights reserved.
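The cross-validation step mentioned above can be sketched generically. The fragment below (a hypothetical illustration with synthetic descriptors, not the paper's data) computes a leave-one-out Q², the cross-validated analogue of R² commonly reported for MLR QSAR models:

```python
import numpy as np

def loo_q2(X, y):
    """Leave-one-out cross-validated Q^2 for an MLR model y ~ [1, X]."""
    A = np.column_stack([np.ones(len(y)), X])
    press = 0.0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        b, *_ = np.linalg.lstsq(A[mask], y[mask], rcond=None)
        press += (y[i] - A[i] @ b) ** 2        # prediction error on the left-out sample
    return 1.0 - press / np.sum((y - y.mean()) ** 2)

# Synthetic "descriptor" data with a known linear relation plus small noise
rng = np.random.default_rng(5)
X = rng.normal(0.0, 1.0, (30, 2))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(0.0, 0.1, 30)
q2 = loo_q2(X, y)
```

A Q² close to the fitted R² indicates the model is not overfitting its descriptors, which is the point of the validation step in the abstract.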

9. A numerical evaluation of prediction accuracy of CO2 absorber model for various reaction rate coefficients

Directory of Open Access Journals (Sweden)

Shim S.M.

2012-01-01

The performance of a CO2 absorber column using mono-ethanolamine (MEA) solution as chemical solvent is predicted by a one-dimensional (1-D) rate-based model in the present study. The 1-D mass and heat balance equations for the vapor and liquid phases are coupled with an interfacial mass transfer model and a vapor-liquid equilibrium model. Two-film theory is used to estimate the mass transfer between the vapor and liquid films. Chemical reactions in the MEA-CO2-H2O system are considered to predict the equilibrium pressure of CO2 in the MEA solution. The mathematical and reaction-kinetics models used in this work are computed with an in-house code. The numerical results are validated by comparing the simulation results with experimental and simulation data given in the literature. The performance of the CO2 absorber column is evaluated by the 1-D rate-based model using various reaction rate coefficients suggested by different researchers. When the ratio of liquid to gas mass flow rate is about 8.3, 6.6, 4.5 and 3.1, the error in CO2 loading and CO2 removal efficiency using the reaction rate coefficients of Aboudheir et al. is within about 4.9% and 5.2%, respectively. Therefore, the reaction rate coefficient suggested by Aboudheir et al., among the various reaction rate coefficients used in this study, is appropriate for predicting the performance of a CO2 absorber column using MEA solution. [Acknowledgement: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education, Science and Technology (2011-0017220).]

10. Desorption modeling of hydrophobic organic chemicals from plastic sheets using experimentally determined diffusion coefficients in plastics.

Science.gov (United States)

Lee, Hwang; Byun, Da-Eun; Kim, Ju Min; Kwon, Jung-Hwan

2018-01-01

To evaluate the rate of migration from plastic debris, desorption of model hydrophobic organic chemicals (HOCs) from polyethylene (PE)/polypropylene (PP) films to water was measured using PE/PP films homogeneously loaded with the HOCs. The HOC fractions remaining in the PE/PP films were compared with those predicted by a model characterized by the mass-transfer Biot number. The experimental data agreed with the model simulation, indicating that HOC desorption from plastic particles can generally be described by the model. For hexachlorocyclohexanes, with lower plastic-water partition coefficients, desorption was dominated by diffusion in the plastic film, whereas desorption of chlorinated benzenes, with higher partition coefficients, was determined by diffusion in the aqueous boundary layer. Evaluation of the fraction of HOCs remaining in plastic films with respect to film thickness and desorption time showed that the partition coefficient between plastic and water is the most important parameter influencing the desorption half-life. Copyright © 2017 Elsevier Ltd. All rights reserved.
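The Biot-number picture can be sketched numerically. The dimensionless model below (a generic textbook formulation, not the authors' implementation) solves desorption from a sheet by explicit finite differences, with a Robin boundary condition representing the aqueous boundary layer: a small Bi corresponds to boundary-layer control (the high-partition-coefficient case in the abstract), a large Bi to control by diffusion in the film.

```python
import numpy as np

def fraction_remaining(Bi, tau, nx=101, nt=40000):
    """Fraction of chemical left in a plastic sheet after dimensionless
    time tau, desorbing into well-mixed water with mass-transfer Biot
    number Bi. Explicit finite differences on dC/dtau = d2C/dx2 for
    0 < x < 1, symmetry at x = 0, and -dC/dx = Bi*C at the surface."""
    dx = 1.0 / (nx - 1)
    dt = tau / nt
    assert dt <= 0.5 * dx * dx, "explicit scheme stability limit"
    C = np.ones(nx)                      # uniform initial loading
    for _ in range(nt):
        Cn = C.copy()
        C[1:-1] = Cn[1:-1] + dt / dx**2 * (Cn[2:] - 2.0 * Cn[1:-1] + Cn[:-2])
        C[0] = C[1]                      # zero flux at the mid-plane
        C[-1] = C[-2] / (1.0 + Bi * dx)  # Robin condition at the water-side surface
    # trapezoidal mass integral, normalized so M(0) = 1
    return dx * (0.5 * C[0] + C[1:-1].sum() + 0.5 * C[-1])

f_slow = fraction_remaining(0.1, 1.0)   # boundary-layer-limited: slow release
f_fast = fraction_remaining(10.0, 1.0)  # film-diffusion-limited: faster release
```

Sweeping `Bi` and `tau` reproduces the qualitative dependence of desorption half-life on the partition coefficient discussed above.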

11. Statistical Models for Sediment/Detritus and Dissolved Absorption Coefficients in Coastal Waters of the Northern Gulf of Mexico

National Research Council Canada - National Science Library

Green, Rebecca E; Gould, Jr., Richard W; Ko, Dong S

2008-01-01

... (CDOM) absorption coefficients from physical hydrographic and atmospheric properties. The models were developed for northern Gulf of Mexico shelf waters using multi-year satellite and physical data...

12. Proportional hazards model with varying coefficients for length-biased data.

Science.gov (United States)

Zhang, Feipeng; Chen, Xuerong; Zhou, Yong

2014-01-01

Length-biased data arise in many important applications, including epidemiological cohort studies, cancer prevention trials and studies of labor economics. Such data are also often subject to right censoring due to loss to follow-up or the end of the study. In this paper, we consider a proportional hazards model with varying coefficients for right-censored and length-biased data, which is used to study nonlinear interaction effects of covariates with an exposure variable. A local estimating equation method is proposed for the unknown coefficient functions and the intercept function in the model. The asymptotic properties of the proposed estimators are established using martingale theory and kernel smoothing techniques. Our simulation studies demonstrate that the proposed estimators have excellent finite-sample performance. The Channing House data are analyzed to demonstrate the applications of the proposed method.

13. Analytical Modeling Of The Steinmetz Coefficient For Single-Phase Transformer Eddy Current Loss Prediction

Directory of Open Access Journals (Sweden)

T. Aly Saandy

2015-08-01

This article presents an analytical methodology for calculating the Steinmetz coefficient, applied to the prediction of eddy current loss in a single-phase transformer. Based on electrical circuit theory, the active power consumed by the core is expressed analytically as a function of electrical parameters, such as resistivity, and the geometrical dimensions of the core. The proposed modeling approach is established with the series-parallel duality. The required coefficient is identified from the empirical Steinmetz data based on the experimental active power expression. To verify the relevance of the model, validations both by simulations at two different frequencies and by measurements were carried out. The obtained results are in good agreement with the theoretical approach and the practical results.

14. Varying Coefficient Panel Data Model in the Presence of Endogenous Selectivity and Fixed Effects

OpenAIRE

Malikov, Emir; Kumbhakar, Subal C.; Sun, Yiguo

2013-01-01

This paper considers a flexible panel data sample selection model in which (i) the outcome equation is permitted to take a semiparametric, varying coefficient form to capture potential parameter heterogeneity in the relationship of interest, (ii) both the outcome and (parametric) selection equations contain unobserved fixed effects and (iii) selection is generalized to a polychotomous case. We propose a two-stage estimator. Given consistent parameter estimates from the selection equation obta...

15. Analyses of Spring Barley Evapotranspiration Rates Based on Gradient Measurements and Dual Crop Coefficient Model

Czech Academy of Sciences Publication Activity Database

Pozníková, Gabriela; Fischer, Milan; Pohanková, Eva; Trnka, Miroslav

2014-01-01

Roč. 62, č. 5 (2014), s. 1079-1086 ISSN 1211-8516 R&D Projects: GA MŠk LH12037; GA MŠk(CZ) EE2.3.20.0248 Institutional support: RVO:67179843 Keywords : evapotranspiration * dual crop coefficient model * Bowen ratio/energy balance method * transpiration * soil evaporation * spring barley Subject RIV: EH - Ecology, Behaviour OBOR OECD: Environmental sciences (social aspects to be 5.7)

16. Lower Virial Coefficients of Primitive Models of Polar and Associating Fluids.

Czech Academy of Sciences Publication Activity Database

Rouha, M.; Nezbeda, Ivo

2007-01-01

Roč. 134, 1-3 (2007), s. 107-110 Sp. Iss. SI ISSN 0167-7322 R&D Projects: GA AV ČR(CZ) IAA4072303; GA AV ČR(CZ) 1ET400720409 Institutional research plan: CEZ:AV0Z40720504 Keywords: primitive models * virial coefficients * Metropolis-like Monte Carlo integration Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 0.982, year: 2007

17. New proposal of moderator temperature coefficient estimation method using gray-box model in NPP, (1)

International Nuclear Information System (INIS)

Mori, Michitsugu; Kagami, Yuichi; Kanemoto, Shigeru; Enomoto, Mitsuhiro; Tamaoki, Tetsuo; Kawamura, Shinichiro

2004-01-01

The purpose of the present paper is to establish a new moderator temperature coefficient (MTC) estimation method based on a gray-box modeling concept. The gray-box model consists of a point-kinetics model as the first-principles model and a fitting model of the moderator temperature kinetics. Applying Kalman filter and maximum likelihood estimation algorithms to the gray-box model, the MTC can be estimated. A verification test is carried out by Monte Carlo simulation, and it is shown that the present method gives the best estimation results compared with the conventional methods from the viewpoints of unbiased and least-scatter estimation performance. Furthermore, the method is verified via real plant data analysis. The good performance of the present method is explained by the proper definition of the likelihood function based on an explicit expression of observation and system noise in the gray-box model. (author)
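The Kalman filter / maximum likelihood idea can be illustrated on a deliberately simplified scalar gray-box system (a generic sketch with made-up noise levels, not the plant model): a feedback parameter a is recovered by maximizing the innovations-form Gaussian log-likelihood over a grid.

```python
import numpy as np

def kf_loglik(a, y, q=0.1, r=0.1):
    """Innovations-form Gaussian log-likelihood of observations y under
    x[t+1] = a*x[t] + w,  y[t] = x[t] + v, via a scalar Kalman filter."""
    x, P, ll = 0.0, 1.0, 0.0
    for yt in y:
        S = P + r                              # innovation variance
        e = yt - x                             # innovation
        ll += -0.5 * (np.log(2 * np.pi * S) + e * e / S)
        K = P / S                              # Kalman gain
        x, P = x + K * e, (1 - K) * P          # measurement update
        x, P = a * x, a * a * P + q            # time update
    return ll

# Simulate data with a known feedback parameter, then estimate it
rng = np.random.default_rng(1)
a_true, q, r = 0.8, 0.1, 0.1
x, ys = 0.0, []
for _ in range(2000):
    ys.append(x + rng.normal(0.0, np.sqrt(r)))
    x = a_true * x + rng.normal(0.0, np.sqrt(q))

grid = np.linspace(0.0, 0.99, 100)
a_hat = grid[np.argmax([kf_loglik(a, ys) for a in grid])]
```

The grid search stands in for a proper optimizer; the noise variances are assumed known here, whereas the paper estimates its model jointly.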

18. REE Partition Coefficients from Synthetic Diogenite-Like Enstatite and the Implications of Petrogenetic Modeling

Science.gov (United States)

Schwandt, C. S.; McKay, G. A.

1996-01-01

Determining the petrogenesis of eucrites (basaltic achondrites) and diogenites (orthopyroxenites), and the possible links between the two meteorite types, was initiated 30 years ago by Mason. Since then, most investigators have worked on this question. A few contrasting theories have emerged, the important distinction being whether or not there is a direct genetic link between eucrites and diogenites. One theory suggests that diogenites are cumulates resulting from the fractional crystallization of a parent magma, with the eucrites crystallizing from the residual magma after separation from the diogenite cumulates. Another model proposes that diogenites are cumulates formed from partial melts derived from a source region depleted by the prior generation of eucrite melts. It has also been proposed that the diogenites may not be directly linked to the eucrites and that they are cumulates derived from melts that are more orthopyroxene-normative than the eucrites. This last theory has recently received more analytical and experimental support. One of the difficulties with petrogenetic modeling is that it requires appropriate partition coefficients, because these depend on temperature, pressure, and composition. For this reason, we set out to determine minor- and trace-element partition coefficients for diogenite-like orthopyroxene. We have accomplished this task and now have enstatite/melt partition coefficients for Al, Cr, Ti, La, Ce, Nd, Sm, Eu, Dy, Er, Yb, and Lu.

19. Particle filters for random set models

CERN Document Server

Ristic, Branko

2013-01-01

“Particle Filters for Random Set Models” presents coverage of state estimation of stochastic dynamic systems from noisy measurements, specifically sequential Bayesian estimation and nonlinear or stochastic filtering. The class of solutions presented in this book is based on the Monte Carlo statistical method. The resulting algorithms, known as particle filters, have in the last decade become one of the essential tools for stochastic filtering, with applications ranging from navigation and autonomous vehicles to bio-informatics and finance. While particle filters have been around for more than a decade, the recent theoretical developments of sequential Bayesian estimation in the framework of random set theory have provided new opportunities which are not widely known and are covered in this book. These recent developments have dramatically widened the scope of applications, from single to multiple appearing/disappearing objects, from precise to imprecise measurements and measurement models. This book...

20. GRACE gravity field modeling with an investigation on correlation between nuisance parameters and gravity field coefficients

Science.gov (United States)

Zhao, Qile; Guo, Jing; Hu, Zhigang; Shi, Chuang; Liu, Jingnan; Cai, Hua; Liu, Xianglin

2011-05-01

The GRACE (Gravity Recovery And Climate Experiment) monthly gravity models have been independently produced and published by several research institutions, such as the Center for Space Research (CSR), GeoForschungsZentrum (GFZ), Jet Propulsion Laboratory (JPL), Centre National d’Etudes Spatiales (CNES) and Delft Institute of Earth Observation and Space Systems (DEOS). According to their processing standards, the above institutions use the traditional variational approach, except that DEOS exploits the acceleration approach. The background force models employed are rather similar. The produced gravity field models generally agree with one another in spatial pattern; however, there are some discrepancies in gravity signal amplitude between solutions produced by different institutions. In particular, 10%-30% signal amplitude differences can be observed in some river basins. In this paper, we implemented a variant of the traditional variational approach and computed two sets of monthly gravity field solutions using data from January 2005 to December 2006. The input data are K-band range rates (KBRR) and kinematic orbits of the GRACE satellites. The main difference in the production of our two types of models is how the nuisance parameters are dealt with. This type of parameter is necessary to absorb low-frequency errors in the data, which are mainly aliasing and instrument errors. One way is to remove the nuisance parameters before estimating the geopotential coefficients, called the NPARB approach in this paper. The other way is to estimate the nuisance parameters and geopotential coefficients simultaneously, called the NPESS approach. These two types of solutions differ mainly in the geopotential coefficients from degree 2 to 5. This can be explained by the fact that the nuisance parameters and the gravity field coefficients are highly correlated, particularly at low degrees. We compare these solutions with the official and published ones by means of spectral analysis. It is

1. SDSS-II: Determination of shape and color parameter coefficients for SALT-II fit model

Energy Technology Data Exchange (ETDEWEB)

Dojcsak, L.; Marriner, J.; /Fermilab

2010-08-01

In this study we examine the SALT-II model of Type Ia supernova analysis, which determines distance moduli based on the known absolute standard-candle magnitude of Type Ia supernovae. We look at the determination of the shape and color parameter coefficients, α and β respectively, in the SALT-II model with the intrinsic error determined from the data. Using the SNANA software package provided for the analysis of Type Ia supernovae, we use a standard Monte Carlo simulation to generate data with known parameters as a tool for analyzing trends in the model under certain assumptions about the intrinsic error. In order to find the best standard-candle model, we try to minimize the residuals on the Hubble diagram by calculating the correct shape and color parameter coefficients. We can estimate the magnitude of the intrinsic errors required to obtain results with χ²/degree of freedom = 1. We can use the simulation to estimate the amount of color smearing indicated by the data for our model. We find that the color smearing model works as a general estimate of the color smearing, and that we are able to use the RMS distribution of the variables as one method of estimating the intrinsic errors needed by the data to obtain the correct results for α and β. We then apply the resulting intrinsic error matrix to the real data and show our results.
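In the simplest error model, fitting α and β reduces to linear least squares of the Hubble residuals on stretch and color. The sketch below uses synthetic numbers and the standard SALT-II-style relation mB = mu + M - α·x1 + β·c (this is a toy stand-in, not SNANA, and ignores the intrinsic-error iteration the study focuses on):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
alpha_true, beta_true, M_true = 0.14, 3.1, -19.3

x1 = rng.normal(0.0, 1.0, n)        # SALT-II stretch parameter
c = rng.normal(0.0, 0.1, n)         # SALT-II color parameter
mu = rng.uniform(34.0, 40.0, n)     # true distance moduli (assumed known here)
sig_int = 0.12                      # assumed intrinsic scatter

# Observed peak magnitudes under the standard-candle relation
mB = mu + M_true - alpha_true * x1 + beta_true * c + rng.normal(0.0, sig_int, n)

# Linear least squares for (M, alpha, beta) from Hubble residuals mB - mu
A = np.column_stack([np.ones(n), -x1, c])
coef, *_ = np.linalg.lstsq(A, mB - mu, rcond=None)
M_hat, alpha_hat, beta_hat = coef
```

In the real analysis mu is what one is after, so the fit is iterated with the cosmology and the intrinsic error matrix; this fragment only shows why α and β are identifiable from the residual scatter.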

2. Development of a model to calculate the overall heat transfer coefficient of greenhouse covers

Energy Technology Data Exchange (ETDEWEB)

Rasheed, A.; Lee, J. W.; Lee, H.L.

2017-07-01

A Building Energy Simulation (BES) model based on TRNSYS was developed to investigate the overall heat transfer coefficient (U-value) of greenhouse covers, including polyethylene (PE), polycarbonate (PC), polyvinyl chloride (PVC), and horticultural glass (HG). This was used to determine the influences of inside-to-outside temperature difference, wind speed, and night sky radiation on the U-values of these materials. The model was calibrated using published values of the inside and outside convective heat transfer coefficients. Validation of the model was demonstrated by the agreement between the computed and experimental results for a single-layer PE film. The results from the BES model showed significant changes in U-value in response to variations in weather parameters and the use of single- or double-layer greenhouse covers. It was found that the U-values of PC, PVC, and HG were 9%, 4%, and 15% lower, respectively, than that of PE. In addition, a 34% reduction in heat loss was noted when double glazing was used. For a given temperature difference, the U-value increases as wind speed increases. The slopes at temperature differences of 20, 30, 40, and 50 °C were approximately 0.3, 0.5, 0.7, and 0.9, respectively. The results agree with those put forward by other researchers. Hence, the presented model is reliable and can play a valuable role in future work on greenhouse energy modelling.
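A minimal sketch of how an overall heat transfer coefficient can be assembled from film coefficients and conduction resistances (the series-resistance formula; all numerical values below are assumptions for illustration, not the calibrated BES model):

```python
def overall_u_value(h_in, h_out, layers):
    """Steady-state U-value of a flat cover, W m^-2 K^-1.

    Series resistances: 1/U = 1/h_in + sum(thickness_i / conductivity_i) + 1/h_out
    """
    resistance = 1.0 / h_in + 1.0 / h_out + sum(d / k for d, k in layers)
    return 1.0 / resistance

# Illustrative inputs (not the paper's): inside/outside film coefficients
# in W m^-2 K^-1, and a single 0.2 mm PE film with k = 0.33 W m^-1 K^-1
u_single = overall_u_value(7.2, 25.0, [(0.0002, 0.33)])

# Double layer: two films plus a still-air gap (R ~ 0.15 m^2 K W^-1, assumed)
u_double = 1.0 / (1.0 / 7.2 + 1.0 / 25.0 + 2 * 0.0002 / 0.33 + 0.15)
```

The wind-speed dependence reported in the abstract enters through h_out, which grows with wind speed and so lowers the outside film resistance.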

3. Modeling and optimizing of the random atomic spin gyroscope drift based on the atomic spin gyroscope

Energy Technology Data Exchange (ETDEWEB)

Quan, Wei; Lv, Lin, E-mail: lvlinlch1990@163.com; Liu, Baiqi [School of Instrument Science and Opto-Electronics Engineering, Beihang University, Beijing 100191 (China)

2014-11-15

In order to improve the operational accuracy of the atomic spin gyroscope (ASG) and compensate the random error caused by the nonlinear and weakly stable character of the random ASG drift, a hybrid random-drift error model based on the autoregressive (AR) and genetic programming (GP) + genetic algorithm (GA) techniques is established. The time series of random ASG drift, acquired by analyzing and preprocessing the measured ASG data, is taken as the study object. The linear section of the model is established with the AR technique. After that, the nonlinear section is built with the GP technique, and GA is used to optimize the coefficients of the mathematical expression obtained by GP in order to reach a more accurate model. The simulation results indicate that this hybrid model effectively reflects the characteristics of the ASG's random drift. The square error of the ASG's random drift is reduced by 92.40%. Compared with the AR technique and the GP + GA technique alone, the random drift is reduced by 9.34% and 5.06%, respectively. The hybrid modeling method can effectively compensate the ASG's random drift and improve the stability of the system.
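The linear (AR) section of such a hybrid model can be sketched as an ordinary least-squares fit of an autoregressive model to a drift-like series; the synthetic series and the AR(2) order below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a preprocessed ASG drift series: AR(2) with
# known coefficients plus white innovations
phi = np.array([0.6, 0.3])
x = np.zeros(2000)
for t in range(2, 2000):
    x[t] = phi[0] * x[t - 1] + phi[1] * x[t - 2] + rng.normal(0.0, 0.1)

# Least-squares AR(2) fit: x[t] ~ a1*x[t-1] + a2*x[t-2]
X = np.column_stack([x[1:-1], x[:-2]])
y = x[2:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef   # the nonlinear (GP + GA) section would model this part
```

In the hybrid scheme, GP would then search for a symbolic expression for the residual and GA would tune its numeric coefficients.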

4. Modeling of the Interminiband Absorption Coefficient in InGaN Quantum Dot Superlattices

Directory of Open Access Journals (Sweden)

Giovanni Giannoccaro

2016-01-01

In this paper, a model to estimate minibands and the interminiband absorption coefficient for a wurtzite (WZ) indium gallium nitride (InGaN) self-assembled quantum dot superlattice (QDSL) is developed. It considers a simplified cuboid shape for the quantum dots (QDs). The semi-analytical investigation starts from the evaluation, through three-dimensional (3D) finite element method (FEM) simulations, of the crystal mechanical deformation derived from heterostructure lattice mismatch under spontaneous and piezoelectric polarization effects. From these results, mean values in the QD and barrier regions of the charge carriers' electric potentials and effective masses are evaluated for the conduction band (CB) and the three valence sub-bands in each direction. For the minibands' investigation, the single-particle time-independent Schrödinger equation in the effective mass approximation is decoupled in three directions and solved using the one-dimensional (1D) Kronig–Penney model. The built-in electric field is also considered along the polar axis direction, obtaining Wannier–Stark ladders. Then, the interminiband absorption coefficient in thermal equilibrium for transverse electric (TE) and transverse magnetic (TM) incident light polarization is calculated using a Fermi's golden rule implementation based on numerical integration over the first Brillouin zone. For more detailed results, an absorption coefficient component related to superlattice free excitons is also introduced. Finally, some simulation results, observations and comments are given.

5. Non-constant link tension coefficient in the tumbling-snake model subjected to simple shear

Science.gov (United States)

Stephanou, Pavlos S.; Kröger, Martin

2017-11-01

The authors of the present study have recently presented evidence that the tumbling-snake model for polymeric systems has the necessary capacity to predict the appearance of pronounced undershoots in the time-dependent shear viscosity as well as an absence of equally pronounced undershoots in the transient two normal stress coefficients. The undershoots were found to appear due to the tumbling behavior of the director u when a rotational Brownian diffusion term is considered within the equation of motion of polymer segments, and a theoretical basis concerning the use of a link tension coefficient given through the nematic order parameter had been provided. The current work elaborates on the quantitative predictions of the tumbling-snake model to demonstrate its capacity to predict undershoots in the time-dependent shear viscosity. These predictions are shown to compare favorably with experimental rheological data for both polymer melts and solutions, help us to clarify the microscopic origin of the observed phenomena, and demonstrate in detail why a constant link tension coefficient has to be abandoned.

6. Asymptotic properties of Pearson's rank-variate correlation coefficient under contaminated Gaussian model.

Science.gov (United States)

Ma, Rubao; Xu, Weichao; Zhang, Yun; Ye, Zhongfu

2014-01-01

This paper investigates the robustness properties of Pearson's rank-variate correlation coefficient (PRVCC) in scenarios where one channel is corrupted by impulsive noise and the other is impulsive-noise-free. As shown in our previous work, these scenarios, frequently encountered in radar and/or sonar applications, can be well emulated by a particular bivariate contaminated Gaussian model (CGM). Under this CGM, we establish the asymptotic closed forms of the expectation and variance of PRVCC by means of the well-known Delta method. To gain a deeper understanding, we also compare PRVCC with two other classical correlation coefficients, i.e., Spearman's rho (SR) and Kendall's tau (KT), in terms of the root mean squared error (RMSE). Monte Carlo simulations not only verify our theoretical findings, but also reveal the advantage of PRVCC in an example of estimating the time delay in a particular impulsive noise environment.
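A small Monte Carlo sketch of the setup: PRVCC correlates the ranks of the contaminated channel with the raw variates of the clean one, while SR uses ranks on both channels. The CGM is emulated here by a simple Gaussian mixture on one channel; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def ranks(a):
    # ranks 0..n-1 of a continuous sample (ties have negligible probability)
    return np.argsort(np.argsort(a)).astype(float)

def prvcc(x, y):
    """Pearson correlation between the ranks of x and the raw variates of y."""
    return np.corrcoef(ranks(x), y)[0, 1]

def spearman(x, y):
    return np.corrcoef(ranks(x), ranks(y))[0, 1]

rho, n, eps = 0.6, 500, 0.1     # correlation, sample size, contamination rate
cov = [[1.0, rho], [rho, 1.0]]
pr, sr = [], []
for _ in range(100):
    z = rng.multivariate_normal([0.0, 0.0], cov, n)
    x, y = z[:, 0], z[:, 1]
    # contaminate only channel x with occasional large-variance impulses
    mask = rng.random(n) < eps
    x = x + mask * rng.normal(0.0, 10.0, n)
    pr.append(prvcc(x, y))
    sr.append(spearman(x, y))
```

Because only ranks of the impulsive channel enter PRVCC, the estimate degrades gracefully under contamination, which is the robustness property the paper quantifies asymptotically.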

7. A fast collocation method for a variable-coefficient nonlocal diffusion model

Science.gov (United States)

Wang, Che; Wang, Hong

2017-02-01

We develop a fast collocation scheme for a variable-coefficient nonlocal diffusion model, for which a numerical discretization would yield a dense stiffness matrix. The development of the fast method is achieved by carefully handling the variable coefficients appearing inside the singular integral operator and exploiting the structure of the dense stiffness matrix. The resulting fast method reduces the computational work from O(N³), as required by a commonly used direct solver, to O(N log N) per iteration, and the memory requirement from O(N²) to O(N). Furthermore, the fast method reduces the computational work of assembling the stiffness matrix from O(N²) to O(N). Numerical results are presented to show the utility of the fast method.
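The O(N log N) matrix-vector product at the heart of such fast methods typically comes from embedding a Toeplitz-structured stiffness block in a circulant matrix and applying the FFT. A generic sketch of that kernel (not the paper's specific scheme):

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply the Toeplitz matrix with first column c and first row r by x,
    via embedding in a 2n-by-2n circulant matrix and using the FFT."""
    n = len(x)
    # first column of the circulant embedding: [c, 0, reversed tail of r]
    circ = np.concatenate([c, [0.0], r[:0:-1]])
    y = np.fft.ifft(np.fft.fft(circ) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real

# verify against a dense product on a small example
n = 64
rng = np.random.default_rng(3)
c = rng.normal(size=n)
r = np.concatenate([[c[0]], rng.normal(size=n - 1)])
x = rng.normal(size=n)
T = np.array([[c[i - j] if i >= j else r[j - i] for j in range(n)] for i in range(n)])
assert np.allclose(T @ x, toeplitz_matvec(c, r, x))
```

Each matvec costs three FFTs of length 2N, i.e. O(N log N), and only the generating vectors are stored, giving the O(N) memory footprint.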

8. The Effect of Nonzero Autocorrelation Coefficients on the Distributions of Durbin-Watson Test Estimator: Three Autoregressive Models

Directory of Open Access Journals (Sweden)

Mei-Yu LEE

2014-11-01

This paper investigates the effect of nonzero autocorrelation coefficients on the sampling distributions of the Durbin-Watson test estimator in three time-series models that differ in their variance-covariance matrix assumptions. We show that the expected values and variances of the Durbin-Watson test estimator are only slightly different, but that the skewness and kurtosis coefficients differ considerably among the three models. The shapes of the four coefficients are similar between the Durbin-Watson model and our benchmark model, but not the same as those of the autoregressive model cut by one lagged period. Second, the large-sample case shows that the three models have the same expected values; however, the autoregressive model cut by one lagged period exhibits different shapes of the variance, skewness and kurtosis coefficients from the other two models. This implies that large samples lead to the same expected value, 2(1 − ρ0), whatever the variance-covariance matrix of the errors is assumed to be. Finally, comparing the two sample cases, the shape of each coefficient is almost the same; moreover, the autocorrelation coefficients are negatively related to the expected values, inverted-U related to the variances, cubically related to the skewness coefficients, and U related to the kurtosis coefficients.
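The Durbin-Watson statistic itself is straightforward to compute, and its large-sample value 2(1 − ρ) is easy to check numerically. A quick sketch:

```python
import numpy as np

def durbin_watson(e):
    """DW = sum_t (e_t - e_{t-1})^2 / sum_t e_t^2; near 2 for uncorrelated errors."""
    e = np.asarray(e, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

rng = np.random.default_rng(4)
e = rng.normal(size=5000)
dw_white = durbin_watson(e)        # close to 2, since rho = 0

rho = 0.7
ar = np.zeros(5000)
for t in range(1, 5000):           # AR(1) errors with autocorrelation rho
    ar[t] = rho * ar[t - 1] + rng.normal()
dw_ar = durbin_watson(ar)          # close to 2*(1 - rho)
```

Repeating this over many replications gives empirical sampling distributions of the kind whose moments the paper compares across model assumptions.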

9. Assessing the reliability of predictive activity coefficient models for molecules consisting of several functional groups

Directory of Open Access Journals (Sweden)

R. P. Gerber

2013-03-01

Currently, the most successful predictive models for activity coefficients are those based on functional groups, such as UNIFAC. However, these models require a large amount of experimental data for the determination of their parameter matrix. A more recent alternative is the models based on COSMO, for which only a small set of universal parameters must be calibrated. In this work, a recalibrated COSMO-SAC model was compared with the UNIFAC (Do) model employing experimental infinite dilution activity coefficient data for 2236 non-hydrogen-bonding binary mixtures at different temperatures. As expected, UNIFAC (Do) presented better overall performance, with a mean absolute error of 0.12 ln-units against 0.22 for our COSMO-SAC implementation. However, in cases involving molecules with several functional groups or when functional groups appear in an unusual way, the deviation for UNIFAC was 0.44, as opposed to 0.20 for COSMO-SAC. These results show that COSMO-SAC provides more reliable predictions for multi-functional or more complex molecules, reaffirming its future prospects.

10. A method for assigning species into groups based on generalized Mahalanobis distance between habitat model coefficients

Science.gov (United States)

Williams, C.J.; Heglund, P.J.

2009-01-01

Habitat association models are commonly developed for individual animal species using generalized linear modeling methods such as logistic regression. We considered the issue of grouping species based on their habitat use so that management decisions can be based on sets of species rather than individual species. This research was motivated by a study of western landbirds in northern Idaho forests. The method we examined was to fit models to each species separately and to use a generalized Mahalanobis distance between coefficient vectors to create a distance matrix among species. Clustering methods were used to group species from the distance matrix, and multidimensional scaling methods were used to visualize the relations among species groups. Methods were also discussed for evaluating the sensitivity of the conclusions to outliers or influential data points. We illustrate these methods with data from the landbird study conducted in northern Idaho. Simulation results are presented to compare the success of this method to alternative methods using Euclidean distance between coefficient vectors and to methods that do not use habitat association models. These simulations demonstrate that our Mahalanobis-distance-based method was nearly always better than Euclidean-distance-based methods or methods not based on habitat association models. The methods used to develop candidate species groups are easily explained to other scientists and resource managers since they mainly rely on classical multivariate statistical methods. © 2008 Springer Science+Business Media, LLC.
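A sketch of the distance step: with an assumed common coefficient covariance, the generalized Mahalanobis distance between species' coefficient vectors separates within-group from between-group pairs (all numbers below are hypothetical, not the Idaho data):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical habitat-model coefficient vectors for six species, drawn
# around two group centroids (values are illustrative only)
group_a = rng.normal([1.0, -0.5, 0.2], 0.1, size=(3, 3))
group_b = rng.normal([-0.8, 0.9, -0.3], 0.1, size=(3, 3))
coefs = np.vstack([group_a, group_b])

# Stand-in for the estimated coefficient covariance (assumed common here)
vi = np.linalg.inv(np.diag([0.01, 0.01, 0.01]))

def mahalanobis(a, b, vi):
    d = a - b
    return float(np.sqrt(d @ vi @ d))

n = len(coefs)
D = np.array([[mahalanobis(coefs[i], coefs[j], vi) for j in range(n)]
              for i in range(n)])

within = [D[i, j] for i in range(3) for j in range(3) if i < j]
within += [D[i, j] for i in range(3, 6) for j in range(3, 6) if i < j]
between = D[:3, 3:].ravel()
```

The resulting matrix D can then be fed to a hierarchical clustering routine and visualized with multidimensional scaling, as in the study.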

11. Connectivity ranking of heterogeneous random conductivity models

Science.gov (United States)

Rizzo, C. B.; de Barros, F.

2017-12-01

To overcome the challenges associated with hydrogeological data scarcity, the hydraulic conductivity (K) field is often represented by a spatial random process. The state of the art provides several methods to generate 2D or 3D random K-fields, such as the classic multi-Gaussian fields, non-Gaussian fields, training-image-based fields and object-based fields. We provide a systematic comparison of these models based on their connectivity. We use the minimum hydraulic resistance as a connectivity measure, which has been found to be strongly correlated with early-time arrival of dissolved contaminants. A computationally efficient graph-based algorithm is employed, allowing a stochastic treatment of the minimum hydraulic resistance through a Monte-Carlo approach and therefore enabling the computation of its uncertainty. The results show the impact of geostatistical parameters on the connectivity for each group of random fields, making it possible to rank the fields according to their minimum hydraulic resistance.
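Minimum hydraulic resistance on a gridded K-field can be computed with a shortest-path (Dijkstra) search, which is the kind of graph-based algorithm the abstract refers to. In this sketch the cell resistance is taken as 1/K and the field is an uncorrelated lognormal stand-in (a real multi-Gaussian field would add spatial correlation):

```python
import heapq
import numpy as np

def min_hydraulic_resistance(K):
    """Least cumulative resistance (sum of 1/K over visited cells) over all
    paths from the left to the right column, via Dijkstra on the cell graph."""
    ny, nx = K.shape
    res = 1.0 / K
    dist = np.full((ny, nx), np.inf)
    pq = []
    for i in range(ny):                       # any left-column cell may start
        dist[i, 0] = res[i, 0]
        heapq.heappush(pq, (dist[i, 0], i, 0))
    while pq:
        d, i, j = heapq.heappop(pq)
        if d > dist[i, j]:
            continue                          # stale queue entry
        if j == nx - 1:
            return d                          # first right-column pop is optimal
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < ny and 0 <= b < nx and d + res[a, b] < dist[a, b]:
                dist[a, b] = d + res[a, b]
                heapq.heappush(pq, (dist[a, b], a, b))
    return np.inf

rng = np.random.default_rng(6)
K = np.exp(rng.normal(0.0, 1.0, size=(30, 30)))   # lognormal K-field stand-in
R = min_hydraulic_resistance(K)
```

Wrapping the call in a loop over independently generated K-fields yields the Monte-Carlo distribution of R used to quantify its uncertainty.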

12. Modeling superhydrophobic surfaces comprised of random roughness

Science.gov (United States)

Samaha, M. A.; Tafreshi, H. Vahedi; Gad-El-Hak, M.

2011-11-01

We model the performance of superhydrophobic surfaces comprised of randomly distributed roughness that resembles natural surfaces, or those produced via random deposition of hydrophobic particles. Such a fabrication method is far less expensive than ordered-microstructured fabrication. The present numerical simulations are aimed at improving our understanding of the drag reduction effect and the stability of the air-water interface in terms of the microstructure parameters. For comparison and validation, we have also simulated the flow over superhydrophobic surfaces made up of aligned or staggered microposts for channel flows as well as streamwise or spanwise ridge configurations for pipe flows. The present results are compared with other theoretical and experimental studies. The numerical simulations indicate that the random distribution of surface roughness has a favorable effect on drag reduction, as long as the gas fraction is kept the same. The stability of the meniscus, however, is strongly influenced by the average spacing between the roughness peaks, which needs to be carefully examined before a surface can be recommended for fabrication. Financial support from DARPA, contract number W91CRB-10-1-0003, is acknowledged.

13. A random matrix model of relaxation

International Nuclear Information System (INIS)

Lebowitz, J L; Pastur, L

2004-01-01

We consider a two-level system, S2, coupled to a general n-level system, Sn, via a random matrix. We derive an integral representation for the mean reduced density matrix ρ(t) of S2 in the limit n → ∞, and we identify a model of Sn which possesses some of the properties expected of macroscopic thermal reservoirs. In particular, it yields the Gibbs form for ρ(∞). We also consider an analog of the van Hove limit and obtain a master equation (Markov dynamics) for the evolution of ρ(t) on an appropriate time scale.

14. Prediction of octanol-air partition coefficients for polychlorinated biphenyls (PCBs) using 3D-QSAR models.

Science.gov (United States)

Chen, Ying; Cai, Xiaoyu; Jiang, Long; Li, Yu

2016-02-01

Based on the experimental data of octanol-air partition coefficients (KOA) for 19 polychlorinated biphenyl (PCB) congeners, two types of QSAR methods, comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA), are used to establish 3D-QSAR models using the structural parameters as independent variables and using logKOA values as the dependent variable with the Sybyl software to predict the KOA values of the remaining 190 PCB congeners. The whole data set (19 compounds) was divided into a training set (15 compounds) for model generation and a test set (4 compounds) for model validation. As a result, the cross-validation correlation coefficient (q²) obtained by the CoMFA and CoMSIA models (shuffled 12 times) was in the range of 0.825-0.969 (>0.5), the correlation coefficient (r²) obtained was in the range of 0.957-1.000 (>0.9), and the SEP (standard error of prediction) of the test set was within the range of 0.070-0.617, indicating that the models were robust and predictive. Randomly selected from a set of models, CoMFA analysis revealed that the corresponding percentages of the variance explained by steric and electrostatic fields were 23.9% and 76.1%, respectively, while for CoMSIA analysis those explained by steric, electrostatic and hydrophobic fields were 0.6%, 92.6%, and 6.8%, respectively. The electrostatic field was determined as a primary factor governing the logKOA. The correlation analysis of the relationship between the number of Cl atoms and the average logKOA values of PCBs indicated that logKOA values gradually increased as the number of Cl atoms increased. Simultaneously, related studies on PCB detection in the Arctic and Antarctic areas revealed that higher logKOA values indicate a stronger PCB migration ability. From the CoMFA and CoMSIA contour maps, logKOA decreased when substituents possessed electropositive groups at the 2-, 3-, 3'-, 5- and 6- positions, which could reduce the PCB migration ability. These results are
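The reported q² is a leave-one-out cross-validated R². A generic sketch for a linear surrogate model (the descriptors and responses are synthetic placeholders, not CoMFA/CoMSIA fields):

```python
import numpy as np

def q2_loo(X, y):
    """Leave-one-out cross-validated q^2 = 1 - PRESS / SS_tot, linear model."""
    n = len(y)
    press = 0.0
    for i in range(n):
        m = np.arange(n) != i
        # refit on all samples but i (with an intercept column)
        coef, *_ = np.linalg.lstsq(np.c_[X[m], np.ones(m.sum())], y[m], rcond=None)
        press += (y[i] - np.r_[X[i], 1.0] @ coef) ** 2
    return 1.0 - press / np.sum((y - y.mean()) ** 2)

rng = np.random.default_rng(12)
X = rng.normal(size=(30, 3))                     # illustrative descriptors
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0.0, 0.1, 30)
q2 = q2_loo(X, y)                                # near 1: predictive model
q2_noise = q2_loo(X, rng.normal(size=30))        # near or below 0: no signal
```

The q² > 0.5 rule of thumb cited in the abstract reflects exactly this contrast: a predictive model keeps PRESS well below the total sum of squares.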

15. Genetic Analysis of Daily Maximum Milking Speed by a Random Walk Model in Dairy Cows

DEFF Research Database (Denmark)

Karacaören, Burak; Janss, Luc; Kadarmideen, Haja

Data were obtained from dairy cows stationed at the research farm of ETH Zurich for maximum milking speed. The main aims of this paper are a) to evaluate if the Wood curve is suitable to model the mean lactation curve and b) to predict longitudinal breeding values by random regression and random walk models of maximum milking speed. The Wood curve did not provide a good fit to the data set. Quadratic random regressions gave better predictions compared with the random walk model. However, the random walk model does not need to be evaluated for different orders of regression coefficients. In addition, with Kalman filter applications the random walk model could give online prediction of breeding values. Hence, without waiting for whole lactation records, genetic evaluation could be made when daily or monthly data become available.

16. A study on improvement of analytical prediction model for spacer grid pressure loss coefficients

International Nuclear Information System (INIS)

Lim, Jonh Seon

2002-02-01

Nuclear fuel assemblies used in nuclear power plants consist of the nuclear fuel rods, the control rod guide tubes, an instrument guide tube, spacer grids, a bottom nozzle, and a top nozzle. The spacer grid is the most important of the fuel assembly components for thermal-hydraulic and mechanical design and analyses. The spacer grids, fixed to the guide tubes, support the fuel rods and play a key role in promoting thermal energy transfer through the coolant mixing caused by the turbulent flow and crossflow in the subchannels. In this paper, the analytical spacer grid pressure loss prediction model has been studied and improved by considering the test section wall to spacer grid gap pressure loss independently and by applying an appropriate friction drag coefficient to predict the pressure loss more accurately in the low Reynolds number region. The improved analytical model has been verified against hydraulic pressure drop test results for spacer grids of three types, with 5x5, 16x16, and 17x17 arrays, respectively. The pressure loss coefficients predicted by the improved analytical model agree with the test results within ±12%. This result shows that the improved analytical model can be used for research and design change of the nuclear fuel assembly.

17. Probabilistic flood inundation mapping at ungauged streams due to roughness coefficient uncertainty in hydraulic modelling

Science.gov (United States)

Papaioannou, George; Vasiliades, Lampros; Loukas, Athanasios; Aronica, Giuseppe T.

2017-04-01

Probabilistic flood inundation mapping is performed and analysed at the ungauged Xerias stream reach, Volos, Greece. The study evaluates the uncertainty introduced by the roughness coefficient values of hydraulic models in flood inundation modelling and mapping. The well-established one-dimensional (1-D) hydraulic model HEC-RAS is selected and linked to Monte-Carlo simulations of hydraulic roughness. Terrestrial Laser Scanner data have been used to produce a high-quality DEM to minimise input data uncertainty and to improve the accuracy of the stream channel topography required by the hydraulic model. Initial Manning's n roughness coefficient values are based on pebble-count field surveys and empirical formulas. Various theoretical probability distributions are fitted and evaluated on their accuracy in representing the estimated roughness values. Finally, Latin Hypercube Sampling has been used to generate different sets of Manning roughness values, and flood inundation probability maps have been created with the use of Monte Carlo simulations. Historical flood extent data from an extreme historical flash flood event are used for validation of the method. The calibration process is based on binary wet-dry reasoning with the Median Absolute Percentage Error evaluation metric. The results show that the proposed procedure supports probabilistic flood hazard mapping at ungauged rivers and provides water resources managers with valuable information for planning and implementing flood risk mitigation strategies.
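The Latin Hypercube step can be sketched in a few lines: one uniform draw per equal-probability stratum, shuffled, then mapped through the inverse CDF of the fitted roughness distribution (a lognormal is assumed here purely for illustration, not as the study's fitted choice):

```python
import numpy as np
from statistics import NormalDist

def latin_hypercube(n, rng):
    """One uniform draw per equal-probability bin [i/n, (i+1)/n), shuffled."""
    u = (rng.random(n) + np.arange(n)) / n
    rng.shuffle(u)
    return u

rng = np.random.default_rng(7)
n_sets = 100

# Hypothetical lognormal model for channel Manning's n (illustrative parameters)
mu, sigma = np.log(0.035), 0.3
p = latin_hypercube(n_sets, rng)
manning = np.exp(mu + sigma * np.array([NormalDist().inv_cdf(v) for v in p]))
```

Each of the 100 sampled roughness values would drive one HEC-RAS run, and the stack of simulated wet-dry grids gives the per-cell inundation probability.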

18. A parabolic model of drag coefficient for storm surge simulation in the South China Sea

Science.gov (United States)

Peng, Shiqiu; Li, Yineng

2015-01-01

Drag coefficient (Cd) is an essential metric in the calculation of momentum exchange over the air-sea interface and thus has large impacts on the simulation or forecast of the upper-ocean state associated with sea surface winds, such as storm surges. Generally, Cd is a function of wind speed. However, the exact relationship between Cd and wind speed is still in dispute, and the widely-used formula that is a linear function of wind speed in an ocean model can lead to large bias at high wind speed. Here we establish a parabolic model of Cd based on storm surge observations and simulation in the South China Sea (SCS) through a number of tropical cyclone cases. Simulation of storm surges for independent tropical cyclone (TC) cases indicates that the new parabolic model of Cd outperforms traditional linear models. PMID:26499262
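The linear-versus-parabolic comparison can be illustrated on synthetic data in which Cd levels off and drops at high wind speed, a shape a linear Cd(W) cannot track (all coefficients below are made up for illustration, not the fitted SCS values):

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic (wind speed, Cd) pairs with a high-wind rolloff around 35 m/s
W = rng.uniform(5.0, 60.0, 200)
cd_true = -0.6e-5 * (W - 35.0) ** 2 + 2.5e-3
Cd = cd_true + rng.normal(0.0, 5e-5, 200)

lin = np.polyfit(W, Cd, 1)   # linear Cd(W) model
par = np.polyfit(W, Cd, 2)   # parabolic Cd(W) model
rmse_lin = np.sqrt(np.mean((np.polyval(lin, W) - Cd) ** 2))
rmse_par = np.sqrt(np.mean((np.polyval(par, W) - Cd) ** 2))
```

The negative leading coefficient of the fitted parabola is what encodes the rolloff; a linear fit instead averages the rising and falling branches and biases the surge forcing at high winds.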

20. Application of several activity coefficient models to water-organic-electrolyte aerosols of atmospheric interest

Directory of Open Access Journals (Sweden)

T. Raatikainen

2005-01-01

In this work, existing and modified activity coefficient models are examined in order to assess their capabilities to describe the properties of aqueous solution droplets relevant in the atmosphere. Five different water-organic-electrolyte activity coefficient models were first selected from the literature. Only one of these models included organics and electrolytes that are common in atmospheric aerosol particles. In the other models, the organic species were solvents such as alcohols, and important atmospheric ions like NH4+ could be missing. The predictions of these models were compared to experimental activity and solubility data in aqueous single-electrolyte solutions with 31 different electrolytes. Based on the deviations from experimental data and on the capabilities of the models, four predictive models were selected for fitting of new parameters for binary and ternary solutions of common atmospheric electrolytes and organics. New electrolytes (H+, NH4+, Na+, Cl-, NO3- and SO42-) and organics (dicarboxylic and some hydroxy acids) were added, and some modifications were made to the models where found useful. All new and most of the existing parameters were fitted to experimental single-electrolyte data as well as data for aqueous organics and aqueous organic-electrolyte solutions. Unfortunately, there are very few data available for organic activities in binary solutions and for organic and electrolyte activities in aqueous organic-electrolyte solutions. This reduces model capabilities in predicting solubilities. After the parameters were fitted, deviations from measurement data were calculated for all fitted models and for different data types. These deviations and the calculated property values were compared with those from other non-electrolyte and organic-electrolyte models found in the literature. Finally, hygroscopic growth factors were calculated for four 100 nm organic-electrolyte particles and these predictions were compared to

1. Polynomial Chaos Expansion of Random Coefficients and the Solution of Stochastic Partial Differential Equations in the Tensor Train Format

KAUST Repository

Dolgov, Sergey; Khoromskij, Boris N.; Litvinenko, Alexander; Matthies, Hermann G.

2015-01-01

We apply the tensor train (TT) decomposition to construct the tensor product polynomial chaos expansion (PCE) of a random field, to solve the stochastic elliptic diffusion PDE with the stochastic Galerkin discretization, and to compute some

2. LiDAR based prediction of forest biomass using hierarchical models with spatially varying coefficients

Science.gov (United States)

Babcock, Chad; Finley, Andrew O.; Bradford, John B.; Kolka, Randall K.; Birdsey, Richard A.; Ryan, Michael G.

2015-01-01

Many studies and production inventory systems have shown the utility of coupling covariates derived from Light Detection and Ranging (LiDAR) data with forest variables measured on georeferenced inventory plots through regression models. The objective of this study was to propose and assess the use of a Bayesian hierarchical modeling framework that accommodates both residual spatial dependence and non-stationarity of model covariates through the introduction of spatial random effects. We explored this objective using four forest inventory datasets that are part of the North American Carbon Program, each comprising point-referenced measures of above-ground forest biomass and discrete LiDAR. For each dataset, we considered at least five regression model specifications of varying complexity. Models were assessed based on goodness of fit criteria and predictive performance using a 10-fold cross-validation procedure. Results showed that the addition of spatial random effects to the regression model intercept improved fit and predictive performance in the presence of substantial residual spatial dependence. Additionally, in some cases, allowing either some or all regression slope parameters to vary spatially, via the addition of spatial random effects, further improved model fit and predictive performance. In other instances, models showed improved fit but decreased predictive performance—indicating over-fitting and underscoring the need for cross-validation to assess predictive ability. The proposed Bayesian modeling framework provided access to pixel-level posterior predictive distributions that were useful for uncertainty mapping, diagnosing spatial extrapolation issues, revealing missing model covariates, and discovering locally significant parameters.

3. Ising model of a randomly triangulated random surface as a definition of fermionic string theory

International Nuclear Information System (INIS)

1986-01-01

Fermionic degrees of freedom are added to randomly triangulated planar random surfaces. It is shown that the Ising model on a fixed graph is equivalent to a certain Majorana fermion theory on the dual graph. (orig.)

4. Heat and mass transfer coefficients and modeling of infrared drying of banana slices

Directory of Open Access Journals (Sweden)

Banana is one of the most consumed fruits in the world, with a large part of its production taking place in tropical countries. This fruit possesses a wide range of vitamins and minerals, being an important component of the diet worldwide. However, the shelf life of bananas is short, thus requiring procedures to prevent quality loss and increase the shelf life. One of these procedures, widely used, is drying. This work aimed to study the infrared drying process of banana slices (cv. Prata) and to determine the heat and mass transfer coefficients of this process. In addition, the effective diffusion coefficient and the relationship between the ripening stage of the banana and drying were obtained. Banana slices at four different ripening stages were dried using a dryer with an infrared heating source at four different temperatures (65, 75, 85, and 95 ºC). The Midilli model was the one that best represented the infrared drying of banana slices. Heat and mass transfer coefficients varied between 46.84 and 70.54 W m⁻² K⁻¹ and between 0.040 and 0.0632 m s⁻¹, respectively, over the temperature range at the different ripening stages. The effective diffusion coefficient ranged from 1.96 to 3.59 × 10⁻¹⁵ m² s⁻¹. The activation energies found were 16.392, 29.531, 23.194, and 25.206 kJ mol⁻¹ for the 2nd, 3rd, 5th, and 7th ripening stages, respectively. Ripening stage did not affect the infrared drying of bananas.
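A common route to an effective diffusion coefficient in thin-slab drying is the first term of the Fick-series solution, whose log-linear form gives D_eff from a slope. The sketch below uses illustrative values chosen for a convenient time scale, not the paper's data:

```python
import numpy as np

# First-term Fick-series solution for a thin slab dried from both faces:
#   MR(t) = (8/pi^2) * exp(-pi^2 * Deff * t / (4 * L^2))
# so the slope of ln(MR) against t yields Deff.
L = 2.0e-3                     # slab half-thickness, m (illustrative)
Deff_true = 2.5e-10            # m^2 s^-1 (illustrative)
rng = np.random.default_rng(11)
t = np.linspace(600.0, 4 * 3600.0, 40)                     # time, s
MR = (8 / np.pi ** 2) * np.exp(-np.pi ** 2 * Deff_true * t / (4 * L ** 2))
MR *= np.exp(rng.normal(0.0, 0.01, t.size))                # measurement noise

slope, _ = np.polyfit(t, np.log(MR), 1)
Deff_hat = -slope * 4 * L ** 2 / np.pi ** 2
```

Repeating the fit at each drying temperature and regressing ln(D_eff) on 1/T (an Arrhenius plot) is the standard way to obtain activation energies like those reported.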

5. A new neural network model for solving random interval linear programming problems.

Science.gov (United States)

2017-05-01

This paper presents a neural network model for solving random interval linear programming problems. The original problem, involving random interval variable coefficients, is first transformed into an equivalent convex second-order cone programming problem. A neural network model is then constructed for solving the obtained convex second-order cone problem. Employing a Lyapunov function approach, it is also shown that the proposed neural network model is stable in the sense of Lyapunov and globally convergent to an exact satisfactory solution of the original problem. Several illustrative examples are solved in support of this technique. Copyright © 2017 Elsevier Ltd. All rights reserved.

6. Modeling of thermal expansion coefficient of perovskite oxide for solid oxide fuel cell cathode

Science.gov (United States)

Heydari, F.; Maghsoudipour, A.; Alizadeh, M.; Khakpour, Z.; Javaheri, M.

2015-09-01

Artificial intelligence models have the capacity to eliminate the need for expensive experimental investigation in various areas of manufacturing processes, including material science. This study investigates the applicability of the adaptive neuro-fuzzy inference system (ANFIS) approach for modeling the thermal expansion coefficient (TEC) of perovskite oxide for solid oxide fuel cell cathodes. Perovskite oxides (Ln = La, Nd, Sm; M = Fe, Ni, Mn) were prepared and characterized to study the influence of the different cations on TEC. Experimental results showed that TEC decreases favorably with the substitution of Nd3+ and Mn3+ ions into the lattice. Structural parameters of the compounds were determined by X-ray diffraction, and field emission scanning electron microscopy was used for the morphological study. The comparison results indicated that the ANFIS technique can be employed successfully in modeling the thermal expansion coefficient of perovskite oxide for solid oxide fuel cell cathodes, with considerable savings in cost and time.

7. Mutual diffusion coefficient models for polymer-solvent systems based on the Chapman-Enskog theory

Directory of Open Access Journals (Sweden)

R. A. Reis

2004-12-01

Full Text Available There are numerous examples of the importance of small-molecule migration in polymeric materials, such as in the drying of polymeric packaging, controlled drug delivery, formation of films, and membrane separation. The Chapman-Enskog kinetic theory of hard-sphere fluids with the Weeks-Chandler-Andersen effective hard-sphere diameter (Enskog-WCA) has been the most fruitful in diffusion studies of simple fluids and mixtures. In this work, the ability of the Enskog-WCA model to describe the temperature and concentration dependence of the mutual diffusion coefficient, D, for a polystyrene-toluene system was evaluated. Using experimental diffusion data, two polymer model approaches and three mixing rules for the effective hard-sphere diameter were tested. Some of the procedures tested resulted in models capable of correlating the experimental data well for the system in question at solvent mass fractions greater than 0.3.

8. Random defect lines in conformal minimal models

International Nuclear Information System (INIS)

Jeng, M.; Ludwig, A.W.W.

2001-01-01

We analyze the effect of adding quenched disorder along a defect line in the 2D conformal minimal models using replicas. The disorder is realized by a random applied magnetic field in the Ising model, by fluctuations in the ferromagnetic bond coupling in the tricritical Ising model and tricritical three-state Potts model (the φ₁₂ operator), etc. We find that for the Ising model, the defect renormalizes to two decoupled half-planes without disorder, but that for all other models, the defect renormalizes to a disorder-dominated fixed point. Its critical properties are studied with an expansion in ε ∝ 1/m for the mth Virasoro minimal model. The decay exponents X_N = (N/2)[1 − 9(3N−4)/(4(m+1)²)] + O((3/(m+1))³) of the Nth moment of the two-point function of φ₁₂ along the defect are obtained to two-loop order, exhibiting multifractal behavior. This leads to a typical decay exponent X_typ = (1/2)[1 + 9/(m+1)²] + O((3/(m+1))³). One-point functions are seen to have a non-self-averaging amplitude. The boundary entropy is larger than that of the pure system by order 1/m³. As a byproduct of our calculations, we also obtain to two-loop order the exponent X̃_N = N[1 − (2/(9π²))(3N−4)(q−2)²] + O((q−2)³) of the Nth moment of the energy operator in the q-state Potts model with bulk bond disorder.

9. A dynamic global-coefficient mixed subgrid-scale model for large-eddy simulation of turbulent flows

International Nuclear Information System (INIS)

Singh, Satbir; You, Donghyun

2013-01-01

Highlights: ► A new SGS model is developed for LES of turbulent flows in complex geometries. ► A dynamic global-coefficient SGS model is coupled with a scale-similarity model. ► Overcomes some of the difficulties associated with eddy-viscosity closures. ► Does not require averaging or clipping of the model coefficient for stabilization. ► The predictive capability is demonstrated in a number of turbulent flow simulations. -- Abstract: A dynamic global-coefficient mixed subgrid-scale eddy-viscosity model for large-eddy simulation of turbulent flows in complex geometries is developed. In the present model, the subgrid-scale stress is decomposed into the modified Leonard stress, cross stress, and subgrid-scale Reynolds stress. The modified Leonard stress is explicitly computed assuming a scale similarity, while the cross stress and the subgrid-scale Reynolds stress are modeled using the global-coefficient eddy-viscosity model. The model coefficient is determined by a dynamic procedure based on the global equilibrium between the subgrid-scale dissipation and the viscous dissipation. The new model relieves some of the difficulties associated with an eddy-viscosity closure, such as the nonalignment of the principal axes of the subgrid-scale stress tensor and the strain rate tensor and the anisotropy of turbulent flow fields, while, like other dynamic global-coefficient models, it does not require averaging or clipping of the model coefficient for numerical stabilization. The combination of the global-coefficient eddy-viscosity model and a scale-similarity model is demonstrated to produce improved predictions in a number of turbulent flow simulations.

10. A kinetic model for chemical reactions without barriers: transport coefficients and eigenmodes

International Nuclear Information System (INIS)

Alves, Giselle M; Kremer, Gilberto M; Marques, Wilson Jr; Soares, Ana Jacinta

2011-01-01

The kinetic model of the Boltzmann equation proposed by Kremer and Soares (2009) for a binary mixture undergoing chemical reactions of symmetric type, which occur without activation energy, is revisited here with the aim of investigating in detail the transport properties of the reactive mixture and the influence of the reaction process on the transport coefficients. Accordingly, the non-equilibrium solutions of the Boltzmann equations are determined through an expansion in Sonine polynomials up to the first order, using the Chapman–Enskog method, in a chemical regime for which the reaction process is close to its final equilibrium state. The non-equilibrium deviations are explicitly calculated for the thermal–diffusion ratio and the coefficients of shear viscosity, diffusion and thermal conductivity. The theoretical and formal analysis developed in the present paper is complemented with numerical simulations performed for different concentrations of reactants and products of the reaction as well as for both exothermic and endothermic chemical processes. The results reveal that chemical reactions without an energy barrier can have an appreciable influence on the transport properties of the mixture. In contrast to the case of reactions with activation energy, the coefficients of shear viscosity and thermal conductivity become larger than those of an inert mixture when the reactions are exothermic. An application of the non-barrier model and its detailed transport picture is included in this paper, in order to investigate the dynamics of local perturbations in the constituent number densities and in the velocity and temperature of the whole mixture, induced by spontaneous internal fluctuations. It is shown that for the longitudinal disturbances there exist two hydrodynamic sound modes, one purely diffusive hydrodynamic mode and one kinetic mode.

11. A kinetic model for chemical reactions without barriers: transport coefficients and eigenmodes

Science.gov (United States)

Alves, Giselle M.; Kremer, Gilberto M.; Marques, Wilson, Jr.; Jacinta Soares, Ana

2011-03-01

The kinetic model of the Boltzmann equation proposed by Kremer and Soares (2009) for a binary mixture undergoing chemical reactions of symmetric type, which occur without activation energy, is revisited here with the aim of investigating in detail the transport properties of the reactive mixture and the influence of the reaction process on the transport coefficients. Accordingly, the non-equilibrium solutions of the Boltzmann equations are determined through an expansion in Sonine polynomials up to the first order, using the Chapman-Enskog method, in a chemical regime for which the reaction process is close to its final equilibrium state. The non-equilibrium deviations are explicitly calculated for the thermal-diffusion ratio and the coefficients of shear viscosity, diffusion and thermal conductivity. The theoretical and formal analysis developed in the present paper is complemented with numerical simulations performed for different concentrations of reactants and products of the reaction as well as for both exothermic and endothermic chemical processes. The results reveal that chemical reactions without an energy barrier can have an appreciable influence on the transport properties of the mixture. In contrast to the case of reactions with activation energy, the coefficients of shear viscosity and thermal conductivity become larger than those of an inert mixture when the reactions are exothermic. An application of the non-barrier model and its detailed transport picture is included in this paper, in order to investigate the dynamics of local perturbations in the constituent number densities and in the velocity and temperature of the whole mixture, induced by spontaneous internal fluctuations. It is shown that for the longitudinal disturbances there exist two hydrodynamic sound modes, one purely diffusive hydrodynamic mode and one kinetic mode.

12. Dose coefficients in pediatric and adult abdominopelvic CT based on 100 patient models

Science.gov (United States)

Tian, Xiaoyu; Li, Xiang; Segars, W. Paul; Frush, Donald P.; Paulson, Erik K.; Samei, Ehsan

2013-12-01

Recent studies have shown the feasibility of estimating patient dose from a CT exam using CTDIvol-normalized-organ dose (denoted as h), DLP-normalized-effective dose (denoted as k), and DLP-normalized-risk index (denoted as q). However, previous studies were limited to a small number of phantom models. The purpose of this work was to provide dose coefficients (h, k, and q) across a large number of computational models covering a broad range of patient anatomy, age, size percentile, and gender. The study consisted of 100 patient computer models (age range, 0 to 78 y.o.; weight range, 2-180 kg) including 42 pediatric models (age range, 0 to 16 y.o.; weight range, 2-80 kg) and 58 adult models (age range, 18 to 78 y.o.; weight range, 57-180 kg). Multi-detector array CT scanners from two commercial manufacturers (LightSpeed VCT, GE Healthcare; SOMATOM Definition Flash, Siemens Healthcare) were included. A previously-validated Monte Carlo program was used to simulate organ dose for each patient model and each scanner, from which h, k, and q were derived. The relationships between h, k, and q and patient characteristics (size, age, and gender) were ascertained. The differences in conversion coefficients across the scanners were further characterized. CTDIvol-normalized-organ dose (h) showed an exponential decrease with increasing patient size. For organs within the image coverage, the average differences of h across scanners were less than 15%. That value increased to 29% for organs on the periphery or outside the image coverage, and to 8% for distributed organs, respectively. The DLP-normalized-effective dose (k) decreased exponentially with increasing patient size. For a given gender, the DLP-normalized-risk index (q) showed an exponential decrease with both increasing patient size and patient age. The average differences in k and q across scanners were 8% and 10%, respectively. This study demonstrated that the knowledge of patient information and CTDIvol/DLP values may

13. Dose coefficients in pediatric and adult abdominopelvic CT based on 100 patient models

International Nuclear Information System (INIS)

Tian, Xiaoyu; Samei, Ehsan; Li, Xiang; Segars, W Paul; Frush, Donald P; Paulson, Erik K

2013-01-01

Recent studies have shown the feasibility of estimating patient dose from a CT exam using CTDIvol-normalized-organ dose (denoted as h), DLP-normalized-effective dose (denoted as k), and DLP-normalized-risk index (denoted as q). However, previous studies were limited to a small number of phantom models. The purpose of this work was to provide dose coefficients (h, k, and q) across a large number of computational models covering a broad range of patient anatomy, age, size percentile, and gender. The study consisted of 100 patient computer models (age range, 0 to 78 y.o.; weight range, 2–180 kg) including 42 pediatric models (age range, 0 to 16 y.o.; weight range, 2–80 kg) and 58 adult models (age range, 18 to 78 y.o.; weight range, 57–180 kg). Multi-detector array CT scanners from two commercial manufacturers (LightSpeed VCT, GE Healthcare; SOMATOM Definition Flash, Siemens Healthcare) were included. A previously-validated Monte Carlo program was used to simulate organ dose for each patient model and each scanner, from which h, k, and q were derived. The relationships between h, k, and q and patient characteristics (size, age, and gender) were ascertained. The differences in conversion coefficients across the scanners were further characterized. CTDIvol-normalized-organ dose (h) showed an exponential decrease with increasing patient size. For organs within the image coverage, the average differences of h across scanners were less than 15%. That value increased to 29% for organs on the periphery or outside the image coverage, and to 8% for distributed organs, respectively. The DLP-normalized-effective dose (k) decreased exponentially with increasing patient size. For a given gender, the DLP-normalized-risk index (q) showed an exponential decrease with both increasing patient size and patient age. The average differences in k and q across scanners were 8% and 10%, respectively. This study demonstrated that the knowledge of patient information and CTDIvol

14. Modelling the stochastic nature of the available coefficient of friction at footwear-floor interfaces.

Science.gov (United States)

Gragg, Jared; Klose, Ellison; Yang, James

2017-07-01

The available coefficient of friction (ACOF) is a measure of the friction available between two surfaces, which for human gait would be the footwear-floor interface. It is often compared to the required coefficient of friction (RCOF) to determine the likelihood of a slip in gait. Both the ACOF and RCOF are stochastic by nature, meaning that neither should be represented by a deterministic value, such as the sample mean. Previous research has determined that the RCOF can be modelled well by either the normal or lognormal distributions, but previous research aimed at determining an appropriate distribution for the ACOF was inconclusive. This study focuses on modelling the stochastic nature of the ACOF by fitting eight continuous probability distributions to ACOF data for six scenarios. In addition, the data were used to study the effect that a simple housekeeping action such as sweeping could have on the ACOF. Practitioner Summary: Previous research aimed at determining an appropriate distribution for the ACOF was inconclusive. The study addresses this issue as well as looking at the effect that an act such as sweeping has on the ACOF.
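
The distribution-fitting step described above can be sketched as follows; the candidate families, the synthetic ACOF values, and the AIC ranking are illustrative assumptions, not the study's data or code:

```python
# Sketch: fit candidate continuous distributions to ACOF measurements and
# rank them by AIC. The ACOF data below are synthetic, for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
acof = rng.lognormal(mean=np.log(0.4), sigma=0.25, size=200)  # synthetic ACOF values

candidates = {
    "normal": stats.norm,
    "lognormal": stats.lognorm,
    "weibull": stats.weibull_min,
    "gamma": stats.gamma,
}

results = {}
for name, dist in candidates.items():
    params = dist.fit(acof)                      # maximum-likelihood fit
    loglik = np.sum(dist.logpdf(acof, *params))  # log-likelihood at the fit
    results[name] = 2 * len(params) - 2 * loglik  # AIC: lower is better

best = min(results, key=results.get)
```

Goodness-of-fit tests such as Kolmogorov-Smirnov or Anderson-Darling could replace the AIC ranking, depending on the comparison criterion chosen.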

15. Estimating Reaction Rate Coefficients Within a Travel-Time Modeling Framework

Energy Technology Data Exchange (ETDEWEB)

Gong, R [Georgia Institute of Technology; Lu, C [Georgia Institute of Technology; Luo, Jian [Georgia Institute of Technology; Wu, Wei-min [Stanford University; Cheng, H. [Stanford University; Criddle, Craig [Stanford University; Kitanidis, Peter K. [Stanford University; Gu, Baohua [ORNL; Watson, David B [ORNL; Jardine, Philip M [ORNL; Brooks, Scott C [ORNL

2011-03-01

A generalized, efficient, and practical approach based on the travel-time modeling framework is developed to estimate in situ reaction rate coefficients for groundwater remediation in heterogeneous aquifers. The required information for this approach can be obtained by conducting tracer tests with injection of a mixture of conservative and reactive tracers and measurements of both breakthrough curves (BTCs). The conservative BTC is used to infer the travel-time distribution from the injection point to the observation point. For advection-dominant reactive transport with well-mixed reactive species and a constant travel-time distribution, the reactive BTC is obtained by integrating the solutions to advective-reactive transport over the entire travel-time distribution, and then is used in optimization to determine the in situ reaction rate coefficients. By directly working on the conservative and reactive BTCs, this approach avoids costly aquifer characterization and improves the estimation for transport in heterogeneous aquifers which may not be sufficiently described by traditional mechanistic transport models with constant transport parameters. Simplified schemes are proposed for reactive transport with zero-, first-, nth-order, and Michaelis-Menten reactions. The proposed approach is validated by a reactive transport case in a two-dimensional synthetic heterogeneous aquifer and a field-scale bioremediation experiment conducted at Oak Ridge, Tennessee. The field application indicates that ethanol degradation for U(VI)-bioremediation is better approximated by zero-order reaction kinetics than first-order reaction kinetics.
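
For the first-order case, the estimation idea can be sketched as follows; the travel-time distribution, rate value, and least-squares fit are illustrative assumptions, not the authors' implementation:

```python
# Sketch: for a pulse injection, the conservative BTC stands in for the
# travel-time pdf, and mass arriving after travel time t has decayed by
# exp(-k*t) under first-order kinetics. We recover k by least squares.
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

t = np.linspace(0.1, 50.0, 500)               # time since injection (arbitrary units)
f = stats.lognorm.pdf(t, s=0.5, scale=10.0)   # assumed travel-time distribution

k_true = 0.08                                 # assumed first-order rate coefficient
btc_reactive = f * np.exp(-k_true * t)        # synthetic reactive BTC

def sse(k):
    """Sum of squared errors between modeled and 'observed' reactive BTC."""
    return np.sum((f * np.exp(-k * t) - btc_reactive) ** 2)

k_hat = minimize_scalar(sse, bounds=(0.0, 1.0), method="bounded").x
```

Working directly on the two BTCs in this way is what lets the approach bypass a full characterization of the heterogeneous conductivity field.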

16. Estimates of Intraclass Correlation Coefficients from Longitudinal Group-Randomized Trials of Adolescent HIV/STI/Pregnancy Prevention Programs

Science.gov (United States)

Glassman, Jill R.; Potter, Susan C.; Baumler, Elizabeth R.; Coyle, Karin K.

2015-01-01

Introduction: Group-randomized trials (GRTs) are one of the most rigorous methods for evaluating the effectiveness of group-based health risk prevention programs. Efficiently designing GRTs with a sample size that is sufficient for meeting the trial's power and precision goals while not wasting resources exceeding them requires estimates of the…

17. SAS Code for Calculating Intraclass Correlation Coefficients and Effect Size Benchmarks for Site-Randomized Education Experiments

Science.gov (United States)

Brandon, Paul R.; Harrison, George M.; Lawton, Brian E.

2013-01-01

When evaluators plan site-randomized experiments, they must conduct the appropriate statistical power analyses. These analyses are most likely to be valid when they are based on data from the jurisdictions in which the studies are to be conducted. In this method note, we provide software code, in the form of a SAS macro, for producing statistical…

18. The coefficient of determination R2 and intra-class correlation coefficient from generalized linear mixed-effects models revisited and expanded.

Science.gov (United States)

Nakagawa, Shinichi; Johnson, Paul C D; Schielzeth, Holger

2017-09-01

The coefficient of determination R² quantifies the proportion of variance explained by a statistical model and is an important summary statistic of biological interest. However, estimating R² for generalized linear mixed models (GLMMs) remains challenging. We have previously introduced a version of R² that we called [Formula: see text] for Poisson and binomial GLMMs, but not for other distributional families. Similarly, we earlier discussed how to estimate intra-class correlation coefficients (ICCs) using Poisson and binomial GLMMs. In this paper, we generalize our methods to all other non-Gaussian distributions, in particular to negative binomial and gamma distributions that are commonly used for modelling biological data. While expanding our approach, we highlight two useful concepts for biologists, Jensen's inequality and the delta method, both of which help us in understanding the properties of GLMMs. Jensen's inequality has important implications for biologically meaningful interpretation of GLMMs, whereas the delta method allows a general derivation of variance associated with non-Gaussian distributions. We also discuss some special considerations for binomial GLMMs with binary or proportion data. We illustrate the implementation of our extension by worked examples from the field of ecology and evolution in the R environment. However, our method can be used across disciplines and regardless of statistical environments. © 2017 The Author(s).
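
The variance decomposition underlying such mixed-model R² measures can be illustrated with hypothetical variance components; the values below are assumptions for illustration, not results from the paper:

```python
# Minimal numeric sketch of marginal and conditional R² in the mixed-model
# sense: marginal R² uses fixed-effect variance only, conditional R² adds
# the random-effect variance. Variance components here are hypothetical.
var_fixed, var_random, var_resid = 2.0, 1.0, 1.0

total = var_fixed + var_random + var_resid
r2_marginal = var_fixed / total                   # fixed effects only
r2_conditional = (var_fixed + var_random) / total # fixed + random effects
```

For non-Gaussian GLMMs, the residual term is replaced by a distribution-specific variance on the link scale, which is where the paper's use of the delta method comes in.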

19. Predictive multiscale computational model of shoe-floor coefficient of friction.

Science.gov (United States)

Moghaddam, Seyed Reza M; Acharya, Arjun; Redfern, Mark S; Beschorner, Kurt E

2018-01-03

Understanding the frictional interactions between the shoe and floor during walking is critical to prevention of slips and falls, particularly when contaminants are present. A multiscale finite element model of shoe-floor-contaminant friction was developed that takes into account the surface and material characteristics of the shoe and flooring in microscopic and macroscopic scales. The model calculates shoe-floor coefficient of friction (COF) in the boundary lubrication regime, where the effects of adhesion friction and hydrodynamic pressures are negligible. The validity of model outputs was assessed by comparing model predictions to the experimental results from mechanical COF testing. The multiscale model estimates were linearly related to the experimental results (p < 0.0001). The model predicted 73% of variability in experimentally-measured shoe-floor-contaminant COF. The results demonstrate the potential of multiscale finite element modeling in aiding slip-resistant shoe and flooring design and reducing slip and fall injuries. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

20. Autoregressive spatially varying coefficients model for predicting daily PM2.5 using VIIRS satellite AOT

Science.gov (United States)

Schliep, E. M.; Gelfand, A. E.; Holland, D. M.

2015-12-01

There is considerable demand for accurate air quality information in human health analyses. The sparsity of ground monitoring stations across the United States motivates the need for advanced statistical models to predict air quality metrics, such as PM2.5, at unobserved sites. Remote sensing technologies have the potential to expand our knowledge of PM2.5 spatial patterns beyond what we can predict from current PM2.5 monitoring networks. Data from satellites have an additional advantage in not requiring extensive emission inventories necessary for most atmospheric models that have been used in earlier data fusion models for air pollution. Statistical models combining monitoring station data with satellite-obtained aerosol optical thickness (AOT), also referred to as aerosol optical depth (AOD), have been proposed in the literature with varying levels of success in predicting PM2.5. The benefit of using AOT is that satellites provide complete gridded spatial coverage. However, the challenges involved with using it in fusion models are (1) the correlation between the two data sources varies both in time and in space, (2) the data sources are temporally and spatially misaligned, and (3) there is extensive missingness in the monitoring data and also in the satellite data due to cloud cover. We propose a hierarchical autoregressive spatially varying coefficients model to jointly model the two data sources, which addresses the foregoing challenges. Additionally, we offer formal model comparison for competing models in terms of model fit and out of sample prediction of PM2.5. The models are applied to daily observations of PM2.5 and AOT in the summer months of 2013 across the conterminous United States. Most notably, during this time period, we find small in-sample improvement incorporating AOT into our autoregressive model but little out-of-sample predictive improvement.

1. Limb-darkening coefficients from line-blanketed non-LTE hot-star model atmospheres

Science.gov (United States)

Reeve, D. C.; Howarth, I. D.

2016-02-01

We present grids of limb-darkening coefficients computed from non-local thermodynamic equilibrium (non-LTE), line-blanketed TLUSTY model atmospheres, covering effective-temperature and surface-gravity ranges of 15-55 kK and 4.75 dex (cgs) down to the effective Eddington limit, at 2×, 1×, 0.5× (Large Magellanic Cloud), 0.2× (Small Magellanic Cloud), and 0.1× solar. Results are given for the Bessell UBVRICJKHL, Sloan ugriz, Strömgren ubvy, WFCAM ZYJHK, Hipparcos, Kepler, and Tycho passbands, in each case characterized by several different limb-darkening 'laws'. We examine the sensitivity of limb darkening to temperature, gravity, metallicity, microturbulent velocity, and wavelength, and make a comparison with LTE models. The dependence on metallicity is very weak, but limb darkening is a moderately strong function of log g in this temperature regime.
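
Two of the simpler limb-darkening 'laws' commonly used to characterize such grids can be sketched as follows; the coefficient values are illustrative, not taken from these grids:

```python
# Sketch of two standard limb-darkening laws, expressed as the normalized
# specific intensity I(mu)/I(1), where mu is the cosine of the emergent angle.
import numpy as np

def linear_law(mu, u):
    """Linear law: I(mu)/I(1) = 1 - u * (1 - mu)."""
    return 1.0 - u * (1.0 - mu)

def quadratic_law(mu, a, b):
    """Quadratic law: I(mu)/I(1) = 1 - a*(1 - mu) - b*(1 - mu)**2."""
    return 1.0 - a * (1.0 - mu) - b * (1.0 - mu) ** 2

mu = np.linspace(0.0, 1.0, 5)
profile = quadratic_law(mu, a=0.3, b=0.2)  # illustrative coefficients
```

Fitting such laws to the model-atmosphere intensities is what produces the per-passband coefficient tables the abstract describes.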

2. Estimating overall exposure effects for the clustered and censored outcome using random effect Tobit regression models.

Science.gov (United States)

Wang, Wei; Griswold, Michael E

2016-11-30

The random effect Tobit model is a regression model that accommodates both left- and/or right-censoring and within-cluster dependence of the outcome variable. Regression coefficients of random effect Tobit models have conditional interpretations on a constructed latent dependent variable and do not provide inference of overall exposure effects on the original outcome scale. Marginalized random effects model (MREM) permits likelihood-based estimation of marginal mean parameters for the clustered data. For random effect Tobit models, we extend the MREM to marginalize over both the random effects and the normal space and boundary components of the censored response to estimate overall exposure effects at population level. We also extend the 'Average Predicted Value' method to estimate the model-predicted marginal means for each person under different exposure status in a designated reference group by integrating over the random effects and then use the calculated difference to assess the overall exposure effect. The maximum likelihood estimation is proposed utilizing a quasi-Newton optimization algorithm with Gauss-Hermite quadrature to approximate the integration of the random effects. We use these methods to carefully analyze two real datasets. Copyright © 2016 John Wiley & Sons, Ltd.
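
The Gauss-Hermite quadrature step used to integrate over a normal random effect can be sketched as follows; sigma is an assumed value, and the check exploits the known closed form E[exp(b)] = exp(sigma²/2) for b ~ N(0, sigma²):

```python
# Sketch: Gauss-Hermite quadrature for marginalizing over a normal random
# effect b ~ N(0, sigma^2). With the substitution b = sqrt(2)*sigma*x, the
# Gaussian density becomes the Hermite weight exp(-x^2), up to 1/sqrt(pi).
import numpy as np

sigma = 0.7                                    # assumed random-effect SD
nodes, weights = np.polynomial.hermite.hermgauss(30)

# Marginal mean of exp(b), approximated by the quadrature sum
marginal_mean = np.sum(weights * np.exp(np.sqrt(2.0) * sigma * nodes)) / np.sqrt(np.pi)
exact = np.exp(sigma**2 / 2.0)                 # closed form for comparison
```

In the actual Tobit setting, exp(b) would be replaced by the censored-response likelihood contribution evaluated at each quadrature node.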

3. Stochastic modeling of phosphorus transport in the Three Gorges Reservoir by incorporating variability associated with the phosphorus partition coefficient

Energy Technology Data Exchange (ETDEWEB)

Huang, Lei; Fang, Hongwei; Xu, Xingya; He, Guojian; Zhang, Xuesong; Reible, Danny

2017-08-01

Phosphorus (P) fate and transport plays a crucial role in the ecology of rivers and reservoirs in which eutrophication is limited by P. A key uncertainty in models used to help manage P in such systems is the partitioning of P to suspended and bed sediments. By analyzing data from field and laboratory experiments, we stochastically characterize the variability of the partition coefficient (Kd) and derive spatio-temporal solutions for P transport in the Three Gorges Reservoir (TGR). We formulate a set of stochastic partial differential equations (SPDEs) to simulate P transport by randomly sampling Kd from the measured distributions, to obtain statistical descriptions of the P concentration and retention in the TGR. The correspondence between predicted and observed P concentrations and P retention in the TGR, combined with the ability to effectively characterize uncertainty, suggests that a model that incorporates the observed variability can better describe P dynamics and more effectively serve as a tool for P management in the system. This study highlights the importance of considering parametric uncertainty in estimating uncertainty/variability associated with simulated P transport.
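
The Monte Carlo sampling idea can be sketched as follows; the Kd distribution parameters, suspended-sediment concentration, and dissolved-fraction relation are illustrative assumptions, not the study's calibrated values:

```python
# Sketch: sample the partition coefficient Kd from a fitted distribution and
# propagate it to the dissolved P fraction f_d = 1 / (1 + Kd * SS), where SS
# is the suspended-sediment concentration. All parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
kd = rng.lognormal(mean=np.log(0.05), sigma=0.5, size=10_000)  # hypothetical Kd samples
ss = 1.2                                                       # hypothetical SS

f_dissolved = 1.0 / (1.0 + kd * ss)
lo, hi = np.percentile(f_dissolved, [2.5, 97.5])  # uncertainty band on f_d
```

Repeating the transport simulation for each sampled Kd is what turns the deterministic model into the statistical description of P concentration and retention described above.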

4. Longitudinal dispersion coefficients for numerical modeling of groundwater solute transport in heterogeneous formations.

Science.gov (United States)

Lee, Jonghyun; Rolle, Massimo; Kitanidis, Peter K

2017-09-15

Most recent research on hydrodynamic dispersion in porous media has focused on whole-domain dispersion, while other research is largely on laboratory-scale dispersion. This work focuses on the contribution of a single block in a numerical model to dispersion. Variability of fluid velocity and concentration within a block is not resolved, and the combined spreading effect is approximated using resolved quantities and macroscopic parameters. This applies whether the formation is modeled as homogeneous or discretized into homogeneous blocks, with the emphasis here on the latter. The process of dispersion is typically described through the Fickian model, i.e., the dispersive flux is proportional to the gradient of the resolved concentration, commonly with the Scheidegger parameterization, which is a particular way to compute the dispersion coefficients utilizing dispersivity coefficients. Although such parameterization is by far the most commonly used in solute transport applications, its validity has been questioned. Here, our goal is to investigate the effects of heterogeneity and mass transfer limitations on block-scale longitudinal dispersion and to evaluate under which conditions the Scheidegger parameterization is valid. We compute the relaxation time or memory of the system; changes in time with periods larger than the relaxation time gradually lead to a condition of local equilibrium under which dispersion is Fickian. The method we use requires the solution of a steady-state advection-dispersion equation, and thus is computationally efficient, and applicable to any heterogeneous hydraulic conductivity K field without requiring statistical or structural assumptions. The method was validated by comparing with other approaches such as the moment analysis and the first order perturbation method. We investigate the impact of heterogeneity, both in degree and structure, on the longitudinal dispersion coefficient and then discuss the role of local dispersion
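
The Scheidegger parameterization discussed above can be sketched as follows; the parameter values are illustrative:

```python
# Sketch of the Scheidegger parameterization: the longitudinal dispersion
# coefficient is linear in the seepage velocity, D_L = D_m + alpha_L * |v|,
# with dispersivity alpha_L [m] and molecular diffusion D_m [m^2/s].
def longitudinal_dispersion(v, alpha_l, d_m):
    """D_L [m^2/s] from velocity v [m/s], dispersivity alpha_l [m], diffusion d_m [m^2/s]."""
    return d_m + alpha_l * abs(v)

# Illustrative groundwater-scale values
d_l = longitudinal_dispersion(v=1e-5, alpha_l=0.5, d_m=1e-9)
```

The abstract's question is precisely when this linear-in-velocity form, with a constant block-scale alpha_L, remains a valid description of dispersion in a heterogeneous block.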

5. Application of Mathematical Models for Determination of Microorganisms Growth Rate Kinetic Coefficients for Wastewater Treatment Plant Evaluation

Directory of Open Access Journals (Sweden)

2017-06-01

Conclusion: Evaluation of the Y, kd, k0 and Ks parameters in the operation of the Ekbatan wastewater treatment plant showed that the ASM1 model could determine the coefficients well and that the conditions of biological treatment are therefore appropriate.

6. [Correlation coefficient-based classification method of hydrological dependence variability: With auto-regression model as example].

Science.gov (United States)

Zhao, Yu Xi; Xie, Ping; Sang, Yan Fang; Wu, Zi Yi

2018-04-01

Hydrological processes are temporally dependent. Hydrological time series that include dependence components do not meet the data-consistency assumption required for hydrological computation, and both factors cause great difficulty for water research. Given the existence of hydrological dependence variability, we proposed a correlation-coefficient-based method for evaluating the significance of hydrological dependence based on an auto-regression model. By calculating the correlation coefficient between the original series and its dependence component and selecting reasonable thresholds of the correlation coefficient, the method divides the significance of dependence into five grades: no variability, weak variability, mid variability, strong variability, and drastic variability. By deducing the relationship between the correlation coefficient and the auto-correlation coefficients of each order of the series, we found that the correlation coefficient is mainly determined by the magnitude of the auto-correlation coefficients from the first order to the pth order, which clarifies the theoretical basis of the method. With the first-order and second-order auto-regression models as examples, the reasonableness of the deduced formula was verified through Monte-Carlo experiments classifying the relationship between the correlation coefficient and the auto-correlation coefficient. The method was used to analyze three observed hydrological time series. The results indicated the coexistence of stochastic and dependence characteristics in the hydrological process.
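
A minimal sketch of the classification idea, assuming an AR(1) dependence component and illustrative thresholds (the paper's thresholds may differ):

```python
# Sketch: fit an AR(1) dependence component to a series, correlate it with
# the original series, and map the correlation coefficient to a variability
# grade. Series, coefficients, and thresholds are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n, phi = 500, 0.6
x = np.zeros(n)
for t in range(1, n):                          # simulate an AR(1) series
    x[t] = phi * x[t - 1] + rng.normal()

phi_hat = np.corrcoef(x[1:], x[:-1])[0, 1]     # estimated lag-1 autocorrelation
dependence = phi_hat * x[:-1]                  # AR(1) dependence component
r = np.corrcoef(x[1:], dependence)[0, 1]       # correlation with original series

# Illustrative grade boundaries on |r|
grades = [(0.2, "no"), (0.4, "weak"), (0.6, "mid"), (0.8, "strong"), (1.01, "drastic")]
grade = next(g for thr, g in grades if abs(r) < thr)
```

Repeating this for many simulated series is essentially the Monte-Carlo verification step the abstract describes.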

7. Random matrix model of adiabatic quantum computing

International Nuclear Information System (INIS)

Mitchell, David R.; Adami, Christoph; Lue, Waynn; Williams, Colin P.

2005-01-01

We present an analysis of the quantum adiabatic algorithm for solving hard instances of 3-SAT (an NP-complete problem) in terms of random matrix theory (RMT). We determine the global regularity of the spectral fluctuations of the instantaneous Hamiltonians encountered during the interpolation between the starting Hamiltonians and the ones whose ground states encode the solutions to the computational problems of interest. At each interpolation point, we quantify the degree of regularity of the average spectral distribution via its Brody parameter, a measure that distinguishes regular (i.e., Poissonian) from chaotic (i.e., Wigner-type) distributions of normalized nearest-neighbor spacings. We find that for hard problem instances - i.e., those having a critical ratio of clauses to variables - the spectral fluctuations typically become irregular across a contiguous region of the interpolation parameter, while the spectrum is regular for easy instances. Within the hard region, RMT may be applied to obtain a mathematical model of the probability of avoided level crossings and concomitant failure rate of the adiabatic algorithm due to nonadiabatic Landau-Zener-type transitions. Our model predicts that if the interpolation is performed at a uniform rate, the average failure rate of the quantum adiabatic algorithm, when averaged over hard problem instances, scales exponentially with increasing problem size.
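The Brody parameter mentioned above interpolates between the two spacing distributions. As a minimal sketch of the standard Brody surmise (the textbook form, not code from the paper):

```python
import math

def brody_pdf(s, beta):
    """Brody distribution of normalized nearest-neighbor level spacings.

    beta = 0 recovers the Poisson form exp(-s) (regular spectra);
    beta = 1 recovers the Wigner surmise (chaotic spectra)."""
    b = math.gamma((beta + 2.0) / (beta + 1.0)) ** (beta + 1.0)
    return (beta + 1.0) * b * s**beta * math.exp(-b * s ** (beta + 1.0))
```

Fitting beta to a histogram of observed spacings is how the degree of regularity is quantified at each interpolation point.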

8. Modeling of Electricity Demand for Azerbaijan: Time-Varying Coefficient Cointegration Approach

Directory of Open Access Journals (Sweden)

Jeyhun I. Mikayilov

2017-11-01

Full Text Available Recent literature has shown that electricity demand elasticities may not be constant over time, and this has been investigated using time-varying estimation methods. As accurate modeling of electricity demand is very important in Azerbaijan, a transitional country facing significant change in its economic outlook, we analyze whether the response of electricity demand to income and price varies over time in this economy. We employed the Time-Varying Coefficient cointegration approach, a cutting-edge time-varying estimation method. We find evidence that the income elasticity demonstrates sizeable variation over the period of investigation, ranging from 0.48% to 0.56%. The study has some useful policy implications related to the income and price aspects of electricity consumption in Azerbaijan.

9. Modeling and data analysis of the NASA-WSTF frictional heating apparatus - Effects of test parameters on friction coefficient

Science.gov (United States)

Zhu, Sheng-Hu; Stoltzfus, Joel M.; Benz, Frank J.; Yuen, Walter W.

1988-01-01

A theoretical model is being developed jointly by the NASA White Sands Test Facility (WSTF) and the University of California at Santa Barbara (UCSB) to analyze data generated from the WSTF frictional heating test facility. Analyses of the data generated in the first seconds of the frictional heating test are shown to be effective in determining the friction coefficient between the rubbing interfaces. Different friction coefficients for carbon steel and Monel K-500 are observed. The initial condition of the surface is shown to affect only the initial value of the friction coefficient but to have no significant influence on the average steady-state friction coefficient. Rotational speed and the formation of oxide film on the rotating surfaces are shown to have a significant effect on the friction coefficient.

10. Detailed Analysis of Amplitude and Slope Diffraction Coefficients for knife-edge structure in S-UTD-CH Model

Directory of Open Access Journals (Sweden)

Eray Arik

2017-03-01

Full Text Available In urban, rural and indoor applications, the diffraction mechanism is very important for predicting the field strength and calculating coverage accurately. Diffraction takes place in NLOS (non-line-of-sight) cases such as rooftops, vertices, corners, edges and sharp surfaces. The S-UTD-CH model computes three types of electromagnetic wave incidence: direct, reflected and diffracted waves. When the obstacles in the diffraction geometry are of the same or similar heights, the contribution of the diffraction mechanism is dominant. To predict the diffracted fields accurately, the amplitude and slope diffraction coefficients and the derivatives of these coefficients have to be computed correctly. In this paper, all the derivations of the diffraction coefficients are made for knife-edge type structures, and extensive simulations are performed in order to analyze the amplitude and slope diffraction coefficients. In plane-angle diffraction, the contributions of the amplitude and slope diffraction coefficients are maximal.

11. Tyre-road friction coefficient estimation based on tyre sensors and lateral tyre deflection: modelling, simulations and experiments

Science.gov (United States)

Hong, Sanghyun; Erdogan, Gurkan; Hedrick, Karl; Borrelli, Francesco

2013-05-01

The estimation of the tyre-road friction coefficient is fundamental for vehicle control systems. Tyre sensors enable the friction coefficient estimation based on signals extracted directly from tyres. This paper presents a tyre-road friction coefficient estimation algorithm based on tyre lateral deflection obtained from lateral acceleration. The lateral acceleration is measured by wireless three-dimensional accelerometers embedded inside the tyres. The proposed algorithm first determines the contact patch using a radial acceleration profile. Then, the portion of the lateral acceleration profile inside the tyre-road contact patch is used to estimate the friction coefficient through a tyre brush model and a simple tyre model. The proposed strategy accounts for the orientation variation of the accelerometer body frame during tyre rotation. The effectiveness and performance of the algorithm are demonstrated through finite element model simulations and experimental tests with small tyre slip angles on different road surface conditions.

12. Simulation of a directed random-walk model: the effect of pseudo-random-number correlations

OpenAIRE

Shchur, L. N.; Heringa, J. R.; Blöte, H. W. J.

1996-01-01

We investigate the mechanism that leads to systematic deviations in cluster Monte Carlo simulations when correlated pseudo-random numbers are used. We present a simple model, which enables an analysis of the effects due to correlations in several types of pseudo-random-number sequences. This model provides qualitative understanding of the bias mechanism in a class of cluster Monte Carlo algorithms.

13. Exploring the Influence of Neighborhood Characteristics on Burglary Risks: A Bayesian Random Effects Modeling Approach

Directory of Open Access Journals (Sweden)

Hongqiang Liu

2016-06-01

Full Text Available A Bayesian random effects modeling approach was used to examine the influence of neighborhood characteristics on burglary risks in Jianghan District, Wuhan, China. This random effects model is essentially spatial; a spatially structured random effects term and an unstructured random effects term are added to the traditional non-spatial Poisson regression model. Based on social disorganization and routine activity theories, five covariates extracted from the available data at the neighborhood level were used in the modeling. Three regression models were fitted and compared by the deviance information criterion to identify which model best fit our data. A comparison of the results from the three models indicates that the Bayesian random effects model is superior to the non-spatial models in fitting the data and estimating regression coefficients. Our results also show that neighborhoods with above average bar density and department store density have higher burglary risks. Neighborhood-specific burglary risks and posterior probabilities of neighborhoods having a burglary risk greater than 1.0 were mapped, indicating the neighborhoods that should warrant more attention and be prioritized for crime intervention and reduction. Implications and limitations of the study are discussed in our concluding section.

14. Dynamics of the Random Field Ising Model

Science.gov (United States)

Xu, Jian

The Random Field Ising Model (RFIM) is a general tool to study disordered systems. Crackling noise is generated when disordered systems are driven by external forces, spanning a broad range of sizes. Systems with different microscopic structures such as disordered magnets and Earth's crust have been studied under the RFIM. In this thesis, we investigated the domain dynamics and critical behavior in two dipole-coupled Ising ferromagnets, Nd2Fe14B and LiHoxY1-xF4. With Tc well above room temperature, Nd2Fe14B has shown reversible disorder when exposed to an external transverse field and crosses between two universality classes in the strong and weak disorder limits. Besides tunable disorder, LiHoxY1-xF4 has shown quantum tunneling effects arising from quantum fluctuations, providing another mechanism for domain reversal. Universality within and beyond power-law dependence on avalanche size and energy was studied in LiHo0.65Y0.35F4.

15. Effective scattering coefficient of the cerebral spinal fluid in adult head models for diffuse optical imaging

Science.gov (United States)

Custo, Anna; Wells, William M., III; Barnett, Alex H.; Hillman, Elizabeth M. C.; Boas, David A.

2006-07-01

An efficient computation of the time-dependent forward solution for photon transport in a head model is a key capability for performing accurate inversion for functional diffuse optical imaging of the brain. The diffusion approximation to photon transport is much faster to simulate than the physically correct radiative transport equation (RTE); however, it is commonly assumed that scattering lengths must be much smaller than all system dimensions and all absorption lengths for the approximation to be accurate. Neither of these conditions is satisfied in the cerebrospinal fluid (CSF). Since line-of-sight distances in the CSF are small, of the order of a few millimeters, we explore the idea that the CSF scattering coefficient may be modeled by any value from zero up to the order of the typical inverse line-of-sight distance, or approximately 0.3 mm^-1, without significantly altering the calculated detector signals or the partial path lengths relevant for functional measurements. We demonstrate this in detail by using a Monte Carlo simulation of the RTE in a three-dimensional head model based on clinical magnetic resonance imaging data, with realistic optode geometries. Our findings lead us to expect that the diffusion approximation will be valid even in the presence of the CSF, with consequences for faster solution of the inverse problem.

16. Measurement and modeling the coefficient of restitution of char particles under simulated entrained flow gasifier conditions

Science.gov (United States)

Gibson, LaTosha M.

predict the coefficient of restitution (COR), which is the ratio of the rebound velocity to the impacting velocity (a necessary boundary condition for Discrete Phase Models). However, particle-wall impact models do not use the actual geometries of char particles or the motion of char particles under gasifier operating conditions. This work attempts to include the surface geometry and rotation of the particles. To meet its objectives, the general methodology involved (1) determining the likelihood of a particle becoming entrapped, (2) assessing the limitations of particle-wall impact models for the COR through cold-flow experiments in order to adapt them to the non-ideal conditions (surface and particle geometry) within a gasifier, (3) determining how to account for the influence of the carbon and ash composition in the determination of the sticking probability of size fractions and specific gravities within a PSD and within the scope of particle-wall impact models, and (4) using a methodology that quantifies the sticking probability (albeit a criterion or parameter) to predict the partitioning of a PSD into slag and flyash based on the proximate analysis. In this study, through sensitivity analysis, the scenario of a particle becoming entrapped within a slag layer was ruled out. Cold-flow educator experiments were performed to measure the COR. Results showed a variation in the coefficient of restitution as a function of rebound angle due to rotation of particles from the educator prior to impact. The particles were then simply dropped in "drop" experiments (without the educator) to determine the influence of sphericity on particle rotation and therefore on the coefficient of restitution. The results showed that in addition to surface irregularities, the particle shape and the orientation of the particle prior to impacting the target surface contributed to this variation of the coefficient of restitution as a function of rebound angle. Oblique

17. Transport coefficient computation based on input/output reduced order models

Science.gov (United States)

Hurst, Joshua L.

The guiding purpose of this thesis is to address the optimal material design problem when the material description is a molecular dynamics model. The end goal is to obtain a simplified and fast model that captures the property of interest such that it can be used in controller design and optimization. The approach is to examine model reduction analysis and methods to capture a specific property of interest, in this case viscosity, or more generally complex modulus or complex viscosity. This property and other transport coefficients are defined by an input/output relationship, and this motivates model reduction techniques that are tailored to preserve input/output behavior. In particular, Singular Value Decomposition (SVD) based methods are investigated. First, simulation methods are identified that are amenable to systems-theory analysis. For viscosity, these models are of the Gosling and Lees-Edwards type. They are high-order nonlinear Ordinary Differential Equations (ODEs) that employ Periodic Boundary Conditions (PBC). Properties can be calculated from the state trajectories of these ODEs. In this research, local linear approximations are rigorously derived, and special attention is given to potentials that are evaluated with PBC. For the Gosling description, LTI models are developed from state trajectories but are found to have limited success in capturing the system property, even though it is shown that full-order LTI models can be well approximated by reduced-order LTI models. For the Lees-Edwards SLLOD-type model, nonlinear ODEs are approximated by a Linear Time-Varying (LTV) model about some nominal trajectory, and both balanced truncation and Proper Orthogonal Decomposition (POD) are used to assess the plausibility of reduced-order models for this system description. An immediate application of the derived LTV models is Quasilinearization or Waveform Relaxation. Quasilinearization is a Newton's method applied to the ODE operator
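In practice, the POD step mentioned above reduces to an SVD of a matrix of state snapshots. A minimal sketch of that standard formulation (not the thesis's actual code; variable names are mine):

```python
import numpy as np

def pod_basis(snapshots, r):
    """Proper Orthogonal Decomposition: columns of `snapshots` are state
    vectors sampled along a trajectory; the leading r left singular
    vectors form the reduced-order basis."""
    u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return u[:, :r], s

def project(snapshots, basis):
    """Orthogonal projection of the snapshots onto the POD subspace."""
    return basis @ (basis.T @ snapshots)
```

If the trajectory effectively lives in a low-dimensional subspace, the trailing singular values are negligible and the projection reconstructs the states almost exactly, which is what makes a low-order model plausible.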

18. Modeling the Radar Return of Powerlines Using an Incremental Length Diffraction Coefficient Approach

Science.gov (United States)

Macdonald, Douglas

DIRSIG consistently underestimated the scattered return, especially away from specular observation angles. This underestimation was particularly pronounced for the dihedral targets which have a low acceptance angle in elevation, probably caused by the lack of a physical optics capability in DIRSIG. Powerlines were not apparent in the simulated data. For modeling powerlines outside of DIRSIG using a standalone approach, an Incremental Length Diffraction Coefficient (ILDC) method was used. Traditionally, this method is used to model the scattered radiation from the edge of a wedge, for example the edges on the wings of a stealth aircraft. The Physical Theory of Diffraction provides the 2D diffraction coefficient and the ILDC method performs an integral along the edge to extend this solution to three dimensions. This research takes the ILDC approach but instead of using the wedge diffraction coefficient, the exact far-field diffraction coefficient for scattering from a finite length cylinder is used. Wavenumber-diameter products are limited to less than or about 10. For typical powerline diameters, this translates to X-band frequencies and lower. The advantage of this method is it allows exact 2D solutions to be extended to powerline geometries where sag is present and it is shown to be more accurate than a pure physical optics approach for frequencies lower than millimeter wave. The Radar Cross Sections produced by this method were accurate to within the experimental uncertainty of measured RF anechoic chamber data for both X and C-band frequencies across an 80 degree arc for 5 different target types and diameters. For the X-band data, the mean error was 6.0% for data with 9.5% measurement uncertainty. For the C-band data, the mean error was 11.8% for data with 14.3% measurement uncertainty. The best results were obtained for X-band data in the HH polarization channel within a 20 degree arc about normal incidence. 
For this configuration, a mean error of 3.0% for data with

19. Rebound coefficient of collisionless gas in a rigid vessel. A model of reflection of field-reversed configuration

International Nuclear Information System (INIS)

1996-01-01

A system of collisionless neutral gas contained in a rigid vessel is considered as a simple model of the reflection of a field-reversed configuration (FRC) plasma by a magnetic mirror. The rebound coefficient of the system is calculated as a function of the incident speed of the vessel normalized by the thermal velocity of the gas before reflection. The coefficient is compared with experimental data from FIX (Osaka U.) and FRX-C/T (Los Alamos N.L.). Agreement is good for this simple model. Interestingly, the rebound coefficient takes its smallest value (∼0.365) as the incident speed tends to zero and approaches unity as it tends to infinity. This behavior is the reverse of that expected for a system with a collision-dominated fluid instead of a collisionless gas. By examining the rebound coefficient, therefore, it could be inferred whether the ion mean free path in each experiment was longer or shorter than the plasma length. (author)

20. Examining the physical meaning of the bank erosion coefficient used in meander migration modeling

Science.gov (United States)

Constantine, Candice R.; Dunne, Thomas; Hanson, Gregory J.

2009-05-01

Widely used models of meander evolution relate migration rate to vertically averaged near-bank velocity through the use of a coefficient of bank erosion (E). In applications to floodplain management problems, E is typically determined through calibration to historical planform changes, and thus its physical meaning remains unclear. This study attempts to clarify the extent to which E depends on measurable physical characteristics of the channel boundary materials using data from the Sacramento River, California, USA. Bend-averaged values of E were calculated from measured long-term migration rates and computed near-bank velocities. In the field, unvegetated bank material resistance to fluvial shear (k) was measured for four cohesive and noncohesive bank types using a jet-test device. At a small set of bends for which both E and k were obtained, we discovered that variability in k explains much of the variability in E. The form of this relationship suggests that when modeling long-term meander migration of large rivers, E depends largely on bank material properties. This finding opens up the possibility that E may be estimated directly from field data, enabling prediction of meander migration rates for systems where historical data are unavailable or controlling conditions have changed. Another implication is that vegetation plays a limited role in affecting long-term meander migration rates of large rivers like the Sacramento River. These hypotheses require further testing with data sets from other large rivers.
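The closure behind the bend-averaged calculation is the standard linear migration relation M = E * u_b, so E can be backed out directly from measured migration rates and computed near-bank velocities. A one-line sketch of that relation (variable names are mine, and units are whatever consistent set is chosen):

```python
def bank_erosion_coefficient(migration_rate, u_b):
    """Back out the dimensionless bank erosion coefficient E from a
    bend-averaged long-term migration rate M and the vertically averaged
    near-bank velocity excess u_b, per the linear closure M = E * u_b."""
    return migration_rate / u_b

def predicted_migration_rate(e_coeff, u_b):
    """Forward prediction with the same closure: M = E * u_b."""
    return e_coeff * u_b
```

The study's contribution is then to relate the E obtained this way to the independently measured erodibility k, so that E can be estimated without historical planform data.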

1. Homogenization of the coefficient of diffusion: influence of modelling and of the laplacian for fast power reactors and experimental mockups

International Nuclear Information System (INIS)

Gho, C.J.

1984-10-01

Neutron transport calculation of reactors is based on the definition of homogenized cell constants, among them the diffusion coefficient. The formalism used to evaluate the diffusion coefficient, as well as the cell model used, may introduce uncertainties in the results. The present study allowed us to estimate these uncertainties in the case of fast neutron power reactors and critical mockups. The validation of new simple methods and the definition of references are consequences of this work [fr]

2. Turbulent eddy diffusion models in exposure assessment - Determination of the eddy diffusion coefficient.

Science.gov (United States)

Shao, Yuan; Ramachandran, Sandhya; Arnold, Susan; Ramachandran, Gurumurthy

2017-03-01

The use of the turbulent eddy diffusion model and its variants in exposure assessment is limited due to the lack of knowledge regarding the isotropic eddy diffusion coefficient, DT. But some studies have suggested a possible relationship between DT and the air changes per hour (ACH) through a room. The main goal of this study was to accurately estimate DT for a range of ACH values by minimizing the difference between the concentrations measured and those predicted by the eddy diffusion model. We constructed an experimental chamber with a spatial concentration gradient away from the contaminant source, and conducted 27 three-hour experiments using toluene and acetone under different air flow conditions (0.43-2.89 ACH). An eddy diffusion model accounting for chamber boundary, general ventilation, and advection was developed. A mathematical expression for the slope based on the geometrical parameters of the ventilation system was also derived. There is a strong linear relationship between DT and ACH, providing a surrogate parameter for estimating DT in real-life settings. For the first time, a mathematical expression for the relationship between DT and ACH has been derived that also corrects for non-ideal conditions, and the calculated value of the slope between these two parameters is very close to the experimentally determined value. The values of DT obtained from the experiments are generally consistent with values reported in the literature. They are also independent of the averaging time of the measurements, allowing for comparison of values obtained from different measurement settings. These findings make the use of turbulent eddy diffusion models for exposure assessment in workplace/indoor environments more practical.
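The practical upshot of the linear relationship is that ACH can serve as a surrogate for the eddy diffusion coefficient. A sketch of that use; the (ACH, DT) pairs below are illustrative placeholders, not the paper's measurements, and the units are assumed:

```python
import numpy as np

# Hypothetical (ACH, DT) calibration pairs -- illustrative values only,
# NOT the paper's data (ACH in 1/hr, DT in m^2/min, both assumed).
ach = np.array([0.43, 0.90, 1.50, 2.10, 2.89])
dt = np.array([0.05, 0.11, 0.19, 0.26, 0.36])

# Least-squares fit of the linear relationship DT = slope * ACH + intercept;
# the paper instead derives the slope from ventilation-system geometry.
slope, intercept = np.polyfit(ach, dt, 1)

def predict_dt(ach_value):
    """Surrogate estimate of the eddy diffusion coefficient from ACH."""
    return slope * ach_value + intercept
```

Once the slope is known (fitted or derived from geometry), a routinely measured ventilation rate yields a DT estimate for the eddy diffusion model.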

3. Foetal dose conversion coefficients for ICRP-compliant pregnant models from idealised proton exposures

International Nuclear Information System (INIS)

Taranenko, V.; Xu, X. G.

2009-01-01

Protection of pregnant women and their foetus against external proton irradiations poses a unique challenge. Assessment of foetal dose due to external protons in galactic cosmic rays and as secondaries generated in aircraft walls is especially important during high-altitude flights. This paper reports a set of fluence-to-absorbed-dose conversion coefficients for the foetus and its brain for external monoenergetic proton beams of six standard configurations (antero-posterior, postero-anterior, right lateral, left lateral, rotational and isotropic). The pregnant female anatomical definitions at each of the three gestational periods (3, 6 and 9 months) are based on the newly developed RPI-P series of models, whose organ masses were matched within 1% of the International Commission on Radiological Protection reference values. Proton interactions and the transport of secondary particles were carefully simulated using the Monte Carlo N-Particle extended code (MCNPX) and phantoms consisting of several million voxels at 3 mm resolution. When choosing the physics models in MCNPX, it was found that the advanced Cascade-Exciton intranuclear cascade model showed a maximum foetal dose increase of 9% compared with the default model combination at intermediate energies below 5 GeV. Foetal dose results from this study are tabulated and compared with previously published data that were based on simplified anatomy. The comparison showed a strong dependence upon the source geometry, energy and gestation period: the dose differences are typically less than 20% for all sources except ISO, where systematically 40-80% higher doses were observed. Below 200 MeV, a larger discrepancy in dose was found due to the Bragg peak shift caused by different anatomy. The tabulated foetal doses represent the latest and most detailed study to date, offering a useful set of data to improve radiation protection dosimetry against external protons. (authors)

4. Semi-empirical model for heat transfer coefficient in liquid metal turbulent flow

International Nuclear Information System (INIS)

Fernandez y Fernandez, E.; Carajilescov, P.

1982-01-01

Heat transfer by forced convection in turbulent liquid metal flow in circular ducts is analyzed. An analogy between momentum and heat transfer at the wall surface is established in order to obtain an expression for the heat transfer coefficient as a function of the friction coefficient. (E.G.) [pt]

5. Inter-annual and spatial variability of Hamon potential evapotranspiration model coefficients

Science.gov (United States)

McCabe, Gregory J.; Hay, Lauren E.; Bock, Andy; Markstrom, Steven L.; Atkinson, R. Dwight

2015-01-01

Monthly calibrated values of the Hamon PET coefficient (C) are determined for 109,951 hydrologic response units (HRUs) across the conterminous United States (U.S.). The calibrated coefficient values are determined by matching calculated mean monthly Hamon PET to mean monthly free-water surface evaporation. For most locations and months the calibrated coefficients are larger than the standard value reported by Hamon. The largest changes in the coefficients were for the late winter/early spring and fall months, whereas the smallest changes were for the summer months. Comparisons of PET computed using the standard value of C and computed using calibrated values of C indicate that for most of the conterminous U.S. PET is underestimated using the standard Hamon PET coefficient, except for the southeastern U.S.
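The Hamon formulation being calibrated has a simple closed form. This sketch uses one common variant of the Hamon equation, with the coefficient exposed as a parameter; the default of 1.2 is a commonly cited standard value used here only as a placeholder for the monthly calibrated values of C:

```python
import math

def hamon_pet(temp_c, daylength_hr, coeff=1.2):
    """Daily Hamon PET (mm/day), one common formulation (an assumption;
    the paper's exact variant may differ in constants).

    `coeff` plays the role of the Hamon coefficient C; the study replaces
    the standard value with monthly calibrated values per HRU."""
    # saturation vapour pressure (mb) at the mean daily temperature
    esat = 6.108 * math.exp(17.27 * temp_c / (temp_c + 237.3))
    # saturated water-vapour density (g/m^3)
    rhosat = 216.7 * esat / (temp_c + 273.3)
    d = daylength_hr / 12.0  # daylength in units of 12 hours
    return 0.1651 * d * rhosat * coeff
```

Because C enters multiplicatively, calibrating it simply rescales PET month by month, which is why the calibrated coefficients being larger than the standard value implies the standard Hamon PET is an underestimate.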

6. Bottom friction models for shallow water equations: Manning’s roughness coefficient and small-scale bottom heterogeneity

Science.gov (United States)

Dyakonova, Tatyana; Khoperskov, Alexander

2018-03-01

A correct description of surface water dynamics in a shallow water model requires accounting for friction. To simulate channel flow in the Chezy model, a constant Manning roughness coefficient is frequently used. The Manning coefficient n_M is an integral parameter which accounts for a large number of physical factors determining flow braking. We used computational simulations in a shallow water model to determine the relationship between the Manning coefficient and the parameters of small-scale perturbations of the bottom in a long channel. Comparing the transverse water velocity profiles in the channel obtained in models with a perturbed bottom without bottom friction and with bottom friction on a smooth bottom, we constructed the dependence of n_M on the amplitude and spatial scale of the perturbation of the bottom relief.
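The role of the Manning coefficient in the momentum balance can be made concrete with the standard Manning-form bottom friction term of the shallow water equations. A minimal sketch; the default roughness value is a generic natural-channel figure, not taken from the paper:

```python
def manning_friction_term(u, h, n_m=0.03, g=9.81):
    """Bottom-friction deceleration (m/s^2) in the shallow water momentum
    equation, Manning form: -g * n_M^2 * u * |u| / h^(4/3).

    u: depth-averaged velocity (m/s); h: water depth (m);
    n_m = 0.03 is a typical natural-channel roughness (assumption)."""
    return -g * n_m**2 * u * abs(u) / h ** (4.0 / 3.0)
```

The term always opposes the flow and weakens with depth, which is why a single integral n_M can stand in for the braking effect of many small-scale bottom perturbations.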

7. THE DETERMINATION OF BETA COEFFICIENTS OF PUBLICLY-HELD COMPANIES BY A REGRESSION MODEL AND AN APPLICATION ON PRIVATE FIRMS

Directory of Open Access Journals (Sweden)

METİN KAMİL ERCAN

2013-06-01

Full Text Available It is possible to determine the value of private companies by means of suggestions and assumptions derived from their financial statements. However, a serious problem arises in the determination of the equity costs of these private companies using the Capital Asset Pricing Model (CAPM), as beta coefficients are unknown or unavailable. In this study, firstly, a regression model that represents the relationship between the beta coefficients and the financial statement variables of publicly-held companies will be developed. Then, this model will be tested and applied to private companies.
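For publicly-held companies, the beta that feeds the CAPM is itself estimated by a regression of stock returns on market returns. A minimal sketch of that standard calculation (the risk-free rate and market risk premium defaults are illustrative assumptions, not values from the study):

```python
import numpy as np

def capm_cost_of_equity(stock_ret, market_ret, rf=0.02, mrp=0.05):
    """OLS beta of stock returns on market returns, then the CAPM cost
    of equity r_e = r_f + beta * MRP. rf and mrp are placeholder values."""
    x = np.asarray(market_ret, dtype=float)
    y = np.asarray(stock_ret, dtype=float)
    beta = np.cov(y, x, ddof=1)[0, 1] / np.var(x, ddof=1)
    return beta, rf + beta * mrp
```

The study's point is that this route is closed for private firms, so beta must instead be predicted from financial statement variables via the fitted cross-sectional regression.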

8. A random regret minimization model of travel choice

NARCIS (Netherlands)

Chorus, C.G.; Arentze, T.A.; Timmermans, H.J.P.

2008-01-01

This paper presents an alternative to Random Utility-Maximization models of travel choice. Our Random Regret-Minimization model is rooted in Regret Theory and provides several useful features for travel demand analysis. Firstly, it allows for the possibility that choices between travel

9. A random energy model for size dependence : recurrence vs. transience

NARCIS (Netherlands)

Külske, Christof

1998-01-01

We investigate the size dependence of disordered spin models having an infinite number of Gibbs measures in the framework of a simplified 'random energy model for size dependence'. We introduce two versions (involving either independent random walks or branching processes), that can be seen as

10. Compensatory and non-compensatory multidimensional randomized item response models

NARCIS (Netherlands)

Fox, J.P.; Entink, R.K.; Avetisyan, M.

2014-01-01

Randomized response (RR) models are often used for analysing univariate randomized response data and measuring population prevalence of sensitive behaviours. There is much empirical support for the belief that RR methods improve the cooperation of the respondents. Recently, RR models have been

11. Testing different decoupling coefficients with measurements and models of contrasting canopies and soil water conditions

Directory of Open Access Journals (Sweden)

V. Goldberg

2008-07-01

Full Text Available Four different approaches for the calculation of the well-established decoupling coefficient Ω are compared using measurements at three experimental sites (Tharandt – spruce forest; Grillenburg and Melpitz – grass) and simulations from the soil-vegetation boundary layer model HIRVAC. These investigations aimed to quantify differences between the calculation routines regarding their ability to describe the vegetation-atmosphere coupling of grass and forest with and without water stress.

The HIRVAC model is a vertically highly resolved atmospheric boundary layer model which includes vegetation. It is coupled with a single-leaf gas exchange model to simulate physiologically based reactions of different vegetation types to changing atmospheric conditions. A multilayer soil water module and a functional parameterisation form the basis for linking the stomatal reaction of the gas exchange model to changes in soil water.

The omega factor was calculated for the basic formulation according to McNaughton and Jarvis (1983) and three modifications. To compare measurements and simulations for the above-mentioned spruce and grass sites, the summer period of 2007 as well as a dry period in June 2000 were used. Additionally, a developing water stress situation was simulated for three forest canopies (spruce, pine and beech) and for a grass site. The results showed large differences between the omega approaches, which depend on the vegetation type and the soil moisture.

Among the omega values calculated by the different approaches, the ranking was always the same, not only for the measurements but also for the adapted simulations. The lowest values came from the first modification, which includes doubling factors and summands in all parts of the omega equation relative to the original approach. The highest values were calculated with the second modification, which omits one doubling factor in the denominator of the omega equation.
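The basic McNaughton and Jarvis (1983) formulation against which the modifications are compared has a well-known closed form. A minimal sketch of that textbook expression (not HIRVAC code; the modified variants differ by the factors discussed above):

```python
def omega_decoupling(delta_over_gamma, g_a, g_c):
    """Basic McNaughton & Jarvis (1983) decoupling coefficient:

        Omega = (eps + 1) / (eps + 1 + g_a / g_c),  eps = Delta / gamma,

    where Delta/gamma is the slope of the saturation vapour pressure
    curve over the psychrometric constant, g_a the aerodynamic and g_c
    the canopy conductance (same units, e.g. m/s).
    Omega -> 0: well coupled (aerodynamically rough forest);
    Omega -> 1: decoupled (smooth short vegetation)."""
    eps = delta_over_gamma
    return (eps + 1.0) / (eps + 1.0 + g_a / g_c)
```

With typical conductances, a rough spruce canopy (large g_a) yields a much smaller Ω than short grass, matching the forest-versus-grass contrast the sites were chosen to test.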

12. Testing different decoupling coefficients with measurements and models of contrasting canopies and soil water conditions

Directory of Open Access Journals (Sweden)

V. Goldberg

2008-07-01

Full Text Available Four different approaches for the calculation of the well-established decoupling coefficient Ω are compared using measurements at three experimental sites (Tharandt – spruce forest; Grillenburg and Melpitz – grass) and simulations from the soil-vegetation boundary layer model HIRVAC. These investigations aimed to quantify differences between the calculation routines regarding their ability to describe the vegetation-atmosphere coupling of grass and forest with and without water stress. HIRVAC is a vertically highly resolved atmospheric boundary layer model which includes vegetation. It is coupled with a single-leaf gas exchange model to simulate physiologically based reactions of different vegetation types to changing atmospheric conditions. A multilayer soil water module and a functional parameterisation link the stomatal reaction of the gas exchange model to changes in soil water. The omega factor was calculated for the basic formulation according to McNaughton and Jarvis (1983) and three modifications. To compare measurements and simulations for the above-mentioned spruce and grass sites, the summer period of 2007 as well as a dry period in June 2000 were used. Additionally, a developing water stress situation was simulated for three forest canopies (spruce, pine and beech) and for a grass site. The results showed large differences between the omega approaches, which depend on the vegetation type and the soil moisture. The ranking of the omega values calculated by the different approaches was always the same, not only for the measurements but also for the adapted simulations. The lowest values came from the first modification, which includes doubling factors and summands in all parts of the omega equation relative to the original approach, and the highest values were calculated with the second modification, which omits one doubling factor in the denominator of the omega equation. For example

13. Experiments and numerical simulations of flow field and heat transfer coefficients inside an autoclave model

Science.gov (United States)

Ghamlouch, T.; Roux, S.; Bailleul, J.-L.; Lefèvre, N.; Sobotka, V.

2017-10-01

Today's first priority in the aerospace industry is to improve the quality of composite material parts while reducing manufacturing time, in order to increase their quality/cost ratio. A fabrication method that can meet these specifications, especially for large parts, is the autoclave curing process. Autoclave molding ensures thermal control of the composite parts during the whole curing cycle. However, the geometry of the tools as well as their positioning in the autoclave induce non-uniform and complex flows around the composite parts. This heterogeneity implies non-uniform heat transfer, which can directly impact part quality. One of the main challenges is therefore to describe the flow field inside an autoclave as well as the convective heat transfer from the heated pressurized gas to the composite part and the mold. For this purpose, and given the technical issues associated with instrumentation and measurements in actual autoclaves, an autoclave model was designed and manufactured based on similarity laws. This tool allows the measurement of the flow field around representative real industrial molds using the PIV technique and the characterization of the heat transfer thanks to thermal instrumentation. The experimental results are then compared with those derived from numerical simulations using a commercial RANS CFD code. This study aims at developing a semi-empirical approach for the prediction of the heat transfer coefficient around the parts, and therefore for predicting their thermal history during the process, with a view to optimization.

14. A Complex Network Model for Analyzing Railway Accidents Based on the Maximal Information Coefficient

International Nuclear Information System (INIS)

Shao Fu-Bo; Li Ke-Ping

2016-01-01

It is an important issue to identify important influencing factors in railway accident analysis. In this paper, employing the maximal information coefficient (MIC), a good measure of dependence for two-variable relationships which can capture a wide range of associations, a complex network model for railway accident analysis is designed in which nodes denote factors of railway accidents and edges are generated between two factors whose MIC value is larger than or equal to the dependence criterion. The variation of the network structure is studied: as the dependence criterion increases, the network becomes an approximately scale-free network. Moreover, employing the proposed network, important influencing factors are identified, and we find that the annual track density-gross tonnage factor is an important factor, being a cut vertex when the dependence criterion equals 0.3. From the network it is also found that railway development is unbalanced across different states, which is consistent with the facts. (paper)
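The graph construction described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the MIC matrix below is a made-up toy example (estimating MIC itself requires a separate routine), and cut vertices are found by brute-force node removal rather than a linear-time algorithm.

```python
from collections import defaultdict

def build_network(mic, threshold):
    """Connect factor i and j when their MIC value meets the threshold."""
    adj = defaultdict(set)
    n = len(mic)
    for i in range(n):
        for j in range(i + 1, n):
            if mic[i][j] >= threshold:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def reachable(adj, start, skip):
    """Nodes reachable from `start` by depth-first search, ignoring node `skip`."""
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v != skip and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def cut_vertices(adj):
    """Brute force: a node is a cut vertex if removing it disconnects the rest."""
    nodes = list(adj)
    cuts = []
    for x in nodes:
        rest = [u for u in nodes if u != x]
        if len(rest) > 1 and len(reachable(adj, rest[0], x) & set(rest)) < len(rest):
            cuts.append(x)
    return cuts

# Toy symmetric MIC matrix for 4 hypothetical factors; factor 1 bridges 0 and {2, 3}.
mic = [[1.0, 0.6, 0.1, 0.1],
       [0.6, 1.0, 0.5, 0.4],
       [0.1, 0.5, 1.0, 0.7],
       [0.1, 0.4, 0.7, 1.0]]
net = build_network(mic, threshold=0.3)
print(cut_vertices(net))  # → [1]
```

Raising the threshold prunes weak edges, which is how the paper studies the transition toward a scale-free structure.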

15. Computation of infinite dilute activity coefficients of binary liquid alloys using complex formation model

Energy Technology Data Exchange (ETDEWEB)

Awe, O.E., E-mail: draweoe2004@yahoo.com; Oshakuade, O.M.

2016-04-15

A new method for calculating Infinite Dilute Activity Coefficients (γ{sup ∞}s) of binary liquid alloys has been developed. The method computes γ{sup ∞}s from experimental thermodynamic integral free energy of mixing data using the complex formation model. The new method was first used to theoretically compute the γ{sup ∞}s of 10 binary alloys whose γ{sup ∞}s have been determined by experiment. The significant agreement between the computed values and the available experimental values served as impetus for applying the new method to 22 selected binary liquid alloys whose γ{sup ∞}s are either nonexistent or incomplete. In order to verify the reliability of the computed γ{sup ∞}s of the 22 selected alloys, we recomputed the γ{sup ∞}s using three other existing methods of computing or estimating γ{sup ∞}s and then used the γ{sup ∞}s obtained from each of the four methods (the new method inclusive) to compute the thermodynamic activities of the components of each binary system. The computed activities were compared with the available experimental activities. The results from the proposed method show, for most of the selected alloys, better agreement with the experimental activity data. Thus, the new method is an alternative and, in certain instances, more reliable approach for computing the γ{sup ∞}s of binary liquid alloys.
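The link between an excess Gibbs energy model and γ∞ can be illustrated with the simplest possible closure. This is not the paper's complex formation model: as an assumption for illustration, take the one-parameter Margules form G^E/RT = A·x1·x2, which gives ln γ1 = A·x2² and hence γ1∞ = e^A.

```python
import math

def gamma1(A, x1):
    """Activity coefficient of component 1 for G^E/RT = A*x1*x2 (one-parameter Margules)."""
    return math.exp(A * (1.0 - x1) ** 2)

def gamma1_inf(A):
    """Infinite-dilution limit x1 -> 0 gives ln(gamma1_inf) = A."""
    return math.exp(A)

A = 1.2  # hypothetical interaction parameter fitted to free energy of mixing data
print(gamma1_inf(A))      # exp(1.2) ≈ 3.32
print(gamma1(A, x1=1.0))  # → 1.0 (pure-component limit)
```

Any richer G^E model fitted to integral free energy of mixing data yields γ∞ by the same dilute-limit evaluation.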

16. Random matrix approach to plasmon resonances in the random impedance network model of disordered nanocomposites

Science.gov (United States)

Olekhno, N. A.; Beltukov, Y. M.

2018-05-01

Random impedance networks are widely used as a model to describe plasmon resonances in disordered metal-dielectric and other two-component nanocomposites. In the present work, the spectral properties of resonances in random networks are studied within the framework of the random matrix theory. We have shown that the appropriate ensemble of random matrices for the considered problem is the Jacobi ensemble (the MANOVA ensemble). The obtained analytical expressions for the density of states in such resonant networks show a good agreement with the results of numerical simulations in a wide range of metal filling fractions 0
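The boundedness of the MANOVA (Jacobi) spectrum claimed above can be checked numerically even without a linear algebra library. The sketch below (an illustration under assumed parameters, not the paper's computation) samples the smallest nontrivial case: for 2×2 Gram matrices A = G₁ᵀG₁ and B = G₂ᵀG₂ of Gaussian matrices, the eigenvalues of (A+B)⁻¹A always lie in [0, 1].

```python
import random

def gram(rows, rng):
    """A = G^T G for a rows x 2 Gaussian matrix G (2x2 Wishart-type matrix)."""
    g = [[rng.gauss(0, 1) for _ in range(2)] for _ in range(rows)]
    return [[sum(r[i] * r[j] for r in g) for j in range(2)] for i in range(2)]

def manova_eigs(a, b):
    """Real eigenvalues of (A+B)^{-1} A for symmetric 2x2 A, B via the quadratic formula."""
    s = [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]
    det_s = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    inv = [[s[1][1] / det_s, -s[0][1] / det_s],
           [-s[1][0] / det_s, s[0][0] / det_s]]
    m = [[sum(inv[i][k] * a[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
    tr = m[0][0] + m[1][1]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    disc = max(tr * tr - 4.0 * det, 0.0)  # spectrum is real; clamp rounding noise
    r = disc ** 0.5
    return [(tr - r) / 2.0, (tr + r) / 2.0]

rng = random.Random(7)
eigs = [x for _ in range(200) for x in manova_eigs(gram(5, rng), gram(5, rng))]
print(min(eigs), max(eigs))  # both lie within [0, 1]
```

The eigenvalue density over many samples is what the paper compares against the analytical Jacobi-ensemble result.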

17. Modeling of the substrate and product transfer coefficients for ethanol fermentation

International Nuclear Information System (INIS)

Zerajic, S.; Grbavcic, Z.; Savkovic-Stevanovic, J.

2008-01-01

The transfer phenomena of the substrate and product for ethanol fermentation with immobilized biocatalyst were investigated. Fermentation was carried out with a biocatalyst consisting of Ca-alginate gel in the form of two-layer spherical beads in anaerobic conditions. The determination of kinetic parameters was achieved by fitting bioreaction progress curves to the experimental data. The calculation of the diffusion coefficients was performed by numerical methods for experimental conditions. Finally, the glucose and ethanol transfer coefficients are defined and determined, using the effective diffusion coefficients. (Abstract Copyright [2008], Wiley Periodicals, Inc.)

18. Comparison of different models for the determination of the absorption and scattering coefficients of thermal barrier coatings

International Nuclear Information System (INIS)

Wang, Li; Eldridge, Jeffrey I.; Guo, S.M.

2014-01-01

The thermal radiative properties of thermal barrier coatings (TBCs) are becoming more important as the inlet temperatures of advanced gas-turbine engines are continuously being pushed higher in order to improve efficiency. To determine the absorption and scattering coefficients of TBCs, four-flux, two-flux and Kubelka–Munk models were introduced and used to characterize the thermal radiative properties of plasma-sprayed yttria-stabilized zirconia (YSZ) coatings. The results show that the absorption coefficient of YSZ is extremely low for wavelengths 200 μm suggests that when the coating thickness is larger than around twice the average scattering distance, the collimated flux can be simply treated as a diffuse flux inside the coating, and thus the two-flux model can be used to determine the absorption and scattering coefficients as a simplification of the four-flux model
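In the two-flux (Kubelka–Munk) limit mentioned above, an optically thick layer has a closed-form relation between diffuse reflectance and the ratio K/S. The sketch below uses the textbook formulas, not the paper's four-flux implementation:

```python
import math

def r_inf(k_over_s_val):
    """Diffuse reflectance of an optically thick layer (Kubelka-Munk)."""
    q = k_over_s_val
    return 1.0 + q - math.sqrt(q * q + 2.0 * q)

def k_over_s(r):
    """Remission function F(R) = (1-R)^2 / (2R) recovers K/S from R_inf."""
    return (1.0 - r) ** 2 / (2.0 * r)

r = r_inf(0.05)               # weakly absorbing, strongly scattering coating
print(round(r, 4))            # → 0.7298
print(round(k_over_s(r), 4))  # → 0.05 (round trip recovers K/S)
```

The four-flux model adds the collimated components; the abstract's point is that for thick coatings the collimated flux is quickly randomized and this two-flux relation suffices.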

19. An empirically-based model for the lift coefficients of twisted airfoils with leading-edge tubercles

Science.gov (United States)

Ni, Zao; Su, Tsung-chow; Dhanak, Manhar

2018-04-01

Experimental data for untwisted airfoils are utilized to propose a model for predicting the lift coefficients of twisted airfoils with leading-edge tubercles. The effectiveness of the empirical model is verified through comparison with results of a corresponding computational fluid-dynamic (CFD) study. The CFD study is carried out for both twisted and untwisted airfoils with tubercles, the latter shown to compare well with available experimental data. Lift coefficients of twisted airfoils predicted from the proposed empirically-based model match well with the corresponding coefficients determined using the verified CFD study. Flow details obtained from the latter provide better insight into the underlying mechanism and behavior at stall of twisted airfoils with leading edge tubercles.

20. Endogeneity, Time-Varying Coefficients, and Incorrect vs. Correct Ways of Specifying the Error Terms of Econometric Models

Directory of Open Access Journals (Sweden)

P.A.V.B. Swamy

2017-02-01

Full Text Available Using the net effect of all relevant regressors omitted from a model to form its error term is incorrect because the coefficients and error term of such a model are non-unique. Non-unique coefficients cannot possess consistent estimators. Uniqueness can be achieved if, instead, one uses certain “sufficient sets” of (relevant) regressors omitted from each model to represent the error term. In this case, the unique coefficient on any non-constant regressor takes the form of the sum of a bias-free component and omitted-regressor biases. Measurement-error bias can also be incorporated into this sum. We show that if our procedures are followed, accurate estimation of bias-free components is possible.
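The omitted-regressor bias that motivates the paper can be reproduced in a short simulation (a standard textbook exercise, not the author's estimator): regressing y on x while omitting a correlated regressor z shifts the x coefficient by β_z · cov(x, z) / var(x).

```python
import random

rng = random.Random(42)
n = 20000
z = [rng.gauss(0, 1) for _ in range(n)]
x = [0.8 * zi + rng.gauss(0, 1) for zi in z]   # x is correlated with z
y = [2.0 * xi + 1.5 * zi + rng.gauss(0, 0.1)
     for xi, zi in zip(x, z)]                  # true model uses both regressors

def cov(u, v):
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)

beta_short = cov(x, y) / cov(x, x)   # OLS slope of y on x alone (z omitted)
bias = 1.5 * cov(x, z) / cov(x, x)   # omitted-regressor bias formula
print(beta_short, 2.0 + bias)        # the two agree: the estimate is biased away from 2
```

The short-regression slope converges not to the structural coefficient 2.0 but to 2.0 plus the bias term, which is exactly the non-uniqueness problem the abstract describes.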

1. Some random models in traffic science

Energy Technology Data Exchange (ETDEWEB)

Hjorth, U.

1996-06-01

We give an overview of stochastic models for the following traffic phenomena. Models for traffic flow including gaps and capacities for lanes, crossings and roundabouts. Models for wanted and achieved speed distributions. Mode selection models including dispersed equilibrium models and traffic accident models. Also some statistical questions are discussed. 60 refs, 1 tab

2. PREDICTING SOIL SORPTION COEFFICIENTS OF ORGANIC CHEMICALS USING A NEURAL NETWORK MODEL

Science.gov (United States)

The soil/sediment adsorption partition coefficient normalized to organic carbon (Koc) is extensively used to assess the fate of organic chemicals in hazardous waste sites. Several attempts have been made to estimate the value of Koc from chemical structure ...

3. A Model for Random Student Drug Testing

Science.gov (United States)

Nelson, Judith A.; Rose, Nancy L.; Lutz, Danielle

2011-01-01

The purpose of this case study was to examine random student drug testing in one school district relevant to: (a) the perceptions of students participating in competitive extracurricular activities regarding drug use and abuse; (b) the attitudes and perceptions of parents, school staff, and community members regarding student drug involvement; (c)…

4. Effects of large rate coefficients for ion-polar neutral reactions on chemical models of dense interstellar clouds

International Nuclear Information System (INIS)

Herbst, E.; Leung, C.M.; Rensselaer Polytechnic Institute, Troy, NY)

1986-01-01

Pseudo-time-dependent models of the gas phase chemistry of dense interstellar clouds have been run with large rate coefficients for reactions between ions and polar neutral species, as advocated by Adams, Smith, and Clary. The higher rate coefficients normally lead to a reduction in both the peak and steady state abundances of polar neutrals, which can be as large as an order of magnitude but is more often smaller. Other differences between the results of these models and previous results are also discussed. 38 references

5. Contribution to the neutronic theory of random stacks (diffusion coefficient and first-flight collision probabilities) with a general theorem on collision probabilities

International Nuclear Information System (INIS)

Dixmier, Marc.

1980-10-01

A general expression of the diffusion coefficient (d.c.) of neutrons was given, with stress put on symmetries. A system of first-flight collision probabilities was built for the case of a random stack of any number of types of one- and two-zoned spherical pebbles, with an albedo at the boundaries of the elements or consideration of the interstitial medium; to that end, the bases of collision probability theory were reviewed, and a wide generalisation of the reciprocity theorem for those probabilities was demonstrated. The migration area of neutrons was expressed for any random stack of convex, 'simple' and 'regular-contact' elements, taking into account the correlations between free paths; the average cosine of re-emission of neutrons by an element was expressed for the case of a homogeneous spherical pebble and the transport approximation; the superiority of the result thus found over Behrens' theory, for the type of media under consideration, was established. The 'fine structure current term' of the d.c. was also expressed, and it was shown that its 'polarisation term' is negligible. Numerical applications showed that the global heterogeneity effect on the d.c. of pebble-bed reactors is comparable with that for graphite-moderated, carbon-gas-cooled, natural uranium reactors. The code CARACOLE, which integrates all the results obtained here, was introduced [fr

6. Solvation free energies and partition coefficients with the coarse-grained and hybrid all-atom/coarse-grained MARTINI models.

Science.gov (United States)

Genheden, Samuel

2017-10-01

We present the estimation of solvation free energies of small solutes in water, n-octanol and hexane using molecular dynamics simulations with two MARTINI models at different resolutions, viz. the coarse-grained (CG) and the hybrid all-atom/coarse-grained (AA/CG) models. From these estimates, we also calculate the water/hexane and water/octanol partition coefficients. More than 150 small, organic molecules were selected from the Minnesota solvation database and parameterized in a semi-automatic fashion. Using either the CG or hybrid AA/CG models, we find considerable deviations between the estimated and experimental solvation free energies in all solvents with mean absolute deviations larger than 10 kJ/mol, although the correlation coefficient is between 0.55 and 0.75 and significant. There is also no difference between the results when using the non-polarizable and polarizable water model, although we identify some improvements when using the polarizable model with the AA/CG solutes. In contrast to the estimated solvation energies, the estimated partition coefficients are generally excellent with both the CG and hybrid AA/CG models, giving mean absolute deviations between 0.67 and 0.90 log units and correlation coefficients larger than 0.85. We analyze the error distribution further and suggest avenues for improvements.
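The summary statistics quoted above (mean absolute deviation in kJ/mol, correlation coefficient) are easy to reproduce for any paired set of estimates. The energies below are invented placeholders, not values from the paper:

```python
def mad(est, exp):
    """Mean absolute deviation between estimated and experimental values."""
    return sum(abs(a - b) for a, b in zip(est, exp)) / len(est)

def pearson(u, v):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    su = sum((a - mu) ** 2 for a in u) ** 0.5
    sv = sum((b - mv) ** 2 for b in v) ** 0.5
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / (su * sv)

# Hypothetical solvation free energies in kJ/mol (not from the paper).
calc = [-62.0, -40.5, -18.0, -71.3, -25.4]
expt = [-50.1, -33.2, -10.8, -60.0, -16.9]
print(round(mad(calc, expt), 2))      # → 9.24 (large systematic offset)
print(round(pearson(calc, expt), 3))  # high correlation despite the offset
```

This illustrates the abstract's pattern: a large systematic deviation can coexist with a significant correlation, which is why the partition coefficients (differences of free energies) come out much better than the free energies themselves.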


8. Predicting cyclohexane/water distribution coefficients for the SAMPL5 challenge using MOSCED and the SMD solvation model

Science.gov (United States)

Diaz-Rodriguez, Sebastian; Bozada, Samantha M.; Phifer, Jeremy R.; Paluch, Andrew S.

2016-11-01

We present blind predictions using the solubility parameter based method MOSCED submitted for the SAMPL5 challenge on calculating cyclohexane/water distribution coefficients at 298 K. Reference data to parameterize MOSCED was generated with knowledge only of chemical structure by performing solvation free energy calculations using electronic structure calculations in the SMD continuum solvent. To maintain simplicity and use only a single method, we approximate the distribution coefficient with the partition coefficient of the neutral species. Over the final SAMPL5 set of 53 compounds, we achieved an average unsigned error of 2.2 ± 0.2 log units (ranking 15 out of 62 entries), the correlation coefficient (R) was 0.6 ± 0.1 (ranking 35), and 72 ± 6% of the predictions had the correct sign (ranking 30). While used here to predict cyclohexane/water distribution coefficients at 298 K, MOSCED is broadly applicable, allowing one to predict temperature-dependent infinite dilution activity coefficients in any solvent for which parameters exist, and provides a means by which an excess Gibbs free energy model may be parameterized to predict composition-dependent phase equilibrium.
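The neutral-species approximation above turns two solvation free energies into a partition coefficient via log₁₀P = (ΔG_water − ΔG_org) / (RT ln 10). A small sketch with made-up energies (not SAMPL5 data):

```python
import math

R = 8.314462618e-3  # gas constant, kJ/(mol K)
T = 298.15          # K

def log_p(dg_water, dg_org):
    """log10 partition coefficient (organic/water) from solvation free energies in kJ/mol."""
    return (dg_water - dg_org) / (R * T * math.log(10))

# Hypothetical continuum-solvent solvation free energies for a neutral solute (kJ/mol).
print(round(log_p(dg_water=-20.0, dg_org=-25.0), 2))  # → 0.88
```

A solute that is 5 kJ/mol better solvated in the organic phase partitions into it by just under one log unit; ionizable species would additionally require pKa corrections, which this approximation deliberately omits.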

9. Quantifying geographic variation in the climatic drivers of midcontinent wetlands with a spatially varying coefficient model.

Science.gov (United States)

Roy, Christian

2015-01-01

The wetlands in the Prairie Pothole Region and in the Great Plains are notorious for their sensitivity to weather variability. These wetlands have been the focus of considerable attention because of their ecological importance and because of the expected impact of climate change. Few models in the literature, however, take into account spatial variation in the importance of wetland drivers. This is surprising given the importance that spatial heterogeneity in geomorphology and climatic conditions has in the region. In this paper, I use spatially varying coefficients to assess the variation in the ecological drivers of the number of ponds observed over a 50-year period (1961-2012). I included the number of ponds observed the year before on a log scale, the log of total precipitation, and the mean maximum temperature during the four previous seasons as explanatory variables. I also included a temporal component to capture change in the number of ponds due to anthropogenic disturbance. Overall, fall and spring precipitation were the most important drivers of pond abundance in the west, whereas winter and summer precipitation were the most important drivers in the east. The ponds in the east of the survey area were also more dependent on pond abundance during the previous year than those in the west. Spring temperature during the previous season influenced pond abundance, while the temperature during the other seasons had a limited effect. The ponds in the southwestern part of the survey area have been increasing independently of climatic conditions, whereas the ponds in the northeast have been steadily declining. My results underline the importance of accounting for spatial heterogeneity in environmental drivers when working at large spatial scales. In light of my results, I also argue that assessing the impacts of climate change on wetland abundance in the spring will be difficult without more accurate climatic forecasting.

10. Quantifying geographic variation in the climatic drivers of midcontinent wetlands with a spatially varying coefficient model.

Directory of Open Access Journals (Sweden)

Christian Roy

Full Text Available The wetlands in the Prairie Pothole Region and in the Great Plains are notorious for their sensitivity to weather variability. These wetlands have been the focus of considerable attention because of their ecological importance and because of the expected impact of climate change. Few models in the literature, however, take into account spatial variation in the importance of wetland drivers. This is surprising given the importance that spatial heterogeneity in geomorphology and climatic conditions has in the region. In this paper, I use spatially varying coefficients to assess the variation in the ecological drivers of the number of ponds observed over a 50-year period (1961-2012). I included the number of ponds observed the year before on a log scale, the log of total precipitation, and the mean maximum temperature during the four previous seasons as explanatory variables. I also included a temporal component to capture change in the number of ponds due to anthropogenic disturbance. Overall, fall and spring precipitation were the most important drivers of pond abundance in the west, whereas winter and summer precipitation were the most important drivers in the east. The ponds in the east of the survey area were also more dependent on pond abundance during the previous year than those in the west. Spring temperature during the previous season influenced pond abundance, while the temperature during the other seasons had a limited effect. The ponds in the southwestern part of the survey area have been increasing independently of climatic conditions, whereas the ponds in the northeast have been steadily declining. My results underline the importance of accounting for spatial heterogeneity in environmental drivers when working at large spatial scales. In light of my results, I also argue that assessing the impacts of climate change on wetland abundance in the spring will be difficult without more accurate climatic forecasting.

11. A novel drag force coefficient model for gas–water two-phase flows under different flow patterns

Energy Technology Data Exchange (ETDEWEB)

Shang, Zhi, E-mail: shangzhi@tsinghua.org.cn

2015-07-15

Highlights: • A novel drag force coefficient model was established. • The model covers different flow patterns for CFD. • Numerical simulations were performed over a wide range of flow regimes. • Validations were carried out through comparisons with experiments. Abstract: A novel drag force coefficient model has been developed to study gas–water two-phase flows. In this drag force coefficient model, the terminal velocities are calculated through a revised drift flux model. The revised drift flux model differs from the traditional drift flux model in that the natural curved movement of the bubble is accounted for by considering the centrifugal force. Owing to these revisions, the revised drift flux model was extended to 3D and is therefore suitable for CFD applications. In the revised drift flux model, the different flow patterns of gas–water two-phase flows can be taken into account. The model innovatively allows the drag force to cover the different flow patterns of gas–water two-phase flows: bubbly flow, slug flow, churn flow, annular flow and mist flow. The model was validated through comparisons of numerical simulations with experiments in vertical upward and downward pipe flows.
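At terminal velocity the drag force balances net buoyancy, so a terminal velocity from any drift-flux correlation maps directly to a drag coefficient. The sketch below is a generic single-bubble force balance, C_D = 4 g d (ρ_l − ρ_g) / (3 ρ_l U_t²), not the paper's revised drift-flux closure:

```python
def drag_coefficient(d, u_t, rho_l=998.0, rho_g=1.2, g=9.81):
    """C_D from the buoyancy-drag balance on a bubble of diameter d (m)
    rising at terminal velocity u_t (m/s) in a liquid of density rho_l."""
    return 4.0 * g * d * (rho_l - rho_g) / (3.0 * rho_l * u_t ** 2)

# A 2 mm air bubble rising at ~0.23 m/s in water (typical order of magnitude).
cd = drag_coefficient(d=2e-3, u_t=0.23)
print(round(cd, 3))  # → 0.494
```

Swapping in flow-pattern-dependent terminal velocities (bubbly, slug, churn, annular, mist) is what lets a single C_D expression cover the different regimes.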

12. Organ dose conversion coefficients for voxel models of the reference male and female from idealized photon exposures

International Nuclear Information System (INIS)

Schlattl, H; Zankl, M; Petoussi-Henss, N

2007-01-01

A new series of organ equivalent dose conversion coefficients for whole body external photon exposure is presented for a standardized couple of human voxel models, called Rex and Regina. Irradiations from broad parallel beams in antero-posterior, postero-anterior, left- and right-side lateral directions as well as from a 360 deg. rotational source have been performed numerically by the Monte Carlo transport code EGSnrc. Dose conversion coefficients from an isotropically distributed source were computed, too. The voxel models Rex and Regina originating from real patient CT data comply in body and organ dimensions with the currently valid reference values given by the International Commission on Radiological Protection (ICRP) for the average Caucasian man and woman, respectively. While the equivalent dose conversion coefficients of many organs are in quite good agreement with the reference values of ICRP Publication 74, for some organs and certain geometries the discrepancies amount to 30% or more. Differences between the sexes are of the same order with mostly higher dose conversion coefficients in the smaller female model. However, much smaller deviations from the ICRP values are observed for the resulting effective dose conversion coefficients. With the still valid definition for the effective dose (ICRP Publication 60), the greatest change appears in lateral exposures with a decrease in the new models of at most 9%. However, when the modified definition of the effective dose as suggested by an ICRP draft is applied, the largest deviation from the current reference values is obtained in postero-anterior geometry with a reduction of the effective dose conversion coefficient by at most 12%
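Once the organ equivalent doses H_T are known, the effective dose is the tissue-weighted sum E = Σ_T w_T H_T. The weights below are illustrative placeholders only; the reference values are tabulated by the ICRP:

```python
def effective_dose(organ_dose, weights):
    """E = sum over tissues T of w_T * H_T; the weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[t] * organ_dose[t] for t in weights)

# Illustrative tissue weights and organ equivalent doses (mSv) -- not ICRP reference values.
w = {"lung": 0.25, "stomach": 0.25, "liver": 0.2, "thyroid": 0.15, "skin": 0.15}
h = {"lung": 1.2, "stomach": 1.0, "liver": 0.9, "thyroid": 1.4, "skin": 0.8}
print(effective_dose(h, w))  # weighted sum in mSv
```

This weighting is why per-organ discrepancies of 30% between the voxel models and ICRP Publication 74 can still leave the effective dose conversion coefficients within about 10% of the reference values.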

13. Organ dose conversion coefficients for voxel models of the reference male and female from idealized photon exposures

Science.gov (United States)

Schlattl, H.; Zankl, M.; Petoussi-Henss, N.

2007-04-01

A new series of organ equivalent dose conversion coefficients for whole body external photon exposure is presented for a standardized couple of human voxel models, called Rex and Regina. Irradiations from broad parallel beams in antero-posterior, postero-anterior, left- and right-side lateral directions as well as from a 360° rotational source have been performed numerically by the Monte Carlo transport code EGSnrc. Dose conversion coefficients from an isotropically distributed source were computed, too. The voxel models Rex and Regina originating from real patient CT data comply in body and organ dimensions with the currently valid reference values given by the International Commission on Radiological Protection (ICRP) for the average Caucasian man and woman, respectively. While the equivalent dose conversion coefficients of many organs are in quite good agreement with the reference values of ICRP Publication 74, for some organs and certain geometries the discrepancies amount to 30% or more. Differences between the sexes are of the same order with mostly higher dose conversion coefficients in the smaller female model. However, much smaller deviations from the ICRP values are observed for the resulting effective dose conversion coefficients. With the still valid definition for the effective dose (ICRP Publication 60), the greatest change appears in lateral exposures with a decrease in the new models of at most 9%. However, when the modified definition of the effective dose as suggested by an ICRP draft is applied, the largest deviation from the current reference values is obtained in postero-anterior geometry with a reduction of the effective dose conversion coefficient by at most 12%.

14. Numerical modelling of random walk one-dimensional diffusion

International Nuclear Information System (INIS)

Vamos, C.; Suciu, N.; Peculea, M.

1996-01-01

The evolution of a particle which moves on a discrete one-dimensional lattice according to a random walk law approximates the diffusion process better as the steps of the spatial lattice and of time become smaller. For a sufficiently large assembly of particles one can assume that their relative frequency at the lattice knots approximates the distribution function of the diffusion process. This assumption has been tested by simulating on a computer two analytical solutions of the diffusion equation: the Brownian motion and the steady-state linear distribution. To evaluate quantitatively the similarity between the numerical and analytical solutions we have used a norm given by the absolute value of the difference of the two solutions. Also, a diffusion coefficient at each lattice knot and moment of time has been calculated, by using the numerical solution in both the diffusion equation and the particle flux given by Fick's law. The difference between the diffusion coefficient of the analytical solution and the spatial-lattice mean coefficient of the numerical solution constitutes another quantitative indication of the similarity of the two solutions. The results obtained show that the approximation depends first on the number of particles at each knot of the spatial lattice. In conclusion, the random walk is a microscopic process of the molecular dynamics type which permits simulation of diffusion processes with a given precision. The numerical method presented in this work may be useful both in the analysis of real experiments and for theoretical studies
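The convergence check described above is easy to reproduce: for a symmetric walk of step ±Δx per time step Δt, the displacement variance grows as 2Dt with D = Δx²/(2Δt). A minimal sketch:

```python
import random

def walk_variance(n_particles, n_steps, dx, rng):
    """Sample variance of the final displacement for symmetric +/-dx random walks."""
    finals = []
    for _ in range(n_particles):
        pos = 0.0
        for _ in range(n_steps):
            pos += dx if rng.random() < 0.5 else -dx
        finals.append(pos)
    m = sum(finals) / len(finals)
    return sum((p - m) ** 2 for p in finals) / len(finals)

rng = random.Random(0)
dx, dt, steps = 1.0, 1.0, 100
var = walk_variance(4000, steps, dx, rng)
d_est = var / (2.0 * steps * dt)  # var = 2 D t with t = steps * dt
print(d_est)                      # close to dx^2 / (2 dt) = 0.5
```

The residual scatter around 0.5 shrinks with the number of particles, which is the abstract's observation that the approximation depends first on the particle count per lattice knot.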

15. Analog model for quantum gravity effects: phonons in random fluids.

Science.gov (United States)

Krein, G; Menezes, G; Svaiter, N F

2010-09-24

We describe an analog model for quantum gravity effects in condensed matter physics. The situation discussed is that of phonons propagating in a fluid governed by a wave equation with a random velocity. We consider that there are random fluctuations in the reciprocal of the bulk modulus of the system and study free phonons in the presence of Gaussian colored noise with zero mean. We show that, in this model, after performing the random averages over the noise function, a conventional free scalar quantum field theory describing free phonons becomes a self-interacting model.

16. A cluster expansion approach to exponential random graph models

International Nuclear Information System (INIS)

Yin, Mei

2012-01-01

The exponential family of random graphs is among the most widely studied network models. We show that any exponential random graph model may alternatively be viewed as a lattice gas model with a finite Banach space norm. The system may then be treated using cluster expansion methods from statistical mechanics. In particular, we derive a convergent power series expansion for the limiting free energy in the case of small parameters. Since the free energy is the generating function for the expectations of other random variables, this characterizes the structure and behavior of the limiting network in this parameter region

17. The Determinants of Gini Coefficient in Iran Based on Bayesian Model Averaging

Directory of Open Access Journals (Sweden)

Mohsen Mehrara

2015-03-01

Full Text Available This paper applies the BMA approach to investigate the important influential variables on the Gini coefficient in Iran over the period 1976-2010. The results indicate that GDP growth is the most important variable affecting the Gini coefficient and has a positive influence on it. The second and third most effective variables are, respectively, the ratio of government current expenditure to GDP and the ratio of oil revenue to GDP, both of which lead to an increase in inequality. This result corresponds with the rentier state theory of the Iranian economy. The injection of massive oil revenue into Iran's economy and its high share of the state budget lead to inefficient government spending and an increase in rent-seeking activities in the country. Economic growth is possibly a result of oil revenue in the Iranian economy, which has caused inequality in the distribution of income.
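The Gini coefficient discussed above can be computed from income data with the standard sorted-data formula; this helper is a generic illustration, unrelated to the paper's BMA estimation:

```python
import numpy as np

def gini(incomes):
    """Gini coefficient via the sorted-data formula
    G = sum_i (2i - n - 1) * x_(i) / (n * sum_i x_(i))."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    return np.sum((2 * i - n - 1) * x) / (n * x.sum())
```

For perfectly equal incomes the coefficient is 0, while concentrating all income in one person drives it toward 1, e.g. `gini([0, 0, 0, 1])` gives 0.75.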

18. Integrable properties of a variable-coefficient Korteweg-de Vries model from Bose-Einstein condensates and fluid dynamics

International Nuclear Information System (INIS)

Zhang Chunyi; Gao Yitian; Meng Xianghua; Li Juan; Xu Tao; Wei Guangmei; Zhu Hongwu

2006-01-01

The phenomena of the trapped Bose-Einstein condensates related to matter waves and nonlinear atom optics can be governed by a variable-coefficient Korteweg-de Vries (vc-KdV) model with additional terms contributed from the inhomogeneity in the axial direction and the strong transverse confinement of the condensate, and such a model can also be used to describe the water waves propagating in a channel with an uneven bottom and/or deformed walls. In this paper, with the help of symbolic computation, the bilinear form for the vc-KdV model is obtained and some exact solitonic solutions including the N-solitonic solution in explicit form are derived through the extended Hirota method. We also derive the auto-Baecklund transformation, nonlinear superposition formula, Lax pairs and conservation laws of this model. Finally, the integrability of the variable-coefficient model and the characteristic of the nonlinear superposition formula are discussed

19. Premium Pricing of Liability Insurance Using Random Sum Model

Directory of Open Access Journals (Sweden)

Mujiati Dwi Kartikasari

2017-03-01

Full Text Available Premium pricing is one of the important activities in insurance. A nonlife insurance premium is calculated from the expected value of historical claims data. The historical claims are collected so that they form a sum of a random number of independent random variables, which is called a random sum. In premium pricing using a random sum, the claim frequency distribution and the claim severity distribution are combined; the combination of these distributions is called a compound distribution. Using liability insurance claim data, we analyze premium pricing with a random sum model based on a compound distribution
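The random-sum idea can be sketched by simulation; the distributions and parameters below are assumptions for illustration (the paper fits distributions to actual liability claim data):

```python
import numpy as np

rng = np.random.default_rng(1)

# illustrative compound model: frequency N ~ Poisson(lam),
# severity X ~ Exponential(claim_mean); both are assumed choices
lam, claim_mean = 5.0, 10.0

n_sim = 200_000
counts = rng.poisson(lam, size=n_sim)
totals = np.zeros(n_sim)
for k in np.unique(counts):
    mask = counts == k
    if k > 0:
        # aggregate claims S = X_1 + ... + X_N (the random sum)
        totals[mask] = rng.exponential(claim_mean, size=(mask.sum(), k)).sum(axis=1)

# pure premium = E[S]; by Wald's identity E[S] = E[N] * E[X]
pure_premium = totals.mean()
expected = lam * claim_mean
```

The simulated mean of the compound distribution reproduces the closed-form expectation E[N]·E[X], which is the starting point for premium loading.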

20. Organ dose conversion coefficients based on a voxel mouse model and MCNP code for external photon irradiation.

Science.gov (United States)

Zhang, Xiaomin; Xie, Xiangdong; Cheng, Jie; Ning, Jing; Yuan, Yong; Pan, Jie; Yang, Guoshan

2012-01-01

A set of conversion coefficients from kerma free-in-air to organ absorbed dose for external photon beams from 10 keV to 10 MeV is presented based on a newly developed voxel mouse model, for the purpose of radiation effect evaluation. The voxel mouse model was developed from colour images of successive cryosections of a normal nude male mouse, in which 14 organs or tissues were segmented manually and filled with different colours, each colour tagged with a specific ID number for implementation of the mouse model in the Monte Carlo N-particle code (MCNP). Monte Carlo simulation with MCNP was carried out to obtain organ dose conversion coefficients for 22 external monoenergetic photon beams between 10 keV and 10 MeV under five different irradiation geometry conditions (left lateral, right lateral, dorsal-ventral, ventral-dorsal, and isotropic). Organ dose conversion coefficients are presented in tables and compared with published data based on a rat model to investigate the effect of body size and weight on the organ dose. The results show that the organ dose conversion coefficients vary with photon energy in a similar trend for most organs except for the bone and skin, and that the organ dose is sensitive to body size and weight at photon energies below approximately 0.1 MeV.

1. Towards the Development of a Second-Order Approximation in Activity Coefficient Models Based on Group Contributions

DEFF Research Database (Denmark)

Abildskov, Jens; Constantinou, Leonidas; Gani, Rafiqul

1996-01-01

A simple modification of group contribution based models for estimation of liquid phase activity coefficients is proposed. The main feature of this modification is that contributions estimated from the present first-order groups in many instances are found insufficient since the first-order groups...... correlation/prediction capabilities, distinction between isomers and ability to overcome proximity effects....

2. Bivariate functional data clustering: grouping streams based on a varying coefficient model of the stream water and air temperature relationship

Science.gov (United States)

H. Li; X. Deng; Andy Dolloff; E. P. Smith

2015-01-01

A novel clustering method for bivariate functional data is proposed to group streams based on their water-air temperature relationship. A distance measure is developed for bivariate curves by using a time-varying coefficient model and a weighting scheme. This distance is also adjusted by spatial correlation of streams via the variogram. Therefore, the proposed...

3. Conditional Monte Carlo randomization tests for regression models.

Science.gov (United States)

Parhat, Parwen; Rosenberger, William F; Diao, Guoqing

2014-08-15

We discuss the computation of randomization tests for clinical trials of two treatments when the primary outcome is based on a regression model. We begin by revisiting the seminal paper of Gail, Tan, and Piantadosi (1988), and then describe a method based on Monte Carlo generation of randomization sequences. The tests based on this Monte Carlo procedure are design based, in that they incorporate the particular randomization procedure used. We discuss permuted block designs, complete randomization, and biased coin designs. We also use a new technique by Plamadeala and Rosenberger (2012) for simple computation of conditional randomization tests. Like Gail, Tan, and Piantadosi, we focus on residuals from generalized linear models and martingale residuals from survival models. Such techniques do not apply to longitudinal data analysis, and we introduce a method for computation of randomization tests based on the predicted rate of change from a generalized linear mixed model when outcomes are longitudinal. We show, by simulation, that these randomization tests preserve the size and power well under model misspecification. Copyright © 2014 John Wiley & Sons, Ltd.
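A bare-bones version of such a Monte Carlo randomization test, here using complete randomization and residuals from a simple linear regression on simulated data rather than the authors' trial data, can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(2)

# simulated two-arm trial: covariate x, complete randomization t, outcome y
n = 100
x = rng.normal(size=n)
t = rng.permutation(np.repeat([0, 1], n // 2))
y = 1.0 + 0.5 * x + 1.5 * t + rng.normal(size=n)

def residuals(y, x):
    # residuals from OLS of y on x, ignoring treatment (null model)
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

def stat(t, r):
    # difference in mean residuals between the two arms
    return r[t == 1].mean() - r[t == 0].mean()

r = residuals(y, x)
observed = stat(t, r)

# Monte Carlo re-randomization: regenerate sequences the design could produce
n_mc = 2000
null = np.array([stat(rng.permutation(t), r) for _ in range(n_mc)])
p_value = (np.abs(null) >= abs(observed)).mean()
```

The key design-based feature is that the null distribution is built by re-running the actual randomization procedure; for blocked or biased-coin designs, the permutation step would be replaced by the corresponding sequence generator.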

4. Definition of the Mathematical Model Coefficients on the Weld Size of Butt Joint Without Edge Preparation

Science.gov (United States)

Sidorov, Vladimir P.; Melzitdinova, Anna V.

2017-10-01

This paper presents methods for determining thermal constants from data on the weld width under a normal-circular heat source. The method is based on isoline contouring of "effective power - temperature conductivity coefficient". The definition of the coefficients allows setting requirements on the precision of welding parameter control with accuracy sufficient for engineering practice.

5. Development of a model to determine mass transfer coefficient and oxygen solubility in bioreactors

Directory of Open Access Journals (Sweden)

Johnny Lee

2017-02-01

where T is in degrees Kelvin, and the subscripts refer to degrees Celsius; E, ρ, σ are properties of water. Furthermore, using published data on oxygen solubility in water, it was found that solubility bears a linear and inverse relationship with the mass transfer coefficient.

6. The ising model on the dynamical triangulated random surface

International Nuclear Information System (INIS)

Aleinov, I.D.; Migelal, A.A.; Zmushkow, U.V.

1990-01-01

The critical properties of the Ising model on a dynamically triangulated random surface embedded in D-dimensional Euclidean space are investigated. The strong coupling expansion method is used. The transition to the thermodynamic limit is performed by means of continued fractions

7. Semivarying coefficient models for capture-recapture data: colony size estimation for the little penguin Eudyptula minor.

Science.gov (United States)

Stoklosa, Jakub; Dann, Peter; Huggins, Richard

2014-09-01

To accommodate seasonal effects that change from year to year into models for the size of an open population, we consider a time-varying coefficient model. We fit this model to a capture-recapture data set collected on the little penguin Eudyptula minor in south-eastern Australia over a 25-year period, using Jolly-Seber type estimators and nonparametric P-spline techniques. The time-varying coefficient model identified strong changes in the seasonal pattern across the years, which we further examined using functional data analysis techniques. To evaluate the methodology we also conducted several simulation studies that incorporate seasonal variation. Copyright © 2014 Elsevier Inc. All rights reserved.

8. The Analytical Objective Hysteresis Model (AnOHM v1.0: methodology to determine bulk storage heat flux coefficients

Directory of Open Access Journals (Sweden)

T. Sun

2017-07-01

Full Text Available The net storage heat flux (ΔQS) is important in the urban surface energy balance (SEB) but its determination remains a significant challenge. The hysteresis pattern of the diurnal relation between ΔQS and the net all-wave radiation (Q∗) has been captured in the Objective Hysteresis Model (OHM) parameterization of ΔQS. Although successfully used in urban areas, the limited availability of coefficients for OHM hampers its application. To facilitate use and enhance physical interpretation of the OHM coefficients, an analytical solution of the one-dimensional advection–diffusion equation of coupled heat and liquid water transport in conjunction with the SEB is derived, allowing the development of AnOHM (Analytical Objective Hysteresis Model). A sensitivity test of AnOHM to surface properties and hydrometeorological forcing is presented using a stochastic approach (subset simulation). The sensitivity test suggests that the albedo, Bowen ratio, bulk transfer coefficient, solar radiation and wind speed are most critical. AnOHM, driven by local meteorological conditions at five sites with different land use, is shown to simulate the ΔQS flux well (RMSE values of ∼ 30 W m−2). The intra-annual dynamics of the OHM coefficients are explored. AnOHM offers significant potential to enhance modelling of the surface energy balance over a wider range of conditions and land covers.
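The OHM parameterization referred to above relates the two fluxes as ΔQS = a1·Q∗ + a2·dQ∗/dt + a3. A minimal sketch applying it to a synthetic diurnal cycle, with placeholder coefficients rather than values fitted by AnOHM:

```python
import numpy as np

# placeholder OHM coefficients (a1 dimensionless, a2 in hours, a3 in W m^-2);
# illustrative values only, not results from the paper
a1, a2, a3 = 0.5, 0.3, -30.0

# synthetic diurnal net all-wave radiation Q* (W m^-2), hourly for one day
hours = np.arange(24.0)
q_star = np.maximum(0.0, 600.0 * np.sin(np.pi * (hours - 6.0) / 12.0))

# OHM: dQS = a1*Q* + a2*dQ*/dt + a3; the dQ*/dt term produces the hysteresis
dq_dt = np.gradient(q_star, hours)
delta_qs = a1 * q_star + a2 * dq_dt + a3
```

The a2 term shifts the storage-flux peak ahead of the radiation peak, which is exactly the diurnal hysteresis pattern the model is named for.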

9. Approximating prediction uncertainty for random forest regression models

Science.gov (United States)

John W. Coulston; Christine E. Blinn; Valerie A. Thomas; Randolph H. Wynne

2016-01-01

Machine learning approaches such as random forest have become increasingly popular for the spatial modeling and mapping of continuous variables. Random forest is a non-parametric ensemble approach, and unlike traditional regression approaches there is no direct quantification of prediction error. Understanding prediction uncertainty is important when using model-based continuous maps as...

10. A random spatial network model based on elementary postulates

Science.gov (United States)

Karlinger, Michael R.; Troutman, Brent M.

1989-01-01

A model for generating random spatial networks that is based on elementary postulates comparable to those of the random topology model is proposed. In contrast to the random topology model, this model ascribes a unique spatial specification to generated drainage networks, a distinguishing property of some network growth models. The simplicity of the postulates creates an opportunity for potential analytic investigations of the probabilistic structure of the drainage networks, while the spatial specification enables analyses of spatially dependent network properties. In the random topology model all drainage networks, conditioned on magnitude (number of first-order streams), are equally likely, whereas in this model all spanning trees of a grid, conditioned on area and drainage density, are equally likely. As a result, link lengths in the generated networks are not independent, as usually assumed in the random topology model. For a preliminary model evaluation, scale-dependent network characteristics, such as geometric diameter and link length properties, and topologic characteristics, such as bifurcation ratio, are computed for sets of drainage networks generated on square and rectangular grids. Statistics of the bifurcation and length ratios fall within the range of values reported for natural drainage networks, but geometric diameters tend to be relatively longer than those for natural networks.
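Uniformly random spanning trees of a grid, the central object of this model, can be sampled with the Aldous-Broder random-walk algorithm; the sketch below is a generic illustration, not the authors' generation procedure:

```python
import random

def uniform_spanning_tree(m, n, seed=0):
    """Aldous-Broder: walk randomly on the m x n grid graph; the edges used
    to enter each node for the first time form a uniform spanning tree."""
    rng = random.Random(seed)
    current = (0, 0)
    visited = {current}
    tree_edges = []
    while len(visited) < m * n:
        i, j = current
        nbrs = [(i + di, j + dj)
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= i + di < m and 0 <= j + dj < n]
        nxt = rng.choice(nbrs)
        if nxt not in visited:
            visited.add(nxt)
            tree_edges.append((current, nxt))
        current = nxt
    return tree_edges

edges = uniform_spanning_tree(4, 4)   # 16 nodes -> 15 tree edges
```

Conditioning on area and drainage density, as the model requires, would correspond to restricting or reweighting the sampled trees; the unconditional sampler above already makes all spanning trees of the grid equally likely.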

11. Random regression models for detection of gene by environment interaction

Directory of Open Access Journals (Sweden)

Meuwissen Theo HE

2007-02-01

Full Text Available Abstract Two random regression models, where the effect of a putative QTL was regressed on an environmental gradient, are described. The first model estimates the correlation between intercept and slope of the random regression, while the other model restricts this correlation to 1 or -1, which is expected under a bi-allelic QTL model. The random regression models were compared to a model assuming no gene by environment interactions. The comparison was done with regard to the models' ability to detect QTL, to position them accurately and to detect possible QTL by environment interactions. A simulation study based on a granddaughter design was conducted, and QTL were assumed, either by assigning an effect independent of the environment or as a linear function of a simulated environmental gradient. It was concluded that the random regression models were suitable for detection of QTL effects, in the presence and absence of interactions with environmental gradients. Fixing the correlation between intercept and slope of the random regression had a positive effect on power when the QTL effects re-ranked between environments.

12. Extensive Investigations on Bio-Inspired Trust and Reputation Model over Hops Coefficient Factor in Distributed Wireless Sensor Networks

Directory of Open Access Journals (Sweden)

Vinod Kumar Verma

2014-08-01

Full Text Available Resource utilization requires a substantial consideration for a trust and reputation model to be deployed within a wireless sensor network (WSN. In the evaluation, our attention is focused on the effect of hops coefficient factor estimation on WSN with bio-inspired trust and reputation model (BTRM. We present the state-of-the-art system level evaluation of accuracy and path length of sensor node operations for their current and average scenarios. Additionally, we emphasized over the energy consumption evaluation for static, dynamic and oscillatory modes of BTRM-WSN model. The performance of the hops coefficient factor for our proposed framework is evaluated via analytic bounds and numerical simulations.

13. High-order dynamic modeling and parameter identification of structural discontinuities in Timoshenko beams by using reflection coefficients

Science.gov (United States)

Fan, Qiang; Huang, Zhenyu; Zhang, Bing; Chen, Dayue

2013-02-01

Properties of discontinuities, such as bolt joints and cracks in the waveguide structures, are difficult to evaluate by either analytical or numerical methods due to the complexity and uncertainty of the discontinuities. In this paper, the discontinuity in a Timoshenko beam is modeled with high-order parameters and then these parameters are identified by using reflection coefficients at the discontinuity. The high-order model is composed of several one-order sub-models in series and each sub-model consists of inertia, stiffness and damping components in parallel. The order of the discontinuity model is determined based on the characteristics of the reflection coefficient curve and the accuracy requirement of the dynamic modeling. The model parameters are identified through the least-square fitting iteration method, of which the undetermined model parameters are updated in iteration to fit the dynamic reflection coefficient curve with the wave-based one. By using the spectral super-element method (SSEM), simulation cases, including one-order discontinuities on infinite- and finite-beams and a two-order discontinuity on an infinite beam, were employed to evaluate both the accuracy of the discontinuity model and the effectiveness of the identification method. For practical considerations, effects of measurement noise on the discontinuity parameter identification are investigated by adding different levels of noise to the simulated data. The simulation results were then validated by the corresponding experiments. Both the simulation and experimental results show that (1) the one-order discontinuities can be identified accurately with the maximum errors of 6.8% and 8.7%, respectively; (2) and the high-order discontinuities can be identified with the maximum errors of 15.8% and 16.2%, respectively; and (3) the high-order model can predict the complex discontinuity much more accurately than the one-order discontinuity model.

14. Disorder Identification in Hysteresis Data: Recognition Analysis of the Random-Bond-Random-Field Ising Model

International Nuclear Information System (INIS)

Ovchinnikov, O. S.; Jesse, S.; Kalinin, S. V.; Bintacchit, P.; Trolier-McKinstry, S.

2009-01-01

An approach for the direct identification of disorder type and strength in physical systems based on recognition analysis of hysteresis loop shape is developed. A large number of theoretical examples uniformly distributed in the parameter space of the system is generated and is decorrelated using principal component analysis (PCA). The PCA components are used to train a feed-forward neural network using the model parameters as targets. The trained network is used to analyze hysteresis loops for the investigated system. The approach is demonstrated using a 2D random-bond-random-field Ising model, and polarization switching in polycrystalline ferroelectric capacitors.
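The PCA decorrelation step can be sketched with plain NumPy: simulated loops from a hypothetical two-parameter toy model (not the RBRF Ising simulations used in the paper) are mean-centered and decorrelated via SVD, and the resulting scores would be the inputs to the feed-forward network:

```python
import numpy as np

rng = np.random.default_rng(5)

# toy stand-in for hysteresis loops: each row is a "loop" sampled at 64 field
# values, generated from a hypothetical 2-parameter response plus noise
h = np.linspace(-1.0, 1.0, 64)
params = rng.uniform(0.1, 1.0, size=(300, 2))        # (offset, amplitude)
loops = np.array([a * np.tanh((h + c) / 0.2) for c, a in params])
loops += 0.01 * rng.normal(size=loops.shape)

# PCA decorrelation via SVD of the mean-centered data matrix
X = loops - loops.mean(axis=0)
U, sing, Vt = np.linalg.svd(X, full_matrices=False)
scores = U * sing            # decorrelated PCA components (network inputs)
```

Because the columns of U are orthonormal, the score vectors are exactly uncorrelated, which is the property the recognition analysis exploits before training.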

15. Time-varying coefficient vector autoregressions model based on dynamic correlation with an application to crude oil and stock markets

Energy Technology Data Exchange (ETDEWEB)

Lu, Fengbin, E-mail: fblu@amss.ac.cn [Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190 (China); Qiao, Han, E-mail: qiaohan@ucas.ac.cn [School of Economics and Management, University of Chinese Academy of Sciences, Beijing 100190 (China); Wang, Shouyang, E-mail: sywang@amss.ac.cn [School of Economics and Management, University of Chinese Academy of Sciences, Beijing 100190 (China); Lai, Kin Keung, E-mail: mskklai@cityu.edu.hk [Department of Management Sciences, City University of Hong Kong (Hong Kong); Li, Yuze, E-mail: richardyz.li@mail.utoronto.ca [Department of Industrial Engineering, University of Toronto (Canada)

2017-01-15

This paper proposes a new time-varying coefficient vector autoregressions (VAR) model, in which the coefficient is a linear function of dynamic lagged correlation. The proposed model allows for flexibility in choices of dynamic correlation models (e.g. dynamic conditional correlation generalized autoregressive conditional heteroskedasticity (GARCH) models, Markov-switching GARCH models and multivariate stochastic volatility models), which indicates that it can describe many types of time-varying causal effects. Time-varying causal relations between West Texas Intermediate (WTI) crude oil and the US Standard and Poor’s 500 (S&P 500) stock markets are examined by the proposed model. The empirical results show that their causal relations evolve with time and display complex characters. Both positive and negative causal effects of the WTI on the S&P 500 in the subperiods have been found and confirmed by the traditional VAR models. Similar results have been obtained in the causal effects of S&P 500 on WTI. In addition, the proposed model outperforms the traditional VAR model.
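The idea of a VAR coefficient that is a linear function of a dynamic lagged correlation can be illustrated with a rolling-window correlation standing in for a DCC-GARCH estimate; the series, window, and single-equation form below are simplifications of the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic daily returns standing in for WTI and S&P 500
T = 500
oil = rng.normal(0.0, 1.0, T)
stock = 0.3 * oil + rng.normal(0.0, 1.0, T)

window = 60   # rolling correlation as a crude stand-in for DCC-GARCH

def rolling_corr(a, b, w):
    out = np.full(a.size, np.nan)
    for t in range(w, a.size):
        out[t] = np.corrcoef(a[t - w:t], b[t - w:t])[0, 1]
    return out

rho = rolling_corr(oil, stock, window)

# time-varying causal coefficient b_t = c0 + c1 * rho_{t-1}, fitted by OLS on
#   stock_t = (c0 + c1 * rho_{t-1}) * oil_{t-1} + e_t
mask = ~np.isnan(rho[:-1])
X = np.column_stack([oil[:-1], rho[:-1] * oil[:-1]])[mask]
y = stock[1:][mask]
(c0, c1), *_ = np.linalg.lstsq(X, y, rcond=None)
```

Swapping the rolling correlation for a DCC-GARCH, Markov-switching GARCH, or stochastic-volatility correlation series is what gives the proposed model its flexibility across correlation dynamics.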

16. Time-varying coefficient vector autoregressions model based on dynamic correlation with an application to crude oil and stock markets

International Nuclear Information System (INIS)

Lu, Fengbin; Qiao, Han; Wang, Shouyang; Lai, Kin Keung; Li, Yuze

2017-01-01

This paper proposes a new time-varying coefficient vector autoregressions (VAR) model, in which the coefficient is a linear function of dynamic lagged correlation. The proposed model allows for flexibility in choices of dynamic correlation models (e.g. dynamic conditional correlation generalized autoregressive conditional heteroskedasticity (GARCH) models, Markov-switching GARCH models and multivariate stochastic volatility models), which indicates that it can describe many types of time-varying causal effects. Time-varying causal relations between West Texas Intermediate (WTI) crude oil and the US Standard and Poor’s 500 (S&P 500) stock markets are examined by the proposed model. The empirical results show that their causal relations evolve with time and display complex characters. Both positive and negative causal effects of the WTI on the S&P 500 in the subperiods have been found and confirmed by the traditional VAR models. Similar results have been obtained in the causal effects of S&P 500 on WTI. In addition, the proposed model outperforms the traditional VAR model.

17. Time-varying coefficient vector autoregressions model based on dynamic correlation with an application to crude oil and stock markets.

Science.gov (United States)

Lu, Fengbin; Qiao, Han; Wang, Shouyang; Lai, Kin Keung; Li, Yuze

2017-01-01

This paper proposes a new time-varying coefficient vector autoregressions (VAR) model, in which the coefficient is a linear function of dynamic lagged correlation. The proposed model allows for flexibility in choices of dynamic correlation models (e.g. dynamic conditional correlation generalized autoregressive conditional heteroskedasticity (GARCH) models, Markov-switching GARCH models and multivariate stochastic volatility models), which indicates that it can describe many types of time-varying causal effects. Time-varying causal relations between West Texas Intermediate (WTI) crude oil and the US Standard and Poor's 500 (S&P 500) stock markets are examined by the proposed model. The empirical results show that their causal relations evolve with time and display complex characters. Both positive and negative causal effects of the WTI on the S&P 500 in the subperiods have been found and confirmed by the traditional VAR models. Similar results have been obtained in the causal effects of S&P 500 on WTI. In addition, the proposed model outperforms the traditional VAR model. Copyright © 2016 Elsevier Ltd. All rights reserved.

18. A generalized model via random walks for information filtering

International Nuclear Information System (INIS)

Ren, Zhuo-Ming; Kong, Yixiu; Shang, Ming-Sheng; Zhang, Yi-Cheng

2016-01-01

There could exist a simple general mechanism lurking beneath collaborative filtering and interdisciplinary physics approaches which have been successfully applied to online E-commerce platforms. Motivated by this idea, we propose a generalized model employing the dynamics of the random walk in the bipartite networks. Taking into account the degree information, the proposed generalized model could deduce the collaborative filtering, interdisciplinary physics approaches and even the enormous expansion of them. Furthermore, we analyze the generalized model with single and hybrid of degree information on the process of random walk in bipartite networks, and propose a possible strategy by using the hybrid degree information for different popular objects to toward promising precision of the recommendation. - Highlights: • We propose a generalized recommendation model employing the random walk dynamics. • The proposed model with single and hybrid of degree information is analyzed. • A strategy with the hybrid degree information improves precision of recommendation.
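The mass-diffusion (ProbS) variant of such a bipartite random walk can be sketched as follows; the toy user-object adjacency matrix is made up for illustration:

```python
import numpy as np

# toy bipartite network: rows = users, columns = objects, 1 = collected
A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]], dtype=float)

k_user = A.sum(axis=1)   # user degrees
k_obj = A.sum(axis=0)    # object degrees

# ProbS transfer matrix: resource flows object -> user -> object,
#   W[a, b] = (1 / k_obj[b]) * sum_u A[u, a] * A[u, b] / k_user[u]
S = A.T @ (A / k_user[:, None])
W = S / k_obj[None, :]

# recommendation scores for user 0 (collected objects would be masked out)
scores = W @ A[0]
```

Each column of W sums to one, so the walk conserves the initial resource; hybrid methods of the kind the abstract mentions reweight this transfer matrix by object degree.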

19. A generalized model via random walks for information filtering

Energy Technology Data Exchange (ETDEWEB)

Ren, Zhuo-Ming, E-mail: zhuomingren@gmail.com [Department of Physics, University of Fribourg, Chemin du Musée 3, CH-1700, Fribourg (Switzerland); Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, ChongQing, 400714 (China); Kong, Yixiu [Department of Physics, University of Fribourg, Chemin du Musée 3, CH-1700, Fribourg (Switzerland); Shang, Ming-Sheng, E-mail: msshang@cigit.ac.cn [Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, ChongQing, 400714 (China); Zhang, Yi-Cheng [Department of Physics, University of Fribourg, Chemin du Musée 3, CH-1700, Fribourg (Switzerland)

2016-08-06

There could exist a simple general mechanism lurking beneath collaborative filtering and interdisciplinary physics approaches which have been successfully applied to online E-commerce platforms. Motivated by this idea, we propose a generalized model employing the dynamics of the random walk in the bipartite networks. Taking into account the degree information, the proposed generalized model could deduce the collaborative filtering, interdisciplinary physics approaches and even the enormous expansion of them. Furthermore, we analyze the generalized model with single and hybrid of degree information on the process of random walk in bipartite networks, and propose a possible strategy by using the hybrid degree information for different popular objects to toward promising precision of the recommendation. - Highlights: • We propose a generalized recommendation model employing the random walk dynamics. • The proposed model with single and hybrid of degree information is analyzed. • A strategy with the hybrid degree information improves precision of recommendation.

20. Money Creation in a Random Matching Model

OpenAIRE

Alexei Deviatov

2006-01-01

I study money creation in versions of the Trejos-Wright (1995) and Shi (1995) models with indivisible money and individual holdings bounded at two units. I work with the same class of policies as in Deviatov and Wallace (2001), who study money creation in that model. However, I consider an alternative notion of implementability–the ex ante pairwise core. I compute a set of numerical examples to determine whether money creation is beneficial. I find beneficial effects of money creation if indiv...

1. Random effects models in clinical research

NARCIS (Netherlands)

Cleophas, T. J.; Zwinderman, A. H.

2008-01-01

BACKGROUND: In clinical trials a fixed effects research model assumes that the patients selected for a specific treatment have the same true quantitative effect and that the differences observed are residual error. If, however, we have reasons to believe that certain patients respond differently

2. Comparison of four large-eddy simulation research codes and effects of model coefficient and inflow turbulence in actuator-line-based wind turbine modeling

DEFF Research Database (Denmark)

Martínez-Tossas, Luis A.; Churchfield, Matthew J.; Yilmaz, Ali Emre

2018-01-01

to match closely for all codes. The value of the Smagorinsky coefficient in the subgrid-scale turbulence model is shown to have a negligible effect on the time-averaged loads along the blades. Conversely, the breakdown location of the wake is strongly dependent on the Smagorinsky coefficient in uniform...... coefficient has a negligible effect on the wake profiles. It is concluded that for LES of wind turbines and wind farms using ALM, careful implementation and extensive cross-verification among codes can result in highly reproducible predictions. Moreover, the characteristics of the inflow turbulence appear...

3. Incorporation of velocity-dependent restitution coefficient and particle surface friction into kinetic theory for modeling granular flow cooling.

Science.gov (United States)

Duan, Yifei; Feng, Zhi-Gang

2017-12-01

Kinetic theory (KT) has been successfully used to model rapid granular flows in which particle interactions are frictionless and nearly elastic. However, it fails when particle interactions become frictional and inelastic. For example, the KT is not able to accurately predict the free cooling process of a vibrated granular medium that consists of inelastic frictional particles under microgravity. The main reason that the classical KT fails to model these flows is its inability to account for particle surface friction and inelastic behavior, which are the two most important factors that need to be considered in modeling collisional granular flows. In this study, we have modified the KT model to incorporate these two factors. The inelasticity of a particle is considered by establishing a velocity-dependent expression for the restitution coefficient based on many experimental studies found in the literature, and the particle friction effect is included by using a tangential restitution coefficient that is related to the particle friction coefficient. Theoretical predictions of the free cooling process by the classical KT and the improved KT are compared with the experimental results from a study conducted on an airplane undergoing parabolic flights without the influence of gravity [Y. Grasselli, G. Bossis, and G. Goutallier, Europhys. Lett. 86, 60007 (2009)10.1209/0295-5075/86/60007]. Our results show that both the velocity-dependent restitution coefficient and the particle surface friction are important in predicting the free cooling process of granular flows; the modified KT model that integrates these two factors is able to improve the simulation results and leads to better agreement with the experimental results.

4. Simple Analytical Forms of the Perpendicular Diffusion Coefficient for Two-component Turbulence. III. Damping Model of Dynamical Turbulence

Energy Technology Data Exchange (ETDEWEB)

Gammon, M.; Shalchi, A., E-mail: andreasm4@yahoo.com [Department of Physics and Astronomy, University of Manitoba, Winnipeg, Manitoba R3T 2N2 (Canada)

2017-10-01

In several astrophysical applications one needs analytical forms of cosmic-ray diffusion parameters. Some examples are studies of diffusive shock acceleration and solar modulation. In the current article we explore perpendicular diffusion based on the unified nonlinear transport theory. While we focused on magnetostatic turbulence in Paper I, we included the effect of dynamical turbulence in Paper II of the series. In the latter paper we assumed that the temporal correlation time does not depend on the wavenumber. More realistic models have been proposed in the past, such as the so-called damping model of dynamical turbulence. In the present paper we derive analytical forms for the perpendicular diffusion coefficient of energetic particles in two-component turbulence for this type of time-dependent turbulence. We present new formulas for the perpendicular diffusion coefficient and we derive a condition for which the magnetostatic result is recovered.

5. Money creation process in a random redistribution model

Science.gov (United States)

Chen, Siyan; Wang, Yougui; Li, Keqiang; Wu, Jinshan

2014-01-01

In this paper, the dynamical process of money creation in a random exchange model with debt is investigated. The money creation kinetics are analyzed by both the money-transfer matrix method and the diffusion method. From both approaches, we attain the same conclusion: the source of money creation in the case of random exchange is the agents with neither money nor debt. These analytical results are demonstrated by computer simulations.
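A minimal sketch of a random-exchange economy with debt, under assumed rules (unit transfers and a fixed per-agent borrowing limit) rather than the paper's exact model, shows how money and debt are created in equal amounts:

```python
import random

def simulate(n_agents=100, steps=10_000, debt_limit=5, seed=1):
    # Toy random-exchange economy with debt (illustrative parameters).
    # Each step a random payer transfers one unit to a random payee,
    # borrowing (balance going negative) up to -debt_limit if necessary.
    rng = random.Random(seed)
    balance = [0] * n_agents
    for _ in range(steps):
        i, j = rng.sample(range(n_agents), 2)
        if balance[i] > -debt_limit:       # payer may still pay or borrow
            balance[i] -= 1
            balance[j] += 1
    money = sum(b for b in balance if b > 0)    # money in circulation
    debt = -sum(b for b in balance if b < 0)    # total outstanding debt
    return money, debt
```

Because every transfer conserves the sum of balances (which starts at zero), money in circulation and outstanding debt are always equal, so money creation is driven by agents crossing the zero-balance line.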

6. Utility based maintenance analysis using a Random Sign censoring model

International Nuclear Information System (INIS)

Andres Christen, J.; Ruggeri, Fabrizio; Villa, Enrique

2011-01-01

Industrial systems subject to failures are usually inspected when there are evident signs of an imminent failure. Maintenance is therefore performed at a random time, somehow dependent on the failure mechanism. A competing risk model, namely a Random Sign model, is considered to relate failure and maintenance times. We propose a novel Bayesian analysis of the model and apply it to actual data from a water pump in an oil refinery. The design of an optimal maintenance policy is then discussed under a formal decision theoretic approach, analyzing the goodness of the current maintenance policy and making decisions about the optimal maintenance time.

7. Computation of diffusion coefficients for waters of Gauthami Godavari estuary using one-dimensional advection-diffusion model

Digital Repository Service at National Institute of Oceanography (India)

Jyothi, D.; Murty, T.V.R.; Sarma, V.V.; Rao, D.P.

conditions. As the pollutant load on the estuary increases, the water quality may deteriorate rapidly and therefore scientific interest is centered on the analysis of water quality. The pollutants will be subjected to a number of physical, chemical... study we have applied a one-dimensional advection-diffusion model for the waters of the Gauthami Godavari estuary to determine the axial diffusion coefficients and thereby to predict the impact assessment. The study area (Fig. 1) is the lowermost 32 km...
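The one-dimensional advection-diffusion equation underlying this kind of study can be integrated with a simple explicit scheme. This is an illustrative sketch (upwind advection, central diffusion, hypothetical grid parameters), not the authors' code:

```python
def advect_diffuse(c, u, d, dx, dt, steps):
    # Explicit scheme for dC/dt + u*dC/dx = d*d2C/dx2 with fixed boundaries.
    # Stability requires dt <= dx*dx / (2*d) and u*dt/dx <= 1.
    n = len(c)
    for _ in range(steps):
        new = c[:]                         # boundaries stay fixed
        for i in range(1, n - 1):
            adv = -u * (c[i] - c[i - 1]) / dx          # upwind advection
            dif = d * (c[i + 1] - 2 * c[i] + c[i - 1]) / dx**2  # diffusion
            new[i] = c[i] + dt * (adv + dif)
        c = new
    return c
```

In practice the axial diffusion coefficient `d` would be tuned until simulated concentration profiles match the observed estuarine data.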

8. Application of inverse models and XRD analysis to the determination of Ti-17 beta-phase Coefficients of Thermal Expansion

OpenAIRE

Fréour , Sylvain; Gloaguen , David; François , Marc; Guillén , Ronald

2006-01-01

International audience; The scope of this work is the determination of the coefficients of thermal expansion of the Ti-17 beta-phase. A rigorous inverse thermo-elastic self-consistent scale transition inicro-mechanical model extended to multi-phase materials was used. The experimental data required for the application of the inverse method were obtained from both the available literature and especially dedicated X-ray diffraction lattice strain measurements performed on the studied (alpha + b...

9. Application of inverse models and XRD analysis to the determination of Ti-17 {beta}-phase coefficients of thermal expansion

Energy Technology Data Exchange (ETDEWEB)

Freour, S. [GeM, Institut de Recherche en Genie Civil et Mecanique (UMR CNRS 6183), Universite de Nantes, Ecole Centrale de Nantes, 37 Boulevard de l' Universite, BP 406, 44 602 Saint-Nazaire cedex (France)]. E-mail: freour@crttsn.univ-nantes.fr; Gloaguen, D. [GeM, Institut de Recherche en Genie Civil et Mecanique (UMR CNRS 6183), Universite de Nantes, Ecole Centrale de Nantes, 37 Boulevard de l' Universite, BP 406, 44 602 Saint-Nazaire cedex (France); Francois, M. [Laboratoire des Systemes Mecaniques et d' Ingenierie Simultanee (LASMIS FRE CNRS 2719), Universite de Technologie de Troyes, 12 Rue Marie Curie, BP 2060, 10010 Troyes (France); Guillen, R. [GeM, Institut de Recherche en Genie Civil et Mecanique (UMR CNRS 6183), Universite de Nantes, Ecole Centrale de Nantes, 37 Boulevard de l' Universite, BP 406, 44 602 Saint-Nazaire cedex (France)

2006-04-15

The scope of this work is the determination of the coefficients of thermal expansion of the Ti-17 {beta}-phase. A rigorous inverse thermo-elastic self-consistent scale transition micro-mechanical model extended to multi-phase materials was used. The experimental data required for the application of the inverse method were obtained from both the available literature and especially dedicated X-ray diffraction lattice strain measurements performed on the studied ({alpha} + {beta}) two-phase titanium alloy.

10. Entrainment coefficient and effective mass for conduction neutrons in neutron star crust: simple microscopic models

International Nuclear Information System (INIS)

Carter, Brandon; Chamel, Nicolas; Haensel, Pawel

2005-01-01

In the inner crust of a neutron star, at densities above the 'drip' threshold, unbound 'conduction' neutrons can move freely through the ionic lattice formed by the nuclei. The relative current density n_i = n v-bar_i of such conduction neutrons will be related to the corresponding mean particle momentum p_i by a proportionality relation of the form n_i = K p_i, in terms of a physically well defined mobility coefficient K whose value in this context has not been calculated before. Using methods from ordinary solid-state and nuclear physics, a simple quantum mechanical treatment based on the independent particle approximation is used here to formulate K as the phase-space integral of the relevant group velocity over the neutron Fermi surface. The result can be described as an 'entrainment' that changes the ordinary neutron mass m to a macroscopic effective mass per neutron that will be given (subject to adoption of a convention specifying the precise number density n of the neutrons that are considered to be 'free') by m-bar = n/K. The numerical evaluation of the mobility coefficient is carried out for nuclear configurations of the 'lasagna' and 'spaghetti' type that may be relevant at the base of the crust. Extrapolation to the middle layers of the inner crust leads to the unexpected prediction that m-bar will become very large compared with m.

11. (Non-) Gibbsianness and Phase Transitions in Random Lattice Spin Models

NARCIS (Netherlands)

Külske, C.

1999-01-01

We consider disordered lattice spin models with finite-volume Gibbs measures µΛ[η](dσ). Here σ denotes a lattice spin variable and η a lattice random variable with product distribution P describing the quenched disorder of the model. We ask: when will the joint measures limΛ↑Zd P(dη)µΛ[η](dσ) be

12. Shape Modelling Using Markov Random Field Restoration of Point Correspondences

DEFF Research Database (Denmark)

Paulsen, Rasmus Reinhold; Hilger, Klaus Baggesen

2003-01-01

A method for building statistical point distribution models is proposed. The novelty in this paper is the adaption of Markov random field regularization of the correspondence field over the set of shapes. The new approach leads to a generative model that produces highly homogeneous polygonized sh...

13. Simulating intrafraction prostate motion with a random walk model

Directory of Open Access Journals (Sweden)

Tobias Pommer, PhD

2017-07-01

Conclusions: Random walk modeling is feasible and recreated the characteristics of the observed prostate motion. Introducing artificial transient motion did not improve the overall agreement, although the first 30 seconds of the traces were better reproduced. The model provides a simple estimate of prostate motion during delivery of radiation therapy.
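A random walk trace of the kind described can be sketched in a few lines; the step scale `sigma` and the sampling interval below are illustrative assumptions, not the values fitted to the observed prostate motion:

```python
import random

def random_walk_trace(n_steps=120, dt=0.25, sigma=0.05, seed=7):
    # Minimal isotropic 2D random-walk sketch of intrafraction motion.
    # sigma (displacement scale per sqrt-second) and dt (seconds between
    # samples) are illustrative, not the study's fitted parameters.
    rng = random.Random(seed)
    x = y = 0.0
    trace = [(0.0, 0.0)]
    for _ in range(n_steps):
        x += rng.gauss(0.0, sigma) * dt ** 0.5   # Brownian scaling with dt
        y += rng.gauss(0.0, sigma) * dt ** 0.5
        trace.append((x, y))
    return trace
```

Transient motion, as in the abstract, could be layered on top by occasionally adding a short-lived drift term to the increments.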

14. Single-cluster dynamics for the random-cluster model

NARCIS (Netherlands)

Deng, Y.; Qian, X.; Blöte, H.W.J.

2009-01-01

We formulate a single-cluster Monte Carlo algorithm for the simulation of the random-cluster model. This algorithm is a generalization of the Wolff single-cluster method for the q-state Potts model to noninteger values q>1. Its results for static quantities are in a satisfactory agreement with those
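The integer-q special case of the Wolff single-cluster update (q = 2, i.e. the Ising model) can be sketched as follows; the paper's random-cluster generalization to noninteger q > 1 replaces this integer-q construction:

```python
import math, random

def wolff_step(spin, L, beta, rng):
    # One Wolff single-cluster update on an L x L periodic Ising lattice.
    # spin is a flat list of +1/-1 values; beta is the inverse temperature.
    p_add = 1.0 - math.exp(-2.0 * beta)   # bond-activation probability
    seed = rng.randrange(L * L)
    s0 = spin[seed]
    cluster = {seed}
    stack = [seed]
    while stack:                          # grow cluster over aligned bonds
        i = stack.pop()
        x, y = divmod(i, L)
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            j = (nx % L) * L + (ny % L)
            if j not in cluster and spin[j] == s0 and rng.random() < p_add:
                cluster.add(j)
                stack.append(j)
    for i in cluster:                     # flip the whole cluster at once
        spin[i] = -spin[i]
    return len(cluster)
```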

15. Application of Poisson random effect models for highway network screening.

Science.gov (United States)

Jiang, Ximiao; Abdel-Aty, Mohamed; Alamili, Samer

2014-02-01

In recent years, Bayesian random effect models that account for the temporal and spatial correlations of crash data have become popular in traffic safety research. This study employs random effect Poisson Log-Normal models for crash risk hotspot identification. Both the temporal and spatial correlations of crash data were considered. Potential for Safety Improvement (PSI) was adopted as a measure of the crash risk. Using the fatal and injury crashes that occurred on urban 4-lane divided arterials from 2006 to 2009 in the Central Florida area, the random effect approaches were compared to the traditional Empirical Bayesian (EB) method and the conventional Bayesian Poisson Log-Normal model. A series of method examination tests were conducted to evaluate the performance of the different approaches. These tests include the previously developed site consistence test, method consistence test, total rank difference test, and the modified total score test, as well as the newly proposed total safety performance measure difference test. Results show that the Bayesian Poisson model accounting for both temporal and spatial random effects (PTSRE) outperforms the model with only a temporal random effect, and both are superior to the conventional Poisson Log-Normal model (PLN) and the EB model in the fitting of crash data. Additionally, the method evaluation tests indicate that the PTSRE model is significantly superior to the PLN model and the EB model in consistently identifying hotspots during successive time periods. The results suggest that the PTSRE model is a superior alternative for road site crash risk hotspot identification.

16. A note on moving average models for Gaussian random fields

DEFF Research Database (Denmark)

Hansen, Linda Vadgård; Thorarinsdottir, Thordis L.

The class of moving average models offers a flexible modeling framework for Gaussian random fields with many well known models such as the Matérn covariance family and the Gaussian covariance falling under this framework. Moving average models may also be viewed as a kernel smoothing of a Lévy...... basis, a general modeling framework which includes several types of non-Gaussian models. We propose a new one-parameter spatial correlation model which arises from a power kernel and show that the associated Hausdorff dimension of the sample paths can take any value between 2 and 3. As a result...

17. Simple, Efficient Estimators of Treatment Effects in Randomized Trials Using Generalized Linear Models to Leverage Baseline Variables

Science.gov (United States)

Rosenblum, Michael; van der Laan, Mark J.

2010-01-01

Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636
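For the special case of a Poisson working model with only an intercept and a treatment term, the MLE of the treatment coefficient reduces to a closed form, the marginal log rate ratio, which a short sketch can check on simulated data (the simulation setup below is illustrative, not from the paper):

```python
import math, random

def poisson(rng, lam):
    # Knuth's product-of-uniforms Poisson sampler (fine for small lam).
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p < threshold:
            return k
        k += 1

def marginal_log_rate_ratio(y, a):
    # For a main-terms Poisson model with intercept and binary treatment
    # only, the MLE of the treatment coefficient equals
    # log(mean(Y | A=1) / mean(Y | A=0)).
    y1 = [yi for yi, ai in zip(y, a) if ai == 1]
    y0 = [yi for yi, ai in zip(y, a) if ai == 0]
    return math.log((sum(y1) / len(y1)) / (sum(y0) / len(y0)))
```

With a true rate ratio of 2 the estimate lands near log(2), illustrating the unbiasedness claim even though the Poisson working model need not describe the true outcome distribution.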

18. Simple, efficient estimators of treatment effects in randomized trials using generalized linear models to leverage baseline variables.

Science.gov (United States)

Rosenblum, Michael; van der Laan, Mark J

2010-04-01

Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation.

19. First-principles calculations of impurity diffusion coefficients in dilute Mg alloys using the 8-frequency model

International Nuclear Information System (INIS)

Ganeshan, S.; Hector, L.G.; Liu, Z.-K.

2011-01-01

Research highlights: → Implemented the eight frequency model for impurity diffusion in hexagonal metals. → Model inputs were energetics/vibrational properties from first principles. → Predicted diffusion coefficients for Al, Ca, Zn and Sn impurity diffusion in Mg. → Successful prediction of partial correlation factors and jump frequencies. → Good agreement between calculated and experimental results. - Abstract: Diffusion in dilute Mg-X alloys, where X denotes Al, Zn, Sn and Ca impurities, was investigated with first-principles density functional theory in the local density approximation. Impurity diffusion coefficients were computed as a function of temperature using the 8-frequency model which provided the relevant impurity and solvent (Mg) jump frequencies and correlation factors. Minimum energy pathways for impurity diffusion and associated saddle point structures were computed with the climbing image nudged elastic band method. Vibrational properties were obtained with the supercell (direct) method for lattice dynamics. Calculated diffusion coefficients were compared with available experimental data. For diffusion between basal planes, we find D Mg-Ca > D Mg-Zn > D Mg-Sn > D Mg-Al, where D is the diffusion coefficient. For diffusion within a basal plane, the same trend holds except that D Mg-Zn overlaps with D Mg-Al at high temperatures and D Mg-Sn at low temperatures. These trends were explored with charge density contours in selected planes of each Mg-X alloy, the variation of the activation energy for diffusion with the atomic radius of each impurity and the electronic density of states. The theoretical methodology developed herein can be applied to impurity diffusion in other hexagonal materials.

20. A kinetic model of droplet heating and evaporation: Effects of inelastic collisions and a non-unity evaporation coefficient

KAUST Repository

Sazhin, Sergei S.

2013-01-01

The previously developed kinetic model for droplet heating and evaporation into a high pressure air is generalised to take into account the combined effects of inelastic collisions between molecules in the kinetic region, a non-unity evaporation coefficient and temperature gradient inside droplets. It is pointed out that for the parameters typical for Diesel engine-like conditions, the heat flux in the kinetic region is a linear function of the vapour temperature at the outer boundary of this region, but practically does not depend on vapour density at this boundary for all models, including and not including the effects of inelastic collisions, and including and not including the effects of a non-unity evaporation coefficient. For any given temperature at the outer boundary of the kinetic region the values of the heat flux are shown to decrease with increasing numbers of internal degrees of freedom of the molecules. The rate of this decrease is strong for small numbers of these degrees of freedom but negligible when the number of these degrees exceeds 20. This allows us to restrict the analysis to the first 20 arbitrarily chosen degrees of freedom of n-dodecane molecules when considering the effects of inelastic collisions. The mass flux at this boundary decreases almost linearly with increasing vapour density at the same location for all above-mentioned models. For any given vapour density at the outer boundary of the kinetic region the values of the mass flux are smaller for the model, taking into account the contribution of internal degrees of freedom, than for the model ignoring these degrees of freedom. It is shown that the effects of inelastic collisions lead to stronger increase in the predicted droplet evaporation time in Diesel engine-like conditions relative to the hydrodynamic model, compared with the similar increase predicted by the kinetic model considering only elastic collisions. The effects of a non-unity evaporation coefficient are shown to be

1. The hard-core model on random graphs revisited

International Nuclear Information System (INIS)

Barbier, Jean; Krzakala, Florent; Zhang, Pan; Zdeborová, Lenka

2013-01-01

We revisit the classical hard-core model, also known as the independent set problem and dual to the vertex cover problem, where one puts particles with a first-neighbor hard-core repulsion on the vertices of a random graph. Although the cases of random graphs with small and with very large average degrees are quite well understood, they yield qualitatively different results, and our aim here is to reconcile these two cases. We revisit results that can be obtained using the (heuristic) cavity method and show that it provides a closed-form conjecture for the exact density of the densest packing on random regular graphs with degree K ≥ 20, and that for K > 16 the nature of the phase transition is the same as for large K. This also shows that the hard-core model is the simplest mean-field lattice model for structural glasses and jamming.

2. Lamplighter model of a random copolymer adsorption on a line

Directory of Open Access Journals (Sweden)

L.I. Nazarov

2014-09-01

Full Text Available We present a model of an AB-diblock random copolymer sequential self-packaging with local quenched interactions on a one-dimensional infinite sticky substrate. It is assumed that the A-A and B-B contacts are favorable, while A-B are not. The position of a newly added monomer is selected in view of the local contact energy minimization. The model demonstrates a self-organization behavior with a nontrivial dependence of the total energy E (the number of unfavorable contacts) on the number of chain monomers N: E ~ N^(3/4) for a quenched random equally probable distribution of A- and B-monomers along the chain. The model is treated by mapping it onto the "lamplighter" random walk and the diffusion-controlled chemical reaction of X+X → 0 type with the subdiffusive motion of reagents.

3. Some Limits Using Random Slope Models to Measure Academic Growth

Directory of Open Access Journals (Sweden)

Daniel B. Wright

2017-11-01

Full Text Available Academic growth is often estimated using a random slope multilevel model with several years of data. However, if there are few time points, the estimates can be unreliable. While using random slope multilevel models can lower the variance of the estimates, these procedures can produce more highly erroneous estimates—zero and negative correlations with the true underlying growth—than using ordinary least squares estimates calculated for each student or school individually. An example is provided where schools with increasing graduation rates are estimated to have negative growth and vice versa. The estimation is worse when the underlying data are skewed. It is recommended that at least six time points be used when estimating growth with a random slope model. A combination of methods can be used to avoid some of the aberrant results if it is not possible to have six or more time points.
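The per-student (or per-school) ordinary least squares alternative mentioned above amounts to fitting a separate slope to each unit's time series, e.g.:

```python
def ols_slope(ts, ys):
    # Ordinary least-squares slope for a single unit's time series:
    # sum((t - tbar)(y - ybar)) / sum((t - tbar)^2).
    n = len(ts)
    tbar = sum(ts) / n
    ybar = sum(ys) / n
    num = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
    den = sum((t - tbar) ** 2 for t in ts)
    return num / den
```

Unlike the random slope model, these per-unit estimates are not shrunk toward the population mean, so they are noisier but cannot flip the sign of a clearly increasing trend.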

4. The random field Blume-Capel model revisited

Science.gov (United States)

Santos, P. V.; da Costa, F. A.; de Araújo, J. M.

2018-04-01

We have revisited the mean-field treatment of the Blume-Capel model in the presence of a discrete random magnetic field as introduced by Kaufman and Kanner (1990). The magnetic field (H) versus temperature (T) phase diagrams for given values of the crystal field D were recovered in accordance with Kaufman and Kanner's original work. However, our main goal in the present work was to investigate the distinct structures of the crystal field versus temperature phase diagrams as the random magnetic field is varied, because similar models have presented reentrant phenomena due to randomness. Following previous works we have classified the distinct phase diagrams according to five different topologies. The topological structure of the phase diagrams is maintained for both the H - T and D - T cases. Although the phase diagrams exhibit a richness of multicritical phenomena, we did not find any reentrant effect such as has been seen in similar models.

5. Establishment of detailed eye model of adult chinese male and dose conversion coefficients calculation under neutron exposure

International Nuclear Information System (INIS)

Zhu, Hongyu; Qiu, Rui; Ren, Li; Zhang, Hui; Li, Junli; Wu, Zhen; Li, Chunyan

2017-01-01

The human eye lens is sensitive to radiation. The ICRP-118 publication recommended a reduction of the occupational annual equivalent dose limit from 150 to 20 mSv, averaged over defined periods of 5 y. Therefore, it is very important to build a detailed eye model for accurate dose assessment and radiation risk evaluation of the eye lens. In this work, a detailed eye model was built based on the characteristic anatomic parameters of the Chinese adult male. This eye model includes seven main structures: sclera, choroid, lens, iris, cornea, vitreous body and aqueous humor. The lens was divided into a sensitive volume and an insensitive volume based on different cell populations. The detailed eye model was incorporated into the converted polygon-mesh version of the Chinese reference adult male whole-body surface model. After the incorporation, dose conversion coefficients for the eye lens were calculated with Geant4 for neutron exposure at AP, PA and LAT geometries, for neutron energies from 0.001 eV to 10 MeV. The calculated lens dose coefficients were compared with those of the ICRP-116 publication. Significant differences of up to 97.47% were found at the PA geometry. This can mainly be attributed to differences in the eye-model geometry and head parameters between the phantom used in the present work and that of the ICRP-116 publication. (authors)

6. ESTABLISHMENT OF DETAILED EYE MODEL OF ADULT CHINESE MALE AND DOSE CONVERSION COEFFICIENTS CALCULATION UNDER NEUTRON EXPOSURE.

Science.gov (United States)

Zhu, Hongyu; Qiu, Rui; Wu, Zhen; Ren, Li; Li, Chunyan; Zhang, Hui; Li, Junli

2017-12-01

7. Dose conversion coefficients for monoenergetic electrons incident on a realistic human eye model with different lens cell populations.

Science.gov (United States)

Nogueira, P; Zankl, M; Schlattl, H; Vaz, P

2011-11-07

The radiation-induced posterior subcapsular cataract has long been generally accepted to be a deterministic effect that does not occur at doses below a threshold of at least 2 Gy. Recent epidemiological studies indicate that the threshold for cataract induction may be much lower or that there may be no threshold at all. A thorough study of this subject requires more accurate dose estimates for the eye lens than those available in ICRP Publication 74. Eye lens absorbed dose per unit fluence conversion coefficients for electron irradiation were calculated using a geometrical model of the eye that takes into account different cell populations of the lens epithelium, together with the MCNPX Monte Carlo radiation transport code package. For the cell population most sensitive to ionizing radiation-the germinative cells-absorbed dose per unit fluence conversion coefficients were determined that are up to a factor of 4.8 higher than the mean eye lens absorbed dose conversion coefficients for electron energies below 2 MeV. Comparison of the results with previously published values for a slightly different eye model showed generally good agreement for all electron energies. Finally, the influence of individual anatomical variability was quantified by positioning the lens at various depths below the cornea. A depth difference of 2 mm between the shallowest and the deepest location of the germinative zone can lead to a difference between the resulting absorbed doses of up to nearly a factor of 5000 for electron energy of 0.7 MeV.

8. Three-dimensional transport coefficient model and prediction-correction numerical method for thermal margin analysis of PWR cores

International Nuclear Information System (INIS)

Chiu, C.

1981-01-01

Combustion Engineering Inc. designs its modern PWR reactor cores using open-core thermal-hydraulic methods where the mass, momentum and energy equations are solved in three dimensions (one axial and two lateral directions). The resultant fluid properties are used to compute the minimum Departure from Nucleate Boiling Ratio (DNBR), which ultimately sets the power capability of the core. The on-line digital monitoring and protection systems require a small, fast-running algorithm of the design code. This paper presents two techniques used in the development of the on-line DNB algorithm. First, a three-dimensional transport coefficient model is introduced to radially group the flow subchannels into channels for the thermal-hydraulic fluid property calculation. Conservation equations of mass, momentum and energy for these channels are derived using transport coefficients to modify the calculation of the radial transport of enthalpy and momentum. Second, a simplified, non-iterative numerical method, called the prediction-correction method, is applied together with the transport coefficient model to reduce the computer execution time in the determination of fluid properties. Comparison of the algorithm and the design thermal-hydraulic code shows agreement to within 0.65% equivalent power at a 95/95 confidence/probability level for all normal operating conditions of the PWR core. This algorithm accuracy is achieved with 1/800th of the computer processing time of its parent design code. (orig.)

9. Dose conversion coefficients for monoenergetic electrons incident on a realistic human eye model with different lens cell populations

International Nuclear Information System (INIS)

Nogueira, P; Vaz, P; Zankl, M; Schlattl, H

2011-01-01

The radiation-induced posterior subcapsular cataract has long been generally accepted to be a deterministic effect that does not occur at doses below a threshold of at least 2 Gy. Recent epidemiological studies indicate that the threshold for cataract induction may be much lower or that there may be no threshold at all. A thorough study of this subject requires more accurate dose estimates for the eye lens than those available in ICRP Publication 74. Eye lens absorbed dose per unit fluence conversion coefficients for electron irradiation were calculated using a geometrical model of the eye that takes into account different cell populations of the lens epithelium, together with the MCNPX Monte Carlo radiation transport code package. For the cell population most sensitive to ionizing radiation-the germinative cells-absorbed dose per unit fluence conversion coefficients were determined that are up to a factor of 4.8 higher than the mean eye lens absorbed dose conversion coefficients for electron energies below 2 MeV. Comparison of the results with previously published values for a slightly different eye model showed generally good agreement for all electron energies. Finally, the influence of individual anatomical variability was quantified by positioning the lens at various depths below the cornea. A depth difference of 2 mm between the shallowest and the deepest location of the germinative zone can lead to a difference between the resulting absorbed doses of up to nearly a factor of 5000 for electron energy of 0.7 MeV.

10. Effects of random noise in a dynamical model of love

Energy Technology Data Exchange (ETDEWEB)

Xu Yong, E-mail: hsux3@nwpu.edu.cn [Department of Applied Mathematics, Northwestern Polytechnical University, Xi' an 710072 (China); Gu Rencai; Zhang Huiqing [Department of Applied Mathematics, Northwestern Polytechnical University, Xi' an 710072 (China)

2011-07-15

Highlights: > We model the complexity and unpredictability of psychology as Gaussian white noise. > The stochastic system of love is considered including bifurcation and chaos. > We show that noise can both suppress and induce chaos in dynamical models of love. - Abstract: This paper aims to investigate the stochastic model of love and the effects of random noise. We first revisit the deterministic model of love and present some basic properties such as symmetry, dissipation, fixed points (equilibria), chaotic behaviors and chaotic attractors. Then we construct a stochastic love-triangle model with parametric random excitation due to the complexity and unpredictability of the psychological system, where the randomness is modeled as standard Gaussian noise. Stochastic dynamics under three different cases of 'Romeo's romantic style' are examined, and two kinds of bifurcations versus the noise intensity parameter are observed by the criteria of changes of the top Lyapunov exponent and of the shape of the stationary probability density function (PDF), respectively. Phase portraits and time histories verify the proposed results, and good agreement is found. The dual roles of the random noise, namely suppressing and inducing chaos, are also revealed.
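The kind of stochastic integration described (a deterministic love model driven by Gaussian noise) can be sketched with an Euler-Maruyama step. The linear Romeo-Juliet drift and all parameter values below are illustrative assumptions, not the love-triangle model of the paper:

```python
import random

def simulate_love(a=-0.1, b=0.3, c=-0.3, d=-0.1, eps=0.05,
                  r0=1.0, j0=0.0, dt=0.01, steps=1000, seed=42):
    # Euler-Maruyama integration of a toy linear Romeo-Juliet system:
    #   dR = (a*R + b*J) dt + eps dW1,   dJ = (c*R + d*J) dt + eps dW2.
    # Gaussian increments have standard deviation sqrt(dt).
    rng = random.Random(seed)
    r, j = r0, j0
    for _ in range(steps):
        dw1 = rng.gauss(0.0, dt ** 0.5)
        dw2 = rng.gauss(0.0, dt ** 0.5)
        r, j = (r + (a * r + b * j) * dt + eps * dw1,
                j + (c * r + d * j) * dt + eps * dw2)
    return r, j
```

With the chosen drift matrix the deterministic system spirals into the origin, so the noisy trajectories fluctuate around a damped spiral rather than diverging.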

11. Effects of random noise in a dynamical model of love

International Nuclear Information System (INIS)

Xu Yong; Gu Rencai; Zhang Huiqing

2011-01-01

Highlights: → We model the complexity and unpredictability of psychology as Gaussian white noise. → The stochastic system of love is considered including bifurcation and chaos. → We show that noise can both suppress and induce chaos in dynamical models of love. - Abstract: This paper aims to investigate the stochastic model of love and the effects of random noise. We first revisit the deterministic model of love and present some basic properties such as symmetry, dissipation, fixed points (equilibria), chaotic behaviors and chaotic attractors. Then we construct a stochastic love-triangle model with parametric random excitation due to the complexity and unpredictability of the psychological system, where the randomness is modeled as standard Gaussian noise. Stochastic dynamics under three different cases of 'Romeo's romantic style' are examined, and two kinds of bifurcations versus the noise intensity parameter are observed by the criteria of changes of the top Lyapunov exponent and of the shape of the stationary probability density function (PDF), respectively. Phase portraits and time histories verify the proposed results, and good agreement is found. The dual roles of the random noise, namely suppressing and inducing chaos, are also revealed.

12. Homotopy perturbation transform method for pricing under pure diffusion models with affine coefficients

Directory of Open Access Journals (Sweden)

Claude Rodrigue Bambe Moutsinga

2018-01-01

Full Text Available Most existing multivariate models in finance are based on diffusion models. These models typically lead to the need of solving systems of Riccati differential equations. In this paper, we introduce an efficient method for solving systems of stiff Riccati differential equations. In this technique, a combination of Laplace transform and homotopy perturbation methods is considered as an algorithm for the exact solution of the nonlinear Riccati equations. The resulting technique is applied to solving stiff diffusion model problems that include interest rate models as well as two- and three-factor stochastic volatility models. We show that the present approach is relatively easy, efficient and highly accurate.
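As a numerical point of comparison for semi-analytical schemes such as the homotopy perturbation transform method, a scalar Riccati equation can be integrated with a classical Runge-Kutta baseline (this sketch is not the method of the paper):

```python
import math

def riccati_rk4(q0, q1, q2, y0, t_end, n=1000):
    # Classical fourth-order Runge-Kutta for the autonomous scalar
    # Riccati equation y' = q0 + q1*y + q2*y**2 with y(0) = y0.
    f = lambda y: q0 + q1 * y + q2 * y * y
    h = t_end / n
    y = y0
    for _ in range(n):
        k1 = f(y)
        k2 = f(y + 0.5 * h * k1)
        k3 = f(y + 0.5 * h * k2)
        k4 = f(y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return y
```

For q0 = 1, q1 = 0, q2 = -1 and y(0) = 0 the exact solution is y(t) = tanh(t), which makes a convenient accuracy check for any Riccati solver.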

13. Modelling the effective diffusion coefficient of anions in Callovo-Oxfordian argillite knowing the microstructure of the rock

International Nuclear Information System (INIS)

Diaz, N.

2009-01-01

After having presented the issue of radioactive waste storage, the concept of geological storage and its application in the Meuse/Haute-Marne underground laboratory, and described the Callovo-Oxfordian geological formation and the argillite transport properties, this research thesis aims at predicting these properties at the macroscopic scale for water and anions. A first part presents the different experimental means implemented to acquire the diffusion coefficients for the studied materials (Callovo-Oxfordian argillite and purified Puy illite), and the spatial organisation of minerals by LIBS probe-based mapping, to highlight a relationship between the rock microstructure and its macroscopic transport properties. The next part presents the models developed at the nanometre and micrometre scales to predict the diffusion coefficients. Experimental results are then compared with computed values.

14. Using Random Forest Models to Predict Organizational Violence

Science.gov (United States)

Levine, Burton; Bobashev, Georgly

2012-01-01

We present a methodology to assess the proclivity of an organization to commit violence against nongovernment personnel. We fitted a Random Forest model using the Minority at Risk Organizational Behavior (MAROS) dataset. The MAROS data are longitudinal, so individual observations are not independent. We propose a modification to the standard Random Forest methodology to account for the violation of the independence assumption. We present the results of the model fit, an example of predicting violence for an organization, and finally a summary of the forest in a "meta-tree."
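One simple way to make a forest-style ensemble respect clustered longitudinal data is to bootstrap whole organizations (clusters) rather than individual rows, so repeated observations of one organization stay together. The sketch below illustrates that idea with scikit-learn decision trees; it is an assumption-laden stand-in, not the authors' exact modification:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def cluster_forest(X, y, groups, n_trees=50, seed=0):
    """Random-forest-style ensemble whose bootstrap resamples whole
    clusters (e.g. organizations) instead of individual rows."""
    rng = np.random.default_rng(seed)
    uniq = np.unique(groups)
    trees = []
    for _ in range(n_trees):
        chosen = rng.choice(uniq, size=len(uniq), replace=True)
        idx = np.concatenate([np.flatnonzero(groups == g) for g in chosen])
        t = DecisionTreeClassifier(max_features="sqrt",
                                   random_state=int(rng.integers(1 << 31)))
        t.fit(X[idx], y[idx])
        trees.append(t)
    return trees

def predict(trees, X):
    # majority vote across the ensemble
    votes = np.mean([t.predict(X) for t in trees], axis=0)
    return (votes >= 0.5).astype(int)

# toy longitudinal data: 20 organizations, 5 observations each (synthetic)
rng = np.random.default_rng(1)
groups = np.repeat(np.arange(20), 5)
X = rng.normal(size=(100, 3))
y = (X[:, 0] + 0.1 * rng.normal(size=100) > 0).astype(int)
trees = cluster_forest(X, y, groups)
print(predict(trees, X[:5]))
```

The cluster bootstrap keeps each tree's training set internally correlated in the same way the population is, which is the usual motivation for block/cluster resampling.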

15. Atmospheric stability-dependent infinite wind-farm models and the wake-decay coefficient

OpenAIRE

Peña, Alfredo; Rathmann, Ole

2014-01-01

We extend the infinite wind-farm boundary-layer (IWFBL) model of Frandsen to take into account atmospheric static stability effects. This extended model is compared with the IWFBL model of Emeis and to the Park wake model used in the Wind Atlas Analysis and Application Program (WAsP), which is computed for an infinite wind farm. The models show similar behavior for the wind-speed reduction when accounting for a number of surface roughness lengths, turbine-to-turbine separations and wind speeds und...

16. Hardware architecture for projective model calculation and false match refining using random sample consensus algorithm

Science.gov (United States)

2016-11-01

The projective model is an important mapping function for the calculation of the global transformation between two images. However, its hardware implementation is challenging because of the large number of coefficients with different required precisions for fixed-point representation. A VLSI hardware architecture is proposed for calculating a global projective model between input and reference images and for refining false matches using the random sample consensus (RANSAC) algorithm. To make the hardware implementation feasible, it is proved that the calculation of the projective model can be divided into four submodels comprising two translations, an affine model and a simpler projective mapping. This decomposition makes the hardware implementation feasible and considerably reduces the required number of bits for the fixed-point representation of model coefficients and intermediate variables. The proposed hardware architecture was implemented in the Verilog hardware description language and the functionality of the design was validated through several experiments. The architecture was synthesized with an application-specific integrated circuit digital design flow using 180-nm CMOS technology, as well as on a Virtex-6 field programmable gate array. Experimental results confirm the efficiency of the proposed hardware architecture in comparison with a software implementation.
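In software, the global projective model estimated by RANSAC is typically a 3x3 homography fitted from random 4-point samples. A minimal floating-point reference implementation (no fixed-point arithmetic or submodel decomposition, unlike the paper's hardware) might look like:

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform: homography H with dst ~ H @ src (homogeneous)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u*x, u*y, u])
        A.append([0, 0, 0, -x, -y, -1, v*x, v*y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)          # null vector of A
    return H / H[2, 2]

def project(H, pts):
    p = np.c_[pts, np.ones(len(pts))] @ H.T
    return p[:, :2] / p[:, 2:3]

def ransac_homography(src, dst, iters=500, tol=2.0, seed=0):
    """Random 4-point samples; consensus scored by reprojection error."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)
        H = fit_homography(src[idx], dst[idx])
        err = np.linalg.norm(project(H, src) - dst, axis=1)
        inliers = err < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit on the full consensus set to refine the model
    return fit_homography(src[best_inliers], dst[best_inliers]), best_inliers

# synthetic matches: 50 inliers under a known homography + 10 gross outliers
rng = np.random.default_rng(1)
H_true = np.array([[1.0, 0.02, 5.0], [0.01, 1.0, -3.0], [1e-4, 2e-4, 1.0]])
src = rng.uniform(0, 100, (60, 2))
dst = project(H_true, src)
dst[:10] += rng.uniform(20, 50, (10, 2))   # corrupt the first 10 matches
H_est, inliers = ransac_homography(src, dst)
print(inliers.sum())
```

The reprojection-error threshold plays the role of the match-refinement criterion mentioned in the abstract; false matches simply fail to join the consensus set.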

17. Improved models for the prediction of activity coefficients in nearly athermal mixtures: Part I. Empirical modifications of free-volume models

DEFF Research Database (Denmark)

Kontogeorgis, Georgios M.; Coutsikos, Philipos; Tassios, Dimitrios

1994-01-01

Mixtures containing exclusively normal, branched and cyclic alkanes, as well as saturated hydrocarbon polymers (e.g. poly(ethylene) and poly(isobutylene)), are known to exhibit almost athermal behavior. Several new activity coefficient models containing both combinatorial and free-volume contributions...

18. REMOTE SENSING AND SURFACE ENERGY FLUX MODELS TO DERIVE EVAPOTRANSPIRATION AND CROP COEFFICIENT

Directory of Open Access Journals (Sweden)

Salvatore Barbagallo

2008-06-01

Full Text Available Remote sensing techniques using high-resolution satellite images provide opportunities to evaluate daily crop water use and its spatial and temporal distribution on a field-by-field basis. Mapping this indicator with pixels a few meters in size over extended areas makes it possible to characterize different processes and parameters. Satellite data on vegetation reflectance, integrated with in-field measurements of canopy coverage features and the monitoring of energy fluxes through the soil-plant-atmosphere system, allow the estimation of conventional irrigation components (ET, Kc), thus improving irrigation strategies. In the study, satellite potential evapotranspiration (ETp) and crop coefficient (Kc) maps of orange orchards are derived using semi-empirical relationships between reflectance data from IKONOS imagery and ground measurements of vegetation features. The monitoring of energy fluxes through the orchard allows actual crop evapotranspiration (ETa) to be estimated using the energy balance and the Surface Renewal theory. The approach shows substantial promise as an efficient, accurate and relatively inexpensive procedure to predict actual ET fluxes and Kc for irrigated lands.

19. Factorisations for partition functions of random Hermitian matrix models

International Nuclear Information System (INIS)

Jackson, D.M.; Visentin, T.I.

1996-01-01

The partition function Z_N for Hermitian-complex matrix models can be expressed as an explicit integral over R^N, where N is a positive integer. Such an integral also occurs in connection with random surfaces and models of two-dimensional quantum gravity. We show that Z_N can be expressed as the product of two partition functions, evaluated at translated arguments, for another model, giving an explicit connection between the two models. We also give an alternative computation of the partition function for the φ^4-model. The approach is an algebraic one and holds for the functions regarded as formal power series in the appropriate ring. (orig.)
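For context, the explicit integral referred to is the standard eigenvalue representation of the Hermitian one-matrix partition function (a textbook form with the squared Vandermonde determinant, not necessarily the paper's exact notation):

```latex
Z_N \;\propto\; \int_{\mathbb{R}^N} \prod_{1 \le i < j \le N} (x_i - x_j)^2 \,
\exp\!\Bigl(-\sum_{i=1}^{N} V(x_i)\Bigr)\, dx_1 \cdots dx_N ,
```

where $V$ is the matrix potential; for the $\varphi^4$-model one takes a quartic potential such as $V(x) = \tfrac{1}{2}x^2 + g x^4$.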

20. Painleve analysis and transformations for a generalized two-dimensional variable-coefficient Burgers model from fluid mechanics, acoustics and cosmic-ray astrophysics

International Nuclear Information System (INIS)

Wei, Guang-Mei

2006-01-01

Generalized two-dimensional variable-coefficient Burgers model is of current value in fluid mechanics, acoustics and cosmic-ray astrophysics. In this paper, Painleve analysis leads to the constraints on the variable coefficients for such a model to pass the Painleve test and to an auto-Baecklund transformation. Moreover, four transformations from this model are constructed, to the standard two-dimensional and one-dimensional Burgers models with the relevant constraints on the variable coefficients via symbolic computation. By virtue of the given transformations the properties and solutions of this model can be obtained from those of the standard two-dimensional and one-dimensional ones

1. Drag Coefficient Comparisons Between Observed and Model Simulated Directional Wave Spectra Under Hurricane Conditions

Science.gov (United States)

2016-04-19

the Wave Model (WAM; Hasselmann et al., 1988), and Simulating Waves Nearshore (SWAN; Booij et al., 1999) ... [figure-caption residue: circle size represents the maximum wind speed of the hurricane; black lines in the vicinity of the hurricane track represent the aircraft ...; model spectra at the same locations are shown as contour maps with black contour lines]

2. Atmospheric stability-dependent infinite wind-farm models and the wake-decay coefficient

DEFF Research Database (Denmark)

Peña, Alfredo; Rathmann, Ole

2014-01-01

We extend the infinite wind-farm boundary-layer (IWFBL) model of Frandsen to take into account atmospheric static stability effects. This extended model is compared with the IWFBL model of Emeis and to the Park wake model used in the Wind Atlas Analysis and Application Program (WAsP), which is computed... (i) larger than the adjusted values for a wide range of neutral to stable atmospheric stability conditions, a number of roughness lengths and turbine separations lower than ~10 rotor diameters and (ii) too large compared with those obtained by a semiempirical formulation (relating the ratio of the friction...

3. Measurement of infinite dilution activity coefficient and application of modified ASOG model for solvent-polymer systems

Energy Technology Data Exchange (ETDEWEB)

Choi, B.; Choi, J. [Kwangwoon University, Seoul (Korea, Republic of); Tochigi, K.; Kojima, K. [Nihon University, Tokyo (Japan)

1996-04-20

A gas chromatographic method was used to measure vapor-liquid equilibria for solvent (1)-polymer (2) systems in which the polymers were polystyrene and poly(α-methyl)styrene and the solvents were benzene, toluene, cyclohexane, methyl isobutyl ketone, ethyl acetate, and vinyl acetate. The activity coefficients of the solvents for solvent (1)-polymer (2) systems were measured at infinite dilution, and a modified ASOG (Analytical Solution of Groups) model was proposed to describe the vapor-liquid equilibria of these systems over the temperature range 423.15 K to 498.15 K. The model consists of the original ASOG plus a free-volume term. The external degree of freedom in the free-volume term was empirically expressed as a function of temperature, C1 = {alpha} + {beta}/T. Each term in the modified ASOG model is based on the weight fraction. The external degree of freedom in the model was estimated from experimental data over this temperature range. The calculated infinite-dilution activity coefficients agreed with the experimental data within an error of 0.1%. 27 refs., 3 figs., 7 tabs.

4. LiDAR based prediction of forest biomass using hierarchical models with spatially varying coefficients

Science.gov (United States)

Chad Babcock; Andrew O. Finley; John B. Bradford; Randy Kolka; Richard Birdsey; Michael G. Ryan

2015-01-01

Many studies and production inventory systems have shown the utility of coupling covariates derived from Light Detection and Ranging (LiDAR) data with forest variables measured on georeferenced inventory plots through regression models. The objective of this study was to propose and assess the use of a Bayesian hierarchical modeling framework that accommodates both...

5. Statistical properties of several models of fractional random point processes

Science.gov (United States)

Bendjaballah, C.

2011-08-01

Statistical properties of several models of fractional random point processes have been analyzed from the counting and time interval statistics points of view. Based on the criterion of the reduced variance, it is seen that such processes exhibit nonclassical properties. The conditions for these processes to be treated as conditional Poisson processes are examined. Numerical simulations illustrate part of the theoretical calculations.

6. Statistical shape model with random walks for inner ear segmentation

DEFF Research Database (Denmark)

Pujadas, Esmeralda Ruiz; Kjer, Hans Martin; Piella, Gemma

2016-01-01

is required. We propose a new framework for segmentation of micro-CT cochlear images using random walks combined with a statistical shape model (SSM). The SSM allows us to constrain the less contrasted areas and ensures valid inner ear shape outputs. Additionally, a topology preservation method is proposed...

7. Asthma Self-Management Model: Randomized Controlled Trial

Science.gov (United States)

Olivera, Carolina M. X.; Vianna, Elcio Oliveira; Bonizio, Roni C.; de Menezes, Marcelo B.; Ferraz, Erica; Cetlin, Andrea A.; Valdevite, Laura M.; Almeida, Gustavo A.; Araujo, Ana S.; Simoneti, Christian S.; de Freitas, Amanda; Lizzi, Elisangela A.; Borges, Marcos C.; de Freitas, Osvaldo

2016-01-01

Information for patients provided by the pharmacist is reflected in adherence to treatment, clinical results and patient quality of life. The objective of this study was to assess an asthma self-management model for rational medicine use. This was a randomized controlled trial with 60 asthmatic patients assigned to attend five modules presented by…

8. The dilute random field Ising model by finite cluster approximation

International Nuclear Information System (INIS)

Benyoussef, A.; Saber, M.

1987-09-01

Using the finite cluster approximation, phase diagrams of bond and site diluted three-dimensional simple cubic Ising models with a random field have been determined. The resulting phase diagrams have the same general features for both bond and site dilution. (author). 7 refs, 4 figs

9. Learning of couplings for random asymmetric kinetic Ising models revisited: random correlation matrices and learning curves

International Nuclear Information System (INIS)

Bachschmid-Romano, Ludovica; Opper, Manfred

2015-01-01

We study analytically the performance of a recently proposed algorithm for learning the couplings of a random asymmetric kinetic Ising model from finite length trajectories of the spin dynamics. Our analysis shows the importance of the nontrivial equal time correlations between spins induced by the dynamics for the speed of learning. These correlations become more important as the spin’s stochasticity is decreased. We also analyse the deviation of the estimation error (paper)

10. Gas/aerosol Partitioning Parameterisation For Global Modelling: A Physical Interpretation of The Relationship Between Activity Coefficients and Relative Humidity

Science.gov (United States)

Metzger, S.; Dentener, F. J.; Lelieveld, J.; Pandis, S. N.

A computationally efficient model (EQSAM) to calculate gas/aerosol partitioning of semi-volatile inorganic aerosol components has been developed for use in global atmospheric chemistry and climate models; presented at the EGS 2001. We introduce and discuss here the physics behind the parameterisation upon which the EQuilibrium Simplified Aerosol Model (EQSAM) is based. The parameterisation, which approximates the activity coefficient calculation sufficiently accurately for global modelling, is based on a method that directly relates aerosol activity coefficients to the ambient relative humidity, assuming chemical equilibrium. It therefore provides an interesting alternative to the computationally expensive iterative activity coefficient calculation methods presently used in thermodynamic gas/aerosol equilibrium models (EQMs). The parameterisation can, however, also be used in dynamical models that calculate mass transfer between the liquid/solid aerosol phases and the gas phase explicitly; dynamical models often incorporate an EQM to calculate the aerosol composition. The gain of the parameterisation is that the entire system of gas/aerosol equilibrium partitioning can be solved non-iteratively, a substantial advantage in global modelling. Since we have already demonstrated at the EGS 2001 that EQSAM yields similar results to current state-of-the-art equilibrium models, we focus here on a discussion of our physical interpretation of the parameterisation; the identification of the parameters needed is crucial. Given the lack of reliable data, the best way to thoroughly validate the parameterisation for global modelling applications is implementation in current state-of-the-art gas/aerosol partitioning routines, which are embedded in e.g. a global atmospheric chemistry transport model, by comparing the results of the parameterisation against those based on the widely used activity coefficient calculation methods (i.e. Bromley, Kusik-Meissner or Pitzer). Then

11. Quantum random oracle model for quantum digital signature

Science.gov (United States)

Shang, Tao; Lei, Qi; Liu, Jianwei

2016-10-01

The goal of this work is to provide a general security analysis tool, namely, the quantum random oracle (QRO), for facilitating the security analysis of quantum cryptographic protocols, especially protocols based on quantum one-way function. QRO is used to model quantum one-way function and different queries to QRO are used to model quantum attacks. A typical application of quantum one-way function is the quantum digital signature, whose progress has been hampered by the slow pace of the experimental realization. Alternatively, we use the QRO model to analyze the provable security of a quantum digital signature scheme and elaborate the analysis procedure. The QRO model differs from the prior quantum-accessible random oracle in that it can output quantum states as public keys and give responses to different queries. This tool can be a test bed for the cryptanalysis of more quantum cryptographic protocols based on the quantum one-way function.

12. Oracle Efficient Variable Selection in Random and Fixed Effects Panel Data Models

DEFF Research Database (Denmark)

Kock, Anders Bredahl

This paper generalizes the results for the Bridge estimator of Huang et al. (2008) to linear random and fixed effects panel data models which are allowed to grow in both dimensions. In particular we show that the Bridge estimator is oracle efficient: it can correctly distinguish between relevant and irrelevant variables, and the asymptotic distribution of the estimators of the coefficients of the relevant variables is the same as if only these had been included in the model, i.e. as if an oracle had revealed the true model prior to estimation. In the case of more explanatory variables than observations, we prove that the Marginal Bridge estimator can asymptotically correctly distinguish between relevant and irrelevant explanatory variables. We do this without restricting the dependence between covariates and without assuming sub-Gaussianity of the error terms, thereby generalizing the results...
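For reference, the Bridge estimator of Huang et al. (2008) penalizes the least-squares criterion with a concave power of the coefficients (standard definition; the notation here is ours, not the paper's):

```latex
\hat{\beta} \;=\; \arg\min_{\beta}\; \sum_{i=1}^{n}\bigl(y_i - x_i'\beta\bigr)^2
\;+\; \lambda_n \sum_{j=1}^{p} \lvert \beta_j \rvert^{\gamma},
\qquad 0 < \gamma < 1 .
```

With $0<\gamma<1$ the penalty is nonconvex and can shrink irrelevant coefficients exactly to zero, which is the mechanism behind the oracle property discussed in the abstract.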

13. Cap integration in spectral gravity forward modelling: near- and far-zone gravity effects via Molodensky's truncation coefficients

Science.gov (United States)

Bucha, Blažej; Hirt, Christian; Kuhn, Michael

2018-04-01

Spectral gravity forward modelling is a technique that converts a band-limited topography into its implied gravitational field. This conversion implicitly relies on global integration of topographic masses. In this paper, a modification of the spectral technique is presented that provides gravity effects induced only by the masses located inside or outside a spherical cap centred at the evaluation point. This is achieved by altitude-dependent Molodensky's truncation coefficients, for which we provide infinite series expansions and recurrence relations with a fixed number of terms. Both representations are generalized for an arbitrary integer power of the topography and arbitrary radial derivative. Because of the altitude-dependency of the truncation coefficients, a straightforward synthesis of the near- and far-zone gravity effects at dense grids on irregular surfaces (e.g. the Earth's topography) is computationally extremely demanding. However, we show that this task can be efficiently performed using an analytical continuation based on the gradient approach, provided that formulae for radial derivatives of the truncation coefficients are available. To demonstrate the new cap-modified spectral technique, we forward model the Earth's degree-360 topography, obtaining near- and far-zone effects on gravity disturbances expanded up to degree 3600. The computation is carried out on the Earth's surface and the results are validated against an independent spatial-domain Newtonian integration (1 μGal RMS agreement). The new technique is expected to assist in mitigating the spectral filter problem of residual terrain modelling and in the efficient construction of full-scale global gravity maps of highest spatial resolution.

14. Investigating Facebook Groups through a Random Graph Model

OpenAIRE

Dinithi Pallegedara; Lei Pan

2014-01-01

Facebook disseminates messages for billions of users every day. Though there are log files stored on central servers, law enforcement agencies outside of the U.S. cannot easily acquire server log files from Facebook. This work models Facebook user groups using a random graph model. Our aim is to help investigators quickly estimate the size of a Facebook group with which a suspect is involved. We estimate this group size according to the number of immediate friends and the number of ext...

15. development of a new drag coefficient model for oil and gas

African Journals Online (AJOL)

eobe

[Extraction fragments] ... approximation of experimental data for Re ... dynamic conditions in order to evaluate the drag ... based on experimental data for multiphase water-oil-gas flow ... Figure 6: Comparison of measured and model predictions.

16. Stochastic geometry, spatial statistics and random fields models and algorithms

CERN Document Server

2015-01-01

Providing a graduate level introduction to various aspects of stochastic geometry, spatial statistics and random fields, this volume places a special emphasis on fundamental classes of models and algorithms as well as on their applications, for example in materials science, biology and genetics. This book has a strong focus on simulations and includes extensive codes in Matlab and R, which are widely used in the mathematical community. It can be regarded as a continuation of the recent volume 2068 of Lecture Notes in Mathematics, where other issues of stochastic geometry, spatial statistics and random fields were considered, with a focus on asymptotic methods.

17. A simple model for retrieving bare soil moisture from radar-scattering coefficients

International Nuclear Information System (INIS)

Chen, K.S.; Yen, S.K.; Huang, W.P.

1995-01-01

A simple algorithm based on a rough surface scattering model was developed to invert the bare soil moisture content from active microwave remote sensing data. In the algorithm development, a frequency mixing model was used to relate soil moisture to the dielectric constant. In particular, the Integral Equation Model (IEM) was used over a wide range of surface roughness and radar frequencies. To derive the algorithm, a sensitivity analysis was performed using a Monte Carlo simulation to study the effects of surface parameters, including height variance, correlation length, and dielectric constant. Because radar return is inherently dependent on both moisture content and surface roughness, the purpose of the sensitivity testing was to select the proper radar parameters so as to optimally decouple these two factors, in an attempt to minimize the effects of one while the other was observed. As a result, the optimal radar parameter ranges can be chosen for the purpose of soil moisture content inversion. One thousand samples were then generated with the IEM model followed by multivariate linear regression analysis to obtain an empirical soil moisture model. Numerical comparisons were made to illustrate the inversion performance using experimental measurements. Results indicate that the present algorithm is simple and accurate, and can be a useful tool for the remote sensing of bare soil surfaces. (author)
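The final step of such an algorithm, regressing moisture on simulated backscatter and roughness descriptors, can be sketched as follows. The "forward model" here is a made-up analytic stand-in for the IEM, and the regressor choice is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for IEM Monte Carlo samples (NOT the real IEM):
# moisture mv, rms height s, correlation length l -> backscatter sigma0 (dB)
n = 1000
mv = rng.uniform(0.05, 0.40, n)       # volumetric soil moisture
s = rng.uniform(0.5, 3.0, n)          # rms surface height (cm)
l = rng.uniform(5.0, 20.0, n)         # correlation length (cm)
sigma0 = -20 + 25*mv + 3*np.log(s) - 1.5*np.log(l) + rng.normal(0, 0.3, n)

# Inversion: multivariate linear regression of moisture on backscatter
# and roughness terms, mirroring the empirical-model step of the abstract
X = np.column_stack([np.ones(n), sigma0, np.log(s), np.log(l)])
coef, *_ = np.linalg.lstsq(X, mv, rcond=None)
rmse = np.sqrt(np.mean((X @ coef - mv) ** 2))
print(rmse)
```

In the real workflow the 1000 samples would come from IEM runs at the radar parameters chosen by the sensitivity analysis, and the roughness terms would ideally be decoupled from moisture by that parameter choice.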

18. Modeling of chromosome intermingling by partially overlapping uniform random polygons.

Science.gov (United States)

Blackstone, T; Scharein, R; Borgo, B; Varela, R; Diao, Y; Arsuaga, J

2011-03-01

During the early phase of the cell cycle the eukaryotic genome is organized into chromosome territories. The geometry of the interface between any two chromosomes remains a matter of debate and may have important functional consequences. The Interchromosomal Network model (introduced by Branco and Pombo) proposes that territories intermingle along their periphery. In order to partially quantify this concept we here investigate the probability that two chromosomes form an unsplittable link. We use the uniform random polygon as a crude model for chromosome territories and we model the interchromosomal network as the common spatial region of two overlapping uniform random polygons. This simple model allows us to derive some rigorous mathematical results as well as to perform computer simulations easily. We find that the probability that one uniform random polygon of length n that partially overlaps a fixed polygon forms such a link is bounded below by 1 − O(1/√n). We use numerical simulations to estimate the dependence of the linking probability of two uniform random polygons (of lengths n and m, respectively) on the amount of overlapping. The degree of overlapping is parametrized by a parameter ε such that [Formula: see text] indicates no overlapping and [Formula: see text] indicates total overlapping. We propose that this dependence relation may be modeled as f(ε, m, n) = [Formula: see text]. Numerical evidence shows that this model works well when ε is relatively large (ε ≥ 0.5). We then use these results to model the data published by Branco and Pombo and observe that for the amount of overlapping observed experimentally the URPs have a non-zero probability of forming an unsplittable link.
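Generating uniform random polygons and controlling their overlap is straightforward to simulate. The sketch below uses one plausible reading of the overlap parameter (shifting the second polygon's sampling cube along one axis), which may differ from the paper's exact definition; computing linking numbers is omitted:

```python
import numpy as np

def uniform_random_polygon(n, shift=0.0, seed=None):
    """Closed polygon through n i.i.d. uniform vertices in the unit cube,
    optionally shifted along the x-axis (the URP territory model)."""
    rng = np.random.default_rng(seed)
    verts = rng.uniform(0.0, 1.0, (n, 3))
    verts[:, 0] += shift
    return verts

# Shifting the second sampling cube by 1 - eps along x leaves an overlap
# slab of width eps, so a fraction ~eps of its vertices fall in the first cube.
eps, n = 0.5, 4000
B = uniform_random_polygon(n, shift=1.0 - eps, seed=1)
frac = np.mean(B[:, 0] <= 1.0)
print(frac)  # ~ eps
```

Estimating the linking probability itself would additionally require a Gauss linking-number computation over the two edge sets, which is the expensive part of such simulations.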

19. A weakly nonlinear model with exact coefficients for the fluttering and spiraling motions of buoyancy-driven bodies

Science.gov (United States)

Magnaudet, Jacques; Tchoufag, Joel; Fabre, David

2015-11-01

Gravity/buoyancy-driven bodies moving in a slightly viscous fluid frequently follow fluttering or helical paths. Current models of such systems are largely empirical and fail to predict several of the key features of their evolution, especially close to the onset of path instability. Using a weakly nonlinear expansion of the full set of governing equations, we derive a new generic reduced-order model of this class of phenomena based on a pair of amplitude equations with exact coefficients that drive the evolution of the first pair of unstable modes. We show that the predictions of this model for the style (eg. fluttering or spiraling) and characteristics (eg. frequency and maximum inclination angle) of path oscillations compare well with various recent data for both solid disks and air bubbles.

20. Weakly Nonlinear Model with Exact Coefficients for the Fluttering and Spiraling Motion of Buoyancy-Driven Bodies

Science.gov (United States)

Tchoufag, Joël; Fabre, David; Magnaudet, Jacques

2015-09-01

Gravity- or buoyancy-driven bodies moving in a slightly viscous fluid frequently follow fluttering or helical paths. Current models of such systems are largely empirical and fail to predict several of the key features of their evolution, especially close to the onset of path instability. Here, using a weakly nonlinear expansion of the full set of governing equations, we present a new generic reduced-order model based on a pair of amplitude equations with exact coefficients that drive the evolution of the first pair of unstable modes. We show that the predictions of this model for the style (e.g., fluttering or spiraling) and characteristics (e.g., frequency and maximum inclination angle) of path oscillations compare well with various recent data for both solid disks and air bubbles.

1. A generalized model via random walks for information filtering

Science.gov (United States)

Ren, Zhuo-Ming; Kong, Yixiu; Shang, Ming-Sheng; Zhang, Yi-Cheng

2016-08-01

There could exist a simple general mechanism lurking beneath collaborative filtering and the interdisciplinary physics approaches that have been successfully applied to online E-commerce platforms. Motivated by this idea, we propose a generalized model employing the dynamics of a random walk on bipartite networks. Taking into account degree information, the proposed generalized model can recover collaborative filtering, the interdisciplinary physics approaches, and numerous extensions of them. Furthermore, we analyze the generalized model with single and hybrid degree information in the random-walk process on bipartite networks, and propose a possible strategy that uses hybrid degree information for objects of different popularity to achieve promising recommendation precision.
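A concrete instance of such a two-step random walk on the user-object bipartite network is the classical probabilistic-spreading (ProbS) recommender, which this family of models contains as a special case. A minimal sketch:

```python
import numpy as np

def probs_scores(A, user):
    """ProbS recommendation scores via a two-step random walk:
    resource flows objects -> users -> objects, divided equally by degree.
    A: binary user x object adjacency matrix."""
    A = A.astype(float)
    k_user = np.maximum(A.sum(axis=1, keepdims=True), 1)  # user degrees
    k_obj = np.maximum(A.sum(axis=0, keepdims=True), 1)   # object degrees
    W = ((A / k_user).T @ A) / k_obj   # object-to-object transfer matrix
    scores = W @ A[user]
    scores[A[user] > 0] = -np.inf      # do not re-recommend collected objects
    return scores

# tiny example: 3 users x 4 objects
A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]])
print(probs_scores(A, 0))
```

Weighting the two division steps by different powers of the user and object degrees is exactly the "hybrid degree information" knob the abstract refers to; ProbS corresponds to equal division at both steps.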

2. Longitudinal dispersion coefficients for numerical modeling of groundwater solute transport in heterogeneous formations

DEFF Research Database (Denmark)

Lee, Jonghyun; Rolle, Massimo; Kitanidis, Peter K.

2018-01-01

Most recent research on hydrodynamic dispersion in porous media has focused on whole-domain dispersion while other research is largely on laboratory-scale dispersion. This work focuses on the contribution of a single block in a numerical model to dispersion. Variability of fluid velocity and conc...

3. Inventory implications of using sampling variances in estimation of growth model coefficients

Science.gov (United States)

Albert R. Stage; William R. Wykoff

2000-01-01

Variables based on stand densities or stocking have sampling errors that depend on the relation of tree size to plot size and on the spatial structure of the population. Ignoring the sampling errors of such variables, which include most measures of competition used in both distance-dependent and distance-independent growth models, can bias the predictions obtained from...

4. Quantitative coating thickness determination using a coefficient-independent hyperspectral scattering model

NARCIS (Netherlands)

2017-01-01

Background
Hyperspectral imaging is a technique that enables the mapping of spectral signatures across a surface. It is most commonly used for surface chemical mapping in fields as diverse as satellite remote sensing, biomedical imaging and heritage science. Existing models, such as the

5. Creating, generating and comparing random network models with NetworkRandomizer.

Science.gov (United States)

Tosadori, Gabriele; Bestvina, Ivan; Spoto, Fausto; Laudanna, Carlo; Scardoni, Giovanni

2016-01-01

Biological networks are becoming a fundamental tool for the investigation of high-throughput data in several fields of biology and biotechnology. With the increasing amount of information, network-based models are gaining more and more interest and new techniques are required to mine the information and to validate the results. To fill the validation gap we present an app for the Cytoscape platform which creates randomised networks and randomises existing, real networks. Since there is a lack of tools for performing such operations, our app enables researchers to exploit different, well-known random network models that can serve as benchmarks for validating real, biological datasets. We also propose a novel methodology for creating random weighted networks, i.e. the multiplication algorithm, starting from real, quantitative data. Finally, the app provides a statistical tool that compares real versus randomly computed attributes, in order to validate the numerical findings. In summary, our app aims at establishing a standardised methodology for the validation of results in the context of the Cytoscape platform.
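A common random-network benchmark of the kind such tools provide is degree-preserving randomization by double edge swaps: higher-order structure is scrambled while every node keeps its degree. A minimal stdlib sketch (not the NetworkRandomizer implementation):

```python
import random

def double_edge_swap(edges, nswap, seed=0):
    """Degree-preserving randomization: repeatedly pick two edges (a,b),(c,d)
    and rewire them to (a,d),(c,b) when that creates no self-loop or
    duplicate edge. Edges are undirected pairs of hashable node ids."""
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    edge_set = {frozenset(e) for e in edges}
    swaps = 0
    while swaps < nswap:
        i, j = rng.randrange(len(edges)), rng.randrange(len(edges))
        (a, b), (c, d) = edges[i], edges[j]
        if len({a, b, c, d}) < 4:                     # would create self-loop
            continue
        if frozenset((a, d)) in edge_set or frozenset((c, b)) in edge_set:
            continue                                  # would duplicate an edge
        edge_set -= {frozenset((a, b)), frozenset((c, d))}
        edge_set |= {frozenset((a, d)), frozenset((c, b))}
        edges[i], edges[j] = (a, d), (c, b)
        swaps += 1
    return edges

edges = [(i, (i + 1) % 10) for i in range(10)]   # a 10-cycle
randomized = double_edge_swap(edges, nswap=20)
print(randomized[:3])
```

Comparing a network statistic (e.g. clustering) between the real network and many such randomized replicas is the validation pattern the app automates.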

6. Development of polyparameter linear free energy relationship models for octanol-air partition coefficients of diverse chemicals.

Science.gov (United States)

Jin, Xiaochen; Fu, Zhiqiang; Li, Xuehua; Chen, Jingwen

2017-03-22

The octanol-air partition coefficient (K_OA) is a key parameter describing the partition behavior of organic chemicals between air and environmental organic phases. As the experimental determination of K_OA is costly, time-consuming and sometimes limited by the availability of authentic chemical standards for the compounds to be determined, it becomes necessary to develop credible predictive models for K_OA. In this study, a polyparameter linear free energy relationship (pp-LFER) model for predicting K_OA at 298.15 K and a novel model incorporating pp-LFERs with temperature (pp-LFER-T model) were developed from 795 log K_OA values for 367 chemicals at different temperatures (263.15-323.15 K), and were evaluated with the OECD guidelines on QSAR model validation and applicability domain description. Statistical results show that both models are well-fitted, robust and have good predictive capabilities. Particularly, the pp-LFER model shows a strong predictive ability for polyfluoroalkyl substances and organosilicon compounds, and the pp-LFER-T model maintains a high predictive accuracy within a wide temperature range (263.15-323.15 K).
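The Abraham-type pp-LFER underlying such models is a linear form, log K_OA = c + eE + sS + aA + bB + lL, with solute descriptors E, S, A, B, L and fitted system coefficients. A minimal sketch, using illustrative coefficients and descriptor values that are not the fitted values from this study:

```python
def log_koa_pplfer(E, S, A, B, L, coef):
    """Abraham-type pp-LFER: log K_OA = c + e*E + s*S + a*A + b*B + l*L."""
    c, e, s, a, b, l = coef
    return c + e * E + s * S + a * A + b * B + l * L

# Illustrative (NOT fitted) system coefficients (c, e, s, a, b, l):
coef = (-0.10, 0.45, 0.60, 3.50, 0.85, 0.90)

# Illustrative solute descriptors for a hypothetical aromatic compound.
val = log_koa_pplfer(E=0.82, S=0.52, A=0.0, B=0.14, L=4.69, coef=coef)
print(round(val, 2))
```

In practice the six system coefficients are regressed against measured log K_OA values, and for a pp-LFER-T variant they become functions of temperature.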

7. Twice random, once mixed: applying mixed models to simultaneously analyze random effects of language and participants.

Science.gov (United States)

Janssen, Dirk P

2012-03-01

Psychologists, psycholinguists, and other researchers using language stimuli have been struggling for more than 30 years with the problem of how to analyze experimental data that contain two crossed random effects (items and participants). The classical analysis of variance does not apply; alternatives have been proposed but have failed to catch on, and a statistically unsatisfactory procedure of using two approximations (known as F(1) and F(2)) has become the standard. A simple and elegant solution using mixed model analysis has been available for 15 years, and recent improvements in statistical software have made mixed models analysis widely available. The aim of this article is to increase the use of mixed models by giving a concise practical introduction and by giving clear directions for undertaking the analysis in the most popular statistical packages. The article also introduces the DJMIXED add-on package for SPSS, which makes entering the models and reporting their results as straightforward as possible.
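A sketch of the data structure such models address (crossed participant and item effects; all names and parameter values are illustrative). In R's lme4 syntax the corresponding model would be `y ~ 1 + (1|participant) + (1|item)`; here we only simulate data from that generative model in Python:

```python
import random

rng = random.Random(7)

def simulate(n_subj=30, n_item=20, sd_subj=1.0, sd_item=0.8, sd_noise=0.5):
    """Fully crossed design: every participant responds to every item."""
    subj_eff = [rng.gauss(0.0, sd_subj) for _ in range(n_subj)]
    item_eff = [rng.gauss(0.0, sd_item) for _ in range(n_item)]
    data = []
    for i in range(n_subj):
        for j in range(n_item):
            # Grand mean + participant effect + item effect + residual noise.
            y = 10.0 + subj_eff[i] + item_eff[j] + rng.gauss(0.0, sd_noise)
            data.append((i, j, y))
    return data

data = simulate()
grand_mean = sum(y for _, _, y in data) / len(data)
print(len(data), round(grand_mean, 2))
```

Because each observation carries both a participant and an item label, neither F(1) (aggregating over items) nor F(2) (aggregating over participants) uses all of this structure, which is exactly what a crossed-random-effects mixed model exploits.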

8. Model for definition of heat transfer coefficient in an annular two-phase flow

International Nuclear Information System (INIS)

Khun, J.

1976-01-01

Near-wall heat exchange in a vertical tube at high vapor velocity in a two-phase vapor-liquid flow is investigated. Inside the tube, the flow divides into a near-wall liquid film and a vapor core containing liquid droplets, with uniform boundaries between them. The thickness of the liquid film determines the main resistance to heat transfer between the wall and the vapor core. The theoretical model presented is verified in vaporization experiments with water, the refrigerant R12 and certain hydrocarbons. The frictional pressure loss is determined by the Lockhart-Martinelli method. The approximately universal Kármán velocity profile is used to evaluate the velocity in the film, and the film thickness is determined from it. The parameter ranges were Re_vap = 10^4 - 3x10^6 and Re_liq = 0.9-10. The theoretical model shows good agreement with experiment.

9. Complex dynamics of an eco-epidemiological model with different competition coefficients and weak Allee in the predator

International Nuclear Information System (INIS)

Saifuddin, Md.; Biswas, Santanu; Samanta, Sudip; Sarkar, Susmita; Chattopadhyay, Joydev

2016-01-01

The paper explores an eco-epidemiological model with a weak Allee effect in the predator and disease in the prey population. We consider a predator-prey model with a type II functional response. The novelty of this paper is that it considers different competition coefficients within the prey population, which leads to an emergent carrying capacity. We perform local and global stability analysis of the equilibrium points and Hopf bifurcation analysis around the endemic equilibrium point. We further examine the chaotic dynamics induced by the disease. Our numerical simulations reveal that, without the weak Allee effect, the three-species eco-epidemiological system passes from a stable focus to chaos as the force of infection increases, whereas in the presence of the weak Allee effect it exhibits stable solutions. We conclude that the chaotic dynamics can be controlled by the Allee parameter as well as by the competition coefficients. We apply basic tools of non-linear dynamics, such as Poincaré sections and the maximum Lyapunov exponent, to identify the chaotic behavior of the system.

10. Activity coefficients at infinite dilution of hydrocarbons in glycols: Experimental data and thermodynamic modeling with the GCA-EoS

International Nuclear Information System (INIS)

González Prieto, Mariana; Williams-Wynn, Mark D.; Bahadur, Indra; Sánchez, Francisco A.; Mohammadi, Amir H.

2017-01-01

Highlights: • Experimental infinite dilution activity coefficients of hydrocarbons in glycols. • Inverse gas-liquid chromatography technique. • Solutes investigated include n-alkanes, 1-alkenes, and cycloalkanes. • Highly non-ideal systems are modeled with the GCA-EoS. - Abstract: The infinite dilution activity coefficients for 12 non-polar hydrocarbon solutes in the solvents, monoethylene and diethylene glycol, were measured using the gas-liquid chromatography technique. Pre-saturation of the carrier gas was required to avoid solvent loss from the chromatographic column during the measurements that were carried out at T = (303.15, 313.15 and 323.15) K for monoethylene glycol and at T = (304.15, 313.15 and 323.15) K for diethylene glycol. The solutes investigated include n-alkanes, 1-alkenes, and cycloalkanes. The new data are compared with the highly scattered data that is available in the open literature. Finally, these highly non-ideal systems are modeled with the GCA-EoS.

11. Collocation methods for uncertainty quantification in PDE models with random data

KAUST Repository

Nobile, Fabio

2014-01-06

In this talk we consider Partial Differential Equations (PDEs) whose input data are modeled as random fields to account for their intrinsic variability or our lack of knowledge. After parametrizing the input random fields by finitely many independent random variables, we exploit the high regularity of the solution of the PDE as a function of the input random variables and consider sparse polynomial approximations in probability (Polynomial Chaos expansion) by collocation methods. We first address interpolatory approximations, where the PDE is solved on a sparse grid of Gauss points in the probability space and the solutions thus obtained are interpolated by multivariate polynomials. We present recent results on optimized sparse grids in which the selection of points is based on a knapsack approach and relies on sharp estimates of the decay of the coefficients of the polynomial chaos expansion of the solution. Secondly, we consider regression approaches, where the PDE is evaluated on randomly chosen points in the probability space and a polynomial approximation is constructed by the least squares method. We present recent theoretical results on the stability and optimality of the approximation under suitable conditions between the number of sampling points and the dimension of the polynomial space. In particular, we show that for uniform random variables, the number of sampling points has to scale quadratically with the dimension of the polynomial space to maintain the stability and optimality of the approximation. Numerical results show that this condition is sharp in the univariate case but seems over-constraining in higher dimensions. The regression technique therefore seems attractive in higher dimensions.
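A minimal stdlib-Python illustration of the regression approach (a univariate stand-in with a monomial rather than orthogonal polynomial basis): evaluate the target on randomly sampled uniform points, with the sample count scaled quadratically in the basis dimension, and fit by least squares.

```python
import random

def lstsq_poly(xs, ys, degree):
    """Least-squares polynomial fit via normal equations + Gaussian elimination."""
    n = degree + 1
    # Normal equations for the Vandermonde system: (A^T A) c = A^T y.
    ata = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    aty = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, n):
            f = ata[r][col] / ata[col][col]
            for c in range(col, n):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    coef = [0.0] * n
    for r in reversed(range(n)):
        s = sum(ata[r][c] * coef[c] for c in range(r + 1, n))
        coef[r] = (aty[r] - s) / ata[r][r]
    return coef

rng = random.Random(0)
dim = 3                       # basis {1, x, x^2}
m = dim * dim                 # sample count scaling quadratically with dim
xs = [rng.uniform(-1, 1) for _ in range(m)]
ys = [x ** 2 for x in xs]     # target lies in the basis, so it is recovered
coef = lstsq_poly(xs, ys, degree=2)
print([round(c, 6) for c in coef])
```

In the PDE setting each `ys` entry would be a functional of a full PDE solve at one random sample, but the stability question (how large `m` must be relative to the basis dimension) is the same.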

12. Scaling of coercivity in a 3d random anisotropy model

Energy Technology Data Exchange (ETDEWEB)

Proctor, T.C., E-mail: proctortc@gmail.com; Chudnovsky, E.M., E-mail: EUGENE.CHUDNOVSKY@lehman.cuny.edu; Garanin, D.A.

2015-06-15

The random-anisotropy Heisenberg model is numerically studied on lattices containing over ten million spins. The study is focused on hysteresis and metastability due to topological defects, and is relevant to the magnetic properties of amorphous and sintered magnets. We are interested in the limit when ferromagnetic correlations extend beyond the size of the grain inside which the magnetic anisotropy axes are correlated. In that limit the coercive field computed numerically roughly scales as the fourth power of the random anisotropy strength and as the sixth power of the grain size. Theoretical arguments are presented that provide an explanation of the numerical results. Our findings should be helpful for designing amorphous and nanosintered materials with desired magnetic properties. - Highlights: • We study the random-anisotropy model on lattices containing up to ten million spins. • Irreversible behavior due to topological defects (hedgehogs) is elucidated. • Hysteresis loop area scales as the fourth power of the random anisotropy strength. • In nanosintered magnets the coercivity scales as the sixth power of the grain size.

13. Modeling random combustion of lycopodium particles and gas

Directory of Open Access Journals (Sweden)

2016-06-01

Full Text Available The random modeling of lycopodium particle combustion has been studied by many authors. In this paper, we extend this model and develop a different method by analyzing the effect of randomly distributed sources of combustible mixture. The flame structure is assumed to consist of a preheat-vaporization zone, a reaction zone and finally a post-flame zone. We divide the preheat zone into several sections and assume that the distribution of particles across these sections is genuinely random. It is presumed that the fuel particles vaporize first to yield gaseous fuel; in other words, most of the fuel particles are vaporized by the end of the preheat zone. The Zel'dovich number is assumed to be large; therefore, the reaction term in the preheat zone is negligible. In this work, the effect of the random distribution of particles in the preheat zone on combustion characteristics, such as burning velocity and flame temperature, is obtained for different particle radii.

14. Emergent randomness in the Jaynes-Cummings model

International Nuclear Information System (INIS)

Garraway, B M; Stenholm, S

2008-01-01

We consider the well-known Jaynes-Cummings model and ask if it can display randomness. As a solvable Hamiltonian system, it does not display chaotic behaviour in the ordinary sense. Here, however, we look at the distribution of values taken up during the total time evolution. This evolution is determined by the eigenvalues distributed as the square roots of integers and leads to a seemingly erratic behaviour. That this may display a random Gaussian value distribution is suggested by an exactly provable result by Kac. In order to reach our conclusion we use the Kac model to develop tests for the emergence of a Gaussian. Even if the consequent double limits are difficult to evaluate numerically, we find definite indications that the Jaynes-Cummings case also produces a randomness in its value distributions. Numerical methods do not establish such a result beyond doubt, but our conclusions are definite enough to suggest strongly an unexpected randomness emerging in a dynamic time evolution

15. A Fay-Herriot Model with Different Random Effect Variances

Czech Academy of Sciences Publication Activity Database

Hobza, Tomáš; Morales, D.; Herrador, M.; Esteban, M.D.

2011-01-01

Roč. 40, č. 5 (2011), s. 785-797 ISSN 0361-0926 R&D Projects: GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : small area estimation * Fay-Herriot model * Linear mixed model * Labor Force Survey Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.274, year: 2011 http://library.utia.cas.cz/separaty/2011/SI/hobza-a%20fay-herriot%20model%20with%20different%20random%20effect%20variances.pdf

16. Gas-particle partitioning of semi-volatile organics on organic aerosols using a predictive activity coefficient model: analysis of the effects of parameter choices on model performance

Science.gov (United States)

Chandramouli, Bharadwaj; Jang, Myoseon; Kamens, Richard M.

The partitioning of a diverse set of semivolatile organic compounds (SOCs) on a variety of organic aerosols was studied using smog chamber experimental data. Existing data on the partitioning of SOCs on aerosols from wood combustion, diesel combustion, and the α-pinene-O3 reaction was augmented by carrying out smog chamber partitioning experiments on aerosols from meat cooking, and catalyzed and uncatalyzed gasoline engine exhaust. Model compositions for aerosols from meat cooking and gasoline combustion emissions were used to calculate activity coefficients for the SOCs in the organic aerosols, and the Pankow absorptive gas/particle partitioning model was used to calculate the partitioning coefficient K_p and quantitate the predictive improvements of using the activity coefficient. The slope of the log K_p vs. log p°_L correlation for partitioning on aerosols from meat cooking improved from -0.81 to -0.94 after incorporation of activity coefficients γ_i^om. A stepwise regression analysis of the partitioning model revealed that for the data set used in this study, partitioning predictions on α-pinene-O3 secondary aerosol and wood combustion aerosol showed statistically significant improvement after incorporation of γ_i^om, which can be attributed to their overall polarity. The partitioning model was sensitive to changes in aerosol composition when updated compositions for α-pinene-O3 aerosol and wood combustion aerosol were used. The effectiveness of the octanol-air partitioning coefficient (K_OA) as a partitioning correlator over a variety of aerosol types was evaluated. The slope of the log K_p - log K_OA correlation was not constant over the aerosol types and SOCs used in the study, and the use of K_OA for partitioning correlations can potentially lead to significant deviations, especially for polar aerosols.
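The Pankow absorptive partitioning model referenced above is commonly written K_p = f_om·R·T / (10^6·MW_om·ζ·p°_L), with R = 8.2×10^-5 m^3·atm·mol^-1·K^-1, f_om the absorbing organic-matter mass fraction, MW_om its mean molecular weight, ζ the activity coefficient and p°_L the sub-cooled liquid vapor pressure. A sketch (parameter values are illustrative) showing why a compound-independent ζ forces the log K_p vs. log p°_L slope to exactly -1:

```python
import math

R = 8.2e-5  # gas constant, m^3 atm mol^-1 K^-1

def log_kp(p_l0, f_om=1.0, mw_om=200.0, zeta=1.0, T=298.15):
    """Pankow model: Kp = f_om * R * T / (1e6 * MW_om * zeta * p_L0)."""
    kp = f_om * R * T / (1e6 * mw_om * zeta * p_l0)
    return math.log10(kp)

# With constant zeta, log Kp is linear in log p_L0 with slope exactly -1;
# compound-dependent activity coefficients bend the slope away from -1,
# which is the deviation the study's regressions quantify.
slope = (log_kp(1e-6) - log_kp(1e-4)) / (math.log10(1e-6) - math.log10(1e-4))
print(round(slope, 3))
```

Incorporating a compound- and aerosol-specific γ_i^om in place of the constant ζ is what moved the meat-cooking slope from -0.81 toward the ideal -0.94 in the study.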

17. Evaluation of sorption distribution coefficient of Cs onto granite using sorption data collected in sorption database and sorption model

International Nuclear Information System (INIS)

Nagasaki, S.

2013-01-01

Based on the sorption distribution coefficients (K_d) of Cs onto granite collected from the JAERI Sorption Database (SDB), the parameters for a two-site model without the triple-layer structure were optimized. Comparing the experimentally measured K_d values of Cs onto Mizunami granite carried out by JAEA with the K_d values predicted by the model, the effect of the ionic strength on the K_d values of Cs onto granite was evaluated. It was found that K_d values could be determined using the content of biotite in granite at a sodium concentration ([Na]) of 1×10^-2 to 5×10^-1 mol/dm^3. It was suggested that in high ionic strength solutions, the sorption of Cs onto other minerals such as microcline should also be taken into account. (author)

18. Evaluation of sorption distribution coefficient of Cs onto granite using sorption data collected in sorption database and sorption model

Energy Technology Data Exchange (ETDEWEB)

Nagasaki, S., E-mail: nagasas@mcmaster.ca [McMaster Univ., Hamilton, Ontario (Canada)

2013-07-01

Based on the sorption distribution coefficients (K{sub d}) of Cs onto granite collected from the JAERI Sorption Database (SDB), the parameters for a two-site model without the triple-layer structure were optimized. Comparing the experimentally measured K{sub d} values of Cs onto Mizunami granite carried out by JAEA with the K{sub d} values predicted by the model, the effect of the ionic strength on the K{sub d} values of Cs onto granite was evaluated. It was found that K{sub d} values could be determined using the content of biotite in granite at a sodium concentration ([Na]) of 1 x 10{sup -2} to 5 x 10{sup -1} mol/dm{sup 3} . It was suggested that in high ionic strength solutions, the sorption of Cs onto other minerals such as microcline should also be taken into account. (author)

19. Numerical study of the influence of flow blockage on the aerodynamic coefficients of models in low-speed wind tunnels

Science.gov (United States)

Bui, V. T.; Kalugin, V. T.; Lapygin, V. I.; Khlupnov, A. I.

2017-11-01

With the use of ANSYS Fluent software and the ANSYS ICEM CFD grid generator, the flows past a wing airfoil, an infinite cylinder, and 3D blunted bodies located in the open and closed test sections of low-speed wind tunnels were calculated. The mathematical model of the flows included the Reynolds equations and the SST turbulence model. It was found that the ratios between the aerodynamic coefficients in the test section and in the free (unbounded) stream could be fairly well approximated with a piecewise-linear function of the blockage factor, whose value depends only weakly on the angle of attack. The calculated data and data obtained from previously reported experimental studies proved to be in good agreement. The impact of the extension of the closed test section on the airfoil lift force is analyzed.

20. Distribution coefficients (Kd's) for use in risk assessment models of the Kara Sea.

Science.gov (United States)

Carroll, J; Boisson, F; Teyssie, J L; King, S E; Krosshavn, M; Carroll, M L; Fowler, S W; Povinec, P P; Baxter, M S

1999-07-01

As a prerequisite for most evaluations of radionuclide transport pathways in marine systems, it is necessary to obtain basic information on the sorption potential of contaminants onto particulate matter. Kd values for use in modeling radionuclide dispersion in the Kara Sea have been determined as part of several international programs addressing the problem of radioactive debris residing in Arctic Seas. Field and laboratory Kd experiments were conducted for the following radionuclides associated with nuclear waste: americium, europium, plutonium, cobalt, cesium and strontium. Emphasis has been placed on two regions in the Kara Sea: (i) the Novaya Zemlya Trough (NZT) and (ii) the mixing zones of the Ob and Yenisey Rivers (RMZ). Short-term batch Kd experiments were performed at-sea on ambient water column samples and on samples prepared both at-sea and in the laboratory by mixing filtered bottom water with small amounts of surficial bottom sediments (particle concentrations in samples = 1-30 mg/l). Within both regions, Kd values for individual radionuclides vary over two to three orders of magnitude. The relative particle affinities for radionuclides in the two regions are americium approximately equal to europium > plutonium > cobalt > cesium > strontium. The values determined in this study agree with minimum values given in the IAEA Technical Report [IAEA, 1985. Sediment Kd's and Concentration Factors for Radionuclides in the Marine Environment. Technical Report No. 247. International Atomic Energy Agency, Vienna.]. Given the importance of Kd's in assessments of critical transport pathways for radionuclide contaminants, we recommend that Kd ranges of values for specific elements rather than single mean values be incorporated into model simulations of radionuclide dispersion.
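For reference, the batch K_d is simply the activity per unit mass of particles divided by the activity per unit volume of solution. A toy stdlib-Python computation (the numbers are invented for illustration, not taken from the study):

```python
def kd(activity_particulate, mass_particles_kg, activity_dissolved, volume_l):
    """Distribution coefficient Kd = (Bq/kg on particles) / (Bq/L in solution)."""
    return (activity_particulate / mass_particles_kg) / (activity_dissolved / volume_l)

# Toy batch experiment: 30 mg of sediment in 1 L of filtered bottom water,
# with 80 Bq ending up on the particles and 20 Bq remaining dissolved.
k = kd(activity_particulate=80.0, mass_particles_kg=30e-6,
       activity_dissolved=20.0, volume_l=1.0)
print(f"{k:.2e} L/kg")
```

Because K_d depends on particle concentration, salinity and sediment mineralogy, values for a single element can span orders of magnitude, which is why the authors recommend propagating K_d ranges rather than single means through dispersion models.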

1. The transverse spin-1 Ising model with random interactions

Energy Technology Data Exchange (ETDEWEB)

Bouziane, Touria [Department of Physics, Faculty of Sciences, University of Moulay Ismail, B.P. 11201 Meknes (Morocco)], E-mail: touria582004@yahoo.fr; Saber, Mohammed [Department of Physics, Faculty of Sciences, University of Moulay Ismail, B.P. 11201 Meknes (Morocco); Dpto. Fisica Aplicada I, EUPDS (EUPDS), Plaza Europa, 1, San Sebastian 20018 (Spain)

2009-01-15

The phase diagrams of the transverse spin-1 Ising model with random interactions are investigated using a new technique in the effective field theory that employs a probability distribution within the framework of the single-site cluster theory based on the use of exact Ising spin identities. A model is adopted in which the nearest-neighbor exchange couplings are independent random variables distributed according to the law P(J{sub ij})=p{delta}(J{sub ij}-J)+(1-p){delta}(J{sub ij}-{alpha}J). General formulae, applicable to lattices with coordination number N, are given. Numerical results are presented for a simple cubic lattice. The possible reentrant phenomenon displayed by the system due to the competitive effects between exchange interactions occurs for the appropriate range of the parameter {alpha}.

2. Random unitary evolution model of quantum Darwinism with pure decoherence

Science.gov (United States)

2015-10-01

We study the behavior of Quantum Darwinism [W.H. Zurek, Nat. Phys. 5, 181 (2009)] within the iterative, random unitary operations qubit-model of pure decoherence [J. Novotný, G. Alber, I. Jex, New J. Phys. 13, 053052 (2011)]. We conclude that Quantum Darwinism, which describes the quantum mechanical evolution of an open system S from the point of view of its environment E, is not a generic phenomenon, but depends on the specific form of the input states and on the type of S-E interactions. Furthermore, we show that within the random unitary model the concept of Quantum Darwinism enables one to explicitly construct and specify artificial input states of the environment E that allow one to store information about an open system S of interest with maximal efficiency.

3. QSPR models for predicting generator-column-derived octanol/water and octanol/air partition coefficients of polychlorinated biphenyls.

Science.gov (United States)

Yuan, Jintao; Yu, Shuling; Zhang, Ting; Yuan, Xuejie; Cao, Yunyuan; Yu, Xingchen; Yang, Xuan; Yao, Wu

2016-06-01

Octanol/water (K(OW)) and octanol/air (K(OA)) partition coefficients are two important physicochemical properties of organic substances. In current practice, K(OW) and K(OA) values of some polychlorinated biphenyls (PCBs) are measured using generator column method. Quantitative structure-property relationship (QSPR) models can serve as a valuable alternative method of replacing or reducing experimental steps in the determination of K(OW) and K(OA). In this paper, two different methods, i.e., multiple linear regression based on dragon descriptors and hologram quantitative structure-activity relationship, were used to predict generator-column-derived log K(OW) and log K(OA) values of PCBs. The predictive ability of the developed models was validated using a test set, and the performances of all generated models were compared with those of three previously reported models. All results indicated that the proposed models were robust and satisfactory and can thus be used as alternative models for the rapid assessment of the K(OW) and K(OA) of PCBs. Copyright © 2016 Elsevier Inc. All rights reserved.

4. Spatial variability in the coefficient of thermal expansion induces pre-service stresses in computer models of virgin Gilsocarbon bricks

International Nuclear Information System (INIS)

Arregui-Mena, José David; Margetts, Lee; Griffiths, D.V.; Lever, Louise; Hall, Graham; Mummery, Paul M.

2015-01-01

In this paper, the authors test the hypothesis that tiny spatial variations in material properties may lead to significant pre-service stresses in virgin graphite bricks. To do this, they have customised ParaFEM, an open source parallel finite element package, adding support for stochastic thermo-mechanical analysis using the Monte Carlo Simulation method. For an Advanced Gas-cooled Reactor brick, three heating cases have been examined: a uniform temperature change; a uniform temperature gradient applied through the thickness of the brick and a simulated temperature profile from an operating reactor. Results are compared for mean and stochastic properties. These show that, for the proof-of-concept analyses carried out, the pre-service von Mises stress is around twenty times higher when spatial variability of material properties is introduced. The paper demonstrates that thermal gradients coupled with material incompatibilities may be important in the generation of stress in nuclear graphite reactor bricks. Tiny spatial variations in coefficient of thermal expansion (CTE) and Young's modulus can lead to the presence of thermal stresses in bricks that are free to expand. - Highlights: • Open source software has been modified to include random variability in CTE and Young's modulus. • The new software closely agrees with analytical solutions and commercial software. • Spatial variations in CTE and Young's modulus produce stresses that do not occur with mean values. • Material variability may induce pre-service stress in virgin graphite.

5. Spatial variability in the coefficient of thermal expansion induces pre-service stresses in computer models of virgin Gilsocarbon bricks

Energy Technology Data Exchange (ETDEWEB)

Arregui-Mena, José David, E-mail: jose.arreguimena@postgrad.manchester.ac.uk [School of Mechanical, Aerospace, and Civil Engineering, University of Manchester, Oxford Road, Manchester, M13 9PL (United Kingdom); Margetts, Lee, E-mail: lee.margetts@manchester.ac.uk [School of Mechanical, Aerospace, and Civil Engineering, University of Manchester, Oxford Road, Manchester, M13 9PL (United Kingdom); Griffiths, D.V., E-mail: d.v.griffiths@mines.edu [Colorado School of Mines, 1500 Illinois St, Golden, CO 80401 (United States); Lever, Louise, E-mail: louise.lever@manchester.ac.uk [Research Computing, University of Manchester, Oxford Road, Manchester, M13 9PL (United Kingdom); Hall, Graham, E-mail: graham.n.hall@manchester.ac.uk [School of Mechanical, Aerospace, and Civil Engineering, University of Manchester, Oxford Road, Manchester, M13 9PL (United Kingdom); Mummery, Paul M., E-mail: paul.m.mummery@manchester.ac.uk [School of Mechanical, Aerospace, and Civil Engineering, University of Manchester, Oxford Road, Manchester, M13 9PL (United Kingdom)

2015-10-15

In this paper, the authors test the hypothesis that tiny spatial variations in material properties may lead to significant pre-service stresses in virgin graphite bricks. To do this, they have customised ParaFEM, an open source parallel finite element package, adding support for stochastic thermo-mechanical analysis using the Monte Carlo Simulation method. For an Advanced Gas-cooled Reactor brick, three heating cases have been examined: a uniform temperature change; a uniform temperature gradient applied through the thickness of the brick and a simulated temperature profile from an operating reactor. Results are compared for mean and stochastic properties. These show that, for the proof-of-concept analyses carried out, the pre-service von Mises stress is around twenty times higher when spatial variability of material properties is introduced. The paper demonstrates that thermal gradients coupled with material incompatibilities may be important in the generation of stress in nuclear graphite reactor bricks. Tiny spatial variations in coefficient of thermal expansion (CTE) and Young's modulus can lead to the presence of thermal stresses in bricks that are free to expand. - Highlights: • Open source software has been modified to include random variability in CTE and Young's modulus. • The new software closely agrees with analytical solutions and commercial software. • Spatial variations in CTE and Young's modulus produce stresses that do not occur with mean values. • Material variability may induce pre-service stress in virgin graphite.

6. Gravitational lensing by eigenvalue distributions of random matrix models

Science.gov (United States)

Martínez Alonso, Luis; Medina, Elena

2018-05-01

We propose to use eigenvalue densities of unitary random matrix ensembles as mass distributions in gravitational lensing. The corresponding lens equations reduce to algebraic equations in the complex plane which can be treated analytically. We prove that these models can be applied to describe lensing by systems of edge-on galaxies. We illustrate our analysis with the Gaussian and the quartic unitary matrix ensembles.

7. Random resistor network model of minimal conductivity in graphene.

Science.gov (United States)

Cheianov, Vadim V; Fal'ko, Vladimir I; Altshuler, Boris L; Aleiner, Igor L

2007-10-26

Transport in undoped graphene is related to percolating current patterns in the networks of n- and p-type regions reflecting the strong bipolar charge density fluctuations. Finite transparency of the p-n junctions is vital in establishing the macroscopic conductivity. We propose a random resistor network model to analyze scaling dependencies of the conductance on the doping and disorder, the quantum magnetoresistance and the corresponding dephasing rate.

8. Levy Random Bridges and the Modelling of Financial Information

OpenAIRE

Hoyle, Edward; Hughston, Lane P.; Macrina, Andrea

2009-01-01

The information-based asset-pricing framework of Brody, Hughston and Macrina (BHM) is extended to include a wider class of models for market information. In the BHM framework, each asset is associated with a collection of random cash flows. The price of the asset is the sum of the discounted conditional expectations of the cash flows. The conditional expectations are taken with respect to a filtration generated by a set of "information processes". The information processes carry imperfect inf...

9. Social aggregation in pea aphids: experiment and random walk modeling.

Directory of Open Access Journals (Sweden)

Christa Nilsen

Full Text Available From bird flocks to fish schools and ungulate herds to insect swarms, social biological aggregations are found across the natural world. An ongoing challenge in the mathematical modeling of aggregations is to strengthen the connection between models and biological data by quantifying the rules that individuals follow. We model aggregation of the pea aphid, Acyrthosiphon pisum. Specifically, we conduct experiments to track the motion of aphids walking in a featureless circular arena in order to deduce individual-level rules. We observe that each aphid transitions stochastically between a moving and a stationary state. Moving aphids follow a correlated random walk. The probabilities of motion state transitions, as well as the random walk parameters, depend strongly on distance to an aphid's nearest neighbor. For large nearest neighbor distances, when an aphid is essentially isolated, its motion is ballistic with aphids moving faster, turning less, and being less likely to stop. In contrast, for short nearest neighbor distances, aphids move more slowly, turn more, and are more likely to become stationary; this behavior constitutes an aggregation mechanism. From the experimental data, we estimate the state transition probabilities and correlated random walk parameters as a function of nearest neighbor distance. With the individual-level model established, we assess whether it reproduces the macroscopic patterns of movement at the group level. To do so, we consider three distributions, namely distance to nearest neighbor, angle to nearest neighbor, and percentage of population moving at any given time. For each of these three distributions, we compare our experimental data to the output of numerical simulations of our nearest neighbor model, and of a control model in which aphids do not interact socially. Our stochastic, social nearest neighbor model reproduces salient features of the experimental data that are not captured by the control.

10. Modelling the impact of increasing soil sealing on runoff coefficients at regional scale: a hydropedological approach

Directory of Open Access Journals (Sweden)

Ungaro Fabrizio

2014-03-01

Soil sealing is the permanent covering of the land surface by buildings, infrastructure or any impermeable artificial material. Besides the loss of fertile soils, with a direct impact on food security, soil sealing modifies the hydrological cycle. This can increase flooding risk, due to urban development in potential risk areas and to the increased volumes of runoff. This work estimates the increase of runoff due to sealing following urbanization and land take in the plain of Emilia Romagna (Italy), using the Green and Ampt infiltration model for two rainfall return periods (20 and 200 years) in two different years, 1976 and 2008. To this end a hydropedological approach was adopted in order to characterize soil hydraulic properties via locally calibrated pedotransfer functions (PTF). PTF inputs were estimated via sequential Gaussian simulations coupled with simple kriging with varying local means, taking into account soil type and dominant land use. Results show that in the study area an average increment of 8.4% in sealed areas due to urbanization and sprawl induces an average increment in surface runoff equal to 3.5 and 2.7% respectively for the 20- and 200-year return periods, with a maximum > 20% for highly sealed coastal areas.
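The Green-Ampt step can be sketched as follows, assuming ponded conditions so that cumulative infiltration F satisfies F = Kt + ψΔθ·ln(1 + F/ψΔθ); the symbols and example values are illustrative, not the study's calibrated PTF outputs.

```python
import math

def green_ampt_infiltration(K, psi_dtheta, t, iters=50):
    """Cumulative Green-Ampt infiltration F(t) under ponded conditions.
    K: saturated hydraulic conductivity; psi_dtheta: wetting-front suction
    head times moisture deficit. Solves the implicit equation
    F = K*t + psi_dtheta*ln(1 + F/psi_dtheta) by fixed-point iteration."""
    F = K * t  # initial guess; the map is a contraction for F > 0
    for _ in range(iters):
        F = K * t + psi_dtheta * math.log(1.0 + F / psi_dtheta)
    return F

def surface_runoff(rain_depth, K, psi_dtheta, duration):
    """Runoff as rainfall in excess of what infiltrates (simplified)."""
    return max(0.0, rain_depth - green_ampt_infiltration(K, psi_dtheta, duration))
```

Sealing enters such a calculation by shrinking the pervious fraction of each cell over which infiltration is allowed at all.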

11. The mathematical modeling of the experiment on the determination of correlation coefficients in neutron beta-decay

Science.gov (United States)

Serebrov, A. P.; Zherebtsov, O. M.; Klyushnikov, G. N.

2018-05-01

An experiment on the measurement of the ratio of the axial coupling constant to the vector one is under development. The main idea of the experiment is to measure the values of A and B in the same setup. An additional measurement of the polarization is not necessary. The accuracy achieved to date in measuring λ is 2 × 10⁻³. It is expected that in the experiment the accuracy will be of the order of 10⁻⁴. Some particular problems of mathematical modeling concerning the experiment on the measurement of the ratio of the axial coupling constant to the vector one are considered. The force lines for the given tabular field of a magnetic trap are studied. The dependences of the longitudinal and transverse field non-uniformity coefficients on the coordinates are examined. A special computational algorithm based on the motion of a charged particle along a local magnetic force line is implemented for the calculation of the electron and proton motion times as well as for the evaluation of the total number of electrons colliding with the detector surface. The average values of the cosines of the angles entering the coefficients a, A and B have been estimated.

12. CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS.

Science.gov (United States)

Shalizi, Cosma Rohilla; Rinaldo, Alessandro

2013-04-01

The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consists only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling, or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGMs' expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses.

13. A unified algorithm for predicting partition coefficients for PBPK modeling of drugs and environmental chemicals

International Nuclear Information System (INIS)

Peyret, Thomas; Poulin, Patrick; Krishnan, Kannan

2010-01-01

Algorithms in the literature for predicting the tissue:blood partition coefficient (P_tb) of environmental chemicals and the tissue:plasma partition coefficient based on total (K_p) or unbound (K_pu) concentration of drugs differ in their consideration of binding to hemoglobin, plasma proteins and charged phospholipids. The objective of the present study was to develop a unified algorithm such that P_tb, K_p and K_pu could be predicted for both drugs and environmental chemicals. The development of the unified algorithm was accomplished by integrating all mechanistic algorithms previously published to compute the PCs. Furthermore, the algorithm was structured in such a way as to facilitate predictions of the distribution of organic compounds at the macro (i.e., whole tissue) and micro (i.e., cells and fluids) levels. The resulting unified algorithm was applied to compute the rat P_tb, K_p or K_pu of muscle (n = 174), liver (n = 139) and adipose tissue (n = 141) for acidic, neutral, zwitterionic and basic drugs as well as ketones, acetate esters, alcohols, aliphatic hydrocarbons, aromatic hydrocarbons and ethers. The unified algorithm adequately reproduced the values predicted previously by the published algorithms for a total of 142 drugs and chemicals. A sensitivity analysis demonstrated the relative importance of the various compound properties reflective of specific mechanistic determinants relevant to the prediction of PC values of drugs and environmental chemicals. Overall, the present unified algorithm uniquely facilitates the computation of macro- and micro-level PCs for developing organ- and cellular-level PBPK models for both chemicals and drugs.

14. Simulation of temporal and spatial distribution of required irrigation water by crop models and the pan evaporation coefficient method

Science.gov (United States)

Yang, Yan-min; Yang, Yonghui; Han, Shu-min; Hu, Yu-kun

2009-07-01

Hebei Plain is the most important agricultural belt in North China. Intensive irrigation and low, uneven precipitation have led to severe water shortage on the plain. This study is an attempt to resolve this crucial issue of water shortage for sustainable agricultural production and water resources management. The paper models distributed regional irrigation requirement for a range of cultivated crops on the plain. Classic crop models like DSSAT-wheat/maize and COTTON2K are used in combination with the pan-evaporation coefficient method to estimate water requirements for wheat, corn, cotton, fruit trees and vegetables. The approach is more accurate than the static approach adopted in previous studies, because the combined use of crop models and the pan-evaporation coefficient method dynamically accounts for irrigation requirement at different growth stages of crops, agronomic practices, and field and climatic conditions. The simulation results show increasing Required Irrigation Amount (RIA) with time. RIA ranges from 5.08×10⁹ m³ to 14.42×10⁹ m³ for the period 1986-2006, with an annual average of 10.6×10⁹ m³. Percent average water use by wheat, fruit trees, vegetables, corn and cotton is 41%, 12%, 12%, 11%, 7% and 17% respectively. RIA for April and May (the period with the highest irrigation water use) is 1.78×10⁹ m³ and 2.41×10⁹ m³ respectively. The counties in the piedmont regions of Mount Taihang have high RIA while the central and eastern regions/counties have low irrigation requirements.
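The pan-evaporation coefficient step reduces to ETc = Kc · (Kp · Epan), with the net irrigation requirement being the crop demand not met by effective rainfall. A minimal sketch, with placeholder coefficient values rather than the study's calibrated ones:

```python
def crop_water_requirement(e_pan, k_pan, k_crop):
    """Crop evapotranspiration from pan evaporation (mm/day):
    ETc = Kc * (Kp * Epan). Kp converts pan to reference
    evapotranspiration; Kc is the crop coefficient."""
    return k_crop * k_pan * e_pan

def required_irrigation(etc_mm, effective_rain_mm):
    """Net irrigation requirement: demand not met by effective rainfall."""
    return max(0.0, etc_mm - effective_rain_mm)
```

In the paper's dynamic approach, Kc would vary with the crop's growth stage as simulated by DSSAT or COTTON2K, rather than being a fixed seasonal value.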

15. QSAR models for predicting octanol/water and organic carbon/water partition coefficients of polychlorinated biphenyls.

Science.gov (United States)

Yu, S; Gao, S; Gan, Y; Zhang, Y; Ruan, X; Wang, Y; Yang, L; Shi, J

2016-04-01

Quantitative structure-property relationship modelling can be a valuable alternative method to replace or reduce experimental testing. In particular, some endpoints such as octanol-water (KOW) and organic carbon-water (KOC) partition coefficients of polychlorinated biphenyls (PCBs) are easier to predict and various models have been already developed. In this paper, two different methods, which are multiple linear regression based on the descriptors generated using Dragon software and hologram quantitative structure-activity relationships, were employed to predict suspended particulate matter (SPM) derived log KOC and generator column, shake flask and slow stirring method derived log KOW values of 209 PCBs. The predictive ability of the derived models was validated using a test set. The performances of all these models were compared with EPI Suite™ software. The results indicated that the proposed models were robust and satisfactory, and could provide feasible and promising tools for the rapid assessment of the SPM derived log KOC and generator column, shake flask and slow stirring method derived log KOW values of PCBs.
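The regression machinery behind such QSAR models can be illustrated with ordinary least squares. This single-descriptor version (e.g., a log KOW fitted against one hypothetical molecular descriptor) is a stand-in for the multi-descriptor Dragon-based models in the paper:

```python
def fit_simple_ols(xs, ys):
    """Ordinary least squares for one descriptor: y = a + b*x.
    A minimal stand-in for the multi-descriptor QSAR regressions."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx          # slope
    a = my - b * mx        # intercept
    return a, b
```

Model validation as described in the abstract would then apply the fitted (a, b) to a held-out test set of congeners and compare predicted and measured values.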

16. High-temperature series expansions for random Potts models

Directory of Open Access Journals (Sweden)

M.Hellmund

2005-01-01

We discuss recently generated high-temperature series expansions for the free energy and the susceptibility of random-bond q-state Potts models on hypercubic lattices. Using the star-graph expansion technique, quenched disorder averages can be calculated exactly for arbitrary uncorrelated coupling distributions while keeping the disorder strength p as well as the dimension d as symbolic parameters. We present analyses of the new series for the susceptibility of the Ising (q=2) and the 4-state Potts model in three dimensions up to orders 19 and 18, respectively, and compare our findings with results from field-theoretical renormalization group studies and Monte Carlo simulations.

17. On a Stochastic Failure Model under Random Shocks

Science.gov (United States)

Cha, Ji Hwan

2013-02-01

In most conventional settings, the events caused by an external shock are initiated at the moment of its occurrence. In this paper, we study a new class of shock models, where each shock from a nonhomogeneous Poisson process can trigger a failure of a system not immediately, as in classical extreme shock models, but after a delay of some random time. We derive the corresponding survival and failure rate functions. Furthermore, we study the limiting behaviour of the failure rate function where it is applicable.
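The delayed-failure mechanism can be sketched by Monte Carlo simulation. This toy version simplifies to a homogeneous Poisson shock process with exponentially distributed delays; the model in the record allows a general nonhomogeneous rate and delay distribution.

```python
import random

def simulate_failure_time(rate, mean_delay, horizon, rng):
    """One realization of a delayed shock model: shocks arrive as a
    Poisson process (constant rate, a special case of the NHPP); each
    shock triggers failure after an independent exponential delay.
    Returns the earliest failure time, or inf if no shock occurs."""
    t = 0.0
    first_failure = float('inf')
    while True:
        t += rng.expovariate(rate)                       # next shock
        if t >= horizon:
            break
        delay = rng.expovariate(1.0 / mean_delay)        # random delay
        first_failure = min(first_failure, t + delay)
    return first_failure
```

Averaging the indicator `first_failure > t` over many runs estimates the survival function derived analytically in the paper.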

18. Random-Effects Models for Meta-Analytic Structural Equation Modeling: Review, Issues, and Illustrations

Science.gov (United States)

Cheung, Mike W.-L.; Cheung, Shu Fai

2016-01-01

Meta-analytic structural equation modeling (MASEM) combines the techniques of meta-analysis and structural equation modeling for the purpose of synthesizing correlation or covariance matrices and fitting structural equation models on the pooled correlation or covariance matrix. Both fixed-effects and random-effects models can be defined in MASEM.…

19. Discrete random walk models for space-time fractional diffusion

International Nuclear Information System (INIS)

Gorenflo, Rudolf; Mainardi, Francesco; Moretti, Daniele; Pagnini, Gianni; Paradisi, Paolo

2002-01-01

A physical-mathematical approach to anomalous diffusion may be based on generalized diffusion equations (containing derivatives of fractional order in space and/or time) and related random walk models. By space-time fractional diffusion equation we mean an evolution equation obtained from the standard linear diffusion equation by replacing the second-order space derivative with a Riesz-Feller derivative of order α ∈ (0,2] and skewness θ (|θ| ≤ min{α, 2−α}), and the first-order time derivative with a Caputo derivative of order β ∈ (0,1]. Such an evolution equation implies for the flux a fractional Fick's law which accounts for spatial and temporal non-locality. The fundamental solution (for the Cauchy problem) of the fractional diffusion equation can be interpreted as a probability density evolving in time of a peculiar self-similar stochastic process that we view as a generalized diffusion process. By adopting appropriate finite-difference schemes of solution, we generate models of random walk discrete in space and time suitable for simulating random variables whose spatial probability density evolves in time according to this fractional diffusion equation.

20. Random matrices and the six-vertex model

CERN Document Server

Bleher, Pavel

2013-01-01

This book provides a detailed description of the Riemann-Hilbert approach (RH approach) to the asymptotic analysis of both continuous and discrete orthogonal polynomials, and applications to random matrix models as well as to the six-vertex model. The RH approach was an important ingredient in the proofs of universality in unitary matrix models. This book gives an introduction to the unitary matrix models and discusses bulk and edge universality. The six-vertex model is an exactly solvable two-dimensional model in statistical physics, and thanks to the Izergin-Korepin formula for the model with domain wall boundary conditions, its partition function matches that of a unitary matrix model with nonpolynomial interaction. The authors introduce in this book the six-vertex model and include a proof of the Izergin-Korepin formula. Using the RH approach, they explicitly calculate the leading and subleading terms in the thermodynamic asymptotic behavior of the partition function of the six-vertex model with domain wall boundary conditions.

1. Wear-dependent specific coefficients in a mechanistic model for turning of nickel-based superalloy with ceramic tools

Science.gov (United States)

López de Lacalle, Luis Norberto; Urbicain Pelayo, Gorka; Fernández-Valdivielso, Asier; Alvarez, Alvaro; González, Haizea

2017-09-01

Difficult-to-cut materials such as nickel and titanium alloys are used in the aeronautical industry, the former for their heat-resistant behavior and the latter for their low weight to high strength ratio. Ceramic tools made out of alumina reinforced with SiC whiskers are a choice in turning for roughing and semifinishing workpiece stages. The wear rate is high in the machining of these alloys, and consequently cutting forces tend to increase over the course of an operation. This paper establishes the cutting force relation between workpiece and tool in the turning of such difficult-to-cut alloys by means of a mechanistic cutting force model that considers the tool wear effect. The cutting force model demonstrates the force sensitivity to the cutting engagement parameters (ap, f) when ceramic inserts are used and wear is considered. Wear is introduced through a cutting time factor, which is useful in real conditions given that wear appears quickly when machining these alloys. Good accuracy in the cutting force model coefficients is the key to an accurate prediction of turning forces, which could be used as a criterion for tool replacement or as input for chatter or other models.

2. Marginal and Random Intercepts Models for Longitudinal Binary Data with Examples from Criminology

Science.gov (United States)

Long, Jeffrey D.; Loeber, Rolf; Farrington, David P.

2009-01-01

Two models for the analysis of longitudinal binary data are discussed: the marginal model and the random intercepts model. In contrast to the linear mixed model (LMM), the two models for binary data are not subsumed under a single hierarchical model. The marginal model provides group-level information whereas the random intercepts model provides…

3. Universality of correlation functions in random matrix models of QCD

International Nuclear Information System (INIS)

Jackson, A.D.; Sener, M.K.; Verbaarschot, J.J.M.

1997-01-01

We demonstrate the universality of the spectral correlation functions of a QCD inspired random matrix model that consists of a random part having the chiral structure of the QCD Dirac operator and a deterministic part which describes a schematic temperature dependence. We calculate the correlation functions analytically using the technique of Itzykson-Zuber integrals for arbitrary complex supermatrices. An alternative exact calculation for arbitrary matrix size is given for the special case of zero temperature, and we reproduce the well-known Laguerre kernel. At finite temperature, the microscopic limit of the correlation functions is calculated in the saddle-point approximation. The main result of this paper is that the microscopic universality of correlation functions is maintained even though unitary invariance is broken by the addition of a deterministic matrix to the ensemble. (orig.)

4. Nonparametric Estimation of Distributions in Random Effects Models

KAUST Repository

Hart, Jeffrey D.

2011-01-01

We propose using minimum distance to obtain nonparametric estimates of the distributions of components in random effects models. A main setting considered is equivalent to having a large number of small datasets whose locations, and perhaps scales, vary randomly, but which otherwise have a common distribution. Interest focuses on estimating the distribution that is common to all datasets, knowledge of which is crucial in multiple testing problems where a location/scale invariant test is applied to every small dataset. A detailed algorithm for computing minimum distance estimates is proposed, and the usefulness of our methodology is illustrated by a simulation study and an analysis of microarray data. Supplemental materials for the article, including R-code and a dataset, are available online. © 2011 American Statistical Association.

5. Prediction of Geological Subsurfaces Based on Gaussian Random Field Models

Energy Technology Data Exchange (ETDEWEB)

Abrahamsen, Petter

1997-12-31

During the sixties, random functions became practical tools for predicting ore reserves with associated precision measures in the mining industry. This was the start of the geostatistical methods called kriging. These methods are used, for example, in petroleum exploration. This thesis reviews the possibilities for using Gaussian random functions in modelling of geological subsurfaces. It develops methods for including many sources of information and observations for precise prediction of the depth of geological subsurfaces. The simple properties of Gaussian distributions make it possible to calculate optimal predictors in the mean square sense. This is done in a discussion of kriging predictors. These predictors are then extended to deal with several subsurfaces simultaneously. It is shown how additional velocity observations can be used to improve predictions. The use of gradient data and even higher order derivatives are also considered and gradient data are used in an example. 130 refs., 44 figs., 12 tabs.

6. Pervasive randomness in physics: an introduction to its modelling and spectral characterisation

Science.gov (United States)

Howard, Roy

2017-10-01

An introduction to the modelling and spectral characterisation of random phenomena is detailed at a level consistent with a first exposure to the subject at an undergraduate level. A signal framework for defining a random process is provided and this underpins an introduction to common random processes including the Poisson point process, the random walk, the random telegraph signal, shot noise, information signalling random processes, jittered pulse trains, birth-death random processes and Markov chains. An introduction to the spectral characterisation of signals and random processes, via either an energy spectral density or a power spectral density, is detailed. The important case of defining a white noise random process concludes the paper.
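One of the processes listed above, the random telegraph signal, can be simulated directly. The rate and step size below are arbitrary illustrative choices:

```python
import random

def random_telegraph(rate, dt, n_steps, rng):
    """Two-state (+1/-1) random telegraph signal: in each small time
    step dt the state switches with probability rate*dt (valid for
    rate*dt << 1)."""
    state = 1
    signal = []
    for _ in range(n_steps):
        if rng.random() < rate * dt:
            state = -state
        signal.append(state)
    return signal
```

The power spectral density of this process is the classic Lorentzian form discussed in spectral-characterisation treatments, which a discrete Fourier transform of a long simulated record would approximate.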

7. Statistical Downscaling of Temperature with the Random Forest Model

Directory of Open Access Journals (Sweden)

Bo Pang

2017-01-01

The issues with downscaling the outputs of a global climate model (GCM) to a regional scale appropriate to hydrological impact studies are investigated using the random forest (RF) model, which has been shown to be superior for large dataset analysis and variable importance evaluation. The RF is proposed for downscaling daily mean temperature in the Pearl River basin in southern China. Four downscaling models were developed and validated by using the observed temperature series from 61 national stations and large-scale predictor variables derived from the National Center for Environmental Prediction–National Center for Atmospheric Research reanalysis dataset. The proposed RF downscaling model was compared to multiple linear regression, artificial neural network, and support vector machine models. Principal component analysis (PCA) and partial correlation analysis (PAR) were used in the predictor selection for the other models for a comprehensive study. It was shown that the model efficiency of the RF model was higher than that of the other models according to five selected criteria. By evaluating predictor importance, the RF could choose the best predictor combination without using PCA and PAR. The results indicate that the RF is a feasible tool for the statistical downscaling of temperature.
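The random forest idea can be shown in miniature with bootstrap-aggregated regression stumps on a single predictor; the real downscaling model uses full trees, many reanalysis predictors, and random feature subsampling, none of which this toy sketch includes.

```python
import random

def fit_stump(points):
    """Fit the best single-split regression stump to (x, y) pairs."""
    best = None
    xs = sorted({x for x, _ in points})
    for i in range(len(xs) - 1):
        thr = 0.5 * (xs[i] + xs[i + 1])
        left = [y for x, y in points if x <= thr]
        right = [y for x, y in points if x > thr]
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = sum((y - ml) ** 2 for x, y in points if x <= thr)
        sse += sum((y - mr) ** 2 for x, y in points if x > thr)
        if best is None or sse < best[0]:
            best = (sse, thr, ml, mr)
    if best is None:                       # degenerate bootstrap sample
        m = sum(y for _, y in points) / len(points)
        return lambda x: m
    _, thr, ml, mr = best
    return lambda x: ml if x <= thr else mr

def bagged_stumps(points, n_trees, rng):
    """Average of stumps fitted to bootstrap resamples (bagging)."""
    stumps = [fit_stump([rng.choice(points) for _ in points])
              for _ in range(n_trees)]
    return lambda x: sum(s(x) for s in stumps) / n_trees
```

Variable importance, the feature highlighted in the abstract, would be estimated in a full implementation by permuting one predictor at a time and measuring the loss in out-of-bag accuracy.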

8. Randomizing growing networks with a time-respecting null model

Science.gov (United States)

Ren, Zhuo-Ming; Mariani, Manuel Sebastian; Zhang, Yi-Cheng; Medo, Matúš

2018-05-01

Complex networks are often used to represent systems that are not static but grow with time: People make new friendships, new papers are published and refer to the existing ones, and so forth. To assess the statistical significance of measurements made on such networks, we propose a randomization methodology—a time-respecting null model—that preserves both the network's degree sequence and the time evolution of individual nodes' degree values. By preserving the temporal linking patterns of the analyzed system, the proposed model is able to factor out the effect of the system's temporal patterns on its structure. We apply the model to the citation network of Physical Review scholarly papers and the citation network of US movies. The model reveals that the two data sets are strikingly different with respect to their degree-degree correlations, and we discuss the important implications of this finding on the information provided by paradigmatic node centrality metrics such as indegree and Google's PageRank. The randomization methodology proposed here can be used to assess the significance of any structural property in growing networks, which could bring new insights into the problems where null models play a critical role, such as the detection of communities and network motifs.
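The core of such a time-respecting null model can be sketched as follows. This simplified version shuffles link targets only among edges created at the same time step, keeping every source fixed; a full implementation would preserve the complete degree time-evolution as described above.

```python
import random
from collections import defaultdict

def time_respecting_shuffle(edges, rng):
    """Randomize a growing network by permuting edge targets within each
    time step. `edges` is a list of (time, source, target) triples; the
    sequence of (time, source) pairs, and hence each node's out-degree
    history, is preserved exactly."""
    targets_by_time = defaultdict(list)
    for t, _, dst in edges:
        targets_by_time[t].append(dst)
    for bucket in targets_by_time.values():
        rng.shuffle(bucket)            # permute targets within one time step
    counters = defaultdict(int)
    shuffled = []
    for t, src, _ in edges:
        shuffled.append((t, src, targets_by_time[t][counters[t]]))
        counters[t] += 1
    return shuffled
```

Comparing a structural measurement (e.g., degree-degree correlations) on the real edge list against its distribution over many such shuffles gives the significance assessment the abstract describes.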

9. Genetic evaluation of European quails by random regression models

Directory of Open Access Journals (Sweden)

Flaviana Miranda Gonçalves

2012-09-01

The objective of this study was to compare different random regression models, defined from different classes of heterogeneity of variance combined with different Legendre polynomial orders, for the estimation of (co)variances of quails. The data came from 28,076 observations of 4,507 female meat quails of the LF1 lineage. Quail body weights were determined at birth and at 1, 14, 21, 28, 35 and 42 days of age. Six different classes of residual variance were fitted with Legendre polynomial functions (orders ranging from 2 to 6) to determine which model had the best fit to describe the (co)variance structures as a function of time. According to the evaluated criteria (AIC, BIC and LRT), the model with six classes of residual variances and a sixth-order Legendre polynomial was the best fit. The estimated additive genetic variance increased from birth to 28 days of age, and dropped slightly from 35 to 42 days. The heritability estimates decreased along the growth curve and changed from 0.51 (1 day) to 0.16 (42 days). Animal genetic and permanent environmental correlation estimates between weights and age classes were always high and positive, except for birth weight. The sixth-order Legendre polynomial, along with the residual variance divided into six classes, was the best fit for the growth rate curve of meat quails; therefore, they should be considered in breeding evaluation processes by random regression models.
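The Legendre basis underlying such random regression models is easy to evaluate with the Bonnet recurrence, after mapping ages to the polynomial's natural domain [-1, 1]; the age range used here is just the one from the abstract, for illustration.

```python
def legendre(order, x):
    """Legendre polynomial P_order(x) on [-1, 1] via the Bonnet
    recurrence (n+1)P_{n+1} = (2n+1)x*P_n - n*P_{n-1}, the basis
    commonly used for random regression covariance functions."""
    p_prev, p = 1.0, x
    if order == 0:
        return p_prev
    for n in range(1, order):
        p_prev, p = p, ((2 * n + 1) * x * p - n * p_prev) / (n + 1)
    return p

def scale_age(age, age_min, age_max):
    """Map an age in [age_min, age_max] to [-1, 1] for the Legendre basis."""
    return -1.0 + 2.0 * (age - age_min) / (age_max - age_min)
```

In the fitted model, each animal's additive genetic and permanent environmental effects are regressions on `legendre(k, scale_age(age, 1, 42))` for k up to the chosen order.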

10. Determination of equilibrium electron temperature and times using an electron swarm model with BOLSIG+ calculated collision frequencies and rate coefficients

International Nuclear Information System (INIS)

Pusateri, Elise N.; Morris, Heidi E.; Nelson, Eric M.; Ji, Wei

2015-01-01

Electromagnetic pulse (EMP) events produce low-energy conduction electrons from Compton electron or photoelectron ionizations with air. It is important to understand how conduction electrons interact with air in order to accurately predict EMP evolution and propagation. An electron swarm model can be used to monitor the time evolution of conduction electrons in an environment characterized by electric field and pressure. Here a swarm model is developed that is based on the coupled ordinary differential equations (ODEs) described by Higgins et al. (1973), hereinafter HLO. The ODEs characterize the swarm electric field, electron temperature, electron number density, and drift velocity. Important swarm parameters, the momentum transfer collision frequency, energy transfer collision frequency, and ionization rate, are calculated and compared to the previously reported fitted functions given in HLO. These swarm parameters are found using BOLSIG+, a two term Boltzmann solver developed by Hagelaar and Pitchford (2005), which utilizes updated cross sections from the LXcat website created by Pancheshnyi et al. (2012). We validate the swarm model by comparing to experimental effective ionization coefficient data in Dutton (1975) and drift velocity data in Ruiz-Vargas et al. (2010). In addition, we report on electron equilibrium temperatures and times for a uniform electric field of 1 StatV/cm for atmospheric heights from 0 to 40 km. We show that the equilibrium temperature and time are sensitive to the modifications in the collision frequencies and ionization rate based on the updated electron interaction cross sections

11. The Best Model of the Swiss Banknote Data -Validation by the 95% CI of coefficients and t-test of discriminant scores

Directory of Open Access Journals (Sweden)

Shuichi Shinmura

2016-06-01

Discriminant analysis is not inferential statistics, since there are no equations based on the normal distribution for the standard error (SE) of the error rate and of the discriminant coefficients. In this paper, we propose the "k-fold cross validation for small samples" to obtain the 95% confidence interval (CI) of error rates and discriminant coefficients. This method is a computer-intensive approach using statistical and mathematical programming (MP) software such as JMP and LINGO. By the proposed approach, we can choose the best model with the minimum mean error rate in the validation samples (Minimum M2 Standard). In this research, we examine the sixteen linearly separable models of the Swiss banknote data by eight linear discriminant functions (LDFs). The M2 of the best model of Revised IP-OLDF is the smallest value of all models. We find that all coefficients of six Revised IP-OLDF among the sixteen models are rejected by the 95% CI of discriminant coefficients (Discriminant Coefficient Standard). We compare the t-values of the discriminant scores. The t-value of the best model has the maximum value among the sixteen models (Maximum t-value Standard). Moreover, we can conclude that all standards support the best model of Revised IP-OLDF.

12. A varying coefficient model to measure the effectiveness of mass media anti-smoking campaigns in generating calls to a Quitline.

Science.gov (United States)

Bui, Quang M; Huggins, Richard M; Hwang, Wen-Han; White, Victoria; Erbas, Bircan

2010-01-01

Anti-smoking advertisements are an effective population-based smoking reduction strategy. The Quitline telephone service provides a first point of contact for adults considering quitting. Because of data complexity, the relationship between anti-smoking advertising placement, intensity, and time trends in total call volume is poorly understood. In this study we use a recently developed semi-varying coefficient model to elucidate this relationship. Semi-varying coefficient models comprise parametric and nonparametric components. The model is fitted to the daily number of calls to Quitline in Victoria, Australia to estimate a nonparametric long-term trend and parametric terms for day-of-the-week effects and to clarify the relationship with target audience rating points (TARPs) for the Quit and nicotine replacement advertising campaigns. The number of calls to Quitline increased with the TARP value of both the Quit and other smoking cessation advertisements; the TARP values associated with the Quit program were almost twice as effective. The varying coefficient term was statistically significant for peak periods with little or no advertising. Semi-varying coefficient models are useful for modeling public health data when there is little or no information on other factors related to the at-risk population. These models are well suited to modeling call volume to Quitline, because the varying coefficient allowed the underlying time trend to depend on fixed covariates that also vary with time, thereby explaining more of the variation in the call model.

13. Zero temperature landscape of the random sine-Gordon model

International Nuclear Information System (INIS)

Sanchez, A.; Bishop, A.R.; Cai, D.

1997-01-01

We present a preliminary summary of the zero temperature properties of the two-dimensional random sine-Gordon model of surface growth on disordered substrates. We found that the properties of this model can be accurately computed using lattices of moderate size, as the behavior of the model turns out to be independent of size above a certain length (∼128 × 128 lattices). Subsequently, we show that the behavior of the height difference correlation function is of (log r)² type up to a certain correlation length (ξ ∼ 20), which rules out predictions of log r behavior for all temperatures obtained by replica-variational techniques. Our results open the way to a better understanding of the complex landscape presented by this system, which has been the subject of very many (contradictory) analyses.

14. Exponential random graph models for networks with community structure.

Science.gov (United States)

Fronczak, Piotr; Fronczak, Agata; Bujok, Maksymilian

2013-09-01

Although community structure organization is an important characteristic of real-world networks, most traditional network models fail to reproduce this feature. Therefore, the models are useless as benchmark graphs for testing community detection algorithms. They are also inadequate for predicting various properties of real networks. With this paper we intend to fill the gap. We develop an exponential random graph approach to networks with community structure. To this end we build mainly upon the idea of blockmodels. We consider both the classical blockmodel and its degree-corrected counterpart and study many of their properties analytically. We show that in the degree-corrected blockmodel, node degrees display an interesting scaling property, which is reminiscent of what is observed in real-world fractal networks. A short description of Monte Carlo simulations of the models is also given in the hope of being useful to others working in the field.

15. The Little-Hopfield model on a sparse random graph

International Nuclear Information System (INIS)

Castillo, I Perez; Skantzos, N S

2004-01-01

We study the Hopfield model on a random graph in scaling regimes where the average number of connections per neuron is a finite number and the spin dynamics is governed by a synchronous execution of the microscopic update rule (Little-Hopfield model). We solve this model within replica symmetry, and by using bifurcation analysis we prove that the spin-glass/paramagnetic and the retrieval/paramagnetic transition lines of our phase diagram are identical to those of sequential dynamics. The first-order retrieval/spin-glass transition line follows by direct evaluation of our observables using population dynamics. Within the accuracy of numerical precision and for sufficiently small values of the connectivity parameter we find that this line coincides with the corresponding sequential one. Comparison with simulation experiments shows excellent agreement
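The synchronous (Little) update rule referred to above can be written down directly for a tiny fully connected Hebbian network; this finite toy illustrates the dynamics only and makes no claim about the sparse-graph thermodynamics studied in the record.

```python
def hopfield_store(patterns):
    """Hebbian weights for binary (+1/-1) patterns, zero diagonal:
    w_ij = (1/N) * sum over patterns of p_i * p_j."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def little_update(w, state):
    """One synchronous (Little) update: every spin aligns with its
    local field simultaneously (ties broken toward +1)."""
    return [1 if sum(wij * s for wij, s in zip(row, state)) >= 0 else -1
            for row in w]
```

Iterating `little_update` from a noisy state shows the retrieval behavior: stored patterns are fixed points and nearby states flow to them.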

16. Random isotropic one-dimensional XY-model

Science.gov (United States)

Gonçalves, L. L.; Vieira, A. P.

1998-01-01

The 1D isotropic s = ½ XY-model (N sites), with random exchange interaction in a transverse random field, is considered. The random variables satisfy bimodal quenched distributions. The solution is obtained by using the Jordan-Wigner fermionization and a canonical transformation, reducing the problem to diagonalizing an N × N matrix, corresponding to a system of N noninteracting fermions. The calculations are performed numerically for N = 1000, and the field-induced magnetization at T = 0 is obtained by averaging the results for the different samples. For the dilute case, in the uniform field limit, the magnetization exhibits various discontinuities, which are the consequence of the existence of disconnected finite clusters distributed along the chain. Also in this limit, for finite exchange constants J_A and J_B, as the probability of J_A varies from one to zero, the saturation field is seen to vary from Γ_A to Γ_B, where Γ_A (Γ_B) is the value of the saturation field for the pure case with exchange constant equal to J_A (J_B).

17. Estimating safety effects of pavement management factors utilizing Bayesian random effect models.

Science.gov (United States)

Jiang, Ximiao; Huang, Baoshan; Zaretzki, Russell L; Richards, Stephen; Yan, Xuedong

2013-01-01

Previous studies of pavement management factors that relate to the occurrence of traffic-related crashes are rare. Traditional research has mostly employed summary statistics of bidirectional pavement quality measurements in extended longitudinal road segments over a long time period, which may cause a loss of important information and result in biased parameter estimates. The research presented in this article focuses on crash risk of roadways with overall fair to good pavement quality. Real-time and location-specific data were employed to estimate the effects of pavement management factors on the occurrence of crashes. This research is based on the crash data and corresponding pavement quality data for the Tennessee state route highways from 2004 to 2009. The potential temporal and spatial correlations among observations caused by unobserved factors were considered. Overall 6 models were built accounting for no correlation, temporal correlation only, and both the temporal and spatial correlations. These models included Poisson, negative binomial (NB), one random effect Poisson and negative binomial (OREP, ORENB), and two random effect Poisson and negative binomial (TREP, TRENB) models. The Bayesian method was employed to construct these models. The inference is based on the posterior distribution from the Markov chain Monte Carlo (MCMC) simulation. These models were compared using the deviance information criterion. Analysis of the posterior distribution of parameter coefficients indicates that the pavement management factors indexed by Present Serviceability Index (PSI) and Pavement Distress Index (PDI) had significant impacts on the occurrence of crashes, whereas the variable rutting depth was not significant. Among other factors, lane width, median width, type of terrain, and posted speed limit were significant in affecting crash frequency. The findings of this study indicate that a reduction in pavement roughness would reduce the likelihood of traffic

18. Measurement and modeling of osmotic coefficients of binary mixtures (alcohol + 1,3-dimethylpyridinium methylsulfate) at T = 323.15 K

International Nuclear Information System (INIS)

Gomez, Elena; Calvar, Noelia; Dominguez, Angeles; Macedo, Eugenia A.

2011-01-01

Research highlights: → The osmotic coefficients of binary mixtures (alcohol + ionic liquid) were determined. → The measurements were carried out with a vapor pressure osmometer at 323.15 K. → The Pitzer-Archer and the MNRTL models were used to correlate the experimental data. → Mean molal activity coefficients and excess Gibbs free energies were calculated. - Abstract: Measurements of osmotic coefficients of binary mixtures containing several primary and secondary alcohols (1-propanol, 2-propanol, 1-butanol, 2-butanol, and 1-pentanol) and the pyridinium-based ionic liquid 1,3-dimethylpyridinium methylsulfate were performed at T = 323.15 K using the vapor pressure osmometry technique, and from the experimental data, vapor pressures and activity coefficients were determined. The extended Pitzer model modified by Archer and the NRTL model modified by Jaretun and Aly (MNRTL) were used to correlate the experimental osmotic coefficients, obtaining standard deviations lower than 0.017 and 0.054, respectively. From the parameters obtained with the extended Pitzer model modified by Archer, the mean molal activity coefficients and the excess Gibbs free energy for the studied binary mixtures were calculated. The effect of the cation is studied by comparing the experimental results with those obtained for the ionic liquid 1,3-dimethylimidazolium methylsulfate.

19. Prediction on Human Resource Supply/Demand in Nuclear Industry Using Markov Chains Model and Job Coefficient

Energy Technology Data Exchange (ETDEWEB)

Kwon, Hyuk; Min, Byung Joo; Lee, Eui Jin; You, Byung Hoon [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

2006-07-01

According to a recent report by the OECD/NEA, there is a large imbalance between the supply of and demand for human resources in the nuclear field. In the U.S., according to a survey by the Nuclear Engineering Department Heads Organization (NEDHO), 174 graduates with B.S. or M.S. degrees entered the nuclear industry in 2004. Meanwhile, the total demand in the nuclear industry was about 642 engineers, approximately three times the supply. For other developed western nations, the OECD/NEA report stated that the level of imbalance is similar to that of the U.S. However, nations with nuclear power development programs such as Korea, Japan and France seem to face a different supply-and-demand environment from that of the U.S. In this study, the difference in manpower status between the U.S. and Korea is investigated and the nuclear manpower required in Korea in the future is predicted. To investigate the factors making the difference between the U.S. and NPP developing countries including Korea, a quantitative manpower planning model, the Markov chains model, is applied. Since the Markov chains model has the strength of analyzing an inflow or push structure, the model fits a system governed by the inflow of manpower. A macroscopic status of manpower demand in the nuclear industry is calculated up to 2015 using the Job coefficient (JC) and GDP, which are derived from the Survey for Roadmap of Electric Power Industry Manpower Planning. Furthermore, the total required and supplied manpower up to 2030 were predicted by the JC and the Markov chains model, respectively. Whereas the employee status of nuclear industries has been surveyed annually by KAIF since 1995, the following data from the 10th survey and nuclear energy yearbooks from 1998 to 2005 are applied: (a) the status of the manpower demand of industry, (b) the number of students entering, graduating and getting jobs in nuclear engineering.

20. Prediction on Human Resource Supply/Demand in Nuclear Industry Using Markov Chains Model and Job Coefficient

International Nuclear Information System (INIS)

Kwon, Hyuk; Min, Byung Joo; Lee, Eui Jin; You, Byung Hoon

2006-01-01

According to a recent report by the OECD/NEA, there is a large imbalance between the supply of and demand for human resources in the nuclear field. In the U.S., according to a survey by the Nuclear Engineering Department Heads Organization (NEDHO), 174 graduates with B.S. or M.S. degrees entered the nuclear industry in 2004. Meanwhile, the total demand in the nuclear industry was about 642 engineers, approximately three times the supply. For other developed western nations, the OECD/NEA report stated that the level of imbalance is similar to that of the U.S. However, nations with nuclear power development programs such as Korea, Japan and France seem to face a different supply-and-demand environment from that of the U.S. In this study, the difference in manpower status between the U.S. and Korea is investigated and the nuclear manpower required in Korea in the future is predicted. To investigate the factors making the difference between the U.S. and NPP developing countries including Korea, a quantitative manpower planning model, the Markov chains model, is applied. Since the Markov chains model has the strength of analyzing an inflow or push structure, the model fits a system governed by the inflow of manpower. A macroscopic status of manpower demand in the nuclear industry is calculated up to 2015 using the Job coefficient (JC) and GDP, which are derived from the Survey for Roadmap of Electric Power Industry Manpower Planning. Furthermore, the total required and supplied manpower up to 2030 were predicted by the JC and the Markov chains model, respectively. Whereas the employee status of nuclear industries has been surveyed annually by KAIF since 1995, the following data from the 10th survey and nuclear energy yearbooks from 1998 to 2005 are applied: (a) the status of the manpower demand of industry, (b) the number of students entering, graduating and getting jobs in nuclear engineering.
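
The inflow ("push") structure of a Markov-chain manpower model can be sketched as follows. States, transition probabilities and inflow numbers below are illustrative, not the values used in the study:

```python
# Career stages: student, junior engineer, senior engineer.
P = [  # annual transition probabilities; row sums < 1, remainder leaves the field
    [0.60, 0.30, 0.00],   # student  -> student / junior
    [0.00, 0.75, 0.15],   # junior   -> junior / senior
    [0.00, 0.00, 0.90],   # senior   -> senior (10% retire)
]
inflow = [200.0, 0.0, 0.0]   # new students entering the pipeline each year

def step(stock, P, inflow):
    """One-year update: internal transitions plus external inflow."""
    n = len(stock)
    return [sum(stock[i] * P[i][j] for i in range(n)) + inflow[j] for j in range(n)]

stock = [200.0, 500.0, 800.0]   # initial headcount per stage (illustrative)
for _ in range(10):             # project ten years ahead
    stock = step(stock, P, inflow)
```

Iterating `step` projects the supplied manpower forward; the demand side (via a job coefficient times GDP) would be computed separately and compared against `sum(stock)`.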

1. The random cluster model and a new integration identity

International Nuclear Information System (INIS)

Chen, L C; Wu, F Y

2005-01-01

We evaluate the free energy of the random cluster model at its critical point for 0 < q < 4 when (1/π) cos^{-1}(√q/2) is a rational number. As a by-product, our consideration leads to a closed-form evaluation of the integral 1/(4π^2) ∫_0^{2π} dΘ ∫_0^{2π} dΦ ln[A + B + C − A cosΘ − B cosΦ − C cos(Θ+Φ)] = −ln(2S) + (2/π)[Ti_2(AS) + Ti_2(BS) + Ti_2(CS)], which arises in lattice statistics, where A, B, C ≥ 0 and S = 1/√(AB + BC + CA)
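
The closed-form identity quoted in this record can be spot-checked numerically. The sketch below compares a midpoint-rule evaluation of the double integral against the Ti_2 closed form (the series for Ti_2 is used, valid for arguments of magnitude at most 1, which holds for the test values chosen here):

```python
import math

def Ti2(x, terms=60):
    """Inverse tangent integral Ti_2(x) = sum_{k>=0} (-1)^k x^(2k+1)/(2k+1)^2,
    via its series (valid for |x| <= 1)."""
    return sum((-1) ** k * x ** (2 * k + 1) / (2 * k + 1) ** 2 for k in range(terms))

def lhs(A, B, C, n=400):
    """Midpoint-rule value of 1/(4*pi^2) * double integral of
    ln[A+B+C - A*cos(t) - B*cos(p) - C*cos(t+p)] over [0, 2*pi]^2.
    The integrand has an integrable log singularity at the origin,
    which the midpoint grid avoids."""
    h = 2.0 * math.pi / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        for j in range(n):
            p = (j + 0.5) * h
            total += math.log(A + B + C - A * math.cos(t) - B * math.cos(p)
                              - C * math.cos(t + p))
    return total / (n * n)

def rhs(A, B, C):
    """Closed form: -ln(2S) + (2/pi)*[Ti2(AS) + Ti2(BS) + Ti2(CS)],
    with S = 1/sqrt(AB + BC + CA)."""
    S = 1.0 / math.sqrt(A * B + B * C + C * A)
    return -math.log(2.0 * S) + (2.0 / math.pi) * (Ti2(A * S) + Ti2(B * S) + Ti2(C * S))

lhs_val = lhs(1.0, 1.0, 1.0)
rhs_val = rhs(1.0, 1.0, 1.0)
```

For A = B = C = 1 both sides come out near 0.922, consistent with the known spanning-tree constant of the triangular lattice.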

2. Universality in random-walk models with birth and death

International Nuclear Information System (INIS)

Bender, C.M.; Boettcher, S.; Meisinger, P.N.

1995-01-01

Models of random walks are considered in which walkers are born at one site and die at all other sites. Steady-state distributions of walkers exhibit dimensionally dependent critical behavior as a function of the birth rate. Exact analytical results for a hyperspherical lattice yield a second-order phase transition with a nontrivial critical exponent for all positive dimensions D≠2, 4. Numerical studies of hypercubic and fractal lattices indicate that these exact results are universal. This work elucidates the adsorption transition of polymers at curved interfaces. copyright 1995 The American Physical Society

3. Interpreting parameters in the logistic regression model with random effects

DEFF Research Database (Denmark)

Larsen, Klaus; Petersen, Jørgen Holm; Budtz-Jørgensen, Esben

2000-01-01

interpretation, interval odds ratio, logistic regression, median odds ratio, normally distributed random effects
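
The median odds ratio named in the keywords has a closed form for a normally distributed random intercept: MOR = exp(sqrt(2*sigma^2) * z_0.75), where z_0.75 is the 75th percentile of the standard normal. A minimal sketch:

```python
import math

def median_odds_ratio(sigma2):
    """Median odds ratio for a logistic regression model with a normally
    distributed random intercept of variance sigma2:
    MOR = exp(sqrt(2 * sigma2) * z_0.75)."""
    z75 = 0.6744897501960817  # 75th percentile of the standard normal
    return math.exp(math.sqrt(2.0 * sigma2) * z75)

mor = median_odds_ratio(1.0)  # ~2.6: sizeable cluster-level heterogeneity
```

When the random-effect variance is zero the MOR is exactly 1, i.e. no heterogeneity between clusters.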

4. Geometric Models for Isotropic Random Porous Media: A Review

Directory of Open Access Journals (Sweden)

Helmut Hermann

2014-01-01

Models for random porous media are considered. The models are isotropic both from the local and the macroscopic point of view; that is, the pores have spherical shape or their surface shows piecewise spherical curvature, and there is no macroscopic gradient of any geometrical feature. Both closed-pore and open-pore systems are discussed. The Poisson grain model, the model of hard spheres packing, and the penetrable sphere model are used; variable size distribution of the pores is included. A parameter is introduced which controls the degree of open-porosity. Besides systems built up by a single solid phase, models for porous media with the internal surface coated by a second phase are treated. Volume fraction, surface area, and correlation functions are given explicitly where applicable; otherwise numerical methods for determination are described. Effective medium theory is applied to calculate physical properties for the models such as isotropic elastic moduli, thermal and electrical conductivity, and static dielectric constant. The methods presented are exemplified by applications: small-angle scattering of systems showing fractal-like behavior in limited ranges of linear dimension, optimization of nanoporous insulating materials, and improvement of properties of open-pore systems by atomic layer deposition of a second phase on the internal surface.
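
For the penetrable sphere (Poisson grain) model mentioned here, the mean covered volume fraction has the well-known closed form 1 - exp(-rho*v), with rho the number density and v the single-sphere volume. A minimal sketch (parameter values are illustrative):

```python
import math

def covered_volume_fraction(rho, R):
    """Fully penetrable (Poisson distributed) spheres of radius R at number
    density rho cover, on average, a volume fraction 1 - exp(-rho * v),
    where v = (4/3) * pi * R**3 is the volume of one sphere."""
    v = (4.0 / 3.0) * math.pi * R ** 3
    return 1.0 - math.exp(-rho * v)

phi_solid = covered_volume_fraction(rho=0.5, R=0.6)
phi_pore = 1.0 - phi_solid   # volume fraction of the complementary phase
```

Because the spheres may overlap freely, any target volume fraction between 0 and 1 can be reached by tuning rho, which is what makes this model convenient for graded media.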

5. Rigorously testing multialternative decision field theory against random utility models.

Science.gov (United States)

Berkowitsch, Nicolas A J; Scheibehenne, Benjamin; Rieskamp, Jörg

2014-06-01

Cognitive models of decision making aim to explain the process underlying observed choices. Here, we test a sequential sampling model of decision making, multialternative decision field theory (MDFT; Roe, Busemeyer, & Townsend, 2001), on empirical grounds and compare it against 2 established random utility models of choice: the probit and the logit model. Using a within-subject experimental design, participants in 2 studies repeatedly chose among sets of options (consumer products) described on several attributes. The results of Study 1 showed that all models predicted participants' choices equally well. In Study 2, in which the choice sets were explicitly designed to distinguish the models, MDFT had an advantage in predicting the observed choices. Study 2 further revealed the occurrence of multiple context effects within single participants, indicating an interdependent evaluation of choice options and correlations between different context effects. In sum, the results indicate that sequential sampling models can provide relevant insights into the cognitive process underlying preferential choices and thus can lead to better choice predictions. PsycINFO Database Record (c) 2014 APA, all rights reserved.
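
The logit benchmark used in this record reduces to softmax probabilities over deterministic utilities. A minimal sketch (utility values are illustrative):

```python
import math

def logit_choice_probs(utilities):
    """Multinomial logit: P(i) = exp(V_i) / sum_j exp(V_j)."""
    m = max(utilities)                      # subtract the max for numerical stability
    ex = [math.exp(v - m) for v in utilities]
    s = sum(ex)
    return [e / s for e in ex]

probs = logit_choice_probs([1.0, 2.0, 0.5])  # deterministic utilities of 3 options
```

Because the logit depends only on the options' own utilities, it cannot produce the context effects (attraction, similarity, compromise) that MDFT accommodates, which is what Study 2 exploits.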

6. Gaussian random bridges and a geometric model for information equilibrium

Science.gov (United States)

Mengütürk, Levent Ali

2018-03-01

The paper introduces a class of conditioned stochastic processes that we call Gaussian random bridges (GRBs) and proves some of their properties. Due to the anticipative representation of any GRB as the sum of a random variable and a Gaussian (T , 0) -bridge, GRBs can model noisy information processes in partially observed systems. In this spirit, we propose an asset pricing model with respect to what we call information equilibrium in a market with multiple sources of information. The idea is to work on a topological manifold endowed with a metric that enables us to systematically determine an equilibrium point of a stochastic system that can be represented by multiple points on that manifold at each fixed time. In doing so, we formulate GRB-based information diversity over a Riemannian manifold and show that it is pinned to zero over the boundary determined by Dirac measures. We then define an influence factor that controls the dominance of an information source in determining the best estimate of a signal in the L2-sense. When there are two sources, this allows us to construct information equilibrium as a functional of a geodesic-valued stochastic process, which is driven by an equilibrium convergence rate representing the signal-to-noise ratio. This leads us to derive price dynamics under what can be considered as an equilibrium probability measure. We also provide a semimartingale representation of Markovian GRBs associated with Gaussian martingales and a non-anticipative representation of fractional Brownian random bridges that can incorporate degrees of information coupling in a given system via the Hurst exponent.
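
The simplest Gaussian bridge is the standard Brownian bridge pinned to zero at both ends, which can be simulated through the representation B_t = W_t - (t/T)*W_T. This is an illustrative sketch, not the paper's anticipative GRB representation:

```python
import random, math

def brownian_bridge(T=1.0, n=500, seed=42):
    """Simulate a standard Brownian bridge on [0, T] via B_t = W_t - (t/T) * W_T,
    where W is a discretized Brownian motion with n steps."""
    rng = random.Random(seed)
    dt = T / n
    w = [0.0]
    for _ in range(n):
        w.append(w[-1] + rng.gauss(0.0, math.sqrt(dt)))  # Brownian increments
    wT = w[-1]
    return [w_k - (k * dt / T) * wT for k, w_k in enumerate(w)]

bridge = brownian_bridge()
```

By construction the path starts and ends at zero; a GRB generalizes this by adding an independent random variable to a Gaussian (T, 0)-bridge, so that the endpoint carries the "signal" and the bridge carries the noise.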

7. Calculation of Longitudinal Dispersion Coefficient and Modeling the Pollution Transmission in Rivers (Case studies: Severn and Narew Rivers

Directory of Open Access Journals (Sweden)

A. Parsaie

2017-01-01

empirical formulas and artificial intelligence techniques have been proposed. In this study the LDC is calculated for the Severn River and the Narew River, and selected empirical formulas are assessed for calculating the LDC. Dispersion Routing Method: As mentioned previously, calculating the LDC is the most important step, so the longitudinal dispersion coefficient was first calculated from the concentration profile by the Dispersion Routing Method (DRM). The DRM involves four stages: (1) assume an initial value for the LDC; (2) calculate the concentration profile at the downstream station using the upstream concentration profile and the LDC; (3) compare the calculated profile with the measured profile; (4) if the calculated profile does not adequately cover the measured profile, repeat the process until it does. Numerical Method: The ADE includes two different parts, advection and dispersion. The pure advection term describes transport without any dispersion, and the dispersion term describes dispersion without any transport. The finite volume method was used to discretize the ADE. According to the physical properties of these two terms and the recommendations of researchers, a suitable scheme should be chosen for the numerical solution of each ADE term. Among the finite volume schemes, the QUICKEST scheme was selected to discretize the advection term because it is well suited to modeling pure advection. The QUICKEST scheme is explicit, so its stability condition must be satisfied. The central implicit scheme, which is unconditionally stable, was selected to discretize the dispersion term. Results and Discussion: The longitudinal dispersion coefficients for the Severn River and the Narew River were calculated using the DRM and the empirical formulas.
The results of LDC calculation showed that the minimum and maximum values for the Severn River
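
The four-stage routing loop can be sketched with the classical analytic solution of the advection-dispersion equation for an instantaneous release: the trial LDC whose computed downstream profile best covers the "measured" one is selected. All numbers below are synthetic, not data for the Severn or Narew:

```python
import math

def ade_concentration(x, t, M, u, D):
    """Analytic solution of the 1D advection-dispersion equation for an
    instantaneous release of mass M (per unit area) at x = 0, t = 0:
    C(x, t) = M / sqrt(4*pi*D*t) * exp(-(x - u*t)**2 / (4*D*t))."""
    return M / math.sqrt(4.0 * math.pi * D * t) * \
        math.exp(-(x - u * t) ** 2 / (4.0 * D * t))

# Synthetic "measured" downstream profile generated with a known LDC
u, M, x_down, D_true = 0.5, 100.0, 2000.0, 30.0
times = list(range(2000, 6001, 50))
measured = [ade_concentration(x_down, t, M, u, D_true) for t in times]

def sse(D):
    """Sum of squared errors between a trial profile and the measured one."""
    return sum((ade_concentration(x_down, t, M, u, D) - c) ** 2
               for t, c in zip(times, measured))

# The DRM's iterate-until-it-covers loop, collapsed into a grid search over trial LDCs
best_D = min(range(5, 101, 5), key=sse)
```

Because the synthetic profile was generated with D = 30, the search recovers exactly that value; with real data the minimum of the SSE plays the same role.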

8. Separation of very hydrophobic analytes by micellar electrokinetic chromatography IV. Modeling of the effective electrophoretic mobility from carbon number equivalents and octanol-water partition coefficients.

Science.gov (United States)

Huhn, Carolin; Pyell, Ute

2008-07-11

It is investigated whether the relationships derived within a previously developed optimization scheme for separations in micellar electrokinetic chromatography can be used to model the effective electrophoretic mobilities of analytes that differ strongly in their properties (polarity and type of interaction with the pseudostationary phase). The modeling is based on two parameter sets: (i) carbon number equivalents or octanol-water partition coefficients as analyte descriptors, and (ii) four coefficients describing properties of the separation electrolyte (based on retention data for a homologous series of alkyl phenyl ketones used as reference analytes). The applicability of the proposed model is validated by comparing experimental and calculated effective electrophoretic mobilities. The results demonstrate that the model can effectively be used to predict the effective electrophoretic mobilities of neutral analytes from the determined carbon number equivalents or from octanol-water partition coefficients, provided that the solvation parameters of the analytes of interest are similar to those of the reference analytes.

9. Non-linear Bayesian update of PCE coefficients

KAUST Repository

Litvinenko, Alexander

2014-01-06

Given: a physical system modeled by a PDE or ODE with uncertain coefficient q(ω), and a measurement operator Y(u(q), q), where u(q, ω) is the uncertain solution. Aim: to identify q(ω). The mapping from parameters to observations is usually not invertible, hence this inverse identification problem is generally ill-posed. To identify q(ω) we derive a non-linear Bayesian update from the variational problem associated with conditional expectation. To reduce the cost of the Bayesian update we offer a functional approximation, e.g. polynomial chaos expansion (PCE). New: we apply the Bayesian update to the PCE coefficients of the random coefficient q(ω) (not to the probability density function of q).

10. Non-linear Bayesian update of PCE coefficients

KAUST Repository

Litvinenko, Alexander; Matthies, Hermann G.; Pojonk, Oliver; Rosic, Bojana V.; Zander, Elmar

2014-01-01

Given: a physical system modeled by a PDE or ODE with uncertain coefficient q(ω), and a measurement operator Y(u(q), q), where u(q, ω) is the uncertain solution. Aim: to identify q(ω). The mapping from parameters to observations is usually not invertible, hence this inverse identification problem is generally ill-posed. To identify q(ω) we derive a non-linear Bayesian update from the variational problem associated with conditional expectation. To reduce the cost of the Bayesian update we offer a functional approximation, e.g. polynomial chaos expansion (PCE). New: we apply the Bayesian update to the PCE coefficients of the random coefficient q(ω) (not to the probability density function of q).
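
The idea of updating a coefficient rather than its density can be illustrated in the simplest linear-Gaussian (Kalman-type) scalar case. This is a sketch under strong simplifying assumptions, not the authors' non-linear PCE update; all numbers are invented:

```python
# Prior on the uncertain coefficient q and a single noisy observation y ~ q + noise.
prior_mean, prior_var = 1.0, 0.5
obs_noise_var = 0.1
y = 1.4

# Linear Bayesian (Kalman-type) update acting directly on the coefficient:
gain = prior_var / (prior_var + obs_noise_var)
post_mean = prior_mean + gain * (y - prior_mean)   # updated coefficient
post_var = (1.0 - gain) * prior_var                # reduced uncertainty
```

The posterior mean moves toward the observation and the posterior variance shrinks; the non-linear update in the record generalizes exactly this map to vectors of PCE coefficients.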

11. A Generalized Spatial Correlation Model for 3D MIMO Channels based on the Fourier Coefficients of Power Spectrums

KAUST Repository

2015-05-07

Previous studies have confirmed the adverse impact of fading correlation on the mutual information (MI) of two-dimensional (2D) multiple-input multiple-output (MIMO) systems. More recently, the trend is to enhance the system performance by exploiting the channel’s degrees of freedom in the elevation, which necessitates the derivation and characterization of three-dimensional (3D) channels in the presence of spatial correlation. In this paper, an exact closed-form expression for the Spatial Correlation Function (SCF) is derived for 3D MIMO channels. This novel SCF is developed for a uniform linear array of antennas with nonisotropic antenna patterns. The proposed method resorts to the spherical harmonic expansion (SHE) of plane waves and the trigonometric expansion of Legendre and associated Legendre polynomials. The resulting expression depends on the underlying arbitrary angular distributions and antenna patterns through the Fourier Series (FS) coefficients of power azimuth and elevation spectrums. The novelty of the proposed method lies in the SCF being valid for any 3D propagation environment. The developed SCF determines the covariance matrices at the transmitter and the receiver that form the Kronecker channel model. In order to quantify the effects of correlation on the system performance, the information-theoretic deterministic equivalents of the MI for the Kronecker model are utilized in both mono-user and multi-user cases. Numerical results validate the proposed analytical expressions and elucidate the dependence of the system performance on azimuth and elevation angular spreads and antenna patterns. Some useful insights into the behaviour of MI as a function of downtilt angles are provided. The derived model will help evaluate the performance of correlated 3D MIMO channels in the future.
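
The Kronecker channel model mentioned at the end combines the transmit- and receive-side covariance matrices via a Kronecker product. A minimal sketch with illustrative 2x2 correlation matrices (the correlation values are invented, not derived from the paper's SCF):

```python
def kron(A, B):
    """Kronecker product of two square matrices given as lists of lists."""
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(n * m)]
            for i in range(n * m)]

# Antenna correlation at the transmitter and the receiver (illustrative values)
rho_t, rho_r = 0.7, 0.3
Rt = [[1.0, rho_t], [rho_t, 1.0]]
Rr = [[1.0, rho_r], [rho_r, 1.0]]

# Full spatial correlation matrix of the Kronecker channel model
R = kron(Rt, Rr)
```

Entry (0, 3) of R equals rho_t * rho_r, showing how the model factorizes joint transmit-receive correlation into the two one-sided matrices that the derived SCF populates.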

12. Prediction of time-integrated activity coefficients in PRRT using simulated dynamic PET and a pharmacokinetic model.

Science.gov (United States)

Hardiansyah, Deni; Attarwala, Ali Asgar; Kletting, Peter; Mottaghy, Felix M; Glatting, Gerhard

2017-10-01

To investigate the accuracy of predicted time-integrated activity coefficients (TIACs) in peptide-receptor radionuclide therapy (PRRT) using simulated dynamic PET data and a physiologically based pharmacokinetic (PBPK) model. PBPK parameters were estimated using biokinetic data of 15 patients after injection of (152±15) MBq of 111In-DTPAOC (total peptide amount (5.78±0.25) nmol). True mathematical phantoms of patients (MPPs) were the PBPK model with the estimated parameters. Dynamic PET measurements were simulated as being done after bolus injection of 150 MBq of 68Ga-DOTATATE using the true MPPs. Dynamic PET scans around 35 min p.i. (P1), 4 h p.i. (P2) and the combination of P1 and P2 (P3) were simulated. Each measurement was simulated with four frames of 5 min each and 2 bed positions. PBPK parameters were fitted to the PET data to derive the PET-predicted MPPs. Therapy was simulated assuming an infusion of 5.1 GBq of 90Y-DOTATATE over 30 min in both true and PET-predicted MPPs. TIACs of the simulated therapy were calculated for true MPPs (true TIACs) and predicted MPPs (predicted TIACs), followed by the calculation of variabilities v. For P1 and P2 the population variabilities of kidneys, liver and spleen were acceptable (v < 10%). Treatment planning of PRRT based on dynamic PET data seems possible for the kidneys, liver and spleen using a PBPK model and patient-specific information. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
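
A TIAC is the time integral of the normalized activity curve; for a mono-exponential curve a(t) = a0*exp(-lam*t) it reduces to a0/lam. The sketch below compares the closed form with numerical integration (the 64.1 h half-life of 90Y is the only physical constant used; the curve itself is illustrative, not a PBPK output):

```python
import math

def tiac_monoexp(a0, lam):
    """Closed-form time integral of a(t) = a0 * exp(-lam * t): a0 / lam."""
    return a0 / lam

def tiac_trapezoid(a0, lam, t_end, n=20000):
    """Trapezoidal integration of the same mono-exponential curve on [0, t_end]."""
    dt = t_end / n
    a = lambda t: a0 * math.exp(-lam * t)
    return sum(0.5 * dt * (a(k * dt) + a((k + 1) * dt)) for k in range(n))

lam = math.log(2) / 64.1   # decay constant of 90Y (physical half-life 64.1 h)
analytic = tiac_monoexp(1.0, lam)
numeric = tiac_trapezoid(1.0, lam, t_end=2000.0)
```

In practice the fitted PBPK curves are sums of such exponentials, so the TIAC of each compartment is a sum of a_i/lam_i terms.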

13. Migration of antioxidants from polylactic acid films, a parameter estimation approach: Part I - A model including convective mass transfer coefficient.

Science.gov (United States)

Samsudin, Hayati; Auras, Rafael; Burgess, Gary; Dolan, Kirk; Soto-Valdez, Herlinda

2018-03-01

A two-step solution based on the boundary conditions of Crank's equations for mass transfer in a film was developed. Three driving factors, the diffusion coefficient (D), the partition coefficient (K_p,f) and the convective mass transfer coefficient (h), govern the sorption and/or desorption kinetics of migrants from polymer films. These three parameters were estimated simultaneously. They provide in-depth insight into the physics of a migration process. The first step was used to find the combination of D, K_p,f and h that minimized the sum of squared errors (SSE) between the predicted and actual results. In step 2, an ordinary least squares (OLS) estimation was performed using the proposed analytical solution containing D, K_p,f and h. Three selected migration studies of PLA/antioxidant-based films were used to demonstrate the use of this two-step solution. Additional parameter estimation approaches, such as sequential and bootstrap, were also performed to acquire better knowledge about the kinetics of migration. The proposed model successfully provided the initial guesses for D, K_p,f and h. The h value was determined without performing a specific experiment for it. By determining h together with D, under- or overestimation issues pertaining to a migration process can be avoided since these two parameters are correlated. Copyright © 2017 Elsevier Ltd. All rights reserved.

14. Reconstructing gene regulatory networks from knock-out data using Gaussian Noise Model and Pearson Correlation Coefficient.

Science.gov (United States)

Mohamed Salleh, Faridah Hani; Arif, Shereena Mohd; Zainudin, Suhaila; Firdaus-Raih, Mohd

2015-12-01

A gene regulatory network (GRN) is a large and complex network consisting of interacting elements that, over time, affect each other's state. The dynamics of complex gene regulatory processes are difficult to understand using intuitive approaches alone. To overcome this problem, we propose an algorithm for inferring regulatory interactions from knock-out data using a Gaussian noise model combined with the Pearson Correlation Coefficient (PCC). Several problems relating to GRN construction are outlined in this paper. We demonstrated the ability of our proposed method to (1) predict the presence of regulatory interactions between genes, (2) predict their directionality and (3) predict their states (activation or suppression). The algorithm was applied to networks of 10 and 50 genes from the DREAM3 datasets and networks of 10 genes from the DREAM4 datasets. The predicted networks were evaluated based on AUROC and AUPR. We discovered that high false positive rates were generated by our GRN prediction methods because indirect regulations were wrongly predicted as true relationships. We achieved satisfactory results as the majority of sub-networks achieved AUROC values above 0.5. Copyright © 2015 Elsevier Ltd. All rights reserved.
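
The PCC at the core of the method, with its sign read as activation versus suppression, can be sketched as follows (the toy expression profiles are illustrative, not DREAM data):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Sign convention when reading PCC as a regulatory state:
# positive -> activation, negative -> suppression.
r_activation = pearson([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0])
r_suppression = pearson([1.0, 2.0, 3.0], [3.0, 2.0, 1.0])
```

The high-false-positive problem noted in the abstract follows directly from this measure: if A regulates B and B regulates C, the A and C profiles also correlate, so the indirect pair is scored as if it were a direct edge.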

15. On a model for the prediction of the friction coefficient in mixed lubrication based on a load-sharing concept with measured surface roughness

NARCIS (Netherlands)

Akchurin, Aydar; Bosman, Rob; Lugt, Pieter Martin; van Drogen, Mark

2015-01-01

A new model was developed for the simulation of the friction coefficient in lubricated sliding line contacts. A half-space-based contact algorithm was linked with a numerical elasto-hydrodynamic lubrication solver using the load-sharing concept. The model was compared with an existing asperity-based

16. Joint modeling of ChIP-seq data via a Markov random field model

NARCIS (Netherlands)

Bao, Yanchun; Vinciotti, Veronica; Wit, Ernst; 't Hoen, Peter A C

Chromatin ImmunoPrecipitation-sequencing (ChIP-seq) experiments have now become routine in biology for the detection of protein-binding sites. In this paper, we present a Markov random field model for the joint analysis of multiple ChIP-seq experiments. The proposed model naturally accounts for

17. An improved export coefficient model to estimate non-point source phosphorus pollution risks under complex precipitation and terrain conditions.

Science.gov (United States)

Cheng, Xian; Chen, Liding; Sun, Ranhao; Jing, Yongcai

2018-05-15

To control non-point source (NPS) pollution, it is important to estimate NPS pollution exports and identify sources of pollution. Precipitation and terrain have large impacts on the export and transport of NPS pollutants. We established an improved export coefficient model (IECM) to estimate the amount of agricultural and rural NPS total phosphorus (TP) exported from the Luanhe River Basin (LRB) in northern China. The TP concentrations of rivers from 35 selected catchments in the LRB were used to test the model's explanation capacity and accuracy. The simulation results showed that, in 2013, the average TP export was 57.20 t at the catchment scale. The mean TP export intensity in the LRB was 289.40 kg/km², which was much higher than those of other basins in China. In the LRB topographic regions, the TP export intensity was the highest in the south Yanshan Mountains and was followed by the plain area, the north Yanshan Mountains, and the Bashang Plateau. Among the three pollution categories, the contribution ratios to TP export were, from high to low, the rural population (59.44%), livestock husbandry (22.24%), and land-use types (18.32%). Among all ten pollution sources, the contribution ratios from the rural population (59.44%), pigs (14.40%), and arable land (10.52%) ranked as the top three sources. This study provides information that decision makers and planners can use to develop sustainable measures for the prevention and control of NPS pollution in semi-arid regions.
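
An export coefficient model is, at its core, a weighted sum over pollution sources: total load = sum of (export coefficient) x (source magnitude). A minimal sketch with invented coefficients and magnitudes, not the IECM values calibrated for the Luanhe River Basin:

```python
# source: (export coefficient, source magnitude), in arbitrary consistent units
sources = {
    "rural_population": (0.25, 120000.0),
    "livestock":        (0.90, 15000.0),
    "arable_land":      (0.45, 30000.0),
}

# Total phosphorus export and per-source contribution shares
tp_export = sum(coef * size for coef, size in sources.values())
shares = {name: coef * size / tp_export for name, (coef, size) in sources.items()}
```

The improved model in the record additionally scales each term by precipitation- and terrain-dependent factors, but the source-attribution logic (the contribution ratios reported above) is exactly this kind of share computation.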

18. Bayesian Hierarchical Random Effects Models in Forensic Science

Directory of Open Access Journals (Sweden)

Colin G. G. Aitken

2018-04-01

Statistical modeling of the evaluation of evidence with the use of the likelihood ratio has a long history. It dates from the Dreyfus case at the end of the nineteenth century through the work at Bletchley Park in the Second World War to the present day. The development received a significant boost in 1977 with a seminal work by Dennis Lindley which introduced a Bayesian hierarchical random effects model for the evaluation of evidence with an example of refractive index measurements on fragments of glass. Many models have been developed since then. The methods have now been sufficiently well-developed and have become so widespread that it is timely to try and provide a software package to assist in their implementation. With that in mind, a project (SAILR: Software for the Analysis and Implementation of Likelihood Ratios) was funded by the European Network of Forensic Science Institutes through their Monopoly programme to develop a software package for use by forensic scientists world-wide that would assist in the statistical analysis and implementation of the approach based on likelihood ratios. It is the purpose of this document to provide a short review of a small part of this history. The review also provides a background, or landscape, for the development of some of the models within the SAILR package; references to SAILR are made as appropriate.

19. Bayesian Hierarchical Random Effects Models in Forensic Science.

Science.gov (United States)

Aitken, Colin G G

2018-01-01

Statistical modeling of the evaluation of evidence with the use of the likelihood ratio has a long history. It dates from the Dreyfus case at the end of the nineteenth century through the work at Bletchley Park in the Second World War to the present day. The development received a significant boost in 1977 with a seminal work by Dennis Lindley which introduced a Bayesian hierarchical random effects model for the evaluation of evidence with an example of refractive index measurements on fragments of glass. Many models have been developed since then. The methods have now been sufficiently well-developed and have become so widespread that it is timely to try and provide a software package to assist in their implementation. With that in mind, a project (SAILR: Software for the Analysis and Implementation of Likelihood Ratios) was funded by the European Network of Forensic Science Institutes through their Monopoly programme to develop a software package for use by forensic scientists world-wide that would assist in the statistical analysis and implementation of the approach based on likelihood ratios. It is the purpose of this document to provide a short review of a small part of this history. The review also provides a background, or landscape, for the development of some of the models within the SAILR package, and references to SAILR are made as appropriate.

20. Percolation for a model of statistically inhomogeneous random media

International Nuclear Information System (INIS)

Quintanilla, J.; Torquato, S.

1999-01-01

We study clustering and percolation phenomena for a model of statistically inhomogeneous two-phase random media, including functionally graded materials. This model consists of inhomogeneous fully penetrable (Poisson distributed) disks and can be constructed for any specified variation of volume fraction. We quantify the transition zone in the model, defined by the frontier of the cluster of disks which are connected to the disk-covered portion of the model, by defining the coastline function and correlation functions for the coastline. We find that the behavior of these functions becomes largely independent of the specific choice of grade in volume fraction as the separation of length scales becomes large. We also show that the correlation function behaves in a manner similar to that of fractal Brownian motion. Finally, we study fractal characteristics of the frontier itself and compare to similar properties for two-dimensional percolation on a lattice. In particular, we show that the average location of the frontier appears to be related to the percolation threshold for homogeneous fully penetrable disks. copyright 1999 American Institute of Physics
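The clustering notion underlying this percolation model can be sketched with a simple union-find pass: fully penetrable disks are "connected" whenever their centers lie within one diameter of each other. This toy version handles only the homogeneous case, not the graded volume fractions studied in the paper.

```python
import math
from collections import Counter

def largest_cluster(centers, r):
    # Fully penetrable disks of radius r overlap ("connect") when their
    # centers are closer than one diameter; clusters found via union-find.
    n = len(centers)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(centers[i], centers[j]) < 2 * r:
                parent[find(i)] = find(j)  # union the two clusters

    return max(Counter(find(i) for i in range(n)).values())

# Three disks: the first two overlap, the third is isolated.
print(largest_cluster([(0, 0), (1, 0), (5, 5)], 0.6))  # 2
```

Tracking how the largest cluster grows as the disk intensity increases is the standard numerical route to the percolation threshold mentioned at the end of the abstract.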

1. Hedonic travel cost and random utility models of recreation

Energy Technology Data Exchange (ETDEWEB)

Pendleton, L. [Univ. of Southern California, Los Angeles, CA (United States); Mendelsohn, R.; Davis, E.W. [Yale Univ., New Haven, CT (United States). School of Forestry and Environmental Studies

1998-07-09

Micro-economic theory began as an attempt to describe, predict and value the demand and supply of consumption goods. Quality was largely ignored at first, but economists have started to address quality within the theory of demand and specifically the question of site quality, which is an important component of land management. This paper demonstrates that hedonic and random utility models emanate from the same utility theoretical foundation, although they make different estimation assumptions. Using a theoretically consistent comparison, both approaches are applied to examine the quality of wilderness areas in the Southeastern US. Data were collected on 4778 visits to 46 trails in 20 different forest areas near the Smoky Mountains. Visitor data came from permits and an independent survey. The authors limited the data set to visitors from within 300 miles of the North Carolina and Tennessee border in order to focus the analysis on single purpose trips. When consistently applied, both models lead to results with similar signs but different magnitudes. Because the two models are equally valid, recreation studies should continue to use both models to value site quality. Further, practitioners should be careful not to make simplifying a priori assumptions which limit the effectiveness of both techniques.

2. Droplet localization in the random XXZ model and its manifestations

Science.gov (United States)

Elgart, A.; Klein, A.; Stolz, G.

2018-01-01

We examine many-body localization properties for the eigenstates that lie in the droplet sector of the random-field spin-1/2 XXZ chain. These states satisfy a basic single cluster localization property (SCLP), derived in Elgart et al (2018 J. Funct. Anal. (in press)). This leads to many consequences, including dynamical exponential clustering, non-spreading of information under the time evolution, and a zero velocity Lieb-Robinson bound. Since SCLP is only applicable to the droplet sector, our definitions and proofs do not rely on knowledge of the spectral and dynamical characteristics of the model outside this regime. Rather, to allow for a possible mobility transition, we adapt the notion of restricting the Hamiltonian to an energy window from the single particle setting to the many body context.

3. [Critical of the additive model of the randomized controlled trial].

Science.gov (United States)

Boussageon, Rémy; Gueyffier, François; Bejan-Angoulvant, Theodora; Felden-Dominiak, Géraldine

2008-01-01

Randomized, double-blind, placebo-controlled clinical trials are currently the best way to demonstrate the clinical effectiveness of drugs. Its methodology relies on the method of difference (John Stuart Mill), through which the observed difference between two groups (drug vs placebo) can be attributed to the pharmacological effect of the drug being tested. However, this additive model can be questioned in the event of statistical interactions between the pharmacological and the placebo effects. Evidence in different domains has shown that the placebo effect can influence the effect of the active principle. This article evaluates the methodological, clinical and epistemological consequences of this phenomenon. Topics treated include extrapolating results, accounting for heterogeneous results, demonstrating the existence of several factors in the placebo effect, the necessity to take these factors into account for given symptoms or pathologies, as well as the problem of the "specific" effect.

4. Stochastic equilibria of an asset pricing model with heterogeneous beliefs and random dividends

NARCIS (Netherlands)

Zhu, M.; Wang, D.; Guo, M.

2011-01-01

We investigate dynamical properties of a heterogeneous agent model with random dividends and further study the relationship between dynamical properties of the random model and those of the corresponding deterministic skeleton, which is obtained by setting the random dividends at their constant mean.

5. Multiscale model of short cracks in a random polycrystalline aggregate

International Nuclear Information System (INIS)

Simonovski, I.; Cizelj, L.; Petric, Z.

2006-01-01

A plane-strain finite element crystal plasticity model of a microstructurally small stationary crack emanating at a surface grain in a 316L stainless steel is proposed. The model, consisting of 212 randomly shaped, sized and oriented grains, is loaded monotonically in uniaxial tension to a maximum load of 1.12 Rp0.2 (280 MPa). The influence that a random grain structure imposes on a Stage I crack is assessed by calculating the crack tip opening (CTOD) and sliding displacements (CTSD) for single crystal as well as for polycrystal models, considering also different crystallographic orientations. In the single crystal case the CTOD and CTSD may differ by more than one order of magnitude. Near the crack tip slip is activated on all the slip planes, whereby only two are active in the rest of the model. The maximum CTOD is directly related to the maximal Schmid factors. For the more complex polycrystal cases it is shown that certain crystallographic orientations result in a cluster of soft grains around the crack-containing grain. In these cases the crack tip can become a part of the localized strain, resulting in a large CTOD value. This effect, resulting from the overall grain orientations and sizes, can have a greater impact on the CTOD than the local grain orientation. On the other hand, when a localized soft response is formed away from the crack, the localized strain does not affect the crack tip directly, resulting in a small CTOD value. The resulting difference in CTOD can be up to a factor of 4, depending upon the crystallographic set. Grains as far away as 6 times the crack length significantly influence the crack tip parameters. It was also found that a larger crack-containing grain tends to increase the CTOD. Finally, a smaller than expected drop in the CTOD (12.7%) was obtained as the crack approached the grain boundary. This could be due to the assumption of an unchanged crack direction, only monotonic loading and simplified grain boundary modelling. (author)

6. Measurement model choice influenced randomized controlled trial results.

Science.gov (United States)

Gorter, Rosalie; Fox, Jean-Paul; Apeldoorn, Adri; Twisk, Jos

2016-11-01

In randomized controlled trials (RCTs), outcome variables are often patient-reported outcomes measured with questionnaires. Ideally, all available item information is used for score construction, which requires an item response theory (IRT) measurement model. However, in practice, the classical test theory measurement model (sum scores) is mostly used, and differences between response patterns leading to the same sum score are ignored. The enhanced differentiation between scores with IRT enables more precise estimation of individual trajectories over time and group effects. The objective of this study was to show the advantages of using IRT scores instead of sum scores when analyzing RCTs. Two studies are presented, a real-life RCT, and a simulation study. Both IRT and sum scores are used to measure the construct and are subsequently used as outcomes for effect calculation. The bias in RCT results is conditional on the measurement model that was used to construct the scores. A bias in estimated trend of around one standard deviation was found when sum scores were used, where IRT showed negligible bias. Accurate statistical inferences are made from an RCT study when using IRT to estimate construct measurements. The use of sum scores leads to incorrect RCT results. Copyright © 2016 Elsevier Inc. All rights reserved.
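The core point, that response patterns with the same sum score can carry different information, can be illustrated with a toy two-parameter logistic (2PL) model; the item parameters below are hypothetical and not from the study.

```python
import math

# Toy 2PL illustration (hypothetical item parameters): two response
# patterns with the same sum score can imply different latent-trait
# estimates, a distinction that sum scoring throws away.
a = [0.5, 2.0]  # item discriminations (hypothetical)
b = [0.0, 0.0]  # item difficulties (hypothetical)

def p_correct(theta, j):
    return 1.0 / (1.0 + math.exp(-a[j] * (theta - b[j])))

def loglik(theta, pattern):
    return sum(math.log(p if x else 1.0 - p)
               for j, x in enumerate(pattern)
               for p in [p_correct(theta, j)])

grid = [t / 100.0 for t in range(-300, 301)]

def mle(pattern):
    # crude grid-search maximum-likelihood estimate of the latent trait
    return max(grid, key=lambda t: loglik(t, pattern))

# Both patterns have sum score 1, yet the ML trait estimates differ:
theta_weak_item = mle((1, 0))    # credit on the weakly discriminating item
theta_strong_item = mle((0, 1))  # credit on the strongly discriminating item
```

Because the second item discriminates more sharply, answering it correctly pulls the trait estimate above zero, while the reversed pattern pulls it below, even though a sum-score analysis treats the two respondents as identical.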

7. Hindrance Velocity Model for Phase Segregation in Suspensions of Poly-dispersed Randomly Oriented Spheroids

Science.gov (United States)

Faroughi, S. A.; Huber, C.

2015-12-01

Crystal settling and bubble migration in magmas have significant effects on the physical and chemical evolution of magmas. The rate of phase segregation is controlled by the force balance that governs the migration of particles suspended in the melt. The relative velocity of a single particle or bubble in a quiescent infinite fluid (melt) is well characterized; however, the interplay between particles or bubbles in suspensions and emulsions and its effect on their settling/rising velocity remains poorly quantified. We propose a theoretical model for the hindered velocity of non-Brownian emulsions of nondeformable droplets, and suspensions of spherical solid particles in the creeping flow regime. The model is based on three sets of hydrodynamic corrections: two on the drag coefficient experienced by each particle to account for both return flow and Smoluchowski effects and a correction on the mixture rheology to account for nonlocal interactions between particles. The model is then extended for mono-disperse non-spherical solid particles that are randomly oriented. The non-spherical particles are idealized as spheroids and characterized by their aspect ratio. The poly-disperse nature of natural suspensions is then taken into consideration by introducing an effective volume fraction of particles for each class of mono-disperse particles sizes. Our model is tested against new and published experimental data over a wide range of particle volume fraction and viscosity ratios between the constituents of dispersions. We find an excellent agreement between our model and experiments. We also show two significant applications for our model: (1) We demonstrate that hindered settling can increase mineral residence time by up to an order of magnitude in convecting magma chambers. (2) We provide a model to correct for particle interactions in the conventional hydrometer test to estimate the particle size distribution in soils. Our model offers a greatly improved agreement with
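The authors' hindrance function is not reproduced in the abstract; as a stand-in, the widely used Richardson-Zaki correlation captures the same qualitative effect, namely that settling slows as the particle volume fraction rises.

```python
def hindered_velocity(u_stokes, phi, n=4.65):
    # Richardson-Zaki correlation, used here as a generic stand-in for the
    # paper's hindrance model (not the authors' function): settling velocity
    # decays as (1 - phi)**n with particle volume fraction phi; n ~ 4.65
    # applies in the creeping-flow regime.
    return u_stokes * (1.0 - phi) ** n

# A 30 vol% suspension settles roughly 5x slower than an isolated particle:
print(hindered_velocity(1.0, 0.30))  # ~0.19
```

A factor-of-five slowdown at moderate volume fractions is exactly the scale of effect the abstract invokes when arguing that hindered settling can lengthen mineral residence times in magma chambers.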

8. Ion Exchange Distribution Coefficient Tests and Computer Modeling at High Ionic Strength Supporting Technetium Removal Resin Maturation

Energy Technology Data Exchange (ETDEWEB)

Nash, Charles A. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Hamm, L. Larry [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Smith, Frank G. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); McCabe, Daniel J. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

2014-12-19

The primary treatment of the tank waste at the DOE Hanford site will be done in the Waste Treatment and Immobilization Plant (WTP) that is currently under construction. The baseline plan for this facility is to treat the waste, splitting it into High Level Waste (HLW) and Low Activity Waste (LAW). Both waste streams are then separately vitrified as glass and poured into canisters for disposition. The LAW glass will be disposed onsite in the Integrated Disposal Facility (IDF). There are currently no plans to treat the waste to remove technetium, so its disposition path is the LAW glass. Due to the water solubility properties of pertechnetate and long half-life of 99Tc, effective management of 99Tc is important to the overall success of the Hanford River Protection Project mission. To achieve the full target WTP throughput, additional LAW immobilization capacity is needed, and options are being explored to immobilize the supplemental LAW portion of the tank waste. Removal of 99Tc, followed by off-site disposal, would eliminate a key risk contributor for the IDF Performance Assessment (PA) for supplemental waste forms, and has potential to reduce treatment and disposal costs. Washington River Protection Solutions (WRPS) is developing some conceptual flow sheets for supplemental LAW treatment and disposal that could benefit from technetium removal. One of these flowsheets will specifically examine removing 99Tc from the LAW feed stream to supplemental immobilization. To enable an informed decision regarding the viability of technetium removal, further maturation of available technologies is being performed. This report contains results of experimental ion exchange distribution coefficient testing and computer modeling using the resin SuperLig® 639a to selectively remove perrhenate from high ionic strength simulated LAW. It is advantageous to operate at higher concentration in order to treat the waste

9. Osmotic virial coefficients for model protein and colloidal solutions: Importance of ensemble constraints in the analysis of light scattering data

Science.gov (United States)

Siderius, Daniel W.; Krekelberg, William P.; Roberts, Christopher J.; Shen, Vincent K.

2012-05-01

Protein-protein interactions in solution may be quantified by the osmotic second virial coefficient (OSVC), which can be measured by various experimental techniques including light scattering. Analysis of Rayleigh light scattering measurements from such experiments requires identification of a scattering volume and the thermodynamic constraints imposed on that volume, i.e., the statistical mechanical ensemble in which light scattering occurs. Depending on the set of constraints imposed on the scattering volume, one can obtain either an apparent OSVC, A2,app, or the true thermodynamic OSVC, B22^osm, that is rigorously defined in solution theory [M. A. Blanco, E. Sahin, Y. Li, and C. J. Roberts, J. Chem. Phys. 134, 225103 (2011), 10.1063/1.3596726]. However, it is unclear to what extent A2,app and B22^osm differ, which may have implications on the physical interpretation of OSVC measurements from light scattering experiments. In this paper, we use the multicomponent hard-sphere model and a well-known equation of state to directly compare A2,app and B22^osm. Our results from the hard-sphere equation of state indicate that A2,app underestimates B22^osm, but in a systematic manner that may be explained using fundamental thermodynamic expressions for the two OSVCs. The difference between A2,app and B22^osm may be quantitatively significant, but may also be obscured in experimental application by statistical uncertainty or non-steric interactions. Consequently, the two OSVCs that arise in the analysis of light scattering measurements do formally differ, but in a manner that may not be detectable in actual application.
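For reference, the hard-sphere model used for the comparison has an exact second virial coefficient, equal to four times the particle volume, which is what makes it a convenient benchmark for checking OSVC definitions.

```python
import math

def b2_hard_sphere(sigma):
    # Exact second virial coefficient of hard spheres of diameter sigma:
    # B2 = (2*pi/3) * sigma**3, i.e. four times the particle volume.
    return (2.0 * math.pi / 3.0) * sigma ** 3

sigma = 1.0
v_particle = (math.pi / 6.0) * sigma ** 3
print(round(b2_hard_sphere(sigma) / v_particle, 12))  # 4.0
```

Any estimator of the OSVC applied to a hard-sphere system should recover this value in the dilute limit, which is the kind of consistency check the paper performs on A2,app.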

10. A Bayesian random effects discrete-choice model for resource selection: Population-level selection inference

Science.gov (United States)

Thomas, D.L.; Johnson, D.; Griffith, B.

2006-01-01

Modeling the probability of use of land units characterized by discrete and continuous measures, we present a Bayesian random-effects model to assess resource selection. This model provides simultaneous estimation of both individual- and population-level selection. Deviance information criterion (DIC), a Bayesian alternative to AIC that is sample-size specific, is used for model selection. Aerial radiolocation data from 76 adult female caribou (Rangifer tarandus) and calf pairs during 1 year on an Arctic coastal plain calving ground were used to illustrate models and assess population-level selection of landscape attributes, as well as individual heterogeneity of selection. Landscape attributes included elevation, NDVI (a measure of forage greenness), and land cover-type classification. Results from the first of a 2-stage model-selection procedure indicated that there is substantial heterogeneity among cow-calf pairs with respect to selection of the landscape attributes. In the second stage, selection of models with heterogeneity included indicated that at the population-level, NDVI and land cover class were significant attributes for selection of different landscapes by pairs on the calving ground. Population-level selection coefficients indicate that the pairs generally select landscapes with higher levels of NDVI, but the relationship is quadratic. The highest rate of selection occurs at values of NDVI less than the maximum observed. Results for land cover-class selection coefficients indicate that wet sedge, moist sedge, herbaceous tussock tundra, and shrub tussock tundra are selected at approximately the same rate, while alpine and sparsely vegetated landscapes are selected at a lower rate. Furthermore, the variability in selection by individual caribou for moist sedge and sparsely vegetated landscapes is large relative to the variability in selection of other land cover types. The example analysis illustrates that, while sometimes computationally intense, a

11. Discriminative Random Field Models for Subsurface Contamination Uncertainty Quantification

Science.gov (United States)

Arshadi, M.; Abriola, L. M.; Miller, E. L.; De Paolis Kaluza, C.

2017-12-01

Application of flow and transport simulators for prediction of the release, entrapment, and persistence of dense non-aqueous phase liquids (DNAPLs) and associated contaminant plumes is a computationally intensive process that requires specification of a large number of material properties and hydrologic/chemical parameters. Given its computational burden, this direct simulation approach is particularly ill-suited for quantifying both the expected performance and uncertainty associated with candidate remediation strategies under real field conditions. Prediction uncertainties primarily arise from limited information about contaminant mass distributions, as well as the spatial distribution of subsurface hydrologic properties. Application of direct simulation to quantify uncertainty would, thus, typically require simulating multiphase flow and transport for a large number of permeability and release scenarios to collect statistics associated with remedial effectiveness, a computationally prohibitive process. The primary objective of this work is to develop and demonstrate a methodology that employs measured field data to produce equi-probable stochastic representations of a subsurface source zone that capture the spatial distribution and uncertainty associated with key features that control remediation performance (i.e., permeability and contamination mass). Here we employ probabilistic models known as discriminative random fields (DRFs) to synthesize stochastic realizations of initial mass distributions consistent with known, and typically limited, site characterization data. Using a limited number of full scale simulations as training data, a statistical model is developed for predicting the distribution of contaminant mass (e.g., DNAPL saturation and aqueous concentration) across a heterogeneous domain. Monte-Carlo sampling methods are then employed, in conjunction with the trained statistical model, to generate realizations conditioned on measured borehole data

12. Models for randomly distributed nanoscopic domains on spherical vesicles

Science.gov (United States)

Anghel, Vinicius N. P.; Bolmatov, Dima; Katsaras, John

2018-06-01

The existence of lipid domains in the plasma membrane of biological systems has proven controversial, primarily due to their nanoscopic size—a length scale difficult to interrogate with most commonly used experimental techniques. Scattering techniques have recently proven capable of studying nanoscopic lipid domains populating spherical vesicles. However, the development of analytical methods capable of predicting and analyzing domain pair correlations from such experiments has not kept pace. Here, we developed models for the random distribution of monodisperse, circular nanoscopic domains averaged on the surface of a spherical vesicle. Specifically, the models take into account (i) intradomain correlations corresponding to form factors and interdomain correlations corresponding to pair distribution functions, and (ii) the analytical computation of interdomain correlations for cases of two and three domains on a spherical vesicle. In the case of more than three domains, these correlations are treated either by Monte Carlo simulations or by spherical analogs of the Ornstein-Zernike and Percus-Yevick (PY) equations. Importantly, the spherical analog of the PY equation works best in the case of nanoscopic size domains, a length scale that is mostly inaccessible by experimental approaches such as, for example, fluorescent techniques and optical microscopies. The analytical form factors and structure factors of nanoscopic domains populating a spherical vesicle provide a new and important framework for the quantitative analysis of experimental data from commonly studied phase-separated vesicles used in a wide range of biophysical studies.
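A minimal Monte Carlo version of the "more than three domains" case can be sketched as rejection sampling: draw area-uniform points on the unit sphere and keep a candidate center only if every existing center is far enough away for the circular caps not to overlap. The cap angular radius below is an arbitrary example value, not one from the paper.

```python
import math, random

def random_point_on_sphere(rng):
    # Area-uniform point on the unit sphere: z uniform in [-1, 1],
    # azimuth uniform in [0, 2*pi).
    z = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    s = math.sqrt(1.0 - z * z)
    return (s * math.cos(phi), s * math.sin(phi), z)

def place_domains(n, cap_angle, rng, max_tries=10000):
    # Hard-cap Monte Carlo: centers must be at least 2*cap_angle apart
    # so circular domains of angular radius cap_angle do not overlap.
    centers = []
    for _ in range(max_tries):
        if len(centers) == n:
            break
        p = random_point_on_sphere(rng)
        if all(math.acos(max(-1.0, min(1.0, sum(a * b for a, b in zip(p, q)))))
               >= 2.0 * cap_angle for q in centers):
            centers.append(p)
    return centers

rng = random.Random(0)
doms = place_domains(5, 0.3, rng)  # 5 non-overlapping caps of radius 0.3 rad
```

Histogramming the pairwise angular separations over many such realizations yields the interdomain pair distribution function that the paper's analytical PY-type expressions are meant to reproduce.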

13. Coefficient Alpha: A Reliability Coefficient for the 21st Century?

Science.gov (United States)

Yang, Yanyun; Green, Samuel B.

2011-01-01

Coefficient alpha is almost universally applied to assess reliability of scales in psychology. We argue that researchers should consider alternatives to coefficient alpha. Our preference is for structural equation modeling (SEM) estimates of reliability because they are informative and allow for an empirical evaluation of the assumptions…
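For comparison with the SEM-based estimates the authors prefer, coefficient alpha itself is a short computation over item variances and the total-score variance; the toy data below are made up for illustration.

```python
def cronbach_alpha(items):
    # items: one list of scores per item, all over the same respondents.
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k, n = len(items), len(items[0])

    def var(xs):  # unbiased sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1.0 - sum(var(it) for it in items) / var(totals))

# Three items, four respondents (toy data):
items = [[1, 2, 3, 4], [2, 2, 4, 4], [1, 3, 3, 5]]
alpha = cronbach_alpha(items)
print(round(alpha, 4))  # 0.9333
```

The SEM-based alternatives the abstract advocates relax alpha's tau-equivalence assumption, but this closed form remains the baseline against which they are compared.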

14. Premium Pricing of Liability Insurance Using Random Sum Model

OpenAIRE

Kartikasari, Mujiati Dwi

2017-01-01

Premium pricing is one of the important activities in insurance. The nonlife insurance premium is calculated from the expected value of historical claims data. The historical claims are collected so that they form a sum of a random number of independent terms, which is called a random sum. In premium pricing using a random sum, the claim frequency distribution and the claim severity distribution are combined. The combination of these distributions is called a compound distribution. By using liability claim insurance data, we ...
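Under the compound (random-sum) model the pure premium is E[S] = E[N] · E[X], the expected claim count times the expected claim size. A small simulation with Poisson counts and exponential severities (hypothetical parameters, not the paper's data) recovers this value.

```python
import math, random

def sample_poisson(lam, rng):
    # Knuth's method for sampling N ~ Poisson(lam), lam > 0.
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def simulated_pure_premium(lam, mean_severity, n_sims, rng):
    # Aggregate claim S = X1 + ... + XN (a random sum); the pure premium
    # is the Monte Carlo estimate of E[S].
    total = 0.0
    for _ in range(n_sims):
        n = sample_poisson(lam, rng)
        total += sum(rng.expovariate(1.0 / mean_severity) for _ in range(n))
    return total / n_sims

rng = random.Random(42)
premium = simulated_pure_premium(2.0, 100.0, 20000, rng)
# Theoretical pure premium: E[N] * E[X] = 2.0 * 100.0 = 200.0
```

In practice a loading for risk and expenses is added on top of this expected value; the simulation only establishes the compound-distribution mean.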

15. A variable-coefficient unstable nonlinear Schroedinger model for the electron beam plasmas and Rayleigh-Taylor instability in nonuniform plasmas: Solutions and observable effects

International Nuclear Information System (INIS)

Gao Yitian; Tian Bo

2003-01-01

A variable-coefficient unstable nonlinear Schroedinger model is hereby investigated, which arises in such applications as electron-beam plasma waves and the Rayleigh-Taylor instability in nonuniform plasmas. With computerized symbolic computation, families of exact analytic dark- and bright-soliton-like solutions are found, of which some previously published solutions turn out to be special cases. Similarity solutions also come out, which are expressible in terms of the elliptic functions and the second Painleve transcendent. Some observable effects caused by the variable coefficient are predicted, which may be detected in future space or laboratory experiments on plasmas with a nonuniform background.

16. A new statistical method for transfer coefficient calculations in the framework of the general multiple-compartment model of transport for radionuclides in biological systems.

Science.gov (United States)

Garcia, F; Arruda-Neto, J D; Manso, M V; Helene, O M; Vanin, V R; Rodriguez, O; Mesa, J; Likhachev, V P; Filho, J W; Deppman, A; Perez, G; Guzman, F; de Camargo, S P

1999-10-01

A new and simple statistical procedure (STATFLUX) for the calculation of transfer coefficients of radionuclide transport to animals and plants is proposed. The method is based on the general multiple-compartment model, which uses a system of linear equations involving geometrical volume considerations. By using experimentally available curves of radionuclide concentrations versus time, for each animal compartment (organs), flow parameters were estimated by employing a least-squares procedure, whose consistency is tested. Some numerical results are presented in order to compare the STATFLUX transfer coefficients with those from other works and experimental data.

17. A new statistical method for transfer coefficient calculations in the framework of the general multiple-compartment model of transport for radionuclides in biological systems

International Nuclear Information System (INIS)

Garcia, F.; Manso, M.V.; Rodriguez, O.; Mesa, J.; Arruda-Neto, J.D.T.; Helene, O.M.; Vanin, V.R.; Likhachev, V.P.; Pereira Filho, J.W.; Deppman, A.; Perez, G.; Guzman, F.; Camargo, S.P. de

1999-01-01

A new and simple statistical procedure (STATFLUX) for the calculation of transfer coefficients of radionuclide transport to animals and plants is proposed. The method is based on the general multiple-compartment model, which uses a system of linear equations involving geometrical volume considerations. By using experimentally available curves of radionuclide concentrations versus time, for each animal compartment (organs), flow parameters were estimated by employing a least-squares procedure, whose consistency is tested. Some numerical results are presented in order to compare the STATFLUX transfer coefficients with those from other works and experimental data. (author)

18. Standard model Wilson coefficients for c → ul⁺l⁻ transitions at next-to-leading order

Energy Technology Data Exchange (ETDEWEB)

Boer, Stefan de [TU Dortmund (Germany); Mueller, Bastian; Seidel, Dirk [Uni Siegen (Germany)

2016-07-01

The standard theoretical framework to deal with exclusive, weak decays of heavy mesons is the so-called weak effective Hamiltonian. It involves the short-distance Wilson coefficients, which depend on the renormalization scale μ. For specific calculations one has to evolve the Wilson coefficients down from the electroweak scale μ_W to the typical mass scale of the decay under consideration. This is done by solving a renormalization group equation for the effective operator basis. In this talk the results of a consistent two-step running of the c → ul⁺l⁻ Wilson coefficients are presented. This running involves the intermediate scale μ_b (with μ_W > μ_b > μ_c) where the bottom quark is integrated out. All the matching coefficients and anomalous dimensions are taken to the required order by generalizing and extending results from b → s or s → d transitions available in the literature.

19. Application of the resonating Hartree-Fock random phase approximation to the Lipkin model

International Nuclear Information System (INIS)

Nishiyama, S.; Ishida, K.; Ido, M.

1996-01-01

We have applied the resonating Hartree-Fock (Res-HF) approximation to the exactly solvable Lipkin model by utilizing a newly developed orbital-optimization algorithm. The Res-HF wave function was superposed by two Slater determinants (S-dets) which give two corresponding local energy minima of monopole ''deformations''. The self-consistent Res-HF calculation gives an excellent ground-state correlation energy. There exist excitations due to small vibrational fluctuations of the orbitals and mixing coefficients around their stationary values. They are described by a new approximation called the resonating Hartree-Fock random phase approximation (Res-HF RPA). Matrices of the second-order variation of the Res-HF energy have the same structures as those of the Res-HF RPA's matrices. The quadratic steepest descent of the Res-HF energy in the orbital optimization is considered to include certainly both effects of RPA-type fluctuations up to higher orders and their mode-mode couplings. It is a very important and interesting task to apply the Res-HF RPA to the Lipkin model with the use of the stationary values and to prove the above argument. It turns out that the Res-HF RPA works far better than the usual HF RPA and the renormalized one. We also show some important features of the Res-HF RPA. (orig.)

20. Automatic lung tumor segmentation on PET/CT images using fuzzy Markov random field model.

Science.gov (United States)

Guo, Yu; Feng, Yuanming; Sun, Jian; Zhang, Ning; Lin, Wang; Sa, Yu; Wang, Ping

2014-01-01

The combination of positron emission tomography (PET) and CT images provides complementary functional and anatomical information of human tissues and it has been used for better tumor volume definition of lung cancer. This paper proposed a robust method for automatic lung tumor segmentation on PET/CT images. The new method is based on a fuzzy Markov random field (MRF) model. The combination of PET and CT image information is achieved by using a proper joint posterior probability distribution of observed features in the fuzzy MRF model which performs better than the commonly used Gaussian joint distribution. In this study, the PET and CT simulation images of 7 non-small cell lung cancer (NSCLC) patients were used to evaluate the proposed method. Tumor segmentations with the proposed method and manual method by an experienced radiation oncologist on the fused images were performed, respectively. Segmentation results obtained with the two methods were similar and Dice's similarity coefficient (DSC) was 0.85 ± 0.013. It has been shown that effective and automatic segmentations can be achieved with this method for lung tumors located near other organs with similar intensities in PET and CT images, such as when the tumors extend into chest wall or mediastinum.
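The reported Dice similarity coefficient compares two segmentations as sets of voxels, scoring 1 for identical masks and 0 for disjoint ones:

```python
def dice(a, b):
    # Dice similarity coefficient between two segmentations given as
    # sets of voxel indices: DSC = 2|A ∩ B| / (|A| + |B|).
    a, b = set(a), set(b)
    return 2.0 * len(a & b) / (len(a) + len(b))

print(dice({1, 2, 3, 4}, {3, 4, 5, 6}))  # 0.5
```

A DSC of 0.85, as reported against the oncologist's manual contours, is conventionally taken to indicate good spatial agreement between segmentations.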

1. Automatic Lung Tumor Segmentation on PET/CT Images Using Fuzzy Markov Random Field Model

Directory of Open Access Journals (Sweden)

Yu Guo

2014-01-01

Full Text Available The combination of positron emission tomography (PET) and CT images provides complementary functional and anatomical information on human tissues, and it has been used for better tumor volume definition in lung cancer. This paper proposes a robust method for automatic lung tumor segmentation on PET/CT images, based on a fuzzy Markov random field (MRF) model. The PET and CT image information is combined through a proper joint posterior probability distribution of the observed features in the fuzzy MRF model, which performs better than the commonly used Gaussian joint distribution. In this study, the PET and CT simulation images of 7 non-small cell lung cancer (NSCLC) patients were used to evaluate the proposed method. The tumors were segmented on the fused images both with the proposed method and manually by an experienced radiation oncologist. Segmentation results obtained with the two methods were similar, with a Dice similarity coefficient (DSC) of 0.85 ± 0.013. It has been shown that effective, automatic segmentation can be achieved with this method for lung tumors located near other organs of similar intensity in the PET and CT images, such as when the tumors extend into the chest wall or mediastinum.

2. Critical Behavior of the Annealed Ising Model on Random Regular Graphs

Science.gov (United States)

Can, Van Hao

2017-11-01

In Giardinà et al. (ALEA Lat Am J Probab Math Stat 13(1):121-161, 2016), the authors defined an annealed Ising model on random graphs and proved limit theorems for the magnetization of this model on some random graphs, including random 2-regular graphs. Then in Can (Annealed limit theorems for the Ising model on random regular graphs, arXiv:1701.08639, 2017), we generalized their results to the class of all random regular graphs. In this paper, we study the critical behavior of this model. In particular, we determine the critical exponents and prove a non-standard limit theorem stating that the magnetization scaled by n^{3/4} converges to a specific random variable, with n the number of vertices of the random regular graph.

3. Evaluation and management of the impact of land use change on the nitrogen and phosphorus load delivered to surface waters: the export coefficient modelling approach

Science.gov (United States)

Johnes, P. J.

1996-09-01

A manageable, relatively inexpensive model was constructed to predict the loss of nitrogen and phosphorus from a complex catchment to its drainage system. The model used an export coefficient approach, calculating the total nitrogen (N) and total phosphorus (P) load delivered annually to a water body as the sum of the individual loads exported from each nutrient source in its catchment. The export coefficient modelling approach permits scaling up from plot-scale experiments to the catchment scale, allowing application of findings from field experimental studies at a suitable scale for catchment management. The catchment of the River Windrush, a tributary of the River Thames, UK, was selected as the initial study site. The Windrush model predicted nitrogen and phosphorus loading within 2% of observed total nitrogen load and 0.5% of observed total phosphorus load in 1989. The export coefficient modelling approach was then validated by application in a second research basin, the catchment of Slapton Ley, south Devon, which has markedly different catchment hydrology and land use. The Slapton model was calibrated within 2% of observed total nitrogen load and 2.5% of observed total phosphorus load in 1986. Both models proved sensitive to the impact of temporal changes in land use and management on water quality in both catchments, and were therefore used to evaluate the potential impact of proposed pollution control strategies on the nutrient loading delivered to the River Windrush and Slapton Ley.
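
The export coefficient calculation underlying the model is a simple weighted sum, total load = Σ (export coefficient × source extent), summed over nutrient sources; a minimal sketch with invented coefficients (not the calibrated Windrush or Slapton values):

```python
def nutrient_load(sources):
    """Total annual nutrient load as the sum of source exports.

    Each source contributes E * A: its export coefficient (e.g.
    kg/ha/yr or kg/head/yr) times its area (ha) or livestock count.
    """
    return sum(e * a for e, a in sources)

# Illustrative catchment: (export coefficient, extent) pairs
nitrogen_sources = [
    (30.0, 1200.0),   # arable land: kg N/ha/yr * ha
    (10.0,  800.0),   # permanent pasture: kg N/ha/yr * ha
    (2.5,  5000.0),   # cattle: kg N/head/yr * head
]
print(nutrient_load(nitrogen_sources))  # 56500.0 kg N/yr
```

Scenario testing, as in the pollution-control evaluation described above, amounts to re-running the sum with altered coefficients or extents.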

4. Fluence to absorbed foetal dose conversion coefficients for photons in 50 keV-10 GeV calculated using RPI-P models

International Nuclear Information System (INIS)

Taranenko, V.; Xu, X.G.

2008-01-01

Radiation protection of pregnant females and the foetus against ionising radiation is of particular importance owing to the high radiosensitivity of the foetus. The only available set of foetal conversion coefficients for photons is based on stylised models of simplified anatomy. Using the RPI-P series of pregnant female and foetus models representing 3-, 6- and 9-month gestation, a set of new fluence to absorbed foetal dose conversion coefficients has been calculated. The RPI-P anatomical models were developed using novel 3D geometry modelling techniques. Organ masses were adjusted to agree within 1% with the ICRP reference data for a pregnant female. Monte Carlo dose calculations were carried out using the MCNPX and Penelope codes for external 50 keV-10 GeV photon beams of six standard configurations. The models were voxelised at 3-mm voxel resolution. Conversion coefficients were tabulated for the three gestational periods for the whole foetus and brain. Comparison with previously published data showed deviations of up to 120% in the foetal doses at 50 keV. The discrepancy can be primarily ascribed to anatomical differences. Comparison with published data for five major maternal organs is also provided for the 3-month model. Since the RPI-P models exhibit a high degree of anatomical realism, the reported dataset is recommended as a reference for radiation protection of the foetus against external photon exposure. (authors)

5. A new consideration for the heat transfer coefficient and an analysis of the thermal stress of the high-intermediate pressure turbine casing model

International Nuclear Information System (INIS)

Um, Dall Sun

2004-01-01

In the actual design of high- and intermediate-pressure turbine casings, it is important to determine the thermal strain accurately. In this paper, with the establishment of a new concept for the heat transfer coefficient of steam, one of the factors in the thermal stress analysis of turbine casings, an analysis was performed for one of the high- and intermediate-pressure turbine casings operating domestically. A sensitivity analysis of the heat transfer coefficient of steam with respect to the thermal strain of the turbine casing was carried out with a simple 2-D model. The analysis was repeated with different material properties for the turbine casing and showed that the thermal strain of the casing is not very sensitive to the heat transfer coefficient of steam. On this basis, a 3-D analysis of the thermal strain of the high- and intermediate-pressure turbine casing was performed

6. The application of computational thermodynamics and a numerical model for the determination of surface tension and Gibbs-Thomson coefficient of aluminum based alloys

International Nuclear Information System (INIS)

Jacome, Paulo A.D.; Landim, Mariana C.; Garcia, Amauri; Furtado, Alexandre F.; Ferreira, Ivaldo L.

2011-01-01

Highlights: → Surface tension and the Gibbs-Thomson coefficient are computed for Al-based alloys. → Butler's scheme and ThermoCalc are used to compute the thermophysical properties. → Predictive cell/dendrite growth models depend on accurate thermophysical properties. → Mechanical properties can be related to the microstructural cell/dendrite spacing. - Abstract: In this paper, a solution for Butler's formulation is presented permitting the surface tension and the Gibbs-Thomson coefficient of Al-based binary alloys to be determined. The importance of Gibbs-Thomson coefficient for binary alloys is related to the reliability of predictions furnished by predictive cellular and dendritic growth models and of numerical computations of solidification thermal variables, which will be strongly dependent on the thermophysical properties assumed for the calculations. A numerical model based on Powell hybrid algorithm and a finite difference Jacobian approximation was coupled to a specific interface of a computational thermodynamics software in order to assess the excess Gibbs energy of the liquid phase, permitting the surface tension and Gibbs-Thomson coefficient for Al-Fe, Al-Ni, Al-Cu and Al-Si hypoeutectic alloys to be calculated. The computed results are presented as a function of the alloy composition.
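
As a point of reference, the Gibbs-Thomson coefficient is commonly expressed as Γ = σ/ΔS_f, the solid-liquid interfacial energy divided by the volumetric entropy of fusion; a minimal sketch with rough order-of-magnitude property values for pure aluminium (illustrative only, not the paper's computed alloy results):

```python
def gibbs_thomson(sigma, delta_h_v, t_m):
    """Gibbs-Thomson coefficient Γ = σ / ΔS_f (m·K).

    sigma     : solid-liquid interfacial energy (J/m^2)
    delta_h_v : latent heat of fusion per unit volume (J/m^3)
    t_m       : melting temperature (K); ΔS_f ≈ ΔH_v / T_m
    """
    delta_s_f = delta_h_v / t_m  # volumetric entropy of fusion, J/(m^3·K)
    return sigma / delta_s_f

# Illustrative order-of-magnitude values for pure aluminium
gamma = gibbs_thomson(sigma=0.16, delta_h_v=9.5e8, t_m=933.0)
print(f"{gamma:.2e}")  # 1.57e-07 m·K for these assumed inputs
```

For alloys, as the abstract notes, σ itself must come from a thermodynamic calculation (Butler's formulation) rather than a fixed constant.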

7. The application of computational thermodynamics and a numerical model for the determination of surface tension and Gibbs-Thomson coefficient of aluminum based alloys

Energy Technology Data Exchange (ETDEWEB)

Jacome, Paulo A.D.; Landim, Mariana C. [Department of Mechanical Engineering, Fluminense Federal University, Av. dos Trabalhadores, 420-27255-125 Volta Redonda, RJ (Brazil); Garcia, Amauri, E-mail: amaurig@fem.unicamp.br [Department of Materials Engineering, University of Campinas, UNICAMP, PO Box 6122, 13083-970 Campinas, SP (Brazil); Furtado, Alexandre F.; Ferreira, Ivaldo L. [Department of Mechanical Engineering, Fluminense Federal University, Av. dos Trabalhadores, 420-27255-125 Volta Redonda, RJ (Brazil)

2011-08-20

Highlights: → Surface tension and the Gibbs-Thomson coefficient are computed for Al-based alloys. → Butler's scheme and ThermoCalc are used to compute the thermophysical properties. → Predictive cell/dendrite growth models depend on accurate thermophysical properties. → Mechanical properties can be related to the microstructural cell/dendrite spacing. - Abstract: In this paper, a solution for Butler's formulation is presented permitting the surface tension and the Gibbs-Thomson coefficient of Al-based binary alloys to be determined. The importance of Gibbs-Thomson coefficient for binary alloys is related to the reliability of predictions furnished by predictive cellular and dendritic growth models and of numerical computations of solidification thermal variables, which will be strongly dependent on the thermophysical properties assumed for the calculations. A numerical model based on Powell hybrid algorithm and a finite difference Jacobian approximation was coupled to a specific interface of a computational thermodynamics software in order to assess the excess Gibbs energy of the liquid phase, permitting the surface tension and Gibbs-Thomson coefficient for Al-Fe, Al-Ni, Al-Cu and Al-Si hypoeutectic alloys to be calculated. The computed results are presented as a function of the alloy composition.

8. Annealed central limit theorems for the ising model on random graphs

NARCIS (Netherlands)

Giardinà, C.; Giberti, C.; van der Hofstad, R.W.; Prioriello, M.L.

2016-01-01

The aim of this paper is to prove central limit theorems with respect to the annealed measure for the magnetization rescaled by √N of Ising models on random graphs. More precisely, we consider the general rank-1 inhomogeneous random graph (or generalized random graph), the 2-regular configuration

9. Force Limited Random Vibration Test of TESS Camera Mass Model

Science.gov (United States)

Karlicek, Alexandra; Hwang, James Ho-Jin; Rey, Justin J.

2015-01-01

The Transiting Exoplanet Survey Satellite (TESS) is a spaceborne instrument consisting of four wide-field-of-view CCD cameras dedicated to the discovery of exoplanets around the brightest stars. As part of the environmental testing campaign, force limiting was used to simulate a realistic random vibration launch environment. While the force limit vibration test method is a standard approach used at multiple institutions including the Jet Propulsion Laboratory (JPL), NASA Goddard Space Flight Center (GSFC), the European Space Research and Technology Centre (ESTEC), and the Japan Aerospace Exploration Agency (JAXA), it is still difficult to find an actual implementation process in the literature. This paper describes the step-by-step process by which the force limit method was developed and applied to the TESS camera mass model. The process description includes the design of special fixtures to mount the test article for properly installing force transducers, development of the force spectral density using the semi-empirical method, estimation of the fuzzy factor (C2) based on the mass ratio between the supporting structure and the test article, subsequent validation of the C2 factor during the vibration test, and calculation of the C.G. accelerations using the Root Mean Square (RMS) reaction force in the spectral domain and the peak reaction force in the time domain.
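
The semi-empirical force limit mentioned above is often written S_FF(f) = C² M₀² S_AA(f) below a break frequency, rolled off as (f₀/f)ⁿ above it; a minimal sketch (the C² value, break frequency, and spectra here are assumptions for illustration, not the TESS test parameters):

```python
def force_limit(s_aa, freqs, c2, m0, f0, rolloff=2.0):
    """Semi-empirical force spectral density limit.

    S_FF = C^2 * M0^2 * S_AA below the break frequency f0,
    rolled off as (f0/f)**rolloff above it.
    """
    out = []
    for f, s in zip(freqs, s_aa):
        s_ff = c2 * m0 ** 2 * s
        if f > f0:
            s_ff *= (f0 / f) ** rolloff
        out.append(s_ff)
    return out

# Illustrative inputs: acceleration spec in g^2/Hz, mass m0 in
# consistent units; c2 = 2.0 and f0 = 100 Hz are assumed values.
freqs = [50.0, 100.0, 200.0]
s_aa  = [0.04, 0.04, 0.04]
print(force_limit(s_aa, freqs, c2=2.0, m0=10.0, f0=100.0))
# [8.0, 8.0, 2.0]
```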

10. Multilevel random effect and marginal models

African Journals Online (AJOL)

Multilevel random effect and marginal models for longitudinal data ... and random effect models that take the correlation among measurements of the same subject ... comparing the level of redness, pain and irritability ... clinical trial evaluating the safety profile of a new ... likelihood-based methods to compare models and.

11. Real external predictivity of QSAR models: how to evaluate it? Comparison of different validation criteria and proposal of using the concordance correlation coefficient.

Science.gov (United States)

Chirico, Nicola; Gramatica, Paola

2011-09-26

The main utility of QSAR models is their ability to predict activities/properties for new chemicals, and this external prediction ability is evaluated by means of various validation criteria. As a measure for such evaluation the OECD guidelines have proposed the predictive squared correlation coefficient Q^2_F1 (Shi et al.). However, other validation criteria have been proposed by other authors: the Golbraikh-Tropsha method, r^2_m (Roy), Q^2_F2 (Schüürmann et al.), Q^2_F3 (Consonni et al.). In QSAR studies these measures are usually in accordance, though this is not always the case, so doubts can arise when contradictory results are obtained. It is likely that none of the aforementioned criteria is the best in every situation, so a comparative study using simulated data sets is proposed here, using threshold values suggested by the proponents or widely used in QSAR modeling. In addition, a different and simple external validation measure, the concordance correlation coefficient (CCC), is proposed and compared with the other criteria. Huge data sets were used to study the general behavior of the validation measures, and the concordance correlation coefficient was shown to be the most restrictive. On simulated data sets of a more realistic size, CCC was broadly in agreement, about 96% of the time, with the other validation measures in accepting models as predictive, and in almost all the examples it was the most precautionary. The proposed concordance correlation coefficient also works well on real data sets, where it seems to be more stable, and helps in making decisions when the validation measures are in conflict. Since it is conceptually simple, and given its stability and restrictiveness, we propose the concordance correlation coefficient as a complementary, or alternative, more prudent measure of whether a QSAR model is externally predictive.
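
Lin's concordance correlation coefficient advocated here has a closed form, CCC = 2·s_xy / (s_x² + s_y² + (x̄ − ȳ)²), which penalizes both scatter and systematic shift; a minimal sketch (the data are illustrative, not the paper's simulated sets):

```python
def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient.

    CCC = 2*s_xy / (s_x^2 + s_y^2 + (mean_x - mean_y)^2),
    with variances/covariance in the 1/n (biased) form.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((v - mx) ** 2 for v in x) / n
    sy = sum((v - my) ** 2 for v in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * sxy / (sx + sy + (mx - my) ** 2)

obs  = [1.0, 2.0, 3.0, 4.0]
pred = [1.0, 2.0, 3.0, 4.0]
print(concordance_ccc(obs, pred))  # 1.0 for perfect agreement
```

Unlike Pearson's r, a constant offset between observed and predicted values lowers the CCC: predictions shifted by +1 against the same observations score 5/7 ≈ 0.714 rather than 1.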

12. Mean activity coefficient measurement and thermodynamic modelling of the ternary mixed electrolyte (MgCl_2 + glucose + water) system at T = 298.15 K

International Nuclear Information System (INIS)

2015-01-01

Highlights: • The main goal of this work is to provide precise thermodynamic data for the system. • The potentiometric method was used. • The Pitzer ion-interaction model and the modified TCPC model were used. • The mass fractions of glucose were (0, 10, 20, 30 and 40)%. • The ionic strengths ranged from 0.0010 to 6.0000 mol·kg⁻¹. - Abstract: In this work, the mean activity coefficients of MgCl_2 in pure water and in (glucose + water) mixed solvent were determined using a galvanic cell without liquid junction potential of the type: (Mg²⁺ ISE)|MgCl_2 (m), glucose (wt.%), H_2O (100 wt.%)|AgCl|Ag. The measurements were performed at T = 298.15 K. Total ionic strengths ranged from (0.0010 to 6.0000) mol·kg⁻¹. The various (glucose + water) mixed solvents contained (0, 10, 20, 30 and 40)% mass fractions of glucose, respectively. The measured mean activity coefficients were correlated with the Pitzer ion-interaction model and the adjustable Pitzer parameters were determined. These parameters were then used to calculate thermodynamic properties of the investigated system. The results showed that the Pitzer ion-interaction model describes the investigated system satisfactorily. The modified three-characteristic-parameter correlation (TCPC) model was also applied to correlate the experimental activity coefficient data for the electrolyte system.

13. MODELING URBAN DYNAMICS USING RANDOM FOREST: IMPLEMENTING ROC AND TOC FOR MODEL EVALUATION

Directory of Open Access Journals (Sweden)

2016-06-01

Full Text Available The importance of spatial accuracy of land use/cover change maps necessitates the use of high performance models. To reach this goal, calibrating machine learning (ML) approaches to model land use/cover conversions has received increasing interest among scholars. This originates from the strength of these techniques, as they powerfully account for the complex relationships underlying urban dynamics. Compared to other ML techniques, random forest has rarely been used for modeling urban growth. This paper, drawing on information from multi-temporal Landsat satellite images of 1985, 2000 and 2015, calibrates a random forest regression (RFR) model to quantify variable importance and simulate the spatial patterns of urban change. The results and performance of the RFR model were evaluated using two complementary tools, relative operating characteristics (ROC) and total operating characteristics (TOC), by overlaying the map of observed change and the modeled suitability map for land use change (error map). The suitability map produced by the RFR model showed an area under the curve of 82.48% for ROC, which indicates very good performance and highlights its appropriateness for simulating urban growth.
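
The ROC area-under-curve evaluation described above compares a continuous suitability map against the binary map of observed change; the AUC can be computed via the pairwise (Mann-Whitney) formulation, a minimal sketch with invented scores and labels:

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the pairwise (Mann-Whitney)
    form: the probability that a randomly chosen changed cell
    receives a higher suitability score than a randomly chosen
    unchanged cell (ties count 0.5)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# Illustrative suitability scores vs. observed change (1 = changed)
scores = [0.9, 0.8, 0.4, 0.7, 0.2, 0.1]
labels = [1,   1,   0,   1,   0,   0]
print(roc_auc(scores, labels))  # 1.0: every changed cell outranks every unchanged one
```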

14. Modeling individual differences in randomized experiments using growth models: Recommendations for design, statistical analysis and reporting of results of internet interventions

Directory of Open Access Journals (Sweden)

Hugo Hesser

2015-05-01

Full Text Available Growth models (also known as linear mixed effects models, multilevel models, and random coefficients models) have the capability of studying change at the group as well as the individual level. In addition, these methods have documented advantages over traditional data-analytic approaches in the analysis of repeated-measures data. These advantages include, but are not limited to, the ability to incorporate time-varying predictors, to handle dependence among repeated observations in a very flexible manner, and to provide accurate estimates with missing data under fairly unrestrictive missing-data assumptions. The flexibility of the growth curve modeling approach to the analysis of change makes it the preferred choice in the evaluation of direct, indirect and moderated intervention effects. Although offering many benefits, growth models present challenges in terms of design, analysis and reporting of results. This paper provides a nontechnical overview of growth models in the analysis of change in randomized experiments and advocates for their use in the field of internet interventions. Practical recommendations for design, analysis and reporting of results from growth models are provided.
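
A growth model of the kind described assigns each participant a random intercept and random slope around fixed (group-level) effects. A minimal simulation sketch of that data-generating structure (all parameter values are invented; a real analysis would fit dedicated mixed-model software rather than averaging per-subject OLS slopes, which is used here only to show that the fixed slope is recoverable):

```python
import random

def per_subject_slope(times, values):
    """OLS slope for one subject's repeated measures."""
    n = len(times)
    mt, mv = sum(times) / n, sum(values) / n
    num = sum((t - mt) * (v - mv) for t, v in zip(times, values))
    den = sum((t - mt) ** 2 for t in times)
    return num / den

random.seed(0)
times = [0, 1, 2, 3, 4]                # assessment weeks
true_fixed_slope = -1.5                # average symptom change per week
slopes = []
for _ in range(500):                   # 500 simulated participants
    b0 = 20 + random.gauss(0, 2)       # random intercept
    b1 = true_fixed_slope + random.gauss(0, 0.5)  # random slope
    y = [b0 + b1 * t + random.gauss(0, 1) for t in times]
    slopes.append(per_subject_slope(times, y))

est = sum(slopes) / len(slopes)
print(est)  # typically close to -1.5, the simulated fixed slope
```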

15. PROBABILISTIC MODEL OF BEAM–PLASMA INTERACTION IN RANDOMLY INHOMOGENEOUS PLASMA

International Nuclear Information System (INIS)

Voshchepynets, A.; Krasnoselskikh, V.; Artemyev, A.; Volokitin, A.

2015-01-01

We propose a new model that describes beam-plasma interaction in the presence of random density fluctuations with a known probability distribution. We use the property that, for a given frequency, the probability distribution of the density fluctuations uniquely determines the probability distribution of the phase velocity of the waves. We represent the system as discrete, consisting of small, equal spatial intervals with a linear density profile. This approach allows one to estimate variations in wave energy density and particle velocity as a function of the density gradient on any small spatial interval. Because the characteristic time for the evolution of the electron distribution function and the wave energy is much longer than the time required for a single wave-particle resonant interaction over a small interval, we describe the relaxation process in terms of averaged quantities. We derive a system of equations, similar to the quasi-linear approximation, with the conventional velocity diffusion coefficient D and the wave growth rate γ replaced by their averages in phase space, making use of the probability distribution for phase velocities and assuming that the interaction in each interval is independent of previous interactions. The functions D and γ are completely determined by the distribution function for the amplitudes of the fluctuations. For a Gaussian distribution of the density fluctuations, we show that the relaxation process is determined by the ratio of beam velocity to plasma thermal velocity, the dispersion of the fluctuations, and the width of the beam in velocity space.
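
The replacement of D and γ by phase-space averages can be illustrated numerically: weight a resonance-dependent rate by the probability distribution of phase velocities and integrate. A toy sketch (the Gaussian distribution and the triangular rate profile are purely illustrative, not the paper's actual functions):

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Normal probability density."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def averaged_rate(rate, mu, sigma, lo, hi, n=10000):
    """Average a phase-velocity-dependent rate over a Gaussian
    distribution of phase velocities (midpoint rule)."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        v = lo + (i + 0.5) * h
        total += gaussian_pdf(v, mu, sigma) * rate(v) * h
    return total

# Toy rate: nonzero only near a "resonant" phase velocity v = 5
def gamma_toy(v):
    return max(0.0, 1.0 - abs(v - 5.0))

print(round(averaged_rate(gamma_toy, mu=5.0, sigma=1.0, lo=0.0, hi=10.0), 3))
# ≈ 0.369 for these toy parameters
```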

16. Properties of Traffic Risk Coefficient

Science.gov (United States)

Tang, Tie-Qiao; Huang, Hai-Jun; Shang, Hua-Yan; Xue, Yu

2009-10-01

We use the model with the consideration of the traffic interruption probability (Physica A 387(2008)6845) to study the relationship between the traffic risk coefficient and the traffic interruption probability. The analytical and numerical results show that the traffic interruption probability will reduce the traffic risk coefficient and that the reduction is related to the density, which shows that this model can improve traffic security.

17. E6 unification model building. III. Clebsch-Gordan coefficients in E6 tensor products of the 27 with higher dimensional representations

International Nuclear Information System (INIS)

Anderson, Gregory W.; Blazek, Tomas

2005-01-01

E6 is an attractive group for unification model building. However, the complexity of a rank-6 group makes it nontrivial to write down the structure of higher-dimensional operators in an E6 theory in terms of the states labeled by quantum numbers of the standard model gauge group. In this paper, we show the results of our computation of the Clebsch-Gordan coefficients for the products of the 27 with irreducible representations of higher dimensionality: 78, 351, 351′, and their conjugates 351-bar and 351′-bar. Application of these results to E6 model building involving higher-dimensional operators is straightforward

18. Recent developments in exponential random graph (p*) models for social networks

NARCIS (Netherlands)

Robins, Garry; Snijders, Tom; Wang, Peng; Handcock, Mark; Pattison, Philippa

This article reviews new specifications for exponential random graph models proposed by Snijders et al. [Snijders, T.A.B., Pattison, P., Robins, G.L., Handcock, M., 2006. New specifications for exponential random graph models. Sociological Methodology] and demonstrates their improvement over

19. Effect of energy equation in one control-volume bulk-flow model for the prediction of labyrinth seal dynamic coefficients

Science.gov (United States)

Cangioli, Filippo; Pennacchi, Paolo; Vannini, Giuseppe; Ciuchicchi, Lorenzo

2018-01-01

The influence of sealing components on the rotordynamic stability of turbomachinery has become a key topic because the oil and gas market increasingly demands high rotational speeds and high efficiency. This leads turbomachinery manufacturers to design higher flexibility ratios and to reduce seal clearances. Accurate prediction of the effective damping of seals is critical to avoid instability problems; in recent years, "negative-swirl" swirl brakes have been used to reverse the circumferential direction of the inlet flow, which changes the sign of the cross-coupled stiffness coefficients and generates stabilizing forces. Experimental tests of a teeth-on-stator labyrinth seal were performed by manufacturers with positive and negative pre-swirl values to investigate the pre-swirl effect on the cross-coupled stiffness coefficient; those results are used as a benchmark in this paper. For analysing the rotor-fluid interaction in seals, the bulk-flow numerical approach is more time efficient than computational fluid dynamics (CFD). Although the accuracy of the coefficient prediction of bulk-flow models is satisfactory for liquid-phase applications, for the gas phase the accuracy of the results depends strongly on the operating conditions. In this paper, the authors improve on the state-of-the-art bulk-flow model by introducing the energy equation into the zeroth-order solution, to better characterize real gas properties through the enthalpy variation along the seal cavities. Accounting for the energy equation allows a better estimation of the coefficients in the case of a negative pre-swirl ratio and therefore extends the prediction fidelity over a wider range of operating conditions. The numerical results are also compared to the state-of-the-art bulk-flow model, highlighting the improvement in the model.

20. The reverse effects of random perturbation on discrete systems for single and multiple population models

International Nuclear Information System (INIS)

Kang, Li; Tang, Sanyi

2016-01-01

Highlights: • Discrete single-species and multiple-species models with random perturbation are proposed. • The complex dynamics and interesting bifurcation behavior are investigated. • The reverse effects of random perturbation on discrete systems are discussed and revealed. • The main results can be applied to pest control and resources management. - Abstract: Natural species are likely to present several interesting and complex phenomena under random perturbations, which have been confirmed by simple mathematical models. The important questions are: how do random perturbations influence the dynamics of discrete population models with multiple steady states or multiple species interactions? And do random perturbations affect single-species and multiple-species models differently? To address these questions, we propose a discrete single-species model with two stable equilibria and a host-parasitoid model with Holling-type functional response functions, and examine how random perturbation affects their dynamics. The main results indicate that random perturbation does not change the number of blurred orbits of the single-species model with two stable steady states compared with the classical Ricker model under the same perturbation, but it can strengthen the stability. However, extensive numerical investigations show that random perturbation does not influence the complexity of the host-parasitoid models compared with the unperturbed models, while it does double the period of periodic orbits. All of this confirms that random perturbation has a reverse effect on the dynamics of discrete single- and multiple-population models, which could be applied in practice, including pest control and resources management.
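
Perturbed discrete dynamics of this kind can be illustrated with a Ricker map whose growth rate receives a random shock each generation. A toy sketch (the parameter values are invented, and this simple additive-shock scheme is not necessarily the paper's specific perturbation):

```python
import math
import random

def ricker_noisy(x0, r, k, sigma, steps, seed=1):
    """Iterate a Ricker map x -> x * exp(r_t * (1 - x/k)), where the
    growth rate r_t = r + Gaussian shock varies each generation.

    Populations stay strictly positive because the update is a
    product of positives; the noise perturbs the dynamics, not the
    sign of the state.
    """
    random.seed(seed)
    x, traj = x0, []
    for _ in range(steps):
        r_t = r + random.gauss(0.0, sigma)
        x = x * math.exp(r_t * (1.0 - x / k))
        traj.append(x)
    return traj

# With r below the period-doubling threshold, the noisy trajectory
# fluctuates around the carrying capacity k instead of settling.
traj = ricker_noisy(x0=0.5, r=1.5, k=1.0, sigma=0.1, steps=200)
print(len(traj), all(v > 0 for v in traj))  # 200 True
```

Re-running with larger sigma, or with r in the periodic regime, is how one would probe the stabilizing and period-doubling effects discussed in the abstract.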