A Seemingly Unrelated Poisson Regression Model
King, Gary
1989-01-01
This article introduces a new estimator for the analysis of two contemporaneously correlated endogenous event count variables. This seemingly unrelated Poisson regression model (SUPREME) estimator combines the efficiencies created by single equation Poisson regression model estimators and insights from "seemingly unrelated" linear regression models.
Evaluating the double Poisson generalized linear model.
Zou, Yaotian; Geedipally, Srinivas Reddy; Lord, Dominique
2013-10-01
The objectives of this study are to: (1) examine the applicability of the double Poisson (DP) generalized linear model (GLM) for analyzing motor vehicle crash data characterized by over- and under-dispersion and (2) compare the performance of the DP GLM with the Conway-Maxwell-Poisson (COM-Poisson) GLM in terms of goodness-of-fit and theoretical soundness. The DP distribution has seldom been investigated and applied since its first introduction two decades ago. The hurdle for applying the DP is related to its normalizing constant (or multiplicative constant) which is not available in closed form. This study proposed a new method to approximate the normalizing constant of the DP with high accuracy and reliability. The DP GLM and COM-Poisson GLM were developed using two observed over-dispersed datasets and one observed under-dispersed dataset. The modeling results indicate that the DP GLM with its normalizing constant approximated by the new method can handle crash data characterized by over- and under-dispersion. Its performance is comparable to the COM-Poisson GLM in terms of goodness-of-fit (GOF), although COM-Poisson GLM provides a slightly better fit. For the over-dispersed data, the DP GLM performs similar to the NB GLM. Considering the fact that the DP GLM can be easily estimated with inexpensive computation and that it is simpler to interpret coefficients, it offers a flexible and efficient alternative for researchers to model count data. Copyright © 2013 Elsevier Ltd. All rights reserved.
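The normalizing constant discussed in the abstract above can be evaluated by direct truncated summation of Efron's double Poisson density and compared against his closed-form approximation. The sketch below is illustrative only (parameter values are arbitrary) and is not the paper's proposed approximation method.

```python
import math

def dp_log_term(y, mu, theta):
    # log of the unnormalized double-Poisson term (Efron 1986):
    # theta^(1/2) e^(-theta*mu) * (e^-y y^y / y!) * (e*mu/y)^(theta*y)
    base = 0.0 if y == 0 else (-y + y * math.log(y) - math.lgamma(y + 1)
                               + theta * y * (1.0 + math.log(mu) - math.log(y)))
    return 0.5 * math.log(theta) - theta * mu + base

def dp_norm_sum(mu, theta, ymax=500):
    # truncated sum of the unnormalized terms; equals 1/c(mu, theta)
    return sum(math.exp(dp_log_term(y, mu, theta)) for y in range(ymax + 1))

# theta = 1 recovers the ordinary Poisson, so the sum is exactly 1
print(dp_norm_sum(5.0, 1.0))
# theta < 1 (overdispersion): compare with Efron's approximation
approx = 1 + (1 - 0.5) / (12 * 5.0 * 0.5) * (1 + 1 / (5.0 * 0.5))
print(dp_norm_sum(5.0, 0.5), approx)
```

The truncated sum is exact up to the cutoff, which is why approximations such as the one proposed in the paper matter mainly when the constant must be evaluated inside an estimation loop.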
Affine Poisson Groups and WZW Model
Directory of Open Access Journals (Sweden)
Ctirad Klimcík
2008-01-01
We give a detailed description of a dynamical system which enjoys a Poisson-Lie symmetry with two non-isomorphic dual groups. The system is obtained by taking the q → ∞ limit of the q-deformed WZW model and the understanding of its symmetry structure results in uncovering an interesting duality of its exchange relations.
Wide-area traffic: The failure of Poisson modeling
Energy Technology Data Exchange (ETDEWEB)
Paxson, V.; Floyd, S.
1994-08-01
Network arrivals are often modeled as Poisson processes for analytic simplicity, even though a number of traffic studies have shown that packet interarrivals are not exponentially distributed. The authors evaluate 21 wide-area traces, investigating a number of wide-area TCP arrival processes (session and connection arrivals, FTPDATA connection arrivals within FTP sessions, and TELNET packet arrivals) to determine the error introduced by modeling them using Poisson processes. The authors find that user-initiated TCP session arrivals, such as remote-login and file-transfer, are well-modeled as Poisson processes with fixed hourly rates, but that other connection arrivals deviate considerably from Poisson; that modeling TELNET packet interarrivals as exponential grievously underestimates the burstiness of TELNET traffic, but using the empirical Tcplib[DJCME92] interarrivals preserves burstiness over many time scales; and that FTPDATA connection arrivals within FTP sessions come bunched into "connection bursts", the largest of which are so large that they completely dominate FTPDATA traffic. Finally, they offer some preliminary results regarding how the findings relate to the possible self-similarity of wide-area traffic.
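The burstiness failure described above is easy to reproduce in a few lines: exponential interarrivals yield counts whose variance-to-mean ratio stays near one, while heavy-tailed interarrivals (used here as a generic stand-in for the empirical Tcplib distributions, which are not reproduced) inflate it badly. A rough sketch:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

expo = rng.exponential(1.0, n)                    # Poisson-process interarrivals
alpha = 1.5                                       # heavy tail, infinite variance
pareto = (rng.pareto(alpha, n) + 1.0) * (alpha - 1) / alpha   # Pareto, mean 1

def index_of_dispersion(interarrivals, window=100.0):
    """Variance-to-mean ratio of counts per window; equals 1 for Poisson."""
    t = np.cumsum(interarrivals)
    counts = np.bincount((t // window).astype(int))[:-1]  # drop the partial window
    return counts.var() / counts.mean()

print(index_of_dispersion(expo))     # close to 1
print(index_of_dispersion(pareto))   # well above 1: bursty at this time scale
```

The rate, tail index, and window size are arbitrary; the qualitative gap between the two dispersion indices is the point.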
Poisson Mixture Regression Models for Heart Disease Prediction
Erol, Hamza
2016-01-01
Early heart disease control can be achieved by highly efficient disease prediction and diagnosis. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is addressed here under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary generalized linear Poisson regression model, owing to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model overall for heart disease prediction, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease component-wise given the clusters. It is deduced that heart disease prediction can be performed effectively by identifying the major risks component-wise using Poisson mixture regression models. PMID:27999611
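As a sketch of what fitting such a mixture involves, the EM iteration for a plain two-component Poisson mixture (without the covariates or concomitant variables the paper studies) fits in a few lines; the data and rates below are simulated, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
# simulated counts from a two-component Poisson mixture (rates 2 and 10)
y = np.concatenate([rng.poisson(2.0, 300), rng.poisson(10.0, 200)])

# log(y!) via a cumulative sum, so no special-function library is needed
logfact = np.concatenate(([0.0], np.cumsum(np.log(np.arange(1, y.max() + 1)))))

def log_pois(y, lam):
    return y * np.log(lam) - lam - logfact[y]

pi, lam = 0.5, np.array([1.0, 5.0])      # initial guesses
for _ in range(200):
    # E-step: posterior responsibility of component 1 for each observation
    l1 = np.log(pi) + log_pois(y, lam[0])
    l2 = np.log(1 - pi) + log_pois(y, lam[1])
    r = 1.0 / (1.0 + np.exp(l2 - l1))
    # M-step: update the mixing weight and the component rates
    pi = r.mean()
    lam = np.array([np.average(y, weights=r), np.average(y, weights=1 - r)])

print(pi, lam)   # near the true 0.6 mixing weight and rates 2 and 10
```

Concomitant-variable models replace the constant mixing weight with a function of covariates, but the E/M alternation has the same shape.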
Modified Regression Correlation Coefficient for Poisson Regression Model
Kaengthong, Nattacha; Domthong, Uthumporn
2017-09-01
This study considers indicators of the predictive power of the generalized linear model (GLM), which are widely used but often subject to restrictions. We are interested in the regression correlation coefficient for a Poisson regression model. This is a measure of predictive power defined by the relationship between the dependent variable (Y) and the expected value of the dependent variable given the independent variables [E(Y|X)] for the Poisson regression model, where the dependent variable follows a Poisson distribution. The purpose of this research was to modify the regression correlation coefficient for the Poisson regression model. We also compare the proposed modified regression correlation coefficient with the traditional regression correlation coefficient in the case of two or more independent variables, and in the presence of multicollinearity among the independent variables. The results show that the proposed regression correlation coefficient outperforms the traditional one in terms of bias and root mean square error (RMSE).
Inhibition in speed and concentration tests: The Poisson inhibition model
Smit, J.C.; Ven, A.H.G.S. van der
1995-01-01
A new model is presented to account for the reaction time fluctuations in concentration tests. The model is a natural generalization of an earlier model, the so-called Poisson-Erlang model, published by Pieters & van der Ven (1982). First, a description is given of the type of tasks for which the
Markov modulated Poisson process models incorporating covariates for rainfall intensity.
Thayakaran, R; Ramesh, N I
2013-01-01
Time series of rainfall bucket tip times at the Beaufort Park station, Bracknell, in the UK are modelled by a class of Markov modulated Poisson processes (MMPP) which may be thought of as a generalization of the Poisson process. Our main focus in this paper is to investigate the effects of including covariate information into the MMPP model framework on statistical properties. In particular, we look at three types of time-varying covariates namely temperature, sea level pressure, and relative humidity that are thought to be affecting the rainfall arrival process. Maximum likelihood estimation is used to obtain the parameter estimates, and likelihood ratio tests are employed in model comparison. Simulated data from the fitted model are used to make statistical inferences about the accumulated rainfall in the discrete time interval. Variability of the daily Poisson arrival rates is studied.
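A minimal two-state MMPP simulator (without the covariate modulation that is the paper's focus; the generator and rates below are invented) shows the mechanism the abstract describes: arrivals are Poisson at a rate that switches with a hidden continuous-time Markov chain.

```python
import numpy as np

def simulate_mmpp2(T, Q, rates, rng):
    """Arrival times of a 2-state Markov modulated Poisson process on [0, T]."""
    t, state, arrivals = 0.0, 0, []
    while t < T:
        end = min(t + rng.exponential(-1.0 / Q[state, state]), T)  # sojourn
        k = rng.poisson(rates[state] * (end - t))   # arrivals in this sojourn
        arrivals.extend(np.sort(rng.uniform(t, end, k)))
        t, state = end, 1 - state                   # flip (valid for 2 states)
    return np.array(arrivals)

rng = np.random.default_rng(1)
Q = np.array([[-0.1, 0.1], [0.2, -0.2]])   # mean sojourns: 10 in state 0, 5 in 1
arr = simulate_mmpp2(10_000.0, Q, rates=(0.5, 5.0), rng=rng)
print(len(arr) / 10_000.0)   # long-run rate ≈ (2/3)*0.5 + (1/3)*5 = 2
```

Covariate effects of the kind studied in the paper enter by letting `Q` or `rates` depend on temperature, pressure, or humidity at time t.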
Modeling corporate defaults: Poisson autoregressions with exogenous covariates (PARX)
DEFF Research Database (Denmark)
Agosto, Arianna; Cavaliere, Guiseppe; Kristensen, Dennis
We develop a class of Poisson autoregressive models with additional covariates (PARX) that can be used to model and forecast time series of counts. We establish the time series properties of the models, including conditions for stationarity and existence of moments. These results are in turn used...
Poisson-generalized gamma empirical Bayes model for disease ...
African Journals Online (AJOL)
In spatial disease mapping, the use of Bayesian models of estimation technique is becoming popular for smoothing relative risks estimates for disease mapping. The most common Bayesian conjugate model for disease mapping is the Poisson-Gamma Model (PG). To explore further the activity of smoothing of relative risk ...
Double generalized linear compound poisson models to insurance claims data
DEFF Research Database (Denmark)
Andersen, Daniel Arnfeldt; Bonat, Wagner Hugo
2017-01-01
This paper describes the specification, estimation and comparison of double generalized linear compound Poisson models based on the likelihood paradigm. The models are motivated by insurance applications, where the distribution of the response variable is composed by a degenerate distribution...... in a finite sample framework. The simulation studies are also used to validate the fitting algorithms and check the computational implementation. Furthermore, we investigate the impact of an unsuitable choice for the response variable distribution on both mean and dispersion parameter estimates. We provide R...... implementation and illustrate the application of double generalized linear compound Poisson models using a data set about car insurances....
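The "degenerate distribution" at zero combined with a continuous severity component, as described above, can be seen directly by simulating a compound Poisson-gamma (Tweedie-type) response; the claim frequency and severity parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50_000
lam, shape, scale = 0.3, 2.0, 500.0      # hypothetical claim frequency/severity

claims = rng.poisson(lam, n)             # number of claims per policy
total = np.array([rng.gamma(shape, scale, k).sum() for k in claims])

# point mass at zero (no claims) plus a continuous right-skewed part
print((total == 0).mean())               # ≈ exp(-0.3) ≈ 0.741
print(total.mean())                      # ≈ lam * shape * scale = 300
```

Double generalized linear models of the kind described above add a second linear predictor for the dispersion of this response, on top of the one for its mean.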
The coupling of Poisson sigma models to topological backgrounds
Energy Technology Data Exchange (ETDEWEB)
Rosa, Dario [School of Physics, Korea Institute for Advanced Study,Seoul 02455 (Korea, Republic of)
2016-12-13
We extend the coupling to the topological backgrounds, recently worked out for the 2-dimensional BF-model, to the most general Poisson sigma models. The coupling involves the choice of a Casimir function on the target manifold and modifies the BRST transformations. This in turn induces a change in the BRST cohomology of the resulting theory. The observables of the coupled theory are analyzed and their geometrical interpretation is given. We finally couple the theory to 2-dimensional topological gravity: this is the first step to study a topological string theory in propagation on a Poisson manifold. As an application, we show that the gauge-fixed vectorial supersymmetry of the Poisson sigma models has a natural explanation in terms of the theory coupled to topological gravity.
Poisson sigma model with branes and hyperelliptic Riemann surfaces
International Nuclear Information System (INIS)
Ferrario, Andrea
2008-01-01
We derive the explicit form of the superpropagators in the presence of general boundary conditions (coisotropic branes) for the Poisson sigma model. This generalizes the results presented by Cattaneo and Felder ["A path integral approach to the Kontsevich quantization formula," Commun. Math. Phys. 212, 591 (2000)] and Cattaneo and Felder ["Coisotropic submanifolds in Poisson geometry and branes in the Poisson sigma model," Lett. Math. Phys. 69, 157 (2004)] for Kontsevich's angle function [Kontsevich, M., "Deformation quantization of Poisson manifolds I," e-print arXiv:hep.th/0101170] used in the deformation quantization program of Poisson manifolds. The relevant superpropagators for n branes are defined as gauge-fixed homotopy operators of a complex of differential forms on n-sided polygons P_n with particular "alternating" boundary conditions. In the presence of more than three branes we use first-order Riemann theta functions with odd singular characteristics on the Jacobian variety of a hyperelliptic Riemann surface (canonical setting). In genus g the superpropagators present g zero-mode contributions
An application of the Autoregressive Conditional Poisson (ACP) model
CSIR Research Space (South Africa)
Holloway, Jennifer P
2010-11-01
When modelling count data that comes in the form of a time series, the static Poisson regression and standard time series models are often not appropriate. A current study therefore involves the evaluation of several observation-driven and parameter...
2D sigma models and differential Poisson algebras
Energy Technology Data Exchange (ETDEWEB)
Arias, Cesar [Departamento de Ciencias Físicas, Universidad Andres Bello,Republica 220, Santiago (Chile); Boulanger, Nicolas [Service de Mécanique et Gravitation, Université de Mons - UMONS,20 Place du Parc, 7000 Mons (Belgium); Laboratoire de Mathématiques et Physique Théorique,Unité Mixte de Recherche 7350 du CNRS, Fédération de Recherche 2964 Denis Poisson,Université François Rabelais, Parc de Grandmont, 37200 Tours (France); Sundell, Per [Departamento de Ciencias Físicas, Universidad Andres Bello,Republica 220, Santiago (Chile); Torres-Gomez, Alexander [Departamento de Ciencias Físicas, Universidad Andres Bello,Republica 220, Santiago (Chile); Instituto de Ciencias Físicas y Matemáticas, Universidad Austral de Chile-UACh,Valdivia (Chile)
2015-08-18
We construct a two-dimensional topological sigma model whose target space is endowed with a Poisson algebra for differential forms. The model consists of an equal number of bosonic and fermionic fields of worldsheet form degrees zero and one. The action is built using exterior products and derivatives, without any reference to a worldsheet metric, and is of the covariant Hamiltonian form. The equations of motion define a universally Cartan integrable system. In addition to gauge symmetries, the model has one rigid nilpotent supersymmetry corresponding to the target space de Rham operator. The rigid and local symmetries of the action, respectively, are equivalent to the Poisson bracket being compatible with the de Rham operator and obeying graded Jacobi identities. We propose that perturbative quantization of the model yields a covariantized differential star product algebra of Kontsevich type. We comment on the resemblance to the topological A model.
A hybrid sampler for Poisson-Kingman mixture models
Lomeli, M.; Favaro, S.; Teh, Y. W.
2015-01-01
This paper concerns the introduction of a new Markov Chain Monte Carlo scheme for posterior sampling in Bayesian nonparametric mixture models with priors that belong to the general Poisson-Kingman class. We present a novel compact way of representing the infinite dimensional component of the model such that while explicitly representing this infinite component it has less memory and storage requirements than previous MCMC schemes. We describe comparative simulation results demonstrating the e...
Herder, E.; Brusilovsky, Peter; Corbett, Albert; de Rosis, Fiorella
2003-01-01
For providing users with navigation aids that best serve their needs, user models for adaptive hypermedia should include user navigation patterns. This paper describes elements needed and how these elements can be gathered.
Collision prediction models using multivariate Poisson-lognormal regression.
El-Basyouny, Karim; Sayed, Tarek
2009-07-01
This paper advocates the use of multivariate Poisson-lognormal (MVPLN) regression to develop models for collision count data. The MVPLN approach presents an opportunity to incorporate the correlations across collision severity levels and their influence on safety analyses. The paper introduces a new multivariate hazardous location identification technique, which generalizes the univariate posterior probability of excess that has been commonly proposed and applied in the literature. In addition, the paper presents an alternative approach for quantifying the effect of the multivariate structure on the precision of expected collision frequency. The MVPLN approach is compared with the independent (separate) univariate Poisson-lognormal (PLN) models with respect to model inference, goodness-of-fit, identification of hot spots and precision of expected collision frequency. The MVPLN is modeled using the WinBUGS platform, which facilitates computation of posterior distributions as well as providing a goodness-of-fit measure for model comparisons. The results indicate that the estimates of the extra Poisson variation parameters were considerably smaller under MVPLN, leading to higher precision. The improvement in precision is due mainly to the fact that MVPLN accounts for the correlation between the latent variables representing property damage only (PDO) and injuries plus fatalities (I+F). This correlation was estimated at 0.758, which is highly significant, suggesting that higher PDO rates are associated with higher I+F rates, as the collision likelihood for both types is likely to rise due to similar deficiencies in roadway design and/or other unobserved factors. In terms of goodness-of-fit, the MVPLN model provided a superior fit to the independent univariate models. The multivariate hazardous location identification results demonstrated that some hazardous locations could be overlooked if the analysis were restricted to the univariate models.
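The MVPLN structure (correlated lognormal frailties multiplying Poisson means) can be illustrated with a quick simulation. The latent correlation is set to the 0.758 reported above; the latent variances and baseline rates are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 50_000
rho, s2 = 0.758, 0.5                      # latent correlation and variance
cov = [[s2, rho * s2], [rho * s2, s2]]
eps = rng.multivariate_normal([0.0, 0.0], cov, n)   # correlated lognormal effects
mu = np.exp(np.array([1.0, 0.3]) + eps)   # PDO-like and I+F-like mean rates
y = rng.poisson(mu)                       # correlated, overdispersed count pair
print(np.corrcoef(y.T)[0, 1])             # positive correlation between severities
```

The count-level correlation is attenuated relative to the latent 0.758, which is exactly why the joint (multivariate) model recovers information that separate univariate PLN fits discard.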
On population size estimators in the Poisson mixture model.
Mao, Chang Xuan; Yang, Nan; Zhong, Jinhua
2013-09-01
Estimating population sizes via capture-recapture experiments has enormous applications. The Poisson mixture model can be adopted for those applications with a single list in which individuals appear one or more times. We compare several nonparametric estimators, including the Chao estimator, the Zelterman estimator, two jackknife estimators and the bootstrap estimator. The target parameter of the Chao estimator is a lower bound of the population size. Those of the other four estimators are not lower bounds, and they may produce lower confidence limits for the population size with poor coverage probabilities. A simulation study is reported and two examples are investigated. © 2013, The International Biometric Society.
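Two of the estimators compared above are simple enough to state in a few lines. The homogeneous-Poisson capture simulation below is an illustration under assumed parameters, not the paper's examples.

```python
import numpy as np

def chao_lower_bound(counts):
    """Chao's lower-bound estimator: n + f1^2 / (2 f2)."""
    n = len(counts)
    f1 = np.sum(counts == 1)    # individuals seen exactly once
    f2 = np.sum(counts == 2)    # individuals seen exactly twice
    return n + f1 ** 2 / (2 * f2)

def zelterman(counts):
    """Zelterman's estimator via the truncated-Poisson rate 2 f2 / f1."""
    n, f1, f2 = len(counts), np.sum(counts == 1), np.sum(counts == 2)
    lam = 2 * f2 / f1
    return n / (1 - np.exp(-lam))

# homogeneous Poisson captures: both estimators recover N well
rng = np.random.default_rng(7)
N, lam = 2000, 1.0
caps = rng.poisson(lam, N)
caps = caps[caps > 0]           # never-captured individuals are unobserved
print(chao_lower_bound(caps), zelterman(caps))
```

Under heterogeneous capture rates (the Poisson mixture setting of the paper), the Chao value remains a lower bound while the other estimators can over- or undershoot, which is the comparison the abstract summarizes.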
Numerical solution of dynamic equilibrium models under Poisson uncertainty
DEFF Research Database (Denmark)
Posch, Olaf; Trimborn, Timo
2013-01-01
We propose a simple and powerful numerical algorithm to compute the transition process in continuous-time dynamic equilibrium models with rare events. In this paper we transform the dynamic system of stochastic differential equations into a system of functional differential equations...... of the retarded type. We apply the Waveform Relaxation algorithm, i.e., we provide a guess of the policy function and solve the resulting system of (deterministic) ordinary differential equations by standard techniques. For parametric restrictions, analytical solutions to the stochastic growth model and a novel...... solution to Lucas' endogenous growth model under Poisson uncertainty are used to compute the exact numerical error. We show how (potential) catastrophic events such as rare natural disasters substantially affect the economic decisions of households....
Polyelectrolyte Microcapsules: Ion Distributions from a Poisson-Boltzmann Model
Tang, Qiyun; Denton, Alan R.; Rozairo, Damith; Croll, Andrew B.
2014-03-01
Recent experiments have shown that polystyrene-polyacrylic-acid-polystyrene (PS-PAA-PS) triblock copolymers in a solvent mixture of water and toluene can self-assemble into spherical microcapsules. Suspended in water, the microcapsules have a toluene core surrounded by an elastomer triblock shell. The longer, hydrophilic PAA blocks remain near the outer surface of the shell, becoming charged through dissociation of OH functional groups in water, while the shorter, hydrophobic PS blocks form a networked (glass or gel) structure. Within a mean-field Poisson-Boltzmann theory, we model these polyelectrolyte microcapsules as spherical charged shells, assuming different dielectric constants inside and outside the capsule. By numerically solving the nonlinear Poisson-Boltzmann equation, we calculate the radial distribution of anions and cations and the osmotic pressure within the shell as a function of salt concentration. Our predictions, which can be tested by comparison with experiments, may guide the design of microcapsules for practical applications, such as drug delivery. This work was supported by the National Science Foundation under Grant No. DMR-1106331.
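Although the work above solves the nonlinear Poisson-Boltzmann equation in a spherical shell with distinct dielectric constants, the core numerical task can be sketched in the simpler planar, dimensionless form φ'' = sinh φ with Debye-scaled distance; the grid, boundary value, and iteration count below are arbitrary.

```python
import numpy as np

# 1D dimensionless Poisson-Boltzmann problem: phi'' = sinh(phi),
# phi(0) = 2 at the charged surface, phi(L) = 0 in the bulk
N, L = 400, 10.0
h = L / N
phi = np.linspace(2.0, 0.0, N + 1)               # linear initial guess
for _ in range(50):                              # Newton on the interior nodes
    F = (phi[:-2] - 2 * phi[1:-1] + phi[2:]) / h**2 - np.sinh(phi[1:-1])
    J = (np.diag(-2.0 / h**2 - np.cosh(phi[1:-1]))     # tridiagonal Jacobian
         + np.diag(np.full(N - 2, 1.0 / h**2), 1)
         + np.diag(np.full(N - 2, 1.0 / h**2), -1))
    phi[1:-1] -= np.linalg.solve(J, F)

# far from the surface the potential follows the Gouy-Chapman decay
# phi(x) ≈ 4 tanh(phi0/4) exp(-x)
print(phi[N // 2])   # small residual potential at x = 5
```

The ion density profiles then follow from the Boltzmann factors exp(∓φ), which is how radial anion/cation distributions like those in the paper are obtained once the geometry and dielectric jump are put back in.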
Modeling the number of car theft using Poisson regression
Zulkifli, Malina; Ling, Agnes Beh Yen; Kasim, Maznah Mat; Ismail, Noriszura
2016-10-01
Regression analysis is among the most popular statistical methods used to express the relationship between a response variable and covariates. The aim of this paper is to evaluate the factors that influence the number of car thefts using a Poisson regression model, focusing on car thefts that occurred in districts of Peninsular Malaysia. Two groups of factors are considered, namely district descriptive factors and socio-demographic factors. The results show that Bumiputera composition, Chinese composition, other ethnic composition, foreign migration, number of residents aged 25 to 64, number of employed persons, and number of unemployed persons are the factors that most influence car theft cases. This information is very useful for law enforcement departments, insurance companies, and car owners seeking to reduce and limit car theft cases in Peninsular Malaysia.
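The Poisson regression machinery behind a study like this can be sketched with iteratively reweighted least squares (IRLS) on simulated data; the single covariate and its coefficients here are invented, not the paper's district-level factors.

```python
import numpy as np

def poisson_irls(X, y, iters=25):
    """Poisson regression (log link) fitted by IRLS / Fisher scoring.

    X : (n, p) design matrix including an intercept column
    y : (n,) observed counts
    """
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)                  # current fitted means
        W = mu                                 # Poisson working weights
        z = X @ beta + (y - mu) / mu           # working response
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

rng = np.random.default_rng(3)
n = 5000
x = rng.normal(size=n)                         # a hypothetical covariate
X = np.column_stack([np.ones(n), x])
y = rng.poisson(np.exp(0.5 + 0.3 * x))         # true coefficients (0.5, 0.3)
b = poisson_irls(X, y)
print(b)   # close to (0.5, 0.3)
```

Exponentiated coefficients are then interpreted as multiplicative effects on the expected count, which is how covariates such as ethnic composition or unemployment would be read in the study above.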
Estimation of a Non-homogeneous Poisson Model: An Empirical ...
African Journals Online (AJOL)
This article aims to apply the nonhomogeneous Poisson process to trends in economic development. For this purpose, a modified nonhomogeneous Poisson process is derived in which the intensity rate is the solution of a stochastic differential equation satisfying geometric Brownian motion. The mean ...
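A nonhomogeneous Poisson process with a given intensity can be simulated by Lewis-Shedler thinning; the sinusoidal intensity below is only a convenient stand-in for the geometric-Brownian-motion intensity discussed in the abstract.

```python
import numpy as np

def thinning_nhpp(rate_fn, rate_max, T, rng):
    """Simulate a nonhomogeneous Poisson process by Lewis-Shedler thinning."""
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate_max)     # candidate from rate_max process
        if t > T:
            return np.array(events)
        if rng.uniform() < rate_fn(t) / rate_max:  # accept with prob rate(t)/max
            events.append(t)

rng = np.random.default_rng(21)
rate = lambda t: 2.0 + np.sin(t)    # illustrative intensity, bounded by 3
ev = thinning_nhpp(rate, 3.0, 1000.0, rng)
print(len(ev))   # near 2000, the integral of the intensity over [0, 1000]
```

For an unbounded stochastic intensity, `rate_max` would have to be chosen piecewise over time, but the accept/reject logic is unchanged.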
Electroneutral models for dynamic Poisson-Nernst-Planck systems
Song, Zilong; Cao, Xiulei; Huang, Huaxiong
2018-01-01
The Poisson-Nernst-Planck (PNP) system is a standard model for describing ion transport. In many applications, e.g., ions in biological tissues, the presence of thin boundary layers poses both modeling and computational challenges. In this paper, we derive simplified electroneutral (EN) models where the thin boundary layers are replaced by effective boundary conditions. There are two major advantages of EN models. First, it is much cheaper to solve them numerically. Second, EN models are easier to deal with compared to the original PNP system; therefore, it would also be easier to derive macroscopic models for cellular structures using EN models. Even though the approach used here is applicable to higher-dimensional cases, this paper mainly focuses on the one-dimensional system, including the general multi-ion case. Using systematic asymptotic analysis, we derive a variety of effective boundary conditions directly applicable to the EN system for the bulk region. This EN system can be solved directly and efficiently without computing the solution in the boundary layer. The derivation is based on matched asymptotics, and the key idea is to bring back higher-order contributions into the effective boundary conditions. For Dirichlet boundary conditions, the higher-order terms can be neglected and the classical results (continuity of electrochemical potential) are recovered. For flux boundary conditions, higher-order terms account for the accumulation of ions in the boundary layer, and neglecting them leads to physically incorrect solutions. To validate the EN model, numerical computations are carried out for several examples. Our results show that solving the EN model is much more efficient than solving the original PNP system. When implemented with the Hodgkin-Huxley model, the EN model significantly reduces computational time without sacrificing solution accuracy, because it allows for relatively large mesh and time-step sizes.
Lord, Dominique; Geedipally, Srinivas Reddy; Guikema, Seth D
2010-08-01
The objective of this article is to evaluate the performance of the COM-Poisson GLM for analyzing crash data exhibiting underdispersion (when conditional on the mean). The COM-Poisson distribution, originally developed in 1962, has recently been reintroduced by statisticians for analyzing count data subjected to either over- or underdispersion. Over the last year, the COM-Poisson GLM has been evaluated in the context of crash data analysis and it has been shown that the model performs as well as the Poisson-gamma model for crash data exhibiting overdispersion. To accomplish the objective of this study, several COM-Poisson models were estimated using crash data collected at 162 railway-highway crossings in South Korea between 1998 and 2002. This data set has been shown to exhibit underdispersion when models linking crash data to various explanatory variables are estimated. The modeling results were compared to those produced from the Poisson and gamma probability models documented in a previous published study. The results of this research show that the COM-Poisson GLM can handle crash data when the modeling output shows signs of underdispersion. Finally, they also show that the model proposed in this study provides better statistical performance than the gamma probability and the traditional Poisson models, at least for this data set.
The Rasch Poisson counts model for incomplete data : An application of the EM algorithm
Jansen, G.G.H.
Rasch's Poisson counts model is a latent trait model for the situation in which K tests are administered to N examinees and the test score is a count [e.g., the repeated occurrence of some event, such as the number of items completed or the number of items answered (in)correctly]. The Rasch Poisson
Characterizing the performance of the Conway-Maxwell Poisson generalized linear model.
Francis, Royce A; Geedipally, Srinivas Reddy; Guikema, Seth D; Dhavala, Soma Sekhar; Lord, Dominique; LaRocca, Sarah
2012-01-01
Count data are pervasive in many areas of risk analysis; deaths, adverse health outcomes, infrastructure system failures, and traffic accidents are all recorded as count events, for example. Risk analysts often wish to estimate the probability distribution for the number of discrete events as part of doing a risk assessment. Traditional count data regression models of the type often used in risk assessment for this problem suffer from limitations due to the assumed variance structure. A more flexible model based on the Conway-Maxwell Poisson (COM-Poisson) distribution was recently proposed, a model that has the potential to overcome the limitations of the traditional model. However, the statistical performance of this new model has not yet been fully characterized. This article assesses the performance of a maximum likelihood estimation method for fitting the COM-Poisson generalized linear model (GLM). The objectives of this article are to (1) characterize the parameter estimation accuracy of the MLE implementation of the COM-Poisson GLM, and (2) estimate the prediction accuracy of the COM-Poisson GLM using simulated data sets. The results of the study indicate that the COM-Poisson GLM is flexible enough to model under-, equi-, and overdispersed data sets with different sample mean values. The results also show that the COM-Poisson GLM yields accurate parameter estimates. The COM-Poisson GLM provides a promising and flexible approach for performing count data regression. © 2011 Society for Risk Analysis.
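The COM-Poisson pmf itself is easy to evaluate by truncated summation of its normalizing constant, which makes the under-, equi-, and overdispersion behaviour described above directly visible (the parameter values are arbitrary):

```python
import math

def com_poisson_pmf(lam, nu, ymax=200):
    """Truncated-series COM-Poisson pmf: P(y) ∝ lam^y / (y!)^nu."""
    logw = [y * math.log(lam) - nu * math.lgamma(y + 1) for y in range(ymax + 1)]
    m = max(logw)                         # log-sum-exp for numerical stability
    w = [math.exp(l - m) for l in logw]
    Z = sum(w)
    return [wi / Z for wi in w]

def mean_var(p):
    mean = sum(y * py for y, py in enumerate(p))
    var = sum((y - mean) ** 2 * py for y, py in enumerate(p))
    return mean, var

for nu in (0.5, 1.0, 2.0):                # over-, equi-, underdispersed
    m, v = mean_var(com_poisson_pmf(4.0, nu))
    print(nu, m, v / m)                   # dispersion index: >1, =1, <1
```

In the GLM studied above, `lam` (or a centering of it) is linked to covariates while `nu` controls the dispersion, and maximum likelihood estimation repeatedly evaluates exactly this kind of truncated normalizing sum.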
The Lie-Poisson structure of integrable classical non-linear sigma models
International Nuclear Information System (INIS)
Bordemann, M.; Forger, M.; Schaeper, U.; Laartz, J.
1993-01-01
The canonical structure of classical non-linear sigma models on Riemannian symmetric spaces, which constitute the most general class of classical non-linear sigma models known to be integrable, is shown to be governed by a fundamental Poisson bracket relation that fits into the r-s-matrix formalism for non-ultralocal integrable models first discussed by Maillet. The matrices r and s are computed explicitly and, being field dependent, satisfy fundamental Poisson bracket relations of their own, which can be expressed in terms of a new numerical matrix c. It is proposed that all these Poisson brackets taken together are representation conditions for a new kind of algebra which, for this class of models, replaces the classical Yang-Baxter algebra governing the canonical structure of ultralocal models. The Poisson brackets for the transition matrices are also computed, and the notorious regularization problem associated with the definition of the Poisson brackets for the monodromy matrices is discussed. (orig.)
DEFF Research Database (Denmark)
Fokianos, Konstantinos; Rahbek, Anders Christian; Tjøstheim, Dag
This paper considers geometric ergodicity and likelihood based inference for linear and nonlinear Poisson autoregressions. In the linear case the conditional mean is linked linearly to its past values as well as the observed values of the Poisson process. This also applies to the conditional...... variance, implying an interpretation as an integer valued GARCH process. In a nonlinear conditional Poisson model, the conditional mean is a nonlinear function of its past values and a nonlinear function of past observations. As a particular example an exponential autoregressive Poisson model for time...... series is considered. Under geometric ergodicity the maximum likelihood estimators of the parameters are shown to be asymptotically Gaussian in the linear model. In addition we provide a consistent estimator of the asymptotic covariance, which is used in the simulations and the analysis of some...
Application of the Hyper-Poisson Generalized Linear Model for Analyzing Motor Vehicle Crashes.
Khazraee, S Hadi; Sáez-Castillo, Antonio Jose; Geedipally, Srinivas Reddy; Lord, Dominique
2015-05-01
The hyper-Poisson distribution can handle both over- and underdispersion, and its generalized linear model formulation allows the dispersion of the distribution to be observation-specific and dependent on model covariates. This study's objective is to examine the potential applicability of a newly proposed generalized linear model framework for the hyper-Poisson distribution in analyzing motor vehicle crash count data. The hyper-Poisson generalized linear model was first fitted to intersection crash data from Toronto, characterized by overdispersion, and then to crash data from railway-highway crossings in Korea, characterized by underdispersion. The results of this study are promising. When fitted to the Toronto data set, the goodness-of-fit measures indicated that the hyper-Poisson model with a variable dispersion parameter provided a statistical fit as good as the traditional negative binomial model. The hyper-Poisson model was also successful in handling the underdispersed data from Korea; the model performed as well as the gamma probability model and the Conway-Maxwell-Poisson model previously developed for the same data set. The advantages of the hyper-Poisson model studied in this article are noteworthy. Unlike the negative binomial model, which has difficulties in handling underdispersed data, the hyper-Poisson model can handle both over- and underdispersed crash data. Although not a major issue for the Conway-Maxwell-Poisson model, the effect of each variable on the expected mean of crashes is easily interpretable in the case of this new model. © 2014 Society for Risk Analysis.
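The hyper-Poisson pmf is proportional to λ^y/(γ)_y, with the Kummer function 1F1(1; γ; λ) as normalizing constant, and a truncated series makes the two-sided dispersion behaviour described above easy to check (the parameter values are arbitrary):

```python
def hyper_poisson_pmf(lam, gamma, ymax=300):
    """P(y) ∝ lam^y / (gamma)_y, normalized by the Kummer series 1F1(1; gamma; lam)."""
    w, term = [], 1.0
    for y in range(ymax + 1):
        w.append(term)
        term *= lam / (gamma + y)    # Pochhammer recursion: (g)_{y+1} = (g)_y (g + y)
    Z = sum(w)                       # truncated 1F1(1; gamma; lam)
    return [wi / Z for wi in w]

def mean_var(p):
    m = sum(y * py for y, py in enumerate(p))
    return m, sum((y - m) ** 2 * py for y, py in enumerate(p))

for g in (0.5, 1.0, 2.0):            # under-, equi-, overdispersed
    m, v = mean_var(hyper_poisson_pmf(4.0, g))
    print(g, m, v / m)
```

Setting γ = 1 recovers the ordinary Poisson; the GLM above lets γ (the dispersion) vary with covariates, which is the observation-specific dispersion the abstract highlights.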
Amalia, Junita; Purhadi; Otok, Bambang Widjanarko
2017-11-01
The Poisson distribution is a discrete distribution for count data with a single parameter defining both mean and variance. Poisson regression assumes that mean and variance are equal (equidispersion). Nonetheless, in some cases count data violate this assumption because the variance exceeds the mean (over-dispersion). Ignoring over-dispersion causes underestimated standard errors and, in turn, incorrect decisions in statistical tests. Paired count data exhibit correlation and follow a bivariate Poisson distribution; in the presence of over-dispersion, simple bivariate Poisson regression is not sufficient for modeling them. The Bivariate Poisson Inverse Gaussian Regression (BPIGR) model is a mixed Poisson regression for modeling paired count data with over-dispersion. The BPIGR model produces a single global model for all locations. On the other hand, each location has different geographic, social, cultural, and economic conditions, so Geographically Weighted Regression (GWR) is needed. The weighting function for each location in GWR generates a different local model. The Geographically Weighted Bivariate Poisson Inverse Gaussian Regression (GWBPIGR) model is used to handle over-dispersion and to generate local models. Parameter estimates of the GWBPIGR model are obtained by the Maximum Likelihood Estimation (MLE) method, while hypothesis testing of the GWBPIGR model is carried out by the Maximum Likelihood Ratio Test (MLRT) method.
Chen, Wansu; Shi, Jiaxiao; Qian, Lei; Azen, Stanley P
2014-06-26
To estimate relative risks or risk ratios for common binary outcomes, the most popular model-based methods are the robust (also known as modified) Poisson and the log-binomial regression. Of the two methods, it is believed that the log-binomial regression yields more efficient estimators because it is maximum likelihood based, while the robust Poisson model may be less affected by outliers. Evidence to support the robustness of robust Poisson models in comparison with log-binomial models is very limited. In this study a simulation was conducted to evaluate the performance of the two methods in several scenarios where outliers existed. The findings indicate that for data coming from a population where the relationship between the outcome and the covariate was in a simple form (e.g. log-linear), the two models yielded comparable biases and mean square errors. However, if the true relationship contained a higher order term, the robust Poisson models consistently outperformed the log-binomial models even when the level of contamination is low. The robust Poisson models are more robust (or less sensitive) to outliers compared to the log-binomial models when estimating relative risks or risk ratios for common binary outcomes. Users should be aware of the limitations when choosing appropriate models to estimate relative risks or risk ratios.
Conditional Poisson models: a flexible alternative to conditional logistic case cross-over analysis.
Armstrong, Ben G; Gasparrini, Antonio; Tobias, Aurelio
2014-11-24
The time-stratified case cross-over approach is a popular alternative to conventional time series regression for analysing associations between time series of environmental exposures (air pollution, weather) and counts of health outcomes. These are almost always analysed using conditional logistic regression on data expanded to case-control (case crossover) format, but this has some limitations; in particular, adjusting for overdispersion and auto-correlation in the counts is not possible. It has been established that a Poisson model for counts with stratum indicators gives identical estimates to those from conditional logistic regression and does not have these limitations, but it is little used, probably because of the overhead of estimating many stratum parameters. The conditional Poisson model avoids estimating stratum parameters by conditioning on the total event count in each stratum, thus simplifying the computing and increasing the number of strata for which fitting is feasible compared with the standard unconditional Poisson model. Unlike the conditional logistic model, the conditional Poisson model does not require expanding the data, and can adjust for overdispersion and auto-correlation. It is available in Stata, R, and other packages. By applying it to some real data and using simulations, we demonstrate that conditional Poisson models are simpler to code and shorter to run than conditional logistic analyses and can be fitted to larger data sets than is possible with standard Poisson models. Allowing for overdispersion or autocorrelation was possible with the conditional Poisson model, but when not required this model gave identical estimates to those from conditional logistic regression. Conditional Poisson regression models provide an alternative to case crossover analysis of stratified time series data with some advantages. The conditional Poisson model can also be used in other contexts in which primary control for confounding is by fine stratification.
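The key identity behind the conditional Poisson model — conditioning independent Poisson counts on their stratum total yields a multinomial likelihood in which the stratum parameter cancels — can be checked numerically. The rates and counts below are arbitrary illustrative values.

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

# Rates and observed counts for three days in one stratum (invented)
lams = [2.0, 3.0, 5.0]
counts = [1, 4, 3]
n = sum(counts)

# Conditional probability of the counts given their total,
# under independent Poisson counts...
joint = math.prod(poisson_pmf(k, l) for k, l in zip(counts, lams))
cond = joint / poisson_pmf(n, sum(lams))

# ...equals the multinomial pmf with p_i = lam_i / sum(lam)
p = [l / sum(lams) for l in lams]
multinom = (math.factorial(n)
            // math.prod(math.factorial(k) for k in counts)
            * math.prod(pi ** k for pi, k in zip(p, counts)))

assert math.isclose(cond, multinom)
```

Because a multiplicative stratum effect cancels in `p_i`, the conditional likelihood never needs the stratum parameters, which is why fitting scales to many strata.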
[Application of detecting and taking overdispersion into account in Poisson regression model].
Bouche, G; Lepage, B; Migeot, V; Ingrand, P
2009-08-01
Researchers often use the Poisson regression model to analyze count data. Overdispersion can occur when a Poisson regression model is used, resulting in an underestimation of the variance of the regression model parameters. Our objective was to take overdispersion into account and assess its impact, with an illustration based on data from a study investigating the relationship between use of the Internet to seek health information and the number of primary care consultations. Three methods, overdispersed Poisson, a robust estimator, and negative binomial regression, were used to take overdispersion into account in explaining variation in the number (Y) of primary care consultations. We tested for overdispersion in the Poisson regression model using the ratio of the sum of squared Pearson residuals to the number of degrees of freedom (χ²/df). We then fitted the three models and compared their parameter estimates to those given by the Poisson regression model. The variance of the number of primary care consultations (Var[Y] = 21.03) was greater than the mean (E[Y] = 5.93), and the χ²/df ratio was 3.26, which confirmed overdispersion. Standard errors of the parameters varied greatly between the Poisson regression model and the three other regression models. The interpretation of estimates for two variables (using the Internet to seek health information and single-parent family) would have changed according to the model retained, with significance levels of 0.06 and 0.002 (Poisson), 0.29 and 0.09 (overdispersed Poisson), 0.29 and 0.13 (robust estimator), and 0.45 and 0.13 (negative binomial), respectively. Different methods exist to address the underestimation of variance in the Poisson regression model when overdispersion is present. The negative binomial regression model seems particularly appropriate because of its theoretical distribution; in addition, this regression is easy to perform with ordinary statistical software packages.
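The χ²/df diagnostic used in the study can be sketched in a few lines; the counts below are invented, not the consultation data.

```python
import statistics

# Toy counts (illustrative only) and an intercept-only Poisson fit,
# for which the fitted mean is simply the sample mean.
y = [8, 2, 0, 12, 5, 3, 9, 1, 0, 7, 15, 4]
mu = statistics.mean(y)
n_params = 1

# Pearson statistic: sum of squared Pearson residuals (y - mu)^2 / mu
pearson = sum((yi - mu) ** 2 / mu for yi in y)
dispersion = pearson / (len(y) - n_params)
# A ratio well above 1 (here about 4.2) signals overdispersion.
```

In the study this ratio was 3.26; values near 1 would instead be consistent with the Poisson equidispersion assumption.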
Personal lifelong user model clouds
DEFF Research Database (Denmark)
Dolog, Peter; Kay, Judy; Kummerfeld, Bob
This paper explores an architecture for very long term user modelling, based upon personal user model clouds. These ensure that the individual's applications can access their model whenever it is needed. At the same time, the user can control the use of their user model: they can ensure it is accessed only when and where they wish, by applications that they wish. We consider the challenges of representing user models so that they can be reused by multiple applications. We indicate potential synergies between distributed and centralised user modelling architectures, proposing an architecture which combines both. Finally, we discuss implications of our approach for consistency and freshness of the user model information.
DEFF Research Database (Denmark)
Fokianos, Konstantinos; Rahbek, Anders Christian; Tjøstheim, Dag
2009-01-01
In this article we consider geometric ergodicity and likelihood-based inference for linear and nonlinear Poisson autoregression. In the linear case, the conditional mean is linked linearly to its past values, as well as to the observed values of the Poisson process. This also applies to the conditional variance, making possible an interpretation as an integer-valued generalized autoregressive conditional heteroscedasticity process. In a nonlinear conditional Poisson model, the conditional mean is a nonlinear function of its past values and past observations. As a particular example, we consider … The proof of ergodicity proceeds via Markov theory and irreducibility. Finding transparent conditions for proving ergodicity turns out to be a delicate problem in the original model formulation. This problem is circumvented by allowing a perturbation of the model. We show that as the perturbations can be chosen …
Directory of Open Access Journals (Sweden)
Hossein Fallahzadeh
2017-05-01
Introduction: Different statistical methods can be used to analyze fertility data. When the response variable is a count, the Poisson model is applied; when the conditions of the Poisson model do not hold, its generalized form is applied instead. The goal of this study was to compare the efficiency of the generalized Poisson regression model with the standard Poisson regression model in estimating the coefficients of factors affecting the current number of children. Methods: This is a cross-sectional study carried out on a population of married women within the age range of 15-49 years in Kashan, Iran. The cluster sampling method was used for data collection, with clusters consisting of the urban blocks determined by the municipality. A total of 10 clusters, each containing 30 households, was selected according to the health center's framework. The necessary data were then collected through a self-made questionnaire and direct interviews with the women under study. The data analysis was performed with the standard and generalized Poisson regression models in the R software. Results: The average number of children per woman was 1.45, with a variance of 1.073. A significant relationship was observed between the husband's age, the number of unwanted pregnancies, and the average duration of breastfeeding and the present number of children in both the standard and generalized Poisson regression models (p < 0.05). The mean age of the women participating in this study was 33.1 ± 7.57 years (from 25.53 to 40.67 years), the mean age at marriage was 20.09 ± 3.82 (from 16.27 to 23.91 years), and the mean age of their husbands was 37.9 ± 8.4 years (from 29.5 to 46.3 years). The majority of women were in the age range of 30-35 years with a median of 32 years, whereas most men were in the age range of 35-40 years with a median of 37 years. While 236 of the women did not have unwanted pregnancies, most participants of the present study had one unwanted pregnancy
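The generalized Poisson distribution compared in the study can be sketched directly. The parameterization below is Consul's (one common variant; the abstract does not specify which form was used), with mean θ/(1−λ) and variance θ/(1−λ)³, so λ > 0 gives overdispersion and λ < 0 underdispersion.

```python
import math

def gp_logpmf(k, theta, lam):
    """Consul's generalized Poisson; lam = 0 reduces to ordinary Poisson."""
    return (math.log(theta) + (k - 1) * math.log(theta + k * lam)
            - theta - k * lam - math.lgamma(k + 1))

theta, lam = 1.0, 0.3  # invented values, 0 <= lam < 1
pmf = [math.exp(gp_logpmf(k, theta, lam)) for k in range(200)]

mean = sum(k * p for k, p in enumerate(pmf))
var = sum(k * k * p for k, p in enumerate(pmf)) - mean ** 2
# Theory: mean = theta/(1-lam) ~ 1.43, var = theta/(1-lam)**3 ~ 2.92,
# so the variance exceeds the mean under this parameter choice.
```

Working on the log scale avoids the overflow that the raw pmf term (θ + kλ)^(k−1) would cause for large k.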
DEFF Research Database (Denmark)
Fokianos, Konstantinos; Rahbæk, Anders; Tjøstheim, Dag
This paper considers geometric ergodicity and likelihood based inference for linear and nonlinear Poisson autoregressions. In the linear case the conditional mean is linked linearly to its past values as well as the observed values of the Poisson process. This also applies to the conditional … The proof of ergodicity proceeds via Markov theory and irreducibility. Finding transparent conditions for proving ergodicity turns out to be a delicate problem in the original model formulation. This problem is circumvented by allowing a perturbation of the model. We show that as the perturbations can be chosen to be arbitrarily …
Log-normal frailty models fitted as Poisson generalized linear mixed models.
Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver
2016-12-01
The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: a frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
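The "explode" step mentioned above — turning one survival record into a pseudo-Poisson record per baseline-hazard piece, with the event indicator as response and the log of the time at risk as offset — can be sketched as follows. The cut points and event time are invented.

```python
import math

# One subject: event at t = 2.5, baseline hazard constant on the
# pieces [0,1), [1,2), [2,3) (cut points chosen for illustration).
cuts = [0.0, 1.0, 2.0, 3.0]
t_event, died = 2.5, 1

records = []
for lo, hi in zip(cuts, cuts[1:]):
    if t_event <= lo:
        break  # subject already left the risk set
    exposure = min(t_event, hi) - lo          # time at risk in this piece
    event = int(died and lo < t_event <= hi)  # event falls in this piece?
    records.append({"piece": (lo, hi), "y": event,
                    "offset": math.log(exposure)})
```

Each record then enters a Poisson GLM (plus a piece-specific intercept and, for frailty models, a random effect), which is the data layout the %PCFrailty macro automates.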
Statistical modelling of Poisson/log-normal data
International Nuclear Information System (INIS)
Miller, G.
2007-01-01
In statistical data fitting, self-consistency is checked by examining the closeness of the quantity χ²/NDF to 1, where χ² is the sum of squares of (data minus fit) divided by the standard deviation, and NDF is the number of data points minus the number of fit parameters. In order to calculate χ² one needs an expression for the standard deviation. In this note several alternative expressions for the standard deviation of data distributed according to a Poisson/log-normal distribution are proposed and evaluated by Monte Carlo simulation. Two preferred alternatives are identified. The use of replicate data to obtain uncertainty is problematic for a small number of replicates; a method to correct this problem is proposed. The log-normal approximation is good for sufficiently positive data. A modification of the log-normal approximation is proposed, which allows it to be used to test the hypothesis that the true value is zero. (authors)
Directory of Open Access Journals (Sweden)
Rodrigues-Motta Mariana
2008-07-01
Dark spots in the fleece area are often associated with dark fibres in wool, which limits its competitiveness with other textile fibres. Field data from a sheep experiment in Uruguay revealed an excess number of zeros for dark spots. We compared the performance of four Poisson and zero-inflated Poisson (ZIP) models under four simulation scenarios. All models performed reasonably well under the same scenario for which the data were simulated. The deviance information criterion favoured a Poisson model with residual, while the ZIP model with a residual gave estimates closer to their true values under all simulation scenarios. Both Poisson and ZIP models with an error term at the regression level performed better than their counterparts without such an error. Field data from Corriedale sheep were analysed with Poisson and ZIP models with residuals. Parameter estimates were similar for both models. Although the posterior distribution of the sire variance was skewed due to a small number of rams in the dataset, the median of this variance suggested a scope for genetic selection. The main environmental factor was the age of the sheep at shearing. In summary, age related processes seem to drive the number of dark spots in this breed of sheep.
Simulation on Poisson and negative binomial models of count road accident modeling
Sapuan, M. S.; Razali, A. M.; Zamzuri, Z. H.; Ibrahim, K.
2016-11-01
Accident count data have often been shown to exhibit overdispersion, and the data may also contain excess zero counts. A simulation study was conducted to create scenarios in which accidents happen at a T-junction, under the assumption that the dependent variable of the generated data follows a certain distribution, namely the Poisson or the negative binomial distribution, with sample sizes ranging from n = 30 to n = 500. The study objective was accomplished by fitting a Poisson regression, a negative binomial regression, and a hurdle negative binomial model to the simulated data. Model validation showed that, for each sample size, not every model fits the data well even when the data are generated from its own distribution, especially when the sample size is large. Furthermore, larger sample sizes produced more zero accident counts in the dataset.
Analysis of Blood Transfusion Data Using Bivariate Zero-Inflated Poisson Model: A Bayesian Approach.
Mohammadi, Tayeb; Kheiri, Soleiman; Sedehi, Morteza
2016-01-01
Recognizing the factors affecting the number of blood donations and blood deferrals has a major impact on blood transfusion services. There is a positive correlation between the variables "number of blood donations" and "number of blood deferrals": as the number of returns for donation increases, so does the number of blood deferrals. On the other hand, because many donors never return to donate, there is an excess zero frequency for both of the above-mentioned variables. In this study, in order to account for the correlation and to explain the excess zero frequency, the bivariate zero-inflated Poisson regression model was used for joint modeling of the number of blood donations and the number of blood deferrals. The data were analyzed using the Bayesian approach, applying noninformative priors, in the presence and absence of covariates. Estimation of the model parameters, that is, the correlation, the zero-inflation parameter, and the regression coefficients, was done through MCMC simulation. Finally, the double Poisson model, the bivariate Poisson model, and the bivariate zero-inflated Poisson model were fitted to the data and compared using the deviance information criterion (DIC). The results showed that the bivariate zero-inflated Poisson regression model fitted the data better than the other models.
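The zero-inflated Poisson building block used above can be sketched in its univariate form (the paper's model is bivariate and Bayesian); all parameter values are invented.

```python
import math
import random

random.seed(7)

def zip_pmf(k, lam, pi):
    """Zero-inflated Poisson: extra zeros occur with probability pi."""
    pois = math.exp(-lam) * lam ** k / math.factorial(k)
    return pi * (k == 0) + (1 - pi) * pois

def zip_sample(lam, pi):
    if random.random() < pi:      # structural zero ("never returns")
        return 0
    u, k, p = random.random(), 0, math.exp(-lam)
    cdf = p
    while u > cdf:                # inverse-CDF Poisson draw
        k += 1
        p *= lam / k
        cdf += p
    return k

lam, pi = 2.0, 0.4
draws = [zip_sample(lam, pi) for _ in range(20000)]
zero_share = draws.count(0) / len(draws)
# Theory: P(0) = pi + (1 - pi) * exp(-lam), roughly 0.48 here.
```

The mixture makes the observed zero frequency far larger than a plain Poisson with the same rate would allow, which is the feature the blood-donation data exhibit.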
Bayesian spatial modeling of HIV mortality via zero-inflated Poisson models.
Musal, Muzaffer; Aktekin, Tevfik
2013-01-30
In this paper, we investigate the effects of poverty and inequality on the number of HIV-related deaths in 62 New York counties via Bayesian zero-inflated Poisson models that exhibit spatial dependence. We quantify inequality via the Theil index and poverty via the ratios of two Census 2000 variables, the number of people under the poverty line and the number of people for whom poverty status is determined, in each Zip Code Tabulation Area. The purpose of this study was to investigate the effects of inequality and poverty in addition to spatial dependence between neighboring regions on HIV mortality rate, which can lead to improved health resource allocation decisions. In modeling county-specific HIV counts, we propose Bayesian zero-inflated Poisson models whose rates are functions of both covariate and spatial/random effects. To show how the proposed models work, we used three different publicly available data sets: TIGER Shapefiles, Census 2000, and mortality index files. In addition, we introduce parameter estimation issues of Bayesian zero-inflated Poisson models and discuss MCMC method implications. Copyright © 2012 John Wiley & Sons, Ltd.
The Rasch Poisson Counts Model for Incomplete Data: An Application of the EM Algorithm.
Jansen, Margo G. H.
1995-01-01
The Rasch Poisson counts model is a latent trait model for the situation in which "K" tests are administered to "N" examinees and the test score is a count (repeated number of some event). A mixed model is presented that applies the EM algorithm and that can allow for missing data. (SLD)
Brownian motion and parabolic Anderson model in a renormalized Poisson potential
Chen, Xia; Kulik, Alexey M.
2012-01-01
A method known as renormalization is proposed for constructing more physically realistic random potentials in a Poisson cloud. Brownian motion in the renormalized random potential and the related parabolic Anderson models are studied. With the renormalization, for example, models consistent with Newton's law of universal attraction can be rigorously constructed.
A LATENT CLASS POISSON REGRESSION-MODEL FOR HETEROGENEOUS COUNT DATA
WEDEL, M; DESARBO, WS; BULT; RAMASWAMY
1993-01-01
In this paper an approach is developed that accommodates heterogeneity in Poisson regression models for count data. The model developed assumes that heterogeneity arises from a distribution of both the intercept and the coefficients of the explanatory variables. We assume that the mixing
Poisson regression for modeling count and frequency outcomes in trauma research.
Gagnon, David R; Doron-LaMarca, Susan; Bell, Margret; O'Farrell, Timothy J; Taft, Casey T
2008-10-01
The authors describe how the Poisson regression method for analyzing count or frequency outcome variables can be applied in trauma studies. The outcome of interest in trauma research may represent a count of the number of incidents of behavior occurring in a given time interval, such as acts of physical aggression or substance abuse. Traditional linear regression approaches assume a normally distributed outcome variable with equal variances over the range of predictor variables, and may not be optimal for modeling count outcomes. An application of Poisson regression is presented using data from a study of intimate partner aggression among male patients in an alcohol treatment program and their female partners. Results of Poisson regression and linear regression models are compared.
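A minimal sketch of what fitting such a model involves: Newton-Raphson maximization of the Poisson log-likelihood with a log link, on simulated data with one covariate. All values are invented; a real analysis would use a statistical package rather than hand-coded updates.

```python
import math
import random

random.seed(1)

def rpois(lam):
    u, k, p = random.random(), 0, math.exp(-lam)
    cdf = p
    while u > cdf:
        k += 1
        p *= lam / k
        cdf += p
    return k

# Simulate counts with E[y] = exp(b0 + b1 * x)
b0_true, b1_true = 0.5, 0.8
xs = [random.uniform(-1, 1) for _ in range(2000)]
ys = [rpois(math.exp(b0_true + b1_true * x)) for x in xs]

# Newton-Raphson: b <- b + (X'WX)^{-1} X'(y - mu), 2x2 system by hand
b0, b1 = 0.0, 0.0
for _ in range(25):
    mus = [math.exp(b0 + b1 * x) for x in xs]
    g0 = sum(y - m for y, m in zip(ys, mus))                # score, intercept
    g1 = sum((y - m) * x for y, m, x in zip(ys, mus, xs))   # score, slope
    h00 = sum(mus)
    h01 = sum(m * x for m, x in zip(mus, xs))
    h11 = sum(m * x * x for m, x in zip(mus, xs))
    det = h00 * h11 - h01 * h01
    b0 += (h11 * g0 - h01 * g1) / det
    b1 += (h00 * g1 - h01 * g0) / det
```

Unlike ordinary least squares, the implied variance grows with the mean, which is why this approach suits count outcomes such as weekly aggression incidents.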
Sepúlveda, Nuno
2013-02-26
Background: The advent of next generation sequencing technology has accelerated efforts to map and catalogue copy number variation (CNV) in genomes of micro-organisms important for public health. A typical analysis of the sequence data involves mapping reads onto a reference genome, calculating the respective coverage, and detecting regions with too-low or too-high coverage (deletions and amplifications, respectively). Current CNV detection methods rely on statistical assumptions (e.g., a Poisson model) that may not hold in general, or require fine-tuning the underlying algorithms to detect known hits. We propose a new CNV detection methodology based on two Poisson hierarchical models, the Poisson-Gamma and Poisson-Lognormal, with the advantage of being sufficiently flexible to describe different data patterns, whilst robust against deviations from the often assumed Poisson model. Results: Using sequence coverage data of 7 Plasmodium falciparum malaria genomes (3D7 reference strain, HB3, DD2, 7G8, GB4, OX005, and OX006), we showed that empirical coverage distributions are intrinsically asymmetric and overdispersed in relation to the Poisson model. We also demonstrated a low baseline false positive rate for the proposed methodology using 3D7 resequencing data and simulation. When applied to the non-reference isolate data, our approach detected known CNV hits, including an amplification of the PfMDR1 locus in DD2 and a large deletion in the CLAG3.2 gene in GB4, and putative novel CNV regions. When compared to the recently available FREEC and cn.MOPS approaches, our findings were more concordant with putative hits from the highest quality array data for the 7G8 and GB4 isolates. Conclusions: In summary, the proposed methodology brings an increase in flexibility, robustness, accuracy and statistical rigour to CNV detection using sequence coverage data. © 2013 Sepúlveda et al.; licensee BioMed Central Ltd.
Misspecified poisson regression models for large-scale registry data
DEFF Research Database (Denmark)
Grøn, Randi; Gerds, Thomas A.; Andersen, Per K.
2016-01-01
working models that are then likely misspecified. To support and improve conclusions drawn from such models, we discuss methods for sensitivity analysis, for estimation of average exposure effects using aggregated data, and a semi-parametric bootstrap method to obtain robust standard errors. The methods...
Shiyko, Mariya P.; Li, Yuelin; Rindskopf, David
2012-01-01
Intensive longitudinal data (ILD) have become increasingly common in the social and behavioral sciences; count variables, such as the number of daily smoked cigarettes, are frequently used outcomes in many ILD studies. We demonstrate a generalized extension of growth mixture modeling (GMM) to Poisson-distributed ILD for identifying qualitatively…
The Fixed-Effects Zero-Inflated Poisson Model with an Application to Health Care Utilization
Majo, M.C.; van Soest, A.H.O.
2011-01-01
Response variables that are scored as counts and that present a large number of zeros often arise in quantitative health care analysis. We define a zero-inflated Poisson model with fixed effects in both of its equations to identify respondent and health-related characteristics associated with
Modeling of Electrokinetic Processes Using the Nernst-Planck-Poisson System
DEFF Research Database (Denmark)
Paz-Garcia, Juan Manuel; Johannesson, Björn; Ottosen, Lisbeth M.
2010-01-01
Electrokinetic processes are known as the mobilization of species within the pore solution of porous materials under the effect of an external electric field. A finite element model was implemented and used for the integration of the coupled Nernst-Planck-Poisson system of equations in order...
Multiple mortality modeling in Poisson Lee-Carter framework
D'Amato, V.; Haberman, S.; Piscopo, G.; Russolillo, M.; Trapani, L.
2016-01-01
The academic literature in the longevity field has recently focused on models for detecting multiple population trends (D'Amato et al., 2012b; Njenga and Sherris, 2011; Russolillo et al., 2011, etc.). In particular, increasing interest has been shown in "related" population dynamics or "parent" populations characterized by similar socioeconomic conditions and eventually also by geographical proximity. These studies suggest dependence across multiple populations and common long-run relationship...
Business Cycle Models with Embodied Technological Change and Poisson Shocks
Schlegel, Christoph
2004-01-01
The first part analyzes an Endogenous Business Cycle model with embodied technological change. Households take an optimal decision about their spending for consumption and financing of R&D. The probability of a technology invention occurring is an increasing function of aggregate R&D expenditure in the whole economy. New technologies bring higher productivity, but rather than applying to the whole capital stock, they require a new vintage of capital, which first has to be accu...
The Poisson model limits in NBA basketball: Complexity in team sports
Martín-González, Juan Manuel; de Saá Guerra, Yves; García-Manso, Juan Manuel; Arriaza, Enrique; Valverde-Estévez, Teresa
2016-12-01
Team sports are frequently studied by researchers. There is a presumption that scoring in basketball is a random process that can be described by the Poisson model. Basketball is a collaboration-opposition sport, where the non-linear local interactions among players are reflected in the evolution of the score that ultimately determines the winner. In the NBA, the outcomes of close games are often decided in the last minute, where fouls play a main role. We examined 6130 NBA games in order to analyze the time intervals between baskets and the scoring dynamics. Most numbers of baskets (n) over a time interval (ΔT) follow a Poisson distribution, but some (e.g., ΔT = 10 s, n > 3) behave as a power law. The Poisson distribution includes most baskets in any game, in most game situations, but in close games in the last minute the numbers of events are distributed following a power law. The number of events can be adjusted by a mixture of the two distributions. In close games, both teams try to maintain their advantage solely in order to reach the last minute: a completely different game. For this reason, we propose to use the Poisson model as a reference. The complex dynamics will emerge from the limits of this model.
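The proposed use of the Poisson model as a reference can be illustrated by computing how rare n > 3 baskets in a ΔT = 10 s window would be under a homogeneous Poisson baseline. The scoring rate below is an assumed round number, not an estimate from the 6130 games.

```python
import math

# Assumed combined scoring rate: roughly one basket every 24 s of game
# time for both teams together (illustrative value only).
rate_per_sec = 1 / 24
dt = 10.0
lam = rate_per_sec * dt  # expected baskets in a 10 s window

def pois_cdf(n, lam):
    return sum(math.exp(-lam) * lam ** k / math.factorial(k)
               for k in range(n + 1))

p_gt3 = 1 - pois_cdf(3, lam)
# Under this baseline, 4+ baskets in 10 s is a very rare event, so an
# excess of such windows (as in final minutes) signals non-Poisson dynamics.
```

Windows that occur far more often than this baseline predicts are exactly the power-law regime the authors describe.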
Hidden Markov models for zero-inflated Poisson counts with an application to substance use.
DeSantis, Stacia M; Bandyopadhyay, Dipankar
2011-06-30
Paradigms for substance abuse cue-reactivity research involve pharmacological or stressful stimulation designed to elicit stress and craving responses in cocaine-dependent subjects. It is unclear as to whether stress induced from participation in such studies increases drug-seeking behavior. We propose a 2-state hidden Markov model to model the number of cocaine abuses per week before and after participation in a stress- and cue-reactivity study. The hypothesized latent state corresponds to 'high' or 'low' use. To account for a preponderance of zeros, we assume a zero-inflated Poisson model for the count data. Transition probabilities depend on the prior week's state, fixed demographic variables, and time-varying covariates. We adopt a Bayesian approach to model fitting, and use the conditional predictive ordinate statistic to demonstrate that the zero-inflated Poisson hidden Markov model outperforms other models for longitudinal count data. Copyright © 2011 John Wiley & Sons, Ltd.
Poisson regression approach for modeling fatal injury rates amongst Malaysian workers
International Nuclear Information System (INIS)
Kamarulzaman Ibrahim; Heng Khai Theng
2005-01-01
Many safety studies are based on analysis of injury surveillance data. The injury surveillance data gathered for the analysis include information on the number of employees at risk of injury in each of several strata, where the strata are defined in terms of a series of important predictor variables. Further insight into the relationship between fatal injury rates and predictor variables may be obtained by the Poisson regression approach. Poisson regression is widely used in analyzing count data. In this study, Poisson regression is used to model the relationship between fatal injury rates and the predictor variables year (1995-2002), gender, recording system, and industry type. Data for the analysis were obtained from PERKESO and Jabatan Perangkaan Malaysia. It was found that the assumption that the data follow a Poisson distribution was violated. After correcting for overdispersion, the predictor variables found to be significant in the model are gender, system of recording, industry type, and two interaction effects (between recording system and industry type, and between year and industry type).
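A common way to correct for the overdispersion mentioned above is the quasi-Poisson adjustment: estimate the dispersion φ from the Pearson statistic and inflate standard errors by √φ. The numbers below are invented, not the PERKESO estimates.

```python
import math

# Toy summary from a Poisson fit (illustrative values only)
beta, se_naive = 0.35, 0.08
pearson_chi2, df = 260.0, 100   # Pearson statistic and residual df

phi = pearson_chi2 / df          # estimated dispersion, here 2.6
se_adj = se_naive * math.sqrt(phi)

z_naive = beta / se_naive        # overstated significance if phi > 1
z_adj = beta / se_adj            # corrected test statistic
```

With φ > 1 the corrected z-statistic shrinks, which is why variables that look significant under plain Poisson can lose significance after the correction.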
Stochastic Interest Model Based on Compound Poisson Process and Applications in Actuarial Science
Directory of Open Access Journals (Sweden)
Shilong Li
2017-01-01
Considering the stochastic behavior of interest rates in the financial market, we construct a new class of interest models based on the compound Poisson process. Unlike earlier references, this paper describes the randomness of interest rates by modeling the force of interest with Poisson random jumps directly. To solve the problem of calculating the accumulated interest force function, an important integral technique is employed, and a concept called the critical value is introduced to investigate the validity condition of this new model. We also discuss the actuarial present values of several life annuities under this new interest model. Simulations are done to illustrate the theoretical results, and the effect of the parameters of the interest model on actuarial present values is also analyzed.
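A minimal Monte Carlo sketch of this idea — the force of interest jumping upward at the arrival times of a Poisson process — under invented parameter values; the paper's actual model and integral technique are not reproduced here.

```python
import math
import random

random.seed(3)

# Force of interest: base rate plus permanent upward jumps of size
# `jump` at Poisson arrival times with intensity `lam` (all invented).
base, lam, jump, horizon = 0.03, 0.5, 0.01, 10.0

def accumulated_force():
    """Integral of the force of interest over [0, horizon] for one path."""
    total, t = base * horizon, 0.0
    while True:
        t += random.expovariate(lam)   # next Poisson arrival
        if t > horizon:
            return total
        total += jump * (horizon - t)  # the jump acts from time t onward

# Monte Carlo present value of 1 unit paid at the horizon
pv = sum(math.exp(-accumulated_force()) for _ in range(5000)) / 5000
# With positive jumps, pv lies below exp(-base * horizon) ~ 0.741
```

The same accumulated-force integral is what enters the actuarial present values of the life annuities discussed in the abstract.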
How does Poisson kriging compare to the popular BYM model for mapping disease risks?
Directory of Open Access Journals (Sweden)
Gebreab Samson
2008-02-01
Background Geostatistical techniques are now available to account for spatially varying population sizes and spatial patterns in the mapping of disease rates. At first glance, Poisson kriging represents an attractive alternative to increasingly popular Bayesian spatial models in that: (1) it is easier to implement and less CPU intensive, and (2) it accounts for the size and shape of geographical units, avoiding the limitations of the conditional auto-regressive (CAR) models commonly used in Bayesian algorithms while allowing for the creation of isopleth risk maps. The two approaches, however, have never been compared in simulation studies, and there is a need to better understand their merits in terms of accuracy and precision of disease risk estimates. Results The Besag, York and Mollié (BYM) model and Poisson kriging (point and area-to-area implementations) were applied to age-adjusted lung and cervix cancer mortality rates recorded for white females in two contrasting county geographies: (1) the state of Indiana, which consists of 92 counties of fairly similar size and shape, and (2) four states in the Western US (Arizona, California, Nevada and Utah) forming a set of 118 counties that are vastly different geographical units. The spatial support (i.e. point versus area) has a much smaller impact on the results than the statistical methodology (i.e. geostatistical versus Bayesian models). Differences between methods are particularly pronounced in the Western US dataset: the BYM model yields a smoother risk surface and a prediction variance that changes mainly as a function of the predicted risk, while the Poisson kriging variance increases in large sparsely populated counties. Simulation studies showed that the geostatistical approach yields smaller prediction errors, more precise and accurate probability intervals, and allows a better discrimination between counties with high and low mortality risks. The benefit of area-to-area Poisson kriging increases as the county
Directory of Open Access Journals (Sweden)
Lope Virginia
2009-01-01
Background: Non-Hodgkin's lymphomas (NHLs) have been linked to proximity to industrial areas, but evidence regarding the health risk posed by residence near pollutant industries is very limited. The European Pollutant Emission Register (EPER) is a public register that furnishes valuable information on industries that release pollutants to air and water, along with their geographical location. This study sought to explore the relationship between NHL mortality in small areas in Spain and environmental exposure to pollutant emissions from EPER-registered industries, using three Poisson-regression-based mathematical models. Methods: Observed cases were drawn from mortality registries in Spain for the period 1994–2003. Industries were grouped into the following sectors: energy; metal; mineral; organic chemicals; waste; paper; food; and use of solvents. Populations having an industry within a radius of 1, 1.5, or 2 kilometres from the municipal centroid were deemed to be exposed. Municipalities outside those radii were considered as reference populations. The relative risks (RRs) associated with proximity to pollutant industries were estimated using the following methods: Poisson regression; a mixed Poisson model with a random provincial effect; and spatial autoregressive modelling (BYM model). Results: Only proximity of paper industries to population centres (<2 km) could be associated with a greater risk of NHL mortality (mixed model: RR 1.24, 95% CI 1.09–1.42; BYM model: RR 1.21, 95% CI 1.01–1.45; Poisson model: RR 1.16, 95% CI 1.06–1.27). Spatial models yielded higher estimates. Conclusion: The reported association between exposure to air pollution from the paper, pulp and board industry and NHL mortality is independent of the model used. Inclusion of spatial random-effects terms in the risk estimate improves the study of associations between environmental exposures and mortality. The EPER could be of great utility when studying the effects of
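The relative risks above come from Poisson regression of death counts with person-years as an offset. As a minimal sketch with invented aggregate counts (not the study's data), the MLE of the rate ratio for a single binary exposure reduces to the ratio of crude incidence rates, with a Wald interval on the log scale:

```python
import math

# Invented aggregate data (NOT the study's): deaths and person-years in
# municipalities near industry vs. reference municipalities.
deaths_exp, py_exp = 58, 50_000.0
deaths_ref, py_ref = 100, 100_000.0

# For the Poisson model log E[deaths] = log(person-years) + b0 + b1*near,
# the MLE of the rate ratio exp(b1) is the ratio of crude incidence rates.
rr = (deaths_exp / py_exp) / (deaths_ref / py_ref)

# Wald 95% CI on the log scale: Var(log RR) = 1/deaths_exp + 1/deaths_ref
se = math.sqrt(1.0 / deaths_exp + 1.0 / deaths_ref)
lo, hi = rr * math.exp(-1.96 * se), rr * math.exp(1.96 * se)
print(round(rr, 2), round(lo, 2), round(hi, 2))
```

With covariates or a random provincial effect the closed form no longer applies and the full GLM must be fitted iteratively, but the interpretation of exp(b1) as a rate ratio is unchanged.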
Numerical solution of continuous-time DSGE models under Poisson uncertainty
DEFF Research Database (Denmark)
Posch, Olaf; Trimborn, Timo
We propose a simple and powerful method for determining the transition process in continuous-time DSGE models under Poisson uncertainty numerically. The idea is to transform the system of stochastic differential equations into a system of functional differential equations of the retarded type. We then use the Waveform Relaxation algorithm to provide a guess of the policy function and solve the resulting system of ordinary differential equations by standard methods and fixed-point iteration. Analytical solutions are provided as a benchmark from which our numerical method can be used to explore broader classes of models. We illustrate the algorithm by simulating both the stochastic neoclassical growth model and the Lucas model under Poisson uncertainty, which is motivated by the Barro-Rietz rare disaster hypothesis. We find that, even for non-linear policy functions, the maximum (absolute) error is very...
A Local Poisson Graphical Model for inferring networks from sequencing data.
Allen, Genevera I; Liu, Zhandong
2013-09-01
Gaussian graphical models, a class of undirected graphs or Markov Networks, are often used to infer gene networks based on microarray expression data. Many scientists, however, have begun using high-throughput sequencing technologies such as RNA-sequencing or next generation sequencing to measure gene expression. As the resulting data consists of counts of sequencing reads for each gene, Gaussian graphical models are not optimal for this discrete data. In this paper, we propose a novel method for inferring gene networks from sequencing data: the Local Poisson Graphical Model. Our model assumes a Local Markov property where each variable conditional on all other variables is Poisson distributed. We develop a neighborhood selection algorithm to fit our model locally by performing a series of l1 penalized Poisson, or log-linear, regressions. This yields a fast parallel algorithm for estimating networks from next generation sequencing data. In simulations, we illustrate the effectiveness of our methods for recovering network structure from count data. A case study on breast cancer microRNAs (miRNAs), a novel application of graphical models, finds known regulators of breast cancer genes and discovers novel miRNA clusters and hubs that are targets for future research.
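The neighbourhood-selection idea can be sketched concretely: regress one count variable on the others with an l1-penalised Poisson regression and read candidate edges off the nonzero coefficients. The following is a hand-rolled proximal-gradient illustration on synthetic counts, not the authors' code; gene names, penalty value, and step size are all arbitrary (and for simplicity the intercept is penalised too).

```python
import numpy as np

rng = np.random.default_rng(0)

def l1_poisson_regression(X, y, lam, n_iter=5000, lr=0.002):
    """l1-penalised Poisson regression (NLL + lam*||b||_1) fitted by
    proximal gradient descent with soft-thresholding."""
    n, p = X.shape
    b = np.zeros(p)
    for _ in range(n_iter):
        mu = np.exp(np.clip(X @ b, -20.0, 20.0))   # guard against overflow
        b = b - lr * (X.T @ (mu - y)) / n          # gradient step on the NLL
        b = np.sign(b) * np.maximum(np.abs(b) - lr * lam, 0.0)  # prox step
    return b

# Toy "sequencing counts": gene g2 depends on g0; g1 is independent.
n = 500
g0 = rng.poisson(3.0, n)
g1 = rng.poisson(3.0, n)
g2 = rng.poisson(np.exp(0.3 * g0))
X = np.column_stack([g0, g1, np.ones(n)])  # candidate neighbours + intercept
b = l1_poisson_regression(X, g2.astype(float), lam=0.05)
print(np.round(b, 2))  # a clearly nonzero weight on g0 suggests an edge g0 -- g2
```

Fitting one such regression per node, with edges taken from the union or intersection of nonzero coefficients, is the local scheme the abstract describes; the per-node fits are independent, which is what makes the algorithm parallel.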
A Poisson-Fault Model for Testing Power Transformers in Service
Directory of Open Access Journals (Sweden)
Dengfu Zhao
2014-01-01
This paper presents a method for assessing the instant failure rate of a power transformer under different working conditions. The method can be applied to a dataset from a power transformer under periodic inspections and maintenance. We use a Poisson-fault model to describe failures of a power transformer. When investigating a Bayes estimate of the instant failure rate under the model, we find that the complexities of a classical method and of a Monte Carlo simulation are unacceptable. By establishing a new filtered estimate of Poisson process observations, we propose a fast algorithm for the Bayes estimate of the instant failure rate. The proposed algorithm is tested on simulated datasets of a power transformer. For these datasets, the proposed estimators of the model parameters perform better than other estimators, and the simulation results show that the suggested algorithm is the quickest of the three candidates.
Mohebbi, Mohammadreza; Wolfe, Rory; Jolley, Damien
2011-10-03
Analytic methods commonly used in epidemiology do not account for spatial correlation between observations. In regression analyses, omission of that autocorrelation can bias parameter estimates and yield incorrect standard error estimates. We used age standardised incidence ratios (SIRs) of esophageal cancer (EC) from the Babol cancer registry from 2001 to 2005, and extracted socioeconomic indices from the Statistical Centre of Iran. The following models for SIR were used: (1) Poisson regression with agglomeration-specific nonspatial random effects; (2) Poisson regression with agglomeration-specific spatial random effects. Distance-based and neighbourhood-based autocorrelation structures were used for defining the spatial random effects and a pseudolikelihood approach was applied to estimate model parameters. The Bayesian information criterion (BIC), Akaike's information criterion (AIC) and adjusted pseudo R2, were used for model comparison. A Gaussian semivariogram with an effective range of 225 km best fit spatial autocorrelation in agglomeration-level EC incidence. The Moran's I index was greater than its expected value indicating systematic geographical clustering of EC. The distance-based and neighbourhood-based Poisson regression estimates were generally similar. When residual spatial dependence was modelled, point and interval estimates of covariate effects were different to those obtained from the nonspatial Poisson model. The spatial pattern evident in the EC SIR and the observation that point estimates and standard errors differed depending on the modelling approach indicate the importance of accounting for residual spatial correlation in analyses of EC incidence in the Caspian region of Iran. Our results also illustrate that spatial smoothing must be applied with care.
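The Moran's I statistic mentioned above has a simple closed form given a spatial weight matrix; a small sketch with a toy four-area contiguity matrix (illustrative values, not the registry data):

```python
import numpy as np

def morans_i(x, W):
    """Moran's I spatial autocorrelation index for values x and a
    spatial weight matrix W with zero diagonal (w_ii = 0)."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    n = x.size
    return n * (z @ W @ z) / (W.sum() * (z @ z))

# Four areas on a line, rook contiguity; spatially clustered values.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = [10.0, 9.0, 2.0, 1.0]
I = morans_i(x, W)
print(round(I, 3))
# Under no autocorrelation E[I] = -1/(n-1) = -1/3 here; I well above
# that indicates clustering, as reported for the EC incidence data.
```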
Directory of Open Access Journals (Sweden)
Jolley Damien
2011-10-01
Background: Analytic methods commonly used in epidemiology do not account for spatial correlation between observations. In regression analyses, omission of that autocorrelation can bias parameter estimates and yield incorrect standard error estimates. Methods: We used age-standardised incidence ratios (SIRs) of esophageal cancer (EC) from the Babol cancer registry from 2001 to 2005, and extracted socioeconomic indices from the Statistical Centre of Iran. The following models for SIR were used: (1) Poisson regression with agglomeration-specific nonspatial random effects; (2) Poisson regression with agglomeration-specific spatial random effects. Distance-based and neighbourhood-based autocorrelation structures were used for defining the spatial random effects, and a pseudolikelihood approach was applied to estimate model parameters. The Bayesian information criterion (BIC), Akaike's information criterion (AIC) and adjusted pseudo-R2 were used for model comparison. Results: A Gaussian semivariogram with an effective range of 225 km best fit spatial autocorrelation in agglomeration-level EC incidence. The Moran's I index was greater than its expected value, indicating systematic geographical clustering of EC. The distance-based and neighbourhood-based Poisson regression estimates were generally similar. When residual spatial dependence was modelled, point and interval estimates of covariate effects were different from those obtained from the nonspatial Poisson model. Conclusions: The spatial pattern evident in the EC SIR and the observation that point estimates and standard errors differed depending on the modelling approach indicate the importance of accounting for residual spatial correlation in analyses of EC incidence in the Caspian region of Iran. Our results also illustrate that spatial smoothing must be applied with care.
Pareto genealogies arising from a Poisson branching evolution model with selection.
Huillet, Thierry E
2014-02-01
We study a class of coalescents derived from a sampling procedure out of N i.i.d. Pareto(α) random variables normalized by their sum, including β-size-biasing on total length effects (β < α). Depending on the range of α, this leads either to a discrete-time Poisson-Dirichlet (α, -β) Ξ-coalescent (α ∈ [0, 1)), or to a family of continuous-time Beta(2 - α, α - β) Λ-coalescents (α ∈ [1, 2)), or to the Kingman coalescent (α ≥ 2). We indicate that this class of coalescent processes (and their scaling limits) may be viewed as the genealogical processes of some forward-in-time evolving branching population models including selection effects. In such constant-size population models, the reproduction step, which is based on a fitness-dependent Poisson point process with scaling power-law(α) intensity, is coupled to a selection step consisting of sorting out the N fittest individuals issued from the reproduction step.
Studies on a Double Poisson-Geometric Insurance Risk Model with Interference
Directory of Open Access Journals (Sweden)
Yujuan Huang
2013-01-01
This paper mainly studies a generalized double Poisson-Geometric insurance risk model. By a martingale and stopping-time approach, we obtain the adjustment-coefficient equation, the Lundberg inequality, and a formula for the ruin probability. The Laplace transform of the time at which the surplus first reaches a given level is also discussed, and its expectation and variance are obtained. Finally, we give numerical examples.
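For context, in the classical compound-Poisson (Cramér-Lundberg) risk model with exponential claim sizes, the adjustment coefficient and ruin probability that the abstract generalizes can be computed directly. A sketch with arbitrary parameter values (the paper's double Poisson-Geometric model with interference is more general than this):

```python
import math

# Classical Cramér-Lundberg surplus model with exponential(mu) claims:
# claims arrive at Poisson rate lam, premiums accrue at rate c.
# The adjustment coefficient R is the positive root of the Lundberg
# equation  lam + c*r = lam * M_X(r),  with  M_X(r) = mu / (mu - r).
lam, mu, c = 2.0, 1.0, 3.0

def lundberg(r):
    return lam + c * r - lam * mu / (mu - r)

# Bisection on (0, mu); for exponential claims R = mu - lam/c in closed form.
lo_r, hi_r = 1e-9, mu - 1e-9
for _ in range(200):
    mid = 0.5 * (lo_r + hi_r)
    if lundberg(mid) > 0.0:
        lo_r = mid
    else:
        hi_r = mid
R = 0.5 * (lo_r + hi_r)

def psi(u):
    """Exact ruin probability for exponential claims, initial surplus u."""
    return (lam / (c * mu)) * math.exp(-R * u)

print(round(R, 4), round(psi(5.0), 4))
```

The Lundberg inequality then bounds the ruin probability by exp(-R*u) for any claim distribution with a finite adjustment coefficient, which is the role R plays in the generalized model as well.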
Statistical Assessment on Cancer Risks of Ionizing Radiation and Smoking Based on Poisson Models
Tomita, Makoto; Otake, Masanori
2001-01-01
In many epidemiological and medical studies, the numbers of cancer mortalities in a categorical classification may be considered as having Poisson distributions with person-years at risk depending upon time. The cancer mortalities have been evaluated by additive or multiplicative models with regard to background and excess risks based on several covariates, such as sex, age at the time of the bombings, time at exposure, ionizing radiation, cigarette smoking habits, duration of smoking habits, etc. A...
Mental models and user training
Directory of Open Access Journals (Sweden)
Saša Zupanič
1997-01-01
One of the functions of the reference service is user training, which means teaching users how to use the library and its information sources (nowadays mainly computerized systems). As the scientific understanding of the teaching/learning process shifts, the changes also affect the methods of user training in libraries. Human-computer interaction (HCI) is an interdisciplinary and very active research area which studies how humans use computers, including their mental and behavioral characteristics. The applications of psychological theories to HCI are especially significant in three areas: psychological (mental, conceptual) models, individual differences, and error behavior. The mental models theory is a powerful tool for understanding the ways in which users interact with an information system. Claims based on this theory can affect the methods (conceptualization) of user training and the overall design of information systems.
Zero inflated Poisson and negative binomial regression models: application in education.
Salehi, Masoud; Roudbari, Masoud
2015-01-01
The numbers of failed courses and semesters of students are indicators of their performance. These counts have zero-inflated (ZI) distributions. Using ZI Poisson and negative binomial distributions, we can model these count data to find the associated factors and estimate the parameters. This study aims to investigate the important factors related to the educational performance of students. This cross-sectional study was performed in 2008-2009 at Iran University of Medical Sciences (IUMS), with a population of almost 6000 students; 670 students were selected using stratified random sampling. The educational and demographic data were collected using the University records. The study design was approved at IUMS, and the students' data were kept confidential. Descriptive statistics and ZI Poisson and negative binomial regressions were used to analyze the data, using STATA. For the number of failed semesters, students' total average and the quota system played the largest roles in both the ZI Poisson and negative binomial models. For the number of failed courses, total average and being at the undergraduate or master's level had the largest effects in both models. In all models the total average had the largest effect on the number of failed courses or semesters. The next most important factor was the quota system for failed semesters, and undergraduate or master's level for failed courses. Therefore, the total average has an important inverse effect on the numbers of failed courses and semesters.
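A small illustration of why ZI models suit such data: zero-inflated Poisson counts are overdispersed, and the zero-inflation probability π and rate λ can be recovered from the first two moments. This uses synthetic data as a stand-in for "number of failed courses"; the study itself fits ZIP and ZINB regressions by maximum likelihood with covariates.

```python
import math, random, statistics

random.seed(42)

def rpois(lam):
    # Knuth's Poisson sampler; adequate for moderate lam
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# Zero-inflated Poisson: structural zero with prob pi, else Poisson(lam).
pi_true, lam_true = 0.4, 2.5
ys = [0 if random.random() < pi_true else rpois(lam_true)
      for _ in range(20000)]

# Method of moments, using E[Y] = (1-pi)*lam and
# Var[Y] = (1-pi)*lam*(1 + pi*lam):
m = statistics.fmean(ys)
s2 = statistics.variance(ys)
lam_hat = (s2 + m * m - m) / m
pi_hat = (s2 - m) / (s2 + m * m - m)
print(round(lam_hat, 2), round(pi_hat, 2))
```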
Modeling Repeated Count Data : Some Extensions of the Rasch Poisson Counts Model
van Duijn, M.A.J.; Jansen, Margo
1995-01-01
We consider data that can be summarized as an N × K table of counts, for example, test data obtained by administering K tests to N subjects. The cell entries y(ij) are assumed to be conditionally independent Poisson-distributed random variables, given the NK Poisson intensity parameters mu(ij). The
NORTRIP emission model user guide
Energy Technology Data Exchange (ETDEWEB)
Denby, Rolstad Bruce
2012-07-01
The NORTRIP emission model has been developed at NILU, in conjunction with other Nordic institutes, to model non-exhaust traffic induced emissions. This short summary document explains how to run the NORTRIP model from the MATLAB environment or by using the executable user interface version. It also provides brief information on input files and the model architecture.(Author)
Poisson versus threshold models for genetic analysis of clinical mastitis in US Holsteins.
Vazquez, A I; Weigel, K A; Gianola, D; Bates, D M; Perez-Cabal, M A; Rosa, G J M; Chang, Y M
2009-10-01
Typically, clinical mastitis is coded as the presence or absence of disease in a given lactation, and records are analyzed with either linear models or binary threshold models. Because the presence of mastitis may include cows with multiple episodes, there is a loss of information when counts are treated as binary responses. Poisson models are appropriate for random variables measured as the number of events, and although these models are used extensively in studying the epidemiology of mastitis, they have rarely been used for studying the genetic aspects of mastitis. Ordinal threshold models are pertinent for ordered categorical responses; although one can hypothesize that the number of clinical mastitis episodes per animal reflects a continuous underlying increase in mastitis susceptibility, these models have rarely been used in genetic analysis of mastitis. The objective of this study was to compare probit, Poisson, and ordinal threshold models for the genetic evaluation of US Holstein sires for clinical mastitis. Mastitis was measured as a binary trait or as the number of mastitis cases. Data from 44,908 first-parity cows recorded in on-farm herd management software were gathered, edited, and processed for the present study. The cows were daughters of 1,861 sires, distributed over 94 herds. Predictive ability was assessed via a 5-fold cross-validation using 2 loss functions: mean squared error of prediction (MSEP) as the end point and a cost difference function. The heritability estimates were 0.061 for mastitis measured as a binary trait in the probit model and 0.085 and 0.132 for the number of mastitis cases in the ordinal threshold and Poisson models, respectively; because of scale differences, only the probit and ordinal threshold models are directly comparable. Among healthy animals, MSEP was smallest for the probit model, and the cost function was smallest for the ordinal threshold model. Among diseased animals, MSEP and the cost function were smallest
Bayesian Estimation Of Shift Point In Poisson Model Under Asymmetric Loss Functions
Directory of Open Access Journals (Sweden)
Uma Srivastava
2012-01-01
The paper deals with estimating a shift point which occurs in a sequence of independent observations from a Poisson model in statistical process control. The shift point occurs in the sequence when m life data are observed. Bayes estimators of the shift point m and of the process means before and after the shift are derived for symmetric and asymmetric loss functions under informative and non-informative priors. A sensitivity analysis of the Bayes estimators is carried out by simulation and numerical comparisons in R. The results show the effectiveness of the estimators for a shift in a sequence of Poisson observations.
Receiver design for SPAD-based VLC systems under Poisson-Gaussian mixed noise model.
Mao, Tianqi; Wang, Zhaocheng; Wang, Qi
2017-01-23
Single-photon avalanche diode (SPAD) is a promising photosensor because of its high sensitivity to optical signals in weak illuminance environment. Recently, it has drawn much attention from researchers in visible light communications (VLC). However, existing literature only deals with the simplified channel model, which only considers the effects of Poisson noise introduced by SPAD, but neglects other noise sources. Specifically, when an analog SPAD detector is applied, there exists Gaussian thermal noise generated by the transimpedance amplifier (TIA) and the digital-to-analog converter (D/A). Therefore, in this paper, we propose an SPAD-based VLC system with pulse-amplitude-modulation (PAM) under Poisson-Gaussian mixed noise model, where Gaussian-distributed thermal noise at the receiver is also investigated. The closed-form conditional likelihood of received signals is derived using the Laplace transform and the saddle-point approximation method, and the corresponding quasi-maximum-likelihood (quasi-ML) detector is proposed. Furthermore, the Poisson-Gaussian-distributed signals are converted to Gaussian variables with the aid of the generalized Anscombe transform (GAT), leading to an equivalent additive white Gaussian noise (AWGN) channel, and a hard-decision-based detector is invoked. Simulation results demonstrate that, the proposed GAT-based detector can reduce the computational complexity with marginal performance loss compared with the proposed quasi-ML detector, and both detectors are capable of accurately demodulating the SPAD-based PAM signals.
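The GAT step can be sketched numerically: for unit-gain Poisson counts plus N(0, σ²) thermal noise, the transform 2·sqrt(x + 3/8 + σ²) makes the variance approximately one across signal levels. This is a generic illustration of the transform itself, not the paper's receiver chain; the photon levels and noise level are arbitrary.

```python
import math, random

random.seed(1)
sigma = 0.5  # std of the additive Gaussian (thermal) noise

def rpois(lam):
    # Knuth's Poisson sampler
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def gat(x, sigma):
    """Generalized Anscombe transform for unit-gain Poisson + N(0, sigma^2):
    output variance is approximately 1, independent of the signal level."""
    return 2.0 * math.sqrt(max(x + 0.375 + sigma * sigma, 0.0))

var_list = []
for lam in (5.0, 20.0, 80.0):
    ts = [gat(rpois(lam) + random.gauss(0.0, sigma), sigma)
          for _ in range(20000)]
    mt = sum(ts) / len(ts)
    var_list.append(sum((t - mt) ** 2 for t in ts) / (len(ts) - 1))
print([round(v, 2) for v in var_list])  # all close to 1
```

After this stabilisation the channel behaves approximately as AWGN, which is what allows the hard-decision detector described in the abstract.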
Unobtrusive user modeling for adaptive hypermedia
Holz, H.J.; Hofmann, K.; Reed, C.; Uchyigit, G.; Ma, M.Y.
2008-01-01
We propose a technique for user modeling in Adaptive Hypermedia (AH) that is unobtrusive at both the level of observable behavior and that of cognition. Unobtrusive user modeling is complementary to transparent user modeling. Unobtrusive user modeling induces user models appropriate for Educational
The Stochastic stability of a Logistic model with Poisson white noise
International Nuclear Information System (INIS)
Duan Dong-Hai; Xu Wei; Zhou Bing-Chang; Su Jun
2011-01-01
The stochastic stability of a logistic model subjected to the effect of a random natural environment, modeled as a Poisson white noise process, is investigated. The properties of the stochastic response are discussed via calculation of the Lyapunov exponent, which has proven to be the most useful diagnostic tool for the stability of dynamical systems. The generalised Itô differentiation formula is used to analyse the stochastic stability of the response. The results indicate that the stability of the response is related to the intensity and amplitude distribution of the environment noise and the growth rate of the species. (general)
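A toy sketch of the Lyapunov-exponent idea for jump-driven growth: linearising around the trivial solution, log x grows at rate a between jumps and by ln(1+J) at each Poisson event, so the exponent is a + ν·ln(1+J), and jumps can destabilise an otherwise stable equilibrium. The parameters are arbitrary and the model is a deliberate simplification of the paper's logistic system.

```python
import math, random

random.seed(7)

# Linearised jump-growth model: between events, dx = a*x dt;
# at the events of a rate-nu Poisson process, x -> x*(1 + J).
# Exact top Lyapunov exponent: a + nu*ln(1 + J).
a, nu, J = -0.5, 2.0, 0.3
T = 2000.0

# Count the Poisson events on [0, T] and accumulate log-growth directly.
n_jumps, t = 0, random.expovariate(nu)
while t < T:
    n_jumps += 1
    t += random.expovariate(nu)

lyap_sim = (a * T + n_jumps * math.log(1.0 + J)) / T
lyap_exact = a + nu * math.log(1.0 + J)
print(round(lyap_sim, 3), round(lyap_exact, 3))
# The exact exponent is positive here: frequent enough upward jumps
# destabilise the otherwise exponentially stable equilibrium.
```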
The Stochastic stability of a Logistic model with Poisson white noise
Duan, Dong-Hai; Xu, Wei; Su, Jun; Zhou, Bing-Chang
2011-03-01
The stochastic stability of a logistic model subjected to the effect of a random natural environment, modeled as a Poisson white noise process, is investigated. The properties of the stochastic response are discussed via calculation of the Lyapunov exponent, which has proven to be the most useful diagnostic tool for the stability of dynamical systems. The generalised Itô differentiation formula is used to analyse the stochastic stability of the response. The results indicate that the stability of the response is related to the intensity and amplitude distribution of the environment noise and the growth rate of the species. Project supported by the National Natural Science Foundation of China (Grant Nos. 10872165 and 10932009).
A Hierarchical Poisson Log-Normal Model for Network Inference from RNA Sequencing Data
Gallopin, Mélina; Rau, Andrea; Jaffrézic, Florence
2013-01-01
Gene network inference from transcriptomic data is an important methodological challenge and a key aspect of systems biology. Although several methods have been proposed to infer networks from microarray data, there is a need for inference methods able to model RNA-seq data, which are count-based and highly variable. In this work we propose a hierarchical Poisson log-normal model with a Lasso penalty to infer gene networks from RNA-seq data; this model has the advantage of directly modelling discrete data and accounting for inter-sample variance larger than the sample mean. Using real microRNA-seq data from breast cancer tumors and simulations, we compare this method to a regularized Gaussian graphical model on log-transformed data, and a Poisson log-linear graphical model with a Lasso penalty on power-transformed data. For data simulated with large inter-sample dispersion, the proposed model performs better than the other methods in terms of sensitivity, specificity and area under the ROC curve. These results show the necessity of methods specifically designed for gene network inference from RNA-seq data. PMID:24147011
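The overdispersion that motivates the Poisson log-normal model is easy to see by simulation: if y ~ Poisson(exp(μ + σZ)) with Z standard normal, then Var[y] = E[y] + (e^{σ²} − 1)·E[y]², which exceeds the Poisson variance. A self-contained sketch with arbitrary μ and σ:

```python
import math, random

random.seed(5)

def rpois(lam):
    # Knuth's Poisson sampler; adequate for moderate lam
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

mu, sigma = 1.0, 0.7
ys = [rpois(math.exp(mu + sigma * random.gauss(0.0, 1.0)))
      for _ in range(30000)]

m = sum(ys) / len(ys)
v = sum((yy - m) ** 2 for yy in ys) / (len(ys) - 1)
# Theory: E[y] = exp(mu + sigma^2/2); Var[y] = E[y] + (e^{sigma^2}-1)*E[y]^2
print(round(m, 2), round(v, 2))  # variance well above the mean
```

It is exactly this inter-sample variance in excess of the mean that a Gaussian model on log-transformed counts handles poorly and the hierarchical Poisson log-normal model captures directly.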
Klett, T.R.; Charpentier, Ronald R.
2003-01-01
The USGS FORSPAN model is designed for the assessment of continuous accumulations of crude oil, natural gas, and natural gas liquids (collectively called petroleum). Continuous (also called "unconventional") accumulations have large spatial dimensions and lack well-defined down-dip petroleum/water contacts. Oil and natural gas therefore are not localized by buoyancy in water in these accumulations. Continuous accumulations include "tight gas reservoirs," coalbed gas, oil and gas in shale, oil and gas in chalk, and shallow biogenic gas. The FORSPAN model treats a continuous accumulation as a collection of petroleum-containing cells for assessment purposes. Each cell is capable of producing oil or gas, but the cells may vary significantly from one another in their production (and thus economic) characteristics. The potential additions to reserves from continuous petroleum resources are calculated by statistically combining probability distributions of the estimated number of untested cells having the potential for additions to reserves with the estimated volume of oil and natural gas that each of the untested cells may potentially produce (total recovery). One such statistical method for combining the number of cells with total recovery, used by the USGS, is called ACCESS.
Use of Poisson spatiotemporal regression models for the Brazilian Amazon Forest: malaria count data.
Achcar, Jorge Alberto; Martinez, Edson Zangiacomi; Souza, Aparecida Doniseti Pires de; Tachibana, Vilma Mayumi; Flores, Edilson Ferreira
2011-01-01
Malaria is a serious problem in the Brazilian Amazon region, and the detection of possible risk factors could be of great interest for public health authorities. The objective of this article was to investigate the association between environmental variables and the yearly registers of malaria in the Amazon region using Bayesian spatiotemporal methods. We used Poisson spatiotemporal regression models to analyze malaria counts for the Brazilian Amazon forest for the period from 1999 to 2008. In this study, we included some covariates that could be important in the yearly prediction of malaria, such as the deforestation rate. We obtained the inferences using a Bayesian approach and Markov Chain Monte Carlo (MCMC) methods to simulate samples from the joint posterior distribution of interest. The discrimination of different models was also discussed. The model proposed here suggests that deforestation rate, the number of inhabitants per km², and the human development index (HDI) are important in the prediction of malaria cases. It is possible to conclude that human development, population growth, deforestation, and their associated ecological alterations are conducive to increasing malaria risk. We conclude that the use of Poisson regression models that capture the spatial and temporal effects under the Bayesian paradigm is a good strategy for modeling malaria counts.
Coley, Rebecca Yates; Brown, Elizabeth R.
2016-01-01
Inconsistent results in recent HIV prevention trials of pre-exposure prophylactic interventions may be due to heterogeneity in risk among study participants. Intervention effectiveness is most commonly estimated with the Cox model, which compares event times between populations. When heterogeneity is present, this population-level measure underestimates intervention effectiveness for individuals who are at risk. We propose a likelihood-based Bayesian hierarchical model that estimates the individual-level effectiveness of candidate interventions by accounting for heterogeneity in risk with a compound Poisson-distributed frailty term. This model reflects the mechanisms of HIV risk and allows that some participants are not exposed to HIV and, therefore, have no risk of seroconversion during the study. We assess model performance via simulation and apply the model to data from an HIV prevention trial. PMID:26869051
Non-Poisson counting statistics of a hybrid G-M counter dead time model
International Nuclear Information System (INIS)
Lee, Sang Hoon; Jae, Moosung; Gardner, Robin P.
2007-01-01
The counting statistics of a G-M counter with a considerable dead time event rate deviate from Poisson statistics. Important characteristics such as observed counting rates as a function of true counting rates, variances, and interval distributions were analyzed for three dead time models, non-paralyzable, paralyzable, and hybrid, with the help of GMSIM, a Monte Carlo dead time effect simulator. The simulation results showed good agreement with the models in observed counting rates and variances. It was found through GMSIM simulations that the interval distribution for the hybrid model shows three distinctive regions: a complete cutoff region for the duration of the total dead time, a degraded exponential region, and an enhanced exponential region. By measuring the cutoff and the duration of the degraded exponential region from the pulse interval distribution, it is possible to evaluate the two dead times in the hybrid model
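The two classical limiting cases that the hybrid model interpolates between have simple observed-rate formulas; a quick numeric sketch with illustrative parameter values:

```python
import math

# Observed counting rate m as a function of the true rate n for a
# counter with dead time tau, in the two classical limits:
def nonparalyzable(n, tau):
    return n / (1.0 + n * tau)

def paralyzable(n, tau):
    return n * math.exp(-n * tau)

tau = 1.0e-4  # 100 microseconds of dead time per event
for n in (1.0e2, 1.0e3, 1.0e4):
    print(n, round(nonparalyzable(n, tau), 1), round(paralyzable(n, tau), 1))
# The paralyzable curve peaks at n = 1/tau and then rolls over, so one
# observed rate can correspond to two different true rates.
```

The hybrid model combines a non-paralyzable and a paralyzable dead time in series, which is what produces the three-region interval distribution described in the abstract.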
The modified drift-Poisson model: Analogies with geophysical flows and Rossby waves
International Nuclear Information System (INIS)
Castillo-Negrete, D. del; Finn, J. M.; Barnes, D. C.
1999-01-01
We discuss an analogy between magnetically confined nonneutral plasmas and geophysical fluid dynamics. The analogy has its roots in the modified drift Poisson model, a recently proposed model that takes into account the plasma compression due to the variations of the plasma length [1]. The conservation of the line integrated density in the new model is analogous to the conservation of potential vorticity in the shallow water equations, and the variation of the plasma length is isomorphic to variations in the Coriolis parameter with latitude or to topography variations in the quasigeostrophic dynamics. We discuss a new class of linear and nonlinear waves that owe their existence to the variations of the plasma length. These modes are the analog of Rossby waves in geophysical flows
The Allan variance in the presence of a compound Poisson process modelling clock frequency jumps
Formichella, Valerio
2016-12-01
Atomic clocks can be affected by frequency jumps occurring at random times and with a random amplitude. The frequency jumps degrade the clock stability and this is captured by the Allan variance. In this work we assume that the random jumps can be modelled by a compound Poisson process, independent of the other stochastic and deterministic processes affecting the clock stability. Then, we derive the analytical expression of the Allan variance of a jumping clock. We find that the analytical Allan variance does not depend on the actual shape of the jumps amplitude distribution, but only on its first and second moments, and its final form is the same as for a clock with a random walk of frequency and a frequency drift. We conclude that the Allan variance cannot distinguish between a compound Poisson process and a Wiener process, hence it may not be sufficient to correctly identify the fundamental noise processes affecting a clock. The result is general and applicable to any oscillator, whose frequency is affected by a jump process with the described statistics.
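The qualitative finding, that frequency jumps mimic a random walk of frequency in the Allan variance, can be reproduced with a short simulation: white frequency noise gives a component falling as 1/τ, while accumulated compound-Poisson jumps add a component growing with τ. A generic sketch with arbitrary noise levels, not the paper's derivation:

```python
import random

random.seed(3)

def allan_variance(y, m):
    """Non-overlapping Allan variance of fractional-frequency samples y
    at averaging factor m (tau = m * tau0)."""
    means = [sum(y[i:i + m]) / m
             for i in range(0, len(y) - len(y) % m, m)]
    return 0.5 * sum((b - a) ** 2
                     for a, b in zip(means, means[1:])) / (len(means) - 1)

# White frequency noise plus compound-Poisson frequency jumps: each
# sample jumps with probability p_jump; Gaussian jump sizes accumulate
# like a random walk of frequency.
n, p_jump, wfm_sd, jump_sd = 100_000, 0.001, 1.0e-3, 5.0e-4
y, level = [], 0.0
for _ in range(n):
    if random.random() < p_jump:
        level += random.gauss(0.0, jump_sd)
    y.append(level + random.gauss(0.0, wfm_sd))

for m in (1, 10, 100):
    print(m, allan_variance(y, m))
# The white-FM part falls off as 1/tau; the jump component grows with
# tau and dominates at long averaging times, just as a true random walk
# of frequency would.
```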
Borchers, D L; Langrock, R
2015-12-01
We develop maximum likelihood methods for line transect surveys in which animals go undetected at distance zero, either because they are stochastically unavailable while within view or because they are missed when they are available. These incorporate a Markov-modulated Poisson process model for animal availability, allowing more clustered availability events than is possible with Poisson availability models. They include a mark-recapture component arising from the independent-observer survey, leading to more accurate estimation of detection probability given availability. We develop models for situations in which (a) multiple detections of the same individual are possible and (b) some or all of the availability process parameters are estimated from the line transect survey itself, rather than from independent data. We investigate estimator performance by simulation, and compare the multiple-detection estimators with estimators that use only initial detections of individuals, and with a single-observer estimator. Simultaneous estimation of detection function parameters and availability model parameters is shown to be feasible from the line transect survey alone with multiple detections and double-observer data but not with single-observer data. Recording multiple detections of individuals improves estimator precision substantially when estimating the availability model parameters from survey data, and we recommend that these data be gathered. We apply the methods to estimate detection probability from a double-observer survey of North Atlantic minke whales, and find that double-observer data greatly improve estimator precision here too. © 2015 The Authors Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.
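The clustering that a Markov-modulated Poisson process (MMPP) produces, and a homogeneous Poisson model cannot, shows up as overdispersed counts per time window. A discretised two-state simulation sketch with arbitrary rates (not the whale-availability parameters):

```python
import random

random.seed(9)

# Two-state MMPP: events occur at rate r0 in state 0 and r1 in state 1;
# the hidden state switches at rate q. Window counts then have a Fano
# factor (variance/mean) well above 1, i.e. events cluster.
r0, r1, q = 0.2, 5.0, 0.05
dt, t_total, window = 0.01, 5000.0, 10.0

state, counts, count, t_next = 0, [], 0, window
t = 0.0
while t < t_total:
    if random.random() < q * dt:       # state switch this step?
        state = 1 - state
    rate = r1 if state else r0
    if random.random() < rate * dt:    # event this step?
        count += 1
    t += dt
    if t >= t_next:
        counts.append(count)
        count, t_next = 0, t_next + window

mean_c = sum(counts) / len(counts)
var_c = sum((c - mean_c) ** 2 for c in counts) / (len(counts) - 1)
fano = var_c / mean_c
print(round(mean_c, 1), round(fano, 1))  # Fano factor far above 1
```

For a homogeneous Poisson process the same calculation gives a Fano factor near 1, which is why the MMPP is the better model for clustered surfacing behaviour.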
Zero-Inflated Poisson Modeling of Fall Risk Factors in Community-Dwelling Older Adults.
Jung, Dukyoo; Kang, Younhee; Kim, Mi Young; Ma, Rye-Won; Bhandari, Pratibha
2016-02-01
The aim of this study was to identify risk factors for falls among community-dwelling older adults. The study used a cross-sectional descriptive design. Self-report questionnaires were used to collect data from 658 community-dwelling older adults and were analyzed using logistic and zero-inflated Poisson (ZIP) regression. Perceived health status was a significant factor in the count model, and fall efficacy emerged as a significant predictor in the logistic models. The findings suggest that fall efficacy is important for predicting not only faller and nonfaller status but also fall counts in older adults who may or may not have experienced a previous fall. The fall predictors identified in this study--perceived health status and fall efficacy--indicate the need for fall-prevention programs tailored to address both the physical and psychological issues unique to older adults. © The Author(s) 2014.
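A zero-inflated Poisson likelihood of the kind used above can be maximized directly. The sketch below is illustrative, with invented data and only the two ZIP parameters; the study's regression additionally makes both parameters functions of covariates through logit and log links:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(1)
pi_true, lam_true, n = 0.3, 2.5, 5000                  # structural-zero prob., Poisson mean
structural = rng.random(n) < pi_true
y = np.where(structural, 0, rng.poisson(lam_true, n))  # e.g. fall counts

def zip_nll(params, y):
    # unconstrained parameterization: logit(pi), log(lambda)
    pi = 1.0 / (1.0 + np.exp(-params[0]))
    lam = np.exp(params[1])
    log_pois = -lam + y * np.log(lam) - gammaln(y + 1)
    ll = np.where(y == 0,
                  np.log(pi + (1.0 - pi) * np.exp(-lam)),  # mixed zeros
                  np.log(1.0 - pi) + log_pois)             # positive counts
    return -ll.sum()

res = minimize(zip_nll, x0=[0.0, 0.0], args=(y,), method="Nelder-Mead")
pi_hat, lam_hat = 1.0 / (1.0 + np.exp(-res.x[0])), np.exp(res.x[1])
```

Splitting the likelihood this way is what allows different predictors to matter in each part, as in the study: fall efficacy in the logistic (faller vs. nonfaller) component and perceived health status in the count component.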
Directory of Open Access Journals (Sweden)
Hyungsuk Tak
2017-06-01
Full Text Available Rgbp is an R package that provides estimates and verifiable confidence intervals for random effects in two-level conjugate hierarchical models for overdispersed Gaussian, Poisson, and binomial data. Rgbp models aggregate data from k independent groups summarized by observed sufficient statistics for each random effect, such as sample means, possibly with covariates. Rgbp uses approximate Bayesian machinery with unique improper priors for the hyper-parameters, which leads to good repeated sampling coverage properties for random effects. A special feature of Rgbp is an option that generates synthetic data sets to check whether the interval estimates for random effects actually meet the nominal confidence levels. Additionally, Rgbp provides inference statistics for the hyper-parameters, e.g., regression coefficients.
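For the Poisson case, the two-level conjugate structure that Rgbp exploits can be written in closed form. The toy sketch below (made-up counts and a fixed prior; not the package's adjusted-likelihood machinery, which also estimates the hyper-parameters) shows the conjugate posterior and its interpretation as shrinkage of each group's raw rate toward the prior mean:

```python
import numpy as np

# y_i | lambda_i ~ Poisson(exposure_i * lambda_i),  lambda_i ~ Gamma(a, b) (rate b)
a, b = 2.0, 4.0                          # prior: mean a/b = 0.5 events per unit exposure
y = np.array([0, 3, 12])                 # group event counts (invented)
exposure = np.array([5.0, 5.0, 20.0])    # group exposures (invented)

post_mean = (a + y) / (b + exposure)     # conjugate posterior mean per group

# Equivalent shrinkage form: weighted compromise between raw rate and prior mean
raw = y / exposure
w = exposure / (b + exposure)            # more exposure => less shrinkage
shrunk = w * raw + (1 - w) * (a / b)
```

Rgbp goes further by estimating the hyper-parameters and their uncertainty from all k groups jointly, and by checking, via synthetic data, that the resulting interval estimates actually attain their nominal coverage.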
Narukawa, Masaki; Nohara, Katsuhito
2018-04-01
This study proposes an estimation approach to panel count data, truncated at zero, in order to apply a contingent behavior travel cost method to revealed and stated preference data collected via a web-based survey. We develop zero-truncated panel Poisson mixture models by focusing on respondents who visited a site. In addition, we introduce an inverse Gaussian distribution to unobserved individual heterogeneity as an alternative to a popular gamma distribution, making it possible to capture effectively the long tail typically observed in trip data. We apply the proposed method to estimate the impact on tourism benefits in Fukushima Prefecture as a result of the Fukushima Nuclear Power Plant No. 1 accident. Copyright © 2018 Elsevier Ltd. All rights reserved.
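The zero-truncation used above simply renormalizes the Poisson pmf over the positive counts. A minimal numerical check with an invented rate (the paper's models additionally mix the rate over individual heterogeneity, gamma or inverse Gaussian):

```python
import numpy as np
from scipy.stats import poisson

lam = 1.8
k = np.arange(1, 60)
pmf = poisson.pmf(k, lam) / (1.0 - np.exp(-lam))   # P(K = k | K >= 1)
mean_closed_form = lam / (1.0 - np.exp(-lam))      # E[K | K >= 1]

# Simulation by rejection: keep only respondents with at least one visit
rng = np.random.default_rng(2)
draws = rng.poisson(lam, 50_000)
draws = draws[draws > 0]
```

Replacing the fixed rate with a random draw per respondent is what produces the long right tail typically observed in trip counts, which motivates the inverse Gaussian mixing distribution proposed in the study.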
Model Manipulation for End-User Modelers
DEFF Research Database (Denmark)
Acretoaie, Vlad
End-user modelers are domain experts who create and use models as part of their work. They are typically not Software Engineers, and have little or no programming and meta-modeling experience. However, using model manipulation languages developed in the context of Model-Driven Engineering often...... of these proposals. To achieve its first goal, the thesis presents the findings of a Systematic Mapping Study showing that human factors topics are scarcely and relatively poorly addressed in model transformation research. Motivated by these findings, the thesis explores the requirements of end-user modelers......, and transformations using their modeling notation and editor of choice. The VM* languages are implemented via a single execution engine, the VM* Runtime, built on top of the Henshin graph-based transformation engine. This approach combines the benefits of flexibility, maturity, and formality. To simplify model editor...
Identification of temporal patterns in the seismicity of Sumatra using Poisson Hidden Markov models
Directory of Open Access Journals (Sweden)
Katerina Orfanogiannaki
2014-05-01
Full Text Available On 26 December 2004 and 28 March 2005 two large earthquakes occurred between the Indo-Australian and the southeastern Eurasian plates, with moment magnitudes Mw=9.1 and Mw=8.6, respectively. Complete data (mb ≥ 4.2) of the post-1993 time interval have been used to apply Poisson Hidden Markov models (PHMMs) for identifying temporal patterns in the time series of the two earthquake sequences. Each time series consists of earthquake counts, in given and constant time units, in the regions determined by the aftershock zones of the two mainshocks. In PHMMs each count is generated by one of m different Poisson processes that are called states. The series of states is unobserved and is in fact a Markov chain. The model incorporates a varying seismicity rate, assigning a different rate to each state and detecting changes in the rate over time. In PHMMs, unobserved factors related to the local properties of the region are considered to affect the earthquake occurrence rate. Estimation and interpretation of the unobserved sequence of states that underlie the data contribute to a better understanding of the geophysical processes that take place in the region. We applied PHMMs to the time series of the two earthquake sequences and estimated the unobserved sequences of states that underlie the data. The results showed that the region of the 26 December 2004 earthquake was in a state of low seismicity during almost the entire observation period. In contrast, in the region of the 28 March 2005 earthquake the seismic activity is attributed to triggered seismicity, due to stress transfer from the region of the 2004 mainshock.
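The PHMM likelihood described above can be evaluated with the scaled forward algorithm. A tiny two-state sketch (invented rates and transition matrix, not the Sumatra fit) that can be handed to any numerical optimizer:

```python
import numpy as np
from itertools import product
from scipy.stats import poisson

def phmm_loglik(y, lam, Gamma, delta):
    """Log-likelihood of a Poisson hidden Markov model (scaled forward algorithm)."""
    p = poisson.pmf(np.asarray(y)[:, None], lam)   # (T, m) emission probabilities
    alpha = delta * p[0]
    logL = 0.0
    for t in range(1, len(y)):
        c = alpha.sum()
        logL += np.log(c)
        alpha = (alpha / c) @ Gamma * p[t]         # propagate, then weight by emission
    return logL + np.log(alpha.sum())

lam = np.array([1.0, 6.0])            # state-wise Poisson rates: low/high seismicity
Gamma = np.array([[0.95, 0.05],
                  [0.10, 0.90]])      # state transition probabilities
delta = np.array([0.5, 0.5])          # initial state distribution
y = [0, 2, 7, 5]                      # earthquake counts per time unit (invented)
ll = phmm_loglik(y, lam, Gamma, delta)
```

Maximizing this log-likelihood (or running EM) recovers the state-wise seismicity rates, and decoding the most probable state sequence then yields the temporal patterns the study interprets.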
Arab, Ali; Holan, Scott H.; Wikle, Christopher K.; Wildhaber, Mark L.
2012-01-01
Ecological studies involving counts of abundance, presence–absence or occupancy rates often produce data having a substantial proportion of zeros. Furthermore, these types of processes are typically multivariate and only adequately described by complex nonlinear relationships involving externally measured covariates. Ignoring these aspects of the data and implementing standard approaches can lead to models that fail to provide adequate scientific understanding of the underlying ecological processes, possibly resulting in a loss of inferential power. One method of dealing with data having excess zeros is to consider the class of univariate zero-inflated generalized linear models. However, this class of models fails to address the multivariate and nonlinear aspects associated with the data usually encountered in practice. Therefore, we propose a semiparametric bivariate zero-inflated Poisson model that takes into account both of these data attributes. The general modeling framework is hierarchical Bayes and is suitable for a broad range of applications. We demonstrate the effectiveness of our model through a motivating example on modeling catch per unit area for multiple species using data from the Missouri River Benthic Fishes Study, implemented by the United States Geological Survey.
Poisson-Boltzmann theory of charged colloids: limits of the cell model for salty suspensions
International Nuclear Information System (INIS)
Denton, A R
2010-01-01
Thermodynamic properties of charge-stabilized colloidal suspensions and polyelectrolyte solutions are commonly modelled by implementing the mean-field Poisson-Boltzmann (PB) theory within a cell model. This approach models a bulk system by a single macroion, together with counterions and salt ions, confined to a symmetrically shaped, electroneutral cell. While easing numerical solution of the nonlinear PB equation, the cell model neglects microion-induced interactions and correlations between macroions, precluding modelling of macroion ordering phenomena. An alternative approach, which avoids the artificial constraints of cell geometry, exploits the mapping of a macroion-microion mixture onto a one-component model of pseudo-macroions governed by effective interparticle interactions. In practice, effective-interaction models are usually based on linear-screening approximations, which can accurately describe strong nonlinear screening only by incorporating an effective (renormalized) macroion charge. Combining charge renormalization and linearized PB theories, in both the cell model and an effective-interaction (cell-free) model, we compute osmotic pressures of highly charged colloids and monovalent microions, in Donnan equilibrium with a salt reservoir, over a range of concentrations. By comparing predictions with primitive model simulation data for salt-free suspensions, and with predictions from nonlinear PB theory for salty suspensions, we chart the limits of both the cell model and linear-screening approximations in modelling bulk thermodynamic properties. Up to moderately strong electrostatic couplings, the cell model proves accurate for predicting osmotic pressures of deionized (counterion-dominated) suspensions. With increasing salt concentration, however, the relative contribution of macroion interactions to the osmotic pressure grows, leading predictions from the cell and effective-interaction models to deviate. No evidence is found for a liquid
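As a small numerical companion to the linear-screening discussion (illustrative values only; the radius and effective charge below are invented): within linearized PB theory, the potential outside a macroion of radius a and charge Z in a 1:1 electrolyte takes a screened-Coulomb (Yukawa) form, with the screening constant set by the reservoir salt concentration:

```python
import numpy as np

lB = 0.714e-9                      # Bjerrum length of water at room temperature (m)
c_salt = 10.0                      # 1:1 salt concentration, mol/m^3 (= 10 mM)
n_salt = c_salt * 6.022e23         # number density of each ion species (1/m^3)

kappa = np.sqrt(8.0 * np.pi * lB * n_salt)   # inverse Debye screening length
debye_length = 1.0 / kappa                   # ~3 nm at 10 mM

def phi(r, Z, a):
    """Linearized-PB potential (units of kT/e) at distance r from a sphere
    of radius a carrying Z elementary charges."""
    return Z * lB * np.exp(kappa * a) / (1.0 + kappa * a) * np.exp(-kappa * r) / r

a, Z = 50e-9, 500                  # invented macroion radius and (effective) charge
```

Charge renormalization enters by replacing the bare Z with an effective charge chosen so that this linear form matches the nonlinear PB solution far from the macroion, which is how the effective-interaction model handles strong couplings.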
Ribeiro, Manuel Castro; Sousa, António Jorge; Pereira, Maria João
2016-05-01
The geographical distribution of health outcomes is influenced by socio-economic and environmental factors operating on different spatial scales. Geographical variations in relationships can be revealed with semi-parametric Geographically Weighted Poisson Regression (sGWPR), a model that can combine both geographically varying and geographically constant parameters. To decide whether a parameter should vary geographically, two models are compared: one in which all parameters are allowed to vary geographically and one in which all except the parameter being evaluated are allowed to vary geographically. The model with the lower corrected Akaike Information Criterion (AICc) is selected. Delivering model selection exclusively according to the AICc might hide important details in spatial variations of associations. We propose assisting the decision by using a Linear Model of Coregionalization (LMC). Here we show how LMC can refine sGWPR on ecological associations between socio-economic and environmental variables and low birth weight outcomes in the west-north-central region of Portugal. Copyright © 2016 Elsevier Ltd. All rights reserved.
SnIPRE: selection inference using a Poisson random effects model.
Directory of Open Access Journals (Sweden)
Kirsten E Eilertson
Full Text Available We present an approach for identifying genes under natural selection using polymorphism and divergence data from synonymous and non-synonymous sites within genes. A generalized linear mixed model is used to model the genome-wide variability among categories of mutations and estimate its functional consequence. We demonstrate how the model's estimated fixed and random effects can be used to identify genes under selection. The parameter estimates from our generalized linear model can be transformed to yield population genetic parameter estimates for quantities including the average selection coefficient for new mutations at a locus, the synonymous and non-synonymous mutation rates, and species divergence times. Furthermore, our approach incorporates stochastic variation due to the evolutionary process and can be fit using standard statistical software. The model is fit in both the empirical Bayes and Bayesian settings using the lme4 package in R, and Markov chain Monte Carlo methods in WinBUGS. Using simulated data we compare our method to existing approaches for detecting genes under selection: the McDonald-Kreitman test, and two versions of the Poisson random field based method MKprf. Overall, we find our method universally outperforms existing methods for detecting genes subject to selection using polymorphism and divergence data.
International Nuclear Information System (INIS)
Lewis, J.C.
2011-01-01
In a recent paper (Lewis, 2008) a class of models suitable for application to collision-sequence interference was introduced. In these models velocities are assumed to be completely randomized in each collision. The distribution of velocities was assumed to be Gaussian. The integrated induced dipole moment μk, for vector interference, or the scalar modulation μk, for scalar interference, was assumed to be a function of the impulse (integrated force) fk, or its magnitude fk, experienced by the molecule in a collision. For most of (Lewis, 2008) it was assumed that μk ∝ fk in the vector case and μk ∝ fk in the scalar case, but it proved possible to extend the models so that the magnitude of the induced dipole moment is equal to an arbitrary power or sum of powers of the intermolecular force. This allows estimates of the infilling of the interference dip caused by the disproportionality of the induced dipole moment and force. One particular such model, using data from (Herman and Lewis, 2006), leads to the most realistic estimate for the infilling of the vector interference dip yet obtained. In (Lewis, 2008) the drastic assumption was made that collision times occurred at equal intervals. In the present paper that assumption is removed: the collision times are taken to form a Poisson process. This is much more realistic than the equal-intervals assumption. The interference dip is found to be a Lorentzian in this model.
Multivariate poisson lognormal modeling of crashes by type and severity on rural two lane highways.
Wang, Kai; Ivan, John N; Ravishanker, Nalini; Jackson, Eric
2017-02-01
In an effort to improve traffic safety, there has been considerable interest in estimating crash prediction models and identifying factors contributing to crashes. To account for crash frequency variations among crash types and severities, crash prediction models have been estimated by type and severity. The univariate crash count models have been used by researchers to estimate crashes by crash type or severity, in which the crash counts by type or severity are assumed to be independent of one another and modelled separately. When considering crash types and severities simultaneously, this may neglect the potential correlations between crash counts due to the presence of shared unobserved factors across crash types or severities for a specific roadway intersection or segment, and might lead to biased parameter estimation and reduced model accuracy. The focus of this study is to estimate crashes by both crash type and crash severity using the Integrated Nested Laplace Approximation (INLA) Multivariate Poisson Lognormal (MVPLN) model, and identify the different effects of contributing factors on different crash type and severity counts on rural two-lane highways. The INLA MVPLN model can simultaneously model crash counts by crash type and crash severity by accounting for the potential correlations among them, and significantly decreases the computational time compared with a fully Bayesian fitting of the MVPLN model using the Markov Chain Monte Carlo (MCMC) method. This paper describes estimation of MVPLN models for three-way stop controlled (3ST) intersections, four-way stop controlled (4ST) intersections, four-way signalized (4SG) intersections, and roadway segments on rural two-lane highways. Annual Average Daily Traffic (AADT) and variables describing roadway conditions (including presence of lighting, presence of left-turn/right-turn lane, lane width and shoulder width) were used as predictors. A Univariate Poisson Lognormal (UPLN) model was estimated by crash type and
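The correlation structure the MVPLN model captures can be mimicked in a few lines. This sketch (invented parameters, no covariates; not the INLA fit) draws crash-type counts that share lognormal errors, producing both overdispersion and the cross-type correlation that independent univariate Poisson models would miss:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000                                   # road segments (invented)
mu = np.array([1.0, 0.5])                    # log-scale means for two crash types
Sigma = np.array([[0.30, 0.20],
                  [0.20, 0.40]])             # covariance of shared lognormal errors

eps = rng.multivariate_normal(np.zeros(2), Sigma, size=n)
counts = rng.poisson(np.exp(mu + eps))       # correlated Poisson-lognormal counts

cross_corr = np.corrcoef(counts.T)[0, 1]     # induced correlation between types
```

Ignoring the off-diagonal element of Sigma and fitting each margin separately is exactly the univariate simplification the study argues can bias parameter estimates.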
Xie, Dexuan; Volkmer, Hans W.; Ying, Jinyong
2016-04-01
The nonlocal dielectric approach has led to new models and solvers for predicting electrostatics of proteins (or other biomolecules), but how to validate and compare them remains a challenge. To promote such a study, in this paper, two typical nonlocal dielectric models are revisited. Their analytical solutions are then found in the expressions of simple series for a dielectric sphere containing any number of point charges. As a special case, the analytical solution of the corresponding Poisson dielectric model is also derived in simple series, which significantly improves the well known Kirkwood's double series expansion. Furthermore, a convolution of one nonlocal dielectric solution with a commonly used nonlocal kernel function is obtained, along with the reaction parts of these local and nonlocal solutions. To turn these new series solutions into a valuable research tool, they are programed as a free fortran software package, which can input point charge data directly from a protein data bank file. Consequently, different validation tests can be quickly done on different proteins. Finally, a test example for a protein with 488 atomic charges is reported to demonstrate the differences between the local and nonlocal models as well as the importance of using the reaction parts to develop local and nonlocal dielectric solvers.
Winahju, W. S.; Mukarromah, A.; Putri, S.
2015-03-01
Leprosy is a chronic infectious disease caused by the leprosy bacterium (Mycobacterium leprae). Leprosy has become an important issue in Indonesia because its morbidity is quite high. Based on WHO data from 2014, in 2012 Indonesia had the highest number of new leprosy patients after India and Brazil, with 18,994 cases (8.7% of the world total). This makes Indonesia the country with the highest leprosy morbidity among ASEAN countries. The province that contributes most to the number of leprosy patients in Indonesia is East Java. There are two kinds of leprosy: paucibacillary and multibacillary. The morbidity of multibacillary leprosy is higher than that of paucibacillary leprosy. This paper discusses modeling the numbers of multibacillary and paucibacillary leprosy patients as response variables. These responses are count variables, so modeling is conducted using the bivariate Poisson regression method. The units of observation are in East Java, and the predictors involved are environment, demography, and poverty. The model uses data from 2012, and the results indicate that all predictors have a significant influence.
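A standard construction behind bivariate Poisson regression is the common-shock (trivariate reduction) form, in which a shared component induces the correlation between the two counts. A small simulation with invented intensities (in the paper each intensity would be a log-linear function of the environment, demography, and poverty predictors):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
lam0, lam1, lam2 = 2.0, 3.0, 1.5      # shared and type-specific intensities (invented)

y0 = rng.poisson(lam0, n)             # common shock shared by both responses
x1 = rng.poisson(lam1, n) + y0        # e.g. multibacillary count
x2 = rng.poisson(lam2, n) + y0        # e.g. paucibacillary count

# Theory: E[x1] = lam1 + lam0, E[x2] = lam2 + lam0, Cov(x1, x2) = lam0
cov_hat = np.cov(x1, x2)[0, 1]
```

Because the covariance equals the shared intensity lam0, the construction can only represent non-negative dependence, which is a known limitation of this bivariate Poisson form.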
Samat, N. A.; Ma'arof, S. H. Mohd Imam
2015-05-01
Disease mapping is a method to display the geographical distribution of disease occurrence, which generally involves the usage and interpretation of a map to show the incidence of certain diseases. Relative risk (RR) estimation is one of the most important issues in disease mapping. This paper begins by providing a brief overview of Chikungunya disease. This is followed by a review of the classical model used in disease mapping, based on the standardized morbidity ratio (SMR), which we then apply to our Chikungunya data. We then fit an extension of the classical model, which we refer to as a Poisson-Gamma model, in which prior distributions for the relative risks are assumed known. Both sets of results are displayed and compared using maps, revealing a smoother map with fewer extreme values of estimated relative risk. Extensions of this work will consider other methods that are relevant to overcoming the drawbacks of the existing methods, in order to inform and direct government strategy for monitoring and controlling Chikungunya disease.
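The two estimators compared above differ only in whether the observed counts are smoothed by a prior. A compact sketch with invented observed counts O and expected counts E:

```python
import numpy as np

O = np.array([4, 30, 1])          # observed disease counts per region (invented)
E = np.array([2.0, 25.0, 4.0])    # expected counts from reference rates (invented)

smr = O / E                       # classical SMR estimate of relative risk

# Poisson-Gamma model: O_i ~ Poisson(E_i * RR_i), RR_i ~ Gamma(a, b), prior mean a/b
a, b = 2.0, 2.0
rr_pg = (a + O) / (b + E)         # posterior mean relative risk, shrunk toward a/b
```

The shrinkage toward the prior mean is what produces the smoother map with fewer extreme estimates, and it is strongest precisely in regions where E is small and the raw SMR is unstable.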
Ship-Track Models Based on Poisson-Distributed Port-Departure Times
National Research Council Canada - National Science Library
Heitmeyer, Richard
2006-01-01
... of those ships, and their nominal speeds. The probability law assumes that the ship departure times are Poisson-distributed with a time-varying departure rate and that the ship speeds and ship routes are statistically independent...
Distributed Bayesian Networks for User Modeling
DEFF Research Database (Denmark)
Tedesco, Roberto; Dolog, Peter; Nejdl, Wolfgang
2006-01-01
The World Wide Web is a popular platform for providing eLearning applications to a wide spectrum of users. However – as users differ in their preferences, background, requirements, and goals – applications should provide personalization mechanisms. In the Web context, user models used...... by such adaptive applications are often partial fragments of an overall user model. The fragments have then to be collected and merged into a global user profile. In this paper we investigate and present algorithms able to cope with distributed, fragmented user models – based on Bayesian Networks – in the context...... of Web-based eLearning platforms. The scenario we are tackling assumes learners who use several systems over time, which are able to create partial Bayesian Networks for user models based on the local system context. In particular, we focus on how to merge these partial user models. Our merge mechanism...
ADAPTIVE FINITE ELEMENT MODELING TECHNIQUES FOR THE POISSON-BOLTZMANN EQUATION
HOLST, MICHAEL; MCCAMMON, JAMES ANDREW; YU, ZEYUN; ZHOU, YOUNGCHENG; ZHU, YUNRONG
2011-01-01
We consider the design of an effective and reliable adaptive finite element method (AFEM) for the nonlinear Poisson-Boltzmann equation (PBE). We first examine the two-term regularization technique for the continuous problem recently proposed by Chen, Holst, and Xu based on the removal of the singular electrostatic potential inside biomolecules; this technique made possible the development of the first complete solution and approximation theory for the Poisson-Boltzmann equation, the first provably convergent discretization, and also allowed for the development of a provably convergent AFEM. However, in practical implementation, this two-term regularization exhibits numerical instability. Therefore, we examine a variation of this regularization technique which can be shown to be less susceptible to such instability. We establish a priori estimates and other basic results for the continuous regularized problem, as well as for Galerkin finite element approximations. We show that the new approach produces regularized continuous and discrete problems with the same mathematical advantages of the original regularization. We then design an AFEM scheme for the new regularized problem, and show that the resulting AFEM scheme is accurate and reliable, by proving a contraction result for the error. This result, which is one of the first results of this type for nonlinear elliptic problems, is based on using continuous and discrete a priori L∞ estimates to establish quasi-orthogonality. To provide a high-quality geometric model as input to the AFEM algorithm, we also describe a class of feature-preserving adaptive mesh generation algorithms designed specifically for constructing meshes of biomolecular structures, based on the intrinsic local structure tensor of the molecular surface. All of the algorithms described in the article are implemented in the Finite Element Toolkit (FETK), developed and maintained at UCSD. The stability advantages of the new regularization scheme
Multilevel poisson regression modelling for determining factors of dengue fever cases in bandung
Arundina, Davila Rubianti; Tantular, Bertho; Pontoh, Resa Septiani
2017-03-01
Dengue fever is caused by the dengue virus, a serotyped virus of the genus Flavivirus, and is transmitted through the bites of Aedes aegypti mosquitoes infected with the virus. The study was conducted in 151 villages in Bandung. Health analysts believe that two kinds of factors affect dengue cases: internal (individual) factors and external (environmental) factors. The data used in this research are hierarchical, so modelling was carried out with a multilevel method, with villages at level 1 and sub-districts at level 2. Based on exploratory data analysis, the suitable multilevel specification is a random intercept model. The Penalized Quasi-Likelihood (PQL) approach to the multilevel Poisson model is an appropriate analysis for determining the factors affecting dengue cases in the city of Bandung. The clean-and-healthy-behavior factor at the village level has an effect on the number of dengue fever cases in the city of Bandung, while factors at the sub-district level have no effect.
A Tutorial of the Poisson Random Field Model in Population Genetics
Directory of Open Access Journals (Sweden)
Praveen Sethupathy
2008-01-01
Full Text Available Population genetics is the study of allele frequency changes driven by various evolutionary forces such as mutation, natural selection, and random genetic drift. Although natural selection is widely recognized as a bona-fide phenomenon, the extent to which it drives evolution continues to remain unclear and controversial. Various qualitative techniques, or so-called “tests of neutrality”, have been introduced to detect signatures of natural selection. A decade and a half ago, Stanley Sawyer and Daniel Hartl provided a mathematical framework, referred to as the Poisson random field (PRF, with which to determine quantitatively the intensity of selection on a particular gene or genomic region. The recent availability of large-scale genetic polymorphism data has sparked widespread interest in genome-wide investigations of natural selection. To that end, the original PRF model is of particular interest for geneticists and evolutionary genomicists. In this article, we will provide a tutorial of the mathematical derivation of the original Sawyer and Hartl PRF model.
Vazquez, A I; Gianola, D; Bates, D; Weigel, K A; Heringstad, B
2009-02-01
Clinical mastitis is typically coded as presence/absence during some period of exposure, and records are analyzed with linear or binary data models. Because presence includes cows with multiple episodes, there is loss of information when a count is treated as a binary response. The Poisson model is designed for counting random variables, and although it is used extensively in the epidemiology of mastitis, it has rarely been used for studying the genetics of mastitis. Many models have been proposed for genetic analysis of mastitis, but they have not been formally compared. The main goal of this study was to compare linear (Gaussian), Bernoulli (with logit link), and Poisson models for the purpose of genetic evaluation of sires for mastitis in dairy cattle. The response variables were clinical mastitis (CM; 0, 1) and number of CM cases (NCM; 0, 1, 2, …). Data consisted of records on 36,178 first-lactation daughters of 245 Norwegian Red sires distributed over 5,286 herds. Predictive ability of models was assessed via a 3-fold cross-validation using mean squared error of prediction (MSEP) as the end-point. Between-sire variance estimates for NCM were 0.065 in the Poisson and 0.007 in the linear model. For CM the between-sire variance was 0.093 in the logit and 0.003 in the linear model. The ratio between herd and sire variances for the models with NCM response was 4.6 and 3.5 for Poisson and linear, respectively, and for CM it was 3.7 in both logit and linear models. The MSEP for all cows was similar. However, within healthy animals, MSEP was 0.085 (Poisson), 0.090 (linear for NCM), 0.053 (logit), and 0.056 (linear for CM). For mastitic animals the MSEP values were 1.206 (Poisson), 1.185 (linear for NCM response), 1.333 (logit), and 1.319 (linear for CM response). The models for count variables performed better when predicting diseased animals and performed similarly to each other. Logit and linear models for CM had better predictive ability for healthy
El-Basyouny, Karim; Barua, Sudip; Islam, Md Tazul
2014-12-01
Previous research shows that various weather elements have significant effects on crash occurrence and risk; however, little is known about how these elements affect different crash types. Consequently, this study investigates the impact of weather elements and sudden extreme snow or rain weather changes on crash type. Multivariate models were used for seven crash types using five years of daily weather and crash data collected for the entire City of Edmonton. In addition, the yearly trend and random variation of parameters across the years were analyzed by using four different modeling formulations. The proposed models were estimated in a full Bayesian context via Markov Chain Monte Carlo simulation. The multivariate Poisson lognormal model with yearly varying coefficients provided the best fit for the data according to Deviance Information Criteria. Overall, results showed that temperature and snowfall were statistically significant with intuitive signs (crashes decrease with increasing temperature; crashes increase as snowfall intensity increases) for all crash types, while rainfall was mostly insignificant. Previous snow showed mixed results, being statistically significant and positively related to certain crash types, while negatively related or insignificant in other cases. Maximum wind gust speed was found mostly insignificant with a few exceptions that were positively related to crash type. Major snow or rain events following a dry weather condition were highly significant and positively related to three crash types: Follow-Too-Close, Stop-Sign-Violation, and Ran-Off-Road crashes. The day-of-the-week dummy variables were statistically significant, indicating a possible weekly variation in exposure. Transportation authorities might use the above results to improve road safety by providing drivers with information regarding the risk of certain crash types for a particular weather condition. Copyright © 2014 Elsevier Ltd. All rights reserved.
Kysely, Jan; Picek, Jan; Beranova, Romana; Plavcova, Eva
2014-05-01
The study compares statistical models for estimating high quantiles of daily temperatures based on the homogeneous and non-homogeneous Poisson process, and their applications in climate model simulations. Both types of models make use of the non-stationary peaks-over-threshold method and the Generalized Pareto distribution (GPD) for modelling extremes, but they differ in how the dependence of the model parameters on the time index is captured. The homogeneous Poisson process model assumes that the intensity of the process is constant and the threshold used to delimit extremes changes with time; the non-homogeneous Poisson process assumes that the intensity of the process depends on time while the threshold is kept constant (Coles 2001). The model for time-dependency of the GPD parameters is selected according to the likelihood ratio test. Statistical arguments are provided to support the homogeneous Poisson process model, in which the temporal dependence of the threshold is modelled in terms of regression quantiles (Kysely et al. 2010). Dependence of the results on the quantile chosen for the threshold (95-99%) is evaluated. The extreme value models are applied to analyse scenarios of changes in high quantiles of daily temperatures (20-yr and 100-yr return values) in transient simulations of several GCMs and RCMs for the 21st century. References: Coles S. (2001) An Introduction to Statistical Modeling of Extreme Values. Springer, 208 pp. Kysely J., Picek J., Beranova R. (2010) Estimating extremes in climate change simulations using the peaks-over-threshold method with a non-stationary threshold. Global and Planetary Change, 72, 55-68.
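The peaks-over-threshold machinery both model variants share can be sketched with scipy (synthetic exceedances and invented parameters; the paper additionally lets the threshold or the process intensity depend on time):

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(5)
c_true, scale_true = 0.1, 2.0                 # GPD shape and scale (invented)
exc = genpareto.rvs(c_true, scale=scale_true, size=5000, random_state=rng)

# Fit the GPD to threshold exceedances (threshold fixed at 0 here)
c_hat, _, scale_hat = genpareto.fit(exc, floc=0)

def return_level(c, scale, zeta_u, m, u=0.0):
    """Level exceeded on average once every m observations (Coles 2001, ch. 4),
    where zeta_u is the probability of exceeding the threshold u."""
    return u + scale / c * ((m * zeta_u) ** c - 1.0)

rl100 = return_level(c_hat, scale_hat, zeta_u=1.0, m=100)  # 100-obs return level
```

The 20-yr and 100-yr return values analysed in the study come from exactly this return-level formula, with the fitted GPD parameters (and, in the non-stationary case, their time-dependent versions) plugged in.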
Sharma, P; Mišković, Z L
2015-10-07
We present a model describing the electrostatic interactions across a structure that consists of a single layer of graphene with large area, lying above an oxide substrate of finite thickness, with its surface exposed to a thick layer of liquid electrolyte containing salt ions. Our goal is to analyze the co-operative screening of the potential fluctuation in doped graphene due to randomness in the positions of fixed charged impurities in the oxide by the charge carriers in graphene and by the mobile ions in the diffuse layer of the electrolyte. In order to account for a possibly large potential drop in the diffuse layer that may arise in electrolytically gated graphene, we use a partially linearized Poisson-Boltzmann (PB) model of the electrolyte, in which we solve a fully nonlinear PB equation for the surface average of the potential in one dimension, whereas the lateral fluctuations of the potential in graphene are tackled by linearizing the PB equation about the average potential. In this way, we are able to describe the regime of equilibrium doping of graphene to large densities for arbitrary values of the ion concentration without restrictions to the potential drop in the electrolyte. We evaluate the electrostatic Green's function for the partially linearized PB model, which is used to express the screening contributions of the graphene layer and the nearby electrolyte by means of an effective dielectric function. We find that, while the screened potential of a single charged impurity at large in-graphene distances exhibits a strong dependence on the ion concentration in the electrolyte and on the doping density in graphene, in the case of a spatially correlated two-dimensional ensemble of impurities, this dependence is largely suppressed in the autocovariance of the fluctuating potential.
Numerical methods for a Poisson-Nernst-Planck-Fermi model of biological ion channels.
Liu, Jinn-Liang; Eisenberg, Bob
2015-07-01
Numerical methods are proposed for an advanced Poisson-Nernst-Planck-Fermi (PNPF) model for studying ion transport through biological ion channels. PNPF contains many more correlations than most models and simulations of channels, because it includes water and calculates dielectric properties consistently as outputs. This model accounts for the steric effect of ions and water molecules with different sizes and interstitial voids, the correlation effect of crowded ions with different valences, and the screening effect of polarized water molecules in an inhomogeneous aqueous electrolyte. The steric energy is shown to be comparable to the electrical energy under physiological conditions, demonstrating the crucial role of the excluded volume of particles and the voids in the natural function of channel proteins. Water is shown to play a critical role in both correlation and steric effects in the model. We extend the classical Scharfetter-Gummel (SG) method for semiconductor devices to include the steric potential for ion channels, which is a fundamental physical property not present in semiconductors. Together with a simplified matched interface and boundary (SMIB) method for treating molecular surfaces and singular charges of channel proteins, the extended SG method is shown to exhibit important features in flow simulations such as optimal convergence, efficient nonlinear iterations, and physical conservation. The generalized SG stability condition shows why the standard discretization (without SG exponential fitting) of NP equations may fail and that divalent Ca(2+) may cause more unstable discrete Ca(2+) fluxes than those of monovalent Na(+). Two different methods, called the SMIB and multiscale methods, are proposed for two different types of channels, namely, the gramicidin A channel and an L-type calcium channel, depending on whether water is allowed to pass through the channel. Numerical methods are first validated with constructed models whose exact solutions are
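The classical Scharfetter-Gummel exponential fitting that the authors extend can be sketched as follows. This shows only the textbook drift-diffusion flux (the steric-potential term of PNPF is not reproduced here), and the sign convention for `dphi` is one common choice for a positive unit-charge carrier:

```python
import math

def bernoulli(x):
    """B(x) = x / (exp(x) - 1), with the removable singularity at x = 0 handled."""
    if abs(x) < 1e-8:
        return 1.0 - 0.5 * x            # two-term Taylor expansion near zero
    return x / math.expm1(x)

def sg_flux(n_left, n_right, dphi, D, h):
    """Classical Scharfetter-Gummel flux between two nodes a distance h apart,
    for a positive unit-charge carrier with diffusivity D.
    dphi = (phi_right - phi_left) / thermal voltage.
    At dphi = 0 this reduces to the plain diffusive difference
    D * (n_left - n_right) / h."""
    return (D / h) * (bernoulli(dphi) * n_left - bernoulli(-dphi) * n_right)
```

The exponential weighting is what keeps the discrete flux stable at large potential differences, which is exactly where a central difference fails.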
Che Awang, Aznida; Azah Samat, Nor
2017-09-01
Leptospirosis is a disease caused by infection with pathogenic species from the genus Leptospira. Humans can be infected with leptospirosis through direct or indirect exposure to the urine of infected animals. The excretion of urine from animal hosts that carry pathogenic Leptospira causes the soil or water to become contaminated. People can therefore become infected when they are exposed to contaminated soil and water through cuts on the skin or open wounds. The bacteria can also enter the human body through mucous membranes such as the nose, eyes and mouth, for example by splashing contaminated water or urine into the eyes or swallowing contaminated water or food. Currently, there is no vaccine available for the prevention or treatment of leptospirosis, but the disease can be treated if it is diagnosed early, avoiding complications. Disease risk mapping is important for the control and prevention of disease, and a good choice of statistical model will produce a good disease risk map. Therefore, the aim of this study is to estimate the relative risk of leptospirosis based initially on the most common statistic used in disease mapping, the Standardized Morbidity Ratio (SMR), and on the Poisson-gamma model. This paper begins by providing a review of the SMR method and the Poisson-gamma model, which we then apply to leptospirosis data from Kelantan, Malaysia. Both results are displayed and compared using graphs, tables and maps. The results show that the second method, the Poisson-gamma model, produces better relative risk estimates than the SMR method. This is because the Poisson-gamma model can overcome a drawback of the SMR, whose relative risk becomes zero when there is no observed leptospirosis case in a region. However, the Poisson-gamma model also has problems: covariate adjustment for this model is difficult, and there is no possibility of allowing for spatial correlation between risks in neighbouring areas. The problems of this model have
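A minimal sketch of the SMR versus Poisson-gamma comparison, assuming a simple empirical-Bayes fit of the gamma prior by method of moments (the paper's exact estimation procedure may differ); all counts are hypothetical:

```python
def smr(observed, expected):
    """Standardized Morbidity Ratio per region: observed / expected cases."""
    return [o / e for o, e in zip(observed, expected)]

def poisson_gamma_rr(observed, expected):
    """Empirical-Bayes relative risks under a Poisson-gamma model:
    O_i ~ Poisson(E_i * r_i) with r_i ~ Gamma(alpha, beta), so the
    posterior mean risk is (alpha + O_i) / (beta + E_i).
    The prior is set by a simple method of moments on the SMRs --
    one common (and assumption-laden) choice, not the paper's exact fit."""
    rr = smr(observed, expected)
    m = sum(rr) / len(rr)
    v = sum((x - m) ** 2 for x in rr) / len(rr)
    v = max(v, 1e-9)                     # guard against a degenerate prior
    alpha, beta = m * m / v, m / v
    return [(alpha + o) / (beta + e) for o, e in zip(observed, expected)]

obs = [12, 0, 7, 3]                      # hypothetical case counts per district
exp_ = [10.0, 4.0, 6.5, 2.8]             # hypothetical expected counts
```

Note how the district with zero observed cases gets a zero SMR but a strictly positive, prior-shrunk Poisson-gamma estimate, which is exactly the drawback of the SMR the abstract describes.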
Yan, David
This thesis presents the one-dimensional equations, numerical method and simulations of a model to characterize the dynamical operation of an electrochemical cell. This model extends the current state of the art in that it accounts, in a primitive way, for the physics of the electrolyte/electrode interface and incorporates diffuse-charge dynamics, temperature coupling, surface coverage, and polarization phenomena. The one-dimensional equations account for a system with one or two mobile ions of opposite charge, and the electrode reaction we consider (when one is needed) is a one-electron electrodeposition reaction. Though the modeled system is far from representing a realistic electrochemical device, our results show a range of dynamics and behaviors which have not been observed previously, and explore the numerical challenges encountered when adding more complexity to a model. Furthermore, the basic transport equations (which are developed in three spatial dimensions) can in future accommodate the inclusion of additional physics, and coupling to more complex boundary conditions that incorporate two-dimensional surface phenomena and multi-rate reactions. In the model, the Poisson-Nernst-Planck equations are used to model diffusion and electromigration in an electrolyte, and the generalized Frumkin-Butler-Volmer equation is used to model reaction kinetics at electrodes. An energy balance equation is derived and coupled to the diffusion-migration equation. The model also includes dielectric polarization effects by introducing different values of the dielectric permittivity in different regions of the bulk, as well as accounting for surface coverage effects due to adsorption, and finite size "crowding", or steric effects. Advection effects are not modeled but could in future be incorporated. In order to solve the coupled PDEs, we use a variable step size second order scheme in time and finite differencing in space. Numerical tests are performed on a simplified system and
Liu, Jinn-Liang; Eisenberg, Bob
2018-02-01
The combinatorial explosion of empirical parameters, numbering in the tens of thousands, presents a tremendous challenge for extended Debye-Hückel models to calculate activity coefficients of aqueous mixtures of the most important salts in chemistry. The explosion of parameters originates from the phenomenological extension of the Debye-Hückel theory that does not take steric and correlation effects of ions and water into account. By contrast, the Poisson-Fermi theory developed in recent years treats ions and water molecules as nonuniform hard spheres of any size with interstitial voids and includes ion-water and ion-ion correlations. We present a Poisson-Fermi model and numerical methods for calculating the individual or mean activity coefficient of electrolyte solutions with any arbitrary number of ionic species in a large range of salt concentrations and temperatures. For each activity-concentration curve, we show that the Poisson-Fermi model requires at most three unchanging parameters to fit the corresponding experimental data well. The three parameters are associated with the Born radius of the solvation energy of an ion in electrolyte solution, which changes with salt concentration in a highly nonlinear manner.
Czech Academy of Sciences Publication Activity Database
Jordanova, P.; Dušek, Jiří; Stehlík, M.
2013-01-01
Roč. 128, OCT 15 (2013), s. 124-134 ISSN 0169-7439 R&D Projects: GA ČR(CZ) GAP504/11/1151; GA MŠk(CZ) ED1.1.00/02.0073 Institutional support: RVO:67179843 Keywords : environmental chemistry * ebullition of methane * mixed poisson processes * renewal process * pareto distribution * moving average process * robust statistics * sedge–grass marsh Subject RIV: EH - Ecology, Behaviour Impact factor: 2.381, year: 2013
Directory of Open Access Journals (Sweden)
Dan S Bolintineanu
2009-01-01
Protegrin peptides are potent antimicrobial agents believed to act against a variety of pathogens by forming nonselective transmembrane pores in the bacterial cell membrane. We have employed 3D Poisson-Nernst-Planck (PNP) calculations to determine the steady-state ion conduction characteristics of such pores at applied voltages in the range of -100 to +100 mV in 0.1 M KCl bath solutions. We have tested a variety of pore structures extracted from molecular dynamics (MD) simulations based on an experimentally proposed octameric pore structure. The computed single-channel conductance values were in the range of 290-680 pS. Better agreement with the experimental range of 40-360 pS was obtained using structures from the last 40 ns of the MD simulation, where conductance values range from 280 to 430 pS. We observed no significant variation of the conductance with applied voltage in any of the structures that we tested, suggesting that the voltage dependence observed experimentally is a result of voltage-dependent channel formation rather than an inherent feature of the open pore structure. We have found the pore to be highly selective for anions, with anionic to cationic current ratios (I(Cl-)/I(K+)) on the order of 10^3. This is consistent with the highly cationic nature of the pore but surprisingly in disagreement with the experimental finding of only slight anionic selectivity. We have additionally tested the sensitivity of our PNP model to several parameters and found the ion diffusion coefficients to have a significant influence on conductance characteristics. The best agreement with experimental data was obtained using a diffusion coefficient for each ion set to 10% of the bulk literature value everywhere inside the channel, a scaling used by several other studies employing PNP calculations. Overall, this work presents a useful link between previous work focused on the structure of protegrin pores and experimental efforts aimed at investigating their
Directory of Open Access Journals (Sweden)
E.O. Ulloa-Dávila
2017-12-01
An approximate analytical solution to the fluctuation potential problem in the modified Poisson-Boltzmann theory of electrolyte solutions in the restricted primitive model is presented. The solution is valid for all inter-ionic distances, including contact values. The fluctuation potential solution is implemented in the theory to describe the structure of the electrolyte in terms of the radial distribution functions, and to calculate some aspects of thermodynamics, viz., configurational reduced energies and osmotic coefficients. The calculations have been made for symmetric valence 1:1 systems at the physical parameters of ionic diameter 4.25·10^{-10} m, relative permittivity 78.5, absolute temperature 298 K, and molar concentrations 0.1038, 0.425, 1.00, and 1.968 mol/dm^3. Radial distribution functions are compared with the corresponding results from the symmetric Poisson-Boltzmann, and the conventional and modified Poisson-Boltzmann theories. Comparisons have also been done for the contact values of the radial distributions, reduced configurational energies, and osmotic coefficients as functions of electrolyte concentration. Some Monte Carlo simulation data from the literature are also included in the assessment of the thermodynamic predictions. Results show a very good agreement with the Monte Carlo results and some improvement for osmotic coefficients and radial distribution function contact values relative to these theories. The reduced energy curve shows excellent agreement with Monte Carlo data for molarities up to 1 mol/dm^3.
Wang, Yiyi; Kockelman, Kara M
2013-11-01
This work examines the relationship between 3-year pedestrian crash counts across Census tracts in Austin, Texas, and various land use, network, and demographic attributes, such as land use balance, residents' access to commercial land uses, sidewalk density, lane-mile densities (by roadway class), and population and employment densities (by type). The model specification allows for region-specific heterogeneity, correlation across response types, and spatial autocorrelation via a Poisson-based multivariate conditional auto-regressive (CAR) framework and is estimated using Bayesian Markov chain Monte Carlo methods. Least-squares regression estimates of walk-miles traveled per zone serve as the exposure measure. Here, the Poisson-lognormal multivariate CAR model outperforms an aspatial Poisson-lognormal multivariate model and a spatial model (without cross-severity correlation), both in terms of fit and inference. Positive spatial autocorrelation emerges across neighborhoods, as expected (due to latent heterogeneity or missing variables that trend in space, resulting in spatial clustering of crash counts). In comparison, the positive aspatial, bivariate cross correlation of severe (fatal or incapacitating) and non-severe crash rates reflects latent covariates that have impacts across severity levels but are more local in nature (such as lighting conditions and local sight obstructions), along with spatially lagged cross correlation. Results also suggest greater mixing of residences and commercial land uses is associated with higher pedestrian crash risk across different severity levels, ceteris paribus, presumably since such access produces more potential conflicts between pedestrian and vehicle movements. Interestingly, network densities show variable effects, and sidewalk provision is associated with lower severe-crash rates. Copyright © 2013 Elsevier Ltd. All rights reserved.
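The exposure-offset Poisson regression underlying such count models can be sketched in a few lines. This is a bare-bones single-covariate Newton-Raphson fit, not the paper's Poisson-lognormal multivariate CAR model (which requires MCMC machinery); names and data are illustrative:

```python
import math

def fit_poisson(xs, ys, exposure, iters=25):
    """Poisson regression with a log link and an exposure offset:
    y_i ~ Poisson(exposure_i * exp(b0 + b1 * x_i)),
    fitted by Newton-Raphson on the log-likelihood. The exposure plays
    the role of the walk-miles measure in the abstract: counts are
    modelled per unit of exposure."""
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        mu = [e * math.exp(b0 + b1 * x) for e, x in zip(exposure, xs)]
        # score (gradient) of the log-likelihood
        g0 = sum(y - m for y, m in zip(ys, mu))
        g1 = sum((y - m) * x for y, m, x in zip(ys, mu, xs))
        # observed information (negative Hessian), a 2x2 matrix
        h00 = sum(mu)
        h01 = sum(m * x for m, x in zip(mu, xs))
        h11 = sum(m * x * x for m, x in zip(mu, xs))
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1
```

In the CAR setting this likelihood is embedded in a hierarchical model with spatial random effects; the score equations above are the common core.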
XRLSim model specifications and user interfaces
Energy Technology Data Exchange (ETDEWEB)
Young, K.D.; Breitfeller, E.; Woodruff, J.P.
1989-12-01
The two chapters in this manual document the engineering development leading to modification of XRLSim -- an Ada-based computer program developed to provide a realistic simulation of an x-ray laser weapon platform. Complete documentation of the FY88 effort to develop XRLSim was published in April 1989, as UCID-21736: XRLSIM Model Specifications and User Interfaces, by L. C. Ng, D. T. Gavel, R. M. Shectman, P. L. Sholl, and J. P. Woodruff. The FY89 effort has been primarily to enhance the x-ray laser weapon-platform model fidelity. Chapter 1 of this manual details enhancements made to XRLSim model specifications during FY89. Chapter 2 provides the user with changes in user interfaces brought about by these enhancements. This chapter is offered as a series of deletions, replacements, and insertions to the original document to enable XRLSim users to implement enhancements developed during FY89.
Kyllingsbæk, Søren; Markussen, Bo; Bundesen, Claus
2012-06-01
The authors propose and test a simple model of the time course of visual identification of briefly presented, mutually confusable single stimuli in pure accuracy tasks. The model implies that during stimulus analysis, tentative categorizations that stimulus i belongs to category j are made at a constant Poisson rate, v(i, j). The analysis is continued until the stimulus disappears, and the overt response is based on the categorization made the greatest number of times. The model was evaluated by Monte Carlo tests of goodness of fit against observed probability distributions of responses in two extensive experiments and also by quantifications of the information loss of the model compared with the observed data by use of information theoretic measures. The model provided a close fit to individual data on identification of digits and an apparently perfect fit to data on identification of Landolt rings.
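The model's core mechanism -- Poisson-distributed counts of tentative categorizations, with the overt response going to the largest count -- is easy to simulate. A minimal sketch with invented rates; the tie-breaking rule is an assumption the abstract does not spell out:

```python
import math
import random

def poisson_sample(rng, lam):
    """Poisson variate via Knuth's product-of-uniforms method
    (adequate for the small rates used here)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def identify(rng, rates, t):
    """One trial: the count of tentative categorizations into category j
    is Poisson(v_j * t); the response is the category with the most
    counts, ties broken at random (an illustrative choice)."""
    counts = [poisson_sample(rng, v * t) for v in rates]
    best = max(counts)
    return rng.choice([j for j, c in enumerate(counts) if c == best])

rng = random.Random(7)
rates = [3.0, 1.0, 1.0]   # correct category processed 3x faster (illustrative)
acc = sum(identify(rng, rates, 1.0) == 0 for _ in range(2000)) / 2000
```

Lengthening the exposure time t raises accuracy, which is the mechanism by which the model captures the time course of identification.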
HTGR Application Economic Model Users' Manual
International Nuclear Information System (INIS)
Gandrik, A.M.
2012-01-01
The High Temperature Gas-Cooled Reactor (HTGR) Application Economic Model was developed at the Idaho National Laboratory for the Next Generation Nuclear Plant Project. The HTGR Application Economic Model calculates either the required selling price of power and/or heat for a given internal rate of return (IRR) or the IRR for power and/or heat being sold at the market price. The user can generate these economic results for a range of reactor outlet temperatures; with and without power cycles, including either a Brayton or Rankine cycle; for the demonstration plant, first-of-a-kind, or nth-of-a-kind project phases; for up to 16 reactor modules; and for module ratings of 200, 350, or 600 MWt. This user's manual contains the mathematical models and operating instructions for the HTGR Application Economic Model. Instructions, screenshots, and examples are provided to guide the user through the HTGR Application Economic Model. This model was designed for users who are familiar with the HTGR design, Excel, and engineering economics. Modification of the HTGR Application Economic Model should only be performed by users familiar with the HTGR and its applications, Excel, and Visual Basic.
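The IRR side of such a calculation reduces to finding the discount rate at which net present value crosses zero. A hedged sketch with a hypothetical cashflow stream, not the HTGR model's actual cost structure:

```python
def npv(rate, cashflows):
    """Net present value; cashflows[0] occurs now, cashflows[t] after t years."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-9):
    """Internal rate of return by bisection on NPV, assuming a conventional
    stream (one sign change) so the root is bracketed by [lo, hi]."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if npv(mid, cashflows) > 0.0:    # NPV decreases in rate here
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0

# Hypothetical stream: capital outlay now, equal net revenues for two years.
rate = irr([-100.0, 60.0, 60.0])
```

The model's other mode -- solving for a required selling price given a target IRR -- is the same equation rearranged, with price as the unknown.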
Energy Technology Data Exchange (ETDEWEB)
A.M. Gandrik
2012-01-01
The High Temperature Gas-Cooled Reactor (HTGR) Cost Model was developed at the Idaho National Laboratory for the Next Generation Nuclear Plant Project. The HTGR Cost Model calculates an estimate of the capital costs, annual operating and maintenance costs, and decommissioning costs for a high-temperature gas-cooled reactor. The user can generate these costs for multiple reactor outlet temperatures; with and without power cycles, including either a Brayton or Rankine cycle; for the demonstration plant, first-of-a-kind, or nth-of-a-kind project phases; for a single or four-pack configuration; and for a reactor size of 350 or 600 MWt. This user's manual contains the mathematical models and operating instructions for the HTGR Cost Model. Instructions, screenshots, and examples are provided to guide the user through the HTGR Cost Model. This model was designed for users who are familiar with the HTGR design and Excel. Modification of the HTGR Cost Model should only be performed by users familiar with Excel and Visual Basic.
Parallel community climate model: Description and user's guide
Energy Technology Data Exchange (ETDEWEB)
Drake, J.B.; Flanery, R.E.; Semeraro, B.D.; Worley, P.H. [and others]
1996-07-15
This report gives an overview of a parallel version of the NCAR Community Climate Model, CCM2, implemented for MIMD massively parallel computers using a message-passing programming paradigm. The parallel implementation was developed on an Intel iPSC/860 with 128 processors and on the Intel Delta with 512 processors, and the initial target platform for the production version of the code is the Intel Paragon with 2048 processors. Because the implementation uses standard, portable message-passing libraries, the code has been easily ported to other multiprocessors supporting a message-passing programming paradigm. The parallelization strategy used is to decompose the problem domain into geographical patches and assign each processor the computation associated with a distinct subset of the patches. With this decomposition, the physics calculations involve only grid points and data local to a processor and are performed in parallel. Using parallel algorithms developed for the semi-Lagrangian transport, the fast Fourier transform and the Legendre transform, both physics and dynamics are computed in parallel with minimal data movement and modest change to the original CCM2 source code. Sequential or parallel history tapes are written and input files (in history tape format) are read sequentially by the parallel code to promote compatibility with production use of the model on other computer systems. A validation exercise has been performed with the parallel code and is detailed along with some performance numbers on the Intel Paragon and the IBM SP2. A discussion of reproducibility of results is included. A user's guide for the PCCM2 version 2.1 on the various parallel machines completes the report. Procedures for compilation, setup and execution are given. A discussion of code internals is included for those who may wish to modify and use the program in their own research.
Eliazar, Iddo; Klafter, Joseph
2008-05-01
Many random populations can be modeled as a countable set of points scattered randomly on the positive half-line. The points may represent magnitudes of earthquakes and tornados, masses of stars, market values of public companies, etc. In this article we explore a specific class of such random populations that we coin 'Paretian Poisson processes'. This class is elemental in statistical physics, connecting together, in a deep and fundamental way, diverse issues including: the Poisson distribution of the Law of Small Numbers; Paretian tail statistics; the Fréchet distribution of Extreme Value Theory; the one-sided Lévy distribution of the Central Limit Theorem; scale-invariance, renormalization and fractality; and resilience to random perturbations.
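A Paretian Poisson process of this kind can be simulated directly: a Poisson process on the half-line whose mean number of points above any level decays as a power law. A minimal sketch with an illustrative parametrization (not the article's notation):

```python
import math
import random

def sample_paretian_poisson(rng, c, a, cutoff):
    """Points of a Poisson process on (0, inf) with intensity
    c * a * x**(-a - 1), restricted to x > cutoff: the number of points
    above any level l >= cutoff is Poisson with mean c * l**(-a),
    i.e. a Paretian tail. Parametrization is illustrative."""
    mean_count = c * cutoff ** (-a)
    n, s = 0, 0.0                       # Poisson count via exponential gaps
    while True:
        s += -math.log(rng.random())
        if s > mean_count:
            break
        n += 1
    # each point: inverse-transform sample of the normalized Pareto density
    return [cutoff * rng.random() ** (-1.0 / a) for _ in range(n)]

rng = random.Random(11)
samples = [sample_paretian_poisson(rng, 5.0, 1.5, 1.0) for _ in range(2000)]
```

The restriction to x > cutoff is what makes the simulation finite; the full process has infinitely many points accumulating at the origin.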
Poisson integrators for Lie-Poisson structures on R3
International Nuclear Information System (INIS)
Song Lina
2011-01-01
This paper is concerned with the study of Poisson integrators. We are interested in Lie-Poisson systems on R^3. First, we focus on Poisson integrators for constant Poisson systems and the transformations used for transforming Lie-Poisson structures to constant Poisson structures. Then, we construct local Poisson integrators for Lie-Poisson systems on R^3. Finally, we present the results of numerical experiments for two Lie-Poisson systems and compare our Poisson integrators with other known methods.
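A concrete example of a Poisson integrator for a Lie-Poisson system on R^3 is the splitting scheme for the free rigid body, where each partial flow is an exact rotation and hence a Poisson map. This is a standard construction offered for illustration, not necessarily one of the paper's integrators:

```python
import math

def rot_axis(x, k, I_k, dt):
    """Exact flow of the partial Hamiltonian H_k = x_k^2 / (2 * I_k) for the
    free rigid body (Euler's equations), a Lie-Poisson system on R^3:
    x_k stays fixed and the other two components rotate about axis k."""
    i, j = [(1, 2), (2, 0), (0, 1)][k]
    th = dt * x[k] / I_k
    c, s = math.cos(th), math.sin(th)
    y = list(x)
    y[i] = c * x[i] + s * x[j]
    y[j] = -s * x[i] + c * x[j]
    return y

def step(x, I, dt):
    """One first-order splitting step: compose the three exact partial flows.
    Each factor is a Poisson map, so the Casimir |x|^2 is preserved to
    round-off, which a generic Runge-Kutta step does not guarantee."""
    for k in range(3):
        x = rot_axis(x, k, I[k], dt)
    return x

x0, I = [1.0, 2.0, 0.5], [1.0, 2.0, 3.0]   # illustrative state and inertias
x = list(x0)
for _ in range(1000):
    x = step(x, I, 0.01)
```

Exact Casimir conservation is the defining advantage such Poisson integrators have over generic schemes; the energy is only nearly conserved, with a bounded first-order error.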
On the Linear Stability of Crystals in the Schrödinger-Poisson Model
Komech, A.; Kopylova, E.
2016-10-01
We consider the Schrödinger-Poisson-Newton equations for crystals with one ion per cell. We linearize this dynamics at the periodic minimizers of energy per cell and introduce a novel class of ion charge densities that ensures the stability of the linearized dynamics. Our main result is the energy positivity for the Bloch generators of the linearized dynamics under a Wiener-type condition on the ion charge density. We also adopt an additional 'Jellium' condition which cancels the negative contribution caused by the electrostatic instability and provides the 'Jellium' periodic minimizers and the optimality of the lattice: the energy per cell of the periodic minimizer attains the global minimum among all possible lattices. We show that the energy positivity can fail if the Jellium condition is violated, while the Wiener condition holds. The proof of the energy positivity relies on a novel factorization of the corresponding Hamilton functional. The Bloch generators are nonselfadjoint (and even nonsymmetric) Hamilton operators. We diagonalize these generators using our theory of spectral resolution of the Hamilton operators with positive definite energy (Komech and Kopylova, J Stat Phys 154(1-2):503-521, 2014; J Spectral Theory 5(2):331-361, 2015). The stability of the linearized crystal dynamics is established using this spectral resolution.
Islam, Mohammad Mafijul; Alam, Morshed; Tariquzaman, Md; Kabir, Mohammad Alamgir; Pervin, Rokhsona; Begum, Munni; Khan, Md Mobarak Hossain
2013-01-08
Malnutrition is one of the principal causes of child mortality in developing countries including Bangladesh. To our knowledge, most of the available studies that addressed the issue of malnutrition among under-five children considered categorical (dichotomous/polychotomous) outcome variables and applied logistic regression (binary/multinomial) to find their predictors. In this study the malnutrition variable (i.e. the outcome) is defined as the number of under-five malnourished children in a family, which is a non-negative count variable. The purposes of the study are (i) to demonstrate the applicability of the generalized Poisson regression (GPR) model as an alternative to other statistical methods and (ii) to find some predictors of this outcome variable. The data are extracted from the Bangladesh Demographic and Health Survey (BDHS) 2007. Briefly, this survey employs a nationally representative sample based on a two-stage stratified sample of households. A total of 4,460 under-five children are analysed using various statistical techniques, namely the Chi-square test and the GPR model. The GPR model (as compared to the standard Poisson regression and negative binomial regression) is found to be justified for studying the above-mentioned outcome variable because of its under-dispersion (variance less than the mean). Several significant predictors of the outcome variable are identified, namely mother's education, father's education, wealth index, sanitation status, source of drinking water, and total number of children ever born to a woman. Consistency of our findings in light of many other studies suggests that the GPR model is an ideal alternative to other statistical models for analysing the number of under-five malnourished children in a family. Strategies based on significant predictors may improve the nutritional status of children in Bangladesh.
Modeling Users' Experiences with Interactive Systems
Karapanos, Evangelos
2013-01-01
Over the past decade the field of Human-Computer Interaction has evolved from the study of the usability of interactive products towards a more holistic understanding of how they may mediate desired human experiences. This book identifies the notion of diversity in users' experiences with interactive products and proposes methods and tools for modeling this along two levels: (a) interpersonal diversity in users' responses to early conceptual designs, and (b) the dynamics of users' experiences over time. The Repertory Grid Technique is proposed as an alternative to standardized psychometric scales for modeling interpersonal diversity in users' responses to early concepts in the design process, and new Multi-Dimensional Scaling procedures are introduced for modeling such complex quantitative data. iScale, a tool for the retrospective assessment of users' experiences over time, is proposed as an alternative to longitudinal field studies, and a semi-automated technique for the analysis of the elicited exper...
GEOS-5 Chemistry Transport Model User's Guide
Kouatchou, J.; Molod, A.; Nielsen, J. E.; Auer, B.; Putman, W.; Clune, T.
2015-01-01
The Goddard Earth Observing System version 5 (GEOS-5) General Circulation Model (GCM) makes use of the Earth System Modeling Framework (ESMF) to enable model configurations with many functions. One of the options of the GEOS-5 GCM is the GEOS-5 Chemistry Transport Model (GEOS-5 CTM), which is an offline simulation of chemistry and constituent transport driven by a specified meteorology and other model output fields. This document describes the basic components of the GEOS-5 CTM, and is a user's guide on how to obtain and run simulations on the NCCS Discover platform. In addition, we provide information on how to change the model configuration input files to meet users' needs.
Revised user's guide to the 'DISPOSALS' model
International Nuclear Information System (INIS)
Laundy, R.S.; James, A.R.; Groom, M.S.; LeJeune, S.R.
1985-04-01
This report provides a User's Guide to the 'DISPOSALS' computer model and includes instructions on how to set up and run a specific problem together with details of the scope, theoretical basis, data requirements and capabilities of the model. The function of the 'DISPOSALS' model is to make assignments of nuclear waste material in an optimum manner to a number of disposal sites each subject to a number of constraints such as limits on the volume and activity. The user is able to vary the number of disposal sites, the range and limits of the constraints to be applied to each disposal site and the objective function for optimisation. The model is based on the Linear Programming technique and uses CAP Scientific's LAMPS and MAGIC packages. Currently the model has been implemented on CAP Scientific's VAX 11/750 minicomputer. (author)
User's guide to the 'DISPOSALS' model
International Nuclear Information System (INIS)
Groom, M.S.; James, A.R.; Laundy, R.S.
1984-03-01
This report provides a User's Guide to the 'DISPOSALS' computer model and includes instructions on how to set up and run a specific problem together with details of the scope, theoretical basis, data requirements and capabilities of the model. The function of the 'DISPOSALS' model is to make assignments of nuclear waste material in an optimum manner to a number of disposal sites each subject to a number of constraints such as limits on the volume and activity. The user is able to vary the number of disposal sites, the range and limits of the constraints to be applied to each disposal site and the objective function for optimisation. The model is based on the Linear Programming technique and uses CAP Scientific's LAMPS and MAGIC packages. Currently the model has been implemented on CAP Scientific's VAX 11/750 minicomputer. (author)
Sadler, J. M.; Goodall, J. L.; Morsy, M. M.; Spencer, K.
2018-04-01
Sea level rise has already caused more frequent and severe coastal flooding and this trend will likely continue. Flood prediction is an essential part of a coastal city's capacity to adapt to and mitigate this growing problem. Complex coastal urban hydrological systems, however, do not always lend themselves easily to physically-based flood prediction approaches. This paper presents a method for using a data-driven approach to estimate flood severity in an urban coastal setting using crowd-sourced data, a non-traditional but growing data source, along with environmental observation data. Two data-driven models, Poisson regression and Random Forest regression, are trained to predict the number of flood reports per storm event as a proxy for flood severity, given extensive environmental data (i.e., rainfall, tide, groundwater table level, and wind conditions) as input. The method is demonstrated using data from Norfolk, Virginia USA from September 2010 to October 2016. Quality-controlled, crowd-sourced street flooding reports ranging from 1 to 159 per storm event for 45 storm events are used to train and evaluate the models. Random Forest performed better than Poisson regression at predicting the number of flood reports and had a lower false negative rate. From the Random Forest model, total cumulative rainfall was by far the most dominant input variable in predicting flood severity, followed by low tide and lower low tide. These methods serve as a first step toward using data-driven methods for spatially and temporally detailed coastal urban flood prediction.
Internet User Behaviour Model Discovery Process
Directory of Open Access Journals (Sweden)
2007-01-01
The Academy of Economic Studies has more than 45,000 students and about 5,000 computers with Internet access which are connected to the AES network. Students can access the Internet on these computers through a proxy server which stores information about the way the Internet is accessed. In this paper, we describe the process of discovering Internet user behaviour models by analyzing proxy server raw data, and we emphasize the importance of such models for the e-learning environment.
Wake Vortex Inverse Model User's Guide
Lai, David; Delisi, Donald
2008-01-01
NorthWest Research Associates (NWRA) has developed an inverse model for inverting landing aircraft vortex data. The data used for the inversion are the time evolution of the lateral transport position and vertical position of both the port and starboard vortices. The inverse model performs iterative forward model runs using various estimates of vortex parameters, vertical crosswind profiles, and vortex circulation as a function of wake age. Forward model predictions of lateral transport and altitude are then compared with the observed data. Differences between the data and model predictions guide the choice of vortex parameter values, crosswind profile and circulation evolution in the next iteration. Iterations are performed until a user-defined criterion is satisfied. Currently, the inverse model is set to stop when the improvement in the rms deviation between the data and model predictions is less than 1 percent for two consecutive iterations. The forward model used in this inverse model is a modified version of the Shear-APA model. A detailed description of this forward model, the inverse model, and its validation are presented in a different report (Lai, Mellman, Robins, and Delisi, 2007). This document is a User's Guide for the Wake Vortex Inverse Model. Section 2 presents an overview of the inverse model program. Execution of the inverse model is described in Section 3. When executing the inverse model, a user is requested to provide the name of an input file which contains the inverse model parameters, the various datasets, and directories needed for the inversion. A detailed description of the list of parameters in the inversion input file is presented in Section 4. A user has an option to save the inversion results of each lidar track in a mat-file (a condensed data file in Matlab format). These saved mat-files can be used for post-inversion analysis. A description of the contents of the saved files is given in Section 5. An example of an inversion input
The Poisson aggregation process
International Nuclear Information System (INIS)
Eliazar, Iddo
2016-01-01
In this paper we introduce and analyze the Poisson Aggregation Process (PAP): a stochastic model in which a random collection of random balls is stacked over a general metric space. The scattering of the balls’ centers follows a general Poisson process over the metric space, and the balls’ radii are independent and identically distributed random variables governed by a general distribution. For each point of the metric space, the PAP counts the number of balls that are stacked over it. The PAP model is a highly versatile spatial counterpart of the temporal M/G/∞ model in queueing theory. The surface of the moon, scarred by circular meteor-impact craters, exemplifies the PAP model in two dimensions: the PAP counts the number of meteor-impacts that any given moon-surface point sustained. A comprehensive analysis of the PAP is presented, and the closed-form results established include: general statistics, stationary statistics, short-range and long-range dependencies, a Central Limit Theorem, an Extreme Limit Theorem, and fractality.
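The counting mechanism described above is easy to prototype. The sketch below is a 1-D toy (function names and parameter values are my own, not from the paper): ball centers are scattered by a homogeneous Poisson process, radii are i.i.d. uniform, and the PAP value at a point is the number of balls stacked over it. In 1-D the stationary mean count is rate × E[2R].

```python
import random

def simulate_centers(rate, length, rng):
    """Homogeneous Poisson process on [0, length]: exponential gaps."""
    centers, t = [], rng.expovariate(rate)
    while t < length:
        centers.append(t)
        t += rng.expovariate(rate)
    return centers

def pap_count(point, centers, radii):
    """PAP value at a point: number of balls stacked over it."""
    return sum(1 for c, r in zip(centers, radii) if abs(point - c) <= r)

rng = random.Random(42)
centers = simulate_centers(rate=2.0, length=1000.0, rng=rng)
radii = [rng.uniform(0.0, 1.0) for _ in centers]  # i.i.d. radii, E[R] = 0.5

# In 1-D a ball of radius R covers an interval of length 2R, so the
# stationary mean count is rate * E[2R] = 2.0 * 1.0 = 2.0.
points = [100.0 + 0.5 * i for i in range(1600)]   # stay away from the edges
mean_count = sum(pap_count(p, centers, radii) for p in points) / len(points)
```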
Plan recognition in modelling of users
International Nuclear Information System (INIS)
Hollnagel, E.
1988-01-01
In order for an Intelligent Decision Support System to interact properly with a user, it must know what the user is doing. Accident Sequence Modelling (ASM) provides a possible frame of reference for monitoring operator activities, but it cannot be used directly: (1) operators may deviate from the scenario described in ASM, (2) the actual situation may develop differently from the scenario, (3) operators are normally involved in several activities at the same time, and (4) modelling of operator activities must focus on the level of individual actions, while the ASM only addresses the global view. The reference provided by the ASM scenario must therefore be supplemented by a more direct modelling of what the operator does. This requires a recognition of the operator's current plans, i.e. his goals and the strategies he employs to reach them. The paper describes a programme to develop an expert system that does this, within the ESPRIT project Graphical Dialogue Environment. (author)
Chavanis, P H; Delfini, L
2014-03-01
We study random transitions between two metastable states that appear below a critical temperature in a one-dimensional self-gravitating Brownian gas with a modified Poisson equation experiencing a second order phase transition from a homogeneous phase to an inhomogeneous phase [P. H. Chavanis and L. Delfini, Phys. Rev. E 81, 051103 (2010)]. We numerically solve the N-body Langevin equations and the stochastic Smoluchowski-Poisson system, which takes fluctuations (finite N effects) into account. The system switches back and forth between the two metastable states (bistability) and the particles accumulate successively at the center or at the boundary of the domain. We explicitly show that these random transitions exhibit the phenomenology of the ordinary Kramers problem for a Brownian particle in a double-well potential. The distribution of the residence time is Poissonian and the average lifetime of a metastable state is given by the Arrhenius law; i.e., it is proportional to the exponential of the free energy barrier ΔF divided by the energy of thermal excitation k_B T. Since the free energy is proportional to the number of particles N for a system with long-range interactions, the lifetime of metastable states scales as e^N and is considerable for N≫1. As a result, in many applications, metastable states of systems with long-range interactions can be considered as stable states. However, for moderate values of N, or close to a critical point, the lifetime of the metastable states is reduced since the free energy barrier decreases. In that case, the fluctuations become important and the mean field approximation is no longer valid. This is the situation considered in this paper. By an appropriate change of notations, our results also apply to bacterial populations experiencing chemotaxis in biology. Their dynamics can be described by a stochastic Keller-Segel model that takes fluctuations into account and goes beyond the usual mean field approximation.
Minois, Nathan; Lauwers-Cances, Valérie; Savy, Stéphanie; Attal, Michel; Andrieu, Sandrine; Anisimov, Vladimir; Savy, Nicolas
2017-10-15
At the design stage of a clinical trial, a question of paramount interest is how long it will take to recruit a given number of patients. Modelling the recruitment dynamics is the necessary step to answer this question. The Poisson-gamma model provides a very convenient, flexible and realistic approach. This model allows the trial duration to be predicted with very good accuracy using data collected at an interim time. A natural question arises: how can the parameters of the recruitment model be evaluated before the trial begins? The question is harder to handle as there are no recruitment data available for this trial. However, if similar completed trials exist, it is appealing to use data from these trials to investigate the feasibility of the recruitment process. In this paper, the authors explore the recruitment data of two similar clinical trials (Intergroupe Francais du Myélome 2005 and 2009). It is shown that the natural idea of plugging the historical rates estimated from the completed trial into the same centres of the new trial for predicting recruitment is not a relevant strategy. In contrast, using the parameters of a gamma distribution of the rates estimated from the completed trial in the recruitment dynamic model of the new trial provides reasonable predictive properties with relevant confidence intervals. Copyright © 2017 John Wiley & Sons, Ltd.
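A minimal sketch of the Poisson-gamma recruitment mechanism described above (centre count, target and gamma parameters are illustrative, not from the trials analysed in the paper): each centre's rate is drawn from a gamma distribution, and given the rates the pooled enrolment is a Poisson process with the summed rate.

```python
import random

def recruitment_time(n_centres, target, shape, scale, rng):
    """Poisson-gamma model: each centre's rate ~ Gamma(shape, scale); given
    the rates, pooled enrolment is a Poisson process with the summed rate,
    so inter-enrolment gaps are Exponential(total_rate)."""
    rates = [rng.gammavariate(shape, scale) for _ in range(n_centres)]
    total_rate = sum(rates)
    t = 0.0
    for _ in range(target):
        t += rng.expovariate(total_rate)
    return t

rng = random.Random(1)
# Hypothetical trial: 50 centres, mean rate 2 * 0.05 = 0.1 patients/centre/day.
durations = [recruitment_time(50, 300, 2.0, 0.05, rng) for _ in range(200)]
mean_duration = sum(durations) / len(durations)  # roughly 300 / 5 = 60 days
```

Averaging over Monte Carlo runs like this is how predicted durations and their confidence intervals are typically obtained from the fitted gamma parameters.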
Grøn, Randi; Gerds, Thomas A; Andersen, Per K
2016-03-30
Poisson regression is an important tool in register-based epidemiology where it is used to study the association between exposure variables and event rates. In this paper, we will discuss the situation with 'large n and small p', where n is the sample size and p is the number of available covariates. Specifically, we are concerned with modeling options when there are time-varying covariates that can have time-varying effects. One problem is that tests of the proportional hazards assumption, of no interactions between exposure and other observed variables, or of other modeling assumptions have large power due to the large sample size and will often indicate statistical significance even for numerically small deviations that are unimportant for the subject matter. Another problem is that information on important confounders may be unavailable. In practice, this situation may lead to simple working models that are then likely misspecified. To support and improve conclusions drawn from such models, we discuss methods for sensitivity analysis, for estimation of average exposure effects using aggregated data, and a semi-parametric bootstrap method to obtain robust standard errors. The methods are illustrated using data from the Danish national registries investigating the diabetes incidence for individuals treated with antipsychotics compared with the general unexposed population. Copyright © 2015 John Wiley & Sons, Ltd.
Solid Waste Projection Model: Model user's guide
International Nuclear Information System (INIS)
Stiles, D.L.; Crow, V.L.
1990-08-01
The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC) specifically to address solid waste management issues at the Hanford Central Waste Complex (HCWC). This document, one of six documents supporting the SWPM system, contains a description of the system and instructions for preparing to use SWPM and operating Version 1 of the model. 4 figs., 1 tab
Garcia, Jane Bernadette Denise M.; Esguerra, Jose Perico H.
2017-08-01
An approximate but closed-form expression for a Poisson-like steady-state wealth distribution in a kinetic model of gambling was formulated from a finite number of its moments, which were generated from a β_{a,b}(x) exchange distribution. The obtained steady-state wealth distributions have tails which are qualitatively similar to those observed in actual wealth distributions.
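A rough simulation of this kind of kinetic gambling model, assuming the simplest pairwise exchange rule (a random pair of agents pools its wealth and splits it by a Beta(a, b) draw); all names and parameter values are illustrative, not from the paper.

```python
import random

def gamble_step(wealth, a, b, rng):
    """One exchange: two random agents pool their wealth and split it
    according to a Beta(a, b) draw (the beta exchange distribution)."""
    i, j = rng.sample(range(len(wealth)), 2)
    pool = wealth[i] + wealth[j]
    share = rng.betavariate(a, b)
    wealth[i], wealth[j] = share * pool, (1.0 - share) * pool

rng = random.Random(3)
n = 500
wealth = [1.0] * n                   # everyone starts with unit wealth
for _ in range(200 * n):             # relax towards the steady state
    gamble_step(wealth, 2.0, 2.0, rng)

# Exchanges conserve total wealth, so the mean stays at 1; the steady-state
# histogram of `wealth` is the object the paper approximates in closed form.
mean_wealth = sum(wealth) / n
```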
The asymptotic stability analysis in stochastic logistic model with Poisson growth coefficient
Directory of Open Access Journals (Sweden)
Shaojuan Ma
2014-01-01
The asymptotic stability of a discrete logistic model with random growth coefficient is studied in this paper. Firstly, the discrete logistic model with random growth coefficient is built and reduced to its deterministic equivalent system by orthogonal polynomial approximation. Then, the linear stability theory and the Jury criterion for nonlinear deterministic discrete systems are applied to the equivalent system. Finally, by mathematical analysis, we find that the parameter interval for asymptotic stability of the nontrivial equilibrium in the stochastic logistic system gets smaller as the random intensity or the statistical parameters of the random variable increase, and that the random parameter's influence on asymptotic stability in the stochastic logistic system becomes prominent.
Comparison of two generation-recombination terms in the Poisson-Nernst-Planck model
Energy Technology Data Exchange (ETDEWEB)
Lelidis, I. [Solid State Section, Department of Physics, University of Athens, Panepistimiopolis, Zografos, Athens 157 84 (Greece); Department of Applied Science and Technology, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Torino (Italy); Universite de Picardie Jules Verne, Laboratoire de Physique des Systemes Complexes, 33 rue Saint-Leu 80039, Amiens (France); Barbero, G. [Department of Applied Science and Technology, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Torino (Italy); Sfarna, A. [Solid State Section, Department of Physics, University of Athens, Panepistimiopolis, Zografos, Athens 157 84 (Greece)
2012-10-21
Two phenomenological forms proposed to take into account the generation-recombination phenomenon of ions are investigated. The first form models the phenomenon as a chemical reaction, containing two coefficients describing the dissociation of neutral particles into ions and the recombination of ions to give neutral particles. The second form is based on the assumption that, in thermodynamic equilibrium, a well-defined density of ions is stable. Any deviation from the equilibrium density gives rise to a source term proportional to the deviation, whose phenomenological coefficient plays the role of a lifetime. The analysis is performed by evaluating the electrical response of an electrolytic cell to an external stimulus for both forms. For simplicity we assume that the electrodes are blocking, that there is only one group of negative and positive ions, and that the negative ions are immobile. For the second form, two cases are considered: (i) the generation-recombination phenomenon is due to an intrinsic mechanism, and (ii) the production of ions is triggered by an external source of energy, as in a solar cell. We show that the predictions of the two models differ at the impedance as well as at the admittance level. In particular, the first model predicts the existence of two plateaux for the real part of the impedance, whereas the second one predicts just one. It follows that impedance spectroscopy measurements could give information on which model is valid for the generation-recombination of ions.
A Multi-Model Stereo Similarity Function Based on Monogenic Signal Analysis in Poisson Scale Space
Directory of Open Access Journals (Sweden)
Jinjun Li
2011-01-01
A stereo similarity function based on local multi-model monogenic image feature descriptors (LMFD) is proposed to match interest points and estimate the disparity map for stereo images. Local multi-model monogenic image features include the local orientation and instantaneous phase of the gray monogenic signal, the local color phase of the color monogenic signal, and local mean colors in the multiscale color monogenic signal framework. The gray monogenic signal, which is the extension of the analytic signal to gray-level images using the Dirac operator and Laplace equation, consists of the local amplitude, local orientation, and instantaneous phase of a 2D image signal. The color monogenic signal is the extension of the monogenic signal to color images based on Clifford algebras. The local color phase can be estimated by computing the geometric product between the color monogenic signal and a unit reference vector in RGB color space. Experimental results on synthetic and natural stereo images show the performance of the proposed approach.
Directory of Open Access Journals (Sweden)
Fabyano Fonseca Silva
2011-01-01
Nowadays, an important and interesting alternative in the control of tick infestation in cattle is to select resistant animals and identify the respective quantitative trait loci (QTLs) and DNA markers for posterior use in breeding programs. The number of ticks per animal is a discrete count trait, which could potentially follow a Poisson distribution. However, in the case of an excess of zeros, due to the occurrence of several non-infected animals, the zero-inflated Poisson (ZIP) and generalized zero-inflated Poisson (GZIP) distributions may provide a better description of the data. Thus, the objective here was to compare, through simulation, Poisson and ZIP models (simple and generalized) with classical approaches for QTL mapping with count phenotypes under different scenarios, and to apply these approaches to a QTL study of tick resistance in an F2 cattle (Gyr x Holstein) population. It was concluded that, when working with zero-inflated data, it is recommendable to use the generalized and simple ZIP models for analysis. On the other hand, when working with data with zeros that are not zero-inflated, the Poisson model or a data-transformation approach, such as a square-root or Box-Cox transformation, is applicable.
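The zero-inflated Poisson distribution used above has a simple probability mass function; a minimal sketch (the parameter values are illustrative):

```python
import math

def zip_pmf(k, lam, pi):
    """Zero-inflated Poisson: with probability pi the count is a structural
    zero, otherwise it is an ordinary Poisson(lam) draw."""
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    return pi + (1.0 - pi) * poisson if k == 0 else (1.0 - pi) * poisson

# The zero class is inflated relative to a plain Poisson with the same lam:
p0_zip = zip_pmf(0, 2.0, 0.3)        # 0.3 + 0.7 * e^-2
p0_poisson = math.exp(-2.0)          # e^-2
total = sum(zip_pmf(k, 2.0, 0.3) for k in range(60))  # pmf sums to 1
```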
Jones, Bleddyn
2009-06-01
Current technical radiotherapy advances aim to (a) better conform the dose contours to cancers and (b) reduce the integral dose exposure and thereby minimise unnecessary dose exposure to normal tissues unaffected by the cancer. Various types of conformal and intensity modulated radiotherapy (IMRT) using x-rays can achieve (a) while charged particle therapy (CPT)-using proton and ion beams-can achieve both (a) and (b), but at greater financial cost. Not only is the long term risk of radiation related normal tissue complications important, but so is the risk of carcinogenesis. Physical dose distribution plans can be generated to show the differences between the above techniques. IMRT is associated with a dose bath of low to medium dose due to fluence transfer: dose is effectively transferred from designated organs at risk to other areas; thus dose and risk are transferred. Many clinicians are concerned that there may be additional carcinogenesis many years after IMRT. CPT reduces the total energy deposition in the body and offers many potential advantages in terms of the prospects for better quality of life along with cancer cure. With C ions there is a tail of dose beyond the Bragg peaks, due to nuclear fragmentation; this is not found with protons. CPT generally uses higher linear energy transfer (which varies with particle and energy), which carries a higher relative risk of malignant induction, but also of cell death quantified by the relative biological effect concept, so at higher dose levels the frank development of malignancy should be reduced. Standard linear radioprotection models have been used to show a reduction in carcinogenesis risk of between two- and 15-fold depending on the CPT location. But the standard risk models make no allowance for fractionation and some have a dose limit at 4 Gy. Alternatively, tentative application of the linear quadratic model and Poissonian statistics to chromosome breakage and cell kill simultaneously allows estimation of
An Ontology-Based Framework for Modeling User Behavior
DEFF Research Database (Denmark)
Razmerita, Liana
2011-01-01
This paper focuses on the role of user modeling and semantically enhanced representations for personalization. This paper presents a generic Ontology-based User Modeling framework (OntobUMf), its components, and its associated user modeling processes. This framework models the behavior of the users … and classifies its users according to their behavior. The user ontology is the backbone of OntobUMf and has been designed according to the Information Management System Learning Information Package (IMS LIP). The user ontology includes a Behavior concept that extends the IMS LIP specification and defines … The results of this research may contribute to the development of other frameworks for modeling user behavior, other semantically enhanced user modeling frameworks, or other semantically enhanced information systems.
Starrfelt, Jostein; Liow, Lee Hsiang
2016-04-05
The fossil record is a rich source of information about biological diversity in the past. However, the fossil record is not only incomplete but has also inherent biases due to geological, physical, chemical and biological factors. Our knowledge of past life is also biased because of differences in academic and amateur interests and sampling efforts. As a result, not all individuals or species that lived in the past are equally likely to be discovered at any point in time or space. To reconstruct temporal dynamics of diversity using the fossil record, biased sampling must be explicitly taken into account. Here, we introduce an approach that uses the variation in the number of times each species is observed in the fossil record to estimate both sampling bias and true richness. We term our technique TRiPS (True Richness estimated using a Poisson Sampling model) and explore its robustness to violation of its assumptions via simulations. We then venture to estimate sampling bias and absolute species richness of dinosaurs in the geological stages of the Mesozoic. Using TRiPS, we estimate that 1936 (1543-2468) species of dinosaurs roamed the Earth during the Mesozoic. We also present improved estimates of species richness trajectories of the three major dinosaur clades: the sauropodomorphs, ornithischians and theropods, casting doubt on the Jurassic-Cretaceous extinction event and demonstrating that all dinosaur groups are subject to considerable sampling bias throughout the Mesozoic. © 2016 The Authors.
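The core idea of TRiPS, estimating detection from how often each observed species occurs, can be sketched as follows. This is a simplified version with a toy dataset (the real method also handles stage durations and gives interval estimates): counts per observed species are modeled as zero-truncated Poisson, the rate is found by solving the truncated-Poisson MLE condition by bisection, and observed richness is divided by the detection probability.

```python
import math

def trips_estimate(counts):
    """Sketch of the TRiPS idea: per-species observation counts are modeled
    as zero-truncated Poisson(lam). Solve the truncated-Poisson MLE
    condition lam / (1 - exp(-lam)) = mean(counts) by bisection, then
    divide observed richness by the detection probability 1 - exp(-lam)."""
    s_obs = len(counts)
    mean = sum(counts) / s_obs       # must exceed 1 for a solution to exist
    lo, hi = 1e-9, mean              # the left side is increasing in lam
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if mid / (1.0 - math.exp(-mid)) < mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2.0
    p_detect = 1.0 - math.exp(-lam)
    return lam, s_obs / p_detect

# Toy data: 10 observed species, seen 1-4 times each (mean count 1.8).
lam, richness = trips_estimate([1, 1, 2, 1, 3, 2, 1, 1, 4, 2])
```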
Directory of Open Access Journals (Sweden)
Mahsa Saadati
2012-01-01
Background: Epilepsy is a common, chronic neurological disorder that affects more than 40 million people worldwide. Epilepsy is characterized by interictal and ictal functional disturbances. The presence of interictal epileptiform discharges (IEDs) can help to confirm a clinical diagnosis of epilepsy, and their location and characteristics can help to identify the epileptogenic zone or suggest a particular epilepsy syndrome. The aim of this study is to determine the factors that affect IEDs. Materials and Methods: A Poisson marginal model was fitted to data from 60 epileptic patients who were referred to Shefa Neurological Research Center, Tehran, for Video-Electroencephalogram (V-EEG) monitoring from 2007 to 2011. The frequency of IEDs was assessed by visual analysis of interictal EEG samples over 2 h. Results: The results show that among age, epilepsy duration, gender, seizure frequency and two common anti-epileptic drugs (Valproic acid and Carbamazepine), only age and epilepsy duration had a statistically significant effect on IED frequency. Conclusion: Investigating the factors affecting IEDs is not only of theoretical importance but may also have clinical relevance, as understanding the evolution of interictal epileptogenesis may lead to the development of therapeutic interventions. Generalized estimating equations provide a valid statistical technique for studying factors that affect IEDs. This research demonstrates that epilepsy duration has a positive effect and age a negative effect on IEDs, meaning that IED frequency increases with epilepsy duration and decreases with increasing age. So for monitoring IEDs, we should consider both the age and the epilepsy duration of each patient.
Insider safeguards effectiveness model (ISEM). User's guide
International Nuclear Information System (INIS)
Boozer, D.D.; Engi, D.
1977-11-01
A comprehensive presentation of the ISEM computer program is provided. ISEM was designed to evaluate the effectiveness of a fixed-site facility safeguards system in coping with the theft, sabotage, or dispersal of radiological material by a single person who has authorized access to the facility. This insider may be aided by a group of insiders who covertly degrade sensor systems. Each ISEM run evaluates safeguards system performance for a particular scenario specified by the user. The dispatching of guards following alarms and their interaction with the insider are explicitly treated by the model
Combination of Bayesian Network and Overlay Model in User Modeling
Directory of Open Access Journals (Sweden)
Loc Nguyen
2009-12-01
The core of an adaptive system is the user model, containing personal information such as knowledge, learning styles and goals, which is requisite for the personalized learning process. There are many modeling approaches, for example stereotype, overlay and plan recognition, but they do not provide a solid method for reasoning from the user model. This paper introduces a statistical method that combines a Bayesian network with overlay modeling so that it is able to infer the user's knowledge from evidence collected during the user's learning process.
ModelMate - A graphical user interface for model analysis
Banta, Edward R.
2011-01-01
ModelMate is a graphical user interface designed to facilitate use of model-analysis programs with models. This initial version of ModelMate supports one model-analysis program, UCODE_2005, and one model software program, MODFLOW-2005. ModelMate can be used to prepare input files for UCODE_2005, run UCODE_2005, and display analysis results. A link to the GW_Chart graphing program facilitates visual interpretation of results. ModelMate includes capabilities for organizing directories used with the parallel-processing capabilities of UCODE_2005 and for maintaining files in those directories to be identical to a set of files in a master directory. ModelMate can be used on its own or in conjunction with ModelMuse, a graphical user interface for MODFLOW-2005 and PHAST.
Homogeneous Poisson structures
International Nuclear Information System (INIS)
Shafei Deh Abad, A.; Malek, F.
1993-09-01
We provide an algebraic definition for the Schouten product and give a decomposition for any homogeneous Poisson structure in any n-dimensional vector space. A large class of n-homogeneous Poisson structures in R^k is also characterized. (author). 4 refs
Fogolari, Federico; Corazza, Alessandra; Esposito, Gennaro
2015-04-05
The generalized Born model in the Onufriev, Bashford, and Case (Onufriev et al., Proteins: Struct Funct Genet 2004, 55, 383) implementation has emerged as one of the best compromises between accuracy and speed of computation. For simulations of nucleic acids, however, a number of issues should be addressed: (1) the generalized Born model is based on a linear model and the linearization of the reference Poisson-Boltzmann equation may be questioned for highly charged systems as nucleic acids; (2) although much attention has been given to potentials, solvation forces could be much less sensitive to linearization than the potentials; and (3) the accuracy of the Onufriev-Bashford-Case (OBC) model for nucleic acids depends on fine tuning of parameters. Here, we show that the linearization of the Poisson-Boltzmann equation has mild effects on computed forces, and that with optimal choice of the OBC model parameters, solvation forces, essential for molecular dynamics simulations, agree well with those computed using the reference Poisson-Boltzmann model. © 2015 Wiley Periodicals, Inc.
Ahdika, Atina; Lusiyana, Novyan
2017-02-01
The World Health Organization (WHO) has noted Indonesia as the country with the highest number of dengue (DHF) cases in Southeast Asia. There is no vaccine or specific treatment for DHF. One of the efforts that can be made by both the government and residents is prevention. In statistics, there are several methods to predict the number of DHF cases for use as a reference in preventing DHF. In this paper, a discrete time series model, specifically the INAR(1)-Poisson model, and a Markov prediction model (MPM) are used to predict the number of DHF patients in West Java, Indonesia. The result shows that MPM is the best model since it has the smallest values of MAE (mean absolute error) and MAPE (mean absolute percentage error).
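A minimal sketch of the INAR(1)-Poisson process used above (parameter values are illustrative, not fitted to the West Java data): the series evolves by binomial thinning of the previous count plus Poisson innovations, and one-step forecasts use the conditional mean α·x_t + λ, scored here by MAE.

```python
import math
import random

def rpois(lam, rng):
    """Poisson sampler (Knuth's product method; fine for small lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def thin(x, alpha, rng):
    """Binomial thinning: each of x individuals survives with prob alpha."""
    return sum(1 for _ in range(x) if rng.random() < alpha)

def simulate_inar1(alpha, lam, n, rng):
    """INAR(1)-Poisson: X_t = alpha o X_{t-1} + eps_t, eps_t ~ Poisson(lam).
    The stationary marginal is Poisson(lam / (1 - alpha))."""
    xs = [rpois(lam / (1.0 - alpha), rng)]   # start near the stationary mean
    for _ in range(n - 1):
        xs.append(thin(xs[-1], alpha, rng) + rpois(lam, rng))
    return xs

rng = random.Random(7)
alpha, lam = 0.5, 3.0
series = simulate_inar1(alpha, lam, 500, rng)

# One-step conditional-mean forecast and its MAE (computing MAPE would need
# a guard against zero counts, so only MAE is shown in this sketch).
preds = [alpha * x + lam for x in series[:-1]]
mae = sum(abs(p - a) for p, a in zip(preds, series[1:])) / len(preds)
```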
Zeroth Poisson Homology, Foliated Cohomology and Perfect Poisson Manifolds
Martínez-Torres, David; Miranda, Eva
2018-01-01
We prove that, for compact regular Poisson manifolds, the zeroth homology group is isomorphic to the top foliated cohomology group, and we give some applications. In particular, we show that, for regular unimodular Poisson manifolds, top Poisson and foliated cohomology groups are isomorphic. Inspired by the symplectic setting, we define what a perfect Poisson manifold is. We use these Poisson homology computations to provide families of perfect Poisson manifolds.
International Nuclear Information System (INIS)
Harwood, L.H.
1981-01-01
At MSU we have used the POISSON family of programs extensively for magnetic field calculations. In the presently super-saturated computer situation, reducing the run time for the program is imperative. Thus, a series of modifications have been made to POISSON to speed up convergence. Two of the modifications aim at having the first guess solution as close as possible to the final solution. The other two aim at increasing the convergence rate. In this discussion, a working knowledge of POISSON is assumed. The amount of new code and expected time saving for each modification is discussed
A condensed review of the intelligent user modeling of information retrieval system
International Nuclear Information System (INIS)
Choi, Kwang
2001-10-01
This study discussed theoretical aspects of user modeling, modeling cases of commercial systems, and elements that need consideration when constructing user models. The results of this study are: 1) Comprehensive analysis of system users is required to build a user model. 2) User information is collected from users directly and by inference. 3) A frame structure is suitable for building a user model. 4) A prototype user model is essential for building a user model and is based on previous user analysis. 5) A user model builder has interactive information collection, inference, flexibility and model-updating functions. 6) A user model builder has to reflect users' feedback
Scaling the Poisson Distribution
Farnsworth, David L.
2014-01-01
We derive the additive property of Poisson random variables directly from the probability mass function. An important application of the additive property to quality testing of computer chips is presented.
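The additive property can be checked numerically: the convolution of two Poisson pmfs matches the Poisson pmf with the summed mean term by term. A quick sketch (the rate values 2 and 3 are arbitrary):

```python
import math

def poisson_pmf(k, lam):
    """P(X = k) for X ~ Poisson(lam)."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

# If X ~ Poisson(2) and Y ~ Poisson(3) are independent, then X + Y ~ Poisson(5):
# the convolution of the two pmfs equals the Poisson(5) pmf term by term.
lam1, lam2 = 2.0, 3.0
max_err = max(
    abs(sum(poisson_pmf(j, lam1) * poisson_pmf(k - j, lam2) for j in range(k + 1))
        - poisson_pmf(k, lam1 + lam2))
    for k in range(15)
)
```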
On Poisson Nonlinear Transformations
Directory of Open Access Journals (Sweden)
Nasir Ganikhodjaev
2014-01-01
We construct the family of Poisson nonlinear transformations defined on the countable sample space of nonnegative integers and investigate their trajectory behavior. We have proved that these nonlinear transformations are regular.
Extended Poisson Exponential Distribution
Directory of Open Access Journals (Sweden)
Anum Fatima
2015-09-01
A new mixture of the Modified Exponential (ME) and Poisson distributions is introduced in this paper. Taking the maximum of Modified Exponential random variables when the sample size follows a zero-truncated Poisson distribution, we derive the new distribution, named the Extended Poisson Exponential distribution. This distribution possesses increasing and decreasing failure rates. The Poisson-Exponential, Modified Exponential and Exponential distributions are special cases of this distribution. We also investigate some mathematical properties of the distribution, along with information entropies and order statistics. The estimation of parameters is obtained using the maximum likelihood procedure. Finally, we illustrate a real-data application of our distribution.
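The compounding construction above can be sketched by simulation. For illustration the Modified Exponential base is replaced by a plain exponential, for which the maximum under a zero-truncated Poisson sample size has the closed-form CDF (e^{λF(x)} − 1)/(e^{λ} − 1); all parameter values are my own, not from the paper.

```python
import math
import random

def rpois(lam, rng):
    """Poisson sampler (Knuth's product method)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def rztpois(lam, rng):
    """Zero-truncated Poisson: reject zeros."""
    while True:
        k = rpois(lam, rng)
        if k > 0:
            return k

def sample_max(lam, rate, rng):
    """Maximum of N i.i.d. Exponential(rate) variables, N ~ ZT-Poisson(lam)."""
    return max(rng.expovariate(rate) for _ in range(rztpois(lam, rng)))

rng = random.Random(5)
lam, rate, x0 = 2.0, 1.0, 1.5
draws = [sample_max(lam, rate, rng) for _ in range(20000)]
ecdf = sum(d <= x0 for d in draws) / len(draws)

# Closed-form CDF of the maximum: (exp(lam * F(x)) - 1) / (exp(lam) - 1),
# with F the base CDF (here exponential); the empirical CDF should match it.
F = 1.0 - math.exp(-rate * x0)
cdf = (math.exp(lam * F) - 1.0) / (math.exp(lam) - 1.0)
```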
Franceries, X.; Doyon, B.; Chauveau, N.; Rigaud, B.; Celsis, P.; Morucci, J.-P.
2003-03-01
In electroencephalography (EEG) and event related potentials (ERP), localizing the electrical sources at the origin of scalp potentials (inverse problem) imposes, in a first step, the computation of scalp potential distribution from the simulation of sources (forward problem). This article proposes an alternative method for mimicking both the electrical and geometrical properties of the head, including brain, skull, and scalp tissue, with resistors. Two resistor mesh models have been designed to reproduce the three-sphere reference model (analytical model). The first one (spherical resistor mesh) closely mimics the geometrical and electrical properties of the analytical model. The second one (cubic resistor mesh) is designed to conveniently handle anatomical data from magnetic resonance imaging. Both models have been validated, in reference to the analytical solution calculated on the three-sphere model, by computing the magnification factor and the relative difference measure. Results suggest that the mesh models can be used as robust and user-friendly simulation or exploration tools in EEG/ERP.
Poisson branching point processes
International Nuclear Information System (INIS)
Matsuo, K.; Teich, M.C.; Saleh, B.E.A.
1984-01-01
We investigate the statistical properties of a special branching point process. The initial process is assumed to be a homogeneous Poisson point process (HPP). The initiating events at each branching stage are carried forward to the following stage. In addition, each initiating event independently contributes a nonstationary Poisson point process (whose rate is a specified function) located at that point. The additional contributions from all points of a given stage constitute a doubly stochastic Poisson point process (DSPP) whose rate is a filtered version of the initiating point process at that stage. The process studied is a generalization of a Poisson branching process in which random time delays are permitted in the generation of events. Particular attention is given to the limit in which the number of branching stages is infinite while the average number of added events per event of the previous stage is infinitesimal. In the special case when the branching is instantaneous this limit of continuous branching corresponds to the well-known Yule--Furry process with an initial Poisson population. The Poisson branching point process provides a useful description for many problems in various scientific disciplines, such as the behavior of electron multipliers, neutron chain reactions, and cosmic ray showers
Lipsitz, Stuart R.; Parzen, Michael; Molenberghs, Geert
1998-01-01
This article describes estimation of the cell probabilities in an R x C contingency table with ignorable missing data. Popular methods for maximizing the incomplete data likelihood are the EM-algorithm and the Newton--Raphson algorithm. Both of these methods require some modification of existing statistical software to get the MLEs of the cell probabilities as well as the variance estimates. We make the connection between the multinomial and Poisson likelihoods to show that the MLEs can be ob...
A user experience model for tangible interfaces for children
Reidsma, Dennis; van Dijk, Elisabeth M.A.G.; van der Sluis, Frans; Volpe, G; Camurri, A.; Perloy, L.M.; Nijholt, Antinus
2015-01-01
Tangible user interfaces allow children to take advantage of their experience in the real world when interacting with digital information. In this paper we describe a model for tangible user interfaces specifically for children that focuses mainly on the user experience during interaction and on how
Morrissey, M. L.
2009-12-01
A point process model for tropical rain rates is developed through the derivation of the third-moment expression for a combined point process model. The model is a superposition of a Neyman-Scott rectangular pulse model and a Poisson white noise process model, and it is scalable in the temporal dimension. The derivation of the third moment for this model allows the inclusion of the skewness parameter, which is necessary to adequately represent rainfall intensity. Analysis of the model fit to tropical tipping-bucket rain gauge data ranging in temporal scale from 5 minutes to one day indicates that it can adequately produce synthesized rainfall having the statistical characteristics of rain rate over the range of scales tested. Of special interest is the model's capability to accurately preserve the probability of extreme tropical rain rates at different scales. In addition to various hydrological applications, the model also has many potential uses in the field of meteorology, such as the study and development of radar rain rate algorithms for the tropics, which need to parameterize attenuation due to heavy rain.
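The superposition idea can be sketched numerically. Below is a hedged toy version of a Neyman-Scott rectangular pulse simulator, assuming exponential cell delays, durations, and intensities; all parameter values are invented for illustration and are not those fitted to the tropical rain gauge data.

```python
import math
import random

def poisson_sample(mean, rng):
    """Poisson variate via Knuth's product-of-uniforms method."""
    L, k, p = math.exp(-mean), 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def nsrp_series(t_max, dt, lam, mean_cells, beta, eta, mean_int, rng):
    """Storm origins arrive as a Poisson process of rate lam; each storm
    spawns Poisson(mean_cells) rain cells whose start times lag the
    origin by Exp(beta) delays; each cell is a rectangular pulse with
    Exp(eta) duration and exponential intensity of mean mean_int."""
    n_bins = int(t_max / dt)
    rain = [0.0] * n_bins
    t = rng.expovariate(lam)              # first storm origin
    while t < t_max:
        for _ in range(poisson_sample(mean_cells, rng)):
            start = t + rng.expovariate(beta)        # cell start lag
            dur = rng.expovariate(eta)               # cell duration
            depth = rng.expovariate(1.0 / mean_int)  # cell intensity
            for i in range(int(start / dt),
                           min(int((start + dur) / dt) + 1, n_bins)):
                rain[i] += depth
        t += rng.expovariate(lam)         # next storm origin
    return rain

rng = random.Random(7)
series = nsrp_series(t_max=240.0, dt=1.0, lam=0.05, mean_cells=3.0,
                     beta=0.5, eta=0.4, mean_int=2.0, rng=rng)
```

Summing the series at coarser bin widths gives the temporal-aggregation behavior that the third-moment fitting in the article is designed to preserve.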
Towards a new Role of Agent Technology in User Modelling
Lorenz, A.
2003-01-01
This paper discusses recent attempts to employ multi-agent technologies for user modelling purposes. Based on an analysis of recently implemented systems, this contribution provides a general agent definition representing a flexible implementation that employs highly specialized entities for user modelling tasks, and illustrates communication and cooperation approaches. In the overall solution, agent teams cooperate to fulfill the requirements of user modelling in a more appropriate way.
User-oriented and cognitive models of information retrieval
DEFF Research Database (Denmark)
Järvelin, Kalervo; Ingwersen, Peter
2010-01-01
The domain of user-oriented and cognitive IR is first discussed, followed by a discussion on the dimensions and types of models one may build for the domain. The focus of the present entry is on the models of user-oriented and cognitive IR, not on their empirical applications. Several models wit...
Modeling and clustering users with evolving profiles in usage streams
Zhang, Chongsheng
2012-09-01
Today, there is an increasing need for data stream mining technology to discover important patterns on the fly. Existing data stream models and algorithms commonly assume that users' records or profiles in data streams will not be updated or revised once they arrive. Nevertheless, in various applications such as Web usage, the records/profiles of the users can evolve over time. This kind of streaming data evolves in two forms: the streaming of tuples or transactions, as in traditional data streams, and, more importantly, the evolving of user records/profiles inside the streams. Such data streams pose difficulties for modeling and clustering when exploring users' behaviors. In this paper, we propose three models to summarize this kind of data stream: the batch model, the Evolving Objects (EO) model, and the Dynamic Data Stream (DDS) model. Through creating, updating, and deleting user profiles, these models summarize the behaviors of each user as a profile object. Based upon these models, clustering algorithms are employed to discover interesting user groups from the profile objects. We have evaluated all the proposed models on a large real-world data set, showing that the DDS model summarizes data streams with evolving tuples more efficiently and effectively, and provides a better basis for clustering users than the other two models. © 2012 IEEE.
Mani, Prashant; Tyagi, Chandra Shekhar; Srivastav, Nishant
2016-03-01
In this paper the analytical solution of the 2D Poisson equation for single-gate Fully Depleted SOI (FDSOI) MOSFETs is derived using a Green's function solution technique. The surface potential is calculated, and the threshold voltage of the device is minimized for low power consumption. Minimizing the threshold voltage suppresses the short-channel effects, and the resulting device is observed to be kink-free. The structure and characteristics of the single-gate FDSOI MOSFET were verified using MathCAD and Silvaco, respectively.
Macro System Model (MSM) User Guide, Version 1.3
Energy Technology Data Exchange (ETDEWEB)
Ruth, M.; Diakov, V.; Sa, T.; Goldsby, M.
2011-09-01
This user guide describes the macro system model (MSM). The MSM has been designed to allow users to analyze the financial, environmental, transitional, geographical, and R&D issues associated with the transition to a hydrogen economy. Basic end users can use the MSM to answer cross-cutting questions that were previously difficult to answer in a consistent and timely manner due to various assumptions and methodologies among different models.
User-Oriented and Cognitive Models of Information Retrieval
DEFF Research Database (Denmark)
Ingwersen, Peter; Järvelin, Kalervo; Skov, Mette
2017-01-01
The domain of user-oriented and cognitive information retrieval (IR) is first discussed, followed by a discussion on the dimensions and types of models one may build for the domain. The focus of the present entry is on the models of user-oriented and cognitive IR, not on their empirical...
Modeling characteristics of location from user photos
V. Kumar (Vikas); S. Bakhshi (Saeideh); L. Kennedy (Lyndon); D.A. Shamma (David)
2017-01-01
In the past decade, location-based services have grown through geo-tagging and place-tagging. Proliferation of GPS-enabled mobile devices further enabled exponential growth in geotagged user content. On the other hand, location-based applications harness the abundance of geo-tagged
Modeling User Behavior and Attention in Search
Huang, Jeff
2013-01-01
In Web search, query and click log data are easy to collect but they fail to capture user behaviors that do not lead to clicks. As search engines reach the limits inherent in click data and are hungry for more data in a competitive environment, mining cursor movements, hovering, and scrolling becomes important. This dissertation investigates how…
AMEM-ADL Polymer Migration Estimation Model User's Guide
The user's guide of the Arthur D. Little Polymer Migration Estimation Model (AMEM) provides the information on how the model estimates the fraction of a chemical additive that diffuses through polymeric matrices.
A New User Segmentation Model for E-Government
Ran Tang; Zhenji Zhang; Xiaolan Guan; Lida Wang
2013-01-01
E-government in China has entered the development stage of personalized services, and user segmentation has become an urgent demand. On the basis of systematic interpretation of e-government development stages, in this article, the authors introduce CRM and customer segmentation concept into e-government areas, construct e-government user segmentation model, and obtain user segmentation results by empirical analysis. Comparing with existing segmentation methods based on experience, because of...
Model driven development of user interface prototypes
DEFF Research Database (Denmark)
Störrle, Harald
2010-01-01
Many approaches to interface development apply only to isolated aspects of the development of user interfaces (UIs), e.g., exploration during the early phases, design of visual appearance, or implementation in some technology. In this paper we explore an _integrated_ approach to incorporate the whole UI development life cycle, connect all stakeholders involved, and support a wide range of levels of granularity and abstraction. This is achieved by using Window/Event-Diagrams (WEDs), a UI specification notation based on UML 2 state machines. It affords closer collaboration between different user groups like graphic designers and software developers by integrating traditional pen-and-paper based methods with contemporary MDA-based CASE tools. We have implemented our approach in the Advanced Interaction Design Environment (AIDE), an application to support WEDs.
End Users and ERP Systems' Success. Three Models
Directory of Open Access Journals (Sweden)
Gianina MIHAI
2017-06-01
Information systems (IS) have an enormous impact on organizations, individual work, and performance in general. As a result, many research works in the field of IS are focused on the interrelationship between individual performance and IS performance. During the last 20 to 30 years many models have been developed and tested by researchers. Their main objective was to investigate IS success and user performance in different environments. Therefore, a number of models appeared, their goal being the study of the success, usefulness, end-user adoption, and utilization of IS, and other user- and IS-related aspects in different organizations. This research paper presents three of the most important models developed in the specialized literature, which deal with measuring IS success and end-user adoption of IS: the TAM model, the D&M model, and the TTF model. The research also provides an overview of some studies that have applied these models in the field of ERP systems.
Artificial intelligence techniques for modeling database user behavior
Tanner, Steve; Graves, Sara J.
1990-01-01
The design and development of the adaptive modeling system is described. This system models how a user accesses a relational database management system in order to improve its performance by discovering user access patterns. In the current system, these patterns are used to improve the user interface and may be used to speed data retrieval, support query optimization, and support a more flexible data representation. The system models both syntactic and semantic information about the user's access and employs both procedural and rule-based logic to manipulate the model.
VMTL: a language for end-user model transformation
DEFF Research Database (Denmark)
Acretoaie, Vlad; Störrle, Harald; Strüber, Daniel
2016-01-01
Model transformation is a key enabling technology of Model-Driven Engineering (MDE). Existing model transformation languages are shaped by and for MDE practitioners—a user group with needs and capabilities which are not necessarily characteristic of modelers in general. Consequently, these languages are largely ill-equipped for adoption by end-user modelers in areas such as requirements engineering, business process management, or enterprise architecture. We aim to introduce a model transformation language addressing the skills and requirements of end-user modelers. With this contribution, we hope to broaden the application scope of model transformation and MDE technology in general. We discuss the profile of end-user modelers and propose a set of design guidelines for model transformation languages addressing them. We then introduce Visual Model Transformation Language (VMTL) following...
HIV prevention among drug and alcohol users: models of ...
African Journals Online (AJOL)
HIV Prevention Among Drug and Alcohol Users: Models of Intervention in Kenya. Clement S. Deveau, Academy for Educational Development (AED), Capable Partners Program (CAP), Nairobi, Kenya. Abstract: The spread of HIV among drug and alcohol users, as a high-risk group, is a significant ...
Site Structure and User Navigation: Models, Measures and Methods
Herder, E.; van Dijk, Elisabeth M.A.G.; Chen, S.Y; Magoulas, G.D.
2004-01-01
The analysis of the structure of Web sites and patterns of user navigation through these sites is gaining attention from different disciplines, as it enables unobtrusive discovery of user needs. In this chapter we give an overview of models, measures, and methods that can be used for analysis
Calame, Jeffrey; Chernyavskiy, Igor; Ancona, Mario; Meyer, David
Polarization-gradient profiling of AlxGa1-xN/GaN heterostructures in the vertical (depth) direction, achieved by deliberate spatial tailoring of the aluminum concentration profile, can be used to control the spatial structure of the conducting electron gas in high electron mobility transistors. In particular, the typical two-dimensional electron gas of abrupt heterostructures can exhibit a more three-dimensional distribution in graded structures. This offers the possibility of improved device linearity through deliberate vertical heterostructure engineering, which can minimize or compensate for various scattering mechanisms that contribute to nonlinearity. Schrödinger-Poisson modeling (i.e., the Hartree approximation) is used to study the electron density profiles that result from such deliberate grading, and how those profiles evolve with the application of biasing vertical electric fields across the heterostructure. Implications of the results on device linearity will be discussed. Comparisons between the electron density profiles predicted by the Schrödinger-Poisson modeling and those obtained by density-gradient theory will be made in selected examples. Work supported by the U.S. Office of Naval Research.
DEFF Research Database (Denmark)
Andersen, Anders Holst; Korsgaard, Inge Riis; Jensen, Just
2002-01-01
In this paper, we consider selection based on the best predictor of animal additive genetic values in Gaussian linear mixed models, threshold models, Poisson mixed models, and log normal frailty models for survival data (including models with time-dependent covariates with associated fixed...
User experience & user-centered design : Health games user research model
Jef Folkerts
2014-01-01
This poster sketches the outlines of a theoretical research framework to assess whether and on what grounds certain behavioral effects may be attributed to particular game mechanics and game play aspects. It is founded on the Elaboration Likelihood Model of Persuasion (ELM), which is quite
Wang, Chenggang; Jiang, Baofa; Fan, Jingchun; Wang, Furong; Liu, Qiyong
2014-01-01
The aim of this study is to develop a model that correctly identifies and quantifies the relationship between dengue and meteorological factors in Guangzhou, China. By cross-correlation analysis, meteorological variables and their lag effects were determined. According to the epidemic characteristics of dengue in Guangzhou, those statistically significant variables were modeled by a zero-inflated Poisson regression model. The number of dengue cases and minimum temperature at 1-month lag, along with average relative humidity at 0- to 1-month lag were all positively correlated with the prevalence of dengue fever, whereas wind velocity and temperature in the same month along with rainfall at 2 months' lag showed negative association with dengue incidence. Minimum temperature at 1-month lag and wind velocity in the same month had a greater impact on the dengue epidemic than other variables in Guangzhou.
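The zero-inflated Poisson distribution underlying the regression above is easy to write down explicitly. A minimal numerical check follows; the rate and zero-inflation probability are illustrative values, not the fitted dengue coefficients.

```python
import math

def zip_pmf(k, lam, pi):
    """Zero-inflated Poisson pmf: with probability pi the count is a
    structural zero; otherwise it is drawn from Poisson(lam)."""
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    return pi * (k == 0) + (1 - pi) * poisson

lam, pi = 3.0, 0.2
# the pmf sums to 1 over a generous support
total = sum(zip_pmf(k, lam, pi) for k in range(60))
# the mean of a ZIP variable is (1 - pi) * lam
mean = sum(k * zip_pmf(k, lam, pi) for k in range(60))
```

In the regression version, the constant lam is replaced by exp(x'beta) and the zero-inflation probability by a logistic function of covariates, which is how lagged temperature, humidity, and wind enter the dengue model.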
Research on potential user identification model for electric energy substitution
Xia, Huaijian; Chen, Meiling; Lin, Haiying; Yang, Shuo; Miao, Bo; Zhu, Xinzhi
2018-01-01
The implementation of energy substitution plays an important role in promoting energy conservation and emission reduction in China. An energy service management platform for alternative-energy users takes enterprise production value, product output, and coal and other energy consumption from its data as potential evaluation indices. A principal component analysis model simplifies these into composite indices that retain the information of the original variables, and a fuzzy clustering model flexibly classifies users within the same industry. Based on the composite indices and the user clusters, a particle swarm optimization neural network classification model is constructed to predict users' electric energy substitution potential. The results of an example show that the model can effectively predict users' substitution potential.
Dynamic Trust Models between Users over Social Networks
2016-03-30
AFRL-AFOSR-JP-TR-2016-0039: Dynamic Trust Models between Users over Social Networks. Kazumi Saito, University of Shizuoka. Final report, 04/05/2016, covering the period 2013 to 30-03-2016, contract FA2386-13-1. ... the state-of-the-art hTrust and its variants for solving the trust-link prediction problem. In addition to the above main research results, we developed a...
Precipitation-runoff modeling system; user's manual
Leavesley, G.H.; Lichty, R.W.; Troutman, B.M.; Saindon, L.G.
1983-01-01
The concepts, structure, theoretical development, and data requirements of the precipitation-runoff modeling system (PRMS) are described. The precipitation-runoff modeling system is a modular-design, deterministic, distributed-parameter modeling system developed to evaluate the impacts of various combinations of precipitation, climate, and land use on streamflow, sediment yields, and general basin hydrology. Basin response to normal and extreme rainfall and snowmelt can be simulated to evaluate changes in water balance relationships, flow regimes, flood peaks and volumes, soil-water relationships, sediment yields, and groundwater recharge. Parameter-optimization and sensitivity analysis capabilities are provided to fit selected model parameters and evaluate their individual and joint effects on model output. The modular design provides a flexible framework for continued model system enhancement and hydrologic modeling research and development. (Author's abstract)
Fractional Poisson Fields and Martingales
Aletti, Giacomo; Leonenko, Nikolai; Merzbach, Ely
2018-01-01
We present new properties for the Fractional Poisson process (FPP) and the Fractional Poisson field on the plane. A martingale characterization for FPPs is given. We extend this result to Fractional Poisson fields, obtaining some other characterizations. The fractional differential equations are studied. We consider a more general Mixed-Fractional Poisson process and show that this process is the stochastic solution of a system of fractional differential-difference equations. Finally, we give some simulations of the Fractional Poisson field on the plane.
HYDROCARBON SPILL SCREENING MODEL (HSSM) VOLUME 1: USER'S GUIDE
This users guide describes the Hydrocarbon Spill Screening Model (HSSM). The model is intended for simulation of subsurface releases of light nonaqueous phase liquids (LNAPLs). The model consists of separate modules for LNAPL flow through the vadose zone, spreading in the capil...
USERS MANUAL: LANDFILL GAS EMISSIONS MODEL - VERSION 2.0
The document is a user's guide for a computer model, Version 2.0 of the Landfill Gas Emissions Model (LandGEM), for estimating air pollution emissions from municipal solid waste (MSW) landfills. The model can be used to estimate emission rates for methane, carbon dioxide, nonmet...
User verification of the FRBR conceptual model
Pisanski, Jan; Žumer, Maja
2015-01-01
Purpose - The paper aims to build on a previous study of mental models of the bibliographic universe, which found that the Functional Requirements for Bibliographic Records (FRBR) conceptual model is intuitive. Design/methodology/approach - A total of 120 participants were presented with a list of bibliographic entities and six graphs each. They were asked to choose the graph they thought best represented the relationships between the entities described. Findings - The graph based on the FRBR ...
HTGR Application Economic Model Users' Manual
Energy Technology Data Exchange (ETDEWEB)
A.M. Gandrik
2012-01-01
The High Temperature Gas-Cooled Reactor (HTGR) Application Economic Model was developed at the Idaho National Laboratory for the Next Generation Nuclear Plant Project. The HTGR Application Economic Model calculates either the required selling price of power and/or heat for a given internal rate of return (IRR) or the IRR for power and/or heat being sold at the market price. The user can generate these economic results for a range of reactor outlet temperatures; with and without power cycles, including either a Brayton or Rankine cycle; for the demonstration plant, first-of-a-kind, or nth-of-a-kind project phases; for up to 16 reactor modules; and for module ratings of 200, 350, or 600 MWt. This user's manual contains the mathematical models and operating instructions for the HTGR Application Economic Model. Instructions, screenshots, and examples are provided to guide the user through the model. It was designed for users who are familiar with the HTGR design, Excel, and engineering economics. Modification of the HTGR Application Economic Model should only be performed by users familiar with the HTGR and its applications, Excel, and Visual Basic.
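The IRR computation the manual refers to can be sketched generically. This is not the model's actual Excel/Visual Basic implementation, only a minimal NPV/IRR bisection with made-up cash flows.

```python
def npv(rate, cashflows):
    """Net present value of cash flows occurring at t = 0, 1, 2, ..."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.9, hi=10.0, tol=1e-12):
    """Internal rate of return via bisection on the NPV sign change."""
    f_lo = npv(lo, cashflows)
    for _ in range(200):
        mid = (lo + hi) / 2.0
        f_mid = npv(mid, cashflows)
        if abs(f_mid) < tol:
            return mid
        if (f_lo > 0) == (f_mid > 0):
            lo, f_lo = mid, f_mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# illustrative project: 100 invested, 60 returned in each of two years
rate = irr([-100.0, 60.0, 60.0])
```

Solving for a required selling price, as the model does, is the inverse problem: scale the revenue cash flows until the IRR equals the target.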
Do recommender systems benefit users? a modeling approach
Yeung, Chi Ho
2016-04-01
Recommender systems are present in many web applications to guide purchase choices. They increase sales and benefit sellers, but whether they benefit customers by providing relevant products remains less explored. While in many cases the recommended products are relevant to users, in other cases customers may be tempted to purchase the products only because they are recommended. Here we introduce a model to examine the benefit of recommender systems for users, and find that recommendations from the system can be equivalent to random draws if one always follows the recommendations and seldom purchases according to his or her own preference. Nevertheless, with sufficient information about user preferences, recommendations become accurate and an abrupt transition to this accurate regime is observed for some of the studied algorithms. On the other hand, we find that high estimated accuracy indicated by common accuracy metrics is not necessarily equivalent to high real accuracy in matching users with products. This disagreement between estimated and real accuracy serves as an alarm for operators and researchers who evaluate recommender systems merely with accuracy metrics. We tested our model with a real dataset and observed similar behaviors. Finally, a recommendation approach with improved accuracy is suggested. These results imply that recommender systems can benefit users, but the more frequently a user purchases the recommended products, the less relevant the recommended products are in matching user taste.
Building integral projection models: a user's guide.
Rees, Mark; Childs, Dylan Z; Ellner, Stephen P
2014-05-01
In order to understand how changes in individual performance (growth, survival or reproduction) influence population dynamics and evolution, ecologists are increasingly using parameterized mathematical models. For continuously structured populations, where some continuous measure of individual state influences growth, survival or reproduction, integral projection models (IPMs) are commonly used. We provide a detailed description of the steps involved in constructing an IPM, explaining how to: (i) translate your study system into an IPM; (ii) implement your IPM; and (iii) diagnose potential problems with your IPM. We emphasize how the study organism's life cycle, and the timing of censuses, together determine the structure of the IPM kernel and important aspects of the statistical analysis used to parameterize an IPM using data on marked individuals. An IPM based on population studies of Soay sheep is used to illustrate the complete process of constructing, implementing and evaluating an IPM fitted to sample data. We then look at very general approaches to parameterizing an IPM, using a wide range of statistical techniques (e.g. maximum likelihood methods, generalized additive models, nonparametric kernel density estimators). Methods for selecting models for parameterizing IPMs are briefly discussed. We conclude with key recommendations and a brief overview of applications that extend the basic model. The online Supporting Information provides commented R code for all our analyses. © 2014 The Authors. Journal of Animal Ecology published by John Wiley & Sons Ltd on behalf of British Ecological Society.
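The construct-then-iterate recipe above can be condensed into a toy example. The guide's own code is in R; below is a Python re-sketch with invented vital-rate functions and parameters, using midpoint-rule discretization of the kernel and power iteration for the asymptotic growth rate.

```python
import math

def gauss(x, mu, sd):
    """Normal density, used for the growth and offspring-size kernels."""
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

# midpoint-rule mesh over the size range
n, lo, hi = 60, 0.0, 10.0
h = (hi - lo) / n
z = [lo + h * (i + 0.5) for i in range(n)]

def survival(x):          # logistic survival (invented parameters)
    return 1.0 / (1.0 + math.exp(-(x - 3.0)))

def growth(x1, x):        # distribution of size next year, given size x now
    return gauss(x1, 1.0 + 0.8 * x, 0.8)

def fecundity(x):         # offspring per individual of size x
    return 0.1 * x

def offspring(x1):        # offspring size distribution
    return gauss(x1, 2.0, 0.7)

# discretized kernel: K[i][j] maps density at size z[j] to size z[i]
K = [[h * (survival(z[j]) * growth(z[i], z[j])
           + fecundity(z[j]) * offspring(z[i]))
      for j in range(n)] for i in range(n)]

# power iteration: the dominant eigenvalue is the growth rate lambda
v, lam, prev = [1.0] * n, 0.0, 0.0
for _ in range(400):
    w = [sum(K[i][j] * v[j] for j in range(n)) for i in range(n)]
    prev, lam = lam, sum(w) / sum(v)
    v = [x / lam for x in w]
```

The converged v is the stable size distribution; diagnosing mesh resolution and eviction (probability mass leaking past the size limits) follows the checks described in the guide.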
Rusakov, Oleg; Laskin, Michael
2017-06-01
We consider a stochastic model of price changes in real estate markets. We suppose that changes in a book of prices occur at the jump points of a Poisson process with random intensity, i.e., the moments of change follow a random process of Cox-process type. We calculate cumulative mathematical expectations and variances for the random intensity of this point process. When the intensity process is a martingale, the cumulative variance grows linearly. We statistically process a number of observations of real estate prices and accept the hypothesis of linear growth for the estimates of both the cumulative average and the cumulative variance, for both input and output prices recorded in the book of prices.
Bijma, K; Engberts, J B F N
This paper describes how the theory of the "dressed micelle", which is based on the nonlinear Poisson-Boltzmann equation, can be used to calculate a number of thermodynamic quantities for micellization of sodium p-alkylbenzenesulphonates. From the Gibbs energy of micellization, the enthalpy of
Discrete Feature Model (DFM) User Documentation
International Nuclear Information System (INIS)
Geier, Joel
2008-06-01
This manual describes the Discrete-Feature Model (DFM) software package for modelling groundwater flow and solute transport in networks of discrete features. A discrete-feature conceptual model represents fractures and other water-conducting features around a repository as discrete conductors surrounded by a rock matrix which is usually treated as impermeable. This approximation may be valid for crystalline rocks such as granite or basalt, which have very low permeability if macroscopic fractures are excluded. A discrete feature is any entity that can conduct water and permit solute transport through bedrock, and can be reasonably represented as a piecewise-planar conductor. Examples of such entities may include individual natural fractures (joints or faults), fracture zones, and disturbed-zone features around tunnels (e.g. blasting-induced fractures or stress-concentration induced 'onion skin' fractures around underground openings). In a more abstract sense, the effectively discontinuous nature of pathways through fractured crystalline bedrock may be idealized as discrete, equivalent transmissive features that reproduce large-scale observations, even if the details of connective paths (and unconnected domains) are not precisely known. A discrete-feature model explicitly represents the fundamentally discontinuous and irregularly connected nature of systems of such systems, by constraining flow and transport to occur only within such features and their intersections. Pathways for flow and solute transport in this conceptualization are a consequence not just of the boundary conditions and hydrologic properties (as with continuum models), but also the irregularity of connections between conductive/transmissive features. The DFM software package described here is an extensible code for investigating problems of flow and transport in geological (natural or human-altered) systems that can be characterized effectively in terms of discrete features. With this software, the
Designing user models in a virtual cave environment
Energy Technology Data Exchange (ETDEWEB)
Brown-VanHoozer, S. [Argonne National Lab., Idaho Falls, ID (United States); Hudson, R. [Argonne National Lab., IL (United States); Gokhale, N. [Madge Networks, San Jose, CA (United States)
1995-12-31
In this paper, the results of a first study into the use of virtual reality for human factors studies and the design of simple and complex models of control systems, components, and processes are described. The objective was to design a model in a virtual environment that would reflect more characteristics of the user's mental model of a system and fewer of the designer's. The technology of a CAVE(TM) virtual environment and the methodology of Neuro Linguistic Programming were employed in this study.
XRLSim model specifications and user interfaces
Energy Technology Data Exchange (ETDEWEB)
Ng, L.C.; Gavel, D.T.; Shectman, R.M.; Sholl, P.L.; Woodruff, J.P.
1989-04-01
This report summarizes our FY88 engineering development effort of XRLSim --- an Ada-based computer program developed to provide a realistic simulation of an x-ray laser weapon platform. XRLSim can be used to assess platform requirements in track handoff, target acquisition, tracking, and pointing as well as engagement time line. Development effort continues in FY89 to enhance the model fidelity of the platform and to improve the performance of the tracking algorithms. Simulated targets available in XRLSim include midcourse reentry vehicles and orbiting satellites. At this time, the current version of XRLSim can only simulate a one-on-one engagement scenario. 8 refs., 26 figs.
Natural Poisson structures of nonlinear plasma dynamics
International Nuclear Information System (INIS)
Kaufman, A.N.
1982-06-01
Hamiltonian field theories, for models of nonlinear plasma dynamics, require a Poisson bracket structure for functionals of the field variables. These are presented, applied, and derived for several sets of field variables: coherent waves, incoherent waves, particle distributions, and multifluid electrodynamics. Parametric coupling of waves and plasma yields concise expressions for ponderomotive effects (in kinetic and fluid models) and for induced scattering
Estimation of Poisson noise in spatial domain
Švihlík, Jan; Fliegel, Karel; Vítek, Stanislav; Kukal, Jaromír; Krbcová, Zuzana
2017-09-01
This paper deals with modeling of astronomical images in the spatial domain. We consider astronomical light images contaminated by dark current, which is modeled by a Poisson random process. The dark frame image maps the thermally generated charge of the CCD sensor. In this paper, we solve the problem of the addition of two Poisson random variables. First, a noise analysis of images obtained from the astronomical camera is performed, which allows estimating the parameters of the Poisson probability mass function in every pixel of the acquired dark frame. Then the resulting distributions of the light image can be found. Once the distributions of the light image pixels are identified, the denoising algorithm can be applied. The performance of the Bayesian approach in the spatial domain is compared with the direct approach based on the method of moments and dark frame subtraction.
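The abstract's central fact, that the sum of two independent Poisson variables is again Poisson with the summed rate, and the corresponding method-of-moments estimate can be sketched as follows (a minimal illustration with hypothetical rates, not the authors' code):

```python
import numpy as np

# Illustrative sketch: the sum of two independent Poisson variables is again
# Poisson, with rate equal to the sum of the rates. Rates here are invented.
rng = np.random.default_rng(0)
lam_dark, lam_light = 3.0, 7.0          # hypothetical dark-current and light rates
n = 200_000

dark = rng.poisson(lam_dark, n)          # thermally generated charge (dark frame)
light = rng.poisson(lam_light, n)        # photon-induced signal
observed = dark + light                  # what the sensor records

# Method of moments: for a Poisson variable the mean equals the variance,
# so either moment estimates the combined rate lam_dark + lam_light = 10.
rate_from_mean = observed.mean()
rate_from_var = observed.var()

print(round(rate_from_mean, 1), round(rate_from_var, 1))  # both close to 10.0
```

Because mean and variance estimate the same rate, comparing them is also a quick diagnostic of whether pixel counts are actually Poisson.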
Application of Negative Binomial Regression to Overcome Overdispersion in Poisson Regression
Directory of Open Access Journals (Sweden)
PUTU SUSAN PRADAWATI
2013-09-01
Poisson regression is used to analyze count data that follow a Poisson distribution. Poisson regression analysis requires equidispersion, in which the mean of the response variable equals its variance. However, deviations occur in which the variance of the response variable is greater than the mean; this is called overdispersion. If overdispersion is present and Poisson regression analysis is used, the standard errors obtained will be underestimated. Negative binomial regression can handle overdispersion because it contains a dispersion parameter. On simulated data exhibiting overdispersion under the Poisson regression model, the negative binomial regression model was found to perform better than the Poisson regression model.
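The overdispersion mechanism described above can be shown with a short simulation (hypothetical parameters, not the study's data): counts drawn from a gamma-mixed Poisson, i.e. a negative binomial, have variance exceeding the mean, which is what causes Poisson standard errors to be understated.

```python
import numpy as np

# Illustrative sketch of overdispersion. Parameters mu and k are invented.
rng = np.random.default_rng(42)
mu, k = 5.0, 2.0                         # mean and NB dispersion parameter
n = 100_000

# Negative binomial as a gamma-mixed Poisson: lambda_i ~ Gamma(k, mu/k),
# then counts_i ~ Poisson(lambda_i). Marginal variance is mu + mu**2 / k.
lam = rng.gamma(shape=k, scale=mu / k, size=n)
counts = rng.poisson(lam)

# Under equidispersion the variance/mean ratio is ~1; here it is ~1 + mu/k = 3.5,
# so a plain Poisson fit would understate standard errors by a factor ~sqrt(3.5).
dispersion = counts.var() / counts.mean()
print(round(dispersion, 2))
```

The dispersion parameter `k` is exactly what negative binomial regression estimates in addition to the regression coefficients.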
Solid Waste Projection Model: Database User's Guide
International Nuclear Information System (INIS)
Blackburn, C.L.
1993-10-01
The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC) specifically to address Hanford solid waste management issues. This document is one of a set of documents supporting the SWPM system and providing instructions in the use and maintenance of SWPM components. This manual contains instructions for using Version 1.4 of the SWPM database: system requirements and preparation, entering and maintaining data, and performing routine database functions. This document supports only those operations which are specific to SWPM database menus and functions and does not provide instruction in the use of Paradox, the database management system in which the SWPM database is established
Modeling mutual feedback between users and recommender systems
Zeng, An; Yeung, Chi Ho; Medo, Matúš; Zhang, Yi-Cheng
2015-07-01
Recommender systems daily influence our decisions on the Internet. While considerable attention has been given to issues such as recommendation accuracy and user privacy, the long-term mutual feedback between a recommender system and the decisions of its users has been neglected so far. We propose here a model of network evolution which allows us to study the complex dynamics induced by this feedback, including the hysteresis effect which is typical for systems with non-linear dynamics. Despite the popular belief that recommendation helps users to discover new things, we find that the long-term use of recommendation can contribute to the rise of extremely popular items and thus ultimately narrow the user choice. These results are supported by measurements of the time evolution of item popularity inequality in real systems. We show that this adverse effect of recommendation can be tamed by sacrificing part of short-term recommendation accuracy.
Modeling and evaluating user behavior in exploratory visual analysis
Energy Technology Data Exchange (ETDEWEB)
Reda, Khairi; Johnson, Andrew E.; Papka, Michael E.; Leigh, Jason
2016-07-25
Empirical evaluation methods for visualizations have traditionally focused on assessing the outcome of the visual analytic process as opposed to characterizing how that process unfolds. There are only a handful of methods that can be used to systematically study how people use visualizations, making it difficult for researchers to capture and characterize the subtlety of cognitive and interaction behaviors users exhibit during visual analysis. To validate and improve visualization design, however, it is important for researchers to be able to assess and understand how users interact with visualization systems under realistic scenarios. This paper presents a methodology for modeling and evaluating the behavior of users in exploratory visual analysis. We model visual exploration using a Markov chain process comprising transitions between mental, interaction, and computational states. These states and the transitions between them can be deduced from a variety of sources, including verbal transcripts, videos and audio recordings, and log files. This model enables the evaluator to characterize the cognitive and computational processes that are essential to insight acquisition in exploratory visual analysis, and reconstruct the dynamics of interaction between the user and the visualization system. We illustrate this model with two exemplar user studies, and demonstrate the qualitative and quantitative analytical tools it affords.
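The Markov chain formulation described above can be sketched as follows (the state names and transition probabilities are invented for illustration; the paper derives them from transcripts, recordings, and log files). The stationary distribution gives the long-run share of time a user spends in each state:

```python
import numpy as np

# Minimal sketch of visual exploration as a Markov chain over hypothetical
# mental/interaction/computation states (values assumed, not from the paper).
states = ["think", "interact", "compute"]
P = np.array([                 # row-stochastic transition matrix
    [0.5, 0.4, 0.1],           # from "think"
    [0.3, 0.4, 0.3],           # from "interact"
    [0.6, 0.2, 0.2],           # from "compute"
])

# Stationary distribution: left eigenvector of P for eigenvalue 1, normalised.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()

for s, p in zip(states, pi):
    print(f"{s}: {p:.3f}")
```

In an evaluation, comparing stationary distributions (or transition matrices) across participants or interface variants quantifies how the analytic process, not just its outcome, differs.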
Earthquake Early Warning Beta Users: Java, Modeling, and Mobile Apps
Strauss, J. A.; Vinci, M.; Steele, W. P.; Allen, R. M.; Hellweg, M.
2014-12-01
Earthquake Early Warning (EEW) is a system that can provide a few to tens of seconds of warning prior to ground shaking at a user's location. The goal and purpose of such a system is to reduce, or minimize, the damage, costs, and casualties resulting from an earthquake. A demonstration earthquake early warning system (ShakeAlert) is undergoing testing in the United States by the UC Berkeley Seismological Laboratory, Caltech, ETH Zurich, University of Washington, the USGS, and beta users in California and the Pacific Northwest. The beta users receive earthquake information very rapidly in real time and are providing feedback on their experiences of performance and potential uses within their organizations. Beta user interactions allow the ShakeAlert team to discern: which alert delivery options are most effective, what changes would make the UserDisplay more useful in a pre-disaster situation, and most importantly, what actions users plan to take for various scenarios. Actions could include: personal safety approaches, such as drop, cover, and hold on; automated processes and procedures, such as opening elevator or fire station doors; or situational awareness. Users are beginning to determine which policy and technological changes may need to be enacted, along with the funding requirements to implement their automated controls. The use of models and mobile apps is beginning to augment the basic Java desktop applet. Modeling allows beta users to test their early warning responses against various scenarios without having to wait for a real event. Mobile apps are also changing the possible response landscape, providing other avenues for people to receive information. All of these combine to improve business continuity and resiliency.
Modeling Users, Context and Devices for Ambient Assisted Living Environments
Castillejo, Eduardo; Almeida, Aitor; López-de-Ipiña, Diego; Chen, Liming
2014-01-01
The participation of users within AAL environments is increasing thanks to the capabilities of current wearable devices. Furthermore, considering users' preferences, context conditions and devices' capabilities helps smart environments personalize services and resources for them. Being aware of the different characteristics of the entities participating in these situations is vital for reaching the main goals of the corresponding systems efficiently. To collect information from these entities, it is necessary to design formal models which help designers organize and give meaning to the gathered data. In this paper, we analyze several literature solutions for modeling users, context and devices, considering different approaches in the Ambient Assisted Living domain. We also highlight ongoing standardization work in this area, discuss the techniques used and the characteristics modeled, and weigh the advantages and drawbacks of each approach before drawing conclusions about the reviewed works. PMID:24643006
Evaluating intersectoral collaboration: a model for assessment by service users
Directory of Open Access Journals (Sweden)
Bengt Ahgren
2009-02-01
Introduction: DELTA was launched as a project in 1997 to improve intersectoral collaboration in the rehabilitation field. In 2005 DELTA was transformed into a local association for financial co-ordination between the institutions involved. Based on a study of the DELTA service users, the purpose of this article is to develop and validate a model that can be used to assess the integration of welfare services from the perspective of the service users. Theory: The foundation of integration is a well-functioning structure of integration. Without such structural conditions, it is difficult to develop a process of integration that combines the resources and competences of the collaborating organisations to create services advantageous for the service users. In this way, both the structure and the process contribute to the outcome of integration. Method: The study was carried out as a retrospective cross-sectional survey over two weeks, including all current service users of DELTA. The questionnaire contained 32 questions, derived from the theoretical framework and research on service users, capturing perceptions of integration structure, process and outcome. Ordinal scales and open questions were used for the assessment. Results: The survey had a response rate of 82% and no serious biases of the results were detected. The study shows that the users of the rehabilitation services perceived the services as well integrated, relevant and adapted to their needs. The assessment model was tested for reliability and validity and a few modifications were suggested. Some key measurement themes were derived from the study. Conclusion: The model developed in this study is an important step towards an assessment of service integration from the perspective of the service users. It needs to be further refined, however, before it can be used in other evaluations of collaboration in the provision of integrated welfare services.
Liao, Jiaqiang; Yu, Shicheng; Yang, Fang; Yang, Min; Hu, Yuehua; Zhang, Juying
2016-01-01
Hand, Foot, and Mouth Disease (HFMD) is a worldwide infectious disease. In China, many provinces have reported HFMD cases, especially the southern and southwestern provinces. Many studies have found a strong association between the incidence of HFMD and climatic factors such as temperature, rainfall, and relative humidity. However, few studies have analyzed cluster effects between various geographical units. The nonlinear relationships and lag effects between weekly HFMD cases and climatic variables were estimated for the period 2008-2013 using a polynomial distributed lag model. An extra-Poisson multilevel spatial polynomial model was used to model the relationship between weekly HFMD incidence and climatic variables after accounting for cluster effects, the provincial correlation structure of HFMD incidence, and overdispersion. Smoothing spline methods were used to detect threshold effects between climatic factors and HFMD incidence. HFMD incidence was spatially heterogeneous among provinces, and the scale measure of overdispersion was 548.077. After controlling for long-term trends, spatial heterogeneity and overdispersion, temperature was strongly associated with HFMD incidence. Weekly average temperature and weekly temperature difference showed approximately inverse-"V"-shaped and "V"-shaped relationships with HFMD incidence. The lag effects for weekly average temperature and weekly temperature difference were 3 weeks and 2 weeks, respectively. Highly spatially correlated HFMD incidence was detected in northern, central and southern provinces. Temperature explains most of the variation in HFMD incidence in southern and northeastern provinces. After adjustment for temperature, eastern and northern provinces still had high variation in HFMD incidence. We found a relatively strong association between weekly HFMD incidence and weekly average temperature. The association between the HFMD incidence and climatic variables spatial heterogeneity distributed across
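The construction of lagged predictors for a distributed lag model of this kind can be sketched as follows (a simulated temperature series; the 3-week maximum lag matches what the study reports for average temperature, but everything else is illustrative):

```python
import numpy as np

# Hedged sketch, not the study's code: build a design matrix whose columns are
# the current and lagged values of a weekly climate predictor.
rng = np.random.default_rng(1)
weeks = 60
temp = 20 + 5 * np.sin(np.arange(weeks) * 2 * np.pi / 52) + rng.normal(0, 1, weeks)

max_lag = 3  # effects up to 3 weeks back
# Column j holds temperature lagged by j weeks; the first max_lag weeks are
# dropped because their lagged values are unavailable.
X = np.column_stack([temp[max_lag - j : weeks - j] for j in range(max_lag + 1)])

print(X.shape)  # (57, 4): 60 - 3 usable weeks, lags 0..3
```

A polynomial distributed lag model then constrains the coefficients on these columns to lie on a low-order polynomial in the lag index, reducing collinearity between adjacent lags.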
H2A Production Model, Version 2 User Guide
Energy Technology Data Exchange (ETDEWEB)
Steward, D.; Ramsden, T.; Zuboy, J.
2008-09-01
The H2A Production Model analyzes the technical and economic aspects of central and forecourt hydrogen production technologies. Using a standard discounted cash flow rate of return methodology, it determines the minimum hydrogen selling price, including a specified after-tax internal rate of return from the production technology. Users have the option of accepting default technology input values--such as capital costs, operating costs, and capacity factor--from established H2A production technology cases or entering custom values. Users can also modify the model's financial inputs. This new version of the H2A Production Model features enhanced usability and functionality. Input fields are consolidated and simplified. New capabilities include performing sensitivity analyses and scaling analyses to various plant sizes. This User Guide helps users already familiar with the basic tenets of H2A hydrogen production cost analysis get started using the new version of the model. It introduces the basic elements of the model then describes the function and use of each of its worksheets.
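The minimum-selling-price logic described above can be sketched in a few lines (all cost figures are hypothetical placeholders, not H2A default values): the minimum price is the one at which the project's net present value is zero at the target after-tax rate of return.

```python
# Hedged sketch of the discounted-cash-flow idea behind a minimum selling
# price; a constant-revenue annuity stands in for H2A's full cash-flow sheet.
capital = 50e6            # up-front capital cost ($)   -- assumed
annual_opex = 4e6         # yearly operating cost ($)   -- assumed
annual_kg = 10e6          # yearly hydrogen output (kg) -- assumed
years = 20
irr = 0.10                # required after-tax internal rate of return

# Present value of $1/year for `years` at rate `irr` (annuity factor).
annuity = (1 - (1 + irr) ** -years) / irr

# Solve NPV = -capital + (price * annual_kg - annual_opex) * annuity = 0.
min_price = (capital / annuity + annual_opex) / annual_kg
print(f"${min_price:.2f}/kg")
```

Raising the required rate of return shrinks the annuity factor and therefore raises the minimum selling price, which is the sensitivity the model's analysis features are built to explore.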
Madonna, Erica; Ginsbourger, David; Martius, Olivia
2018-05-01
In Switzerland, hail regularly causes substantial damage to agriculture, cars and infrastructure; however, little is known about its long-term variability. To study this variability, the monthly number of days with hail in northern Switzerland is modeled in a regression framework using large-scale predictors derived from ERA-Interim reanalysis. The model is developed and verified using radar-based hail observations for the extended summer season (April-September) in the period 2002-2014. The seasonality of hail is explicitly modeled with a categorical predictor (month), and monthly anomalies of several large-scale predictors are used to capture the year-to-year variability. Several regression models are applied and their performance tested with respect to standard scores and cross-validation. The chosen model includes four predictors: the monthly anomaly of the two-meter temperature, the monthly anomaly of the logarithm of the convective available potential energy (CAPE), the monthly anomaly of the wind shear, and the month. This model captures the intra-annual variability well and slightly underestimates the inter-annual variability. The regression model is applied to the reanalysis data back to 1980. The resulting hail-day time series shows an increase in the number of hail days per month, which is (in the model) related to an increase in temperature and CAPE. The trend corresponds to approximately 0.5 days per month per decade. The results of the regression model have been compared to two independent data sets. All data sets agree on the sign of the trend, but the trend is weaker in the other data sets.
Estimating Spoken Dialog System Quality with User Models
Engelbrecht, Klaus-Peter
2013-01-01
Spoken dialog systems have the potential to offer highly intuitive user interfaces, as they allow systems to be controlled using natural language. However, the complexity inherent in natural language dialogs means that careful testing of the system must be carried out from the very beginning of the design process. This book examines how user models can be used to support such early evaluations in two ways: by running simulations of dialogs, and by estimating the quality judgments of users. First, a design environment supporting the creation of dialog flows, the simulation of dialogs, and the analysis of the simulated data is proposed. How the quality of user simulations may be quantified with respect to their suitability for both formative and summative evaluation is then discussed. The remainder of the book is dedicated to the problem of predicting quality judgments of users based on interaction data. New modeling approaches are presented, which process the dialogs as sequences, and which allow knowl...
Energy Technology Data Exchange (ETDEWEB)
Brown-VanHoozer, S.A.
1995-12-31
Most designers are not schooled in the area of human-interaction psychology and therefore tend to rely on the traditional ergonomic aspects of human factors when designing complex human-interactive workstations related to reactor operations. They do not take into account the differences in user information processing behavior and how these behaviors may affect individual and team performance when accessing visual displays or utilizing system models in process and control room areas. Unfortunately, by ignoring the importance of the integration of the user interface at the information process level, the result can be sub-optimization and inherently error- and failure-prone systems. Therefore, to minimize or eliminate failures in human-interactive systems, it is essential that designers understand how each user's processing characteristics affect how the user gathers information and how the user communicates that information to the designer and other users. A different type of approach to achieving this understanding is Neuro Linguistic Programming (NLP). The material presented in this paper is based on two studies involving the design of visual displays, NLP, and the user's perspective model of a reactor system. The studies involve the methodology known as NLP and its use in expanding design choices from the user's "model of the world" in areas such as virtual reality, workstation design, team structure, decision and learning style patterns, safety operations, and pattern recognition.
Transferring the Malaria Epidemic Prediction Model to Users in East ...
International Development Research Centre (IDRC) Digital Library (Canada)
Transferring the Malaria Epidemic Prediction Model to Users in East Africa. In the highlands of East Africa, epidemic malaria is an emerging climate-related hazard that urgently needs addressing. Malaria incidence increased by 337% during the 1987 epidemic in Rwanda. In Tanzania, Uganda and Kenya, malaria incidence ...
Relapse Model among Iranian Drug Users: A Qualitative Study.
Jalali, Amir; Seyedfatemi, Naiemeh; Peyrovi, Hamid
2015-01-01
Relapse is a common problem in drug users' rehabilitation programs and is reported all over the country. An in-depth study of patients' experiences can be used to explore the relapse process among drug users. Therefore, this study suggests a model of the relapse process among Iranian drug users. In this qualitative study with a grounded theory approach, 22 participants with rich information about the phenomenon under study were selected using purposive, snowball and theoretical sampling methods. After obtaining informed consent, data were collected through face-to-face, in-depth, semi-structured interviews. All interviews were analyzed in three stages of axial, selective and open coding. Nine main categories emerged, including avoiding drugs, concerns about being accepted, family atmosphere, social conditions, mental challenge, self-management, self-deception, use and remorse, and a main category, feeling of loss, as the core variable. Mental challenge has two subcategories: evoking pleasure and craving. The relapse model is a dynamic and systematic process extending from cycles of drug avoidance to remorse, with feeling of loss as the core variable. The relapse process is dynamic and systematic and needs effective control. Determining a relapse model as a clear process could be helpful in clinical sessions. The results of this research depict the relapse process among Iranian drug users through a conceptual model.
Using Partial Credit and Response History to Model User Knowledge
Van Inwegen, Eric G.; Adjei, Seth A.; Wang, Yan; Heffernan, Neil T.
2015-01-01
User modelling algorithms such as Performance Factors Analysis and Knowledge Tracing seek to determine a student's knowledge state by analyzing (among other features) right and wrong answers. Anyone who has ever graded an assignment by hand knows that some answers are "more wrong" than others; i.e. they display less of an understanding…
User Modeling and Personalization in the Microblogging Sphere
Gao, Q.
2013-01-01
Microblogging has become a popular mechanism for people to publish, share, and propagate information on the Web. The massive amount of digital traces that people have left in the microblogging sphere creates new possibilities and poses challenges for user modeling and personalization. How can
Transferring the Malaria Epidemic Prediction Model to Users in East ...
International Development Research Centre (IDRC) Digital Library (Canada)
This project will fine-tune the model, incorporate site-specific factors and transfer it to end users in Kenya, Tanzania and Uganda, and eventually other countries in East Africa. It will enhance the capacity of policymakers and health officials to provide early warning and intervene in an effective manner, and the capacity of ...
A model of user engagement in medical device development.
Grocott, Patricia; Weir, Heather; Ram, Mala Bridgelal
2007-01-01
The purpose of this paper is to address three topical themes: user involvement in health services research; determining the value of new medical technologies in patient care pathways, furthering knowledge related to quality in health and social care; and knowledge exchange between manufacturers, health service supply chain networks and device users. The model is being validated in a case study in progress. The latter is a "proving ground" study for a translational research company. Medical devices play a pivotal role in the management of chronic diseases, across all care settings. Failure to engage users in device development inevitably affects the quality of clinical outcomes. A model of user engagement is presented, turning unmet needs for medical devices into viable commercial propositions. A case study investigates the perceptions of individuals with Epidermolysis Bullosa (EB) and their lay and professional carers regarding unmet needs. EB is an inherited condition affecting the skin and mucosal linings that leads to blistering and wounds. Qualitative data are being collected to generate understanding of unmet needs and wound care products. These needs are being translated into new design concepts and prototypes. Prototypes will be evaluated in an n = 1 experimental design, generating quantitative outcomes data. Generalisations can be drawn from the case study and the model outlined. New products for managing EB wounds can logically benefit other groups. The model is transferable to other clinical problems, which can benefit from research and technological advances that are integral to clinical needs and care.
Poisson hierarchy of discrete strings
Energy Technology Data Exchange (ETDEWEB)
Ioannidou, Theodora, E-mail: ti3@auth.gr [Faculty of Civil Engineering, School of Engineering, Aristotle University of Thessaloniki, 54249, Thessaloniki (Greece); Niemi, Antti J., E-mail: Antti.Niemi@physics.uu.se [Department of Physics and Astronomy, Uppsala University, P.O. Box 803, S-75108, Uppsala (Sweden); Laboratoire de Mathematiques et Physique Theorique CNRS UMR 6083, Fédération Denis Poisson, Université de Tours, Parc de Grandmont, F37200, Tours (France); Department of Physics, Beijing Institute of Technology, Haidian District, Beijing 100081 (China)
2016-01-28
The Poisson geometry of a discrete string in three dimensional Euclidean space is investigated. For this the Frenet frames are converted into a spinorial representation, the discrete spinor Frenet equation is interpreted in terms of a transfer matrix formalism, and Poisson brackets are introduced in terms of the spinor components. The construction is then generalised, in a self-similar manner, into an infinite hierarchy of Poisson algebras. As an example, the classical Virasoro (Witt) algebra that determines reparametrisation diffeomorphism along a continuous string, is identified as a particular sub-algebra, in the hierarchy of the discrete string Poisson algebra. - Highlights: • Witt (classical Virasoro) algebra is derived in the case of discrete string. • Infinite dimensional hierarchy of Poisson bracket algebras is constructed for discrete strings. • Spinor representation of discrete Frenet equations is developed.
DEFF Research Database (Denmark)
Paz-Garcia, Juan Manuel; Johannesson, Björn; Ottosen, Lisbeth M.
2011-01-01
equilibrium is continuously assured and the pH value is monitored. Results from some selected test simulations of the electrokinetic desalination of a sample of porous material are presented, outlining the versatility of the model as well as showing the effect of the counterion in the removal rate of a target...
Shimaponda-Mataa, Nzooma M; Tembo-Mwase, Enala; Gebreslasie, Michael; Achia, Thomas N O; Mukaratirwa, Samson
2017-11-01
Although malaria morbidity and mortality have been greatly reduced globally owing to major control efforts, the disease remains a main contributor to morbidity and mortality. In Zambia, all provinces are malaria endemic. However, transmission intensities vary, mainly depending on environmental factors as they interact with the vectors. Generally in Africa, possibly owing to the varying perspectives and methods used, studies differ on the relative importance of malaria risk determinants. In Zambia, the role climatic factors play in malaria case rates has not been determined jointly over space and time using robust modelling methods. This is critical considering the reversal in malaria reduction after 2010 and the variation by transmission zones. Using a geoadditive (structured additive) semiparametric Poisson regression model, we determined the influence of climatic factors on malaria incidence in four endemic provinces of Zambia. We demonstrate a strong positive association between malaria incidence and precipitation as well as minimum temperature. The risk of malaria was 95% lower in Lusaka (ARR=0.05, 95% CI=0.04-0.06) and 68% lower in the Western Province (ARR=0.31, 95% CI=0.25-0.41) compared to Luapula Province. North-western Province did not differ from Luapula Province. The effects of geographical region are clearly demonstrated by the distinct behaviour and effects of minimum and maximum temperatures in the four provinces. Environmental factors such as landscape in urbanised places may also play a role. Copyright © 2017 Elsevier B.V. All rights reserved.
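A bare-bones version of the Poisson regression component, fit by iteratively reweighted least squares on simulated data (not the study's data; the geoadditive spatial and smooth terms are omitted), shows how adjusted rate ratios (ARR) arise as exponentiated coefficients:

```python
import numpy as np

# Illustrative sketch: Poisson GLM fit by IRLS; predictor names and effect
# sizes are invented, only the ARR = exp(beta) mechanics match the study.
rng = np.random.default_rng(7)
n = 5000
precip = rng.normal(0, 1, n)            # standardised precipitation anomaly
tmin = rng.normal(0, 1, n)              # standardised minimum temperature
X = np.column_stack([np.ones(n), precip, tmin])
beta_true = np.array([1.0, 0.3, 0.2])   # assumed log-rate effects
y = rng.poisson(np.exp(X @ beta_true))

beta = np.zeros(3)
for _ in range(25):                      # IRLS / Newton iterations
    mu = np.exp(X @ beta)                # current fitted means
    z = X @ beta + (y - mu) / mu         # working response
    beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))

arr = np.exp(beta[1:])                   # rate ratios for precip and tmin
print(np.round(arr, 2))                  # near exp(0.3) and exp(0.2)
```

An ARR of 0.05, as reported for Lusaka, corresponds to a coefficient of about log(0.05) ≈ -3.0 on the province indicator.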
Ghanta, Sindhu; Jordan, Michael I.; Kose, Kivanc; Brooks, Dana H.; Rajadhyaksha, Milind; Dy, Jennifer G.
2016-01-01
Segmenting objects of interest from 3D datasets is a common problem encountered in biological data. Small field of view and intrinsic biological variability combined with optically subtle changes of intensity, resolution and low contrast in images make the task of segmentation difficult, especially for microscopy of unstained living or freshly excised thick tissues. Incorporating shape information in addition to the appearance of the object of interest can often help improve segmentation performance. However, shapes of objects in tissue can be highly variable, and the design of a flexible shape model that encompasses these variations is challenging. To address such complex segmentation problems, we propose a unified probabilistic framework that can incorporate the uncertainty associated with complex shapes, variable appearance and unknown locations. The driving application which inspired the development of this framework is a biologically important segmentation problem: the task of automatically detecting and segmenting the dermal-epidermal junction (DEJ) in 3D reflectance confocal microscopy (RCM) images of human skin. RCM imaging allows noninvasive observation of cellular, nuclear and morphological detail. The DEJ is an important morphological feature as it is where disorder, disease and cancer usually start. Detecting the DEJ is challenging because it is a 2D surface in a 3D volume which has a strong but highly variable number of irregularly spaced and variably shaped “peaks and valleys”. In addition, RCM imaging resolution, contrast and intensity vary with depth. Thus a prior model needs to incorporate the intrinsic structure while allowing variability in essentially all its parameters. We propose a model which can incorporate objects of interest with complex shapes and variable appearance in an unsupervised setting by utilizing domain knowledge to build appropriate priors of the model. Our novel strategy to model this structure combines a spatial Poisson process ...
Firewall Mechanism in a User Centric Smart Card Ownership Model
Akram, Raja Naeem; Markantonakis, Konstantinos; Mayes, Keith
2010-01-01
International audience; Multi-application smart card technology facilitates applications to securely share their data and functionality. The security enforcement and assurance in application sharing is provided by the smart card firewall. The firewall mechanism is well defined and studied in the Issuer Centric Smart Card Ownership Model (ICOM), in which a smart card is under total control of its issuer. However, it is not analysed in the User Centric Smart Card Ownership Model (UCOM) that del...
An interoperable and inclusive user modeling concept for simulation and adaptation
Biswas, Pradipta; Kaklanis, Nick; Mohamad, Yehya; Peissner, Matthias; Langdon, Pat; Tzovaras, D.; Jung, Christoph
2013-01-01
User models can be considered as explicit representations of the properties of an individual user, including the user's needs and preferences as well as physical, cognitive and behavioral characteristics. Due to the wide range of applications, it is often difficult to have a common format or even definition of user models. The lack of a common definition also makes different user models, even if developed for the same purpose, incompatible with each other. This not only reduces the portability of user ...
ABAREX -- A neutron spherical optical-statistical-model code -- A user's manual
Energy Technology Data Exchange (ETDEWEB)
Smith, A.B. [ed.]; Lawson, R.D.
1998-06-01
The contemporary version of the neutron spherical optical-statistical-model code ABAREX is summarized with the objective of providing detailed operational guidance for the user. The physical concepts involved are very briefly outlined. The code is described in some detail and a number of explicit examples are given. With this document one should very quickly become fluent with the use of ABAREX. While the code has operated on a number of computing systems, this version is specifically tailored for the VAX/VMS work station and/or the IBM-compatible personal computer.
Stimulation model for lenticular sands: Volume 2, Users manual
Energy Technology Data Exchange (ETDEWEB)
Rybicki, E.F.; Luiskutty, C.T.; Sutrick, J.S.; Palmer, I.D.; Shah, G.H.; Tomutsa, L.
1987-07-01
This User's Manual contains information for four fracture/proppant models. TUPROP1 contains a Geertsma and de Klerk type fracture model. The section of the program utilizing the proppant fracture geometry data from the pseudo three-dimensional highly elongated fracture model is called TUPROPC. The analogous proppant section of the program that was modified to accept fracture shape data from SA3DFRAC is called TUPROPS. TUPROPS also includes fracture closure. Finally there is the penny fracture and its proppant model, PENNPROP. In the first three chapters, the proppant sections are based on the same theory for determining the proppant distribution but have modifications to support variable height fractures and modifications to accept fracture geometry from three different fracture models. Thus, information about each proppant model in the User's Manual builds on information supplied in the previous chapter. The exception to the development of combined treatment models is the penny fracture and its proppant model. In this case, a completely new proppant model was developed. A description of how to use the combined treatment model for the penny fracture is contained in Chapter 4. 2 refs.
Almasi-Hashiani, Amir; Mansournia, Mohammad Ali; Sepidarkish, Mahdi; Vesali, Samira; Ghaheri, Azadeh; Esmailzadeh, Arezoo; Omani-Samani, Reza
2018-01-01
Polycystic ovary syndrome (PCOS) is a frequent condition in reproductive age women with a prevalence rate of 5-10%. This study intends to determine the relationship between PCOS and the outcome of assisted reproductive treatment (ART) in Tehran, Iran. In this historical cohort study, we included 996 infertile women who referred to Royan Institute (Tehran, Iran) between January 2012 and December 2013. PCOS, as the main variable, and other potential confounder variables were gathered. Modified Poisson Regression was used for data analysis. Stata software, version 13 was used for all statistical analyses. Unadjusted analysis showed a significantly lower risk for failure in PCOS cases compared to cases without PCOS [risk ratio (RR): 0.79, 95% confidence intervals (CI): 0.66-0.95, P=0.014]. After adjusting for the confounder variables, there was no difference between risk of non-pregnancy in women with and without PCOS (RR: 0.87, 95% CI: 0.72-1.05, P=0.15). Significant predictors of the ART outcome included the treatment protocol type, numbers of embryos transferred (grades A and AB), numbers of injected ampules, and age. The results obtained from this model showed no difference between patients with and without PCOS according to the risk for non-pregnancy. Therefore, other factors might affect conception in PCOS patients. Copyright© by Royan Institute. All rights reserved.
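The "modified Poisson regression" named in the abstract above is, in essence, Poisson regression with a log link applied to a binary outcome so that the exponentiated coefficient estimates a risk ratio (the "modification" is the robust sandwich variance used for inference, which is omitted in this sketch). A minimal illustration with entirely hypothetical data:

```python
import math

def fit_poisson_rr(x, y, iters=25):
    """Newton/IRLS fit of log E[y] = b0 + b1*x (Poisson working model).

    Applied to a binary outcome, exp(b1) estimates the risk ratio. The
    'modified' part of modified Poisson regression is a robust (sandwich)
    variance estimator for the standard errors, not computed here.
    """
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0  # score and Fisher information
        for xi, yi in zip(x, y):
            mu = math.exp(b0 + b1 * xi)
            g0 += yi - mu
            g1 += (yi - mu) * xi
            h00 += mu
            h01 += mu * xi
            h11 += mu * xi * xi
        det = h00 * h11 - h01 * h01   # 2x2 information matrix inverse
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

# Hypothetical cohort: 20/100 events unexposed, 40/100 exposed (true RR = 2).
x = [0] * 100 + [1] * 100
y = [1] * 20 + [0] * 80 + [1] * 40 + [0] * 60
b0, b1 = fit_poisson_rr(x, y)
print(round(math.exp(b1), 4))  # estimated risk ratio
```

Because the model is saturated over the two exposure groups, the fitted risk ratio equals the ratio of the group risks (0.4/0.2 = 2).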
Directory of Open Access Journals (Sweden)
L.B. Bhuiyan
2017-12-01
Full Text Available The modified Poisson-Boltzmann theory of the restricted primitive model double layer is revisited and recast in a fresh, slightly broader perspective. Derivation of the relevant equations follows the techniques utilized in the earlier MPB4 and MPB5 formulations and clarifies the relationship between them. The MPB4, MPB5, and a new formulation of the theory are employed in an analysis of the structure and charge reversal phenomenon in asymmetric 2:1/1:2 valence electrolytes. Furthermore, polarization induced surface charge amplification is studied in 3:1/1:3 systems. The results are compared to the corresponding Monte Carlo simulations. The theories are seen to predict the "exact" simulation data to varying degrees of accuracy ranging from qualitative to almost quantitative. The results from a new version of the theory are found to be of comparable accuracy to the MPB5 results in many situations. However, in some cases involving low electrolyte concentrations, theoretical artifacts in the form of unphysical "shoulders" in the singlet ionic distribution functions are observed.
Five-Factor Model personality profiles of drug users
Directory of Open Access Journals (Sweden)
Crum Rosa M
2008-04-01
Full Text Available Abstract Background Personality traits are considered risk factors for drug use, and, in turn, the psychoactive substances impact individuals' traits. Furthermore, there is increasing interest in developing treatment approaches that match an individual's personality profile. To advance our knowledge of the role of individual differences in drug use, the present study compares the personality profile of tobacco, marijuana, cocaine, and heroin users and non-users using the wide spectrum Five-Factor Model (FFM of personality in a diverse community sample. Method Participants (N = 1,102; mean age = 57 were part of the Epidemiologic Catchment Area (ECA program in Baltimore, MD, USA. The sample was drawn from a community with a wide range of socio-economic conditions. Personality traits were assessed with the Revised NEO Personality Inventory (NEO-PI-R, and psychoactive substance use was assessed with systematic interview. Results Compared to never smokers, current cigarette smokers score lower on Conscientiousness and higher on Neuroticism. Similar, but more extreme, is the profile of cocaine/heroin users, which score very high on Neuroticism, especially Vulnerability, and very low on Conscientiousness, particularly Competence, Achievement-Striving, and Deliberation. By contrast, marijuana users score high on Openness to Experience, average on Neuroticism, but low on Agreeableness and Conscientiousness. Conclusion In addition to confirming high levels of negative affect and impulsive traits, this study highlights the links between drug use and low Conscientiousness. These links provide insight into the etiology of drug use and have implications for public health interventions.
Five-Factor Model personality profiles of drug users.
Terracciano, Antonio; Löckenhoff, Corinna E; Crum, Rosa M; Bienvenu, O Joseph; Costa, Paul T
2008-04-11
Personality traits are considered risk factors for drug use, and, in turn, the psychoactive substances impact individuals' traits. Furthermore, there is increasing interest in developing treatment approaches that match an individual's personality profile. To advance our knowledge of the role of individual differences in drug use, the present study compares the personality profile of tobacco, marijuana, cocaine, and heroin users and non-users using the wide spectrum Five-Factor Model (FFM) of personality in a diverse community sample. Participants (N = 1,102; mean age = 57) were part of the Epidemiologic Catchment Area (ECA) program in Baltimore, MD, USA. The sample was drawn from a community with a wide range of socio-economic conditions. Personality traits were assessed with the Revised NEO Personality Inventory (NEO-PI-R), and psychoactive substance use was assessed with systematic interview. Compared to never smokers, current cigarette smokers score lower on Conscientiousness and higher on Neuroticism. Similar, but more extreme, is the profile of cocaine/heroin users, which score very high on Neuroticism, especially Vulnerability, and very low on Conscientiousness, particularly Competence, Achievement-Striving, and Deliberation. By contrast, marijuana users score high on Openness to Experience, average on Neuroticism, but low on Agreeableness and Conscientiousness. In addition to confirming high levels of negative affect and impulsive traits, this study highlights the links between drug use and low Conscientiousness. These links provide insight into the etiology of drug use and have implications for public health interventions.
Analysis on Poisson and Gamma spaces
Kondratiev, Yuri; Silva, Jose Luis; Streit, Ludwig; Us, Georgi
1999-01-01
We study the spaces of Poisson, compound Poisson and Gamma noises as special cases of a general approach to non-Gaussian white noise calculus, see \\cite{KSS96}. We use a known unitary isomorphism between Poisson and compound Poisson spaces in order to transport analytic structures from Poisson space to compound Poisson space. Finally we study a Fock type structure of chaos decomposition on Gamma space.
Simplified analytical model of penetration with lateral loading -- User's guide
Energy Technology Data Exchange (ETDEWEB)
Young, C.W.
1998-05-01
The SAMPLL (Simplified Analytical Model of Penetration with Lateral Loading) computer code was originally developed in 1984 to realistically yet economically predict penetrator/target interactions. Since the code's inception, its use has spread throughout the conventional and nuclear penetrating weapons community. During the penetrator/target interaction, the resistance of the material being penetrated imparts both lateral and axial loads on the penetrator. These loads cause changes to the penetrator's motion (kinematics). SAMPLL uses empirically based algorithms, formulated from an extensive experimental data base, to replicate the loads the penetrator experiences during penetration. The lateral loads resulting from angle of attack and trajectory angle of the penetrator are explicitly treated in SAMPLL. The loads are summed and the kinematics calculated at each time step. SAMPLL has been continually improved, and the current version, Version 6.0, can handle cratering and spall effects, multiple target layers, penetrator damage/failure, and complex penetrator shapes. Version 6 uses the latest empirical penetration equations, and also automatically adjusts the penetrability index for certain target layers to account for layer thickness and confinement. This report describes the SAMPLL code, including assumptions and limitations, and includes a user's guide.
NASA AVOSS Fast-Time Wake Prediction Models: User's Guide
Ahmad, Nash'at N.; VanValkenburg, Randal L.; Pruis, Matthew
2014-01-01
The National Aeronautics and Space Administration (NASA) is developing and testing fast-time wake transport and decay models to safely enhance the capacity of the National Airspace System (NAS). The fast-time wake models are empirical algorithms used for real-time predictions of wake transport and decay based on aircraft parameters and ambient weather conditions. The aircraft dependent parameters include the initial vortex descent velocity and the vortex pair separation distance. The atmospheric initial conditions include vertical profiles of temperature or potential temperature, eddy dissipation rate, and crosswind. The current distribution includes the latest versions of the APA (3.4) and the TDP (2.1) models. This User's Guide provides detailed information on the model inputs, file formats, and the model output. An example of a model run and a brief description of the Memphis 1995 Wake Vortex Dataset is also provided.
Coordination of Conditional Poisson Samples
Directory of Open Access Journals (Sweden)
Grafström Anton
2015-12-01
Full Text Available Sample coordination seeks to maximize or to minimize the overlap of two or more samples. The former is known as positive coordination, and the latter as negative coordination. Positive coordination is mainly used for estimation purposes and to reduce data collection costs. Negative coordination is mainly performed to diminish the response burden of the sampled units. Poisson sampling design with permanent random numbers provides an optimum coordination degree of two or more samples. The size of a Poisson sample is, however, random. Conditional Poisson (CP sampling is a modification of the classical Poisson sampling that produces a fixed-size πps sample. We introduce two methods to coordinate Conditional Poisson samples over time or simultaneously. The first one uses permanent random numbers and the list-sequential implementation of CP sampling. The second method uses a CP sample in the first selection and provides an approximate one in the second selection because the prescribed inclusion probabilities are not respected exactly. The methods are evaluated using the size of the expected sample overlap, and are compared with their competitors using Monte Carlo simulation. The new methods provide a good coordination degree of two samples, close to the performance of Poisson sampling with permanent random numbers.
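The permanent-random-number mechanism behind Poisson sample coordination, as described above, can be sketched in a few lines (the population size and inclusion probabilities below are hypothetical; conditional Poisson sampling itself, which fixes the sample size, is not implemented here):

```python
import random

def poisson_sample(pi, prn):
    """Poisson sampling: include unit i iff its permanent random number
    prn[i] falls below its inclusion probability pi[i]."""
    return {i for i in range(len(pi)) if prn[i] < pi[i]}

rng = random.Random(1)                   # fixed seed for reproducibility
N = 1000
prn = [rng.random() for _ in range(N)]   # permanent random numbers, kept over time
pi1 = [0.10] * N                         # first selection occasion
pi2 = [0.15] * N                         # second occasion, larger probabilities

s1 = poisson_sample(pi1, prn)
s2 = poisson_sample(pi2, prn)
# Reusing the PRNs gives positive coordination: here s1 is nested in s2,
# so the expected overlap is the maximum possible.
print(len(s1), len(s2), len(s1 & s2))
```

Note the sample sizes are random, which is exactly the drawback that motivates the conditional Poisson (fixed-size) variant studied in the paper.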
Poisson Plus Quantification for Digital PCR Systems.
Majumdar, Nivedita; Banerjee, Swapnonil; Pallas, Michael; Wessel, Thomas; Hegerich, Patricia
2017-08-29
Digital PCR, a state-of-the-art nucleic acid quantification technique, works by spreading the target material across a large number of partitions. The average number of molecules per partition is estimated using Poisson statistics, and then converted into concentration by dividing by partition volume. In this standard approach, identical partition sizing is assumed. Violations of this assumption result in underestimation of target quantity under Poisson modeling, especially at higher concentrations. The Poisson-Plus Model corrects for this underestimation, provided the statistics of the volume variation are well characterized. The volume variation was measured on the chip array based QuantStudio 3D Digital PCR System using the ROX fluorescence level as a proxy for effective load volume per through-hole. Monte Carlo simulations demonstrate the efficacy of the proposed correction. Empirical measurement of model parameters characterizing the effective load volume on QuantStudio 3D Digital PCR chips is presented. The model was used to analyze digital PCR experiments and showed improved accuracy in quantification. At higher concentrations, the modeling must take effective fill volume variation into account to produce accurate estimates. The extent of the difference between the standard and the new modeling is positively correlated with the extent of fill volume variation in the effective load of the reactions.
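The standard (uncorrected) Poisson estimate that the abstract takes as its starting point is a one-liner: if a fraction p of partitions is positive, the mean copies per partition is λ = −ln(1 − p). A sketch with hypothetical chip numbers:

```python
import math

def dpcr_lambda(n_partitions, n_positive):
    """Standard dPCR Poisson estimate: with a fraction p of positive
    partitions, mean copies per partition is lambda = -ln(1 - p)."""
    p = n_positive / n_partitions
    return -math.log(1.0 - p)

# Hypothetical chip: 20,000 partitions of 0.8 nL each, half of them positive.
lam = dpcr_lambda(20000, 10000)
copies_per_nl = lam / 0.8        # concentration = lambda / partition volume
print(round(lam, 4))             # ln 2, about 0.6931 copies per partition
```

The Poisson-Plus correction replaces this constant-volume assumption with a characterized volume distribution; the formula above is the baseline it improves on.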
Design of personalized search engine based on user-webpage dynamic model
Li, Jihan; Li, Shanglin; Zhu, Yingke; Xiao, Bo
2013-12-01
Personalized search engine focuses on establishing a user-webpage dynamic model. In this model, users' personalized factors are introduced so that the search engine is better able to provide the user with targeted feedback. This paper constructs user and webpage dynamic vector tables, introduces singular value decomposition analysis in the processes of topic categorization, and extends the traditional PageRank algorithm.
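The abstract above only names its extension of PageRank, so the following is a generic personalized-PageRank power iteration, not the authors' algorithm; the graph, damping factor, and teleport vector are all hypothetical:

```python
def personalized_pagerank(links, personalization, d=0.85, iters=100):
    """Power iteration: r = (1 - d) * p + d * (rank mass over in-links).
    links[u] lists the pages u points to; dangling mass is redistributed
    according to the personalization vector p."""
    nodes = list(links)
    r = {u: 1.0 / len(nodes) for u in nodes}
    for _ in range(iters):
        new = {u: (1.0 - d) * personalization[u] for u in nodes}
        for u in nodes:
            if links[u]:
                share = d * r[u] / len(links[u])
                for v in links[u]:
                    new[v] += share
            else:  # dangling node: spread its mass via p
                for v in nodes:
                    new[v] += d * r[u] * personalization[v]
        r = new
    return r

# Hypothetical four-page web graph, with the teleport vector biased toward
# page 'a' to mimic a user-specific preference profile.
links = {'a': ['b', 'c'], 'b': ['c'], 'c': ['a'], 'd': ['c']}
p = {'a': 0.7, 'b': 0.1, 'c': 0.1, 'd': 0.1}
ranks = personalized_pagerank(links, p)
print({k: round(v, 3) for k, v in ranks.items()})
```

Biasing the teleport vector is the usual way a "user vector" personalizes the ranking; the SVD-based topic categorization mentioned in the abstract would feed into constructing that vector.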
Banerji, Anirban; Magarkar, Aniket
2012-09-01
We feel happy when web browsing operations provide us with necessary information; otherwise, we feel bitter. How can this happiness (or bitterness) be measured? How does the profile of happiness grow and decay during the course of web browsing? We propose a probabilistic framework that models the evolution of user satisfaction on top of his/her continuous frustration at not finding the required information. It is found that the cumulative satisfaction profile of a web-searching individual can be modeled effectively as the sum of a random number of random terms, where each term is a mutually independent random variable originating from a ‘memoryless’ Poisson flow. Evolution of satisfaction over the entire time interval of a user's browsing was modeled using auto-correlation analysis. A utilitarian marker, a value greater than unity of which describes happy web-searching operations, and an empirical limit that connects the user's satisfaction with his frustration level are proposed as well. The presence of pertinent information in the very first page of a website and the magnitude of the decay parameter of user satisfaction (frustration, irritation, etc.) are found to be two key aspects that dominate the web user's psychology. The proposed model employed different combinations of the decay parameter, searching time and number of helpful websites. The obtained results are found to match the results from three real-life case studies.
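The "sum of a random number of random terms" driven by a memoryless Poisson flow is a compound Poisson variable, and Wald's identity gives its mean as E[N]·E[X]. A small Monte Carlo sketch (the rate and term distribution are hypothetical stand-ins for the paper's satisfaction increments):

```python
import math
import random

def poisson_draw(rng, lam):
    """Knuth's inversion method for a Poisson variate."""
    limit, p, k = math.exp(-lam), 1.0, 0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def compound_poisson(rng, lam, mean_term):
    """S = X1 + ... + XN with N ~ Poisson(lam) and iid exponential terms:
    the 'sum of a random number of random terms' the abstract describes."""
    n = poisson_draw(rng, lam)
    return sum(rng.expovariate(1.0 / mean_term) for _ in range(n))

rng = random.Random(42)
lam, mean_term = 3.0, 2.0   # hypothetical flow rate and mean increment
samples = [compound_poisson(rng, lam, mean_term) for _ in range(20000)]
mean_s = sum(samples) / len(samples)
print(round(mean_s, 2))     # should be near E[S] = lam * mean_term = 6
```

The empirical mean lands close to the theoretical value, illustrating the additivity that makes the cumulative-satisfaction model tractable.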
A Probability-Based Hybrid User Model for Recommendation System
Directory of Open Access Journals (Sweden)
Jia Hao
2016-01-01
Full Text Available With the rapid development of information communication technology, the available information or knowledge is exponentially increased, and this causes the well-known information overload phenomenon. This problem is more serious in product design corporations because over half of the valuable design time is consumed in knowledge acquisition, which highly extends the design cycle and weakens the competitiveness. Therefore, recommender systems become very important in the domain of product design. This research presents a probability-based hybrid user model, which is a combination of collaborative filtering and content-based filtering. This hybrid model utilizes user ratings and item topics or classes, which are available in the domain of product design, to predict the knowledge requirement. The comprehensive analysis of the experimental results shows that the proposed method gains better performance in most of the parameter settings. This work contributes a probability-based method to the community for implementing recommender systems when only user ratings and item topics are available.
Designing with users to meet people needs: a teaching model.
Anselmi, Laura; Canina, Marita; Coccioni, Elisabetta
2012-01-01
Being in a context of great transformations of the whole company-product-market system, design becomes an interpreter of society and a strategic key point for production realities. Design must assume an ergonomic approach and a methodology oriented to product innovation where people are the main focus of the system. Today there is a visible need for a methodological approach able to include the context of use by employing users' "creative skills". In this scenario, a design educational model based only on knowledge does not seem sufficient; the traditional "deductive" method does not meet the needs of new productive assets, hence the urgency to experiment with the "inductive" method, developing an approach in which knowing and knowing-how, theory and practice, act synergistically. The aim is to teach a method able to help a young designer to understand people's needs and desires, considering both the concrete/cognitive level and the emotional level. The paper presents, through some case studies, an educational model developed by combining theoretical/conceptual and practical/applicatory aspects with user experiential aspects. The proposed approach to design enables the students to investigate users' needs and desires and helps them propose innovative ideas and projects better fitting today's market realities.
User Adoption Tendency Modeling for Social Contextual Recommendation
Directory of Open Access Journals (Sweden)
Wang Zhisheng
2015-01-01
Full Text Available Most studies of existing recommender systems for Netflix-style sites (scenarios with explicit user feedback) focus on rating prediction, but few have systematically analyzed users' motivations to make decisions on which items to rate. In this paper, the authors study the difficult and challenging task of Item Adoption Prediction (IAP): predicting the items users will rate or interact with. It is not only an important supplement to previous works, but also a more realistic requirement of recommendation in this scenario. To recommend the items with high Adoption Tendency, the authors develop a unified model, UATM, based on the findings of Marketing and Consumer Behavior. The novelty of the model in this paper includes: First, the authors propose a more creative and effective optimization method to tackle the One-Class Problem, where only positive feedback is available; second, the authors systematically and conveniently integrate the user adoption information (both explicit and implicit feedback included) and the social contextual information, while quantitatively characterizing different users' personal sensitivity to various social contextual influences.
Model for Analysis of Energy Demand (MAED-2). User's manual
International Nuclear Information System (INIS)
2006-01-01
The IAEA has been supporting its Member States in the area of energy planning for sustainable development. Development and dissemination of appropriate methodologies and their computer codes are important parts of this support. This manual has been produced to facilitate the use of the MAED model: Model for Analysis of Energy Demand. The methodology of the MAED model was originally developed by B. Chateau and B. Lapillonne of the Institut Economique et Juridique de l'Energie (IEJE) of the University of Grenoble, France, and was presented as the MEDEE model. Since then the MEDEE model has been developed and adapted for modelling various energy demand systems. The IAEA adopted the MEDEE-2 model and incorporated important modifications to make it more suitable for application in developing countries, naming the result the MAED model. The first version of the MAED model was designed for DOS-based systems and was later converted for Windows. This manual presents the latest version of the MAED model. The most prominent feature of this version is its flexibility in representing the structure of energy consumption. The model now allows country-specific representations of energy consumption patterns using the MAED methodology. The user can now disaggregate energy consumption according to the needs and/or data availability in her/his country. As such, MAED has become a powerful tool for modelling widely diverse energy consumption patterns. This manual presents the model in detail and provides guidelines for its application.
Model for Analysis of Energy Demand (MAED-2). User's manual
International Nuclear Information System (INIS)
2007-01-01
The IAEA has been supporting its Member States in the area of energy planning for sustainable development. Development and dissemination of appropriate methodologies and their computer codes are important parts of this support. This manual has been produced to facilitate the use of the MAED model: Model for Analysis of Energy Demand. The methodology of the MAED model was originally developed by B. Chateau and B. Lapillonne of the Institut Economique et Juridique de l'Energie (IEJE) of the University of Grenoble, France, and was presented as the MEDEE model. Since then the MEDEE model has been developed and adapted for modelling various energy demand systems. The IAEA adopted the MEDEE-2 model and incorporated important modifications to make it more suitable for application in developing countries, naming the result the MAED model. The first version of the MAED model was designed for DOS-based systems and was later converted for Windows. This manual presents the latest version of the MAED model. The most prominent feature of this version is its flexibility in representing the structure of energy consumption. The model now allows country-specific representations of energy consumption patterns using the MAED methodology. The user can now disaggregate energy consumption according to the needs and/or data availability in her/his country. As such, MAED has become a powerful tool for modelling widely diverse energy consumption patterns. This manual presents the model in detail and provides guidelines for its application.
Modeling users' activity on Twitter networks: validation of Dunbar's number
Goncalves, Bruno; Perra, Nicola; Vespignani, Alessandro
2012-02-01
Microblogging and mobile devices appear to augment human social capabilities, which raises the question whether they remove cognitive or biological constraints on human communication. In this paper we analyze a dataset of Twitter conversations collected across six months involving 1.7 million individuals and test the theoretical cognitive limit on the number of stable social relationships known as Dunbar's number. We find that the data are in agreement with Dunbar's result; users can entertain a maximum of 100-200 stable relationships. Thus, the ``economy of attention'' is limited in the online world by cognitive and biological constraints as predicted by Dunbar's theory. We propose a simple model for users' behavior that includes finite priority queuing and time resources that reproduces the observed social behavior.
Modeling users' activity on twitter networks: validation of Dunbar's number.
Directory of Open Access Journals (Sweden)
Bruno Gonçalves
Full Text Available Microblogging and mobile devices appear to augment human social capabilities, which raises the question whether they remove cognitive or biological constraints on human communication. In this paper we analyze a dataset of Twitter conversations collected across six months involving 1.7 million individuals and test the theoretical cognitive limit on the number of stable social relationships known as Dunbar's number. We find that the data are in agreement with Dunbar's result; users can entertain a maximum of 100-200 stable relationships. Thus, the 'economy of attention' is limited in the online world by cognitive and biological constraints as predicted by Dunbar's theory. We propose a simple model for users' behavior that includes finite priority queuing and time resources that reproduces the observed social behavior.
Cognitive Modeling of Video Game Player User Experience
Bohil, Corey J.; Biocca, Frank A.
2010-01-01
This paper argues for the use of cognitive modeling to gain a detailed and dynamic look into user experience during game play. Applying cognitive models to game play data can help researchers understand a player's attentional focus, memory status, learning state, and decision strategies (among other things) as these cognitive processes occurred throughout game play. This is a stark contrast to the common approach of trying to assess the long-term impact of games on cognitive functioning after game play has ended. We describe what cognitive models are, what they can be used for and how game researchers could benefit by adopting these methods. We also provide details of a single model - based on decision field theory - that has been successfully applied to data sets from memory, perception, and decision making experiments, and has recently found application in real world scenarios. We examine possibilities for applying this model to game-play data.
Based on user interest level of modeling scenarios and browse content
Zhao, Yang
2017-08-01
User interest modeling is the core of personalized service. Taking into account the impact of situational information on user preferences, this paper proposes a method of user interest modeling based on scenario information: by calculating the similarity between situations, an approximate scenario set for the user's current scene is obtained, and the "user - interest items - scenarios" three-dimensional model is reduced in dimension using a situation pre-filtering method. From the content of the topics the user browses, analysis of the page content yields the keywords of interest for each topic, and a vector space model of user interest is built at this level. The experimental results show that the prediction error of the scenario-based user interest model is within 9%, which demonstrates its effectiveness.
Analysis of overdispersed count data by mixtures of Poisson variables and Poisson processes.
Hougaard, P; Lee, M L; Whitmore, G A
1997-12-01
Count data often show overdispersion compared to the Poisson distribution. Overdispersion is typically modeled by a random effect for the mean, based on the gamma distribution, leading to the negative binomial distribution for the count. This paper considers a larger family of mixture distributions, including the inverse Gaussian mixture distribution. It is demonstrated that it gives a significantly better fit for a data set on the frequency of epileptic seizures. The same approach can be used to generate counting processes from Poisson processes, where the rate or the time is random. A random rate corresponds to variation between patients, whereas a random time corresponds to variation within patients.
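The gamma-mixed Poisson (negative binomial) mechanism that the abstract takes as its baseline is easy to demonstrate by simulation; the shape and scale values below are hypothetical, chosen only to make the overdispersion visible:

```python
import math
import random

def poisson_draw(rng, lam):
    """Knuth's inversion method for a Poisson variate."""
    limit, p, k = math.exp(-lam), 1.0, 0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(7)
shape, scale = 2.0, 1.0                      # hypothetical gamma random effect
counts = []
for _ in range(20000):
    rate = rng.gammavariate(shape, scale)    # patient-specific mean
    counts.append(poisson_draw(rng, rate))   # count given that mean

m = sum(counts) / len(counts)
v = sum((c - m) ** 2 for c in counts) / len(counts)
# Gamma-Poisson mixture = negative binomial: Var = m + m^2/shape,
# strictly larger than the Poisson's Var = m (overdispersion).
print(round(m, 2), round(v, 2))
```

Swapping the gamma for an inverse Gaussian mixing distribution, as the paper advocates, changes the mixture family but the variance still exceeds the mean by the between-patient term.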
Interactive Rapid Dose Assessment Model (IRDAM): user's guide
International Nuclear Information System (INIS)
Poeton, R.W.; Moeller, M.P.; Laughlin, G.J.; Desrosiers, A.E.
1983-05-01
As part of the continuing emphasis on emergency preparedness the US Nuclear Regulatory Commission (NRC) sponsored the development of a rapid dose assessment system by Pacific Northwest Laboratory (PNL). This system, the Interactive Rapid Dose Assessment Model (IRDAM) is a micro-computer based program for rapidly assessing the radiological impact of accidents at nuclear power plants. This User's Guide provides instruction in the setup and operation of the equipment necessary to run IRDAM. Instructions are also given on how to load the magnetic disks and access the interactive part of the program. Two other companion volumes to this one provide additional information on IRDAM. Reactor Accident Assessment Methods (NUREG/CR-3012, Volume 2) describes the technical bases for IRDAM including methods, models and assumptions used in calculations. Scenarios for Comparing Dose Assessment Models (NUREG/CR-3012, Volume 3) provides the results of calculations made by IRDAM and other models for specific accident scenarios
ACE2 Global Digital Elevation Model : User Analysis
Smith, R. G.; Berry, P. A. M.; Benveniste, J.
2013-12-01
Altimeter Corrected Elevations 2 (ACE2), first released in October 2009, is the Global Digital Elevation Model (GDEM) created by fusing the high accuracy of over 100 million altimeter retracked height estimates, derived primarily from the ERS-1 Geodetic Mission, with the high frequency content available within the near-global Shuttle Radar Topography Mission. This novel ACE2 GDEM is freely available at 3”, 9”, 30” and 5' and has been distributed via the web to over 680 subscribers. This paper presents the results of a detailed analysis of geographical distribution of subscribed users, along with fields of study and potential uses. Investigations have also been performed to determine the most popular spatial resolutions and the impact these have on the scope of data downloaded. The analysis has shown that, even though the majority of users have come from Europe and America, a significant number of website hits have been received from South America, Africa and Asia. Registered users also vary widely, from research institutions and major companies down to individual hobbyists looking at data for single projects.
Modeling metro users' travel behavior in Tehran: Frequency of Use
Directory of Open Access Journals (Sweden)
Amir Reza Mamdoohi
2016-10-01
Transit-oriented development (TOD), as a supporting strategy for sustainability, emphasizes improved public transportation coverage and quality, greater land use density and diversity around public transportation stations, and priority for walking and cycling in station areas. Traffic, environmental, and economic problems arising from the high growth of personal car use, inappropriate distribution of land use, and the car orientation of metropolitan areas necessitate the adoption of TOD. In recent years, more research on urban development and transportation has focused on this strategy. This research, considering metro stations as a base for development, aims to model metro users' travel behavior and decision-making procedures. The research question is: what are the factors affecting the frequency of travel by metro within a half-mile radius of stations? The radius was determined based on TOD definitions and a 5-minute walking time to metro stations. A questionnaire was designed in three sections covering travel features by metro, attitudes toward metro, and economic and social characteristics of respondents. Ten stations were selected based on their geographic dispersion in Tehran, and a sample of 450 respondents was determined. The questionnaires were administered face to face in the half-mile vicinity of metro stations. Based on a refined sample of 400 questionnaires, ordered discrete choice models were estimated. Descriptive statistics show that 38.5 percent of the sample used the metro more than 4 times per week. The trip purpose for 45.7 percent of metro users is work. The access mode to the metro stations for nearly half of the users (47.6 percent) is bus. The ordered logit models yield a number of significant variables, including: habit of using the metro; waiting time in the station; trip purpose (work, shopping, and recreation); personal car as the access mode to the metro station; walking as the access mode to the metro station; and being a housewife.
Directory of Open Access Journals (Sweden)
Mehdi Sadeghzadeh
2013-01-01
One of the existing challenges in web personalization is increasing the efficiency with which a web site meets users' requirements for the content they need. All the information associated with the current user's behavior on the web, together with data obtained from previous users' interactions, can provide the keys needed to recommend services, products, and the information users require. This study presents a formal model based on colored Petri nets to identify the present user's interest, which is then used to recommend the most appropriate pages. In the proposed design, page recommendation takes into account information obtained from previous users' profiles as well as the current session of the present user. The model updates the proposed pages as the user clicks through the web pages. Moreover, an example web site is modeled using CPN Tools. The simulation results show that this design improves the precision factor; the evaluation indicates that the results of this method are more objective, and the dynamic recommendations improve the precision criterion by 15% compared with the static method.
Background stratified Poisson regression analysis of cohort data.
Richardson, David B; Langholz, Bryan
2012-03-01
Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models.
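The key property reported here, that the 'conditional' estimate matches unconditional Poisson regression with stratum indicator terms, follows from profiling the stratum intercepts out of the Poisson likelihood: conditional on stratum totals, the counts are multinomial and the intercepts cancel. The sketch below checks this numerically on simulated data; it illustrates the general idea only, and is not the authors' implementation or their general relative rate models.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
S, n = 3, 300
stratum = rng.integers(0, S, size=n)        # background stratum of each record
x = rng.normal(size=n)                      # exposure covariate
alpha_true = np.array([0.5, 1.0, 1.5])      # stratum intercepts ('nuisance')
beta_true = 0.3                             # association of primary interest
y = rng.poisson(np.exp(alpha_true[stratum] + beta_true * x))

def neg_profile_loglik(beta):
    # Stratum intercepts profiled out of the Poisson likelihood: conditional
    # on stratum totals, counts are multinomial with weights exp(beta * x).
    ll = np.sum(y * beta * x)
    for s in range(S):
        m = stratum == s
        ll -= y[m].sum() * np.log(np.exp(beta * x[m]).sum())
    return -ll

def neg_full_loglik(theta):
    # Unconditional likelihood with explicit indicator terms per stratum.
    alpha, beta = theta[:S], theta[S]
    eta = alpha[stratum] + beta * x
    return -(np.sum(y * eta) - np.sum(np.exp(eta)))

beta_cond = minimize(lambda b: neg_profile_loglik(b[0]), [0.0]).x[0]
beta_full = minimize(neg_full_loglik, np.zeros(S + 1)).x[S]
print(beta_cond, beta_full)  # the two point estimates agree
```

The conditional form never estimates the stratum intercepts, which is what makes the approach tractable when the number of strata is large.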
Background stratified Poisson regression analysis of cohort data
International Nuclear Information System (INIS)
Richardson, David B.; Langholz, Bryan
2012-01-01
Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models. (orig.)
Background stratified Poisson regression analysis of cohort data
Energy Technology Data Exchange (ETDEWEB)
Richardson, David B. [University of North Carolina at Chapel Hill, Department of Epidemiology, School of Public Health, Chapel Hill, NC (United States); Langholz, Bryan [Keck School of Medicine, University of Southern California, Division of Biostatistics, Department of Preventive Medicine, Los Angeles, CA (United States)
2012-03-15
Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models. (orig.)
A comparison of Poisson-one-inflated power series distributions for ...
African Journals Online (AJOL)
A class of Poisson-one-inflated power series distributions (the binomial, the Poisson, the negative binomial, the geometric, the log-series and the misrecorded Poisson) are proposed for modeling rural out-migration at the household level. The probability mass functions of the mixture distributions are derived and fitted to the ...
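As a minimal sketch of one member of this class (illustrative only; the paper's derivations are not reproduced here), a one-inflated Poisson places extra probability mass at a count of one and mixes it with an ordinary Poisson:

```python
import math

def one_inflated_poisson_pmf(y, p, lam):
    """P(Y = y): with probability p the count is exactly one,
    otherwise it is drawn from an ordinary Poisson(lam)."""
    base = math.exp(-lam) * lam ** y / math.factorial(y)
    return p * (1 if y == 1 else 0) + (1 - p) * base

# Sanity check: the pmf sums to one over (effectively) its support.
total = sum(one_inflated_poisson_pmf(y, 0.3, 2.0) for y in range(60))
print(total)  # ~ 1.0
```

The other one-inflated power series members (binomial, negative binomial, geometric, log-series) follow the same pattern with a different base pmf.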
DEFF Research Database (Denmark)
Nielsen, Carsten Søren; Kyllingsbæk, Søren; Markussen, Bo
2015-01-01
, pp. 628–642. http://dx.doi.org/10.1037/a0024751) used a computational shortcut (Equation A5) that strongly reduced the time needed to fit the Poisson counter model to experimental data. Unfortunately, the computational shortcut built on an approximation that was not well founded in the Poisson … is not well founded, and reports the refits, as well as a proposal for a well-founded computational shortcut, can be found at http://dx.doi.org/10.1037/xhp0000037.supp. We are grateful to Carsten S. Nielsen for bringing this point to our attention and for working with the authors of this article to prepare …
Combining interviewing and modeling for end-user energy conservation
International Nuclear Information System (INIS)
Goldblatt, David L.; Hartmann, Christoph; Duerrenberger, Gregor
2005-01-01
Studying energy consumption through the lens of households is an increasingly popular research avenue. This paper focuses on residential end-user energy conservation. It describes an approach that combines energy modeling and in-depth interviews for communicating about energy use and revealing consumer preferences for change at different levels and intervention points. Expert knowledge was embodied in a computer model for householders that calculates an individual's current energy consumption and helps assess personal savings potentials, while also bringing in socio-technical and economic elements beyond the user's direct control. The paper gives a detailed account of this computer information tool developed for interviewing purposes. It then describes the interview guidelines, data analysis, and main results. In general, interview subjects overestimated the environmental friendliness of their lifestyles. After experience with the program, they tended to rate external (technological, societal) factors as somewhat stronger determinants of their consumption levels than personal (behavioral and household investment) factors, with the notable exception of mobility. Concerning long-term energy perspectives, the majority of subjects felt that society has the ability to make a collective choice towards significantly lower energy consumption levels. Interviewees confirmed that the software and interactive sessions helped them think more holistically about the personal, social, and technological dimensions of energy conservation. Lessons can be applied to the development of future energy communication tools
Downlink Non-Orthogonal Multiple Access (NOMA) in Poisson Networks
Ali, Konpal S.
2018-03-21
A network model is considered where Poisson distributed base stations transmit to $N$ power-domain non-orthogonal multiple access (NOMA) users (UEs) each that employ successive interference cancellation (SIC) for decoding. We propose three models for the clustering of NOMA UEs and consider two different ordering techniques for the NOMA UEs: mean signal power-based and instantaneous signal-to-intercell-interference-and-noise-ratio-based. For each technique, we present a signal-to-interference-and-noise ratio analysis for the coverage of the typical UE. We plot the rate region for the two-user case and show that neither ordering technique is consistently superior to the other. We propose two efficient algorithms for finding a feasible resource allocation that maximizes the cell sum rate $\mathcal{R}_{\rm tot}$, for general $N$, subject to: 1) a minimum rate $\mathcal{T}$ for each UE, 2) identical rates for all UEs. We show the existence of: 1) an optimum $N$ that maximizes the constrained $\mathcal{R}_{\rm tot}$ given a set of network parameters, 2) a critical SIC level necessary for NOMA to outperform orthogonal multiple access. The results highlight the importance of choosing the network parameters $N$, the constraints, and the ordering technique to balance $\mathcal{R}_{\rm tot}$ and fairness requirements. We also show that interference-aware UE clustering can significantly improve performance.
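The 'Poisson' in this network model refers to base stations forming a homogeneous Poisson point process (PPP). Below is a minimal simulation sketch of that spatial model (not the paper's coverage analysis; the density and window size are invented), checking the standard result that the nearest-transmitter distance in a 2-D PPP of density $\lambda$ is Rayleigh distributed with mean $1/(2\sqrt{\lambda})$.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, side, trials = 5.0, 20.0, 2000   # BS density, window side, realizations

dists = []
for _ in range(trials):
    # One homogeneous PPP realization on a square window:
    # Poisson-distributed point count, then uniformly scattered points.
    n = rng.poisson(lam * side * side)
    pts = rng.uniform(0.0, side, size=(n, 2))
    # Typical user at the window centre; record the nearest-BS distance.
    dists.append(np.linalg.norm(pts - side / 2.0, axis=1).min())

print(np.mean(dists), 1.0 / (2.0 * np.sqrt(lam)))  # simulated vs theoretical mean
```

Coverage and rate analyses of the kind described in the abstract build SINR statistics on top of exactly this point-process geometry.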
Boundary Lax pairs from non-ultra-local Poisson algebras
International Nuclear Information System (INIS)
Avan, Jean; Doikou, Anastasia
2009-01-01
We consider non-ultra-local linear Poisson algebras on a continuous line. Suitable combinations of representations of these algebras yield representations of novel generalized linear Poisson algebras or 'boundary' extensions. They are parametrized by a boundary scalar matrix and depend, in addition, on the choice of an antiautomorphism. The new algebras are the classical-linear counterparts of the known quadratic quantum boundary algebras. For any choice of parameters, the non-ultra-local contribution of the original Poisson algebra disappears. We also systematically construct the associated classical Lax pair. The classical boundary principal chiral model is examined as a physical example.
VISION User Guide - VISION (Verifiable Fuel Cycle Simulation) Model
Energy Technology Data Exchange (ETDEWEB)
Jacob J. Jacobson; Robert F. Jeffers; Gretchen E. Matthern; Steven J. Piet; Benjamin A. Baker; Joseph Grimm
2009-08-01
The purpose of this document is to provide a guide for using the current version of the Verifiable Fuel Cycle Simulation (VISION) model. This is a complex model with many parameters; the user is strongly encouraged to read this user guide before attempting to run the model. This model is an R&D work in progress and may contain errors and omissions. It is based upon numerous assumptions. This model is intended to assist in evaluating “what if” scenarios and in comparing fuel, reactor, and fuel processing alternatives at a systems level for U.S. nuclear power. The model is not intended as a tool for process flow and design modeling of specific facilities nor for tracking individual units of fuel or other material through the system. The model is intended to examine the interactions among the components of a fuel system as a function of time varying system parameters; this model represents a dynamic rather than steady-state approximation of the nuclear fuel system. VISION models the nuclear cycle at the system level, not individual facilities, e.g., “reactor types” not individual reactors and “separation types” not individual separation plants. Natural uranium can be enriched, which produces enriched uranium, which goes into fuel fabrication, and depleted uranium (DU), which goes into storage. Fuel is transformed (transmuted) in reactors and then goes into a storage buffer. Used fuel can be pulled from storage into either separations or disposal. If sent to separations, fuel is transformed (partitioned) into fuel products, recovered uranium, and various categories of waste. Recycled material is stored until used by its assigned reactor type. Note that recovered uranium is itself often partitioned: some RU flows with recycled transuranic elements, some flows with wastes, and the rest is designated RU. RU comes out of storage if needed to correct the U/TRU ratio in new recycled fuel. Neither RU nor DU are designated as wastes. VISION is comprised of several
VISION User Guide - VISION (Verifiable Fuel Cycle Simulation) Model
International Nuclear Information System (INIS)
Jacobson, Jacob J.; Jeffers, Robert F.; Matthern, Gretchen E.; Piet, Steven J.; Baker, Benjamin A.; Grimm, Joseph
2009-01-01
The purpose of this document is to provide a guide for using the current version of the Verifiable Fuel Cycle Simulation (VISION) model. This is a complex model with many parameters; the user is strongly encouraged to read this user guide before attempting to run the model. This model is an R&D work in progress and may contain errors and omissions. It is based upon numerous assumptions. This model is intended to assist in evaluating 'what if' scenarios and in comparing fuel, reactor, and fuel processing alternatives at a systems level for U.S. nuclear power. The model is not intended as a tool for process flow and design modeling of specific facilities nor for tracking individual units of fuel or other material through the system. The model is intended to examine the interactions among the components of a fuel system as a function of time varying system parameters; this model represents a dynamic rather than steady-state approximation of the nuclear fuel system. VISION models the nuclear cycle at the system level, not individual facilities, e.g., 'reactor types' not individual reactors and 'separation types' not individual separation plants. Natural uranium can be enriched, which produces enriched uranium, which goes into fuel fabrication, and depleted uranium (DU), which goes into storage. Fuel is transformed (transmuted) in reactors and then goes into a storage buffer. Used fuel can be pulled from storage into either separations or disposal. If sent to separations, fuel is transformed (partitioned) into fuel products, recovered uranium, and various categories of waste. Recycled material is stored until used by its assigned reactor type. Note that recovered uranium is itself often partitioned: some RU flows with recycled transuranic elements, some flows with wastes, and the rest is designated RU. RU comes out of storage if needed to correct the U/TRU ratio in new recycled fuel. Neither RU nor DU are designated as wastes. VISION is comprised of several Microsoft
Directory of Open Access Journals (Sweden)
Jensen Just
2002-05-01
Abstract In this paper, we consider selection based on the best predictor of animal additive genetic values in Gaussian linear mixed models, threshold models, Poisson mixed models, and log-normal frailty models for survival data (including models with time-dependent covariates) with associated fixed or random effects. In the different models, expressions are given (when these can be found; otherwise unbiased estimates are given) for prediction error variance, accuracy of selection, and expected response to selection on the additive genetic scale and on the observed scale. The expressions given for non-Gaussian traits are generalisations of the well-known formulas for Gaussian traits, and reflect, for Poisson mixed models and frailty models for survival data, the hierarchical structure of the models. In general, the ratio of the additive genetic variance to the total variance in the Gaussian part of the model (heritability on the normally distributed level of the model), or a generalised version of heritability, plays a central role in these formulas.
A user-friendly modified pore-solid fractal model.
Ding, Dian-Yuan; Zhao, Ying; Feng, Hao; Si, Bing-Cheng; Hill, Robert Lee
2016-12-20
The primary objective of this study was to evaluate a range of calculation points on water retention curves (WRC), instead of the singularity point at air-entry suction, in the pore-solid fractal (PSF) model, which additionally considered the hysteresis effect based on PSF theory. The modified pore-solid fractal (M-PSF) model was tested using 26 soil samples from Yangling on the Loess Plateau in China and 54 soil samples from the Unsaturated Soil Hydraulic Database. The derivation results showed that the M-PSF model is user-friendly and flexible for a wide range of calculation point options. The model theoretically describes the primary differences between the soil moisture desorption and adsorption processes through the fractal dimensions. The M-PSF model demonstrated good performance, particularly at the calculation points corresponding to suctions from 100 cm to 1000 cm. Furthermore, the M-PSF model, using the fractal dimension of the particle size distribution, exhibited acceptable performance in WRC predictions for differently textured soils when the suction values were ≥100 cm. To fully understand the function of hysteresis in the PSF theory, the role of allowable and accessible pores must be examined.
Hanford Soil Inventory Model (SIM) Rev. 1 Users Guide
International Nuclear Information System (INIS)
Simpson, Brett C.; Corbin, Rob A.; Anderson, Michael J.; Kincaid, Charles T.
2006-01-01
The focus of the development and application of a soil inventory model as part of the Remediation and Closure Science (RCS) Project managed by PNNL was to develop a probabilistic approach to estimate comprehensive, mass balance-based contaminant inventories for the Hanford Site post-closure setting. The outcome of this effort was the Hanford Soil Inventory Model (SIM). This document is a user's guide for the Hanford SIM. The principal project requirement for the SIM was to provide comprehensive quantitative estimates of contaminant inventory and its uncertainty for the various liquid waste sites, unplanned releases, and past tank farm leaks as a function of time and location at Hanford. The majority, but not all, of these waste sites are in the 200 Areas of Hanford, where chemical processing of spent fuel occurred. A computer model capable of performing these calculations and providing satisfactory quantitative output representing a robust description of contaminant inventory and uncertainty for use in other subsequent models was determined to be satisfactory to address the needs of the RCS Project. The ability to use familiar, commercially available software on high-performance personal computers for data input, modeling, and analysis, rather than custom software on a workstation or mainframe computer for modeling, was desired.
User-Defined Material Model for Progressive Failure Analysis
Knight, Norman F. Jr.; Reeder, James R. (Technical Monitor)
2006-01-01
An overview of different types of composite material system architectures and a brief review of progressive failure material modeling methods used for structural analysis, including failure initiation and material degradation, are presented. Different failure initiation criteria and material degradation models are described that define progressive failure formulations. These progressive failure formulations are implemented in a user-defined material model (or UMAT) for use with the ABAQUS/Standard nonlinear finite element analysis tool. The failure initiation criteria include the maximum stress criteria, maximum strain criteria, the Tsai-Wu failure polynomial, and the Hashin criteria. The material degradation model is based on the ply-discounting approach, where the local material constitutive coefficients are degraded. Applications and extensions of the progressive failure analysis material model address two-dimensional plate and shell finite elements and three-dimensional solid finite elements. Implementation details and use of the UMAT subroutine are described in the present paper. Parametric studies for composite structures are discussed to illustrate the features of the progressive failure modeling methods that have been implemented.
Hanford Soil Inventory Model (SIM) Rev. 1 Users Guide
Energy Technology Data Exchange (ETDEWEB)
Simpson, Brett C.; Corbin, Rob A.; Anderson, Michael J.; Kincaid, Charles T.
2006-09-25
The focus of the development and application of a soil inventory model as part of the Remediation and Closure Science (RCS) Project managed by PNNL was to develop a probabilistic approach to estimate comprehensive, mass balance-based contaminant inventories for the Hanford Site post-closure setting. The outcome of this effort was the Hanford Soil Inventory Model (SIM). This document is a user's guide for the Hanford SIM. The principal project requirement for the SIM was to provide comprehensive quantitative estimates of contaminant inventory and its uncertainty for the various liquid waste sites, unplanned releases, and past tank farm leaks as a function of time and location at Hanford. The majority, but not all, of these waste sites are in the 200 Areas of Hanford, where chemical processing of spent fuel occurred. A computer model capable of performing these calculations and providing satisfactory quantitative output representing a robust description of contaminant inventory and uncertainty for use in other subsequent models was determined to be satisfactory to address the needs of the RCS Project. The ability to use familiar, commercially available software on high-performance personal computers for data input, modeling, and analysis, rather than custom software on a workstation or mainframe computer for modeling, was desired.
Graded geometry and Poisson reduction
Cattaneo, A S; Zambon, M
2009-01-01
The main result of [2] extends the Marsden-Ratiu reduction theorem [4] in Poisson geometry, and is proven by means of graded geometry. In this note we provide the background material about graded geometry necessary for the proof in [2]. Further, we provide an alternative algebraic proof for the main result. ©2009 American Institute of Physics
Proliferation Risk Characterization Model Prototype Model - User and Programmer Guidelines
Energy Technology Data Exchange (ETDEWEB)
Dukelow, J.S.; Whitford, D.
1998-12-01
A model for the estimation of the risk of diversion of weapons-capable materials was developed. It represents both the threat of diversion and site vulnerability as a product of a small number of variables (two to eight), each of which can take on a small number (two to four) of qualitatively defined (but quantitatively implemented) values. The values of the overall threat and vulnerability variables are then converted to threat and vulnerability categories. The threat and vulnerability categories are used to define the likelihood of diversion, also defined categorically. The evaluator supplies an estimate of the consequences of a diversion, defined categorically, but with the categories based on the IAEA Attractiveness levels. Likelihood and Consequences categories are used to define the Risk, also defined categorically. The threat, vulnerability, and consequences input provided by the evaluator contains a representation of his/her uncertainty in each variable assignment, which is propagated all the way through to the calculation of the Risk categories. Appendix G is available on diskette only.
Poisson structure of the equations of ideal multispecies fluid electrodynamics
International Nuclear Information System (INIS)
Spencer, R.G.
1984-01-01
The equations of the two- (or multi-) fluid model of plasma physics are recast in Hamiltonian form, following general methods of symplectic geometry. The dynamical variables are the fields of physical interest, but are noncanonical, so that the Poisson bracket in the theory is not the standard one. However, it is a skew-symmetric bilinear form which, from the method of derivation, automatically satisfies the Jacobi identity; therefore, this noncanonical structure has all the essential properties of a canonical Poisson bracket
Modeling integrated water user decisions in intermittent supply systems
Rosenberg, David E.; Tarawneh, Tarek; Abdel-Khaleq, Rania; Lund, Jay R.
2007-07-01
We apply systems analysis to estimate household water use in an intermittent supply system considering numerous interdependent water user behaviors. Some 39 household actions include conservation; improving local storage or water quality; and accessing sources having variable costs, availabilities, reliabilities, and qualities. A stochastic optimization program with recourse decisions identifies the infrastructure investments and short-term coping actions a customer can adopt to cost-effectively respond to a probability distribution of piped water availability. Monte Carlo simulations show effects for a population of customers. Model calibration reproduces the distribution of billed residential water use in Amman, Jordan. Parametric analyses suggest economic and demand responses to increased availability and alternative pricing. It also suggests potential market penetration for conservation actions, associated water savings, and subsidies to entice further adoption. We discuss new insights to size, target, and finance conservation.
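The 'stochastic optimization program with recourse decisions' described above can be illustrated with a deliberately tiny toy problem (all numbers are invented and are not from the Amman model): a first-stage storage-capacity decision is made before piped-water availability is known, and a second-stage recourse action, buying tanker water, covers whatever shortfall is realized in each scenario.

```python
import numpy as np

demand, cap_cost, tanker_price = 10.0, 1.0, 3.0   # all values invented
avail = np.array([4.0, 8.0, 12.0])   # piped-supply scenarios (units of water)
prob = np.array([0.3, 0.5, 0.2])     # scenario probabilities

def expected_cost(c):
    # First stage: build storage of size c. Second stage (recourse): buy
    # tanker water to cover the shortfall realized in each scenario.
    shortfall = np.maximum(0.0, demand - np.minimum(avail, c))
    return cap_cost * c + tanker_price * (prob @ shortfall)

grid = np.linspace(0.0, 15.0, 301)
best = grid[np.argmin([expected_cost(c) for c in grid])]
print(best)  # ~ 8.0: build to the middle availability scenario
```

The real model optimizes over many interdependent actions rather than one capacity, but the structure is the same: first-stage investments, then scenario-dependent coping actions weighted by their probabilities.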
GCFM Users Guide Revision for Model Version 5.0
Energy Technology Data Exchange (ETDEWEB)
Keimig, Mark A.; Blake, Coleman
1981-08-10
This paper documents alterations made to the MITRE/DOE Geothermal Cash Flow Model (GCFM) in the period of September 1980 through September 1981. Version 4.0 of GCFM was installed on the computer at the DOE San Francisco Operations Office in August 1980. This Version has also been distributed to about a dozen geothermal industry firms, for examination and potential use. During late 1980 and 1981, a few errors detected in the Version 4.0 code were corrected, resulting in Version 4.1. If you are currently using GCFM Version 4.0, it is suggested that you make the changes to your code that are described in Section 2.0. User's manual changes listed in Section 3.0 and Section 4.0 should then also be made.
Computer Agent's Role in Modeling an Online Math Help User
Directory of Open Access Journals (Sweden)
Dragana Martinovic
2007-06-01
This paper investigates perspectives on deploying open learner models on mathematics online help sites. It proposes enhancing regular human-to-human interaction with the involvement of a computer agent suitable for tracking users, checking their input, and making useful suggestions. Such a design would provide the most support for the interlocutors while keeping the nature of the existing environment intact. Special consideration is given to peer-to-peer and expert-to-student mathematics online help that is free of charge and asynchronous. Examples from other collaborative, Web-based environments are also discussed. Suggestions for improving the existing architectures are given, based on the results of a number of studies on online learning systems.
Gorina, Marta; Limonero, Joaquín T; Peñart, Xavier; Jiménez, Jordi; Gassó, Javier
2014-01-01
To determine the level of satisfaction of users who receive home health care through two different models of primary health care: the integrated model and the dispensaries model. Cross-sectional, observational study. Two primary care centers in the province of Barcelona. The questionnaire was administered to 158 chronic patients over 65 years old, of whom 67 were receiving health care under the integrated model and 91 under the dispensaries model. The Evaluation of Satisfaction with Home Health Care (SATISFAD12) questionnaire was administered, together with other complementary questions about satisfaction with the home health care service, as well as sociodemographic questions (age, sex, disease, etc.). The patients of the dispensaries model showed more satisfaction than the users receiving care under the integrated model. There was greater healthcare continuity for patients in the dispensaries model, and a lower percentage of hospitalizations during the last year. The satisfaction of the users of both models was not associated with gender, perceived health, or independence of the patients. The user satisfaction rate with home care by primary health care seems to depend on the characteristics of each organisational model. The dispensaries model shows a higher rate of satisfaction or perceived quality of care in all the aspects analysed. More studies are needed to extrapolate these results to other primary care centers belonging to other institutions. Copyright © 2013 Elsevier España, S.L. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Bradley, D.R.; Gardner, D.R.; Brockmann, J.E.; Griffith, R.O. [Sandia National Labs., Albuquerque, NM (United States)
1993-10-01
The CORCON-Mod3 computer code was developed to mechanistically model the important core-concrete interaction phenomena, including those phenomena relevant to the assessment of containment failure and radionuclide release. The code can be applied to a wide range of severe accident scenarios and reactor plants. The code represents the current state of the art for simulating core debris interactions with concrete. This document comprises the user's manual and gives a brief description of the models and the assumptions and limitations in the code. Also discussed are the input parameters and the code output. Two sample problems are also given.
System cost model user's manual, version 1.2
International Nuclear Information System (INIS)
Shropshire, D.
1995-06-01
The System Cost Model (SCM) was developed by Lockheed Martin Idaho Technologies in Idaho Falls, Idaho and MK-Environmental Services in San Francisco, California to support the Baseline Environmental Management Report sensitivity analysis for the U.S. Department of Energy (DOE). The SCM serves the needs of the entire DOE complex for treatment, storage, and disposal (TSD) of mixed low-level, low-level, and transuranic waste. The model can be used to evaluate total complex costs based on various configuration options or to evaluate site-specific options. The site-specific cost estimates are based on generic assumptions such as waste loads and densities, treatment processing schemes, existing facilities capacities and functions, storage and disposal requirements, schedules, and cost factors. The SCM allows customization of the data for detailed site-specific estimates. There are approximately forty TSD module designs that have been further customized to account for design differences for nonalpha, alpha, remote-handled, and transuranic wastes. The SCM generates cost profiles based on the model default parameters or customized user-defined input and also generates costs for transporting waste from generators to TSD sites.
On the Poisson's ratio of the nucleus pulposus.
Farrell, M D; Riches, P E
2013-10-01
Existing experimental data on the Poisson's ratio of nucleus pulposus (NP) tissue is limited. This study aims to determine whether the Poisson's ratio of NP tissue is strain-dependent, strain-rate-dependent, or varies with axial location in the disk. Thirty-two cylindrical plugs of bovine tail NP tissue were subjected to ramp-hold unconfined compression to 20% axial strain in 5% increments, at either 30 μm/s or 0.3 μm/s ramp speeds and the radial displacement determined using biaxial video extensometry. Following radial recoil, the true Poisson's ratio of the solid phase of NP tissue increased linearly with increasing strain and demonstrated strain-rate dependency. The latter finding suggests that the solid matrix undergoes stress relaxation during the test. For small strains, we suggest a Poisson's ratio of 0.125 to be used in biphasic models of the intervertebral disk.
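As a minimal illustration of the quantity being measured, the Poisson's ratio follows from biaxial strain data as the negative transverse-to-axial strain ratio. The strain values below are made up, chosen only to reproduce the small-strain value of 0.125 suggested above; they are not the paper's bovine data.

```python
def poisson_ratio(radial_strain, axial_strain):
    """Poisson's ratio: negative ratio of transverse to axial strain."""
    return -radial_strain / axial_strain

# Illustrative numbers only: 5% axial compression (negative strain) with
# 0.625% radial bulging (positive strain).
nu = poisson_ratio(0.00625, -0.05)
```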
Graphical User Interface for Simulink Integrated Performance Analysis Model
Durham, R. Caitlyn
2009-01-01
The J-2X Engine (built by Pratt & Whitney Rocketdyne), in the Upper Stage of the Ares I Crew Launch Vehicle, will only start within a certain range of temperature and pressure for its Liquid Hydrogen and Liquid Oxygen propellants. The purpose of the Simulink Integrated Performance Analysis Model is to verify that in all reasonable conditions the temperature and pressure of the propellants are within the required J-2X engine start boxes. In order to run the simulation, test variables must be entered at all reasonable values of parameters such as heat leak and mass flow rate. To make this testing process as efficient as possible, in order to save the maximum amount of time and money, and to show that the J-2X engine will start when it is required to do so, a graphical user interface (GUI) was created to allow the input of values to be used as parameters in the Simulink model, without opening or altering the contents of the model. The GUI must allow test data to come from Microsoft Excel files, allow those values to be edited before testing, place those values into the Simulink model, and get the output from the Simulink model. The GUI was built using MATLAB, and will run the Simulink simulation when the Simulate option is activated. After running the simulation, the GUI constructs a new Microsoft Excel file, as well as a MATLAB matrix file, using the output values for each test of the simulation so that they may be graphed and compared to other values.
Independent production and Poisson distribution
International Nuclear Information System (INIS)
Golokhvastov, A.I.
1994-01-01
The well-known statement of factorization of inclusive cross-sections in the case of independent production of particles (or clusters, jets, etc.), and the conclusion of a Poisson distribution over their multiplicity arising from it, do not follow from probability theory in any way. Applying the theorem on the product of independent probabilities accurately, quite different equations are obtained, and no consequences for multiplicity distributions follow. 11 refs
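For context, one Poisson property that does follow directly from independence is additivity: the sum of independent Poisson counts is again Poisson. A quick numerical check of the pmf convolution (a standard textbook result, not the paper's derivation):

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """P(N = k) for a Poisson random variable with mean lam."""
    return exp(-lam) * lam ** k / factorial(k)

# Convolving independent Poisson(a) and Poisson(b) pmfs reproduces
# the Poisson(a + b) pmf at any fixed total count n.
a, b, n = 1.2, 0.7, 4
conv = sum(poisson_pmf(k, a) * poisson_pmf(n - k, b) for k in range(n + 1))
direct = poisson_pmf(n, a + b)
```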
Liu, Fang; Cao, San-xing; Lu, Rui
2012-04-01
This paper proposes a user credit assessment model based on a clustering ensemble, aiming to solve the problem that users illegally spread pirated and pornographic media content within user self-service oriented broadband network new media platforms. The idea is to assess new media user credit by establishing an indices system based on user credit behaviors; illegal users can then be identified from the assessment results, curbing the bad videos and audios transmitted on the network. The proposed clustering ensemble integrates the advantages of swarm intelligence clustering, which is suitable for user credit behavior analysis, and K-means clustering, which can eliminate the scattered users left over by swarm intelligence clustering, thereby classifying all users' credit automatically. Verification experiments based on a standard credit application dataset from the UCI machine learning repository show, in comparison with a single swarm intelligence clustering model, that the ensemble has a stronger ability to distinguish creditworthiness, especially in finding the user clusters with the best and worst credit, which will help operators take incentive or punitive measures accurately. Besides, compared with the experimental results of a Logistic regression based model under the same conditions, the clustering ensemble is more robust and has better prediction accuracy.
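A two-stage ensemble of the kind described can be sketched with synthetic data: a first over-clustering pass (standing in for the swarm-intelligence stage, which is not reproduced here) followed by a merge step that absorbs scattered fragments into the final credit classes. The data, cluster counts, and deterministic merge rule are all illustrative simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "credit behavior" features for 60 users in two well-separated groups
# (stand-in data; the paper uses real credit application records).
X = np.vstack([rng.normal(0.0, 0.3, size=(30, 2)),
               rng.normal(3.0, 0.3, size=(30, 2))])

def kmeans(X, init_idx, iters=50):
    """Plain k-means with explicit initial centers (deterministic sketch)."""
    centers = X[list(init_idx)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for j in range(len(centers)):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Stage 1 (stand-in for the swarm-intelligence pass): over-cluster into 4 groups.
coarse_labels, coarse_centers = kmeans(X, init_idx=[0, 15, 30, 45])

# Stage 2: merge coarse centroids into 2 credit classes by splitting at the
# farthest-apart centroid pair, eliminating scattered fragments.
d = ((coarse_centers[:, None, :] - coarse_centers) ** 2).sum(-1)
i, j = np.unravel_index(np.argmax(d), d.shape)
merge = np.argmin(((coarse_centers[:, None, :]
                    - coarse_centers[[i, j]]) ** 2).sum(-1), axis=1)
final = merge[coarse_labels]
```

The second pass operates on centroids rather than raw points, which is what lets it clean up fragments produced by the first pass.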
Mobility-Aware User Association in Uplink Cellular Networks
Arshad, Rabe
2017-07-20
This letter studies mobility-aware user-to-BS association policies, within a stochastic geometry framework, in two-tier uplink cellular networks with fractional channel inversion power control. Particularly, we model the base stations' locations using the widely accepted Poisson point process and obtain the coverage probability and handover cost expressions for the coupled and decoupled uplink and downlink associations. To this end, we compute the average throughput for the mobile users and study the merits and demerits of each association strategy.
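The Poisson-point-process ingredient can be illustrated by checking the PPP void probability that underlies nearest-BS association: the probability that no base station lies within distance r of the user is exp(-λπr²). The density, window size, and radius below are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(42)
lam = 5.0      # BS density per unit area (assumed)
half = 10.0    # simulation half-width; large enough to ignore edge effects
trials = 4000

def nearest_bs_distance(rng, lam, half):
    """Drop a PPP of base stations and return the distance from the origin
    (the typical user) to its nearest BS -- the nearest-BS association rule."""
    n = rng.poisson(lam * (2 * half) ** 2)          # Poisson number of BSs
    pts = rng.uniform(-half, half, size=(n, 2))     # uniform given the count
    return np.sqrt((pts ** 2).sum(axis=1).min())

r = 0.25
emp = np.mean([nearest_bs_distance(rng, lam, half) > r for _ in range(trials)])
theory = np.exp(-lam * np.pi * r ** 2)              # PPP void probability
```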
Despont-Gros, Christelle; Mueller, Henning; Lovis, Christian
2005-06-01
This article proposes a model for dimensions involved in user evaluation of clinical information systems (CIS). The model links the dimensions in traditional CIS evaluation and the dimensions from the human-computer interaction (HCI) perspective. In this article, variables are defined as the properties measured in an evaluation, and dimensions are defined as the factors contributing to the values of the measured variables. The proposed model is based on a two-step methodology with: (1) a general review of information systems (IS) evaluations to highlight studied variables, existing models and frameworks, and (2) a review of HCI literature to provide the theoretical basis to key dimensions of user evaluation. The review of literature led to the identification of eight key variables, among which satisfaction, acceptance, and success were found to be the most referenced. Among those variables, IS acceptance is a relevant candidate to reflect user evaluation of CIS. While their goals are similar, the fields of traditional CIS evaluation, and HCI are not closely connected. Combining those two fields allows for the development of an integrated model which provides a model for summative and comprehensive user evaluation of CIS. All dimensions identified in existing studies can be linked to this model and such an integrated model could provide a new perspective to compare investigations of different CIS systems.
Energy Technology Data Exchange (ETDEWEB)
Wu, K.T.; Li, B.; Payne, R.
1992-06-01
This manual presents and describes a package of computer models uniquely developed for boiler thermal performance and emissions evaluations by the Energy and Environmental Research Corporation. The model package permits boiler heat transfer, fuels combustion, and pollutant emissions predictions related to a number of practical boiler operations such as fuel-switching, fuels co-firing, and reburning NO{sub x} reductions. The models are adaptable to most boiler/combustor designs and can handle burner fuels in solid, liquid, gaseous, and slurried forms. The models are also capable of performing predictions for combustion applications involving gaseous-fuel reburning, and co-firing of solid/gas, liquid/gas, gas/gas, and slurry/gas fuels. The model package is named BPACK (Boiler Package) and consists of six computer codes, of which three are main computational codes and the other three are input codes. The three main codes are: (a) a two-dimensional furnace heat-transfer and combustion code; (b) a detailed chemical-kinetics code; and (c) a boiler convective passage code. This user's manual presents the computer model package in two volumes. Volume 1 describes in detail a number of topics which are of general users' interest, including the physical and chemical basis of the models, a complete description of the model applicability, options, input/output, and the default inputs. Volume 2 contains a detailed record of the worked examples to assist users in applying the models, and to illustrate the versatility of the codes.
Making the Invisible Visible: Personas and Mental Models of Distance Education Library Users
Lewis, Cynthia; Contrino, Jacline
2016-01-01
Gaps between users' and designers' mental models of digital libraries often result in adverse user experiences. This article details an exploratory user research study at a large, predominantly online university serving non-traditional distance education students with the goal of understanding these gaps. Using qualitative data, librarians created…
Park, Eunil; Kim, Ki Joon
2013-01-01
Purpose: The aim of this paper is to propose an integrated path model in order to explore user acceptance of long-term evolution (LTE) services by examining potential causal relationships between key psychological factors and user intention to use the services. Design/methodology/approach: Online survey data collected from 1,344 users are analysed…
Cognitive Support using BDI Agent and Adaptive User Modeling
DEFF Research Database (Denmark)
Hossain, Shabbir
2012-01-01
challenges of an ageing population. This thesis work is one attempt towards that. The thesis focused on researching approaches to provide cognitive support for users with cognitive disabilities through ICT-based technological solutions. Recent advancements in Artificial Intelligence and wireless sensor...... networks have shown potential to improve the quality of life of elderly people with disabilities using current technologies. The primary objective of this thesis is to conduct research on the approach to provide support for elderly users with cognitive disabilities. In our research conduct, we have defined...... a set of goals for attaining the objective of this thesis. The initial goal is to recognize the activities of the users to assess the need of support for the user during the activity. However, one of the challenges of the recognition process is the adaptability for variant user behaviour due to physical...
Tîrnăucă, Cristina; Duque, Rafael; Montaña, José L
2017-07-20
A relevant goal in human-computer interaction is to produce applications that are easy to use and well-adjusted to their users' needs. To address this problem it is important to know how users interact with the system. This work constitutes a methodological contribution capable of identifying the context of use in which users perform interactions with a groupware application (synchronous or asynchronous) and provides, using machine learning techniques, generative models of how users behave. Additionally, these models are transformed into a text that describes in natural language the main characteristics of the interaction of the users with the system.
OpenDolphin: presentation models for compelling user interfaces
CERN. Geneva
2014-01-01
Shared applications run on the server. They still need a display, though, be it on the web or on the desktop. OpenDolphin introduces a shared presentation model to clearly differentiate between "what" to display and "how" to display. The "what" is managed on the server and is independent of the UI technology whereas the "how" can fully exploit the UI capabilities like the ubiquity of the web or the power of the desktop in terms of interactivity, animations, effects, 3D worlds, and local devices. If you run a server-centric architecture and still seek to provide the best possible user experience, then this talk is for you. About the speaker Dierk König (JavaOne Rock Star) works as a fellow for Canoo Engineering AG, Basel, Switzerland. He is a committer to many open-source projects including OpenDolphin, Groovy, Grails, GPars and GroovyFX. He is lead author of the "Groovy in Action" book, which is among ...
A user-oriented model for global enterprise portal design
Feng, X.; Ehrenhard, Michel Léon; Hicks, Jeff; Maathuis, Stephanus Johannes; Maathuis, S.J.; Hou, Y.
2010-01-01
Enterprise portals collect and synthesise information from various systems to deliver personalised and highly relevant information to users. Enterprise portals' design and applications are widely discussed in the literature; however, the implications of portal design in a global networked
User Requirements Model for University Timetable Management System
Ahmad Althunibat; Mohammad I. Muhairat
2016-01-01
Automated timetables are used to schedule courses, lectures and rooms in universities by considering some constraints. Inconvenient and ineffective timetables often waste time and money. Therefore, it is important to investigate the requirements and potential needs of users. Thus, eliciting user requirements of University Timetable Management System (TMS) and their implication becomes an important process for the implementation of TMS. On this note, this paper seeks to propose a m...
Energy Technology Data Exchange (ETDEWEB)
VanKuiken, J.C.; Veselka, T.D.; Guziel, K.A.; Blodgett, D.W.; Hamilton, S.; Kavicky, J.A.; Koritarov, V.S.; North, M.J.; Novickas, A.A.; Paprockas, K.R. [and others
1994-11-01
This report describes operating procedures and background documentation for the Argonne Production, Expansion, and Exchange Model for Electrical Systems (APEX). This modeling system was developed to provide the U.S. Department of Energy, Division of Fossil Energy, Office of Coal and Electricity with in-house capabilities for addressing policy options that affect electrical utilities. To meet this objective, Argonne National Laboratory developed a menu-driven programming package that enables the user to develop and conduct simulations of production costs, system reliability, spot market network flows, and optimal system capacity expansion. The APEX system consists of three basic simulation components, supported by various databases and data management software. The components include (1) the Investigation of Costs and Reliability in Utility Systems (ICARUS) model, (2) the Spot Market Network (SMN) model, and (3) the Production and Capacity Expansion (PACE) model. The ICARUS model provides generating-unit-level production-cost and reliability simulations with explicit recognition of planned and unplanned outages. The SMN model addresses optimal network flows with recognition of marginal costs, wheeling charges, and transmission constraints. The PACE model determines long-term (e.g., longer than 10 years) capacity expansion schedules on the basis of candidate expansion technologies and load growth estimates. In addition, the Automated Data Assembly Package (ADAP) and case management features simplify user-input requirements. The ADAP, ICARUS, and SMN modules are described in detail. The PACE module is expected to be addressed in a future publication.
Perturbation-induced emergence of Poisson-like behavior in non-Poisson systems
International Nuclear Information System (INIS)
Akin, Osman C; Grigolini, Paolo; Paradisi, Paolo
2009-01-01
The response of a system with ON–OFF intermittency to an external harmonic perturbation is discussed. ON–OFF intermittency is described by means of a sequence of random events, i.e., the transitions from the ON to the OFF state and vice versa. The unperturbed waiting times (WTs) between two events are assumed to satisfy a renewal condition, i.e., the WTs are statistically independent random variables. The response of a renewal model with non-Poisson ON–OFF intermittency, associated with non-exponential WT distribution, is analyzed by looking at the changes induced in the WT statistical distribution by the harmonic perturbation. The scaling properties are also studied by means of diffusion entropy analysis. It is found that, in the range of fast and relatively strong perturbation, the non-Poisson system displays a Poisson-like behavior in both WT distribution and scaling. In particular, the histogram of perturbed WTs becomes a sequence of equally spaced peaks, with intensity decaying exponentially in time. Further, the diffusion entropy detects an ordinary scaling (related to normal diffusion) instead of the expected unperturbed anomalous scaling related to the inverse power-law decay. Thus, an analysis based on the WT histogram and/or on scaling methods has to be considered with some care when dealing with perturbed intermittent systems.
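A minimal sketch of the unperturbed renewal setup: waiting times drawn from an inverse-power-law survival function Ψ(t) = (T/(T+t))^(μ-1), compared against an exponential (Poisson) reference with the same mean. The parameter values are illustrative, not those of the paper; the contrast shown is the heavy tail that distinguishes the non-Poisson case.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000

# Non-Poisson renewal WTs via inverse-CDF sampling of
# Psi(t) = (T / (T + t))**(mu - 1); mu and T are illustrative.
mu, T = 2.5, 1.0
u = rng.uniform(size=n)
wt_powerlaw = T * ((1.0 - u) ** (-1.0 / (mu - 1.0)) - 1.0)

# Poisson reference: exponential WTs with the same mean T/(mu - 2).
mean_wt = T / (mu - 2.0)
wt_exp = rng.exponential(mean_wt, size=n)

# Heavy tail: far more long waiting times than the exponential allows.
tail = 10 * mean_wt
frac_pl = (wt_powerlaw > tail).mean()
frac_exp = (wt_exp > tail).mean()
```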
White, Michael J; Harmel, R Daren; Arnold, Jeff G; Williams, Jimmy R
2014-01-01
The Soil and Water Assessment Tool (SWAT) is a basin-scale hydrologic model developed by the United States Department of Agriculture Agricultural Research Service. SWAT's broad applicability, user-friendly model interfaces, and automatic calibration software have led to a rapid increase in the number of new users. These advancements also allow less experienced users to conduct SWAT modeling applications. In particular, the use of automated calibration software may produce simulated values that appear appropriate because they adequately mimic measured data used in calibration and validation. Autocalibrated model applications (and often those of inexperienced modelers) may contain input data errors and inappropriate parameter adjustments not readily identified by users or the autocalibration software. The objective of this research was to develop a program to assist users in the identification of potential model application problems. The resulting "SWAT Check" is a stand-alone Microsoft Windows program that (i) reads selected SWAT output and alerts users of values outside the typical range; (ii) creates process-based figures for visualization of the appropriateness of output values, including important outputs that are commonly ignored; and (iii) detects and alerts users of common model application errors. By alerting users to potential model application problems, this software should assist the SWAT community in developing more reliable modeling applications. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
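The alert mechanism in item (i) can be sketched as a simple range check. The variable names and "typical range" bounds below are invented placeholders, not SWAT Check's actual tables.

```python
# Hypothetical typical-range table (SWAT Check ships its own empirical bounds).
TYPICAL_RANGES = {
    "ET_mm": (300.0, 1200.0),          # annual evapotranspiration, mm
    "curve_number": (30.0, 98.0),
    "sediment_yield_t_ha": (0.0, 50.0),
}

def check_outputs(outputs):
    """Return a list of warnings for outputs outside their typical range."""
    warnings = []
    for name, value in outputs.items():
        lo, hi = TYPICAL_RANGES.get(name, (float("-inf"), float("inf")))
        if not lo <= value <= hi:
            warnings.append(f"{name}={value} outside typical range [{lo}, {hi}]")
    return warnings

# An implausibly low ET value is flagged; the curve number passes.
warnings = check_outputs({"ET_mm": 150.0, "curve_number": 75.0})
```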
Stress Calculation of a TRISO Coated Particle Fuel by Using a Poisson's Ratio in Creep Condition
International Nuclear Information System (INIS)
Cho, Moon-Sung; Kim, Y. M.; Lee, Y. W.; Jeong, K. C.; Kim, Y. K.; Oh, S. C.; Kim, W. K.
2007-01-01
KAERI, which has been carrying out the Korean VHTR (Very High Temperature modular gas cooled Reactor) project since 2004, has been developing a performance analysis code for the TRISO coated particle fuel named COPA (COated Particle fuel Analysis). COPA predicts temperatures, stresses, fission gas release, and failure probabilities of a coated particle fuel in normal operating conditions. KAERI, on the other hand, is developing an ABAQUS based finite element (FE) model to cover the non-linear behaviors of a coated particle fuel such as cracking or debonding of the TRISO coating layers. Using the ABAQUS based FE model, verification calculations were carried out for the IAEA CRP-6 benchmark problems involving creep, swelling, and pressure. However, in this model the Poisson's ratio for the elastic solution was used for the creep strain calculation. In this study, an improvement is made to the ABAQUS based finite element model by using the Poisson's ratio in the creep condition for the calculation of the creep strain rate. Since direct input of the coefficient in a creep condition is not possible, a user subroutine for the ABAQUS solution is prepared in FORTRAN for use in the calculations of the creep strain of the coating layers in the radial and hoop directions of the spherical fuel. This paper shows the calculation results for a TRISO coated particle fuel subject to an irradiation condition assumed as in Miller's publication, in comparison with the results obtained from the old FE model used in the CRP-6 benchmark calculations.
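A sketch of the kind of creep-strain increment such a user subroutine computes, written here in Python rather than FORTRAN: a linear creep law combined with a creep Poisson's ratio ν_c = 0.5 (volume-preserving creep). The creep constant is a placeholder, not PyC/SiC property data, and the geometry is reduced to the two independent directions of a spherical layer.

```python
def creep_increments(sigma_r, sigma_t, dt, A=1e-29, nu_c=0.5):
    """Radial and tangential creep strain increments for a spherical layer.

    Creep analogue of Hooke's law: each direction responds to its own stress
    minus nu_c times the stresses in the other directions; the two tangential
    directions coincide by spherical symmetry. A is an assumed creep constant.
    """
    d_eps_r = A * dt * (sigma_r - 2.0 * nu_c * sigma_t)
    d_eps_t = A * dt * ((1.0 - nu_c) * sigma_t - nu_c * sigma_r)
    return d_eps_r, d_eps_t

# With nu_c = 0.5, purely hydrostatic stress produces no creep strain,
# which is the volume-preserving property the creep Poisson's ratio encodes.
dr, dt_ = creep_increments(-100e6, -100e6, dt=3600.0)
```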
User Instructions for the Policy Analysis Modeling System (PAMS)
Energy Technology Data Exchange (ETDEWEB)
McNeil, Michael A.; Letschert, Virginie E.; Van Buskirk, Robert D.
2018-03-13
PAMS uses country-specific and product-specific data to calculate estimates of impacts of a Minimum Efficiency Performance Standard (MEPS) program. The analysis tool is self-contained in a Microsoft Excel spreadsheet, and requires no links to external data, or special code additions to run. The analysis can be customized to a particular program without additional user input, through the use of the pull-down menus located on the Summary page. In addition, the spreadsheet contains many areas into which user-generated input data can be entered for increased accuracy of projection. The following is a step-by-step guide for using and customizing the tool.
Deriving user-informed climate information from climate model ensemble results
Directory of Open Access Journals (Sweden)
H. Huebener
2017-07-01
Communication between providers and users of climate model simulation results still needs to be improved. In the German regional climate modeling project ReKliEs-De, a midterm user workshop was conducted to allow the intended users of the project results to assess the preliminary results and to streamline the final project results to their needs. The user feedback highlighted, in particular, the still considerable gap between climate research output and user-tailored input for climate impact research. Two major requests from the user community addressed the selection of sub-ensembles and some condensed, easy-to-understand information on the strengths and weaknesses of the climate models involved in the project.
User Information Fusion Decision Making Analysis with the C-OODA Model
2011-07-01
Observe-Orient-Decide-Act (C-OODA) model as a method of user and team analysis in the context of the Data Fusion Information Group (DFIG) Information...Fusion Model. From the DFIG model [as an update to the Joint Directors of the Lab (JDL) model], we look at Level 5 Fusion of "user refinement" in...OODA comparisons to the DFIG model support systems evaluation and analysis as well as coordinating the time interval of interaction between the machine
Development of a user-friendly interface version of the Salmonella source-attribution model
DEFF Research Database (Denmark)
Hald, Tine; Lund, Jan
with a user-manual, which is also part of this report. Users of the interface are recommended to read this report before starting using the interface to become familiar with the model principles and the mathematics behind, which is required in order to interpret the model results and assess the validity...
Stated choice models for predicting the impact of user fees at public recreation sites
Herbert W. Schroeder; Jordan Louviere
1999-01-01
A crucial question in the implementation of fee programs is how the users of recreation sites will respond to various levels and types of fees. Stated choice models can help managers anticipate the impact of user fees on people's choices among the alternative recreation sites available to them. Models developed for both day and overnight trips to several areas and...
Visual imagery and the user model applied to fuel handling at EBR-II
International Nuclear Information System (INIS)
Brown-VanHoozer, S.A.
1995-01-01
The material presented in this paper is based on two studies involving visual display designs and the user's perspective model of a system. The studies involved a methodology known as Neuro-Linguistic Programming (NLP), and its use in expanding design choices which included the "comfort parameters" and "perspective reality" of the user's model of the world. (author)
Simulation Architecture for Modelling Interaction Between User and Elbow-articulated Exoskeleton
Kruif, B.J. de; Schmidhauser, E.; Stadler, K.S.; O'Sullivan, L.W.
2017-01-01
The aim of our work is to improve the existing user-exoskeleton models by introducing a simulation architecture that can simulate its dynamic interaction, thereby altering the initial motion of the user. A simulation architecture is developed that uses the musculoskeletal models from OpenSim, and
Comparison between two bivariate Poisson distributions through the ...
African Journals Online (AJOL)
To remedy this problem, Berkhout and Plug proposed a bivariate Poisson distribution allowing the correlation to be negative, zero, or positive. In this paper, we show that these models are almost everywhere asymptotically equal. It follows from this study that the φ-divergence converges toward zero, and both models are ...
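For a concrete instance of a φ-divergence between two Poisson laws, the Kullback-Leibler divergence has the closed form a·log(a/b) + b − a, which a truncated sum over the pmfs reproduces. The means a and b are illustrative; this is a standard identity, not the paper's bivariate construction.

```python
from math import exp, factorial, log

def poisson_pmf(k, lam):
    return exp(-lam) * lam ** k / factorial(k)

def kl_poisson(a, b, kmax=60):
    """KL divergence (a phi-divergence) between Poisson(a) and Poisson(b),
    truncated at kmax; the tail beyond kmax is negligible for small means."""
    return sum(poisson_pmf(k, a) * log(poisson_pmf(k, a) / poisson_pmf(k, b))
               for k in range(kmax + 1))

a, b = 2.0, 2.5
approx = kl_poisson(a, b)
exact = a * log(a / b) + b - a   # closed form for two Poisson laws
```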
Directory of Open Access Journals (Sweden)
Vesna Kumbaroska
2017-04-01
Revealing an endless array of user behaviors in an online environment is a very good indicator of the user's interests, either in the process of browsing or in purchasing. One such behavior is navigation behavior, so detected user navigation patterns can be used for practical purposes such as improving user engagement, turning most browsers into buyers, or personalizing content or interface. In this regard, our research represents a connection between navigation modelling and user engagement. A usage of the Generalized Stochastic Petri Nets concept for stochastic behavioral-based modelling of the navigation process is proposed for measuring user engagement components. Different types of users are automatically identified and clustered according to their navigation behaviors, thus the developed model gives great insight into the navigation process. As part of this study, Peterson's model for measuring user engagement is explored and a direct calculation of its components is illustrated. At the same time, assuming that several user sessions/visits are initialized in a certain time frame, following the Petri Nets dynamics indicates that the proposed behavioral-based model could be used for user engagement metrics calculation; some basic ideas are discussed and initial directions are given.
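The idea of deriving engagement components from a fitted stochastic navigation model can be sketched with a plain Markov chain, a simplification of the Petri-net dynamics. The pages, transition probabilities, and the two metrics below (session depth and conversion) are illustrative inventions, not the paper's fitted model or Peterson's exact components.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy navigation model: an assumed transition matrix over five pages.
pages = ["Home", "Catalog", "Product", "Checkout", "Exit"]
P = np.array([
    [0.00, 0.60, 0.20, 0.00, 0.20],   # Home
    [0.10, 0.00, 0.60, 0.00, 0.30],   # Catalog
    [0.00, 0.30, 0.00, 0.30, 0.40],   # Product
    [0.00, 0.00, 0.00, 0.00, 1.00],   # Checkout -> session ends
    [0.00, 0.00, 0.00, 0.00, 1.00],   # Exit (absorbing)
])

def simulate_session(rng, max_steps=50):
    """Walk the chain from Home until Exit; return the pages visited."""
    path, state = ["Home"], 0
    for _ in range(max_steps):
        state = rng.choice(len(pages), p=P[state])
        path.append(pages[state])
        if state == 4:
            break
    return path

sessions = [simulate_session(rng) for _ in range(2000)]
# Two simple engagement components derived from the simulated dynamics.
depth = np.mean([len(s) for s in sessions])                 # pages per session
conversion = np.mean([("Checkout" in s) for s in sessions]) # reach-checkout rate
```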
Structural equation modeling of users' response to wilderness recreation fees
Daniel R. Williams; Christine A. Vogt; Joar Vitterso
1999-01-01
This paper examines wilderness users' response to recently established overnight camping fees at the Desolation Wilderness in California. Fee program evaluations have typically focused on economic or revenue issues, distributional or equity impacts of various pricing strategies, and questions of price fairness. In the case of wilderness recreation fees, it is also...
HIV prevention among drug and alcohol users: models of ...
African Journals Online (AJOL)
The spread of HIV among drug and alcohol users, as a high-risk group, is a significant problem in Africa, as in other parts of the world. Few programs have been implemented in Africa to deal specifically with this issue. Since November 2006, the AED Capable Partners Program in Kenya project has provided technical ...
Semiotic user interface analysis of building information model systems
Hartmann, Timo
2013-01-01
To promote understanding of how to use building information (BI) systems to support communication, this paper uses computer semiotic theory to study how user interfaces of BI systems operate as a carrier of meaning. More specifically, the paper introduces a semiotic framework for the analysis of BI
Parasites et parasitoses des poissons
De Kinkelin, Pierre; Morand, Marc; Hedrick, Ronald; Michel, Christian
2014-01-01
This richly illustrated book offers a representative overview of the parasitic agents encountered in fish. Building on the new conceptions of phylogenetic classification, it emphasizes the biological properties, epidemiology and clinical consequences of the groups of organisms involved, in light of the advances in knowledge made possible by the new tools of biology. It is intended for a broad readership, ranging from the world of aquaculture to those of health...
Dualizing the Poisson summation formula.
Duffin, R J; Weinberger, H F
1991-01-01
If f(x) and g(x) are a Fourier cosine transform pair, then the Poisson summation formula can be written as 2∑_{n=1}^∞ g(n) + g(0) = 2∑_{n=1}^∞ f(n) + f(0). The concepts of linear transformation theory lead to the following dual of this classical relation. Let φ(x) and γ(x) = φ(1/x)/x have absolutely convergent integrals over the positive real line. Let F(x) = ∑_{n=1}^∞ φ(n/x)/x − ∫_0^∞ φ(t) dt and G(x) = ∑_{n=1}^∞ γ(n/x)/x − ∫_0^∞ γ(t) dt. Then F(x) and G(x) are a Fourier cosine transform pair. We term F(x) the "discrepancy" of φ because it is the error in estimating the integral of φ by its Riemann sum with the constant mesh spacing 1/x. PMID:11607208
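The classical identity being dualized is easy to sanity-check numerically; a quick sketch using the scaled Gaussian cosine-transform pair f(x) = e^{-πtx²}, g(y) = t^{-1/2}·e^{-πy²/t} (our choice of pair for illustration, not taken from the abstract):

```python
import math

def two_sided_sum(t, terms=50):
    # 2 * sum_{n>=1} e^{-pi t n^2} + 1, i.e. 2*sum f(n) + f(0) for f(x) = e^{-pi t x^2}
    return 1.0 + 2.0 * sum(math.exp(-math.pi * t * n * n) for n in range(1, terms))

t = 2.0
lhs = two_sided_sum(t)                       # 2*sum f(n) + f(0)
rhs = two_sided_sum(1.0 / t) / math.sqrt(t)  # 2*sum g(n) + g(0), g = cosine transform of f
print(abs(lhs - rhs))  # difference is at machine-precision level
```

With this pair the summation formula reduces to the Jacobi theta functional equation, so the two sides agree to machine precision.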
Singular reduction of Nambu-Poisson manifolds
Das, Apurba
A version of the Marsden-Ratiu Poisson reduction theorem for Nambu-Poisson manifolds reduced by a regular foliation has been studied by Ibáñez et al. In this paper, we show that this reduction procedure can be extended to the singular case. Under a suitable notion of Hamiltonian flow on the reduced space, we show that a set of Hamiltonians on a Nambu-Poisson manifold can also be reduced.
LDA-Based Unified Topic Modeling for Similar TV User Grouping and TV Program Recommendation.
Pyo, Shinjee; Kim, Eunhui; Kim, Munchurl
2015-08-01
Social TV is a social media service via TV and social networks through which TV users exchange their experiences about TV programs that they are viewing. For social TV service, two technical aspects are envisioned: grouping of similar TV users to create social TV communities and recommending TV programs based on group and personal interests for personalizing TV. In this paper, we propose a unified topic model based on grouping of similar TV users and recommending TV programs as a social TV service. The proposed unified topic model employs two latent Dirichlet allocation (LDA) models. One is a topic model of TV users, and the other is a topic model of the description words for viewed TV programs. The two LDA models are then integrated via a topic proportion parameter for TV programs, which enforces the grouping of similar TV users and associated description words for watched TV programs at the same time in a unified topic modeling framework. The unified model identifies the semantic relation between TV user groups and TV program description word groups so that more meaningful TV program recommendations can be made. The unified topic model also overcomes an item ramp-up problem such that new TV programs can be reliably recommended to TV users. Furthermore, from the topic model of TV users, TV users with similar tastes can be grouped as topics, which can then be recommended as social TV communities. To verify our proposed method of unified topic-modeling-based TV user grouping and TV program recommendation for social TV services, in our experiments, we used real TV viewing history data and electronic program guide data from a seven-month period collected by a TV poll agency. The experimental results show that the proposed unified topic model yields an average 81.4% precision for 50 topics in TV program recommendation and its performance is an average of 6.5% higher than that of the topic model of TV users only. For TV user prediction with new TV programs, the average
An Automatic User Grouping Model for a Group Recommender System in Location-Based Social Networks
Directory of Open Access Journals (Sweden)
Elahe Khazaei
2018-02-01
Full Text Available Spatial group recommendation refers to suggesting places to a given set of users. In a group recommender system, members of a group should have similar preferences in order to increase the level of satisfaction. Location-based social networks (LBSNs) provide rich content, such as user interactions and location/event descriptions, which can be leveraged for group recommendations. In this paper, an automatic user grouping model is introduced that obtains information about users and their preferences through an LBSN. The preferences of the users, proximity of the places the users have visited in terms of spatial range, users’ free days, and the social relationships among users are extracted automatically from location histories and users’ profiles in the LBSN. These factors are combined to determine the similarities among users. The users are partitioned into groups based on these similarities. Group size is the key to coordinating group members and enhancing their satisfaction. Therefore, a modified k-medoids method is developed to cluster users into groups with specific sizes. To evaluate the efficiency of the proposed method, its mean intra-cluster distance and its distribution of cluster sizes are compared to those of general clustering algorithms. The results reveal that the proposed method compares favourably with general clustering approaches, such as k-medoids and spectral clustering, in separating users into groups of a specific size with a lower mean intra-cluster distance.
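The authors' size-constrained modification is not reproduced here, but the plain k-medoids loop it builds on can be sketched as follows (a minimal illustration with hypothetical toy data, not the paper's algorithm):

```python
import numpy as np

def k_medoids(X, k, iters=100, seed=0):
    """Plain alternating k-medoids on Euclidean distances between rows of X."""
    rng = np.random.default_rng(seed)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(D[:, medoids], axis=1)  # assign each point to nearest medoid
        # new medoid of each cluster = member minimizing total distance to the others
        new = np.array([
            np.where(labels == j)[0][
                np.argmin(D[np.ix_(labels == j, labels == j)].sum(axis=0))]
            for j in range(k)
        ])
        if np.array_equal(new, medoids):
            break
        medoids = new
    return medoids, labels

# toy example: two well-separated blobs
X = np.array([[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]], float)
med, lab = k_medoids(X, 2)
```

On separated blobs the loop recovers the natural two-group partition; a fixed-size variant would add a constraint to the assignment step.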
Research on user behavior authentication model based on stochastic Petri nets
Zhang, Chengyuan; Xu, Haishui
2017-08-01
A behavioural authentication model based on stochastic Petri nets is proposed to capture the randomness, uncertainty and concurrency of user behaviour. The model uses the places, transitions, arcs and tokens of the stochastic Petri net to describe the various authentication steps and game relationships, so as to effectively implement a graphical analysis method for user behaviour authentication; the corresponding proofs verify that the model is valuable.
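The stochastic-Petri-net machinery behind such a model can be sketched with race semantics between exponentially timed transitions; the net below is a hypothetical two-transition login example of our own, not the authors' model:

```python
import random

def fire_sequence(marking, transitions, steps, seed=0):
    """transitions: list of (name, rate, consume, produce) over places.
    At each step, enabled transitions race with exponential delays;
    the transition sampling the shortest delay fires."""
    rng = random.Random(seed)
    trace = []
    for _ in range(steps):
        enabled = [t for t in transitions
                   if all(marking.get(p, 0) >= n for p, n in t[2].items())]
        if not enabled:
            break
        name, rate, consume, produce = min(
            enabled, key=lambda t: rng.expovariate(t[1]))
        for p, n in consume.items():
            marking[p] -= n
        for p, n in produce.items():
            marking[p] = marking.get(p, 0) + n
        trace.append(name)
    return trace

# hypothetical net: a user either authenticates (rate 2.0) or retries (rate 1.0)
net = [("auth",  2.0, {"trying": 1}, {"done": 1}),
       ("retry", 1.0, {"trying": 1}, {"trying": 1})]
trace = fire_sequence({"trying": 1}, net, steps=10)
```

Once "auth" fires, no transition is enabled and the run stops, mimicking how a marking encodes the authentication state.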
INTERLINE, a railroad routing model: program description and user's manual
International Nuclear Information System (INIS)
Peterson, B.E.
1985-11-01
INTERLINE is an interactive computer program that finds likely routes for shipments over the US railroad system. It is based on a shortest path algorithm modified both to reflect the nature of railroad company operations and to accommodate computer resource limitations in dealing with a large transportation network. The first section of the report discusses the nature of railroad operations and routing practices in the United States, including the tendency to concentrate traffic on a limited number of mainlines, the competition for traffic by different companies operating in the same corridors, and the tendency of originating carriers to retain traffic on their systems before transferring it to terminating carriers. The theoretical foundation and operation of shortest path algorithms are described, as well as the techniques used to simulate actual operating practices within this framework. The second section is a user's guide that describes the program operation and data structures, program features, and user access. 11 refs., 11 figs
Poisson-Fermi Formulation of Nonlocal Electrostatics in Electrolyte Solutions
Directory of Open Access Journals (Sweden)
Liu Jinn-Liang
2017-10-01
Full Text Available We present a nonlocal electrostatic formulation of nonuniform ions and water molecules with interstitial voids that uses a Fermi-like distribution to account for steric and correlation effects in electrolyte solutions. The formulation is based on the volume exclusion of hard spheres leading to a steric potential and Maxwell’s displacement field with Yukawa-type interactions resulting in a nonlocal electric potential. The classical Poisson-Boltzmann model fails to describe steric and correlation effects important in a variety of chemical and biological systems, especially under the high-field or large-concentration conditions found in and near binding sites, ion channels, and electrodes. Steric effects and correlations are apparent when we compare nonlocal Poisson-Fermi results to Poisson-Boltzmann calculations in the electric double layer and to experimental measurements on the selectivity of potassium channels for K+ over Na+.
OCTAVIA, Johanna; BEZNOSYK, Anastasiia; CONINX, Karin; QUAX, Peter; LUYTEN, Kris
2011-01-01
This paper focuses on how adaptation of users' roles based on a collaborative user model can improve group interaction in collaborative 3D games. We aim to provide adaptation for users based on their individual performance and preferences while collaborating in a 3D puzzle game. Four different user modeling approaches are considered to build collaborative user models. Through an experiment, we present the validation of these approaches for two different cases: co-located collaboration and remote collaboration.
Thinning spatial point processes into Poisson processes
DEFF Research Database (Denmark)
Møller, Jesper; Schoenberg, Frederic Paik
This paper describes methods for randomly thinning certain classes of spatial point processes. In the case of a Markov point process, the proposed method involves a dependent thinning of a spatial birth-and-death process, where clans of ancestors associated with the original points are identified, and where one simulates backwards and forwards in order to obtain the thinned process. In the case of a Cox process, a simple independent thinning technique is proposed. In both cases, the thinning results in a Poisson process if and only if the true Papangelou conditional intensity is used, and thus can be used as a diagnostic for assessing the goodness-of-fit of a spatial point process model. Several examples, including clustered and inhibitive point processes, are considered.
Thinning spatial point processes into Poisson processes
DEFF Research Database (Denmark)
Møller, Jesper; Schoenberg, Frederic Paik
2010-01-01
In this paper we describe methods for randomly thinning certain classes of spatial point processes. In the case of a Markov point process, the proposed method involves a dependent thinning of a spatial birth-and-death process, where clans of ancestors associated with the original points are identified, and where we simulate backwards and forwards in order to obtain the thinned process. In the case of a Cox process, a simple independent thinning technique is proposed. In both cases, the thinning results in a Poisson process if and only if the true Papangelou conditional intensity is used, and, thus, can be used as a graphical exploratory tool for inspecting the goodness-of-fit of a spatial point process model. Several examples, including clustered and inhibitive point processes, are considered.
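The independent-thinning step for the Cox-process case is simple to illustrate; a toy sketch thinning a homogeneous Poisson pattern with a location-dependent retention probability (our own example, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(42)

# homogeneous Poisson process on the unit square with intensity lam
lam = 200.0
n = rng.poisson(lam)                  # total number of points
pts = rng.uniform(0.0, 1.0, (n, 2))  # uniform locations given the count

# independent thinning: retain each point with probability p(x, y) in [0, 1]
p = lambda xy: 0.5 * (1.0 + xy[:, 0])  # retention probability grows with x
keep = rng.uniform(size=n) < np.clip(p(pts), 0.0, 1.0)
thinned = pts[keep]
# the thinned pattern is again Poisson, with intensity lam * p(x, y)
```

Because each point is kept independently, the result is exactly a Poisson process with thinned intensity; the paper's diagnostic exploits that this holds if and only if the correct conditional intensity is used.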
International Nuclear Information System (INIS)
Khalid, S.; Alam, A.
2016-01-01
Collaborative Virtual Environments (CVEs) fall under Virtual Reality (VR), where two or more users manipulate objects collaboratively. In this paper we conducted experiments in which an assembly is built from constituent parts scattered in a Virtual Environment (VE), based on a task distribution model using assistance functions for checking and enhancing user performance. The CVE subjects worked on distinct machines connected via a local area network. In this setting, we consider the effects of assistance functions with oral communication on collaboration, co-presence and user performance. Twenty subjects collaboratively performed an assembly task under static and dynamic task distribution. We examine the degree to which assistance functions with oral communication influence users' performance under the task distribution model. The results show that assistance functions with oral communication based on the task distribution model not only increase user performance but also enhance the sense of co-presence and awareness. (author)
Kraemer, Silvie M; Mosler, Hans-Joachim
2011-01-01
Solar water disinfection (SODIS) is a sustainable household water treatment technique that could prevent millions of deaths caused by diarrhoea. The behaviour change process necessary to move from drinking raw water to drinking SODIS is analysed with the Transtheoretical Model of Change (TTM). User groups and psychological factors that differentiate between types of users are identified. Results of a 1.5 year longitudinal study in Zimbabwe reveal distinguishing factors between groups, from which it can be deduced that they drive the development of user groups. Implications are drawn for campaigns with the aim of bringing all user types to a regular use.
Tracking and Analysis Framework (TAF) model documentation and user's guide
Energy Technology Data Exchange (ETDEWEB)
Bloyd, C.; Camp, J.; Conzelmann, G. [and others
1996-12-01
With passage of the 1990 Clean Air Act Amendments, the United States embarked on a policy for controlling acid deposition that has been estimated to cost at least $2 billion. Title IV of the Act created a major innovation in environmental regulation by introducing market-based incentives - specifically, by allowing electric utility companies to trade allowances to emit sulfur dioxide (SO₂). The National Acid Precipitation Assessment Program (NAPAP) has been tasked by Congress to assess what Senator Moynihan has termed this "grand experiment." Such a comprehensive assessment of the economic and environmental effects of this legislation has been a major challenge. To help NAPAP face this challenge, the U.S. Department of Energy (DOE) has sponsored development of an integrated assessment model, known as the Tracking and Analysis Framework (TAF). This section summarizes TAF's objectives and its overall design.
Associative and Lie deformations of Poisson algebras
Remm, Elisabeth
2011-01-01
Considering a Poisson algebra as a nonassociative algebra satisfying the Markl-Remm identity, we study deformations of Poisson algebras as deformations of this nonassociative algebra. This gives a natural interpretation of deformations which preserve the underlying associative structure, and we study deformations which preserve the underlying Lie algebra.
User's manual for the Simulated Life Analysis of Vehicle Elements (SLAVE) model
Paul, D. D., Jr.
1972-01-01
The simulated life analysis of vehicle elements model was designed to perform statistical simulation studies for any constant loss rate. The outputs of the model consist of the total number of stages required, stages successfully completing their lifetime, and average stage flight life. This report contains a complete description of the model. Users' instructions and interpretation of input and output data are presented such that a user with little or no prior programming knowledge can successfully implement the program.
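The kind of constant-loss-rate lifetime simulation described above can be mimicked in a few lines; the sketch below is a hypothetical toy with made-up parameters, not the SLAVE program:

```python
import random

def simulate_stages(n_stages, loss_rate, lifetime, seed=0):
    """Each flight loses the stage with probability `loss_rate`;
    a stage 'completes its lifetime' after `lifetime` successful flights."""
    rng = random.Random(seed)
    completed, flights = 0, []
    for _ in range(n_stages):
        life = 0
        while life < lifetime and rng.random() >= loss_rate:
            life += 1
        flights.append(life)
        completed += (life == lifetime)
    # survivors and average stage flight life, the model's two key outputs
    return completed, sum(flights) / n_stages

survivors, avg_life = simulate_stages(10_000, loss_rate=0.05, lifetime=20)
```

With a 5% per-flight loss rate, about 0.95^20 ≈ 36% of stages should complete a 20-flight lifetime, and the Monte Carlo estimate tracks that.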
Communications network design and costing model users manual
Logan, K. P.; Somes, S. S.; Clark, C. A.
1983-01-01
The information and procedures needed to exercise the communications network design and costing model for performing network analysis are presented. Specific procedures are included for executing the model on the NASA Lewis Research Center IBM 3033 computer. The concepts, functions, and data bases relating to the model are described. Model parameters and their format specifications for running the model are detailed.
Alharbi, Basma Mohammed
2017-04-10
In the mobile era, data capturing individuals’ locations have become unprecedentedly available. Data from Location-Based Social Networks is one example of large-scale user-location data. Such data provide a valuable source for understanding patterns governing human mobility, and thus enable a wide range of research. However, mining and utilizing raw user-location data is a challenging task. This is mainly due to the sparsity of the data (at the user level), the imbalance of the data, whose user and location check-in degrees follow power laws (at the global level), and, more importantly, the lack of a uniform low-dimensional feature space describing users. Three latent feature models are proposed in this dissertation. Each proposed model takes as an input a collection of user-location check-ins, and outputs a new representation space for users and locations respectively. To avoid invading users’ privacy, the proposed models are designed to learn from anonymized location data where only IDs - not geophysical positioning or category - of locations are utilized. To enrich the inferred mobility patterns, the proposed models incorporate metadata, often associated with user-location data, into the inference process. In this dissertation, two types of metadata are utilized to enrich the inferred patterns: timestamps and social ties. Time adds context to the inferred patterns, while social ties amplify incomplete user-location check-ins. The first proposed model incorporates timestamps by learning from collections of users’ locations sharing the same discretized time. The second proposed model also incorporates time into the learning model, yet takes a further step by considering time at different scales (hour of a day, day of a week, month, and so on). This change in modeling time allows for capturing meaningful patterns over different time scales. The last proposed model incorporates social ties into the learning process to compensate for inactive users who contribute a large volume
Novel Virtual User Models of Mild Cognitive Impairment for Simulating Dementia.
Segkouli, Sofia; Paliokas, Ioannis; Tzovaras, Dimitrios; Tsakiris, Thanos; Tsolaki, Magda; Karagiannidis, Charalampos
2015-01-01
Virtual user modeling research has attempted to address critical issues of human-computer interaction (HCI) such as usability and utility through a large number of analytic, usability-oriented approaches as cognitive models in order to provide users with experiences fitting to their specific needs. However, there is demand for more specific modules embodied in cognitive architecture that will detect abnormal cognitive decline across new synthetic task environments. Also, accessibility evaluation of graphical user interfaces (GUIs) requires considerable effort for enhancing ICT products accessibility for older adults. The main aim of this study is to develop and test virtual user models (VUM) simulating mild cognitive impairment (MCI) through novel specific modules, embodied at cognitive models and defined by estimations of cognitive parameters. Well-established MCI detection tests assessed users' cognition, elaborated their ability to perform multitasks, and monitored the performance of infotainment related tasks to provide more accurate simulation results on existing conceptual frameworks and enhanced predictive validity in interfaces' design supported by increased tasks' complexity to capture a more detailed profile of users' capabilities and limitations. The final outcome is a more robust cognitive prediction model, accurately fitted to human data to be used for more reliable interfaces' evaluation through simulation on the basis of virtual models of MCI users.
DEFF Research Database (Denmark)
Hansen, Jane Holm; Haase, Louise Møller
2017-01-01
The persona model is a widely known tool for synthesizing user research. A persona is a hypothetical archetype based on actual users, typically created to establish a shared understanding of the user in the design team. Previous research has focused on the persona model as a consensus-making tool. However, in this paper the aim is to explore whether the persona model can also be useful and valuable for collecting user insights. More specifically, the paper investigates the potentials and challenges of using the persona model as a generative tool to achieve user insight when co-creating with users. So far the empirical study includes only two co-creation workshops, which is too few to make any solid conclusions. Still, the study indicates some interesting insights about the potentials and challenges the persona model has when used as a generative tool.
Lee, S. S.; Sengupta, S.; Nwadike, E. V.
1982-01-01
The six-volume report: describes the theory of a three dimensional (3-D) mathematical thermal discharge model and a related one dimensional (1-D) model, includes model verification at two sites, and provides a separate user's manual for each model. The 3-D model has two forms: free surface and rigid lid. The former, verified at Anclote Anchorage (FL), allows a free air/water interface and is suited for significant surface wave heights compared to mean water depth; e.g., estuaries and coastal regions. The latter, verified at Lake Keowee (SC), is suited for small surface wave heights compared to depth (e.g., natural or man-made inland lakes) because surface elevation has been removed as a parameter.
User's guide to the LIRAQ model: an air pollution model for the San Francisco Bay Area
International Nuclear Information System (INIS)
MacCracken, M.C.
1975-01-01
The Livermore Regional Air Quality (LIRAQ) model comprises a set of computer programs that have been integrated into an easily used tool for the air quality planner. To assemble and modify the necessary data files and to direct model execution, a problem formulation program has been developed that makes possible the setup of a wide variety of studies involving perturbation of the emission inventory, changes to the initial and boundary conditions, and different choices of grid size and problem domain. In addition to describing the types of air quality problems for which the LIRAQ model may be used, this User's Guide provides detailed information on how to set up and conduct model simulations. Also included are descriptions of the formats of input data files so that the LIRAQ model may be applied to regions other than the San Francisco Bay Area
Site-Specific Study of In-Building Wireless Solutions with Poisson Traffic
DEFF Research Database (Denmark)
Liu, Zhen; Sørensen, Troels Bundgaard; Mogensen, Preben
2011-01-01
In our previous work, the performance of the in-building solutions under study - including a multi-cell system using our proposed centralized scheduling scheme - was evaluated and compared in the LTE downlink context with full buffer traffic. Compared to real mobile networks, the full buffer traffic model is usually a worst-case estimation of traffic load which causes severe interference conditions. Especially for Femto cells with universal frequency reuse it degrades system performance and may lead to biased conclusions on the relative performance of the different in-building solutions. In this study, we use a more realistic traffic model with fixed buffer size and Poisson arrival. Our new results show better performance for Femto cells with frequency reuse 1 at light to medium load, although the intelligent distributed system still obtains considerably better cell-edge user throughput for the same number of access points.
Spud and FLML: generalising and automating the user interfaces of scientific computer models
Ham, D. A.; Farrell, P. E.; Maddison, J. R.; Gorman, G. J.; Wilson, C. R.; Kramer, S. C.; Shipton, J.; Collins, G. S.; Cotter, C. J.; Piggott, M. D.
2009-04-01
The interfaces by which users specify the scenarios to be simulated by scientific computer models are frequently primitive, under-documented and ad-hoc text files which make using the model in question difficult and error-prone and significantly increase the development cost of the model. We present a model-independent system, Spud[1], which formalises the specification of model input formats in terms of formal grammars. This is combined with an automatically generated graphical user interface which guides users to create valid model inputs based on the grammar provided, and a generic options reading module which minimises the development cost of adding model options. We further present FLML, the Fluidity Markup Language. FLML applies Spud to the Imperial College Ocean Model (ICOM) resulting in a graphically driven system which radically improves the usability of ICOM. As well as a step forward for ICOM, FLML illustrates how the Spud system can be applied to an existing complex ocean model highlighting the potential of Spud as a user interface for other codes in the ocean modelling community. [1] Ham, D. A. et al., Spud 1.0: generalising and automating the user interfaces of scientific computer models, Geosci. Model Dev. Discuss., 1, 125-146, 2008.
The USEtox story: A survey of model developer visions and user requirements
DEFF Research Database (Denmark)
Westh, Torbjørn Bochsen; Hauschild, Michael Zwicky; Birkved, Morten
2015-01-01
… into LCA software and methods, (4) improve update/testing procedures, (5) strengthen communication between developers and users, and (6) extend model scope. By generalizing our recommendations to guide scientific model development in a broader context, we emphasize the need to acknowledge different levels of user … we analyzed user expectations and experiences and compared them with the developers’ visions. Methods: We applied qualitative and quantitative data collection methods including an online questionnaire, semistructured user and developer interviews, and review of scientific literature. Questionnaire and interview results were analyzed in an actor-network perspective in order to understand user needs and to compare these with the developers’ visions. Requirement engineering methods, more specifically function tree, system context, and activity diagrams, were iteratively applied and structured to develop …
Poisson Regression Analysis of Illness and Injury Surveillance Data
Energy Technology Data Exchange (ETDEWEB)
Frome E.L., Watkins J.P., Ellis E.D.
2012-12-12
The Department of Energy (DOE) uses illness and injury surveillance to monitor morbidity and assess the overall health of the work force. Data collected from each participating site include health events and a roster file with demographic information. The source data files are maintained in a relational data base, and are used to obtain stratified tables of health event counts and person time at risk that serve as the starting point for Poisson regression analysis. The explanatory variables that define these tables are age, gender, occupational group, and time. Typical response variables of interest are the number of absences due to illness or injury, i.e., the response variable is a count. Poisson regression methods are used to describe the effect of the explanatory variables on the health event rates using a log-linear main effects model. Results of fitting the main effects model are summarized in a tabular and graphical form and interpretation of model parameters is provided. An analysis of deviance table is used to evaluate the importance of each of the explanatory variables on the event rate of interest and to determine if interaction terms should be considered in the analysis. Although Poisson regression methods are widely used in the analysis of count data, there are situations in which over-dispersion occurs. This could be due to lack-of-fit of the regression model, extra-Poisson variation, or both. A score test statistic and regression diagnostics are used to identify over-dispersion. A quasi-likelihood method of moments procedure is used to evaluate and adjust for extra-Poisson variation when necessary. Two examples are presented using respiratory disease absence rates at two DOE sites to illustrate the methods and interpretation of the results. In the first example the Poisson main effects model is adequate. In the second example the score test indicates considerable over-dispersion and a more detailed analysis attributes the over-dispersion to extra-Poisson variation.
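A log-linear Poisson main effects model with person-time at risk enters as a Poisson GLM with a log offset; a self-contained iteratively reweighted least squares (IRLS) sketch on simulated data (our own illustration, not the DOE analysis code):

```python
import numpy as np

def poisson_irls(X, y, offset, iters=25):
    """Fit log(E[y]) = X @ beta + offset by Newton/IRLS."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta + offset)   # expected counts
        W = mu                           # Poisson working weights
        z = X @ beta + (y - mu) / mu     # working response (offset removed)
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

rng = np.random.default_rng(1)
n = 5000
X = np.column_stack([np.ones(n), rng.uniform(-1, 1, n)])  # intercept + one covariate
persontime = rng.uniform(0.5, 2.0, n)                     # person-time at risk
true_beta = np.array([-1.0, 0.7])
y = rng.poisson(np.exp(X @ true_beta + np.log(persontime)))
beta = poisson_irls(X, y, np.log(persontime))
```

The log of person-time enters as the offset so that beta describes event rates rather than raw counts; comparing the fitted deviance against a chi-squared reference is the natural next step for the over-dispersion check the abstract describes.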
Modeling Integrated Water-User Decisions with Intermittent Supplies
Lund, J. R.; Rosenberg, D.
2006-12-01
We present an economic-engineering method to estimate urban water use demands with intermittent water supplies. A two-stage, probabilistic optimization formulation includes a wide variety of water supply enhancement and conservation actions that individual households can adopt to meet multiple water quality uses with uncertain water availability. We embed the optimization in Monte-Carlo simulations to show aggregate effects at a utility (citywide) scale for a population of user conditions and decisions. Parametric analysis provides derivations of supply curves to subsidize conservation, demand responses to alternative pricing, and customer willingness-to-pay to avoid shortages. Results show a good empirical fit for the average and distribution of billed residential water use in Amman, Jordan. Additional outputs give likely market penetration rates for household conservation actions, associated water savings, and subsidies required to entice further adoption. We discuss new insights to size, target, market, and finance conservation programs and interpret a demand curve with block pricing.
Users guide for the hydroacoustic coverage assessment model (HydroCAM)
Energy Technology Data Exchange (ETDEWEB)
Farrell, T., LLNL
1997-12-01
A model for predicting the detection and localization performance of hydroacoustic monitoring networks has been developed. The model accounts for major factors affecting global-scale acoustic propagation in the ocean, including horizontal refraction, travel time variability due to spatial and temporal fluctuations in the ocean, and detailed characteristics of the source. Graphical user interfaces are provided to set up the models and visualize the results. The model produces maps of network detection coverage and localization area of uncertainty, as well as intermediate results such as predicted path amplitudes, travel time and travel time variance. This Users Guide for the model is organized into three sections. First a summary of functionality available in the model is presented, including example output products. The second section provides detailed descriptions of each of the models contained in the system. The last section describes how to run the model, including a summary of each data input form in the user interface.
Model based estimation for multi-modal user interface component selection
CSIR Research Space (South Africa)
Coetzee, L
2009-12-01
Full Text Available … for interaction are well understood, the general problem of integrated multi-modal systems is yet to be understood to the same level. User modelling plays an important role within user-adaptive systems. Kobsa [5] presents a review on the devel… and providers of services, and therefore user modelling tools will continue to play an important role in computer systems. Even though the utilisation of multiple modalities to break down the access barrier has been addressed by several researchers…
Modeling Goal-Directed User Exploration in Human-Computer Interaction
2011-02-01
… "sculptures, architecture, Theater, Musicians & Composers, Cinema, Television, & Broadcasting, Music, Dance, Musical Instruments" is given as an example of the text entered. If LSA is selected, one can select the appropriate LSA semantic space (corpus) to best match the demographics of the type of users the modeler wants predictions for. In the "Eagerness to Choose" field …
Multi-parameter full waveform inversion using Poisson
Oh, Juwon
2016-07-21
In multi-parameter full waveform inversion (FWI), the success of recovering each parameter is dependent on characteristics of the partial derivative wavefields (or virtual sources), which differ according to parameterisation. Elastic FWIs based on the two conventional parameterisations (one uses Lame constants and density; the other employs P- and S-wave velocities and density) have low resolution of gradients for P-wave velocities (or ). Limitations occur because the virtual sources for P-wave velocity or (one of the Lame constants) are related only to P-P diffracted waves, and generate isotropic explosions, which reduce the spatial resolution of the FWI for these parameters. To increase the spatial resolution, we propose a new parameterisation using P-wave velocity, Poisson's ratio, and density for frequency-domain multi-parameter FWI for isotropic elastic media. By introducing Poisson's ratio instead of S-wave velocity, the virtual source for the P-wave velocity generates P-S and S-S diffracted waves as well as P-P diffracted waves in the partial derivative wavefields for the P-wave velocity. Numerical examples of the cross-triangle-square (CTS) model indicate that the new parameterisation provides highly resolved descent directions for the P-wave velocity. Numerical examples of noise-free and noisy data synthesised for the elastic Marmousi-II model support the fact that the new parameterisation is more robust to noise than the two conventional parameterisations.
Constructions and classifications of projective Poisson varieties
Pym, Brent
2018-03-01
This paper is intended both as an introduction to the algebraic geometry of holomorphic Poisson brackets, and as a survey of results on the classification of projective Poisson manifolds that have been obtained in the past 20 years. It is based on the lecture series delivered by the author at the Poisson 2016 Summer School in Geneva. The paper begins with a detailed treatment of Poisson surfaces, including adjunction, ruled surfaces and blowups, and leading to a statement of the full birational classification. We then describe several constructions of Poisson threefolds, outlining the classification in the regular case, and the case of rank-one Fano threefolds (such as projective space). Following a brief introduction to the notion of Poisson subspaces, we discuss Bondal's conjecture on the dimensions of degeneracy loci on Poisson Fano manifolds. We close with a discussion of log symplectic manifolds with simple normal crossings degeneracy divisor, including a new proof of the classification in the case of rank-one Fano manifolds.
Standard Test Method for Determining Poisson's Ratio of Honeycomb Cores
American Society for Testing and Materials. Philadelphia
2002-01-01
1.1 This test method covers the determination of the honeycomb Poisson's ratio from the anticlastic curvature radii, see . 1.2 The values stated in SI units are to be regarded as the standard. The inch-pound units given may be approximate. This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
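The anticlastic-curvature relation this kind of measurement relies on can be sketched in code. In classical plate theory, bending a strip to a longitudinal radius induces a transverse (anticlastic) curvature scaled by Poisson's ratio, so the ratio of the two radii gives ν. The function and numbers below are a hedged illustration of that relation, not the ASTM procedure itself.

```python
# Illustrative sketch (not the ASTM method): plate theory gives the
# transverse anticlastic radius R_t = R_l / nu when a strip is bent to a
# longitudinal radius R_l, so nu = R_l / R_t.

def poisson_ratio_from_radii(r_longitudinal: float, r_transverse: float) -> float:
    """Poisson's ratio from measured curvature radii (plate-theory relation)."""
    return r_longitudinal / r_transverse

# Example with invented radii: a panel bent to R_l = 0.5 m shows R_t = 1.6 m.
nu = poisson_ratio_from_radii(0.5, 1.6)
print(f"estimated Poisson's ratio: {nu:.4f}")  # 0.3125
```

In practice the radii would come from the measured anticlastic surface of the honeycomb specimen.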
Kopanitsa, Georgy; Veseli, Hasan; Yampolsky, Vladimir
2015-06-01
When medical data have been successfully recorded or exchanged between systems, there appears a need to present the data consistently to ensure that they are clearly understood and interpreted. A standards-based user interface can provide interoperability on the visual level. The goal of this research was to develop, implement and evaluate an information model for building user interfaces for archetype-based medical data. The following types of knowledge were identified as important elements and were included in the information model: medical content related attributes, data type related attributes, user-related attributes, and device-related attributes. In order to support flexible and efficient user interfaces, an approach was chosen that represents different types of knowledge with different models, separating the medical concept from a visual concept and interface realization. We evaluated the developed approach using the Guideline for Good Evaluation Practice in Health Informatics (GEP-HI). We developed a higher-level information model to complement the ISO 13606 archetype model. This enabled the specification of the presentation properties at the moment of the archetypes' definition. The model allows realizing different users' perspectives on the data. The approach was implemented and evaluated within a functioning EHR system. The evaluation involved 30 patients of different ages and IT experience and 5 doctors. One month of testing showed that the time required to read electronic health records decreased for both doctors (from an average of 310 to 220 s) and patients (from an average of 95 to 39 s). Users reported a high level of satisfaction and motivation to use the presented data visualization approach, especially in comparison with their previous experience. The introduced information model allows separating medical knowledge and presentation knowledge. The additional presentation layer will enrich the graphical user interface's flexibility and will allow an optimal presentation of
Seasonally adjusted birth frequencies follow the Poisson distribution.
Barra, Mathias; Lindstrøm, Jonas C; Adams, Samantha S; Augestad, Liv A
2015-12-15
Variations in birth frequencies have an impact on activity planning in maternity wards. Previous studies of this phenomenon have commonly included elective births. A Danish study of spontaneous births found that birth frequencies were well modelled by a Poisson process. Somewhat unexpectedly, there were also weekly variations in the frequency of spontaneous births. Another study claimed that birth frequencies follow the Benford distribution. Our objective was to test these results. We analysed 50,017 spontaneous births at Akershus University Hospital in the period 1999-2014. To investigate the Poisson distribution of these births, we plotted their variance over a sliding average. We specified various Poisson regression models, with the number of births on a given day as the outcome variable. The explanatory variables included various combinations of years, months, days of the week and the digit sum of the date. The relationship between the variance and the average fits well with an underlying Poisson process. A Benford distribution was disproved by a goodness-of-fit test. The model fit is significantly improved when the day of the week is included as an explanatory variable. Altogether 7.5% more children are born on Tuesdays than on Sundays. The digit sum of the date is non-significant as an explanatory variable (p = 0.23), nor does it increase the explained variance. INTERPRETATION: Spontaneous births are well modelled by a time-dependent Poisson process when monthly and day-of-the-week variation is included. The frequency is highest in summer towards June and July, Friday and Tuesday stand out as particularly busy days, and the activity level is at its lowest during weekends.
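The day-of-week modelling idea can be sketched briefly: in a Poisson regression with only day-of-week dummies, the maximum-likelihood rate for each day is simply that day's mean count, so the "Tuesday excess" is a ratio of group means. The counts below are invented for illustration, not the Akershus data.

```python
import statistics
from collections import defaultdict

# Toy daily birth counts (weekday, births); invented, not the study's data.
births = [
    ("Tue", 10), ("Tue", 12), ("Tue", 11),
    ("Sun", 8), ("Sun", 9), ("Sun", 10),
]

# Group counts by weekday; the Poisson MLE per dummy group is the group mean.
by_day = defaultdict(list)
for day, count in births:
    by_day[day].append(count)

rates = {day: statistics.mean(counts) for day, counts in by_day.items()}
excess = rates["Tue"] / rates["Sun"] - 1.0  # relative Tuesday-over-Sunday excess
print(f"Tuesday rate: {rates['Tue']:.2f}, Sunday rate: {rates['Sun']:.2f}, "
      f"excess: {excess:.1%}")
```

With real data one would also check the Poisson assumption by comparing the variance of the counts with their mean, as the study does with a sliding average.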
Solar Advisor Model User Guide for Version 2.0
Energy Technology Data Exchange (ETDEWEB)
Gilman, P.; Blair, N.; Mehos, M.; Christensen, C.; Janzou, S.; Cameron, C.
2008-08-01
The Solar Advisor Model (SAM) provides a consistent framework for analyzing and comparing power system costs and performance across the range of solar technologies and markets, from photovoltaic systems for residential and commercial markets to concentrating solar power and large photovoltaic systems for utility markets. This manual describes Version 2.0 of the software, which can model photovoltaic and concentrating solar power technologies for electric applications for several markets. The current version of the Solar Advisor Model does not model solar heating and lighting technologies.
Tang, Jessica Pui-Shan; Tse, Samson Shu-Ki; Davidson, Larry; Cheng, Patrick
2017-12-22
Current models of user participation in mental health services were developed within Western culture and thus may not be applicable to Chinese communities. To present a new model of user participation, which emerged from research within a Chinese community, for understanding the processes of and factors influencing user participation in a non-Western culture. Multiple qualitative methods, including focus groups, individual in-depth interviews, and photovoice, were applied within the framework of constructivist grounded theory and collaborative research. Diverging from conceptualizations of user participation with emphasis on civil rights and the individual as a central agent, participants in the study highlighted the interpersonal dynamics between service users and different players affecting the participation intensity and outcomes. They valued a reciprocal relationship with their caregivers in making treatment decisions, cooperated with staff to observe power hierarchies and social harmony, identified the importance of peer support in enabling service engagement and delivery, and emphasized professional facilitation in advancing involvement at the policy level. User participation in Chinese culture embeds dynamic interdependence. The proposed model adds this new dimension to the existing frameworks and calls for attention to the complex local ecology and cultural consistency in realizing user participation.
Directory of Open Access Journals (Sweden)
Hasan ŞAHİN
2002-01-01
Full Text Available This study applies a Poisson regression model to annual Turkish strike data for the period 1964-1998. The Poisson regression model is preferable when the dependent variable is count data. Economic and social variables are used as determinants of the number of strikes. Empirical results show that the unemployment rate and a dummy variable that takes the value 0 before 1980 and 1 afterwards significantly affect the number of strikes.
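For the special case of a single 0/1 dummy regressor, the Poisson regression MLE has a closed form: the fitted rate in each group is the group mean, and the dummy coefficient is the log rate ratio. The counts below are invented placeholders, not the Turkish strike series.

```python
import math

# Toy (dummy, count) pairs; invented, not the 1964-1998 strike data.
strikes = [(0, 3), (0, 5), (0, 4), (1, 8), (1, 10), (1, 9)]

# Poisson MLE with one dummy regressor: fitted rate = group mean.
mean0 = sum(y for d, y in strikes if d == 0) / sum(1 for d, _ in strikes if d == 0)
mean1 = sum(y for d, y in strikes if d == 1) / sum(1 for d, _ in strikes if d == 1)

beta0 = math.log(mean0)          # intercept: log rate when dummy = 0
beta1 = math.log(mean1 / mean0)  # dummy coefficient: log rate ratio
print(f"intercept={beta0:.3f}, dummy coefficient={beta1:.3f}")
```

With continuous regressors such as the unemployment rate there is no closed form and the log-likelihood is maximised iteratively (e.g. by Newton-Raphson).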
Mazurowski, Maciej A; Baker, Jay A; Barnhart, Huiman X; Tourassi, Georgia D
2010-03-01
The authors propose the framework for an individualized adaptive computer-aided educational system in mammography that is based on user modeling. The underlying hypothesis is that user models can be developed to capture the individual error making patterns of radiologists-in-training. In this pilot study, the authors test the above hypothesis for the task of breast cancer diagnosis in mammograms. The concept of a user model was formalized as the function that relates image features to the likelihood/extent of the diagnostic error made by a radiologist-in-training and therefore to the level of difficulty that a case will pose to the radiologist-in-training (or "user"). Then, machine learning algorithms were implemented to build such user models. Specifically, the authors explored k-nearest neighbor, artificial neural networks, and multiple regression for the task of building the model using observer data collected from ten Radiology residents at Duke University Medical Center for the problem of breast mass diagnosis in mammograms. For each resident, a user-specific model was constructed that predicts the user's expected level of difficulty for each presented case based on two BI-RADS image features. In the experiments, a leave-one-out data handling scheme was applied to assign each case to a low-predicted-difficulty or a high-predicted-difficulty group for each resident based on each of the three user models. To evaluate whether the user model is useful in predicting difficulty, the authors performed statistical tests using the generalized estimating equations approach to determine whether the mean actual error differs between the low-predicted-difficulty group and the high-predicted-difficulty group. When the results for all observers were pooled together, the actual errors made by residents were statistically significantly higher for cases in the high-predicted-difficulty group than for cases in the low-predicted-difficulty group for all modeling
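The k-nearest-neighbor variant of such a user model can be sketched plainly: predict a new case's difficulty as the mean error the resident made on the k most similar past cases, measured in the two-feature space. The feature values and error scores below are invented placeholders, not BI-RADS data.

```python
# Hedged sketch of a k-NN user model: difficulty of a new case is the mean
# observed error on the k nearest past cases (squared Euclidean distance
# over two image features). Data are invented for illustration.

def knn_predict(train, x, k=3):
    """train: list of ((f1, f2), error); returns mean error of k nearest cases."""
    by_dist = sorted(train, key=lambda item: (item[0][0] - x[0]) ** 2 +
                                             (item[0][1] - x[1]) ** 2)
    nearest = by_dist[:k]
    return sum(err for _, err in nearest) / len(nearest)

cases = [((1.0, 2.0), 0.1), ((1.1, 2.1), 0.2), ((5.0, 5.0), 0.9),
         ((5.2, 4.9), 0.8), ((0.9, 1.8), 0.15)]
difficulty = knn_predict(cases, (1.0, 1.9), k=3)
print(f"predicted difficulty: {difficulty:.3f}")
```

Thresholding this prediction would split cases into the low- and high-predicted-difficulty groups the study compares.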
System Dynamics (SD) models are useful for holistic integration of data to evaluate indirect and cumulative effects and inform decisions. Complex SD models can provide key insights into how decisions affect the three interconnected pillars of sustainability. However, the complexi...
User's guide for waste tank corrosion data model code
International Nuclear Information System (INIS)
Mackey, D.B.; Divine, J.R.
1986-12-01
Corrosion tests were conducted on A-516 and A-537 carbon steel in simulated Double Shell Slurry, Future PUREX, and Hanford Facilities wastes. The corrosion rate data, gathered between 25 and 180°C, were statistically "modeled" for each waste; a fourth model was developed that utilized the combined data. The report briefly describes the modeling procedure and details how to access information through a computerized data system. Copies of the report and operating information may be obtained from the author (DB Mackey) at 509-376-9844 or FTS 444-9844
Amershi, Saleema; Conati, Cristina
2009-01-01
In this paper, we present a data-based user modeling framework that uses both unsupervised and supervised classification to build student models for exploratory learning environments. We apply the framework to build student models for two different learning environments and using two different data sources (logged interface and eye-tracking data).…
The Poisson equation on Klein surfaces
Directory of Open Access Journals (Sweden)
Monica Rosiu
2016-04-01
Full Text Available We obtain a formula for the solution of the Poisson equation with Dirichlet boundary condition on a region of a Klein surface. This formula reveals the symmetric character of the solution.
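As a concrete illustration of the Dirichlet problem the paper solves (there on a region of a Klein surface; here merely on an interval, with nothing Klein-specific), a minimal finite-difference sketch of the Poisson equation with Dirichlet boundary values:

```python
# Hedged sketch: Jacobi iteration for the 1D Poisson equation u'' = -f on
# (0, 1) with Dirichlet boundary values. Purely illustrative of the
# Dirichlet problem; not the Klein-surface construction of the paper.

def solve_poisson_1d(f, n, left=0.0, right=0.0, iters=20000):
    h = 1.0 / (n + 1)
    u = [0.0] * (n + 2)
    u[0], u[-1] = left, right
    for _ in range(iters):
        # Each interior value is the average of its neighbours plus a source term.
        u = [u[0]] + [(u[i - 1] + u[i + 1] + h * h * f(i * h)) / 2.0
                      for i in range(1, n + 1)] + [u[-1]]
    return u

# With f = 2 and zero boundary values the exact solution is u(x) = x(1 - x).
u = solve_poisson_1d(lambda x: 2.0, n=9)
print(f"u(0.5) ≈ {u[5]:.4f}")  # exact value 0.25
```

The symmetry of the solution about x = 0.5 here is the one-dimensional shadow of the symmetric character the paper establishes on Klein surfaces.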
Poisson point processes imaging, tracking, and sensing
Streit, Roy L
2010-01-01
This overview of non-homogeneous and multidimensional Poisson point processes and their applications features mathematical tools and applications from emission- and transmission-computed tomography to multiple target tracking and distributed sensor detection.
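Non-homogeneous Poisson point processes of the kind this book treats can be simulated by standard thinning (Lewis-Shedler): draw candidates from a homogeneous process at a bounding rate and accept each with probability λ(t)/λ_max. A minimal sketch with an illustrative linear intensity:

```python
import random

# Hedged sketch: Lewis-Shedler thinning for a non-homogeneous Poisson
# process with intensity lambda(t) on [0, t_max], given a bound lam_max
# with lambda(t) <= lam_max everywhere on the interval.

def simulate_nhpp(intensity, t_max, lam_max, rng):
    events, t = [], 0.0
    while True:
        t += rng.expovariate(lam_max)              # candidate inter-arrival time
        if t > t_max:
            return events
        if rng.random() < intensity(t) / lam_max:  # accept w.p. lambda(t)/lam_max
            events.append(t)

rng = random.Random(42)
# Illustrative intensity lambda(t) = 2t on [0, 10]; expected count is 100.
events = simulate_nhpp(lambda t: 2.0 * t, 10.0, 20.0, rng)
print(f"simulated {len(events)} events")
```

The same accept/reject idea underlies intensity-based models in tracking and sensing, where λ(t) encodes target or clutter rates.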
Headwater Benefits Energy Gains (HWBEG) model description and users manual
Energy Technology Data Exchange (ETDEWEB)
Perlack, R D; Turhollow, Jr, A F; Cohn, S M; Das, S; Rizy, C G; Tulley, Jr, J H; Kraemer, R D
1984-02-01
The Headwater Benefits Energy Gains (HWBEG) Model was developed for computing energy gains in complex regulated river basins. The model requires daily data on streamflows, storage changes at reservoirs, and power generation at hydroelectric plants. Mathematical relationships are derived from these data to explain the operation of storage reservoirs and the rating of hydropower plants. The HWBEG model assembles streamflows that would have been theoretically available for energy production under varying conditions of upstream reservoir operation. The model first eliminates all upstream storage changes to simulate an unregulated natural flow condition. The storage changes of upstream reservoirs are then added back to the unregulated natural flow in a sequence corresponding to the time the reservoirs began affecting streamflows. As each reservoir is added back to the river system, the hydropower generation attributable to that particular flow is calculated using the reservoir operating rules and the hydropower plant rating curves. Energy gains are determined by subtracting generation with from generation without the chronologically sequenced and combinatorial effects of the upstream storage reservoirs. The energy gains attributable to each storage reservoir are totalled yearly and become the basis for apportioning the cost of operating the upstream federal reservoir or headwater improvement. This report describes the operation of the model including the assembly of the requisite data bases and the use of the computer code.
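The add-back bookkeeping described above can be sketched as follows. Here `generation()` is a toy stand-in for the reservoir operating rules and plant rating curves HWBEG derives from daily data; the flows and reservoir names are invented.

```python
# Hedged sketch of energy-gains accounting: reservoirs are added back to the
# unregulated natural flow in order of first operation, and each reservoir is
# credited with the incremental generation its regulation enables.

def generation(flow):
    """Toy hydropower rating: energy proportional to regulated flow."""
    return 0.9 * flow

def energy_gains(natural_flow, storage_changes):
    """storage_changes: (name, regulated release) in order of first operation."""
    gains, flow = {}, natural_flow
    for name, release in storage_changes:
        before = generation(flow)
        flow += release                # add this reservoir's regulation back
        gains[name] = generation(flow) - before
    return gains

gains = energy_gains(100.0, [("ResA", 20.0), ("ResB", 10.0)])
print(gains)
```

Summing each reservoir's gains over a year gives the basis on which HWBEG apportions headwater-improvement costs.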
A Neural Network Approach to Intention Modeling for User-Adapted Conversational Agents.
Griol, David; Callejas, Zoraida
2016-01-01
Spoken dialogue systems have been proposed to enable a more natural and intuitive interaction with the environment and human-computer interfaces. In this contribution, we present a framework based on neural networks that allows modeling of the user's intention during the dialogue and uses this prediction to dynamically adapt the dialogue model of the system taking into consideration the user's needs and preferences. We have evaluated our proposal to develop a user-adapted spoken dialogue system that facilitates tourist information and services and provide a detailed discussion of the positive influence of our proposal in the success of the interaction, the information and services provided, and the quality perceived by the users.
LEEM - a lake energy and evaporation model user's manual
International Nuclear Information System (INIS)
Barry, P.J.; Robertson, E.
1983-11-01
LEEM is a simplified one-dimensional computer model of the energy budgets of lakes. It is intended to be used operationally to estimate evaporation rates averaged over several days using synoptic meteorological data with only the initial water temperatures being specified. These may usually be taken to be 4 deg. C a few days after spring break-up. This report describes the theoretical basis of the model and the algorithms by which these are converted to computer code. The code itself is included together with an exemplary set of data cards and the corresponding output
Mars Global Reference Atmospheric Model 2010 Version: Users Guide
Justh, H. L.
2014-01-01
This Technical Memorandum (TM) presents the Mars Global Reference Atmospheric Model 2010 (Mars-GRAM 2010) and its new features. Mars-GRAM is an engineering-level atmospheric model widely used for diverse mission applications. Applications include systems design, performance analysis, and operations planning for aerobraking, entry, descent and landing, and aerocapture. Additionally, this TM includes instructions on obtaining the Mars-GRAM source code and data files as well as running Mars-GRAM. It also contains sample Mars-GRAM input and output files and an example of how to incorporate Mars-GRAM as an atmospheric subroutine in a trajectory code.
Offshore and coastal dispersion (OCD) model. Users guide
International Nuclear Information System (INIS)
Hanna, S.R.; Schulman, L.L.; Paine, R.J.; Pleim, J.E.
1984-09-01
The Offshore and Coastal Dispersion (OCD) model was adapted from the EPA guideline model MPTER to simulate the effect of offshore emissions from point sources in coastal regions. Modifications were made to incorporate overwater plume transport and dispersion as well as changes that occur as the plume crosses the shoreline. Hourly meteorological data are needed from overwater and overland locations. Turbulence intensities are used but are not mandatory. For overwater dispersion, the turbulence intensities are parameterized from boundary-layer similarity relationships if they are not measured. Specifications of emission characteristics and receptor locations are the same as for MPTER; 250 point sources and 180 receptors may be used
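Guideline models in the MPTER family build on the steady-state Gaussian plume solution. A minimal sketch of the ground-level centreline concentration for an elevated point source with full ground reflection is shown below; the parameter values are illustrative, not OCD defaults.

```python
import math

# Hedged sketch: Gaussian plume, ground-level centreline concentration
# (y = 0, z = 0) with full ground reflection:
#   C = Q / (pi * u * sigma_y * sigma_z) * exp(-h^2 / (2 * sigma_z^2))

def ground_level_centreline(q, u, sigma_y, sigma_z, h):
    """q: emission rate (g/s), u: wind speed (m/s), h: effective height (m),
    sigma_y/sigma_z: dispersion coefficients (m) at the downwind distance."""
    return (q / (math.pi * u * sigma_y * sigma_z)
            * math.exp(-h * h / (2.0 * sigma_z * sigma_z)))

# Illustrative values: 100 g/s source, 5 m/s wind, sigmas 50 m and 25 m.
c_surface = ground_level_centreline(100.0, 5.0, 50.0, 25.0, 0.0)
c_elevated = ground_level_centreline(100.0, 5.0, 50.0, 25.0, 50.0)
print(f"surface source: {c_surface:.3e} g/m^3, elevated: {c_elevated:.3e} g/m^3")
```

OCD's overwater/shoreline modifications enter through the choice of σ_y and σ_z (from measured or parameterized turbulence intensities) and the plume height, not through this basic formula.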
Toolkit for Conceptual Modeling (TCM): User's Guide and Reference
Dehne, F.; Wieringa, Roelf J.
1997-01-01
The Toolkit for Conceptual Modeling (TCM) is a suite of graphical editors for a number of graphical notation systems that are used in software specification methods. The notations can be used to represent the conceptual structure of the software - hence the name of the suite. This manual describes
DEFF Research Database (Denmark)
Leroyer, Patrick
The purpose of this article is to establish new proposals for the lexicographic process and the involvement of experts and users in the construction of online specialised dictionaries. It is argued that the ENeL action should also have a view to the development of innovative theories ... and methodologies for the construction of online specialised dictionaries, and a new model for user and expert involvement is proposed. ...
Petroleum Refinery Jobs and Economic Development Impact (JEDI) Model User Reference Guide
Energy Technology Data Exchange (ETDEWEB)
Goldberg, Marshall [MRG and Associates, Nevada City, CA (United States)
2013-12-31
The Jobs and Economic Development Impact (JEDI) models, developed through the National Renewable Energy Laboratory (NREL), are user-friendly tools utilized to estimate the economic impacts at the local level of constructing and operating fuel and power generation projects for a range of conventional and renewable energy technologies. The JEDI Petroleum Refinery Model User Reference Guide was developed to assist users in employing and understanding the model. This guide provides information on the model's underlying methodology, as well as the parameters and references used to develop the cost data utilized in the model. This guide also provides basic instruction on model add-in features, operation of the model, and a discussion of how the results should be interpreted. Based on project-specific inputs from the user, the model estimates job creation, earning and output (total economic activity) for a given petroleum refinery. This includes the direct, indirect and induced economic impacts to the local economy associated with the refinery's construction and operation phases. Project cost and job data used in the model are derived from the most current cost estimations available. Local direct and indirect economic impacts are estimated using economic multipliers derived from IMPLAN software. By determining the regional economic impacts and job creation for a proposed refinery, the JEDI Petroleum Refinery model can be used to field questions about the added value refineries may bring to the local community.
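The direct/indirect/induced accounting JEDI performs can be illustrated with a minimal multiplier sketch. The multiplier values below are invented placeholders; an actual JEDI run derives them from IMPLAN data for the region in question.

```python
# Hedged sketch of multiplier-based impact accounting (not JEDI's actual
# methodology or coefficients): indirect and induced activity are scaled
# from direct project spending by region-specific multipliers.

def economic_impact(direct_spend, indirect_mult=0.5, induced_mult=0.3):
    """Return (direct, indirect, induced, total) economic activity.
    Multipliers here are illustrative placeholders."""
    indirect = direct_spend * indirect_mult
    induced = direct_spend * induced_mult
    return direct_spend, indirect, induced, direct_spend + indirect + induced

direct, indirect, induced, total = economic_impact(1_000_000.0)
print(f"direct=${direct:,.0f}, indirect=${indirect:,.0f}, "
      f"induced=${induced:,.0f}, total=${total:,.0f}")
```

Separate multiplier sets would be applied to the construction and operation phases, as the guide describes.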
Novel Virtual User Models of Mild Cognitive Impairment for Simulating Dementia
Directory of Open Access Journals (Sweden)
Sofia Segkouli
2015-01-01
Full Text Available Virtual user modeling research has attempted to address critical issues of human-computer interaction (HCI), such as usability and utility, through a large number of analytic, usability-oriented approaches such as cognitive models, in order to provide users with experiences fitting their specific needs. However, there is demand for more specific modules, embodied in cognitive architectures, that can detect abnormal cognitive decline across new synthetic task environments. Also, accessibility evaluation of graphical user interfaces (GUIs) requires considerable effort to enhance the accessibility of ICT products for older adults. The main aim of this study is to develop and test virtual user models (VUMs) simulating mild cognitive impairment (MCI) through novel specific modules, embodied in cognitive models and defined by estimations of cognitive parameters. Well-established MCI detection tests assessed users' cognition, elaborated their ability to perform multiple tasks, and monitored the performance of infotainment-related tasks. This provides more accurate simulation results on existing conceptual frameworks and enhanced predictive validity in interface design, supported by increased task complexity, to capture a more detailed profile of users' capabilities and limitations. The final outcome is a more robust cognitive prediction model, accurately fitted to human data, to be used for more reliable interface evaluation through simulation on the basis of virtual models of MCI users.
Poisson's ratio and Young's modulus of lipid bilayers in different phases
Directory of Open Access Journals (Sweden)
Tayebeh eJadidi
2014-04-01
Full Text Available A general computational method is introduced to estimate the Poisson's ratio for membranes with small thickness. In this method, the Poisson's ratio is calculated by utilizing a rescaling of inter-particle distances in one lateral direction under periodic boundary conditions. As an example, for the coarse-grained lipid model introduced by Lenz and Schmid we calculate the Poisson's ratio in the gel, fluid, and interdigitated phases. Having the Poisson's ratio enables us to obtain the Young's modulus for the membranes in different phases. The approach may be applied to other membranes such as graphene and tethered membranes in order to predict the temperature dependence of their Poisson's ratio and Young's modulus.
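The rescaling idea reduces to a simple strain ratio: impose a small strain in one lateral direction, measure the equilibrium response in the other, and take ν = -ε_y/ε_x. The box lengths below are invented numbers standing in for simulation averages, not values from the lipid model.

```python
# Hedged sketch of estimating Poisson's ratio from a lateral rescaling:
# nu = -eps_y / eps_x, with eps_x the imposed strain and eps_y the
# measured transverse response. Lengths are invented placeholders.

def poisson_ratio(lx0, lx1, ly0, ly1):
    eps_x = (lx1 - lx0) / lx0   # imposed lateral strain
    eps_y = (ly1 - ly0) / ly0   # measured equilibrium transverse strain
    return -eps_y / eps_x

# Stretch the box by 2% in x; suppose it contracts by 0.7% in y.
nu = poisson_ratio(lx0=10.0, lx1=10.2, ly0=10.0, ly1=9.93)
print(f"nu ≈ {nu:.3f}")
```

In the actual method the transverse response comes from averaging the equilibrated inter-particle distances under periodic boundary conditions, phase by phase.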
Poisson structure of dynamical systems with three degrees of freedom
Gümral, Hasan; Nutku, Yavuz
1993-12-01
It is shown that the Poisson structure of dynamical systems with three degrees of freedom can be defined in terms of an integrable one-form in three dimensions. Advantage is taken of this fact and the theory of foliations is used in discussing the geometrical structure underlying complete and partial integrability. Techniques for finding Poisson structures are presented and applied to various examples such as the Halphen system which has been studied as the two-monopole problem by Atiyah and Hitchin. It is shown that the Halphen system can be formulated in terms of a flat SL(2,R)-valued connection and belongs to a nontrivial Godbillon-Vey class. On the other hand, for the Euler top and a special case of three-species Lotka-Volterra equations which are contained in the Halphen system as limiting cases, this structure degenerates into the form of globally integrable bi-Hamiltonian structures. The globally integrable bi-Hamiltonian case is a linear and the SL(2,R) structure is a quadratic unfolding of an integrable one-form in 3+1 dimensions. It is shown that the existence of a vector field compatible with the flow is a powerful tool in the investigation of Poisson structure and some new techniques for incorporating arbitrary constants into the Poisson one-form are presented herein. This leads to some extensions, analogous to q extensions, of Poisson structure. The Kermack-McKendrick model and some of its generalizations describing the spread of epidemics, as well as the integrable cases of the Lorenz, Lotka-Volterra, May-Leonard, and Maxwell-Bloch systems admit globally integrable bi-Hamiltonian structure.
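The globally integrable bi-Hamiltonian case mentioned above can be summarised compactly. This is the standard three-dimensional (Nambu-type) form, stated here as a sketch rather than as this paper's construction: a flow with two conserved quantities $H_1$, $H_2$ can be written as

```latex
\dot{\mathbf{x}} \;=\; \nabla H_1 \times \nabla H_2,
\qquad
\{f, g\}_{1} \;=\; \nabla H_1 \cdot \left( \nabla f \times \nabla g \right),
```

with the second Poisson bracket obtained by exchanging the roles of $H_1$ and $H_2$, so that the same flow is Hamiltonian with respect to two compatible structures.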
User manual of nuclide dispersion in phreatic aquifers model
International Nuclear Information System (INIS)
Rives, D.E.
1999-01-01
The Nuclide Dispersion in Phreatic Aquifers (DRAF) model was developed in the 'Division Estudios Ambientales' of the 'Gerencia de Seguridad Radiologica y Nuclear, Comision Nacional de Energia Atomica' (1991), for the Safety Assessment of Near Surface Radioactive Waste Disposal Facilities. Afterwards, it was modified on several occasions, adapting it to a number of application conditions. The 'Manual del usuario del codigo DRAF' presented here is a reference document for the use of the last three versions of the code developed for the 'Autoridad Regulatoria Nuclear' between 1995 and 1996. The DRAF model solves the three-dimensional solute transport equation for porous media by the finite-difference method. It takes into account the processes of advection, dispersion, radioactive decay, and retention in the solid matrix, and has multiple possibilities for the source term. There are three versions of the model, two of them for the saturated zone and one for the unsaturated zone. All the versions have been verified in different conditions, and have been applied in exercises of the International Atomic Energy Agency and also in real cases. (author)
A 3D City Model Used as User-interface for an Energy-system
DEFF Research Database (Denmark)
Kjems, Erik
2011-01-01
At CUPUM 2009 the project “Object Oriented Visualization of Urban Energy Consumption” was presented, explaining the technology behind the visualization of an energy-model connected to a 3D city model. This paper presents the subsequent work involving the final design, the user involvement ... and the overall results after the system has been used at the Bright Green Exhibition connected to the COP15 conference in Copenhagen. This paper presents the empirical findings of the attempt to use a 3D city model as user-interface. The system gave the user the possibility to try out different scenarios ... of combinations of the energy-consumption and energy-production for an entire city. The interface was supposed to help especially nonprofessionals, among them politicians, to better perceive the numbers and graphs adjoining the 3D model in a combined view. Only very few systems have been developed for this kind ...
Directory of Open Access Journals (Sweden)
Darko Medved
2015-01-01
Full Text Available With the introduction of Solvency II a consistent market approach to the valuation of insurance assets and liabilities is required. For the best estimate of life annuity provisions one should estimate the longevity risk of the insured population in Slovenia. In this paper the current minimum standard in Slovenia for calculating pension annuities is tested using the Lee-Carter model. In particular, the mortality of the Slovenian population is projected using the best fit from the stochastic mortality projections method. The projected mortality statistics are then corrected with the selection effect and compared with the current minimum standard.
IoT-Based User-Driven Service Modeling Environment for a Smart Space Management System
Choi, Hoan-Suk; Rhee, Woo-Seop
2014-01-01
The existing Internet environment has been extended to the Internet of Things (IoT) as an emerging new paradigm. The IoT connects various physical entities. These entities have communication capability and deploy the observed information to various service areas such as building management, energy-saving systems, surveillance services, and smart homes. These services are designed and developed by professional service providers. Moreover, users' needs have become more complicated and personalized with the spread of user-participation services such as social media and blogging. Therefore, some active users want to create their own services to satisfy their needs, but the existing IoT service-creation environment is difficult for the non-technical user because it requires a programming capability to create a service. To solve this problem, we propose the IoT-based user-driven service modeling environment to provide an easy way to create IoT services. Also, the proposed environment deploys the defined service to another user. Through the personalization and customization of the defined service, the value and dissemination of the service is increased. This environment also provides the ontology-based context-information processing that produces and describes the context information for the IoT-based user-driven service. PMID:25420153
Enhanced UWB Radio Channel Model for Short-Range Communication Scenarios Including User Dynamics
DEFF Research Database (Denmark)
Kovacs, Istvan Zsolt; Nguyen, Tuan Hung; Eggers, Patrick Claus F.
2005-01-01
channel model represents an enhancement of the existing IEEE 802.15.3a/4a PAN channel model, where antenna and user-proximity effects are not included. Our investigations showed that significant variations of the received wideband power and time-delay signal clustering are possible due to the human body...
DEFF Research Database (Denmark)
Dalgaard, Jens; Pena, Jose; Kocka, Tomas
2004-01-01
We propose a method to assist the user in the interpretation of the best Bayesian network model induced from data. The method consists in extracting relevant features from the model (e.g. edges, directed paths and Markov blankets) and then assessing the confidence in them by studying multiple...
O'Neill, Martin; Palmer, Adrian; Wright, Christine
2003-01-01
Disconfirmation models of online service measurement seek to define service quality as the difference between user expectations of the service to be received and perceptions of the service actually received. Two such models, inferred and direct disconfirmation, for measuring quality of the online experience are compared (WebQUAL, SERVQUAL). Findings…
Introducing a new open source GIS user interface for the SWAT model
The Soil and Water Assessment Tool (SWAT) model is a robust watershed modelling tool. It typically uses the ArcSWAT interface to create its inputs. ArcSWAT is public domain software which works in the licensed ArcGIS environment. The aim of this paper was to develop an open source user interface ...
A user-friendly model for spray drying to aid pharmaceutical product development
Grasmeijer, Niels; de Waard, Hans; Hinrichs, Wouter L J; Frijlink, Henderik W
2013-01-01
The aim of this study was to develop a user-friendly model for spray drying that can aid in the development of a pharmaceutical product, by shifting from a trial-and-error towards a quality-by-design approach. To achieve this, a spray dryer model was developed in commercial and open source
A dictionary learning approach for Poisson image deblurring.
Ma, Liyan; Moisan, Lionel; Yu, Jian; Zeng, Tieyong
2013-07-01
The restoration of images corrupted by blur and Poisson noise is a key issue in medical and biological image processing. While most existing methods are based on variational models, generally derived from a maximum a posteriori (MAP) formulation, sparse representations of images have recently been shown to be efficient approaches for image recovery. Following this idea, we propose in this paper a model containing three terms: a patch-based sparse representation prior over a learned dictionary, a pixel-based total variation regularization term, and a data-fidelity term capturing the statistics of Poisson noise. The resulting optimization problem can be solved by an alternating minimization technique combined with variable splitting. Extensive experimental results suggest that in terms of visual quality, peak signal-to-noise ratio, and the method noise, the proposed algorithm outperforms state-of-the-art methods.
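A minimal sketch of the kind of objective such variational models combine: a Poisson negative log-likelihood data term plus total variation. The learned-dictionary prior from the paper is omitted, and the identity operator stands in for a real blur; all names and values here are invented for illustration:

```python
import numpy as np

def poisson_fidelity(Hu, f, eps=1e-8):
    """Negative Poisson log-likelihood up to a constant: sum(Hu - f*log(Hu))."""
    return float(np.sum(Hu - f * np.log(Hu + eps)))

def total_variation(u):
    """Anisotropic TV: sum of absolute forward differences."""
    gx = np.diff(u, axis=1)
    gy = np.diff(u, axis=0)
    return float(np.abs(gx).sum() + np.abs(gy).sum())

def objective(u, f, H, lam_tv=0.1):
    """Two of the three MAP terms for Poisson deblurring; the paper's
    dictionary-based sparse prior is left out of this sketch."""
    return poisson_fidelity(H(u), f) + lam_tv * total_variation(u)

# toy check: flat image, Poisson-noisy observation, identity "blur"
rng = np.random.default_rng(0)
u = np.full((8, 8), 10.0)
f = rng.poisson(u).astype(float)
H = lambda x: x
val = objective(u, f, H)
```

Minimizing such an objective alternates between the data term and the regularizers, as in the variable-splitting scheme the abstract describes.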
Sustainability-based decision making is a challenging process that requires balancing trade-offs among social, economic, and environmental components. System Dynamic (SD) models can be useful tools to inform sustainability-based decision making because they provide a holistic co...
Jebur, M. N.; Pradhan, B.; Shafri, H. Z. M.; Yusof, Z.; Tehrany, M. S.
2014-10-01
Modeling and classification difficulties are fundamental issues in natural hazard assessment. A geographic information system (GIS) is a domain that requires users to use various tools to perform different types of spatial modeling. Bivariate statistical analysis (BSA) assists in hazard modeling. To perform this analysis, several calculations are required and the user has to transfer data from one format to another. Most researchers perform these calculations manually by using Microsoft Excel or other programs. This process is time consuming and carries a degree of uncertainty. The lack of proper tools to implement BSA in a GIS environment prompted this study. In this paper, a user-friendly tool, BSM (bivariate statistical modeler), for the BSA technique is proposed. Three popular BSA techniques, namely the frequency ratio, weights-of-evidence, and evidential belief function models, are applied in the newly proposed ArcMAP tool. This tool is programmed in Python and provides a simple graphical user interface, which facilitates the improvement of model performance. The proposed tool implements BSA automatically, thus allowing numerous variables to be examined. To validate the capability and accuracy of this program, a pilot test area in Malaysia was selected and all three models were tested by using the proposed program. The area under the curve is used to measure the success rate and prediction rate. Results demonstrate that the proposed program executes BSA with reasonable accuracy. The proposed BSA tool can be used in numerous applications, such as natural hazard, mineral potential, hydrological, and other engineering and environmental applications.
Investigation of Random Switching Driven by a Poisson Point Process
DEFF Research Database (Denmark)
Simonsen, Maria; Schiøler, Henrik; Leth, John-Josef
2015-01-01
This paper investigates the switching mechanism of a two-dimensional switched system, when the switching events are generated by a Poisson point process. A model, in the shape of a stochastic process, for such a system is derived and the distribution of the trajectory's position is developed together with marginal density functions for the coordinate functions. Furthermore, the joint probability distribution is given explicitly.
Events in time: Basic analysis of Poisson data
Energy Technology Data Exchange (ETDEWEB)
Engelhardt, M.E.
1994-09-01
The report presents basic statistical methods for analyzing Poisson data, such as the number of events in some period of time. It gives point estimates, confidence intervals, and Bayesian intervals for the rate of occurrence per unit of time. It shows how to compare subsets of the data, both graphically and by statistical tests, and how to look for trends in time. It presents a compound model for when the rate of occurrence varies randomly. Examples and SAS programs are given.
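As a brief illustration of the interval estimates described above: the exact (Garwood) confidence interval for a Poisson rate follows from chi-square quantiles. This sketch uses SciPy rather than the report's SAS programs, and the event counts are invented:

```python
from scipy.stats import chi2

def poisson_rate_ci(n_events, exposure, conf=0.95):
    """Point estimate and exact (Garwood) confidence interval for a Poisson
    occurrence rate, given n_events observed over a total exposure time."""
    alpha = 1.0 - conf
    rate = n_events / exposure
    # exact bounds via the chi-square / Poisson duality
    lower = 0.0 if n_events == 0 else chi2.ppf(alpha / 2, 2 * n_events) / (2 * exposure)
    upper = chi2.ppf(1 - alpha / 2, 2 * (n_events + 1)) / (2 * exposure)
    return rate, lower, upper

# 12 events observed over 10 unit-times
rate, lo, hi = poisson_rate_ci(12, 10.0)
```

The same quantities can be read off the report's tables; the chi-square form is simply the closed-form route to them.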
Gonzales-Barron, Ursula; Zwietering, Marcel H; Butler, Francis
2013-02-01
This study proposes a novel step-wise methodology for the derivation of a sampling plan by variables for food production systems characterised by relatively low concentrations of the inspected microorganism. After representing the universe of contaminated batches by modelling the between-batch and within-batch variability in microbial counts, a tolerance criterion defining batch acceptability (i.e., up to a tolerance percentage of the food units having microbial concentrations lower than or equal to a critical concentration) is established to delineate a limiting quality contour that separates satisfactory from unsatisfactory batches. The problem then consists of finding the optimum decision criterion, namely the arithmetic mean of the analytical results (microbiological limit, m(L)) and the sample size (n), that satisfies a pre-defined level of confidence measured on the samples' mean distributions from all possible true within-batch distributions. This is approached by obtaining decision landscape curves representing collectively the conditional and joint producer's and consumer's risks at different microbiological limits, along with confidence intervals representing uncertainty due to the propagated between-batch variability. Whilst the method requires a number of risk management decisions to be made, such as the objective of the sampling plan (GMP-based or risk-based), the modality of derivation, the tolerance criterion or level of safety, and the statistical level of confidence, the proposed method can be used when past monitoring data are available so as to produce statistically sound dynamic sampling plans with optimised efficiency and discriminatory power. For the illustration of Enterobacteriaceae concentrations on Irish sheep carcasses, a sampling regime of n=10 and m(L)=17.5 CFU/cm(2) is recommended to ensure that the producer has at least a 90% confidence of accepting a satisfactory batch whilst the consumer has at least a 97.5% confidence that a batch will not be
A generalized Poisson solver for first-principles device simulations
Energy Technology Data Exchange (ETDEWEB)
Bani-Hashemian, Mohammad Hossein; VandeVondele, Joost, E-mail: joost.vandevondele@mat.ethz.ch [Nanoscale Simulations, ETH Zürich, 8093 Zürich (Switzerland); Brück, Sascha; Luisier, Mathieu [Integrated Systems Laboratory, ETH Zürich, 8092 Zürich (Switzerland)
2016-01-28
Electronic structure calculations of atomistic systems based on density functional theory involve solving the Poisson equation. In this paper, we present a plane-wave based algorithm for solving the generalized Poisson equation subject to periodic or homogeneous Neumann conditions on the boundaries of the simulation cell and Dirichlet type conditions imposed at arbitrary subdomains. In this way, source, drain, and gate voltages can be imposed across atomistic models of electronic devices. Dirichlet conditions are enforced as constraints in a variational framework giving rise to a saddle point problem. The resulting system of equations is then solved using a stationary iterative method in which the generalized Poisson operator is preconditioned with the standard Laplace operator. The solver can make use of any sufficiently smooth function modelling the dielectric constant, including density dependent dielectric continuum models. For all the boundary conditions, consistent derivatives are available and molecular dynamics simulations can be performed. The convergence behaviour of the scheme is investigated and its capabilities are demonstrated.
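The preconditioning idea in the abstract can be sketched in one dimension: a variable-coefficient (generalized) Poisson operator is solved by a stationary Richardson iteration preconditioned with an FFT inverse of the constant-coefficient Laplacian. This is a simplified, periodic-only sketch, not the paper's plane-wave solver; the coefficient profile and manufactured solution are invented:

```python
import numpy as np

N, L = 128, 1.0
h = L / N
x = np.arange(N) * h
eps = 1.5 + 0.5 * np.cos(2 * np.pi * x)      # dielectric-like coefficient in [1, 2]
eps_face = 0.5 * (eps + np.roll(eps, -1))    # coefficient at cell faces

def apply_A(u):
    """Discrete generalized Poisson operator d/dx(eps du/dx), periodic."""
    flux = eps_face * (np.roll(u, -1) - u) / h
    return (flux - np.roll(flux, 1)) / h

# FFT inverse of the constant-coefficient Laplacian: the preconditioner
symbol = np.mean(eps) * (2 * np.cos(2 * np.pi * np.fft.fftfreq(N)) - 2) / h**2
def precond(r):
    rhat = np.fft.fft(r)
    rhat[0] = 0.0                            # remove the constant null space
    rhat[1:] /= symbol[1:]
    return np.real(np.fft.ifft(rhat))

u_true = np.sin(2 * np.pi * x)
f = apply_A(u_true)                          # manufactured right-hand side
u = np.zeros(N)
for _ in range(100):                         # preconditioned Richardson iteration
    u = u + precond(f - apply_A(u))
err = np.max(np.abs(u - u_true))
```

Because the coefficient stays within a modest range of its mean, the preconditioned iteration contracts geometrically, mirroring the convergence behaviour the paper investigates for its Laplace-preconditioned scheme.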
Directory of Open Access Journals (Sweden)
Svante Stadler
2009-01-01
Users of cochlear implants (CIs) vary widely in their ability to recognize speech in noisy conditions. There are many factors that may influence their performance. We have investigated to what degree it can be explained by the users' ability to discriminate spectral shapes. A speech recognition task was simulated using both a simple and a complex model of CI hearing. The models were individualized by adapting their parameters to fit the results of a spectral discrimination test. The predicted speech recognition performance was compared to experimental results, and they were significantly correlated. The presented framework may be used to simulate the effects of changing the CI encoding strategy.
User-defined Material Model for Thermo-mechanical Progressive Failure Analysis
Knight, Norman F., Jr.
2008-01-01
Previously a user-defined material model for orthotropic bimodulus materials was developed for linear and nonlinear stress analysis of composite structures using either shell or solid finite elements within a nonlinear finite element analysis tool. Extensions of this user-defined material model to thermo-mechanical progressive failure analysis are described, and the required input data are documented. The extensions include providing for temperature-dependent material properties, archival of the elastic strains, and a thermal strain calculation for materials exhibiting a stress-free temperature.
Geothermal loan guaranty cash flow model: description and users' manual
Energy Technology Data Exchange (ETDEWEB)
Keimig, M.A.; Rosenberg, J.I.; Entingh, D.J.
1980-11-01
This is the user's guide for the Geothermal Loan Guaranty Cash Flow Model (GCFM). GCFM is a Fortran code which designs and costs geothermal fields and electric power plants. It contains a financial analysis module which performs life-cycle costing analysis, taking into account various types of taxes, costs, and financial structures. The financial module includes a discounted cash flow feature which calculates a levelized breakeven price for each run. The user's guide contains descriptions of the data requirements and instructions for using the model.
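In its simplest form, the levelized breakeven price mentioned above equates discounted revenues with discounted costs. The sketch below ignores the taxes and financing structure that the real GCFM module includes, and all numbers are invented:

```python
def levelized_breakeven_price(costs, energy, rate):
    """Constant price per unit of energy at which discounted revenues
    equal discounted costs (a bare-bones levelized breakeven price)."""
    disc = [(1 + rate) ** -t for t in range(len(costs))]
    npv_costs = sum(c * d for c, d in zip(costs, disc))
    npv_energy = sum(e * d for e, d in zip(energy, disc))
    return npv_costs / npv_energy

# toy 3-year field: capital cost in year 0, O&M afterwards, constant output
p = levelized_breakeven_price(costs=[1000.0, 50.0, 50.0],
                              energy=[0.0, 500.0, 500.0],
                              rate=0.1)
```

A full life-cycle costing run layers tax treatment and debt/equity structure on top of exactly this discounted-cash-flow core.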
Formal Model for Data Dependency Analysis between Controls and Actions of a Graphical User Interface
Directory of Open Access Journals (Sweden)
SKVORC, D.
2012-02-01
End-user development is an emerging computer science discipline that provides programming paradigms, techniques, and tools suitable for users not trained in software engineering. One of the techniques that allows ordinary computer users to develop their own applications without the need to learn a classic programming language is GUI-level programming based on programming-by-demonstration. To build wizard-based tools that assist users in application development and to verify the correctness of user programs, a computer-supported method for GUI-level data dependency analysis is necessary. Therefore, a formal model for GUI representation is needed. In this paper, we present a finite state machine for modeling the data dependencies between GUI controls and GUI actions. Furthermore, we present an algorithm for automatic construction of the finite state machine for an arbitrary GUI application. We show that the proposed state aggregation scheme successfully manages state explosion in the state machine construction algorithm, which makes the model applicable to applications with complex GUIs.
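Not the paper's state machine itself, but a toy sketch of the underlying notion of GUI data dependency: each action reads some controls and writes others, and dependency between controls is the transitive closure over actions. All control and action names here are hypothetical:

```python
# Hypothetical mini-model: each GUI action reads some controls, writes others.
actions = {
    "search": {"reads": {"query_box"},   "writes": {"result_list"}},
    "select": {"reads": {"result_list"}, "writes": {"detail_pane"}},
    "clear":  {"reads": set(),           "writes": {"query_box", "result_list"}},
}

def depends_on(target, source):
    """True if data can flow from control `source` to control `target`
    through some sequence of actions (transitive closure over actions)."""
    reachable = {source}
    changed = True
    while changed:
        changed = False
        for a in actions.values():
            if a["reads"] & reachable and not a["writes"] <= reachable:
                reachable |= a["writes"]
                changed = True
    return target in reachable
```

A wizard-style tool can use exactly this kind of closure query to warn a non-programmer that editing one control will invalidate another downstream.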
A Global User-Driven Model for Tile Prefetching in Web Geographical Information Systems.
Directory of Open Access Journals (Sweden)
Shaoming Pan
A web geographical information system is a typical service-intensive application. Tile prefetching and cache replacement can improve cache hit ratios by proactively fetching tiles from storage and replacing the appropriate tiles from the high-speed cache buffer without waiting for a client's requests, which reduces disk latency and improves system access performance. Most popular prefetching strategies consider only the relative tile popularities to predict which tile should be prefetched, or consider only a single individual user's access behavior to determine which neighbor tiles need to be prefetched. Some studies show that comprehensively considering all users' access behaviors and all tiles' relationships in the prediction process can achieve more significant improvements. Thus, this work proposes a new global user-driven model for tile prefetching and cache replacement. First, based on all users' access behaviors, an expression method for tile correlation is designed and implemented. Then, a conditional prefetching probability can be computed based on the proposed correlation expression model. Thus, tiles to be prefetched can be found by computing and comparing the conditional prefetching probability over the uncached tiles set and, similarly, replacement tiles can be found in the cache buffer according to multi-step prefetching. Finally, experiments are provided comparing the proposed model with other global user-driven models, other single user-driven models, and other client-side prefetching strategies. The results show that the proposed model can achieve a prefetching hit rate approximately 10.6% to 110.5% higher than the compared methods.
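The conditional prefetching probability described above can be sketched from co-occurrence counts over all users' sessions, which is the "global user-driven" ingredient; the access log and tile names below are invented:

```python
from collections import Counter
from itertools import combinations

# Hypothetical access log: each entry is the set of tiles one session touched.
sessions = [
    {"t1", "t2"}, {"t1", "t2", "t3"}, {"t1", "t3"},
    {"t2", "t3"}, {"t1", "t2"},
]

tile_count = Counter()
pair_count = Counter()
for s in sessions:
    tile_count.update(s)
    pair_count.update(frozenset(p) for p in combinations(sorted(s), 2))

def prefetch_probability(candidate, requested):
    """Estimated P(candidate | requested) from all users' access behavior:
    the fraction of sessions touching `requested` that also touched
    `candidate`."""
    if tile_count[requested] == 0:
        return 0.0
    return pair_count[frozenset((candidate, requested))] / tile_count[requested]
```

Ranking the uncached tiles by this probability, and evicting cached tiles with the lowest values, gives the basic prefetch/replace loop the abstract describes.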
A Global User-Driven Model for Tile Prefetching in Web Geographical Information Systems.
Pan, Shaoming; Chong, Yanwen; Zhang, Hang; Tan, Xicheng
2017-01-01
Do Stochastic Traffic Assignment Models Consider Differences in Road Users Utility Functions?
DEFF Research Database (Denmark)
Nielsen, Otto Anker
1996-01-01
The early stochastic traffic assignment models (e.g. Dial, 1971) build on the assumption that different routes are independent (the logit-model concept). Thus, the models caused severe problems in networks with overlapping routes. Daganzo & Sheffi (1977) suggested using probit-based models to overcome this problem. Sheffi & Powell (1981) presented a practically operational solution algorithm in which the travel resistance for each road segment is adjusted according to a Monte Carlo simulation following the Normal distribution. By this, the road users' 'perceived travel resistances' are simulated. A similar concept is used as a part of the Stochastic User Equilibrium model (SUE) suggested by Daganzo and Sheffi (1977) and operationalized by Sheffi & Powell (1982). In the paper it is discussed whether this way of modelling the 'perceived travel resistance' is sufficient to describe the road users…
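The Sheffi & Powell idea of simulating perceived travel resistances can be sketched on a toy network of two overlapping routes. Note that the shared link's noise cancels in the comparison, which is exactly the correlation the logit model ignores; the link costs and noise scale are invented:

```python
import numpy as np

# Two routes over three links: route A = links 0+2, route B = links 1+2.
# Link 2 is shared, so the route utilities are correlated.
link_cost = np.array([10.0, 12.0, 5.0])
routes = [(0, 2), (1, 2)]

rng = np.random.default_rng(42)
draws = 2000
choices = np.zeros(len(routes), dtype=int)
for _ in range(draws):
    # perceived link costs: Normal noise around the measured resistance
    perceived = link_cost + rng.normal(0.0, 2.0, size=link_cost.size)
    costs = [perceived[list(r)].sum() for r in routes]
    choices[int(np.argmin(costs))] += 1
share_A = choices[0] / draws
```

Because the perturbations are drawn per link rather than per route, overlapping routes automatically receive correlated perceived costs, which is the probit-style behaviour the paper discusses.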
High order Poisson Solver for unbounded flows
DEFF Research Database (Denmark)
Hejlesen, Mads Mølholm; Rasmussen, Johannes Tophøj; Chatelain, Philippe
2015-01-01
This paper presents a high order method for solving the unbounded Poisson equation on a regular mesh using a Green's function solution. The high order convergence was achieved by formulating mollified integration kernels that were derived from a filter regularisation of the solution field. The method was implemented on a rectangular domain using fast Fourier transforms (FFT) to increase computational efficiency. The Poisson solver was extended to directly solve the derivatives of the solution; this is achieved either by including the differential operator in the integration kernel or by using Gaussian smoothing. We use the equations of fluid mechanics as an example, but the method can be applied to many physical problems to solve the Poisson equation on a rectangular unbounded domain. For the two-dimensional case we propose an infinitely smooth test function which allows for arbitrary high order convergence.
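A low-order sketch of the Green's-function approach: the free-space solution of the 2D Poisson equation lap(phi) = -rho is a convolution with G(r) = -ln(r)/(2*pi), evaluated by FFT on a zero-padded (doubled) grid so the convolution is aperiodic. The paper's mollified kernels achieve high order; here the r -> 0 singularity is only crudely regularized, and the source is invented:

```python
import numpy as np

N, h = 64, 1.0 / 64
x = (np.arange(N) - N // 2) * h
X, Y = np.meshgrid(x, x, indexing="ij")
rho = np.exp(-(X**2 + Y**2) / (2 * 0.05**2))        # compact Gaussian source

# Green's function on a doubled grid (zero-padding trick), with the
# singularity at r = 0 replaced by a crude near-field value.
M = 2 * N
d = np.minimum(np.arange(M), M - np.arange(M)) * h  # wrap-around distances
R = np.hypot(d[:, None], d[None, :])
R[0, 0] = 0.5 * h
G = -np.log(R) / (2 * np.pi)

rho_pad = np.zeros((M, M))
rho_pad[:N, :N] = rho
phi = np.real(np.fft.ifft2(np.fft.fft2(rho_pad) * np.fft.fft2(G)))[:N, :N] * h**2
```

The doubling removes the spurious periodic images an FFT convolution would otherwise introduce; the recovered potential peaks at the source centre and is symmetric about it, as expected for a free-space solution.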
Selective Contrast Adjustment by Poisson Equation
Directory of Open Access Journals (Sweden)
Ana-Belen Petro
2013-09-01
Poisson Image Editing is a technique that modifies the gradient vector field of an image and then recovers an image whose gradient approximates this modified field. This amounts to solving a Poisson equation, an operation which can be performed efficiently by the Fast Fourier Transform (FFT). This paper describes an algorithm applying this technique, with two variants. The first variant enhances contrast by increasing the gradient in the dark regions of the image. This method is well adapted to images with back light or strong shadows, and reveals details in the shadows. The second variant of the same Poisson technique enhances all small gradients in the image, thus also sometimes revealing details and texture.
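The core solve can be sketched as follows: given a (possibly modified) gradient field, form its divergence and run one periodic FFT Poisson solve to recover the image up to a constant. With an unmodified field the original image is recovered exactly, which serves as a correctness check; a contrast variant would scale `gx, gy` before reconstruction. The helper name is invented:

```python
import numpy as np

def poisson_reconstruct(gx, gy):
    """Recover (up to a constant) the image whose periodic forward-difference
    gradient field is closest to (gx, gy), via one FFT Poisson solve."""
    # backward-difference divergence, adjoint to the forward-difference gradient
    div = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))
    H, W = div.shape
    wx = 2 * np.cos(2 * np.pi * np.fft.fftfreq(W)) - 2
    wy = 2 * np.cos(2 * np.pi * np.fft.fftfreq(H)) - 2
    denom = wy[:, None] + wx[None, :]
    denom[0, 0] = 1.0                      # the mean is unconstrained
    out = np.real(np.fft.ifft2(np.fft.fft2(div) / denom))
    return out - out.mean()

rng = np.random.default_rng(1)
img = rng.random((16, 16))
gx = np.roll(img, -1, axis=1) - img        # periodic forward differences
gy = np.roll(img, -1, axis=0) - img
rec = poisson_reconstruct(gx, gy)          # unmodified field: original image back
```

Amplifying the gradients where they are small before calling the solver gives the paper's second variant; restricting the amplification to dark regions gives the first.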
Poisson-Jacobi reduction of homogeneous tensors
International Nuclear Information System (INIS)
Grabowski, J; Iglesias, D; Marrero, J C; Padron, E; Urbanski, P
2004-01-01
The notion of homogeneous tensors is discussed. We show that there is a one-to-one correspondence between multivector fields on a manifold M, homogeneous with respect to a vector field Δ on M, and first-order polydifferential operators on a closed submanifold N of codimension 1 such that Δ is transversal to N. This correspondence relates the Schouten-Nijenhuis bracket of multivector fields on M to the Schouten-Jacobi bracket of first-order polydifferential operators on N and generalizes the Poissonization of Jacobi manifolds. Actually, it can be viewed as a super-Poissonization. This procedure of passing from a homogeneous multivector field to a first-order polydifferential operator can also be understood as a sort of reduction; in the standard case, a half of a Poisson reduction. A dual version of the above correspondence yields in particular the correspondence between Δ-homogeneous symplectic structures on M and contact structures on N.
The Models of Applying Online Privacy Literacy Strategies: A Case Study of Instagram Girl Users
Directory of Open Access Journals (Sweden)
Abdollah Bicharanlou
2017-09-01
Social networks play a remarkable role in the lives of virtual-space users. These networks, like most human relations, involve a compromise between self-disclosure and privacy protection, a process realized through improving privacy and empowering the user at the personal level. This study aimed to assess strategies based on online privacy literacy, in particular the strategies that young female Instagram users should employ to achieve the optimum level of privacy. For this purpose, the paradox of privacy and the benefits and risks of self-disclosure are explained first; then, drawing on online privacy literacy, social and technological strategies are introduced by which users can resolve the "paradox of privacy." In the results section, after describing the main benefits and risks of self-disclosure by female users, the current models of using these social and technological strategies to resolve the mentioned paradox are discussed. The research method is ethnography, based on non-collaborative observation of Instagram pages and semi-structured interviews with 20 female users of social networks.
Modeling web-based information seeking by users who are blind.
Brunsman-Johnson, Carissa; Narayanan, Sundaram; Shebilske, Wayne; Alakke, Ganesh; Narakesari, Shruti
2011-01-01
This article describes website information-seeking strategies used by users who are blind and compares them with those of sighted users. It outlines how assistive technologies and website design can aid users who are blind while seeking information. Participants who are blind and sighted participants were tested using an assessment tool and by performing several tasks on websites. The times and keystrokes were recorded for all tasks, as well as the commands used and spatial questioning. Participants who are blind used keyword-based search strategies as their primary tool to seek information. Sighted users also used keyword search techniques if they were unable to find the information using a visual scan of the home page of a website. A model for information seeking, based on the present study, is proposed. Keywords are important in the strategies used by both groups of participants, and providing these common and consistent keywords in locations that are accessible to the users may be useful for efficient information searching. The observations suggest that there may be a difference in how users search a website that is familiar compared to one that is unfamiliar. © 2011 Informa UK, Ltd.
Poplová, Michaela; Sovka, Pavel; Cifra, Michal
2017-01-01
Photonic signals are broadly exploited in communication and sensing and they typically exhibit Poisson-like statistics. In a common scenario where the intensity of the photonic signals is low and one needs to remove a nonstationary trend of the signals for any further analysis, one faces an obstacle: due to the dependence between the mean and variance typical for a Poisson-like process, information about the trend remains in the variance even after the trend has been subtracted, possibly yielding artifactual results in further analyses. Commonly available detrending or normalizing methods cannot cope with this issue. To alleviate this issue we developed a suitable pre-processing method for the signals that originate from a Poisson-like process. In this paper, a Poisson pre-processing method for nonstationary time series with Poisson distribution is developed and tested on computer-generated model data and experimental data of chemiluminescence from human neutrophils and mung seeds. The presented method transforms a nonstationary Poisson signal into a stationary signal with a Poisson distribution while preserving the type of photocount distribution and phase-space structure of the signal. The importance of the suggested pre-processing method is shown in Fano factor and Hurst exponent analysis of both computer-generated model signals and experimental photonic signals. It is demonstrated that our pre-processing method is superior to standard detrending-based methods whenever further signal analysis is sensitive to variance of the signal.
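One simple way to detrend while preserving the Poisson character, in the spirit of the method above though not necessarily its exact transform, is binomial thinning: thinning a Poisson(λ) count with probability p yields Poisson(λp), so scaling every bin down to the minimum rate flattens the trend in both mean and variance. The trend shape and rates below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 20000
lam = 5.0 + 4.0 * np.arange(T) / T        # slowly rising trend: rate 5 -> 9
counts = rng.poisson(lam)                  # nonstationary Poisson-like signal

# Binomial thinning: keep each count event with probability lam_min/lam[t].
# Unlike subtracting the trend, this removes the trend from the variance too,
# because thinning a Poisson count is again Poisson.
lam_hat = lam                              # assume the trend is known/estimated
p = lam_hat.min() / lam_hat
detrended = rng.binomial(counts, p)
fano = detrended.var() / detrended.mean()
```

After thinning, the Fano factor sits near 1, the Poisson signature, whereas trend subtraction would leave a variance trend behind and bias Fano or Hurst analyses, which is precisely the artifact the paper warns about.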
Exponential Stability of Stochastic Systems with Delay and Poisson Jumps
Directory of Open Access Journals (Sweden)
Wenli Zhu
2014-01-01
This paper focuses on a model of a class of nonlinear stochastic delay systems with Poisson jumps, based on Lyapunov stability theory, stochastic analysis, and inequality techniques. The existence and uniqueness of the adapted solution to such systems are proved by applying the fixed point theorem. By constructing a Lyapunov function and using Doob's martingale inequality and the Borel-Cantelli lemma, sufficient conditions are given to establish the exponential stability in the mean square of such systems, and we prove that exponential stability in the mean square implies almost sure exponential stability. The obtained results show that if a stochastic system is exponentially stable and the time delay is sufficiently small, then the corresponding stochastic delay system with Poisson jumps remains exponentially stable. An upper limit on the time delay is obtained from these results when the system is exponentially stable, and the conditions are easily verified and applied in practice.
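A minimal Euler-Maruyama sketch of the class of systems above: a scalar delay equation with multiplicative Brownian noise and Poisson jumps. The coefficients are invented and chosen well inside the stable regime (strong damping, small delay term), illustrating the "small delay preserves stability" conclusion rather than proving it:

```python
import numpy as np

# dx(t) = (-a*x(t) + b*x(t - tau)) dt + sigma*x(t) dW(t) + gamma*x(t-) dN(t)
rng = np.random.default_rng(3)
a, b, sigma, gamma, lam = 5.0, 0.2, 0.1, -0.1, 1.0
tau, dt, T = 0.1, 1e-3, 5.0
n, d = int(T / dt), int(tau / dt)

x = np.empty(n + 1)
x[0] = 1.0
for k in range(n):
    delayed = x[k - d] if k >= d else 1.0      # constant initial history
    dW = rng.normal(0.0, np.sqrt(dt))          # Brownian increment
    dN = rng.poisson(lam * dt)                 # Poisson jump indicator
    x[k + 1] = (x[k]
                + (-a * x[k] + b * delayed) * dt
                + sigma * x[k] * dW
                + gamma * x[k] * dN)
```

With a = 5 dominating the delay coupling b = 0.2, the simulated trajectory decays to near zero despite the noise and jumps, consistent with the exponential stability conditions of the paper.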
Improved mesh generator for the POISSON Group Codes
International Nuclear Information System (INIS)
Gupta, R.C.
1987-01-01
This paper describes the improved mesh generator of the POISSON Group Codes. These improvements enable one to have full control over the way the mesh is generated and, in particular, the way the mesh density is distributed throughout the model. A higher mesh density in certain regions, coupled with a successively lower mesh density in others, keeps the accuracy of the field computation high and the requirements on computer time and memory low. The mesh is generated with the help of the codes AUTOMESH and LATTICE; both have gone through a major upgrade. Modifications have also been made in the POISSON part of these codes. We present an example of a superconducting dipole magnet to explain how to use this code. The results of field computations are found to be reliable within a few parts in a hundred thousand, even in such complex geometries.
Directory of Open Access Journals (Sweden)
Shudong Liu
2018-01-01
The rapid growth of location-based services (LBSs) has greatly enriched people's urban lives and attracted millions of users in recent years. Location-based social networks (LBSNs) allow users to check in at a physical location and share daily tips on points of interest (POIs) with their friends anytime and anywhere. Such check-in behavior can make daily real-life experiences spread quickly through the Internet. Moreover, such check-in data in LBSNs can be fully exploited to understand the basic laws of humans' daily movement and mobility. This paper focuses on reviewing the taxonomy of user modeling for POI recommendations through the data analysis of LBSNs. First, we briefly introduce the structure and data characteristics of LBSNs, and then we present a formalization of user modeling for POI recommendations in LBSNs. Depending on which type of LBSN data is fully utilized in user modeling approaches for POI recommendations, we divide user modeling algorithms into four categories: pure check-in data-based user modeling, geographical information-based user modeling, spatiotemporal information-based user modeling, and geosocial information-based user modeling. Finally, summarizing the existing works, we point out the future challenges and new directions in five possible aspects.
Equilibrium stochastic dynamics of Poisson cluster ensembles
Directory of Open Access Journals (Sweden)
L.Bogachev
2008-06-01
The distribution μ of a Poisson cluster process in X = R^d (with n-point clusters) is studied via the projection of an auxiliary Poisson measure in the space of configurations in X^n, with the intensity measure being the convolution of the background intensity (of cluster centres) with the probability distribution of a generic cluster. We show that μ is quasi-invariant with respect to the group of compactly supported diffeomorphisms of X, and prove an integration by parts formula for μ. The corresponding equilibrium stochastic dynamics is then constructed using the method of Dirichlet forms.
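A Poisson cluster process of the kind studied above can be simulated directly: Poisson-distributed parent centres, each dressed with an n-point cluster (here Gaussian offsets around the centre; all parameters are invented):

```python
import numpy as np

# Neyman-Scott style Poisson cluster process on the unit square:
# Poisson number of parent centres, each with a fixed-size Gaussian cluster.
rng = np.random.default_rng(7)
lam_parents, n_offspring, spread = 50.0, 5, 0.01

n_parents = rng.poisson(lam_parents)
parents = rng.random((n_parents, 2))                 # uniform cluster centres
points = (parents[:, None, :]
          + rng.normal(0.0, spread, size=(n_parents, n_offspring, 2))
          ).reshape(-1, 2)
```

The resulting point pattern is exactly the projection picture of the abstract: each parent drawn from the background intensity contributes one generic n-point cluster.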
White Noise of Poisson Random Measures
Proske, Frank; Øksendal, Bernt
2002-01-01
We develop a white noise theory for Poisson random measures associated with a Lévy process. The starting point of this theory is a chaos expansion with kernels of polynomial type. We use this to construct the white noise of a Poisson random measure, which takes values in a certain distribution space. Then we show, how a Skorohod/Itô integral for point processes can be represented by a Bochner integral in terms of white noise of the random measure and a Wick product. Further, we apply these co...
Bayesian regression of piecewise homogeneous Poisson processes
Directory of Open Access Journals (Sweden)
Diego Sevilla
2015-12-01
In this paper, a Bayesian method for piecewise regression is adapted to handle counting-process data distributed as Poisson. A numerical code in Mathematica is developed and tested by analyzing simulated data. The resulting method is valuable for detecting breaking points in the count rate of time series for Poisson processes. Received: 2 November 2015, Accepted: 27 November 2015; Edited by: R. Dickman; Reviewed by: M. Hutter, Australian National University, Canberra, Australia; DOI: http://dx.doi.org/10.4279/PIP.070018. Cite as: D J R Sevilla, Papers in Physics 7, 070018 (2015).
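A non-Bayesian cousin of the method above, useful as a sanity check: the single breaking point of a Poisson count series can be located by maximizing the profile log-likelihood over split positions (the simulated rates below are invented):

```python
import numpy as np

def poisson_breakpoint(counts):
    """Maximum-likelihood single breaking point for Poisson counts.
    For a segment with MLE rate m, the profile log-likelihood contributes
    sum(x)*log(m) - m*len(segment); factorial terms are constant across
    splits and drop out."""
    counts = np.asarray(counts, dtype=float)
    best_k, best_ll = None, -np.inf
    for k in range(1, len(counts)):
        ll = 0.0
        for seg in (counts[:k], counts[k:]):
            m = seg.mean()
            if m > 0:
                ll += seg.sum() * np.log(m) - m * len(seg)
        if ll > best_ll:
            best_k, best_ll = k, ll
    return best_k

rng = np.random.default_rng(5)
data = np.concatenate([rng.poisson(2.0, 100), rng.poisson(8.0, 100)])
k_hat = poisson_breakpoint(data)
```

The Bayesian treatment in the paper additionally yields a posterior over the breaking point, rather than this single maximum-likelihood estimate.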
Exploring the Nature and Implementation Process of User-Centric Business Models
DEFF Research Database (Denmark)
Hienerth, Christoph; Keinz, Peter; Lettl, Christopher
2011-01-01
Recent ICT advances have allowed companies to interact with external stakeholders, especially users, in more efficient and effective ways, with the result that more and more companies are striving to take advantage of these new opportunities and harness their users' creative potential by integrating them into core business processes. Successful companies like Threadless or Dell, which were designed to allow user innovation and co-creation from the outset, have clearly demonstrated the potential value of such approaches. However, introducing user-centric value creation processes at established companies is a complex task, requiring major adaptations to traditional manufacturer-centered business models. At present, little is known about how such companies can successfully implement user-centric business models: this article explores (1) the success factors for attracting and engaging…
Wang, Chenxu; Guan, Xiaohong; Qin, Tao; Yang, Tao
2015-06-01
Online social networks have become an indispensable communication tool in the information age. The development of microblogging also provides a great opportunity to study the human dynamics that play a crucial role in the design of efficient communication systems. In this paper we study the characteristics of tweeting behavior based on data collected from Sina Microblog. The user activity level is measured to characterize how often a user posts a tweet. We find that the user activity level follows a bimodal distribution; that is, microblog users tend to be either active or inactive. The inter-tweeting time distribution is then measured at both the aggregate and individual levels. We find that the inter-tweeting time follows a piecewise power-law distribution with two tails. Furthermore, the exponents of the two tails have different correlations with the user activity level. These findings demonstrate that the dynamics of tweeting behavior are heterogeneous across time scales. We then develop a dynamic model co-driven by a memory mechanism and an interest mechanism to characterize this heterogeneity. Numerical simulations validate the model and verify that short-time-interval tweeting behavior is driven by the memory mechanism while long-time-interval behavior is driven by the interest mechanism.
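Tail exponents of a piecewise power-law inter-event distribution are typically estimated by maximum likelihood on each tail separately. A minimal sketch of that step, assuming a continuous power law above a chosen cutoff xmin (the sampling routine and parameter values are illustrative, not taken from the study):

```python
import math
import random

def sample_power_law(alpha, xmin, n, rng):
    # inverse-transform sampling for p(x) ~ x^(-alpha), x >= xmin
    return [xmin * (1.0 - rng.random()) ** (-1.0 / (alpha - 1.0)) for _ in range(n)]

def hill_mle(xs, xmin):
    """Maximum-likelihood (Hill) estimate of the tail exponent alpha for
    the observations at or above xmin."""
    tail = [x for x in xs if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

rng = random.Random(42)
xs = sample_power_law(2.5, 1.0, 50000, rng)
print(round(hill_mle(xs, 1.0), 1))  # close to the true exponent 2.5
```

For a two-tail piecewise fit one would apply the estimator separately below and above the crossover time scale.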
Involving mental health service users in suicide-related research: a qualitative inquiry model.
Lees, David; Procter, Nicholas; Fassett, Denise; Handley, Christine
2016-03-01
To describe the research model developed and successfully deployed as part of a multi-method qualitative study investigating suicidal service-users' experiences of mental health nursing care. Quality mental health care is essential to limiting the occurrence and burden of suicide; however, there is a lack of relevant research informing practice in this context. Research utilising first-person accounts of suicidality is of particular importance to expanding the existing evidence base, yet conducting ethical research to support this imperative is challenging. The model discussed here illustrates specific and more generally applicable principles for qualitative research on sensitive topics involving potentially vulnerable service-users. Research with mental health service users who have first-person experience of suicidality requires stakeholder and institutional support, researcher competency, and careful attention to participant recruitment, consent, confidentiality, support and protection. Research with service users into their experiences of sensitive issues such as suicidality can yield rich and valuable data, and may also provide positive experiences of collaboration and inclusivity. If these challenges are not met, objectification and marginalisation of service-users may be reinforced, and limitations in the evidence base and service provision may be perpetuated.
User Defined Data in the New Analysis Model of the BaBar Experiment
Energy Technology Data Exchange (ETDEWEB)
De Nardo, G.
2005-04-06
The BaBar experiment has recently revised its Analysis Model. One of the key ingredients of the new Analysis Model is support for adding user-defined data to the Event Store; such data can be the output of complex computations performed at an advanced stage of a physics analysis and are associated with analysis objects. In order to provide flexibility and extensibility with respect to object types, template generic programming has been adopted. In this way the model is non-intrusive with respect to the reconstruction and analysis objects it manages, requiring no changes to their interfaces and implementations. Technological details are hidden as much as possible from the user, who is presented with a simple interface. In this paper we present some of the limitations of the old model and how they are addressed by the new Analysis Model.
Transmission Line Jobs and Economic Development Impact (JEDI) Model User Reference Guide
Energy Technology Data Exchange (ETDEWEB)
Goldberg, M.; Keyser, D.
2013-10-01
The Jobs and Economic Development Impact (JEDI) models, developed through the National Renewable Energy Laboratory (NREL), are freely available, user-friendly tools that estimate the potential economic impacts of constructing and operating power generation projects for a range of conventional and renewable energy technologies. The Transmission Line JEDI model can be used to field questions about the economic impacts of transmission lines in a given state, region, or local community. This Transmission Line JEDI User Reference Guide was developed to provide basic instruction on operating the model and understanding the results. This guide also provides information on the model's underlying methodology, as well as the parameters and references used to develop the cost data contained in the model.
Using a Reference Corpus as a User Model for Focused Information Retrieval
Mishne, G.A.; de Rijke, M.; Jijkoun, V.; van Zwol, R.
2005-01-01
We propose a method for ranking short information nuggets extracted from a text corpus, using another, reliable reference corpus as a user model. We argue that the availability and usage of such additional corpora is common in a number of IR tasks, and apply the method to answering a form of
Using a Reference Corpus as a User Model for Focused Information Retrieval
Mishne, G.A.; de Rijke, M.; Jijkoun, V.
2005-01-01
We propose a method for ranking short information nuggets extracted from a text corpus, using another, reliable reference corpus as a user model. We argue that the availability and usage of such additional corpora is common in a number of IR tasks, and apply the method to answering a form of
Visual imagery and the user model applied to fuel handling at EBR-II
Energy Technology Data Exchange (ETDEWEB)
Brown-VanHoozer, S.A.
1995-06-01
The material presented in this paper is based on two studies involving visual display designs and the user's perspective model of a system. The studies involved a methodology known as Neuro-Linguistic Programming (NLP) and its use in expanding design choices to include the "comfort parameters" and "perspective reality" of the user's model of the world. In developing visual displays for the EBR-II fuel handling system, the focus would be first to incorporate the comfort parameters that overlap across the three representation systems (visual, auditory and kinesthetic), then to incorporate the comfort parameters of the most prominent group of the population, and last to blend in the comfort parameters of the other two representational systems. The focus of this informal study was to use the techniques of meta-modeling and synesthesia to develop a virtual environment that closely resembled the operator's perspective of the fuel handling system of Argonne's Experimental Breeder Reactor II. An informal study was conducted using NLP as the behavioral model in a virtual reality (VR) setting.
Support of surgical process modeling by using adaptable software user interfaces
Neumuth, T.; Kaschek, B.; Czygan, M.; Goldstein, D.; Strauß, G.; Meixensberger, J.; Burgert, O.
2010-03-01
Surgical Process Modeling (SPM) is a powerful method for acquiring data about the evolution of surgical procedures. Surgical Process Models are used in a variety of use cases, including evaluation studies, requirements analysis, procedure optimization, surgical education, and workflow management scheme design. This work proposes the use of adaptive, situation-aware user interfaces in observation support software for SPM. We developed a method to support the observer's modeling work by using an ontological knowledge base, which drives the graphical user interface so that the terminology search space is restricted depending on the current situation. The evaluation study shows that the workload of the observer was decreased significantly by using adaptive user interfaces. 54 SPM observation protocols were analyzed using the NASA Task Load Index, showing that the adaptive user interface significantly reduces the observer's workload in the criteria of effort, mental demand and temporal demand, helping them concentrate on the essential task of modeling the surgical process.
Please do not disturb : Modeling user experience for considerate home products
Vastenburg, M.H.
2007-01-01
Products in the home offer ever more functionality. Advances in sensor technology, embedded processing power, and modeling and reasoning software, have enabled everyday products to sense their environment and eventually anticipate user needs. The enabling technology for ambient intelligence is now
A Study on Intelligent User-Centric Logistics Service Model Using Ontology
Directory of Open Access Journals (Sweden)
Saraswathi Sivamani
2014-01-01
Full Text Available Much research has been conducted in the smart logistics environment on prompt delivery of products to the right place at the right time. Most existing services are based on time management, routing techniques, and location-based services. Services in recent logistics environments aim for situation-based logistics service centered on the user, utilizing various information technologies such as mobile devices, computer systems, and GPS. This paper proposes a smart logistics service model for providing user-centric intelligent logistics service by utilizing smartphones in a smart environment. We also develop an OWL-based ontology model for smart logistics for better understanding of the context information. In addition to basic delivery information, the proposed service model makes use of the location and situation information of the delivery vehicle and the user to derive route information according to the user's requirements. With the growth of Internet usage and the Internet of Things, real-time situation information can be obtained, which helps create a more reliable relationship between deliverer and user. Through this service model, it is possible to develop various IT and logistics convergence services based on the situation information that arises in real time between the deliverer and the user.
Supersymmetric quantum corrections and Poisson-Lie T-duality
International Nuclear Information System (INIS)
Assaoui, F.; Lhallabi, T.; Abdus Salam International Centre for Theoretical Physics, Trieste
2000-07-01
The quantum actions of the (4,4) supersymmetric non-linear sigma model and its dual in the Abelian case are constructed by using the background superfield method. The propagators of the quantum superfield and its dual and the gauge fixing actions of the original and dual (4,4) supersymmetric sigma models are determined. On the other hand, the BRST transformations are used to obtain the quantum dual action of the (4,4) supersymmetric nonlinear sigma model in the sense of Poisson-Lie T-duality. (author)
Soft network materials with isotropic negative Poisson's ratios over large strains.
Liu, Jianxing; Zhang, Yihui
2018-01-31
Auxetic materials with negative Poisson's ratios have important applications across a broad range of engineering areas, such as biomedical devices, aerospace engineering and automotive engineering. A variety of design strategies have been developed to achieve artificial auxetic materials with controllable responses in the Poisson's ratio. The development of designs that can offer isotropic negative Poisson's ratios over large strains can open up new opportunities in emerging biomedical applications, which, however, remains a challenge. Here, we introduce deterministic routes to soft architected materials that can be tailored precisely to yield the values of Poisson's ratio in the range from -1 to 1, in an isotropic manner, with a tunable strain range from 0% to ∼90%. The designs rely on a network construction in a periodic lattice topology, which incorporates zigzag microstructures as building blocks to connect lattice nodes. Combined experimental and theoretical studies on broad classes of network topologies illustrate the wide-ranging utility of these concepts. Quantitative mechanics modeling under both infinitesimal and finite deformations allows the development of a rigorous design algorithm that determines the necessary network geometries to yield target Poisson ratios over desired strain ranges. Demonstrative examples in artificial skin with both the negative Poisson's ratio and the nonlinear stress-strain curve precisely matching those of the cat's skin and in unusual cylindrical structures with engineered Poisson effect and shape memory effect suggest potential applications of these network materials.
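For reference, when Poisson's ratio is reported "over large strains" as in this work, a finite-strain (logarithmic-strain) definition is commonly used; a minimal sketch of that convention (not the authors' mechanics model):

```python
import math

def poisson_ratio_true_strain(l0, l, w0, w):
    """Poisson's ratio over a finite stretch using logarithmic (true)
    strains: nu = -ln(w/w0) / ln(l/l0); it reduces to the usual
    small-strain ratio for infinitesimal deformations."""
    return -math.log(w / w0) / math.log(l / l0)

# an ideal isotropic auxetic with nu = -1 widens as much as it lengthens
print(poisson_ratio_true_strain(1.0, 1.5, 1.0, 1.5))  # → -1.0
```

The tailored networks in the paper are designed so that this ratio stays at a target value throughout the engineered strain range.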
Switch Based Opportunistic Spectrum Access for General Primary User Traffic Model
Gaaloul, Fakhreddine
2012-06-18
This letter studies a cognitive radio transceiver that can opportunistically use the available channels of a primary user (PU). Specifically, we investigate and compare two different opportunistic channel access schemes. The first scheme applies when the secondary user (SU) has access to only one channel. The second scheme, based on a channel switching mechanism, applies when the SU has access to multiple channels but can at a given time monitor and access only one channel. For these access schemes, we derive exact analytical results for the novel performance metrics of average access time and average waiting time under general PU traffic models.
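The average waiting time for the single-channel scheme can be estimated by Monte Carlo under one concrete instance of a general PU traffic model, exponential busy/idle periods (the letter's analysis is exact and more general; all parameter values here are illustrative):

```python
import bisect
import random

def avg_waiting_time(mean_busy, mean_idle, n_arrivals=20000, horizon=20000.0, seed=1):
    """Monte-Carlo estimate of the SU's average waiting time on a single
    channel whose PU alternates exponentially distributed busy/idle
    periods."""
    rng = random.Random(seed)
    starts, ends = [], []          # PU busy intervals [start, end)
    t = 0.0
    while t < horizon:
        busy = rng.expovariate(1.0 / mean_busy)
        starts.append(t)
        ends.append(t + busy)
        t += busy + rng.expovariate(1.0 / mean_idle)
    total = 0.0
    for _ in range(n_arrivals):
        a = rng.random() * horizon   # SU arrival at a uniform random time
        i = bisect.bisect_right(starts, a) - 1
        if i >= 0 and a < ends[i]:   # channel busy: wait until it frees
            total += ends[i] - a
    return total / n_arrivals

w = avg_waiting_time(1.0, 1.0)
# for exponential periods, theory gives p_busy * mean_busy = 0.5 here
print(round(w, 3))
```

With equal mean busy and idle times the estimate should land near 0.5, the stationary busy probability times the mean residual busy period.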
Jobs and Economic Development Impact (JEDI) User Reference Guide: Fast Pyrolysis Biorefinery Model
Energy Technology Data Exchange (ETDEWEB)
Zhang, Yimin [National Renewable Energy Lab. (NREL), Golden, CO (United States); Goldberg, Marshall [MRG and Associates, Nevada City, CA (United States)
2015-02-01
This guide -- the JEDI Fast Pyrolysis Biorefinery Model User Reference Guide -- was developed to assist users in operating and understanding the JEDI Fast Pyrolysis Biorefinery Model. The guide provides information on the model's underlying methodology, as well as the parameters and data sources used to develop the cost data utilized in the model. This guide also provides basic instruction on model add-in features and a discussion of how the results should be interpreted. Based on project-specific inputs from the user, the JEDI Fast Pyrolysis Biorefinery Model estimates local (e.g., county- or state-level) job creation, earnings, and output from total economic activity for a given fast pyrolysis biorefinery. These estimates include the direct, indirect and induced economic impacts to the local economy associated with the construction and operation phases of biorefinery projects. Local revenue and supply chain impacts as well as induced impacts are estimated using economic multipliers derived from the IMPLAN software program. By determining the local economic impacts and job creation for a proposed biorefinery, the JEDI Fast Pyrolysis Biorefinery Model can be used to field questions about the added value biorefineries might bring to a local community.
Spatial Nonhomogeneous Poisson Process in Corrosion Management
López De La Cruz, J.; Kuniewski, S.P.; Van Noortwijk, J.M.; Guriérrez, M.A.
2008-01-01
A method to test the assumption of nonhomogeneous Poisson point processes is implemented to analyze corrosion pit patterns. The method is calibrated with three artificially generated patterns and manages to accurately assess whether a pattern distribution is random, regular, or clustered. The
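A common first step in assessing whether a point pattern is random, regular, or clustered is a quadrat-count chi-square statistic against complete spatial randomness; a toy sketch along those lines (not the calibrated method of the paper):

```python
import random

def quadrat_statistic(points, k=4):
    """Chi-square quadrat-count statistic for testing complete spatial
    randomness on the unit square: a homogeneous Poisson pattern gives
    near-equal counts in the k*k cells and hence a small statistic."""
    n = len(points)
    counts = [0] * (k * k)
    for x, y in points:
        counts[min(int(x * k), k - 1) * k + min(int(y * k), k - 1)] += 1
    expected = n / (k * k)
    return sum((c - expected) ** 2 / expected for c in counts)

rng = random.Random(0)
uniform = [(rng.random(), rng.random()) for _ in range(400)]
clustered = [(0.3 * rng.random(), 0.3 * rng.random()) for _ in range(400)]
print(quadrat_statistic(uniform) < quadrat_statistic(clustered))  # True
```

A large statistic relative to the chi-square reference distribution flags departure from randomness; distinguishing regular from clustered patterns then requires looking at whether counts are too even or too concentrated.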
Efficient information transfer by Poisson neurons
Czech Academy of Sciences Publication Activity Database
Košťál, Lubomír; Shinomoto, S.
2016-01-01
Roč. 13, č. 3 (2016), s. 509-520 ISSN 1547-1063 R&D Projects: GA ČR(CZ) GA15-08066S Institutional support: RVO:67985823 Keywords : information capacity * Poisson neuron * metabolic cost * decoding error Subject RIV: BD - Theory of Information Impact factor: 1.035, year: 2016
Poisson brackets for fluids and plasmas
International Nuclear Information System (INIS)
Morrison, P.J.
1982-01-01
Noncanonical yet Hamiltonian descriptions are presented of many of the non-dissipative field equations that govern fluids and plasmas. The dynamical variables are the usually encountered physical variables. These descriptions have the advantage that gauge conditions are absent, but at the expense of introducing peculiar Poisson brackets. Clebsch-like potential descriptions that reverse this situation are also introduced.
Almost Poisson integration of rigid body systems
International Nuclear Information System (INIS)
Austin, M.A.; Krishnaprasad, P.S.; Li-Sheng Wang
1993-01-01
In this paper we discuss the numerical integration of Lie-Poisson systems using the mid-point rule. Since such systems result from the reduction of Hamiltonian systems with symmetry by Lie group actions, we also present examples of reconstruction rules for the full dynamics. A primary motivation is to preserve in the integration process various conserved quantities of the original dynamics. A main result of this paper is an O(h³) error estimate for the Lie-Poisson structure, where h is the integration step-size. We note that Lie-Poisson systems appear naturally in many areas of physical science and engineering, including theoretical mechanics of fluids and plasmas, satellite dynamics, and polarization dynamics. In the present paper we consider a series of progressively complicated examples related to rigid body systems. We also consider a dissipative example associated with a Lie-Poisson system. The behavior of the mid-point rule and an associated reconstruction rule is numerically explored. 24 refs., 9 figs.
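One attraction of the mid-point rule for the simplest such example, the free rigid body, is that it conserves the quadratic Casimir |m|² exactly up to solver tolerance, even though the Lie-Poisson structure itself is preserved only to O(h³). A minimal sketch, assuming the Euler equations m' = m × ω with ω_i = m_i/I_i (one common sign convention) and a fixed-point solve of the implicit step; step size and inertias are illustrative:

```python
def cross(a, b):
    # vector cross product a x b
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def midpoint_step(m, h, inertia, iters=50):
    """One implicit mid-point step for the free rigid body, solved by
    fixed-point iteration on m_{n+1}."""
    mn = m
    for _ in range(iters):
        mid = tuple(0.5 * (a + b) for a, b in zip(m, mn))
        omega = tuple(c / I for c, I in zip(mid, inertia))
        f = cross(mid, omega)
        mn = tuple(a + h * fi for a, fi in zip(m, f))
    return mn

inertia = (1.0, 2.0, 3.0)     # illustrative principal moments of inertia
m = (1.0, 0.2, -0.4)          # initial angular momentum
c0 = sum(x * x for x in m)    # quadratic Casimir |m|^2
for _ in range(1000):
    m = midpoint_step(m, 0.05, inertia)
print(abs(sum(x * x for x in m) - c0) < 1e-10)  # True: |m|^2 is preserved
```

The conservation follows because the mid-point update satisfies (m⁺ − m⁻)·(m⁺ + m⁻) = 2h (m̄ × ω̄)·m̄ = 0 for the converged implicit solution.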
Dimensional reduction for generalized Poisson brackets
Acatrinei, Ciprian Sorin
2008-02-01
We discuss dimensional reduction for Hamiltonian systems which possess nonconstant Poisson brackets between pairs of coordinates and between pairs of momenta. The associated Jacobi identities imply that the dimensionally reduced brackets are always constant. Some examples are given alongside the general theory.
DEFF Research Database (Denmark)
Spataru, Sergiu; Sera, Dezso; Kerekes, Tamas
2012-01-01
This paper presents a set of laboratory tools aimed at supporting students with various backgrounds (no programming) in understanding photovoltaic array modelling and characterization techniques. A graphical user interface (GUI) has been developed in Matlab for modelling PV arrays and characterizing the effect of different types of parameters and operating conditions on the current-voltage and power-voltage curves. The GUI is supported by experimental investigation and validation on PV module level, with the help of an indoor flash solar simulator.
The dynamical modeling and simulation analysis of the recommendation on the user-movie network
Zhang, Shujuan; Jin, Zhen; Zhang, Juan
2016-12-01
At present, most research on recommender systems is based on graph theory and algebraic methods, but these methods cannot predict how the system evolves with time under a given recommendation method, and cannot dynamically analyze the long-term utility of the recommendation method. These two aspects can, however, be studied by the dynamical method, which investigates the intrinsic evolution mechanism of things and is widely used to study a variety of practical problems. So, in this paper, network dynamics is used to study recommendation on the user-movie network, which consists of users and movies, where movies are watched either through personal search or through recommendation. Firstly, dynamical models are established to characterize the personal search and the system recommendation mechanisms: the personal search model, the random recommendation model, the preference recommendation model, the degree recommendation model and the hybrid recommendation model. The rationality of the established models is verified by comparing stochastic simulation with numerical simulation. Moreover, the validity of the recommendation methods is evaluated by studying the movie degree, defined as the number of times a movie has been watched. Finally, we combine personal search and recommendation to establish a more general model. The change of the average degree of all the movies is given as a function of the strength of the recommendation. Results show that for each recommendation method the change of the movie degree is different, and is related to the initial degree of the movies, the adjacency matrix A representing the relation between users and movies, and the time t. Additionally, we find that over a long time the degree recommendation does not perform as well as it does over a short time, which fully demonstrates the advantage of the dynamical method. For the whole user-movie system, the preference recommendation is the best.
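The contrast between recommendation mechanisms can be illustrated with a toy stochastic simulation: degree-based recommendation is a rich-get-richer process that concentrates watches on a few movies, while random recommendation spreads them evenly (this sketch is not the paper's dynamical model; all parameters are illustrative):

```python
import random

def simulate(mode, n_movies=200, n_watches=20000, seed=0):
    """Toy watch process: each step one movie is watched, chosen either
    uniformly at random ('random' recommendation) or proportionally to
    its current degree ('degree' recommendation, rich-get-richer)."""
    rng = random.Random(seed)
    deg = [1] * n_movies                  # seed every movie with one watch
    for _ in range(n_watches):
        if mode == "random":
            i = rng.randrange(n_movies)
        else:
            r = rng.random() * sum(deg)   # degree-proportional roulette wheel
            acc = 0
            for i, d in enumerate(deg):
                acc += d
                if r < acc:
                    break
        deg[i] += 1
    return deg

random_deg = simulate("random")
degree_deg = simulate("degree")
print(max(degree_deg) > max(random_deg))  # True: degree-based concentrates
```

The dynamical approach in the paper replaces such step-by-step simulation with evolution equations for the degrees, which is what makes long-term analysis tractable.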
MPEG-7 low level image descriptors for modeling users' web pages visual appeal opinion
Uribe Mayoral, Silvia; Alvarez Garcia, Federico; Menendez Garcia, Jose Manuel
2015-01-01
The study of the users' web pages first impression is an important factor for interface designers, due to its influence over the final opinion about a site. In this regard, the analysis of web aesthetics can be considered as an interesting tool for evaluating this early impression, and the use of low level image descriptors for modeling it in an objective way represents an innovative research field. According to this, in this paper we present a new model for website aesthetics evaluation and ...
Jobs and Economic Development Impact (JEDI) Model: Offshore Wind User Reference Guide
Energy Technology Data Exchange (ETDEWEB)
Lantz, E.; Goldberg, M.; Keyser, D.
2013-06-01
The Offshore Wind Jobs and Economic Development Impact (JEDI) model, developed by NREL and MRG & Associates, is a spreadsheet based input-output tool. JEDI is meant to be a user friendly and transparent tool to estimate potential economic impacts supported by the development and operation of offshore wind projects. This guide describes how to use the model as well as technical information such as methodology, limitations, and data sources.
The predictive model on the user reaction time using the information similarity
International Nuclear Information System (INIS)
Lee, Sung Jin; Heo, Gyun Young; Chang, Soon Heung
2005-01-01
Human performance is frequently degraded because people forget. Memory is one of the brain processes that are important when trying to understand how people process information. Although a large number of studies have been made of human performance, little is known about the similarity effect on human performance. The purpose of this paper is to propose and validate a quantitative, predictive model of human response time in user interfaces based on the concept of similarity. It is not easy, however, to explain human performance with similarity or information amount alone. We are confronted by two difficulties: building a quantitative model of human response time that incorporates similarity, and validating the proposed model experimentally. We built the quantitative model based on Hick's law and the law of practice. In addition, we validated the model under various experimental conditions by measuring participants' response times in a computer-based display environment. Experimental results reveal that human performance is improved by similarity in the user interface. We believe the proposed model is useful for the user interface design and evaluation phases.
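A model combining Hick's law with the power law of practice and a similarity discount might look like the following sketch; the functional form and all coefficients are hypothetical placeholders, not the paper's fitted equation:

```python
import math

def response_time(n_choices, practice_trials, similarity, a=0.2, b=0.15, c=0.3):
    """Illustrative response-time model (NOT the paper's fitted equation):
    a Hick's-law term b*log2(n+1) discounted by information similarity
    s in [0, 1] and by the power law of practice trials**(-c). The
    coefficients a, b, c are hypothetical placeholders."""
    hick = b * math.log2(n_choices + 1)
    return a + hick * (1.0 - similarity) * practice_trials ** (-c)

# higher similarity and more practice both predict faster responses
print(response_time(8, 1, 0.5) < response_time(8, 1, 0.0))   # → True
print(response_time(8, 10, 0.0) < response_time(8, 1, 0.0))  # → True
```

Whatever the exact functional form, the qualitative predictions tested in the study are the two inequalities shown: similarity and practice both shorten response time.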
Dabek, Filip; Caban, Jesus J
2017-01-01
Despite the recent popularity of visual analytics for big data, little is known about how to support users who employ visualization techniques to explore multi-dimensional datasets and accomplish specific tasks. Our lack of models that can assist end-users during the data exploration process has made it challenging to learn from the user's interactive and analytical process. The ability to model how a user interacts with a specific visualization technique and what difficulties they face is paramount in supporting individuals with discovering new patterns within their complex datasets. This paper introduces the notion of visualization systems understanding and modeling user interactions, with the intent of guiding users through a task and thereby enhancing visual data exploration. The challenges faced and the necessary future steps are discussed; and to provide a working example, a grammar-based model is presented that can learn from user interactions, determine the common patterns among a number of subjects using a K-Reversible algorithm, build a set of rules, and apply those rules in the form of suggestions to new users with the goal of guiding them along their visual analytic process. A formal evaluation study with 300 subjects showed that our grammar-based model is effective at capturing the interactive process followed by users and that further research in this area has the potential to positively impact how users interact with a visualization system.
Jebur, M. N.; Pradhan, B.; Shafri, H. Z. M.; Yusoff, Z. M.; Tehrany, M. S.
2015-03-01
Modelling and classification difficulties are fundamental issues in natural hazard assessment. A geographic information system (GIS) is a domain that requires users to use various tools to perform different types of spatial modelling. Bivariate statistical analysis (BSA) assists in hazard modelling. To perform this analysis, several calculations are required and the user has to transfer data from one format to another. Most researchers perform these calculations manually using Microsoft Excel or other programs; this process is time-consuming and carries a degree of uncertainty. The lack of proper tools to implement BSA in a GIS environment prompted this study. In this paper, a user-friendly tool for the BSA technique, the bivariate statistical modeler (BSM), is proposed. Three popular BSA techniques, the frequency ratio, weights-of-evidence (WoE), and evidential belief function (EBF) models, are implemented in the newly proposed ArcMAP tool. The tool is programmed in Python with a simple graphical user interface (GUI), which facilitates the improvement of model performance. The proposed tool implements BSA automatically, thus allowing numerous variables to be examined. To validate the capability and accuracy of the program, a pilot test area in Malaysia is selected and all three models are tested. The area under the curve (AUC) is used to measure the success rate and prediction rate. Results demonstrate that the proposed program executes BSA with reasonable accuracy. The proposed BSA tool can be used in numerous applications, such as natural hazard, mineral potential, hydrological, and other engineering and environmental applications.
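Of the three BSA techniques, the frequency ratio is the simplest to compute; a minimal sketch of the calculation such a tool automates (class and hazard pixel counts are toy numbers):

```python
def frequency_ratio(class_pixels, hazard_pixels):
    """Frequency ratio per class: FR_i = (hazard share of class i) /
    (area share of class i); FR > 1 marks classes over-represented
    among hazard occurrences."""
    total_area = sum(class_pixels)
    total_hazard = sum(hazard_pixels)
    return [(h / total_hazard) / (c / total_area)
            for c, h in zip(class_pixels, hazard_pixels)]

# three slope classes: pixel counts vs. landslide pixels (toy numbers)
fr = frequency_ratio([500, 300, 200], [10, 30, 60])
print([round(v, 2) for v in fr])  # → [0.2, 1.0, 3.0]
```

In a hazard map, each cell's susceptibility score is then typically the sum of the FR values of the classes it falls into across all conditioning factors.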
Semiclassical limit and well-posedness of nonlinear Schrodinger-Poisson systems
Directory of Open Access Journals (Sweden)
Hailiang Li
2003-09-01
Full Text Available This paper concerns the well-posedness and semiclassical limit of nonlinear Schrodinger-Poisson systems. We show the local well-posedness and the existence of the semiclassical limit of the two models for initial data with Sobolev regularity, before shocks appear in the limit system. We establish the existence of a global solution and show the time-asymptotic behavior of classical solutions of the Schrodinger-Poisson system for a fixed re-scaled Planck constant.
An Iterative Algorithm to Determine the Dynamic User Equilibrium in a Traffic Simulation Model
Gawron, C.
An iterative algorithm to determine the dynamic user equilibrium with respect to link costs defined by a traffic simulation model is presented. Each driver's route choice is modeled by a discrete probability distribution which is used to select a route in the simulation. After each simulation run, the probability distribution is adapted to minimize the travel costs. Although the algorithm does not depend on the simulation model, a queuing model is used for performance reasons. The stability of the algorithm is analyzed for a simple example network. As an application example, a dynamic version of Braess's paradox is studied.
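The iterative idea, simulate, observe route costs, then shift each route-choice probability toward cheaper routes, can be sketched on a two-route toy network with linear congestion costs (a simplification of the paper's per-driver probability adaptation; all cost coefficients are illustrative):

```python
def route_costs(p, demand=1000.0):
    """Linear congestion costs for a two-route toy network; p is the
    share of drivers choosing route 1. Coefficients are illustrative."""
    c1 = 10.0 + p * demand / 100.0
    c2 = 15.0 + (1.0 - p) * demand / 200.0
    return c1, c2

def find_equilibrium(eta=0.05, iters=200):
    # shift the route-choice probability toward the currently cheaper route
    p = 0.5
    for _ in range(iters):
        c1, c2 = route_costs(p)
        p = min(max(p + eta * (c2 - c1), 0.0), 1.0)
    return p

p = find_equilibrium()
c1, c2 = route_costs(p)
print(round(p, 4))  # → 0.6667 (costs equalized: user equilibrium)
```

At the fixed point the two route costs coincide, which is exactly Wardrop's user-equilibrium condition; replacing the cost evaluation with a traffic simulation run recovers the structure of the algorithm in the paper.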
Quantitative agent based model of user behavior in an Internet discussion forum.
Sobkowicz, Pawel
2013-01-01
The paper presents the application of an agent-based simulation of opinion evolution, based on nonlinear emotion/information/opinion (E/I/O) individual dynamics, to an actual Internet discussion forum. The goal is to reproduce the results of two-year-long observations and analyses of user communication behavior and of the expressed opinions and emotions via simulations using an agent-based model. The model allowed us to derive various characteristics of the forum, including the distribution of user activity and popularity (outdegree and indegree), the distribution of lengths of dialogs between participants, their political sympathies, and the emotional content and purpose of the comments. The parameters used in the model have intuitive meanings and can be translated into psychological observables.
Quantitative agent based model of user behavior in an Internet discussion forum.
Directory of Open Access Journals (Sweden)
Pawel Sobkowicz
Full Text Available The paper presents the application of an agent-based simulation of opinion evolution, based on nonlinear emotion/information/opinion (E/I/O) individual dynamics, to an actual Internet discussion forum. The goal is to reproduce the results of two-year-long observations and analyses of user communication behavior and of the expressed opinions and emotions via simulations using an agent-based model. The model allowed us to derive various characteristics of the forum, including the distribution of user activity and popularity (outdegree and indegree), the distribution of lengths of dialogs between participants, their political sympathies, and the emotional content and purpose of the comments. The parameters used in the model have intuitive meanings and can be translated into psychological observables.
Information support model and its impact on utility, satisfaction and loyalty of users
Directory of Open Access Journals (Sweden)
Sead Šadić
2016-11-01
Full Text Available In today's information age, information systems are of vital importance for the successful performance of any organization. The most important role of any information system is its information support. This paper develops an information support model and presents the results of a survey examining the effects of such a model. The survey was performed among the employees of the Brčko District Government and comprised three phases. The first phase assesses the influence of the quality of information support and of information itself on information support in decision making. The second phase examines the impact of information support in decision making on the perceived availability of, and user satisfaction with, information support. The third phase examines the effects of perceived usefulness and information support satisfaction on user loyalty. The model is presented using six hypotheses, which were tested by means of multivariate regression analysis. The model shows that the quality of information support and information is of vital importance in the decision-making process, and that perceived usefulness and user satisfaction are of vital importance for continued usage of information support. The model is universal and, if slightly modified, can be used in any sphere where the satisfaction of clients and users of a service is measured.
Duran, Cassidy; Estrada, Sean; O'Malley, Marcia; Lumsden, Alan B; Bismuth, Jean
2015-02-01
Endovascular robotics systems, now approved for clinical use in the United States and Europe, are seeing rapid growth in interest. Determining who has sufficient expertise for safe and effective clinical use remains elusive. Our aim was to analyze performance on a robotic platform to determine what defines an expert user. During three sessions, 21 subjects with a range of endovascular expertise and endovascular robotic experience (novices 20 hours) performed four tasks on a training model. All participants completed a 2-hour training session on the robot by a certified instructor. Completion times, global rating scores, and motion metrics were collected to assess performance. Electromagnetic tracking was used to capture and to analyze catheter tip motion. Motion analysis was based on derivations of speed and position including spectral arc length and total number of submovements (inversely proportional to proficiency of motion) and duration of submovements (directly proportional to proficiency). Ninety-eight percent of competent subjects successfully completed the tasks within the given time, whereas 91% of noncompetent subjects were successful. There was no significant difference in completion times between competent and noncompetent users except for the posterior branch (151 s:105 s; P = .01). The competent users had more efficient motion as evidenced by statistically significant differences in the metrics of motion analysis. Users with >20 hours of experience performed significantly better than those newer to the system, independent of prior endovascular experience. This study demonstrates that motion-based metrics can differentiate novice from trained users of flexible robotics systems for basic endovascular tasks. Efficiency of catheter movement, consistency of performance, and learning curves may help identify users who are sufficiently trained for safe clinical use of the system. This work will help identify the learning curve and specific movements that
A generalized Poisson and Poisson-Boltzmann solver for electrostatic environments
Energy Technology Data Exchange (ETDEWEB)
Fisicaro, G., E-mail: giuseppe.fisicaro@unibas.ch; Goedecker, S. [Department of Physics, University of Basel, Klingelbergstrasse 82, 4056 Basel (Switzerland); Genovese, L. [University of Grenoble Alpes, CEA, INAC-SP2M, L-Sim, F-38000 Grenoble (France); Andreussi, O. [Institute of Computational Science, Università della Svizzera Italiana, Via Giuseppe Buffi 13, CH-6904 Lugano (Switzerland); Theory and Simulations of Materials (THEOS) and National Centre for Computational Design and Discovery of Novel Materials (MARVEL), École Polytechnique Fédérale de Lausanne, Station 12, CH-1015 Lausanne (Switzerland); Marzari, N. [Theory and Simulations of Materials (THEOS) and National Centre for Computational Design and Discovery of Novel Materials (MARVEL), École Polytechnique Fédérale de Lausanne, Station 12, CH-1015 Lausanne (Switzerland)
2016-01-07
The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of applied electrochemical potentials, taking into account the non-trivial electrostatic screening coming from the solvent and the electrolytes. As a consequence, the electrostatic potential has to be found by solving the generalized Poisson and the Poisson-Boltzmann equations for neutral and ionic solutions, respectively. In the present work, solvers for both problems have been developed. A preconditioned conjugate gradient method has been implemented for the solution of the generalized Poisson equation and the linear regime of the Poisson-Boltzmann equation, allowing the minimization problem to be solved iteratively within some ten iterations of the ordinary Poisson equation solver. In addition, a self-consistent procedure enables us to solve the non-linear Poisson-Boltzmann problem. Both solvers exhibit very high accuracy and parallel efficiency and allow for the treatment of periodic, free, and slab boundary conditions. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and will be released as an independent program, suitable for integration in other codes.
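The preconditioned conjugate gradient idea described above can be illustrated on a one-dimensional generalized Poisson problem. This is a minimal finite-difference sketch; the grid layout, function name, and Jacobi preconditioner are our own assumptions, and the paper's solver is far more general (multiple boundary conditions, three dimensions, high parallel efficiency):

```python
import numpy as np

def solve_generalized_poisson_1d(eps_face, rho, h, tol=1e-10, max_iter=500):
    """Solve d/dx( eps(x) d(phi)/dx ) = -rho on a uniform grid with
    phi = 0 at both ends, via conjugate gradient with a Jacobi (diagonal)
    preconditioner. `eps_face` holds the permittivity at the m+1 cell
    faces; `rho` holds the charge density at the m interior nodes."""
    m = len(rho)
    el, er = eps_face[:-1], eps_face[1:]   # left/right face permittivities
    diag = (el + er) / h**2                # Jacobi preconditioner

    def apply_A(phi):                      # A = -d/dx(eps d/dx), SPD
        left = np.concatenate(([0.0], phi[:-1]))
        right = np.concatenate((phi[1:], [0.0]))
        return ((el + er) * phi - el * left - er * right) / h**2

    phi = np.zeros(m)
    r = rho - apply_A(phi)                 # solve A phi = rho
    z = r / diag
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rz / (p @ Ap)
        phi += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = r / diag
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return phi
```

With `eps_face` equal to 1 everywhere this reduces to the ordinary Poisson equation; for example, rho = pi^2 sin(pi x) on (0, 1) recovers phi = sin(pi x).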
Seyoum, Awoke; Ndlovu, Principal; Zewotir, Temesgen
2016-01-01
CD4 cells are a type of white blood cell that plays a significant role in protecting humans from infectious diseases. Lack of information on factors associated with CD4 cell count reduction is an obstacle to improvement of cell counts in HIV-positive adults. Therefore, the main objective of this study was to investigate baseline factors that could affect initial CD4 cell count change after highly active antiretroviral therapy had been given to adult patients in North West Ethiopia. A retrospective cross-sectional study was conducted among 792 HIV-positive adult patients who had already started antiretroviral therapy and completed 1 month of therapy. A Chi-square test of association was used to assess the effect of predictor covariates on the variable of interest. Data came from a secondary source and were modeled using generalized linear models, especially Quasi-Poisson regression. The patients' CD4 cell count change within a month ranged from 0 to 109 cells/mm³, with a mean of 15.9 cells/mm³ and standard deviation of 18.44 cells/mm³. The first-month CD4 cell count change was significantly affected by poor adherence to highly active antiretroviral therapy (aRR = 0.506, P value = 2e-16), fair adherence (aRR = 0.592, P value = 0.0120), initial CD4 cell count (aRR = 1.0212, P value = 1.54e-15), low household income (aRR = 0.63, P value = 0.671e-14), middle income (aRR = 0.74, P value = 0.629e-12), patients without a cell phone (aRR = 0.67, P value = 0.615e-16), WHO stage 2 (aRR = 0.91, P value = 0.0078), WHO stage 3 (aRR = 0.91, P value = 0.0058), WHO stage 4 (aRR = 0.876, P value = 0.0214), age (aRR = 0.987, P value = 0.000), and weight (aRR = 1.0216, P value = 3.98e-14). Adherence to antiretroviral therapy, initial CD4 cell count, household income, WHO stage, age, weight, and cell phone ownership played a major role in the variation of CD4 cell count in our data. Hence, we recommend a close follow-up of patients to adhere to the prescribed medication for
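The quasi-Poisson approach used above can be sketched as a standard Poisson regression fit by iteratively reweighted least squares (IRLS), with standard errors inflated by a Pearson dispersion estimate. This is a minimal illustration under our own assumptions (function name, toy data), not the study's code:

```python
import numpy as np

def quasi_poisson_fit(X, y, n_iter=25):
    """Fit a log-link Poisson regression by IRLS, then estimate the
    quasi-Poisson dispersion via the Pearson statistic and scale the
    standard errors by its square root. Returns (beta, se, dispersion);
    adjusted rate ratios are exp(beta)."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        z = X @ beta + (y - mu) / mu           # working response (log link)
        W = mu                                  # IRLS weights
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    mu = np.exp(X @ beta)
    dispersion = np.sum((y - mu) ** 2 / mu) / (n - p)   # Pearson chi2 / df
    cov = dispersion * np.linalg.inv(X.T @ (mu[:, None] * X))
    return beta, np.sqrt(np.diag(cov)), dispersion
```

On over-dispersed count data the dispersion exceeds 1 and widens the confidence intervals; on equi-dispersed Poisson data it stays near 1, recovering ordinary Poisson inference.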
Optimized thick-wall cylinders by virtue of Poisson's ratio selection
International Nuclear Information System (INIS)
Whitty, J.P.M.; Henderson, B.; Francis, J.; Lloyd, N.
2011-01-01
The principal stress distributions in thick-wall cylinders due to variation in Poisson's ratio are predicted using analytical and finite element methods. Analyses of appropriate brittle and ductile failure criteria show that, under the isochoric pressure conditions investigated, auxetic materials (i.e. those possessing a negative Poisson's ratio) act as stress concentrators; hence they are predicted to fail before their conventional counterparts (i.e. those possessing a positive Poisson's ratio). The key finding of the work presented shows that for constrained thick-wall cylinders the maximum tensile principal stress can vanish at a particular combination of Poisson's ratio and aspect ratio. This phenomenon is exploited in order to present an optimized design criterion for thick-wall cylinders. Moreover, via the use of a cogent finite element model, this criterion is also shown to be applicable to the design of micro-porous materials.
Remarks on 'Poisson ratio beyond the limits of the elasticity theory'
International Nuclear Information System (INIS)
Wojciechowski, K.W.
2002-12-01
The non-chiral, elastically isotropic model exhibits Poisson ratios in the range -1 ≤ σ ≤ 1 without any molecular rotation. The centres of the disc-atoms are placed at the vertices of a perfect triangle with side length equal to σ. The positive sign of the Lamé constant λ is not necessary for the stability of an isotropic system at any dimensionality. As the upper limit for the Poisson ratio in 2D isotropic systems is 1, crystalline or polycrystalline 2D systems can be obtained having a Poisson ratio exceeding 1/2. Both the traditional theory of elasticity and the Cosserat one exclude Poisson ratios exceeding 1/2 in 3D isotropic systems. Neither anisotropy nor rotation is necessary to obtain extreme values of the Poisson ratio (author)
Models for Ballistic Wind Measurement Error Analysis. Volume II. Users’ Manual.
1983-01-01
ASL-CR-83-0008-1 (Reports Control Symbol OSO-1366; AD-A129 360): Models for Ballistic Wind Measurement Error Analysis, Volume II: Users' Manual. New Mexico State University, Las Cruces, Physical Science Laboratory.
User's Manual for Data for Validating Models for PV Module Performance
Energy Technology Data Exchange (ETDEWEB)
Marion, W.; Anderberg, A.; Deline, C.; Glick, S.; Muller, M.; Perrin, G.; Rodriguez, J.; Rummel, S.; Terwilliger, K.; Silverman, T. J.
2014-04-01
This user's manual describes performance data measured for flat-plate photovoltaic (PV) modules installed in Cocoa, Florida, Eugene, Oregon, and Golden, Colorado. The data include PV module current-voltage curves and associated meteorological data for approximately one-year periods. These publicly available data are intended to facilitate the validation of existing models for predicting the performance of PV modules, and for the development of new and improved models. For comparing different modeling approaches, using these public data will provide transparency and more meaningful comparisons of the relative benefits.
Linear odd Poisson bracket on Grassmann variables
International Nuclear Information System (INIS)
Soroka, V.A.
1999-01-01
A linear odd Poisson bracket (antibracket) realized solely in terms of Grassmann variables is suggested. It is revealed that the bracket, which corresponds to a semi-simple Lie group, has at once three Grassmann-odd nilpotent Δ-like differential operators of the first, the second and the third orders with respect to Grassmann derivatives, in contrast with the canonical odd Poisson bracket having the only Grassmann-odd nilpotent differential Δ-operator of the second order. It is shown that these Δ-like operators together with a Grassmann-odd nilpotent Casimir function of this bracket form a finite-dimensional Lie superalgebra. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)
Delimitation of the user in the creation stages, modeling and prototyping of the clothing product
Directory of Open Access Journals (Sweden)
Elen Makara
2016-12-01
Complaints about clothes that fit poorly or cause discomfort, and the difficulty of acquiring a suitable outfit for an occasion, suggest that the clothing industry may be failing to consider the user when developing its products. The aim of this paper was to determine whether the user is being considered in the creation, modeling, and prototyping stages of garment products, since a design project should consider the user at all stages of development. To this end, a literature review and interviews with professionals were conducted. The results show the difficulty professionals have in defining the public for whom they create the pieces. We also found that most professionals understand the importance of the measurement table but end up adjusting it empirically, and that for prototype testing many choose fit models that do not match the reality of the company's public.
EpiPOD : community vaccination and dispensing model user's guide.
Energy Technology Data Exchange (ETDEWEB)
Berry, M.; Samsa, M.; Walsh, D.; Decision and Information Sciences
2009-01-09
EpiPOD is a modeling system that enables local, regional, and county health departments to evaluate and refine their plans for mass distribution of antiviral and antibiotic medications and vaccines. An intuitive interface requires users to input as few or as many plan specifics as are available in order to simulate a mass treatment campaign. Behind the input interface, a system dynamics model simulates pharmaceutical supply logistics, hospital and first-responder personnel treatment, population arrival dynamics and treatment, and disease spread. When the simulation is complete, users have estimates of the number of illnesses in the population at large, the number of ill persons seeking treatment, and queuing and delays within the mass treatment system--all metrics by which the plan can be judged.
Alves, Vânia Sampaio
2009-11-01
This article aims to characterize health care models for users of alcohol and other drugs in the Brazilian context. Discourse analysis was performed on public drug policy in Brazil from the 1970s. This analysis was contextualized by a brief digression on the main political positions identified in several countries of the world in relation to drug use problems. Beginning in the current decade, drug policies in Brazil have been receptive to harm reduction approaches, resulting in reorientation of the health care model. In conclusion, the structuring and strengthening of a network of care for users of alcohol and other drugs and their families, based on community care and the harm reduction approach and combined with other social and health services, is now a key public health challenge for the country.
Towards Patient-Centric Telehealth: a Journey into ICT Infrastructures and User Modeling
DEFF Research Database (Denmark)
Jørgensen, Daniel Bjerring
behavior? Results: A draft of an ICT infrastructure, intended as a research platform for telehealth applications, has been implemented. Integration to the ICT infrastructure is easily achieved through an ontological framework, provided by an agent-based façade library. User modeling is employed to store...... the state of the art was conducted. Discussions are provided of the advantages and disadvantages of the Continua framework (the foundation of the ongoing national initiatives) and how context information can be used and included on the national level. Conclusions: The agent, ontology and user modeling...... paradigms were applied to successfully implement a flexible and open ICT infrastructure where telehealth applications can be tested. One such application is the prediction algorithm whose prediction ability is ~70% of a smart home resident’s next activity making it the state of the art in predicting ADL...
A Neural Network Approach to Intention Modeling for User-Adapted Conversational Agents
Directory of Open Access Journals (Sweden)
David Griol
2016-01-01
Spoken dialogue systems have been proposed to enable a more natural and intuitive interaction with the environment and with human-computer interfaces. In this contribution, we present a framework based on neural networks that models the user's intention during the dialogue and uses this prediction to dynamically adapt the dialogue model of the system, taking into consideration the user's needs and preferences. We have evaluated our proposal by developing a user-adapted spoken dialogue system that provides tourist information and services, and we provide a detailed discussion of the positive influence of our proposal on the success of the interaction, the information and services provided, and the quality perceived by the users.
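As a toy illustration of neural intention modeling of the kind described above, a classifier can be trained to map dialogue-state features to intention labels. Everything here is hypothetical: the single softmax layer, feature encoding, and names are our own assumptions, not the paper's architecture:

```python
import numpy as np

def train_intent_classifier(X, y, n_classes, lr=0.5, epochs=300, seed=0):
    """Minimal neural-style intent predictor: one softmax layer trained by
    gradient descent on cross-entropy loss over dialogue-state features."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = 0.01 * rng.standard_normal((d, n_classes))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]                      # one-hot labels
    for _ in range(epochs):
        logits = X @ W + b
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)         # softmax probabilities
        grad = (p - Y) / n                        # cross-entropy gradient
        W -= lr * X.T @ grad
        b -= lr * grad.sum(axis=0)
    return W, b

def predict_intent(W, b, X):
    """Most probable intention class for each feature vector."""
    return np.argmax(X @ W + b, axis=1)
```

In a dialogue system, the predicted class would then select among candidate dialogue-model adaptations, as the framework above proposes.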
Streamflow forecasting using the modular modeling system and an object-user interface
Jeton, A.E.
2001-01-01
The U.S. Geological Survey (USGS), in cooperation with the Bureau of Reclamation (BOR), developed a computer program to provide a general framework needed to couple disparate environmental resource models and to manage the necessary data. The Object-User Interface (OUI) is a map-based interface for models and modeling data. It provides a common interface to run hydrologic models and acquire, browse, organize, and select spatial and temporal data. One application is to assist river managers in utilizing streamflow forecasts generated with the Precipitation-Runoff Modeling System running in the Modular Modeling System (MMS), a distributed-parameter watershed model, and the National Weather Service Extended Streamflow Prediction (ESP) methodology.
Degenerate odd Poisson bracket on Grassmann variables
International Nuclear Information System (INIS)
Soroka, V.A.
2000-01-01
A linear degenerate odd Poisson bracket (antibracket) realized solely on Grassmann variables is proposed. It is revealed that this bracket has at once three Grassmann-odd nilpotent Δ-like differential operators of the first, second and third orders with respect to the Grassmann derivatives. It is shown that these Δ-like operators, together with the Grassmann-odd nilpotent Casimir function of this bracket, form a finite-dimensional Lie superalgebra
Poisson/Superfish codes for personal computers
International Nuclear Information System (INIS)
Humphries, S.
1992-01-01
The Poisson/Superfish codes calculate static E or B fields in two-dimensions and electromagnetic fields in resonant structures. New versions for 386/486 PCs and Macintosh computers have capabilities that exceed the mainframe versions. Notable improvements are interactive graphical post-processors, improved field calculation routines, and a new program for charged particle orbit tracking. (author). 4 refs., 1 tab., figs
Elementary derivation of Poisson structures for fluid dynamics and electrodynamics
International Nuclear Information System (INIS)
Kaufman, A.N.
1982-01-01
The canonical Poisson structure of the microscopic Lagrangian is used to deduce the noncanonical Poisson structure for the macroscopic Hamiltonian dynamics of a compressible neutral fluid and of fluid electrodynamics
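For reference, the canonical Poisson structure referred to above is, in finite dimensions, the standard bracket on phase-space functions (a textbook formula, not taken from the paper):

```latex
\{F, G\} \;=\; \sum_i \left(
  \frac{\partial F}{\partial q_i}\,\frac{\partial G}{\partial p_i}
  \;-\;
  \frac{\partial F}{\partial p_i}\,\frac{\partial G}{\partial q_i}
\right),
```

whereas the noncanonical brackets for fluid and electrodynamic field variables are obtained by expressing the macroscopic observables in terms of the microscopic canonical variables.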
Does the Model of Evaluation Based on Fair Value Answer the Requests of Financial Information Users?
Mitea Neluta; Sarac Aldea Laura
2010-01-01
Does the model of evaluation based on fair value answer the requests of financial information users? Financial statements have as their purpose the presentation of information concerning the enterprise's financial position, its performance, and modifications of this position which, according to the IASB and FASB, must be credible and useful. Both referentials maintain the existence of several conventions regarding assessment, like historical cost, actual cost, the realizable value or act...
User Modeling and Planning for Improving Self-efficacy and Goal Adherence in mHealth
Directory of Open Access Journals (Sweden)
Peter Pirolli
2015-10-01
Our user modeling efforts have been primarily focused on refining predictions of psychosocial changes that underlie the achievement of behavior-change goals. We have chosen to use the ACT-R neurocognitive theory and simulation environment as a way of driving the user modeling efforts because we believe that: (a) neurocognitive architectures provide a unified account of how the modules of the mind function together to produce coherent behavior, and provide an integrative explanation of data produced across specialized domains of psychology; (b) longer-term behavior change occurring over days, weeks, or months can be decomposed into learning events occupying much briefer units of time; and (c) models in neurocognitive architectures provide a basis for bridging the events at the small scale to the dynamics of behavior change occurring at the large scale. Our planner is an adaptive, computational approach to the problem of interactive planning for health-behavior change. The approach ensures that the generated plan is tailored to each individual user and is reactive to their experience with it. This can lead to better compliance and long-term goal achievement.
Reduction of Nambu-Poisson Manifolds by Regular Distributions
Das, Apurba
2018-03-01
The version of Marsden-Ratiu reduction theorem for Nambu-Poisson manifolds by a regular distribution has been studied by Ibáñez et al. In this paper we show that the reduction is always ensured unless the distribution is zero. Next we extend the more general Falceto-Zambon Poisson reduction theorem for Nambu-Poisson manifolds. Finally, we define gauge transformations of Nambu-Poisson structures and show that these transformations commute with the reduction procedure.
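For context, the standard example of a Nambu-Poisson structure (a textbook fact, not taken from the paper) is the order-3 bracket on $\mathbb{R}^3$ given by the Jacobian determinant:

```latex
\{f, g, h\} \;=\; \det
\begin{pmatrix}
  \partial_x f & \partial_y f & \partial_z f \\
  \partial_x g & \partial_y g & \partial_z g \\
  \partial_x h & \partial_y h & \partial_z h
\end{pmatrix}
\;=\; \nabla f \cdot (\nabla g \times \nabla h).
```

Reduction by a regular distribution then asks when such a bracket descends to a quotient manifold, which is the setting of the theorems extended in the paper.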
LIANA Model Integration System - architecture, user interface design and application in MOIRA DSS
Directory of Open Access Journals (Sweden)
D. Hofman
2005-01-01
The LIANA Model Integration System is the shell application supporting model integration and the user interface functionality required for the rapid construction and run-time support of environmental decision support systems (EDSS). Internally it is constructed as a framework of C++ classes and functions covering the most common tasks performed by an EDSS (such as managing alternative strategies, running the chain of models, supporting visualisation of the data with tables and graphs, and keeping ranges and default values for input parameters). An EDSS is constructed by integrating the LIANA system with the models or other applications such as GIS or MAA software. The basic requirements for a model or other application to be integrated are minimal: it should be a Windows or DOS .exe file and receive input and provide output as text files. For the user, the EDSS is represented as a number of data sets describing a scenario or giving the results of evaluating a scenario via modelling. Internally, data sets correspond to the I/O files of the models. During integration, the parameters included in each of the data sets, as well as the specifications necessary to present a data set in the GUI and export or import it to/from a text file, are provided with the MIL_LIANA language. The Visual C++ version of LIANA was developed in the frame of the MOIRA project and is used as the basis for the MOIRA Software Framework, the shell and user interface component of the MOIRA Decision Support System. At present, the usage of LIANA for the creation of a new EDSS requires changes to be made in its C++ code. The possibility to use LIANA for new EDSS construction without extending the source code is achieved by substituting MIL_LIANA with the object-oriented LIANA language.
Yuan, Shupei; Ma, Wenjuan; Kanthawala, Shaheen; Peng, Wei
2015-09-01
Health and fitness applications (apps) are one of the major app categories in the current mobile app market. Few studies have examined this area from the users' perspective. This study adopted the Extended Unified Theory of Acceptance and Use of Technology (UTAUT2) Model to examine the predictors of users' intention to adopt health and fitness apps. A survey (n=317) was conducted with college-aged smartphone users at a Midwestern university in the United States. Performance expectancy, hedonic motivations, price value, and habit were significant predictors of users' intention of continued usage of health and fitness apps. However, effort expectancy, social influence, and facilitating conditions were not found to predict users' intention of continued usage of health and fitness apps. This study extends the UTAUT2 Model to the mobile apps domain and provides health professionals, app designers, and marketers with insights into user experience in terms of continuously using health and fitness apps.
2001 Joint ADVISOR/PSAT Vehicle Systems Modeling User's Conference Proceedings (CD)
International Nuclear Information System (INIS)
Markel, T.
2001-01-01
The 2001 Joint ADVISOR/PSAT Vehicle Systems Modeling User Conference provided an opportunity for engineers in the automotive industry and the research environment to share their experiences in vehicle systems modeling using ADVISOR and PSAT. ADVISOR and PSAT are vehicle systems modeling tools developed and supported by the National Renewable Energy Laboratory and Argonne National Laboratory, respectively, with the financial support of the US Department of Energy. During this conference peers presented the results of studies using the simulation tools and improvements that they have made or would like to see in the simulation tools. Focus areas of the presentations included Control Strategy, Model Validation, Optimization and Co-Simulation, Model Development, Applications, and Fuel Cell Vehicle Systems Analysis. Attendees were offered the opportunity to give feedback on future model development plans.
Integrating Model of the Project Independence Evaluation System. Volume III. User's Guide
Energy Technology Data Exchange (ETDEWEB)
Shaw, M.L.; Hutzler, M.J.
1979-03-01
Volume III of the six-volume series documenting the Integrating Model of PIES provides a potential PIES user with a description of how PIES operates with particular emphasis on the possible variations in assumptions and data that can be made in specifying alternative scenarios. PIES is described as it existed on January 1, 1978. The introductory chapter is followed by Section II, an overview of the structure and components of PIES. Section III discusses each of the PIES components in detail; describes the Demand Model; contains a description of the models, assumptions, and data which provide supply side inputs to the PIES Integrating Model; and concludes with a discussion of those aspects of PIES which extend the scope of the analysis beyond the national energy market. Section IV discusses two reports produced by the PIES Integrating Model: the PIES Integrating Model Report and the Coal Transportation Report. (MCW)
Team behaviour analysis in sports using the poisson equation
Direkoglu, Cem; O'Connor, Noel E.
2012-01-01
We propose a novel physics-based model for analysing team players' positions and movements on a sports playing field. The goal is to detect, for each frame, the region with the highest population of a given team's players and the region towards which the team is moving as they press for territorial advancement, termed the region of intent. Given the positions of team players from a plan view of the playing field at any given time, we solve a particular Poisson equation to generate a smooth di...
An approach to numerically solving the Poisson equation
Feng, Zhichen; Sheng, Zheng-Mao
2015-06-01
We introduce an approach for numerically solving the Poisson equation by using a physical model, which is a way to solve a partial differential equation without the finite difference method. This method is especially useful for obtaining the solutions in very many free-charge neutral systems with open boundary conditions. It can be used for arbitrary geometry and mesh style and is more efficient compared with the widely used iterative algorithms with multigrid methods. It is especially suitable for parallel computing. This method can also be applied to numerically solving other partial differential equations whose Green functions exist in analytic expression.
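The analytic Green's function route alluded to in the closing sentence can be illustrated for open (free) boundary conditions, where the Poisson equation ∇²φ = −ρ is solved for point charges by direct summation. This is the textbook Green's-function evaluation, not the authors' physical-model method; names and the unit convention (ε₀ = 1) are our own assumptions:

```python
import numpy as np

def free_space_potential(charges, sources, points):
    """Electrostatic potential of point charges in free space via the
    analytic Green's function of the Poisson equation,
    phi(r) = (1/(4*pi)) * sum_j q_j / |r - r_j|   (units with eps0 = 1).
    Open boundary conditions are automatic; no grid or mesh is needed."""
    charges = np.asarray(charges, dtype=float)
    diff = points[:, None, :] - sources[None, :, :]   # (n_pts, n_src, 3)
    dist = np.linalg.norm(diff, axis=2)
    return (charges / dist).sum(axis=1) / (4 * np.pi)
```

For a unit charge at the origin this gives phi = 1/(4*pi*r) at distance r, and multiple charges superpose linearly.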
Large Time Behavior of the Vlasov-Poisson-Boltzmann System
Directory of Open Access Journals (Sweden)
Li Li
2013-01-01
The motion of dilute charged particles can be modeled by the Vlasov-Poisson-Boltzmann (VPB) system. We study the large time stability of the VPB system. To be precise, we prove that when time goes to infinity, the solution of the VPB system tends to the global Maxwellian state at a rate O(t^{-∞}), by using a method developed for the Boltzmann equation without force in the work of Desvillettes and Villani (2005). The improvement of the present paper is the removal of the condition on the parameter λ as in the work of Li (2008).
Improving EWMA Plans for Detecting Unusual Increases in Poisson Counts
Directory of Open Access Journals (Sweden)
R. S. Sparks
2009-01-01
An adaptive exponentially weighted moving average (EWMA) plan is developed for signalling unusually high incidence when monitoring a time series of nonhomogeneous daily disease counts. A Poisson transitional regression model is used to fit the background/expected trend in counts and provides "one-day-ahead" forecasts of the next day's count. Departures of counts from their forecasts are monitored. The paper outlines an approach for improving early outbreak data signals by dynamically adjusting the exponential weights to be efficient at signalling locally persistent high-side changes. We emphasise outbreak signals in steady-state situations; that is, changes that occur after the EWMA statistic has run through several in-control counts.
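The core monitoring step can be sketched as a one-sided EWMA of observed-minus-forecast counts. This uses a fixed weight λ rather than the paper's adaptive weights, and the reset-at-zero form and threshold constant are our own assumptions:

```python
import numpy as np

def ewma_poisson_monitor(counts, forecasts, lam=0.2, L=3.0):
    """One-sided EWMA of observed-minus-forecast daily counts: signal when
    the smoothed excess exceeds L asymptotic standard deviations, with the
    Poisson variance approximated by the forecast mean."""
    z = 0.0
    signals = []
    for y, mu in zip(counts, forecasts):
        z = max(0.0, (1 - lam) * z + lam * (y - mu))   # reset at zero
        sigma = np.sqrt(lam / (2 - lam) * mu)          # asymptotic EWMA sd
        signals.append(z > L * sigma)
    return signals
```

With flat forecasts of 5 and counts jumping to 15, λ = 0.2 and L = 3 signal on the second outbreak day (z = 3.6 against a threshold of about 2.24); the adaptive weighting in the paper is designed to pick up such locally persistent high-side shifts even earlier.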
AbdelNabi, Amr A.
2018-02-12
This paper presents new approaches to characterize the achieved performance of hybrid control-access small cells in the context of two-tier multi-input multi-output (MIMO) cellular networks with random interference distributions. The hybrid scheme at small cells (such as femtocells) allows for sharing radio resources between the two network tiers according to the densities of small cells and their associated users, as well as the observed interference power levels in the two network tiers. The analysis considers MIMO transceivers at all nodes, for which antenna arrays can be utilized to implement transmit antenna selection (TAS) and receive maximal ratio combining (MRC) under MIMO point-to-point channels. Moreover, it targets network-level models of interference sources inside each tier and between the two tiers, which are assumed to follow Poisson field processes. To fully capture the Poisson field distribution in the MIMO spatial domain, two practical scenarios of interference sources are addressed, including highly correlated or uncorrelated transmit antenna arrays of the serving macrocell base station. The analysis presents new analytical approaches that can characterize the downlink outage probability performance in any tier. Furthermore, the outage performance in the high signal-to-noise ratio (SNR) regime is also obtained, which can be useful to deduce diversity and/or coding gains.
Chaos in a dynamic model of urban transportation network flow based on user equilibrium states
International Nuclear Information System (INIS)
Xu Meng; Gao Ziyou
2009-01-01
In this study, we investigate the dynamical behavior of network traffic flow. We first build a two-stage mathematical model to analyze the complex behavior of network flow. In the first stage, a dynamical model based on the dynamical gravity model proposed by Dendrinos and Sonis [Dendrinos DS, Sonis M. Chaos and social-spatial dynamic. Berlin: Springer-Verlag; 1990] is used to estimate the number of trips. Considering the fact that the Origin-Destination (O-D) trip cost in the traffic network is hard to express in functional form, in the second stage the user equilibrium network assignment model is used to estimate the trip cost, which is the minimum cost of a used path when user equilibrium (UE) conditions are satisfied. It is important to use UE to estimate the O-D cost, since a connection is built among link flow, path flow, and O-D flow. The dynamical model describes the variations of O-D flows over discrete time periods, such as each day or each week. It is shown that even in a system with dimension equal to two, the chaos phenomenon still exists. A "Chaos Propagation" phenomenon is found in the given model.
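Sensitive dependence on initial conditions, the hallmark of the chaos reported above, can be demonstrated with a generic one-dimensional discrete map. This is a textbook illustration (the logistic map), not the paper's network-flow model:

```python
import numpy as np

def trajectory(x0, r=4.0, steps=60):
    """Iterate the logistic map x_{t+1} = r * x_t * (1 - x_t)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return np.array(xs)

# Two trajectories starting 1e-8 apart separate to order one within
# a few dozen iterations at r = 4 (the fully chaotic regime).
a = trajectory(0.2)
b = trajectory(0.2 + 1e-8)
```

In the paper's two-dimensional O-D flow map the same mechanism operates: small day-to-day differences in trip patterns are amplified over successive assignment periods, producing the reported "Chaos Propagation".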
Mars Global Reference Atmospheric Model 2000 Version (Mars-GRAM 2000): Users Guide
Justus, C. G.; James, B. F.
2000-01-01
This report presents Mars Global Reference Atmospheric Model 2000 Version (Mars-GRAM 2000) and its new features. All parameterizations for temperature, pressure, density, and winds versus height, latitude, longitude, time of day, and L_s have been replaced by input data tables from NASA Ames Mars General Circulation Model (MGCM) for the surface through 80-km altitude and the University of Arizona Mars Thermospheric General Circulation Model (MTGCM) for 80 to 170 km. A modified Stewart thermospheric model is still used for higher altitudes and for dependence on solar activity. "Climate factors" to tune for agreement with GCM data are no longer needed. Adjustment of exospheric temperature is still an option. Consistent with observations from Mars Global Surveyor, a new longitude-dependent wave model is included with user input to specify waves having 1 to 3 wavelengths around the planet. A simplified perturbation model has been substituted for the earlier one. An input switch allows users to select either East or West longitude positive. This memorandum includes instructions on obtaining Mars-GRAM source code and data files and for running the program. It also provides sample input and output and an example for incorporating Mars-GRAM as an atmospheric subroutine in a trajectory code.
Bjertnaes, Oyvind Andresen; Iversen, Hilde Hestad
2012-08-01
To compare two ways of combining postal and electronic data collection for a maternity services user-experience survey. Cross-sectional survey. Maternity services in Norway. All women who gave birth at a university hospital in Norway between 1 June and 27 July 2010. Patients were randomized into the following groups (n = 752): Group A, who were posted questionnaires with both electronic and paper response options for both the initial and reminder postal requests; and Group B, who were posted questionnaires with an electronic response option for the initial request, and both electronic and paper response options for the reminder postal request. Response rate, the amount of difference in background variables between respondents and non-respondents, main study results and estimated cost-effectiveness. The final response rate was significantly higher in Group A (51.9%) than Group B (41.1%). None of the background variables differed significantly between the respondents and non-respondents in Group A, while two variables differed significantly between the respondents and non-respondents in Group B. None of the 11 user-experience scales differed significantly between Groups A and B. The estimated costs per response for the forthcoming national survey were €11.7 for data collection Model A and €9.0 for Model B. The model with an electronic-only response option in the first request had the lowest response rate. However, this model performed equally to the other model on non-response bias and better on estimated cost-effectiveness, making it the better of the two models for large-scale user-experience surveys of maternity services.
User Guide for VISION 3.4.7 (Verifiable Fuel Cycle Simulation) Model
Energy Technology Data Exchange (ETDEWEB)
Jacob J. Jacobson; Robert F. Jeffers; Gretchen E. Matthern; Steven J. Piet; Wendell D. Hintze
2011-07-01
The purpose of this document is to provide a guide for using the current version of the Verifiable Fuel Cycle Simulation (VISION) model. This is a complex model with many parameters and options; the user is strongly encouraged to read this user guide before attempting to run the model. The model is an R&D work in progress and may contain errors and omissions; it is based upon numerous assumptions. It is intended to assist in evaluating 'what if' scenarios and in comparing fuel, reactor, and fuel processing alternatives at a systems level. The model is not intended as a tool for process flow and design modeling of specific facilities, nor for tracking individual units of fuel or other material through the system. It is intended to examine the interactions among the components of a fuel system as a function of time-varying system parameters; the model represents a dynamic rather than steady-state approximation of the nuclear fuel system. VISION models the nuclear cycle at the system level, not individual facilities, e.g., 'reactor types' rather than individual reactors and 'separation types' rather than individual separation plants. Natural uranium can be enriched, which produces enriched uranium, which goes into fuel fabrication, and depleted uranium (DU), which goes into storage. Fuel is transformed (transmuted) in reactors and then goes into a storage buffer. Used fuel can be pulled from storage into either separation or disposal. If sent to separations, fuel is transformed (partitioned) into fuel products, recovered uranium, and various categories of waste. Recycled material is stored until used by its assigned reactor type. VISION comprises several Microsoft Excel input files, a Powersim Studio core, and several Microsoft Excel output files; all must be co-located in the same folder on a PC to function. Powersim Studio 8 or later is required. We have tested VISION with the Studio 8 Expert, Executive, and Education versions.
User-driven Cloud Implementation of environmental models and data for all
Gurney, R. J.; Percy, B. J.; Elkhatib, Y.; Blair, G. S.
2014-12-01
Environmental data and models come from disparate sources over a variety of geographical and temporal scales with different resolutions and data standards, often including terabytes of data and model simulations. Unfortunately, these data and models tend to remain solely within the custody of the private and public organisations which create the data, and the scientists who build models and generate results. Although many models and datasets are theoretically available to others, the lack of ease of access tends to keep them out of reach of many. We have developed an intuitive web-based tool that utilises environmental models and datasets located in a cloud to produce results that are appropriate to the user. Storyboards showing the interfaces and visualisations have been created for each of several exemplars. A library of virtual machine images has been prepared to serve these exemplars. Each virtual machine image has been tailored to run computer models appropriate to the end user. Two approaches have been used: first, RESTful web services conforming to the Open Geospatial Consortium (OGC) Web Processing Service (WPS) interface standard, implemented with the Python-based PyWPS; second, a MySQL database interrogated using PHP code. In all cases, the web client sends the server an HTTP GET request to execute the process with a number of parameter values and, once execution terminates, an XML or JSON response is sent back and parsed at the client side to extract the results. All web services are stateless, i.e. application state is not maintained by the server, reducing its operational overheads and simplifying infrastructure management tasks such as load balancing and failure recovery. A hybrid cloud solution has been used with models and data sited on both private and public clouds. The storyboards have been transformed into intuitive web interfaces at the client side using HTML, CSS and JavaScript, utilising plug-ins such as jQuery and Flot (for graphics), and Google Maps
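The stateless GET-with-parameters pattern described above can be sketched end to end in a few lines. The endpoint path, parameter name, and "model" below are invented for illustration; the project's actual services use PyWPS and PHP:

```python
import json
import threading
import urllib.parse
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class ModelHandler(BaseHTTPRequestHandler):
    """Stand-in for a stateless model-execution service: every GET carries
    all the parameters it needs, so the server keeps no session state."""

    def do_GET(self):
        query = urllib.parse.parse_qs(urllib.parse.urlparse(self.path).query)
        # hypothetical "model run": double the requested input value
        value = float(query.get("x", ["0"])[0])
        body = json.dumps({"result": 2 * value}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), ModelHandler)  # ephemeral port
threading.Thread(target=server.serve_forever, daemon=True).start()

# client side: send the GET, parse the JSON response to extract the result
url = "http://127.0.0.1:%d/run?x=21" % server.server_port
with urllib.request.urlopen(url) as resp:
    result = json.loads(resp.read())["result"]
server.shutdown()
```

Because no state lives on the server between requests, any replica could have answered the call, which is exactly what makes load balancing and failure recovery straightforward.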
An Enhanced Dynamic User Optimal Passenger Flow Assignment Model for Metro Networks
Directory of Open Access Journals (Sweden)
Hanchuan Pan
2017-01-01
By considering the difference between a car driver's route choice behavior on the road and a passenger's route choice behavior in urban rail transit (URT), this paper proposes an enhanced Dynamic User Optimal (DUO) passenger flow assignment model for metro networks. To capture realistic URT phenomena, the model integrates a train operation disturbance constraint. Real passenger and train data are used to verify the proposed model and algorithm. The results indicate that the DUO-based model is more suitable than a static model for describing passenger route choice behavior under uncertain conditions. Moreover, we found that passengers under oversaturated conditions are more sensitive to train operation disturbances than undersaturated passengers.
Requirements for UML and OWL Integration Tool for User Data Consistency Modeling and Testing
DEFF Research Database (Denmark)
Nytun, J. P.; Jensen, Christian Søndergaard; Oleshchuk, V. A.
2003-01-01
The amount of data available on the Internet is continuously increasing; consequently, there is a growing need for tools that help to analyse the data. Testing of consistency among data received from different sources is made difficult by the number of different languages and schemas being used. […] In this paper we analyze requirements for a tool that supports integration of UML models and ontologies written in languages like the W3C Web Ontology Language (OWL). The tool can be used in the following way: after loading two legacy models into the tool, the tool user connects them by inserting modeling […]; an important part of this technique is the attaching of OCL expressions to special boolean class attributes that we call consistency attributes. The resulting integration model can be used for automatic consistency testing of two instances of the legacy models by automatically instantiating the whole integration […]
Sparsity-based Poisson denoising with dictionary learning.
Giryes, Raja; Elad, Michael
2014-12-01
The problem of Poisson denoising appears in various imaging applications, such as low-light photography, medical imaging, and microscopy. In cases of high SNR, several transformations exist to convert the Poisson noise into additive, independent and identically distributed Gaussian noise, for which many effective algorithms are available. However, in a low-SNR regime, these transformations are significantly less accurate, and a strategy that relies directly on the true noise statistics is required. Salmon et al. took this route, proposing a patch-based exponential image representation model based on a Gaussian mixture model, leading to state-of-the-art results. In this paper, we propose to harness sparse-representation modeling of the image patches, adopting the same exponential idea. Our scheme uses a greedy pursuit with a bootstrapping-based stopping condition and dictionary learning within the denoising process. The reconstruction performance of the proposed scheme is competitive with leading methods at high SNR and achieves state-of-the-art results at low SNR.
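The high-SNR transformation route mentioned above is commonly realized with the Anscombe variance-stabilizing transform. A quick numerical check (not from the paper) that it brings Poisson counts close to unit variance at a moderate rate:

```python
import math
import random
import statistics

def anscombe(x):
    """Anscombe variance-stabilizing transform for Poisson counts.

    For moderately large rates, 2*sqrt(x + 3/8) is approximately Gaussian
    with unit variance, so Gaussian denoisers can be applied after it;
    at low SNR (small rates) the approximation degrades, as the abstract notes.
    """
    return 2.0 * math.sqrt(x + 3.0 / 8.0)

def poisson_sample(lam, rng):
    """Knuth's method: multiply uniforms until the product drops below exp(-lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(0)
samples = [poisson_sample(20.0, rng) for _ in range(20000)]
stabilized = [anscombe(x) for x in samples]
var = statistics.variance(stabilized)  # close to 1 for rate 20
```

Repeating the experiment with a small rate (e.g. 0.5) shows the stabilized variance drifting away from 1, which is the low-SNR failure mode motivating the paper's direct approach.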
Lake Representations in Global Climate Models: An End-User Perspective
Rood, R. B.; Briley, L.; Steiner, A.; Wells, K.
2017-12-01
The weather and climate in the Great Lakes region of the United States and Canada are strongly influenced by the lakes. Within global climate models, lakes are incorporated in many ways. If one is interested in quantitative climate information for the Great Lakes, then it is a first-principle requirement that end-users of climate model simulation data, whether scientists or practitioners, know if and how lakes are incorporated into models. We pose the basic question: how are lakes represented in CMIP models? Despite significant efforts by the climate community to document and publish basic information about climate models, it is unclear how to answer this question. Given significant knowledge of the practice of the field, a reasonable starting point is the ES-DOC Comparator (https://compare.es-doc.org/). Once at this interface to model information, the end-user is faced with the need for more knowledge about the practice and culture of the discipline. For example, lakes are often categorized as a type of land, a counterintuitive concept. In some models, though, lakes are specified in ocean models. There is little evidence and little confidence that the information obtained through this process is complete or accurate; in fact, it is verifiably not accurate. This experience motivates identifying and finding either human experts or technical documentation for each model. The conclusion from this exercise is that it can take months or longer to provide a defensible answer to if and how lakes are represented in climate models. Our experience with lake finding suggests that it is not unique. This talk documents our experience and explores barriers we have identified and strategies for reducing those barriers.
Directory of Open Access Journals (Sweden)
Milad Haghani
2016-06-01
Further investigations with respect to the relative importance of STA model estimation (or equivalently, parameter calibration) and model specification (or equivalently, error term formulation) are also conducted. A paired combinatorial logit (PCL) assignment model with an origin-destination-specific parameter, along with a heuristic method of model estimation (calibration), is proposed. The proposed model can not only accommodate the correlation between path utilities, but also account for the fact that travelling between different origin-destination (O-D) pairs can correspond to different levels of stochasticity and choice randomness. Results suggest that the estimation of stochastic user equilibrium (SUE) models can affect the outcome of the flow prediction far more meaningfully than the complexity of the choice model (i.e., model specification).
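For readers outside the route-choice literature, the choice-probability layer that PCL generalizes is the plain multinomial logit (MNL). The sketch below shows that baseline only (PCL adds pairwise path-correlation nests on top of it); the scale parameter stands in for the "level of choice randomness" the abstract discusses:

```python
import math

def mnl_probabilities(utilities, theta=1.0):
    """Multinomial logit choice probabilities over a set of paths.

    theta scales choice randomness: small theta -> nearly uniform choices,
    large theta -> nearly deterministic choice of the best path. This is the
    baseline model; the paper's PCL additionally captures path overlap.
    """
    exps = [math.exp(theta * u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# three paths ranked by (negative) travel cost; cheaper paths get higher share
probs = mnl_probabilities([-1.0, -1.5, -2.0], theta=2.0)
```

Making theta O-D-specific, as the abstract proposes for PCL, lets different O-D pairs exhibit different degrees of stochasticity within one assignment model.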
Akman, Ibrahim; Turhan, Cigdem
2017-01-01
This study aims to explore the users' behaviour and acceptance of social media for learning in higher educational institutions with the help of the extended Technology Acceptance Model (TAM). TAM has been extended to investigate how ethical and security awareness of users affect the actual usage of social learning applications. For this purpose, a…
K. Schwarz
2004-01-01
This thesis investigates a possible solution to adapting an automatically generated presentation to an anonymous user. We explore the field of User Modeling, specifically Adaptive Hypermedia, to find suitable methods. In our case study, we combine the methods we find to develop a
Directory of Open Access Journals (Sweden)
Ibrahim Delibalta
2017-01-01
We provide a causal inference framework to model the effects of machine learning algorithms on user preferences. We then use this mathematical model to prove that the overall system can be tuned to alter those preferences in a desired manner. A user can be an online shopper or a social media user, exposed to digital interventions produced by machine learning algorithms. A user preference can be anything from inclination towards a product to a political party affiliation. Our framework uses a state-space model to represent user preferences as latent system parameters which can only be observed indirectly via online user actions such as a purchase activity or social media status updates, shares, blogs, or tweets. Based on these observations, machine learning algorithms produce digital interventions such as targeted advertisements or tweets. We model the effects of these interventions through a causal feedback loop, which alters the corresponding preferences of the user. We then introduce algorithms in order to estimate and later tune the user preferences to a particular desired form. We demonstrate the effectiveness of our algorithms through experiments in different scenarios.
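A scalar caricature of the feedback loop described above: a hidden preference observed only through noisy actions, a simple tracking filter as the "estimation" step, and a proportional intervention as the "tuning" step. All dynamics, gains, and noise levels here are illustrative assumptions, not the paper's algorithms:

```python
import random

def simulate(target, steps=500, effect=0.05, gain=0.1, seed=1):
    """Drive a latent user preference toward `target` via observed actions.

    theta: true latent preference (hidden from the platform).
    estimate: platform's filtered estimate from noisy actions y_t.
    u: intervention chosen from the estimate; its causal effect on theta
    closes the feedback loop. All parameter values are assumptions.
    """
    rng = random.Random(seed)
    theta = 0.0
    estimate = 0.0
    for _ in range(steps):
        y = theta + rng.gauss(0.0, 0.5)    # noisy observed action
        estimate += gain * (y - estimate)  # exponential tracking filter
        u = target - estimate              # intervention toward the target
        theta += effect * u                # assumed causal effect on preference
    return theta

final = simulate(target=1.0)  # latent preference ends near the chosen target
```

The point of the sketch is structural: the preference is never observed directly, yet the estimate-then-intervene loop still steers it, which is the tunability claim the abstract proves in its general setting.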
CO{sub 2}MPARE. CO2 Model for Operational Programme Assessment in EU Regions. User Tutorial
Energy Technology Data Exchange (ETDEWEB)
Hekkenberg, M. [ECN Policy Studies, Amsterdam (Netherlands); Vincent-Genod, C. [Energies Demain, Montreuil Sous Bois (France); Regina, P. [Italian National Agency for New Technologies, Energy and Sustainable Economic Development ENEA, Rome (Italy); Keppo, I. [University College London UCL, London (United Kingdom); Papagianni, S. [Centre for Renewable Energy Sources and Saving CRES, Pikermi Attiki (Greece); Harnych, J. [ENVIROS, Prague (Czech Republic)
2013-03-15
The CO2MPARE model supports national and regional authorities in making balanced decisions for their investment portfolio under their regional development programmes, in particular under their Operational Programmes of EU Regional Policy. This document is a tutorial for users of the CO2MPARE model and provides step by step guidance on the different functionalities of the model for both basic and expert users.
User's guide to the MESOI diffusion model and to the utility programs UPDATE and LOGRVU
Energy Technology Data Exchange (ETDEWEB)
Athey, G.F.; Allwine, K.J.; Ramsdell, J.V.
1981-11-01
MESOI is an interactive, Lagrangian puff trajectory diffusion model. The model is documented separately (Ramsdell and Athey, 1981); this report is intended to provide MESOI users with the information needed to successfully conduct model simulations. The user is also provided with guidance in the use of the data file maintenance and review programs, UPDATE and LOGRVU. Complete examples are given for the operation of all three programs, and an appendix documents UPDATE and LOGRVU.
MobRISK: a model for assessing the exposure of road users to flash flood events
Directory of Open Access Journals (Sweden)
Shabou, Saif; Ruin, Isabelle; Lutoff, Céline; Debionne, Samuel; Anquetin, Sandrine; Creutin, Jean-Dominique; Beaufils, Xavier
2017-09-01
Recent flash flood impact studies highlight that road networks are often disrupted due to adverse weather and flash flood events. Road users are thus particularly exposed to road flooding during their daily mobility. Previous exposure studies, however, do not take population mobility into consideration. Recent advances in transportation research provide an appropriate framework for simulating individual travel-activity patterns using an activity-based approach. These activity-based mobility models enable the prediction of the sequence of activities performed by individuals and locate them with a high spatial-temporal resolution. This paper describes the development of the MobRISK microsimulation system: a model for assessing the exposure of road users to extreme hydrometeorological events. MobRISK aims at providing an accurate spatiotemporal exposure assessment by integrating travel-activity behaviors and mobility adaptation with respect to weather disruptions. The model is applied in a flash-flood-prone area in southern France to assess motorists' exposure to the September 2002 flash flood event. The results show that the risk of flooding mainly occurs on principal road links with considerable traffic load. However, a lag time between the timing of road submersion and persons crossing these roads contributes to reducing potential vehicle-related fatal accidents. It is also found that sociodemographic variables have a significant effect on individual exposure. Thus, the proposed model demonstrates the benefits of considering spatiotemporal dynamics of population exposure to flash floods and presents an important improvement in exposure assessment methods. Such improved characterization of road user exposure can provide valuable information for flood risk management services.
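The core exposure computation can be sketched as intersecting individuals' time-stamped road-link traversals with the time windows during which links are submerged. The data structures and names below are assumptions for illustration, not the MobRISK implementation:

```python
def exposed_traversals(traversals, submersions):
    """Return traversals that coincide with a submersion window.

    traversals: list of (person, link, t) tuples, t in hours of the day.
    submersions: dict mapping link id -> (t_start, t_end) flooding window.
    A traversal counts as exposed only if its link is flooded AND the
    crossing time falls inside that link's submersion window.
    """
    hits = []
    for person, link, t in traversals:
        window = submersions.get(link)
        if window and window[0] <= t <= window[1]:
            hits.append((person, link, t))
    return hits

# p1 crosses link L12 before it floods (the lag-time effect the abstract
# describes); p2 crosses during the flood; p3 uses an unflooded link.
traversals = [("p1", "L12", 8.5), ("p2", "L12", 11.0), ("p3", "L7", 9.0)]
submersions = {"L12": (10.0, 14.0)}  # link L12 submerged 10:00-14:00
hits = exposed_traversals(traversals, submersions)
```

Only p2 is flagged, illustrating why static exposure maps (which would flag every L12 user) overstate risk relative to a mobility-aware assessment.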
User Utility Oriented Queuing Model for Resource Allocation in Cloud Environment
Directory of Open Access Journals (Sweden)
Zhe Zhang
2015-01-01
Resource allocation is one of the most important research topics in server systems. In the cloud environment, there are massive hardware resources of different kinds, and many kinds of services are usually run on virtual machines of the cloud server. In addition, the cloud environment is commercialized, so economic factors should also be considered. In order to deal with the commercialization and virtualization of the cloud environment, we propose a user-utility-oriented queuing model for task scheduling. Firstly, we model task scheduling in the cloud environment as an M/M/1 queuing system. Secondly, we classify utility into time utility and cost utility and build a linear programming model to maximize the total utility for both. Finally, we propose a utility-oriented algorithm to maximize the total utility. Extensive experiments validate the effectiveness of the proposed model.
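The M/M/1 building block has closed-form steady-state metrics, which is what makes it attractive as the scheduling abstraction here. A minimal sketch (standard queueing formulas, not the paper's utility model):

```python
def mm1_metrics(lam, mu):
    """Steady-state metrics of an M/M/1 queue.

    lam: mean arrival rate of tasks; mu: mean service rate (lam < mu required).
    Returns (rho, L, W): server utilization, mean number of tasks in the
    system, and mean sojourn time. Little's law ties them together: L = lam*W.
    """
    if lam >= mu:
        raise ValueError("queue is unstable when lam >= mu")
    rho = lam / mu
    L = rho / (1.0 - rho)
    W = 1.0 / (mu - lam)
    return rho, L, W

# e.g. 8 tasks/s arriving at a VM that serves 10 tasks/s
rho, L, W = mm1_metrics(lam=8.0, mu=10.0)
```

With these expressions, a time-utility term (a function of W) and a cost-utility term can both be written in the decision variables, which is what permits the linear-programming formulation the abstract describes.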
An evolving user-oriented model of Internet health information seeking.
Gaie, Martha J
2006-01-01
This paper presents an evolving user-oriented model of Internet health information seeking (IS) based on qualitative data collected from 22 lung cancer (LC) patients and caregivers. This evolving model represents information search behavior as more highly individualized, complex, and dynamic than previous models, including pre-search psychological activity, use of multiple heuristics throughout the process, and cost-benefit evaluation of search results. This study's findings suggest that IS occurs in four distinct phases: search initiation/continuation, selective exposure, message processing, and message evaluation. The identification of these phases and the heuristics used within them suggests a higher order of complexity in the decision-making processes that underlie IS, which could lead to the development of a conceptual framework that more closely reflects the complex nature of contextualized IS. It also illustrates the advantages of using qualitative methods to extract more subtle details of the IS process and fill in the gaps in existing models.
International Nuclear Information System (INIS)
Uyterlinde, M.A.; Rijkers, F.A.M.
1999-12-01
The main objective of the energy conservation model REDUCE (Reduction of Energy Demand by Utilization of Conservation of Energy) is the evaluation of the effectiveness of economical, financial, institutional, and regulatory measures for improving the rational use of energy in end-use sectors. This report presents the results of additional model development activities, partly based on first experiences in a previous project. Energy efficiency indicators have been added as an extra tool for output analysis in REDUCE. The methodology is described and some examples are given. The model has also been extended with a method for modelling the effect of technical development on production costs, by means of an experience curve. Finally, the report provides a user's guide, describing in more detail the input data specification as well as all menus and buttons. 19 refs
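The experience-curve mechanism mentioned above has a standard power-law form: each doubling of cumulative production multiplies unit cost by a fixed progress ratio. A sketch of that standard form (REDUCE's exact parameterization may differ):

```python
import math

def experience_curve_cost(c0, cum0, cum, progress_ratio):
    """Unit cost after cumulative output `cum`, given cost c0 at output cum0.

    progress_ratio: cost multiplier per doubling of cumulative production
    (e.g. 0.8 means a 20% cost reduction each time output doubles).
    """
    b = -math.log(progress_ratio, 2)       # learning exponent
    return c0 * (cum / cum0) ** (-b)

# one doubling at an 80% progress ratio cuts the unit cost to 80%
cost = experience_curve_cost(c0=100.0, cum0=1000.0, cum=2000.0, progress_ratio=0.8)
```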
Visual imagery and the user model applied to fuel handling at EBR-II
International Nuclear Information System (INIS)
Brown-VanHoozer, S.A.
1995-01-01
The material presented in this paper is based on two studies involving visual display designs and the user's perspective model of a system. The studies involved a methodology known as Neuro-Linguistic Programming (NLP) and its use in expanding design choices, which included the 'comfort parameters' and 'perspective reality' of the user's model of the world. In developing visual displays for the EBR-II fuel handling system, the focus was to incorporate the comfort parameters that overlap across the representational systems (visual, auditory, and kinesthetic), then incorporate the comfort parameters of the most prominent group of the population, and finally blend in the comfort parameters of the other two representational systems. The focus of this informal study was to use the techniques of meta-modeling and synesthesia to develop a virtual environment that closely resembled the operator's perspective of the fuel handling system of Argonne's Experimental Breeder Reactor-II. An informal study was conducted using NLP as the behavioral model in a virtual reality (VR) setting
Algebraic properties of compatible Poisson brackets
Zhang, Pumei
2014-05-01
We discuss algebraic properties of a pencil generated by two compatible Poisson tensors A(x) and B(x). From the algebraic viewpoint this amounts to studying the properties of a pair of skew-symmetric bilinear forms A and B defined on a finite-dimensional vector space. We describe the Lie group G_P of linear automorphisms of the pencil P = {A + λB}. In particular, we obtain an explicit formula for the dimension of G_P and discuss some other algebraic properties such as solvability and the Levi-Malcev decomposition.
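For readers outside the field, "compatible" means every member of the pencil is itself a Poisson tensor; expanding the Jacobi identity shows this reduces to a single condition on the pair (a standard fact, stated here for context):

```latex
% Each member A + \lambda B must satisfy the Jacobi identity. Since A and B
% are Poisson, their Schouten brackets with themselves vanish,
% [\![A,A]\!] = [\![B,B]\!] = 0, and bilinearity gives
\[
  [\![\,A + \lambda B,\; A + \lambda B\,]\!] \;=\; 2\lambda\,[\![A,B]\!],
\]
% so compatibility is equivalent to the vanishing of the mixed bracket:
\[
  [\![A,B]\!] \;=\; 0 .
\]
```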
Lee, S. S.; Nwadike, E. V.; Sinha, S. E.
1982-01-01
The theory of a three-dimensional (3-D) mathematical thermal discharge model and a related one-dimensional (1-D) model are described. Model verification at two sites and a separate user's manual for each model are included. The 3-D model has two forms: free surface and rigid lid. The former allows a free air/water interface and is suited for significant surface wave heights compared to mean water depth, as in estuaries and coastal regions. The latter is suited for small surface wave heights compared to depth, because surface elevation is removed as a parameter. These models allow computation of time-dependent velocity and temperature fields for given initial conditions and time-varying boundary conditions. The free surface model also provides surface height variations with time.
Evaluating Author and User Experience for an Audio-Haptic System for Annotation of Physical Models.
Coughlan, James M; Miele, Joshua
2017-01-01
We describe three usability studies involving a prototype system for creation and haptic exploration of labeled locations on 3D objects. The system uses a computer, webcam, and fiducial markers to associate a physical 3D object in the camera's view with a predefined digital map of labeled locations ("hotspots"), and to do real-time finger tracking, allowing a blind or visually impaired user to explore the object and hear individual labels spoken as each hotspot is touched. This paper describes: (a) a formative study with blind users exploring pre-annotated objects to assess system usability and accuracy; (b) a focus group of blind participants who used the system and, through structured and unstructured discussion, provided feedback on its practicality, possible applications, and real-world potential; and (c) a formative study in which a sighted adult used the system to add labels to on-screen images of objects, demonstrating the practicality of remote annotation of 3D models. These studies and related literature suggest potential for future iterations of the system to benefit blind and visually impaired users in educational, professional, and recreational contexts.
Ishtiaq, K. S.; Abdul-Aziz, O. I.
2015-12-01
We developed user-friendly empirical models to predict instantaneous fluxes of CO2 and CH4 from coastal wetlands based on a small set of dominant hydro-climatic and environmental drivers (e.g., photosynthetically active radiation, soil temperature, water depth, and soil salinity). The dominant predictor variables were systematically identified by applying a robust data-analytics framework to a wide range of possible environmental variables driving wetland greenhouse gas (GHG) fluxes. The method comprised a multi-layered data-analytics framework, including Pearson correlation analysis, explanatory principal component and factor analyses, and partial least squares regression modeling. The identified dominant predictors were then utilized to develop power-law based non-linear regression models to predict CO2 and CH4 fluxes under different climatic, land use (nitrogen gradient), tidal hydrology, and salinity conditions. Four tidal wetlands of Waquoit Bay, MA were considered as the case study sites to identify the dominant drivers and evaluate model performance. The study sites were dominated by native Spartina alterniflora and characterized by frequent flooding and highly saline conditions. The model estimated the potential net ecosystem carbon balance (NECB), in both gC/m2 and metric ton C/hectare, by up-scaling the instantaneous predicted fluxes to the growing season and accounting for the lateral C flux exchanges between the wetlands and the estuary. The entire model is presented in a single Excel spreadsheet as a user-friendly ecological engineering tool. The model can aid the development of appropriate GHG offset protocols for setting monitoring plans for tidal wetland restoration and maintenance projects. The model can also be used to estimate wetland GHG fluxes and potential carbon storage under various IPCC climate change and sea level rise scenarios, facilitating appropriate management of carbon stocks in tidal wetlands and their incorporation into a
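A power-law regression of the kind described can be fitted by ordinary least squares in log-log space. The single-driver sketch below illustrates the mechanics only; the study's actual models combine several predictors and a different calibration pipeline:

```python
import math

def fit_power_law(xs, ys):
    """Fit y = a * x**b by linear least squares on (log x, log y).

    Taking logs turns the power law into the line log y = log a + b*log x,
    so the slope of the fitted line is the exponent b and the intercept
    recovers the prefactor a. Requires strictly positive xs and ys.
    """
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    a = math.exp(my - b * mx)
    return a, b

# exact power-law data recovers the generating parameters a=3, b=1.5
xs = [1.0, 2.0, 4.0, 8.0]
ys = [3.0 * x ** 1.5 for x in xs]
a, b = fit_power_law(xs, ys)
```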
Documentation, User Support, and Verification of Wind Turbine and Plant Models
Energy Technology Data Exchange (ETDEWEB)
Robert Zavadil; Vadim Zheglov; Yuriy Kazachkov; Bo Gong; Juan Sanchez; Jun Li
2012-09-18
As part of the Utility Wind Energy Integration Group (UWIG) and EnerNex's Wind Turbine Modeling Project, EnerNex has received ARRA (federal stimulus) funding through the Department of Energy (DOE) to further the progress of wind turbine and wind plant models. Despite the large existing and planned wind generation deployment, industry-standard models for wind generation have not been formally adopted. Models commonly provided for interconnection studies are not adequate for use in general transmission planning studies, where public, non-proprietary, documented and validated models are needed. NERC (North American Electric Reliability Corporation) MOD reliability standards require that power flow and dynamics models be provided, in accordance with regional requirements and procedures. The goal of this project is to accelerate the appropriate use of generic wind turbine models for transmission network analysis by: (1) defining proposed enhancements to the generic wind turbine model structures that would allow representation of more advanced […]; (2) comparative testing of the generic models against more detailed (and sometimes proprietary) versions developed by turbine vendors; (3) developing recommended parameters for the generic models to best mimic the performance of specific commercial wind turbines; (4) documenting results of the comparative simulations in an application guide for users; (5) conducting technology transfer activities in regional workshops for dissemination of knowledge and information gained, and to engage electric power and wind industry personnel in the project while underway; (6) designing a "living" homepage to establish an online resource for transmission planners.
International Nuclear Information System (INIS)
Brown-VanHoozer, S.A.
1995-01-01
Most designers are not schooled in the area of human-interaction psychology and therefore tend to rely on the traditional ergonomic aspects of human factors when designing complex human-interactive workstations related to reactor operations. They do not take into account the differences in user information processing behavior and how these behaviors may affect individual and team performance when accessing visual displays or utilizing system models in process and control room areas. Unfortunately, by ignoring the importance of integrating the user interface at the information processing level, the result can be sub-optimization and inherently error- and failure-prone systems. Therefore, to minimize or eliminate failures in human-interactive systems, it is essential that designers understand how each user's processing characteristics affect how the user gathers information and communicates it to the designer and other users. A different type of approach to achieving this understanding is Neuro-Linguistic Programming (NLP). The material presented in this paper is based on two studies involving the design of visual displays, NLP, and the user's perspective model of a reactor system. The studies involve the NLP methodology and its use in expanding design choices from the user's 'model of the world' in the areas of virtual reality, workstation design, team structure, decision and learning style patterns, safety operations, pattern recognition, and much, much more
International Nuclear Information System (INIS)
Saltelli, A.; Homma, T.
1992-01-01
This manual is subdivided into three parts. In this third part, the SPOP (Statistical POst Processor) code is described as a tool to perform uncertainty and sensitivity analyses on the output of a user-implemented model. It was developed at the Joint Research Centre at Ispra as part of the LISA package. SPOP performs sensitivity analysis (SA) and uncertainty analysis (UA) on a sample output from a Monte Carlo simulation. The sample is generated by the user and contains values of the output variable (in the form of a time series) and values of the input variables for a set of different simulations (runs), which are realised by varying the model input parameters. The user may generate the Monte Carlo sample with the PREP pre-processor, another module of the LISA package. The SPOP code is written entirely in FORTRAN 77 using structured programming. Among the tasks performed by the code are the computation of Tchebycheff and Kolmogorov confidence bounds on the output variable (UA) and the use of effective non-parametric statistics to rank the influence of the model input parameters (SA). The statistics employed are described in the present manual. 19 refs., 16 figs., 2 tabs. Note: this Part III is a revised version of the previous EUR report N. 12700 EN (1990).
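The two post-processing tasks described above can be sketched in a few lines of Python (a hypothetical illustration of the ideas, not the FORTRAN 77 code itself): a distribution-free Chebyshev confidence bound on the output sample (UA), and a non-parametric Spearman rank statistic to order the input parameters by influence (SA). All function names and the toy model are assumptions for the sketch.

```python
import numpy as np

def chebyshev_bounds(sample, confidence=0.95):
    """Distribution-free confidence interval for the mean via
    Chebyshev's inequality: P(|X - mu| >= k*sigma) <= 1/k**2."""
    mean = np.mean(sample)
    # standard error of the mean over n Monte Carlo runs
    sem = np.std(sample, ddof=1) / np.sqrt(len(sample))
    k = 1.0 / np.sqrt(1.0 - confidence)   # choose k so 1/k**2 = 1 - confidence
    return mean - k * sem, mean + k * sem

def rank_inputs(inputs, output):
    """Rank input parameters by |Spearman rho| with the output (SA).
    `inputs` is an (n_runs, n_params) array from the Monte Carlo sample."""
    def spearman(x, y):
        rx = np.argsort(np.argsort(x)).astype(float)   # ranks of x
        ry = np.argsort(np.argsort(y)).astype(float)   # ranks of y
        return np.corrcoef(rx, ry)[0, 1]               # Pearson on ranks
    rhos = [spearman(inputs[:, j], output) for j in range(inputs.shape[1])]
    return sorted(enumerate(rhos), key=lambda t: -abs(t[1]))

# Toy Monte Carlo sample: output dominated by the first input parameter.
rng = np.random.default_rng(0)
x = rng.uniform(size=(200, 3))                 # three inputs, 200 runs
y = 3.0 * x[:, 0] + 0.5 * x[:, 1] + rng.normal(0, 0.1, 200)
lo, hi = chebyshev_bounds(y)
ranking = rank_inputs(x, y)                    # parameter 0 should rank first
```

Because Chebyshev's inequality makes no distributional assumption, the bound is conservative but valid for any model output, which is why it suits a general-purpose post-processor.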
Directory of Open Access Journals (Sweden)
Toktam Balandeh
2016-04-01
Background: Anthropometry is a branch of ergonomics that deals with the measurement and description of human body dimensions. Accordingly, equipment, environments, and workstations should be designed using user-centered design processes, taking into account ergonomic and anthropometric principles. Anthropometric dimensions differ considerably across gender, race, ethnicity, and age. The aim of this study was to determine the anthropometric characteristics of microscope users and to provide a regression model for anthropometric dimensions. Methods: In this cross-sectional study, 18 anthropometric dimensions of microscope users in Shiraz (N=174; 78 males and 96 females) were measured. Instruments included a stadiometer, two types of calipers, adjustable seats, a 40-cm ruler, a tape measure, and scales. The study data were analyzed using SPSS, version 20. Results: The mean ages of the male and female microscope users were 31.64±8.86 and 35±10.9 years, respectively, and their heights were 161.03±6.87 cm and 174.81±5.45 cm, respectively. The results showed that sitting and standing eye height and sitting horizontal range of accessibility had a significant correlation with stature. Conclusion: The established anthropometric database can be used as a source for designing workstations for microscope work in this group of users. The regression analysis showed that three dimensions, i.e. standing eye height, sitting eye height, and sitting horizontal range of accessibility, had a significant correlation with stature. Therefore, given one's stature, these dimensions can be estimated with less measurement.
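The kind of regression the authors describe, predicting one body dimension from stature, amounts to a simple least-squares fit. A minimal sketch follows; the data and the coefficient 0.47 are synthetic illustrations, not the Shiraz measurements or the study's fitted model.

```python
import numpy as np

# Synthetic illustration: stature (cm) and sitting eye height (cm).
# The coefficients below are made up; in the study they would be
# estimated from the 174 measured microscope users.
rng = np.random.default_rng(1)
stature = rng.normal(168.0, 8.0, 100)
sitting_eye_height = 0.47 * stature + 1.5 + rng.normal(0.0, 1.0, 100)

# Fit sitting_eye_height = a * stature + b by ordinary least squares.
A = np.vstack([stature, np.ones_like(stature)]).T
(a, b), *_ = np.linalg.lstsq(A, sitting_eye_height, rcond=None)

# Estimate the dimension for a user of given stature without measuring it.
predicted = a * 170.0 + b
```

This is exactly the practical payoff stated in the conclusion: once the regression is established, a correlated dimension can be estimated from stature alone, reducing the number of measurements needed.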
A user-friendly model for spray drying to aid pharmaceutical product development.
Directory of Open Access Journals (Sweden)
Grasmeijer, Niels; de Waard, Hans; Hinrichs, Wouter L J; Frijlink, Henderik W
2013-01-01
The aim of this study was to develop a user-friendly model for spray drying that can aid in the development of a pharmaceutical product by shifting from a trial-and-error towards a quality-by-design approach. To achieve this, a spray dryer model was developed in commercial and open source spreadsheet software. The output of the model was first fitted to the experimental output of a Büchi B-290 spray dryer and subsequently validated. The predicted outlet temperatures of the spray dryer model matched the experimental values very well over the entire range of spray dryer settings that were tested. Finally, the model was applied to the production of glassy sugars by spray drying; such sugars are often-used excipients in formulations of biopharmaceuticals. For this purpose, the model was extended to predict the relative humidity at the outlet, which is not measured in the spray dryer by default. This extended model was then successfully used to predict whether specific settings were suitable for producing glassy trehalose and inulin by spray drying. In conclusion, a spray dryer model was developed that is able to predict the output parameters of the spray drying process, and that can aid the development of spray dried pharmaceutical products by shifting from a trial-and-error towards a quality-by-design approach.
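The core of a spray dryer model of this kind is a steady-state mass and energy balance over the drying air and the evaporating feed water. A minimal sketch of such a balance is shown below; all property values, operating settings, and function names are illustrative assumptions, not the fitted Büchi B-290 parameters from the study.

```python
import math

# Illustrative physical constants (approximate handbook values).
CP_AIR = 1.006e3      # J/(kg K), specific heat of dry air
H_VAP = 2.26e6        # J/kg, latent heat of evaporation of water

def outlet_temperature(t_inlet_c, air_flow_kg_s, feed_water_kg_s):
    """Outlet temperature from a simple energy balance: the heat given
    up by the drying air equals the heat needed to evaporate the feed water."""
    q_evap = feed_water_kg_s * H_VAP
    return t_inlet_c - q_evap / (air_flow_kg_s * CP_AIR)

def outlet_relative_humidity(feed_water_kg_s, air_flow_kg_s, t_out_c):
    """Approximate outlet RH: actual water loading of the air divided by
    the saturation loading at the outlet temperature (Magnus formula)."""
    p_sat = 610.94 * math.exp(17.625 * t_out_c / (t_out_c + 243.04))  # Pa
    w_sat = 0.622 * p_sat / (101325.0 - p_sat)   # kg water / kg dry air
    w_out = feed_water_kg_s / air_flow_kg_s
    return 100.0 * w_out / w_sat

# Example settings: 120 C inlet air, 0.01 kg/s air flow, small aqueous feed.
t_out = outlet_temperature(120.0, 0.01, 0.0002)
rh_out = outlet_relative_humidity(0.0002, 0.01, t_out)
```

The outlet relative humidity matters because a glassy sugar must stay below the moisture level at which it crystallizes or plasticizes, which is why the authors extended their model to predict it.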
User modeling and adaptation for daily routines providing assistance to people with special needs
Martín, Estefanía; Carro, Rosa M
2013-01-01
User Modeling and Adaptation for Daily Routines is motivated by the need to bring attention to how people with special needs can benefit from adaptive methods and techniques in their everyday lives. Assistive technologies, adaptive systems and context-aware applications are three well-established research fields. There is, in fact, a vast amount of literature that covers HCI-related issues in each area separately. However, the contributions at the intersection of these areas have been less visible, despite the fact that such synergies may have a great impact on improving daily living.