WorldWideScience

Sample records for random utility models

  1. Hedonic travel cost and random utility models of recreation

    Energy Technology Data Exchange (ETDEWEB)

    Pendleton, L. [Univ. of Southern California, Los Angeles, CA (United States); Mendelsohn, R.; Davis, E.W. [Yale Univ., New Haven, CT (United States). School of Forestry and Environmental Studies

    1998-07-09

    Micro-economic theory began as an attempt to describe, predict and value the demand and supply of consumption goods. Quality was largely ignored at first, but economists have started to address quality within the theory of demand and specifically the question of site quality, which is an important component of land management. This paper demonstrates that hedonic and random utility models emanate from the same utility theoretical foundation, although they make different estimation assumptions. Using a theoretically consistent comparison, both approaches are applied to examine the quality of wilderness areas in the Southeastern US. Data were collected on 4778 visits to 46 trails in 20 different forest areas near the Smoky Mountains. Visitor data came from permits and an independent survey. The authors limited the data set to visitors from within 300 miles of the North Carolina and Tennessee border in order to focus the analysis on single purpose trips. When consistently applied, both models lead to results with similar signs but different magnitudes. Because the two models are equally valid, recreation studies should continue to use both models to value site quality. Further, practitioners should be careful not to make simplifying a priori assumptions which limit the effectiveness of both techniques.
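
    To make the random utility side concrete, here is a minimal conditional-logit sketch in Python (not the authors' code; the site attributes, coefficients and data are invented for illustration): deterministic utility is linear in attributes such as travel cost and site quality, and choice probabilities follow from the logit form.

        import numpy as np

        # Minimal conditional-logit (random utility) sketch. The attribute
        # matrix and taste coefficients are made up, not the trail data
        # analysed in the paper.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(5, 2))          # 5 sites x (travel cost, quality)
        beta = np.array([-1.0, 0.8])         # cost disliked, quality liked

        v = X @ beta                         # deterministic utilities
        p = np.exp(v - v.max())              # subtract max for stability
        p /= p.sum()                         # logit choice probabilities
        print(np.round(p, 3))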

  2. Rigorously testing multialternative decision field theory against random utility models.

    Science.gov (United States)

    Berkowitsch, Nicolas A J; Scheibehenne, Benjamin; Rieskamp, Jörg

    2014-06-01

    Cognitive models of decision making aim to explain the process underlying observed choices. Here, we test a sequential sampling model of decision making, multialternative decision field theory (MDFT; Roe, Busemeyer, & Townsend, 2001), on empirical grounds and compare it against 2 established random utility models of choice: the probit and the logit model. Using a within-subject experimental design, participants in 2 studies repeatedly chose among sets of options (consumer products) described on several attributes. The results of Study 1 showed that all models predicted participants' choices equally well. In Study 2, in which the choice sets were explicitly designed to distinguish the models, MDFT had an advantage in predicting the observed choices. Study 2 further revealed the occurrence of multiple context effects within single participants, indicating an interdependent evaluation of choice options and correlations between different context effects. In sum, the results indicate that sequential sampling models can provide relevant insights into the cognitive process underlying preferential choices and thus can lead to better choice predictions.
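
    As a rough illustration of the contrast being tested, the sketch below simulates a drastically simplified sequential sampling process in the spirit of MDFT: attention switches randomly between attributes and noisy evidence accumulates until a deadline. The attribute values, attention weights and noise level are assumptions, and real MDFT additionally includes contrast and lateral-inhibition matrices.

        import numpy as np

        # Simplified MDFT-style accumulator: per-option preferences integrate
        # the attended attribute plus noise; choice = largest preference at
        # the deadline. All parameters are illustrative assumptions.
        rng = np.random.default_rng(1)
        M = np.array([[3.0, 1.0],            # option A: (attribute 1, attribute 2)
                      [1.0, 3.0],            # option B
                      [2.0, 2.0]])           # option C
        w = np.array([0.5, 0.5])             # attention probabilities (assumed)

        def choose(steps=200, noise=1.0):
            pref = np.zeros(len(M))
            for _ in range(steps):
                a = rng.choice(2, p=w)       # attribute attended this step
                pref += M[:, a] + rng.normal(0, noise, len(M))
            return pref.argmax()

        shares = np.bincount([choose() for _ in range(1000)], minlength=3) / 1000
        print(shares)                        # simulated choice shares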

  3. Estimating Independent Locally Shifted Random Utility Models for Ranking Data

    Science.gov (United States)

    Lam, Kar Yin; Koning, Alex J.; Franses, Philip Hans

    2011-01-01

    We consider the estimation of probabilistic ranking models in the context of conjoint experiments. By using approximate rather than exact ranking probabilities, we avoided the computation of high-dimensional integrals. We extended the approximation technique proposed by Henery (1981) in the context of the Thurstone-Mosteller-Daniels model to any…

  4. The episodic random utility model unifies time trade-off and discrete choice approaches in health state valuation

    NARCIS (Netherlands)

    B.M. Craig (Benjamin); J.J. van Busschbach (Jan)

    2009-01-01

    BACKGROUND: To present an episodic random utility model that unifies time trade-off and discrete choice approaches in health state valuation. METHODS: First, we introduce two alternative random utility models (RUMs) for health preferences: the episodic RUM and the more common instant RUM.

  5. Estimating safety effects of pavement management factors utilizing Bayesian random effect models.

    Science.gov (United States)

    Jiang, Ximiao; Huang, Baoshan; Zaretzki, Russell L; Richards, Stephen; Yan, Xuedong

    2013-01-01

    Previous studies of pavement management factors that relate to the occurrence of traffic-related crashes are rare. Traditional research has mostly employed summary statistics of bidirectional pavement quality measurements in extended longitudinal road segments over a long time period, which may cause a loss of important information and result in biased parameter estimates. The research presented in this article focuses on crash risk of roadways with overall fair to good pavement quality. Real-time and location-specific data were employed to estimate the effects of pavement management factors on the occurrence of crashes. This research is based on the crash data and corresponding pavement quality data for the Tennessee state route highways from 2004 to 2009. The potential temporal and spatial correlations among observations caused by unobserved factors were considered. Overall 6 models were built accounting for no correlation, temporal correlation only, and both the temporal and spatial correlations. These models included Poisson, negative binomial (NB), one random effect Poisson and negative binomial (OREP, ORENB), and two random effect Poisson and negative binomial (TREP, TRENB) models. The Bayesian method was employed to construct these models. The inference is based on the posterior distribution from the Markov chain Monte Carlo (MCMC) simulation. These models were compared using the deviance information criterion. Analysis of the posterior distribution of parameter coefficients indicates that the pavement management factors indexed by Present Serviceability Index (PSI) and Pavement Distress Index (PDI) had significant impacts on the occurrence of crashes, whereas the variable rutting depth was not significant. Among other factors, lane width, median width, type of terrain, and posted speed limit were significant in affecting crash frequency. The findings of this study indicate that a reduction in pavement roughness would reduce the likelihood of traffic crashes.
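
    The move from plain Poisson to random-effect and negative binomial specifications is motivated by overdispersion; the sketch below (toy numbers, not the Tennessee data) shows how an unobserved gamma-distributed segment effect pushes the variance of crash counts above the Poisson mean.

        import numpy as np

        # Unobserved heterogeneity: Poisson counts whose rate carries a gamma
        # random effect with mean 1 are negative binomial, so variance > mean.
        rng = np.random.default_rng(2)
        mu = 2.0                                           # mean crashes per segment
        u = rng.gamma(shape=2.0, scale=0.5, size=100_000)  # random effect, E[u] = 1
        y = rng.poisson(mu * u)

        print(y.mean(), y.var())                           # ~2.0 vs ~4.0: overdispersed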

  6. A Logistic Regression Model with a Hierarchical Random Error Term for Analyzing the Utilization of Public Transport

    Directory of Open Access Journals (Sweden)

    Chong Wei

    2015-01-01

    Logistic regression models have been widely used in previous studies to analyze public transport utilization. These studies have shown travel time to be an indispensable variable for such analysis and usually consider it to be a deterministic variable. This formulation does not allow us to capture travelers’ perception error regarding travel time, and recent studies have indicated that this error can have a significant effect on modal choice behavior. In this study, we propose a logistic regression model with a hierarchical random error term. The proposed model adds a new random error term for the travel time variable. This term structure enables us to investigate travelers’ perception error regarding travel time from a given choice behavior dataset. We also propose an extended model that allows constraining the sign of this error in the model. We develop two Gibbs samplers to estimate the basic hierarchical model and the extended model. The performance of the proposed models is examined using a well-known dataset.

  7. The episodic random utility model unifies time trade-off and discrete choice approaches in health state valuation.

    Science.gov (United States)

    Craig, Benjamin M; Busschbach, Jan Jv

    2009-01-13

    To present an episodic random utility model that unifies time trade-off and discrete choice approaches in health state valuation. First, we introduce two alternative random utility models (RUMs) for health preferences: the episodic RUM and the more common instant RUM. For the interpretation of time trade-off (TTO) responses, we show that the episodic model implies a coefficient estimator, and the instant model implies a mean slope estimator. Secondly, we demonstrate these estimators and the differences between the estimates for 42 health states using TTO responses from the seminal Measurement and Valuation in Health (MVH) study conducted in the United Kingdom. Mean slopes are estimated with and without Dolan's transformation of worse-than-death (WTD) responses. Finally, we demonstrate an exploded probit estimator, an extension of the coefficient estimator for discrete choice data that accommodates both TTO and rank responses. By construction, mean slopes are less than or equal to coefficients, because slopes are fractions and, therefore, magnify downward errors in WTD responses. The Dolan transformation of WTD responses causes mean slopes to increase in similarity to coefficient estimates, yet they are not equivalent (i.e., absolute mean difference = 0.179). Unlike mean slopes, coefficient estimates demonstrate strong concordance with rank-based predictions (Lin's rho = 0.91). Combining TTO and rank responses under the exploded probit model improves the identification of health state values, decreasing the average width of confidence intervals from 0.057 to 0.041 compared to TTO-only results. The episodic RUM expands upon the theoretical framework underlying health state valuation and contributes to health econometrics by motivating the selection of coefficient and exploded probit estimators for the analysis of TTO and rank responses. In future MVH surveys, sample size requirements may be reduced through the incorporation of multiple responses under a single model.
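
    The coefficient-versus-mean-slope contrast can be reproduced on toy data: for responses that are roughly proportional (y ≈ b·x), the episodic-style coefficient is a regression through the origin, while the instant-style mean slope averages individual ratios and therefore magnifies errors at small x. The numbers below are invented, not MVH responses.

        import numpy as np

        rng = np.random.default_rng(3)
        x = rng.uniform(1, 10, 500)                  # e.g., time offered
        y = 0.6 * x + rng.normal(0, 1.0, 500)        # noisy TTO-style responses

        coefficient = (x * y).sum() / (x * x).sum()  # regression through origin
        mean_slope = (y / x).mean()                  # average of individual ratios
        print(coefficient, mean_slope)               # the ratio estimator is noisier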

  8. Factors Affecting Farmers’ Decision to Enter Agricultural Cooperatives Using Random Utility Model in the South Eastern Anatolian Region of Turkey

    Directory of Open Access Journals (Sweden)

    Bahri Karlı

    2006-10-01

    Farmers’ decisions and perceptions about becoming members of agricultural cooperatives in the South Eastern Anatolian Region were investigated. Factors affecting the probability of joining the agricultural cooperatives were determined using a binary logit model. The model revealed that variables such as education, high communication, log of gross income, farm size, and medium and high technology play important roles in determining the probability of entrance. Small farmers are more likely to join the agricultural cooperatives than wealthier farmers. Small farmers may wish to benefit from cash at hand, input subsidies, and services provided by the agricultural cooperatives, since the risks associated with intensive high-returning crops are high. Important factors in farmers' abstention from agricultural cooperatives are gross income and some social status variables. In addition, conservative or orthodox farmers are less likely to join agricultural cooperatives than moderate farmers. We also found that direct government farm credit programs should mainly be directed at providing farmers with better access to capital markets and creating the opportunity to allocate capital inputs through modern technology.

  9. Random regression models

    African Journals Online (AJOL)

    zlukovi

    modelled as a quadratic regression, nested within parity. The previous lactation length was ... This proportion was mainly covered by linear and quadratic coefficients. Results suggest that RRM could .... The multiple trait models in scalar notation are presented by equations (1, 2), while equation. (3) represents the random ...

  10. Entropy Characterization of Random Network Models

    Directory of Open Access Journals (Sweden)

    Pedro J. Zufiria

    2017-06-01

    This paper elaborates on the Random Network Model (RNM) as a mathematical framework for modelling and analyzing the generation of complex networks. Such a framework allows the analysis of the relationship between several network characterizing features (link density, clustering coefficient, degree distribution, connectivity, etc.) and entropy-based complexity measures, providing new insight into the generation and characterization of random networks. Some theoretical and computational results illustrate the utility of the proposed framework.
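
    One entropy-based measure in this spirit is easy to compute: the Shannon entropy of the empirical degree distribution of an Erdős-Rényi graph. The sketch below is a generic illustration, not the paper's RNM code.

        import numpy as np

        rng = np.random.default_rng(4)
        n, p = 200, 0.05
        A = rng.random((n, n)) < p
        A = np.triu(A, 1)
        A = A | A.T                                # symmetric, no self-loops
        deg = A.sum(axis=1)

        freq = np.bincount(deg) / n                # empirical degree distribution
        q = freq[freq > 0]
        print(-(q * np.log2(q)).sum())             # its Shannon entropy (bits)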

  11. Randomized Item Response Theory Models

    NARCIS (Netherlands)

    Fox, Gerardus J.A.

    2005-01-01

    The randomized response (RR) technique is often used to obtain answers to sensitive questions. A new method is developed to measure latent variables using the RR technique, because direct questioning leads to biased results. Within the RR technique, the probability of the true response is modeled by…

  12. The weighted random graph model

    Science.gov (United States)

    Garlaschelli, Diego

    2009-07-01

    We introduce the weighted random graph (WRG) model, which represents the weighted counterpart of the Erdős-Rényi random graph and provides fundamental insights into more complicated weighted networks. We find analytically that the WRG is characterized by a geometric weight distribution, a binomial degree distribution and a negative binomial strength distribution. We also characterize exactly the percolation phase transitions associated with edge removal and with the appearance of weighted subgraphs of any order and intensity. We find that even this completely null model displays a percolation behaviour similar to what is observed in real weighted networks, implying that edge removal cannot be used to detect community structure empirically. By contrast, the analysis of clustering successfully reveals different patterns between the WRG and real networks.
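
    The WRG is simple enough to simulate directly: draw a geometric weight (with weight 0 meaning "no edge") for every node pair and check the claimed distributions. The parameterization below is one common convention and is an assumption, not taken from the paper.

        import numpy as np

        rng = np.random.default_rng(5)
        n, p = 300, 0.1                     # P(w) = (1 - p) * p**w, w = 0, 1, 2, ...
        W = rng.geometric(1 - p, size=(n, n)) - 1
        W = np.triu(W, 1)
        W = W + W.T

        degree = (W > 0).sum(axis=1)        # should be ~ Binomial(n - 1, p)
        strength = W.sum(axis=1)            # should be ~ negative binomial
        print(degree.mean(), (n - 1) * p)   # empirical vs theoretical mean degree
        print(strength.mean(), (n - 1) * p / (1 - p))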

  13. Dynamic randomization and a randomization model for clinical trials data.

    Science.gov (United States)

    Kaiser, Lee D

    2012-12-20

    Randomization models are useful in supporting the validity of linear model analyses applied to data from a clinical trial that employed randomization via permuted blocks. Here, a randomization model for clinical trials data with arbitrary randomization methodology is developed, with treatment effect estimators and standard error estimators valid from a randomization perspective. A central limit theorem for the treatment effect estimator is also derived. As with permuted-blocks randomization, a typical linear model analysis provides results similar to the randomization model results when, roughly, unit effects display no pattern over time. A key requirement for the randomization inference is that the unconditional probability that any patient receives active treatment is constant across patients; when this probability condition is violated, the treatment effect estimator is biased from a randomization perspective. Most randomization methods for balanced, 1 to 1, treatment allocation satisfy this condition. However, many dynamic randomization methods for planned unbalanced treatment allocation, like 2 to 1, do not satisfy this constant probability condition, and these methods should be avoided.
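
    For readers unfamiliar with the baseline scheme, a minimal permuted-block generator looks like this (a generic sketch; block size 4 and 1:1 allocation are assumptions): every block is balanced, and each patient's unconditional probability of receiving A stays 1/2, the key condition discussed above.

        import random

        def permuted_blocks(n_patients, block=4):
            out = []
            while len(out) < n_patients:
                b = ["A"] * (block // 2) + ["B"] * (block // 2)
                random.shuffle(b)           # random order within each block
                out.extend(b)
            return out[:n_patients]

        random.seed(6)
        print(permuted_blocks(10))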

  14. Continuous utility factor in segregation models.

    Science.gov (United States)

    Roy, Parna; Sen, Parongama

    2016-02-01

    We consider the constrained Schelling model of social segregation in which the utility factor of agents strictly increases and nonlocal jumps of the agents are allowed. In the present study, the utility factor u is defined in a way such that it can take continuous values and depends on the tolerance threshold as well as the fraction of unlike neighbors. Two models are proposed: in model A the jump probability is determined by the sign of u only, which makes it equivalent to the discrete model. In model B the actual values of u are considered. Model A and model B are shown to differ drastically as far as segregation behavior and phase transitions are concerned. In model A, although segregation can be achieved, the cluster sizes are rather small. Also, a frozen state is obtained in which steady states comprise many unsatisfied agents. In model B, segregated states with much larger cluster sizes are obtained. The correlation function is calculated to show quantitatively that larger clusters occur in model B. Moreover for model B, no frozen states exist even for very low dilution and small tolerance parameter. This is in contrast to the unconstrained discrete model considered earlier where agents can move even when utility remains the same. In addition, we also consider a few other dynamical aspects which have not been studied in segregation models earlier.

  15. A Mixed Effects Randomized Item Response Model

    Science.gov (United States)

    Fox, J.-P.; Wyrick, Cheryl

    2008-01-01

    The randomized response technique ensures that individual item responses, denoted as true item responses, are randomized before observing them and so-called randomized item responses are observed. A relationship is specified between randomized item response data and true item response data. True item response data are modeled with a (non)linear…

  16. Utilizing Building Information Modelling in Construction Procurement

    OpenAIRE

    Hassinen, Mikko Henrik

    2017-01-01

    The goal of this Bachelor’s thesis was to identify potential uses of building information modelling (BIM) in construction procurement, and to generate guidelines to help in larger scale BIM utilization. Scientific articles and books were the foundation for the theoretical background for this work, together with the previously published Common BIM Requirements 2012 (COBIM2012) -guidelines for building information modelling. Various people responsible for procurement and cost estimation wer...

  17. Genetic parameters for various random regression models to describe the weight data of pigs

    NARCIS (Netherlands)

    Huisman, A.E.; Veerkamp, R.F.; Arendonk, van J.A.M.

    2002-01-01

    Various random regression models have been advocated for the fitting of covariance structures. It was suggested that a spline model would fit better to weight data than a random regression model that utilizes orthogonal polynomials. The objective of this study was to investigate which kind of random

  18. Genetic parameters for different random regression models to describe weight data of pigs

    NARCIS (Netherlands)

    Huisman, A.E.; Veerkamp, R.F.; Arendonk, van J.A.M.

    2001-01-01

    Various random regression models have been advocated for the fitting of covariance structures. It was suggested that a spline model would fit better to weight data than a random regression model that utilizes orthogonal polynomials. The objective of this study was to investigate which kind of random

  19. Cost-Utility of Bilateral Versus Unilateral Cochlear Implantation in Adults: A Randomized Controlled Trial

    NARCIS (Netherlands)

    Smulders, Y.E.; Zon, A. van; Stegeman, I.; Zanten, G.A.; Rinia, A.B.; Stokroos, R.J.; Free, R.H.; Maat, B.; Frijns, J.H.; Mylanus, E.A.M.; Huinck, W.J.; Topsakal, V.; Grolman, W.

    2016-01-01

    OBJECTIVE: To study the cost-utility of simultaneous bilateral cochlear implantation (CI) versus unilateral CI. STUDY DESIGN: Randomized controlled trial (RCT). SETTING: Five tertiary referral centers. PATIENTS: Thirty-eight postlingually deafened adults eligible for cochlear implantation.

  20. Cost-Utility of Bilateral Versus Unilateral Cochlear Implantation in Adults : A Randomized Controlled Trial

    NARCIS (Netherlands)

    Smulders, Yvette E; van Zon, Alice; Stegeman, Inge; van Zanten, Gijsbert A; Rinia, Albert B; Stokroos, Robert J; Free, Rolien H; Maat, Bert; Frijns, Johan H M; Mylanus, Emmanuel A M; Huinck, Wendy J; Topsakal, Vedat; Grolman, Wilko

    OBJECTIVE: To study the cost-utility of simultaneous bilateral cochlear implantation (CI) versus unilateral CI. STUDY DESIGN: Randomized controlled trial (RCT). SETTING: Five tertiary referral centers. PATIENTS: Thirty-eight postlingually deafened adults eligible for cochlear implantation.

  1. A random utility based estimation framework for the household activity pattern problem.

    Science.gov (United States)

    2016-06-01

    This paper develops a random utility based estimation framework for the Household Activity Pattern Problem (HAPP). Based on the realization that the output of complex activity-travel decisions forms a continuous pattern in the space-time dimension, the es...

  2. Malliavin's calculus in insider models: Additional utility and free lunches

    OpenAIRE

    Imkeller, Peter

    2002-01-01

    We consider simple models of financial markets with regular traders and insiders possessing some extra information hidden in a random variable which is accessible to the regular trader only at the end of the trading interval. The problems we focus on are the calculation of the additional utility of the insider and a study of his free lunch possibilities. The information drift, i.e. the drift to eliminate in order to preserve the martingale property in the insider's filtration, turns out to be...

  3. The parabolic Anderson model random walk in random potential

    CERN Document Server

    König, Wolfgang

    2016-01-01

    This is a comprehensive survey of the research on the parabolic Anderson model – the heat equation with random potential, or the random walk in random potential – in the years 1990–2015. The investigation of this model requires a combination of tools from probability (e.g., large deviations, extreme-value theory) and analysis (e.g., spectral theory for the Laplace operator with potential, variational analysis). We explain the background, the applications, the questions and the connections with other models, and formulate the most relevant results on the long-time behavior of the solution, like quenched and annealed asymptotics for the total mass, intermittency, confinement and concentration properties and mass flow. Furthermore, we explain the most successful proof methods and give a list of open research problems. Proofs are not detailed, but concisely outlined and commented; the formulations of some theorems are slightly simplified for better comprehension.

  4. Overlap Synchronisation in Multipartite Random Energy Models

    Science.gov (United States)

    Genovese, Giuseppe; Tantari, Daniele

    2017-12-01

    In a multipartite random energy model, made of a number of coupled generalised random energy models (GREMs), we determine the joint law of the overlaps in terms of the ones of the single GREMs. This provides the simplest example of the so-called overlap synchronisation.

  5. Modelling energy utilization for laying-type pullets

    Directory of Open Access Journals (Sweden)

    R Neme

    2005-03-01

    Three trials were carried out to determine metabolizable energy (ME) requirement models for starting and growing pullets from different strains, at five ambient temperatures and different percentages of feather coverage. In Trial I, metabolizable energy requirements for maintenance (MEm) and efficiency of energy utilization were estimated using 64 birds of two different strains, Hy-Line W36 (HLW36) and Hy-Line Semi-heavy (HLSH), from 9 to 13 weeks of age. The effects of ambient temperature (12, 18, 24, 30 and 36ºC) and percentage feather coverage (0, 50 and 100%) on MEm were assessed in the second trial, using 48 birds per temperature per strain (HLSH and HLW36) from 9 to 13 weeks of age. Trial III evaluated ME requirements for weight gain (MEg) using 1,200 birds from two light strains (HLW36 and Hisex Light, HL) and two semi-heavy strains (HLSH and Hisex Semi-heavy, HSH) reared until 18 weeks of age. According to the prediction models, MEm changed as a function of temperature and feather coverage, whereas MEg changed as a function of age and bird strain. Thus, two models were developed for birds aged 1 to 6 weeks, one for the light strains and one for the semi-heavy strains. Energy requirements (ER) were different among strains from 7 to 12 weeks, and therefore four models were elaborated. From 13 to 18 weeks, a single model was produced for semi-heavy birds, since ER between semi-heavy strains were not different, whereas two different models were elaborated for the light layers. MEg of light birds was higher than MEg of semi-heavy birds, independent of age.

  6. Animal Models Utilized in HTLV-1 Research

    Directory of Open Access Journals (Sweden)

    Amanda R. Panfil

    2013-01-01

    Since the isolation and discovery of human T-cell leukemia virus type 1 (HTLV-1) over 30 years ago, researchers have utilized animal models to study HTLV-1 transmission, viral persistence, virus-elicited immune responses, and HTLV-1-associated disease development (ATL, HAM/TSP). Non-human primates, rabbits, rats, and mice have all been used to help understand HTLV-1 biology and disease progression. Non-human primates offer a model system that is phylogenetically similar to humans for examining viral persistence. Viral transmission, persistence, and immune responses have been widely studied using New Zealand White rabbits. The advent of molecular clones of HTLV-1 has offered the opportunity to assess the importance of various viral genes in rabbits, non-human primates, and mice. Additionally, over-expression of viral genes using transgenic mice has helped uncover the importance of Tax and Hbz in the induction of lymphoma and other lymphocyte-mediated diseases. HTLV-1 inoculation of certain strains of rats results in histopathological features and clinical symptoms similar to those of humans with HAM/TSP. Transplantation of certain types of ATL cell lines in immunocompromised mice results in lymphoma. Recently, “humanized” mice have been used to model ATL development for the first time. Not all HTLV-1 animal models develop disease and those that do vary in consistency depending on the type of monkey, strain of rat, or even type of ATL cell line used. However, the progress made using animal models cannot be understated as it has led to insights into the mechanisms regulating viral replication, viral persistence, disease development, and, most importantly, model systems to test disease treatments.

  7. A segmentation of brain MRI images utilizing intensity and contextual information by Markov random field.

    Science.gov (United States)

    Chen, Mingsheng; Yan, Qingguang; Qin, Mingxin

    2017-12-01

    Image segmentation is a preliminary and fundamental step in computer-aided magnetic resonance imaging (MRI) image analysis. But the performance of most current image segmentation methods is easily degraded by noise in MRI images. A precise and noise-robust segmentation of MRI images is desired in modern medical image diagnosis. This paper presents a segmentation of MRI images which combines fuzzy clustering and Markov random field (MRF). In order to utilize gray level information sufficiently and alleviate noise disturbance, fuzzy clustering is carried out on the original image and the coarse scale image of multi-scale decomposition. The spatial constraints between neighboring pixels are modeled by a defined potential function in the MRF to reduce the effect of noise and increase the integrity of segmented regions. Spatial constraints and the gray level information refined by the Fuzzy C-Means (FCM) algorithm are integrated by maximum a posteriori Markov random field (MAP-MRF). In the proposed method, the fuzzy clustering membership obtained from the original image and the coarse scale image is integrated into the single-site clique potential functions by MAP-MRF. The defined potential functions and the distance weight are introduced to model the neighborhood constraint with MRF. The experiments are carried out on noised synthetic images, simulated brain MR images and real MR images. The experimental results show that the proposed method has strong robustness and satisfying performance. Meanwhile the method is compared with the FCM, FGFCM and FLICM algorithms visually and statistically in the experiments. In the comparison, the proposed method achieved the best results, with average similarity-index increases of 36.8%, 33.7% and 2.75% over FCM, FGFCM and FLICM, respectively. This paper proposes an MRI segmentation method combining fuzzy clustering and Markov random field. The method is tested in the noised image databases and

  8. New specifications for exponential random graph models

    NARCIS (Netherlands)

    Snijders, Tom A. B.; Pattison, Philippa E.; Robins, Garry L.; Handcock, Mark S.; Stolzenberg, RM

    2006-01-01

    The most promising class of statistical models for expressing structural properties of social networks observed at one moment in time is the class of exponential random graph models (ERGMs), also known as p* models. The strong point of these models is that they can represent a variety of structural

  9. Comments on the random Thirring model

    Science.gov (United States)

    Berkooz, Micha; Narayan, Prithvi; Rozali, Moshe; Simón, Joan

    2017-09-01

    The Thirring model with random couplings is a translationally invariant generalisation of the SYK model to 1+1 dimensions, which is tractable in the large N limit. We compute its two point function, at large distances, for any strength of the random coupling. For a given realisation, the couplings contain both irrelevant and relevant marginal operators, but statistically, in the large N limit, the random couplings are overall always marginally irrelevant, in sharp distinction to the usual Thirring model. We show that the leading term of the β function in conformal perturbation theory, which is quadratic in the couplings, vanishes, while the usually subleading cubic term matches our RG flow.

  10. Random matrix model for disordered conductors

    Indian Academy of Sciences (India)

    Keywords. Disordered conductors; random matrix theory; Dyson's Coulomb gas model. ... An interesting random walk problem associated with the joint probability distribution of the ensuing ensemble is discussed and its connection with level dynamics is brought out. It is further proved that Dyson's Coulomb gas analogy ...

  11. Modeling regulated water utility investment incentives

    Science.gov (United States)

    Padula, S.; Harou, J. J.

    2014-12-01

    This work attempts to model the infrastructure investment choices of privatized water utilities subject to rate-of-return and price-cap regulation. The goal is to understand how regulation influences water companies' investment decisions, such as their desire to engage in transfers with neighbouring companies. We formulate a profit-maximization capacity expansion model that finds the schedule of new supply, demand management and transfer schemes that maintain the annual supply-demand balance and maximize a company's profit under the 2010-15 price control process in England. Regulatory incentives for cost savings are also represented in the model. These include the CIS scheme for capital expenditure (capex) and incentive allowance schemes for operating expenditure (opex). The profit-maximizing investment program (what to build, when and what size) is compared with the least-cost program (social optimum). We apply this formulation to several water companies in South East England to model performance and sensitivity to water network particulars. Results show that if companies are able to outperform the regulatory assumption on the cost of capital, a capital bias can be generated, because capital expenditure, contrary to opex, can be remunerated through the companies' regulatory capital value (RCV). The occurrence of the 'capital bias', and its extent, depends on how far a company can finance its investments at a rate below the allowed cost of capital. The bias can be reduced by the regulatory penalties for underperformance on capital expenditure (the CIS scheme); sensitivity analysis can be applied by varying the CIS penalty to see how, and to what extent, this impacts the capital bias effect. We show how regulatory changes could potentially be devised to partially remove the 'capital bias' effect. Solutions potentially include allowing for incentives on total expenditure rather than separately for capex and opex and allowing

  12. Supersymmetric SYK model and random matrix theory

    Science.gov (United States)

    Li, Tianlin; Liu, Junyu; Xin, Yuan; Zhou, Yehao

    2017-06-01

    In this paper, we investigate the effect of supersymmetry on the symmetry classification of random matrix theory ensembles. We mainly consider the random matrix behaviors in the N=1 supersymmetric generalization of the Sachdev-Ye-Kitaev (SYK) model, a toy model for two-dimensional quantum black holes with a supersymmetric constraint. Some analytical arguments and numerical results are given to show that the statistics of the supersymmetric SYK model can be interpreted as random matrix theory ensembles, with a different eight-fold classification from the original SYK model and some new features. The time-dependent evolution of the spectral form factor is also investigated, where predictions from random matrix theory govern the late-time behavior of the chaotic Hamiltonian with supersymmetry.
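
    The spectral form factor referred to above is straightforward to estimate for a pure random matrix ensemble; the sketch below averages |Z(t)|^2 over small GOE samples (a generic random-matrix illustration with an assumed normalization, not the SYK Hamiltonian itself).

        import numpy as np

        rng = np.random.default_rng(7)
        N, samples = 200, 50
        ts = np.logspace(-1, 2, 60)
        sff = np.zeros_like(ts)
        for _ in range(samples):
            H = rng.normal(size=(N, N))
            H = (H + H.T) / np.sqrt(2 * N)         # GOE (assumed convention)
            E = np.linalg.eigvalsh(H)
            Z = np.exp(-1j * np.outer(ts, E)).sum(axis=1)
            sff += np.abs(Z) ** 2 / samples
        print(sff[:5])                             # traces the dip-ramp-plateau shape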

  13. A new model of Random Regret Minimization

    NARCIS (Netherlands)

    Chorus, C.G.

    2010-01-01

    A new choice model is derived, rooted in the framework of Random Regret Minimization (RRM). The proposed model postulates that when choosing, people anticipate and aim to minimize regret. Whereas previous regret-based discrete choice models assume that regret is experienced with respect to only the

  14. A Generalized Random Regret Minimization Model

    NARCIS (Netherlands)

    Chorus, C.G.

    2013-01-01

    This paper presents, discusses and tests a generalized Random Regret Minimization (G-RRM) model. The G-RRM model is created by replacing a fixed constant in the attribute-specific regret functions of the RRM model by a regret-weight variable. Depending on the value of the regret-weights, the G-RRM
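
    A sketch of the regret computation helps fix ideas: classical RRM scores each alternative by summed binary-comparison regrets ln(1 + exp(β(x_j − x_i))), and the G-RRM generalization replaces the fixed constant 1 by a regret weight γ (γ = 1 recovers RRM; γ = 0 collapses to a linear-in-attributes utility). The attribute values and β below are invented.

        import numpy as np

        X = np.array([[3.0, 1.0],                  # alternatives x attributes
                      [1.0, 3.0],
                      [2.0, 2.0]])
        beta = np.array([0.5, 0.5])

        def regret(X, beta, gamma=1.0):
            R = np.zeros(len(X))
            for i in range(len(X)):
                for j in range(len(X)):
                    if j != i:
                        R[i] += np.log(gamma + np.exp(beta * (X[j] - X[i]))).sum()
            return R

        for g in (1.0, 0.5, 0.0):
            R = regret(X, beta, g)
            p = np.exp(-R) / np.exp(-R).sum()      # choose by minimal random regret
            print(g, np.round(p, 3))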

  15. Information inefficiency in a random linear economy model

    CERN Document Server

    Jerico, Joao Pedro

    2016-01-01

    We study the effects of introducing information inefficiency in a model for a random linear economy with a representative consumer. This is done by considering statistical, instead of classical, economic general equilibria. Employing two different approaches, we show that inefficiency increases the consumption set of a consumer but decreases her expected utility. In this scenario, economic activity grows while welfare shrinks, which is the opposite of the behavior obtained by considering a rational consumer.

  16. Modelling of biomass utilization for energy purpose

    Energy Technology Data Exchange (ETDEWEB)

    Grzybek, Anna (ed.)

    2010-07-01

    the overall farm structure, the distribution of a farm's land over several separate subfields, village overpopulation and very high employment in agriculture (about 27% of all employees in the national economy work in agriculture). Farmers have a low education level. In towns, 34% of the population has secondary education; in rural areas, only 15-16%. Less than 2% of the inhabitants of rural areas have higher education. The structure of land use is as follows: arable land 11.5%, meadows and pastures 25.4%, forests 30.1%. Poland requires the implementation of technical and technological progress to intensify agricultural production. The reason for the competition for agricultural land is the maintenance of the current consumption level alongside the allocation of part of agricultural production to energy purposes. Agricultural land is going to be the key factor for biofuel production. This publication presents research results for Project PL0073, 'Modelling of energetical biomass utilization for energy purposes', financed by the Norwegian Financial Mechanism and the European Economic Area Financial Mechanism. It aims to bring the problems connected with the cultivation of energy plants closer to the reader, to explain them, and to dispel myths surrounding them. Replacing fossil fuels with biomass for heat and electricity production could contribute significantly to reducing carbon dioxide emissions. Moreover, biomass crops and the utilization of biomass for energy purposes play an important role in diversifying agricultural production as rural areas are transformed. Widening agricultural production enables the creation of new jobs. Sustainable development is going to be the fundamental rule for the evolution of Polish agriculture in the long term, and the energetic utilization of biomass fits perfectly within this evolution, especially at the local level. There are two facts. The first is that the increase of interest in energy crops in Poland

  17. Cost-Utility of Bilateral Versus Unilateral Cochlear Implantation in Adults : A Randomized Controlled Trial

    NARCIS (Netherlands)

    Smulders, Yvette E.; van Zon, Alice; Stegeman, Inge; van Zanten, Gijsbert A.; Rinia, Albert B.; Stokroos, Robert J.; Free, Rolien H.; Maat, Bert; Frijns, Johan H. M.; Mylanus, Emmanuel A. M.; Huinck, Wendy J.; Topsakal, Vedat; Grolman, Wilko

    Objective: To study the cost-utility of simultaneous bilateral cochlear implantation (CI) versus unilateral CI. Study Design: Randomized controlled trial (RCT). Setting: Five tertiary referral centers. Patients: Thirty-eight postlingually deafened adults eligible for cochlear implantation. Interventions: A

  18. RMBNToolbox: random models for biochemical networks

    Directory of Open Access Journals (Sweden)

    Niemi Jari

    2007-05-01

    Background: There is increasing interest in modelling biochemical and cell biological networks, as well as in the computational analysis of these models. The development of analysis methodologies and related software is rapid in the field. However, the number of available models is still relatively small and the model sizes remain limited. The lack of kinetic information is usually the limiting factor for the construction of detailed simulation models. Results: We present a computational toolbox for generating random biochemical network models which mimic real biochemical networks. The toolbox is called Random Models for Biochemical Networks. The toolbox works in the Matlab environment, and it makes it possible to generate various network structures, stoichiometries, kinetic laws for reactions, and parameters therein. The generation can be based on statistical rules and distributions, and more detailed information of real biochemical networks can be used in situations where it is known. The toolbox can be easily extended. The resulting network models can be exported in the format of Systems Biology Markup Language. Conclusion: While more information is accumulating on biochemical networks, random networks can be used as an intermediate step towards their better understanding. Random networks make it possible to study the effects of various network characteristics on the overall behavior of the network. Moreover, the construction of artificial network models provides the ground truth data needed in the validation of various computational methods in the fields of parameter estimation and data analysis.

  19. Nonparametric estimation in random sum models

    Directory of Open Access Journals (Sweden)

    Hassan S. Bakouch

    2013-05-01

    Let X1, X2, … be independent, identically distributed, non-negative, integer-valued random variables, and let N be a non-negative, integer-valued random variable independent of the Xi. In this paper, we consider two nonparametric estimation problems for the random sum variable. The first is the estimation of the means of Xi and N based on second-moment assumptions on the distributions of Xi and N. The second is the nonparametric estimation of the distribution of Xi given a parametric model for the distribution of N. Some asymptotic properties of the proposed estimators are discussed.
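
    The moment identities that drive the first estimation problem are easy to check by simulation: for S = X1 + … + XN with N independent of the Xi, E[S] = E[N]E[X] and Var(S) = E[N]Var(X) + Var(N)E[X]^2. The Poisson choices below are assumptions for the demo, not the paper's model.

        import numpy as np

        rng = np.random.default_rng(8)
        lam, mu = 4.0, 2.5                  # N ~ Poisson(lam), X ~ Poisson(mu)
        N = rng.poisson(lam, 200_000)
        S = rng.poisson(mu * N)             # sum of N iid Poisson(mu) variables

        print(S.mean(), lam * mu)                   # E[S]   = 10
        print(S.var(), lam * mu + lam * mu**2)      # Var(S) = 35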

  20. Modelling complex networks by random hierarchical graphs

    Directory of Open Access Journals (Sweden)

    M.Wróbel

    2008-06-01

    Numerous complex networks contain special patterns, called network motifs. These are specific subgraphs which occur more often than in randomized networks of the Erdős-Rényi type. We choose one of them, the triangle, and build a family of random hierarchical graphs: Sierpiński gasket-based graphs with random "decorations". We calculate the important characteristics of these graphs - average degree, average shortest path length, small-world graph family characteristics. They depend on the probability of decorations. We analyze the Ising model on our graphs and describe its critical properties using a renormalization-group technique.

  1. Infinite Random Graphs as Statistical Mechanical Models

    DEFF Research Database (Denmark)

    Durhuus, Bergfinnur Jøgvan; Napolitano, George Maria

    2011-01-01

    We discuss two examples of infinite random graphs obtained as limits of finite statistical mechanical systems: a model of two-dimensional discretized quantum gravity defined in terms of causal triangulated surfaces, and the Ising model on generic random trees. For the former model we describe a relation to the so-called uniform infinite tree, and results on the Hausdorff and spectral dimension of two-dimensional space-time obtained in B. Durhuus, T. Jonsson, J.F. Wheater, J. Stat. Phys. 139, 859 (2010) are briefly outlined. For the latter we discuss results on the absence of spontaneous...

  2. A Dexterous Optional Randomized Response Model

    Science.gov (United States)

    Tarray, Tanveer A.; Singh, Housila P.; Yan, Zaizai

    2017-01-01

    This article addresses the problem of estimating the proportion π_S of the population belonging to a sensitive group using an optional randomized response technique in stratified sampling, based on the Mangat model, with proportional and Neyman allocation and a larger gain in efficiency. Numerically, it is found that the suggested model is…

  3. Experimental Design of Formulations Utilizing High Dimensional Model Representation.

    Science.gov (United States)

    Li, Genyuan; Bastian, Caleb; Welsh, William; Rabitz, Herschel

    2015-07-23

    Many applications involve formulations or mixtures where large numbers of components are possible to choose from, but a final composition with only a few components is sought. Finding suitable binary or ternary mixtures from all the permissible components often relies on simplex-lattice sampling in traditional design of experiments (DoE), which requires performing a large number of experiments even for just tens of permissible components. The required effort rises very rapidly with increasing numbers of components and can readily become impractical. This paper proposes constructing a single model for a mixture containing all permissible components from just a modest number of experiments. Yet the model is capable of satisfactorily predicting the performance for full as well as all possible binary and ternary component mixtures. To achieve this goal, we utilize biased random sampling combined with high dimensional model representation (HDMR) to replace DoE simplex-lattice design. Compared with DoE, the required number of experiments is significantly reduced, especially when the number of permissible components is large. This study is illustrated with a solubility model for solvent mixture screening.

  4. An inventory model with random demand

    Science.gov (United States)

    Mitsel, A. A.; Kritski, O. L.; Stavchuk, LG

    2017-01-01

    The article describes a three-product inventory model with random demand and equal delivery frequencies. A feature of this model is that additional purchases of the required resources are carried out only to the extent of their deficit, which allows reducing storage costs. A simulation based on data on the arrival of raw materials at an enterprise in Kazakhstan has been prepared. The proposed model is shown to enable savings of up to 40.8% of working capital.

  5. Optimal Allocation in Stratified Randomized Response Model

    Directory of Open Access Journals (Sweden)

    Javid Shabbir

    2005-07-01

    A Warner (1965) randomized response model based on stratification is used to determine the allocation of samples. Both linear and log-linear cost functions are discussed under uni- and double stratification. It is observed that by using a log-linear cost function, one can get better allocations.
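
    A quick sketch of the underlying Warner (1965) mechanism (a textbook illustration; the stratified cost-function analysis itself is not reproduced): with probability P the respondent answers the sensitive question, otherwise its complement, and the population proportion is recovered from the observed "yes" rate. Neyman allocation would then assign stratum sample sizes proportional to stratum size times stratum standard deviation.

        import numpy as np

        rng = np.random.default_rng(9)
        pi_true, P, n = 0.30, 0.7, 5000
        member = rng.random(n) < pi_true            # true sensitive status
        direct = rng.random(n) < P                  # which question was drawn
        answer = np.where(direct, member, ~member)  # observed yes/no only

        lam = answer.mean()                         # P(yes) = P*pi + (1-P)*(1-pi)
        pi_hat = (lam - (1 - P)) / (2 * P - 1)      # Warner estimator
        print(pi_hat)                               # close to pi_true = 0.30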

  6. Random effect selection in generalised linear models

    DEFF Research Database (Denmark)

    Denwood, Matt; Houe, Hans; Forkman, Björn

    We analysed abattoir recordings of meat inspection codes with possible relevance to on-farm animal welfare in cattle. Random effects logistic regression models were used to describe individual-level data obtained from 461,406 cattle slaughtered in Denmark. Our results demonstrate that the largest ...

  7. Improving randomness characterization through Bayesian model selection.

    Science.gov (United States)

    Díaz Hernández Rojas, Rafael; Solís, Aldo; Angulo Martínez, Alí M; U'Ren, Alfred B; Hirsch, Jorge G; Marsili, Matteo; Pérez Castillo, Isaac

    2017-06-08

    Random number generation plays an essential role in technology, with important applications in areas ranging from cryptography to Monte Carlo methods and other probabilistic algorithms. All such applications require high-quality sources of random numbers, yet effective methods for assessing whether a source produces truly random sequences are still missing. Current methods either do not rely on a formal description of randomness (the NIST test suite), or are inapplicable in principle (the characterization derived from the Algorithmic Theory of Information), since they require testing all the possible computer programs that could produce the sequence to be analysed. Here we present a rigorous method that overcomes these problems based on Bayesian model selection. We derive analytic expressions for a model's likelihood, which is then used to compute its posterior distribution. Our method proves to be more rigorous than NIST's suite and the Borel-Normality criterion, and its implementation is straightforward. We applied our method to an experimental device based on the process of spontaneous parametric downconversion to confirm that it behaves as a genuine quantum random number generator. As our approach relies on Bayesian inference, our scheme transcends individual sequence analysis, leading to a characterization of the source itself.
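
    The flavor of such a Bayesian comparison can be sketched in a few lines: compare the marginal likelihood of a bit sequence under an iid Bernoulli model against a first-order Markov model, both with uniform Beta(1,1) priors (a minimal stand-in for the paper's richer model class; note the Markov marginal conditions on the first bit).

        import random
        from math import lgamma

        def log_beta(a, b):                          # log of the Beta function
            return lgamma(a) + lgamma(b) - lgamma(a + b)

        def log_ml_iid(bits):                        # integrate out the bias p
            n1 = sum(bits)
            return log_beta(n1 + 1, len(bits) - n1 + 1)

        def log_ml_markov(bits):                     # integrate out both transition rows
            c = {(a, b): 0 for a in (0, 1) for b in (0, 1)}
            for a, b in zip(bits, bits[1:]):
                c[(a, b)] += 1
            return sum(log_beta(c[(a, 1)] + 1, c[(a, 0)] + 1) for a in (0, 1))

        random.seed(10)
        bits = [random.getrandbits(1) for _ in range(2000)]
        print(log_ml_iid(bits) - log_ml_markov(bits))  # > 0 favours the iid model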

  8. A random walk model to evaluate autism

    Science.gov (United States)

    Moura, T. R. S.; Fulco, U. L.; Albuquerque, E. L.

    2018-02-01

    A common test administered during neurological examination in children is the analysis of their social communication and interaction across multiple contexts, including repetitive patterns of behavior. Poor performance may be associated with neurological conditions characterized by impairments in executive function, such as the so-called pervasive developmental disorders (PDDs), a particular condition of the autism spectrum disorders (ASDs). Inspired by these diagnostic tools, mainly those related to repetitive movements and behaviors, we studied how the diffusion regimes of two discrete-time random walkers, mimicking the lack of social interaction and restricted interests developed by children with PDDs, are affected. Our model, which is based on the so-called elephant random walk (ERW) approach, considers that one of the random walkers can learn and imitate the microscopic behavior of the other with probability f (1 - f otherwise). The diffusion regimes, measured by the Hurst exponent (H), are then obtained, whose changes may indicate a different degree of autism.
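
    For reference, the classic single-walker ERW that the two-walker model builds on can be simulated directly; with memory parameter p > 3/4 it is superdiffusive with H = 2p − 1. The sketch below estimates H from the growth of the displacement variance (simulation sizes are arbitrary choices and the estimate is rough).

        import numpy as np

        rng = np.random.default_rng(11)

        def erw(p, T):
            steps = np.empty(T)
            steps[0] = 1.0
            for t in range(1, T):
                k = rng.integers(t)                 # recall a uniformly chosen past step
                steps[t] = steps[k] if rng.random() < p else -steps[k]
            return steps.cumsum()

        T, runs, p = 2000, 300, 0.9
        x = np.array([erw(p, T) for _ in range(runs)])
        var = x.var(axis=0)                         # Var(x_t) ~ t**(2H)
        t = np.arange(100, T)
        H = np.polyfit(np.log(t), np.log(var[100:]), 1)[0] / 2
        print(H)                                    # roughly 2*0.9 - 1 = 0.8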

  9. Stochastic model updating utilizing Bayesian approach and Gaussian process model

    Science.gov (United States)

    Wan, Hua-Ping; Ren, Wei-Xin

    2016-03-01

    Stochastic model updating (SMU) has been increasingly applied in quantifying structural parameter uncertainty from response variability. SMU for parameter uncertainty quantification refers to the problem of inverse uncertainty quantification (IUQ), which is a nontrivial task. Inverse problems solved with optimization usually bring about issues of gradient computation, ill-conditionedness, and non-uniqueness. Moreover, the uncertainty present in the response makes the inverse problem more complicated. In this study, a Bayesian approach is adopted in SMU for parameter uncertainty quantification. The prominent strength of the Bayesian approach for the IUQ problem is that it solves the IUQ problem in a straightforward manner, which enables it to avoid the previous issues. However, when applied to engineering structures that are modeled with a high-resolution finite element model (FEM), the Bayesian approach is still computationally expensive since the commonly used Markov chain Monte Carlo (MCMC) method for Bayesian inference requires a large number of model runs to guarantee convergence. Herein we reduce computational cost in two respects. On the one hand, the fast-running Gaussian process model (GPM) is utilized to approximate the time-consuming high-resolution FEM. On the other hand, the advanced MCMC method using the delayed rejection adaptive Metropolis (DRAM) algorithm, which combines a local adaptive strategy with a global adaptive strategy, is employed for Bayesian inference. In addition, we propose the use of the powerful variance-based global sensitivity analysis (GSA) in parameter selection to exclude non-influential parameters from the calibration parameters, which yields a reduced-order model and thus further alleviates the computational burden. A simulated aluminum plate and a real-world complex cable-stayed pedestrian bridge are presented to illustrate the proposed framework and verify its feasibility.

  10. Animal models of asthma: utility and limitations

    Directory of Open Access Journals (Sweden)

    Aun MV

    2017-11-01

    Marcelo Vivolo Aun,1,2 Rafael Bonamichi-Santos,1,2 Fernanda Magalhães Arantes-Costa,2 Jorge Kalil,1 Pedro Giavina-Bianchi1 1Clinical Immunology and Allergy Division, Department of Internal Medicine, University of São Paulo School of Medicine, São Paulo, Brazil; 2Laboratory of Experimental Therapeutics (LIM20), Department of Internal Medicine, University of Sao Paulo, Sao Paulo, Brazil. Abstract: Clinical studies in asthma are not able to clear up all aspects of disease pathophysiology. Animal models have been developed to better understand these mechanisms and to evaluate both safety and efficacy of therapies before starting clinical trials. Several species of animals have been used in experimental models of asthma, such as Drosophila, rats, guinea pigs, cats, dogs, pigs, primates and equines. However, the species most commonly studied in the last two decades is the mouse, particularly the BALB/c strain. Animal models of asthma try to mimic the pathophysiology of human disease. They classically include two phases: sensitization and challenge. Sensitization is traditionally performed by intraperitoneal and subcutaneous routes, but intranasal instillation of allergens has been increasingly used because human asthma is induced by inhalation of allergens. Challenges with allergens are performed through aerosol, intranasal or intratracheal instillation. However, few studies have compared different routes of sensitization and challenge. The causative allergen is another important issue in developing a good animal model. Despite being more traditional and leading to intense inflammation, ovalbumin has been replaced by aeroallergens, such as house dust mites, to use the allergens that cause human disease. Finally, researchers should define outcomes to be evaluated, such as serum-specific antibodies, airway hyperresponsiveness, inflammation and remodeling. The present review analyzes the animal models of asthma, assessing differences between species, allergens and routes

  11. Evaluation of Usability Utilizing Markov Models

    Science.gov (United States)

    Penedo, Janaina Rodrigues; Diniz, Morganna; Ferreira, Simone Bacellar Leal; Silveira, Denis S.; Capra, Eliane

    2012-01-01

    Purpose: The purpose of this paper is to analyze the usability of a remote learning system in its initial development phase, using a quantitative usability evaluation method through Markov models. Design/methodology/approach: The paper opted for an exploratory study. The data of interest of the research correspond to the possible accesses of users…
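
    In such Markov-model usability analyses, screens become transient states and session outcomes absorbing states; the fundamental matrix then yields quantities like the expected number of steps to completion. The three-screen transition matrix below is hypothetical, not the system evaluated in the paper.

        import numpy as np

        # Transient states: menu, lesson, quiz (hypothetical screens); the
        # missing row mass flows to absorbing states (finish or abandon).
        Q = np.array([[0.1, 0.5, 0.2],
                      [0.2, 0.1, 0.5],
                      [0.1, 0.2, 0.1]])
        N = np.linalg.inv(np.eye(3) - Q)     # fundamental matrix (I - Q)^-1
        print(N.sum(axis=1))                 # expected steps from each screen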

  12. Latent Utility Shocks in a Structural Empirical Asset Pricing Model

    DEFF Research Database (Denmark)

    Christensen, Bent Jesper; Raahauge, Peter

    We find that current dividends do not forecast future utility shocks, whereas current utility shocks do forecast future dividends. The estimated structural model produces a sequence of predicted utility shocks which provide better forecasts of future long-horizon stock market returns than the classical...

  13. On utility maximization in discrete-time financial market models

    OpenAIRE

    Miklos Rasonyi; Lukasz Stettner

    2005-01-01

    We consider a discrete-time financial market model with finite time horizon and give conditions which guarantee the existence of an optimal strategy for the problem of maximizing expected terminal utility. Equivalent martingale measures are constructed using optimal strategies.

  14. Particle filters for random set models

    CERN Document Server

    Ristic, Branko

    2013-01-01

    “Particle Filters for Random Set Models” presents coverage of state estimation of stochastic dynamic systems from noisy measurements, specifically sequential Bayesian estimation and nonlinear or stochastic filtering. The class of solutions presented in this book is based on the Monte Carlo statistical method. The resulting algorithms, known as particle filters, have in the last decade become one of the essential tools for stochastic filtering, with applications ranging from navigation and autonomous vehicles to bio-informatics and finance. While particle filters have been around for more than a decade, the recent theoretical developments of sequential Bayesian estimation in the framework of random set theory have provided new opportunities which are not widely known and are covered in this book. These recent developments have dramatically widened the scope of applications, from single to multiple appearing/disappearing objects, from precise to imprecise measurements and measurement models. This book...
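
    The Monte Carlo core the book builds on is the bootstrap particle filter; a minimal scalar sketch (a standard textbook example with assumed linear-Gaussian dynamics, not the random-set generalization) is: predict particles with the dynamics, weight them by the measurement likelihood, resample.

        import numpy as np

        rng = np.random.default_rng(12)
        T, n = 50, 1000
        xs, ys, x = [], [], 0.0
        for _ in range(T):                                # simulate truth + data
            x = 0.9 * x + rng.normal(0, 1.0)
            xs.append(x)
            ys.append(x + rng.normal(0, 0.5))

        particles = rng.normal(0, 1, n)
        est = []
        for y in ys:
            particles = 0.9 * particles + rng.normal(0, 1.0, n)  # predict
            w = np.exp(-0.5 * ((y - particles) / 0.5) ** 2)      # weight
            w /= w.sum()
            particles = rng.choice(particles, size=n, p=w)       # resample
            est.append(particles.mean())

        print(np.mean((np.array(est) - np.array(xs)) ** 2))      # filtering MSE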

  15. Estimating Random Regret Minimization models in the route choice context

    DEFF Research Database (Denmark)

    Prato, Carlo Giacomo

    The discrete choice paradigm of random regret minimization (RRM) has been recently proposed in several choice contexts. In the route choice context, the paradigm has been used to model the choice among three routes, to define regret-based equilibrium in risky conditions, and to formulate regret-based stochastic user equilibrium. However, in the same context the RRM literature has not confronted three major challenges: (i) accounting for similarity across alternative routes, (ii) analyzing choice set composition effects on choice probabilities, and (iii) comparing the RRM model with advanced RUM counterparts. This paper looks into RRM-based route choice models from these three perspectives by (i) proposing utility-based and regret-based correction terms to account for similarity across alternatives, (ii) analyzing the variation of choice set probabilities with the choice set composition, and (iii...

  16. Spartan random processes in time series modeling

    Science.gov (United States)

    Žukovič, M.; Hristopulos, D. T.

    2008-06-01

    A Spartan random process (SRP) is used to estimate the correlation structure of time series and to predict (interpolate and extrapolate) the data values. SRPs are motivated from statistical physics, and they can be viewed as Ginzburg-Landau models. The temporal correlations of the SRP are modeled in terms of ‘interactions’ between the field values. Model parameter inference employs the computationally fast modified method of moments, which is based on matching sample energy moments with the respective stochastic constraints. The parameters thus inferred are then compared with those obtained by means of the maximum likelihood method. The performance of the Spartan predictor (SP) is investigated using real time series of the quarterly S&P 500 index. SP prediction errors are compared with those of the Kolmogorov-Wiener predictor. Two predictors, one of which is explicit, are derived and used for extrapolation. The performance of the predictors is similarly evaluated.

  17. Network Bandwidth Utilization Forecast Model on High Bandwidth Network

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Wucherl; Sim, Alex

    2014-07-07

    With the increasing number of geographically distributed scientific collaborations and the scale of the data size growth, it has become more challenging for users to achieve the best possible network performance on a shared network. We have developed a forecast model to predict expected bandwidth utilization for high-bandwidth wide area networks. The forecast model can improve the efficiency of resource utilization and scheduling of data movements on high-bandwidth networks to accommodate the ever increasing data volume for large-scale scientific data applications. A univariate model is developed with STL and ARIMA on SNMP path utilization data. Compared with a traditional approach such as the Box-Jenkins methodology, our forecast model reduces computation time by 83.2%. It also shows resilience against abrupt network usage changes. The accuracy of the forecast model is within the standard deviation of the monitored measurements.
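
    The STL-plus-ARIMA approach named here can be sketched with statsmodels' STLForecast helper, which decomposes the series, fits the given model to the seasonally adjusted component, and recombines the forecasts. The synthetic hourly series, its daily cycle, and the ARIMA order below are invented stand-ins for the SNMP measurements.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.arima.model import ARIMA
        from statsmodels.tsa.forecasting.stl import STLForecast

        # hypothetical hourly path-utilization series with a daily cycle,
        # standing in for the SNMP measurements used in the paper
        rng = np.random.default_rng(1)
        idx = pd.date_range("2014-01-01", periods=24 * 28, freq="h")
        cycle = 40 + 25 * np.sin(2 * np.pi * np.arange(len(idx)) / 24)
        util = pd.Series(cycle + rng.normal(0, 5, len(idx)), index=idx)

        # STL strips the (daily) seasonal component; ARIMA models the
        # seasonally adjusted remainder, and forecasts are recombined
        stlf = STLForecast(util, ARIMA, period=24, model_kwargs={"order": (1, 1, 1)})
        res = stlf.fit()
        print(res.forecast(24))   # next-day utilization forecast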

  18. Network bandwidth utilization forecast model on high bandwidth networks

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Wuchert (William) [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sim, Alex [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-03-30

    With the increasing number of geographically distributed scientific collaborations and the scale of the data size growth, it has become more challenging for users to achieve the best possible network performance on a shared network. We have developed a forecast model to predict expected bandwidth utilization for high-bandwidth wide area network. The forecast model can improve the efficiency of resource utilization and scheduling data movements on high-bandwidth network to accommodate ever increasing data volume for large-scale scientific data applications. Univariate model is developed with STL and ARIMA on SNMP path utilization data. Compared with traditional approach such as Box-Jenkins methodology, our forecast model reduces computation time by 83.2%. It also shows resilience against abrupt network usage change. The accuracy of the forecast model is within the standard deviation of the monitored measurements.

  19. Kinetic models of cell growth, substrate utilization and bio ...

    African Journals Online (AJOL)

    Bio-decolorization kinetic studies of distillery effluent in a batch culture were conducted using Aspergillus fumigatus. A simple model was proposed using the Logistic Equation for the growth, and Luedeking-Piret kinetics for bio-decolorization and for substrate utilization. The proposed models appeared to provide a suitable ...
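
    The named model components are standard and easy to state. A small sketch, with purely illustrative parameter values, couples the logistic growth equation to Luedeking-Piret kinetics for product formation (here, decolorization) and substrate utilization:

        import numpy as np
        from scipy.integrate import solve_ivp

        # illustrative parameters only; the paper's fitted values are not given here
        mu_max, X_max = 0.25, 5.0        # 1/h, g/L
        alpha, beta = 1.2, 0.05          # growth- and non-growth-associated product terms
        gamma, delta = 2.0, 0.08         # analogous terms for substrate consumption

        def rhs(t, y):
            X, P, S = y
            dX = mu_max * X * (1 - X / X_max)        # logistic growth
            dP = alpha * dX + beta * X               # Luedeking-Piret product formation
            dS = -(gamma * dX + delta * X)           # substrate utilization
            return [dX, dP, dS]

        sol = solve_ivp(rhs, (0, 48), [0.1, 0.0, 20.0])
        print(sol.y[:, -1])   # biomass, product, substrate at t = 48 h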

  20. Kinetic Models with Randomly Perturbed Binary Collisions

    Science.gov (United States)

    Bassetti, Federico; Ladelli, Lucia; Toscani, Giuseppe

    2011-02-01

    We introduce a class of Kac-like kinetic equations on the real line, with general random collisional rules which, in some special cases, identify models for granular gases with a background heat bath (Carrillo et al. in Discrete Contin. Dyn. Syst. 24(1):59-81, 2009), and models for wealth redistribution in an agent-based market (Bisi et al. in Commun. Math. Sci. 7:901-916, 2009). Conditions on these collisional rules which guarantee both the existence and uniqueness of equilibrium profiles and their main properties are found. The characterization of these stationary states is of independent interest, since we show that they are stationary solutions of different evolution problems, both in the kinetic theory of rarefied gases (Cercignani et al. in J. Stat. Phys. 105:337-352, 2001; Villani in J. Stat. Phys. 124:781-822, 2006) and in the econophysical context (Bisi et al. in Commun. Math. Sci. 7:901-916, 2009).

  1. Bridges in the random-cluster model

    Directory of Open Access Journals (Sweden)

    Eren Metin Elçi

    2016-02-01

    The random-cluster model, a correlated bond percolation model, unifies a range of important models of statistical mechanics in one description, including independent bond percolation, the Potts model and uniform spanning trees. By introducing a classification of edges based on their relevance to the connectivity we study the stability of clusters in this model. We prove several exact relations for general graphs that allow us to derive unambiguously the finite-size scaling behavior of the density of bridges and non-bridges. For percolation, we are also able to characterize the point for which clusters become maximally fragile and show that it is connected to the concept of the bridge load. Combining our exact treatment with further results from conformal field theory, we uncover a surprising behavior of the (normalized) variance of the number of (non-)bridges, showing that it diverges in two dimensions below the value 4cos²(π/√3) = 0.2315891⋯ of the cluster coupling q. Finally, we show that a partial or complete pruning of bridges from clusters enables estimates of the backbone fractal dimension that are much less encumbered by finite-size corrections than more conventional approaches.

  2. Random graph models for dynamic networks

    Science.gov (United States)

    Zhang, Xiao; Moore, Cristopher; Newman, Mark E. J.

    2017-10-01

    Recent theoretical work on the modeling of network structure has focused primarily on networks that are static and unchanging, but many real-world networks change their structure over time. There exist natural generalizations to the dynamic case of many static network models, including the classic random graph, the configuration model, and the stochastic block model, where one assumes that the appearance and disappearance of edges are governed by continuous-time Markov processes with rate parameters that can depend on properties of the nodes. Here we give an introduction to this class of models, showing for instance how one can compute their equilibrium properties. We also demonstrate their use in data analysis and statistical inference, giving efficient algorithms for fitting them to observed network data using the method of maximum likelihood. This allows us, for example, to estimate the time constants of network evolution or infer community structure from temporal network data using cues embedded both in the probabilities over time that node pairs are connected by edges and in the characteristic dynamics of edge appearance and disappearance. We illustrate these methods with a selection of applications, both to computer-generated test networks and real-world examples.
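
    A toy discretized simulation of the simplest such model, in which each absent edge appears at rate lam and each present edge disappears at rate mu (so the equilibrium edge density approaches lam/(lam + mu)), illustrates the setup; the rates below are arbitrary.

        import numpy as np

        rng = np.random.default_rng(2)

        def simulate_dynamic_graph(n, lam, mu, t_max, dt=0.01):
            # each absent edge appears w.p. ~ lam*dt, each present one vanishes w.p. ~ mu*dt
            A = np.zeros((n, n), dtype=bool)
            iu = np.triu_indices(n, k=1)
            densities = []
            for _ in range(int(t_max / dt)):
                present = A[iu]
                flip_on = (~present) & (rng.random(present.size) < lam * dt)
                flip_off = present & (rng.random(present.size) < mu * dt)
                present = (present | flip_on) & ~flip_off
                A[iu] = present
                A.T[iu] = present            # keep the adjacency matrix symmetric
                densities.append(present.mean())
            return A, densities

        A, dens = simulate_dynamic_graph(n=100, lam=0.5, mu=2.0, t_max=10)
        print(f"edge density after relaxation: {dens[-1]:.3f} (expect {0.5/2.5:.3f})")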

  3. A prospective randomized trial examining health care utilization in individuals using multiple smartphone-enabled biosensors

    Directory of Open Access Journals (Sweden)

    Cinnamon S. Bloss

    2016-01-01

    Background. Mobile health and digital medicine technologies are becoming increasingly used by individuals with common, chronic diseases to monitor their health. Numerous devices, sensors, and apps are available to patients and consumers, some of which have been shown to lead to improved health management and health outcomes. However, no randomized controlled trials have been conducted which examine health care costs, and most have failed to provide study participants with a truly comprehensive monitoring system. Methods. We conducted a prospective randomized controlled trial of adults who had submitted a 2012 health insurance claim associated with hypertension, diabetes, and/or cardiac arrhythmia. The intervention involved receipt of one or more mobile devices that corresponded to their condition(s) (hypertension: Withings Blood Pressure Monitor; diabetes: Sanofi iBGStar Blood Glucose Meter; arrhythmia: AliveCor Mobile ECG) and an iPhone with linked tracking applications for a period of 6 months; the control group received a standard disease management program. Moreover, intervention study participants received access to an online health management system which provided participants detailed device tracking information over the course of the study. This was a monitoring system designed by leveraging collaborations with device manufacturers, a connected health leader, a health care provider, and an employee wellness program, making it both unique and inclusive. We hypothesized that health resource utilization with respect to health insurance claims may be influenced by the monitoring intervention. We also examined health self-management. Results & Conclusions. There was little evidence of differences in health care costs or utilization as a result of the intervention. Furthermore, we found evidence that the control and intervention groups were equivalent with respect to most health care utilization outcomes. This result suggests there are not large

  4. Utility of Social Modeling for Proliferation Assessment - Preliminary Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Coles, Garill A.; Gastelum, Zoe N.; Brothers, Alan J.; Thompson, Sandra E.

    2009-06-01

    This Preliminary Assessment draft report will present the results of a literature search and preliminary assessment of the body of research, analysis methods, models and data deemed to be relevant to the Utility of Social Modeling for Proliferation Assessment research. This report will provide: 1) a description of the problem space and the kinds of information pertinent to the problem space, 2) a discussion of key relevant or representative literature, 3) a discussion of models and modeling approaches judged to be potentially useful to the research, and 4) the next steps of this research that will be pursued based on this preliminary assessment. This draft report represents a technical deliverable for the NA-22 Simulations, Algorithms, and Modeling (SAM) program. Specifically this draft report is the Task 1 deliverable for project PL09-UtilSocial-PD06, Utility of Social Modeling for Proliferation Assessment. This project investigates non-traditional use of social and cultural information to improve nuclear proliferation assessment, including nonproliferation assessment, proliferation resistance assessments, safeguards assessments and other related studies. These assessments often use and create technical information about the State’s posture towards proliferation, the vulnerability of a nuclear energy system to an undesired event, and the effectiveness of safeguards. This project will find and fuse social and technical information by explicitly considering the role of cultural, social and behavioral factors relevant to proliferation. The aim of this research is to describe and demonstrate if and how social science modeling has utility in proliferation assessment.

  5. Economic Valuation on Change of Tourism Quality in Rawapening, Indonesia: An Application of Random Utility Method

    Science.gov (United States)

    Subanti, S.; Irawan, B. R. M. B.; Sasongko, G.; Hakim, A. R.

    2017-04-01

    This study aims to determine the profit (or loss) earned by economic actors in tourism activities if the condition or quality of tourism in Rawapening is improved (or deteriorates). The change in condition or quality is characterized by travel expenses, the natural environment, Japanese cultural performances, and traditional markets. The method used to measure the change in economic benefit or loss is a random utility approach. The study found that travel cost, the natural environment, Japanese cultural performances, and traditional markets are significant factors in respondents' preferences for a change of tourism condition. The value of compensation received by visitors as a result of improved conditions is 2,932 billion, while for worsened conditions it is 2,628 billion. The recommendation of this study is that the local government should consider environmental factors in the formulation of tourism development in Rawapening.

  6. Electrical utilities model for determining electrical distribution capacity

    Energy Technology Data Exchange (ETDEWEB)

    Fritz, R. L.

    1997-09-03

    In its simplest form, this model was designed to obtain meaningful data on the current state of the Site's electrical transmission and distribution assets, and to turn this vast collection of data into useful information. The resulting product is an Electrical Utilities Model for Determining Electrical Distribution Capacity which provides: the current state of the electrical transmission and distribution systems; critical Hanford Site needs based on outyear planning documents; and a decision factor model. This model will enable Electrical Utilities management to improve forecasting requirements for service levels, budget, schedule, scope, and staffing, and to recommend the best path forward to satisfy customer demands at minimum risk and least cost to the government. A dynamic document, the model will be updated annually to reflect changes in Hanford Site activities.

  7. The changing utility workforce and the emergence of building information modeling in utilities

    Energy Technology Data Exchange (ETDEWEB)

    Saunders, A. [Autodesk Inc., San Rafael, CA (United States)

    2010-07-01

    Utilities are faced with the extensive replacement of a workforce that is now reaching retirement age. New personnel will have varying skill levels and different expectations in relation to design tools. This paper discussed methods of facilitating knowledge transfer from the retiring workforce to new staff using rules-based design software. It was argued that while nothing can replace the experiential knowledge of long-term engineers, software with built-in validations can accelerate training and building information modelling (BIM) processes. Younger personnel will expect a user interface paradigm that is based on their past gaming and work experiences. Visualization, simulation, and modelling approaches were reviewed. 3 refs.

  8. Maximizing the model for Discounted Stream of Utility from ...

    African Journals Online (AJOL)

    Osagiede et al. (2009) considered an analytic model for maximizing a discounted stream of utility from consumption when the rate of production is linear. A solution was provided up to a level where methods of solving ordinary differential equations could be applied, but they left off there as a result of the mathematical complexity ...

  9. Utilizing Mind-Maps for Information Retrieval and User Modelling

    OpenAIRE

    Beel, Joeran; Langer, Stefan; Genzmehr, Marcel; Gipp, Bela

    2014-01-01

    Mind-maps have been widely neglected by the information retrieval (IR) community. However, there are an estimated two million active mind-map users, who create 5 million mind-maps every year, of which a total of 300,000 is publicly available. We believe this to be a rich source for information retrieval applications, and present eight ideas on how mind-maps could be utilized by them. For instance, mind-maps could be utilized to generate user models for recommender systems or expert search, or...

  10. A randomized trial of the clinical utility of genetic testing for obesity: design and implementation considerations.

    Science.gov (United States)

    Wang, Catharine; Gordon, Erynn S; Stack, Catharine B; Liu, Ching-Ti; Norkunas, Tricia; Wawak, Lisa; Christman, Michael F; Green, Robert C; Bowen, Deborah J

    2014-02-01

    Obesity rates in the United States have escalated in recent decades and present a major challenge in public health prevention efforts. Currently, testing to identify genetic risk for obesity is readily available through several direct-to-consumer companies. Despite the availability of this type of testing, there is a paucity of evidence as to whether providing people with personal genetic information on obesity risk will facilitate or impede desired behavioral responses. We describe the key issues in the design and implementation of a randomized controlled trial examining the clinical utility of providing genetic risk information for obesity. Participants are being recruited from the Coriell Personalized Medicine Collaborative, an ongoing, longitudinal research cohort study designed to determine the utility of personal genome information in health management and clinical decision making. The primary focus of the ancillary Obesity Risk Communication Study is to determine whether genetic risk information added value to traditional communication efforts for obesity, which are based on lifestyle risk factors. The trial employs a 2 × 2 factorial design in order to examine the effects of providing genetic risk information for obesity, alone or in combination with lifestyle risk information, on participants' psychological responses, behavioral intentions, health behaviors, and weight. The factorial design generated four experimental arms based on communication of estimated risk to participants: (1) no risk feedback (control), (2) genetic risk only, (3) lifestyle risk only, and (4) both genetic and lifestyle risk (combined). Key issues in study design pertained to the selection of algorithms to estimate lifestyle risk and determination of information to be provided to participants assigned to each experimental arm to achieve a balance between clinical standards and methodological rigor. Following the launch of the trial in September 2011, implementation challenges

  11. Percutaneous laser disc decompression versus microdiscectomy for sciatica: Cost utility analysis alongside a randomized controlled trial.

    Science.gov (United States)

    van den Akker-van Marle, M Elske; Brouwer, Patrick A; Brand, Ronald; Koes, Bart; van den Hout, Wilbert B; van Buchem, Mark A; Peul, Wilco C

    2017-10-01

    Background Percutaneous laser disc decompression (PLDD) for patients with lumbar disc herniation is believed to be cheaper than surgery. However, cost-effectiveness has never been studied. Materials and Methods A cost utility analysis was performed alongside a randomized controlled trial comparing PLDD and conventional surgery. Patients reported their quality of life using the EuroQol five dimensions questionnaire (EQ-5D), 36-item short form health survey (SF-36 and derived SF-6D) and a visual analogue scale (VAS). Using cost diaries patients reported health care use, non-health care use and hours of absenteeism from work. The 1-year societal costs were compared with 1-year quality adjusted life years (QALYs) based on the United States (US) EQ-5D. Sensitivity analyses were carried out on the use of different utility measures (Netherland (NL) EQ-5D, SF-6D, or VAS) and on the perspective (societal or healthcare). Results On the US EQ-5D, conventional surgery provided a non-significant gain in QALYs of 0.033 (95% confidence interval (CI) -0.026 to 0.093) in the first year. PLDD resulted in significantly lower healthcare costs (difference €1771, 95% CI €303 to €3238) and non-significantly lower societal costs (difference €2379, 95% CI -€2860 to €7618). For low values of the willingness to pay for a QALY, the probability of being cost-effective is in favor of PLDD. For higher values of the willingness to pay, between €30,000 and €70,000, conventional microdiscectomy becomes favorable. Conclusions From a healthcare perspective PLDD, followed by surgery when needed, results in significantly lower 1-year costs than conventional surgery. From a societal perspective PLDD appears to be an economically neutral innovation.

  12. The Utility of AISA Eagle Hyperspectral Data and Random Forest Classifier for Flower Mapping

    Directory of Open Access Journals (Sweden)

    Elfatih M. Abdel-Rahman

    2015-10-01

    Knowledge of the floral cycle and the spatial distribution and abundance of flowering plants is important for bee health studies to understand the relationship between landscape and bee hive productivity and honey flow. The key objective of this study was to show how AISA Eagle hyperspectral data and random forest (RF) can be optimally utilized to produce flowering and spatially explicit land use/land cover (LULC) maps for a study site in Kenya. AISA Eagle imagery was captured at the early flowering period (January 2014) and at the peak flowering season (February 2013). Data on white and yellow flowering trees as well as LULC classes in the study area were collected and used as ground-truth points. We utilized all 64 AISA Eagle bands and also used variable importance in RF to identify the most important bands in both AISA Eagle data sets. The results showed that flowering was most accurately mapped using the AISA Eagle data from the peak flowering period (85.71%–88.15% overall accuracy for the peak flowering season imagery versus 80.82%–83.67% for the early flowering season). The variable optimization (i.e., variable selection) analysis showed that less than half of the AISA bands (n = 26 for the February 2013 data and n = 21 for the January 2014 data) were important to attain relatively reliable classification accuracies. Our study is an important first step towards the development of operational flower mapping routines and for understanding the relationship between flowering and bees' foraging behavior.
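
    The band-selection step can be illustrated with scikit-learn's impurity-based variable importance; the synthetic pixels below stand in for the AISA Eagle imagery and ground-truth labels, and the band indices carrying signal are chosen arbitrarily.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(3)

        # stand-in for 64-band pixels: only a few "bands" carry the flowering
        # signal, mimicking the variable-selection setting of the paper
        n_pixels, n_bands = 1000, 64
        X = rng.normal(size=(n_pixels, n_bands))
        y = (X[:, 10] + 0.8 * X[:, 30] + 0.5 * rng.normal(size=n_pixels) > 0).astype(int)

        rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
        rf.fit(X, y)
        print("OOB accuracy:", rf.oob_score_)
        # rank bands by impurity-based variable importance, as used for band selection
        top = np.argsort(rf.feature_importances_)[::-1][:5]
        print("most important bands:", top)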

  13. Improving surgeon utilization in an orthopedic department using simulation modeling

    Directory of Open Access Journals (Sweden)

    Simwita YW

    2016-10-01

    Yusta W Simwita, Berit I Helgheim (Department of Logistics, Molde University College, Molde, Norway). Purpose: Worldwide more than two billion people lack appropriate access to surgical services due to the mismatch between existing human resources and patient demands. Improving utilization of the existing workforce capacity can reduce the gap between surgical demand and available workforce capacity. In this paper, the authors use discrete event simulation to explore the care process at an orthopedic department. Our main focus is improving utilization of surgeons while minimizing patient wait time. Methods: The authors collaborated with orthopedic department personnel to map the current operations of the orthopedic care process in order to identify factors that influence poor surgeon utilization and high patient waiting time. The authors used an observational approach to collect data. The developed model was validated by comparing the simulation output with the actual patient data that were collected from the studied orthopedic care process. The authors developed a proposal scenario to show how to improve surgeon utilization. Results: The simulation results showed that if ancillary services could be performed before the start of clinic examination services, the orthopedic care process could be highly improved, that is, improved surgeon utilization and reduced patient waiting time. Simulation results demonstrate that with improved surgeon utilization, up to a 55% increase in future demand can be accommodated without patients exceeding the current waiting time at this clinic, thus improving patient access to health care services. Conclusion: This study shows how simulation modeling can be used to improve health care processes. This study was limited to a single care process; however, the findings can be applied to improve other orthopedic care processes with similar operational characteristics. Keywords: waiting time, patient, health care process

  14. CONTROVERSIES REGARDING THE UTILIZATION OF ALTMAN MODEL IN ROMANIA

    Directory of Open Access Journals (Sweden)

    Mihaela ONOFREI

    2012-06-01

    Altman's model was built for U.S. companies, based on the characteristics of that economy. Promising results were obtained in other countries such as Britain, Australia, Canada, Finland, Germany, Israel, Norway, India and South Korea, with predictability above 80%. However, these countries have Anglo-Saxon legal systems and highly developed economic environments. While there is no reason why this model cannot be applied to companies across the whole world, we recognize that each economic environment has its own peculiarities; therefore, local forecasting models could be better than American models, at least in their testing phase. But is the Altman model suitable for the Romanian economy? Taking this into account, the purpose of this paper is to test the Altman model on the Romanian market.

  15. Recent advances in modeling nutrient utilization in ruminants.

    Science.gov (United States)

    Kebreab, E; Dijkstra, J; Bannink, A; France, J

    2009-04-01

    Mathematical modeling techniques have been applied to study various aspects of the ruminant, such as rumen function, postabsorptive metabolism, and product composition. This review focuses on advances made in modeling rumen fermentation and its associated rumen disorders, and energy and nutrient utilization and excretion with respect to environmental issues. Accurate prediction of fermentation stoichiometry has an impact on estimating the type of energy-yielding substrate available to the animal, and the ratio of lipogenic to glucogenic VFA is an important determinant of methanogenesis. Recent advances in modeling VFA stoichiometry offer ways for dietary manipulation to shift the fermentation in favor of glucogenic VFA. Increasing energy to the animal by supplementing with starch can lead to health problems such as subacute rumen acidosis caused by rumen pH depression. Mathematical models have been developed to describe changes in rumen pH and rumen fermentation. Models that relate rumen temperature to rumen pH have also been developed and have the potential to aid in the diagnosis of subacute rumen acidosis. The effect of pH has been studied mechanistically, and in such models, fractional passage rate has a large impact on substrate degradation and microbial efficiency in the rumen and should be an important theme in future studies. The efficiency with which energy is utilized by ruminants has been updated in recent studies. Mechanistic models of N utilization indicate that reducing dietary protein concentration, matching protein degradability to the microbial requirement, and increasing the energy status of the animal will reduce the output of N as waste. Recent mechanistic P models calculate the P requirement by taking into account P recycled through saliva and endogenous losses. Mechanistic P models suggest reducing current P amounts for lactating dairy cattle to at least 0.35% P in the diet, with a potential reduction of up to 1.3 kt/yr. A model that

  16. Estimating overall exposure effects for the clustered and censored outcome using random effect Tobit regression models

    Science.gov (United States)

    Wang, Wei; Griswold, Michael E.

    2016-01-01

    The random effect Tobit model is a regression model that accommodates both left- and/or right-censoring and within-cluster dependence of the outcome variable. Regression coefficients of random effect Tobit models have conditional interpretations on a constructed latent dependent variable and do not provide inference of overall exposure effects on the original outcome scale. Marginalized random effects model (MREM) permits likelihood-based estimation of marginal mean parameters for the clustered data. For random effect Tobit models, we extend the MREM to marginalize over both the random effects and the normal space and boundary components of the censored response to estimate overall exposure effects at population level. We also extend the ‘Average Predicted Value’ method to estimate the model-predicted marginal means for each person under different exposure status in a designated reference group by integrating over the random effects and then use the calculated difference to assess the overall exposure effect. The maximum likelihood estimation is proposed utilizing a quasi-Newton optimization algorithm with Gauss-Hermite quadrature to approximate the integration of the random effects. We use these methods to carefully analyze two real datasets. PMID:27449636
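
    A minimal sketch of the estimation machinery described here follows, assuming a left-censored (at zero) random-intercept Tobit model with simulated data. Gauss-Hermite quadrature integrates the random effect out of the marginal likelihood, as in the abstract, though the authors' full MREM specification is richer; the data-generating values and quadrature order are illustrative.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        def neg_loglik(params, y, X, cluster, n_nodes=15):
            # marginal likelihood of a left-censored (at 0) random-intercept Tobit
            # model; the random intercept is integrated out by Gauss-Hermite quadrature
            k = X.shape[1]
            beta = params[:k]
            sigma = np.exp(params[k])        # residual sd (log-parameterized)
            sigma_b = np.exp(params[k + 1])  # random-intercept sd
            t, w = np.polynomial.hermite.hermgauss(n_nodes)
            b = np.sqrt(2.0) * sigma_b * t   # quadrature abscissae for the intercept
            xb = X @ beta
            ll = 0.0
            for i in np.unique(cluster):
                idx = cluster == i
                mu = xb[idx][:, None] + b[None, :]
                dens = np.where(y[idx][:, None] > 0,
                                norm.pdf(y[idx][:, None], mu, sigma),   # observed
                                norm.cdf(-mu / sigma))                  # censored at 0
                ll += np.log(np.prod(dens, axis=0) @ w / np.sqrt(np.pi))
            return -ll

        rng = np.random.default_rng(4)
        n_clusters, n_per = 100, 5
        cluster = np.repeat(np.arange(n_clusters), n_per)
        X = np.column_stack([np.ones(n_clusters * n_per),
                             rng.normal(size=n_clusters * n_per)])
        b_true = rng.normal(0.0, 0.8, n_clusters)[cluster]
        y_star = X @ np.array([0.5, 1.0]) + b_true + rng.normal(0.0, 1.0, len(b_true))
        y = np.maximum(y_star, 0.0)          # left-censoring at zero

        fit = minimize(neg_loglik, np.zeros(4), args=(y, X, cluster), method="BFGS")
        print("beta:", fit.x[:2], "sigma, sigma_b:", np.exp(fit.x[2:]))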

  17. DISPERSION POLYMERIZATION OF STYRENE IN SUPERCRITICAL CARBON DIOXIDE UTILIZING RANDOM COPOLYMERS INCLUDING FLUORINATED ACRYLATE FOR PREPARING MICRON-SIZE POLYSTYRENE PARTICLES. (R826115)

    Science.gov (United States)

    The dispersion polymerization of styrene in supercritical CO2 utilizing CO2-philic random copolymers was investigated. The resulting high yield of polystyrene particles in the micron-size range was formed using various random copolymers as stabilizers. The p...

  18. A Note on the Correlated Random Coefficient Model

    DEFF Research Database (Denmark)

    Kolodziejczyk, Christophe

    In this note we derive the bias of the OLS estimator for a correlated random coefficient model with one random coefficient, but which is correlated with a binary variable. We provide set-identification to the parameters of interest of the model. We also show how to reduce the bias of the estimator...

  19. The Analysis of Random Effects in Modeling Studies.

    Science.gov (United States)

    Scheirer, C. James; Geller, Sanford E.

    1979-01-01

    Argues that in research on the effects of modeling, models must be analyzed as a random factor in order to avoid a positive bias in the results. The concept of a random factor is discussed, worked examples are provided, and a practical solution to the problem is proposed. (JMB)

  20. Compensatory and non-compensatory multidimensional randomized item response models

    NARCIS (Netherlands)

    Fox, J.P.; Entink, R.K.; Avetisyan, M.

    2014-01-01

    Randomized response (RR) models are often used for analysing univariate randomized response data and measuring population prevalence of sensitive behaviours. There is much empirical support for the belief that RR methods improve the cooperation of the respondents. Recently, RR models have been

  1. A random energy model for size dependence : recurrence vs. transience

    NARCIS (Netherlands)

    Külske, Christof

    1998-01-01

    We investigate the size dependence of disordered spin models having an infinite number of Gibbs measures in the framework of a simplified 'random energy model for size dependence'. We introduce two versions (involving either independent random walks or branching processes), that can be seen as

  2. Modeling of ultrasonic processes utilizing a generic software framework

    Science.gov (United States)

    Bruns, P.; Twiefel, J.; Wallaschek, J.

    2017-06-01

    Modeling of ultrasonic processes is typically characterized by a high degree of complexity. Different domains and size scales must be regarded, so that it is rather difficult to build up a single detailed overall model. Developing partial models is a common approach to overcome this difficulty. In this paper a generic but simple software framework is presented which allows coupling arbitrary partial models by slave modules with well-defined interfaces and a master module for coordination. Two examples are given to demonstrate the developed framework. The first one is the parameterization of a load model for ultrasonically-induced cavitation. The piezoelectric oscillator, its mounting, and the process load are described individually by partial models. These partial models are then coupled using the framework. The load model is composed of spring-damper elements which are parameterized by experimental results. In the second example, the ideal mounting position for an oscillator utilized in ultrasonic assisted machining of stone is determined. Partial models for the ultrasonic oscillator, its mounting, the simplified contact process, and the workpiece's material characteristics are presented. For both applications input and output variables are defined to meet the requirements of the framework's interface.
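
    The coupling idea can be sketched generically: slave modules expose a uniform stepping interface and a master module circulates coupling variables among them each time step. The interface names and the toy dynamics below are invented for illustration and are not the authors' actual framework.

        from abc import ABC, abstractmethod

        class SlaveModule(ABC):
            @abstractmethod
            def step(self, dt: float, inputs: dict) -> dict:
                """Advance the partial model by dt; return its coupling outputs."""

        class Oscillator(SlaveModule):
            def __init__(self):
                self.velocity = 0.0
            def step(self, dt, inputs):
                # toy dynamics: the process load damps the oscillator velocity
                self.velocity += dt * (1.0 - inputs.get("load_force", 0.0))
                return {"velocity": self.velocity}

        class ProcessLoad(SlaveModule):
            def step(self, dt, inputs):
                # spring-damper-style load driven by the oscillator velocity
                return {"load_force": 0.2 * inputs.get("velocity", 0.0)}

        class Master:
            def __init__(self, modules):
                self.modules = modules
            def run(self, dt, n_steps):
                shared = {}           # coupling variables exchanged via the interfaces
                for _ in range(n_steps):
                    for m in self.modules:
                        shared.update(m.step(dt, shared))
                return shared

        print(Master([Oscillator(), ProcessLoad()]).run(dt=1e-3, n_steps=1000))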

  3. Encoding Sequential Information in Semantic Space Models: Comparing Holographic Reduced Representation and Random Permutation

    Directory of Open Access Journals (Sweden)

    Gabriel Recchia

    2015-01-01

    Circular convolution and random permutation have each been proposed as neurally plausible binding operators capable of encoding sequential information in semantic memory. We perform several controlled comparisons of circular convolution and random permutation as means of encoding paired associates as well as encoding sequential information. Random permutations outperformed convolution with respect to the number of paired associates that can be reliably stored in a single memory trace. Performance was equal on semantic tasks when using a small corpus, but random permutations were ultimately capable of achieving superior performance due to their higher scalability to large corpora. Finally, “noisy” permutations in which units are mapped to other units arbitrarily (no one-to-one mapping) perform nearly as well as true permutations. These findings increase the neurological plausibility of random permutations and highlight their utility in vector space models of semantics.

  4. Encoding sequential information in semantic space models: comparing holographic reduced representation and random permutation.

    Science.gov (United States)

    Recchia, Gabriel; Sahlgren, Magnus; Kanerva, Pentti; Jones, Michael N

    2015-01-01

    Circular convolution and random permutation have each been proposed as neurally plausible binding operators capable of encoding sequential information in semantic memory. We perform several controlled comparisons of circular convolution and random permutation as means of encoding paired associates as well as encoding sequential information. Random permutations outperformed convolution with respect to the number of paired associates that can be reliably stored in a single memory trace. Performance was equal on semantic tasks when using a small corpus, but random permutations were ultimately capable of achieving superior performance due to their higher scalability to large corpora. Finally, "noisy" permutations in which units are mapped to other units arbitrarily (no one-to-one mapping) perform nearly as well as true permutations. These findings increase the neurological plausibility of random permutations and highlight their utility in vector space models of semantics.
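
    The two binding operators compared in this work can be demonstrated directly. The sketch below encodes a three-item sequence once with circular convolution against random position vectors (HRR-style) and once with repeated random permutation, then decodes the second position; the dimensionality and encoding details are illustrative choices, not the authors' exact setup.

        import numpy as np

        rng = np.random.default_rng(5)
        d = 2048
        vocab = {w: rng.normal(0, 1/np.sqrt(d), d) for w in "abc"}

        def cconv(x, y):                  # circular convolution: HRR binding
            return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)))

        def involution(x):                # approximate inverse used for HRR unbinding
            return np.concatenate([x[:1], x[-1:0:-1]])

        # HRR: bind each item to a random position vector and superpose
        pos = [rng.normal(0, 1/np.sqrt(d), d) for _ in range(3)]
        trace = sum(cconv(p, vocab[w]) for p, w in zip(pos, "abc"))
        probe = cconv(involution(pos[1]), trace)          # decode position 2
        print({w: round(float(v @ probe), 2) for w, v in vocab.items()})   # 'b' wins

        # random permutation: shift each item by its position and superpose
        perm = rng.permutation(d)
        def permute(x, k):
            for _ in range(k):
                x = x[perm]
            return x
        trace_rp = sum(permute(vocab[w], k + 1) for k, w in enumerate("abc"))
        inv = np.argsort(perm)
        probe_rp = trace_rp[inv][inv]                     # invert the permutation twice
        print({w: round(float(v @ probe_rp), 2) for w, v in vocab.items()})  # 'b' wins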

  5. The Ising model on random lattices in arbitrary dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Bonzom, Valentin, E-mail: vbonzom@perimeterinstitute.ca [Perimeter Institute for Theoretical Physics, 31 Caroline St. N, ON N2L 2Y5, Waterloo (Canada); Gurau, Razvan, E-mail: rgurau@perimeterinstitute.ca [Perimeter Institute for Theoretical Physics, 31 Caroline St. N, ON N2L 2Y5, Waterloo (Canada); Rivasseau, Vincent, E-mail: vincent.rivasseau@gmail.com [Laboratoire de Physique Theorique, CNRS UMR 8627, Universite Paris XI, F-91405 Orsay Cedex (France)

    2012-05-01

    We study analytically the Ising model coupled to random lattices in dimension three and higher. The family of random lattices we use is generated by the large N limit of a colored tensor model generalizing the two-matrix model for Ising spins on random surfaces. We show that, in the continuum limit, the spin system does not exhibit a phase transition at finite temperature, in agreement with numerical investigations. Furthermore we outline a general method to study critical behavior in colored tensor models.

  6. Utility of Small Animal Models of Developmental Programming.

    Science.gov (United States)

    Reynolds, Clare M; Vickers, Mark H

    2018-01-01

    Any effective strategy to tackle the global obesity and rising noncommunicable disease epidemic requires an in-depth understanding of the mechanisms that underlie these conditions that manifest as a consequence of complex gene-environment interactions. In this context, it is now well established that alterations in the early life environment, including suboptimal nutrition, can result in an increased risk for a range of metabolic, cardiovascular, and behavioral disorders in later life, a process preferentially termed developmental programming. To date, most of the mechanistic knowledge around the processes underpinning development programming has been derived from preclinical research performed mostly, but not exclusively, in laboratory mouse and rat strains. This review will cover the utility of small animal models in developmental programming, the limitations of such models, and potential future directions that are required to fully maximize information derived from preclinical models in order to effectively translate to clinical use.

  7. Orlistat for the treatment of obesity: cost utility model.

    Science.gov (United States)

    Foxcroft, D R

    2005-11-01

    This study aimed to assess the cost utility of orlistat treatment based on (i) criteria from recent guidance from the National Institute for Clinical Excellence (NICE) for England and Wales (treatment discontinued if weight loss < 5% at 3 months; and < 10% at 6 months); and (ii) alternative criteria from the European Agency for the Evaluation of Medicinal Products (EMEA) licence for orlistat prescription in the European Community (treatment discontinued if weight loss < 5% at 3 months). Subjects were 1398 obese individuals who participated in three large European Phase III trials of orlistat treatment for adults (BMI: 28-47 kg m⁻²). Measures were: response to treatment in orlistat and placebo treatment groups; health benefit expressed as quality adjusted life years (QALYs) gained associated with weight loss; costs associated with orlistat treatment. In the cost utility model with multiway sensitivity analysis, the cost/QALY gained using the NICE criteria was estimated to be 24,431 pounds (sensitivity analysis range: 10,856 to 77,197 pounds). The cost/QALY gained using the alternative EMEA criteria was estimated to be 19,005 pounds (range: 8,840 to 57,798 pounds). In conclusion, NICE guidance for the continued use of orlistat was supported in this updated cost utility model, comparing favourably with a previously published estimate of 45,881 pounds per QALY gained. Moreover, the value for money of orlistat treatment is improved further if EMEA treatment criteria for continued orlistat treatment are applied. The EMEA criteria should be considered in any future changes to the NICE guidance or in guidance issued by similar agencies.

  8. Utilization of services in a randomized trial testing phone- and web-based interventions for smoking cessation.

    Science.gov (United States)

    Zbikowski, Susan M; Jack, Lisa M; McClure, Jennifer B; Deprey, Mona; Javitz, Harold S; McAfee, Timothy A; Catz, Sheryl L; Richards, Julie; Bush, Terry; Swan, Gary E

    2011-05-01

    Phone counseling has become standard for behavioral smoking cessation treatment. Newer options include Web and integrated phone-Web treatment. No prior research, to our knowledge, has systematically compared the effectiveness of these three treatment modalities in a randomized trial. Understanding how utilization varies by mode, the impact of utilization on outcomes, and predictors of utilization across each mode could lead to improved treatments. One thousand two hundred and two participants were randomized to phone, Web, or combined phone-Web cessation treatment. Services varied by modality and were tracked using automated systems. All participants received 12 weeks of varenicline, printed guides, an orientation call, and access to a phone supportline. Self-report data were collected at baseline and 6-month follow-up. Overall, participants utilized phone services more often than the Web-based services. Among treatment groups with Web access, a significant proportion logged in only once (37% phone-Web, 41% Web), and those in the phone-Web group logged in less often than those in the Web group (mean = 2.4 vs. 3.7, p = .0001). Use of the phone also was correlated with increased use of the Web. In multivariate analyses, greater use of the phone- or Web-based services was associated with higher cessation rates. Finally, older age and the belief that certain treatments could improve success were consistent predictors of greater utilization across groups. Other predictors varied by treatment group. Opportunities for enhancing treatment utilization exist, particularly for Web-based programs. Increasing utilization more broadly could result in better overall treatment effectiveness for all intervention modalities.

  9. ON THE UTILITY OF SORNETTE’S CRASH PREDICTION MODEL

    Directory of Open Access Journals (Sweden)

    IOAN ROXANA

    2015-10-01

    Stock market crashes have been a constant subject of interest among capital market researchers. Crashes' behavior has been largely studied, but the problem that remained unsolved until recently was that of a prediction algorithm. Stock market crashes are complex and global events, rarely taking place on a single national capital market. They usually occur simultaneously on several if not most capital markets, implying important losses for investors. Investments made within various stock markets have an extremely important role within the global economy, influencing people's lives in many ways. Presently, stock market crashes are studied with great interest, not only because of the necessity of a deep understanding of the phenomenon, but also because these crashes belong to the so-called category of "extreme phenomena". Those are the main reasons that led scientists to try to build mathematical models for crash prediction. Such a model was built by Professor Didier Sornette, inspired by and adapted from an earthquake detection model. Still, the model keeps many characteristics of its predecessor, not being fully adapted to economic realities and demands, or to the stock market's characteristics. This paper attempts to test the utility of the model in predicting the Bucharest Stock Exchange's price falls, as well as the possibility of it being successfully used by investors.
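
    The abstract does not reproduce the model's functional form; the log-periodic power law (LPPL) commonly associated with Sornette's crash prediction work expresses the pre-crash log-price as a power law decorated with log-periodic oscillations approaching a critical time tc. The sketch below, with made-up parameter values, is one plausible reading rather than the paper's exact specification.

        import numpy as np

        def lppl(t, tc, m, omega, A, B, C, phi):
            # ln p(t) = A + B (tc-t)^m + C (tc-t)^m cos(omega ln(tc-t) - phi)
            dt = tc - t
            return A + B * dt**m + C * dt**m * np.cos(omega * np.log(dt) - phi)

        # illustrative bubble trajectory approaching a critical time tc = 100;
        # in practice tc and the other parameters are fitted by nonlinear least squares
        t = np.linspace(0.0, 99.0, 500)
        log_price = lppl(t, tc=100.0, m=0.35, omega=8.0, A=5.0, B=-0.8, C=0.08, phi=1.0)
        print(log_price[:3], log_price[-3:])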

  10. The Model of E-technology utilization for SMEs

    Directory of Open Access Journals (Sweden)

    Roman Malo

    2012-01-01

    Today, small and medium-sized enterprises implement e-technology in order to improve business performance and competitiveness. To achieve the maximum level of benefit from e-technology implementation, it is important to understand all suitable e-activities, their possibilities, and the opportunities they present for business. In this paper, a general model of e-technology utilization, in the form of partial e-technologies, is described. This general model was proposed for a better understanding of the possibilities small and medium-sized enterprises can reach by using e-technologies. Based on a synthesis of the related literature, an analysis of data from available research, surveys of enterprises, and a qualitative study of referential business subjects, we identified several e-activities which are dominant specifically for SMEs, the external subjects connected to these activities (customers, suppliers, partners), and the groups of IS/ICT necessary to support the identified e-activities. All this information was summarized and adopted in the general model. The model provides a basic platform for our research within e-technology and a simple formal structure for the analysis and comparison of real businesses in this scope. Although the SME area is considered in this paper, the model itself is general.

  11. A workflow learning model to improve geovisual analytics utility.

    Science.gov (United States)

    Roth, Robert E; Maceachren, Alan M; McCabe, Craig A

    2009-01-01

    INTRODUCTION: This paper describes the design and implementation of the G-EX Portal Learn Module, a web-based, geocollaborative application for organizing and distributing digital learning artifacts. G-EX falls into the broader context of geovisual analytics, a new research area with the goal of supporting visually-mediated reasoning about large, multivariate, spatiotemporal information. Because this information is unprecedented in amount and complexity, GIScientists are tasked with the development of new tools and techniques to make sense of it. Our research addresses the challenge of implementing these geovisual analytics tools and techniques in a useful manner. OBJECTIVES: The objective of this paper is to develop and implement a method for improving the utility of geovisual analytics software. The success of software is measured by its usability (i.e., how easy the software is to use) and utility (i.e., how useful the software is). The usability and utility of software can be improved by refining the software, increasing user knowledge about the software, or both. It is difficult to achieve transparent usability (i.e., software that is immediately usable without training) of geovisual analytics software because of the inherent complexity of the included tools and techniques. In these situations, improving user knowledge about the software through the provision of learning artifacts is as important, if not more so, than iterative refinement of the software itself. Therefore, our approach to improving utility is focused on educating the user. METHODOLOGY: The research reported here was completed in two steps. First, we developed a model for learning about geovisual analytics software. Many existing digital learning models assist only with use of the software to complete a specific task and provide limited assistance with its actual application. To move beyond task-oriented learning about software use, we propose a process-oriented approach to learning based on

  12. Modeling Gene Regulation in Liver Hepatocellular Carcinoma with Random Forests

    National Research Council Canada - National Science Library

    Hilal Kazan

    2016-01-01

    .... We developed a random forest model that incorporates copy-number variation, DNA methylation, transcription factor, and microRNA binding information as features to predict gene expression in HCC...

  13. Generic Model to Send Secure Alerts for Utility Companies

    Directory of Open Access Journals (Sweden)

    Perez–Díaz J.A.

    2010-04-01

    In some industries, such as logistics services, bank services, and others, the use of automated systems that deliver critical business information anytime and anywhere plays an important role in the decision making process. This paper introduces a "Generic model to send secure alerts and notifications", which operates as a middleware between enterprise data sources and its mobile users. This model uses Short Message Service (SMS) as its main mobile messaging technology, but it is open to new types of messaging technologies. Our model is interoperable with existing information systems; it can store any kind of information about alerts or notifications at different levels of granularity; and it offers different types of notifications (as an alert when critical business problems occur, as a notification on a periodical basis, or as a two-way query). Notification rules can be customized by final users according to their preferences. The model provides a security framework in cases where information requires confidentiality, and it is extensible to existing and new messaging technologies (like e-mail, MMS, etc.). It is platform, mobile operator, and hardware independent. Currently, our solution is being used at the Comisión Federal de Electricidad (Mexico's utility company) to deliver secure alerts related to critical events registered in the main power generation plants of our country.

  14. Planning school based sexuality programs utilizing the PRECEDE model.

    Science.gov (United States)

    Rubinson, L; Baillie, L

    1981-04-01

    Both substantive and methodological issues in teenage sexuality were explored. The specific study purposes were: 1) to explore the effectiveness of a community needs assessment in reducing traditional negative parental input and facilitating the use of the schools as public health resources; 2) to identify substantive targets of intervention for curriculum development which would be accepted and supported by both parents and teenagers; and 3) to develop an assessment methodology in which the PRECEDE model would be tested for its utility in identifying educational targets in a complex and multidimensional area such as human sexuality, its categories quantified to identify major differences in perceptions between parents and teenagers, and underlying constructs in teenage sexuality explored. The PRECEDE model provides an assessment of social and epidemiological factors as well as an investigation of the behavioral causes of the outcomes as identified by the community. 50% of the population of 600 families living in a small midwestern community were surveyed: 204 parents and 210 teenagers responded to the survey. The following were among the major findings: 1) a community needs assessment using the PRECEDE model was effective in reducing negative political input and in facilitating the use of the schools as public health resources; 2) there were major needs, especially in the identification of important social problems; and 3) 24% of the teenagers reported they were sexually active while only 10% of the parents perceived their children as being sexually active.

  15. Utilization of random process spectral properties for the calculation of fatigue life under combined loading

    National Research Council Canada - National Science Library

    Svoboda J; Balda M; Fröhlich V

    2009-01-01

    ... of forces and moments of random character. Considering fracture mechanics theory, the damaging of material, both in the micro- and macro-plastic areas, is connected with the rise of plastic deformation and hence with the plastic...

  16. Superstatistical analysis and modelling of heterogeneous random walks

    Science.gov (United States)

    Metzner, Claus; Mark, Christoph; Steinwachs, Julian; Lautscham, Lena; Stadler, Franz; Fabry, Ben

    2015-06-01

    Stochastic time series are ubiquitous in nature. In particular, random walks with time-varying statistical properties are found in many scientific disciplines. Here we present a superstatistical approach to analyse and model such heterogeneous random walks. The time-dependent statistical parameters can be extracted from measured random walk trajectories with a Bayesian method of sequential inference. The distributions and correlations of these parameters reveal subtle features of the random process that are not captured by conventional measures, such as the mean-squared displacement or the step width distribution. We apply our new approach to migration trajectories of tumour cells in two and three dimensions, and demonstrate the superior ability of the superstatistical method to discriminate cell migration strategies in different environments. Finally, we show how the resulting insights can be used to design simple and meaningful models of the underlying random processes.
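
    A toy version of such a heterogeneous random walk is easy to simulate: let the diffusivity switch between two hidden values on a slow timescale and draw locally Gaussian steps. The sliding-window estimator below is a simple stand-in for the paper's Bayesian sequential inference, and all values are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(6)

        # heterogeneous 1D walk: the diffusivity switches between two hidden states
        n, p_switch = 20000, 0.001
        D, state = np.empty(n), 0.1
        for i in range(n):
            if rng.random() < p_switch:
                state = 2.0 if state == 0.1 else 0.1
            D[i] = state
        steps = rng.normal(0.0, np.sqrt(2 * D))        # locally Gaussian increments

        # a sliding-window variance estimate tracks the time-varying diffusivity,
        # which the pooled step-width distribution (heavy-tailed) would conceal
        w = 200
        D_hat = np.convolve(steps**2, np.ones(w) / w, mode="valid") / 2
        print("recovered diffusivity range:", D_hat.min(), D_hat.max())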

  17. Modeling a Packed Bed Reactor Utilizing the Sabatier Process

    Science.gov (United States)

    Shah, Malay G.; Meier, Anne J.; Hintze, Paul E.

    2017-01-01

    A numerical model is being developed using Python which characterizes the conversion and temperature profiles of a packed bed reactor (PBR) that utilizes the Sabatier process; the reaction produces methane and water from carbon dioxide and hydrogen. While the specific kinetics of the Sabatier reaction on the Ru/Al2O3 catalyst pellets are unknown, an empirical reaction rate equation [1] is used for the overall reaction. As this reaction is highly exothermic, proper thermal control is of the utmost importance to ensure maximum conversion and to avoid reactor runaway. It is therefore necessary to determine what wall temperature profile will ensure safe and efficient operation of the reactor. This wall temperature will be maintained by active thermal controls on the outer surface of the reactor. Two cylindrical PBRs are currently being tested experimentally and will be used for validation of the Python model. They are similar in design except that one of them is larger and incorporates a preheat loop by feeding the reactant gas through a pipe along the center of the catalyst bed. The further complexity of adding a preheat pipe to the model to mimic the larger reactor is yet to be implemented and validated; preliminary validation is done using the smaller PBR with no reactant preheating. When mapping experimental values of the wall temperature from the smaller PBR into the Python model, a good approximation of the total conversion and temperature profile has been achieved. A separate CFD model incorporates more complex three-dimensional effects by including the solid catalyst pellets within the domain. The goal is to improve the Python model to the point where the results for other reactor geometries can be reasonably predicted relatively quickly when compared to the much more computationally expensive CFD approach. Once a reactor size is narrowed down using the Python approach, CFD will be used to generate a more thorough prediction of the reactor's performance.
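
    A heavily simplified sketch of such a model is given below. Since the empirical rate equation and reactor parameters are not reproduced in the abstract, the first-order Arrhenius kinetics, all numbers, and the wall-temperature profile are illustrative assumptions; the gas temperature is taken to follow the wall profile, loosely mirroring the validation approach described.

        import numpy as np
        from scipy.integrate import solve_ivp

        # minimal plug-flow sketch of a Sabatier packed bed: CO2 + 4 H2 -> CH4 + 2 H2O
        R = 8.314                      # J/(mol K)
        k0, Ea = 5.0e4, 6.5e4          # assumed rate constant (1/s) and activation energy
        u = 0.3                        # assumed superficial velocity, m/s

        def T_wall(z):                 # stand-in for a measured wall-temperature profile, K
            return 550.0 + 150.0 * np.exp(-((z - 0.1) / 0.08) ** 2)

        def dX_dz(z, X):
            # first-order-in-CO2 mass balance along the bed, gas at wall temperature
            k = k0 * np.exp(-Ea / (R * T_wall(z)))
            return k * (1.0 - X) / u

        sol = solve_ivp(dX_dz, (0.0, 0.5), [0.0], dense_output=True)
        print(f"outlet CO2 conversion: {sol.y[0, -1]:.2f}")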

  18. A mangrove creek restoration plan utilizing hydraulic modeling.

    Science.gov (United States)

    Marois, Darryl E; Mitsch, William J

    2017-11-01

    Despite the valuable ecosystem services provided by mangrove ecosystems they remain threatened around the globe. Urban development has been a primary cause for mangrove destruction and deterioration in south Florida USA for the last several decades. As a result, the restoration of mangrove forests has become an important topic of research. Using field sampling and remote-sensing we assessed the past and present hydrologic conditions of a mangrove creek and its connected mangrove forest and brackish marsh systems located on the coast of Naples Bay in southwest Florida. We concluded that the hydrology of these connected systems had been significantly altered from its natural state due to urban development. We propose here a mangrove creek restoration plan that would extend the existing creek channel 1.1 km inland through the adjacent mangrove forest and up to an adjacent brackish marsh. We then tested the hydrologic implications using a hydraulic model of the mangrove creek calibrated with tidal data from Naples Bay and water levels measured within the creek. The calibrated model was then used to simulate the resulting hydrology of our proposed restoration plan. Simulation results showed that the proposed creek extension would restore a twice-daily flooding regime to a majority of the adjacent mangrove forest and that there would still be minimal tidal influence on the brackish marsh area, keeping its salinity at an acceptable level. This study demonstrates the utility of combining field data and hydraulic modeling to aid in the design of mangrove restoration plans.

  19. Utilization of Large Scale Surface Models for Detailed Visibility Analyses

    Science.gov (United States)

    Caha, J.; Kačmařík, M.

    2017-11-01

    This article demonstrates the utilization of large scale surface models with small spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large scale data for visibility analyses on the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only the classic Boolean visibility that is usually determined within GIS, but also so-called extended viewsheds that aim to provide more information about visibility. The case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). The multiple Boolean viewshed analysis and the global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. Besides that, the extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is shown. The case study proved that large scale models are an appropriate data source for visibility analyses on the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.

  20. Random-Field Model of a Cooper Pair Insulator

    Science.gov (United States)

    Proctor, Thomas; Chudnovsky, Eugene; Garanin, Dmitry

    2013-03-01

    The model of a disordered superconducting film with quantum phase fluctuations is mapped on a random-field XY spin model in 2+1 dimensions. Analytical studies within continuum field theory, supported by our recent numerical calculations on discrete lattices, show the onset of the low-temperature Cooper pair insulator phase. The constant external field in the random-field spin model maps on the Josephson coupling between the disordered film and a bulk superconductor. Such a coupling, if sufficiently strong, restores superconductivity in the film. This provides an experimental test for the quantum fluctuation model of a superinsulator.

  1. A generalized model via random walks for information filtering

    Energy Technology Data Exchange (ETDEWEB)

    Ren, Zhuo-Ming, E-mail: zhuomingren@gmail.com [Department of Physics, University of Fribourg, Chemin du Musée 3, CH-1700, Fribourg (Switzerland); Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, ChongQing, 400714 (China); Kong, Yixiu [Department of Physics, University of Fribourg, Chemin du Musée 3, CH-1700, Fribourg (Switzerland); Shang, Ming-Sheng, E-mail: msshang@cigit.ac.cn [Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, ChongQing, 400714 (China); Zhang, Yi-Cheng [Department of Physics, University of Fribourg, Chemin du Musée 3, CH-1700, Fribourg (Switzerland)

    2016-08-06

    There could exist a simple, general mechanism lurking beneath collaborative filtering and the interdisciplinary physics approaches that have been successfully applied to online E-commerce platforms. Motivated by this idea, we propose a generalized model employing the dynamics of the random walk in bipartite networks. Taking into account the degree information, the proposed generalized model can deduce collaborative filtering, the interdisciplinary physics approaches, and even enormous expansions of them. Furthermore, we analyze the generalized model with single and hybrid degree information on the process of random walk in bipartite networks, and propose a possible strategy of using the hybrid degree information for different popular objects to move toward promising precision of the recommendation. - Highlights: • We propose a generalized recommendation model employing the random walk dynamics. • The proposed model with single and hybrid degree information is analyzed. • A strategy with the hybrid degree information improves precision of recommendation.
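    As a rough sketch of the degree-weighted random walk such models build on, the snippet below implements the classic two-step probabilistic spreading on a user-object bipartite adjacency matrix, with a single exponent `lam` standing in for hybrid degree information; the exponent and its placement are illustrative assumptions, not the authors' exact formulation.

    ```python
    import numpy as np

    def random_walk_scores(A, user, lam=0.0):
        """Two-step resource spreading on a users x objects 0/1 matrix A.
        lam = 0 gives plain probabilistic spreading; lam > 0 penalizes
        popular objects, a crude stand-in for hybrid degree information."""
        ku = np.maximum(A.sum(axis=1), 1).astype(float)   # user degrees
        ko = np.maximum(A.sum(axis=0), 1).astype(float)   # object degrees
        f = A[user].astype(float)         # unit resource on collected objects
        u = A @ (f / ko)                  # objects spread resource to users
        s = (A.T @ (u / ku)) / ko ** lam  # users spread back, degree-penalized
        s[A[user] > 0] = -np.inf          # never re-recommend collected items
        return s

    A = np.array([[1, 1, 0, 0],
                  [0, 1, 1, 0],
                  [0, 1, 0, 1]])
    print(np.argsort(random_walk_scores(A, user=0))[::-1])  # ranked objects
    ```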

  2. Impact of prenatal education on maternal utilization of analgesic interventions at future infant vaccinations: a cluster randomized trial.

    Science.gov (United States)

    Taddio, Anna; Smart, Sarah; Sheedy, Matthuschka; Yoon, Eugene W; Vyas, Charmy; Parikh, Chaitya; Pillai Riddell, Rebecca; Shah, Vibhuti

    2014-07-01

    Analgesic interventions are not routinely used during vaccine injections in infants. Parents report a desire to mitigate injection pain, but lack the knowledge about how to do so. The objective of this cluster-randomized trial was to evaluate the effect of a parent-directed prenatal education teaching module about vaccination pain management on analgesic utilization at future infant vaccinations. Expectant mothers enrolled in prenatal classes at Mount Sinai Hospital in Toronto were randomized to a 20-30 minute interactive presentation about vaccination pain management (experimental group) or general vaccination information (control group). Both presentations included a PowerPoint (Microsoft Corporation, Redmond, WA, USA) and video presentation, take-home pamphlet, and "Question and Answer" period. The primary outcome was self-reported utilization of breastfeeding, sugar water, or topical anaesthetics at routine 2-month infant vaccinations. Between October 2012 and July 2013, 197 expectant mothers from 28 prenatal classes participated; follow-up was obtained in 174 (88%). Maternal characteristics did not differ (P>0.05) between groups. Utilization of one or more prespecified pain interventions occurred in 34% of participants in the experimental group, compared to 17% in the control group (P=0.01). Inclusion of a pain management module in prenatal classes led to increased utilization of evidence-based pain management interventions by parents at the 2-month infant vaccination appointment. Educating parents offers a novel and effective way of improving the quality of pain care delivered to infants during vaccination. Additional research is needed to determine if utilization can be bolstered further using techniques such as postnatal hospital reinforcement, reminder cards, and clinician education. Copyright © 2014 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.

  3. Model C critical dynamics of random anisotropy magnets

    Energy Technology Data Exchange (ETDEWEB)

    Dudka, M [Institute for Condensed Matter Physics, National Acad. Sci. of Ukraine, UA-79011 Lviv (Ukraine); Folk, R [Institut fuer Theoretische Physik, Johannes Kepler Universitaet Linz, A-4040 Linz (Austria); Holovatch, Yu [Institute for Condensed Matter Physics, National Acad. Sci. of Ukraine, UA-79011 Lviv (Ukraine); Moser, G [Institut fuer Physik und Biophysik, Universitaet Salzburg, A-5020 Salzburg (Austria)

    2007-07-20

    We study the relaxational critical dynamics of three-dimensional random anisotropy magnets with a non-conserved n-component order parameter coupled to a conserved scalar density. In random anisotropy magnets, the structural disorder is present in the form of local quenched anisotropy axes of random orientation. When the anisotropy axes are randomly distributed along the edges of the n-dimensional hypercube, the asymptotic dynamical critical properties coincide with those of the random-site Ising model. However, the structural disorder gives rise to considerable effects in the non-asymptotic critical dynamics. We investigate this phenomenon by a field-theoretical renormalization group analysis in the two-loop order. We study critical slowing down and obtain quantitative estimates for the effective and asymptotic critical exponents of the order parameter and scalar density. The results predict complex scenarios for the effective critical exponent approaching the asymptotic regime.

  4. Random matrix model for disordered conductors

    Indian Academy of Sciences (India)

    Matrix models are being successfully employed in a variety of domains of physics including studies on heavy nuclei [1], mesoscopic disordered conductors [2,3], two-dimensional quantum gravity [4], and chaotic quantum systems [5]. Universal conductance fluctuations in metals [6] and spectral fluctuations in ...

  6. Single-cluster dynamics for the random-cluster model

    NARCIS (Netherlands)

    Deng, Y.; Qian, X.; Blöte, H.W.J.

    2009-01-01

    We formulate a single-cluster Monte Carlo algorithm for the simulation of the random-cluster model. This algorithm is a generalization of the Wolff single-cluster method for the q-state Potts model to noninteger values q>1. Its results for static quantities are in satisfactory agreement with those

  7. Simulating intrafraction prostate motion with a random walk model

    Directory of Open Access Journals (Sweden)

    Tobias Pommer, PhD

    2017-07-01

    Conclusions: Random walk modeling is feasible and recreated the characteristics of the observed prostate motion. Introducing artificial transient motion did not improve the overall agreement, although the first 30 seconds of the traces were better reproduced. The model provides a simple estimate of prostate motion during delivery of radiation therapy.
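    As a rough illustration of the approach described in these conclusions, the sketch below simulates a one-dimensional Gaussian random walk of prostate displacement and optionally superimposes decaying transient excursions; all step sizes and rates are illustrative, not the values fitted in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def prostate_trace(n=600, dt=0.5, sigma=0.02, p_transient=0.0,
                       transient_sd=0.3, decay=0.9):
        """1D random walk (cm) sampled every dt seconds; with probability
        p_transient per step a transient offset is added, then decays."""
        pos = np.cumsum(rng.normal(0.0, sigma * np.sqrt(dt), n))
        offset = 0.0
        for i in range(n):
            if rng.random() < p_transient:
                offset += transient_sd * rng.standard_normal()
            offset *= decay
            pos[i] += offset
        return pos

    plain = prostate_trace()                    # pure random walk
    bumpy = prostate_trace(p_transient=0.005)   # with transient motion
    print(plain[-1], bumpy[-1])
    ```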

  8. A note on moving average models for Gaussian random fields

    DEFF Research Database (Denmark)

    Hansen, Linda Vadgård; Thorarinsdottir, Thordis L.

    The class of moving average models offers a flexible modeling framework for Gaussian random fields, with many well-known models, such as the Matérn covariance family and the Gaussian covariance, falling under this framework. Moving average models may also be viewed as a kernel smoothing of a Lévy basis, a general modeling framework which includes several types of non-Gaussian models. We propose a new one-parameter spatial correlation model which arises from a power kernel and show that the associated Hausdorff dimension of the sample paths can take any value between 2 and 3. As a result...
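    In the moving average construction, a Gaussian field is obtained by convolving white noise with a kernel. The sketch below uses a compactly supported power-type kernel whose exponent plays the role of the one parameter described above; the exact kernel form here is an assumption for illustration, not the paper's definition.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    rng = np.random.default_rng(2)

    def moving_average_field(shape=(128, 128), theta=1.5, support=15):
        """Gaussian random field via kernel smoothing of white noise.
        theta controls the kernel shape and hence the field's roughness."""
        yy, xx = np.mgrid[-support:support + 1, -support:support + 1]
        h = np.hypot(yy, xx) / support
        kernel = np.clip(1.0 - h ** theta, 0.0, None)
        kernel /= np.sqrt((kernel ** 2).sum())     # unit output variance
        return fftconvolve(rng.standard_normal(shape), kernel, mode="same")

    field = moving_average_field(theta=0.8)        # rougher sample paths
    print(field.std())
    ```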

  9. Enhanced recovery pathways optimize health outcomes and resource utilization: A meta-analysis of randomized controlled trials in colorectal surgery

    DEFF Research Database (Denmark)

    Adamina, Michel; Kehlet, Henrik; Tomlinson, George A

    2011-01-01

    ...in costs that threatens the stability of health care systems. Enhanced recovery pathways (ERP) have been proposed as a means to reduce morbidity and improve effectiveness of care. We have reviewed the evidence supporting the implementation of ERP in clinical practice. Methods: Medline, Embase ... in the development and implementation of ERP. Results: A random-effect Bayesian meta-analysis was performed, including 6 randomized controlled trials totaling 452 patients. For patients adhering to ERP, length of stay decreased by 2.5 days (95% credible interval [CrI] -3.92 to -1.11), whereas 30-day morbidity ... of health care processes. Thus, while accelerating recovery and safely reducing hospital stay, ERPs optimize utilization of health care resources. ERPs can and should be routinely used in care after colorectal and other major gastrointestinal procedures.

  10. Application of Poisson random effect models for highway network screening.

    Science.gov (United States)

    Jiang, Ximiao; Abdel-Aty, Mohamed; Alamili, Samer

    2014-02-01

    In recent years, Bayesian random effect models that account for the temporal and spatial correlations of crash data became popular in traffic safety research. This study employs random effect Poisson Log-Normal models for crash risk hotspot identification. Both the temporal and spatial correlations of crash data were considered. The Potential for Safety Improvement (PSI) was adopted as a measure of crash risk. Using the fatal and injury crashes that occurred on urban 4-lane divided arterials from 2006 to 2009 in the Central Florida area, the random effect approaches were compared to the traditional Empirical Bayesian (EB) method and the conventional Bayesian Poisson Log-Normal model. A series of method examination tests were conducted to evaluate the performance of the different approaches. These tests include the previously developed site consistency test, method consistency test, total rank difference test, and modified total score test, as well as the newly proposed total safety performance measure difference test. Results show that the Bayesian Poisson model accounting for both temporal and spatial random effects (PTSRE) outperforms the model with only a temporal random effect, and both are superior to the conventional Poisson Log-Normal model (PLN) and the EB model in the fitting of crash data. Additionally, the method evaluation tests indicate that the PTSRE model is significantly superior to the PLN model and the EB model in consistently identifying hotspots during successive time periods. The results suggest that the PTSRE model is a superior alternative for road site crash risk hotspot identification. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. Biomass utilization modeling on the Bitterroot National Forest

    Science.gov (United States)

    Robin P. Silverstein; Dan Loeffler; J. Greg Jones; Dave E. Calkin; Hans R. Zuuring; Martin Twer

    2006-01-01

    Utilization of small-sized wood (biomass) from forests as a potential source of renewable energy is an increasingly important aspect of fuels management on public lands as an alternative to traditional disposal methods (open burning). The potential for biomass utilization to enhance the economics of treating hazardous forest fuels was examined on the Bitterroot...

  12. Modeling non-monotone risk aversion using SAHARA utility functions

    NARCIS (Netherlands)

    Chen, A.; Pelsser, A.; Vellekoop, M.

    2011-01-01

    We develop a new class of utility functions, SAHARA utility, with the distinguishing feature that it allows absolute risk aversion to be non-monotone and implements the assumption that agents may become less risk averse for very low values of wealth. The class contains the well-known exponential and

  13. Random Multi-Hopper Model. Super-Fast Random Walks on Graphs

    OpenAIRE

    Estrada, Ernesto; Delvenne, Jean-Charles; Hatano, Naomichi; Mateos, José L.; Metzler, Ralf; Riascos, Alejandro P. (Universidad Mariana, Pasto, Colombia); Schaub, Michael T.

    2016-01-01

    We develop a model for a random walker with long-range hops on general graphs. This random multi-hopper jumps from a node to any other node in the graph with a probability that decays as a function of the shortest-path distance between the two nodes. We consider here two decaying functions in the form of the Laplace and Mellin transforms of the shortest-path distances. Remarkably, when the parameters of these transforms approach zero asymptotically, the multi-hopper's hitting times between an...

  14. Semiparametric Bayesian Estimation of Random Coefficients Discrete Choice Models

    OpenAIRE

    Tchumtchoua, Sylvie; Dey, Dipak

    2007-01-01

    Heterogeneity in choice models is typically assumed to have a normal distribution in both Bayesian and classical setups. In this paper, we propose a semiparametric Bayesian framework for the analysis of random coefficients discrete choice models that can be applied to both individual as well as aggregate data. Heterogeneity is modeled using a Dirichlet process prior which varies with consumers characteristics through covariates. We develop a Markov chain Monte Carlo algorithm for fitting such...

  15. RANDOM CLOSED SET MODELS: ESTIMATING AND SIMULATING BINARY IMAGES

    Directory of Open Access Journals (Sweden)

    Ángeles M Gallego

    2011-05-01

    In this paper we show the use of the Boolean model, and a class of RACS models that generalizes it, to obtain simulations of random binary images able to imitate natural textures such as marble or wood. The required tasks (parameter estimation, goodness-of-fit testing, and simulation) are reviewed. In addition to a brief review of the theory, simulation studies of each model are included.
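    The Boolean model itself is straightforward to simulate: Poisson-distributed germ points with independent random grains, unioned into a binary image. The parameter values below are illustrative, not fitted to any texture.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def boolean_model(shape=(256, 256), intensity=3e-4, mean_radius=8.0):
        """Binary image from a Boolean model: Poisson germ points,
        each dilated by a disk of random (exponential) radius."""
        h, w = shape
        n = rng.poisson(intensity * h * w)       # number of germs
        ys = rng.uniform(0, h, n)
        xs = rng.uniform(0, w, n)
        radii = rng.exponential(mean_radius, n)
        yy, xx = np.mgrid[0:h, 0:w]
        img = np.zeros(shape, dtype=bool)
        for y, x, r in zip(ys, xs, radii):
            img |= (yy - y) ** 2 + (xx - x) ** 2 <= r ** 2
        return img

    img = boolean_model()
    print("area fraction:", img.mean())
    ```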

  16. Effects of random noise in a dynamical model of love

    Energy Technology Data Exchange (ETDEWEB)

    Xu Yong, E-mail: hsux3@nwpu.edu.cn [Department of Applied Mathematics, Northwestern Polytechnical University, Xi' an 710072 (China); Gu Rencai; Zhang Huiqing [Department of Applied Mathematics, Northwestern Polytechnical University, Xi' an 710072 (China)

    2011-07-15

    Highlights: • We model the complexity and unpredictability of psychology as Gaussian white noise. • The stochastic system of love is considered, including bifurcation and chaos. • We show that noise can both suppress and induce chaos in dynamical models of love. - Abstract: This paper aims to investigate the stochastic model of love and the effects of random noise. We first revisit the deterministic model of love, and some basic properties are presented, such as symmetry, dissipation, fixed points (equilibria), chaotic behaviors and chaotic attractors. Then we construct a stochastic love-triangle model with parametric random excitation, owing to the complexity and unpredictability of the psychological system, where the randomness is modeled as standard Gaussian noise. Stochastic dynamics under three different cases of 'Romeo's romantic style' are examined, and two kinds of bifurcations versus the noise intensity parameter are observed by the criteria of changes in the top Lyapunov exponent and in the shape of the stationary probability density function (PDF), respectively. Phase portraits and time histories are computed to verify the proposed results, and good agreement is found. The dual roles of random noise, namely suppressing and inducing chaos, are also revealed.
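    To make the stochastic setup concrete, the sketch below integrates a two-variable linear 'Romeo and Juliet' system with Gaussian white noise by the Euler-Maruyama method. The coefficients are illustrative, and additive noise is used for simplicity where the paper studies parametric excitation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_love(a=-0.1, b=1.0, c=-1.0, d=-0.1, D=0.05,
                      x0=1.0, y0=0.0, dt=1e-3, T=50.0):
        """Euler-Maruyama for dx = (a*x + b*y)dt + sqrt(2D) dWx,
        dy = (c*x + d*y)dt + sqrt(2D) dWy."""
        n = int(T / dt)
        xs = np.empty(n)
        x, y = x0, y0
        for i in range(n):
            dwx, dwy = rng.normal(0.0, np.sqrt(dt), 2)
            x, y = (x + (a * x + b * y) * dt + np.sqrt(2 * D) * dwx,
                    y + (c * x + d * y) * dt + np.sqrt(2 * D) * dwy)
            xs[i] = x
        return xs

    trace = simulate_love(D=0.2)   # stronger noise, larger excursions
    print(trace.min(), trace.max())
    ```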

  17. Positive random fields for modeling material stiffness and compliance

    DEFF Research Database (Denmark)

    Hasofer, Abraham Michael; Ditlevsen, Ove Dalager; Tarp-Johansen, Niels Jacob

    1998-01-01

    Positive random fields with known marginal properties and known correlation function are not numerous in the literature. The most prominent example is the lognormal field, for which the complete distribution is known and for which the reciprocal field is also lognormal. It is of interest to supplement the catalogue of positive fields beyond the class of those obtained by simple marginal transformation of a Gaussian field, this class containing the lognormal field, with material properties modeled in terms of the considered random fields. The paper adds the gamma field, the Fisher field, the beta field, and their reciprocal fields to the catalogue. These fields are all defined on the basis of sums of squares of independent standard Gaussian random variables. As a minimum for a random field to be included in the catalogue, it is required that an algorithm for simulation of realizations can...

  18. Using Random Forest Models to Predict Organizational Violence

    Science.gov (United States)

    Levine, Burton; Bobashev, Georgly

    2012-01-01

    We present a methodology to assess the proclivity of an organization to commit violence against nongovernment personnel. We fitted a Random Forest model using the Minority at Risk Organizational Behavior (MAROS) dataset. The MAROS data are longitudinal, so individual observations are not independent. We propose a modification to the standard Random Forest methodology to account for the violation of the independence assumption. We present the results of the model fit, an example of predicting violence for an organization, and finally a summary of the forest in a "meta-tree."
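    One simple way to respect the dependence of repeated observations on the same organization is to bootstrap whole organizations rather than rows when growing each tree. The sketch below, built on scikit-learn decision trees, is an illustrative stand-in for the authors' modification, not their exact procedure.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(42)

    def grouped_random_forest(X, y, groups, n_trees=200):
        """Forest whose bootstrap resamples whole groups (organizations),
        keeping each organization's observations together."""
        ids = np.unique(groups)
        forest = []
        for _ in range(n_trees):
            chosen = rng.choice(ids, size=len(ids), replace=True)
            idx = np.concatenate([np.flatnonzero(groups == g) for g in chosen])
            tree = DecisionTreeClassifier(max_features="sqrt")
            forest.append(tree.fit(X[idx], y[idx]))
        return forest

    def forest_proba(forest, X):
        # assumes both classes appear in every group-level resample
        return np.mean([t.predict_proba(X)[:, 1] for t in forest], axis=0)

    X = rng.normal(size=(300, 5))
    groups = np.repeat(np.arange(30), 10)      # 30 orgs, 10 obs each
    y = (X[:, 0] + rng.normal(size=300) > 0).astype(int)
    forest = grouped_random_forest(X, y, groups, n_trees=50)
    print(forest_proba(forest, X[:5]).round(2))
    ```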

  19. Blastocyst utilization rates after continuous culture in two commercial single-step media: a prospective randomized study with sibling oocytes.

    Science.gov (United States)

    Sfontouris, Ioannis A; Kolibianakis, Efstratios M; Lainas, George T; Venetis, Christos A; Petsas, George K; Tarlatzis, Basil C; Lainas, Tryfon G

    2017-10-01

    The aim of this study is to determine whether blastocyst utilization rates differ after continuous culture in two different commercial single-step media. This is a paired randomized controlled trial with sibling oocytes conducted in infertility patients aged ≤40 years with ≥10 oocytes retrieved, who were assigned to blastocyst culture and transfer. Retrieved oocytes were randomly allocated to continuous culture in either Sage one-step medium (Origio) or Continuous Single Culture (CSC) medium (Irvine Scientific) without medium renewal up to day 5 post oocyte retrieval. The main outcome measure was the proportion of embryos suitable for clinical use (utilization rate). A total of 502 oocytes from 33 women were randomly allocated to continuous culture in either Sage one-step medium (n = 250) or CSC medium (n = 252). Fertilization was performed by either in vitro fertilization or intracytoplasmic sperm injection, and embryo transfers were performed on day 5. Two patients had all blastocysts frozen due to the occurrence of severe ovarian hyperstimulation syndrome. Fertilization and cleavage rates, as well as embryo quality on day 3, were similar in the two media. Blastocyst utilization rates (%, 95% CI) [55.4% (46.4-64.1) vs 54.7% (44.9-64.6), p = 0.717], blastocyst formation rates [53.6% (44.6-62.5) vs 51.9% (42.2-61.6), p = 0.755], and the proportion of good quality blastocysts [36.8% (28.1-45.4) vs 36.1% (27.2-45.0), p = 0.850] were similar in Sage one-step and CSC media, respectively. Continuous culture of embryos in Sage one-step and CSC media is associated with similar blastocyst development and utilization rates. Both single-step media appear to provide adequate support during in vitro preimplantation embryo development. Whether these observations are also valid for other continuous single medium protocols remains to be determined. NCT02302638.

  20. Are Discrepancies in RANS Modeled Reynolds Stresses Random?

    CERN Document Server

    Xiao, Heng; Wang, Jian-xun; Paterson, Eric G

    2016-01-01

    In the turbulence modeling community, significant efforts have been made to quantify the uncertainties in the Reynolds-Averaged Navier-Stokes (RANS) models and to improve their predictive capabilities. Of crucial importance in these efforts is the understanding of the discrepancies in the RANS modeled Reynolds stresses. However, to what extent these discrepancies can be predicted, or whether they are completely random, remains a fundamental open question. In this work we used a machine learning algorithm based on random forest regression to predict the discrepancies. The success of the regression-prediction procedure indicates that, to a large extent, the discrepancies in the modeled Reynolds stresses can be explained by the mean flow features, and thus they are universal quantities that can be extrapolated from one flow to another, at least among different flows sharing the same characteristics such as separation. This finding has profound implications for the future development of RANS models, opening up new ...

  1. Buffalo milk yield analysis using random regression models

    Directory of Open Access Journals (Sweden)

    A.S. Schierholt

    2010-02-01

    Data comprising 1,719 milk yield records from 357 females (predominantly Murrah breed), daughters of 110 sires, with births from 1974 to 2004, obtained from the Programa de Melhoramento Genético de Bubalinos (PROMEBUL) and from records of the EMBRAPA Amazônia Oriental - EAO herd, located in Belém, Pará, Brazil, were used to compare random regression models for estimating variance components and predicting breeding values of the sires. The data were analyzed by different models using Legendre polynomial functions from second to fourth orders. The random regression models included the effects of herd-year, month of parity, and date of the control; regression coefficients for age of females (in order to describe the fixed part of the lactation curve); and random regression coefficients related to the direct genetic and permanent environment effects. The comparisons among the models were based on the Akaike Information Criterion. The random regression model using third-order Legendre polynomials with four classes of the environmental effect was the one that best described the additive genetic variation in milk yield. The heritability estimates varied from 0.08 to 0.40. The genetic correlation between milk yields at younger ages was close to unity, but at older ages it was low.
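    The fixed and random regressions in such test-day models are built on Legendre polynomials of standardized age. The helper below evaluates that basis with NumPy; the age bounds are illustrative, not the study's values.

    ```python
    import numpy as np
    from numpy.polynomial import legendre

    def legendre_basis(age, order=3, lo=24.0, hi=120.0):
        """Columns are P_0..P_order evaluated at ages (months) rescaled
        to [-1, 1], the covariates of a random regression model."""
        x = 2.0 * (np.asarray(age, dtype=float) - lo) / (hi - lo) - 1.0
        return np.column_stack([legendre.legval(x, [0.0] * k + [1.0])
                                for k in range(order + 1)])

    Z = legendre_basis([30, 60, 90], order=3)   # 3 records x 4 regressors
    print(Z.shape)
    ```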

  2. Great Expectations: Cost-Utility Models as Decision Criteria

    Directory of Open Access Journals (Sweden)

    Paul C Langley

    2016-07-01

    One of the more puzzling features of published claims for cost-effectiveness is the popularity of claims presented in terms of quality adjusted life years (QALYs). Despite the popularity of QALYs as the 'gold standard' outcome measure among academic audiences, professional groups and a number of single-payer health care systems, there is no evidence to suggest that cost-per-QALY based claims have ever been assessed, either through experimentation or observation, to support formulary decisions. In part this stems from the fact that cost-per-QALY claims are typically not expressed in evaluable terms; it also stems from the fact that, despite the plethora of QALY publications, QALYs are not collected on a regular basis by any health care system as part of administrative claims or electronic medical records. In the US, QALYs have typically been ignored by health care decision makers. Given this, the continuing popularity of utility-based measures for studies published in the leading pharmacoeconomics journals is difficult to understand. One possible explanation is that those promoting QALY claims are locked into a relativist position that defends the publication of nontestable product claims, a position reinforced by recommendations from 'peer organizations' such as the Academy of Managed Care Pharmacy (AMCP) in their promotion of their Format for Formulary Submission standards, which support the role of lifetime cost-per-QALY modeled imaginary worlds or thought experiments. Another explanation is that QALYs have been taken at face value, with little thought given to how they might be implemented to support both initial formulary decisions and ongoing disease area therapeutic class reviews. The purpose of this review is to put the case that the continued emphasis on cost-per-QALY claims has no practical benefit in formulary decision making. Type: Commentary

  3. Application of Random-Effects Probit Regression Models.

    Science.gov (United States)

    Gibbons, Robert D.; Hedeker, Donald

    1994-01-01

    Develops random-effects probit model for case in which outcome of interest is series of correlated binary responses, obtained as product of longitudinal response process where individual is repeatedly classified on binary outcome variable or in multilevel or clustered problems in which individuals within groups are considered to share…

  4. Asthma Self-Management Model: Randomized Controlled Trial

    Science.gov (United States)

    Olivera, Carolina M. X.; Vianna, Elcio Oliveira; Bonizio, Roni C.; de Menezes, Marcelo B.; Ferraz, Erica; Cetlin, Andrea A.; Valdevite, Laura M.; Almeida, Gustavo A.; Araujo, Ana S.; Simoneti, Christian S.; de Freitas, Amanda; Lizzi, Elisangela A.; Borges, Marcos C.; de Freitas, Osvaldo

    2016-01-01

    Information for patients provided by the pharmacist is reflected in adherence to treatment, clinical results, and patient quality of life. The objective of this study was to assess an asthma self-management model for rational medicine use. This was a randomized controlled trial with 60 asthmatic patients assigned to attend five modules presented by…

  5. First principles modeling of magnetic random access memory devices (invited)

    Energy Technology Data Exchange (ETDEWEB)

    Butler, W.H.; Zhang, X.; Schulthess, T.C.; Nicholson, D.M.; Oparin, A.B. [Metals and Ceramics Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); MacLaren, J.M. [Department of Physics, Tulane University, New Orleans, Louisiana 70018 (United States)

    1999-04-01

    Giant magnetoresistance (GMR) and spin-dependent tunneling may be used to make magnetic random access memory devices. We have applied first-principles based electronic structure techniques to understand these effects and, in the case of GMR, to model the transport properties of the devices. © 1999 American Institute of Physics.

  6. Performance of Random Effects Model Estimators under Complex Sampling Designs

    Science.gov (United States)

    Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan

    2011-01-01

    In this article, we consider estimation of parameters of random effects models from samples collected via complex multistage designs. Incorporation of sampling weights is one way to reduce estimation bias due to unequal probabilities of selection. Several weighting methods have been proposed in the literature for estimating the parameters of…

  7. Scale-free random graphs and Potts model

    Indian Academy of Sciences (India)

    real-world networks such as the world-wide web, the Internet, the coauthorship, the protein interaction networks and so on display power-law behaviors in the degree ... in this paper, we study the evolution of SF random graphs from the perspective of equilibrium statistical physics. The formulation in terms of the spin model ...

  8. Modeling fiber type grouping by a binary Markov random field

    NARCIS (Netherlands)

    Venema, H. W.

    1992-01-01

    A new approach to the quantification of fiber type grouping is presented, in which the distribution of histochemical type in a muscle cross section is regarded as a realization of a binary Markov random field (BMRF). Methods for the estimation of the parameters of this model are discussed. The first

  9. Quantum random oracle model for quantum digital signature

    Science.gov (United States)

    Shang, Tao; Lei, Qi; Liu, Jianwei

    2016-10-01

    The goal of this work is to provide a general security analysis tool, namely, the quantum random oracle (QRO), for facilitating the security analysis of quantum cryptographic protocols, especially protocols based on quantum one-way function. QRO is used to model quantum one-way function and different queries to QRO are used to model quantum attacks. A typical application of quantum one-way function is the quantum digital signature, whose progress has been hampered by the slow pace of the experimental realization. Alternatively, we use the QRO model to analyze the provable security of a quantum digital signature scheme and elaborate the analysis procedure. The QRO model differs from the prior quantum-accessible random oracle in that it can output quantum states as public keys and give responses to different queries. This tool can be a test bed for the cryptanalysis of more quantum cryptographic protocols based on the quantum one-way function.

  10. Asthma control cost-utility randomized trial evaluation (ACCURATE: the goals of asthma treatment

    Directory of Open Access Journals (Sweden)

    Honkoop Persijn J

    2011-11-01

    Background: Despite the availability of effective therapies, asthma remains a source of significant morbidity and use of health care resources. The central research question of the ACCURATE trial is whether maximal doses of (combination) therapy should be used for long periods in an attempt to achieve complete control of all features of asthma. An additional question is whether patients and society value the potential incremental benefit, if any, sufficiently to concur with such a treatment approach. We assessed patient preferences and cost-effectiveness of three treatment strategies aimed at achieving different levels of clinical control: 1) sufficiently controlled asthma; 2) strictly controlled asthma; 3) strictly controlled asthma based on exhaled nitric oxide as an additional disease marker. Design: 720 patients with mild to moderate persistent asthma from general practices with a practice nurse, aged 18-50 yr, on daily treatment with inhaled corticosteroids (more than 3 months' usage of inhaled corticosteroids in the previous year), will be identified via patient registries of general practices in the Leiden, Nijmegen, and Amsterdam areas in The Netherlands. The design is a 12-month cluster-randomised parallel trial with 40 general practices in each of the three arms. The patients will visit the general practice at baseline, 3, 6, 9, and 12 months. At each planned and unplanned visit to the general practice, treatment will be adjusted with the support of an internet-based asthma monitoring system supervised by a central coordinating specialist nurse. Patient preferences and utilities will be assessed by questionnaire and interview. Data on asthma control, treatment step, adherence to treatment, utilities and costs will be obtained every 3 months and at each unplanned visit. Differences in societal costs (medication, other health care, and productivity) will be compared to differences in the number of limited activity days and in quality adjusted

  11. Utilization of random process spectral properties for the calculation of fatigue life under combined loading

    Directory of Open Access Journals (Sweden)

    Svoboda J.

    2009-12-01

    The contribution includes the results of experimental work aiming to find a new methodology for the calculation of the fatigue life of structures subjected to operational loading from a combination of forces and moments of random character. According to fracture mechanics theory, damage to the material in both the micro- and macro-plastic regions is connected with the rise of plastic deformation and hence with the plastic transformation rate, which depends on the amount of supplied energy. The power spectral density (PSD), indicating the power at individual frequencies in the monitored frequency band, yields information about the supplied amount of energy. Therefore, it can be assumed that there is a dependence between the PSD shape and the size of damage, and that the supplied power, which is proportional to the value of the dispersion s^2 under the PSD curve, could be a new criterion for the calculation of fatigue life under combined loading. The search for links between the spectral properties of the loading process and the fatigue life of the structure under load is the subject of new Grant GA No. 101/09/0904 of the Czech Technical University in Prague and the Institute of Thermomechanics of the Academy of Sciences of the Czech Republic, v.v.i.

  12. User Utility Oriented Queuing Model for Resource Allocation in Cloud Environment

    Directory of Open Access Journals (Sweden)

    Zhe Zhang

    2015-01-01

    Resource allocation is one of the most important research topics in server systems. In the cloud environment, there are massive hardware resources of different kinds, and many kinds of services are usually run on virtual machines of the cloud server. In addition, the cloud environment is commercialized, so economic factors should also be considered. In order to deal with the commercialization and virtualization of the cloud environment, we proposed a user utility oriented queuing model for task scheduling. Firstly, we modeled task scheduling in the cloud environment as an M/M/1 queuing system. Secondly, we classified utility into time utility and cost utility and built a linear programming model to maximize the total utility for both of them. Finally, we proposed a utility-oriented algorithm to maximize the total utility. Extensive experiments validate the effectiveness of the proposed model.
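    The underlying quantities are standard M/M/1 results. The blended utility below shows how a scheduler might trade waiting time against cost; the weights, the reciprocal utility forms, and the cost term are illustrative assumptions, not the paper's exact linear program.

    ```python
    def mm1_sojourn(lam, mu):
        """Mean time in an M/M/1 system; requires lam < mu for stability."""
        if lam >= mu:
            raise ValueError("unstable queue: arrival rate >= service rate")
        return 1.0 / (mu - lam)

    def total_utility(lam, mu, w_time=0.7, w_cost=0.3, price_per_rate=1.0):
        """Linear blend of a time utility and a cost utility."""
        time_u = 1.0 / (1.0 + mm1_sojourn(lam, mu))   # shorter wait, higher
        cost_u = 1.0 / (1.0 + price_per_rate * mu)    # faster costs more
        return w_time * time_u + w_cost * cost_u

    # choose the service rate maximizing total utility at arrival rate 2.0
    best = max((total_utility(2.0, mu), mu) for mu in (2.5, 3.0, 4.0, 6.0))
    print("best utility %.3f at mu=%.1f" % best)
    ```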

  13. Evolution of the concentration PDF in random environments modeled by global random walk

    Science.gov (United States)

    Suciu, Nicolae; Vamos, Calin; Attinger, Sabine; Knabner, Peter

    2013-04-01

    The evolution of the probability density function (PDF) of concentrations of chemical species transported in random environments is often modeled by ensembles of notional particles. The particles move in physical space along stochastic-Lagrangian trajectories governed by Ito equations, with drift coefficients given by the local values of the resolved velocity field and diffusion coefficients obtained by stochastic or space-filtering upscaling procedures. A general model for the sub-grid mixing also can be formulated as a system of Ito equations solving for trajectories in the composition space. The PDF is finally estimated by the number of particles in space-concentration control volumes. In spite of their efficiency, Lagrangian approaches suffer from two severe limitations. Since the particle trajectories are constructed sequentially, the demanded computing resources increase linearly with the number of particles. Moreover, the need to gather particles at the center of computational cells to perform the mixing step and to estimate statistical parameters, as well as the interpolation of various terms to particle positions, inevitably produce numerical diffusion in either particle-mesh or grid-free particle methods. To overcome these limitations, we introduce a global random walk method to solve the system of Ito equations in physical and composition spaces, which models the evolution of the random concentration's PDF. The algorithm consists of a superposition on a regular lattice of many weak Euler schemes for the set of Ito equations. Since all particles starting from a site of the space-concentration lattice are spread in a single numerical procedure, one obtains PDF estimates at the lattice sites at computational costs comparable with those for solving the system of Ito equations associated to a single particle. The new method avoids the limitations concerning the number of particles in Lagrangian approaches, completely removes the numerical diffusion, and
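    For orientation, the sequential Lagrangian baseline that the global random walk reorganizes looks like the following: an ensemble of weak Euler steps in physical and composition space, followed by a histogram estimate of the concentration PDF. The coefficients and the simple relaxation-to-mean mixing term are illustrative; note that the cost grows linearly with the ensemble size, which is exactly the limitation the global random walk removes.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def concentration_pdf(n=100_000, steps=400, dt=1e-2,
                          u=1.0, D=0.1, k_mix=0.5, c_mean=0.0):
        """Euler scheme for dX = u dt + sqrt(2D) dW and a relaxation
        mixing model dC = -k_mix (C - c_mean) dt, then a 2D histogram
        as the space-concentration PDF estimate."""
        x = np.zeros(n)
        c = rng.normal(1.0, 0.2, n)              # initial concentrations
        for _ in range(steps):
            x += u * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(n)
            c += -k_mix * (c - c_mean) * dt
        pdf, x_edges, c_edges = np.histogram2d(x, c, bins=60, density=True)
        return pdf, x_edges, c_edges

    pdf, _, _ = concentration_pdf()
    print(pdf.shape)
    ```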

  14. The application of the random regret minimization model to drivers’ choice of crash avoidance maneuvers

    DEFF Research Database (Denmark)

    Kaplan, Sigal; Prato, Carlo Giacomo

    2012-01-01

    This study explores the plausibility of regret minimization as the behavioral paradigm underlying the choice of crash avoidance maneuvers. In contrast to previous studies that considered utility maximization, this study applies the random regret minimization (RRM) model while assuming that drivers seek to minimize their anticipated regret from their corrective actions. The model accounts for driver attributes and behavior, critical events that made the crash imminent, vehicle and road characteristics, and environmental conditions. Analyzed data are retrieved from the General Estimates System (GES) crash database for the period between 2005 and 2009. The predictive ability of the RRM-based model is slightly superior to its RUM-based counterpart, namely the multinomial logit (MNL) model. The marginal effects predicted by the RRM-based model are greater than those predicted by the RUM

  16. A monoecious and diploid Moran model of random mating.

    Science.gov (United States)

    Hössjer, Ola; Tyvand, Peder A

    2016-04-07

    An exact Markov chain is developed for a Moran model of random mating for monoecious diploid individuals with a given probability of self-fertilization. The model captures the dynamics of genetic variation at a biallelic locus. We compare the model with the corresponding diploid Wright-Fisher (WF) model. We also develop a novel diffusion approximation of both models, where the genotype frequency distribution dynamics is described by two partial differential equations, on different time scales. The first equation captures the more slowly varying allele frequencies, and it is the same for the Moran and WF models. The other equation captures departures of the fraction of heterozygous genotypes from a large population equilibrium curve that equals Hardy-Weinberg proportions in the absence of selfing. It is the distribution of a continuous time Ornstein-Uhlenbeck process for the Moran model and a discrete time autoregressive process for the WF model. One application of our results is to capture dynamics of the degree of non-random mating of both models, in terms of the fixation index f_IS. Although f_IS has a stable fixed point that only depends on the degree of selfing, the normally distributed oscillations around this fixed point are stochastically larger for the Moran than for the WF model. Copyright © 2016 Elsevier Ltd. All rights reserved.
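    A minimal simulation of this model class (not the paper's exact Markov chain) updates one individual per event: a parent, possibly self-fertilizing, produces a child that replaces a randomly chosen individual.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    def moran_step(genotypes, s=0.1):
        """One Moran event for monoecious diploids at a biallelic locus:
        genotypes[i] in {0, 1, 2} counts copies of allele A; with
        probability s the chosen parent self-fertilizes."""
        N = len(genotypes)
        gamete = lambda g: rng.random() < g / 2.0   # allele from a parent
        p1 = rng.integers(N)
        p2 = p1 if rng.random() < s else rng.integers(N)
        child = int(gamete(genotypes[p1])) + int(gamete(genotypes[p2]))
        genotypes[rng.integers(N)] = child          # a random individual dies
        return genotypes

    pop = np.ones(100, dtype=int)                   # all heterozygotes
    for _ in range(10_000):
        moran_step(pop, s=0.2)
    print("allele A frequency:", pop.sum() / (2 * len(pop)))
    ```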

  17. Random matrices as models for the statistics of quantum mechanics

    Science.gov (United States)

    Casati, Giulio; Guarneri, Italo; Mantica, Giorgio

    1986-05-01

    Random matrices from the Gaussian unitary ensemble generate in a natural way unitary groups of evolution in finite-dimensional spaces. The statistical properties of this time evolution can be investigated by studying the time autocorrelation functions of dynamical variables. We prove general results on the decay properties of such autocorrelation functions in the limit of infinite-dimensional matrices. We discuss the relevance of random matrices as models for the dynamics of quantum systems that are chaotic in the classical limit.

  18. Stochastic geometry, spatial statistics and random fields models and algorithms

    CERN Document Server

    2015-01-01

    Providing a graduate level introduction to various aspects of stochastic geometry, spatial statistics and random fields, this volume places a special emphasis on fundamental classes of models and algorithms as well as on their applications, for example in materials science, biology and genetics. This book has a strong focus on simulations and includes extensive codes in Matlab and R, which are widely used in the mathematical community. It can be regarded as a continuation of the recent volume 2068 of Lecture Notes in Mathematics, where other issues of stochastic geometry, spatial statistics and random fields were considered, with a focus on asymptotic methods.

  19. Parsimonious Continuous Time Random Walk Models and Kurtosis for Diffusion in Magnetic Resonance of Biological Tissue

    Directory of Open Access Journals (Sweden)

    Carson Ingo

    2015-03-01

    In this paper, we provide a context for the modeling approaches that have been developed to describe non-Gaussian diffusion behavior, which is ubiquitous in diffusion weighted magnetic resonance imaging of water in biological tissue. Subsequently, we focus on the formalism of the continuous time random walk theory to extract properties of subdiffusion and superdiffusion through novel simplifications of the Mittag-Leffler function. For the case of time-fractional subdiffusion, we compute the kurtosis for the Mittag-Leffler function, which provides both a connection and physical context to the much-used approach of diffusional kurtosis imaging. We provide Monte Carlo simulations to illustrate the concepts of anomalous diffusion as stochastic processes of the random walk. Finally, we demonstrate the clinical utility of the Mittag-Leffler function as a model to describe tissue microstructure through estimations of subdiffusion and kurtosis with diffusion MRI measurements in the brain of a chronic ischemic stroke patient.

  20. Parsimonious continuous time random walk models and kurtosis for diffusion in magnetic resonance of biological tissue.

    Science.gov (United States)

    Ingo, Carson; Sui, Yi; Chen, Yufen; Parrish, Todd B; Webb, Andrew G; Ronen, Itamar

    2015-03-01

    In this paper, we provide a context for the modeling approaches that have been developed to describe non-Gaussian diffusion behavior, which is ubiquitous in diffusion weighted magnetic resonance imaging of water in biological tissue. Subsequently, we focus on the formalism of the continuous time random walk theory to extract properties of subdiffusion and superdiffusion through novel simplifications of the Mittag-Leffler function. For the case of time-fractional subdiffusion, we compute the kurtosis for the Mittag-Leffler function, which provides both a connection and physical context to the much-used approach of diffusional kurtosis imaging. We provide Monte Carlo simulations to illustrate the concepts of anomalous diffusion as stochastic processes of the random walk. Finally, we demonstrate the clinical utility of the Mittag-Leffler function as a model to describe tissue microstructure through estimations of subdiffusion and kurtosis with diffusion MRI measurements in the brain of a chronic ischemic stroke patient.
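    A quick way to experiment with the Mittag-Leffler function is its defining power series, which is adequate for the modest arguments that arise when sketching a subdiffusive decay of the form E_alpha(-t^alpha). The series form is standard; the decay example below is illustrative, not the paper's fitted signal model.

    ```python
    import numpy as np
    from scipy.special import gamma

    def mittag_leffler(z, alpha, n_terms=80):
        """Truncated power series E_alpha(z) = sum_k z^k / Gamma(alpha*k + 1);
        reliable only for modest |z| because of term cancellation."""
        k = np.arange(n_terms)
        terms = np.power.outer(np.asarray(z, dtype=float), k) / gamma(alpha * k + 1)
        return terms.sum(axis=-1)

    t = np.linspace(0.0, 2.0, 5)
    print(mittag_leffler(-t ** 0.8, alpha=0.8))   # monotone decay from 1
    ```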

  2. Knowledge Management Models And Their Utility To The Effective ...

    African Journals Online (AJOL)

    Although indigenous knowledge is key to the development of sub Saharan Africa and the preservation of its societal memory, it is fast disappearing due to a variety of reasons. One of the strategies that may assist in the management and preservation of indigenous knowledge is the utilization of knowledge management ...

  3. Kinetic models of cell growth, substrate utilization and bio ...

    African Journals Online (AJOL)

    2008-05-02

    Recently, increasing attention has been directed toward the utilization of microbial activities (pure bacteria and fungi) for the decolorization and mineralization of distillery effluent (Pant and Adholeya, 2007; Pazouki et al., 2006). Microbial species such as Bacillus megaterium, Bacillus cereus (Jain et al., 2002), ...

  4. The Sustainable Energy Utility (SEU) Model for Energy Service Delivery

    Science.gov (United States)

    Houck, Jason; Rickerson, Wilson

    2009-01-01

    Climate change, energy price spikes, and concerns about energy security have reignited interest in state and local efforts to promote end-use energy efficiency, customer-sited renewable energy, and energy conservation. Government agencies and utilities have historically designed and administered such demand-side measures, but innovative…

  5. A generalized model via random walks for information filtering

    Science.gov (United States)

    Ren, Zhuo-Ming; Kong, Yixiu; Shang, Ming-Sheng; Zhang, Yi-Cheng

    2016-08-01

    There could exist a simple, general mechanism lurking beneath collaborative filtering and the interdisciplinary physics approaches that have been successfully applied to online E-commerce platforms. Motivated by this idea, we propose a generalized model employing the dynamics of the random walk in bipartite networks. Taking into account the degree information, the proposed generalized model can deduce collaborative filtering, the interdisciplinary physics approaches, and even enormous expansions of them. Furthermore, we analyze the generalized model with single and hybrid degree information on the process of random walk in bipartite networks, and propose a possible strategy of using the hybrid degree information for different popular objects to move toward promising precision of the recommendation.

  6. Connectivity properties of the random-cluster model

    Science.gov (United States)

    Weigel, Martin; Metin Elci, Eren; Fytas, Nikolaos G.

    2016-02-01

    We investigate the connectivity properties of the random-cluster model mediated by bridge bonds that, if removed, lead to the generation of new connected components. We study numerically the density of bridges and the fragmentation kernel, i.e., the relative sizes of the generated fragments, and find that these quantities follow a scaling description. The corresponding scaling exponents are related to well-known equilibrium critical exponents of the model. Using the Russo-Margulis formalism, we derive an exact relation between the expected density of bridges and the number of active edges. The same approach allows us to study the fluctuations in the numbers of bridges, thereby uncovering a new singularity in the random-cluster model, in which clusters connected by bridges and candidate-bridges play a pivotal role. We discuss several different implementations of the necessary connectivity algorithms and assess their relative performance.
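    On the graph side, the density of bridges is easy to measure with networkx's built-in bridge finder, a convenient sanity check against the scaling description; the example graphs below are illustrative, not configurations of the random-cluster model itself.

    ```python
    import networkx as nx

    def bridge_density(G):
        """Fraction of edges whose removal would split a component."""
        return sum(1 for _ in nx.bridges(G)) / G.number_of_edges()

    # a tree is all bridges; adding redundancy removes most of them
    print(bridge_density(nx.barabasi_albert_graph(500, 1, seed=1)))  # 1.0
    print(bridge_density(nx.barabasi_albert_graph(500, 3, seed=1)))  # small
    ```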

  7. The utility of Earth system Models of Intermediate Complexity

    NARCIS (Netherlands)

    Weber, S.L.

    2010-01-01

    Intermediate-complexity models are models which describe the dynamics of the atmosphere and/or ocean in less detail than conventional General Circulation Models (GCMs). At the same time, they go beyond the approach taken by atmospheric Energy Balance Models (EBMs) or ocean box models by

  8. Electric utility capacity expansion and energy production models for energy policy analysis

    Energy Technology Data Exchange (ETDEWEB)

    Aronson, E.; Edenburn, M.

    1997-08-01

    This report describes electric utility capacity expansion and energy production models developed for energy policy analysis. The models use the same principles (life cycle cost minimization, least operating cost dispatching, and incorporation of outages and reserve margin) as comprehensive utility capacity planning tools, but are faster and simpler. The models were not designed for detailed utility capacity planning, but they can be used to accurately project trends on a regional level. Because they use the same principles as comprehensive utility capacity expansion planning tools, the models are more realistic than utility modules used in present policy analysis tools. They can be used to help forecast the effects energy policy options will have on future utility power generation capacity expansion trends and to help formulate a sound national energy strategy. The models make renewable energy source competition realistic by giving proper value to intermittent renewable and energy storage technologies, and by competing renewables against each other as well as against conventional technologies.

  9. Generalized random sign and alert delay models for imperfect maintenance.

    Science.gov (United States)

    Dijoux, Yann; Gaudoin, Olivier

    2014-04-01

    This paper considers the modelling of the process of Corrective and condition-based Preventive Maintenance, for complex repairable systems. In order to take into account the dependency between both types of maintenance and the possibility of imperfect maintenance, Generalized Competing Risks models have been introduced in "Doyen and Gaudoin (J Appl Probab 43:825-839, 2006)". In this paper, we study two classes of these models, the Generalized Random Sign and Generalized Alert Delay models. A Generalized Competing Risks model can be built as a generalization of a particular Usual Competing Risks model, either by using a virtual age framework or not. The models properties are studied and their parameterizations are discussed. Finally, simulation results and an application to real data are presented.

  10. A Framework for Organizing Current and Future Electric Utility Regulatory and Business Models

    Energy Technology Data Exchange (ETDEWEB)

    Satchwell, Andrew [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Cappers, Peter [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Schwartz, Lisa [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Fadrhonc, Emily Martin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-06-01

    In this report, we will present a descriptive and organizational framework for incremental and fundamental changes to regulatory and utility business models in the context of clean energy public policy goals. We will also discuss the regulated utility's role in providing value-added services that relate to distributed energy resources, identify the "openness" of customer information and utility networks necessary to facilitate change, and discuss the relative risks, and the shifting of risks, for utilities and customers.

  11. Random regret-based discrete-choice modelling: an application to healthcare.

    Science.gov (United States)

    de Bekker-Grob, Esther W; Chorus, Caspar G

    2013-07-01

    A new modelling approach for analysing data from discrete-choice experiments (DCEs) has been recently developed in transport economics based on the notion of regret minimization-driven choice behaviour. This so-called Random Regret Minimization (RRM) approach forms an alternative to the dominant Random Utility Maximization (RUM) approach. The RRM approach is able to model semi-compensatory choice behaviour and compromise effects, while being as parsimonious and formally tractable as the RUM approach. Our objectives were to introduce the RRM modelling approach to healthcare-related decisions, and to investigate its usefulness in this domain. Using data from DCEs aimed at determining valuations of attributes of osteoporosis drug treatments and human papillomavirus (HPV) vaccinations, we empirically compared RRM models, RUM models and Hybrid RUM-RRM models in terms of goodness of fit, parameter ratios and predicted choice probabilities. In terms of model fit, the RRM model did not outperform the RUM model significantly in the case of the osteoporosis DCE data (p = 0.21), whereas in the case of the HPV DCE data, the Hybrid RUM-RRM model outperformed the RUM model (p < 0.05). Differences in predicted choice probabilities between RUM models and (Hybrid RUM-) RRM models were small. Derived parameter ratios did not differ significantly between model types, but trade-offs between attributes implied by the two models can vary substantially. Differences in model fit between RUM, RRM and Hybrid RUM-RRM were found to be small. Although our study did not show significant differences in parameter ratios, the RRM and Hybrid RUM-RRM models did feature considerable differences in terms of the trade-offs implied by these ratios. In combination, our results suggest that the RRM and Hybrid RUM-RRM modelling approaches hold the potential to offer new and policy-relevant insights for health researchers and policy makers.
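    For concreteness, the sketch below contrasts the two choice rules on a toy choice set: RRM scores each alternative by its summed attribute-wise regret against every rival, ln(1 + exp(beta*(x_j - x_i))), and applies a logit to negative regret, while RUM is the familiar multinomial logit. The attribute values and coefficients are illustrative, not estimates from the DCE data.

    ```python
    import numpy as np

    def rrm_probs(X, beta):
        """Random regret minimization: logit over negative total regret."""
        n = len(X)
        R = np.zeros(n)
        for i in range(n):
            for j in range(n):
                if j != i:
                    R[i] += np.log1p(np.exp(beta * (X[j] - X[i]))).sum()
        w = np.exp(-R)
        return w / w.sum()

    def rum_probs(X, beta):
        """Random utility maximization: standard multinomial logit."""
        v = np.exp(X @ beta)
        return v / v.sum()

    X = np.array([[3.0, 1.0],    # three alternatives, two attributes
                  [2.0, 2.0],    # the middle, compromise option
                  [1.0, 3.0]])
    beta = np.array([0.8, 0.5])
    print(rrm_probs(X, beta))    # the compromise fares relatively better
    print(rum_probs(X, beta))
    ```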

  12. Crash Frequency Analysis Using Hurdle Models with Random Effects Considering Short-Term Panel Data.

    Science.gov (United States)

    Chen, Feng; Ma, Xiaoxiang; Chen, Suren; Yang, Lin

    2016-10-26

    Random effect panel data hurdle models are established to research the daily crash frequency on a mountainous section of highway I-70 in Colorado. Real-time traffic, weather, and road surface conditions from the Road Weather Information System (RWIS) are merged into the models, incorporating road characteristics. The random effect hurdle negative binomial (REHNB) model is developed to study the daily crash frequency along with three other competing models. The proposed model considers the serial correlation of observations, the unbalanced panel-data structure, and dominating zeroes. Based on several statistical tests, the REHNB model is identified as the most appropriate one among the four candidate models for a typical mountainous highway. The results show that: (1) the presence of over-dispersion in the short-term crash frequency data is due to both excess zeros and unobserved heterogeneity in the crash data; and (2) the REHNB model is suitable for this type of data. Moreover, time-varying variables including weather conditions, road surface conditions and traffic conditions are found to play important roles in crash frequency. Besides the methodological advancements, the proposed technology bears great potential for engineering applications to develop short-term crash frequency models by utilizing detailed data from field monitoring systems such as RWIS, which are becoming more accessible around the world.
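    The hurdle structure separates whether any crash occurs on a day from how many occur given at least one. The simulation sketch below shows that two-part mechanism with a logit hurdle and a zero-truncated negative binomial count part; the coefficients are illustrative and the random effects are omitted for brevity.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def simulate_hurdle_nb(X, b_zero, b_count, shape=1.0):
        """Counts from a hurdle model: logit for P(y > 0), zero-truncated
        negative binomial (via rejection) for the positive part."""
        p_pos = 1.0 / (1.0 + np.exp(-X @ b_zero))
        mu = np.exp(X @ b_count)
        p = shape / (shape + mu)                  # NB success probability
        y = np.zeros(len(X), dtype=int)
        for i in np.flatnonzero(rng.random(len(X)) < p_pos):
            draw = 0
            while draw == 0:                      # truncate away the zeros
                draw = rng.negative_binomial(shape, p[i])
            y[i] = draw
        return y

    X = np.column_stack([np.ones(1000), rng.normal(size=1000)])
    y = simulate_hurdle_nb(X, np.array([-1.0, 0.8]), np.array([0.2, 0.5]))
    print("share of zero days:", (y == 0).mean())
    ```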

  13. Shape modelling using Markov random field restoration of point correspondences.

    Science.gov (United States)

    Paulsen, Rasmus R; Hilger, Klaus B

    2003-07-01

    A method for building statistical point distribution models is proposed. The novelty in this paper is the adaptation of Markov random field regularization of the correspondence field over the set of shapes. The new approach leads to a generative model that produces highly homogeneous polygonized shapes and improves the capability of reconstruction of the training data. Furthermore, the method leads to an overall reduction in the total variance of the point distribution model. Thus, it finds correspondence between semi-landmarks that are highly correlated in the shape tangent space. The method is demonstrated on a set of human ear canals extracted from 3D-laser scans.

  14. Many-body localization in the quantum random energy model

    Science.gov (United States)

    Laumann, Chris; Pal, Arijeet

    2014-03-01

    The quantum random energy model is a canonical toy model for a quantum spin glass with a well known phase diagram. We show that the model exhibits a many-body localization-delocalization transition at finite energy density which significantly alters the interpretation of the statistical "frozen" phase at lower temperature in isolated quantum systems. The transition manifests in many-body level statistics as well as the long time dynamics of on-site observables. CRL thanks the Perimeter Institute for hospitality and support.

  15. Shape Modelling Using Markov Random Field Restoration of Point Correspondences

    DEFF Research Database (Denmark)

    Paulsen, Rasmus Reinhold; Hilger, Klaus Baggesen

    2003-01-01

    A method for building statistical point distribution models is proposed. The novelty in this paper is the adaptation of Markov random field regularization of the correspondence field over the set of shapes. The new approach leads to a generative model that produces highly homogeneous polygonized shapes and improves the capability of reconstruction of the training data. Furthermore, the method leads to an overall reduction in the total variance of the point distribution model. Thus, it finds correspondence between semilandmarks that are highly correlated in the shape tangent space. The method is demonstrated on a set of human ear canals extracted from 3D-laser scans.

  16. A Random Dot Product Model for Weighted Networks

    CERN Document Server

    DeFord, Daryl R

    2016-01-01

    This paper presents a generalization of the random dot product model for networks whose edge weights are drawn from a parametrized probability distribution. We focus on the case of integer weight edges and show that many previously studied models can be recovered as special cases of this generalization. Our model also determines a dimension--reducing embedding process that gives geometric interpretations of community structure and centrality. The dimension of the embedding has consequences for the derived community structure and we exhibit a stress function for determining appropriate dimensions. We use this approach to analyze a coauthorship network and voting data from the U.S. Senate.
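
    A minimal sketch of the generative side of this model, under the illustrative assumptions of two latent dimensions and Poisson-distributed integer edge weights with mean equal to the dot product of the endpoints' latent positions:

```python
# Weighted random dot product graph: latent positions -> integer weights.
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 2
X = rng.uniform(0.1, 0.7, size=(n, d))     # latent node positions
M = X @ X.T                                 # pairwise dot products (means)
W = rng.poisson(M)                          # integer edge weights
W = np.triu(W, 1)                           # drop self-loops and lower half
W = W + W.T                                 # symmetrize

print("mean edge weight:", W[np.triu_indices(n, 1)].mean().round(3))
```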

  17. Modeling random combustion of lycopodium particles and gas

    Directory of Open Access Journals (Sweden)

    M Bidabadi

    2016-06-01

    Random modeling of lycopodium particle combustion has been studied by many authors. In this paper, we extend this model and develop a different method by analyzing the effect of randomly distributed sources of combustible mixture. The flame structure is assumed to consist of a preheat-vaporization zone, a reaction zone and a post-flame zone. We divide the preheat zone into different sections and assume that the particle distribution differs between sections and is genuinely random. Meanwhile, it is presumed that the fuel particles vaporize first to yield gaseous fuel; in other words, most of the fuel particles are vaporized by the end of the preheat zone. The Zel'dovich number is assumed to be large; therefore, the reaction term in the preheat zone is negligible. In this work, the effect of the random distribution of particles in the preheat zone on combustion characteristics such as burning velocity and flame temperature is obtained for different particle radii.

  18. Modeling energy flexibility of low energy buildings utilizing thermal mass

    DEFF Research Database (Denmark)

    Foteinaki, Kyriaki; Heller, Alfred; Rode, Carsten

    2016-01-01

    In the future energy system a considerable increase in the penetration of renewable energy is expected, challenging the stability of the system, as both production and consumption will have fluctuating patterns. Hence, the concept of energy flexibility will be necessary in order for consumption to match the production patterns, shifting demand from on-peak hours to off-peak hours. Buildings could act as flexibility suppliers to the energy system through load shifting potential, provided that the large thermal mass of the building stock could be utilized for energy storage. In the present study the load shifting potential of an apartment in a low energy building in Copenhagen is assessed, utilizing the heat storage capacity of the thermal mass when the heating system is switched off for relieving the energy system. It is shown that when using a 4-hour preheating period before switching off…

  19. SEMPATH Ontology: modeling multidisciplinary treatment schemes utilizing semantics.

    Science.gov (United States)

    Alexandrou, Dimitrios Al; Pardalis, Konstantinos V; Bouras, Thanassis D; Karakitsos, Petros; Mentzas, Gregoris N

    2012-03-01

    Demand for treatment quality has increased dramatically during the last decades. The main challenge to be confronted, in order to increase treatment quality, is the personalization of treatment, since each patient constitutes a unique case. Healthcare provision is a complex environment, since healthcare organizations are highly multidisciplinary. In this paper, we present a conceptualization of the domain of clinical pathways (CP). The SEMPATH (SEMantic PATHways) Ontology comprises three main parts: 1) the CP part; 2) the business and finance part; and 3) the quality assurance part. Our implementation achieves the conceptualization of the multidisciplinary domain of healthcare provision, to be further utilized for the implementation of a Semantic Web Rule Language (SWRL) rules repository. Finally, the SEMPATH Ontology is utilized for the definition of a set of SWRL rules for human papillomavirus (HPV) disease and its treatment scheme. © 2012 IEEE

  20. Spatially random models, estimation theory, and robot arm dynamics

    Science.gov (United States)

    Rodriguez, G.

    1987-01-01

    Spatially random models provide an alternative to the more traditional deterministic models used to describe robot arm dynamics. These alternative models can be used to establish a relationship between the methodologies of estimation theory and robot dynamics. A new class of algorithms for many of the fundamental robotics problems of inverse and forward dynamics, inverse kinematics, etc. can be developed that use computations typical in estimation theory. The algorithms make extensive use of the difference equations of Kalman filtering and Bryson-Frazier smoothing to conduct spatial recursions. The spatially random models are very easy to describe and are based on the assumption that all of the inertial (D'Alembert) forces in the system are represented by a spatially distributed white-noise model. The models can also be used to generate numerically the composite multibody system inertia matrix. This is done without resorting to the more common methods of deterministic modeling involving Lagrangian dynamics, Newton-Euler equations, etc. Those methods make substantial use of human knowledge in the derivation and manipulation of equations of motion for complex mechanical systems.

  1. Identification of a potential fibromyalgia diagnosis using random forest modeling applied to electronic medical records

    Directory of Open Access Journals (Sweden)

    Emir B

    2015-06-01

    Background: Diagnosis of fibromyalgia (FM), a chronic musculoskeletal condition characterized by widespread pain and a constellation of symptoms, remains challenging and is often delayed. Methods: Random forest modeling of electronic medical records was used to identify variables that may facilitate earlier FM identification and diagnosis. Subjects aged ≥18 years with two or more listings of the International Classification of Diseases, Ninth Revision (ICD-9) code for FM (ICD-9 729.1) ≥30 days apart during the 2012 calendar year were defined as cases among subjects associated with an integrated delivery network and who had one or more health care provider encounters in the Humedica database in calendar years 2011 and 2012. Controls were those without the FM ICD-9 codes. Seventy-two demographic, clinical, and health care resource utilization variables were entered into a random forest model with downsampling to account for cohort imbalances (<1% of subjects had FM). The importance of the top ten variables was ranked based on normalization to 100% for the variable with the largest loss in predictive performance by its omission from the model. Since random forest is a complex prediction method, a set of simple rules was derived to help understand what factors drive individual predictions. Results: The ten variables identified by the model were: number of visits where laboratory/non-imaging diagnostic tests were ordered; number of outpatient visits excluding office visits; age; number of office visits; number of opioid prescriptions; number of medications prescribed; number of pain medications excluding opioids; number of medications administered/ordered; number of emergency room visits; and number of musculoskeletal conditions. A receiver operating…
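
    The modeling strategy described above, a random forest with majority-class downsampling and importance ranking, can be sketched with scikit-learn on synthetic data; the dataset, class balance, and settings below are illustrative stand-ins, not the Humedica cohort.

```python
# Random forest on a rare-outcome classification task with downsampling.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=5000, n_features=20, weights=[0.99],
                           random_state=1)          # ~1% positive cases
rng = np.random.default_rng(1)
pos = np.where(y == 1)[0]
neg = rng.choice(np.where(y == 0)[0], size=len(pos), replace=False)
idx = np.concatenate([pos, neg])                     # balanced subsample

rf = RandomForestClassifier(n_estimators=500, random_state=1)
rf.fit(X[idx], y[idx])
top10 = np.argsort(rf.feature_importances_)[::-1][:10]
print("top-10 variables by importance:", top10)
```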

  2. Prediction models for clustered data: comparison of a random intercept and standard regression model.

    Science.gov (United States)

    Bouwmeester, Walter; Twisk, Jos W R; Kappen, Teus H; van Klei, Wilton A; Moons, Karel G M; Vergouwe, Yvonne

    2013-02-15

    When study data are clustered, standard regression analysis is considered inappropriate, and analytical techniques for clustered data need to be used. For prediction research in which interest centers on patient-level predictor effects, random effect regression models are probably preferred over standard regression analysis. It is well known that random effect parameter estimates and standard logistic regression parameter estimates differ. Here, we compared random effect and standard logistic regression models for their ability to provide accurate predictions. Using an empirical study on 1642 surgical patients at risk of postoperative nausea and vomiting, who were treated by one of 19 anesthesiologists (clusters), we developed prognostic models with either standard or random intercept logistic regression. External validity of these models was assessed in new patients from other anesthesiologists. We supported our results with simulation studies using intra-class correlation coefficients (ICC) of 5%, 15%, or 30%. Standard performance measures and measures adapted to the clustered data structure were estimated. The model developed with random effect analysis showed better discrimination than the standard approach if the cluster effects were used for risk prediction (standard c-index of 0.69 versus 0.66). In the external validation set, both models showed similar discrimination (standard c-index 0.68 versus 0.67). The simulation study confirmed these results. For datasets with a high ICC (≥15%), model calibration was only adequate in external subjects if the performance measure used assumed the same data structure as the model development method: standard calibration measures showed good calibration for the standard developed model, while calibration measures adapted to the clustered data structure showed good calibration for the prediction model with random intercept. The models with random intercept discriminate better than the standard model only…

  3. Least squares estimation in a simple random coefficient autoregressive model

    DEFF Research Database (Denmark)

    Johansen, Søren; Lange, Theis

    2013-01-01

    The question we discuss is whether a simple random coefficient autoregressive model with infinite variance can create the long swings, or persistence, which are observed in many macroeconomic variables. The model is defined by y_t = s_t ρ y_{t−1} + ε_t, t = 1, …, n, where s_t is an i.i.d. binary variable with success probability p. We prove a curious result about the limit of the least squares estimator. The proof applies the notion of a tail index of sums of positive random variables with infinite variance to find the order of magnitude of the relevant sums and hence the limit of the estimator.
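
    A minimal simulation of this model and its least squares estimator, with heavy-tailed (infinite-variance) errors and illustrative parameter values:

```python
# Simulate y_t = s_t * rho * y_{t-1} + eps_t and estimate rho by OLS.
import numpy as np

rng = np.random.default_rng(42)
n, rho, p = 5000, 0.9, 0.5
eps = rng.standard_cauchy(n)        # heavy-tailed errors, infinite variance
s = rng.random(n) < p               # i.i.d. binary switch
y = np.zeros(n)
for t in range(1, n):
    y[t] = s[t] * rho * y[t - 1] + eps[t]

rho_hat = (y[1:] * y[:-1]).sum() / (y[:-1] ** 2).sum()
print("least squares estimate of rho:", rho_hat.round(3))
```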

  4. Random unitary evolution model of quantum Darwinism with pure decoherence

    Science.gov (United States)

    Balanesković, Nenad

    2015-10-01

    We study the behavior of Quantum Darwinism [W.H. Zurek, Nat. Phys. 5, 181 (2009)] within the iterative, random unitary operations qubit-model of pure decoherence [J. Novotný, G. Alber, I. Jex, New J. Phys. 13, 053052 (2011)]. We conclude that Quantum Darwinism, which describes the quantum mechanical evolution of an open system S from the point of view of its environment E, is not a generic phenomenon, but depends on the specific form of input states and on the type of S-E interactions. Furthermore, we show that within the random unitary model the concept of Quantum Darwinism enables one to explicitly construct and specify artificial input states of the environment E that allow information about an open system S of interest to be stored with maximal efficiency.

  5. Statistical Modeling of Robotic Random Walks on Different Terrain

    Science.gov (United States)

    Naylor, Austin; Kinnaman, Laura

    Issues of public safety, especially with crowd dynamics and pedestrian movement, have been modeled by physicists using methods from statistical mechanics over the last few years. Complex decision making of humans moving on different terrains can be modeled using random walks (RW) and correlated random walks (CRW). The effect of different terrains, such as a constantly increasing slope, on RW and CRW was explored. LEGO robots were programmed to make RW and CRW with uniform step sizes. Level ground tests demonstrated that the robots had the expected step size distribution and correlation angles (for CRW). The mean square displacement was calculated for each RW and CRW on different terrains and matched expected trends. The step size distribution was determined to change based on the terrain; theoretical predictions for the step size distribution were made for various simple terrains.
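
    A correlated random walk of the kind described is easy to simulate: headings evolve by small random turns, so direction persists between steps. The sketch below uses invented parameters and a single realization.

```python
# Correlated random walk with uniform step size and Gaussian turning angles.
import numpy as np

rng = np.random.default_rng(3)
n_steps, step, sigma = 1000, 1.0, 0.5    # sigma: turning-angle std (radians)
turns = rng.normal(0.0, sigma, n_steps)
theta = np.cumsum(turns)                 # heading persists between steps
steps = step * np.column_stack([np.cos(theta), np.sin(theta)])
path = np.vstack([np.zeros(2), np.cumsum(steps, axis=0)])

sq_disp = (path ** 2).sum(axis=1)        # squared displacement from origin
print("final squared displacement:", sq_disp[-1].round(1))
```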

  6. Social aggregation in pea aphids: experiment and random walk modeling.

    Directory of Open Access Journals (Sweden)

    Christa Nilsen

    From bird flocks to fish schools and ungulate herds to insect swarms, social biological aggregations are found across the natural world. An ongoing challenge in the mathematical modeling of aggregations is to strengthen the connection between models and biological data by quantifying the rules that individuals follow. We model aggregation of the pea aphid, Acyrthosiphon pisum. Specifically, we conduct experiments to track the motion of aphids walking in a featureless circular arena in order to deduce individual-level rules. We observe that each aphid transitions stochastically between a moving and a stationary state. Moving aphids follow a correlated random walk. The probabilities of motion state transitions, as well as the random walk parameters, depend strongly on distance to an aphid's nearest neighbor. For large nearest neighbor distances, when an aphid is essentially isolated, its motion is ballistic with aphids moving faster, turning less, and being less likely to stop. In contrast, for short nearest neighbor distances, aphids move more slowly, turn more, and are more likely to become stationary; this behavior constitutes an aggregation mechanism. From the experimental data, we estimate the state transition probabilities and correlated random walk parameters as a function of nearest neighbor distance. With the individual-level model established, we assess whether it reproduces the macroscopic patterns of movement at the group level. To do so, we consider three distributions, namely distance to nearest neighbor, angle to nearest neighbor, and percentage of population moving at any given time. For each of these three distributions, we compare our experimental data to the output of numerical simulations of our nearest neighbor model, and of a control model in which aphids do not interact socially. Our stochastic, social nearest neighbor model reproduces salient features of the experimental data that are not captured by the control.

  7. A Model for the Utilization of Formative Evaluation in the Process of Developing Instructional Materials.

    Science.gov (United States)

    Nevo, David

    An applied model is presented for the utilization of formative evaluation in developing instructional materials. The model is introduced through a presentation of its conceptual rationale. A description is given of its methodology and instrumentation, and a demonstration presented of its development and utilization within the framework of a…

  8. Calibration of stormwater quality regression models: a random process?

    Science.gov (United States)

    Dembélé, A; Bertrand-Krajewski, J-L; Barillon, B

    2010-01-01

    Regression models are among the most frequently used models to estimate pollutant event mean concentrations (EMC) in wet weather discharges in urban catchments. Two main questions dealing with the calibration of EMC regression models are investigated: i) the sensitivity of models to the size and the content of the data sets used for their calibration, and ii) how modelling results change when models are re-calibrated as data sets grow and change over time with newly collected experimental data. Based on an experimental data set of 64 rain events monitored in a densely urbanised catchment, four TSS EMC regression models (two log-linear and two linear models) with two or three explanatory variables have been derived and analysed. Model calibration with the iteratively re-weighted least squares method is less sensitive and leads to more robust results than the ordinary least squares method. Three calibration options have been investigated: two options accounting for the chronological order of the observations, and one option using random samples of events from the whole available data set. Results obtained with the best-performing nonlinear model clearly indicate that the model is highly sensitive to the size and the content of the data set used for its calibration.
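
    The robust calibration idea, iteratively re-weighted least squares, can be sketched as follows; the data, the Huber-type weight function, and the tuning constants are illustrative choices, not those of the study.

```python
# Iteratively re-weighted least squares for a linear EMC-style regression,
# down-weighting large residuals with Huber-type weights.
import numpy as np

rng = np.random.default_rng(5)
n = 64                                        # one row per rain event
X = np.column_stack([np.ones(n), rng.random(n), rng.random(n)])
beta_true = np.array([50.0, 30.0, -20.0])
y = X @ beta_true + rng.normal(0, 10, n)
y[:5] += 80                                   # a few outlying events

beta = np.linalg.lstsq(X, y, rcond=None)[0]   # ordinary least squares start
for _ in range(20):
    r = y - X @ beta
    s = np.median(np.abs(r)) / 0.6745 + 1e-9  # robust residual scale
    w = np.minimum(1.0, 1.345 * s / (np.abs(r) + 1e-9))  # Huber weights
    sw = np.sqrt(w)
    beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]

print("IRLS coefficients:", beta.round(1))
```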

  9. Richly parameterized linear models additive, time series, and spatial models using random effects

    CERN Document Server

    Hodges, James S

    2013-01-01

    A First Step toward a Unified Theory of Richly Parameterized Linear Models. Using mixed linear models to analyze data often leads to results that are mysterious, inconvenient, or wrong. Further compounding the problem, statisticians lack a cohesive resource to acquire a systematic, theory-based understanding of models with random effects. Richly Parameterized Linear Models: Additive, Time Series, and Spatial Models Using Random Effects takes a first step in developing a full theory of richly parameterized models, which would allow statisticians to better understand their analysis results.

  10. High-temperature series expansions for random Potts models

    Directory of Open Access Journals (Sweden)

    M.Hellmund

    2005-01-01

    We discuss recently generated high-temperature series expansions for the free energy and the susceptibility of random-bond q-state Potts models on hypercubic lattices. Using the star-graph expansion technique, quenched disorder averages can be calculated exactly for arbitrary uncorrelated coupling distributions while keeping the disorder strength p as well as the dimension d as symbolic parameters. We present analyses of the new series for the susceptibility of the Ising (q=2) and 4-state Potts model in three dimensions up to order 19 and 18, respectively, and compare our findings with results from field-theoretical renormalization group studies and Monte Carlo simulations.

  11. Marginal and Random Intercepts Models for Longitudinal Binary Data with Examples from Criminology

    Science.gov (United States)

    Long, Jeffrey D.; Loeber, Rolf; Farrington, David P.

    2009-01-01

    Two models for the analysis of longitudinal binary data are discussed: the marginal model and the random intercepts model. In contrast to the linear mixed model (LMM), the two models for binary data are not subsumed under a single hierarchical model. The marginal model provides group-level information whereas the random intercepts model provides…

  12. Random Predictor Models for Rigorous Uncertainty Quantification: Part 1

    Science.gov (United States)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity, in which the mean and the variance of the model's parameters, and thus of the predicted output, are prescribed. As such, they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfy mild stochastic assumptions and the optimization problem used to calculate the RPM is convex (or its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, can be bounded tightly and rigorously.

  13. Random matrices and the six-vertex model

    CERN Document Server

    Bleher, Pavel

    2013-01-01

    This book provides a detailed description of the Riemann-Hilbert approach (RH approach) to the asymptotic analysis of both continuous and discrete orthogonal polynomials, and applications to random matrix models as well as to the six-vertex model. The RH approach was an important ingredient in the proofs of universality in unitary matrix models. This book gives an introduction to the unitary matrix models and discusses bulk and edge universality. The six-vertex model is an exactly solvable two-dimensional model in statistical physics, and thanks to the Izergin-Korepin formula for the model with domain wall boundary conditions, its partition function matches that of a unitary matrix model with nonpolynomial interaction. The authors introduce in this book the six-vertex model and include a proof of the Izergin-Korepin formula. Using the RH approach, they explicitly calculate the leading and subleading terms in the thermodynamic asymptotic behavior of the partition function of the six-vertex model with domain wa...

  14. The Impact of Community Engagement on Health, Social, and Utilization Outcomes in Depressed, Impoverished Populations: Secondary Findings from a Randomized Trial.

    Science.gov (United States)

    Lam, Christine A; Sherbourne, Cathy; Tang, Lingqi; Belin, Thomas R; Williams, Pluscedia; Young-Brinn, Angela; Miranda, Jeanne; Wells, Kenneth B

    2016-01-01

    Disparities in depression care exist among the poor. Community Partners in Care (CPIC) compared a community coalition model with technical assistance to improve depression services in under-resourced communities. We examine effects on health, social, and utilization outcomes among the depressed poor and non-poor, and among poor subgroups. This study analyzed clients living above (n = 268) and below (n = 750) the federal poverty level and, among the poor, 3 nonoverlapping subgroups: justice-involved (n = 158), homeless and not justice-involved (n = 298), and other poor (n = 294). Matched programs (n = 93) from health and community sectors were randomly assigned to community engagement and planning (CEP) or resources for services (RS). Primary outcomes were poor mental health-related quality of life and 8-item Patient Health Questionnaire scores, whereas community-prioritized and utilization outcomes were secondary. Effects were scrutinized using false discovery rate-adjusted P values to account for multiple comparisons. In the impoverished group, CEP and RS clients of participating study programs did not differ in primary outcomes, but CEP improved mental wellness among the depressed poor more than RS did (unadjusted P = .004), while providing suggestive evidence for other secondary outcomes. Within the poor subgroups, evidence favoring CEP was only suggestive but was strongest among justice-involved clients. A coalition approach to improving outcomes for low-income clients with depression, particularly those involved in the justice system, may offer additional benefits over standard technical assistance programs. © Copyright 2016 by the American Board of Family Medicine.

  15. Recent advances in modeling nutrient utilization in ruminants

    NARCIS (Netherlands)

    Kebreab, E.; Dijkstra, J.; Bannink, A.; France, J.

    2009-01-01

    Mathematical modeling techniques have been applied to study various aspects of the ruminant, such as rumen function, post-absorptive metabolism and product composition. This review focuses on advances made in modeling rumen fermentation and its associated rumen disorders, and energy and nutrient

  16. Nonparametric Estimation of Distributions in Random Effects Models

    KAUST Repository

    Hart, Jeffrey D.

    2011-01-01

    We propose using minimum distance to obtain nonparametric estimates of the distributions of components in random effects models. A main setting considered is equivalent to having a large number of small datasets whose locations, and perhaps scales, vary randomly, but which otherwise have a common distribution. Interest focuses on estimating the distribution that is common to all datasets, knowledge of which is crucial in multiple testing problems where a location/scale invariant test is applied to every small dataset. A detailed algorithm for computing minimum distance estimates is proposed, and the usefulness of our methodology is illustrated by a simulation study and an analysis of microarray data. Supplemental materials for the article, including R-code and a dataset, are available online. © 2011 American Statistical Association.

  17. Statistical Downscaling of Temperature with the Random Forest Model

    Directory of Open Access Journals (Sweden)

    Bo Pang

    2017-01-01

    The issues in downscaling the outputs of a global climate model (GCM) to a regional scale appropriate for hydrological impact studies are investigated using the random forest (RF) model, which has been shown to be superior for large dataset analysis and variable importance evaluation. The RF is proposed for downscaling daily mean temperature in the Pearl River basin in southern China. Four downscaling models were developed and validated by using the observed temperature series from 61 national stations and large-scale predictor variables derived from the National Center for Environmental Prediction–National Center for Atmospheric Research reanalysis dataset. The proposed RF downscaling model was compared to multiple linear regression, artificial neural network, and support vector machine models. Principal component analysis (PCA) and partial correlation analysis (PAR) were used in the predictor selection for the other models for a comprehensive study. It was shown that the model efficiency of the RF model was higher than that of the other models according to five selected criteria. By evaluating predictor importance, the RF could choose the best predictor combination without using PCA and PAR. The results indicate that the RF is a feasible tool for the statistical downscaling of temperature.
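
    The core of the approach, regressing station temperature on large-scale predictors with a random forest and ranking the predictors by importance, can be sketched with scikit-learn; the predictor names and data below are invented stand-ins for the reanalysis variables.

```python
# RF-based statistical downscaling sketch with importance-based ranking.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(11)
n = 3000
preds = {"geopotential_500": rng.normal(size=n),
         "humidity_850": rng.normal(size=n),
         "mslp": rng.normal(size=n)}
X = np.column_stack(list(preds.values()))
t_mean = 0.8 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.5, n)

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, t_mean)
for name, imp in zip(preds, rf.feature_importances_):
    print(f"{name}: {imp:.2f}")   # rank predictors by importance
```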

  18. Quantification of Uncertainties in Turbulence Modeling: A Comparison of Physics-Based and Random Matrix Theoretic Approaches

    CERN Document Server

    Wang, Jian-Xun; Xiao, Heng

    2016-01-01

    Numerical models based on Reynolds-Averaged Navier-Stokes (RANS) equations are widely used in engineering turbulence modeling. However, the RANS predictions have large model-form uncertainties for many complex flows. Quantification of these large uncertainties originating from the modeled Reynolds stresses has attracted attention in turbulence modeling community. Recently, a physics-based Bayesian framework for quantifying model-form uncertainties has been proposed with successful applications to several flows. Nonetheless, how to specify proper priors without introducing unwarranted, artificial information remains challenging to the current form of the physics-based approach. Another recently proposed method based on random matrix theory provides the prior distributions with the maximum entropy, which is an alternative for model-form uncertainty quantification in RANS simulations. In this work, we utilize the random matrix theoretic approach to assess and possibly improve the specification of priors used in ...

  19. Relations between Lagrangian models and synthetic random velocity fields.

    Science.gov (United States)

    Olla, Piero; Paradisi, Paolo

    2004-10-01

    The authors propose an alternative interpretation of Markovian transport models based on the well-mixed condition, in terms of the properties of a random velocity field with second-order structure functions scaling linearly in the space-time increments. This interpretation allows direct association of the drift and noise terms entering the model with the geometry of the turbulent fluctuations. In particular, the well-known nonuniqueness problem in the well-mixed approach is solved in terms of the antisymmetric part of the velocity correlations; its relation with the presence of nonzero mean helicity and other geometrical properties of the flow is elucidated. The well-mixed condition appears to be a special case of the relation between conditional velocity increments of the random field and the one-point Eulerian velocity distribution, allowing generalization of the approach to the transport of nontracer quantities. Application to solid particle transport leads to a model satisfying, in the homogeneous isotropic turbulence case, all the conditions on the behavior of the correlation times for the fluid velocity sampled by the particles. In particular, correlation times are longer in the gravity-dominated case and shorter in the inertia-dominated case than in the passive tracer case; and in the gravity-dominated case, correlation times are longer for velocity components along gravity than for the perpendicular ones. The model produces, in channel flow geometry, particle deposition rates in agreement with experiments.

  20. Genetic evaluation of European quails by random regression models

    Directory of Open Access Journals (Sweden)

    Flaviana Miranda Gonçalves

    2012-09-01

    The objective of this study was to compare different random regression models, defined from different classes of heterogeneity of variance combined with different Legendre polynomial orders, for the estimation of (co)variances of quails. The data came from 28,076 observations of 4,507 female meat quails of the LF1 lineage. Quail body weights were determined at birth and 1, 14, 21, 28, 35 and 42 days of age. Six different classes of residual variance were fitted to Legendre polynomial functions (orders ranging from 2 to 6) to determine which model had the best fit to describe the (co)variance structures as a function of time. According to the evaluated criteria (AIC, BIC and LRT), the model with six classes of residual variances and a sixth-order Legendre polynomial was the best fit. The estimated additive genetic variance increased from birth to 28 days of age, and dropped slightly from 35 to 42 days. The heritability estimates decreased along the growth curve, from 0.51 (1 day) to 0.16 (42 days). Animal genetic and permanent environmental correlation estimates between weights and age classes were always high and positive, except for birth weight. The sixth-order Legendre polynomial, along with the residual variance divided into six classes, was the best fit for the growth rate curve of meat quails; therefore, they should be considered for breeding evaluation processes by random regression models.
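
    The deterministic ingredient of such a random regression model is a Legendre basis over standardized ages; a minimal sketch of building that design matrix, here for the sixth order selected in the study, is:

```python
# Legendre design matrix for a random regression model over age.
import numpy as np
from numpy.polynomial import legendre

ages = np.array([1, 7, 14, 21, 28, 35, 42], dtype=float)
a = 2 * (ages - ages.min()) / (ages.max() - ages.min()) - 1  # map to [-1, 1]

order = 6
# Column k evaluates the k-th Legendre polynomial at the standardized ages.
Phi = np.column_stack([legendre.legval(a, np.eye(order + 1)[k])
                       for k in range(order + 1)])
print(Phi.shape)  # (7 ages, 7 basis columns)
```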

  1. Rodent models of cardiopulmonary bypass: utility in improving perioperative outcomes

    NARCIS (Netherlands)

    de Lange, F.

    2008-01-01

    Despite advances in surgical and anesthesia techniques, subtle neurologic injury still remains an important complication after cardiac surgery. Because the causes are multifactorial and complex, research in an appropriate small animal model for cardiopulmonary bypass (CPB) is warranted. This thesis

  2. Marginal Utility of Conditional Sensitivity Analyses for Dynamic Models

    Science.gov (United States)

    Background/Question/Methods: Dynamic ecological processes may be influenced by many factors. Simulation models that mimic these processes often have complex implementations with many parameters. Sensitivity analyses are subsequently used to identify critical parameters whose uncertai...

  3. Animal models of obsessive–compulsive disorder: utility and limitations

    Directory of Open Access Journals (Sweden)

    Alonso P

    2015-08-01

    Abstract: Obsessive–compulsive disorder (OCD) is a disabling and common neuropsychiatric condition of poorly known etiology. Many attempts have been made in the last few years to develop animal models of OCD with the aim of clarifying the genetic, neurochemical, and neuroanatomical basis of the disorder, as well as of developing novel pharmacological and neurosurgical treatments that may help to improve the prognosis of the illness. The latter goal is particularly important given that around 40% of patients with OCD do not respond to currently available therapies. This article summarizes strengths and limitations of the leading animal models of OCD, including genetic, pharmacologically induced, behavioral manipulation-based, and neurodevelopmental models, according to their face, construct, and predictive validity. On the basis of this evaluation, we argue that currently labeled "animal models of OCD" should be regarded not as models of OCD but, rather, as animal models of different psychopathological processes, such as compulsivity, stereotypy, or perseverance, that are present not only in OCD but also in other psychiatric or neurological disorders. Animal models might constitute a challenging approach to study the neural and genetic mechanisms of these phenomena from a trans-diagnostic perspective. Animal models are also of particular interest as tools for developing new therapeutic options for OCD, with the greatest convergence focusing on the glutamatergic system, the role of ovarian and related hormones, and the exploration of new…

  4. Animal models of obsessive–compulsive disorder: utility and limitations

    Science.gov (United States)

    Alonso, Pino; López-Solà, Clara; Real, Eva; Segalàs, Cinto; Menchón, José Manuel

    2015-01-01

    Obsessive–compulsive disorder (OCD) is a disabling and common neuropsychiatric condition of poorly known etiology. Many attempts have been made in the last few years to develop animal models of OCD with the aim of clarifying the genetic, neurochemical, and neuroanatomical basis of the disorder, as well as of developing novel pharmacological and neurosurgical treatments that may help to improve the prognosis of the illness. The latter goal is particularly important given that around 40% of patients with OCD do not respond to currently available therapies. This article summarizes strengths and limitations of the leading animal models of OCD, including genetic, pharmacologically induced, behavioral manipulation-based, and neurodevelopmental models, according to their face, construct, and predictive validity. On the basis of this evaluation, we argue that currently labeled "animal models of OCD" should be regarded not as models of OCD but, rather, as animal models of different psychopathological processes, such as compulsivity, stereotypy, or perseverance, that are present not only in OCD but also in other psychiatric or neurological disorders. Animal models might constitute a challenging approach to study the neural and genetic mechanisms of these phenomena from a trans-diagnostic perspective. Animal models are also of particular interest as tools for developing new therapeutic options for OCD, with the greatest convergence focusing on the glutamatergic system, the role of ovarian and related hormones, and the exploration of new potential targets for deep brain stimulation. Finally, future research on neurocognitive deficits associated with OCD through the use of analogous animal tasks could also provide a genuine opportunity to disentangle the complex etiology of the disorder. PMID:26346234

  5. Tritium Specific Adsorption Simulation Utilizing the OSPREY Model

    Energy Technology Data Exchange (ETDEWEB)

    Veronica Rutledge; Lawrence Tavlarides; Ronghong Lin; Austin Ladshaw

    2013-09-01

    During the processing of used nuclear fuel, volatile radionuclides will be discharged to the atmosphere if no recovery processes are in place to limit their release. The volatile radionuclides of concern are 3H, 14C, 85Kr, and 129I. Methods are being developed, via adsorption and absorption unit operations, to capture these radionuclides. It is necessary to model these unit operations to aid in the evaluation of technologies and in the future development of an advanced used nuclear fuel processing plant. A collaboration between Fuel Cycle Research and Development Offgas Sigma Team member INL and a NEUP grant including ORNL, Syracuse University, and Georgia Institute of Technology has been formed to develop off-gas models and support off-gas research. This report discusses the development of a tritium-specific adsorption model. By integrating the OSPREY model with a fundamental-level isotherm model developed under the NEUP grant, and using experimental data provided by the grant, the tritium-specific adsorption model was developed.

  6. Predictive Modeling of Defibrillation utilizing Hexahedral and Tetrahedral Finite Element Models: Recent Advances

    Science.gov (United States)

    Triedman, John K.; Jolley, Matthew; Stinstra, Jeroen; Brooks, Dana H.; MacLeod, Rob

    2008-01-01

    ICD implants may be complicated by body size and anatomy. One approach to this problem has been the adoption of creative, extracardiac implant strategies using standard ICD components. Because data on the safety and efficacy of such ad hoc implant strategies are lacking, we have developed image-based finite element models (FEMs) to compare electric fields and expected defibrillation thresholds (DFTs) using standard and novel electrode locations. In this paper, we review recently published studies by our group using such models, and progress in meshing strategies to improve efficiency and visualization. Our preliminary observations predict that there may be large changes in DFTs with clinically relevant variations of electrode placement. Extracardiac ICDs of various lead configurations are predicted to be effective in both children and adults. This approach may aid both ICD development and patient-specific optimization of electrode placement, but the simplified nature of current models dictates further development and validation prior to clinical or industrial utilization. PMID:18817926

  7. Role Analysis in Networks using Mixtures of Exponential Random Graph Models.

    Science.gov (United States)

    Salter-Townshend, Michael; Murphy, Thomas Brendan

    2015-06-01

    A novel and flexible framework for investigating the roles of actors within a network is introduced. Particular interest is in roles as defined by local network connectivity patterns, identified using the ego-networks extracted from the network. A mixture of Exponential-family Random Graph Models is developed for these ego-networks in order to cluster the nodes into roles. We refer to this model as the ego-ERGM. An Expectation-Maximization algorithm is developed to infer the unobserved cluster assignments and to estimate the mixture model parameters using a maximum pseudo-likelihood approximation. The flexibility and utility of the method are demonstrated on examples of simulated and real networks.

  8. Exploring the Utility of Logistic Mixed Modeling Approaches to Simultaneously Investigate Item and Testlet DIF on Testlet-based Data.

    Science.gov (United States)

    Fukuhara, Hirotaka; Paek, Insu

    2016-01-01

    This study explored the utility of logistic mixed models for the analysis of differential item functioning (DIF) when item response data are testlet-based. Decomposition of DIF into the item level and the testlet level was introduced to separate possible sources of DIF: (1) an item, (2) a testlet, and (3) both the item and the testlet. A simulation study was conducted to investigate the performance of several logistic mixed models as well as the Mantel-Haenszel method under conditions in which item-related DIF and testlet-related DIF were present simultaneously. The results revealed that a new DIF model based on a logistic mixed model with random item effects and item covariates could capture item-related DIF and testlet-related DIF well under certain conditions.

  9. On the Utility of Island Models in Dynamic Optimization

    DEFF Research Database (Denmark)

    Lissovoi, Andrei; Witt, Carsten

    2015-01-01

    A simple island model with λ islands and migration occurring after every τ iterations is studied on the dynamic fitness function Maze. This model is equivalent to a (1+λ) EA if τ=1, i.e., migration occurs during every iteration. It is proved that even for an increased offspring population size up to λ = O(n^(1−ε)), the (1+λ) EA is still not able to track the optimum of Maze. If the migration interval is increased, the algorithm is able to track the optimum even for logarithmic λ. Finally, the relationship of τ, λ, and the ability of the island model to track the optimum is investigated more closely.

  10. Evaluating the performance and utility of regional climate models

    DEFF Research Database (Denmark)

    Christensen, Jens H.; Carter, Timothy R.; Rummukainen, Markku

    2007-01-01

    This special issue of Climatic Change contains a series of research articles documenting co-ordinated work carried out within a 3-year European Union project 'Prediction of Regional scenarios and Uncertainties for Defining European Climate change risks and Effects' (PRUDENCE). The main objective of the PRUDENCE project was to provide high resolution climate change scenarios for Europe at the end of the twenty-first century by means of dynamical downscaling (regional climate modelling) of global climate simulations. The first part of the issue comprises seven overarching PRUDENCE papers on: (1) the design of the model simulations and analyses of climate model performance, (2 and 3) evaluation and intercomparison of simulated climate changes, (4 and 5) specialised analyses of impacts on water resources and on other sectors including agriculture, ecosystems, energy, and transport, (6) investigation of extreme…

  11. Developing and utilizing controlled human models of infection.

    Science.gov (United States)

    Porter, Chad K; Louis Bourgeois, A; Frenck, Robert W; Prouty, Michael; Maier, Nicole; Riddle, Mark S

    2017-12-14

    The controlled human infection model (CHIM) to assess the efficacy of vaccines against Shigella and enterotoxigenic Escherichia coli (ETEC) has several unique features that could significantly enhance the ability to test candidate vaccines. Despite increasing interest in these models, questions remain as to how best to incorporate them into vaccine development and how to maximize results. We designed a workshop focused on the CHIM as part of the Vaccines Against Shigella and ETEC (VASE) Conference. The workshop, using the World Café method, focused on three areas: clinical outcomes, nonclinical outcomes, and model standardization. Researchers with a variety of expertise and experience rotated through each focus area and discussed relevant sub-topics. The results of these discussions were presented, and questions were posed to guide future workshops. Clinical endpoint discussions focused on the need for harmonized definitions, optimized attack rates, the difficulties of sample collection, and a need for non-stool-based endpoints. Nonclinical discussions centered on evolving omics-based opportunities, host predictors of susceptibility, and novel characterizations of the immune response. Model standardization discussions focused on the value of shared procedures across institutions for clinical and non-clinical endpoints as well as for strain preparation, administration, and subject selection. Participants agreed that CHIMs for Shigella and ETEC vaccine development could accelerate the development of a promising candidate; however, it was also appreciated that variability in the model and our limited understanding of the host-pathogen interaction may yield results that could negatively impact a suitable candidate. Future workshops on the CHIM are needed to ensure the optimal application of these models moving forward. Copyright © 2017. Published by Elsevier Ltd.

  12. Recursive inter-generational utility in global climate risk modeling

    Energy Technology Data Exchange (ETDEWEB)

    Minh, Ha-Duong [Centre International de Recherche sur l' Environnement et le Developpement (CIRED-CNRS), 75 - Paris (France); Treich, N. [Institut National de Recherches Agronomiques (INRA-LEERNA), 31 - Toulouse (France)

    2003-07-01

    This paper distinguishes relative risk aversion and resistance to inter-temporal substitution in climate risk modeling. Stochastic recursive preferences are introduced in a stylized numeric climate-economy model using preliminary IPCC 1998 scenarios. It shows that higher risk aversion increases the optimal carbon tax. Higher resistance to inter-temporal substitution alone has the same effect as increasing the discount rate, provided that the risk is not too large. We discuss the implications of these findings for the debate on discounting and sustainability under uncertainty. (author)

  13. User-owned utility models for rural electrification

    Energy Technology Data Exchange (ETDEWEB)

    Waddle, D.

    1997-12-01

    The author discusses the history of rural electric cooperatives (RECs) in the United States and the broader question of whether such organizations can serve as a model for rural electrification in other countries. The author points out the features of such cooperatives that have given them stability and strength, and emphasizes that for such programs to succeed, many of these same features must be present. He argues that the cooperative model is not outdated, but that it needs strong local support and a governmental structure which is supportive and, in particular, not negative.

  14. Super Yang-Mills theory as a random matrix model

    Energy Technology Data Exchange (ETDEWEB)

    Siegel, W. [Institute for Theoretical Physics, State University of New York, Stony Brook, New York 11794-3840 (United States)

    1995-07-15

    We generalize the Gervais-Neveu gauge to four-dimensional N=1 superspace. The model describes an N=2 super Yang-Mills theory. All chiral superfields (N=2 matter and ghost multiplets) exactly cancel to all loops. The remaining Hermitian scalar superfield (matrix) has a renormalizable massive propagator and simplified vertices. These properties are associated with N=1 supergraphs describing a superstring theory on a random lattice world sheet. We also consider all possible finite matrix models, and find they have a universal large-color limit. These could describe gravitational strings if the matrix-model coupling is fixed to unity, for exact electric-magnetic self-duality.

  15. Exponential random graph models for networks with community structure.

    Science.gov (United States)

    Fronczak, Piotr; Fronczak, Agata; Bujok, Maksymilian

    2013-09-01

    Although community structure organization is an important characteristic of real-world networks, most traditional network models fail to reproduce the feature. The models are therefore useless as benchmark graphs for testing community detection algorithms, and inadequate for predicting various properties of real networks. With this paper we intend to fill the gap. We develop an exponential random graph approach to networks with community structure. To this end we mainly build upon the idea of blockmodels. We consider both the classical blockmodel and its degree-corrected counterpart and study many of their properties analytically. We show that in the degree-corrected blockmodel, node degrees display an interesting scaling property, which is reminiscent of what is observed in real-world fractal networks. A short description of Monte Carlo simulations of the models is also given in the hope of being useful to others working in the field.
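
    A sampling sketch of the degree-corrected blockmodel discussed above, with invented group affinities and heavy-tailed node propensities to induce degree heterogeneity:

```python
# Sample a degree-corrected blockmodel: expected edge counts omega between
# groups are scaled by per-node propensities theta.
import numpy as np

rng = np.random.default_rng(2)
sizes = [40, 60]
g = np.repeat([0, 1], sizes)                      # group labels
omega = np.array([[8.0, 1.0], [1.0, 6.0]])        # block affinities
theta = rng.pareto(2.5, g.size) + 1               # heterogeneous propensities
theta /= np.array([theta[g == r].sum() for r in range(2)])[g]  # per-group norm

lam = theta[:, None] * theta[None, :] * omega[g][:, g]
A = rng.poisson(np.triu(lam, 1))                  # Poisson multigraph edges
A = A + A.T                                       # symmetrize, no self-loops

print("mean degree:", A.sum(axis=1).mean().round(2))
```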

  16. Utility covariances and context effects in conjoint MNP models

    NARCIS (Netherlands)

    Haaijer, M.E.; Wedel, M.; Vriens, M.; Wansbeek, T.J.

    1998-01-01

    Experimental conjoint choice analysis is among the most frequently used methods for measuring and analyzing consumer preferences. The data from such experiments have typically been analyzed with the Multinomial Logit (MNL) model. However, there are several problems associated with the standard MNL

  17. Lunar-Forming Giant Impact Model Utilizing Modern Graphics ...

    Indian Academy of Sciences (India)

    … the vast impact parameter space to identify plausible initial conditions. This is accomplished by focusing on the three major components of planetary collisions: constant gravitational attraction, short-range repulsion and energy transfer. The structure of this model makes it easily parallelizable and well-suited to harness the …

  18. Modelling Resource Utilization of a Large Data Acquisition System

    CERN Document Server

    Santos, Alejandro; The ATLAS collaboration

    2017-01-01

    The ATLAS 'Phase-II' upgrade, scheduled to start in 2024, will significantly change the requirements under which the data-acquisition system operates. The input data rate, currently fixed around 150 GB/s, is anticipated to reach 5 TB/s. In order to deal with the challenging conditions, and exploit the capabilities of newer technologies, a number of architectural changes are under consideration. Of particular interest is a new component, known as the Storage Handler, which will provide a large buffer area decoupling real-time data taking from event filtering. Dynamic operational models of the upgraded system can be used to identify the required resources and to select optimal techniques. In order to achieve a robust and dependable model, the current data-acquisition architecture has been used as a test case. This makes it possible to verify and calibrate the model against real operation data. Such a model can then be evolved toward the future ATLAS Phase-II architecture. In this paper we introduce the current ...

  19. Modeling Resource Utilization of a Large Data Acquisition System

    CERN Document Server

    Santos, Alejandro; The ATLAS collaboration

    2017-01-01

    The ATLAS 'Phase-II' upgrade, scheduled to start in 2024, will significantly change the requirements under which the data-acquisition system operates. The input data rate, currently fixed around 150 GB/s, is anticipated to reach 5 TB/s. In order to deal with the challenging conditions, and exploit the capabilities of newer technologies, a number of architectural changes are under consideration. Of particular interest is a new component, known as the Storage Handler, which will provide a large buffer area decoupling real-time data taking from event filtering. Dynamic operational models of the upgraded system can be used to identify the required resources and to select optimal techniques. In order to achieve a robust and dependable model, the current data-acquisition architecture has been used as a test case. This makes it possible to verify and calibrate the model against real operation data. Such a model can then be evolved toward the future ATLAS Phase-II architecture. In this paper we introduce the current ...

  20. Utilizing inventory information to calibrate a landscape simulation model

    Science.gov (United States)

    Steven R. Shifley; Frank R., III Thompson; David R. Larsen; David J. Mladenoff; Eric J. Gustafson

    2000-01-01

    LANDIS is a spatially explicit model that uses mapped landscape conditions as a starting point and projects the patterns in forest vegetation that will result from alternative harvest practices, alternative fire regimes, and wind events. LANDIS was originally developed for Lake States forests, but it is capable of handling the input, output, bookkeeping, and mapping...

  1. Animal models of β-hemoglobinopathies: utility and limitations

    Directory of Open Access Journals (Sweden)

    McColl B

    2016-11-01

    Abstract: The structural and functional conservation of hemoglobin throughout mammals has made the laboratory mouse an exceptionally useful organism in which to study both the protein and the individual globin genes. Early researchers looked to the globin genes as an excellent model in which to examine gene regulation – bountifully expressed and displaying a remarkably consistent pattern of developmental activation and silencing. In parallel with the growth of research into expression of the globin genes, mutations within the β-globin gene were identified as the cause of the β-hemoglobinopathies such as sickle cell disease and β-thalassemia. These lines of enquiry stimulated the development of transgenic mouse models, first carrying individual human globin genes and then substantial human genomic fragments incorporating the multigenic human β-globin locus and regulatory elements. Finally, mice were devised carrying mutant human β-globin loci on genetic backgrounds deficient in the native mouse globins, resulting in phenotypes of sickle cell disease or β-thalassemia. These years of work have generated a group of model animals that display many features of the β-hemoglobinopathies and provided enormous insight into the mechanisms of gene regulation. Substantive differences in the expression of human and mouse globins during development have also come to light, revealing the limitations of the mouse model, but also providing opportunities to further explore the mechanisms of globin gene regulation. In addition, animal models of β-hemoglobinopathies have demonstrated the feasibility of gene therapy for these conditions, now showing success in human clinical trials. Such models remain in use to dissect the molecular events of globin gene regulation and to identify novel treatments based…

  2. The origins of the random walk model in financial theory

    OpenAIRE

    Walter, Christian

    2013-01-01

    This text constitutes chapter 2 of the book Le modèle de marche au hasard en finance by Christian Walter, to be published by Economica in the "Audit, assurance, actuariat" series in June 2013. It is published here with the publisher's agreement. Three main concerns pave the way for the birth of the random walk model in financial theory: an ethical issue with Jules Regnault (1834-1894), a scientific issue with Louis Bachelier (1870-1946) and a practical issue with Alfred Cowles (1891-1984). Three to...

  3. Geometric Models for Isotropic Random Porous Media: A Review

    Directory of Open Access Journals (Sweden)

    Helmut Hermann

    2014-01-01

    Full Text Available Models for random porous media are considered. The models are isotropic both from the local and the macroscopic point of view; that is, the pores have spherical shape or their surface shows piecewise spherical curvature, and there is no macroscopic gradient of any geometrical feature. Both closed-pore and open-pore systems are discussed. The Poisson grain model, the model of hard spheres packing, and the penetrable sphere model are used; variable size distribution of the pores is included. A parameter is introduced which controls the degree of open-porosity. Besides systems built up by a single solid phase, models for porous media with the internal surface coated by a second phase are treated. Volume fraction, surface area, and correlation functions are given explicitly where applicable; otherwise numerical methods for determination are described. Effective medium theory is applied to calculate physical properties for the models such as isotropic elastic moduli, thermal and electrical conductivity, and static dielectric constant. The methods presented are exemplified by applications: small-angle scattering of systems showing fractal-like behavior in limited ranges of linear dimension, optimization of nanoporous insulating materials, and improvement of properties of open-pore systems by atomic layer deposition of a second phase on the internal surface.

  4. A note on modeling vehicle accident frequencies with random-parameters count models.

    Science.gov (United States)

    Anastasopoulos, Panagiotis Ch; Mannering, Fred L

    2009-01-01

    In recent years there have been numerous studies that have sought to understand the factors that determine the frequency of accidents on roadway segments over some period of time, using count data models and their variants (negative binomial and zero-inflated models). This study seeks to explore the use of random-parameters count models as another methodological alternative in analyzing accident frequencies. The empirical results show that random-parameters count models have the potential to provide a fuller understanding of the factors determining accident frequencies.
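
    To make the idea concrete, here is a minimal sketch of a random-parameters count model: a Poisson regression in which one coefficient varies randomly (normally) across roadway segments, estimated by simulated maximum likelihood. The data, variable names, and the choice of a Poisson rather than negative binomial kernel are illustrative assumptions, not the authors' specification.

    # Hypothetical sketch: Poisson count model with one normally distributed
    # random coefficient, fit by simulated maximum likelihood.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import gammaln, logsumexp

    rng = np.random.default_rng(0)
    n, n_draws = 500, 200
    x = rng.normal(size=n)                        # a standardized roadway covariate
    beta_true = rng.normal(0.5, 0.3, size=n)      # segment-specific random effect
    y = rng.poisson(np.exp(0.2 + beta_true * x))  # simulated accident counts

    draws = rng.normal(size=(n_draws, 1))         # standard-normal draws, reused

    def neg_simulated_loglik(theta):
        a, b_mean, log_b_sd = theta
        b = b_mean + np.exp(log_b_sd) * draws     # (n_draws, 1), broadcasts over segments
        lam = np.exp(a + b * x)                   # (n_draws, n) Poisson rates
        logp = y * np.log(lam) - lam - gammaln(y + 1)
        # average the likelihood over draws, segment by segment
        return -(logsumexp(logp, axis=0) - np.log(n_draws)).sum()

    res = minimize(neg_simulated_loglik, x0=np.zeros(3), method="Nelder-Mead")
    print(res.x)  # intercept, mean and log-sd of the random coefficient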

  5. ASPEN+ and economic modeling of equine waste utilization for localized hot water heating via fast pyrolysis

    Science.gov (United States)

    ASPEN Plus based simulation models have been developed to design a pyrolysis process for the on-site production and utilization of pyrolysis oil from equine waste at the Equine Rehabilitation Center at Morrisville State College (MSC). The results indicate that utilization of all available Equine Reh...

  6. Interpreting parameters in the logistic regression model with random effects

    DEFF Research Database (Denmark)

    Larsen, Klaus; Petersen, Jørgen Holm; Budtz-Jørgensen, Esben

    2000-01-01

    interpretation, interval odds ratio, logistic regression, median odds ratio, normally distributed random effects

  7. RIM: A Random Item Mixture Model to Detect Differential Item Functioning

    Science.gov (United States)

    Frederickx, Sofie; Tuerlinckx, Francis; De Boeck, Paul; Magis, David

    2010-01-01

    In this paper we present a new methodology for detecting differential item functioning (DIF). We introduce a DIF model, called the random item mixture (RIM), that is based on a Rasch model with random item difficulties (besides the common random person abilities). In addition, a mixture model is assumed for the item difficulties such that the…

  8. RIM: A random item mixture model to detect Differential Item Functioning

    NARCIS (Netherlands)

    Frederickx, S.; Tuerlinckx, T.; de Boeck, P.; Magis, D.

    2010-01-01

    In this paper we present a new methodology for detecting differential item functioning (DIF). We introduce a DIF model, called the random item mixture (RIM), that is based on a Rasch model with random item difficulties (besides the common random person abilities). In addition, a mixture model is

  9. The Application of Spreadsheet Model Based on Queuing Network to Optimize Capacity Utilization in Product Development

    OpenAIRE

    Muhammad Marsudi; Dzuraidah Abdul Wahab; Che Hasan Che Haron

    2009-01-01

    Modeling of a manufacturing system enables one to identify the effects of key design parameters on the system performance and as a result make the correct decision. This paper proposes a manufacturing system modeling approach using computer spreadsheet software, in which a static capacity planning model and stochastic queuing model are integrated. The model was used to optimize the existing system utilization in relation to product design. The model incorporates a few parameters such as utili...

  10. Gaussian random bridges and a geometric model for information equilibrium

    Science.gov (United States)

    Mengütürk, Levent Ali

    2018-03-01

    The paper introduces a class of conditioned stochastic processes that we call Gaussian random bridges (GRBs) and proves some of their properties. Due to the anticipative representation of any GRB as the sum of a random variable and a Gaussian (T, 0)-bridge, GRBs can model noisy information processes in partially observed systems. In this spirit, we propose an asset pricing model with respect to what we call information equilibrium in a market with multiple sources of information. The idea is to work on a topological manifold endowed with a metric that enables us to systematically determine an equilibrium point of a stochastic system that can be represented by multiple points on that manifold at each fixed time. In doing so, we formulate GRB-based information diversity over a Riemannian manifold and show that it is pinned to zero over the boundary determined by Dirac measures. We then define an influence factor that controls the dominance of an information source in determining the best estimate of a signal in the L2-sense. When there are two sources, this allows us to construct information equilibrium as a functional of a geodesic-valued stochastic process, which is driven by an equilibrium convergence rate representing the signal-to-noise ratio. This leads us to derive price dynamics under what can be considered as an equilibrium probability measure. We also provide a semimartingale representation of Markovian GRBs associated with Gaussian martingales and a non-anticipative representation of fractional Brownian random bridges that can incorporate degrees of information coupling in a given system via the Hurst exponent.
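
    As a minimal illustration of the anticipative representation mentioned above, the sketch below simulates one simple member of the GRB family: a scaled random signal plus a Brownian bridge pinned to 0 at time T. The information-process form xi_t = sigma * t * X + beta_t and all parameter values are assumptions made for illustration.

    # Simulate xi_t = sigma * t * X + beta_t, with X a random "signal" and
    # beta a Brownian (T, 0)-bridge. Parameters are illustrative.
    import numpy as np

    rng = np.random.default_rng(1)
    T, n_steps, sigma = 1.0, 1000, 1.0
    t = np.linspace(0.0, T, n_steps + 1)
    dt = T / n_steps

    X = rng.normal()                              # signal revealed at time T
    dW = rng.normal(scale=np.sqrt(dt), size=n_steps)
    W = np.concatenate(([0.0], np.cumsum(dW)))    # Brownian motion
    bridge = W - (t / T) * W[-1]                  # Brownian (T, 0)-bridge
    xi = sigma * t * X + bridge                   # anticipative representation

    print(xi[0], xi[-1] - sigma * T * X)          # pinned: 0.0 and ~0.0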

  11. Modeling the utility of binaural cues for underwater sound localization.

    Science.gov (United States)

    Schneider, Jennifer N; Lloyd, David R; Banks, Patchouly N; Mercado, Eduardo

    2014-06-01

    The binaural cues used by terrestrial animals for sound localization in azimuth may not always suffice for accurate sound localization underwater. The purpose of this research was to examine the theoretical limits of interaural timing and level differences available underwater using computational and physical models. A paired-hydrophone system was used to record sounds transmitted underwater and recordings were analyzed using neural networks calibrated to reflect the auditory capabilities of terrestrial mammals. Estimates of source direction based on temporal differences were most accurate for frequencies between 0.5 and 1.75 kHz, with greater resolution toward the midline (2°), and lower resolution toward the periphery (9°). Level cues also changed systematically with source azimuth, even at lower frequencies than expected from theoretical calculations, suggesting that binaural mechanical coupling (e.g., through bone conduction) might, in principle, facilitate underwater sound localization. Overall, the relatively limited ability of the model to estimate source position using temporal and level difference cues underwater suggests that animals such as whales may use additional cues to accurately localize conspecifics and predators at long distances. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. The utility of the random controlled trial for evaluating sexual offender treatment: the gold standard or an inappropriate strategy?

    Science.gov (United States)

    Marshall, W L; Marshall, L E

    2007-06-01

    This paper examines the scientific, practical, and ethical issues surrounding the employment of the Random Controlled Trial (RCT) in the evaluation of sexual offender treatment. Consideration of these issues leads us to conclude that the RCT design is not suitable for determining the effectiveness of sexual offender treatment. We also examine the RCT study by Marques et al. (Sexual Abuse: A Journal of Research and Treatment and Evaluation 17:79-107, 2005) that is often held up as the model for the evaluation of sexual offender treatment. We found several problems with this study that, in our opinion, reduce its relevance for deciding whether treatment is effective with these clients. Finally, we examine two alternative strategies for evaluating treatment that may allow treatment providers to more readily examine, and report, the results of their programs.

  13. UTILITY OF MECHANISTIC MODELS FOR DIRECTING ADVANCED SEPARATIONS RESEARCH & DEVELOPMENT ACTIVITIES: Electrochemically Modulated Separation Example

    Energy Technology Data Exchange (ETDEWEB)

    Schwantes, Jon M.

    2009-06-01

    The objective for this work was to demonstrate the utility of mechanistic computer models designed to simulate actinide behavior for use in efficiently and effectively directing advanced laboratory R&D activities associated with developing advanced separations methods.

  14. Auxiliary Parameter MCMC for Exponential Random Graph Models

    Science.gov (United States)

    Byshkin, Maksym; Stivala, Alex; Mira, Antonietta; Krause, Rolf; Robins, Garry; Lomi, Alessandro

    2016-11-01

    Exponential random graph models (ERGMs) are a well-established family of statistical models for analyzing social networks. Computational complexity has so far limited the appeal of ERGMs for the analysis of large social networks. Efficient computational methods are highly desirable in order to extend the empirical scope of ERGMs. In this paper we report results of a research project on the development of snowball sampling methods for ERGMs. We propose an auxiliary parameter Markov chain Monte Carlo (MCMC) algorithm for sampling from the relevant probability distributions. The method is designed to decrease the number of allowed network states without worsening the mixing of the Markov chains, and suggests a new approach for the developments of MCMC samplers for ERGMs. We demonstrate the method on both simulated and actual (empirical) network data and show that it reduces CPU time for parameter estimation by an order of magnitude compared to current MCMC methods.
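
    For context, the sketch below implements the standard single-site (edge-toggle) Metropolis sampler for a small ERGM with edge and triangle statistics; this is the baseline kind of MCMC whose cost the auxiliary-parameter method aims to reduce, not the authors' algorithm. Parameter values are illustrative.

    # Edge-toggle Metropolis sampler for an ERGM with edge and triangle
    # statistics on a small undirected graph.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 20
    theta = np.array([-2.0, 0.1])            # (edges, triangles) parameters
    A = np.zeros((n, n), dtype=int)          # empty starting graph

    def delta_stats(A, i, j):
        """Change in (edge count, triangle count) if edge (i, j) is added."""
        common = int(A[i] @ A[j])            # shared neighbours of i and j
        return np.array([1, common])

    for _ in range(50_000):
        i, j = rng.integers(n), rng.integers(n)
        if i == j:
            continue
        d = delta_stats(A, i, j)
        sign = 1 if A[i, j] == 0 else -1     # toggling the edge on or off
        if np.log(rng.random()) < sign * (theta @ d):
            A[i, j] = A[j, i] = 1 - A[i, j]

    print("sampled edge count:", A.sum() // 2)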

  15. A customer satisfaction model for a utility service industry

    Science.gov (United States)

    Jamil, Jastini Mohd; Nawawi, Mohd Kamal Mohd; Ramli, Razamin

    2016-08-01

    This paper explores the effect of Image, Customer Expectation, Perceived Quality and Perceived Value on Customer Satisfaction, and investigates the effect of Image and Customer Satisfaction on Customer Loyalty for a mobile phone provider in Malaysia. The results of this research are based on data gathered online from international students at a public university in Malaysia. Partial Least Squares Structural Equation Modeling (PLS-SEM) has been used to analyze the data collected from the international students' perceptions. The results show that Image and Perceived Quality have a significant impact on Customer Satisfaction. Image and Customer Satisfaction were also found to be significantly related to Customer Loyalty. However, no significant impact was found between Customer Expectation and Customer Satisfaction, between Perceived Value and Customer Satisfaction, or between Customer Expectation and Perceived Value. We hope that the findings may assist the mobile phone provider in the production and promotion of their services.

  16. Viable business models for public utilities; Zukunftsfaehige Geschaeftsmodelle fuer Stadtwerke

    Energy Technology Data Exchange (ETDEWEB)

    Gebhardt, Andreas; Weiss, Claudia [Buelow und Consorten GmbH, Hamburg (Germany)

    2013-04-15

    Small suppliers are faced with mounting pressures from an increasingly complex regulatory regime and a market that rewards size. Many have been able to adapt to the new framework conditions by successively optimizing existing activities. However, when change takes hold of all stages of the value chain it is no longer enough to merely modify one's previous strategies. It rather becomes necessary to review one's business model for its sustainability, take stock of the company's competencies and set priorities along the value chain. This is where a network-oriented focussing strategy can assist in ensuring efficient delivery of services in core areas while enabling the company to present itself on the market with a full range of services.

  17. Cluster randomized trials utilizing primary care electronic health records : methodological issues in design, conduct, and analysis (eCRT Study)

    NARCIS (Netherlands)

    Gulliford, Martin C; van Staa, Tjeerd P; McDermott, Lisa; McCann, Gerard; Charlton, Judith; Dregan, Alex

    2014-01-01

    BACKGROUND: There is growing interest in conducting clinical and cluster randomized trials through electronic health records. This paper reports on the methodological issues identified during the implementation of two cluster randomized trials using the electronic health records of the Clinical

  18. Maintenance overtime policies in reliability theory models with random working cycles

    CERN Document Server

    Nakagawa, Toshio

    2015-01-01

    This book introduces a new concept of replacement in maintenance and reliability theory. Replacement overtime, where replacement occurs at the first completion of a working cycle over a planned time, is a new research topic in maintenance theory and also serves to provide a fresh optimization technique in reliability engineering. In comparing replacement overtime with standard and random replacement techniques theoretically and numerically, 'Maintenance Overtime Policies in Reliability Theory' highlights the key benefits to be gained by adopting this new approach and shows how they can be applied to inspection policies, parallel systems and cumulative damage models. Utilizing the latest research in replacement overtime by internationally recognized experts, readers are introduced to new topics and methods, and learn how to practically apply this knowledge to actual reliability models. This book will serve as an essential guide to a new subject of study for graduate students and researchers and also provides a...

  19. Site-model utility system optimisation - Industrial case study of KKEPC

    Energy Technology Data Exchange (ETDEWEB)

    Hirata, Kentaro; Chan, Pang; Cheung, Kwok-Yuen; Sakamoto, Haruo [Process Development and Design Laboratory, Process Systems Engineering and Production Technologies Field, MCC-Group Science and Technology Research Center, Mitsubishi Chemical Corp., 1, Toho-cho, Yokkaichi, Mie 510-8530 (Japan); Ide, Kenichi [Kashima-Kita Electric Power Corporation, 16 Towada Kamisu-Machi, Kashima-Gun, Ibaraki Prefecture 314-0102 (Japan); Hui, Chi-Wai [Chemical Engineering Department, Hong Kong University of Science and Technology, Clear Water Bay (China)

    2007-11-15

    Kashima-Kita Electric Power Corporation (KKEPC) is a utility supplier which serves the eastern part of the Kashima Industrial Area in Japan. The utility system of KKEPC involves many steam- and electricity-generating equipment units. Although a large number of equipment units provides system flexibility, it also induces many complicated internal interactions. In order to handle the trade-off between flexibility and interactions, all units in the utility system should be managed simultaneously. This eventually creates a large-scale site-wide optimisation problem. A powerful optimisation tool, the site-model, has therefore been introduced to KKEPC for solving this complicated problem. The site-model is a linear mathematical programming model that considers all site-wide information on utility and material balances. The flexibility and efficiency of the KKEPC utility system have been enhanced after adopting the site-model, and the site-model can also explore further opportunities for future planning. In this paper, several case studies of KKEPC utility system optimisation are presented to demonstrate the capabilities of the site-model. (author)
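
    As a toy illustration of what a linear-programming "site-model" does, the sketch below dispatches two boilers and grid power purchases to meet steam and electricity demands at minimum cost using scipy. All costs, conversion factors, capacities, and demands are invented for illustration and bear no relation to KKEPC data.

    # Toy site-model LP: minimize operating cost subject to steam and
    # electricity balances and equipment capacity limits.
    from scipy.optimize import linprog

    # Decision variables: x = [steam_boiler1, steam_boiler2, purchased_power]
    cost = [3.0, 2.5, 8.0]             # unit costs (boiler fuel, grid power)

    # Equality: total steam must meet the steam demand (150 t/h)
    A_eq = [[1.0, 1.0, 0.0]]
    b_eq = [150.0]

    # Inequality: on-site generation (0.2 MWh per tonne of boiler-2 steam,
    # via a back-pressure turbine) plus purchases must cover 40 MWh demand
    A_ub = [[0.0, -0.2, -1.0]]
    b_ub = [-40.0]

    bounds = [(0, 100), (0, 120), (0, None)]   # boiler capacities

    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    print(res.x, res.fun)              # optimal dispatch and total cost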

  20. A Framework for Organizing Current and Future Electric Utility Regulatory and Business Models

    Energy Technology Data Exchange (ETDEWEB)

    Satchwell, Andrew [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Cappers, Peter [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Schwartz, Lisa C. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Fadrhonc, Emily Martin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-06-01

    Many regulators, utilities, customer groups, and other stakeholders are reevaluating existing regulatory models and the roles and financial implications for electric utilities in the context of today’s environment of increasing distributed energy resource (DER) penetrations, forecasts of significant T&D investment, and relatively flat or negative utility sales growth. When this is coupled with predictions about fewer grid-connected customers (i.e., customer defection), there is growing concern about the potential for serious negative impacts on the regulated utility business model. Among states engaged in these issues, the range of topics under consideration is broad. Most of these states are considering whether approaches that have been applied historically to mitigate the impacts of previous “disruptions” to the regulated utility business model (e.g., energy efficiency) as well as to align utility financial interests with increased adoption of such “disruptive technologies” (e.g., shareholder incentive mechanisms, lost revenue mechanisms) are appropriate and effective in the present context. A handful of states are presently considering more fundamental changes to regulatory models and the role of regulated utilities in the ownership, management, and operation of electric delivery systems (e.g., New York “Reforming the Energy Vision” proceeding).

  1. Joint modeling of ChIP-seq data via a Markov random field model

    NARCIS (Netherlands)

    Bao, Yanchun; Vinciotti, Veronica; Wit, Ernst; 't Hoen, Peter A C

    Chromatin ImmunoPrecipitation-sequencing (ChIP-seq) experiments have now become routine in biology for the detection of protein-binding sites. In this paper, we present a Markov random field model for the joint analysis of multiple ChIP-seq experiments. The proposed model naturally accounts for

  2. Micromechanical Modeling of Fiber-Reinforced Composites with Statistically Equivalent Random Fiber Distribution

    Directory of Open Access Journals (Sweden)

    Wenzhi Wang

    2016-07-01

    Full Text Available Modeling the random fiber distribution of a fiber-reinforced composite is of great importance for studying the progressive failure behavior of the material on the micro scale. In this paper, we develop a new algorithm for generating random representative volume elements (RVEs) with a fiber distribution statistically equivalent to the actual material microstructure. Realistic statistical data are utilized as inputs to the new method, which is achieved through implementation of the probability equations. Extensive statistical analysis is conducted to examine the capability of the proposed method and to compare it with existing methods. It is found that the proposed method presents a good match with experimental results in all aspects including the nearest neighbor distance, nearest neighbor orientation, Ripley's K function, and the radial distribution function. Finite element analysis is presented to predict the effective elastic properties of a carbon/epoxy composite, to validate the generated random representative volume elements, and to provide insights into the effect of fiber distribution on the elastic properties. The present algorithm is shown to be highly accurate and can be used to generate statistically equivalent RVEs for not only fiber-reinforced composites but also other materials such as foam materials and particle-reinforced composites.

  3. Top-level modeling of an ALS system utilizing object-oriented techniques.

    Science.gov (United States)

    Rodriguez, L F; Kang, S; Ting, K C

    2003-01-01

    The possible configuration of an Advanced Life Support (ALS) System capable of supporting human life for long-term space missions continues to evolve as researchers investigate potential technologies and configurations. To facilitate the decision process, the development of acceptable, flexible, and dynamic mathematical computer modeling tools capable of system-level analysis is desirable. Object-oriented techniques have been adopted to develop a dynamic top-level model of an ALS system. This approach has several advantages; among these, object-oriented abstractions of systems are inherently modular in architecture. Thus, models can initially be somewhat simplistic, while allowing for adjustments and improvements. In addition, by coding the model in Java, the model can be implemented via the World Wide Web, greatly encouraging its utilization. Systems analysis is further enabled with the utilization of a readily available backend database containing information supporting the model. The subsystem models of the ALS system model include Crew, Biomass Production, Waste Processing and Resource Recovery, Food Processing and Nutrition, and the Interconnecting Space. Each subsystem model and an overall model have been developed. Presented here are the procedure utilized to develop the modeling tool, the vision of the modeling tool, and the current focus for each of the subsystem models. © 2003 COSPAR. Published by Elsevier Science Ltd. All rights reserved.

  4. Droplet localization in the random XXZ model and its manifestations

    Science.gov (United States)

    Elgart, A.; Klein, A.; Stolz, G.

    2018-01-01

    We examine many-body localization properties for the eigenstates that lie in the droplet sector of the random-field spin-1/2 XXZ chain. These states satisfy a basic single cluster localization property (SCLP), derived in Elgart et al. (2018, J. Funct. Anal., in press). This leads to many consequences, including dynamical exponential clustering, non-spreading of information under the time evolution, and a zero velocity Lieb–Robinson bound. Since SCLP is only applicable to the droplet sector, our definitions and proofs do not rely on knowledge of the spectral and dynamical characteristics of the model outside this regime. Rather, to allow for a possible mobility transition, we adapt the notion of restricting the Hamiltonian to an energy window from the single particle setting to the many body context.

  5. [Critical of the additive model of the randomized controlled trial].

    Science.gov (United States)

    Boussageon, Rémy; Gueyffier, François; Bejan-Angoulvant, Theodora; Felden-Dominiak, Géraldine

    2008-01-01

    Randomized, double-blind, placebo-controlled clinical trials are currently the best way to demonstrate the clinical effectiveness of drugs. Its methodology relies on the method of difference (John Stuart Mill), through which the observed difference between two groups (drug vs placebo) can be attributed to the pharmacological effect of the drug being tested. However, this additive model can be questioned in the event of statistical interactions between the pharmacological and the placebo effects. Evidence in different domains has shown that the placebo effect can influence the effect of the active principle. This article evaluates the methodological, clinical and epistemological consequences of this phenomenon. Topics treated include extrapolating results, accounting for heterogeneous results, demonstrating the existence of several factors in the placebo effect, the necessity to take these factors into account for given symptoms or pathologies, as well as the problem of the "specific" effect.

  6. Random field Ising model and community structure in complex networks

    Science.gov (United States)

    Son, S.-W.; Jeong, H.; Noh, J. D.

    2006-04-01

    We propose a method to determine the community structure of a complex network. In this method the ground-state problem of a ferromagnetic random field Ising model is considered on the network with the magnetic field B_s = +∞, B_t = -∞, and B_i = 0 for i ≠ s, t, for a node pair s and t. The ground-state problem is equivalent to the so-called maximum flow problem, which can be solved exactly by numerical means with the help of a combinatorial optimization algorithm. The community structure is then identified from the ground-state Ising spin domains for all pairs of s and t. Our method provides a criterion for the existence of the community structure, and is applicable equally well to unweighted and weighted networks. We demonstrate the performance of the method by applying it to the Barabási-Albert network, the Zachary karate club network, the scientific collaboration network, and the stock price correlation network.
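
    The ground-state computation described above can be reproduced in miniature with an off-the-shelf max-flow/min-cut routine: pin a node pair (s, t) and find the minimum cut separating them; the two sides of the cut correspond to the Ising spin domains. The sketch below uses networkx with uniform couplings as an illustrative setup.

    # Min-cut between a pinned node pair, standing in for the RFIM
    # ground-state computation with B_s = +inf, B_t = -inf.
    import networkx as nx

    G = nx.karate_club_graph()
    for u, v in G.edges():
        G[u][v]["capacity"] = 1.0        # uniform ferromagnetic couplings

    s, t = 0, 33                          # pinned node pair
    cut_value, (side_s, side_t) = nx.minimum_cut(G, s, t, capacity="capacity")
    print(cut_value, sorted(side_s))      # the two "spin domains"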

  7. Random Field Ising Models: Fractal Interfaces and their Implications

    Science.gov (United States)

    Bupathy, A.; Kumar, M.; Banerjee, V.; Puri, S.

    2017-10-01

    We use a computationally efficient graph-cut (GC) method to obtain exact ground states of the d = 3 random field Ising model (RFIM) on simple cubic (SC), body-centered cubic (BCC) and face-centered cubic (FCC) lattices with Gaussian, uniform and bimodal distributions for the disorder Δ. At small r, the correlation function C(r; Δ) shows a cusp singularity characterised by a non-integer roughness exponent α signifying rough fractal interfaces with dimension d_f = d - α. In the paramagnetic phase (Δ > Δ_c), α ≃ 0.5 for all lattice and disorder types. In the ferromagnetic phase (Δ < Δ_c), ... Fractal interfaces have important implications for growth and relaxation.

  8. Method of model reduction and multifidelity models for solute transport in random layered porous media

    Science.gov (United States)

    Xu, Zhijie; Tartakovsky, Alexandre M.

    2017-09-01

    This work presents a method of model reduction that leads to models with three solutions of increasing fidelity (multifidelity models) for solute transport in a bounded layered porous medium with random permeability. The model generalizes the Taylor-Aris dispersion theory to stochastic transport in random layered porous media with a known velocity covariance function. In the reduced model, we represent the (random) concentration in terms of its cross-sectional average and a variation function. We derive a one-dimensional stochastic advection-dispersion-type equation for the average concentration and a stochastic Poisson equation for the variation function, as well as expressions for the effective velocity and dispersion coefficient. In contrast to the linear scaling with the correlation length and the mean velocity from macrodispersion theory, our model predicts a nonlinear and a quadratic dependence of the effective dispersion on the correlation length and the mean velocity, respectively. We observe that velocity fluctuations enhance dispersion in a nonmonotonic fashion (a stochastic spike phenomenon): the dispersion initially increases with correlation length λ, reaches a maximum, and decreases to zero as the correlation length tends to infinity. Maximum enhancement in dispersion can be obtained at a correlation length of about 0.25 times the size of the porous medium perpendicular to flow. This information can be useful for engineering such random layered porous media. Numerical simulations are implemented to compare solutions of varying fidelity.
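
    For orientation, the reduced one-dimensional equation for the cross-sectional average concentration referred to above has, generically, the advection-dispersion form below; the notation (u_eff, D_eff) is assumed here, and the stochastic terms of the paper's model are omitted.

    % Cross-sectionally averaged transport equation (notation assumed):
    % \bar{c}(x, t) is the average concentration, u_eff the effective
    % velocity, and D_eff the effective dispersion coefficient.
    \frac{\partial \bar{c}}{\partial t}
      + u_{\mathrm{eff}} \, \frac{\partial \bar{c}}{\partial x}
      = D_{\mathrm{eff}} \, \frac{\partial^{2} \bar{c}}{\partial x^{2}}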

  9. An Analysis/Synthesis System of Audio Signal with Utilization of an SN Model

    Directory of Open Access Journals (Sweden)

    G. Rozinaj

    2004-12-01

    Full Text Available An SN (sinusoids plus noise) model is a spectral model in which the periodic components of the sound are represented by sinusoids with time-varying frequencies, amplitudes and phases, while the remaining non-periodic components are represented by a filtered noise. The sinusoidal model utilizes physical properties of musical instruments, and the noise model utilizes the human inability to perceive the exact spectral shape or the phase of stochastic signals. SN modeling can be applied in compression, transformation, separation of sounds, etc. The designed system is based on methods used in SN modeling. We have proposed a model that achieves good results in audio perception. Although many systems do not save the phases of the sinusoids, they are important for better modelling of transients, for the computation of the residual and, last but not least, for stereo signals. One of the fundamental properties of the proposed system is its ability to reconstruct the signal not only in amplitude but also in phase.

  10. Electronic Properties of Random Polymers: Modelling Optical Spectra of Melanins

    Science.gov (United States)

    Bochenek, Kinga; Gudowska-Nowak, Ewa

    2003-05-01

    Melanins are a group of complex pigments of biological origin, widely spread in all species from fungi to man. Among the diverse types of melanins, the human melanins, eumelanins, are brown or black nitrogen-containing pigments, mostly known for their photoprotective properties in human skin. We have undertaken theoretical studies aimed at understanding the absorption spectra of eumelanins and their chemical precursors. The structure of the biopigment is poorly defined, although it is believed to be composed of cross-linked heteropolymers based on indolequinones. As a basic model of the eumelanin structure, we have chosen pentamers containing hydroquinones (HQ) and/or 5,6-indolequinones (IQ) and/or semiquinones (SQ), often listed as structural melanin monomers. The eumelanin oligomers have been constructed as random compositions of the basic monomers and optimized for bonding energy. Absorption spectra of the model assemblies have been calculated within the semiempirical intermediate neglect of differential overlap (INDO) approximation. The model spectrum of eumelanin has then been obtained as the sum of the independent spectra of the individual oligomers. Comparison with experimental data shows that the INDO/CI method reproduces well the characteristic properties of the experimental spectrum of synthetic eumelanins.

  11. Development of a Deterministic Optimization Model for Design of an Integrated Utility and Hydrogen Supply Network

    Energy Technology Data Exchange (ETDEWEB)

    Hwangbo, Soonho; Lee, In-Beum [POSTECH, Pohang (Korea, Republic of); Han, Jeehoon [University of Wisconsin-Madison, Madison (United States)

    2014-10-15

    Many networks are constructed in a large-scale industrial complex. Each network meets its demands through the production or transportation of the materials needed by the companies in the network. A network either produces materials directly to satisfy a company's demands or purchases them from outside, owing to demand uncertainty, financial factors, and so on. Utility networks and hydrogen networks in particular are typical and major networks in a large-scale industrial complex. Many studies have focused mainly on minimizing the total cost or optimizing the network structure, but little research has tried to build an integrated network model connecting the utility network and the hydrogen network. In this study, a deterministic mixed-integer linear programming model is developed for integrating the utility network and the hydrogen network. A steam methane reforming process is necessary for combining the two networks: hydrogen produced from the steam methane reforming process, whose raw material is steam vented from the utility network, enters the hydrogen network and fulfills its needs. The proposed model can suggest the optimal configuration of the integrated network, provide an optimized blueprint, and calculate the optimal total cost. The capability of the proposed model is tested by applying it to the Yeosu industrial complex in Korea, which contains one of the biggest petrochemical complexes and whose data underlie various papers. In the case study, the integrated network model yields more optimal conclusions than previous results obtained by studying the utility network and the hydrogen network individually.

  12. Dynamics of a semiconductor laser with polarization-rotated feedback and its utilization for random bit generation.

    Science.gov (United States)

    Oliver, Neus; Soriano, Miguel C; Sukow, David W; Fischer, Ingo

    2011-12-01

    Chaotic semiconductor lasers have been proven attractive for fast random bit generation. To follow this strategy, simple robust systems and a systematic approach determining the required dynamical properties and most suitable conditions for this application are needed. We show that dynamics of a single mode laser with polarization-rotated feedback are optimal for random bit generation when characterized simultaneously by a broad power spectrum and low autocorrelation. We observe that successful random bit generation also is sensitive to digitization and postprocessing procedures. Applying the identified criteria, we achieve fast random bit generation rates (up to 4 Gbit/s) with minimal postprocessing. © 2011 Optical Society of America
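
    A minimal sketch of the kind of digitization and postprocessing chain such systems use is given below: sample the intensity with an 8-bit ADC, keep the m least significant bits, and XOR the stream with a delayed copy of itself. A synthetic broadband signal stands in for the measured laser output, and m and the delay are illustrative choices.

    # Digitize a broadband waveform, keep m LSBs, XOR with a delayed copy.
    import numpy as np

    rng = np.random.default_rng(3)
    intensity = rng.normal(size=100_000)                   # stand-in waveform
    adc = np.clip((intensity + 4) / 8 * 256, 0, 255).astype(np.uint8)

    m, delay = 3, 17                                       # illustrative choices
    lsb = adc & ((1 << m) - 1)                             # keep m LSBs
    mixed = lsb[delay:] ^ lsb[:-delay]                     # XOR with delayed copy

    bits = ((mixed[:, None] >> np.arange(m)) & 1).ravel()  # unpack to bit stream
    print(bits[:16], bits.mean())                          # mean should be ~0.5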

  13. Estimation of EVA Mode Choice Model Parameters with Different Types of Utility Functions

    Directory of Open Access Journals (Sweden)

    Tomaž Maher

    2011-05-01

    Full Text Available This paper presents the estimation of nine types of utility function parameters for application in the EVA mode choice model for the city of Ljubljana, Slovenia. Four different modes (private car, public transport, bike and walking) and five purposes (work, education, shopping, leisure and other) were taken into consideration. The paper first presents the design of the Stated Preference survey, then a brief review of the EVA model, the different types of utility functions and the estimation method. The final log-likelihood enables comparison of the different types of utility functions. The results show that the absolute differences in final log-likelihood among most types of utility functions are not large despite their different shapes, which implies that different functions may best describe different variables.

  14. Comparison of alternative models for estimating the cost of equity capital for electric utilities

    Energy Technology Data Exchange (ETDEWEB)

    Makhija, A.K.; Thompson, H.E.

    1984-02-01

    Five models used to estimate the cost of equity capital for electric utilities are systematically compared. The authors show the impact of model specification, data definitions, and estimation techniques on the estimates. Their search for the best model is based on reasonableness of estimates and the Pesaran-Deaton test for non-nested hypotheses. Conclusions emerging from the study are: all models explain approximately the same proportion of the variation; recognition of natural nonlinearities in the models does not lead to improvement; and no model can consistently reject the other models. 10 references, 5 figures, 11 tables.

  15. Stock Selection for Portfolios Using Expected Utility-Entropy Decision Model

    Directory of Open Access Journals (Sweden)

    Jiping Yang

    2017-09-01

    Full Text Available Yang and Qiu proposed and then recently improved an expected utility-entropy (EU-E) measure of risk and decision model. When segregation holds, Luce et al. derived an expected utility term plus a constant multiple of the Shannon entropy as the representation of risky choices, further demonstrating the reasonableness of the EU-E decision model. In this paper, we apply the EU-E decision model to selecting the set of stocks to be included in portfolios. We first select 7 and 10 stocks from the 30 component stocks of the Dow Jones Industrial Average index, and then derive and compare the efficient portfolios in the mean-variance framework. The conclusions imply that efficient portfolios composed of 7 (10) stocks selected using the EU-E model with intermediate intervals of the tradeoff coefficients are more efficient than those composed of the sets of stocks selected using the expected utility model. Furthermore, the efficient portfolio of 7 (10) stocks selected by the EU-E decision model has almost the same efficient frontier as that of the sample of all stocks. This suggests the necessity of incorporating both the expected utility and Shannon entropy together when making risky decisions, further demonstrating the importance of Shannon entropy as a measure of uncertainty, as well as the applicability of the EU-E model as a decision-making model.
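
    The sketch below gives one schematic reading of an expected utility-entropy score: a convex combination of normalized Shannon entropy (uncertainty) and expected utility, traded off by a coefficient lambda. The exact normalization and sign conventions of Yang and Qiu's measure are not reproduced here; the function, its scaling, and the utility choice are assumptions made for illustration.

    # Toy EU-E style score: lam * normalized entropy - (1 - lam) * expected
    # utility. Higher score = riskier/less attractive under this convention.
    import numpy as np

    def eu_e_score(outcomes, probs, lam=0.5, utility=np.log1p):
        probs = np.asarray(probs, dtype=float)
        eu = float(np.sum(probs * utility(np.asarray(outcomes, dtype=float))))
        entropy = float(-np.sum(probs * np.log(probs)))
        return lam * entropy / np.log(len(probs)) - (1 - lam) * eu

    # Same expected payoff, more dispersed outcomes -> lower expected
    # (concave) utility -> higher score for the second gamble.
    print(eu_e_score([90.0, 110.0], [0.5, 0.5]))
    print(eu_e_score([50.0, 150.0], [0.5, 0.5]))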

  16. A Unified Approach to Power Calculation and Sample Size Determination for Random Regression Models

    Science.gov (United States)

    Shieh, Gwowen

    2007-01-01

    The underlying statistical models for multiple regression analysis are typically attributed to two types of modeling: fixed and random. The procedures for calculating power and sample size under the fixed regression models are well known. However, the literature on random regression models is limited and has been confined to the case of all…

  17. A random effect multiplicative heteroscedastic model for bacterial growth

    Directory of Open Access Journals (Sweden)

    Quinto Emiliano J

    2010-02-01

    Full Text Available Abstract Background Predictive microbiology develops mathematical models that can predict the growth rate of a microorganism population under a set of environmental conditions. Many primary growth models have been proposed. However, when primary models are applied to bacterial growth curves, the biological variability is reduced to a single curve defined by some kinetic parameters (lag time and growth rate), and sometimes the models give poor fits in some regions of the curve. The development of a prediction band (from a set of bacterial growth curves) using non-parametric and bootstrap methods makes it possible to overcome that problem and to include the biological variability of the microorganism in the modelling process. Results Absorbance data from Listeria monocytogenes cultured at 22, 26, 38, and 42°C were selected under different environmental conditions of pH (4.5, 5.5, 6.5, and 7.4) and percentage of NaCl (2.5, 3.5, 4.5, and 5.5). Transformation of absorbance data to viable count data was carried out. A random effect multiplicative heteroscedastic model was considered to explain the dynamics of bacterial growth. The concept of a prediction band for microbial growth is proposed. The bootstrap method was used to obtain resamples from this model. An iterative procedure is proposed to overcome the computer-intensive task of calculating simultaneous prediction intervals, along time, for bacterial growth. The bands were narrower below the inflection point (0-8 h at 22°C, and 0-5.5 h at 42°C), and wider to the right of it (from 9 h onwards at 22°C, and from 7 h onwards at 42°C). A wider band was observed at 42°C than at 22°C when the curves reach their upper asymptote. Similar bands have been obtained for 26 and 38°C. Conclusions The combination of nonparametric models and bootstrap techniques results in a good procedure to obtain reliable prediction bands in this context. Moreover, the new iterative algorithm proposed in this paper allows one to
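
    The banding idea can be illustrated in a few lines: take a family of growth curves, resample them with replacement, and take pointwise percentiles of the resampled mean curves. The synthetic logistic curves below are a stand-in for the viable-count data, and the procedure is a simplified pointwise band, not the paper's simultaneous-interval algorithm.

    # Pointwise bootstrap band from a family of synthetic growth curves.
    import numpy as np

    rng = np.random.default_rng(4)
    t = np.linspace(0, 24, 49)                       # hours

    def logistic(t, rate, lag, ymax=9.0, y0=3.0):
        return y0 + (ymax - y0) / (1 + np.exp(-rate * (t - lag)))

    curves = np.array([logistic(t, rng.normal(0.6, 0.1), rng.normal(6, 1))
                       + rng.normal(0, 0.1, t.size) for _ in range(30)])

    boot_idx = rng.integers(0, len(curves), size=(2000, len(curves)))
    boot_means = curves[boot_idx].mean(axis=1)       # 2000 resampled mean curves
    lower, upper = np.percentile(boot_means, [2.5, 97.5], axis=0)
    print(lower[::12], upper[::12])                  # band edges at a few times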

  18. Utility and Cost-Effectiveness of Motivational Messaging to Increase Survey Response in Physicians: A Randomized Controlled Trial

    Science.gov (United States)

    Chan, Randolph C. H.; Mak, Winnie W. S.; Pang, Ingrid H. Y.; Wong, Samuel Y. S.; Tang, Wai Kwong; Lau, Joseph T. F.; Woo, Jean; Lee, Diana T. F.; Cheung, Fanny M.

    2018-01-01

    The present study examined whether, when, and how motivational messaging can boost the response rate of postal surveys for physicians based on Higgins's regulatory focus theory, accounting for its cost-effectiveness. A three-arm, blinded, randomized controlled design was used. A total of 3,270 doctors were randomly selected from the registration…

  19. A BAYESIAN HIERARCHICAL SPATIAL MODEL FOR DENTAL CARIES ASSESSMENT USING NON-GAUSSIAN MARKOV RANDOM FIELDS.

    Science.gov (United States)

    Jin, Ick Hoon; Yuan, Ying; Bandyopadhyay, Dipankar

    2016-01-01

    Research in dental caries generates data with two levels of hierarchy: that of a tooth overall and that of the different surfaces of the tooth. The outcomes often exhibit spatial referencing among neighboring teeth and surfaces, i.e., the disease status of a tooth or surface might be influenced by the status of a set of proximal teeth/surfaces. Assessments of dental caries (tooth decay) at the tooth level yield binary outcomes indicating the presence/absence of teeth, and trinary outcomes at the surface level indicating healthy, decayed, or filled surfaces. The presence of these mixed discrete responses complicates the data analysis under a unified framework. To mitigate complications, we develop a Bayesian two-level hierarchical model under suitable (spatial) Markov random field assumptions that accommodates the natural hierarchy within the mixed responses. At the first level, we utilize an autologistic model to accommodate the spatial dependence for the tooth-level binary outcomes. For the second level and conditioned on a tooth being non-missing, we utilize a Potts model to accommodate the spatial referencing for the surface-level trinary outcomes. The regression models at both levels were controlled for plausible covariates (risk factors) of caries, and remain connected through shared parameters. To tackle the computational challenges in our Bayesian estimation scheme caused due to the doubly-intractable normalizing constant, we employ a double Metropolis-Hastings sampler. We compare and contrast our model performances to the standard non-spatial (naive) model using a small simulation study, and illustrate via an application to a clinical dataset on dental caries.

  20. Radiation breakage of DNA: a model based on random-walk chromatin structure

    Science.gov (United States)

    Ponomarev, A. L.; Sachs, R. K.

    2001-01-01

    Monte Carlo computer software, called DNAbreak, has recently been developed to analyze observed non-random clustering of DNA double strand breaks in chromatin after exposure to densely ionizing radiation. The software models coarse-grained configurations of chromatin and radiation tracks, small-scale details being suppressed in order to obtain statistical results for larger scales, up to the size of a whole chromosome. We here give an analytic counterpart of the numerical model, useful for benchmarks, for elucidating the numerical results, for analyzing the assumptions of a more general but less mechanistic "randomly-located-clusters" formalism, and, potentially, for speeding up the calculations. The equations characterize multi-track DNA fragment-size distributions in terms of one-track action; an important step in extrapolating high-dose laboratory results to the much lower doses of main interest in environmental or occupational risk estimation. The approach can utilize the experimental information on DNA fragment-size distributions to draw inferences about large-scale chromatin geometry during cell-cycle interphase.

  1. Conditional random field modelling of interactions between findings in mammography

    Science.gov (United States)

    Kooi, Thijs; Mordang, Jan-Jurre; Karssemeijer, Nico

    2017-03-01

    Recent breakthroughs in training deep neural network architectures, in particular deep Convolutional Neural Networks (CNNs), have made a big impact on vision research and are increasingly responsible for advances in Computer Aided Diagnosis (CAD). Since many natural scenes and medical images vary in size and are too large to feed to the networks as a whole, two-stage systems are typically employed, where in the first stage, small regions of interest in the image are located and presented to the network as training and test data. These systems allow us to harness accurate region-based annotations, making the problem easier to learn. However, information is processed purely locally and context is not taken into account. In this paper, we present preliminary work on the employment of a Conditional Random Field (CRF) that is trained on top of the CNN to model contextual interactions such as the presence of other suspicious regions, for mammography CAD. The model can easily be extended to incorporate other sources of information, such as symmetry, temporal change, and various patient covariates, and is general in the sense that it can have application in other CAD problems.

  2. A Spatio-Temporally Explicit Random Encounter Model for Large-Scale Population Surveys.

    Directory of Open Access Journals (Sweden)

    Jussi Jousimo

    Full Text Available Random encounter models can be used to estimate population abundance from indirect data collected by non-invasive sampling methods, such as track counts or camera-trap data. The classical Formozov-Malyshev-Pereleshin (FMP) estimator converts track counts into an estimate of mean population density, assuming that data on the daily movement distances of the animals are available. We utilize generalized linear models with spatio-temporal error structures to extend the FMP estimator into a flexible Bayesian modelling approach that estimates not only total population size, but also spatio-temporal variation in population density. We also introduce a weighting scheme to estimate density on habitats that are not covered by survey transects, assuming that movement data on a subset of individuals are available. We test the performance of the spatio-temporal and temporal approaches by a simulation study mimicking the Finnish winter track count survey. The results illustrate how the spatio-temporal modelling approach is able to borrow information from observations made at neighboring locations and times when estimating population density, and that spatio-temporal and temporal smoothing models can provide improved estimates of total population size compared to the FMP method.
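
    The classical FMP conversion referred to above is commonly written as density = (π/2) · x / (S · d), with x the number of track crossings, S the total transect length, and d the mean daily movement distance; the short helper below implements that formula with illustrative units.

    # Classical FMP track-count density estimator.
    import math

    def fmp_density(crossings: int, transect_km: float, daily_move_km: float) -> float:
        """Mean animal density (per km^2) from winter track counts."""
        return (math.pi / 2) * crossings / (transect_km * daily_move_km)

    print(fmp_density(crossings=24, transect_km=120.0, daily_move_km=3.5))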

  3. Language Recognition Using Latent Dynamic Conditional Random Field Model with Phonological Features

    Directory of Open Access Journals (Sweden)

    Sirinoot Boonsuk

    2014-01-01

    Full Text Available Spoken language recognition (SLR) has been of increasing interest in multilingual speech recognition for identifying the languages of speech utterances. Most existing SLR approaches apply statistical modeling techniques with acoustic and phonotactic features. Among the popular approaches, the acoustic approach has become of greater interest than others because it does not require any prior language-specific knowledge. Previous research on the acoustic approach has shown less interest in applying linguistic knowledge, which was only used as supplementary features, while the current state-of-the-art system assumes independence among features. This paper proposes an SLR system based on the latent-dynamic conditional random field (LDCRF) model using phonological features (PFs). We use PFs to represent acoustic characteristics and linguistic knowledge. The LDCRF model was employed to capture the dynamics of the PF sequences for language classification. Baseline systems were built to evaluate the features and methods, including Gaussian mixture model (GMM) based systems using PFs, GMM using cepstral features, and the CRF model using PFs. Evaluated on the NIST LRE 2007 corpus, the proposed method showed an improvement over the baseline systems. Additionally, it showed comparable results with the acoustic system based on i-vectors. This research demonstrates that utilizing PFs can enhance the performance.

  4. Utilizing Gaze Behavior for Inferring Task Transitions Using Abstract Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Daniel Fernando Tello Gamarra

    2016-12-01

    Full Text Available We demonstrate an improved method for utilizing observed gaze behavior and show that it is useful in inferring hand movement intent during goal-directed tasks. The task dynamics and the relationship between hand and gaze behavior are learned using an Abstract Hidden Markov Model (AHMM). We show that the predicted hand movement transitions occur consistently earlier in AHMM models with gaze than in those models that do not include gaze observations.

  5. Critical Behavior of the Annealed Ising Model on Random Regular Graphs

    Science.gov (United States)

    Can, Van Hao

    2017-11-01

    In Giardinà et al. (ALEA Lat Am J Probab Math Stat 13(1):121-161, 2016), the authors defined an annealed Ising model on random graphs and proved limit theorems for the magnetization of this model on some random graphs, including random 2-regular graphs. In Can (Annealed limit theorems for the Ising model on random regular graphs, arXiv:1701.08639, 2017), we generalized their results to the class of all random regular graphs. In this paper, we study the critical behavior of this model. In particular, we determine the critical exponents and prove a non-standard limit theorem stating that the magnetization scaled by n^{3/4} converges to a specific random variable, with n the number of vertices of the random regular graphs.

  6. Estimating a DIF decomposition model using a random-weights linear logistic test model approach.

    Science.gov (United States)

    Paek, Insu; Fukuhara, Hirotaka

    2015-09-01

    A differential item functioning (DIF) decomposition model separates a testlet item DIF into two sources: item-specific differential functioning and testlet-specific differential functioning. This article provides an alternative model-building framework and estimation approach for a DIF decomposition model that was proposed by Beretvas and Walker (2012). Although their model is formulated under multilevel modeling with the restricted pseudolikelihood estimation method, our approach illustrates DIF decomposition modeling that is directly built upon the random-weights linear logistic test model framework with the marginal maximum likelihood estimation method. In addition to demonstrating our approach's performance, we provide detailed information on how to implement this new DIF decomposition model using an item response theory software program; using DIF decomposition may be challenging for practitioners, yet practical information on how to implement it has previously been unavailable in the measurement literature.

  7. Solvable random-walk model with memory and its relations with Markovian models of anomalous diffusion

    Science.gov (United States)

    Boyer, D.; Romo-Cruz, J. C. R.

    2014-10-01

    Motivated by studies on the recurrent properties of animal and human mobility, we introduce a path-dependent random-walk model with long-range memory for which not only the mean-square displacement (MSD) but also the propagator can be obtained exactly in the asymptotic limit. The model consists of a random walker on a lattice, which, at a constant rate, stochastically relocates to a site occupied at some earlier time. This time in the past is chosen randomly according to a memory kernel, whose temporal decay can be varied via an exponent parameter. In the weakly non-Markovian regime, memory reduces the diffusion coefficient from the bare value. When the mean backward jump in time diverges, the diffusion coefficient vanishes and a transition to an anomalous subdiffusive regime occurs. Paradoxically, at the transition, the process is an anticorrelated Lévy flight. Although in the subdiffusive regime the model exhibits some features of the continuous time random walk with infinite mean waiting time, it belongs to another universality class. If memory is very long-ranged, a second transition takes place to a regime characterized by a logarithmic growth of the MSD with time. In this case the process is asymptotically Gaussian and effectively described as a scaled Brownian motion with a diffusion coefficient decaying as 1/t.
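
    The model lends itself to a very direct simulation, sketched below: at each step the walker either relocates, with probability q, to the site it occupied at an earlier time drawn from a power-law memory kernel, or takes an ordinary nearest-neighbour step. The kernel exponent and q are illustrative values, not those studied in the paper.

    # One-dimensional random walk with power-law preferential relocation.
    import numpy as np

    rng = np.random.default_rng(5)
    q, kernel_exp, n_steps = 0.1, 1.5, 10_000
    path = [0]

    for t in range(1, n_steps):
        if t > 1 and rng.random() < q:
            # choose a lag into the past with weight ~ lag^(-kernel_exp)
            lags = np.arange(1, t)
            w = lags ** -kernel_exp
            tau = t - rng.choice(lags, p=w / w.sum())
            path.append(path[tau])                   # relocate to past position
        else:
            path.append(path[-1] + rng.choice([-1, 1]))

    x = np.array(path)
    print("MSD over last half:", ((x[n_steps // 2:] - x[0]) ** 2).mean())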

  8. Force Limited Random Vibration Test of TESS Camera Mass Model

    Science.gov (United States)

    Karlicek, Alexandra; Hwang, James Ho-Jin; Rey, Justin J.

    2015-01-01

    The Transiting Exoplanet Survey Satellite (TESS) is a spaceborne instrument consisting of four wide-field-of-view CCD cameras dedicated to the discovery of exoplanets around the brightest stars. As part of the environmental testing campaign, force limiting was used to simulate a realistic random vibration launch environment. While the force-limited vibration test method is a standard approach used at multiple institutions including the Jet Propulsion Laboratory (JPL), NASA Goddard Space Flight Center (GSFC), the European Space Research and Technology Center (ESTEC), and the Japan Aerospace Exploration Agency (JAXA), it is still difficult to find an actual implementation process in the literature. This paper describes the step-by-step process by which the force limit method was developed and applied to the TESS camera mass model. The process description includes the design of special fixtures to mount the test article for properly installing force transducers, development of the force spectral density using the semi-empirical method, estimation of the fuzzy factor (C2) based on the mass ratio between the supporting structure and the test article, subsequent validation of the C2 factor during the vibration test, and calculation of the C.G. accelerations using the Root Mean Square (RMS) reaction force in the spectral domain and the peak reaction force in the time domain.
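
    For reference, the semi-empirical force limit mentioned above is often written as S_FF(f) = C^2 · m^2 · S_AA(f) below a breakpoint frequency f0, rolled off as (f0/f)^2 above it. The helper below sketches that shape; C^2, f0, the load mass, and the flat acceleration spectrum are illustrative, and units must be kept consistent (acceleration in (m/s^2)^2/Hz to obtain N^2/Hz).

    # Semi-empirical force spectral density limit from an acceleration PSD.
    import numpy as np

    def force_limit(freq, accel_psd, m_load=50.0, c2=2.0, f0=80.0):
        """freq in Hz, accel_psd in (m/s^2)^2/Hz, m_load in kg -> N^2/Hz."""
        freq = np.asarray(freq, dtype=float)
        rolloff = np.where(freq <= f0, 1.0, (f0 / freq) ** 2)
        return c2 * m_load**2 * accel_psd * rolloff

    freq = np.array([20.0, 50.0, 80.0, 200.0, 2000.0])
    accel_psd = np.full_like(freq, 4.0)    # flat illustrative input spectrum
    print(force_limit(freq, accel_psd))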

  9. Mendelian Randomization versus Path Models: Making Causal Inferences in Genetic Epidemiology.

    Science.gov (United States)

    Ziegler, Andreas; Mwambi, Henry; König, Inke R

    2015-01-01

    The term Mendelian randomization is popular in the current literature. The first aim of this work is to describe the idea of Mendelian randomization studies and the assumptions required for drawing valid conclusions. The second aim is to contrast Mendelian randomization and path modeling when different 'omics' levels are considered jointly. We define Mendelian randomization as introduced by Katan in 1986, and review its crucial assumptions. We introduce path models as the relevant additional component to the current use of Mendelian randomization studies in 'omics'. Real data examples for the association between lipid levels and coronary artery disease illustrate the use of path models. Numerous assumptions underlie Mendelian randomization, and they are difficult to be fulfilled in applications. Path models are suitable for investigating causality, and they should not be mixed up with the term Mendelian randomization. In many applications, path modeling would be the appropriate analysis in addition to a simple Mendelian randomization analysis. Mendelian randomization and path models use different concepts for causal inference. Path modeling but not simple Mendelian randomization analysis is well suited to study causality with different levels of 'omics' data. 2015 S. Karger AG, Basel.
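
    The simplest single-instrument Mendelian randomization estimator is the Wald ratio: the genotype-outcome association divided by the genotype-exposure association. The helper below implements it with a first-order delta-method standard error; the input summary statistics are illustrative.

    # Wald ratio estimator for one genetic instrument.
    def wald_ratio(beta_zy, se_zy, beta_zx):
        """Causal effect estimate of exposure on outcome via one instrument."""
        estimate = beta_zy / beta_zx
        se = se_zy / abs(beta_zx)          # leading-order delta approximation
        return estimate, se

    est, se = wald_ratio(beta_zy=0.08, se_zy=0.02, beta_zx=0.25)
    print(f"IV estimate: {est:.3f} +/- {1.96 * se:.3f}")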

  10. Genetic Analysis of Daily Maximum Milking Speed by a Random Walk Model in Dairy Cows

    DEFF Research Database (Denmark)

    Karacaören, Burak; Janss, Luc; Kadarmideen, Haja

    Data were obtained from dairy cows stationed at research farm ETH Zurich for maximum milking speed. The main aims of this paper are a) to evaluate if the Wood curve is suitable to model mean lactation curve b) to predict longitudinal breeding values by random regression and random walk models...... of maximum milking speed. Wood curve did not provide a good fit to the data set. Quadratic random regressions gave better predictions compared with the random walk model. However random walk model does not need to be evaluated for different orders of regression coefficients. In addition with the Kalman...... filter applications: random walk model could give online prediction of breeding values. Hence without waiting for whole lactation records, genetic evaluation could be made when the daily or monthly data is available...
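
    The random walk model referred to above pairs naturally with a Kalman filter for online prediction. A minimal scalar sketch follows, with assumed process and observation variances rather than estimates from the milking-speed data:

```python
import numpy as np

def kalman_random_walk(y, q=0.01, r=0.1):
    """Online filtering of a random-walk state (e.g., a daily merit
    trajectory) observed with noise: x_t = x_{t-1} + w, y_t = x_t + v.
    q and r are the process and observation variances (assumed values)."""
    x, p = 0.0, 1.0
    estimates = []
    for obs in y:
        p += q                      # predict
        k = p / (p + r)             # Kalman gain
        x += k * (obs - x)          # update with today's record
        p *= (1.0 - k)
        estimates.append(x)
    return np.array(estimates)

daily_speed = np.cumsum(np.random.default_rng(1).normal(0, 0.1, 100)) + 3.0
print(kalman_random_walk(daily_speed)[-5:])
```

    Because the filter updates recursively, a prediction is available as soon as each day's record arrives, which is the online-evaluation property the abstract highlights.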

  11. Three-dimensional analysis of extrusion process utilizing the physical modeling technique

    Energy Technology Data Exchange (ETDEWEB)

    Sofuoglu, H. (Technical Univ., Trabzon (Turkey)); Rasty, J. (Texas Tech Univ., Lubbock (United States))

    1993-03-01

    The purpose of this study was to simulate the metal extrusion processes via three-dimensional physical modeling technique. Plasticine was utilized as the modeling material, while plexiglass was incorporated in the design and fabrication of a lab scale extrusion apparatus. The extrusion setup was designed to accommodate dies of different semi-cone angle while also making it possible to change the extrusion ratio R. Cylindrical billets were prepared utilizing alternating layers of two colors of plasticine. Extrusion of cylindrical billets was conducted at three different reduction ratios and three different die angles for each reduction ratio. Dissection of the extruded billets along a centroidal plane revealed the internal deformation patterns which were subsequently utilized for determining the effect of the die angle and extrusion ratio on the state of strain in the final product as well as the required extrusion loads.

  12. Estimating health state utility values from discrete choice experiments--a QALY space model approach.

    Science.gov (United States)

    Gu, Yuanyuan; Norman, Richard; Viney, Rosalie

    2014-09-01

    Using discrete choice experiments (DCEs) to estimate health state utility values has become an important alternative to the conventional methods of Time Trade-Off and Standard Gamble. Studies using DCEs have typically used the conditional logit to estimate the underlying utility function. The conditional logit has several known limitations. In this paper, we propose two types of models based on the mixed logit: one using preference space and the other using quality-adjusted life year (QALY) space, a concept adapted from the willingness-to-pay literature. These methods are applied to a dataset collected using the EQ-5D. The results showcase the advantages of using QALY space and demonstrate that the preferred QALY space model provides lower estimates of the utility values than the conditional logit, with the divergence increasing with worsening health states. Copyright © 2014 John Wiley & Sons, Ltd.
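
    For context, the conditional logit baseline that the mixed logit models generalize reduces to a softmax over linear utilities. A small sketch with invented coefficients and two hypothetical EQ-5D-style alternatives (not the paper's estimates):

```python
import numpy as np

# Conditional logit choice probabilities for one DCE choice set.
# Rows are health profiles; coefficients are illustrative only.
X = np.array([
    [1, 2, 1, 3, 2],   # alternative A: levels on five dimensions
    [2, 1, 2, 1, 3],   # alternative B
])
beta = np.array([-0.20, -0.15, -0.10, -0.25, -0.18])  # disutility per level (assumed)

v = X @ beta                     # deterministic utilities
p = np.exp(v) / np.exp(v).sum()  # logit choice probabilities
print(p)
```

    A mixed logit replaces the fixed `beta` with random draws per respondent, which is what allows the preference- and QALY-space parameterizations the paper compares.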

  13. A Note on Local Stability Conditions for Two Types of Monetary Models with Recursive Utility

    Science.gov (United States)

    Miyazaki, Kenji; Utsunomiya, Hitoshi

    2009-09-01

    This note explores local stability conditions for money-in-utility-function (MIUF) and transaction-costs (TC) models with recursive utility. Although Chen et al. [Chen, B.-L., M. Hsu, and C.-H. Lin, 2008, Inflation and growth: impatience and a qualitative equivalent, Journal of Money, Credit, and Banking, Vol. 40, No. 6, 1310-1323] investigated the relationship between inflation and growth in MIUF and TC models with recursive utility, they conducted only a comparative static analysis in a steady state. By establishing sufficient conditions for local stability, this note proves that impatience should be increasing in consumption and real balances. Increasing impatience, although less plausible from an empirical point of view, receives more support from a theoretical viewpoint.

  14. MODELING URBAN DYNAMICS USING RANDOM FOREST: IMPLEMENTING ROC AND TOC FOR MODEL EVALUATION

    Directory of Open Access Journals (Sweden)

    M. Ahmadlou

    2016-06-01

    The importance of spatial accuracy of land use/cover change maps necessitates the use of high-performance models. To reach this goal, calibrating machine learning (ML) approaches to model land use/cover conversions has received increasing interest among scholars. This originates from the strength of these techniques, which powerfully account for the complex relationships underlying urban dynamics. Compared to other ML techniques, random forest has rarely been used for modeling urban growth. This paper, drawing on information from the multi-temporal Landsat satellite images of 1985, 2000 and 2015, calibrates a random forest regression (RFR) model to quantify variable importance and simulate the spatial patterns of urban change. The results and performance of the RFR model were evaluated using two complementary tools, relative operating characteristics (ROC) and total operating characteristics (TOC), by overlaying the map of observed change and the modeled suitability map for land use change (error map). The suitability map produced by the RFR model showed an 82.48% area under the curve for the ROC, which indicates a very good performance and highlights its appropriateness for simulating urban growth.
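
    A minimal sketch of this modeling-and-evaluation loop, calibrating a random forest regression on synthetic stand-in predictors and scoring the resulting suitability values with ROC AUC (scikit-learn is assumed; the variables are placeholders for the Landsat-derived drivers used in the paper):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Stand-ins for pixel-level predictors (e.g., distance to roads, slope,
# neighbourhood built-up fraction) and observed 0/1 urban conversion.
X = rng.normal(size=(5000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 5000) > 1).astype(int)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
suitability = rf.predict(X)   # change-suitability map, flattened

# note: in-sample AUC is optimistic; a held-out split would be used in practice
print("AUC:", roc_auc_score(y, suitability))
print("variable importance:", rf.feature_importances_)
```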

  15. Implications of Model Structure and Detail for Utility Planning: Scenario Case Studies Using the Resource Planning Model

    Energy Technology Data Exchange (ETDEWEB)

    Mai, Trieu [National Renewable Energy Lab. (NREL), Golden, CO (United States); Barrows, Clayton [National Renewable Energy Lab. (NREL), Golden, CO (United States); Lopez, Anthony [National Renewable Energy Lab. (NREL), Golden, CO (United States); Hale, Elaine [National Renewable Energy Lab. (NREL), Golden, CO (United States); Dyson, Mark [National Renewable Energy Lab. (NREL), Golden, CO (United States); Eurek, Kelly [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2015-04-01

    In this report, we analyze the impacts of model configuration and detail in capacity expansion models, computational tools used by utility planners looking to find the least cost option for planning the system and by researchers or policy makers attempting to understand the effects of various policy implementations. The present analysis focuses on the importance of model configurations — particularly those related to capacity credit, dispatch modeling, and transmission modeling — to the construction of scenario futures. Our analysis is primarily directed toward advanced tools used for utility planning and is focused on those impacts that are most relevant to decisions with respect to future renewable capacity deployment. To serve this purpose, we develop and employ the NREL Resource Planning Model to conduct a case study analysis that explores 12 separate capacity expansion scenarios of the Western Interconnection through 2030.

  16. The Utility of the UTAUT Model in Explaining Mobile Learning Adoption in Higher Education in Guyana

    Science.gov (United States)

    Thomas, Troy Devon; Singh, Lenandlar; Gaffar, Kemuel

    2013-01-01

    In this paper, we compare the utility of modified versions of the unified theory of acceptance and use of technology (UTAUT) model in explaining mobile learning adoption in higher education in a developing country and evaluate the size and direction of the impacts of the UTAUT factors on behavioural intention to adopt mobile learning in higher…

  17. Utilizing the PREPaRE Model When Multiple Classrooms Witness a Traumatic Event

    Science.gov (United States)

    Bernard, Lisa J.; Rittle, Carrie; Roberts, Kathy

    2011-01-01

    This article presents an account of how the Charleston County School District responded to an event by utilizing the PREPaRE model (Brock, et al., 2009). The acronym, PREPaRE, refers to a range of crisis response activities: P (prevent and prepare for psychological trauma), R (reaffirm physical health and perceptions of security and safety), E…

  18. An integrated utility-based model of conflict evaluation and resolution in the Stroop task.

    Science.gov (United States)

    Chuderski, Adam; Smolen, Tomasz

    2016-04-01

    Cognitive control allows humans to direct and coordinate their thoughts and actions in a flexible way, in order to reach internal goals regardless of interference and distraction. The hallmark test used to examine cognitive control is the Stroop task, which elicits both the weakly learned but goal-relevant and the strongly learned but goal-irrelevant response tendencies, and requires people to follow the former while ignoring the latter. After reviewing the existing computational models of cognitive control in the Stroop task, a novel, integrated utility-based model is proposed. The model uses 3 crucial control mechanisms: response utility reinforcement learning, utility-based conflict evaluation using the Festinger formula for assessing the conflict level, and top-down adaptation of response utility in service of conflict resolution. Their complex, dynamic interaction led to replication of 18 experimental effects, the largest data set explained to date by 1 Stroop model. The simulations cover the basic congruency effects (including the response latency distributions), performance dynamics and adaptation (including EEG indices of conflict), as well as the effects resulting from manipulations applied to stimulation and responding, which are yielded by the extant Stroop literature. (c) 2016 APA, all rights reserved.

  19. Utility Analysis Models for Productivity Improvement Programs Affecting Work Group Composition.

    Science.gov (United States)

    Boudreau, John W.

    Utility analysis offers human resource management a powerful framework for decision making. Previous research has indicated that this framework can provide dollar-valued estimates of the consequences of human resource decisions. Moreover, this framework provides a general model of decision costs and benefits that can help organize and integrate…

  20. The Dynamics of Mobile Learning Utilization in Vocational Education: Frame Model Perspective Review

    Science.gov (United States)

    Mahande, Ridwan Daud; Susanto, Adhi; Surjono, Herman Dwi

    2017-01-01

    This study aimed to describe the dynamics of content aspects, user aspects and social aspects of mobile learning utilization (m-learning) in vocational education from the FRAME Model perspective review. This study was quantitative descriptive research. The population in this study was teachers and students of state vocational school and private…

  1. Developing a clinical utility framework to evaluate prediction models in radiogenomics

    Science.gov (United States)

    Wu, Yirong; Liu, Jie; Munoz del Rio, Alejandro; Page, David C.; Alagoz, Oguzhan; Peissig, Peggy; Onitilo, Adedayo A.; Burnside, Elizabeth S.

    2015-03-01

    Combining imaging and genetic information to predict disease presence and behavior is being codified into an emerging discipline called "radiogenomics." Optimal evaluation methodologies for radiogenomics techniques have not been established. We aim to develop a clinical decision framework based on utility analysis to assess prediction models for breast cancer. Our data come from a retrospective case-control study, collecting Gail model risk factors, genetic variants (single nucleotide polymorphisms-SNPs), and mammographic features in the Breast Imaging Reporting and Data System (BI-RADS) lexicon. We first constructed three logistic regression models built on different sets of predictive features: (1) Gail, (2) Gail+SNP, and (3) Gail+SNP+BI-RADS. Then, we generated ROC curves for the three models. After we assigned utility values for each category of findings (true negative, false positive, false negative and true positive), we pursued optimal operating points on the ROC curves to achieve maximum expected utility (MEU) of breast cancer diagnosis. We used McNemar's test to compare the predictive performance of the three models. We found that SNPs and BI-RADS features augmented the baseline Gail model in terms of the area under the ROC curve (AUC) and MEU. SNPs improved sensitivity of the Gail model (0.276 vs. 0.147) and reduced specificity (0.855 vs. 0.912). When additional mammographic features were added, sensitivity increased to 0.457 and specificity to 0.872. SNPs and mammographic features played a significant role in breast cancer risk estimation (p-value < 0.001). Our decision framework comprising utility analysis and McNemar's test provides a novel way to evaluate prediction models in the realm of radiogenomics.
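
    The maximum-expected-utility step can be sketched directly: given per-outcome utilities and a disease prevalence, each ROC operating point has an expected utility, and the optimum is the point that maximizes it. All numbers below are illustrative, not those of the study:

```python
import numpy as np

# Choosing the ROC operating point that maximizes expected utility;
# utilities, prevalence, and the ROC points are assumed values.
u_tp, u_fn, u_fp, u_tn = 0.95, 0.0, 0.9, 1.0   # utility per outcome category
prev = 0.01                                     # disease prevalence

# hypothetical ROC curve sampled at a few thresholds
tpr = np.array([0.0, 0.28, 0.46, 0.70, 1.0])
fpr = np.array([0.0, 0.09, 0.13, 0.35, 1.0])

eu = (prev * (tpr * u_tp + (1 - tpr) * u_fn)
      + (1 - prev) * (fpr * u_fp + (1 - fpr) * u_tn))
best = eu.argmax()
print(f"max expected utility {eu[best]:.4f} at TPR={tpr[best]}, FPR={fpr[best]}")
```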

  2. A Cost-Utility Model of Care for Peristomal Skin Complications

    OpenAIRE

    Neil, Nancy; Inglese, Gary; Manson, Andrea; Townshend, Arden

    2016-01-01

    PURPOSE: The aim of this study was to evaluate the economic and humanistic implications of using ostomy components to prevent subsequent peristomal skin complications (PSCs) in individuals who experience an initial, leakage-related PSC event. DESIGN: Cost-utility analysis. METHODS: We developed a simple decision model to consider, from a payer's perspective, PSCs managed with and without the use of ostomy components over 1 year. The model evaluated the extent to which outcomes associated with...

  3. Utility of Social Modeling in Assessment of a State’s Propensity for Nuclear Proliferation

    Energy Technology Data Exchange (ETDEWEB)

    Coles, Garill A.; Brothers, Alan J.; Whitney, Paul D.; Dalton, Angela C.; Olson, Jarrod; White, Amanda M.; Cooley, Scott K.; Youchak, Paul M.; Stafford, Samuel V.

    2011-06-01

    This report is the third and final in a set of three reports documenting research for the U.S. Department of Energy (DOE) National Nuclear Security Administration (NNSA) Office of Nonproliferation Research and Development NA-22 Simulations, Algorithms, and Modeling program, which investigates how social modeling can be used to improve proliferation assessment for informing nuclear security, policy, safeguards, design of nuclear systems, and research decisions. Social modeling has not been used to any significant extent in proliferation studies. This report focuses on the utility of social modeling as applied to the assessment of a State's propensity to develop a nuclear weapons program.

  4. Genetic Parameters for Milk Yield and Lactation Persistency Using Random Regression Models in Girolando Cattle

    Directory of Open Access Journals (Sweden)

    Ali William Canaza-Cayo

    2015-10-01

    A total of 32,817 test-day milk yield (TDMY) records of the first lactation of 4,056 Girolando cows, daughters of 276 sires, collected from 118 herds between 2000 and 2011, were utilized to estimate the genetic parameters for TDMY via random regression models (RRM) using Legendre polynomial functions whose orders varied from 3 to 5. In addition, nine measures of persistency in milk yield (PSi) and the genetic trend of 305-day milk yield (305MY) were evaluated. The fit quality criteria used indicated as the best model the RRM employing Legendre polynomials of orders 3 and 5 for fitting the genetic additive and permanent environment effects, respectively. The heritability and genetic correlation for TDMY throughout the lactation, obtained with the best model, varied from 0.18 to 0.23 and from −0.03 to 1.00, respectively. The heritability and genetic correlation for persistency and 305MY varied from 0.10 to 0.33 and from −0.98 to 1.00, respectively. The use of PS7 would be the most suitable option for the evaluation of Girolando cattle. The estimated breeding values for 305MY of sires and cows showed significant and positive genetic trends. Thus, the use of selection indices would be indicated in the genetic evaluation of Girolando cattle for both traits.

  5. Treatment of advanced Parkinson's disease in the United States: a cost-utility model.

    Science.gov (United States)

    Groenendaal, Huybert; Tarrants, Marcy L; Armand, Christophe

    2010-01-01

    As Parkinson's disease (PD) progresses, patients and their families experience substantial health and economic burdens. Because motor fluctuations (also called 'off-time') are linked to poor quality of life and higher healthcare costs, minimizing off-time is an effective strategy for reducing costs associated with PD. The objective was to assess the cost utility of rasagiline or entacapone as adjunctive therapies to levodopa, versus levodopa/carbidopa/entacapone (LCE) and versus standard levodopa monotherapy, in patients with advanced PD and motor fluctuations in the US. A 2-year stochastic Markov model was utilized to examine the cost effectiveness of treatments of advanced PD. The model assumed that patients transition health status every 4 months. Transition probabilities, including uncertainties, were estimated from clinical trial data. Medical costs, daily drug costs and utility weights were obtained from published literature. Over 2 years, all therapy options showed greater effectiveness than levodopa alone. Rasagiline+levodopa and LCE were cost saving from a payor perspective, while entacapone+levodopa was cost saving from a societal perspective. Mean benefits over 2 years for rasagiline+levodopa, entacapone+levodopa and LCE were 0.12 (90% credibility interval [CI] 0.07, 0.18) additional quality-adjusted life-years (QALYs) and 5.08 (90% CI 3.87, 6.28) additional months of effective treatment for advanced PD patients. Results from this cost-utility model and prior adjunctive clinical data provide ongoing support for the adjunctive use of rasagiline in advanced PD patients with motor fluctuations.
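
    The underlying computation of such a model is a Markov cohort simulation. The sketch below runs a three-state cohort over six 4-month cycles; the states, transition probabilities, costs, and utility weights are invented for illustration and are not the trial-derived inputs of the study:

```python
import numpy as np

# Two-year Markov cohort sketch with 4-month cycles (6 cycles) and three
# illustrative states; all inputs are assumed, not the study's values.
P = np.array([              # rows: from-state, cols: to-state
    [0.70, 0.25, 0.05],     # mild fluctuations
    [0.10, 0.75, 0.15],     # severe fluctuations
    [0.00, 0.00, 1.00],     # dead (absorbing)
])
cost = np.array([2000.0, 4500.0, 0.0])      # $ per 4-month cycle
qaly = np.array([0.75, 0.50, 0.0]) / 3.0    # annual utility scaled to a cycle

state = np.array([1.0, 0.0, 0.0])           # whole cohort starts 'mild'
total_cost = total_qaly = 0.0
for _ in range(6):                          # 6 cycles = 2 years
    total_cost += state @ cost
    total_qaly += state @ qaly
    state = state @ P
print(f"cost ${total_cost:,.0f}, QALYs {total_qaly:.3f}")
```

    Running the same cohort under alternative transition matrices (one per therapy) and differencing costs and QALYs yields the incremental cost-utility comparison the abstract reports.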

  6. Research on the Prediction Model of CPU Utilization Based on ARIMA-BP Neural Network

    Directory of Open Access Journals (Sweden)

    Wang Jina

    2016-01-01

    Dynamic deployment of virtual machines is one of the current research focuses in cloud computing. Traditional methods mainly act after service performance has already degraded, and therefore lag. To solve this problem, a new prediction model of CPU utilization is constructed in this paper. The model provides a reference for the VM dynamic deployment process, allowing deployment to finish before service performance degrades. This method not only ensures the quality of services but also improves server performance and resource utilization. The new prediction method of CPU utilization based on the ARIMA-BP neural network mainly includes four parts: preprocess the collected data, build the ARIMA-BP neural network prediction model, correct the nonlinear residuals of the time series with the BP prediction algorithm, and obtain the prediction results by analyzing the above data comprehensively.
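
    A minimal sketch of the hybrid idea, fitting an ARIMA model for the linear component and a small neural network (standing in for the BP network) on lagged residuals; statsmodels and scikit-learn are assumed, and the CPU series is synthetic:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
cpu = 50 + 10 * np.sin(np.arange(300) / 10) + rng.normal(0, 3, 300)

# 1) linear component with ARIMA
fit = ARIMA(cpu, order=(2, 0, 1)).fit()
resid = fit.resid

# 2) nonlinear residual correction with a small MLP (BP-style network)
lags = 5
X = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
y = resid[lags:]
mlp = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                   random_state=0).fit(X, y)

# 3) combined one-step-ahead forecast
next_linear = fit.forecast(1)[0]
next_resid = mlp.predict(resid[-lags:].reshape(1, -1))[0]
print("predicted CPU utilization:", next_linear + next_resid)
```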

  7. A Random Matrix Approach for Quantifying Model-Form Uncertainties in Turbulence Modeling

    CERN Document Server

    Xiao, Heng; Ghanem, Roger G

    2016-01-01

    With the ever-increasing use of Reynolds-Averaged Navier-Stokes (RANS) simulations in mission-critical applications, the quantification of model-form uncertainty in RANS models has attracted attention in the turbulence modeling community. Recently, a physics-based, nonparametric approach for quantifying model-form uncertainty in RANS simulations has been proposed, where Reynolds stresses are projected to physically meaningful dimensions and perturbations are introduced only in the physically realizable limits. However, a challenge associated with this approach is to assess the amount of information introduced in the prior distribution and to avoid imposing unwarranted constraints. In this work we propose a random matrix approach for quantifying model-form uncertainties in RANS simulations with the realizability of the Reynolds stress guaranteed. Furthermore, the maximum entropy principle is used to identify the probability distribution that satisfies the constraints from available information but without int...

  8. Modeling and optimizing of the random atomic spin gyroscope drift based on the atomic spin gyroscope.

    Science.gov (United States)

    Quan, Wei; Lv, Lin; Liu, Baiqi

    2014-11-01

    In order to improve the atomic spin gyroscope's operational accuracy and compensate the random error caused by the nonlinear and weak-stability characteristic of the random atomic spin gyroscope (ASG) drift, a hybrid random drift error model based on autoregressive (AR) and genetic programming (GP) + genetic algorithm (GA) techniques is established. The time series of random ASG drift is taken as the study object and is acquired by analyzing and preprocessing the measured data of the ASG. The linear section of the model is established based on the AR technique. After that, the nonlinear section is built based on the GP technique, and GA is used to optimize the coefficients of the mathematical expression acquired by GP in order to obtain a more accurate model. The simulation result indicates that this hybrid model can effectively reflect the characteristics of the ASG's random drift. The square error of the ASG's random drift is reduced by 92.40%. Compared with the AR technique and the GP + GA technique, the random drift is reduced by 9.34% and 5.06%, respectively. The hybrid modeling method can effectively compensate the ASG's random drift and improve the stability of the system.

  9. Modeling and optimizing of the random atomic spin gyroscope drift based on the atomic spin gyroscope

    Energy Technology Data Exchange (ETDEWEB)

    Quan, Wei; Lv, Lin, E-mail: lvlinlch1990@163.com; Liu, Baiqi [School of Instrument Science and Opto-Electronics Engineering, Beihang University, Beijing 100191 (China)

    2014-11-15

    In order to improve the atomic spin gyroscope's operational accuracy and compensate the random error caused by the nonlinear and weak-stability characteristic of the random atomic spin gyroscope (ASG) drift, a hybrid random drift error model based on autoregressive (AR) and genetic programming (GP) + genetic algorithm (GA) techniques is established. The time series of random ASG drift is taken as the study object and is acquired by analyzing and preprocessing the measured data of the ASG. The linear section of the model is established based on the AR technique. After that, the nonlinear section is built based on the GP technique, and GA is used to optimize the coefficients of the mathematical expression acquired by GP in order to obtain a more accurate model. The simulation result indicates that this hybrid model can effectively reflect the characteristics of the ASG's random drift. The square error of the ASG's random drift is reduced by 92.40%. Compared with the AR technique and the GP + GA technique, the random drift is reduced by 9.34% and 5.06%, respectively. The hybrid modeling method can effectively compensate the ASG's random drift and improve the stability of the system.

  10. Random regression models in the evaluation of the growth curve of Simbrasil beef cattle

    NARCIS (Netherlands)

    Mota, M.; Marques, F.A.; Lopes, P.S.; Hidalgo, A.M.

    2013-01-01

    Random regression models were used to estimate the types and orders of random effects of (co)variance functions in the description of the growth trajectory of the Simbrasil cattle breed. Records for 7049 animals totaling 18,677 individual weighings were submitted to 15 models from the third to the

  11. Technology diffusion in hospitals : A log odds random effects regression model

    NARCIS (Netherlands)

    Blank, J.L.T.; Valdmanis, V.G.

    2013-01-01

    This study identifies the factors that affect the diffusion of hospital innovations. We apply a log odds random effects regression model on hospital micro data. We introduce the concept of clustering innovations and the application of a log odds random effects regression model to describe the

  12. Technology diffusion in hospitals: A log odds random effects regression model

    NARCIS (Netherlands)

    J.L.T. Blank (Jos); V.G. Valdmanis (Vivian G.)

    2015-01-01

    This study identifies the factors that affect the diffusion of hospital innovations. We apply a log odds random effects regression model on hospital micro data. We introduce the concept of clustering innovations and the application of a log odds random effects regression model to

  13. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology

    Science.gov (United States)

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...

  14. Circumferential fusion is dominant over posterolateral fusion in a long-term perspective: cost-utility evaluation of a randomized controlled trial in severe, chronic low back pain

    DEFF Research Database (Denmark)

    Soegaard, Rikke; Bünger, Cody E; Christiansen, Terkel

    2007-01-01

    STUDY DESIGN: Cost-utility evaluation of a randomized, controlled trial with a 4- to 8-year follow-up. OBJECTIVE: To investigate the incremental cost per quality-adjusted-life-year (QALY) when comparing circumferential fusion to posterolateral fusion in a long-term, societal perspective. SUMMARY...... OF BACKGROUND DATA: The cost-effectiveness of circumferential fusion in a long-term perspective is uncertain but nonetheless highly relevant as the ISSLS prize winner 2006 in clinical studies reported the effect of circumferential fusion superior to the effect of posterolateral fusion. A recent trial found...... no significant difference between posterolateral and circumferential fusion reporting cost-effectiveness from a 2-year viewpoint. METHODS: A total of 146 patients were randomized to posterolateral or circumferential fusion and followed 4 to 8 years after surgery. The mean age of the cohort was 46 years (range...

  15. Music Therapy’s Effects on Mexican Migrant Farmworkers’ Levels of Depression, Anxiety and Social Isolation: A Mixed Methods Randomized Control Trial Utilizing Participatory Action Research

    DEFF Research Database (Denmark)

    Swantes, Melody

    2011-01-01

    methods approach incorporating a randomized control trial with repeated measures and participatory action research. A total of 125 farmworkers participated in this study over the course of two distinct phases. Farmworkers in Phase I were randomly assigned to music therapy, English as a second language...... are not able to meet the needs in culturally sensitive ways presented by this population. The purpose of this study was to examine the effects of music therapy on Mexican farmworkers' levels of depression, anxiety, and social isolation. In addition, this study sought to examine how the migrant farmworkers used...... music-making sessions between music therapy sessions as a coping skill to further improve their overall mental health. Finally, this study sought to examine how migrant farmworkers engaged in the research process and how they valued their relationship with the researcher. This study utilized a mixed...

  16. Modeling Substrate Utilization, Metabolite Production, and Uranium Immobilization in Shewanella oneidensis Biofilms

    Directory of Open Access Journals (Sweden)

    Ryan S. Renslow

    2017-06-01

    In this study, we developed a two-dimensional mathematical model to predict substrate utilization and metabolite production rates in Shewanella oneidensis MR-1 biofilm in the presence and absence of uranium (U). In our model, lactate and fumarate are used as the electron donor and the electron acceptor, respectively. The model includes the production of extracellular polymeric substances (EPS). The EPS bound to the cell surface and distributed in the biofilm were considered bound EPS (bEPS) and loosely associated EPS (laEPS), respectively. COMSOL® Multiphysics finite element analysis software was used to solve the model numerically (model file provided in the Supplementary Material). The input variables of the model were the lactate, fumarate, cell, and EPS concentrations, half saturation constant for fumarate, and diffusion coefficients of the substrates and metabolites. To estimate unknown parameters and calibrate the model, we used a custom designed biofilm reactor placed inside a nuclear magnetic resonance (NMR) microimaging and spectroscopy system and measured substrate utilization and metabolite production rates. From these data we estimated the yield coefficients, maximum substrate utilization rate, half saturation constant for lactate, stoichiometric ratio of fumarate and acetate to lactate, and stoichiometric ratio of succinate to fumarate. These parameters are critical to predicting the activity of biofilms and are not available in the literature. Lastly, the model was used to predict uranium immobilization in S. oneidensis MR-1 biofilms by considering reduction and adsorption processes in the cells and in the EPS. We found that the majority of immobilization was due to cells, and that EPS was less efficient at immobilizing U. Furthermore, most of the immobilization occurred within the top 10 μm of the biofilm. To the best of our knowledge, this research is one of the first biofilm immobilization mathematical models based on experimental
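
    The substrate-utilization core of such a model is Monod-type kinetics. A minimal sketch of a well-mixed (zero-dimensional) analogue with assumed parameters, not the NMR-calibrated values of the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Monod-type lactate utilization with metabolite production; all
# parameter values are assumed for illustration.
mu_max, Ks, Y_xs, Y_ps = 0.3, 0.5, 0.4, 0.6   # 1/h, mM, yield coefficients

def rhs(t, z):
    lactate, biomass, acetate = z
    rate = mu_max * lactate / (Ks + lactate) * biomass
    return [-rate,                 # substrate consumed
            Y_xs * rate,           # biomass grown
            Y_ps * rate]           # metabolite produced

sol = solve_ivp(rhs, (0, 48), [10.0, 0.1, 0.0], dense_output=True)
print(sol.y[:, -1])   # final lactate, biomass, acetate
```

    The paper's two-dimensional model couples kinetics like these to diffusion through the biofilm, which is what the finite element solver handles.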

  17. The Utility of Cognitive Plausibility in Language Acquisition Modeling: Evidence From Word Segmentation.

    Science.gov (United States)

    Phillips, Lawrence; Pearl, Lisa

    2015-11-01

    The informativity of a computational model of language acquisition is directly related to how closely it approximates the actual acquisition task, sometimes referred to as the model's cognitive plausibility. We suggest that though every computational model necessarily idealizes the modeled task, an informative language acquisition model can aim to be cognitively plausible in multiple ways. We discuss these cognitive plausibility checkpoints generally and then apply them to a case study in word segmentation, investigating a promising Bayesian segmentation strategy. We incorporate cognitive plausibility by using an age-appropriate unit of perceptual representation, evaluating the model output in terms of its utility, and incorporating cognitive constraints into the inference process. Our more cognitively plausible model shows a beneficial effect of cognitive constraints on segmentation performance. One interpretation of this effect is as a synergy between the naive theories of language structure that infants may have and the cognitive constraints that limit the fidelity of their inference processes, where less accurate inference approximations are better when the underlying assumptions about how words are generated are less accurate. More generally, these results highlight the utility of incorporating cognitive plausibility more fully into computational models of language acquisition. Copyright © 2015 Cognitive Science Society, Inc.

  18. Modeling and simulation of electronic structure, material interface and random doping in nano electronic devices

    Science.gov (United States)

    Chen, Duan; Wei, Guo-Wei

    2010-01-01

    The miniaturization of nano-scale electronic devices, such as metal oxide semiconductor field effect transistors (MOSFETs), has given rise to a pressing demand in the new theoretical understanding and practical tactic for dealing with quantum mechanical effects in integrated circuits. Modeling and simulation of this class of problems have emerged as an important topic in applied and computational mathematics. This work presents mathematical models and computational algorithms for the simulation of nano-scale MOSFETs. We introduce a unified two-scale energy functional to describe the electrons and the continuum electrostatic potential of the nano-electronic device. This framework enables us to put microscopic and macroscopic descriptions in an equal footing at nano scale. By optimization of the energy functional, we derive consistently-coupled Poisson-Kohn-Sham equations. Additionally, layered structures are crucial to the electrostatic and transport properties of nano transistors. A material interface model is proposed for more accurate description of the electrostatics governed by the Poisson equation. Finally, a new individual dopant model that utilizes the Dirac delta function is proposed to understand the random doping effect in nano electronic devices. Two mathematical algorithms, the matched interface and boundary (MIB) method and the Dirichlet-to-Neumann mapping (DNM) technique, are introduced to improve the computational efficiency of nano-device simulations. Electronic structures are computed via subband decomposition and the transport properties, such as the I-V curves and electron density, are evaluated via the non-equilibrium Green's functions (NEGF) formalism. Two distinct device configurations, a double-gate MOSFET and a four-gate MOSFET, are considered in our three-dimensional numerical simulations. For these devices, the current fluctuation and voltage threshold lowering effect induced by the discrete dopant model are explored. Numerical convergence

  19. On the Path to SunShot - Utility Regulatory Business Model Reforms for Addressing the Financial Impacts of Distributed Solar on Utilities

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2016-05-01

    Net-energy metering (NEM) with volumetric retail electricity pricing has enabled rapid proliferation of distributed photovoltaics (DPV) in the United States. However, this transformation is raising concerns about the potential for higher electricity rates and cost-shifting to non-solar customers, reduced utility shareholder profitability, reduced utility earnings opportunities, and inefficient resource allocation. Although DPV deployment in most utility territories remains too low to produce significant impacts, these concerns have motivated real and proposed reforms to utility regulatory and business models, with profound implications for future DPV deployment. This report explores the challenges and opportunities associated with such reforms in the context of the U.S. Department of Energy’s SunShot Initiative. As such, the report focuses on a subset of a broader range of reforms underway in the electric utility sector. Drawing on original analysis and existing literature, we analyze the significance of DPV’s financial impacts on utilities and non-solar ratepayers under current NEM rules and rate designs, the projected effects of proposed NEM and rate reforms on DPV deployment, and alternative reforms that could address utility and ratepayer concerns while supporting continued DPV growth. We categorize reforms into one or more of four conceptual strategies. Understanding how specific reforms map onto these general strategies can help decision makers identify and prioritize options for addressing specific DPV concerns that balance stakeholder interests.

  20. Stemflow estimation in a redwood forest using model-based stratified random sampling

    Science.gov (United States)

    Jack Lewis

    2003-01-01

    Model-based stratified sampling is illustrated by a case study of stemflow volume in a redwood forest. The approach is actually a model-assisted sampling design in which auxiliary information (tree diameter) is utilized in the design of stratum boundaries to optimize the efficiency of a regression or ratio estimator. The auxiliary information is utilized in both the...

  1. Seeing the forest for the trees: utilizing modified random forests imputation of forest plot data for landscape-level analyses

    Science.gov (United States)

    Karin L. Riley; Isaac C. Grenfell; Mark A. Finney

    2015-01-01

    Mapping the number, size, and species of trees in forests across the western United States has utility for a number of research endeavors, ranging from estimation of terrestrial carbon resources to tree mortality following wildfires. For landscape fire and forest simulations that use the Forest Vegetation Simulator (FVS), a tree-level dataset, or “tree list”, is a...

  2. Modeling Health State Utility Values in Ankylosing Spondylitis: Comparisons of Direct and Indirect Methods.

    Science.gov (United States)

    Wailoo, Allan; Hernández, Monica; Philips, Ceri; Brophy, Sinead; Siebert, Stefan

    2015-06-01

    Cost-effectiveness analyses of technologies for patients with ankylosing spondylitis frequently require estimates of health utilities as a function of the Bath Ankylosing Spondylitis Disease Activity Index (BASDAI) and the Bath Ankylosing Spondylitis Functional Index (BASFI). Linear regression, bespoke mixture models, and generalized ordered probit models were used to model the EuroQol five-dimensional questionnaire as a function of BASDAI and BASFI. Data were drawn from a large UK cohort study (n = 516 with up to five observations) spanning the full range of disease severity. Linear regression was systematically biased. Three- and four-component mixture models and generalized probit models exhibit no such bias and improved fit to the data. The mean, median, mean error, and mean absolute error favored the mixture model approach. Root mean square error favored the generalized ordered probit model approach for the data as a whole. Model fit assessed using these same measures by disease severity quartiles tended to be best using the mixture models. The value of moving from good to poor health may differ substantially according to the chosen method. Simulated data from the mixture and probit models yield a very similar distribution to the original data set. These results add to a body of evidence that the statistical model used to estimate health utilities matters. Linear models are not appropriate. The four-class bespoke mixture model approach provides the best performing method to estimate the EuroQol five-dimensional questionnaire values from BASDAI and BASFI. Copyright © 2015. Published by Elsevier Inc.

  3. Federal and State Structures to Support Financing Utility-Scale Solar Projects and the Business Models Designed to Utilize Them

    Energy Technology Data Exchange (ETDEWEB)

    Mendelsohn, M.; Kreycik, C.

    2012-04-01

    Utility-scale solar projects have grown rapidly in number and size over the last few years, driven in part by strong renewable portfolio standards (RPS) and federal incentives designed to stimulate investment in renewable energy technologies. This report provides an overview of such policies, as well as the project financial structures they enable, based on industry literature, publicly available data, and questionnaires conducted by the National Renewable Energy Laboratory (NREL).

  4. The utility of comparative models and the local model quality for protein crystal structure determination by Molecular Replacement

    Directory of Open Access Journals (Sweden)

    Pawlowski Marcin

    2012-11-01

    Background: Computational models of protein structures were proved to be useful as search models in Molecular Replacement (MR), a common method to solve the phase problem faced by macromolecular crystallography. The success of MR depends on the accuracy of a search model. Unfortunately, this parameter remains unknown until the final structure of the target protein is determined. During the last few years, several Model Quality Assessment Programs (MQAPs) that predict the local accuracy of theoretical models have been developed. In this article, we analyze whether the application of MQAPs improves the utility of theoretical models in MR. Results: For our dataset of 615 search models, the real local accuracy of a model increases the MR success ratio by 101% compared to corresponding polyalanine templates. On the contrary, when local model quality is not utilized in MR, the computational models solved only 4.5% more MR searches than polyalanine templates. For the same dataset of the 615 models, a workflow combining MR with the predicted local accuracy of a model found 45% more correct solutions than polyalanine templates. To predict such accuracy, MetaMQAPclust, a "clustering MQAP", was used. Conclusions: Using comparative models only marginally increases the MR success ratio in comparison to polyalanine structures of templates. However, the situation changes dramatically once comparative models are used together with their predicted local accuracy. A new functionality was added to the GeneSilico Fold Prediction Metaserver in order to build models that are more useful for MR searches. Additionally, we have developed a simple method, AmIgoMR (Am I good for MR?), to predict if an MR search with a template-based model for a given template is likely to find the correct solution.

  5. Regressor and random-effects dependencies in multilevel models

    NARCIS (Netherlands)

    Ebbes, P.; Bockenholt, U; Wedel, M.

    The objectives of this paper are (1) to review methods that can be used to test for different types of random effects and regressor dependencies, (2) to present results from Monte Carlo studies designed to investigate the performance of these methods, and (3) to discuss estimation methods that can

  6. Scale-free random graphs and Potts model

    Indian Academy of Sciences (India)

    We introduce a simple algorithm that constructs scale-free random graphs efficiently: each vertex i has a prescribed weight p_i ∝ i^(−μ) (0 < μ < 1) and an edge can connect vertices i and j with rate p_i p_j. The corresponding equilibrium ensemble is identified and the problem is solved by the q → 1 limit of the q-state Potts ...

  7. Random walk models of large-scale structure

    Indian Academy of Sciences (India)

    This paper describes the insights gained from the excursion set approach, in which various questions about the phenomenology of large-scale structure formation can be mapped to problems associated with the first crossing distribution of appropriately defined barriers by random walks. Much of this is ...

  8. Box-Cox Transformation and Random Regression Models for Fecal egg Count Data.

    Science.gov (United States)

    da Silva, Marcos Vinícius Gualberto Barbosa; Van Tassell, Curtis P; Sonstegard, Tad S; Cobuci, Jaime Araujo; Gasbarre, Louis C

    2011-01-01

    Accurate genetic evaluation of livestock is based on appropriate modeling of phenotypic measurements. In ruminants, fecal egg count (FEC) is commonly used to measure resistance to nematodes. FEC values are not normally distributed and logarithmic transformations have been used in an effort to achieve normality before analysis. However, the transformed data are often still not normally distributed, especially when data are extremely skewed. A series of repeated FEC measurements may provide information about the population dynamics of a group or individual. A total of 6375 FEC measures were obtained for 410 animals between 1992 and 2003 from the Beltsville Agricultural Research Center Angus herd. Original data were transformed using an extension of the Box-Cox transformation to approach normality and to estimate (co)variance components. We also proposed using random regression models (RRM) for genetic and non-genetic studies of FEC. Phenotypes were analyzed using RRM and restricted maximum likelihood. Within the different orders of Legendre polynomials used, those with more parameters (order 4) adjusted FEC data best. Results indicated that the transformation of FEC data utilizing the Box-Cox transformation family was effective in reducing the skewness and kurtosis and dramatically increased estimates of heritability, and that measurements of FEC obtained between weeks 12 and 26 of a 26-week experimental challenge period are genetically correlated.
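
    The Box-Cox step is straightforward to reproduce. A sketch using SciPy on synthetic overdispersed counts standing in for FEC records (the +1 shift is an assumption to keep values positive, as Box-Cox requires):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# overdispersed, highly skewed counts as a stand-in for FEC records
fec = rng.negative_binomial(0.5, 0.02, 1000).astype(float) + 1

transformed, lam = stats.boxcox(fec)   # lambda chosen by maximum likelihood
print(f"lambda={lam:.3f}, "
      f"skew {stats.skew(fec):.2f} -> {stats.skew(transformed):.2f}, "
      f"kurtosis {stats.kurtosis(fec):.2f} -> {stats.kurtosis(transformed):.2f}")
```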

  9. BOX-COX transformation and random regression models for fecal egg count data

    Directory of Open Access Journals (Sweden)

    Marcos Vinicius Silva

    2012-01-01

    Accurate genetic evaluation of livestock is based on appropriate modeling of phenotypic measurements. In ruminants, fecal egg count (FEC) is commonly used to measure resistance to nematodes. FEC values are not normally distributed and logarithmic transformations have been used to achieve normality before analysis. However, the transformed data are often not normally distributed, especially when data are extremely skewed. A series of repeated FEC measurements may provide information about the population dynamics of a group or individual. A total of 6,375 FEC measures were obtained for 410 animals between 1992 and 2003 from the Beltsville Agricultural Research Center Angus herd. Original data were transformed using an extension of the Box-Cox transformation to approach normality and to estimate (co)variance components. We also proposed using random regression models (RRM) for genetic and non-genetic studies of FEC. Phenotypes were analyzed using RRM and restricted maximum likelihood. Within the different orders of Legendre polynomials used, those with more parameters (order 4) adjusted FEC data best. Results indicated that the transformation of FEC data utilizing the Box-Cox transformation family was effective in reducing the skewness and kurtosis and dramatically increased estimates of heritability, and that measurements of FEC obtained between weeks 12 and 26 of a 26-week experimental challenge period are genetically correlated.

  10. Studies in astronomical time series analysis: Modeling random processes in the time domain

    Science.gov (United States)

    Scargle, J. D.

    1979-01-01

    Random process models phrased in the time domain are used to analyze astrophysical time series data produced by random processes. A moving average (MA) model represents the data as a sequence of pulses occurring randomly in time, with random amplitudes. An autoregressive (AR) model represents the correlations in the process in terms of a linear function of past values. The best AR model is determined from sampled data and transformed to an MA for interpretation. The randomness of the pulse amplitudes is maximized by a FORTRAN algorithm which is relatively stable numerically. Results of test cases are given to study the effects of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the optical light curve of the quasar 3C 273 is given.
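
    The AR-to-MA conversion at the heart of this approach can be sketched with statsmodels: fit an AR model, then expand its implied moving-average weights, which play the role of the pulse profile. The toy light curve below is synthetic, not the 3C 273 data:

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg
from statsmodels.tsa.arima_process import arma2ma

rng = np.random.default_rng(0)
# toy light curve: random pulses smoothed by an exponential response
x = np.convolve(rng.normal(0, 1, 600), np.exp(-np.arange(20) / 5))[:600]

ar_fit = AutoReg(x, lags=3).fit()
ar_poly = np.r_[1, -ar_fit.params[1:]]   # AR lag polynomial, intercept dropped
ma_weights = arma2ma(ar_poly, np.array([1.0]), lags=10)
print("implied pulse profile (MA weights):", np.round(ma_weights, 3))
```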

  11. Utilizing neural networks in magnetic media modeling and field computation: A review.

    Science.gov (United States)

    Adly, Amr A; Abd-El-Hafiz, Salwa K

    2014-11-01

    Magnetic materials are considered crucial components for a wide range of products and devices. Usually, the complexity of such materials is defined by their permeability classification and coupling extent to non-magnetic properties. Hence, the development of models that can accurately simulate the complex nature of these materials becomes crucial to multi-dimensional field-media interactions and computations. In the past few decades, artificial neural networks (ANNs) have been utilized in many applications to perform miscellaneous tasks such as identification, approximation, optimization, classification and forecasting. The purpose of this review article is to give an account of the utilization of ANNs in modeling as well as field computation involving complex magnetic materials. The most commonly used ANN types in magnetics, the advantages of this usage, detailed implementation methodologies, and numerical examples are given in the paper.

  12. Utilizing neural networks in magnetic media modeling and field computation: A review

    Directory of Open Access Journals (Sweden)

    Amr A. Adly

    2014-11-01

    Magnetic materials are considered crucial components for a wide range of products and devices. Usually, the complexity of such materials is defined by their permeability classification and coupling extent to non-magnetic properties. Hence, the development of models that can accurately simulate the complex nature of these materials becomes crucial to multi-dimensional field-media interactions and computations. In the past few decades, artificial neural networks (ANNs) have been utilized in many applications to perform miscellaneous tasks such as identification, approximation, optimization, classification and forecasting. The purpose of this review article is to give an account of the utilization of ANNs in modeling as well as field computation involving complex magnetic materials. The most commonly used ANN types in magnetics, the advantages of this usage, detailed implementation methodologies, and numerical examples are given in the paper.

  13. Cluster randomized trials utilizing primary care electronic health records: methodological issues in design, conduct, and analysis (eCRT Study).

    Science.gov (United States)

    Gulliford, Martin C; van Staa, Tjeerd P; McDermott, Lisa; McCann, Gerard; Charlton, Judith; Dregan, Alex

    2014-06-11

    There is growing interest in conducting clinical and cluster randomized trials through electronic health records. This paper reports on the methodological issues identified during the implementation of two cluster randomized trials using the electronic health records of the Clinical Practice Research Datalink (CPRD). Two trials were completed in primary care: one aimed to reduce inappropriate antibiotic prescribing for acute respiratory infection; the other aimed to increase physician adherence with secondary prevention interventions after first stroke. The paper draws on documentary records and trial datasets to report on the methodological experience with respect to research ethics and research governance approval, general practice recruitment and allocation, sample size calculation and power, intervention implementation, and trial analysis. We obtained research governance approvals from more than 150 primary care organizations in England, Wales, and Scotland. There were 104 CPRD general practices recruited to the antibiotic trial and 106 to the stroke trial, with the target number of practices being recruited within six months. Interventions were installed into practice information systems remotely over the internet. The mean number of participants per practice was 5,588 in the antibiotic trial and 110 in the stroke trial, with the coefficient of variation of practice sizes being 0.53 and 0.56, respectively. Outcome measures showed substantial correlations between the 12 months before and after intervention, with coefficients ranging from 0.42 for diastolic blood pressure to 0.91 for the proportion of consultations with antibiotics prescribed. Defining practice and participant eligibility for analysis requires careful consideration. Cluster randomized trials may be performed efficiently in large samples from UK general practices using the electronic health records of a primary care database. The geographical dispersal of trial sites presents a difficulty for
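
    One practical consequence of the reported size variation is its effect on the design effect. A sketch using the Eldridge et al. adjustment for unequal cluster sizes, with an assumed ICC (the trials' actual ICC values are not quoted here):

```python
# Design effect for a cluster randomized trial with unequal cluster sizes,
# from the mean size, the coefficient of variation of sizes, and the
# intracluster correlation (the ICC value below is assumed).
def design_effect(mean_size, cv, icc):
    # Eldridge et al. adjustment: DE = 1 + ((cv**2 + 1) * m - 1) * icc
    return 1 + ((cv**2 + 1) * mean_size - 1) * icc

# antibiotic trial: ~5,588 participants/practice, size CV 0.53
print(design_effect(5588, 0.53, icc=0.05))
# stroke trial: ~110 participants/practice, size CV 0.56
print(design_effect(110, 0.56, icc=0.05))
```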

  14. A randomized, controlled trial to assess short-term black pepper consumption on 24-hour energy expenditure and substrate utilization

    OpenAIRE

    Annalouise O’Connor; Corbin, Karen D.; Nieman, David C.; Swick, Andrew G.

    2013-01-01

    Background: Thermogenic ingredients may play a role in weight management. In vitro and rodent work suggests that components of black pepper may impact energy expenditure (EE), and in humans, other TRPV1 agonists, e.g. capsaicin, augment EE. Objectives: To determine the impact of black pepper (BP) on 24-hour EE, respiratory quotient, and biochemical markers of metabolism and satiety, a randomized, controlled, cross-over study of black pepper (0.5 mg/meal) versus a no-pepper control was conducted in post-menop...

  15. Fuzzy Field Theory as a Random Matrix Model

    Science.gov (United States)

    Tekel, Juraj

    This dissertation considers the theory of scalar fields on fuzzy spaces from the point of view of random matrices. First we define random matrix ensembles, which are a natural description of such a theory. These ensembles are new, and the novel feature is the presence of a kinetic term in the probability measure, which couples the random matrix to a set of external matrices and thus breaks the original symmetry. Considering the case of a free field ensemble, which is a generalization of a Gaussian matrix ensemble, we develop a technique to compute expectation values of the observables of the theory based on explicit Wick contractions, and we write down recursion rules for these. We show that the eigenvalue distribution of the random matrix follows the Wigner semicircle distribution with a rescaled radius. We also compute distributions of the matrix Laplacian of the random matrix given by the new term and demonstrate that the eigenvalues of these two matrices are correlated. We demonstrate the robustness of the method by computing expectation values and distributions for more complicated observables. We then consider the ensemble corresponding to an interacting field theory, with a quartic interaction. We use the same method to compute the distribution of the eigenvalues and show that the presence of the kinetic terms rescales the distribution given by the original theory, which is a polynomially deformed Wigner semicircle. We compute the eigenvalue distribution of the matrix Laplacian and the joint distribution up to second order in the correlation, and we show that the correlation between the two changes from the free field case. Finally, as an application of these results, we compute the phase diagram of the fuzzy scalar field theory, find multiscaling which stabilizes this diagram in the limit of large matrices, and compare it with the results obtained numerically and by considering the kinetic part as a perturbation.
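
    The Wigner semicircle baseline invoked above is easy to check numerically. A sketch that diagonalizes a GOE-like matrix and compares the empirical spectrum to the semicircle density of radius 2 for this normalization:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000

# GOE-like random matrix: symmetric with Gaussian entries, scaled so the
# limiting spectrum is the semicircle on [-2, 2]
A = rng.normal(0, 1, (N, N))
M = (A + A.T) / np.sqrt(2 * N)
eigs = np.linalg.eigvalsh(M)

hist, edges = np.histogram(eigs, bins=50, density=True)
centers = (edges[:-1] + edges[1:]) / 2
semicircle = np.sqrt(np.clip(4 - centers**2, 0, None)) / (2 * np.pi)
print("max |empirical - semicircle| =", np.abs(hist - semicircle).max())
```

    Adding the kinetic coupling described in the abstract deforms this baseline, which is exactly the rescaling of the radius that the dissertation computes.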

  16. Analysis of time to event outcomes in randomized controlled trials by generalized additive models.

    Directory of Open Access Journals (Sweden)

    Christos Argyropoulos

    Randomized controlled trials almost invariably utilize the hazard ratio (HR) calculated with a Cox proportional hazards model as a treatment efficacy measure. Despite the widespread adoption of HRs, these provide a limited understanding of the treatment effect and may even provide a biased estimate when the assumption of proportional hazards in the Cox model is not verified by the trial data. Additional treatment effect measures on the survival probability or the time scale may be used to supplement HRs, but a framework for the simultaneous generation of these measures is lacking. By splitting follow-up time at the nodes of a Gauss-Lobatto numerical quadrature rule, techniques for Poisson Generalized Additive Models (PGAM) can be adopted for flexible hazard modeling. Straightforward simulation post-estimation transforms PGAM estimates for the log hazard into estimates of the survival function. These in turn were used to calculate relative and absolute risks or even differences in restricted mean survival time between treatment arms. We illustrate our approach with extensive simulations and in two trials: IPASS (in which the proportionality of hazards was violated) and HEMO (a long-duration study conducted under evolving standards of care on a heterogeneous patient population). PGAM can generate estimates of the survival function and the hazard ratio that are essentially identical to those obtained by Kaplan-Meier curve analysis and the Cox model. PGAMs can simultaneously provide multiple measures of treatment efficacy after a single data pass. Furthermore, PGAMs supported not only unadjusted (overall) treatment effect estimates but also subgroup and adjusted analyses, while incorporating multiple time scales and accounting for non-proportional hazards in survival data. By augmenting the HR conventionally reported, PGAMs have the potential to support the inferential goals of multiple stakeholders involved in the evaluation and appraisal of clinical trial results under proportional and
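
    The time-splitting device can be sketched with an ordinary Poisson GLM: expand each subject into interval records with a log-exposure offset, and the interval factors approximate the baseline hazard. Equal-width bins below stand in for the Gauss-Lobatto nodes, and the data are simulated, not from either trial:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
arm = rng.integers(0, 2, n)
time = rng.exponential(1.0 / np.where(arm == 1, 0.08, 0.12))   # months
event = (time < 24).astype(int)
time = np.minimum(time, 24.0)

# split follow-up into 6-month intervals (piecewise-exponential expansion)
rows = []
for t, e, a in zip(time, event, arm):
    for lo in np.arange(0, 24, 6.0):
        hi = min(lo + 6.0, t)
        if hi <= lo:
            break
        rows.append(dict(interval=lo, arm=a, exposure=hi - lo,
                         died=int(e and t <= lo + 6.0)))
df = pd.DataFrame(rows)

# Poisson regression on the split data approximates the hazard model
fit = sm.GLM.from_formula("died ~ C(interval) + arm", data=df,
                          family=sm.families.Poisson(),
                          offset=np.log(df["exposure"])).fit()
print("log hazard ratio:", fit.params["arm"])
```

    A PGAM replaces the coarse interval factor with a smooth function of time, but the data expansion and the Poisson likelihood are the same device.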

  17. Marginal and Random Intercepts Models for Longitudinal Binary Data With Examples From Criminology.

    Science.gov (United States)

    Long, Jeffrey D; Loeber, Rolf; Farrington, David P

    2009-01-01

    Two models for the analysis of longitudinal binary data are discussed: the marginal model and the random intercepts model. In contrast to the linear mixed model (LMM), the two models for binary data are not subsumed under a single hierarchical model. The marginal model provides group-level information whereas the random intercepts model provides individual-level information including information about heterogeneity of growth. It is shown how a type of numerical averaging can be used with the random intercepts model to obtain group-level information, thus approximating individual and marginal aspects of the LMM. The types of inferences associated with each model are illustrated with longitudinal criminal offending data based on N = 506 males followed over a 22-year period. Violent offending indexed by official records and self-report were analyzed, with the marginal model estimated using generalized estimating equations and the random intercepts model estimated using maximum likelihood. The results show that the numerical averaging based on the random intercepts can produce prediction curves almost identical to those obtained directly from the marginal model parameter estimates. The results provide a basis for contrasting the models and the estimation procedures and key features are discussed to aid in selecting a method for empirical analysis.

  18. Utilizing neural networks in magnetic media modeling and field computation: A review

    OpenAIRE

    Amr A. Adly; Abd-El-Hafiz, Salwa K.

    2013-01-01

    Magnetic materials are considered as crucial components for a wide range of products and devices. Usually, complexity of such materials is defined by their permeability classification and coupling extent to non-magnetic properties. Hence, development of models that could accurately simulate the complex nature of these materials becomes crucial to the multi-dimensional field-media interactions and computations. In the past few decades, artificial neural networks (ANNs) have been utilized in ma...

  19. Theoretical and numerical analysis of a heat pump model utilizing Dufour effect

    Science.gov (United States)

    Hoshina, Minoru; Okuda, Koji

    2017-03-01

    A heat pump model utilizing the Dufour effect is proposed, and studied by numerical and theoretical analysis. Numerically, we perform MD simulations of this system and measure the cooling power and the coefficient of performance (COP) as figures of merit. Theoretically, we calculate the cooling power and the COP from the phenomenological equations describing this system by using the linear irreversible thermodynamics and compare the theoretical results with the MD results.

  20. Changes in fibrinogen availability and utilization in an animal model of traumatic coagulopathy

    DEFF Research Database (Denmark)

    Hagemo, Jostein S; Jørgensen, Jørgen; Ostrowski, Sisse R

    2013-01-01

    Impaired haemostasis following shock and tissue trauma is frequently detected in the trauma setting. These changes occur early, and are associated with increased mortality. The mechanism behind trauma-induced coagulopathy (TIC) is not clear. Several studies highlight the crucial role of fibrinogen...... in posttraumatic haemorrhage. This study explores the coagulation changes in a swine model of early TIC, with emphasis on fibrinogen levels and utilization of fibrinogen....

  1. Stochastic model reduction for robust dynamical characterization of structures with random parameters

    Science.gov (United States)

    Ghienne, Martin; Blanzé, Claude; Laurent, Luc

    2017-12-01

    In this paper, we characterize random eigenspaces with a non-intrusive method based on the decoupling of random eigenvalues from their corresponding random eigenvectors. This method allows us to estimate the first statistical moments of the random eigenvalues of the system with a reduced number of deterministic finite element computations. The originality of this work is to adapt the method used to estimate each random eigenvalue depending on a global accuracy requirement. This allows us to ensure a minimal computational cost. The stochastic model of the structure is thus reduced by exploiting specific properties of random eigenvectors associated with the random eigenfrequencies being sought. An indicator with no additional computation cost is proposed to identify when the method needs to be enhanced. Finally, a simple three-beam frame and an industrial structure illustrate the proposed approach.

  2. Business model innovation for Local Energy Management: a perspective from Swiss utilities

    Directory of Open Access Journals (Sweden)

    Emanuele Facchinetti

    2016-08-01

    Full Text Available The successful deployment of the energy transition relies on a deep reorganization of the energy market. Business model innovation is recognized as a key driver of this process. This work contributes to this topic by providing potential Local Energy Management stakeholders and policy makers with a conceptual framework guiding Local Energy Management business model innovation. The main determinants characterizing Local Energy Management concepts and impacting its business model innovation are identified through literature reviews on distributed generation typologies and customer/investor preferences related to new business opportunities emerging with the energy transition. Afterwards, the relation between the identified determinants and the Local Energy Management business model solution space is analyzed based on semi-structured interviews with managers of Swiss utility companies. The collected managers' preferences serve as explorative indicators supporting the business model innovation process and provide insights to policy makers on challenges and opportunities related to Local Energy Management.

  3. Optimizing Availability of a Framework in Series Configuration Utilizing Markov Model and Monte Carlo Simulation Techniques

    Directory of Open Access Journals (Sweden)

    Mansoor Ahmed Siddiqui

    2017-06-01

    Full Text Available This research work is aimed at optimizing the availability of a framework comprising two units linked together in a series configuration, utilizing Markov model and Monte Carlo (MC) simulation techniques. In this article, effort has been made to develop a maintenance model that incorporates three distinct states for each unit, while taking into account their different levels of deterioration. Calculations are carried out using the proposed model for two distinct cases of corrective repair, namely perfect and imperfect repairs, both with and without opportunistic maintenance. Initially, results are obtained using an analytical technique, i.e., the Markov model. Validation of the results is later carried out with the help of MC simulation. In addition, MC-simulation-based codes also work well for frameworks that follow non-exponential failure and repair rates, and thus overcome the limitations of the Markov model.
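
    A minimal sketch of the analytical (Markov) side, assuming each unit is an independent three-state continuous-time Markov chain (good, degraded, failed) and that the series framework is up whenever neither unit has failed; all rates are illustrative, not the paper's.

```python
import numpy as np

def stationary(Q):
    # Stationary distribution of a CTMC: solve pi @ Q = 0 with sum(pi) = 1.
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def unit_generator(lam1, lam2, mu):
    # States: 0 = good, 1 = degraded, 2 = failed.
    # lam1: good->degraded, lam2: degraded->failed, mu: failed->good repair.
    return np.array([[-lam1,  lam1,   0.0],
                     [ 0.0,  -lam2,  lam2],
                     [  mu,    0.0,   -mu]])

pi_a = stationary(unit_generator(0.01, 0.05, 0.5))
pi_b = stationary(unit_generator(0.02, 0.04, 0.4))
# Series configuration: up only if neither unit is in the failed state.
availability = (1.0 - pi_a[2]) * (1.0 - pi_b[2])
```

    A Monte Carlo validation, as in the article, would simulate the two chains' failure and repair events and estimate the long-run fraction of time both units are up.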

  4. Activated aging dynamics and effective trap model description in the random energy model

    Science.gov (United States)

    Baity-Jesi, M.; Biroli, G.; Cammarota, C.

    2018-01-01

    We study the out-of-equilibrium aging dynamics of the random energy model (REM) ruled by a single spin-flip Metropolis dynamics. We focus on the dynamical evolution taking place on time-scales diverging with the system size. Our aim is to show to what extent the activated dynamics displayed by the REM can be described in terms of an effective trap model. We identify two time regimes: the first corresponds to the process of escaping from a basin in the energy landscape and to the subsequent exploration of high-energy configurations, whereas the second corresponds to the evolution from one deep basin to another. By combining numerical simulations with analytical arguments we show why the trap model description does not hold in the first regime but becomes exact in the second.
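
    The dynamics described here are easy to reproduce in miniature. The sketch below (our own, with a small N for speed) draws the 2^N quenched Gaussian energies of the REM and runs single spin-flip Metropolis dynamics; the energy trace shows the two regimes discussed: long sojourns in deep basins separated by excursions through high-energy configurations.

```python
import numpy as np

def rem_metropolis(N=12, T=0.6, steps=200_000, seed=1):
    # REM: 2^N configurations with i.i.d. Gaussian energies of variance N/2;
    # neighbors differ by a single spin flip (one bit of the state index).
    rng = np.random.default_rng(seed)
    E = rng.normal(0.0, np.sqrt(N / 2.0), size=2**N)  # quenched disorder
    state = rng.integers(2**N)
    trace = np.empty(steps)
    for t in range(steps):
        proposal = state ^ (1 << rng.integers(N))     # flip a random spin
        dE = E[proposal] - E[state]
        if dE <= 0 or rng.random() < np.exp(-dE / T): # Metropolis rule
            state = proposal
        trace[t] = E[state]
    return trace

trace = rem_metropolis()
```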

  5. Modeling observation error and its effects in a random walk/extinction model.

    Science.gov (United States)

    Buonaccorsi, John P; Staudenmayer, John; Carreras, Maximo

    2006-11-01

    This paper examines the consequences of observation errors for the "random walk with drift", a model that incorporates density independence and is frequently used in population viability analysis. Exact expressions are given for biases in estimates of the mean, variance and growth parameters under very general models for the observation errors. For other quantities, such as the finite rate of increase, and probabilities about population size in the future we provide and evaluate approximate expressions. These expressions explain the biases induced by observation error without relying exclusively on simulations, and also suggest ways to correct for observation error. A secondary contribution is a careful discussion of observation error models, presented in terms of either log-abundance or abundance. This discussion recognizes that the bias and variance in observation errors may change over time, the result of changing sampling effort or dependence on the underlying population being sampled.
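
    The variance bias discussed here is simple to demonstrate numerically: with additive observation error of variance tau^2 on the log scale, the naive variance estimate from first differences inflates by roughly 2*tau^2, while the drift estimate stays unbiased. The numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma, tau, n = 0.02, 0.1, 0.15, 200    # drift, process sd, observation sd

x = np.cumsum(rng.normal(mu, sigma, n))     # latent log-abundance random walk
y = x + rng.normal(0.0, tau, n)             # observations with error

d = np.diff(y)
print("naive drift estimate:   ", d.mean())        # still unbiased for mu
print("naive variance estimate:", d.var(ddof=1))   # ~ sigma^2 + 2 * tau^2
print("sigma^2 + 2*tau^2 =     ", sigma**2 + 2 * tau**2)
```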

  6. Modelling and multi objective optimization of laser peening process using Taguchi utility concept

    Science.gov (United States)

    Ranjith Kumar, G.; Rajyalakshmi, G.

    2017-11-01

    Laser peening is considered one of the innovative surface treatment techniques. This work focuses on determining the optimal peening parameters for optimal responses such as residual stress and deformation. The modelling was done using ANSYS, and the responses were optimized simultaneously using the Taguchi utility concept. Three parameters, viz. overlap, pulse duration, and power density, are considered as process parameters for modelling and optimization. The multi-objective optimization shows that overlap has the greatest influence on stress and deformation, followed by power density and pulse duration.

  7. Random materials modeling : Statistical approach proposal for recycling materials

    OpenAIRE

    Jeong, Jena; Wang, L.; Schmidt, Franziska; LEKLOU, NORDINE; Ramezani, Hamidreza

    2015-01-01

    The current paper aims to promote the application of demolition waste in civil construction. To achieve this aim, two main physical properties, i.e. the dry density and water absorption of the recycled aggregates, have been chosen and studied in the first stage. The material moduli of the recycled materials, i.e. the Lamé coefficients, strongly depend on the porosity. Moreover, the recycled materials should be considered as random materials. As a result, the statistical approach...

  8. Utilizing the CIPP Model as a Means to Develop an Integrated Service-Learning Component in a University Health Course

    Science.gov (United States)

    Powell, Brent; Conrad, Eric

    2015-01-01

    Purpose: To examine the enhancement of a university health course through the utilization of the CIPP Model as a means to develop an integrated service-learning component. Methods: The CIPP model was utilized in two concurrent semesters of an undergraduate health course in order to design and evaluate the implementation of a drug and alcohol…

  9. Modelling mesoporous alumina microstructure with 3D random models of platelets.

    Science.gov (United States)

    Wang, H; Pietrasanta, A; Jeulin, D; Willot, F; Faessel, M; Sorbier, L; Moreaud, M

    2015-12-01

    This work focuses on a mesoporous material made up of nanometric alumina 'platelets' of unknown shape. We develop a 3D random microstructure to model the porous material, based on 2D transmission electron microscopy (TEM) images, without prior knowledge of the spatial distribution of alumina inside the material. The TEM images, acquired on samples with thickness 300 nm, a scale much larger than the platelets' size, are too blurry and noisy to allow one to distinguish platelets or platelet aggregates individually. In a first step, the TEM images' correlation function and integral range are estimated. The presence of long-range fluctuations, due to inhomogeneous TEM detection, is detected and corrected by filtering. The corrected correlation function is used as a morphological descriptor for the model. After testing a Boolean model of platelets, a two-scale model of the microstructure is introduced to replicate the statistical dispersion of platelets observed on TEM images. Accordingly, a set of two-scale Boolean models with varying physically admissible platelet shapes is proposed. Upon optimization, the model takes into account the dispersion of platelets in the microstructure as observed on TEM images. Compared to X-ray diffraction and nitrogen porosimetry data, the model is found to be in good agreement with the material in terms of specific surface area. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.

  10. The Joint Venture Model of Knowledge Utilization: a guide for change in nursing.

    Science.gov (United States)

    Edgar, Linda; Herbert, Rosemary; Lambert, Sylvie; MacDonald, Jo-Ann; Dubois, Sylvie; Latimer, Margot

    2006-05-01

    Knowledge utilization (KU) is an essential component of today's nursing practice and healthcare system. Despite advances in knowledge generation, the gap in knowledge transfer from research to practice continues. KU models have moved beyond factors affecting the individual nurse to a broader perspective that includes the practice environment and the socio-political context. This paper proposes one such theoretical model, the Joint Venture Model of Knowledge Utilization (JVMKU). Key components of the JVMKU that emerged from an extensive multidisciplinary review of the literature include leadership, emotional intelligence, person, message, empowered workplace and the socio-political environment. The model has a broad and practical application and is not specific to one type of KU or one population. This paper provides a description of the JVMKU, its development and suggested uses at both local and organizational levels. Nurses in both leadership and point-of-care positions will recognize the concepts identified and will be able to apply this model for KU in their own workplace for assessment of areas requiring strengthening and support.

  11. A Mixture Proportional Hazards Model with Random Effects for Response Times in Tests

    Science.gov (United States)

    Ranger, Jochen; Kuhn, Jörg-Tobias

    2016-01-01

    In this article, a new model for test response times is proposed that combines latent class analysis and the proportional hazards model with random effects in a similar vein as the mixture factor model. The model assumes the existence of different latent classes. In each latent class, the response times are distributed according to a…

  12. Fully Automatic Myocardial Segmentation of Contrast Echocardiography Sequence Using Random Forests Guided by Shape Model.

    Science.gov (United States)

    Li, Yuanwei; Ho, Chin Pang; Toulemonde, Matthieu; Chahal, Navtej; Senior, Roxy; Tang, Meng-Xing

    2017-09-26

    Myocardial contrast echocardiography (MCE) is an imaging technique that assesses left ventricle function and myocardial perfusion for the detection of coronary artery diseases. Automatic MCE perfusion quantification is challenging and requires accurate segmentation of the myocardium from noisy and time-varying images. Random forests (RF) have been successfully applied to many medical image segmentation tasks. However, the pixel-wise RF classifier ignores contextual relationships between label outputs of individual pixels. RF which only utilizes local appearance features is also susceptible to data suffering from large intensity variations. In this paper, we demonstrate how to overcome the above limitations of classic RF by presenting a fully automatic segmentation pipeline for myocardial segmentation in full-cycle 2D MCE data. Specifically, a statistical shape model is used to provide shape prior information that guide the RF segmentation in two ways. First, a novel shape model (SM) feature is incorporated into the RF framework to generate a more accurate RF probability map. Second, the shape model is fitted to the RF probability map to refine and constrain the final segmentation to plausible myocardial shapes. We further improve the performance by introducing a bounding box detection algorithm as a preprocessing step in the segmentation pipeline. Our approach on 2D image is further extended to 2D+t sequences which ensures temporal consistency in the final sequence segmentations. When evaluated on clinical MCE datasets, our proposed method achieves notable improvement in segmentation accuracy and outperforms other state-of-the-art methods including the classic RF and its variants, active shape model and image registration.

  13. Numerical Simulation of Entropy Growth for a Nonlinear Evolutionary Model of Random Markets

    Directory of Open Access Journals (Sweden)

    Mahdi Keshtkar

    2016-01-01

    Full Text Available In this communication, the generalized continuous economic model for random markets is revisited. In this model of random markets, agents trade in pairs and exchange their money in a random and conservative way. They display the exponential wealth distribution as the asymptotic equilibrium, independently of the effectiveness of the transactions and of the limitation of the total wealth. In the current work, the entropy of the model is defined and some theorems on the entropy growth of this evolutionary problem are given. Furthermore, the entropy increase is verified by simulation on some numerical examples.
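
    A minimal simulation of this model (our own sketch, not the authors' code): random pairs pool their money and split it by a uniform random fraction, conserving total wealth, while a binned Boltzmann-like entropy is tracked as the distribution relaxes toward the exponential equilibrium.

```python
import numpy as np

def random_market(n_agents=10_000, n_steps=200_000, seed=0):
    # Conservative pairwise exchange: two random agents pool their money
    # and split it by a uniform random fraction. Total wealth is conserved.
    rng = np.random.default_rng(seed)
    m = np.ones(n_agents)                    # equal initial wealth
    entropies = []
    for step in range(n_steps):
        i, j = rng.integers(n_agents, size=2)
        if i == j:
            continue
        pool = m[i] + m[j]
        eps = rng.random()
        m[i], m[j] = eps * pool, (1.0 - eps) * pool
        if step % 5_000 == 0:
            # entropy of the binned wealth distribution
            counts, _ = np.histogram(m, bins=50)
            p = counts[counts > 0] / n_agents
            entropies.append(-(p * np.log(p)).sum())
    return m, entropies

wealth, S = random_market()
# S increases toward its equilibrium value as the wealth distribution
# relaxes to the exponential (Boltzmann-Gibbs) form.
```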

  14. A unifying framework for marginalized random intercept models of correlated binary outcomes

    Science.gov (United States)

    Swihart, Bruce J.; Caffo, Brian S.; Crainiceanu, Ciprian M.

    2013-01-01

    We demonstrate that many current approaches for marginal modeling of correlated binary outcomes produce likelihoods that are equivalent to the copula-based models herein. These general copula models of underlying latent threshold random variables yield likelihood-based models for marginal fixed effects estimation and interpretation in the analysis of correlated binary data with exchangeable correlation structures. Moreover, we propose a nomenclature and set of model relationships that substantially elucidates the complex area of marginalized random intercept models for binary data. A diverse collection of didactic mathematical and numerical examples are given to illustrate concepts. PMID:25342871

  15. A unifying framework for marginalized random intercept models of correlated binary outcomes.

    Science.gov (United States)

    Swihart, Bruce J; Caffo, Brian S; Crainiceanu, Ciprian M

    2014-08-01

    We demonstrate that many current approaches for marginal modeling of correlated binary outcomes produce likelihoods that are equivalent to the copula-based models herein. These general copula models of underlying latent threshold random variables yield likelihood-based models for marginal fixed effects estimation and interpretation in the analysis of correlated binary data with exchangeable correlation structures. Moreover, we propose a nomenclature and set of model relationships that substantially elucidates the complex area of marginalized random intercept models for binary data. A diverse collection of didactic mathematical and numerical examples are given to illustrate concepts.

  16. Laboratory, Field, and Modeling Studies of Aerobic Cometabolism of CAHs by Butane-Utilizing Microorganisms

    Science.gov (United States)

    Mathias, M.; Semprini, L.; Dolan, M. E.; McCarty, P. L.; Hopkins, G. D.

    2002-12-01

    The ability of butane-utilizing microorganisms to aerobically cometabolize a mixture of chlorinated aliphatic hydrocarbons (CAHs) in laboratory microcosms and in an in-situ field demonstration was modeled using parameter values measured in laboratory experiments. The butane grown culture was inoculated into soil and groundwater microcosms and exposed to butane with several repeated additions of 1,1,1-trichloroethane (TCA), 1,1-dichloroethylene (1,1-DCE), and 1,1-dichloroethane (1,1-DCA) at aqueous concentrations of 200 μg/L, 100 μg/L, and 200 μg/L, respectively. The utilization of butane and the transformation of the CAH mixture in the batch microcosms were simulated using differential equations accounting for Michaelis-Menten kinetics with cell growth and decay, substrate utilization, transformation product toxicity, and substrate inhibition of CAH transformation. Both competitive inhibition kinetics and mixed inhibition kinetics, determined in prior laboratory studies, were included in the model construct. The equations were solved simultaneously using fourth-order Runge-Kutta numerical integration. The batch microcosm experimental results were simulated well with parameter values determined independently in culture kinetic studies, with some minor adjustments. Having adequately defined the parameter values from laboratory studies, the biotransformation model was combined with 1-D advective-dispersive transport to simulate the results of in-situ bioremediation tests conducted at the Moffett Field Test Facility in CA. The butane-utilizing culture was injected into a 7 m subsurface test site and exposed to alternating pulses of oxygen and butane, along with TCA (150 μg/L), 1,1-DCE (50 μg/L) and 1,1-DCA (150 μg/L). The model simulated well the transient transformation of the CAHs in response to different butane and oxygen pulse cycles and injection concentrations. Model simulations correlated well with field results and indicated that better remediation
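
    A toy version of the model construct, assuming a single CAH, competitive inhibition of its transformation by butane, biomass growth with decay, and a transformation-capacity (toxicity) loss term; the parameter values and exact rate forms are illustrative stand-ins, not the measured ones. The integrator is the fourth-order Runge-Kutta scheme mentioned above.

```python
import numpy as np

def rates(y, p):
    # ODEs for butane (S), one CAH (C), and biomass (X): Michaelis-Menten
    # growth on butane, competitive inhibition of CAH transformation by
    # butane, cell decay, and transformation-product toxicity.
    S, C, X = y
    dS = -p["kS"] * X * S / (p["KS"] + S)
    dC = -p["kC"] * X * C / (p["KC"] * (1.0 + S / p["KS"]) + C)
    dX = -p["Y"] * dS - p["b"] * X + dC * X / p["Tc"]   # dC < 0: toxicity loss
    return np.array([dS, dC, dX])

def rk4(y0, p, dt=0.01, n=5_000):
    # Fourth-order Runge-Kutta integration, as in the study.
    y = np.array(y0, dtype=float)
    out = [y.copy()]
    for _ in range(n):
        k1 = rates(y, p)
        k2 = rates(y + 0.5 * dt * k1, p)
        k3 = rates(y + 0.5 * dt * k2, p)
        k4 = rates(y + dt * k3, p)
        y = y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        out.append(y.copy())
    return np.array(out)

# Illustrative (not measured) parameter values and initial conditions.
p = dict(kS=1.0, KS=0.1, kC=0.05, KC=0.2, Y=0.5, b=0.01, Tc=5.0)
traj = rk4([0.5, 0.2, 0.05], p)
```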

  17. Identification and estimation of nonseparable single-index models in panel data with correlated random effects

    NARCIS (Netherlands)

    Cizek, Pavel; Lei, Jinghua

    The identification in a nonseparable single-index models with correlated random effects is considered in panel data with a fixed number of time periods. The identification assumption is based on the correlated random effects structure. Under this assumption, the parameters of interest are identified

  18. Identification and Estimation of Nonseparable Single-Index Models in Panel Data with Correlated Random Effects

    NARCIS (Netherlands)

    Cizek, P.; Lei, J.

    2013-01-01

    Abstract: The identification of parameters in a nonseparable single-index models with correlated random effects is considered in the context of panel data with a fixed number of time periods. The identification assumption is based on the correlated random-effect structure: the distribution of

  19. Deriving Genomic Breeding Values for Residual Feed Intake from Covariance Functions of Random Regression Models

    DEFF Research Database (Denmark)

    Strathe, Anders B; Mark, Thomas; Nielsen, Bjarne

    Random regression models were used to estimate covariance functions between cumulated feed intake (CFI) and body weight (BW) in 8424 Danish Duroc pigs. Random regressions on second order Legendre polynomials of age were used to describe genetic and permanent environmental curves in BW and CFI. Ba...

  20. Local lattice relaxations in random metallic alloys: Effective tetrahedron model and supercell approach

    DEFF Research Database (Denmark)

    Ruban, Andrei; Simak, S.I.; Shallcross, S.

    2003-01-01

    We present a simple effective tetrahedron model for local lattice relaxation effects in random metallic alloys on simple primitive lattices. A comparison with direct ab initio calculations for supercells representing random Ni0.50Pt0.50 and Cu0.25Au0.75 alloys as well as the dilute limit of Au-ri...

  1. Clinical Utility of the DSM-5 Alternative Model of Personality Disorders

    DEFF Research Database (Denmark)

    Bach, Bo; Markon, Kristian; Simonsen, Erik

    2015-01-01

    In Section III, Emerging Measures and Models, DSM-5 presents an Alternative Model of Personality Disorders, which is an empirically based model of personality pathology measured with the Level of Personality Functioning Scale (LPFS) and the Personality Inventory for DSM-5 (PID-5). These novel... The aim was to evaluate the clinical utility of this alternative model of personality disorders. Method. We administered the LPFS and the PID-5 to psychiatric outpatients diagnosed with personality disorders and other nonpsychotic disorders. The personality profiles of six characteristic patients were inspected... The model was able to characterize the 6 cases in a meaningful and useful manner with regard to understanding and treatment of the individual patient and to match the cases with 6 relevant personality disorder types. Implications for ease of use, communication, and psychotherapy are discussed. Conclusion. Our...

  2. Utilization of Model Predictive Control to Balance Power Absorption Against Load Accumulation: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Abbas, Nikhar; Tom, Nathan

    2017-09-01

    Wave energy converter (WEC) control strategies have been primarily focused on maximizing power absorption. The use of model predictive control strategies allows for a finite-horizon, multiterm objective function to be solved. This work utilizes a multiterm objective function to maximize power absorption while minimizing the structural loads on the WEC system. Furthermore, a Kalman filter and autoregressive model were used to estimate and forecast the wave exciting force and predict the future dynamics of the WEC. The WEC's power-take-off time-averaged power and structural loads under a perfect forecast assumption in irregular waves were compared against results obtained from the Kalman filter and autoregressive model to evaluate model predictive control performance.

  3. Promoting remyelination: utilizing a viral model of demyelination to assess cell-based therapies.

    Science.gov (United States)

    Marro, Brett S; Blanc, Caroline A; Loring, Jeanne F; Cahalan, Michael D; Lane, Thomas E

    2014-10-01

    Multiple sclerosis (MS) is a chronic inflammatory disease of the CNS. While a broad range of therapeutics effectively reduce the incidence of focal white matter inflammation and plaque formation for patients with relapse-remitting forms of MS, a challenge within the field is to develop therapies that allow for axonal protection and remyelination. In the last decade, growing interest has focused on utilizing neural precursor cells (NPCs) to promote remyelination. To understand how NPCs function in chronic demyelinating environments, several excellent pre-clinical mouse models have been developed. One well accepted model is infection of susceptible mice with neurotropic variants of mouse hepatitis virus (MHV) that undergo chronic demyelination exhibiting clinical and histopathologic similarities to MS patients. Combined with the possibility that an environmental agent such as a virus could trigger MS, the MHV model of demyelination presents a relevant mouse model to assess the therapeutic potential of NPCs transplanted into an environment in which inflammatory-mediated demyelination is established.

  4. Utilization of Model Predictive Control to Balance Power Absorption Against Load Accumulation

    Energy Technology Data Exchange (ETDEWEB)

    Abbas, Nikhar [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Tom, Nathan M [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-06-03

    Wave energy converter (WEC) control strategies have been primarily focused on maximizing power absorption. The use of model predictive control strategies allows for a finite-horizon, multiterm objective function to be solved. This work utilizes a multiterm objective function to maximize power absorption while minimizing the structural loads on the WEC system. Furthermore, a Kalman filter and autoregressive model were used to estimate and forecast the wave exciting force and predict the future dynamics of the WEC. The WEC's power-take-off time-averaged power and structural loads under a perfect forecast assumption in irregular waves were compared against results obtained from the Kalman filter and autoregressive model to evaluate model predictive control performance.

  5. Random Testing and Model Checking: Building a Common Framework for Nondeterministic Exploration

    Science.gov (United States)

    Groce, Alex; Joshi, Rajeev

    2008-01-01

    Two popular forms of dynamic analysis, random testing and explicit-state software model checking, are perhaps best viewed as search strategies for exploring the state spaces introduced by nondeterminism in program inputs. We present an approach that enables this nondeterminism to be expressed in the SPIN model checker's PROMELA language, and then lets users generate either model checkers or random testers from a single harness for a tested C program. Our approach makes it easy to compare model checking and random testing for models with precisely the same input ranges and probabilities and allows us to mix random testing with model checking's exhaustive exploration of non-determinism. The PROMELA language, as intended in its design, serves as a convenient notation for expressing nondeterminism and mixing random choices with nondeterministic choices. We present and discuss a comparison of random testing and model checking. The results derive from using our framework to test a C program with an effectively infinite state space, a module in JPL's next Mars rover mission. More generally, we show how the ability of the SPIN model checker to call C code can be used to extend SPIN's features, and hope to inspire others to use the same methods to implement dynamic analyses that can make use of efficient state storage, matching, and backtracking.

  6. Evaluating components of dental care utilization among adults with diabetes and matched controls via hurdle models

    Directory of Open Access Journals (Sweden)

    Chaudhari Monica

    2012-07-01

    Full Text Available Abstract Background About one-third of adults with diabetes have severe oral complications. However, limited previous research has investigated dental care utilization associated with diabetes. This project had two purposes: to develop a methodology to estimate dental care utilization using claims data and to use this methodology to compare utilization of dental care between adults with and without diabetes. Methods Data included secondary enrollment and demographic data from Washington Dental Service (WDS) and Group Health Cooperative (GH), clinical data from GH, and dental-utilization data from WDS claims during 2002–2006. Dental and medical records from WDS and GH were linked for enrolees continuously and dually insured during the study. We employed hurdle models in a quasi-experimental setting to assess differences between adults with and without diabetes in 5-year cumulative utilization of dental services. Propensity score matching adjusted for differences in baseline covariates between the two groups. Results We found that adults with diabetes had lower odds of visiting a dentist (OR = 0.74, p < 0.001). Among those with a dental visit, diabetes patients had lower odds of receiving prophylaxes (OR = 0.77), fillings (OR = 0.80) and crowns (OR = 0.84) (p < 0.005 for all) and higher odds of receiving periodontal maintenance (OR = 1.24), non-surgical periodontal procedures (OR = 1.30), extractions (OR = 1.38) and removable prosthetics (OR = 1.36). Conclusions Patients with diabetes are less likely to use dental services. Those who do are less likely to use preventive care and more likely to receive periodontal care and tooth-extractions. Future research should address the possible effectiveness of additional prevention in reducing subsequent severe oral disease in patients with diabetes.
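
    The hurdle structure can be sketched as a two-part model: a logistic part for whether any dental visit occurred, and a count part among those with a visit. The sketch below uses simulated data and a plain Poisson second part as a simple stand-in; a faithful version would use zero-truncated counts and the paper's propensity-score matching.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2_000
diabetes = rng.integers(0, 2, n)
X = sm.add_constant(diabetes.astype(float))

# Simulated data mimicking the direction of the paper's findings.
p_visit = 1.0 / (1.0 + np.exp(-(0.8 - 0.3 * diabetes)))
any_visit = rng.random(n) < p_visit
visits = np.where(any_visit, 1 + rng.poisson(np.exp(0.2 + 0.1 * diabetes)), 0)

# Part 1: logistic model for crossing the hurdle (any visit at all).
hurdle_part = sm.GLM(any_visit.astype(float), X,
                     family=sm.families.Binomial()).fit()
# Part 2: count model among users (zero-truncated in a faithful version).
count_part = sm.GLM(visits[any_visit], X[any_visit],
                    family=sm.families.Poisson()).fit()

print(np.exp(hurdle_part.params[1]))  # odds ratio for diabetes: any visit
print(np.exp(count_part.params[1]))   # rate ratio among users
```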

  7. Assessing the utility of frequency dependent nudging for reducing biases in biogeochemical models

    Science.gov (United States)

    Lagman, Karl B.; Fennel, Katja; Thompson, Keith R.; Bianucci, Laura

    2014-09-01

    Bias errors, resulting from inaccurate boundary and forcing conditions, incorrect model parameterization, etc. are a common problem in environmental models including biogeochemical ocean models. While it is important to correct bias errors wherever possible, it is unlikely that any environmental model will ever be entirely free of such errors. Hence, methods for bias reduction are necessary. A widely used technique for online bias reduction is nudging, where simulated fields are continuously forced toward observations or a climatology. Nudging is robust and easy to implement, but suppresses high-frequency variability and introduces artificial phase shifts. As a solution to this problem Thompson et al. (2006) introduced frequency dependent nudging where nudging occurs only in prescribed frequency bands, typically centered on the mean and the annual cycle. They showed this method to be effective for eddy resolving ocean circulation models. Here we add a stability term to the previous form of frequency dependent nudging which makes the method more robust for non-linear biological models. Then we assess the utility of frequency dependent nudging for biological models by first applying the method to a simple predator-prey model and then to a 1D ocean biogeochemical model. In both cases we only nudge in two frequency bands centered on the mean and the annual cycle, and then assess how well the variability in higher frequency bands is recovered. We evaluate the effectiveness of frequency dependent nudging in comparison to conventional nudging and find significant improvements with the former.
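
    An offline caricature of frequency-dependent nudging, assuming the model-observation error is corrected only inside prescribed frequency bands (the mean and the annual cycle) via an FFT mask; the actual method operates online with recursive filters, so this is only a sketch of the idea.

```python
import numpy as np

def band_nudge(model, obs, dt, bands, gamma=0.5):
    # Correct the model toward observations only inside prescribed
    # frequency bands, leaving higher-frequency variability untouched.
    err = np.fft.rfft(model - obs)
    f = np.fft.rfftfreq(len(model), d=dt)        # cycles per time unit
    mask = np.zeros_like(f, dtype=bool)
    for lo, hi in bands:
        mask |= (f >= lo) & (f <= hi)
    err[~mask] = 0.0                             # keep only the banded error
    return model - gamma * np.fft.irfft(err, n=len(model))

t = np.arange(0.0, 10.0, 1.0 / 365.0)            # 10 years of daily steps
obs = 2.0 + np.sin(2 * np.pi * t) \
      + 0.3 * np.random.default_rng(0).normal(size=t.size)
model = 3.0 + 0.6 * np.sin(2 * np.pi * t + 0.5) + 0.3 * np.sin(40 * np.pi * t)
# Nudge only the mean (f < 0.05 /yr) and the annual band (0.9-1.1 /yr).
nudged = band_nudge(model, obs, dt=1.0 / 365.0,
                    bands=[(0.0, 0.05), (0.9, 1.1)])
```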

  8. Implications of random variation in the Stand Prognosis Model

    Science.gov (United States)

    David A. Hamilton

    1991-01-01

    Although the Stand Prognosis Model has several stochastic components, features have been included in the model in an attempt to minimize run-to-run variation attributable to these stochastic components. This has led many users to assume that comparisons of management alternatives could be made based on a single run of the model for each alternative. Recent analyses...

  9. Modeling and Optimizing Energy Utilization of Steel Production Process: A Hybrid Petri Net Approach

    Directory of Open Access Journals (Sweden)

    Peng Wang

    2013-01-01

    Full Text Available The steel industry is responsible for nearly 9% of anthropogenic energy utilization in the world. It is urgent to reduce the total energy utilization of the steel industry under the huge pressure to reduce energy consumption and CO2 emissions. Meanwhile, steel manufacturing is a typical continuous-discrete process with multiple procedures, objects, constraints, and machines coupled, which makes energy management rather difficult. In order to study the energy flow within the real steel production process, this paper presents a new modeling and optimization method for the process based on Hybrid Petri Nets (HPN) in consideration of the situation above. Firstly, we introduce a detailed description of HPN. Then the real steel production process from one typical integrated steel plant is transformed into a Hybrid Petri Net model as a case. Furthermore, we obtain a series of constraints for our optimization model from this model. In consideration of the real process situation, we select steel production, energy efficiency, and self-made gas surplus as the main optimization objectives. Afterwards, a fuzzy linear programming method is applied to obtain the multiobjective optimization results. Finally, some measures are suggested to improve this low-efficiency, high-cost process structure.

  10. Utility of ictal single photon emission computed tomography in mesial temporal lobe epilepsy with hippocampal atrophy: a randomized trial.

    Science.gov (United States)

    Velasco, Tonicarlo R; Wichert-Ana, Lauro; Mathern, Gary W; Araújo, David; Walz, Roger; Bianchin, Marino M; Dalmagro, Charles L; Leite, Joao P; Santos, Antonio C; Assirati, Joao A; Carlotti, Carlos G; Sakamoto, Americo C

    2011-02-01

    The development of newer diagnostic technologies has reduced the need for invasive electroencephalographic (EEG) studies in identifying the epileptogenic zone, especially in adult patients with mesial temporal lobe epilepsy and hippocampal sclerosis (MTLE-HS). To evaluate ictal single photon emission computed tomography (SPECT) in the evaluation and treatment of patients with MTLE-HS, MTLE patients were randomly assigned to groups with (SPECT, n = 124) or without ictal SPECT (non-SPECT, n = 116) in an intent-to-treat protocol. Primary end points were the proportion of patients with invasive EEG studies, and those offered surgery. Secondary end points were the length of hospital stay and the proportion of patients with secondarily generalized seizures (SGS) during video-EEG, postsurgical seizure outcome, and hospital cost. The proportion of patients offered surgery was similar in the SPECT (85%) and non-SPECT groups (81%), as was the proportion that had invasive EEG studies (27% vs 23%). The mean duration of hospital stay was 1 day longer for the SPECT group; the longer stay was associated with increased costs and a higher chance of SGS during video-EEG monitoring. These findings support the notion that a protocol including ictal SPECT is equivalent to one without SPECT in the presurgical evaluation of adult patients with MTLE-HS.

  11. A randomized, controlled trial to assess short-term black pepper consumption on 24-hour energy expenditure and substrate utilization

    Directory of Open Access Journals (Sweden)

    Annalouise O’Connor

    2013-10-01

    Full Text Available Background: Thermogenic ingredients may play a role in weight management. In vitro and rodent work suggests that components of black pepper (BP) may impact energy expenditure (EE), and in humans, other TRPV1 agonists, e.g. capsaicin, augment EE. Objectives: To determine the impact of BP on 24-hour EE, respiratory quotient, and biochemical markers of metabolism and satiety, a randomized, controlled, cross-over study of black pepper (0.5 mg/meal) versus a no-pepper control was conducted in post-menopausal women. Subjects spent two 24-hour periods in a whole-room indirect calorimeter. Results: Post-meal glucose, insulin, gut peptides and catecholamines were measured. Energy expenditure, respiratory quotient, and the biochemical markers assessed did not differ significantly between the black pepper and no-pepper control study days. Conclusions: Our findings do not support a role for black pepper in modulating energy expenditure in overweight postmenopausal women. Future work targeting alternative populations, administering black pepper in the fasted state, or in combination with other spices, may reveal a thermogenic effect of this spice. Trial registration: This trial was registered at clinicaltrials.gov (NCT01729143). Key words: Black pepper, piperine, energy expenditure, metabolic chamber

  12. The utilization of a diode laser in the surgical treatment of peri-implantitis. A randomized clinical trial.

    Science.gov (United States)

    Papadopoulos, Christos A; Vouros, Ioannis; Menexes, Georgios; Konstantinidis, Antonis

    2015-11-01

    A comparison of different treatment modalities of peri-implantitis can lead to the development and application of more effective and efficient methods of therapy in clinical practice. This study compares the effectiveness of open flap debridement used alone with an approach employing the additional use of a diode laser for the treatment of peri-implantitis. Nineteen patients were divided into two groups and treated for peri-implantitis. In the control group (C group), the therapy utilized access flaps, plastic curettes, and sterilized gauzes soaked in saline. The test group (L group) was treated similarly but with additional irradiation using a diode laser. The parameters studied were pocket depth (PD) as the primary outcome variable, and clinical attachment level (CAL), bleeding on probing (BOP), and plaque index (PI) as secondary variables. Measurements were performed at three different time points: baseline (BSL), 3 months, and 6 months after treatment. Three months after treatment, a mean PD reduction of 1.19 mm for the control group and 1.38 mm for the laser group was recorded. The corresponding BOP changes were 72.9 and 66.7%, respectively. These changes were significant and remained at the same levels at the 6-month examination. The diode laser does not seem to have an extra beneficial effect: its additional use in the surgical treatment of peri-implantitis offers only a limited clinical benefit.

  13. The Random Walk Model Based on Bipartite Network

    Directory of Open Access Journals (Sweden)

    Zhang Man-Dun

    2016-01-01

    Full Text Available With the continuing development of electronic commerce and the growth of network information, citizens are increasingly likely to be overwhelmed by information. Although traditional information-retrieval technology can relieve this overload to some extent, it cannot offer a personalized service based on a user's interests and activities. In this context, recommendation algorithms arose. In this paper, building on conventional recommendation methods, we study a random walk scheme based on a bipartite network and its application. We put forward a similarity measurement based on implicit feedback, in which an uneven character vector (the weight of each item in the system) is introduced. We also propose an improved random walk pattern that makes use of partial or incomplete neighbor information to generate recommendations. Finally, an experiment on a real data set shows that recommendation accuracy and practicality are improved, confirming the validity of the results.
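
    A compact sketch of a two-step resource-spreading (random-walk) kernel on a user-item bipartite network, with an optional item-weight vector standing in for the uneven character vector mentioned above; the masking of already-collected items and the toy adjacency matrix are our own choices.

```python
import numpy as np

def bipartite_walk_scores(A, user, weights=None):
    # A is the users x items adjacency matrix (no isolated nodes assumed).
    k_user = A.sum(axis=1, keepdims=True)    # user degrees (users x 1)
    k_item = A.sum(axis=0)                   # item degrees (items,)
    f = A[user].astype(float)                # initial resource on items
    if weights is not None:
        f = f * weights                      # non-uniform item weighting
    u = (A / k_item) @ f                     # items spread to their users
    scores = (A / k_user).T @ u              # users spread back to items
    scores[A[user] > 0] = -np.inf            # do not re-recommend owned items
    return scores

A = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]], dtype=float)
print(bipartite_walk_scores(A, user=0))      # item 2 scores highest for user 0
```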

  14. Random regression test-day model for the analysis of dairy cattle ...

    African Journals Online (AJOL)

    Random regression test-day model for the analysis of dairy cattle production data in South Africa: Creating the framework. EF Dzomba, KA Nephawe, AN Maiwashe, SWP Cloete, M Chimonyo, CB Banga, CJC Muller, K Dzama ...

  15. Modeling Interprovincial Cooperative Energy Saving in China: An Electricity Utilization Perspective

    Directory of Open Access Journals (Sweden)

    Lijun Zeng

    2018-01-01

    Full Text Available As the world faces great challenges from climate change and environmental pollution, China urgently requires energy saving, emission reduction, and carbon reduction programmes. However, the non-cooperative energy saving model (NCESM), the simple regulation mode that is China’s main model for energy saving, is not beneficial for optimization of energy and resource distribution, and cannot effectively motivate energy saving at the provincial level. Therefore, we propose an interprovincial cooperative energy saving model (CESM) from the perspective of electricity utilization, with the objective of maximizing the union's benefits from electricity utilization while achieving the energy saving goals of the union as a whole. The CESM consists of two parts: (1) an optimization model that calculates the optimal quantities of electricity consumption for each participating province to meet the joint energy saving goal; and (2) a model that distributes the economic benefits of the cooperation among the provinces in the cooperation based on the Shapley value method. We applied the CESM to the case of an interprovincial union of Shanghai, Sichuan, Shanxi, and Gansu in China. The results, based on the data from 2001–2014, show that cooperation can significantly increase the benefits of electricity utilization for each province in the union. The total benefits of the union from utilization of electricity increased 38.38%, or 353.98 billion CNY, while the benefits to Shanghai, Sichuan, Shanxi, and Gansu were 200.28, 58.37, 57.11, and 38.22 billion CNY respectively greater under the CESM than the NCESM. The implementation of the CESM provides provincial governments with not only a flexible, incentive-based way to achieve short-term goals, but also a feasible and effective path to realize long-term energy saving strategies. To test the impact of different parameter values on the results of the CESM, a sensitivity analysis was conducted. Some policy
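
    The Shapley value computation in part (2) can be sketched directly from its definition: each province receives its marginal contribution averaged over all join orders. The benefit function below is a toy superadditive stand-in, not the paper's electricity-utilization model.

```python
from itertools import combinations
from math import factorial

def shapley(players, value):
    # Shapley value of a cooperative game: `value` maps a frozenset
    # coalition to its benefit; each player's share is its marginal
    # contribution weighted by |S|! (n - |S| - 1)! / n!.
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for r in range(n):
            for S in combinations(others, r):
                S = frozenset(S)
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += w * (value(S | {p}) - value(S))
        phi[p] = total
    return phi

# Toy benefit function (illustrative numbers, not the paper's data):
# cooperation yields a superadditive synergy bonus.
base = {"Shanghai": 100.0, "Sichuan": 40.0, "Shanxi": 35.0, "Gansu": 25.0}
def value(S):
    solo = sum(base[p] for p in S)
    return solo * (1.0 + 0.1 * max(len(S) - 1, 0))

print(shapley(list(base), value))
```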

  16. Utilizing Soize's Approach to Identify Parameter and Model Uncertainties

    Energy Technology Data Exchange (ETDEWEB)

    Bonney, Matthew S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Univ. of Wisconsin, Madison, WI (United States); Brake, Matthew Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-10-01

    Quantifying uncertainty in model parameters is a challenging task for analysts. Soize has derived a method that is able to characterize both model and parameter uncertainty independently. This method is explained with the assumption that some experimental data is available, and is divided into seven steps. Monte Carlo analyses are performed to select the optimal dispersion variable to match the experimental data. Along with the nominal approach, an alternative distribution can be used along with corrections that can be utilized to expand the scope of this method. This method is one of a very few methods that can quantify uncertainty in the model form independently of the input parameters. Two examples are provided to illustrate the methodology, and example code is provided in the Appendix.

  17. Random walk and graph cut based active contour model for three-dimension interactive pituitary adenoma segmentation from MR images

    Science.gov (United States)

    Sun, Min; Chen, Xinjian; Zhang, Zhiqiang; Ma, Chiyuan

    2017-02-01

    Accurate volume measurements of pituitary adenoma are important to the diagnosis and treatment for this kind of sellar tumor. The pituitary adenomas have different pathological representations and various shapes. Particularly, in the case of infiltrating to surrounding soft tissues, they present similar intensities and indistinct boundary in T1-weighted (T1W) magnetic resonance (MR) images. Then the extraction of pituitary adenoma from MR images is still a challenging task. In this paper, we propose an interactive method to segment the pituitary adenoma from brain MR data, by combining graph cuts based active contour model (GCACM) and random walk algorithm. By using the GCACM method, the segmentation task is formulated as an energy minimization problem by a hybrid active contour model (ACM), and then the problem is solved by the graph cuts method. The region-based term in the hybrid ACM considers the local image intensities as described by Gaussian distributions with different means and variances, expressed as maximum a posteriori probability (MAP). Random walk is utilized as an initialization tool to provide initialized surface for GCACM. The proposed method is evaluated on the three-dimensional (3-D) T1W MR data of 23 patients and compared with the standard graph cuts method, the random walk method, the hybrid ACM method, a GCACM method which considers global mean intensity in region forces, and a competitive region-growing based GrowCut method planted in 3D Slicer. Based on the experimental results, the proposed method is superior to those methods.

  18. Evaluation of a black-footed ferret resource utilization function model

    Science.gov (United States)

    Eads, D.A.; Millspaugh, J.J.; Biggins, D.E.; Jachowski, D.S.; Livieri, T.M.

    2011-01-01

    Resource utilization function (RUF) models permit evaluation of potential habitat for endangered species; ideally such models should be evaluated before use in management decision-making. We evaluated the predictive capabilities of a previously developed black-footed ferret (Mustela nigripes) RUF. Using the population-level RUF, generated from ferret observations at an adjacent yet distinct colony, we predicted the distribution of ferrets within a black-tailed prairie dog (Cynomys ludovicianus) colony in the Conata Basin, South Dakota, USA. We evaluated model performance, using data collected during post-breeding spotlight surveys (2007-2008) by assessing model agreement via weighted compositional analysis and count-metrics. Compositional analysis of home range use and colony-level availability, and core area use and home range availability, demonstrated ferret selection of the predicted Very high and High occurrence categories in 2007 and 2008. Simple count-metrics corroborated these findings and suggested selection of the Very high category in 2007 and the Very high and High categories in 2008. Collectively, these results suggested that the RUF was useful in predicting occurrence and intensity of space use of ferrets at our study site, the 2 objectives of the RUF. Application of this validated RUF would increase the resolution of habitat evaluations, permitting prediction of the distribution of ferrets within distinct colonies. Additional model evaluation at other sites, on other black-tailed prairie dog colonies of varying resource configuration and size, would increase understanding of influences upon model performance and the general utility of the RUF. © 2011 The Wildlife Society.

  19. Modeling and understanding of effects of randomness in arrays of resonant meta-atoms

    DEFF Research Database (Denmark)

    Tretyakov, Sergei A.; Albooyeh, Mohammad; Alitalo, Pekka

    2013-01-01

    In this review presentation we will discuss approaches to modeling and understanding electromagnetic properties of 2D and 3D lattices of small resonant particles (meta-atoms) in transition from regular (periodic) to random (amorphous) states. Nanostructured metasurfaces (2D) and metamaterials (3D) are arrangements of optically small but resonant particles (meta-atoms). We will present our results on analytical modeling of metasurfaces with periodical and random arrangements of electrically and magnetically resonant meta-atoms with identical or random sizes, both for the normal and oblique-angle excitations... of the arrangements of meta-atoms.

  20. Estimating random-intercept models on data streams

    NARCIS (Netherlands)

    Ippel, L.; Kaptein, M.C.; Vermunt, J.K.

    2016-01-01

    Multilevel models are often used for the analysis of grouped data. Grouped data occur for instance when estimating the performance of pupils nested within schools or analyzing multiple observations nested within individuals. Currently, multilevel models are mostly fit to static datasets. However,

  1. Covariance Functions and Random Regression Models in the ...

    African Journals Online (AJOL)

    ARC-IRENE

    many, highly correlated measures (Meyer, 1998a). Several approaches have been proposed to deal with such data, from simplest repeatability models (SRM) to complex multivariate models (MTM). The SRM considers different measurements at different stages (ages) as a realization of the same genetic trait with constant.

  2. A restricted dimer model on a two-dimensional random causal triangulation

    DEFF Research Database (Denmark)

    Ambjørn, Jan; Durhuus, Bergfinnur; Wheater, J. F.

    2014-01-01

    We introduce a restricted hard dimer model on a random causal triangulation that is exactly solvable and generalizes a model recently proposed by Atkin and Zohren (2012 Phys. Lett. B 712 445–50). We show that the latter model exhibits unusual behaviour at its multicritical point; in particular, its...

  3. Recognizing cesarean delivery on maternal request as a social problem: utilizing the public arenas model.

    Science.gov (United States)

    Yamamoto, Sherry L

    2011-08-01

    Nearly one in three babies in the United States are now born surgically. While many causes for this surge in cesareans have been suggested, the phenomenon of cesarean delivery on maternal request (CDMR) has been the subject of the most controversy. Utilizing Hilgartner and Bosk's public arenas model, this article examines the ways in which CDMR has been framed and a collective definition of the problem established. Recognizing CDMR as a social problem is the first step to creating policies to ensure that the health and safety of mothers and babies are protected.

  4. Modeling multiple experiments using regularized optimization: A case study on bacterial glucose utilization dynamics.

    Science.gov (United States)

    Hartmann, András; Lemos, João M; Vinga, Susana

    2015-08-01

    The aim of inverse modeling is to capture the systems' dynamics through a set of parameterized Ordinary Differential Equations (ODEs). Parameters are often required to fit multiple repeated measurements or different experimental conditions. This typically leads to a multi-objective optimization problem that can be formulated as a non-convex optimization problem. Modeling of glucose utilization of Lactococcus lactis bacteria is considered using in vivo Nuclear Magnetic Resonance (NMR) measurements in perturbation experiments. We propose an ODE model based on a modified time-varying exponential decay that is flexible enough to model several different experimental conditions. The starting point is an over-parameterized non-linear model that will be further simplified through an optimization procedure with regularization penalties. For the parameter estimation, a stochastic global optimization method, particle swarm optimization (PSO) is used. A regularization is introduced to the identification, imposing that parameters should be the same across several experiments in order to identify a general model. On the remaining parameter that varies across the experiments a function is fit in order to be able to predict new experiments for any initial condition. The method is cross-validated by fitting the model to two experiments and validating the third one. Finally, the proposed model is integrated with existing models of glycolysis in order to reconstruct the remaining metabolites. The method was found useful as a general procedure to reduce the number of parameters of unidentifiable and over-parameterized models, thus supporting feature selection methods for parametric models. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Completely random measures for modelling block-structured sparse networks

    DEFF Research Database (Denmark)

    Herlau, Tue; Schmidt, Mikkel Nørgaard; Mørup, Morten

    2016-01-01

    Many statistical methods for network data parameterize the edge-probability by attributing latent traits to the vertices such as block structure and assume exchangeability in the sense of the Aldous-Hoover representation theorem. Empirical studies of networks indicate that many real-world networks...... [2014] proposed the use of a different notion of exchangeability due to Kallenberg [2006] and obtained a network model which admits power-law behaviour while retaining desirable statistical properties, however this model does not capture latent vertex traits such as block-structure. In this work we re-introduce the use of block-structure for network models obeying Kallenberg's notion of exchangeability and thereby obtain a model which admits the inference of block-structure and edge inhomogeneity. We derive a simple expression for the likelihood and an efficient sampling method. The obtained model

  6. A spatial error model with continuous random effects and an application to growth convergence

    Science.gov (United States)

    Laurini, Márcio Poletti

    2017-10-01

    We propose a spatial error model with continuous random effects based on Matérn covariance functions and apply this model for the analysis of income convergence processes (β-convergence). The use of a model with continuous random effects permits a clearer visualization and interpretation of the spatial dependency patterns, avoids the problems of defining neighborhoods in spatial econometrics models, and allows projecting the spatial effects for every possible location in the continuous space, circumventing the existing aggregations in discrete lattice representations. We apply this model approach to analyze the economic growth of Brazilian municipalities between 1991 and 2010 using unconditional and conditional formulations and a spatiotemporal model of convergence. The results indicate that the estimated spatial random effects are consistent with the existence of income convergence clubs for Brazilian municipalities in this period.
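
    For reference, a sketch of the continuous-space ingredient: the Matérn covariance (here the closed-form ν = 3/2 case) evaluated at arbitrary coordinates gives the covariance matrix of the random effect without any neighborhood matrix. The coordinates and parameters below are illustrative.

```python
import numpy as np

def matern32(d, sigma2=1.0, rho=1.0):
    # Matérn covariance with smoothness nu = 3/2:
    # C(d) = sigma2 * (1 + sqrt(3) d / rho) * exp(-sqrt(3) d / rho)
    a = np.sqrt(3.0) * np.asarray(d) / rho
    return sigma2 * (1.0 + a) * np.exp(-a)

# Covariance matrix of a continuous spatial random effect evaluated at
# arbitrary (e.g. municipality centroid) coordinates.
coords = np.random.default_rng(1).uniform(0, 10, size=(5, 2))
D = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
Sigma = matern32(D)
```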

  7. Prevention of low back pain: effect, cost-effectiveness, and cost-utility of maintenance care - study protocol for a randomized clinical trial

    DEFF Research Database (Denmark)

    Eklund, Andreas; Axén, Iben; Kongsted, Alice

    2014-01-01

    BACKGROUND: Low back pain (LBP) is a prevalent condition and a socioeconomic problem in many countries. Due to its recurrent nature, the prevention of further episodes (secondary prevention), seems logical. Furthermore, when the condition is persistent, the minimization of symptoms and prevention...... of deterioration (tertiary prevention), is equally important. Research has largely focused on treatment methods for symptomatic episodes, and little is known about preventive treatment strategies. METHODS: This study protocol describes a randomized controlled clinical trial in a multicenter setting investigating...... is the number of days with bothersome pain over 12 months. Secondary measures are self-rated health (EQ-5D), function (the Roland Morris Disability Questionnaire), psychological profile (the Multidimensional Pain Inventory), pain intensity (the Numeric Rating Scale), and work absence.The primary utility measure...

  8. Stochastic analysis model for vehicle-track coupled systems subject to earthquakes and track random irregularities

    Science.gov (United States)

    Xu, Lei; Zhai, Wanming

    2017-10-01

    This paper develops a computational model for stochastic analysis and reliability assessment of vehicle-track systems subject to earthquakes and track random irregularities. In this model, the earthquake is expressed as a non-stationary random process simulated by spectral representation and a random function, and the track random irregularities, with ergodic properties in amplitudes, wavelengths and probabilities, are characterized by a track irregularity probabilistic model; the number theoretical method (NTM) is then applied to select representative samples of earthquakes and track random irregularities. Furthermore, a vehicle-track coupled model is presented to obtain the time-domain dynamic responses of vehicle-track systems due to the earthquakes and track random irregularities, and the probability density evolution method (PDEM) is introduced to describe the evolutionary process of probability from excitation input to response output by assuming the vehicle-track system to be a probabilistically conservative system, laying the foundation for reliability assessment of vehicle-track systems. The effectiveness of the proposed model is validated by comparison with Monte Carlo results from a statistical viewpoint. As an illustrative example, the random vibrations of a high-speed railway vehicle running on track slabs excited by lateral seismic waves and track random irregularities are analyzed, from which some significant conclusions can be drawn: track irregularities additionally amplify the dynamic influence of earthquakes, especially the maximum values and dispersion degree of responses; and the characteristic frequencies or frequency ranges governed by earthquakes and by track random irregularities differ greatly, with the lateral seismic waves dominating or even changing the characteristic frequencies of some lateral dynamic response indices at low frequency.
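
    As a minimal illustration of the spectral-representation idea used for the seismic input (simplified here to a stationary Gaussian process with an assumed one-sided spectrum; the paper's non-stationary extension modulates such a process in time):

    ```python
    import numpy as np

    def spectral_rep_sample(S, omega_max, N, t, rng):
        """Simulate a zero-mean stationary Gaussian process from its
        one-sided power spectrum S(omega) (Shinozuka-style sketch)."""
        dw = omega_max / N
        omegas = (np.arange(N) + 0.5) * dw          # midpoint frequencies
        phases = rng.uniform(0.0, 2.0 * np.pi, N)   # independent random phases
        amps = np.sqrt(2.0 * S(omegas) * dw)
        # X(t) = sum_k sqrt(2 S(w_k) dw) * cos(w_k t + phi_k)
        return (amps[:, None] * np.cos(omegas[:, None] * t[None, :]
                                       + phases[:, None])).sum(axis=0)

    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 20.0, 2001)
    S = lambda w: 1.0 / (1.0 + w**2)                # hypothetical spectrum
    x = spectral_rep_sample(S, omega_max=20.0, N=512, t=t, rng=rng)
    print(x.std())  # target std is sqrt(integral of S) = sqrt(arctan 20) ~ 1.23
    ```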

  9. On the Path to SunShot. Utility Regulatory and Business Model Reforms for Addressing the Financial Impacts of Distributed Solar on Utilities

    Energy Technology Data Exchange (ETDEWEB)

    Barbose, Galen [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Miller, John [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sigrin, Ben [National Renewable Energy Lab. (NREL), Golden, CO (United States); Reiter, Emerson [National Renewable Energy Lab. (NREL), Golden, CO (United States); Cory, Karlynn [National Renewable Energy Lab. (NREL), Golden, CO (United States); McLaren, Joyce [National Renewable Energy Lab. (NREL), Golden, CO (United States); Seel, Joachim [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Mills, Andrew [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Darghouth, Naim [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Satchwell, Andrew [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-05-01

    Net-energy metering (NEM) has helped drive the rapid growth of distributed PV (DPV) but has raised concerns about electricity cost shifts, utility financial losses, and inefficient resource allocation. These concerns have motivated real and proposed reforms to utility regulatory and business models. This report explores the challenges and opportunities associated with such reforms in the context of the U.S. Department of Energy's SunShot Initiative. Most of the reforms to date address NEM concerns by reducing the benefits provided to DPV customers and thus constraining DPV deployment. Eliminating NEM nationwide, by compensating exports of PV electricity at wholesale rather than retail rates, could cut cumulative DPV deployment by 20% in 2050 compared with a continuation of current policies. This would slow the PV cost reductions that arise from larger scale and market certainty. It could also thwart achievement of the SunShot deployment goals even if the initiative's cost targets are achieved. This undesirable prospect is stimulating the development of alternative reform strategies that address concerns about distributed PV compensation without inordinately harming PV economics and growth. These alternatives fall into the categories of facilitating higher-value DPV deployment, broadening customer access to solar, and aligning utility profits and earnings with DPV. Specific strategies include utility ownership and financing of DPV, community solar, distribution network operators, services-driven utilities, performance-based incentives, enhanced utility system planning, pricing structures that incentivize high-value DPV configurations, and decoupling and other ratemaking reforms that reduce regulatory lag. These approaches represent near- and long-term solutions for preserving the legacy of the SunShot Initiative.

  10. Secure identity-based encryption in the quantum random oracle model

    Science.gov (United States)

    Zhandry, Mark

    2015-04-01

    We give the first proof of security for an identity-based encryption (IBE) scheme in the quantum random oracle model. This is the first proof of security for any scheme in this model that does not rely on the assumed existence of so-called quantum-secure pseudorandom functions (PRFs). Our techniques are quite general and we use them to obtain security proofs for two random oracle hierarchical IBE schemes and a random oracle signature scheme, all of which have previously resisted quantum security proofs, even assuming quantum-secure PRFs. We also explain how to remove quantum-secure PRFs from prior quantum random oracle model proofs. We accomplish these results by developing new tools for arguing that quantum algorithms cannot distinguish between two oracle distributions. Using a particular class of oracle distributions that we call semi-constant distributions, we argue that the aforementioned cryptosystems are secure against quantum adversaries.

  11. Additive and subtractive scrambling in optional randomized response modeling.

    Directory of Open Access Journals (Sweden)

    Zawar Hussain

    This article considers unbiased estimation of the mean, variance and sensitivity level of a sensitive variable via scrambled response modeling; in particular, we focus on estimation of the mean. The idea of using additive and subtractive scrambling has been suggested under a recent scrambled response model. Whether it is estimation of the mean, variance or sensitivity level, the proposed estimation scheme is shown to be more efficient than that recent model. As far as estimation of the mean is concerned, the proposed estimators also perform better than estimators based on recent additive scrambling models. Relative efficiency comparisons are made in order to highlight the performance of the proposed estimators under the suggested scrambling technique.
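
    A minimal sketch of the additive-scrambling idea (respondents report Y + S, where the scrambling variable S has a known distribution; the estimator below is the generic unbiased moment estimator, not the paper's specific optional scheme):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 10_000
    y = rng.normal(5.0, 1.5, n)          # sensitive variable (never observed)
    mu_s, sd_s = 10.0, 2.0               # known scrambling distribution
    s = rng.normal(mu_s, sd_s, n)
    z = y + s                            # only the scrambled response is reported

    mean_hat = z.mean() - mu_s           # unbiased: E[Z] - E[S] = E[Y]
    var_hat = z.var(ddof=1) - sd_s**2    # moment estimator of Var(Y)
    print(mean_hat, var_hat)
    ```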

  12. Equilibria in a Random Viewer Model of Television Broadcasting

    DEFF Research Database (Denmark)

    Olai Hansen, Bodil; Keiding, Hans

    2014-01-01

    The authors consider a model of a commercial television market with advertising and probabilistic viewer choice of channel, in which private broadcasters may coexist with a public television broadcaster. The broadcasters influence the probability of getting viewer attention through the amount...

  13. Child Abuse: A Black Perspective Utilizing a Social-Psychological Model

    Science.gov (United States)

    Butts, Hugh F.

    1979-01-01

    The majority of models utilized in the formulation of the dynamics of child abuse rely upon an individual psychopathological frame of reference. Not only is this approach limited, but it renders primary preventive approaches virtually impossible. The author presents a social-psychological model, with the recommendation that it be applied among blacks. Essential to the model's applicability is the vulnerability of blacks to institutionalized racism and to the universal and destructive institutional abuse to which blacks are subjected. While often quite covert, this abuse is nonetheless extremely noxious, and serves to potentiate the view blacks have of themselves as undervalued individuals, and as individuals who have no alternative other than to commit abuse to others. Child abuse in blacks is viewed as reactive in nature—reactive to societal abuse. This adaptational model of child abuse, rather than precluding an individual psychopathological model, complements it. Use of this model should facilitate primary prevention with respect to child abuse. Current approaches to child abuse are comparable to “an ambulance service at the bottom of a cliff.” What is lacking is an approach that will “fix the road on the cliff that causes the accidents.” Only by examining the intricate interplay between individual and society can the factors that lead to child abuse be modified. PMID:501758

  14. The limiting behavior of the estimated parameters in a misspecified random field regression model

    DEFF Research Database (Denmark)

    Dahl, Christian Møller; Qin, Yu

    This paper examines the limiting properties of the estimated parameters in the random field regression model recently proposed by Hamilton (Econometrica, 2001). Though the model is parametric, it enjoys the flexibility of the nonparametric approach since it can approximate a large collection...... As a consequence, the random field model specification introduces non-stationarity and non-ergodicity in the misspecified model, and it becomes non-trivial, relative to the existing literature, to establish the limiting behavior of the estimated parameters. The asymptotic results are obtained by applying some...

  15. Restoration of dimensional reduction in the random-field Ising model at five dimensions.

    Science.gov (United States)

    Fytas, Nikolaos G; Martín-Mayor, Víctor; Picco, Marco; Sourlas, Nicolas

    2017-04-01

    The random-field Ising model is one of the few disordered systems where the perturbative renormalization group can be carried out to all orders of perturbation theory. This analysis predicts dimensional reduction, i.e., that the critical properties of the random-field Ising model in D dimensions are identical to those of the pure Ising ferromagnet in D-2 dimensions. It is well known that dimensional reduction is not true in three dimensions, thus invalidating the perturbative renormalization group prediction. Here, we report high-precision numerical simulations of the 5D random-field Ising model at zero temperature. We illustrate universality by comparing different probability distributions for the random fields. We compute all the relevant critical exponents (including the critical slowing down exponent for the ground-state finding algorithm), as well as several other renormalization-group invariants. The estimated values of the critical exponents of the 5D random-field Ising model are statistically compatible with those of the pure 3D Ising ferromagnet. These results support the restoration of dimensional reduction at D=5. We thus conclude that the failure of the perturbative renormalization group is a low-dimensional phenomenon. We close our contribution by comparing universal quantities for the random-field problem at dimensions 3 ≤ D < 6 with those of the pure Ising model at D-2 dimensions, and we provide a clear verification of the Rushbrooke equality at all studied dimensions.

  16. Outcomes and Lessons Learned From a Randomized Controlled Trial to Reduce Health Care Utilization During the First Year After Spinal Cord Injury Rehabilitation: Telephone Counseling Versus Usual Care.

    Science.gov (United States)

    Mackelprang, Jessica L; Hoffman, Jeanne M; Garbaccio, Chris; Bombardier, Charles H

    2016-10-01

    To describe the outcomes and lessons learned from a trial of telephone counseling (TC) to reduce medical complications and health care utilization and to improve psychosocial outcomes during the first year after spinal cord injury rehabilitation. Single-site, single-blind, randomized (1:1) controlled trial comparing usual care plus TC with usual care (UC). Two inpatient rehabilitation programs. Adult patients (N=168) discharged between 2007 and 2010. The TC group (n=85, 51%) received up to eleven 30- to 45-minute scheduled telephone calls to provide education, resources, and support. The UC group (n=83, 49%) received indicated referrals and treatment. The primary outcome was a composite of self-reported health care utilization and medical complications. Secondary outcomes were depression severity, current health state, subjective health, and community participation. No significant differences were observed between TC and UC groups in the primary or secondary psychosocial outcomes. This study had a number of strengths, but included potential design weaknesses. Intervention studies would benefit from prescreening participants to identify those with treatable problems, those at high risk for poor outcomes, or those with intentions to change target behaviors. Interventions focused on treatment goals and designed to work in collaboration with the participant's medical care system may lead to improved outcomes. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  17. Cost-effectiveness and cost-utility of cognitive therapy, rational emotive behavioral therapy, and fluoxetine (Prozac) in treating depression: a randomized clinical trial.

    Science.gov (United States)

    Sava, Florin A; Yates, Brian T; Lupu, Viorel; Szentagotai, Aurora; David, Daniel

    2009-01-01

    Cost-effectiveness and cost-utility of cognitive therapy (CT), rational emotive behavioral therapy (REBT), and fluoxetine (Prozac) for major depressive disorder (MDD) were compared in a randomized clinical trial with a Romanian sample of 170 clients. Each intervention was offered for 14 weeks, plus three booster sessions. Beck Depression Inventory (BDI) scores were obtained prior to intervention, 7 and 14 weeks following the start of intervention, and 6 months following completion of intervention. CT, REBT, and fluoxetine did not differ significantly in changes in the BDI, depression-free days (DFDs), or Quality-Adjusted Life Years (QALYs). Average BDI scores decreased from 31.1 before treatment to 9.7 six months following completion of treatment. Due to lower costs, both psychotherapies were more cost-effective, and had better cost-utility, than pharmacotherapy: median $26.44/DFD gained/month for CT and $23.77/DFD gained/month for REBT versus $34.93/DFD gained/month for pharmacotherapy; median $/QALY = $1,638, $1,734, and $2,287 for CT, REBT, and fluoxetine (Prozac), respectively. (c) 2008 Wiley Periodicals, Inc.

  18. INVESTIGATION OF QUANTIFICATION OF FLOOD CONTROL AND WATER UTILIZATION EFFECT OF RAINFALL INFILTRATION FACILITY BY USING WATER BALANCE ANALYSIS MODEL

    OpenAIRE

    文, 勇起; BUN, Yuki

    2013-01-01

    In recent years, much flood damage and drought attributable to urbanization have occurred. At present, infiltration facilities are suggested as a solution to these problems. Against this background, the purpose of this study is to quantify the flood control and water utilization effects of rainfall infiltration facilities using a water balance analysis model. Key Words: flood control, water utilization, rainfall infiltration facility

  19. Modeling and design of light powered biomimicry micropump utilizing transporter proteins

    Science.gov (United States)

    Liu, Jin; Sze, Tsun-Kay Jackie; Dutta, Prashanta

    2014-11-01

    The creation of compact micropumps to provide steady flow has been an ongoing challenge in the field of microfluidics. We present a mathematical model for a micropump utilizing bacteriorhodopsin and sugar transporter proteins. This micropump uses transporter proteins to drive fluid flow by converting light energy into chemical potential. The fluid flow through a microchannel is simulated using the Nernst-Planck, Navier-Stokes, and continuity equations. Numerical results show that the micropump is capable of generating usable pressure. Design parameters influencing the performance of the micropump are investigated, including membrane fraction, lipid proton permeability, illumination, and channel height. The results show that there is a substantial membrane-fraction region in which fluid flow is maximized. The use of lipids with low membrane proton permeability allows illumination to be used as a method to turn the pump on and off. This capability allows the micropump to be activated and shut off remotely without bulky support equipment. This modeling work provides new insights on mechanisms potentially useful for fluidic pumping in self-sustained biomimetic microfluidic pumps. This work is supported in part by the National Science Foundation Grant CBET-1250107.

  20. Analytic model comparing the cost utility of TVT versus duloxetine in women with urinary stress incontinence.

    Science.gov (United States)

    Jacklin, Paul; Duckett, Jonathan; Renganathan, Arasee

    2010-08-01

    The purpose of this study was to assess the cost utility of duloxetine versus tension-free vaginal tape (TVT) as a second-line treatment for urinary stress incontinence. A Markov model was used to compare the cost utility based on a 2-year follow-up period. Quality-adjusted life year (QALY) estimation was performed by assuming a disutility rate of 0.05. Under base-case assumptions, although duloxetine was the cheaper option, TVT gave a considerably higher QALY gain. When a longer follow-up period was considered, TVT had an incremental cost-effectiveness ratio (ICER) of £7,710 ($12,651) at 10 years. If the QALY gain from cure was 0.09, then the ICERs for duloxetine and TVT would both fall within the indicative National Institute for Health and Clinical Excellence willingness-to-pay threshold at 2 years, but TVT would be the cost-effective option, having extended dominance over duloxetine. This model suggests that TVT is a cost-effective treatment for stress incontinence.
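
    The ICER logic used here is simple to state; a minimal sketch with hypothetical cost and QALY figures (not the paper's data):

    ```python
    def icer(cost_new, eff_new, cost_old, eff_old):
        """Incremental cost-effectiveness ratio: extra cost per extra
        unit of effect (e.g., per QALY gained)."""
        return (cost_new - cost_old) / (eff_new - eff_old)

    # Hypothetical: surgery costs 1200 more but yields 0.15 extra QALYs
    print(icer(cost_new=2400.0, eff_new=1.65, cost_old=1200.0, eff_old=1.50))
    # -> 8000.0 per QALY, compared against a willingness-to-pay threshold
    ```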

  1. A Cost-Utility Model of Care for Peristomal Skin Complications.

    Science.gov (United States)

    Neil, Nancy; Inglese, Gary; Manson, Andrea; Townshend, Arden

    2016-01-01

    The aim of this study was to evaluate the economic and humanistic implications of using ostomy components to prevent subsequent peristomal skin complications (PSCs) in individuals who experience an initial, leakage-related PSC event. Cost-utility analysis. We developed a simple decision model to consider, from a payer's perspective, PSCs managed with and without the use of ostomy components over 1 year. The model evaluated the extent to which outcomes associated with the use of ostomy components (PSC events avoided; quality-adjusted life days gained) offset the costs associated with their use. Our base case analysis of 1000 hypothetical individuals over 1 year assumes that using ostomy components following a first PSC reduces recurrent events versus PSC management without components. In this analysis, component acquisition costs were largely offset by lower resource use for ostomy supplies (barriers; pouches) and lower clinical utilization to manage PSCs. The overall annual average resource use for individuals using components was about 6.3% ($139) higher versus individuals not using components. Each PSC event avoided yielded, on average, 8 additional quality-adjusted life days over 1 year. In our analysis, (1) acquisition costs for ostomy components were offset in whole or in part by the use of fewer ostomy supplies to manage PSCs and (2) use of ostomy components to prevent PSCs produced better outcomes (fewer repeat PSC events; more health-related quality-adjusted life days) over 1 year compared to not using components.

  2. A Cost-Utility Model of Care for Peristomal Skin Complications

    Science.gov (United States)

    Inglese, Gary; Manson, Andrea; Townshend, Arden

    2016-01-01

    PURPOSE: The aim of this study was to evaluate the economic and humanistic implications of using ostomy components to prevent subsequent peristomal skin complications (PSCs) in individuals who experience an initial, leakage-related PSC event. DESIGN: Cost-utility analysis. METHODS: We developed a simple decision model to consider, from a payer's perspective, PSCs managed with and without the use of ostomy components over 1 year. The model evaluated the extent to which outcomes associated with the use of ostomy components (PSC events avoided; quality-adjusted life days gained) offset the costs associated with their use. RESULTS: Our base case analysis of 1000 hypothetical individuals over 1 year assumes that using ostomy components following a first PSC reduces recurrent events versus PSC management without components. In this analysis, component acquisition costs were largely offset by lower resource use for ostomy supplies (barriers; pouches) and lower clinical utilization to manage PSCs. The overall annual average resource use for individuals using components was about 6.3% ($139) higher versus individuals not using components. Each PSC event avoided yielded, on average, 8 additional quality-adjusted life days over 1 year. CONCLUSIONS: In our analysis, (1) acquisition costs for ostomy components were offset in whole or in part by the use of fewer ostomy supplies to manage PSCs and (2) use of ostomy components to prevent PSCs produced better outcomes (fewer repeat PSC events; more health-related quality-adjusted life days) over 1 year compared to not using components. PMID:26633166

  3. Utilization of building information modeling in infrastructure’s design and construction

    Science.gov (United States)

    Zak, Josef; Macadam, Helen

    2017-09-01

    Building Information Modeling (BIM) is a concept that has gained its place in the design, construction and maintenance of buildings in the Czech Republic during recent years. This paper describes the usage, applications, and potential benefits and disadvantages connected with implementing BIM principles in the preparation and construction of infrastructure projects. Part of the paper describes the status of BIM implementation in the Czech Republic and reviews several virtual design and construction practices there. Examples of best practice are presented from current infrastructure projects. The paper further summarizes experiences with new technologies gained from the application of BIM-related workflows. The focus is on BIM model utilization for machine control systems on site, quality assurance, quality management and construction management.

  4. Utilizing high throughput screening data for predictive toxicology models: protocols and application to MLSCN assays

    Science.gov (United States)

    Guha, Rajarshi; Schürer, Stephan C.

    2008-06-01

    Computational toxicology is emerging as an encouraging alternative to experimental testing. The Molecular Libraries Screening Center Network (MLSCN) as part of the NIH Molecular Libraries Roadmap has recently started generating large and diverse screening datasets, which are publicly available in PubChem. In this report, we investigate various aspects of developing computational models to predict cell toxicity based on cell proliferation screening data generated in the MLSCN. By capturing feature-based information in those datasets, such predictive models would be useful in evaluating cell-based screening results in general (for example from reporter assays) and could be used as an aid to identify and eliminate potentially undesired compounds. Specifically we present the results of random forest ensemble models developed using different cell proliferation datasets and highlight protocols to take into account their extremely imbalanced nature. Depending on the nature of the datasets and the descriptors employed we were able to achieve percentage correct classification rates between 70% and 85% on the prediction set, though the accuracy rate dropped significantly when the models were applied to in vivo data. In this context we also compare the MLSCN cell proliferation results with animal acute toxicity data to investigate to what extent animal toxicity can be correlated and potentially predicted by proliferation results. Finally, we present a visualization technique that allows one to compare a new dataset to the training set of the models to decide whether the new dataset may be reliably predicted.
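
    A minimal sketch of the kind of ensemble workflow described, using scikit-learn's random forest with class weighting as one common way to handle extreme class imbalance (the descriptors, dataset, and split below are placeholders, not the MLSCN protocol):

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import balanced_accuracy_score

    rng = np.random.default_rng(3)
    X = rng.normal(size=(5000, 32))               # placeholder descriptors
    score = X[:, 0] + X[:, 1] + rng.normal(0.0, 0.5, 5000)
    y = (score > 2.3).astype(int)                 # ~6% positives: imbalanced

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    clf = RandomForestClassifier(
        n_estimators=500, class_weight="balanced", random_state=0
    ).fit(X_tr, y_tr)
    # Balanced accuracy is more informative than raw accuracy here
    print(balanced_accuracy_score(y_te, clf.predict(X_te)))
    ```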

  5. Prevention of low back pain: effect, cost-effectiveness, and cost-utility of maintenance care - study protocol for a randomized clinical trial.

    Science.gov (United States)

    Eklund, Andreas; Axén, Iben; Kongsted, Alice; Lohela-Karlsson, Malin; Leboeuf-Yde, Charlotte; Jensen, Irene

    2014-04-02

    Low back pain (LBP) is a prevalent condition and a socioeconomic problem in many countries. Due to its recurrent nature, the prevention of further episodes (secondary prevention), seems logical. Furthermore, when the condition is persistent, the minimization of symptoms and prevention of deterioration (tertiary prevention), is equally important. Research has largely focused on treatment methods for symptomatic episodes, and little is known about preventive treatment strategies. This study protocol describes a randomized controlled clinical trial in a multicenter setting investigating the effect and cost-effectiveness of preventive manual care (chiropractic maintenance care) in a population of patients with recurrent or persistent LBP.Four hundred consecutive study subjects with recurrent or persistent LBP will be recruited from chiropractic clinics in Sweden. The primary outcome is the number of days with bothersome pain over 12 months. Secondary measures are self-rated health (EQ-5D), function (the Roland Morris Disability Questionnaire), psychological profile (the Multidimensional Pain Inventory), pain intensity (the Numeric Rating Scale), and work absence.The primary utility measure of the study is quality-adjusted life years and will be calculated using the EQ-5D questionnaire. Direct medical costs as well as indirect costs will be considered.Subjects are randomly allocated into two treatment arms: 1) Symptom-guided treatment (patient controlled), receiving care when patients feel a need. 2) Preventive treatment (clinician controlled), receiving care on a regular basis. Eligibility screening takes place in two phases: first, when assessing the primary inclusion/exclusion criteria, and then to only include fast responders, i.e., subjects who respond well to initial treatment. Data are collected at baseline and at follow-up as well as weekly, using SMS text messages. This study investigates a manual strategy (chiropractic maintenance care) for recurrent and

  6. Surplus thermal energy model of greenhouses and coefficient analysis for effective utilization

    Energy Technology Data Exchange (ETDEWEB)

    Yang, S.H.; Son, J.E.; Lee, S.D.; Cho, S.I.; Ashtiani-Araghi, A.; Rhee, J.Y.

    2016-11-01

    If a greenhouse in the temperate and subtropical regions is maintained in a closed condition, the indoor temperature commonly exceeds that required for optimal plant growth, even in the cold season. This study considered this excess energy as surplus thermal energy (STE), which can be recovered, stored and used when heating is necessary. To use the STE economically and effectively, the amount of STE must be estimated before designing a utilization system. Therefore, this study proposed an STE model using energy balance equations for the three steps of the STE generation process. The coefficients in the model were determined by the results of previous research and experiments using the test greenhouse. The proposed STE model produced monthly errors of 17.9%, 10.4% and 7.4% for December, January and February, respectively. Furthermore, the effects of the coefficients on the model accuracy were revealed by the estimation error assessment and linear regression analysis through fixing dynamic coefficients. A sensitivity analysis of the model coefficients indicated that the coefficients have to be determined carefully. This study also provides effective ways to increase the amount of STE. (Author)

  7. On the utility of land surface models for agricultural drought monitoring

    Directory of Open Access Journals (Sweden)

    W. T. Crow

    2012-09-01

    The lagged rank cross-correlation between model-derived root-zone soil moisture estimates and remotely sensed vegetation indices (VI) is examined between January 2000 and December 2010 to quantify the skill of various soil moisture models for agricultural drought monitoring. Examined modeling strategies range from a simple antecedent precipitation index to the application of modern land surface models (LSMs) based on complex water and energy balance formulations. A quasi-global evaluation of lagged VI/soil moisture cross-correlation suggests that, when globally averaged across the entire annual cycle, soil moisture estimates obtained from complex LSMs provide little added skill (<5% in relative terms) in anticipating variations in vegetation condition relative to a simplified water accounting procedure based solely on observed precipitation. However, larger amounts of added skill (5-15% in relative terms) can be identified when focusing exclusively on the extra-tropical growing season and/or utilizing soil moisture values acquired by averaging across a multi-model ensemble.
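
    A minimal sketch of a lagged rank (Spearman) cross-correlation between a soil-moisture series and a later VI series (synthetic data; the lag convention and series are illustrative only):

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    def lagged_rank_corr(sm, vi, lag):
        """Spearman correlation between soil moisture and VI observed
        `lag` steps later (positive lag: soil moisture leads VI)."""
        if lag > 0:
            return spearmanr(sm[:-lag], vi[lag:])[0]
        return spearmanr(sm, vi)[0]

    rng = np.random.default_rng(4)
    sm = rng.normal(size=300)
    # VI responds to soil moisture two steps earlier (circular shift; the
    # wrap-around edge effect is negligible for this illustration)
    vi = 0.6 * np.roll(sm, 2) + 0.4 * rng.normal(size=300)
    print([round(lagged_rank_corr(sm, vi, k), 2) for k in range(5)])
    # the correlation should peak near lag = 2
    ```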

  8. Surplus thermal energy model of greenhouses and coefficient analysis for effective utilization

    Directory of Open Access Journals (Sweden)

    Seung-Hwan Yang

    2016-03-01

    If a greenhouse in the temperate and subtropical regions is maintained in a closed condition, the indoor temperature commonly exceeds that required for optimal plant growth, even in the cold season. This study considered this excess energy as surplus thermal energy (STE), which can be recovered, stored and used when heating is necessary. To use the STE economically and effectively, the amount of STE must be estimated before designing a utilization system. Therefore, this study proposed an STE model using energy balance equations for the three steps of the STE generation process. The coefficients in the model were determined by the results of previous research and experiments using the test greenhouse. The proposed STE model produced monthly errors of 17.9%, 10.4% and 7.4% for December, January and February, respectively. Furthermore, the effects of the coefficients on the model accuracy were revealed by the estimation error assessment and linear regression analysis through fixing dynamic coefficients. A sensitivity analysis of the model coefficients indicated that the coefficients have to be determined carefully. This study also provides effective ways to increase the amount of STE.

  9. Effects of atmospheric variability on energy utilization and conservation. [Space heating energy demand modeling; Program HEATLOAD

    Energy Technology Data Exchange (ETDEWEB)

    Reiter, E.R.; Johnson, G.R.; Somervell, W.L. Jr.; Sparling, E.W.; Dreiseitly, E.; Macdonald, B.C.; McGuirk, J.P.; Starr, A.M.

    1976-11-01

    Research conducted between 1 July 1975 and 31 October 1976 is reported. A "physical-adaptive" model of the space-conditioning demand for energy and its response to changes in weather regimes was developed. This model includes parameters pertaining to engineering factors of building construction, to weather-related factors, and to socio-economic factors. Preliminary testing of several components of the model on the city of Greeley, Colorado, yielded most encouraging results. Other components, especially those pertaining to socio-economic factors, are still under development. Expansion of model applications to different types of structures and larger regions is presently underway. A CRT-display model for energy demand within the conterminous United States also has passed preliminary tests. A major effort was expended to obtain disaggregated data on energy use from utility companies throughout the United States. The study of atmospheric variability revealed that the 22- to 26-day vacillation in the potential and kinetic energy modes of the Northern Hemisphere is related to the behavior of the planetary long-waves, and that the midwinter dip in zonal available potential energy is reflected in the development of blocking highs. Attempts to classify weather patterns over the eastern and central United States have proceeded satisfactorily to the point where testing of our method for longer time periods appears desirable.

  10. Brain in flames – animal models of psychosis: utility and limitations

    Directory of Open Access Journals (Sweden)

    Mattei D

    2015-05-01

    The neurodevelopmental hypothesis of schizophrenia posits that schizophrenia is a psychopathological condition resulting from aberrations in neurodevelopmental processes caused by a combination of environmental and genetic factors which proceed long before the onset of clinical symptoms. Many studies discuss an immunological component in the onset and progression of schizophrenia. We here review studies utilizing animal models of schizophrenia with manipulations of genetic, pharmacologic, and immunological origin. We focus on the immunological component to bridge the studies in terms of evaluation and treatment options of negative, positive, and cognitive symptoms. Throughout the review we link certain aspects of each model to the situation in human schizophrenic patients. In conclusion we suggest a combination of existing models to better represent the human situation. Moreover, we emphasize that animal models represent defined single or multiple symptoms or hallmarks of a given disease. Keywords: inflammation, schizophrenia, microglia, animal models

  11. Spectra of Anderson type models with decaying randomness

    Indian Academy of Sciences (India)


    The literature on scattering theoretic and commutator methods for the discrete Laplacian includes Boutet de Monvel-Sahbani [4, 5], who study deterministic ... the free part in terms of the structure it has in its spectral representation. ... The models we consider in this paper are related to the discrete ...

  12. Multilevel random effect and marginal models for longitudinal data ...

    African Journals Online (AJOL)

    The models were applied to data obtained from a phase-III clinical trial on a new meningococcal vaccine. The goal is to investigate whether children injected by the candidate vaccine have a lower or higher risk for the occurrence of specific adverse events than children injected with licensed vaccine, and if so, to quantify the ...

  13. Modeling species distribution and change using random forest [Chapter 8

    Science.gov (United States)

    Jeffrey S. Evans; Melanie A. Murphy; Zachary A. Holden; Samuel A. Cushman

    2011-01-01

    Although inference is a critical component in ecological modeling, the balance between accurate predictions and inference is the ultimate goal in ecological studies (Peters 1991; De’ath 2007). Practical applications of ecology in conservation planning, ecosystem assessment, and biodiversity are highly dependent on very accurate spatial predictions of...

  14. Dispersion modeling of accidental releases of toxic gases - Comparison of the models and their utility for the fire brigades.

    Science.gov (United States)

    Stenzel, S.; Baumann-Stanzer, K.

    2009-04-01

    In the case of an accidental release of hazardous gases into the atmosphere, emergency responders need a reliable and fast tool to assess the possible consequences and apply the optimal countermeasures. For hazard prediction and simulation of the hazard zones, a number of air dispersion models are available. Most model packages (commercial or free of charge) include a chemical database, an intuitive graphical user interface (GUI) and automated graphical output for displaying the results; they are easy to use and can operate fast and effectively during stress situations. The models are designed especially for analyzing different accidental toxic release scenarios ("worst-case scenarios"), preparing emergency response plans and optimal countermeasures, as well as for real-time risk assessment and management. There are also possibilities for directly coupling the models to automatic meteorological stations, in order to avoid uncertainties in the model output due to insufficient or incorrect meteorological data. Another key problem in coping with accidental toxic releases is the relatively wide spectrum of regulations and threshold values, like IDLH, ERPG, AEGL, MAK etc., and the different criteria for their application. Since the particular emergency responders and organizations require different regulations and values for their purposes, it is quite difficult to predict the individual hazard areas. A number of research studies and investigations address this problem; in any case, the final decision is up to the authorities. The research project RETOMOD (reference scenarios calculations for toxic gas releases - model systems and their utility for the fire brigade) was conducted by the Central Institute for Meteorology and Geodynamics (ZAMG) in cooperation with the Vienna fire brigade, OMV Refining & Marketing GmbH and

  15. The Prediction Model of Dam Uplift Pressure Based on Random Forest

    Science.gov (United States)

    Li, Xing; Su, Huaizhi; Hu, Jiang

    2017-09-01

    The prediction of dam uplift pressure is of great significance in dam safety monitoring. Based on comprehensive consideration of various factors, 18 parameters are selected as the main factors affecting the prediction of uplift pressure, and the actual monitoring data of uplift pressure are used as evaluation factors for the prediction model. Dam uplift pressure prediction models are built on the random forest algorithm and on the support vector machine, and the predictive performance of the two models is compared and analyzed. At the same time, based on the established random forest prediction model, the significance of each factor is analyzed, and the importance of each factor in the prediction model is calculated by the importance function. Results showed that: (1) the RF prediction model can quickly and accurately predict the uplift pressure value from the influence factors, with an average prediction accuracy above 96%; compared with the support vector machine (SVM) model, the random forest model has better robustness, better prediction precision and faster convergence, and is more robust to missing and unbalanced data. (2) The effect of water level on uplift pressure is the largest, and the influence of rainfall on uplift pressure is the smallest compared with the other factors.
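
    A minimal sketch of this regression-plus-importance workflow, using scikit-learn's random forest on synthetic monitoring data (the three features and the target below are placeholders for the paper's 18 factors and measured uplift pressure):

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(5)
    n = 2000
    water_level = rng.uniform(100, 140, n)
    rainfall = rng.exponential(5.0, n)
    temperature = rng.normal(15, 8, n)
    X = np.column_stack([water_level, rainfall, temperature])
    # Hypothetical response: uplift dominated by water level, as in the paper
    y = 0.8 * water_level + 0.05 * rainfall + rng.normal(0, 1.0, n)

    model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
    for name, imp in zip(["water_level", "rainfall", "temperature"],
                         model.feature_importances_):
        print(f"{name}: {imp:.3f}")   # water_level should dominate
    ```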

  16. A combined model to assess technical and economic consequences of changing conditions and management options for wastewater utilities.

    Science.gov (United States)

    Giessler, Mathias; Tränckner, Jens

    2018-02-01

    The paper presents a simplified model that quantifies the economic and technical consequences of changing conditions in wastewater systems at the utility level. It was developed based on data from stakeholders and ministries, collected by a survey that determined the resulting effects and the measures adopted. The model comprises all substantially cost-relevant assets and activities of a typical German wastewater utility. It consists of three modules: i) Sewer, describing the state development of sewer systems; ii) WWTP, considering process parameters of wastewater treatment plants (WWTP); and iii) Cost Accounting, calculating expenses in the cost categories and the resulting charges. The validity and accuracy of this model were verified using historical data from an exemplary wastewater utility. Calculated process and economic parameters show high accuracy compared with measured parameters and actual expenses. Thus, the model is proposed to support strategic, process-oriented decision making at the utility level. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Evolution models with lethal mutations on symmetric or random fitness landscapes.

    Science.gov (United States)

    Kirakosyan, Zara; Saakian, David B; Hu, Chin-Kun

    2010-07-01

    We calculate the mean fitness for evolution models in which the fitness is a function of the Hamming distance from a reference sequence and there is a probability that this fitness is nullified (Eigen model case) or tends to negative infinity (Crow-Kimura model case). The mean fitness is also calculated for random fitnesses with a log-normal distribution, which sometimes reasonably describes the situation with RNA viruses.

  18. Entropy, complexity, and Markov diagrams for random walk cancer models.

    Science.gov (United States)

    Newton, Paul K; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter

    2014-12-19

    The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix associated with each cancer is associated with a directed graph model where nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential.
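
    A minimal sketch of the two quantities this model family pairs together, the stationary distribution of a Markov transition matrix and its Shannon entropy (a toy 3-site matrix, not the autopsy-derived one):

    ```python
    import numpy as np

    P = np.array([[0.7, 0.2, 0.1],    # toy transition probabilities between
                  [0.3, 0.4, 0.3],    # three anatomical "sites"
                  [0.2, 0.3, 0.5]])

    # Stationary distribution: left eigenvector of P for eigenvalue 1
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    pi = pi / pi.sum()

    entropy = -np.sum(pi * np.log2(pi))   # Shannon entropy in bits
    print(np.round(pi, 3), round(float(entropy), 3))
    ```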

  19. Entropy, complexity, and Markov diagrams for random walk cancer models

    Science.gov (United States)

    Newton, Paul K.; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter

    2014-12-01

    The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix associated with each cancer is associated with a directed graph model where nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential.

  20. Rgbp: An R Package for Gaussian, Poisson, and Binomial Random Effects Models with Frequency Coverage Evaluations

    Directory of Open Access Journals (Sweden)

    Hyungsuk Tak

    2017-06-01

    Rgbp is an R package that provides estimates and verifiable confidence intervals for random effects in two-level conjugate hierarchical models for overdispersed Gaussian, Poisson, and binomial data. Rgbp models aggregate data from k independent groups summarized by observed sufficient statistics for each random effect, such as sample means, possibly with covariates. Rgbp uses approximate Bayesian machinery with unique improper priors for the hyper-parameters, which leads to good repeated sampling coverage properties for random effects. A special feature of Rgbp is an option that generates synthetic data sets to check whether the interval estimates for random effects actually meet the nominal confidence levels. Additionally, Rgbp provides inference statistics for the hyper-parameters, e.g., regression coefficients.

  1. Possibility/Necessity-Based Probabilistic Expectation Models for Linear Programming Problems with Discrete Fuzzy Random Variables

    Directory of Open Access Journals (Sweden)

    Hideki Katagiri

    2017-10-01

    This paper considers linear programming problems (LPPs) where the objective functions involve discrete fuzzy random variables (fuzzy set-valued discrete random variables). New decision making models, which are useful in fuzzy stochastic environments, are proposed based on both possibility theory and probability theory. In multi-objective cases, Pareto optimal solutions of the proposed models are newly defined. Computational algorithms for obtaining the Pareto optimal solutions of the proposed models are provided. It is shown that problems involving discrete fuzzy random variables can be transformed into deterministic nonlinear mathematical programming problems which can be solved through a conventional mathematical programming solver under practically reasonable assumptions. A numerical example of agriculture production problems is given to demonstrate the applicability of the proposed models to real-world problems in fuzzy stochastic environments.

  2. A discrete random effects probit model with application to the demand for preventive care.

    Science.gov (United States)

    Deb, P

    2001-07-01

    I have developed a random effects probit model in which the distribution of the random intercept is approximated by a discrete density. Monte Carlo results show that only three to four points of support are required for the discrete density to closely mimic normal and chi-squared densities and provide unbiased estimates of the structural parameters and the variance of the random intercept. The empirical application shows that both observed family characteristics and unobserved family-level heterogeneity are important determinants of the demand for preventive care. Copyright 2001 John Wiley & Sons, Ltd.
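
    A minimal sketch of the estimation idea, a discrete mass-point (Heckman-Singer-style) likelihood maximized numerically; the simulated panel, the number of support points, and the softmax parameterization of the masses are illustrative assumptions, not the paper's specification:

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def neg_loglik(theta, X, y, K):
        """Random effects probit where the random intercept has a K-point
        discrete distribution (support points a_k, masses from a softmax)."""
        p = X.shape[2]
        beta = theta[:p]
        a = theta[p:p + K]                        # support points
        g = np.append(theta[p + K:], 0.0)         # last logit fixed at 0
        w = np.exp(g) / np.exp(g).sum()           # probability masses
        xb = X @ beta                             # (n, T) linear index
        ll = 0.0
        for i in range(X.shape[0]):
            s = 2.0 * y[i] - 1.0                  # +1 / -1 coding
            lik_k = np.prod(norm.cdf(s * (xb[i] + a[:, None])), axis=1)
            ll += np.log(w @ lik_k + 1e-300)      # mix over mass points
        return -ll

    rng = np.random.default_rng(6)
    n, T, K = 300, 4, 3
    X = rng.normal(size=(n, T, 1))
    alpha = rng.choice([-1.0, 1.0], size=n)       # true 2-point heterogeneity
    y = (X[..., 0] + alpha[:, None] + rng.normal(size=(n, T)) > 0).astype(float)

    res = minimize(neg_loglik, x0=0.1 * rng.normal(size=1 + K + K - 1),
                   args=(X, y, K), method="BFGS")
    print(res.x[0])   # slope estimate, should be near the true value of 1
    ```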

  3. A new neural network model for solving random interval linear programming problems.

    Science.gov (United States)

    Arjmandzadeh, Ziba; Safi, Mohammadreza; Nazemi, Alireza

    2017-05-01

    This paper presents a neural network model for solving random interval linear programming problems. The original problem involving random interval variable coefficients is first transformed into an equivalent convex second order cone programming problem. A neural network model is then constructed for solving the obtained convex second order cone problem. Employing a Lyapunov function approach, it is also shown that the proposed neural network model is stable in the sense of Lyapunov and that it is globally convergent to an exact satisfactory solution of the original problem. Several illustrative examples are solved in support of this technique. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. On the Inference of Spatial Continuity using Spartan Random Field Models

    OpenAIRE

    Elogne, Samuel; Hristopulos, Dionisis

    2006-01-01

    This paper addresses the inference of spatial dependence in the context of a recently proposed framework. More specifically, the paper focuses on the estimation of model parameters for a class of generalized Gibbs random fields, i.e., Spartan Spatial Random Fields (SSRFs). The problem of parameter inference is based on the minimization of a distance metric. The latter involves a specifically designed distance between sample constraints (variance, generalized "gradient" and "curvature") an...

  5. Relationship between flux and concentration gradient of diffusive particles with the usage of random walk model

    Science.gov (United States)

    Ovchinnikov, M. N.

    2017-09-01

    The fundamental solutions of the diffusion equation for the local-equilibrium and nonlocal models are considered as the limiting cases of the solution of a problem concerning the random walks of Brownian particles. The differences between the fundamental solutions, flows and concentration gradients are studied. A new modified non-local diffusion equation of telegrapher type with a correction function is suggested; it contains only the microparameters of the random walk problem.

  6. Mechanistic modeling study on process optimization and precursor utilization with atmospheric spatial atomic layer deposition

    Energy Technology Data Exchange (ETDEWEB)

    Deng, Zhang; He, Wenjie; Duan, Chenlong [State Key Laboratory of Digital Manufacturing Equipment and Technology, School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Wuhan, Hubei 430074 (China); Chen, Rong, E-mail: rongchen@mail.hust.edu.cn [State Key Laboratory of Digital Manufacturing Equipment and Technology, School of Mechanical Science and Engineering, School of Optical and Electronic Information, Huazhong University of Science and Technology, Wuhan, Hubei 430074 (China); Shan, Bin [State Key Laboratory of Material Processing and Die & Mould Technology, School of Materials Science and Engineering, Huazhong University of Science and Technology, Wuhan, Hubei 430074 (China)

    2016-01-15

    Spatial atomic layer deposition (SALD) is a promising technology that aims to combine the excellent uniformity and conformality of temporal atomic layer deposition (ALD) with an industrially scalable, continuous process. In this manuscript, a combined experimental and numerical model of an atmospheric SALD system is presented. To establish the connection between the process parameters and the growth efficiency, a quantitative model of reactant isolation, throughput, and precursor utilization is developed in terms of the separation gas flow rate, carrier gas flow rate, and precursor mass fraction. The simulation results based on this model show an inverse relation between precursor usage and carrier gas flow rate. With constant carrier gas flow, precursor usage is a monotonic function of the precursor mass fraction. The precursor concentration, rather than the gas velocity, is the determining factor for the minimal residence time. The narrow gap between the precursor-injecting heads and the substrate surface in a typical SALD system leads to a low Péclet number. In this situation, gas diffusion, rather than convection, plays the leading role in precursor transport across the small gap. The fluid kinetics from the numerical model is independent of the specific structure, which is instructive for SALD geometry design as well as process optimization.
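
    The Péclet-number argument is easy to reproduce; a minimal sketch with assumed gap height, gas velocity, and diffusivity values (hypothetical numbers chosen only to land in the diffusion-dominated regime described):

    ```python
    def peclet(velocity_m_s, length_m, diffusivity_m2_s):
        """Pe = u * L / D: ratio of convective to diffusive transport."""
        return velocity_m_s * length_m / diffusivity_m2_s

    # Hypothetical SALD-like numbers: 0.2 mm gap, 0.01 m/s gas velocity
    # across the gap, gas-phase diffusivity D ~ 1e-5 m^2/s
    pe = peclet(velocity_m_s=0.01, length_m=2e-4, diffusivity_m2_s=1e-5)
    print(pe)  # 0.2 << 1: diffusion, not convection, dominates in the gap
    ```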

  7. Dispersion modeling of accidental releases of toxic gases - utility for the fire brigades.

    Science.gov (United States)

    Stenzel, S.; Baumann-Stanzer, K.

    2009-09-01

    Several air dispersion models are available for prediction and simulation of the hazard areas associated with accidental releases of toxic gases. Most model packages (commercial or free of charge) include a chemical database, an intuitive graphical user interface (GUI) and automated graphical output for effective presentation of results. The models are designed especially for analyzing different accidental toxic release scenarios ("worst-case scenarios"), preparing emergency response plans and optimal countermeasures, as well as for real-time risk assessment and management. The research project RETOMOD (reference scenarios calculations for toxic gas releases - model systems and their utility for the fire brigade) was conducted by the Central Institute for Meteorology and Geodynamics (ZAMG) in cooperation with the Viennese fire brigade, OMV Refining & Marketing GmbH and Synex Ries & Greßlehner GmbH. RETOMOD was funded by the KIRAS safety research program of the Austrian Ministry of Transport, Innovation and Technology (www.kiras.at). The main tasks of this project were 1. a sensitivity study and optimization of the meteorological input for modeling of the hazard areas (human exposure) during accidental toxic releases, and 2. a comparison of several model packages (based on reference scenarios) in order to estimate their utility for the fire brigades. For the purpose of our study the following models were tested and compared: ALOHA (Areal Locations of Hazardous Atmospheres, EPA), MEMPLEX (Keudel av-Technik GmbH), Trace (Safer System), Breeze (Trinity Consulting), SAM (Engineering office Lohmeyer). A set of reference scenarios for chlorine, ammonia, butane and petrol was processed with the models above in order to predict and estimate human exposure during the event. Furthermore, the application of the observation-based analysis and forecasting system INCA, developed at the Central Institute for Meteorology and Geodynamics (ZAMG), in case of toxic release was
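
    For orientation, a minimal sketch of the textbook steady-state Gaussian plume formula that underlies many such screening tools (a generic formula with hypothetical source and stability parameters; not the implementation of ALOHA or the other packages named):

    ```python
    import numpy as np

    def gaussian_plume(Q, u, x, y, z, H, a=0.08, b=0.06):
        """Concentration (kg/m^3) from a continuous point source:
        emission Q (kg/s), wind u (m/s), release height H (m).
        sigma_y = a*x, sigma_z = b*x is a crude stability parameterization."""
        sy, sz = a * x, b * x
        return (Q / (2.0 * np.pi * u * sy * sz)
                * np.exp(-y**2 / (2.0 * sy**2))
                * (np.exp(-(z - H)**2 / (2.0 * sz**2))
                   + np.exp(-(z + H)**2 / (2.0 * sz**2))))  # ground reflection

    # Hypothetical chlorine leak: 0.5 kg/s, 3 m/s wind, 2 m release height
    c = gaussian_plume(Q=0.5, u=3.0, x=500.0, y=0.0, z=1.5, H=2.0)
    print(f"{c * 1e6:.1f} mg/m^3 on the plume axis, 500 m downwind")
    ```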

  8. Random regret minimization : Exploration of a new choice model for environmental and resource economics

    NARCIS (Netherlands)

    Thiene, M.; Boeri, M.; Chorus, C.G.

    2011-01-01

    This paper introduces the discrete choice model-paradigm of Random Regret Minimization (RRM) to the field of environmental and resource economics. The RRM-approach has been very recently developed in the context of travel demand modelling and presents a tractable, regret-based alternative to the
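
    A minimal sketch of the regret function usually stated in the RRM literature, with pairwise attribute comparisons across alternatives (the attribute data and taste weights below are hypothetical):

    ```python
    import numpy as np

    def rrm_regrets(X, beta):
        """Systematic regret of each alternative: R_i = sum over competitors j
        and attributes m of ln(1 + exp(beta_m * (x_jm - x_im)))."""
        n = X.shape[0]
        R = np.zeros(n)
        for i in range(n):
            for j in range(n):
                if j != i:
                    R[i] += np.log1p(np.exp(beta * (X[j] - X[i]))).sum()
        return R

    # Three alternatives, two attributes (say cost and quality; hypothetical)
    X = np.array([[2.0, 5.0],
                  [3.0, 7.0],
                  [4.0, 6.0]])
    beta = np.array([-0.5, 0.8])       # cost hurts, quality helps
    R = rrm_regrets(X, beta)
    p = np.exp(-R) / np.exp(-R).sum()  # logit over negative regret
    print(np.round(p, 3))              # the minimum-regret option is most likely
    ```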

  9. Random Regret Minimization for consumer choice modelling : Assessment of empirical evidence

    NARCIS (Netherlands)

    Chorus, C.G.; Dekker, T.

    2013-01-01

    This paper introduces to the field of marketing a regret-based discrete choice model for the analysis of multi-attribute consumer choices from multinomial choice sets. This random regret minimization (RRM) model, which was introduced two years ago in the field of transport, forms a regret-based

  10. P2 : A random effects model with covariates for directed graphs

    NARCIS (Netherlands)

    van Duijn, M.A.J.; Snijders, T.A.B.; Zijlstra, B.J.H.

    A random effects model is proposed for the analysis of binary dyadic data that represent a social network or directed graph, using nodal and/or dyadic attributes as covariates. The network structure is reflected by modeling the dependence between the relations to and from the same actor or node.

  11. Simulation of random set models for unions of discs and the use of power tessellations

    DEFF Research Database (Denmark)

    Møller, Jesper; Helisova, Katerina

    2009-01-01

    The power tessellation (or power diagram or Laguerre diagram) turns out to be particularly useful in connection with a flexible class of random set models specified by an underlying process of interacting discs. We discuss how to simulate these models and calculate various geometric characteristics...

  12. A random effects meta-analysis model with Box-Cox transformation.

    Science.gov (United States)

    Yamaguchi, Yusuke; Maruo, Kazushi; Partlett, Christopher; Riley, Richard D

    2017-07-19

    In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption, and misspecification of the random effects distribution may result in a misleading estimate of the overall mean treatment effect, an inappropriate quantification of heterogeneity across studies and a wrongly symmetric prediction interval. We focus on problems caused by an inappropriate normality assumption on the random effects distribution, and propose a novel random effects meta-analysis model in which a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise the overall distribution of observed treatment effect estimates, which is the sum of the within-study sampling distributions and the random effects distribution. When sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied to any kind of variable once the treatment effect estimate is defined from the variable. A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and the conventional I² from the normal random effects model could be inappropriate summaries, and the proposed model helped reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences in summary results, heterogeneity measures and prediction intervals from the normal random effects model. The
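
    The core transform is easy to demonstrate. The sketch below, a loose illustration rather than the paper's method, applies a maximum-likelihood Box-Cox transform to skewed synthetic effect estimates and back-transforms quantiles, mirroring the median/IQR summaries proposed above; the paper itself embeds the transform in a Bayesian random-effects model, and Box-Cox requires positive inputs.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        # Skewed stand-ins for observed treatment effect estimates (must be
        # positive for Box-Cox; a shift would otherwise be needed).
        effects = rng.lognormal(mean=0.2, sigma=0.6, size=30)

        transformed, lam = stats.boxcox(effects)   # lambda chosen by MLE
        print("lambda = %.2f, skew before/after = %.2f / %.2f"
              % (lam, stats.skew(effects), stats.skew(transformed)))

        # Report median and IQR on the original scale by inverting the
        # transform at the quantiles, as the paper's summaries suggest.
        q = np.percentile(transformed, [25, 50, 75])
        back = np.exp(q) if abs(lam) < 1e-8 else (q * lam + 1) ** (1 / lam)
        print("Q1/median/Q3 on original scale:", back.round(2))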

  13. Auto Regressive Moving Average (ARMA) Modeling Method for Gyro Random Noise Using a Robust Kalman Filter.

    Science.gov (United States)

    Huang, Lei

    2015-09-30

    To solve the problem that conventional ARMA modeling methods for gyro random noise require a large number of samples and converge slowly, an ARMA modeling method using robust Kalman filtering is developed. The ARMA model parameters are employed as state variables. Unknown time-varying estimators of the observation noise are used to obtain the estimated mean and variance of the observation noise. Using robust Kalman filtering, the ARMA model parameters are estimated accurately. The developed ARMA modeling method has the advantages of rapid convergence and high accuracy; thus, the required sample size is reduced. It can be applied to modeling applications for gyro random noise in which a fast and accurate ARMA modeling method is required.
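
    A minimal sketch of the core idea, assuming a plain (non-robust) Kalman filter and a pure AR(2) noise model: the model coefficients are placed in the state vector and updated recursively from the measured output. The paper's robust variant additionally estimates the observation-noise statistics online; all constants here are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        a_true = np.array([0.6, -0.2])       # true AR(2) coefficients
        y = np.zeros(500)
        for t in range(2, 500):              # simulate "gyro noise"
            y[t] = a_true @ y[t-2:t][::-1] + 0.1 * rng.standard_normal()

        theta = np.zeros(2)                  # Kalman state: AR coefficients
        P = np.eye(2)                        # state covariance
        q, r = 1e-6, 0.01                    # process / observation noise
        for t in range(2, 500):
            H = y[t-2:t][::-1]               # regressor: [y[t-1], y[t-2]]
            P = P + q * np.eye(2)            # predict (random-walk parameters)
            S = H @ P @ H + r                # innovation variance
            K = P @ H / S                    # Kalman gain
            theta = theta + K * (y[t] - H @ theta)
            P = P - np.outer(K, H) @ P

        print("estimated AR coefficients:", theta.round(3))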

  14. Explaining regional variations in health care utilization between Swiss cantons using panel econometric models.

    Science.gov (United States)

    Camenzind, Paul A

    2012-03-13

    Despite a detailed and nationwide legislative framework, there exist large cantonal disparities in the consumed quantities of health care services in Switzerland. In this study, the most important factors of influence causing these regional disparities are determined. The findings can also inform the discussion on containing health care consumption in other countries. Based on the literature, relevant factors that cause geographic disparities of quantities and costs in western health care systems are identified. Using a selected set of these factors, individual panel econometric models are calculated to explain the variation of utilization in each of the six largest health care service groups (general practitioners, specialist doctors, hospital inpatient, hospital outpatient, medication, and nursing homes) in Swiss mandatory health insurance (MHI). The main data source is 'Datenpool santésuisse', a database of Swiss health insurers. For all six health care service groups, significant factors influencing the utilization frequency over time and across cantons are found. A greater supply of service providers tends to have strong interrelations with per capita consumption of MHI services. On the demand side, older populations and higher population densities represent the clearest driving factors. Strategies to contain consumption and costs in health care should include several elements. In the federalist Swiss system, the structure of regional health care supply seems to generate significant effects. However, the extent of driving factors on the demand side (e.g., social deprivation) or financing instruments (e.g., high deductibles) should also be considered.

  15. Utilizing evolutionary information and gene expression data for estimating gene networks with bayesian network models.

    Science.gov (United States)

    Tamada, Yoshinori; Bannai, Hideo; Imoto, Seiya; Katayama, Toshiaki; Kanehisa, Minoru; Miyano, Satoru

    2005-12-01

    Since microarray gene expression data do not contain sufficient information for estimating accurate gene networks, other biological information has been considered to improve the estimated networks. Recent studies have revealed that highly conserved proteins that exhibit similar expression patterns in different organisms, have almost the same function in each organism. Such conserved proteins are also known to play similar roles in terms of the regulation of genes. Therefore, this evolutionary information can be used to refine regulatory relationships among genes, which are estimated from gene expression data. We propose a statistical method for estimating gene networks from gene expression data by utilizing evolutionarily conserved relationships between genes. Our method simultaneously estimates two gene networks of two distinct organisms, with a Bayesian network model utilizing the evolutionary information so that gene expression data of one organism helps to estimate the gene network of the other. We show the effectiveness of the method through the analysis on Saccharomyces cerevisiae and Homo sapiens cell cycle gene expression data. Our method was successful in estimating gene networks that capture many known relationships as well as several unknown relationships which are likely to be novel. Supplementary information is available at http://bonsai.ims.u-tokyo.ac.jp/~tamada/bayesnet/.

  16. A Comparison of Three Random Number Generators for Aircraft Dynamic Modeling Applications

    Science.gov (United States)

    Grauer, Jared A.

    2017-01-01

    Three random number generators, which produce Gaussian white noise sequences, were compared to assess their suitability for aircraft dynamic modeling applications. The first generator considered was the MATLAB (registered) implementation of the Mersenne-Twister algorithm. The second generator was the website Random.org, which processes atmospheric noise measured using radios to create the random numbers. The third generator was based on Fourier-series synthesis, where the random number sequences are constructed from prescribed amplitude and phase spectra. A total of 200 sequences of 601 random numbers each were collected for every generator and analyzed in terms of mean, variance, normality, autocorrelation, and power spectral density. These sequences were then applied to two problems in aircraft dynamic modeling, namely estimating stability and control derivatives from simulated onboard sensor data, and simulating flight in atmospheric turbulence. In general, each random number generator had good performance and is well suited for aircraft dynamic modeling applications. Specific strengths and weaknesses of each generator are discussed. For Monte Carlo simulation, the Fourier synthesis method is recommended because it most accurately and consistently approximated Gaussian white noise and can be implemented with reasonable computational effort.
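
    The Fourier-synthesis generator is simple to sketch: prescribe a flat amplitude spectrum, draw uniformly random phases, and invert the FFT. The version below is a loose reading of the abstract (the normalization details are assumptions), using the 601-sample sequence length mentioned above.

        import numpy as np

        rng = np.random.default_rng(42)
        n = 601                                # sequence length from the abstract
        amplitude = np.ones(n // 2 + 1)        # flat spectrum -> white noise
        amplitude[0] = 0.0                     # drop the DC term (zero mean)
        phase = rng.uniform(0.0, 2.0 * np.pi, n // 2 + 1)
        x = np.fft.irfft(amplitude * np.exp(1j * phase), n)
        x /= x.std()                           # rescale to unit variance

        # Quick checks analogous to the paper's statistics.
        print("mean %+.3f, std %.3f" % (x.mean(), x.std()))
        print("lag-1 autocorrelation %+.3f" % np.corrcoef(x[:-1], x[1:])[0, 1])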

  17. One Model Fits All: Explaining Many Aspects of Number Comparison within a Single Coherent Model-A Random Walk Account

    Science.gov (United States)

    Reike, Dennis; Schwarz, Wolf

    2016-01-01

    The time required to determine the larger of 2 digits decreases with their numerical distance, and, for a given distance, increases with their magnitude (Moyer & Landauer, 1967). One detailed quantitative framework to account for these effects is provided by random walk models. These chronometric models describe how number-related noisy…

  18. Bak–Tang–Wiesenfeld model in the finite range random link lattice

    Energy Technology Data Exchange (ETDEWEB)

    Najafi, M.N., E-mail: morteza.nattagh@gmail.com

    2014-06-13

    We consider the BTW model in random link lattices with finite range interaction (RLFRI). The degree distribution of nodes is taken to be uniform in the interval (0, n_0). We tune the topology of the lattices by two parameters (n_0, R), in which R is the range of interactions. We numerically calculate the exponents of the statistical distribution functions in terms of these parameters. The Dijkstra radius is utilized to calculate the fractal dimension of the avalanches. Our analysis shows that for a fixed n_0 value there are two intervals of R, namely (1, R_0) and (R_0, L), each of which has a distinct behavior. In the first interval the fractal dimension monotonically grows from D_f(R=1) ≃ D_f^BTW = 1.25 up to D_f ≃ 5.0±0.4. We found, however, that in the second interval there is a length scale r_0(n_0, R) at which the behaviors change. For scales smaller than r_0(n_0, R), which is typically one decade, the fractal dimension is nearly independent of n_0 and R and is nearly equal to 2.0±0.2. We retrieve the BTW-type behaviors in the limit R→1 and find some new behaviors in the random scaleless lattice limit, i.e. R→L. We also numerically calculate the explicit form of the number of unstable nodes (NUN) as a time-dependent process and show that for a regular lattice it is (up to a normalization) proportional to a one-dimensional Wiener process, while for RLFRI it acquires a drift term. Our analytical analysis shows that the relaxation time (exit time) of the NUN process for RLFRI is related to a fitting parameter of NUN and is shorter than the regular one.
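
    For readers unfamiliar with the underlying sandpile dynamics, the sketch below runs the ordinary BTW model on a regular 2-D lattice, roughly the paper's R→1 limit; the random-link, finite-range variant would instead redistribute toppled grains along the random edges of the RLFRI graph. Lattice size and drive length are arbitrary choices.

        import numpy as np

        rng = np.random.default_rng(7)
        L = 32
        z = np.zeros((L, L), dtype=int)       # grain heights
        sizes = []
        for _ in range(20000):
            i, j = rng.integers(0, L, 2)
            z[i, j] += 1                      # drop one grain at random
            size = 0
            unstable = [(i, j)] if z[i, j] >= 4 else []
            while unstable:
                a, b = unstable.pop()
                if z[a, b] < 4:
                    continue
                z[a, b] -= 4                  # topple: send grains to neighbors
                size += 1
                for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    na, nb = a + da, b + db
                    if 0 <= na < L and 0 <= nb < L:   # grains leave at edges
                        z[na, nb] += 1
                        if z[na, nb] >= 4:
                            unstable.append((na, nb))
            if size:
                sizes.append(size)            # avalanche size = topplings

        print("avalanches: %d, mean size %.1f, max %d"
              % (len(sizes), np.mean(sizes), max(sizes)))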

  19. Accessing and Utilizing Remote Sensing Data for Vectorborne Infectious Diseases Surveillance and Modeling

    Science.gov (United States)

    Kiang, Richard; Adimi, Farida; Kempler, Steven

    2008-01-01

    Background: The transmission of vectorborne infectious diseases is often influenced by environmental, meteorological and climatic parameters, because the vector life cycle depends on these factors. For example, the geophysical parameters relevant to malaria transmission include precipitation, surface temperature, humidity, elevation, and vegetation type. Because these parameters are routinely measured by satellites, remote sensing is an important technological tool for predicting, preventing, and containing a number of vectorborne infectious diseases, such as malaria, dengue, West Nile virus, etc. Methods: A variety of NASA remote sensing data can be used for modeling vectorborne infectious disease transmission. We will discuss both the well known and less known remote sensing data, including Landsat, AVHRR (Advanced Very High Resolution Radiometer), MODIS (Moderate Resolution Imaging Spectroradiometer), TRMM (Tropical Rainfall Measuring Mission), ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer), EO-1 (Earth Observing One) ALI (Advanced Land Imager), and the SIESIP (Seasonal to Interannual Earth Science Information Partner) dataset. Giovanni is a Web-based application developed by the NASA Goddard Earth Sciences Data and Information Services Center. It provides a simple and intuitive way to visualize, analyze, and access vast amounts of Earth science remote sensing data. After remote sensing data are obtained, a variety of techniques, including generalized linear models and artificial intelligence oriented methods, can be used to model the dependency of disease transmission on these parameters. Results: The processes of accessing, visualizing and utilizing precipitation data using Giovanni, and acquiring other data at additional websites, are illustrated. Malaria incidence time series for some parts of Thailand and Indonesia are used to demonstrate that malaria incidences are reasonably well modeled with generalized linear models and artificial

  20. Utility of Modern Arthroscopic Simulator Training Models: A Meta-analysis and Updated Systematic Review.

    Science.gov (United States)

    Frank, Rachel M; Wang, Kevin C; Davey, Annabelle; Cotter, Eric J; Cole, Brian J; Romeo, Anthony A; Bush-Joseph, Charles A; Bach, Bernard R; Verma, Nikhil N

    2018-01-20

    To determine the utility of modern arthroscopic simulators in transferring skills learned on the model to the operating room. A meta-analysis and systematic review of all English-language studies relevant to validated arthroscopic simulation models using PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) guidelines from 1999 to 2016 was performed. Data collected included the specific simulator model, the joint used, participant demographic characteristics, participant level of training, training session information, type and number of tasks, pre- and post-training assessments, and overall outcomes of simulator performance. Three independent reviewers analyzed all studies. Fifty-seven studies with 1,698 participants met the study criteria and were included. Of the studies, 25 (44%) incorporated an arthroscopic training program into the study methods whereas 32 (56%) did not. In 46 studies (81%), the studies' respective simulator models were used to assess arthroscopic performance, whereas 9 studies (16%) used Sawbones models, 8 (14%) used cadaveric models, and 4 (7%) evaluated subject performance on a live patient in the operating room. In 21 studies (37%), simulator performance was compared with experience level, with 20 of these (95%) showing that clinical experience correlated with simulator performance. In 25 studies (44%), task performance was evaluated before and after simulator training, with 24 of these (96%) showing improvement after training. All 4 studies that included live-patient arthroscopy reported improved operating room performance after simulator training compared with the performance of subjects not participating in a training program. This review suggests that (1) training on arthroscopic simulators improves performance on arthroscopic simulators and (2) performance on simulators for basic diagnostic arthroscopy correlates with experience level. Limited data suggest that simulator training can improve basic diagnostic

  1. Examining the utility of satellite-based wind sheltering estimates for lake hydrodynamic modeling

    Science.gov (United States)

    Van Den Hoek, Jamon; Read, Jordan S.; Winslow, Luke A.; Montesano, Paul; Markfort, Corey D.

    2015-01-01

    Satellite-based measurements of vegetation canopy structure have been in common use for the last decade but have never been used to estimate the canopy's impact on wind sheltering of individual lakes. Wind sheltering is caused by slower winds in the wake of topography and shoreline obstacles (e.g. forest canopy) and influences heat loss and the flux of wind-driven mixing energy into lakes, which control lake temperatures and indirectly structure lake ecosystem processes, including carbon cycling and thermal habitat partitioning. Lakeshore wind sheltering has often been parameterized by lake surface area, but such empirical relationships are based only on forested lakeshores and overlook the contributions of local land cover and terrain to wind sheltering. This study is the first to examine the utility of satellite imagery-derived broad-scale estimates of wind sheltering across a diversity of land covers. Using 30 m spatial resolution ASTER GDEM2 elevation data, the mean sheltering height, hs, defined as the combination of local topographic rise and canopy height above the lake surface, is calculated within 100 m-wide buffers surrounding 76,000 lakes in the U.S. state of Wisconsin. Uncertainty of GDEM2-derived hs is compared to SRTM-, high-resolution G-LiHT lidar-, and ICESat-derived estimates of hs; the respective influences of land cover type and buffer width on hs are examined; and the effect of including satellite-based hs on the accuracy of a statewide lake hydrodynamic model is discussed. Though GDEM2 hs uncertainty was comparable to or better than other satellite-based measures of hs, its higher spatial resolution and broader spatial coverage allowed more lakes to be included in modeling efforts. GDEM2 was shown to offer superior utility for estimating hs compared to other satellite-derived data, but was limited by its consistent underestimation of hs, inability to detect within-buffer hs variability, and differing accuracy across land cover types. Nonetheless
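
    The buffer statistic itself is straightforward; the toy version below computes a mean sheltering height hs within a 100 m shoreline buffer on a synthetic 30 m raster, using a distance transform in place of GIS buffering. Real inputs would be the GDEM2 grid and lake polygons; the terrain, lake geometry and lake level here are invented.

        import numpy as np
        from scipy import ndimage

        cell = 30.0                              # m per pixel, GDEM2-like grid
        rng = np.random.default_rng(3)
        dem = rng.gamma(2.0, 4.0, size=(200, 200))   # synthetic terrain+canopy
        lake = np.zeros(dem.shape, dtype=bool)
        lake[80:120, 80:120] = True              # square stand-in for a lake
        lake_level = 2.0
        dem[lake] = lake_level

        # Distance (m) from each land pixel to the nearest lake pixel, used
        # to carve out the 100 m shoreline buffer described in the abstract.
        dist = ndimage.distance_transform_edt(~lake) * cell
        buffer100 = ~lake & (dist <= 100.0)

        hs = np.clip(dem[buffer100] - lake_level, 0.0, None).mean()
        print("mean sheltering height hs = %.1f m" % hs)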

  2. A new model for mild blast injury utilizing Drosophila melanogaster - biomed 2013.

    Science.gov (United States)

    Hockey, K S; Hubbard, W B; Sajja, V S; Sholar, C A; Thorpe, C; Vandevord, P J; Rzigalinski, B A

    2013-01-01

    Current models for blast injury involve the use of mammalian species, which are costly and require extensive monitoring and housing, making it difficult to generate large numbers of injuries. The fruit fly, Drosophila melanogaster, has been utilized for many models of human disease, including neurodegenerative disorders such as Parkinson’s and Alzheimer’s diseases. In this study, a model of blast injury based on Drosophila was designed to provide a mechanism for investigating blast injury in large numbers and assessing biochemical mechanisms of brain injury. Such studies may be used to identify specific pathways involved in blast-associated neurodegeneration, allowing more effective use of mammalian models. A custom-built blast wave simulator (ORA Inc.), comprising a driver, test section, and wave eliminator, was used to create a blast wave. An acetate membrane was placed between the driver and the rectangular test section before compressed helium caused the membrane to rupture, creating the blast wave. Membrane thickness correlates with the blast wave magnitude, which averaged 120 kPa for this experiment. Pressure sensors were inserted into the side of the tube in order to quantify the level of overpressure to which the flies were exposed. Five-day-old flies were held in a rectangular enclosed mesh fixture (10 flies per enclosure), which was placed in the center of the test section for blast delivery. Sham controls were exposed to the same conditions with the exception of blast. Lifespan and negative geotaxis, a measure of motor function, were measured in flies after blast injury. Mild blast resulted in death of 28% of the flies. In surviving flies, motor function was initially reduced, but flies regained normal function by 8 days after injury. Although surviving flies regained normal motor function, flies subjected to mild blast died earlier than uninjured controls, with a 15.4% reduction in maximum lifespan and a 17% reduction in average lifespan, mimicking the scenario

  3. Mathematical model of a utility firm. Final technical report, Part I

    Energy Technology Data Exchange (ETDEWEB)

    1983-08-21

    Utility companies are in the predicament of having to make forecasts, and draw up plans for the future, in an increasingly fluid and volatile socio-economic environment. The project being reported is intended to contribute to an understanding of the economic and behavioral processes that take place within a firm and outside it. Three main topics are treated. One is the representation of the characteristics of the members of an organization, to the extent that those characteristics seem pertinent to the processes of interest. The second is the appropriate management of the processes of change by an organization. The third deals with the competitive striving towards an economic equilibrium among the members of a society at large, on the theory that this process might be modeled in a way similar to the one for the intra-organizational processes. This volume covers mainly the first topic.

  4. In-House Communication Support System Based on the Information Propagation Model Utilizes Social Network

    Science.gov (United States)

    Takeuchi, Susumu; Teranishi, Yuuichi; Harumoto, Kaname; Shimojo, Shinji

    Almost all companies now utilize computer networks to support speedier and more effective in-house information-sharing and communication. However, existing systems are designed to support communication only within the same department. Therefore, in our research, we propose an in-house communication support system based on the “Information Propagation Model (IPM).” The IPM is proposed to realize word-of-mouth communication in a social network and to support information-sharing on the network. By applying the system in a real company, we found that information could be exchanged between different and unrelated departments, and such exchanges of information could help to build new relationships between users who are distant from each other on the social network.

  5. Utilization of a mental health collaborative care model among patients who require interpreter services.

    Science.gov (United States)

    Njeru, Jane W; DeJesus, Ramona S; St Sauver, Jennifer; Rutten, Lila J; Jacobson, Debra J; Wilson, Patrick; Wieland, Mark L

    2016-01-01

    Immigrants and refugees to the United States have a higher prevalence of depression compared to the general population and are less likely to receive adequate mental health services and treatment. Those with limited English proficiency (LEP) are at an even higher risk of inadequate mental health care. Collaborative care management (CCM) models for depression are effective in achieving treatment goals among a wide range of patient populations, including patients with LEP. The purpose of this study was to assess the utilization of a statewide initiative that uses CCM for depression management among patients with LEP in a large primary care practice. This was a retrospective cohort study of patients with depression in a large primary care practice in Minnesota. Patients met criteria for enrollment into CCM if they had a provider-generated diagnosis of depression or dysthymia in the electronic medical record and a Patient Health Questionnaire-9 (PHQ-9) score ≥10. Patient-identified need for interpreter services was used as a proxy for LEP. Rates of enrollment into the DIAMOND (Depression Improvement Across Minnesota, Offering A New Direction) program, a statewide initiative that uses CCM for depression management, were measured. These rates were compared between eligible patients who required interpreter services and patients who did not. Of the 7561 patients who met criteria for enrollment into the DIAMOND program during the study interval, 3511 were enrolled. Only 18.2% of the eligible patients with LEP were enrolled into DIAMOND, compared with 47.2% of the eligible English-proficient patients. This finding persisted after adjustment for differences in age, gender and depression severity scores (adjusted OR [95% confidence interval] = 0.43 [0.23, 0.81]). Within primary care practices, tailored interventions are needed, including those that address cultural competence and language navigation, to improve the utilization of this effective model among

  6. Modeling and optimization of processes for clean and efficient pulverized coal combustion in utility boilers

    Directory of Open Access Journals (Sweden)

    Belošević Srđan V.

    2016-01-01

    Pulverized coal-fired power plants should provide higher efficiency of energy conversion, flexibility in terms of boiler loads and fuel characteristics, and emission reduction of pollutants such as nitrogen oxides. Modification of the combustion process is a cost-effective technology for NOx control. For optimization of complex processes, such as turbulent reactive flow in coal-fired furnaces, mathematical modeling is regularly used. The NOx emission reduction by combustion modifications in the 350 MWe Kostolac B boiler furnace, tangentially fired by pulverized Serbian lignite, is investigated in the paper. Numerical experiments were done by an in-house developed three-dimensional differential comprehensive combustion code, with a fuel- and thermal-NO formation/destruction reactions model. The code was developed to be easily used by engineering staff for process analysis in boiler units. A broad range of operating conditions was examined, such as fuel and preheated air distribution over the burners and tiers, operation mode of the burners, grinding fineness and quality of coal, boiler loads, cold air ingress, recirculation of flue gases, water-wall ash deposition and the combined effect of different parameters. The predictions show that a NOx emission reduction of up to 30% can be achieved by proper combustion organization in the case-study furnace, with flame position control. The impact of combustion modifications on boiler operation was evaluated by boiler thermal calculations, suggesting that the facility has to be controlled within narrow limits of operation parameters. Such a complex approach to pollutant control enables evaluating alternative solutions to achieve efficient and low-emission operation of utility boiler units. [Project of the Ministry of Science of the Republic of Serbia, no. TR-33018: Increase in energy and ecology efficiency of processes in pulverized coal-fired furnace and optimization of utility steam boiler air preheater by using in

  7. A note on Using regression models to analyze randomized trials: asymptotically valid hypothesis tests despite incorrectly specified models.

    Science.gov (United States)

    Kim, Jane Paik

    2013-03-01

    In the context of randomized trials, Rosenblum and van der Laan (2009, Biometrics 63, 937-945) considered the null hypothesis of no treatment effect on the mean outcome within strata of baseline variables. They showed that hypothesis tests based on linear regression models and generalized linear regression models are guaranteed to have asymptotically correct Type I error regardless of the actual data generating distribution, assuming the treatment assignment is independent of covariates. We consider another important outcome in randomized trials, the time from randomization until failure, and the null hypothesis of no treatment effect on the survivor function conditional on a set of baseline variables. By a direct application of arguments in Rosenblum and van der Laan (2009), we show that hypothesis tests based on multiplicative hazards models with an exponential link, i.e., proportional hazards models, and multiplicative hazards models with linear link functions where the baseline hazard is parameterized, are asymptotically valid under model misspecification provided that the censoring distribution is independent of the treatment assignment given the covariates. In the case of the Cox model and linear link model with unspecified baseline hazard function, the arguments in Rosenblum and van der Laan (2009) cannot be applied to show the robustness of a misspecified model. Instead, we adopt an approach used in previous literature (Struthers and Kalbfleisch, 1986, Biometrika 73, 363-369) to show that hypothesis tests based on these models, including models with interaction terms, have correct type I error. Copyright © 2013, The International Biometric Society.

  8. Characterizing fundamental frequency in Mandarin: a functional principal component approach utilizing mixed effect models.

    Science.gov (United States)

    Hadjipantelis, Pantelis Z; Aston, John A D; Evans, Jonathan P

    2012-06-01

    A model for fundamental frequency (F0, or commonly pitch) employing a functional principal component (FPC) analysis framework is presented. The model is applied to Mandarin Chinese; this Sino-Tibetan language is rich in pitch-related information as the relative pitch curve is specified for most syllables in the lexicon. The approach yields a quantification of the influence carried by each identified component in relation to the original tonal content, without formulating any assumptions on the shape of the tonal components. The original five-speaker corpus is preprocessed using a locally weighted least squares smoother to produce F0 curves. These smoothed curves are then utilized as input for the computation of FPC scores and their corresponding eigenfunctions. The scores are analyzed in a series of penalized mixed effect models, through which meaningful categorical prototypes are built. The prototypes appear to confirm known tonal characteristics of the language, as well as suggest the presence of a sinusoid tonal component that was previously undocumented.
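
    As a bare-bones illustration of the FPC step only, the sketch below centers a set of synthetic smoothed F0-like curves and takes an SVD; the right singular vectors play the role of the eigenfunctions and the projections give the FPC scores. The smoothing and penalized mixed-effects stages of the paper are omitted, and the curves are invented rather than Mandarin data.

        import numpy as np

        rng = np.random.default_rng(4)
        t = np.linspace(0.0, 1.0, 50)
        # Synthetic "smoothed F0" curves: two tone-like shapes plus noise.
        curves = (rng.normal(1.0, 0.3, (120, 1)) * np.sin(np.pi * t)
                  + rng.normal(0.0, 0.2, (120, 1)) * np.cos(np.pi * t)
                  + 0.05 * rng.standard_normal((120, 50)))

        centered = curves - curves.mean(axis=0)
        U, s, Vt = np.linalg.svd(centered, full_matrices=False)
        scores = centered @ Vt[:2].T          # first two FPC scores per curve
        var_explained = (s ** 2 / (s ** 2).sum())[:2]
        print("score matrix shape:", scores.shape)
        print("variance explained by FPC1, FPC2:", var_explained.round(3))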

  9. Random aggregation models for the formation and evolution of coding and non-coding DNA

    Science.gov (United States)

    Provata, A.

    A random aggregation model with influx is proposed for the formation of non-coding DNA regions via random co-aggregation and influx of biological macromolecules such as viruses, parasite DNA, and replication segments. The constant mixing (transpositions) and influx drive the system into an out-of-equilibrium steady state characterised by a power-law size distribution. The model predicts the long-range distributions found in non-coding eukaryotic DNA and explains the observed correlations. For the formation of coding DNA, a random closed aggregation model is proposed which predicts short-range coding size distributions. The closed aggregation process drives the system into an almost “frozen” stable state which is robust to external perturbations and which is characterised by well-defined space and time scales, as observed in coding sequences.
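
    A toy simulation makes the influx-plus-coagulation mechanism concrete: unit segments flow in, random pairs merge under constant mixing, and the segment-size distribution develops a heavy tail. The rates and sizes below are arbitrary assumptions, not fitted to genomic data or to the paper's equations.

        import numpy as np

        rng = np.random.default_rng(5)
        sizes = [1] * 100                     # initial unit segments
        for _ in range(100000):
            if rng.random() < 0.5:
                sizes.append(1)               # influx of a unit segment
            elif len(sizes) >= 2:
                i = rng.integers(len(sizes))  # pick two distinct segments
                j = rng.integers(len(sizes) - 1)
                if j >= i:
                    j += 1
                merged = sizes[i] + sizes[j]  # random co-aggregation
                for k in sorted((i, j), reverse=True):
                    sizes.pop(k)
                sizes.append(merged)

        sizes = np.asarray(sizes)
        counts, _ = np.histogram(sizes, bins=np.logspace(0, 5, 21))
        print("segments: %d, largest: %d" % (sizes.size, sizes.max()))
        print("counts per logarithmic size bin:", counts[counts > 0])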

  10. Object-oriented Markov random model for classification of high resolution satellite imagery based on wavelet transform

    Science.gov (United States)

    Hong, Liang; Liu, Cun; Yang, Kun; Deng, Ming

    2013-07-01

    High resolution satellite imagery (HRSI) has higher spatial resolution but fewer spectral bands, so phenomena of "the same object with different spectra, and different objects with the same spectrum" occur. The objective of this paper is to utilize features of HRSI extracted by the wavelet transform (WT) for segmentation. The WT provides the spatial and spectral characteristics of a pixel along with its neighbors. An object-oriented Markov random field model in the wavelet domain is proposed in order to segment HRSI. The proposed method is made up of three blocks: (1) WT-based feature extraction, the aim of which is to exploit the spatial and frequency information of the pixels in the original spectral bands; (2) over-segmentation object generation, where the Mean-Shift algorithm is employed to obtain over-segmentation objects; (3) classification based on the object-oriented Markov random field model. First, an object adjacency graph (OAG) is constructed on the over-segmentation objects. Second, the MRF model is easily defined on the OAG, in which the WT-based features of pixels are modeled in the feature field model, and the neighbor system, potential cliques and energy functions of the OAG are exploited in the labeling model. Experiments are conducted on one HRSI dataset of QuickBird images. We evaluate and compare the proposed approach with the well-known commercial software eCognition (an object-based analysis approach) and pixel-based Maximum Likelihood (ML) classification. Experimental results show that the proposed method clearly outperforms the other methods.
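
    The first block of the pipeline (WT-based feature extraction) can be shown in miniature with PyWavelets: one level of a 2-D Haar transform per band yields approximation and detail coefficients that could feed the MRF labeling stage. The Mean-Shift and OAG/MRF stages are not reproduced, the Haar wavelet is an assumption, and the input is a synthetic band rather than QuickBird imagery.

        import numpy as np
        import pywt

        band = np.random.default_rng(6).random((128, 128))  # synthetic band
        cA, (cH, cV, cD) = pywt.dwt2(band, "haar")          # 2-D wavelet step
        features = np.stack([cA, cH, cV, cD])   # stacked per-block features
        print(features.shape)                    # (4, 64, 64)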

  11. Decision Support for Test Trench Location Selection with 3D Semantic Subsurface Utility Models

    NARCIS (Netherlands)

    Racz, Paulina; Syfuss, Lars; Schultz, Carl; van Buiten, Marinus; olde Scholtenhuis, Léon Luc; Vahdatikhaki, Faridaddin; Doree, Andries G.; Lin, Ken-Yu; El-Gohary, Nora; Tang, Pingbo

    Subsurface utility construction work often involves repositioning of, and working between, existing buried networks. As the number of utilities in modern cities grows, excavation work becomes more prone to incidents. To prevent such incidents, excavation workers request existing 2D utility maps,

  12. A Quantitative Human Spacecraft Design Evaluation Model for Assessing Crew Accommodation and Utilization

    Science.gov (United States)

    Fanchiang, Christine

    Crew performance, including both accommodation and utilization factors, is an integral part of every human spaceflight mission, from commercial space tourism to the demanding journey to Mars and beyond. Spacecraft were historically built by engineers and technologists trying to adapt the vehicle into cutting-edge rocketry with the assumption that the astronauts could be trained and would adapt to the design. By and large, that is still the current state of the art. It is recognized, however, that poor human-machine design integration can lead to catastrophic and deadly mishaps. The premise of this work relies on the idea that if an accurate predictive model exists to forecast crew performance issues as a result of spacecraft design and operations, it can help designers and managers make better decisions throughout the design process and ensure that the crewmembers are well integrated with the system from the very start. The result should be a high-quality, user-friendly spacecraft that optimizes the utilization of the crew while keeping them alive, healthy, and happy during the course of the mission. Therefore, the goal of this work was to develop an integrative framework to quantitatively evaluate a spacecraft design from the crew performance perspective. The approach presented here is done at a very fundamental level, starting with identifying and defining basic terminology, and then builds up important axioms of human spaceflight that lay the foundation for how such a framework can be developed. With the framework established, a methodology for characterizing the outcome using a mathematical model was developed by pulling from existing metrics and data collected on human performance in space. Representative test scenarios were run to show what information could be garnered and how it could be applied as a useful, understandable metric for future spacecraft design. While the model is the primary tangible product from this research, the more interesting outcome of

  13. Modelling of landfill gas adsorption with bottom ash for utilization of renewable energy

    Energy Technology Data Exchange (ETDEWEB)

    Miao, Chen

    2011-10-06

    Energy crisis, environmental pollution and climate change are serious challenges to people worldwide. In the 21st century, research is turning to new renewable energy technologies so as to slow down global warming and develop society in an environmentally sustainable manner. Landfill gas, produced by biodegradable municipal solid waste in landfills, is a renewable energy source. In this work, landfill gas utilization for energy generation is introduced. Landfill gas can produce hydrogen by steam reforming reactions; a steam reformer is part of the fuel cell system. A sewage plant in Cologne, Germany, has successfully run a phosphoric acid fuel cell power station on biogas for more than 50,000 hours. Landfill gas thus may be used as fuel for electricity generation via fuel cell systems. For the purpose of explaining the possibility of landfill gas utilization via fuel cells, the thermodynamics of landfill gas steam reforming are discussed by simulations. In practice, methane-rich gas can be obtained by landfill gas purification and upgrading. This work experimentally investigates a new method for upgrading: landfill gas adsorption with bottom ash. Bottom ash is a by-product of municipal solid waste incineration, and some of its physical and chemical properties are analysed in this work. The landfill gas adsorption experimental data show that bottom ash can be used as a potential adsorbent for landfill gas adsorption to remove CO2. In addition, the alkalinity of the bottom ash eluate can be reduced in these adsorption processes. Therefore, the interactions between landfill gas and bottom ash can be explained by series reactions accordingly. Furthermore, a conceptual model involving landfill gas adsorption with bottom ash is developed. In this thesis, the parameters of the landfill gas adsorption equilibrium equations are obtained by fitting experimental data. On the other hand, these functions can be deduced with a theoretical approach

  14. Elastic properties of model random three-dimensional open-cell solids

    Science.gov (United States)

    Roberts, A. P.; Garboczi, E. J.

    2002-01-01

    Most cellular solids are random materials, while practically all theoretical structure-property relations are for periodic models. To generate theoretical results for random models, the finite element method (FEM) was used to study the elastic properties of open-cell solids. We have computed the density (ρ) and microstructure dependence of the Young's modulus (E) and Poisson's ratio (ν) for four different isotropic random models. The models were based on Voronoi tessellations, level-cut Gaussian random fields, and nearest-neighbour node-bond rules. These models were chosen to broadly represent the structure of foamed solids and other (non-foamed) cellular materials. At low densities, the Young's modulus can be described by the relation E ∝ ρ^n. The exponent n and the constant of proportionality depend on microstructure; we find 1.3 < n < 3. One of the models, a common model of foams, became approximately incompressible (ν ≈ 0.5). This behaviour is not commonly observed experimentally. Our studies showed the result was robust to polydispersity and that a relatively large number (15%) of the bonds must be broken to significantly reduce the low-density Poisson's ratio to ν ≈ 0.33.
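
    A scaling law of the form E ∝ ρ^n is usually extracted from computed (ρ, E) pairs by a log-log least-squares fit, as in the minimal sketch below; the data are synthetic stand-ins for FEM results, generated with an assumed exponent of 2 and an arbitrary prefactor.

        import numpy as np

        rng = np.random.default_rng(2)
        rho = np.linspace(0.05, 0.30, 12)     # relative density
        E = 1.7 * rho ** 2.0 * rng.lognormal(0.0, 0.03, rho.size)  # noisy "FEM"

        # Fit log E = n log(rho) + log C by least squares.
        n, log_c = np.polyfit(np.log(rho), np.log(E), 1)
        print("fitted exponent n = %.2f, prefactor C = %.2f" % (n, np.exp(log_c)))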

  15. A Review of 21st Century Utility of a Biopsychosocial Model in United States Medical School Education.

    Science.gov (United States)

    Jaini, Paresh Atu; Lee, Jenny Seung-Hyun

    2015-09-01

    Current medical practice is grounded in a biomedical model that fails to effectively address the multifaceted lifestyle and morbidogenic environmental components that are the root causes of contemporary chronic diseases. Utilizing the biopsychosocial (BPS) model in medical school training may produce competent healthcare providers able to meet the challenge of rising chronic illnesses that result from these factors. This study explored the current trend of research on the utility of the BPS model in medical education and examined medical school curricula that have explicitly adopted the BPS model. A systematic review of peer-reviewed literature was conducted on the BPS model and medical education since the 1970s using multiple databases. Descriptive analysis was used to illustrate findings regarding the trends of the BPS model in medical education and its utility in specific medical schools in the United States. Major findings illustrated a growing trend in research on the BPS model in medical education since the 1970s, with literature in this area most visible since 2000. The same trend was established for the incorporation of psychosocial or behavioral and social science components in medical education. From our peer-reviewed literature search, only 5 medical schools featured explicit use of the BPS model in their curricula, with variable educational processes. Although literature regarding the BPS model in medical education is growing, the explicit utility of the BPS model in medical school is limited. Our findings can stimulate educational processes and research endeavors to advance medical education and medical practice to ensure that future doctors can meet the challenge of rising lifestyle- and environment-associated illnesses.

  16. Randomized Controlled Trial of Electronic Care Plan Alerts and Resource Utilization by High Frequency Emergency Department Users with Opioid Use Disorder

    Directory of Open Access Journals (Sweden)

    Niels Rathlev, MD

    2016-01-01

    Introduction: There is a paucity of literature supporting the use of electronic alerts for patients with high frequency emergency department (ED) use. We sought to measure changes in opioid prescribing and administration practices, total charges and other resource utilization using electronic alerts to notify providers of an opioid-use care plan for high frequency ED patients. Methods: This was a randomized, non-blinded, two-group parallel design study of patients who had (1) opioid use disorder and (2) high frequency ED use. Three affiliated hospitals with identical electronic health records participated. Patients were randomized into “Care Plan” versus “Usual Care” groups. Between the years before and after randomization, we compared as primary outcomes the following: (1) opioids (morphine mg equivalents) prescribed to patients upon discharge and administered to ED and inpatients; (2) total medical charges; and the numbers of (3) ED visits, (4) ED visits with advanced radiologic imaging (computed tomography [CT] or magnetic resonance imaging [MRI]) studies, and (5) inpatient admissions. Results: A total of 40 patients were enrolled. For ED and inpatients in the “Usual Care” group, the proportion of morphine mg equivalents received in the post-period compared with the pre-period was 15.7%, while in the “Care Plan” group the proportion received in the post-period compared with the pre-period was 4.5% (ratio=0.29, 95% CI [0.07-1.12]; p=0.07). For discharged patients in the “Usual Care” group, the proportion of morphine mg equivalents prescribed in the post-period compared with the pre-period was 25.7%, while in the “Care Plan” group the proportion prescribed in the post-period compared to the pre-period was 2.9%. The “Care Plan” group showed an 89% greater proportional change over the periods compared with the “Usual Care” group (ratio=0.11, 95% CI [0.01-0.92]; p=0.04). Care plans did not change the total charges, or the numbers

  17. Bayesian phase II adaptive randomization by jointly modeling time-to-event efficacy and binary toxicity.

    Science.gov (United States)

    Lei, Xiudong; Yuan, Ying; Yin, Guosheng

    2011-01-01

    In oncology, toxicity is typically observable shortly after a chemotherapy treatment, whereas efficacy, often characterized by tumor shrinkage, is observable after a relatively long period of time. In a phase II clinical trial design, we propose a Bayesian adaptive randomization procedure that accounts for both efficacy and toxicity outcomes. We model efficacy as a time-to-event endpoint and toxicity as a binary endpoint, sharing common random effects in order to induce dependence between the bivariate outcomes. More generally, we allow the randomization probability to depend on patients' specific covariates, such as prognostic factors. Early stopping boundaries are constructed for toxicity and futility, and a superior treatment arm is recommended at the end of the trial. Following the setup of a recent renal cancer clinical trial at M. D. Anderson Cancer Center, we conduct extensive simulation studies under various scenarios to investigate the performance of the proposed method, and compare it with available Bayesian adaptive randomization procedures.
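
    The mechanism of outcome-adaptive randomization can be sketched far more simply than the paper's joint efficacy-toxicity model: below, a single binary endpoint with Beta posteriors skews the allocation probability toward the arm more likely to be better. The shared-random-effects bivariate model, covariate-dependent randomization and stopping boundaries are all omitted, and every number is a placeholder.

        import numpy as np

        rng = np.random.default_rng(11)
        p_true = [0.3, 0.5]                   # true response rates, arms A/B
        succ, fail = [1, 1], [1, 1]           # Beta(1,1) priors per arm

        for patient in range(100):
            # P(arm B better | data), by Monte Carlo over the two posteriors.
            draws_a = rng.beta(succ[0], fail[0], 2000)
            draws_b = rng.beta(succ[1], fail[1], 2000)
            p_b_better = (draws_b > draws_a).mean()
            arm = int(rng.random() < p_b_better)   # adaptive allocation
            response = rng.random() < p_true[arm]
            succ[arm] += response
            fail[arm] += 1 - response

        print("patients allocated to arm B: %d / 100" % (succ[1] + fail[1] - 2))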

  18. Phase structure of the O(n) model on a random lattice for n > 2

    DEFF Research Database (Denmark)

    Durhuus, B.; Kristjansen, C.

    1997-01-01

    We show that coarse graining arguments invented for the analysis of multi-spin systems on a randomly triangulated surface apply also to the O(n) model on a random lattice. These arguments imply that if the model has a critical point with diverging string susceptibility, then either γ = +1... by (γ̃, γ) = (−1/m, 1/(m+1)), m = 2, 3, ... We also show that at the critical points with positive string susceptibility exponent the average number of loops on the surface diverges while the average length of a single loop stays finite.

  19. Real-world utilization of once-daily extended-release abuse deterrent formulation of hydrocodone: a comparison with the pre-approval randomized clinical trials

    Directory of Open Access Journals (Sweden)

    Taber L

    2017-07-01

    Louise Taber (Arizona Research Center, Phoenix, AZ, USA); T Christopher Bond, Xuezhe Wang, Aditi Kadakia, Tracy J Mayne (Purdue Pharma L.P., Stamford, CT, USA). Background and objective: Hydrocodone bitartrate extended release (Hysingla® ER, HYD) was previously studied in a 12-week randomized, double-blind, placebo-controlled trial and a 52-week open-label safety study. Both of these preapproval studies allowed dose titration to efficacy. The purpose of the present analysis was to compare dosing and utilization patterns in these previous clinical trials with real-world data (RWD) usage in a retrospective claims analysis performed 12-14 months post approval in the US. Methods: In the claims analysis (Truven Health Analytics MarketScan® Research Database), patients prescribed HYD between January 1, 2015, and April 30, 2016, were followed for up to 6 months of continuous HYD use. Daily average consumption (DACON), initial dose, rescue opioid use and total milligram dose over time were also evaluated. Results: The HYD daily dose stabilized at ~60 mg once daily across all three studies. There was also a reduced need for rescue medication with HYD, resulting in a lower total opioid milligram dose over time. In the claims analysis, the mean monthly HYD dose increased from 49 to 55 mg in month 2 and then remained stable through month 6. The mean (standard deviation [SD]) time on drug was 79.5 (61.42) days, and DACON was 1.04 pills/day, corresponding to the approved full prescribing information (FPI) and once-daily dosing. Conclusion: At 12-14 months post approval, real-world dosing and utilization of HYD mirrored registration and open-label study findings, with stable once-daily dosing of ~60 mg and no increase in rescue medicine utilization. Keywords: chronic low back pain, hydrocodone bitartrate extended release, opioid, Hysingla ER, NCT01452529, NCT01400139

  20. Cost-utility analysis of the EVOLVO study on remote monitoring for heart failure patients with implantable defibrillators: randomized controlled trial.

    Science.gov (United States)

    Zanaboni, Paolo; Landolina, Maurizio; Marzegalli, Maurizio; Lunati, Maurizio; Perego, Giovanni B; Guenzati, Giuseppe; Curnis, Antonio; Valsecchi, Sergio; Borghetti, Francesca; Borghi, Gabriella; Masella, Cristina

    2013-05-30

    Heart failure patients with implantable defibrillators place a significant burden on health care systems. Remote monitoring allows assessment of device function and heart failure parameters, and may represent a safe, effective, and cost-saving method compared to conventional in-office follow-up. We hypothesized that remote device monitoring represents a cost-effective approach. This paper summarizes the economic evaluation of the Evolution of Management Strategies of Heart Failure Patients With Implantable Defibrillators (EVOLVO) study, a multicenter clinical trial aimed at measuring the benefits of remote monitoring for heart failure patients with implantable defibrillators. Two hundred patients implanted with a wireless transmission-enabled implantable defibrillator were randomized to receive either remote monitoring or the conventional method of in-person evaluations. Patients were followed for 16 months with a protocol of scheduled in-office and remote follow-ups. The economic evaluation of the intervention was conducted from the perspectives of the health care system and the patient. A cost-utility analysis was performed to measure whether the intervention was cost-effective in terms of cost per quality-adjusted life year (QALY) gained. Overall, remote monitoring did not show significant annual cost savings for the health care system (€1962.78 versus €2130.01; P=.80). There was a significant reduction of the annual cost for the patients in the remote arm in comparison to the standard arm (€291.36 versus €381.34; P=.01). Cost-utility analysis was performed for 180 patients for whom QALYs were available. The patients in the remote arm gained 0.065 QALYs more than those in the standard arm over 16 months, with a cost savings of €888.10 per patient. Results from the cost-utility analysis of the EVOLVO study show that remote monitoring is a cost-effective and dominant solution. Remote management of heart failure patients with implantable defibrillators

  1. Multiplicative random regression model for heterogeneous variance adjustment in genetic evaluation for milk yield in Simmental.

    Science.gov (United States)

    Lidauer, M H; Emmerling, R; Mäntysaari, E A

    2008-06-01

    A multiplicative random regression (M-RRM) test-day (TD) model was used to analyse daily milk yields from all available parities of German and Austrian Simmental dairy cattle. The method to account for heterogeneous variance (HV) was based on the multiplicative mixed model approach of Meuwissen. The variance model for the heterogeneity parameters included a fixed region × year × month × parity effect and a random herd × test-month effect with a within-herd first-order autocorrelation between test-months. Acceleration of variance model solutions after each multiplicative model cycle enabled fast convergence of adjustment factors and reduced total computing time significantly. Maximum likelihood estimation of within-strata residual variances was enhanced by inclusion of approximated information on the loss in degrees of freedom due to estimation of location parameters. This improved heterogeneity estimates for very small herds. The multiplicative model was compared with a model that assumed homogeneous variance. Re-estimated genetic variances, based on Mendelian sampling deviations, were homogeneous for the M-RRM TD model but heterogeneous for the homogeneous random regression TD model. Accounting for HV had a large effect on cow ranking but a moderate effect on bull ranking.

  2. An Exploration of the Effects of Maintenance Manning on Combat Mission Readiness (CMR) Utilizing Agent Based Modeling

    Science.gov (United States)

    2010-03-01

    an agent-based modeling environment called NetLogo, developed at Northwestern University (Wilensky, 1999). The development environment was selected...random numbers as a variance reduction technique during the analysis period. The NetLogo system did not have the capability to track more...upon coordination with AFIT). The random number generator used in the NetLogo environment is the Mersenne twister, proposed by Matsumoto and

  3. Improved plasma glucose control, whole-body glucose utilization, and lipid profile on a low-glycemic index diet in type 2 diabetic men: a randomized controlled trial.

    Science.gov (United States)

    Rizkalla, Salwa W; Taghrid, Laika; Laromiguiere, Muriel; Huet, Dorothée; Boillot, Josette; Rigoir, Aude; Elgrably, Fabienne; Slama, Gerard

    2004-08-01

    To determine whether a chronic low-glycemic index (LGI) diet, compared with a high-glycemic index (HGI) diet, has beneficial effects on plasma glucose control, lipid metabolism, total fat mass, and insulin resistance in type 2 diabetic patients. Twelve type 2 diabetic men were randomly allocated to two 4-week periods of an LGI or HGI carbohydrate diet separated by a 4-week washout interval, in a crossover design. The LGI diet induced lower postprandial plasma glucose and insulin profiles and areas under the curve than the HGI diet. At the end of the two dietary periods, the 7-day dietary records demonstrated equal daily total energy and macronutrient intake. Body weight and total fat mass were comparable. The 4-week LGI diet, compared with the HGI diet, induced improvement of fasting plasma glucose and improved glycemic control, glucose utilization, some lipid profiles, and the capacity for fibrinolysis in type 2 diabetes. Even if changes in glycemic control were modest during the 4-week period, the use of an LGI diet in a longer-term manner might play an important role in the treatment and prevention of diabetes and related disorders.

  4. Limited clinical utility of genotype-guided warfarin initiation dosing algorithms versus standard therapy: a meta-analysis and trial sequential analysis of 11 randomized controlled trials.

    Science.gov (United States)

    Tang, H L; Shi, W L; Li, X G; Zhang, T; Zhai, S D; Xie, H G

    2015-12-01

    Given the inconsistent conclusions across all relevant randomized controlled trials (RCTs) and available meta-analyses, we aimed to use a meta-analysis and trial sequential analysis (TSA) to evaluate whether the clinical utility of a genotype-guided warfarin initiation dosing algorithm is better than that of a standard therapy regimen, and whether the currently relevant evidence is reliable and conclusive. Overall, 11 eligible RCTs involving 2677 patients were included for further analyses. Compared with fixed-dose or clinically adjusted warfarin initiation dosing regimens, genotype-guided algorithms significantly increased the time in therapeutic range and shortened the time to first therapeutic international normalized ratio (INR) and the time to stable doses, but did not show any marked improvements in excessive anticoagulation, bleeding events, thromboembolism, or all-cause mortality. Subgroup analyses revealed that genotype-guided algorithms showed better control of time in therapeutic range and excessive anticoagulation than fixed-dose regimens, but not than clinically adjusted regimens. Except for excessive anticoagulation, currently available evidence for all other outcomes was unreliable and inconclusive as determined with TSA. Our findings suggest that genotype-guided warfarin initiation dosing algorithms are superior in improving surrogate quality markers for anticoagulation control, but that this does not translate into statistically significant differences in clinical outcomes, largely because of the insufficient sample size of the RCTs analyzed.

  5. Distance learning strategies for weight management utilizing social media: A comparison of phone conference call versus social media platform. Rationale and design for a randomized study.

    Science.gov (United States)

    Willis, Erik A; Szabo-Reed, Amanda N; Ptomey, Lauren T; Steger, Felicia L; Honas, Jeffery J; Al-Hihi, Eyad M; Lee, Robert; Vansaghi, Lisa; Washburn, Richard A; Donnelly, Joseph E

    2016-03-01

    Management of obesity in the context of the primary care physician visit is of limited efficacy in part because of limited ability to engage participants in sustained behavior change between physician visits. Therefore, healthcare systems must find methods to address obesity that reach beyond the walls of clinics and hospitals and address the issues of lifestyle modification in a cost-conscious way. The dramatic increase in technology and online social networks may present healthcare providers with innovative ways to deliver weight management programs that could have an impact on health care at the population level. A randomized study will be conducted on 70 obese adults (BMI 30.0-45.0 kg/m²) to determine if weight loss (6 months) is equivalent between weight management interventions utilizing behavioral strategies by either a conference call or social media approach. The primary outcome, body weight, will be assessed at baseline and 6 months. Secondary outcomes including waist circumference, energy and macronutrient intake, and physical activity will be assessed on the same schedule. In addition, a cost analysis and process evaluation will be completed. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. The impact of brief messages on HSV-2 screening uptake among female defendants in a court setting: a randomized controlled trial utilizing prospect theory.

    Science.gov (United States)

    Roth, Alexis M; Van Der Pol, Barbara; Fortenberry, J Dennis; Dodge, Brian; Reece, Michael; Certo, David; Zimet, Gregory D

    2015-01-01

    Epidemiologic data demonstrate that women involved with the criminal justice system in the United States are at high risk for sexually transmitted infections, including herpes simplex virus type 2 (HSV-2). Female defendants were recruited from a misdemeanor court to assess whether brief framed messages utilizing prospect theory could encourage testing for HSV-2. Participants were randomly assigned to a message condition (gain, loss, or control), completed an interviewer-administered survey assessing factors associated with antibody test uptake/refusal and were offered free point-of-care HSV-2 serologic testing. Although individuals in the loss-frame group accepted testing at the highest rate, an overall statistical difference in HSV-2 testing behavior by group (p ≤ .43) was not detected. The majority of the sample (74.6%) characterized receiving a serological test for HSV-2 as health affirming. However, this did not moderate the effect of the intervention nor was it significantly associated with test acceptance (p ≤ .82). Although the effects of message framing are subtle, the findings have important theoretical implications given the participants' characterization of HSV-2 screening as health affirming despite being a detection behavior. Implications of study results for health care providers interested in brief, low cost interventions are also explored.

  7. Generalized linear models with random effects unified analysis via H-likelihood

    CERN Document Server

    Lee, Youngjo; Pawitan, Yudi

    2006-01-01

    Since their introduction in 1972, generalized linear models (GLMs) have proven useful in the generalization of classical normal models. Presenting methods for fitting GLMs with random effects to data, Generalized Linear Models with Random Effects: Unified Analysis via H-likelihood explores a wide range of applications, including combining information over trials (meta-analysis), analysis of frailty models for survival data, genetic epidemiology, and analysis of spatial and temporal models with correlated errors.Written by pioneering authorities in the field, this reference provides an introduction to various theories and examines likelihood inference and GLMs. The authors show how to extend the class of GLMs while retaining as much simplicity as possible. By maximizing and deriving other quantities from h-likelihood, they also demonstrate how to use a single algorithm for all members of the class, resulting in a faster algorithm as compared to existing alternatives. Complementing theory with examples, many of...

  8. Time Gain Needed for In-Ambulance Telemedicine: Cost-Utility Model.

    Science.gov (United States)

    Valenzuela Espinoza, Alexis; Devos, Stefanie; van Hooff, Robbert-Jan; Fobelets, Maaike; Dupont, Alain; Moens, Maarten; Hubloue, Ives; Lauwaert, Door; Cornu, Pieter; Brouns, Raf; Putman, Koen

    2017-11-24

    Stroke is a highly time-sensitive pathology, and many new solutions target the optimization of prehospital stroke care to improve the stroke management process. In-ambulance telemedicine, defined by live bidirectional audio-video between a patient and a neurologist in a moving ambulance plus the automated transfer of vital parameters, is a promising new approach to speed up and improve the quality of acute stroke care. Currently, no evidence exists on the cost-effectiveness of in-ambulance telemedicine. We aim to develop a first cost-effectiveness model for in-ambulance telemedicine and use this model to estimate the time savings needed before in-ambulance telemedicine becomes cost-effective. Current standard stroke care is compared with current standard stroke care supplemented with in-ambulance telemedicine, using a cost-utility model measuring costs and quality-adjusted life-years (QALYs) from a health care perspective. We combine a decision tree with a Markov model. Data from the UZ Brussel Stroke Registry (2282 stroke patients) and linked hospital claims data at the individual level are combined with literature data to populate the model. A 2-way sensitivity analysis varying both implementation costs and time gain is performed to map the different cost-effective combinations and identify the time gain needed for cost-effectiveness and dominance. For several modeled time gains, the cost-effectiveness acceptability curve is calculated and mapped in 1 figure. Under the base-case scenario (implementation cost of US $159,425) and taking a lifetime horizon into account, in-ambulance telemedicine is a cost-effective strategy compared to standard stroke care alone starting from a time gain of 6 minutes. After 12 minutes, in-ambulance telemedicine becomes dominant, with a mean cost difference of US -$30 (95% CI -$32 to -$29) per patient and an average of 0.00456 (95% CI 0.00448 to 0.00463) QALYs gained per patient. In over 82% of all probabilistic simulations
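
    As a rough illustration of the cost-utility bookkeeping described above, the following Python sketch computes an incremental cost-effectiveness ratio (ICER) and checks for dominance. All numbers are invented placeholders, not values from this study.

        # Minimal cost-utility arithmetic for a new strategy vs. standard care.
        # All figures are illustrative placeholders, not data from the study.

        def icer(cost_new, qaly_new, cost_std, qaly_std):
            """Incremental cost-effectiveness ratio (cost per QALY gained)."""
            d_cost, d_qaly = cost_new - cost_std, qaly_new - qaly_std
            if d_cost <= 0 and d_qaly >= 0:
                return "dominant"       # cheaper and more effective
            if d_cost >= 0 and d_qaly <= 0:
                return "dominated"      # costlier and less effective
            return d_cost / d_qaly

        # Hypothetical per-patient results for two modeled time gains
        print(icer(10250.0, 4.2050, 10200.0, 4.2000))   # 10000.0 ($/QALY)
        print(icer(10170.0, 4.2046, 10200.0, 4.2000))   # 'dominant'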

  9. Cost-Utility Analysis of Bariatric Surgery in Italy: Results of Decision-Analytic Modelling

    Directory of Open Access Journals (Sweden)

    Marcello Lucchese

    2017-06-01

    Full Text Available Objective: To evaluate the cost-effectiveness of bariatric surgery in Italy from a third-party payer perspective over a medium-term (10 years) and a long-term (lifetime) horizon. Methods: A state-transition Markov model was developed, in which patients may experience surgery, post-surgery complications, diabetes mellitus type 2, cardiovascular diseases or die. Transition probabilities, costs, and utilities were obtained from the Italian and international literature. Three types of surgeries were considered: gastric bypass, sleeve gastrectomy, and adjustable gastric banding. A base-case analysis was performed for the population, the characteristics of which were obtained from surgery candidates in Italy. Results: In the base-case analysis, over 10 years, bariatric surgery led to a cost increment of EUR 2,661 and generated an additional 1.1 quality-adjusted life years (QALYs). Over a lifetime, surgery led to savings of EUR 8,649, an additional 0.5 life years and 3.2 QALYs. Bariatric surgery was cost-effective at 10 years with an incremental cost-effectiveness ratio of EUR 2,412/QALY and dominant over conservative management over a lifetime. Conclusion: In a comprehensive decision analytic model, a current mix of surgical methods for bariatric surgery was cost-effective at 10 years and cost-saving over the lifetime of the Italian patient cohort considered in this analysis.

  10. Cost-Utility Analysis of Bariatric Surgery in Italy: Results of Decision-Analytic Modelling.

    Science.gov (United States)

    Lucchese, Marcello; Borisenko, Oleg; Mantovani, Lorenzo Giovanni; Cortesi, Paolo Angelo; Cesana, Giancarlo; Adam, Daniel; Burdukova, Elisabeth; Lukyanov, Vasily; Di Lorenzo, Nicola

    2017-01-01

    To evaluate the cost-effectiveness of bariatric surgery in Italy from a third-party payer perspective over a medium-term (10 years) and a long-term (lifetime) horizon. A state-transition Markov model was developed, in which patients may experience surgery, post-surgery complications, diabetes mellitus type 2, cardiovascular diseases or die. Transition probabilities, costs, and utilities were obtained from the Italian and international literature. Three types of surgeries were considered: gastric bypass, sleeve gastrectomy, and adjustable gastric banding. A base-case analysis was performed for the population, the characteristics of which were obtained from surgery candidates in Italy. In the base-case analysis, over 10 years, bariatric surgery led to a cost increment of EUR 2,661 and generated an additional 1.1 quality-adjusted life years (QALYs). Over a lifetime, surgery led to savings of EUR 8,649, an additional 0.5 life years and 3.2 QALYs. Bariatric surgery was cost-effective at 10 years with an incremental cost-effectiveness ratio of EUR 2,412/QALY and dominant over conservative management over a lifetime. In a comprehensive decision analytic model, a current mix of surgical methods for bariatric surgery was cost-effective at 10 years and cost-saving over the lifetime of the Italian patient cohort considered in this analysis. © 2017 The Author(s) Published by S. Karger GmbH, Freiburg.

  11. Optimal energy-utilization ratio for long-distance cruising of a model fish

    Science.gov (United States)

    Liu, Geng; Yu, Yong-Liang; Tong, Bing-Gang

    2012-07-01

    The efficiency of total energy utilization and its optimization for long-distance migration of fish have attracted much attention in the past. This paper presents theoretical and computational research clarifying these well-known classic questions. Here, we define the energy-utilization ratio (fη) as a measure of cruising efficiency: the swimming speed divided by the sum of the standard metabolic rate and the energy consumption rate of muscle activities per unit mass. We formulate the function fη theoretically and show, by dimensional analysis, that the main dimensionless parameters of our simplified model are the Reynolds number (Re) and the dimensionless standard metabolic rate per unit mass (Rpm). The swimming speed and the hydrodynamic power output under various conditions can be computed by solving the coupled Navier-Stokes equations and the fish locomotion dynamic equations. The energy consumption rate of muscle activities can then be estimated by dividing the hydrodynamic power by the muscle efficiency reported by previous researchers. The present results show the following: (1) When fη attains its maximum, the dimensionless parameter Rpm remains almost constant for the same fish species across different sizes. (2) In these cases, the tail beat period at optimal cruising is a power function of the fish body length; e.g., the optimal tail beat period of Sockeye salmon is approximately proportional to body length to the power of 0.78. Moreover, larger fish are better suited to long-distance cruising than smaller fish. (3) The optimal swimming speed we obtained is consistent with previous researchers' estimates.

  12. Studies in astronomical time series analysis. I - Modeling random processes in the time domain

    Science.gov (United States)

    Scargle, J. D.

    1981-01-01

    Several random process models in the time domain are defined and discussed. Attention is given to the moving average model, the autoregressive model, and relationships between and combinations of these models. Consideration is then given to methods for investigating pulse structure, procedures of model construction, computational methods, and numerical experiments. A numerically stable FORTRAN algorithm for time series analysis has been developed. Results of test cases are given to study the effects of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the light curve of the quasar 3C 273 is presented as an example.
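
    The two time-domain models named above are easy to sketch in code; the following Python lines simulate an AR(1) and an MA(1) process with illustrative coefficients (not values from the paper) and compare their lag-1 autocorrelations.

        import numpy as np

        rng = np.random.default_rng(0)
        n, phi, theta = 1000, 0.7, 0.4   # illustrative AR and MA coefficients
        eps = rng.normal(size=n)         # white-noise innovations ("pulses")

        ar1 = np.zeros(n)                # AR(1): x_t = phi*x_{t-1} + eps_t
        for t in range(1, n):
            ar1[t] = phi * ar1[t - 1] + eps[t]

        ma1 = eps.copy()                 # MA(1): x_t = eps_t + theta*eps_{t-1}
        ma1[1:] += theta * eps[:-1]

        def acf1(x):
            """Sample autocorrelation at lag 1."""
            x = x - x.mean()
            return np.dot(x[:-1], x[1:]) / np.dot(x, x)

        print(acf1(ar1), acf1(ma1))  # ~phi for AR(1); ~theta/(1+theta^2) for MA(1)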

  13. Women's preferences for cardiac rehabilitation program model: a randomized controlled trial.

    Science.gov (United States)

    Andraos, Christine; Arthur, Heather M; Oh, Paul; Chessex, Caroline; Brister, Stephanie; Grace, Sherry L

    2015-12-01

    Although cardiac rehabilitation (CR) is effective, women often report programs do not meet their needs. Innovative models have been developed that may better suit women. The objectives of the study were to describe: (1) adherence to CR model allocation; (2) satisfaction by model attended; and (3) CR preferences. Tertiary objectives from a randomized controlled trial of female patients randomized to mixed-sex, women-only, or home-based CR were tested. Patients were recruited from six hospitals. Consenting participants were asked to complete a survey and undertook a CR intake assessment. Eligible patients were randomized. Participants were mailed a follow-up survey six months later. Adherence to model allocation was ascertained from CR charts. Overall 169 (18.6%) patients were randomized, of which 116 (68.6%) completed the post-test survey. Forty-five (26.6%) participants did not receive the allocated model, with those referred to home-based CR least likely to attend the allocated model (n = 25; 45.4%). Semi-structured interviews revealed participants also often switched from women-only to mixed-sex CR due to time conflicts. Satisfaction was high across all models (mean = 4.23 ± 1.16/5; p = 0.85) but participants in the women-only program felt significantly more comfortable in their workout attire (p = 0.003) and perceived the environment as less competitive (p = 0.02). Patients equally preferred mixed-sex (n = 44, 41.9%) and women-only (n = 44, 41.9%) CR, over home-based (n = 17, 16.2%), with patients preferring the model they attended. Females were highly satisfied regardless of CR model attended but preferred supervised programs most. Patient preference and session timing should be considered in program model allocation decisions. © The European Society of Cardiology 2014.

  14. Knowledge Translation for Research Utilization: Design of a Knowledge Translation Model at Tehran University of Medical Sciences

    Science.gov (United States)

    Majdzadeh, Reza; Sadighi, Jila; Nejat, Saharnaz; Mahani, Ali Shahidzade; Gholami, Jaleh

    2008-01-01

    Introduction: The present study aimed to generate a model that would provide a conceptual framework for linking disparate components of knowledge translation. A theoretical model of such would enable the organization and evaluation of attempts to analyze current conditions and to design interventions on the transfer and utilization of research…

  15. A comparison of various approaches to the exponential random graph model : A reanalysis of 102 student networks in school classes

    NARCIS (Netherlands)

    Lubbers, Miranda J.; Snijders, Tom A. B.

    2007-01-01

    This paper describes an empirical comparison of four specifications of the exponential family of random graph models (ERGM), distinguished by model specification (dyadic independence, Markov, partial conditional dependence) and, for the Markov model, by estimation method (Maximum Pseudolikelihood,

  16. A dynamic random effects multinomial logit model of household car ownership

    DEFF Research Database (Denmark)

    Bue Bjørner, Thomas; Leth-Petersen, Søren

    2007-01-01

    Using a large household panel we estimate demand for car ownership by means of a dynamic multinomial model with correlated random effects. Results suggest that the persistence in car ownership observed in the data should be attributed to both true state dependence and to unobserved heterogeneity...... (random effects). It also appears that random effects related to single and multiple car ownership are correlated, suggesting that the IIA assumption employed in simple multinomial models of car ownership is invalid. Relatively small elasticities with respect to income and car costs are estimated....... It should, however, be noted that the level of state dependence is considerably larger for households with single car ownership as compared with multiple car ownership. This suggests that the holding of a second car will be more affected by changes in the socioeconomic conditions of the household...

  17. Utilizing Traveler Demand Modeling to Predict Future Commercial Flight Schedules in the NAS

    Science.gov (United States)

    Viken, Jeff; Dollyhigh, Samuel; Smith, Jeremy; Trani, Antonio; Baik, Hojong; Hinze, Nicholas; Ashiabor, Senanu

    2006-01-01

    The current work incorporates the Transportation Systems Analysis Model (TSAM) to predict the future demand for airline travel. TSAM is a multi-mode, national model that predicts the demand for all long-distance travel at a county level based upon population and demographics. The model conducts a mode choice analysis to compute the demand for commercial airline travel based upon the traveler's trip purpose, value of time, and the cost and time of the trip. The county demand for airline travel is then aggregated (or distributed) to the airport level, and the enplanement demand at commercial airports is modeled. With the growth in flight demand, and utilizing current airline flight schedules, the Fratar algorithm is used to develop future flight schedules in the NAS. The projected flights can then be flown through air transportation simulators to quantify the ability of the NAS to meet future demand. A major strength of the TSAM analysis is that scenario planning can be conducted to quantify capacity requirements at individual airports, based upon different future scenarios. Different demographic scenarios can be analyzed to model the demand sensitivity to them. Also, it is fairly well known, but not well modeled at the airport level, that the demand for travel is highly dependent on the cost of travel, or the fare yield of the airline industry. The FAA projects the fare yield (in constant year dollars) to keep decreasing into the future. The magnitude and/or direction of these projections can be suspect in light of the general lack of airline profits and the large rises in airline fuel cost. Also, changes in travel time and convenience have an influence on the demand for air travel, especially for business travel. Future planners cannot easily conduct sensitivity studies of future demand with the FAA TAF data, nor with the Boeing or Airbus projections. In TSAM many factors can be parameterized and various demand sensitivities can be predicted for future travel. These
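
    The Fratar step mentioned above is a growth-factor balancing of an origin-destination table. A minimal Python sketch on an invented 3-airport table (not TSAM data) follows; column targets are rescaled to the row-target total so that the alternating scaling converges.

        import numpy as np

        # Invented base O-D trip table and per-airport growth factors
        T = np.array([[  0., 100.,  50.],
                      [ 80.,   0.,  70.],
                      [ 60.,  90.,   0.]])
        growth = np.array([1.2, 1.0, 1.5])

        target_row = T.sum(axis=1) * growth    # future trips produced
        target_col = T.sum(axis=0) * growth    # future trips attracted
        target_col *= target_row.sum() / target_col.sum()  # reconcile totals

        for _ in range(100):                   # alternate row/column scaling
            T *= (target_row / T.sum(axis=1))[:, None]
            T *= (target_col / T.sum(axis=0))[None, :]

        print(np.round(T, 1))                  # balanced future O-D table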

  18. Nitrogen Species in the Post-Pinatubo Stratosphere: Model Analysis Utilizing UARS Measurements. Appendix F

    Science.gov (United States)

    Danilin, Michael Y.; Rodriguez, Jose M.; Hu, Wenjie; Ko, Malcolm K. W.; Weisenstein, Debra K.; Kumer, John B.; Mergenthaler, John L.; Russell, James M., III; Koike, Makoto; Yue, Glenn K.

    1999-01-01

    We present an analysis of the impact of heterogeneous chemistry on the partitioning of nitrogen species measured by the Upper Atmosphere Research Satellite (UARS) instruments. The UARS measurements utilized include N2O, HNO3, and ClONO2 from the cryogenic limb array etalon spectrometer (CLAES), version 7 (v.7), and temperature, methane, ozone, H2O, HCl, NO and NO2 from the halogen occultation experiment (HALOE), version 18. The analysis is carried out for the UARS data obtained between January 1992 and September 1994 in the 100- to 1-mbar (approx. 17-47 km) altitude range and over 10 deg latitude bins from 70 deg S to 70 deg N. The spatiotemporal evolution of aerosol surface area density (SAD) is adopted from analysis of the Stratospheric Aerosol and Gas Experiment (SAGE) II data. A diurnal steady state photochemical box model, constrained by the temperature, ozone, H2O, CH4, aerosol SAD, and columns of O2 and O3 above the point of interest, has been used as the main tool to analyze these data. Total inorganic nitrogen (NOY) is obtained by three different methods: (1) as a sum of the UARS-measured NO, NO2, HNO3, and ClONO2; (2) from the N2O-NOY correlation; and (3) from the CH4-NOY correlation. To validate our current understanding of stratospheric heterogeneous chemistry for post-Pinatubo conditions, the model-calculated monthly averaged NO(x)/NO(y) ratios and the NO, NO2, and HNO3 profiles are compared with the UARS-derived data. In general, the UARS-constrained box model captures the main features of nitrogen species partitioning in the post-Pinatubo years, such as recovery of NO(x) after the eruption, their seasonal variability and vertical profiles. However, the model underestimates the NO2 content, particularly in the 30- to 7-mbar (approx. 23-32 km) range. Comparisons of the calculated temporal behavior of the partial columns of NO2 and HNO3 and ground-based measurements at 45 deg S and 45 deg N are also presented. Our analysis indicates that ground

  19. Nitrogen Species in the Post-Pinatubo Stratosphere: Model Analysis Utilizing UARS Measurements

    Science.gov (United States)

    Danilin, Michael Y.; Rodriguez, Jose M.; Hu, Wen-Jie; Ko, Malcolm K. W.; Weisenstein, Debra K.; Kumer, John B.; Mergenthaler, John L.; Russel, James M., III; Koike, Makoto; Yue, Glenn K.

    1999-01-01

    We present an analysis of the impact of heterogeneous chemistry on the partitioning of nitrogen species measured by the Upper Atmosphere Research Satellite (UARS) instruments. The UARS measurements utilized include N2O, HNO3, and ClONO2 from the cryogenic limb array etalon spectrometer (CLAES), version 7 (v.7), and temperature, methane, ozone, H2O, HCl, NO and NO2 from the halogen occultation experiment (HALOE), version 18. The analysis is carried out for the UARS data obtained between January 1992 and September 1994 in the 100- to 1-mbar (approx. 17-47 km) altitude range and over 10 deg latitude bins from 70 deg S to 70 deg N. The spatiotemporal evolution of aerosol surface area density (SAD) is adopted from analysis of the Stratospheric Aerosol and Gas Experiment (SAGE) II data. A diurnal steady state photochemical box model, constrained by the temperature, ozone, H2O, CH4, aerosol SAD, and columns of O2 and O3 above the point of interest, has been used as the main tool to analyze these data. Total inorganic nitrogen (NOy) is obtained by three different methods: (1) as a sum of the UARS-measured NO, NO2, HNO3, and ClONO2; (2) from the N2O-NOy correlation, and (3) from the CH4-NOy correlation. To validate our current understanding of stratospheric heterogeneous chemistry for post-Pinatubo conditions, the model-calculated monthly averaged NOx/NOy ratios and the NO, NO2, and HNO3 profiles are compared with the UARS-derived data. In general, the UARS-constrained box model captures the main features of nitrogen species partitioning in the post-Pinatubo years, such as recovery of NOx after the eruption, their seasonal variability and vertical profiles. However, the model underestimates the NO2 content, particularly in the 30- to 7-mbar (approx. 23-32 km) range. Comparisons of the calculated temporal behavior of the partial columns of NO2 and HNO3 and ground-based measurements at 45 deg S and 45 deg N are also presented. Our analysis indicates that ground-based and HALOE v.18

  20. Using observation-level random effects to model overdispersion in count data in ecology and evolution

    Directory of Open Access Journals (Sweden)

    Xavier A. Harrison

    2014-10-01

    Full Text Available Overdispersion is common in models of count data in ecology and evolutionary biology, and can occur due to missing covariates, non-independent (aggregated) data, or an excess frequency of zeroes (zero-inflation). Accounting for overdispersion in such models is vital, as failing to do so can lead to biased parameter estimates and false conclusions regarding hypotheses of interest. Observation-level random effects (OLRE), where each data point receives a unique level of a random effect that models the extra-Poisson variation present in the data, are commonly employed to cope with overdispersion in count data. However, studies investigating the efficacy of observation-level random effects as a means to deal with overdispersion are scarce. Here I use simulations to show that in cases where overdispersion is caused by random extra-Poisson noise, or aggregation in the count data, observation-level random effects yield more accurate parameter estimates compared to when overdispersion is simply ignored. Conversely, OLRE fail to reduce bias in zero-inflated data, and in some cases increase bias at high levels of overdispersion. There was a positive relationship between the magnitude of overdispersion and the degree of bias in parameter estimates. Critically, the simulations reveal that failing to account for overdispersion in mixed models can erroneously inflate measures of explained variance (r²), which may lead to researchers overestimating the predictive power of variables of interest. This work suggests use of observation-level random effects provides a simple and robust means to account for overdispersion in count data, but also that their ability to minimise bias is not uniform across all types of overdispersion and must be applied judiciously.
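
    A minimal Python sketch of the OLRE construction described above, with illustrative parameters: giving every observation its own normal random effect on the log scale produces the extra-Poisson variation that a plain Poisson model cannot capture.

        import numpy as np

        rng = np.random.default_rng(1)
        n, mu, sigma = 5000, 2.0, 0.6    # illustrative log-mean and OLRE std. dev.

        # One random effect per observation on the log scale (the OLRE idea)
        eta = mu + rng.normal(0.0, sigma, n)
        y = rng.poisson(np.exp(eta))

        print(y.mean(), y.var())         # variance far exceeds the mean
        # A pure Poisson sample with the same mean has variance == mean:
        print(rng.poisson(y.mean(), n).var())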

  1. Modeling Water Utility Investments and Improving Regulatory Policies using Economic Optimisation in England and Wales

    Science.gov (United States)

    Padula, S.; Harou, J. J.

    2012-12-01

    Water utilities in England and Wales are regulated natural monopolies called 'water companies'. Water companies must obtain periodic regulatory approval for all investments (new supply infrastructure or demand management measures). Both water companies and their regulators use results from least economic cost capacity expansion optimisation models to develop or assess water supply investment plans. This presentation first describes the formulation of a flexible supply-demand planning capacity expansion model for water system planning. The model uses a mixed integer linear programming (MILP) formulation to choose the least-cost schedule of future supply schemes (reservoirs, desalination plants, etc.), demand management (DM) measures (leakage reduction, water efficiency and metering options) and bulk transfers. Decisions include what schemes to implement, when to do so, how to size schemes and how much to use each scheme during each year of an n-year long planning horizon (typically 30 years). In addition to capital and operating (fixed and variable) costs, the estimated social and environmental costs of schemes are considered. Each proposed scheme is costed discretely at one or more capacities following regulatory guidelines. The model uses a node-link network structure: water demand nodes are connected to supply and demand management (DM) options (represented as nodes) or to other demand nodes (transfers). Yields from existing and proposed schemes are estimated separately using detailed water resource system simulation models evaluated over the historical period. The model simultaneously considers multiple demand scenarios to ensure demands are met at required reliability levels; use levels of each scheme are evaluated for each demand scenario and weighted by scenario likelihood so that operating costs are accurately evaluated. Multiple interdependency relationships between schemes (pre-requisites, mutual exclusivity, start dates, etc.) can be accounted for by
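
    The scheme-selection core of such a model can be illustrated as a tiny mixed integer program. The Python sketch below uses the PuLP library; the scheme names, capacities, and costs are invented, and scheduling, transfers, demand scenarios, and interdependencies are all omitted.

        from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

        # Invented schemes: name -> (capacity in Ml/d, cost)
        schemes = {"reservoir": (60, 120.0), "desalination": (40, 150.0),
                   "leakage_fix": (15, 20.0), "metering": (10, 18.0)}
        demand_deficit = 70   # Ml/d to cover at the planning horizon

        prob = LpProblem("capacity_expansion", LpMinimize)
        build = {s: LpVariable(f"build_{s}", cat="Binary") for s in schemes}

        prob += lpSum(schemes[s][1] * build[s] for s in schemes)  # total cost
        prob += lpSum(schemes[s][0] * build[s] for s in schemes) >= demand_deficit

        prob.solve()
        print([s for s in schemes if value(build[s]) == 1], value(prob.objective))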

  2. A Parent Coach Model for Well-Child Care Among Low-Income Children: A Randomized Controlled Trial.

    Science.gov (United States)

    Coker, Tumaini R; Chacon, Sandra; Elliott, Marc N; Bruno, Yovana; Chavis, Toni; Biely, Christopher; Bethell, Christina D; Contreras, Sandra; Mimila, Naomi A; Mercado, Jeffrey; Chung, Paul J

    2016-03-01

    The goal of this study was to examine the effects of a new model for well-child care (WCC), the Parent-focused Redesign for Encounters, Newborns to Toddlers (PARENT), on WCC quality and health care utilization among low-income families. PARENT includes 4 elements designed by using a stakeholder-engaged process: (1) a parent coach (ie, health educator) to provide anticipatory guidance, psychosocial screening and referral, and developmental/behavioral guidance and screening at each well-visit; (2) a Web-based tool for previsit screening; (3) an automated text message service to provide periodic, age-specific health messages to families; and (4) a brief, problem-focused encounter with the pediatric clinician. The Promoting Healthy Development Survey-PLUS was used to assess receipt of recommended WCC services at 12 months' postenrollment. Intervention effects were examined by using bivariate analyses. A total of 251 parents with a child aged ≤12 months were randomized to receive either the control (usual WCC) or the intervention (PARENT); 90% completed the 12-month assessment. Mean child age at enrollment was 4.5 months; 64% had an annual household income less than $20,000. Baseline characteristics for the intervention and control groups were similar. Intervention parents scored higher on all preventive care measures (anticipatory guidance, health information, psychosocial assessment, developmental screening, and parental developmental/behavioral concerns addressed) and experiences of care measures (family-centeredness, helpfulness, and overall rating of care). Fifty-two percent fewer intervention children had ≥2 emergency department visits over the 12-month period. There were no significant differences in WCC or sick visits/urgent care utilization. A parent coach-led model for WCC may improve the receipt of comprehensive WCC for low-income families, and it may potentially lead to cost savings by reducing emergency department utilization. Copyright © 2016 by the

  3. 2D stochastic-integral models for characterizing random grain noise in titanium alloys

    Energy Technology Data Exchange (ETDEWEB)

    Sabbagh, Harold A.; Murphy, R. Kim; Sabbagh, Elias H. [Victor Technologies, LLC, PO Box 7706, Bloomington, IN 47407-7706 (United States); Cherry, Matthew [University of Dayton Research Institute, 300 College Park Dr., Dayton, OH 45410 (United States); Pilchak, Adam; Knopp, Jeremy S.; Blodgett, Mark P. [Air Force Research Laboratory (AFRL/RXC), Wright Patterson AFB OH 45433-7817 (United States)

    2014-02-18

    We extend our previous work, in which we applied high-dimensional model representation (HDMR) and analysis of variance (ANOVA) concepts to the characterization of a metallic surface that has undergone a shot-peening treatment to reduce residual stresses, and has, therefore, become a random conductivity field. That example was treated as a one-dimensional problem, because those were the only data available. In this study, we develop a more rigorous two-dimensional model for characterizing random, anisotropic grain noise in titanium alloys. Such a model is necessary if we are to accurately capture the 'clumping' of crystallites into long chains that appears during the processing of the metal into a finished product. The mathematical model starts with an application of the Karhunen-Loève (K-L) expansion for the random Euler angles, θ and φ, that characterize the orientation of each crystallite in the sample. The random orientation of each crystallite then defines the stochastic nature of the electrical conductivity tensor of the metal. We study two possible covariances, Gaussian and double-exponential, which serve as the kernel of the K-L integral equation, and find that, of the two, the double-exponential appears to match the measurements more closely. Results based on data from a Ti-7Al sample will be given, and further applications of HDMR and ANOVA will be discussed.

  4. A random regression model in analysis of litter size in pigs | Luković ...

    African Journals Online (AJOL)

    Dispersion parameters for number of piglets born alive (NBA) were estimated using a random regression model (RRM). Two data sets of litter records from the Nemščak farm in Slovenia were used for analyses. The first dataset (DS1) included records from the first to the sixth parity. The second dataset (DS2) was extended ...

  5. Random regression models for daily feed intake in Danish Duroc pigs

    DEFF Research Database (Denmark)

    Strathe, Anders Bjerring; Mark, Thomas; Jensen, Just

    The objective of this study was to develop random regression models and estimate covariance functions for daily feed intake (DFI) in Danish Duroc pigs. A total of 476201 DFI records were available on 6542 Duroc boars between 70 to 160 days of age. The data originated from the National test statio...

  6. An Interactive Computer Model for Improved Student Understanding of Random Particle Motion and Osmosis

    Science.gov (United States)

    Kottonau, Johannes

    2011-01-01

    Effectively teaching the concepts of osmosis to college-level students is a major obstacle in biological education. Therefore, a novel computer model is presented that allows students to observe the random nature of particle motion simultaneously with the seemingly directed net flow of water across a semipermeable membrane during osmotic…

  7. A comparison of methods for representing random taste heterogeneity in discrete choice models

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Hess, Stephane

    2009-01-01

    This paper reports the findings of a systematic study using Monte Carlo experiments and a real dataset aimed at comparing the performance of various ways of specifying random taste heterogeneity in a discrete choice model. Specifically, the analysis compares the performance of two recent advanced...

  8. Semi-parametric estimation of random effects in a logistic regression model using conditional inference

    DEFF Research Database (Denmark)

    Petersen, Jørgen Holm

    2016-01-01

    This paper describes a new approach to the estimation in a logistic regression model with two crossed random effects where special interest is in estimating the variance of one of the effects while not making distributional assumptions about the other effect. A composite likelihood is studied...

  9. Comparing risk attitudes of organic and non-organic farmers with a Bayesian random coefficient model

    NARCIS (Netherlands)

    Gardebroek, C.

    2006-01-01

    Organic farming is usually considered to be more risky than conventional farming, but the risk aversion of organic farmers compared with that of conventional farmers has not been studied. Using a non-structural approach to risk estimation, a Bayesian random coefficient model is used to obtain

  10. Experimental validation of the stochastic model of a randomly fluctuating transmission-line

    NARCIS (Netherlands)

    Sy, O.O.; Vaessen, J.A.H.M.; Beurden, M.C. van; Michielsen, B.L.; Tijhuis, A.G.; Zwamborn, A.P.M.; Groot, J.S.

    2008-01-01

    A modeling method is proposed to quantify uncertainties affecting electromagnetic interactions. This method considers the uncertainties as random and measures them thanks to probability theory. A practical application is considered through the case of a transmission-line of varying geometry,

  11. Firm-Related Training Tracks: A Random Effects Ordered Probit Model

    Science.gov (United States)

    Groot, Wim; van den Brink, Henriette Maassen

    2003-01-01

    A random effects ordered response model of training is estimated to analyze the existence of training tracks and time varying coefficients in training frequency. Two waves of a Dutch panel survey of workers are used covering the period 1992-1996. The amount of training received by workers increased during the period 1994-1996 compared to…

  12. Randomized Controlled Trial of Video Self-Modeling Following Speech Restructuring Treatment for Stuttering

    Science.gov (United States)

    Cream, Angela; O'Brian, Sue; Jones, Mark; Block, Susan; Harrison, Elisabeth; Lincoln, Michelle; Hewat, Sally; Packman, Ann; Menzies, Ross; Onslow, Mark

    2010-01-01

    Purpose: In this study, the authors investigated the efficacy of video self-modeling (VSM) following speech restructuring treatment to improve the maintenance of treatment effects. Method: The design was an open-plan, parallel-group, randomized controlled trial. Participants were 89 adults and adolescents who undertook intensive speech…

  13. The van Hemmen model and effect of random crystalline anisotropy field

    Energy Technology Data Exchange (ETDEWEB)

    Morais, Denes M. de [Instituto de Física, Universidade Federal de Mato Grosso, 78060-900 Cuiabá, Mato Grosso (Brazil); Godoy, Mauricio, E-mail: mgodoy@fisica.ufmt.br [Instituto de Física, Universidade Federal de Mato Grosso, 78060-900 Cuiabá, Mato Grosso (Brazil); Arruda, Alberto S. de, E-mail: aarruda@fisica.ufmt.br [Instituto de Física, Universidade Federal de Mato Grosso, 78060-900 Cuiabá, Mato Grosso (Brazil); Silva, Jonathas N. da [Universidade Estadual Paulista, 14800-901, Araraquara, São Paulo (Brazil); Ricardo de Sousa, J. [Instituto Nacional de Sistemas Complexos, Departamento de Fisica, Universidade Federal do Amazona, 69077-000, Manaus, Amazonas (Brazil)

    2016-01-15

    In this work, we present the generalized phase diagrams of the van Hemmen model for spin S=1 in the presence of a random crystalline anisotropy field. To study the critical behavior of the phase transitions, we employed a mean-field Curie–Weiss approach, which allows calculation of the free energy and the equations of state of the model. The phase diagrams obtained here display tricritical behavior, with second-order phase transition lines separated from first-order phase transition lines by a tricritical point. - Highlights: • Several phase diagrams are obtained for the model. • The influence of the random crystalline anisotropy field on the model is investigated. • Three ordered (spin-glass, ferromagnetic and mixed) phases are found. • The tricritical behavior is examined.
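
    As a loose illustration of the mean-field Curie–Weiss machinery invoked above (and not the van Hemmen model's actual equations of state), the Python sketch below solves a generic spin-1 self-consistency equation with a uniform crystal field D by fixed-point iteration.

        import numpy as np

        def magnetization(beta, J=1.0, D=0.0, tol=1e-12):
            """Solve m = 2 sinh(beta*J*m) / (2 cosh(beta*J*m) + exp(beta*D))."""
            m = 0.5                       # nonzero start to reach the ordered root
            for _ in range(10000):
                m_new = 2*np.sinh(beta*J*m) / (2*np.cosh(beta*J*m) + np.exp(beta*D))
                if abs(m_new - m) < tol:
                    break
                m = m_new
            return m

        for T in (0.2, 0.5, 1.0):         # ordering disappears as T grows
            print(T, round(magnetization(1.0 / T), 4))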

  14. Calculating radiotherapy margins based on Bayesian modelling of patient specific random errors

    Science.gov (United States)

    Herschtal, A.; te Marvelde, L.; Mengersen, K.; Hosseinifard, Z.; Foroudi, F.; Devereux, T.; Pham, D.; Ball, D.; Greer, P. B.; Pichler, P.; Eade, T.; Kneebone, A.; Bell, L.; Caine, H.; Hindson, B.; Kron, T.

    2015-02-01

    Collected real-life clinical target volume (CTV) displacement data show that some patients undergoing external beam radiotherapy (EBRT) demonstrate significantly more fraction-to-fraction variability in their displacement (‘random error’) than others. This contrasts with the common assumption made by historical recipes for margin estimation for EBRT, that the random error is constant across patients. In this work we present statistical models of CTV displacements in which random errors are characterised by an inverse gamma (IG) distribution in order to assess the impact of random error variability on CTV-to-PTV margin widths, for eight real world patient cohorts from four institutions, and for different sites of malignancy. We considered a variety of clinical treatment requirements and penumbral widths. The eight cohorts consisted of a total of 874 patients and 27 391 treatment sessions. Compared to a traditional margin recipe that assumes constant random errors across patients, for a typical 4 mm penumbral width, the IG based margin model mandates that in order to satisfy the common clinical requirement that 90% of patients receive at least 95% of prescribed RT dose to the entire CTV, margins be increased by a median of 10% (range over the eight cohorts -19% to +35%). This substantially reduces the proportion of patients for whom margins are too small to satisfy clinical requirements.
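
    A minimal Python sketch of the central point, with invented inverse-gamma parameters: when per-patient random-error standard deviations are spread out, a margin meant to cover 90% of patients must track the upper tail of that distribution rather than its mean. The 0.7-sigma coefficient below is the random-error term of the classical margin recipes (as in the van Herk recipe) that the abstract contrasts against.

        import numpy as np
        from scipy.stats import invgamma

        a, scale = 8.0, 14.0   # invented IG shape/scale for sigma^2 (mm^2)

        # Patient-specific random-error variances drawn from an inverse gamma
        sig2 = invgamma.rvs(a, scale=scale, size=100000, random_state=1)
        sig = np.sqrt(sig2)

        sigma_mean = sig.mean()              # what a constant-error recipe uses
        sigma_90 = np.quantile(sig, 0.90)    # drives the 90%-of-patients rule

        print(0.7 * sigma_mean, 0.7 * sigma_90)   # tail-based margin is wider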

  15. First-principles modeling of electromagnetic scattering by discrete and discretely heterogeneous random media

    Energy Technology Data Exchange (ETDEWEB)

    Mishchenko, Michael I., E-mail: michael.i.mishchenko@nasa.gov [NASA Goddard Institute for Space Studies, 2880 Broadway, New York, NY 10025 (United States); Dlugach, Janna M. [Main Astronomical Observatory of the National Academy of Sciences of Ukraine, 27 Zabolotny Str., 03680, Kyiv (Ukraine); Yurkin, Maxim A. [Voevodsky Institute of Chemical Kinetics and Combustion, SB RAS, Institutskaya str. 3, 630090 Novosibirsk (Russian Federation); Novosibirsk State University, Pirogova 2, 630090 Novosibirsk (Russian Federation); Bi, Lei [Department of Atmospheric Sciences, Texas A& M University, College Station, TX 77843 (United States); Cairns, Brian [NASA Goddard Institute for Space Studies, 2880 Broadway, New York, NY 10025 (United States); Liu, Li [NASA Goddard Institute for Space Studies, 2880 Broadway, New York, NY 10025 (United States); Columbia University, 2880 Broadway, New York, NY 10025 (United States); Panetta, R. Lee [Department of Atmospheric Sciences, Texas A& M University, College Station, TX 77843 (United States); Travis, Larry D. [NASA Goddard Institute for Space Studies, 2880 Broadway, New York, NY 10025 (United States); Yang, Ping [Department of Atmospheric Sciences, Texas A& M University, College Station, TX 77843 (United States); Zakharova, Nadezhda T. [Trinnovim LLC, 2880 Broadway, New York, NY 10025 (United States)

    2016-05-16

    A discrete random medium is an object in the form of a finite volume of a vacuum or a homogeneous material medium filled with quasi-randomly and quasi-uniformly distributed discrete macroscopic impurities called small particles. Such objects are ubiquitous in natural and artificial environments. They are often characterized by analyzing theoretically the results of laboratory, in situ, or remote-sensing measurements of the scattering of light and other electromagnetic radiation. Electromagnetic scattering and absorption by particles can also affect the energy budget of a discrete random medium and hence various ambient physical and chemical processes. In either case electromagnetic scattering must be modeled in terms of appropriate optical observables, i.e., quadratic or bilinear forms in the field that quantify the reading of a relevant optical instrument or the electromagnetic energy budget. It is generally believed that time-harmonic Maxwell’s equations can accurately describe elastic electromagnetic scattering by macroscopic particulate media that change in time much more slowly than the incident electromagnetic field. However, direct solutions of these equations for discrete random media had been impracticable until quite recently. This has led to a widespread use of various phenomenological approaches in situations when their very applicability can be questioned. Recently, however, a new branch of physical optics has emerged wherein electromagnetic scattering by discrete and discretely heterogeneous random media is modeled directly by using analytical or numerically exact computer solutions of the Maxwell equations. Therefore, the main objective of this Report is to formulate the general theoretical framework of electromagnetic scattering by discrete random media rooted in the Maxwell–Lorentz electromagnetics and discuss its immediate analytical and numerical consequences. Starting from the microscopic Maxwell–Lorentz equations, we trace the development

  16. A novel murine model of Fusarium solani keratitis utilizing fluorescent labeled fungi.

    Science.gov (United States)

    Zhang, Hongmin; Wang, Liya; Li, Zhijie; Liu, Susu; Xie, Yanting; He, Siyu; Deng, Xianming; Yang, Biao; Liu, Hui; Chen, Guoming; Zhao, Huiwen; Zhang, Junjie

    2013-05-01

    Fungal keratitis is a common disease that causes blindness. An effective animal model for fungal keratitis is essential for advancing research on this disease. Our objective is to develop a novel mouse model of Fusarium solani keratitis through the inoculation of fluorescent-labeled fungi into the cornea, to facilitate the accurate and early identification and screening of fungal infections. F. solani was used as the model fungus in this study. In the in vitro experiment, the effects of Calcofluor White (CFW) staining concentration and duration on the fluorescence intensity of F. solani were determined through the mean fluorescence intensity (MFI); the effects of CFW staining on the growth of F. solani were determined by the colony diameter. In the in vivo experiment, F. solani keratitis was induced in mice, which were divided into CFW-unlabeled and CFW-labeled groups. The positive rate and corneal lesion score were measured, and several methods of determining the positive rate were compared. The MFIs of F. solani in the 30 μg/ml CFW-30 min, 90 μg/ml CFW-10 min and 90 μg/ml CFW-30 min groups were higher than that in the 10 μg/ml CFW-10 min group (P < 0.05). No significant differences (P > 0.05) were observed for the positive rate or the corneal lesion scores between the CFW-unlabeled and the CFW-labeled group. On days 1 and 2, the positive rates of the infected corneas in the scraping group were lower than those in the fluorescence microscopy group (P < 0.05). Thus, these experiments established a novel murine model of F. solani keratitis utilizing fluorescent-labeled fungi. This model facilitates the accurate identification and screening of fungal infections during the early stages of fungal keratitis and provides a novel and reliable technology for studying fungal keratitis. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Choice, knowledge, and utilization of a practice theory: a national study of occupational therapists who use the model of human occupation.

    Science.gov (United States)

    Lee, Sun Wook; Taylor, Renee; Kielhofner, Gary

    2009-01-01

    Objective. To identify how therapists choose and use the Model of Human Occupation (MOHO). Method. A systematic random sample of 1,000 occupational therapists was surveyed as to whether they used MOHO in their practice. Those who were using MOHO were then sent a detailed questionnaire; 259 therapists responded to the survey questionnaire, forming a response rate of 60.2 percent. Results. A total of 80.7% of therapists indicated that they had used MOHO in their practice. A number of factors influenced therapists' choice to use MOHO. The most frequently cited factors were therapists' judgment that MOHO fit their own practice philosophy and their clients' needs. Most therapists used multiple means of learning about MOHO, and the number of means they used was related to both self-reported levels of knowledge and utilization of this model. Many therapists are also actively engaged in sharing their knowledge and utilization of this model. Conclusion. Multiple factors contribute to therapists' choice and use of MOHO. Therapists actively make a decision to use MOHO in practice and put forth substantial efforts to learn and share their knowledge of MOHO.

  18. Reduced-Order Monte Carlo Modeling of Radiation Transport in Random Media

    Science.gov (United States)

    Olson, Aaron

    The ability to perform radiation transport computations in stochastic media is essential for predictive capabilities in applications such as weather modeling, radiation shielding involving non-homogeneous materials, atmospheric radiation transport computations, and transport in plasma-air structures. Due to the random nature of such media, it is often not clear how to model or otherwise compute on many forms of stochastic media. Several approaches to evaluation of transport quantities for some stochastic media exist, though such approaches often either yield considerable error or are quite computationally expensive. We model stochastic media using the Karhunen-Loeve (KL) expansion, seek to improve efficiency through use of stochastic collocation (SC), and provide higher-order information of output values using the polynomial chaos expansion (PCE). We study and demonstrate method convergence and apply the new methods to both spatially continuous and spatially discontinuous stochastic media. The new methods are shown to produce accurate solutions at reasonable computational cost for several problems when compared with existing solution methods. Spatially random media are modeled using transformations of the Gaussian-distributed KL expansion: continuous random media with a lognormal transformation and discontinuous random media with a Nataf transformation. Each transformation preserves second-order statistics for the quantity being modeled (atom density or material index, respectively). The Nystrom method facilitates numerical solution of the KL eigenvalues and eigenvectors, and a variety of methods are investigated for sampling KL eigenfunctions as a function of solved eigenvectors. The infinite KL expansion is truncated to a finite number of terms, each containing a random variable, and material realizations are created by either randomly or deterministically sampling from the random variables. Deterministic sampling is performed with either isotropic or anisotropic
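
    A minimal Python sketch of the KL machinery described above: discrete eigenpairs of an assumed exponential covariance on a 1D grid, a truncated expansion sampled with standard normal variables, and a lognormal transform to obtain a continuous random-medium realization. Kernel and parameters are illustrative, and the Nataf branch is omitted.

        import numpy as np

        n, L, var = 200, 0.2, 0.5     # grid size, correlation length, log-variance
        x = np.linspace(0.0, 1.0, n)
        C = var * np.exp(-np.abs(x[:, None] - x[None, :]) / L)  # exp. covariance

        w, V = np.linalg.eigh(C)      # eigenpairs of the covariance matrix
        w, V = w[::-1], V[:, ::-1]    # sort descending

        k = 20                        # truncate the KL series at k terms
        rng = np.random.default_rng(3)
        xi = rng.normal(size=k)       # independent standard normal variables
        g = V[:, :k] @ (np.sqrt(w[:k]) * xi)   # Gaussian field realization

        field = np.exp(g)             # lognormal transform (e.g., atom density)
        print(field.min(), field.max())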

  19. Investigating the Dynamic Effects of Counterfeits with a Random Changepoint Simultaneous Equation Model

    OpenAIRE

    Yi Qian; Hui Xie

    2011-01-01

    Using a unique panel dataset and a new model, this article investigates the dynamic effects of counterfeit sales on authentic-product price dynamics. We propose a Bayesian random-changepoint simultaneous equation model that simultaneously takes into account three important features in empirical studies: (1) Endogeneity of a market entry, (2) Nonstationarity of the entry effects and (3) Heterogeneity of the firms' response behaviors. Besides accounting for the endogeneity of counterfeiting, th...

  20. Encoding Sequential Information in Vector Space Models of Semantics: Comparing Holographic Reduced Representation and Random Permutation

    OpenAIRE

    Recchia, Gabriel; Jones, Michael; Sahlgren, Magnus; Kanerva, Pentti

    2010-01-01

    Encoding information about the order in which words typically appear has been shown to improve the performance of high-dimensional semantic space models. This requires an encoding operation capable of binding together vectors in an order-sensitive way, and efficient enough to scale to large text corpora. Although both circular convolution and random permutations have been enlisted for this purpose in semantic models, these operations have never been systematically compared. In Experiment 1 we...
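
    The two binding operations compared in this work can be sketched directly in NumPy: circular convolution (computed via the FFT) supports only noisy, approximate unbinding, whereas a random permutation is inverted exactly. Dimensions and vectors below are illustrative.

        import numpy as np

        d = 1024
        rng = np.random.default_rng(4)
        a, b = rng.normal(0.0, 1.0 / np.sqrt(d), (2, d))  # random index vectors

        # Circular convolution binding (holographic reduced representations)
        bound = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real
        # Unbind by circular correlation: an approximate, noisy copy of b
        b_hat = np.fft.ifft(np.fft.fft(bound) * np.conj(np.fft.fft(a))).real

        # Random permutation binding, inverted exactly
        perm = rng.permutation(d)
        p_bound = b[perm]
        b_exact = np.empty(d)
        b_exact[perm] = p_bound

        cos = lambda u, v: u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
        print(cos(b_hat, b), cos(b_exact, b))   # noisy (~0.7) vs. exact (1.0)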

  1. UA(1) breaking and phase transition in chiral random matrix model

    OpenAIRE

    Sano, T.; Fujii, H.; Ohtani, M

    2009-01-01

    We propose a chiral random matrix model which properly incorporates the flavor-number dependence of the phase transition owing to the UA(1) anomaly term. At finite temperature, the model shows a second-order phase transition with mean-field critical exponents for two massless flavors, while in the case of three massless flavors the transition turns out to be of first order. The topological susceptibility satisfies the anomalous UA(1) Ward identity and decreases gradually with the temp...

  2. TESTING BRAND VALUE MEASUREMENT METHODS IN A RANDOM COEFFICIENT MODELING FRAMEWORK

    Directory of Open Access Journals (Sweden)

    Szõcs Attila

    2014-07-01

    Full Text Available Our objective is to provide a framework for measuring brand equity, that is, the added value endowed to the product by the brand. Based on a demand and supply model, we propose a structural model that enables testing the structural effect of brand equity (demand-side effect) on brand value (supply-side effect), using Monte Carlo simulation. Our main research question is which of the three brand value measurement methods (price premium, revenue premium and profit premium) is more suitable from the perspective of the structural link between brand equity and brand value. Our model is based on recent developments in random coefficients model applications.

  3. Towards utilizing GPUs in information visualization: a model and implementation of image-space operations.

    Science.gov (United States)

    McDonnel, Bryan; Elmqvist, Niklas

    2009-01-01

    Modern programmable GPUs represent a vast potential in terms of performance and visual flexibility for information visualization research, but surprisingly few applications even begin to utilize this potential. In this paper, we conjecture that this may be due to the mismatch between the high-level abstract data types commonly visualized in our field, and the low-level floating-point model supported by current GPU shader languages. To help remedy this situation, we present a refinement of the traditional information visualization pipeline that is amenable to implementation using GPU shaders. The refinement consists of a final image-space step in the pipeline where the multivariate data of the visualization is sampled in the resolution of the current view. To concretize the theoretical aspects of this work, we also present a visual programming environment for constructing visualization shaders using a simple drag-and-drop interface. Finally, we give some examples of the use of shaders for well-known visualization techniques.

  4. Biomimetic peptide-based models of [FeFe]-hydrogenases: utilization of phosphine-containing peptides.

    Science.gov (United States)

    Roy, Souvik; Nguyen, Thuy-Ai D; Gan, Lu; Jones, Anne K

    2015-09-07

    Two synthetic strategies for incorporating diiron analogues of [FeFe]-hydrogenases into short peptides via phosphine functional groups are described. First, utilizing the amine side chain of lysine as an anchor, phosphine carboxylic acids can be coupled via amide formation to resin-bound peptides. Second, artificial, phosphine-containing amino acids can be directly incorporated into peptides via solution phase peptide synthesis. The second approach is demonstrated using three amino acids each with a different phosphine substituent (diphenyl, diisopropyl, and diethyl phosphine). In total, five distinct monophosphine-substituted, diiron model complexes were prepared by reaction of the phosphine-peptides with diiron hexacarbonyl precursors, either (μ-pdt)Fe2(CO)6 or (μ-bdt)Fe2(CO)6 (pdt = propane-1,3-dithiolate, bdt = benzene-1,2-dithiolate). Formation of the complexes was confirmed by UV/Vis, FTIR and (31)P NMR spectroscopy. Electrocatalysis by these complexes is reported in the presence of acetic acid in mixed aqueous-organic solutions. Addition of water results in enhancement of the catalytic rates.

  5. Research utilization in the building industry: decision model and preliminary assessment

    Energy Technology Data Exchange (ETDEWEB)

    Watts, R.L.; Johnson, D.R.; Smith, S.A.; Westergard, E.J.

    1985-10-01

    The Research Utilization Program was conceived as a far-reaching means for managing the interactions of the private sector and the federal research sector as they deal with energy conservation in buildings. The program emphasizes a private-public partnership in planning a research agenda and in applying the results of ongoing and completed research. The results of this task support the hypothesis that the transfer of R and D results to the buildings industry can be accomplished more efficiently and quickly by a systematic approach to technology transfer. This systematic approach involves targeting decision makers, assessing research and information needs, properly formatting information, and then transmitting the information through trusted channels. The purpose of this report is to introduce elements of a market-oriented knowledge base, which would be useful to the Building Systems Division, the Office of Buildings and Community Systems and their associated laboratories in managing a private-public research partnership on a rational, systematic basis. This report presents conceptual models and data bases that can be used in formulating a technology transfer strategy and in planning technology transfer programs.

  6. A Statistical Model Updating Method of Beam Structures with Random Parameters under Static Load

    Directory of Open Access Journals (Sweden)

    Zhifeng Wu

    2017-06-01

    Full Text Available This paper presents a new statistical model updating method for beam structures with random parameters under static load. The new updating method considers structural parameters and measurement errors to be random. To reduce the unmeasured degrees of freedom in the finite element model, a static condensation technique is used in this method. A statistical model updating equation with respect to element updated factors is established afterwards. The element updated factors are expanded as random multivariate power series. Using a high-order perturbation technique, the statistical model updating equation can be solved to obtain the coefficients of the power series expansions of the element updated factors. The results of two numerical examples show that, in solving the statistical model updating equation, the accuracy of the proposed method agrees very well with that of the Monte Carlo simulation method. The static responses obtained from the updated finite element model coincide closely with the measured results. Finally, a series of static load tests of a concrete beam is conducted to verify the effectiveness of the proposed method.
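
    The condensation step can be illustrated with classical Guyan reduction, which eliminates the unmeasured degrees of freedom s and keeps the measured ones m via K_c = K_mm - K_ms K_ss^(-1) K_sm. The Python sketch below uses an arbitrary example stiffness matrix and does not implement the paper's perturbation-based updating; the reduction is exact for static loads applied only at the retained degrees of freedom.

        import numpy as np

        # Arbitrary symmetric positive-definite 4x4 stiffness matrix
        K = np.array([[ 4., -1., -1.,  0.],
                      [-1.,  4.,  0., -1.],
                      [-1.,  0.,  4., -1.],
                      [ 0., -1., -1.,  4.]])
        m, s = [0, 1], [2, 3]          # measured / unmeasured DOF indices

        Kmm, Kms = K[np.ix_(m, m)], K[np.ix_(m, s)]
        Ksm, Kss = K[np.ix_(s, m)], K[np.ix_(s, s)]
        Kc = Kmm - Kms @ np.linalg.solve(Kss, Ksm)   # condensed stiffness

        f_m = np.array([1.0, 0.0])       # static load on measured DOFs only
        print(np.linalg.solve(Kc, f_m))  # displacements at measured DOFs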

  7. Optimal urban water conservation strategies considering embedded energy: coupling end-use and utility water-energy models.

    Science.gov (United States)

    Escriva-Bou, A.; Lund, J. R.; Pulido-Velazquez, M.; Spang, E. S.; Loge, F. J.

    2014-12-01

    Although most freshwater resources are used in agriculture, a greater amount of energy is consumed per unit of water supplied to urban areas. Therefore, efforts to reduce the carbon footprint of water in cities, including the energy embedded within household uses, can be an order of magnitude larger than for other water uses. This characteristic of urban water systems creates a promising opportunity to reduce global greenhouse gas emissions, particularly given rapidly growing urbanization worldwide. Based on a previous Water-Energy-CO2 emissions model for household water end uses, this research introduces a probabilistic two-stage optimization model with technical and behavioral decision variables to obtain the most economical strategies for minimizing household water and water-related energy bills under both water and energy price shocks. Results show that adoption of less energy-intensive appliances increases significantly, yielding an overall 20% growth in indoor water conservation when households account for the energy cost of their water use. To analyze the consequences at utility scale, we develop an hourly water-energy model based on data from East Bay Municipal Utility District in California, including residential consumption, finding that water end uses account for roughly 90% of total water-related energy, while the 10% managed by the utility is worth over $12 million annually. Once the entire end-use + utility model was completed, several demand-side management conservation strategies were simulated for the city of San Ramon. In this smaller water district, roughly 5% of total EBMUD water use, we found that the optimal household strategies can reduce total GHG emissions by 4% and the utility's energy cost by over $70,000/yr. Especially interesting from the utility perspective could be the "smoothing" of water-use peaks by avoiding daytime irrigation, which, among other benefits, might reduce utility energy costs by 0.5% according to our

  8. Generalized random walk algorithm for the numerical modeling of complex diffusion processes

    CERN Document Server

    Vamos, C; Vereecken, H

    2003-01-01

    A generalized form of the random walk algorithm to simulate diffusion processes is introduced. Unlike the usual approach, at a given time all the particles at a grid node are scattered simultaneously using the Bernoulli repartition. This procedure saves memory and computing time, and no restrictions are imposed on the maximum number of particles that can be used in simulations. We prove that for simple diffusion the method generalizes the finite difference scheme and gives the same precision for a large enough number of particles. As an example, simulations of diffusion in a random velocity field are performed and the main features of the stochastic mathematical model are numerically tested.
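
    A minimal one-dimensional sketch of the scattering step follows; the boundary handling, jump probability and grid size are assumptions here, not values from the paper:

        import numpy as np

        # Global random walk: all particles at a node move in one binomial
        # draw instead of being walked one by one.
        def grw_step(counts, p_left=0.5, rng=np.random.default_rng(0)):
            new = np.zeros_like(counts)
            last = len(counts) - 1
            for i, n in enumerate(counts):
                if n == 0:
                    continue
                left = rng.binomial(n, p_left)     # one draw scatters them all
                new[max(i - 1, 0)] += left         # reflect at the left edge
                new[min(i + 1, last)] += n - left  # reflect at the right edge
            return new

        counts = np.zeros(101, dtype=np.int64)
        counts[50] = 10**6                 # a million particles costs nothing extra
        for _ in range(200):
            counts = grw_step(counts)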

  9. A theory of solving TAP equations for Ising models with general invariant random matrices

    DEFF Research Database (Denmark)

    Opper, Manfred; Çakmak, Burak; Winther, Ole

    2016-01-01

    We consider the problem of solving TAP mean field equations by iteration for Ising models with coupling matrices that are drawn at random from general invariant ensembles. We develop an analysis of iterative algorithms using a dynamical functional approach that in the thermodynamic limit yields...... the iteration dependent on a Gaussian distributed field only. The TAP magnetizations are stable fixed points if a de Almeida–Thouless stability criterion is fulfilled. We illustrate our method explicitly for coupling matrices drawn from the random orthogonal ensemble....

  10. Encrypted data stream identification using randomness sparse representation and fuzzy Gaussian mixture model

    Science.gov (United States)

    Zhang, Hong; Hou, Rui; Yi, Lei; Meng, Juan; Pan, Zhisong; Zhou, Yuhuan

    2016-07-01

    Accurate identification of encrypted data streams helps to regulate illegal data, detect network attacks and protect users' information. In this paper, a novel encrypted data stream identification algorithm is introduced. The proposed method is based on the randomness characteristics of encrypted data streams. We use an l1-norm regularized logistic regression to obtain a sparse representation of randomness features and a fuzzy Gaussian mixture model (FGMM) to improve identification accuracy. Experimental results demonstrate that the method can be adopted as an effective technique for encrypted data stream identification.
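
    A sketch of the sparsity step only, on synthetic data; the feature matrix below stands in for the paper's randomness-test scores and is not its actual feature set:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        X = rng.normal(size=(500, 20))        # 20 hypothetical randomness scores
        y = (X[:, 0] + 0.5 * X[:, 1]          # only two scores are informative
             + rng.normal(scale=0.5, size=500) > 0).astype(int)

        # The l1 penalty drives uninformative coefficients exactly to zero.
        clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
        clf.fit(X, y)
        print("features kept:", np.flatnonzero(clf.coef_[0]))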

  11. Study on the Business Cycle Model with Fractional-Order Time Delay under Random Excitation

    Directory of Open Access Journals (Sweden)

    Zifei Lin

    2017-07-01

    Time delays of economic policy and memory properties of real economic systems are omnipresent and inevitable. In this paper, a business cycle model with fractional-order time delay, which describes the delay and memory properties of economic control, is investigated. The stochastic averaging method is applied to obtain an approximate analytical solution, and numerical simulations are performed to verify the method. The effects of the fractional order, time delay, economic control and random excitation on the amplitude of the economic system are investigated. The results show that time delay, fractional order and the intensity of random excitation can all magnify the amplitude and increase the volatility of the economic system.

  12. Explaining Distortions in Utility Elicitation through the Rank-Dependent Model for Risky Choices

    NARCIS (Netherlands)

    P.P. Wakker (Peter); A.M. Stiggelbout (Anne)

    1995-01-01

    The standard gamble (SG) method has been accepted as the gold standard for the elicitation of utility when risk or uncertainty is involved in decisions, and thus for the measurement of utility in medical decisions. Unfortunately, the SG method is distorted by a general dislike for
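
    For reference, the rank-dependent evaluation of a gamble that such models apply can be sketched as follows; the power utility and inverse-S weighting function below are common illustrative choices, not the paper's estimates:

        # Rank-dependent utility: sort outcomes best-first and weight each by
        # the increment of w over the cumulative probability of doing at
        # least that well.
        def rdu(outcomes, probs, u, w):
            ranked = sorted(zip(outcomes, probs), key=lambda t: -t[0])
            value, cum = 0.0, 0.0
            for x, p in ranked:
                value += (w(cum + p) - w(cum)) * u(x)
                cum += p
            return value

        u = lambda x: x ** 0.88                              # illustrative utility
        w = lambda p: p**0.65 / (p**0.65 + (1 - p)**0.65) ** (1 / 0.65)
        print(rdu([100, 0], [0.5, 0.5], u, w))  # below 0.5*u(100): the SG distortion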

  13. Analysis of the impact of decentralized solar technology on electric utilities: comparison and synthesis of models. Progress report

    Energy Technology Data Exchange (ETDEWEB)

    Feldman, S.; Blair, P.

    1980-11-20

    The validation of the physical submodels of three solar-electric utility interface models is described. The validation problem is divided into two components: the accuracy of the submodels themselves and the accuracy of the data typically used to run these models. The data set required to study these problems with respect to utility requirements is discussed, and its collection in the Philadelphia metropolitan area is described, including the instrumentation employed in gathering the data. Error statistics for the data and submodel accuracy are presented, along with the current status of the study.

  14. Application of random number generators in genetic algorithms to improve rainfall-runoff modelling

    Science.gov (United States)

    Chlumecký, Martin; Buchtele, Josef; Richta, Karel

    2017-10-01

    The efficient calibration of rainfall-runoff models is a difficult issue, even for experienced hydrologists, so fast, high-quality model calibration is a valuable improvement. This paper describes a novel methodology and software for optimising rainfall-runoff modelling using a genetic algorithm (GA) with a newly designed random number generator (HRNG), which is the core of the optimisation. The GA estimates model parameters using evolutionary principles, which requires a high-quality random number generator. The new HRNG generates random numbers based on hydrological information and provides better numbers than pure software generators. The GA enhances model calibration considerably, and the goal is to optimise the calibration of the model with a minimum of user interaction. This article focuses on improving the internal structure of the GA, which is shielded from the user. The results we obtained indicate that the HRNG provides a stable trend in the output quality of the model, despite various configurations of the GA. In contrast to previous research, the HRNG speeds up the calibration of the model and offers an improvement in rainfall-runoff modelling.
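
    The key software design point, namely that the generator is a swappable component of the GA, can be sketched as follows; the fitness function and parameter ranges are placeholders, not the rainfall-runoff model:

        import numpy as np

        def ga_minimize(fitness, n_params, rng, pop=40, gens=100, sigma=0.1):
            X = rng.uniform(0.0, 1.0, size=(pop, n_params))
            for _ in range(gens):
                f = np.array([fitness(x) for x in X])
                parents = X[np.argsort(f)[: pop // 2]]        # truncation selection
                kids = parents + rng.normal(0.0, sigma, parents.shape)  # mutation
                X = np.vstack([parents, np.clip(kids, 0.0, 1.0)])
            return X[np.argmin([fitness(x) for x in X])]

        # Any generator with the numpy Generator interface can be injected,
        # so an HRNG-style source could replace default_rng here.
        best = ga_minimize(lambda x: ((x - 0.3) ** 2).sum(), 3,
                           rng=np.random.default_rng(42))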

  15. Modeling longitudinal data with nonparametric multiplicative random effects jointly with survival data.

    Science.gov (United States)

    Ding, Jimin; Wang, Jane-Ling

    2008-06-01

    In clinical studies, longitudinal biomarkers are often used to monitor disease progression and failure time. Joint modeling of longitudinal and survival data has certain advantages and has emerged as an effective way to mutually enhance information. Typically, a parametric longitudinal model is assumed to facilitate the likelihood approach. However, the choice of a proper parametric model turns out to be more elusive than in standard longitudinal studies in which no survival endpoint occurs. In this article, we propose a nonparametric multiplicative random effects model for the longitudinal process, which has many applications and leads to a flexible yet parsimonious nonparametric random effects model. A proportional hazards model is then used to link the biomarkers and event time. We use B-splines to represent the nonparametric longitudinal process and select the number of knots and degrees based on a version of the Akaike information criterion (AIC). Unknown model parameters are estimated by maximizing the observed joint likelihood, which is iteratively maximized by the Monte Carlo Expectation Maximization (MCEM) algorithm. Due to the simplicity of the model structure, the proposed approach has good numerical stability and compares well with the competing parametric longitudinal approaches. The new approach is illustrated with primary biliary cirrhosis (PBC) data, aiming to capture nonlinear patterns of serum bilirubin time courses and their relationship with the survival time of PBC patients.
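
    The knot-selection step alone can be sketched as follows; the simulated trajectory, spline degree and exact AIC form are illustrative choices, and the paper embeds this selection inside the joint likelihood rather than a plain least-squares fit:

        import numpy as np
        from scipy.interpolate import make_lsq_spline

        rng = np.random.default_rng(0)
        t_obs = np.sort(rng.uniform(0, 10, 80))      # observation times
        y = np.log1p(t_obs) + rng.normal(scale=0.1, size=t_obs.size)

        k, best = 3, None                            # cubic B-splines
        for n_knots in range(6):                     # candidate interior knots
            interior = np.linspace(0, 10, n_knots + 2)[1:-1]
            knots = np.r_[[t_obs[0]] * (k + 1), interior, [t_obs[-1]] * (k + 1)]
            spl = make_lsq_spline(t_obs, y, knots, k)
            rss = ((y - spl(t_obs)) ** 2).sum()
            n_par = n_knots + k + 1                  # spline coefficients
            aic = t_obs.size * np.log(rss / t_obs.size) + 2 * n_par
            if best is None or aic < best[0]:
                best = (aic, n_knots, spl)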

  16. Reduction in hydroxyhydroquinone from coffee increases postprandial fat utilization in healthy humans: a randomized double-blind, cross-over trial.

    Science.gov (United States)

    Soga, Satoko; Ota, Noriyasu; Shimotoyodome, Akira

    2017-07-01

    The present study aimed to clarify the effect of reduction in hydroxyhydroquinone (HHQ) from roasted coffee on energy utilization in humans. Indirect calorimetry showed that one-week ingestion of HHQ-reduced coffee led to significantly higher postprandial fat utilization than that of HHQ-containing coffee. This finding indicates that reduction in HHQ from coffee increases postprandial fat utilization.

  17. Parameter Estimation in Stratified Cluster Sampling under Randomized Response Models for Sensitive Question Survey.

    Science.gov (United States)

    Pu, Xiangke; Gao, Ge; Fan, Yubo; Wang, Mian

    2016-01-01

    Randomized response is a research method used to get accurate answers to sensitive questions in structured sample surveys. Simple random sampling is widely used in surveys of sensitive questions but is hard to apply to large target populations. On the other hand, more sophisticated sampling regimes and the corresponding formulas are seldom employed in sensitive question surveys. In this work, we developed a series of formulas for parameter estimation in cluster sampling and stratified cluster sampling under two kinds of randomized response models, using classic sampling theories and total probability formulas. The performance of the sampling methods and formulas was assessed in a survey of premarital sex and cheating on exams at Soochow University. The reliability of the survey methods and formulas for sensitive question surveys was found to be high.

  18. Parameter Estimation in Stratified Cluster Sampling under Randomized Response Models for Sensitive Question Survey.

    Directory of Open Access Journals (Sweden)

    Xiangke Pu

    Randomized response is a research method used to get accurate answers to sensitive questions in structured sample surveys. Simple random sampling is widely used in surveys of sensitive questions but is hard to apply to large target populations. On the other hand, more sophisticated sampling regimes and the corresponding formulas are seldom employed in sensitive question surveys. In this work, we developed a series of formulas for parameter estimation in cluster sampling and stratified cluster sampling under two kinds of randomized response models, using classic sampling theories and total probability formulas. The performance of the sampling methods and formulas was assessed in a survey of premarital sex and cheating on exams at Soochow University. The reliability of the survey methods and formulas for sensitive question surveys was found to be high.
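
    Such designs build on the classic Warner estimator, sketched below for simple random sampling; the probability p and the counts are made up for illustration, and the paper's contribution is extending formulas of this kind to cluster and stratified cluster sampling:

        # Warner's randomized response: each respondent answers the sensitive
        # statement with probability p, its complement otherwise (p != 0.5).
        def warner_estimate(n_yes, n, p):
            lam = n_yes / n                      # observed "yes" proportion
            pi = (lam + p - 1) / (2 * p - 1)     # estimated sensitive proportion
            var = lam * (1 - lam) / ((2 * p - 1) ** 2 * n)
            return pi, var

        pi_hat, var_hat = warner_estimate(n_yes=230, n=600, p=0.7)
        print(pi_hat, var_hat ** 0.5)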

  19. Random cascades on wavelet trees and their use in analyzing and modeling natural images

    Science.gov (United States)

    Wainwright, Martin J.; Simoncelli, Eero P.; Willsky, Alan S.

    2000-12-01

    We develop a new class of non-Gaussian multiscale stochastic processes defined by random cascades on trees of wavelet or other multiresolution coefficients. These cascades reproduce a rich semi-parametric class of random variables known as Gaussian scale mixtures. We demonstrate that this model class can accurately capture the remarkably regular and non-Gaussian features of natural images in a parsimonious fashion, involving only a small set of parameters. In addition, this model structure leads to efficient algorithms for image processing. In particular, we develop a Newton-like algorithm for MAP estimation that exploits a very fast algorithm for linear-Gaussian estimation on trees, and hence is efficient. On the basis of this MAP estimator, we develop and illustrate a denoising technique that is based on a global prior model and preserves the structure of natural images.
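
    The building block is easy to simulate; this sketch uses an assumed lognormal multiplier and an assumed neighborhood covariance (both illustrative) to show the heavy tails that match wavelet statistics:

        import numpy as np

        # Gaussian scale mixture: x = sqrt(z) * u, u Gaussian, z > 0 hidden.
        rng = np.random.default_rng(0)
        d = 9                                    # coefficients per neighborhood
        C = 0.5 * np.eye(d) + 0.5                # unit variance, 0.5 correlation
        u = rng.multivariate_normal(np.zeros(d), C, size=10_000)
        z = rng.lognormal(mean=0.0, sigma=0.7, size=(10_000, 1))
        x = np.sqrt(z) * u                       # heavy-tailed, wavelet-like

        kurt = np.mean(x[:, 0] ** 4) / np.mean(x[:, 0] ** 2) ** 2
        print(kurt)                              # well above the Gaussian value 3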

  20. Random Regression Models Based On The Skew Elliptically Contoured Distribution Assumptions With Applications To Longitudinal Data *

    Science.gov (United States)

    Zheng, Shimin; Rao, Uma; Bartolucci, Alfred A.; Singh, Karan P.

    2011-01-01

    Bartolucci et al. (2003) extended the distribution assumption from the normal (Lyles et al., 2000) to the elliptically contoured distribution (ECD) for random regression models used in the analysis of longitudinal data accounting for both undetectable values and informative drop-outs. In this paper, the random regression models are constructed on the multivariate skew ECD. A real data set is used to illustrate that skew ECDs can fit some unimodal continuous data better than Gaussian distributions or more general continuous symmetric distributions when the symmetric distribution assumption is violated. A simulation study also illustrates model fit for a variety of skew ECDs. The software used is SAS/STAT, V. 9.13. PMID:21637734

  1. LED Lighting System Reliability Modeling and Inference via Random Effects Gamma Process and Copula Function

    Directory of Open Access Journals (Sweden)

    Huibing Hao

    2015-01-01

    Light-emitting diode (LED) lamps have attracted increasing interest in the field of lighting systems due to their low energy consumption and long lifetime. For different functions (i.e., illumination and color), a lamp may have two or more performance characteristics, and when these characteristics are dependent, accurately analyzing the system reliability becomes challenging. In this paper, we assume that the system has two performance characteristics and that each is governed by a random effects Gamma process, where the random effects capture unit-to-unit differences. The dependency between the performance characteristics is described by a Frank copula function, through which the reliability assessment model is proposed. Because the model is complicated and analytically intractable, the Markov chain Monte Carlo (MCMC) method is used to estimate the unknown parameters. A numerical example based on actual LED lamp data demonstrates the usefulness and validity of the proposed model and method.
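
    A sketch of the dependence step only: once each characteristic's marginal reliability at time t is known, the Frank copula couples them into a joint reliability. The marginals and theta below are placeholders, not fitted values:

        import numpy as np

        def frank_copula(u, v, theta):
            num = np.expm1(-theta * u) * np.expm1(-theta * v)
            return -np.log1p(num / np.expm1(-theta)) / theta

        p1, p2 = 0.95, 0.90   # marginal reliabilities of the two characteristics
        r_sys = frank_copula(p1, p2, theta=4.0)  # theta > 0: positive dependence
        print(r_sys)          # lies between p1*p2 (independence) and min(p1, p2)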

  2. Bayesian analysis for exponential random graph models using the adaptive exchange sampler

    KAUST Repository

    Jin, Ick Hoon

    2013-01-01

    Exponential random graph models have been widely used in social network analysis. However, these models are extremely difficult to handle from a statistical viewpoint because of the intractable normalizing constants they contain. In this paper, we consider a fully Bayesian analysis for exponential random graph models using the adaptive exchange sampler, which solves the issue of intractable normalizing constants encountered in Markov chain Monte Carlo (MCMC) simulations. The adaptive exchange sampler can be viewed as an MCMC extension of the exchange algorithm; it generates auxiliary networks via an importance sampling procedure from an auxiliary Markov chain running in parallel, and its convergence is established under mild conditions. The adaptive exchange sampler is illustrated using a few social networks, including the Florentine business network, a synthetic molecule network, and the dolphins network. The results indicate that the adaptive exchange algorithm can produce more accurate estimates than approximate exchange algorithms, while maintaining the same computational efficiency.
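
    For orientation, one step of the plain exchange algorithm for an exponential-family network model p(y|theta) proportional to exp(theta.s(y)) looks like the sketch below; sample_network (a sampler from the model at a given theta) and s (the sufficient statistics) are assumed helpers, and the adaptive sampler replaces the exact auxiliary draw with draws from a parallel chain:

        import numpy as np

        def exchange_step(theta, s_obs, sample_network, s, prop_sd, rng):
            theta_new = theta + rng.normal(0.0, prop_sd, size=theta.shape)
            w = sample_network(theta_new)      # auxiliary network (assumed helper)
            # The intractable normalizing constants cancel in this ratio
            # (flat prior and symmetric proposal assumed):
            log_ratio = ((theta_new - theta) @ s_obs
                         + (theta - theta_new) @ s(w))
            if np.log(rng.uniform()) < log_ratio:
                return theta_new
            return theta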

  3. Model for continuously scanning ultrasound vibrometer sensing displacements of randomly rough vibrating surfaces.

    Science.gov (United States)

    Ratilal, Purnima; Andrews, Mark; Donabed, Ninos; Galinde, Ameya; Rappaport, Carey; Fenneman, Douglas

    2007-02-01

    An analytic model is developed for the time-dependent ultrasound field reflected off a randomly rough vibrating surface for a continuously scanning ultrasound vibrometer system in bistatic configuration. Kirchhoff's approximation to Green's theorem is applied to model the three-dimensional scattering interaction of the ultrasound wave field with the vibrating rough surface. The model incorporates the beam patterns of both the transmitting and receiving ultrasound transducers and the statistical properties of the rough surface. Two methods are applied to the ultrasound system for estimating the displacement and velocity amplitudes of an oscillating surface: incoherent Doppler shift spectra and coherent interferometry. Motion of the vibrometer over the randomly rough surface leads to time-dependent scattering noise that randomizes the received signal spectrum. Simulations with the model indicate that surface displacement and velocity estimation depend strongly on the scan velocity and projected wavelength of the ultrasound vibrometer relative to the roughness height standard deviation and correlation length scales of the rough surface. The model is applied to determine limiting scan speeds for an ultrasound vibrometer measuring ground displacements arising from acoustic or seismic excitation, to be used in acoustic landmine confirmation sensing.

  4. Estimating required information size by quantifying diversity in random-effects model meta-analyses

    DEFF Research Database (Denmark)

    Wetterslev, Jørn; Thorlund, Kristian; Brok, Jesper

    2009-01-01

    ...an intervention effect suggested by trials with low risk of bias. METHODS: Information size calculations need to consider the total model variance in a meta-analysis to control type I and type II errors. Here, we derive an adjusting factor for the required information size under any random-effects model meta-analysis... RESULTS: We devise a measure of diversity (D2) in a meta-analysis, which is the relative variance reduction when the meta-analysis model is changed from a random-effects into a fixed-effect model. D2 is the percentage that the between-trial variability constitutes of the sum of the between-trial variability and a sampling error estimate... and interpreted using several simulations and clinical examples. In addition, we show mathematically that diversity is equal to or greater than inconsistency, that is, D2 >= I2, for all meta-analyses. CONCLUSION: We conclude that D2 seems a better alternative than I2 to consider model variation in any random-effects model meta-analysis...
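
    From the verbal definition above (the relative variance reduction when the model is changed from random-effects to fixed-effect), D2 can be written with V_R and V_F denoting the variances of the pooled estimate under the two models; this restates the abstract, not the paper's estimation formula:

        \[
          D^2 = \frac{V_R - V_F}{V_R}, \qquad 0 \le D^2 \le 1 .
        \]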

  5. Random effects modeling of multiple binomial responses using the multivariate binomial logit-normal distribution.

    Science.gov (United States)

    Coull, B A; Agresti, A

    2000-03-01

    The multivariate binomial logit-normal distribution is a mixture distribution in which (i) conditional on a set of success probabilities and sample size indices, the vector of counts consists of independent binomial variates, and (ii) the vector of logits of the success probabilities has a multivariate normal distribution. We use this distribution to model multivariate binomial-type responses using a vector of random effects. The vector of logits has a mean that is a linear function of explanatory variables and an unspecified or partly specified covariance matrix. The model generalizes, and provides greater flexibility than, the univariate model that uses a normal random effect to account for positive correlations in clustered data. The multivariate model is useful when different elements of the response vector refer to different characteristics, each of which may naturally have its own random effect. It is also useful for repeated binary measurements of a single response when there is a nonexchangeable association structure, such as one often expects with longitudinal data, or when negative association exists for at least one pair of responses. We apply the model to an influenza study with repeated responses in which some pairs are negatively associated, and to a developmental toxicity study with continuation-ratio logits applied to an ordinal response with clustered observations.
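
    The generative model is compact enough to simulate directly; in this sketch the mean vector, the covariance (note one negative off-diagonal) and the trial counts are illustrative:

        import numpy as np

        rng = np.random.default_rng(0)
        mu = np.array([-0.5, 0.0, 0.5])          # linear-predictor means
        Sigma = np.array([[1.0, 0.6, -0.3],      # negative association allowed
                          [0.6, 1.0, 0.2],
                          [-0.3, 0.2, 1.0]])
        logits = rng.multivariate_normal(mu, Sigma, size=200)
        p = 1.0 / (1.0 + np.exp(-logits))        # success probabilities
        counts = rng.binomial(10, p)             # conditionally independent
        print(np.corrcoef(counts, rowvar=False)[0, 2])  # inherits the minus sign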

  6. Nakagami Markov random field as texture model for ultrasound RF envelope image.

    Science.gov (United States)

    Bouhlel, N; Sevestre-Ghalila, S

    2009-06-01

    The aim of this paper is to propose a new Markov random field (MRF) model for the backscattered ultrasonic echo in order to extract information about backscatter characteristics, such as scatterer density, amplitude and spacing. The model combines the Nakagami distribution, which describes the envelope of the backscattered echo, with spatial interaction through an MRF. The parameters of the model and the parameter estimation method are introduced. Computer simulations using an ultrasound radio-frequency (RF) simulator and experiments on choroidal malignant melanoma were undertaken to test the validity of the model. The relationship between the parameters of the MRF model and the backscatter characteristics is established, and the ability of the model to distinguish between normal and abnormal tissue is demonstrated.
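
    The marginal is simple to sample, since a Nakagami(m, Omega) envelope is the square root of a Gamma(m, Omega/m) variate; the parameters below are illustrative, and the paper's contribution is coupling this marginal with MRF spatial interaction:

        import numpy as np

        rng = np.random.default_rng(0)
        m, omega = 1.5, 2.0                     # Nakagami shape and spread
        r = np.sqrt(rng.gamma(shape=m, scale=omega / m, size=100_000))
        print(np.mean(r ** 2))                  # E[R^2] = omega, here ~2.0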

  7. SHER: a colored petri net based random mobility model for wireless communications.

    Directory of Open Access Journals (Sweden)

    Naeem Akhtar Khan

    In wireless network research, simulation is the most important technique for investigating a network's behavior and for validation. Wireless networks typically consist of mobile hosts; therefore, the degree of validation is influenced by the underlying mobility model, and synthetic models are implemented in simulators because real-life traces are not widely available. In wireless communications, mobility is an integral part of the system, and the key role of a mobility model is to mimic real-life traveling patterns. The performance of routing protocols and mobility management strategies, e.g., paging, registration and handoff, is highly dependent on the selected mobility model. In this paper, we devise and evaluate Show Home and Exclusive Regions (SHER), a novel two-dimensional (2-D) Colored Petri net (CPN) based formal random mobility model, which exhibits the sociological behavior of a user. The model captures hotspots where a user frequently visits and spends time. Our solution eliminates six key issues of random mobility models, i.e., sudden stops, memoryless movements, the border effect, temporal dependency of velocity, pause time dependency, and speed decay, in a single model. The proposed model is able to predict the future location of a mobile user and ultimately improves the performance of wireless communication networks. The model follows a uniform nodal distribution and is a mini simulator, which exhibits interesting mobility patterns. The model is also helpful to those who are not familiar with formal modeling, and users can extract meaningful information with a single mouse-click. It is noteworthy that capturing dynamic mobility patterns through CPN is the most challenging and virulent activity of the presented research. Statistical and reachability analysis techniques are presented to elucidate and validate the performance of our proposed mobility model. The state space methods allow us to algorithmically derive the system behavior and

  8. SHER: a colored petri net based random mobility model for wireless communications.

    Science.gov (United States)

    Khan, Naeem Akhtar; Ahmad, Farooq; Khan, Sher Afzal

    2015-01-01

    In wireless network research, simulation is the most important technique for investigating a network's behavior and for validation. Wireless networks typically consist of mobile hosts; therefore, the degree of validation is influenced by the underlying mobility model, and synthetic models are implemented in simulators because real-life traces are not widely available. In wireless communications, mobility is an integral part of the system, and the key role of a mobility model is to mimic real-life traveling patterns. The performance of routing protocols and mobility management strategies, e.g., paging, registration and handoff, is highly dependent on the selected mobility model. In this paper, we devise and evaluate Show Home and Exclusive Regions (SHER), a novel two-dimensional (2-D) Colored Petri net (CPN) based formal random mobility model, which exhibits the sociological behavior of a user. The model captures hotspots where a user frequently visits and spends time. Our solution eliminates six key issues of random mobility models, i.e., sudden stops, memoryless movements, the border effect, temporal dependency of velocity, pause time dependency, and speed decay, in a single model. The proposed model is able to predict the future location of a mobile user and ultimately improves the performance of wireless communication networks. The model follows a uniform nodal distribution and is a mini simulator, which exhibits interesting mobility patterns. The model is also helpful to those who are not familiar with formal modeling, and users can extract meaningful information with a single mouse-click. It is noteworthy that capturing dynamic mobility patterns through CPN is the most challenging and virulent activity of the presented research. Statistical and reachability analysis techniques are presented to elucidate and validate the performance of our proposed mobility model. The state space methods allow us to algorithmically derive the system behavior and rectify the errors.

  9. Optimal Subset Selection of Time-Series MODIS Images and Sample Data Transfer with Random Forests for Supervised Classification Modelling

    Directory of Open Access Journals (Sweden)

    Fuqun Zhou

    2016-10-01

    Nowadays, various time-series Earth Observation data with multiple bands are freely available, such as Moderate Resolution Imaging Spectroradiometer (MODIS) datasets, including 8-day composites from NASA and 10-day composites from the Canada Centre for Remote Sensing (CCRS). It is challenging to use these time-series MODIS datasets efficiently for long-term environmental monitoring due to their vast volume and information redundancy. This challenge will be greater when Sentinel 2-3 data become available. Another challenge that researchers face is the lack of in-situ data for supervised modelling, especially for time-series data analysis. In this study, we attempt to tackle these two important issues in a case study of land cover mapping using CCRS 10-day MODIS composites with the help of two Random Forests features: variable importance and outlier identification. The variable importance feature is used to analyze and select optimal subsets of the time-series MODIS imagery for efficient land cover mapping, and the outlier identification feature is utilized to transfer sample data available from one year to an adjacent year for supervised classification modelling. The results of the case study of agricultural land cover classification at a regional scale show that using only about half of the variables we can achieve land cover classification accuracy close to that obtained using the full dataset. The proposed simple but effective solution of sample transferring could make supervised modelling possible for applications lacking sample data.

  10. Optimal Subset Selection of Time-Series MODIS Images and Sample Data Transfer with Random Forests for Supervised Classification Modelling.

    Science.gov (United States)

    Zhou, Fuqun; Zhang, Aining

    2016-10-25

    Nowadays, various time-series Earth Observation data with multiple bands are freely available, such as Moderate Resolution Imaging Spectroradiometer (MODIS) datasets, including 8-day composites from NASA and 10-day composites from the Canada Centre for Remote Sensing (CCRS). It is challenging to use these time-series MODIS datasets efficiently for long-term environmental monitoring due to their vast volume and information redundancy. This challenge will be greater when Sentinel 2-3 data become available. Another challenge that researchers face is the lack of in-situ data for supervised modelling, especially for time-series data analysis. In this study, we attempt to tackle these two important issues in a case study of land cover mapping using CCRS 10-day MODIS composites with the help of two Random Forests features: variable importance and outlier identification. The variable importance feature is used to analyze and select optimal subsets of the time-series MODIS imagery for efficient land cover mapping, and the outlier identification feature is utilized to transfer sample data available from one year to an adjacent year for supervised classification modelling. The results of the case study of agricultural land cover classification at a regional scale show that using only about half of the variables we can achieve land cover classification accuracy close to that obtained using the full dataset. The proposed simple but effective solution of sample transferring could make supervised modelling possible for applications lacking sample data.
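
    The variable-importance step can be sketched on synthetic data as follows; the 40-column matrix stands in for the stacked MODIS time-series bands and is not the study's data:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 40))             # hypothetical composite bands
        y = (X[:, :5].sum(axis=1) > 0).astype(int)  # labels driven by 5 of them

        rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
        order = np.argsort(rf.feature_importances_)[::-1]
        subset = order[: X.shape[1] // 2]           # keep about half the variables
        rf_small = RandomForestClassifier(n_estimators=300, random_state=0)
        rf_small.fit(X[:, subset], y)               # accuracy typically comparable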

  11. Bayesian hierarchical models for cost-effectiveness analyses that use data from cluster randomized trials.

    Science.gov (United States)

    Grieve, Richard; Nixon, Richard; Thompson, Simon G

    2010-01-01

    Cost-effectiveness analyses (CEA) may be undertaken alongside cluster randomized trials (CRTs) where randomization is at the level of the cluster (for example, the hospital or primary care provider) rather than the individual. Costs (and outcomes) within clusters may be correlated so that the assumption made by standard bivariate regression models, that observations are independent, is incorrect. This study develops a flexible modeling framework to acknowledge the clustering in CEA that use CRTs. The authors extend previous Bayesian bivariate models for CEA of multicenter trials to recognize the specific form of clustering in CRTs. They develop new Bayesian hierarchical models (BHMs) that allow mean costs and outcomes, and also variances, to differ across clusters. They illustrate how each model can be applied using data from a large (1732 cases, 70 primary care providers) CRT evaluating alternative interventions for reducing postnatal depression. The analyses compare cost-effectiveness estimates from BHMs with standard bivariate regression models that ignore the data hierarchy. The BHMs show high levels of cost heterogeneity across clusters (intracluster correlation coefficient, 0.17). Compared with standard regression models, the BHMs yield substantially increased uncertainty surrounding the cost-effectiveness estimates, and altered point estimates. The authors conclude that ignoring clustering can lead to incorrect inferences. The BHMs that they present offer a flexible modeling framework that can be applied more generally to CEA that use CRTs.
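
    The clustering statistic reported above is straightforward to estimate; this sketch uses the one-way ANOVA estimator on simulated balanced data, with cluster sizes and variances chosen to give an ICC near the reported 0.17:

        import numpy as np

        rng = np.random.default_rng(0)
        n_clusters, m = 70, 25                    # clusters, cases per cluster
        between = rng.normal(0.0, np.sqrt(0.17), n_clusters)
        costs = between[:, None] + rng.normal(0.0, np.sqrt(0.83), (n_clusters, m))

        msb = m * costs.mean(axis=1).var(ddof=1)  # between-cluster mean square
        msw = costs.var(axis=1, ddof=1).mean()    # within-cluster mean square
        icc = (msb - msw) / (msb + (m - 1) * msw) # ANOVA estimator
        print(icc)                                # ~0.17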

  12. Modelling and Simulation of Photosynthetic Microorganism Growth: Random Walk vs. Finite Difference Method

    Czech Academy of Sciences Publication Activity Database

    Papáček, Š.; Matonoha, Ctirad; Štumbauer, V.; Štys, D.

    2012-01-01

    Roč. 82, č. 10 (2012), s. 2022-2032 ISSN 0378-4754. [Modelling 2009. IMACS Conference on Mathematical Modelling and Computational Methods in Applied Sciences and Engineering /4./. Rožnov pod Radhoštěm, 22.06.2009-26.06.2009] Grant - others:CENAKVA(CZ) CZ.1.05/2.1.00/01.0024; GA JU(CZ) 152//2010/Z Institutional research plan: CEZ:AV0Z10300504 Keywords : multiscale modelling * distributed parameter system * boundary value problem * random walk * photosynthetic factory Subject RIV: EI - Biotechnology ; Bionics Impact factor: 0.836, year: 2012

  13. Clinical utility of the four-quadrant model of facilitated learning: perspectives of experienced occupational therapists.

    Science.gov (United States)

    Greber, Craig; Ziviani, Jenny; Rodger, Sylvia

    2011-06-01

    This study explored the perspectives of experienced occupational therapists regarding teaching-learning approaches used during intervention. The aim was to ascertain the clinical utility of the Four-Quadrant Model of Facilitated Learning (4QM) (Greber, Ziviani, & Rodger, 2007a) by understanding how it might enhance clinical competency in applying teaching-learning modalities. Mixed methods were used to ascertain the perspectives of two groups of therapists with seven or more years' experience in either adult (n=8) or paediatric (n=7) practice. A pre-discussion questionnaire was used to prime participants for an initial focus group centred on understanding how participants used teaching-learning within occupational therapy intervention. Following a brief description of the 4QM, a further session explored participants' perspectives on the 4QM as a means of conceptualising and planning teaching-learning interventions. Irrespective of practice area, therapists considered teaching-learning approaches core to their practice, without necessarily identifying a clear process to guide their implementation. Proficiency in teaching-learning was generally seen to be gained through trial and error. Participants identified potential clinical applications for the 4QM as a useful structure to support the application of teaching-learning interventions, speculating that it would be particularly useful for novice clinicians. Participants endorsed the 4QM as a useful integrating framework to support the development of professional competencies related to planning interventions that use a teaching-learning approach. © 2011 The Authors. Australian Occupational Therapy Journal © 2011 Australian Association of Occupational Therapists.

  14. Feasibility and utility of applications of the common data model to multiple, disparate observational health databases.

    Science.gov (United States)

    Voss, Erica A; Makadia, Rupa; Matcho, Amy; Ma, Qianli; Knoll, Chris; Schuemie, Martijn; DeFalco, Frank J; Londhe, Ajit; Zhu, Vivienne; Ryan, Patrick B

    2015-05-01

    To evaluate the utility of applying the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM) across multiple observational databases within an organization and to apply standardized analytics tools for conducting observational research. Six deidentified patient-level datasets were transformed to the OMOP CDM. We evaluated the extent of information loss that occurred through the standardization process. We developed a standardized analytic tool to replicate the cohort construction process from a published epidemiology protocol and applied the analysis to all 6 databases to assess time-to-execution and comparability of results. Transformation to the CDM resulted in minimal information loss across all 6 databases. Exclusions of patients and observations were due to identified data quality issues in the source system; 96% to 99% of condition records and 90% to 99% of drug records were successfully mapped into the CDM using the standard vocabulary. The full cohort replication and descriptive baseline summary was executed for 2 cohorts in 6 databases in less than 1 hour. The standardization process improved data quality, increased efficiency, and facilitated cross-database comparisons to support a more systematic approach to observational research. Comparisons across data sources showed consistency in the impact of inclusion criteria using the protocol, and identified differences in patient characteristics and coding practices across databases. Standardizing data structure (through a CDM), content (through a standard vocabulary with source code mappings), and analytics can enable an institution to apply a network-based approach to observational research across multiple, disparate observational health databases. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.

  15. A model for a correlated random walk based on the ordered extension of pseudopodia.

    Directory of Open Access Journals (Sweden)

    Peter J M Van Haastert

    Cell migration in the absence of external cues is well described by a correlated random walk. Most single cells move by extending protrusions called pseudopodia. To deduce how cells walk, we have analyzed the formation of pseudopodia by Dictyostelium cells. We have observed that pseudopod formation is highly ordered, with two types of pseudopodia: first, de novo formation of pseudopodia at random positions on the cell body, and therefore in random directions; and second, pseudopod splitting near the tip of the current pseudopod in alternating right/left directions, leading to a persistent zig-zag trajectory. Here we analyzed the probability frequency distributions of the angles between pseudopodia and used this information to design a stochastic model for cell movement. Monte Carlo simulations show that the critical elements are the ratio of persistent splitting pseudopodia relative to random de novo pseudopodia, the left/right alternation, the angle between pseudopodia, and the variance of this angle. Experiments confirm predictions of the model, showing reduced persistence in mutants that are defective in pseudopod splitting and in mutants with an irregular cell surface.
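
    The critical elements listed above translate directly into a simulation; in this sketch the splitting probability, split angle and angular spread are illustrative values, not the fitted ones:

        import numpy as np

        rng = np.random.default_rng(0)
        p_split = 0.8                            # splitting vs. de novo ratio
        split_angle, angle_sd = np.deg2rad(55), np.deg2rad(15)

        pos, heading, side = np.zeros(2), 0.0, 1.0
        path = [pos.copy()]
        for _ in range(500):
            if rng.uniform() < p_split:          # split near the current tip
                heading += side * rng.normal(split_angle, angle_sd)
                side = -side                     # alternate right/left: zig-zag
            else:                                # de novo: any direction
                heading = rng.uniform(0.0, 2.0 * np.pi)
            pos = pos + np.array([np.cos(heading), np.sin(heading)])
            path.append(pos.copy())
        path = np.array(path)                    # a persistent correlated walk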

  16. Mathematical model of a utility firm. Final technical report, Part IIB

    Energy Technology Data Exchange (ETDEWEB)

    1983-08-21

    The aim of this project was to develop an understanding of the dynamical processes that evolve within an electric utility firm and outside it. This volume covers organizational dynamics and control, and planning under uncertainty. (DLC)

  17. Equivalence of effective medium and random resistor network models for disorder-induced unsaturating linear magnetoresistance

    Science.gov (United States)

    Ramakrishnan, Navneeth; Lai, Ying Tong; Lara, Silvia; Parish, Meera M.; Adam, Shaffique

    2017-12-01

    A linear unsaturating magnetoresistance at high perpendicular magnetic fields, together with a quadratic positive magnetoresistance at low fields, has been seen in many different experimental materials, ranging from silver chalcogenides and thin films of InSb to topological materials like graphene and Dirac semimetals. In the literature, two very different theoretical approaches have been used to explain this classical magnetoresistance as a consequence of sample disorder. The phenomenological random resistor network model constructs a grid of four terminal resistors, each with a varying random resistance. The effective medium theory model imagines a smoothly varying disorder potential that causes a continuous variation of the local conductivity. Here, we demonstrate numerically that both models belong to the same universality class and that a restricted class of the random resistor network is actually equivalent to the effective medium theory. Both models are also in good agreement with experiments on a diverse range of materials. Moreover, we show that in both cases, a single parameter, i.e., the ratio of the fluctuations in the carrier density to the average carrier density, completely determines the magnetoresistance profile.

  18. Random regression models for milk, fat and protein in Colombian Buffaloes

    Directory of Open Access Journals (Sweden)

    Naudin Hurtado-Lugo

    2015-01-01

    Objective. Covariance functions for additive genetic and permanent environmental effects and, subsequently, genetic parameters for test-day milk (MY), fat (FY) and protein (PY) yields and mozzarella cheese (MP) in buffaloes from Colombia were estimated using random regression models (RRM) with Legendre polynomials (LP). Materials and Methods. Test-day records of MY, FY, PY and MP from 1884 first lactations of buffalo cows from 228 sires were analyzed. The animals belonged to 14 herds in Colombia between 1995 and 2011. Ten monthly classes of days in milk were considered for test-day yields. Contemporary groups were defined as herd-year-month of the milk test-day. Random additive genetic, permanent environmental and residual effects were included in the model. Fixed effects included the contemporary group, linear and quadratic effects of age at calving, and the average lactation curve of the population, which was modeled by third-order LP. Random additive genetic and permanent environmental effects were estimated by RRM using third- to sixth-order LP. Residual variances were modeled using homogeneous and heterogeneous structures. Results. The heritabilities for MY, FY, PY and MP ranged from 0.38 to 0.05, 0.67 to 0.11, 0.50 to 0.07 and 0.50 to 0.11, respectively. Conclusions. In general, RRM are adequate to describe the genetic variation in test-day MY, FY, PY and MP in Colombian buffaloes.
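
    The covariate-building step of such models is easy to illustrate: days in milk are rescaled to [-1, 1] and expanded in normalized Legendre polynomials, whose coefficients then carry the random genetic and permanent environmental effects. The test-day ages and the order-3 choice below are illustrative:

        import numpy as np
        from numpy.polynomial import legendre

        dim = np.arange(15, 300, 30)                     # 10 monthly test-days
        x = 2 * (dim - dim.min()) / (dim.max() - dim.min()) - 1

        order = 3
        Phi = np.column_stack([
            np.sqrt((2 * j + 1) / 2) * legendre.legval(x, np.eye(order + 1)[j])
            for j in range(order + 1)
        ])
        print(Phi.shape)                                 # (10 test-days, 4 terms)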

  19. The Utility of the Prototype/Willingness Model in Predicting Alcohol Use among North American Indigenous Adolescents

    Science.gov (United States)

    Armenta, Brian E.; Hautala, Dane S.; Whitbeck, Les B.

    2015-01-01

    In the present study, we considered the utility of the prototype/willingness model in predicting alcohol use among North-American Indigenous adolescents. Specifically, using longitudinal data, we examined the associations among subjective drinking norms, positive drinker prototypes, drinking expectations (as a proxy of drinking willingness), and…

  20. Fire rehabilitation decisions at landscape scales: utilizing state-and-transition models developed through disturbance response grouping of ecological sites

    Science.gov (United States)

    Recognizing the utility of ecological sites and the associated state-and-transition model (STM) for decision support, the Bureau of Land Management in Nevada partnered with Nevada NRCS and the University of Nevada, Reno (UNR) in 2009 with the goal of creating a team that could (1) expedite developme...