WorldWideScience

Sample records for random utility models

  1. Hedonic travel cost and random utility models of recreation

    Energy Technology Data Exchange (ETDEWEB)

    Pendleton, L. [Univ. of Southern California, Los Angeles, CA (United States); Mendelsohn, R.; Davis, E.W. [Yale Univ., New Haven, CT (United States). School of Forestry and Environmental Studies

    1998-07-09

    Micro-economic theory began as an attempt to describe, predict and value the demand and supply of consumption goods. Quality was largely ignored at first, but economists have started to address quality within the theory of demand and specifically the question of site quality, which is an important component of land management. This paper demonstrates that hedonic and random utility models emanate from the same utility theoretical foundation, although they make different estimation assumptions. Using a theoretically consistent comparison, both approaches are applied to examine the quality of wilderness areas in the Southeastern US. Data were collected on 4778 visits to 46 trails in 20 different forest areas near the Smoky Mountains. Visitor data came from permits and an independent survey. The authors limited the data set to visitors from within 300 miles of the North Carolina and Tennessee border in order to focus the analysis on single purpose trips. When consistently applied, both models lead to results with similar signs but different magnitudes. Because the two models are equally valid, recreation studies should continue to use both models to value site quality. Further, practitioners should be careful not to make simplifying a priori assumptions which limit the effectiveness of both techniques.

  2. Random regret and random utility in the household purchase of a motor vehicle

    NARCIS (Netherlands)

    Beck, M.; Chorus, C.G.; Rose, J.M.; Hensher, D.A.

    2013-01-01

    Random utility maximisation is the preeminent behavioural theory used to model choices. An alternative paradigm, however, is random regret minimisation. While the majority of the literature examines the choices of individuals, this paper compares the choices of groups, as well as individuals, in both

  3. Rigorously testing multialternative decision field theory against random utility models.

    Science.gov (United States)

    Berkowitsch, Nicolas A J; Scheibehenne, Benjamin; Rieskamp, Jörg

    2014-06-01

    Cognitive models of decision making aim to explain the process underlying observed choices. Here, we test a sequential sampling model of decision making, multialternative decision field theory (MDFT; Roe, Busemeyer, & Townsend, 2001), on empirical grounds and compare it against 2 established random utility models of choice: the probit and the logit model. Using a within-subject experimental design, participants in 2 studies repeatedly chose among sets of options (consumer products) described on several attributes. The results of Study 1 showed that all models predicted participants' choices equally well. In Study 2, in which the choice sets were explicitly designed to distinguish the models, MDFT had an advantage in predicting the observed choices. Study 2 further revealed the occurrence of multiple context effects within single participants, indicating an interdependent evaluation of choice options and correlations between different context effects. In sum, the results indicate that sequential sampling models can provide relevant insights into the cognitive process underlying preferential choices and thus can lead to better choice predictions. PsycINFO Database Record (c) 2014 APA, all rights reserved.
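The two random utility baselines named above share the same softmax-style choice rule in the logit case. A minimal sketch of multinomial logit choice probabilities (the utility values below are hypothetical, not from the paper):

```python
import numpy as np

def logit_choice_probs(utilities):
    """Multinomial logit: P(i) = exp(V_i) / sum_j exp(V_j).

    The closed form arises from adding i.i.d. Gumbel noise to each
    deterministic utility V_j in a random utility model.
    """
    v = np.asarray(utilities, dtype=float)
    v = v - v.max()              # subtract the max for numerical stability
    expv = np.exp(v)
    return expv / expv.sum()

# Three consumer products with hypothetical deterministic utilities:
probs = logit_choice_probs([1.0, 0.5, 0.0])
```

The probit model replaces the Gumbel noise with correlated normal noise, which has no closed-form probability and is usually evaluated by simulation.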

  4. A random utility model of delay discounting and its application to people with externalizing psychopathology.

    Science.gov (United States)

    Dai, Junyi; Gunn, Rachel L; Gerst, Kyle R; Busemeyer, Jerome R; Finn, Peter R

    2016-10-01

    Previous studies have demonstrated that working memory capacity plays a central role in delay discounting in people with externalizing psychopathology. These studies used a hyperbolic discounting model, and its single parameter, a measure of delay discounting, was estimated using the standard method of searching for indifference points between intertemporal options. However, there are several problems with this approach. First, the deterministic perspective on delay discounting underlying the indifference point method might be inappropriate. Second, the estimation procedure using the R2 measure often leads to poor model fit. Third, when parameters are estimated using indifference points only, much of the information collected in a delay discounting decision task is wasted. To overcome these problems, this article proposes a random utility model of delay discounting. The proposed model has 2 parameters, 1 for delay discounting and 1 for choice variability. It was fit to choice data obtained from a recently published data set using both maximum-likelihood and Bayesian parameter estimation. As in previous studies, the delay discounting parameter was significantly associated with both externalizing problems and working memory capacity. Furthermore, choice variability was also found to be significantly associated with both variables. This finding suggests that randomness in decisions may be a mechanism by which externalizing problems and low working memory capacity are associated with poor decision making. The random utility model thus has the advantage of disclosing the role of choice variability, which had been masked by the traditional deterministic model. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
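The two-parameter idea described above can be sketched by pairing a hyperbolic value function with a logistic choice rule, where the slope parameter plays the role of choice consistency (its inverse reflects choice variability). All function names and parameter values below are illustrative, not the authors' implementation:

```python
import math

def discounted_value(amount, delay, k):
    """Hyperbolic discounting: V = A / (1 + k * D)."""
    return amount / (1.0 + k * delay)

def p_choose_later(a_ss, d_ss, a_ll, d_ll, k, s):
    """Random utility choice rule: logistic in the difference of discounted
    values. k is the discount rate; s is choice consistency, so a low s
    means high choice variability."""
    dv = discounted_value(a_ll, d_ll, k) - discounted_value(a_ss, d_ss, k)
    return 1.0 / (1.0 + math.exp(-s * dv))

# $40 in 30 days versus $25 now, with hypothetical k and s:
p = p_choose_later(25, 0, 40, 30, k=0.05, s=0.5)
```

Fitting k and s jointly by maximum likelihood uses every recorded choice, rather than only the indifference points the abstract criticizes.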

  5. A Logistic Regression Model with a Hierarchical Random Error Term for Analyzing the Utilization of Public Transport

    Directory of Open Access Journals (Sweden)

    Chong Wei

    2015-01-01

    Logistic regression models have been widely used in previous studies to analyze public transport utilization. These studies have shown travel time to be an indispensable variable for such analysis and usually consider it to be a deterministic variable. This formulation does not allow us to capture travelers’ perception error regarding travel time, and recent studies have indicated that this error can have a significant effect on modal choice behavior. In this study, we propose a logistic regression model with a hierarchical random error term. The proposed model adds a new random error term for the travel time variable. This term structure enables us to investigate travelers’ perception error regarding travel time from a given choice behavior dataset. We also propose an extended model that allows constraining the sign of this error in the model. We develop two Gibbs samplers to estimate the basic hierarchical model and the extended model. The performance of the proposed models is examined using a well-known dataset.

  6. Utility based maintenance analysis using a Random Sign censoring model

    International Nuclear Information System (INIS)

    Andres Christen, J.; Ruggeri, Fabrizio; Villa, Enrique

    2011-01-01

    Industrial systems subject to failures are usually inspected when there are evident signs of an imminent failure. Maintenance is therefore performed at a random time, somehow dependent on the failure mechanism. A competing risk model, namely a Random Sign model, is considered to relate failure and maintenance times. We propose a novel Bayesian analysis of the model and apply it to actual data from a water pump in an oil refinery. The design of an optimal maintenance policy is then discussed under a formal decision theoretic approach, analyzing the goodness of the current maintenance policy and making decisions about the optimal maintenance time.

  7. A random regret minimization model of travel choice

    NARCIS (Netherlands)

    Chorus, C.G.; Arentze, T.A.; Timmermans, H.J.P.

    2008-01-01

    This paper presents an alternative to Random Utility-Maximization models of travel choice. Our Random Regret-Minimization model is rooted in Regret Theory and provides several useful features for travel demand analysis. Firstly, it allows for the possibility that choices between travel

  8. Entropy Characterization of Random Network Models

    Directory of Open Access Journals (Sweden)

    Pedro J. Zufiria

    2017-06-01

    This paper elaborates on the Random Network Model (RNM) as a mathematical framework for modelling and analyzing the generation of complex networks. Such framework allows the analysis of the relationship between several network characterizing features (link density, clustering coefficient, degree distribution, connectivity, etc.) and entropy-based complexity measures, providing new insight on the generation and characterization of random networks. Some theoretical and computational results illustrate the utility of the proposed framework.
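One of the entropy-based complexity measures the abstract alludes to is the Shannon entropy of a network's degree distribution. A short sketch (the Erdos-Renyi example is my choice, not taken from the paper):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy (in bits) of a discrete distribution,
    e.g. a network's degree distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Degrees in an Erdos-Renyi G(n, p) random graph follow Binomial(n - 1, p):
n, p = 100, 0.05
deg_dist = [math.comb(n - 1, k) * p**k * (1 - p)**(n - 1 - k) for k in range(n)]
H = shannon_entropy(deg_dist)
```

Comparing H across generative models (scale-free, small-world, purely random) is one way to relate degree structure to complexity.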

  9. Consumer preferences for alternative fuel vehicles: Comparing a utility maximization and a regret minimization model

    International Nuclear Information System (INIS)

    Chorus, Caspar G.; Koetse, Mark J.; Hoen, Anco

    2013-01-01

    This paper presents a utility-based and a regret-based model of consumer preferences for alternative fuel vehicles, based on a large-scale stated choice-experiment held among company car leasers in The Netherlands. Estimation and application of random utility maximization and random regret minimization discrete choice models shows that while the two models achieve almost identical fit with the data and differ only marginally in terms of predictive ability, they generate rather different choice probability-simulations and policy implications. The most eye-catching difference between the two models is that the random regret minimization model accommodates a compromise-effect, as it assigns relatively high choice probabilities to alternative fuel vehicles that perform reasonably well on each dimension instead of having a strong performance on some dimensions and a poor performance on others. - Highlights: • Utility- and regret-based models of preferences for alternative fuel vehicles. • Estimation based on stated choice-experiment among Dutch company car leasers. • Models generate rather different choice probabilities and policy implications. • Regret-based model accommodates a compromise-effect
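The compromise effect highlighted above can be reproduced with a small sketch of the classical random regret minimization regret function. The attribute values and taste parameters below are hypothetical, chosen only to show the mechanism:

```python
import numpy as np

def rrm_choice_probs(X, beta):
    """Random regret minimization choice probabilities.

    Regret of alternative i: sum over rival alternatives j and attributes m
    of ln(1 + exp(beta_m * (x_jm - x_im))). Choice probabilities follow a
    logit over negative regrets.
    """
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    regret = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if j != i:
                regret[i] += np.log1p(np.exp(beta * (X[j] - X[i]))).sum()
    ev = np.exp(-(regret - regret.min()))   # shift for numerical stability
    return ev / ev.sum()

# Two "extreme" options and a compromise that does reasonably well on both
# attributes (hypothetical values):
X = [[1.0, 0.0],   # strong on attribute 1, weak on attribute 2
     [0.5, 0.5],   # compromise
     [0.0, 1.0]]   # weak on attribute 1, strong on attribute 2
probs = rrm_choice_probs(X, np.array([1.0, 1.0]))
```

Under these symmetric inputs the compromise alternative accrues the least regret and therefore the highest choice probability, which a linear-in-attributes utility model cannot produce.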

  10. Insider Models with Finite Utility in Markets with Jumps

    International Nuclear Information System (INIS)

    Kohatsu-Higa, Arturo; Yamazato, Makoto

    2011-01-01

    In this article we consider, under a Lévy process model for the stock price, the utility optimization problem for an insider agent whose additional information is the final price of the stock blurred with an additional independent noise which vanishes as the final time approaches. Our main interest is establishing conditions under which the utility of the insider is finite. Mathematically, the problem entails the study of a “progressive” enlargement of filtration with respect to random measures. We study the jump structure of the process which leads to the conclusion that in most cases the utility of the insider is finite and his optimal portfolio is bounded. This can be explained financially by the high risks involved in models with jumps.

  11. Estimating safety effects of pavement management factors utilizing Bayesian random effect models.

    Science.gov (United States)

    Jiang, Ximiao; Huang, Baoshan; Zaretzki, Russell L; Richards, Stephen; Yan, Xuedong

    2013-01-01

    Previous studies of pavement management factors that relate to the occurrence of traffic-related crashes are rare. Traditional research has mostly employed summary statistics of bidirectional pavement quality measurements in extended longitudinal road segments over a long time period, which may cause a loss of important information and result in biased parameter estimates. The research presented in this article focuses on crash risk of roadways with overall fair to good pavement quality. Real-time and location-specific data were employed to estimate the effects of pavement management factors on the occurrence of crashes. This research is based on the crash data and corresponding pavement quality data for the Tennessee state route highways from 2004 to 2009. The potential temporal and spatial correlations among observations caused by unobserved factors were considered. Overall 6 models were built accounting for no correlation, temporal correlation only, and both the temporal and spatial correlations. These models included Poisson, negative binomial (NB), one random effect Poisson and negative binomial (OREP, ORENB), and two random effect Poisson and negative binomial (TREP, TRENB) models. The Bayesian method was employed to construct these models. The inference is based on the posterior distribution from the Markov chain Monte Carlo (MCMC) simulation. These models were compared using the deviance information criterion. Analysis of the posterior distribution of parameter coefficients indicates that the pavement management factors indexed by Present Serviceability Index (PSI) and Pavement Distress Index (PDI) had significant impacts on the occurrence of crashes, whereas the variable rutting depth was not significant. Among other factors, lane width, median width, type of terrain, and posted speed limit were significant in affecting crash frequency. The findings of this study indicate that a reduction in pavement roughness would reduce the likelihood of traffic crashes.

  12. Unified Model for Generation Complex Networks with Utility Preferential Attachment

    International Nuclear Information System (INIS)

    Wu Jianjun; Gao Ziyou; Sun Huijun

    2006-01-01

    In this paper, based on the utility preferential attachment, we propose a new unified model to generate different network topologies such as scale-free, small-world and random networks. Moreover, a new network structure named super scale network is found, which has monopoly characteristic in our simulation experiments. Finally, the characteristics of this new network are given.

  13. Accounting for perception in random regret choice models: Weberian and generalized Weberian specifications

    NARCIS (Netherlands)

    Jang, S.; Rasouli, S.; Timmermans, H.J.P.

    2016-01-01

    Recently, regret-based choice models have been introduced in the travel behavior research community as an alternative to expected/random utility models. The fundamental proposition underlying regret theory is that individuals minimize the amount of regret they (are expected to) experience when

  14. Designing management strategies for carbon dioxide storage and utilization under uncertainty using inexact modelling

    Science.gov (United States)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2017-06-01

    Effective application of carbon capture, utilization and storage (CCUS) systems could help to alleviate the influence of climate change by reducing carbon dioxide (CO2) emissions. The research objective of this study is to develop an equilibrium chance-constrained programming model with bi-random variables (ECCP model) for supporting the CCUS management system under random circumstances. The major advantage of the ECCP model is that it tackles random variables as bi-random variables with a normal distribution, where the mean values follow a normal distribution. This could avoid irrational assumptions and oversimplifications in the process of parameter design and enrich the theory of stochastic optimization. The ECCP model is solved by an equilibrium chance-constrained programming algorithm, which provides convenience for decision makers to rank the solution set using the natural order of real numbers. The ECCP model is applied to a CCUS management problem, and the solutions could be useful in helping managers to design and generate rational CO2-allocation patterns under complexities and uncertainties.
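A plain (single-level) chance constraint has a simple deterministic equivalent, which is the building block the ECCP model generalizes; the bi-random case additionally places a normal distribution on the mean itself. A sketch with hypothetical numbers, not the paper's model:

```python
from statistics import NormalDist

def chance_constraint_bound(mu, sigma, alpha):
    """Deterministic equivalent of the chance constraint
    P(x <= B) >= alpha for a normal capacity B ~ N(mu, sigma^2):
    x <= mu + sigma * Phi^{-1}(1 - alpha)."""
    return mu + sigma * NormalDist().inv_cdf(1.0 - alpha)

# An allocation x must stay within a random capacity (mean 100, sd 10)
# with 95% confidence:
x_max = chance_constraint_bound(mu=100.0, sigma=10.0, alpha=0.95)
```

Raising the confidence level alpha shrinks the feasible allocation, which is how the model trades off reliability against CO2-allocation flexibility.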

  15. Random Intercept and Random Slope 2-Level Multilevel Models

    Directory of Open Access Journals (Sweden)

    Rehan Ahmad Khan

    2012-11-01

    Random intercept model and random intercept & random slope model carrying two levels of hierarchy in the population are presented and compared with the traditional regression approach. The impact of students’ satisfaction on their grade point average (GPA was explored with and without controlling teachers influence. The variation at level-1 can be controlled by introducing the higher levels of hierarchy in the model. The fanning of the fitted lines reflects the variation of student grades across teachers.
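The fanning of per-teacher fitted lines under a random slope can be illustrated with a small simulation. All parameter values are hypothetical, and plain NumPy per-group fits stand in for a full multilevel estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-level simulation: students (level 1) nested in teachers (level 2).
n_teachers, n_students = 30, 50
gamma0, gamma1 = 2.0, 0.4            # fixed intercept and slope (GPA on satisfaction)
u0 = rng.normal(0, 0.5, n_teachers)  # random intercepts, one per teacher
u1 = rng.normal(0, 0.2, n_teachers)  # random slopes -> fanning of fitted lines

slopes = []
for t in range(n_teachers):
    x = rng.uniform(1, 5, n_students)                              # satisfaction scores
    y = (gamma0 + u0[t]) + (gamma1 + u1[t]) * x + rng.normal(0, 0.3, n_students)
    b1, b0 = np.polyfit(x, y, 1)                                   # per-teacher OLS fit
    slopes.append(b1)

slopes = np.array(slopes)
```

The spread of the per-teacher slopes around the fixed slope is what the random slope variance component captures; with only a random intercept the fitted lines would be parallel.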

  16. Utility-free heuristic models of two-option choice can mimic predictions of utility-stage models under many conditions

    Science.gov (United States)

    Piantadosi, Steven T.; Hayden, Benjamin Y.

    2015-01-01

    Economists often model choices as if decision-makers assign each option a scalar value variable, known as utility, and then select the option with the highest utility. It remains unclear whether as-if utility models describe real mental and neural steps in choice. Although choices alone cannot prove the existence of a utility stage, utility transformations are often taken to provide the most parsimonious or psychologically plausible explanation for choice data. Here, we show that it is possible to mathematically transform a large set of common utility-stage two-option choice models (specifically ones in which dimensions can be decomposed into additive functions) into a heuristic model (specifically, a dimensional prioritization heuristic) that has no utility computation stage. We then show that under a range of plausible assumptions, both classes of model predict similar neural responses. These results highlight the difficulties in using neuroeconomic data to infer the existence of a value stage in choice. PMID:25914613

  17. Utility-free heuristic models of two-option choice can mimic predictions of utility-stage models under many conditions

    Directory of Open Access Journals (Sweden)

    Steven T Piantadosi

    2015-04-01

    Economists often model choices as if decision-makers assign each option a scalar value variable, known as utility, and then select the option with the highest utility. It remains unclear whether as-if utility models describe real mental and neural steps in choice. Although choices alone cannot prove the existence of a utility stage in choice, utility transformations are often taken to provide the most parsimonious or psychologically plausible explanation for choice data. Here, we show that it is possible to mathematically transform a large set of common utility-stage two-option choice models (specifically, ones in which dimensions are linearly separable) into a psychologically plausible heuristic model (specifically, a dimensional prioritization heuristic) that has no utility computation stage. We then show that under a range of plausible assumptions, both classes of model predict similar neural responses. These results highlight the difficulties in using neuroeconomic data to infer the existence of a value stage in choice.

  18. A randomized trial of treatments for high-utilizing somatizing patients.

    Science.gov (United States)

    Barsky, Arthur J; Ahern, David K; Bauer, Mark R; Nolido, Nyryan; Orav, E John

    2013-11-01

    Somatization and hypochondriacal health anxiety are common sources of distress, impairment, and costly medical utilization in primary care practice. A range of interventions is needed to improve the care of these patients. To determine the effectiveness of two cognitive behavioral interventions for high-utilizing, somatizing patients, using the resources found in a routine care setting. Patients were randomly assigned to a two-step cognitive behavioral treatment program accompanied by a training seminar for their primary care physicians, or to relaxation training. Providers routinely working in these patients' primary care practices delivered the cognitive behavior therapy and relaxation training. Assessments were completed immediately prior to treatment and 6 and 12 months later. Eighty-nine medical outpatients with elevated levels of somatization, hypochondriacal health anxiety, and medical care utilization. Somatization and hypochondriasis, overall psychiatric distress, and role impairment were assessed with well-validated, self-report questionnaires. Outpatient visits and medical care costs before and after the intervention were obtained from the encounter claims database. At 6-month and 12-month follow-up, both intervention groups showed significant improvements in somatization, hypochondriacal symptoms, overall psychiatric distress, and role function. They also reduced the ambulatory visits and costs of these high utilizing outpatients.

  19. Estimating overall exposure effects for the clustered and censored outcome using random effect Tobit regression models.

    Science.gov (United States)

    Wang, Wei; Griswold, Michael E

    2016-11-30

    The random effect Tobit model is a regression model that accommodates both left- and/or right-censoring and within-cluster dependence of the outcome variable. Regression coefficients of random effect Tobit models have conditional interpretations on a constructed latent dependent variable and do not provide inference of overall exposure effects on the original outcome scale. Marginalized random effects model (MREM) permits likelihood-based estimation of marginal mean parameters for the clustered data. For random effect Tobit models, we extend the MREM to marginalize over both the random effects and the normal space and boundary components of the censored response to estimate overall exposure effects at population level. We also extend the 'Average Predicted Value' method to estimate the model-predicted marginal means for each person under different exposure status in a designated reference group by integrating over the random effects and then use the calculated difference to assess the overall exposure effect. The maximum likelihood estimation is proposed utilizing a quasi-Newton optimization algorithm with Gauss-Hermite quadrature to approximate the integration of the random effects. We use these methods to carefully analyze two real datasets. Copyright © 2016 John Wiley & Sons, Ltd.
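The Gauss-Hermite step mentioned above can be sketched in the simplest setting: the marginal mean of an outcome left-censored at zero, integrating a normal random intercept out by quadrature. The notation and values are mine, not the authors':

```python
import math
import numpy as np

def censored_normal_mean(mu, sigma):
    """E[max(0, Z)] for Z ~ N(mu, sigma^2): the Tobit (left-censored at 0)
    conditional mean."""
    z = mu / sigma
    phi = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    Phi = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return mu * Phi + sigma * phi

def marginal_mean(eta, sigma_eps, sigma_b, n_nodes=30):
    """Marginalize the conditional Tobit mean over a random intercept
    b ~ N(0, sigma_b^2) using Gauss-Hermite quadrature (change of
    variables b = sqrt(2) * sigma_b * t)."""
    t, w = np.polynomial.hermite.hermgauss(n_nodes)
    b = math.sqrt(2) * sigma_b * t
    vals = [censored_normal_mean(eta + bi, sigma_eps) for bi in b]
    return float(np.dot(w, vals) / math.sqrt(math.pi))
```

Because the censored mean is convex, the marginal mean exceeds the conditional mean evaluated at b = 0, which is exactly why conditional coefficients do not translate directly into overall exposure effects.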

  1. The application of the random regret minimization model to drivers’ choice of crash avoidance maneuvers

    DEFF Research Database (Denmark)

    Kaplan, Sigal; Prato, Carlo Giacomo

    2012-01-01

    This study explores the plausibility of regret minimization as behavioral paradigm underlying the choice of crash avoidance maneuvers. Alternatively to previous studies that considered utility maximization, this study applies the random regret minimization (RRM) model while assuming that drivers ...

  2. Generalization of Random Intercept Multilevel Models

    Directory of Open Access Journals (Sweden)

    Rehan Ahmad Khan

    2013-10-01

    The concept of random intercept models in a multilevel model, developed by Goldstein (1986), has been extended to k levels. The random variation in intercepts at individual level is marginally split into components by incorporating higher levels of hierarchy in the single level model. So, one can control the random variation in intercepts by incorporating the higher levels in the model.

  3. The utility target market model

    International Nuclear Information System (INIS)

    Leng, G.J.; Martin, J.

    1994-01-01

    A new model (the Utility Target Market Model) is used to evaluate the economic benefits of photovoltaic (PV) power systems located at the electrical utility customer site. These distributed PV demand-side generation systems can be evaluated in a similar manner to other demand-side management technologies. The energy and capacity values of an actual PV system located in the service area of the New England Electrical System (NEES) are the two utility benefits evaluated. The annual stream of energy and capacity benefits calculated for the utility are converted to the installed cost per watt that the utility should be willing to invest to receive this benefit stream. Different discount rates are used to show the sensitivity of the allowable installed cost of the PV systems to a utility's average cost of capital. Capturing both the energy and capacity benefits of these relatively environmentally friendly distributed generators, NEES should be willing to invest in this technology when the installed cost per watt declines to ca $2.40 using NEES' rated cost of capital (8.78%). If a social discount rate of 3% is used, installation should be considered when installed cost approaches $4.70/W. Since recent installations in the Sacramento Municipal Utility District have cost between $7-8/W, cost-effective utility applications of PV are close. 22 refs., 1 fig., 2 tabs

  4. Rank dependent expected utility models of tax evasion.

    OpenAIRE

    Erling Eide

    2001-01-01

    In this paper the rank-dependent expected utility theory is substituted for the expected utility theory in models of tax evasion. It is demonstrated that the comparative statics results of the expected utility, portfolio choice model of tax evasion carry over to the more general rank dependent expected utility model.

  5. Utilities for high performance dispersion model PHYSIC

    International Nuclear Information System (INIS)

    Yamazawa, Hiromi

    1992-09-01

    The description and usage of the utilities for the dispersion calculation model PHYSIC were summarized. The model was developed in the study of developing high performance SPEEDI with the purpose of introducing meteorological forecast function into the environmental emergency response system. The procedure of PHYSIC calculation consists of three steps; preparation of relevant files, creation and submission of JCL, and graphic output of results. A user can carry out the above procedure with the help of the Geographical Data Processing Utility, the Model Control Utility, and the Graphic Output Utility. (author)

  6. Encoding Sequential Information in Semantic Space Models: Comparing Holographic Reduced Representation and Random Permutation

    Directory of Open Access Journals (Sweden)

    Gabriel Recchia

    2015-01-01

    Circular convolution and random permutation have each been proposed as neurally plausible binding operators capable of encoding sequential information in semantic memory. We perform several controlled comparisons of circular convolution and random permutation as means of encoding paired associates as well as encoding sequential information. Random permutations outperformed convolution with respect to the number of paired associates that can be reliably stored in a single memory trace. Performance was equal on semantic tasks when using a small corpus, but random permutations were ultimately capable of achieving superior performance due to their higher scalability to large corpora. Finally, “noisy” permutations in which units are mapped to other units arbitrarily (no one-to-one mapping) perform nearly as well as true permutations. These findings increase the neurological plausibility of random permutations and highlight their utility in vector space models of semantics.
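The two binding operators compared above can be sketched in a few lines: circular convolution (the holographic reduced representation operator) decodes approximately via circular correlation, while a permutation decodes exactly via its inverse. Dimensionality and vectors below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 1024
a, b = rng.normal(0.0, 1.0 / np.sqrt(d), size=(2, d))  # random index vectors

# Circular convolution binding, computed in O(d log d) via FFT;
# decoding uses circular correlation and recovers b only approximately.
bound = np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), d)
decoded = np.fft.irfft(np.conj(np.fft.rfft(a)) * np.fft.rfft(bound), d)

# Random permutation binding: reorder the units of b; applying the
# inverse permutation recovers b exactly.
perm = rng.permutation(d)
inv = np.argsort(perm)
recovered = b[perm][inv]

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
```

The exact invertibility of permutations (versus the noisy decoding of convolution) is one intuition for their higher paired-associate capacity reported above.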

  7. A comparison of observation-level random effect and Beta-Binomial models for modelling overdispersion in Binomial data in ecology & evolution.

    Science.gov (United States)

    Harrison, Xavier A

    2015-01-01

    Overdispersion is a common feature of models of biological data, but researchers often fail to model the excess variation driving the overdispersion, resulting in biased parameter estimates and standard errors. Quantifying and modeling overdispersion when it is present is therefore critical for robust biological inference. One means to account for overdispersion is to add an observation-level random effect (OLRE) to a model, where each data point receives a unique level of a random effect that can absorb the extra-parametric variation in the data. Although some studies have investigated the utility of OLRE to model overdispersion in Poisson count data, studies doing so for Binomial proportion data are scarce. Here I use a simulation approach to investigate the ability of both OLRE models and Beta-Binomial models to recover unbiased parameter estimates in mixed effects models of Binomial data under various degrees of overdispersion. In addition, as ecologists often fit random intercept terms to models when the random effect sample size is low, I also compared the performance of both model types under a range of random effect sample sizes when overdispersion is present. Simulation results revealed that the efficacy of OLRE depends on the process that generated the overdispersion; OLRE failed to cope with overdispersion generated from a Beta-Binomial mixture model, leading to biased slope and intercept estimates, but performed well for overdispersion generated by adding random noise to the linear predictor. Comparison of parameter estimates from an OLRE model with those from its corresponding Beta-Binomial model readily identified when OLRE were performing poorly due to disagreement between effect sizes, and this strategy should be employed whenever OLRE are used for Binomial data to assess their reliability. Beta-Binomial models performed well across all contexts, but showed a tendency to underestimate effect sizes when modelling non-Beta-Binomial data. Finally, both OLRE and Beta-Binomial models performed
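The Beta-Binomial overdispersion mechanism examined above can be illustrated with a short simulation; the sample sizes and parameter values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

def beta_binomial(n, mu, phi, size):
    """Draw overdispersed counts: p ~ Beta with mean mu and precision phi,
    then k ~ Binomial(n, p)."""
    p = rng.beta(mu * phi, (1.0 - mu) * phi, size)
    return rng.binomial(n, p)

n, mu = 20, 0.3
k_bb = beta_binomial(n, mu, phi=5, size=20_000)   # Beta-Binomial mixture
k_bin = rng.binomial(n, mu, 20_000)               # plain Binomial baseline

# Both samples share the mean n * mu, but the Beta-Binomial variance
# exceeds n * mu * (1 - mu) by the factor 1 + (n - 1) / (phi + 1).
```

An OLRE model instead injects normal noise into the linear predictor on the logit scale, which (per the simulations above) produces overdispersion of a different shape than this mixture.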

  8. Utilization of services in a randomized trial testing phone- and web-based interventions for smoking cessation.

    Science.gov (United States)

    Zbikowski, Susan M; Jack, Lisa M; McClure, Jennifer B; Deprey, Mona; Javitz, Harold S; McAfee, Timothy A; Catz, Sheryl L; Richards, Julie; Bush, Terry; Swan, Gary E

    2011-05-01

    Phone counseling has become standard for behavioral smoking cessation treatment. Newer options include Web and integrated phone-Web treatment. No prior research, to our knowledge, has systematically compared the effectiveness of these three treatment modalities in a randomized trial. Understanding how utilization varies by mode, the impact of utilization on outcomes, and predictors of utilization across each mode could lead to improved treatments. One thousand two hundred and two participants were randomized to phone, Web, or combined phone-Web cessation treatment. Services varied by modality and were tracked using automated systems. All participants received 12 weeks of varenicline, printed guides, an orientation call, and access to a phone support line. Self-report data were collected at baseline and 6-month follow-up. Overall, participants utilized phone services more often than the Web-based services. Among treatment groups with Web access, a significant proportion logged in only once (37% phone-Web, 41% Web), and those in the phone-Web group logged in less often than those in the Web group (mean = 2.4 vs. 3.7, p = .0001). Use of the phone also was correlated with increased use of the Web. In multivariate analyses, greater use of the phone- or Web-based services was associated with higher cessation rates. Finally, older age and the belief that certain treatments could improve success were consistent predictors of greater utilization across groups. Other predictors varied by treatment group. Opportunities for enhancing treatment utilization exist, particularly for Web-based programs. Increasing utilization more broadly could result in better overall treatment effectiveness for all intervention modalities.

  9. Resource allocation on computational grids using a utility model and the knapsack problem

    CERN Document Server

Van der Ster, Daniel C; Parra-Hernandez, Rafael; Sobie, Randall J

    2009-01-01

    This work introduces a utility model (UM) for resource allocation on computational grids and formulates the allocation problem as a variant of the 0–1 multichoice multidimensional knapsack problem. The notion of task-option utility is introduced, and it is used to effect allocation policies. We present a variety of allocation policies, which are expressed as functions of metrics that are both intrinsic and external to the task and resources. An external user-defined credit-value metric is shown to allow users to intervene in the allocation of urgent or low priority tasks. The strategies are evaluated in simulation against random workloads as well as those drawn from real systems. We measure the sensitivity of the UM-derived schedules to variations in the allocation policies and their corresponding utility functions. The UM allocation strategy is shown to optimally allocate resources congruent with the chosen policies.
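The allocation idea in this record can be sketched as a greedy heuristic for the 0-1 multichoice multidimensional knapsack: each task offers several resource options with a task-option utility, and at most one option per task is selected subject to capacity limits. This is an illustrative sketch only (the paper evaluates richer policies in simulation); the task data and the utility-per-demand ranking rule are invented for demonstration:

```python
# Hypothetical sketch: allocate grid tasks to resource options greedily,
# as an approximation to the 0-1 multichoice multidimensional knapsack.
# Each task has several options (utility, resource demands); pick at most
# one option per task without exceeding the resource capacities.

def allocate(tasks, capacity):
    """tasks: list of option lists; each option is (utility, demands dict)."""
    remaining = dict(capacity)
    chosen = {}
    # Rank candidate options by utility per unit of aggregate demand.
    candidates = [
        (u / max(sum(d.values()), 1e-9), t_id, u, d)
        for t_id, options in enumerate(tasks)
        for (u, d) in options
    ]
    for _, t_id, u, d in sorted(candidates, key=lambda c: c[0], reverse=True):
        if t_id in chosen:
            continue  # multichoice constraint: at most one option per task
        if all(d.get(k, 0) <= remaining[k] for k in remaining):
            chosen[t_id] = (u, d)
            for k in d:
                remaining[k] -= d[k]
    return chosen

tasks = [
    [(10, {"cpu": 4, "mem": 2}), (6, {"cpu": 2, "mem": 2})],
    [(9, {"cpu": 2, "mem": 1})],
]
print(allocate(tasks, {"cpu": 6, "mem": 9}))  # both tasks fit; total utility 19
```

A greedy ranking is not guaranteed optimal for multidimensional knapsacks; exact solutions would require integer programming or dynamic programming over the capacity dimensions.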

  10. Estimating the demand for drop-off recycling sites: a random utility travel cost approach.

    Science.gov (United States)

    Sidique, Shaufique F; Lupi, Frank; Joshi, Satish V

    2013-09-30

Drop-off recycling is one of the most widely adopted recycling programs in the United States. Despite its wide implementation, relatively little literature addresses the demand for drop-off recycling. This study examines the demand for drop-off recycling sites as a function of travel costs and various site characteristics using the random utility model (RUM). The findings of this study indicate that increased travel costs significantly reduce the frequency of visits to drop-off sites, implying that the usage pattern of a site is influenced by its location relative to where people live. This study also demonstrates that site-specific characteristics such as hours of operation, the number of recyclables accepted, acceptance of commingled recyclables, and acceptance of yard waste affect the frequency of visits to drop-off sites.
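The random utility travel cost logic can be illustrated with a conditional logit over sites, in which a site's systematic utility falls with travel cost and rises with site attributes; the coefficients and site data below are invented, not the study's estimates:

```python
import math

# Illustrative sketch (not the paper's estimated model): a conditional logit
# over drop-off sites with systematic utility
#   V_j = -beta_tc * travel_cost_j + beta_q * quality_j.

def site_choice_probs(travel_costs, qualities, beta_tc=0.1, beta_q=0.5):
    v = [-beta_tc * tc + beta_q * q for tc, q in zip(travel_costs, qualities)]
    m = max(v)                       # subtract max for numerical stability
    expv = [math.exp(x - m) for x in v]
    s = sum(expv)
    return [e / s for e in expv]

# A nearby low-quality site vs. a distant high-quality site.
probs = site_choice_probs(travel_costs=[2.0, 10.0], qualities=[1.0, 3.0])
print(probs)  # raising a site's travel cost lowers its choice probability
```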

  11. Random regret-based discrete-choice modelling: an application to healthcare.

    Science.gov (United States)

    de Bekker-Grob, Esther W; Chorus, Caspar G

    2013-07-01

A new modelling approach for analysing data from discrete-choice experiments (DCEs), based on the notion of regret minimization-driven choice behaviour, has recently been developed in transport economics. This so-called Random Regret Minimization (RRM) approach forms an alternative to the dominant Random Utility Maximization (RUM) approach. The RRM approach is able to model semi-compensatory choice behaviour and compromise effects while being as parsimonious and formally tractable as the RUM approach. Our objectives were to introduce the RRM modelling approach to healthcare-related decisions and to investigate its usefulness in this domain. Using data from DCEs aimed at determining valuations of attributes of osteoporosis drug treatments and human papillomavirus (HPV) vaccinations, we empirically compared RRM models, RUM models and Hybrid RUM-RRM models in terms of goodness of fit, parameter ratios and predicted choice probabilities. In terms of model fit, the RRM model did not outperform the RUM model significantly in the case of the osteoporosis DCE data (p = 0.21), whereas in the case of the HPV DCE data, the Hybrid RUM-RRM model significantly outperformed the RUM model, and the choice probabilities implied by the two models can vary substantially. Differences in model fit between RUM, RRM and Hybrid RUM-RRM were found to be small. Although our study did not show significant differences in parameter ratios, the RRM and Hybrid RUM-RRM models did feature considerable differences in terms of the trade-offs implied by these ratios. In combination, our results suggest that the RRM and Hybrid RUM-RRM modelling approaches hold the potential of offering new and policy-relevant insights for health researchers and policy makers.
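The contrast between RUM and RRM hinges on the regret function. In Chorus's RRM specification, the regret of alternative i sums ln(1 + exp(β_m(x_jm − x_im))) over competing alternatives j and attributes m, and choice probabilities follow a logit on negative regret. A minimal sketch with made-up treatment attributes:

```python
import math

# Hedged sketch of the core RRM idea, not the authors' estimated healthcare
# models: regret of alternative i is
#   R_i = sum_{j != i} sum_m ln(1 + exp(beta_m * (x_jm - x_im))),
# and choice probabilities are a logit on negative regret.

def rrm_probs(X, betas):
    n = len(X)
    regrets = []
    for i in range(n):
        r = sum(
            math.log1p(math.exp(b * (X[j][m] - X[i][m])))
            for j in range(n) if j != i
            for m, b in enumerate(betas)
        )
        regrets.append(r)
    expnr = [math.exp(-r) for r in regrets]
    s = sum(expnr)
    return [e / s for e in expnr]

# Two hypothetical treatments described by (effectiveness, convenience).
print(rrm_probs([[0.8, 0.2], [0.5, 0.9]], betas=[1.0, 1.0]))
```

Unlike RUM, an alternative's regret depends on the attribute levels of its competitors, which is what produces the semi-compensatory behaviour and compromise effects mentioned in the abstract.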

  12. An Analysis of Spherical Particles Distribution Randomly Packed in a Medium for the Monte Carlo Implicit Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jae Yong; Kim, Song Hyun; Shin, Chang Ho; Kim, Jong Kyung [Hanyang Univ., Seoul (Korea, Republic of)

    2014-05-15

In this study, as a preliminary step toward developing an implicit method with high accuracy, the distribution characteristics of spherical particles were evaluated by using explicit modeling techniques at various volume packing fractions. The study was performed to evaluate the implicitly simulated distribution of randomly packed spheres in a medium. First, an explicit modeling method to simulate randomly packed spheres in a hexahedral medium was proposed. The distribution characteristics of l_p and r_p, which are used in the particle position sampling, were estimated. The analysis shows that use of the direct exponential distribution, which is generally used in implicit modeling, can bias the distribution of the spheres. It is expected that the findings of this study can be utilized to improve the accuracy of the implicit method. Spherical particles randomly distributed in a medium are utilized for radiation shields, fusion reactor blankets, and the fuels of VHTR reactors. Because of the difficulty of simulating the stochastic distribution, the Monte Carlo (MC) method has mainly been considered as the tool for analysing particle transport. For MC modeling of spherical particles, three methods are known: repeated structure, explicit modeling, and implicit modeling. The implicit method (also called the track length sampling method) samples each spherical geometry (or the track length through a sphere) during the MC simulation. Implicit modeling has advantages in computational efficiency and user convenience; however, it has lower modeling accuracy in various finite media.

  13. A comparison of observation-level random effect and Beta-Binomial models for modelling overdispersion in Binomial data in ecology & evolution

    Directory of Open Access Journals (Sweden)

    Xavier A. Harrison

    2015-07-01

Overdispersion is a common feature of models of biological data, but researchers often fail to model the excess variation driving the overdispersion, resulting in biased parameter estimates and standard errors. Quantifying and modelling overdispersion when it is present is therefore critical for robust biological inference. One means to account for overdispersion is to add an observation-level random effect (OLRE) to a model, where each data point receives a unique level of a random effect that can absorb the extra-parametric variation in the data. Although some studies have investigated the utility of OLRE to model overdispersion in Poisson count data, studies doing so for Binomial proportion data are scarce. Here I use a simulation approach to investigate the ability of both OLRE models and Beta-Binomial models to recover unbiased parameter estimates in mixed effects models of Binomial data under various degrees of overdispersion. In addition, as ecologists often fit random intercept terms to models when the random effect sample size is low (<5 levels), I investigate the performance of both model types under a range of random effect sample sizes when overdispersion is present. Simulation results revealed that the efficacy of OLRE depends on the process that generated the overdispersion; OLRE failed to cope with overdispersion generated from a Beta-Binomial mixture model, leading to biased slope and intercept estimates, but performed well for overdispersion generated by adding random noise to the linear predictor. Comparison of parameter estimates from an OLRE model with those from its corresponding Beta-Binomial model readily identified when OLRE were performing poorly due to disagreement between effect sizes, and this strategy should be employed whenever OLRE are used for Binomial data to assess their reliability. Beta-Binomial models performed well across all contexts, but showed a tendency to underestimate effect sizes when modelling non
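The Beta-Binomial route to overdispersion can be made concrete analytically: mixing a Binomial's success probability over a Beta distribution inflates the variance by the factor 1 + (n − 1)ρ, where ρ is the intra-class correlation. A small check with illustrative numbers (not the paper's simulation settings):

```python
# Illustrative numbers only: how a Beta-Binomial mixture inflates variance
# relative to a plain Binomial with the same mean.
# For Beta-Binomial(n, a, b): p = a/(a+b) and
#   Var = n*p*(1-p) * (1 + (n-1)*rho),  with rho = 1/(a+b+1).

def binomial_var(n, p):
    return n * p * (1 - p)

def beta_binomial_var(n, a, b):
    p = a / (a + b)
    rho = 1.0 / (a + b + 1.0)
    return n * p * (1 - p) * (1 + (n - 1) * rho)

n, a, b = 20, 2.0, 2.0              # mean proportion 0.5, strong mixing
print(binomial_var(n, 0.5))          # 5.0
print(beta_binomial_var(n, a, b))    # 24.0: nearly five-fold overdispersion
```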

  14. Conditional Monte Carlo randomization tests for regression models.

    Science.gov (United States)

    Parhat, Parwen; Rosenberger, William F; Diao, Guoqing

    2014-08-15

We discuss the computation of randomization tests for clinical trials of two treatments when the primary outcome is based on a regression model. We begin by revisiting the seminal paper of Gail, Tan, and Piantadosi (1988), and then describe a method based on Monte Carlo generation of randomization sequences. The tests based on this Monte Carlo procedure are design based, in that they incorporate the particular randomization procedure used. We discuss permuted block designs, complete randomization, and biased coin designs. We also use a new technique by Plamadeala and Rosenberger (2012) for simple computation of conditional randomization tests. Like Gail, Tan, and Piantadosi, we focus on residuals from generalized linear models and martingale residuals from survival models. Such techniques do not apply to longitudinal data analysis, and we introduce a method for computation of randomization tests based on the predicted rate of change from a generalized linear mixed model when outcomes are longitudinal. We show, by simulation, that these randomization tests preserve size and power well under model misspecification.
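The design-based Monte Carlo idea can be sketched for the simplest case, complete randomization with a difference-in-means statistic standing in for the residual-based statistics the paper uses; the data are invented:

```python
import random

# Minimal sketch of a design-based Monte Carlo randomization test under
# complete randomization with fixed arm sizes; in the paper's setting,
# regression or martingale residuals would take the place of `y`.

def randomization_test(y, assignment, n_monte_carlo=2000, seed=1):
    rng = random.Random(seed)

    def stat(assign):
        t = [v for v, a in zip(y, assign) if a == 1]
        c = [v for v, a in zip(y, assign) if a == 0]
        return sum(t) / len(t) - sum(c) / len(c)

    observed = stat(assignment)
    count = 0
    for _ in range(n_monte_carlo):
        shuffled = assignment[:]
        rng.shuffle(shuffled)     # re-randomize under the actual design
        if abs(stat(shuffled)) >= abs(observed):
            count += 1
    return count / n_monte_carlo  # Monte Carlo p-value

y = [3.1, 2.9, 3.0, 5.2, 5.0, 5.1]
p = randomization_test(y, [0, 0, 0, 1, 1, 1])
print(p)  # small p: such a split is rare under re-randomization
```

For other designs (permuted blocks, biased coins), the shuffle step would be replaced by regenerating assignment sequences from that design's actual probability law, which is the paper's point about design-based tests.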

  15. Infinite Random Graphs as Statistical Mechanical Models

    DEFF Research Database (Denmark)

    Durhuus, Bergfinnur Jøgvan; Napolitano, George Maria

    2011-01-01

We discuss two examples of infinite random graphs obtained as limits of finite statistical mechanical systems: a model of two-dimensional discretized quantum gravity defined in terms of causal triangulated surfaces, and the Ising model on generic random trees. For the former model we describe a ...

  16. RMBNToolbox: random models for biochemical networks

    Directory of Open Access Journals (Sweden)

    Niemi Jari

    2007-05-01

Background: There is increasing interest in modelling biochemical and cell-biological networks, as well as in the computational analysis of these models. The development of analysis methodologies and related software is rapid in the field. However, the number of available models is still relatively small and the model sizes remain limited. The lack of kinetic information is usually the limiting factor for the construction of detailed simulation models. Results: We present a computational toolbox for generating random biochemical network models which mimic real biochemical networks. The toolbox is called Random Models for Biochemical Networks. The toolbox works in the Matlab environment, and it makes it possible to generate various network structures, stoichiometries, kinetic laws for reactions, and parameters therein. The generation can be based on statistical rules and distributions, and more detailed information of real biochemical networks can be used in situations where it is known. The toolbox can be easily extended. The resulting network models can be exported in the format of Systems Biology Markup Language. Conclusion: While more information is accumulating on biochemical networks, random networks can be used as an intermediate step towards their better understanding. Random networks make it possible to study the effects of various network characteristics on the overall behavior of the network. Moreover, the construction of artificial network models provides the ground truth data needed in the validation of various computational methods in the fields of parameter estimation and data analysis.

  17. A Structural Modeling Approach to a Multilevel Random Coefficients Model.

    Science.gov (United States)

    Rovine, Michael J.; Molenaar, Peter C. M.

    2000-01-01

    Presents a method for estimating the random coefficients model using covariance structure modeling and allowing one to estimate both fixed and random effects. The method is applied to real and simulated data, including marriage data from J. Belsky and M. Rovine (1990). (SLD)

  18. New Energy Utility Business Models

    International Nuclear Information System (INIS)

    Potocnik, V.

    2016-01-01

Recently, big changes have occurred in the power sector: energy efficiency and renewable energy sources are progressing quickly, distributed or decentralised generation of electricity is expanding, climate change requires reduction of greenhouse gas emissions, and price volatility and uncertainty of fossil fuel supply are common. These changes have made obsolete the vertically integrated business models that dominated energy utility organisations for a hundred years, and new business models are being introduced. The new models take into account current changes in the power sector and enable a wider application of energy efficiency and renewable energy sources, especially for consumers, with the decentralisation of electricity generation, while complying with the requirements of climate and environmental preservation. New business models also address the question of financial compensation for utilities facing reduced centralised energy generation, while contributing to local development and employment. (author)

  19. Deriving the expected utility of a predictive model when the utilities are uncertain.

    Science.gov (United States)

    Cooper, Gregory F; Visweswaran, Shyam

    2005-01-01

    Predictive models are often constructed from clinical databases with the goal of eventually helping make better clinical decisions. Evaluating models using decision theory is therefore natural. When constructing a model using statistical and machine learning methods, however, we are often uncertain about precisely how the model will be used. Thus, decision-independent measures of classification performance, such as the area under an ROC curve, are popular. As a complementary method of evaluation, we investigate techniques for deriving the expected utility of a model under uncertainty about the model's utilities. We demonstrate an example of the application of this approach to the evaluation of two models that diagnose coronary artery disease.

  20. Business model innovation for sustainable energy: German utilities and renewable energy

    International Nuclear Information System (INIS)

    Richter, Mario

    2013-01-01

The electric power sector stands at the beginning of a fundamental transformation process towards a more sustainable production based on renewable energies. Consequently, electric utilities as incumbent actors face a massive challenge to find new ways of creating, delivering, and capturing value from renewable energy technologies. This study investigates utilities' business models for renewable energies by analyzing two generic business models based on a series of in-depth interviews with German utility managers. It is found that utilities have developed viable business models for large-scale utility-side renewable energy generation. At the same time, utilities lack adequate business models to commercialize small-scale customer-side renewable energy technologies. By combining the business model concept with innovation and organization theory, practical recommendations for utility managers and policy makers are derived. - Highlights: • The energy transition creates a fundamental business model challenge for utilities. • German utilities succeed in large-scale and fail in small-scale renewable generation. • Experiences from other industries are available to inform utility managers. • Business model innovation capabilities will be crucial to master the energy transition.

  1. The random walk model of intrafraction movement

    International Nuclear Information System (INIS)

    Ballhausen, H; Reiner, M; Kantz, S; Belka, C; Söhn, M

    2013-01-01

    The purpose of this paper is to understand intrafraction movement as a stochastic process driven by random external forces. The hypothetically proposed three-dimensional random walk model has significant impact on optimal PTV margins and offers a quantitatively correct explanation of experimental findings. Properties of the random walk are calculated from first principles, in particular fraction-average population density distributions for displacements along the principal axes. When substituted into the established optimal margin recipes these fraction-average distributions yield safety margins about 30% smaller as compared to the suggested values from end-of-fraction Gaussian fits. Stylized facts of a random walk are identified in clinical data, such as the increase of the standard deviation of displacements with the square root of time. Least squares errors in the comparison to experimental results are reduced by about 50% when accounting for non-Gaussian corrections from the random walk model. (paper)
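The stylized fact cited here, that the standard deviation of displacements grows with the square root of time, is easy to reproduce for a simple unbiased one-dimensional walk (a sketch, not the paper's three-dimensional model):

```python
import random

# Sketch of the stylized fact: for an unbiased random walk the standard
# deviation of the displacement grows with the square root of time.

def displacement_std(n_steps, n_walkers=20000, step=1.0, seed=7):
    rng = random.Random(seed)
    total, total_sq = 0.0, 0.0
    for _ in range(n_walkers):
        x = sum(step if rng.random() < 0.5 else -step for _ in range(n_steps))
        total += x
        total_sq += x * x
    mean = total / n_walkers
    return (total_sq / n_walkers - mean * mean) ** 0.5

s1, s4 = displacement_std(25), displacement_std(100)
print(s1, s4)  # quadrupling the number of steps roughly doubles the spread
```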

  2. The random walk model of intrafraction movement.

    Science.gov (United States)

    Ballhausen, H; Reiner, M; Kantz, S; Belka, C; Söhn, M

    2013-04-07

The purpose of this paper is to understand intrafraction movement as a stochastic process driven by random external forces. The hypothetically proposed three-dimensional random walk model has significant impact on optimal PTV margins and offers a quantitatively correct explanation of experimental findings. Properties of the random walk are calculated from first principles, in particular fraction-average population density distributions for displacements along the principal axes. When substituted into the established optimal margin recipes these fraction-average distributions yield safety margins about 30% smaller as compared to the suggested values from end-of-fraction Gaussian fits. Stylized facts of a random walk are identified in clinical data, such as the increase of the standard deviation of displacements with the square root of time. Least squares errors in the comparison to experimental results are reduced by about 50% when accounting for non-Gaussian corrections from the random walk model.

  3. Mathematical models for estimating radio channels utilization when ...

    African Journals Online (AJOL)

A definition of the radio channel utilization indicator is given. Mathematical models for assessing radio channel utilization during real-time flow transfer in a wireless self-organized network are presented. Experimental estimates of average radio channel utilization are reported, with and without buffering of ...

  4. Random matrix models for phase diagrams

    International Nuclear Information System (INIS)

    Vanderheyden, B; Jackson, A D

    2011-01-01

We describe a random matrix approach that can provide generic and readily soluble mean-field descriptions of the phase diagram for a variety of systems ranging from quantum chromodynamics to high-T_c materials. Instead of working from specific models, phase diagrams are constructed by averaging over the ensemble of theories that possesses the relevant symmetries of the problem. Although approximate in nature, this approach has a number of advantages. First, it can be useful in distinguishing generic features from model-dependent details. Second, it can help in understanding the 'minimal' number of symmetry constraints required to reproduce specific phase structures. Third, the robustness of predictions can be checked with respect to variations in the detailed description of the interactions. Finally, near critical points, random matrix models bear strong similarities to Ginzburg-Landau theories with the advantage of additional constraints inherited from the symmetries of the underlying interaction. These constraints can be helpful in ruling out certain topologies in the phase diagram. In this Key Issues Review, we illustrate the basic structure of random matrix models, discuss their strengths and weaknesses, and consider the kinds of system to which they can be applied.

  5. Creating, generating and comparing random network models with NetworkRandomizer.

    Science.gov (United States)

    Tosadori, Gabriele; Bestvina, Ivan; Spoto, Fausto; Laudanna, Carlo; Scardoni, Giovanni

    2016-01-01

Biological networks are becoming a fundamental tool for the investigation of high-throughput data in several fields of biology and biotechnology. With the increasing amount of information, network-based models are gaining more and more interest, and new techniques are required in order to mine the information and to validate the results. To fill the validation gap we present an app for the Cytoscape platform which aims at creating randomised networks and randomising existing, real networks. Since there is a lack of tools that allow performing such operations, our app aims at enabling researchers to exploit different, well-known random network models that could be used as a benchmark for validating real, biological datasets. We also propose a novel methodology for creating random weighted networks, i.e. the multiplication algorithm, starting from real, quantitative data. Finally, the app provides a statistical tool that compares real versus randomly computed attributes, in order to validate the numerical findings. In summary, our app aims at creating a standardised methodology for the validation of results in the context of the Cytoscape platform.

  6. A random spatial network model based on elementary postulates

    Science.gov (United States)

    Karlinger, Michael R.; Troutman, Brent M.

    1989-01-01

    A model for generating random spatial networks that is based on elementary postulates comparable to those of the random topology model is proposed. In contrast to the random topology model, this model ascribes a unique spatial specification to generated drainage networks, a distinguishing property of some network growth models. The simplicity of the postulates creates an opportunity for potential analytic investigations of the probabilistic structure of the drainage networks, while the spatial specification enables analyses of spatially dependent network properties. In the random topology model all drainage networks, conditioned on magnitude (number of first-order streams), are equally likely, whereas in this model all spanning trees of a grid, conditioned on area and drainage density, are equally likely. As a result, link lengths in the generated networks are not independent, as usually assumed in the random topology model. For a preliminary model evaluation, scale-dependent network characteristics, such as geometric diameter and link length properties, and topologic characteristics, such as bifurcation ratio, are computed for sets of drainage networks generated on square and rectangular grids. Statistics of the bifurcation and length ratios fall within the range of values reported for natural drainage networks, but geometric diameters tend to be relatively longer than those for natural networks.

  7. A sequential model for the structure of health care utilization.

    NARCIS (Netherlands)

    Herrmann, W.J.; Haarmann, A.; Baerheim, A.

    2017-01-01

    Traditional measurement models of health care utilization are not able to represent the complex structure of health care utilization. In this qualitative study, we, therefore, developed a new model to represent the health care utilization structure. In Norway and Germany, we conducted episodic

  8. Expected utility and catastrophic risk in a stochastic economy-climate model

    Energy Technology Data Exchange (ETDEWEB)

    Ikefuji, M. [Institute of Social and Economic Research, Osaka University, Osaka (Japan); Laeven, R.J.A.; Magnus, J.R. [Department of Econometrics and Operations Research, Tilburg University, Tilburg (Netherlands); Muris, C. [CentER, Tilburg University, Tilburg (Netherlands)

    2010-11-15

    In the context of extreme climate change, we ask how to conduct expected utility analysis in the presence of catastrophic risks. Economists typically model decision making under risk and uncertainty by expected utility with constant relative risk aversion (power utility); statisticians typically model economic catastrophes by probability distributions with heavy tails. Unfortunately, the expected utility framework is fragile with respect to heavy-tailed distributional assumptions. We specify a stochastic economy-climate model with power utility and explicitly demonstrate this fragility. We derive necessary and sufficient compatibility conditions on the utility function to avoid fragility and solve our stochastic economy-climate model for two examples of such compatible utility functions. We further develop and implement a procedure to learn the input parameters of our model and show that the model thus specified produces quite robust optimal policies. The numerical results indicate that higher levels of uncertainty (heavier tails) lead to less abatement and consumption, and to more investment, but this effect is not unlimited.
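The fragility the authors demonstrate can be seen in a toy calculation: with power utility and consumption that places mass arbitrarily close to zero, the expected utility integral diverges. The uniform consumption distribution and γ = 2 below are illustrative choices, not the paper's calibration:

```python
import math

# Toy illustration of the fragility: with CRRA (power) utility
#   u(c) = c**(1 - g) / (1 - g),  g = 2,
# and consumption c ~ Uniform(0, 1), the truncated expected utility
# (contribution of c > eps) equals log(eps), which -> -inf as eps -> 0,
# so no finite expected utility exists.

def u(c, gamma=2.0):
    return c ** (1.0 - gamma) / (1.0 - gamma)   # CRRA/power utility

def truncated_eu(eps, n=200000):
    # Midpoint-rule estimate of the integral of u(c) over (eps, 1),
    # i.e. the contribution of c > eps under the Uniform(0, 1) density.
    h = (1.0 - eps) / n
    return sum(u(eps + (i + 0.5) * h) for i in range(n)) * h

for eps in (1e-1, 1e-2, 1e-3):
    print(eps, truncated_eu(eps))   # ≈ log(eps): -2.30, -4.61, -6.91
```

The same divergence occurs whenever the tail of the consumption (or damage) distribution is heavy relative to the curvature of the utility function, which is the compatibility condition the paper formalizes.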

  9. A choice modelling analysis on the similarity between distribution utilities' and industrial customers' price and quality preferences

    International Nuclear Information System (INIS)

    Soederberg, Magnus

    2008-01-01

The Swedish Electricity Act states that electricity distribution must comply with both price and quality requirements. In order to maintain efficient regulation it is necessary, firstly, to define quality attributes and, secondly, to determine customers' priorities concerning price and quality attributes. If distribution utilities understand customer preferences and have incentives to report them truthfully, the regulator can save a lot of time by surveying the utilities rather than their customers. This study applies a choice modelling methodology in which utilities and industrial customers are asked to evaluate the same twelve choice situations, in which price and four specific quality attributes are varied. The preferences expressed by the utilities, estimated by a random parameter logit, correspond quite well with the preferences expressed by the largest industrial customers. The preferences expressed by the utilities are reasonably homogeneous across forms of association (private limited, public and trading partnership). If the regulator acts according to the preferences expressed by the utilities, smaller industrial customers will have to pay for quality they have not asked for. (author)

  10. Simulating WTP Values from Random-Coefficient Models

    OpenAIRE

    Maurus Rischatsch

    2009-01-01

Discrete Choice Experiments (DCEs) designed to estimate willingness-to-pay (WTP) values are very popular in health economics. With increased computation power and advanced simulation techniques, random-coefficient models have gained increasing importance in applied work, as they allow for taste heterogeneity. This paper discusses the parametric derivation of WTP values from estimated random-coefficient models and shows how these values can be simulated in cases where they do not have a kn...
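The simulation of WTP values can be sketched as follows: when the attribute coefficient is random while the price coefficient is fixed, WTP draws are ratios of coefficient draws, and their distribution is summarized empirically. All distributional choices below are hypothetical, not the paper's specification:

```python
import random

# Hypothetical illustration: attribute coefficient beta_attr ~ N(1.0, 0.3),
# fixed (negative) price coefficient beta_price = -2.0, so each WTP draw is
# -beta_attr / beta_price, and the WTP distribution is summarized by
# simulation.

def simulate_wtp(n_draws=100000, mean=1.0, sd=0.3, beta_price=-2.0, seed=3):
    rng = random.Random(seed)
    draws = [-rng.gauss(mean, sd) / beta_price for _ in range(n_draws)]
    draws.sort()
    return {
        "mean": sum(draws) / n_draws,
        "median": draws[n_draws // 2],
        "p5": draws[int(0.05 * n_draws)],
        "p95": draws[int(0.95 * n_draws)],
    }

print(simulate_wtp())  # centred near 0.5, spread reflecting taste heterogeneity
```

When the price coefficient is itself random, the WTP ratio can have no finite moments, which is one reason simulation-based summaries (medians, quantiles) are preferred to analytic means.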

  11. Random effects coefficient of determination for mixed and meta-analysis models.

    Science.gov (United States)

    Demidenko, Eugene; Sargent, James; Onega, Tracy

    2012-01-01

The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If the coefficient is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. A value apart from 0 indicates evidence of variance reduction in support of the mixed model. If the random effects coefficient of determination is close to 1, the variance of the random effects is very large and the random effects turn into free fixed effects; the model can then be estimated using the dummy-variable approach. We derive explicit formulas for the coefficient in three special cases: the random intercept model, the growth curve model, and the meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol-related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model for the combination of 13 studies on tuberculosis vaccine.
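For the random intercept case, a moment-based version of such a coefficient can be computed from a one-way ANOVA decomposition; this is a rough sketch of the idea, not the authors' exact estimator:

```python
import random

# Rough sketch: for a random-intercept model y_ij = mu + b_i + e_ij,
# estimate the share of conditional variance explained by the random effect,
#   sigma_b^2 / (sigma_b^2 + sigma_e^2),
# from balanced data via one-way ANOVA moment estimators.

def random_effects_r2(groups):
    k = len(groups)
    n = len(groups[0])                      # balanced design assumed
    means = [sum(g) / n for g in groups]
    grand = sum(means) / k
    msw = sum((y - m) ** 2 for g, m in zip(groups, means) for y in g) / (k * (n - 1))
    msb = n * sum((m - grand) ** 2 for m in means) / (k - 1)
    sigma_b2 = max((msb - msw) / n, 0.0)    # method-of-moments estimate
    return sigma_b2 / (sigma_b2 + msw)

# Simulate 200 groups of 10 with sigma_b = 2 and sigma_e = 1.
rng = random.Random(11)
groups = [[rng.gauss(0, 1) + b for _ in range(10)]
          for b in (rng.gauss(0, 2) for _ in range(200))]
print(random_effects_r2(groups))  # close to 4 / (4 + 1) = 0.8
```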

  12. User-driven health care: answering multidimensional information needs in individual patients utilizing post-EBM approaches: an operational model.

    Science.gov (United States)

    Biswas, Rakesh; Maniam, Jayanthy; Lee, Edwin Wen Huo; Gopal, Premalatha; Umakanth, Shashikiran; Dahiya, Sumit; Ahmed, Sayeed

    2008-10-01

The hypothesis in the conceptual model was that a user-driven innovation in presently available information and communication technology infrastructure would be able to meet patient and health professional users' information needs and help them attain better health outcomes. An operational model was created to plan a trial on a sample diabetic population utilizing a randomized controlled trial design, assigning one randomly selected group of diabetics to receive an electronic information intervention and analysing whether it would improve their health outcomes in comparison with a matched diabetic population receiving only regular medical intervention. Diabetes was chosen for this particular trial as it is a major chronic illness in Malaysia, as elsewhere in the world. The paper is in essence a position paper on how the study concept should be organized, intended to stimulate wider discussion prior to beginning the study.

  13. A Utility Model for Teaching Load Decisions in Academic Departments.

    Science.gov (United States)

    Massey, William F.; Zemsky, Robert

    1997-01-01

Presents a utility model for academic department decision making and describes the structural specifications for analyzing it. The model confirms the class-size utility asymmetry predicted by the authors' academic ratchet theory, but shows that the marginal utility associated with college teaching loads is always negative. Curricular structure and…

  14. A Generalized Random Regret Minimization Model

    NARCIS (Netherlands)

    Chorus, C.G.

    2013-01-01

    This paper presents, discusses and tests a generalized Random Regret Minimization (G-RRM) model. The G-RRM model is created by replacing a fixed constant in the attribute-specific regret functions of the RRM model, by a regret-weight variable. Depending on the value of the regret-weights, the G-RRM

  15. Simulation of a directed random-walk model: the effect of pseudo-random-number correlations

    OpenAIRE

    Shchur, L. N.; Heringa, J. R.; Blöte, H. W. J.

    1996-01-01

    We investigate the mechanism that leads to systematic deviations in cluster Monte Carlo simulations when correlated pseudo-random numbers are used. We present a simple model, which enables an analysis of the effects due to correlations in several types of pseudo-random-number sequences. This model provides qualitative understanding of the bias mechanism in a class of cluster Monte Carlo algorithms.

  16. A Note on the Correlated Random Coefficient Model

    DEFF Research Database (Denmark)

    Kolodziejczyk, Christophe

    In this note we derive the bias of the OLS estimator for a correlated random coefficient model with one random coefficient, but which is correlated with a binary variable. We provide set-identification to the parameters of interest of the model. We also show how to reduce the bias of the estimator...

  17. Stochastic Dominance under the Nonlinear Expected Utilities

    Directory of Open Access Journals (Sweden)

    Xinling Xiao

    2014-01-01

    In 1947, von Neumann and Morgenstern introduced the well-known expected utility and its associated axiomatic system (see von Neumann and Morgenstern, 1953). It is widely used in economics, for example in financial economics. However, the well-known Allais paradox (see Allais, 1979) shows that linear expected utility has limitations. Because of this, Peng proposed a concept of nonlinear expected utility (see Peng, 2005). In this paper we propose a concept of stochastic dominance under nonlinear expected utilities. We give sufficient conditions under which a random choice X stochastically dominates a random choice Y under nonlinear expected utilities. We also provide sufficient conditions under which a random choice X strictly stochastically dominates a random choice Y under sublinear expected utilities.
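For intuition, the classical (first-order) stochastic dominance underlying the linear expected utility case can be checked empirically by comparing empirical CDFs; this is only a sketch of the linear special case, not the paper's nonlinear-expectation criteria, and the function name and grid size are illustrative.

```python
import numpy as np

def first_order_dominates(x, y, grid_size=200):
    """Empirical first-order stochastic dominance of X over Y:
    F_X(t) <= F_Y(t) for all t (X's CDF lies below Y's everywhere)."""
    grid = np.linspace(min(x.min(), y.min()), max(x.max(), y.max()), grid_size)
    cdf_x = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    cdf_y = np.searchsorted(np.sort(y), grid, side="right") / len(y)
    return bool(np.all(cdf_x <= cdf_y))

rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, 10_000)
x = y + 0.5          # X is Y shifted upward, so X dominates Y
print(first_order_dominates(x, y))  # True
print(first_order_dominates(y, x))  # False
```

Under first-order dominance, every expected-utility maximizer with an increasing utility function prefers X to Y; the nonlinear and sublinear expectations of the paper generalize this ordering.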

  18. User Utility Oriented Queuing Model for Resource Allocation in Cloud Environment

    Directory of Open Access Journals (Sweden)

    Zhe Zhang

    2015-01-01

    Resource allocation is one of the most important research topics in server systems. A cloud environment contains massive hardware resources of many kinds, and many kinds of services run on the virtual machines of a cloud server. In addition, the cloud environment is commercialized, so economic factors must also be considered. To address the commercialization and virtualization of the cloud environment, we propose a user-utility-oriented queuing model for task scheduling. First, we model task scheduling in a cloud environment as an M/M/1 queuing system. Second, we classify utility into time utility and cost utility and build a linear programming model to maximize the total utility of both. Finally, we propose a utility-oriented algorithm to maximize the total utility. Extensive experiments validate the effectiveness of the proposed model.
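The time/cost trade-off in an M/M/1 system can be illustrated with a small sketch; the specific utility functions, weights and prices below are assumptions for illustration, not the paper's exact formulation.

```python
# Hypothetical sketch: M/M/1 sojourn time plus a linear time/cost utility.
def mm1_mean_sojourn(arrival_rate, service_rate):
    """Mean time a task spends in an M/M/1 system: W = 1 / (mu - lambda)."""
    assert arrival_rate < service_rate, "queue must be stable (lambda < mu)"
    return 1.0 / (service_rate - arrival_rate)

def total_utility(arrival_rate, service_rate, price_per_rate,
                  w_time=0.5, w_cost=0.5):
    """Weighted sum of time utility (shorter sojourn is better) and cost
    utility (cheaper capacity is better), each mapped into (0, 1]."""
    time_u = 1.0 / (1.0 + mm1_mean_sojourn(arrival_rate, service_rate))
    cost_u = 1.0 / (1.0 + price_per_rate * service_rate)
    return w_time * time_u + w_cost * cost_u

# Pick the service rate (from a discrete menu) maximizing total utility.
candidates = [1.5, 2.0, 3.0, 5.0]
best = max(candidates, key=lambda mu: total_utility(1.0, mu, price_per_rate=0.1))
print(best)  # 5.0
```

The interesting behaviour is the interior trade-off: raising the service rate keeps improving time utility but with diminishing returns, while cost utility falls, so the optimum is finite.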

  19. The random field Blume-Capel model revisited

    Science.gov (United States)

    Santos, P. V.; da Costa, F. A.; de Araújo, J. M.

    2018-04-01

    We have revisited the mean-field treatment of the Blume-Capel model in the presence of a discrete random magnetic field, as introduced by Kaufman and Kanner (1990). The magnetic field (H) versus temperature (T) phase diagrams for given values of the crystal field D were recovered in accordance with Kaufman and Kanner's original work. However, our main goal in the present work was to investigate the distinct structures of the crystal field versus temperature phase diagrams as the random magnetic field is varied, because similar models have presented reentrant phenomena due to randomness. Following previous works, we have classified the distinct phase diagrams according to five different topologies. The topological structure of the phase diagrams is maintained for both the H-T and D-T cases. Although the phase diagrams exhibit a richness of multicritical phenomena, we did not find any reentrant effect such as has been seen in similar models.

  20. Random regression models for detection of gene by environment interaction

    Directory of Open Access Journals (Sweden)

    Meuwissen Theo HE

    2007-02-01

    Two random regression models, in which the effect of a putative QTL was regressed on an environmental gradient, are described. The first model estimates the correlation between the intercept and slope of the random regression, while the other restricts this correlation to 1 or -1, which is expected under a bi-allelic QTL model. The random regression models were compared to a model assuming no gene by environment interactions. The comparison considered the models' ability to detect QTL, to position them accurately and to detect possible QTL by environment interactions. A simulation study based on a granddaughter design was conducted, and QTL were simulated either with an effect independent of the environment or with an effect that was a linear function of a simulated environmental gradient. It was concluded that the random regression models were suitable for detection of QTL effects, in the presence and absence of interactions with environmental gradients. Fixing the correlation between the intercept and slope of the random regression had a positive effect on power when the QTL effects re-ranked between environments.

  1. Minimization of required model runs in the Random Mixing approach to inverse groundwater flow and transport modeling

    Science.gov (United States)

    Hoerning, Sebastian; Bardossy, Andras; du Plessis, Jaco

    2017-04-01

    Most geostatistical inverse groundwater flow and transport modelling approaches utilize a numerical solver to minimize the discrepancy between observed and simulated hydraulic head and/or concentration values. The optimization procedure often requires many model runs, which for complex models leads to long run times. Random Mixing is a promising new geostatistical technique for inverse modelling. The method is an extension of the gradual deformation approach. It works by finding a field which preserves the covariance structure and maintains observed hydraulic conductivities. This field is perturbed by mixing it with new fields that fulfill the homogeneous conditions. The mixing is expressed as an optimization problem which aims to minimize the difference between the observed and simulated hydraulic head and/or concentration values. To preserve the spatial structure, the mixing weights must lie on the unit hyper-sphere. We present a modification to the Random Mixing algorithm which significantly reduces the number of model runs required. The approach involves taking n equally spaced points on the unit circle as weights for mixing conditional random fields. Each of these mixtures provides a solution to the forward model at the conditioning locations. For each of the locations, the solutions are then interpolated around the circle to provide solutions for additional mixing weights at very low computational cost. The interpolated solutions are used to search for a mixture which maximally reduces the objective function. This is in contrast to other approaches, which evaluate the objective function for the n mixtures and then interpolate the obtained values. Keeping the mixture on the unit circle makes it easy to generate equidistant sampling points in the space; however, this means that only two fields are mixed at a time. Once the optimal mixture for two fields has been found, they are combined to form the input to the next iteration of the algorithm.
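The circle-interpolation idea can be sketched in a toy setting. Here the two "conditional random fields" and the forward model are cheap stand-ins for the geostatistical simulator and the groundwater solver; only the mechanics of running the solver at n coarse angles and interpolating the solutions around the circle are the point.

```python
import numpy as np

rng = np.random.default_rng(1)
field_a, field_b = rng.normal(size=50), rng.normal(size=50)  # two candidate fields
observed = np.full(4, 0.3)          # stand-in "observed heads" at 4 locations

def forward(field):
    """Stand-in forward model: simulated values at the 4 conditioning points."""
    return np.tanh(field[:4] * 0.8)

n = 8
coarse = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
# Run the (expensive) solver only n times, once per coarse mixing weight.
solutions = np.array([forward(np.cos(t) * field_a + np.sin(t) * field_b)
                      for t in coarse])

# Interpolate the *solutions* (per location) around the circle, then search
# the objective on a fine angular grid at negligible cost.
fine = np.linspace(0.0, 2 * np.pi, 720, endpoint=False)
interp = np.array([np.interp(fine, coarse, solutions[:, j], period=2 * np.pi)
                   for j in range(4)]).T
objective = np.sum((interp - observed) ** 2, axis=1)
t_best = fine[np.argmin(objective)]
mixture = np.cos(t_best) * field_a + np.sin(t_best) * field_b  # next iteration's input
print(round(float(objective.min()), 4))
```

Note that, as in the abstract, the interpolation is applied to the solutions at each conditioning location rather than to the objective values themselves.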

  2. Maintenance overtime policies in reliability theory models with random working cycles

    CERN Document Server

    Nakagawa, Toshio

    2015-01-01

    This book introduces a new concept of replacement in maintenance and reliability theory. Replacement overtime, where replacement occurs at the first completion of a working cycle over a planned time, is a new research topic in maintenance theory and also serves to provide a fresh optimization technique in reliability engineering. In comparing replacement overtime with standard and random replacement techniques theoretically and numerically, 'Maintenance Overtime Policies in Reliability Theory' highlights the key benefits to be gained by adopting this new approach and shows how they can be applied to inspection policies, parallel systems and cumulative damage models. Utilizing the latest research in replacement overtime by internationally recognized experts, readers are introduced to new topics and methods, and learn how to practically apply this knowledge to actual reliability models. This book will serve as an essential guide to a new subject of study for graduate students and researchers and also provides a...

  3. Emergent randomness in the Jaynes-Cummings model

    International Nuclear Information System (INIS)

    Garraway, B M; Stenholm, S

    2008-01-01

    We consider the well-known Jaynes-Cummings model and ask if it can display randomness. As a solvable Hamiltonian system, it does not display chaotic behaviour in the ordinary sense. Here, however, we look at the distribution of values taken up during the total time evolution. This evolution is determined by the eigenvalues distributed as the square roots of integers and leads to a seemingly erratic behaviour. That this may display a random Gaussian value distribution is suggested by an exactly provable result by Kac. In order to reach our conclusion we use the Kac model to develop tests for the emergence of a Gaussian. Even if the consequent double limits are difficult to evaluate numerically, we find definite indications that the Jaynes-Cummings case also produces a randomness in its value distributions. Numerical methods do not establish such a result beyond doubt, but our conclusions are definite enough to suggest strongly an unexpected randomness emerging in a dynamic time evolution

  4. Critical Behavior of the Annealed Ising Model on Random Regular Graphs

    Science.gov (United States)

    Can, Van Hao

    2017-11-01

    In Giardinà et al. (ALEA Lat Am J Probab Math Stat 13(1):121-161, 2016), the authors have defined an annealed Ising model on random graphs and proved limit theorems for the magnetization of this model on some random graphs including random 2-regular graphs. Then in Can (Annealed limit theorems for the Ising model on random regular graphs, arXiv:1701.08639, 2017), we generalized their results to the class of all random regular graphs. In this paper, we study the critical behavior of this model. In particular, we determine the critical exponents and prove a non-standard limit theorem stating that the magnetization scaled by n^{3/4} converges to a specific random variable, with n the number of vertices of the random regular graph.

  5. A UTILITY THEORY OF OLD AGE.

    Science.gov (United States)

    HAMLIN, ROY M.

    Herzberg's job satisfaction model serves as the basis for an analysis of old age. The pattern varies among individuals, but the capacity for organized behavior rather than random stress reduction supplies each individual with a task. The hypothesis is that if the older individual realizes utility in his years beyond 70, he will retain competence…

  6. A random effects meta-analysis model with Box-Cox transformation.

    Science.gov (United States)

    Yamaguchi, Yusuke; Maruo, Kazushi; Partlett, Christopher; Riley, Richard D

    2017-07-19

    In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption and the misspecification of the random effects distribution may result in a misleading estimate of the overall mean treatment effect, an inappropriate quantification of heterogeneity across studies and a wrongly symmetric prediction interval. We focus on problems caused by an inappropriate normality assumption of the random effects distribution, and propose a novel random effects meta-analysis model where a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise the overall distribution of observed treatment effect estimates, which is the sum of the within-study sampling distributions and the random effects distribution. When sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied to any kind of variable once the treatment effect estimate is defined from the variable. A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and the conventional I² from the normal random effects model can be inappropriate summaries, and the proposed model helped reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences in summary results, heterogeneity measures and prediction intervals from the normal random effects model.
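The transform mechanics behind such a model can be sketched in a few lines; this shows only the Box-Cox step and the back-transformed median/interquartile summaries on synthetic skewed estimates, not the Bayesian estimation of the full meta-analysis model.

```python
import numpy as np

def box_cox(y, lam):
    """Box-Cox transform (y must be positive): (y**lam - 1)/lam, or log(y) at lam == 0."""
    y = np.asarray(y, dtype=float)
    return np.log(y) if lam == 0 else (y ** lam - 1.0) / lam

def inv_box_cox(z, lam):
    """Inverse of box_cox."""
    z = np.asarray(z, dtype=float)
    return np.exp(z) if lam == 0 else (lam * z + 1.0) ** (1.0 / lam)

# Skewed "treatment effect estimates" (log-normal, so lam = 0 normalises them).
rng = np.random.default_rng(7)
effects = np.exp(rng.normal(0.2, 0.6, size=31))

z = box_cox(effects, 0.0)                        # roughly normal on this scale
overall_median = inv_box_cox(np.median(z), 0.0)  # back-transformed overall median
q1, q3 = inv_box_cox(np.percentile(z, [25, 75]), 0.0)
print(round(float(overall_median), 3), round(float(q1), 3), round(float(q3), 3))
```

Because the transform is monotone, the back-transformed median equals the median of the original estimates, which is why the paper summarises by a median and interquartile range rather than a mean.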

  7. A random effects meta-analysis model with Box-Cox transformation

    Directory of Open Access Journals (Sweden)

    Yusuke Yamaguchi

    2017-07-01

    Abstract Background In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption and the misspecification of the random effects distribution may result in a misleading estimate of the overall mean treatment effect, an inappropriate quantification of heterogeneity across studies and a wrongly symmetric prediction interval. Methods We focus on problems caused by an inappropriate normality assumption of the random effects distribution, and propose a novel random effects meta-analysis model where a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise the overall distribution of observed treatment effect estimates, which is the sum of the within-study sampling distributions and the random effects distribution. When sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied to any kind of variable once the treatment effect estimate is defined from the variable. Results A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and the conventional I² from the normal random effects model can be inappropriate summaries, and the proposed model helped reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences in summary results, heterogeneity measures and…

  8. Asset transformation and the challenges to servitize a utility business model

    International Nuclear Information System (INIS)

    Helms, Thorsten

    2016-01-01

    The traditional energy utility business model is under pressure, and energy services are expected to play an important role in the energy transition. Experts and scholars argue that utilities need to innovate their business models and transform from commodity suppliers into service providers. The transition from a product-oriented, capital-intensive business model based on tangible assets towards a service-oriented, expense-intensive business model based on intangible assets may present great managerial and organizational challenges. Little research exists on such transitions for capital-intensive commodity providers, and particularly for energy utilities, where the challenges of servitization are expected to be greatest. This qualitative paper explores the barriers to servitization within selected Swiss and German utility companies through a series of interviews with utility managers. One of these barriers is 'asset transformation', the shift from tangible to intangible assets as the major input factor for the value proposition, which is proposed as a driver of the complexity of business model transitions. Managers need to carefully manage these challenges and find ways to operate new service business models alongside established utility business models. Policy makers can support the transition of utilities through more favorable regulatory frameworks for energy services and by supporting the exchange of knowledge in the industry. - Highlights: •The paper analyses the expected transformation of utilities into service providers. •Service and utility business models possess very different attributes. •The former is based on intangible assets, the latter on tangible assets. •The transformation into a service provider entails great challenges. •Asset transformation is proposed as a barrier to business model innovation.

  9. Network Bandwidth Utilization Forecast Model on High Bandwidth Network

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Wucherl; Sim, Alex

    2014-07-07

    With the increasing number of geographically distributed scientific collaborations and the scale of the data size growth, it has become more challenging for users to achieve the best possible network performance on a shared network. We have developed a forecast model to predict expected bandwidth utilization for high-bandwidth wide area networks. The forecast model can improve the efficiency of resource utilization and the scheduling of data movements on high-bandwidth networks to accommodate the ever increasing data volume of large-scale scientific data applications. A univariate model is developed with STL and ARIMA on SNMP path utilization data. Compared with traditional approaches such as the Box-Jenkins methodology, our forecast model reduces computation time by 83.2%. It also shows resilience against abrupt network usage changes. The accuracy of the forecast model is within the standard deviation of the monitored measurements.

  10. Network bandwidth utilization forecast model on high bandwidth networks

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Wuchert (William) [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sim, Alex [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-03-30

    With the increasing number of geographically distributed scientific collaborations and the scale of the data size growth, it has become more challenging for users to achieve the best possible network performance on a shared network. We have developed a forecast model to predict expected bandwidth utilization for high-bandwidth wide area networks. The forecast model can improve the efficiency of resource utilization and the scheduling of data movements on high-bandwidth networks to accommodate the ever increasing data volume of large-scale scientific data applications. A univariate model is developed with STL and ARIMA on SNMP path utilization data. Compared with traditional approaches such as the Box-Jenkins methodology, our forecast model reduces computation time by 83.2%. It also shows resilience against abrupt network usage changes. The accuracy of the forecast model is within the standard deviation of the monitored measurements.

  11. Approximating prediction uncertainty for random forest regression models

    Science.gov (United States)

    John W. Coulston; Christine E. Blinn; Valerie A. Thomas; Randolph H. Wynne

    2016-01-01

    Machine learning approaches such as random forests are increasingly used for the spatial modeling and mapping of continuous variables. Random forest is a non-parametric ensemble approach, and unlike traditional regression approaches there is no direct quantification of prediction error. Understanding prediction uncertainty is important when using model-based continuous maps as…

  12. A Solution Method for Linear and Geometrically Nonlinear MDOF Systems with Random Properties subject to Random Excitation

    DEFF Research Database (Denmark)

    Micaletti, R. C.; Cakmak, A. S.; Nielsen, Søren R. K.

    A method for computing the lower-order moments of randomly-excited multi-degree-of-freedom (MDOF) systems with random structural properties is proposed. The method is grounded in the techniques of stochastic calculus, utilizing a Markov diffusion process to model the structural system with random structural properties. The resulting state-space formulation is a system of ordinary stochastic differential equations with random coefficients and deterministic initial conditions, which are subsequently transformed into ordinary stochastic differential equations with deterministic coefficients and random initial conditions. This transformation facilitates the derivation of differential equations which govern the evolution of the unconditional statistical moments of the response. Primary consideration is given to linear systems and systems with odd polynomial nonlinearities, for in these cases…

  13. A random field model for the estimation of seismic hazard. Final report for the period 1 January 1990 - 31 December 1990

    International Nuclear Information System (INIS)

    Yucemen, S.

    1991-02-01

    The general theory of stationary random functions is utilized to assess the seismic hazard associated with a linearly extending seismic source. The past earthquake occurrence data associated with a portion of the North Anatolian fault are used to demonstrate the implementation of the proposed model. 18 refs, figs and tabs

  14. A random field model for the estimation of seismic hazard. Final report for the period 1 January 1990 - 31 December 1990

    Energy Technology Data Exchange (ETDEWEB)

    Yucemen, S [Middle East Technical Univ., Ankara (Turkey). Dept. of Statistics

    1991-02-01

    The general theory of stationary random functions is utilized to assess the seismic hazard associated with a linearly extending seismic source. The past earthquake occurrence data associated with a portion of the North Anatolian fault are used to demonstrate the implementation of the proposed model. 18 refs, figs and tabs.

  15. Compensatory and non-compensatory multidimensional randomized item response models

    NARCIS (Netherlands)

    Fox, J.P.; Entink, R.K.; Avetisyan, M.

    2014-01-01

    Randomized response (RR) models are often used for analysing univariate randomized response data and measuring population prevalence of sensitive behaviours. There is much empirical support for the belief that RR methods improve the cooperation of the respondents. Recently, RR models have been

  16. Micromechanical Modeling of Fiber-Reinforced Composites with Statistically Equivalent Random Fiber Distribution

    Directory of Open Access Journals (Sweden)

    Wenzhi Wang

    2016-07-01

    Modeling the random fiber distribution of a fiber-reinforced composite is of great importance for studying the progressive failure behavior of the material on the micro scale. In this paper, we develop a new algorithm for generating random representative volume elements (RVEs) with a statistically equivalent fiber distribution against the actual material microstructure. Realistic statistical data are utilized as inputs of the new method, which is achieved through implementation of the probability equations. Extensive statistical analysis is conducted to examine the capability of the proposed method and to compare it with existing methods. It is found that the proposed method presents a good match with experimental results in all aspects, including the nearest neighbor distance, nearest neighbor orientation, Ripley's K function, and the radial distribution function. Finite element analysis is presented to predict the effective elastic properties of a carbon/epoxy composite, to validate the generated random representative volume elements, and to provide insights into the effect of fiber distribution on the elastic properties. The present algorithm is shown to be highly accurate and can be used to generate statistically equivalent RVEs for not only fiber-reinforced composites but also other materials such as foam materials and particle-reinforced composites.
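A much simpler baseline, random sequential adsorption (not the paper's probability-equation method), illustrates the kind of non-overlapping fibre placement and nearest-neighbour statistics involved; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)
radius, target = 0.02, 60            # fibre radius and count in a unit cell
centres = []
while len(centres) < target:
    candidate = rng.uniform(radius, 1 - radius, size=2)
    # Accept only if the new fibre does not overlap any placed fibre.
    if all(np.linalg.norm(candidate - c) >= 2 * radius for c in centres):
        centres.append(candidate)
centres = np.array(centres)

# Nearest-neighbour distances, one of the statistics the paper matches.
diff = centres[:, None, :] - centres[None, :, :]
dist = np.linalg.norm(diff, axis=-1)
np.fill_diagonal(dist, np.inf)
nearest = dist.min(axis=1)
print(round(float(nearest.min()), 3), round(float(nearest.mean()), 3))
```

Plain sequential adsorption is known to produce distributions that differ from real micrographs at high fibre volume fractions, which is precisely the gap that statistically equivalent generation methods such as the paper's aim to close.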

  17. A cluster expansion approach to exponential random graph models

    International Nuclear Information System (INIS)

    Yin, Mei

    2012-01-01

    The exponential family of random graphs is among the most widely studied network models. We show that any exponential random graph model may alternatively be viewed as a lattice gas model with a finite Banach space norm. The system may then be treated using cluster expansion methods from statistical mechanics. In particular, we derive a convergent power series expansion for the limiting free energy in the case of small parameters. Since the free energy is the generating function for the expectations of other random variables, this characterizes the structure and behavior of the limiting network in this parameter region

  18. Computer simulations of the random barrier model

    DEFF Research Database (Denmark)

    Schrøder, Thomas; Dyre, Jeppe

    2002-01-01

    A brief review of experimental facts regarding ac electronic and ionic conduction in disordered solids is given followed by a discussion of what is perhaps the simplest realistic model, the random barrier model (symmetric hopping model). Results from large scale computer simulations are presented...

  19. Randomized Item Response Theory Models

    NARCIS (Netherlands)

    Fox, Gerardus J.A.

    2005-01-01

    The randomized response (RR) technique is often used to obtain answers to sensitive questions. A new method is developed to measure latent variables using the RR technique, because direct questioning leads to biased results. Within the RR technique, the probability of the true response is modeled by…

  20. Multilevel random effect and marginal models

    African Journals Online (AJOL)

    Multilevel random effect and marginal models for longitudinal data ... and random effect models that take the correlation among measurements of the same subject ... comparing the level of redness, pain and irritability ... clinical trial evaluating the safety profile of a new .... likelihood-based methods to compare models and.

  1. Ising model of a randomly triangulated random surface as a definition of fermionic string theory

    International Nuclear Information System (INIS)

    Bershadsky, M.A.; Migdal, A.A.

    1986-01-01

    Fermionic degrees of freedom are added to randomly triangulated planar random surfaces. It is shown that the Ising model on a fixed graph is equivalent to a certain Majorana fermion theory on the dual graph. (orig.)

  2. Random matrices and random difference equations

    International Nuclear Information System (INIS)

    Uppuluri, V.R.R.

    1975-01-01

    Mathematical models leading to products of random matrices and random difference equations are discussed. A one-compartment model with random behavior is introduced, and it is shown how the average concentration in the discrete-time model converges to the exponential function. This is of relevance to understanding how radioactivity gets trapped in bone structure in blood-bone systems. The ideas are then generalized to two-compartment models and mammillary systems, where products of random matrices appear in a natural way. The appearance of products of random matrices in applications in demography and control theory is considered. Then random sequences motivated by the following problems are studied: constant pulsing and random decay models, random pulsing and constant decay models, and random pulsing and random decay models
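The convergence of the average concentration to an exponential can be checked in a few lines: each step keeps a random fraction R of the material, so the mean concentration after n steps is E[R]^n, i.e. exponential decay. The retention distribution below is an illustrative choice, not one from the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)
steps, paths = 40, 20_000
R = rng.uniform(0.8, 1.0, size=(paths, steps))   # random per-step retention
conc = np.cumprod(R, axis=1)                     # concentration along each path
avg = conc.mean(axis=0)                          # Monte Carlo average

expected = 0.9 ** np.arange(1, steps + 1)        # E[R] = 0.9, so E[c_n] = 0.9^n
max_rel_err = float(np.max(np.abs(avg - expected) / expected))
print(round(max_rel_err, 3))
```

Individual paths fluctuate widely, but the ensemble average tracks the exponential closely, which is the discrete analogue of the result described in the abstract.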

  3. A prospective randomized trial examining health care utilization in individuals using multiple smartphone-enabled biosensors

    Directory of Open Access Journals (Sweden)

    Cinnamon S. Bloss

    2016-01-01

    Background. Mobile health and digital medicine technologies are becoming increasingly used by individuals with common, chronic diseases to monitor their health. Numerous devices, sensors, and apps are available to patients and consumers, some of which have been shown to lead to improved health management and health outcomes. However, no randomized controlled trials have been conducted which examine health care costs, and most have failed to provide study participants with a truly comprehensive monitoring system. Methods. We conducted a prospective randomized controlled trial of adults who had submitted a 2012 health insurance claim associated with hypertension, diabetes, and/or cardiac arrhythmia. The intervention involved receipt of one or more mobile devices that corresponded to their condition(s) (hypertension: Withings Blood Pressure Monitor; diabetes: Sanofi iBGStar Blood Glucose Meter; arrhythmia: AliveCor Mobile ECG) and an iPhone with linked tracking applications for a period of 6 months; the control group received a standard disease management program. Moreover, intervention participants received access to an online health management system which provided detailed device tracking information over the course of the study. This monitoring system was designed by leveraging collaborations with device manufacturers, a connected health leader, a health care provider, and an employee wellness program, making it both unique and inclusive. We hypothesized that health resource utilization with respect to health insurance claims might be influenced by the monitoring intervention. We also examined health self-management. Results & Conclusions. There was little evidence of differences in health care costs or utilization as a result of the intervention. Furthermore, we found evidence that the control and intervention groups were equivalent with respect to most health care utilization outcomes. This result suggests there are not large…

  4. Recent advances in modeling nutrient utilization in ruminants.

    Science.gov (United States)

    Kebreab, E; Dijkstra, J; Bannink, A; France, J

    2009-04-01

    Mathematical modeling techniques have been applied to study various aspects of the ruminant, such as rumen function, postabsorptive metabolism, and product composition. This review focuses on advances made in modeling rumen fermentation and its associated rumen disorders, and energy and nutrient utilization and excretion with respect to environmental issues. Accurate prediction of fermentation stoichiometry has an impact on estimating the type of energy-yielding substrate available to the animal, and the ratio of lipogenic to glucogenic VFA is an important determinant of methanogenesis. Recent advances in modeling VFA stoichiometry offer ways for dietary manipulation to shift the fermentation in favor of glucogenic VFA. Increasing energy to the animal by supplementing with starch can lead to health problems such as subacute rumen acidosis caused by rumen pH depression. Mathematical models have been developed to describe changes in rumen pH and rumen fermentation. Models that relate rumen temperature to rumen pH have also been developed and have the potential to aid in the diagnosis of subacute rumen acidosis. The effect of pH has been studied mechanistically, and in such models fractional passage rate has a large impact on substrate degradation and microbial efficiency in the rumen and should be an important theme in future studies. Estimates of the efficiency with which energy is utilized by ruminants have been updated in recent studies. Mechanistic models of N utilization indicate that reducing dietary protein concentration, matching protein degradability to the microbial requirement, and increasing the energy status of the animal will reduce the output of N as waste. Recent mechanistic P models calculate the P requirement by taking into account P recycled through saliva and endogenous losses. Mechanistic P models suggest reducing current P amounts for lactating dairy cattle to at least 0.35% P in the diet, with a potential reduction of up to 1.3 kt/yr.

  5. Discrete choice models with multiplicative error terms

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Bierlaire, Michel

    2009-01-01

    The conditional indirect utility of many random utility maximization (RUM) discrete choice models is specified as a sum of an index V depending on observables and an independent random term ε. In general, the universe of RUM consistent models is much larger, even fixing some specification of V due...

  6. Blastocyst utilization rates after continuous culture in two commercial single-step media: a prospective randomized study with sibling oocytes.

    Science.gov (United States)

    Sfontouris, Ioannis A; Kolibianakis, Efstratios M; Lainas, George T; Venetis, Christos A; Petsas, George K; Tarlatzis, Basil C; Lainas, Tryfon G

    2017-10-01

    The aim of this study is to determine whether blastocyst utilization rates differ after continuous culture in two different commercial single-step media. This is a paired randomized controlled trial with sibling oocytes conducted in infertility patients aged ≤40 years with ≥10 oocytes retrieved who were assigned to blastocyst culture and transfer. Retrieved oocytes were randomly allocated to continuous culture in either Sage one-step medium (Origio) or Continuous Single Culture (CSC) medium (Irvine Scientific) without medium renewal up to day 5 post oocyte retrieval. The main outcome measure was the proportion of embryos suitable for clinical use (utilization rate). A total of 502 oocytes from 33 women were randomly allocated to continuous culture in either Sage one-step medium (n = 250) or CSC medium (n = 252). Fertilization was performed by either in vitro fertilization or intracytoplasmic sperm injection, and embryo transfers were performed on day 5. Two patients had all blastocysts frozen due to the occurrence of severe ovarian hyperstimulation syndrome. Fertilization and cleavage rates, as well as embryo quality on day 3, were similar in the two media. Blastocyst utilization rates (%, 95% CI) [55.4% (46.4-64.1) vs 54.7% (44.9-64.6), p = 0.717], blastocyst formation rates [53.6% (44.6-62.5) vs 51.9% (42.2-61.6), p = 0.755], and the proportion of good quality blastocysts [36.8% (28.1-45.4) vs 36.1% (27.2-45.0), p = 0.850] were similar in Sage one-step and CSC media, respectively. Continuous culture of embryos in Sage one-step and CSC media is associated with similar blastocyst development and utilization rates. Both single-step media appear to provide adequate support during in vitro preimplantation embryo development. Whether these observations are also valid for other continuous single medium protocols remains to be determined. NCT02302638.

  7. Zero-inflated count models for longitudinal measurements with heterogeneous random effects.

    Science.gov (United States)

    Zhu, Huirong; Luo, Sheng; DeSantis, Stacia M

    2017-08-01

    Longitudinal zero-inflated count data arise frequently in substance use research when assessing the effects of behavioral and pharmacological interventions. Zero-inflated count models (e.g. zero-inflated Poisson or zero-inflated negative binomial) with random effects have been developed to analyze this type of data. In random effects zero-inflated count models, the random effects covariance matrix is typically assumed to be homogeneous (constant across subjects). However, in many situations this matrix may be heterogeneous (differ by measured covariates). In this paper, we extend zero-inflated count models to account for random effects heterogeneity by modeling their variance as a function of covariates. We show via simulation that ignoring intervention and covariate-specific heterogeneity can produce biased estimates of covariate and random effect estimates. Moreover, those biased estimates can be rectified by correctly modeling the random effects covariance structure. The methodological development is motivated by and applied to the Combined Pharmacotherapies and Behavioral Interventions for Alcohol Dependence (COMBINE) study, the largest clinical trial of alcohol dependence performed in the United States with 1383 individuals.
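
    A minimal simulation can illustrate the kind of heterogeneity the abstract describes. The sketch below (hypothetical parameters, standard-library Python only) generates longitudinal zero-inflated Poisson counts with a subject-level random intercept whose standard deviation depends on a binary intervention covariate; the between-subject variability then differs by group, which is exactly the structure a homogeneous random-effects model would miss.

```python
import math
import random
import statistics

random.seed(42)

def poisson(lam):
    """Poisson draw by inversion (adequate for small lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def simulate_zip_subject(pi_zero, beta0, sigma_b, n_obs):
    """One subject's longitudinal zero-inflated Poisson counts:
    structural zeros with probability pi_zero, otherwise Poisson counts
    with a subject-specific mean exp(beta0 + b), b ~ N(0, sigma_b^2)."""
    b = random.gauss(0.0, sigma_b)
    lam = math.exp(beta0 + b)
    return [0 if random.random() < pi_zero else poisson(lam)
            for _ in range(n_obs)]

# Heterogeneous random effects: the random-intercept SD depends on a
# (hypothetical) intervention indicator, i.e. the random-effects
# variance is modeled as a function of a covariate.
sigma_by_group = {0: 0.2, 1: 0.8}
subject_means = {0: [], 1: []}
for g in (0, 1):
    for _ in range(400):
        y = simulate_zip_subject(0.3, 1.0, sigma_by_group[g], 10)
        subject_means[g].append(sum(y) / len(y))

# Between-subject variability differs sharply by group; a homogeneous
# random-effects model would average over this difference.
print(round(statistics.variance(subject_means[0]), 2))
print(round(statistics.variance(subject_means[1]), 2))
```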

  8. The hard-core model on random graphs revisited

    International Nuclear Information System (INIS)

    Barbier, Jean; Krzakala, Florent; Zhang, Pan; Zdeborová, Lenka

    2013-01-01

    We revisit the classical hard-core model, also known as the independent set problem and dual to the vertex cover problem, where one puts particles with a first-neighbor hard-core repulsion on the vertices of a random graph. Although the cases of random graphs with small and with very large average degrees are quite well understood, they yield qualitatively different results, and our aim here is to reconcile these two cases. We revisit results that can be obtained using the (heuristic) cavity method and show that it provides a closed-form conjecture for the exact density of the densest packing on random regular graphs with degree K ≥ 20, and that for K > 16 the nature of the phase transition is the same as for large K. This also shows that the hard-core model is the simplest mean-field lattice model for structural glasses and jamming
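
    The densest-packing densities quoted above come from the cavity method; a quick numerical counterpart is a greedy packing, which only lower-bounds the optimum. The sketch below (standard-library Python; a sparse Erdős-Rényi graph stands in for the random regular graphs of the paper) places particles subject to the first-neighbor hard-core exclusion:

```python
import random

random.seed(1)

def greedy_independent_set(n, adj):
    """Visit vertices in random order; add a vertex whenever none of its
    neighbours has been added (first-neighbor hard-core exclusion)."""
    order = list(range(n))
    random.shuffle(order)
    chosen = set()
    for v in order:
        if not any(u in chosen for u in adj[v]):
            chosen.add(v)
    return chosen

# Sparse Erdos-Renyi graph with average degree c; greedy gives a valid
# hard-core configuration but not the densest one, so its density is a
# lower bound on the cavity-method packing density.
n, c = 1000, 3.0
p = c / (n - 1)
adj = {v: set() for v in range(n)}
for u in range(n):
    for v in range(u + 1, n):
        if random.random() < p:
            adj[u].add(v)
            adj[v].add(u)

density = len(greedy_independent_set(n, adj)) / n
print(round(density, 3))  # greedy packing density (a lower bound)
```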

  9. Application of Poisson random effect models for highway network screening.

    Science.gov (United States)

    Jiang, Ximiao; Abdel-Aty, Mohamed; Alamili, Samer

    2014-02-01

    In recent years, Bayesian random effect models that account for the temporal and spatial correlations of crash data have become popular in traffic safety research. This study employs random effect Poisson log-normal models for crash risk hotspot identification. Both the temporal and spatial correlations of crash data were considered. The Potential for Safety Improvement (PSI) was adopted as a measure of crash risk. Using the fatal and injury crashes that occurred on urban 4-lane divided arterials from 2006 to 2009 in the Central Florida area, the random effect approaches were compared to the traditional Empirical Bayesian (EB) method and the conventional Bayesian Poisson log-normal model. A series of method examination tests were conducted to evaluate the performance of the different approaches. These tests include the previously developed site consistency test, method consistency test, total rank difference test, and the modified total score test, as well as the newly proposed total safety performance measure difference test. Results show that the Bayesian Poisson model accounting for both temporal and spatial random effects (PTSRE) outperforms the model with only a temporal random effect, and both are superior to the conventional Poisson log-normal model (PLN) and the EB model in fitting the crash data. Additionally, the method evaluation tests indicate that the PTSRE model is significantly superior to the PLN model and the EB model in consistently identifying hotspots during successive time periods. The results suggest that the PTSRE model is a superior alternative for road site crash risk hotspot identification.
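
    For context, the traditional Empirical Bayesian (EB) method that the study benchmarks against has a simple textbook form: the site estimate shrinks the observed crash count toward the safety-performance-function prediction, and PSI is the excess of the EB estimate over that prediction. A sketch with illustrative numbers only:

```python
def eb_estimate(observed, predicted, phi):
    """Empirical Bayes (Poisson-Gamma) shrinkage of an observed crash
    count toward the safety-performance-function prediction for similar
    sites; phi is the overdispersion parameter of the gamma prior."""
    w = 1.0 / (1.0 + predicted / phi)
    return w * predicted + (1.0 - w) * observed

def psi(observed, predicted, phi):
    """Potential for Safety Improvement: EB estimate minus the
    prediction for similar sites; positive values flag hotspots."""
    return eb_estimate(observed, predicted, phi) - predicted

# A site with 12 observed crashes where 5.0 are predicted (phi = 2.0):
print(round(psi(12, 5.0, 2.0), 3))  # -> 5.0 excess crashes
```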

  10. Random matrix model for disordered conductors

    Indian Academy of Sciences (India)

    In the interpretation of transport properties of mesoscopic systems, the multichannel ... One defines the random matrix model with N eigenvalues λ ... With heuristic arguments, using the ideas pertaining to the Dyson Coulomb gas analogy, ...

  11. Analog model for quantum gravity effects: phonons in random fluids.

    Science.gov (United States)

    Krein, G; Menezes, G; Svaiter, N F

    2010-09-24

    We describe an analog model for quantum gravity effects in condensed matter physics. The situation discussed is that of phonons propagating in a fluid with a random velocity wave equation. We consider that there are random fluctuations in the reciprocal of the bulk modulus of the system and study free phonons in the presence of Gaussian colored noise with zero mean. We show that, in this model, after performing the random averages over the noise function a free conventional scalar quantum field theory describing free phonons becomes a self-interacting model.

  12. Utility of Social Modeling for Proliferation Assessment - Preliminary Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Coles, Garill A.; Gastelum, Zoe N.; Brothers, Alan J.; Thompson, Sandra E.

    2009-06-01

    This Preliminary Assessment draft report will present the results of a literature search and preliminary assessment of the body of research, analysis methods, models and data deemed to be relevant to the Utility of Social Modeling for Proliferation Assessment research. This report will provide: 1) a description of the problem space and the kinds of information pertinent to the problem space, 2) a discussion of key relevant or representative literature, 3) a discussion of models and modeling approaches judged to be potentially useful to the research, and 4) the next steps of this research that will be pursued based on this preliminary assessment. This draft report represents a technical deliverable for the NA-22 Simulations, Algorithms, and Modeling (SAM) program. Specifically this draft report is the Task 1 deliverable for project PL09-UtilSocial-PD06, Utility of Social Modeling for Proliferation Assessment. This project investigates non-traditional use of social and cultural information to improve nuclear proliferation assessment, including nonproliferation assessment, proliferation resistance assessments, safeguards assessments and other related studies. These assessments often use and create technical information about the State’s posture towards proliferation, the vulnerability of a nuclear energy system to an undesired event, and the effectiveness of safeguards. This project will find and fuse social and technical information by explicitly considering the role of cultural, social and behavioral factors relevant to proliferation. The aim of this research is to describe and demonstrate if and how social science modeling has utility in proliferation assessment.

  13. Modeling random combustion of lycopodium particles and gas

    Directory of Open Access Journals (Sweden)

    M Bidabadi

    2016-06-01

    Full Text Available The random modeling of lycopodium particle combustion has been studied by many authors. In this paper, we extend this model and develop a different method by analyzing the effect of randomly distributed sources of combustible mixture. The flame structure is assumed to consist of a preheat-vaporization zone, a reaction zone and finally a post flame zone. We divide the preheat zone into several sections and assume that the distribution of particles across these sections is random. Meanwhile, it is presumed that the fuel particles vaporize first to yield gaseous fuel; in other words, most of the fuel particles are vaporized at the end of the preheat zone. It is assumed that the Zel'dovich number is large; therefore, the reaction term in the preheat zone is negligible. In this work, the effect of random particle distribution in the preheat zone on combustion characteristics, such as burning velocity and flame temperature for different particle radii, is obtained.

  14. Quincke random walkers

    Science.gov (United States)

    Pradillo, Gerardo; Heintz, Aneesh; Vlahovska, Petia

    2017-11-01

    The spontaneous rotation of a sphere in an applied uniform DC electric field (Quincke effect) has been utilized to engineer self-propelled particles: if the sphere is initially resting on a surface, it rolls. The Quincke rollers have been widely used as a model system to study collective behavior in ``active'' suspensions. If the applied field is DC, an isolated Quincke roller follows a straight-line trajectory. In this talk, we discuss the design of a Quincke roller that executes a random-walk-like behavior. We utilize an AC field: upon reversal of the field direction, a fluctuation in the axis of rotation (which is degenerate in the plane perpendicular to the field and parallel to the surface) introduces randomness in the direction of motion. The mean squared displacement (MSD) of an isolated Quincke walker depends on the frequency, amplitude, and waveform of the electric field. Experiment and theory are compared. We also investigate the collective behavior of Quincke walkers, the transport of inert particles in a bath of Quincke walkers, and the spontaneous motion of a drop containing Quincke active particles. Supported by NSF Grant CBET 1437545.
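
    A toy model of such a walker is easy to simulate: straight runs between field reversals, with each reversal perturbing the heading by a random angle. The sketch below (hypothetical parameters, not fitted to the experiments) shows the resulting diffusive growth of the MSD with lag time:

```python
import math
import random

random.seed(7)

def quincke_walk(n_reversals, run_length, turn_sigma):
    """Toy Quincke walker: roll in a straight line at constant speed
    between field reversals; each reversal kicks the heading by a
    Gaussian random angle (all parameters hypothetical)."""
    x = y = 0.0
    theta = random.uniform(0.0, 2.0 * math.pi)
    path = [(x, y)]
    for _ in range(n_reversals):
        theta += random.gauss(0.0, turn_sigma)
        x += run_length * math.cos(theta)
        y += run_length * math.sin(theta)
        path.append((x, y))
    return path

def msd(paths, lag):
    """Mean squared displacement at a given lag over an ensemble."""
    d2 = [(p[lag][0] - p[0][0]) ** 2 + (p[lag][1] - p[0][1]) ** 2
          for p in paths]
    return sum(d2) / len(d2)

paths = [quincke_walk(200, 1.0, 1.5) for _ in range(500)]
# MSD grows with lag, linearly at long lags (diffusive behavior).
print(round(msd(paths, 1), 2), round(msd(paths, 10), 2),
      round(msd(paths, 100), 2))
```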

  15. Stock Selection for Portfolios Using Expected Utility-Entropy Decision Model

    Directory of Open Access Journals (Sweden)

    Jiping Yang

    2017-09-01

    Yang and Qiu proposed and then recently improved an expected utility-entropy (EU-E) measure of risk and decision model. When segregation holds, Luce et al. derived a representation of risky choices consisting of an expected utility term plus a constant multiplied by the Shannon entropy, further demonstrating the reasonability of the EU-E decision model. In this paper, we apply the EU-E decision model to selecting the set of stocks to be included in portfolios. We first select 7 and 10 stocks from the 30 component stocks of the Dow Jones Industrial Average index, and then derive and compare the efficient portfolios in the mean-variance framework. The conclusions imply that efficient portfolios composed of 7 (10) stocks selected using the EU-E model with intermediate intervals of the tradeoff coefficients are more efficient than those composed of the sets of stocks selected using the expected utility model. Furthermore, the efficient portfolio of 7 (10) stocks selected by the EU-E decision model has almost the same efficient frontier as that of the sample of all stocks. This suggests the necessity of incorporating both the expected utility and Shannon entropy together when making risky decisions, further demonstrating the importance of Shannon entropy as a measure of uncertainty, as well as the applicability of the EU-E model as a decision-making model.
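
    The core tradeoff can be sketched in a few lines: score a risky prospect by its expected utility minus an entropy penalty, weighted by a tradeoff coefficient. The linear form below is a simplified stand-in for the EU-E measure of Yang and Qiu, not the paper's exact normalization:

```python
import math

def expected_utility(outcomes, probs, u=lambda x: x):
    """E[u(X)] for a discrete prospect."""
    return sum(p * u(x) for x, p in zip(outcomes, probs))

def shannon_entropy(probs):
    """H(p) in nats."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def eu_entropy_score(outcomes, probs, lam, u=lambda x: x):
    """Linear tradeoff between expected utility (return) and Shannon
    entropy (uncertainty). A simplified stand-in for the EU-E measure;
    the paper's normalization is not reproduced here."""
    return (lam * expected_utility(outcomes, probs, u)
            - (1.0 - lam) * shannon_entropy(probs))

# Two hypothetical assets with the same expected return: a sure 4% gain
# versus an uncertain 50/50 prospect. The entropy term penalizes the
# uncertain one for any tradeoff coefficient lam < 1.
sure = eu_entropy_score([0.04], [1.0], lam=0.7)
risky = eu_entropy_score([0.10, -0.02], [0.5, 0.5], lam=0.7)
print(round(sure, 4), round(risky, 4))
```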

  16. A generalized model via random walks for information filtering

    Energy Technology Data Exchange (ETDEWEB)

    Ren, Zhuo-Ming, E-mail: zhuomingren@gmail.com [Department of Physics, University of Fribourg, Chemin du Musée 3, CH-1700, Fribourg (Switzerland); Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, ChongQing, 400714 (China); Kong, Yixiu [Department of Physics, University of Fribourg, Chemin du Musée 3, CH-1700, Fribourg (Switzerland); Shang, Ming-Sheng, E-mail: msshang@cigit.ac.cn [Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, ChongQing, 400714 (China); Zhang, Yi-Cheng [Department of Physics, University of Fribourg, Chemin du Musée 3, CH-1700, Fribourg (Switzerland)

    2016-08-06

    There could exist a simple general mechanism lurking beneath collaborative filtering and the interdisciplinary physics approaches that have been successfully applied to online E-commerce platforms. Motivated by this idea, we propose a generalized model employing the dynamics of the random walk in bipartite networks. Taking into account the degree information, the proposed generalized model can recover collaborative filtering, the interdisciplinary physics approaches, and even broad extensions of them. Furthermore, we analyze the generalized model with single and hybrid degree information in the random walk process on bipartite networks, and propose a possible strategy that uses hybrid degree information for objects of different popularity to improve the precision of recommendation. - Highlights: • We propose a generalized recommendation model employing the random walk dynamics. • The proposed model with single and hybrid degree information is analyzed. • A strategy with hybrid degree information improves the precision of recommendation.
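
    The simplest member of this family of random-walk models is one round of mass diffusion on the user-object bipartite network (often called ProbS): resource placed on the target user's objects spreads to users and back to objects, weighted by the reciprocal degrees at each hop. A sketch on a toy network (hypothetical data):

```python
from collections import defaultdict

# Toy user-object bipartite network (hypothetical data).
collected = {
    "u1": {"A", "B"},
    "u2": {"B", "C"},
    "u3": {"A", "C", "D"},
}

def probs_scores(collected, target):
    """One round of mass diffusion (the ProbS random-walk kernel): unit
    resource on the target's objects spreads to users in proportion to
    1/k(object), then back to objects in proportion to 1/k(user)."""
    obj_deg = defaultdict(int)
    for objs in collected.values():
        for o in objs:
            obj_deg[o] += 1
    # Step 1: objects -> users.
    user_res = defaultdict(float)
    for o in collected[target]:
        for u, objs in collected.items():
            if o in objs:
                user_res[u] += 1.0 / obj_deg[o]
    # Step 2: users -> objects.
    scores = defaultdict(float)
    for u, r in user_res.items():
        for o in collected[u]:
            scores[o] += r / len(collected[u])
    return dict(scores)

s = probs_scores(collected, "u1")
# Recommend objects u1 has not collected, ranked by diffused resource.
recs = sorted((o for o in s if o not in collected["u1"]),
              key=lambda o: -s[o])
print(recs)  # -> ['C', 'D']
```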

  17. A generalized model via random walks for information filtering

    International Nuclear Information System (INIS)

    Ren, Zhuo-Ming; Kong, Yixiu; Shang, Ming-Sheng; Zhang, Yi-Cheng

    2016-01-01

    There could exist a simple general mechanism lurking beneath collaborative filtering and the interdisciplinary physics approaches that have been successfully applied to online E-commerce platforms. Motivated by this idea, we propose a generalized model employing the dynamics of the random walk in bipartite networks. Taking into account the degree information, the proposed generalized model can recover collaborative filtering, the interdisciplinary physics approaches, and even broad extensions of them. Furthermore, we analyze the generalized model with single and hybrid degree information in the random walk process on bipartite networks, and propose a possible strategy that uses hybrid degree information for objects of different popularity to improve the precision of recommendation. - Highlights: • We propose a generalized recommendation model employing the random walk dynamics. • The proposed model with single and hybrid degree information is analyzed. • A strategy with hybrid degree information improves the precision of recommendation.

  18. A random energy model for size dependence : recurrence vs. transience

    NARCIS (Netherlands)

    Külske, Christof

    1998-01-01

    We investigate the size dependence of disordered spin models having an infinite number of Gibbs measures in the framework of a simplified 'random energy model for size dependence'. We introduce two versions (involving either independent random walks or branching processes), that can be seen as

  19. Mathematical models for estimating radio channels utilization

    African Journals Online (AJOL)

    2017-08-08

    Aug 8, 2017 ... Mathematical models for radio channels utilization assessment by real-time flows transfer in ... data transmission networks application having dynamic topology ..... Journal of Applied Mathematics and Statistics, 56(2): 85–90.

  20. Stochastic and Statistical Analysis of Utility Revenues and Weather Data Analysis for Consumer Demand Estimation in Smart Grids.

    Science.gov (United States)

    Ali, S M; Mehmood, C A; Khan, B; Jawad, M; Farid, U; Jadoon, J K; Ali, M; Tareen, N K; Usman, S; Majid, M; Anwar, S M

    2016-01-01

    In the smart grid paradigm, consumer demands are random and time-dependent, giving rise to stochastic behavior. The stochastically varying consumer demands have put policy makers and supplying agencies in a demanding position for optimal generation management. The utility revenue functions are highly dependent on the consumer stochastic demand models. Sudden drifts in weather parameters affect the living standards of consumers, which in turn influence power demands. Considering the above, we analyzed stochastically and statistically the effect of random consumer demands on the fixed and variable revenues of electrical utilities. Our work presents a Multi-Variate Gaussian Distribution Function (MVGDF) probabilistic model of utility revenues with time-dependent random consumer demands. Moreover, the Gaussian probability outcome of the utility revenues is based on the varying consumer demand data patterns. Furthermore, Standard Monte Carlo (SMC) simulations are performed to validate the accuracy of the aforesaid probabilistic demand-revenue model. We critically analyzed the effect of weather data parameters on consumer demands using correlation and multi-linear regression schemes. The statistical analysis of consumer demands provides a relationship between the dependent variable (demand) and independent variables (weather data) for utility load management, generation control, and network expansion.
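
    The Standard Monte Carlo validation step can be sketched directly: draw Gaussian random demands, map each through a fixed-plus-variable tariff, and read off the revenue mean and spread. All parameters below are hypothetical:

```python
import random
import statistics

random.seed(0)

def revenue_mc(n_draws, mean_demand, sd_demand, fixed_charge,
               price_per_kwh):
    """Monte Carlo sketch of utility revenue under Gaussian random
    consumer demand: revenue = fixed charge + price * demand, with
    demand truncated at zero. All parameters are hypothetical."""
    samples = []
    for _ in range(n_draws):
        demand = max(0.0, random.gauss(mean_demand, sd_demand))  # kWh
        samples.append(fixed_charge + price_per_kwh * demand)
    return statistics.mean(samples), statistics.stdev(samples)

# 900 kWh mean monthly demand, $15 fixed charge, $0.12/kWh.
mean_rev, sd_rev = revenue_mc(50_000, 900.0, 120.0, 15.0, 0.12)
# Analytically: mean = 15 + 0.12*900 = 123.0, sd = 0.12*120 = 14.4.
print(round(mean_rev, 1), round(sd_rev, 1))
```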

  1. Effects of random noise in a dynamical model of love

    Energy Technology Data Exchange (ETDEWEB)

    Xu Yong, E-mail: hsux3@nwpu.edu.cn [Department of Applied Mathematics, Northwestern Polytechnical University, Xi'an 710072 (China); Gu Rencai; Zhang Huiqing [Department of Applied Mathematics, Northwestern Polytechnical University, Xi'an 710072 (China)

    2011-07-15

    Highlights: > We model the complexity and unpredictability of psychology as Gaussian white noise. > The stochastic system of love is considered, including bifurcation and chaos. > We show that noise can both suppress and induce chaos in dynamical models of love. - Abstract: This paper aims to investigate a stochastic model of love and the effects of random noise. We first revisit the deterministic model of love and present some basic properties, such as symmetry, dissipation, fixed points (equilibria), chaotic behaviors and chaotic attractors. Then we construct a stochastic love-triangle model with parametric random excitation due to the complexity and unpredictability of the psychological system, where the randomness is modeled as standard Gaussian noise. Stochastic dynamics under three different cases of 'Romeo's romantic style' are examined, and two kinds of bifurcations versus the noise intensity parameter are observed by the criteria of changes in the top Lyapunov exponent and the shape of the stationary probability density function (PDF), respectively. Phase portraits and time histories are computed to verify the proposed results, and good agreement is found. The dual roles of random noise, namely suppressing and inducing chaos, are also revealed.
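
    Numerically, such stochastic models are integrated with the Euler-Maruyama scheme. The sketch below uses the classic two-variable linear love model with additive Gaussian white noise, a simplification: the paper studies a love-triangle model with parametric excitation, and the coefficients here are hypothetical.

```python
import math
import random

random.seed(3)

def simulate(a, b, c, d, noise, dt, steps):
    """Euler-Maruyama integration of the two-variable linear love model
    dR = (a*R + b*J) dt + noise dW_R, dJ = (c*R + d*J) dt + noise dW_J.
    Additive noise is a simplification of the parametric excitation in
    the paper; coefficients are hypothetical."""
    R, J = 1.0, 0.0
    traj = [(R, J)]
    for _ in range(steps):
        dW_R = random.gauss(0.0, math.sqrt(dt))
        dW_J = random.gauss(0.0, math.sqrt(dt))
        R, J = (R + (a * R + b * J) * dt + noise * dW_R,
                J + (c * R + d * J) * dt + noise * dW_J)
        traj.append((R, J))
    return traj

# A stable spiral (eigenvalues -0.2 +/- 0.8i) perturbed by weak noise:
# trajectories fluctuate around the origin instead of settling on it.
traj = simulate(a=-0.2, b=0.8, c=-0.8, d=-0.2,
                noise=0.3, dt=0.01, steps=5000)
print(len(traj))
```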

  2. Effects of random noise in a dynamical model of love

    International Nuclear Information System (INIS)

    Xu Yong; Gu Rencai; Zhang Huiqing

    2011-01-01

    Highlights: → We model the complexity and unpredictability of psychology as Gaussian white noise. → The stochastic system of love is considered, including bifurcation and chaos. → We show that noise can both suppress and induce chaos in dynamical models of love. - Abstract: This paper aims to investigate a stochastic model of love and the effects of random noise. We first revisit the deterministic model of love and present some basic properties, such as symmetry, dissipation, fixed points (equilibria), chaotic behaviors and chaotic attractors. Then we construct a stochastic love-triangle model with parametric random excitation due to the complexity and unpredictability of the psychological system, where the randomness is modeled as standard Gaussian noise. Stochastic dynamics under three different cases of 'Romeo's romantic style' are examined, and two kinds of bifurcations versus the noise intensity parameter are observed by the criteria of changes in the top Lyapunov exponent and the shape of the stationary probability density function (PDF), respectively. Phase portraits and time histories are computed to verify the proposed results, and good agreement is found. The dual roles of random noise, namely suppressing and inducing chaos, are also revealed.

  3. Using Random Forest Models to Predict Organizational Violence

    Science.gov (United States)

    Levine, Burton; Bobashev, Georgly

    2012-01-01

    We present a methodology to assess the proclivity of an organization to commit violence against nongovernment personnel. We fitted a Random Forest model using the Minority at Risk Organizational Behavior (MAROS) dataset. The MAROS data are longitudinal, so individual observations are not independent. We propose a modification to the standard Random Forest methodology to account for the violation of the independence assumption. We present the results of the model fit, an example of predicting violence for an organization, and finally, a summary of the forest in a "meta-tree."
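
    One common way to adapt bagging-based methods such as Random Forests to longitudinal data is to bootstrap at the organization level rather than the observation level, so that an organization's repeated observations stay together. The sketch below illustrates that idea; it is not necessarily the exact modification the authors used:

```python
import random

random.seed(5)

def group_bootstrap(rows, group_key):
    """Bootstrap resampling at the organization (group) level rather
    than the row level: sample organizations with replacement and keep
    each sampled organization's longitudinal rows together, so that
    within-organization dependence is preserved inside each tree's
    training sample."""
    groups = {}
    for r in rows:
        groups.setdefault(r[group_key], []).append(r)
    keys = list(groups)
    sample = []
    for _ in range(len(keys)):
        k = random.choice(keys)      # sample an organization
        sample.extend(groups[k])     # take all of its observations
    return sample

# Three hypothetical organizations observed over three years each.
rows = [{"org": o, "year": y} for o in "ABC" for y in (2001, 2002, 2003)]
boot = group_bootstrap(rows, "org")
# Each sampled organization contributes its complete set of rows.
print(len(boot))  # -> 9
```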

  4. Alternative model of random surfaces

    International Nuclear Information System (INIS)

    Ambartzumian, R.V.; Sukiasian, G.S.; Savvidy, G.K.; Savvidy, K.G.

    1992-01-01

    We analyse models of triangulated random surfaces and demand that geometrically nearby configurations of these surfaces must have close actions. The inclusion of this principle leads us to suggest a new action, which is a modified Steiner functional. General arguments, based on the Minkowski inequality, show that the maximal contribution to the partition function comes from surfaces close to the sphere. (orig.)

  5. Premium Pricing of Liability Insurance Using Random Sum Model

    Directory of Open Access Journals (Sweden)

    Mujiati Dwi Kartikasari

    2017-03-01

    Premium pricing is one of the important activities in insurance. The nonlife insurance premium is calculated from the expected value of historical claim data. The historical claims are collected so that they form a sum of a random number of independent variables, which is called a random sum. In premium pricing using a random sum, the claim frequency distribution and the claim severity distribution are combined. The combination of these distributions is called a compound distribution. Using liability insurance claim data, we analyze premium pricing using a random sum model based on the compound distribution
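
    The random-sum model is straightforward to check by simulation: draw a Poisson number of claims per period, draw each severity from a lognormal, and compare the simulated mean aggregate loss with the analytic pure premium E[S] = E[N]·E[X]. The distributions and parameters below are a hypothetical sketch, not the paper's fitted model:

```python
import math
import random
import statistics

random.seed(11)

def poisson(lam):
    """Poisson draw by inversion (adequate for small lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def simulate_aggregate_claims(lam, mu, sigma, n_periods):
    """Random-sum (compound) model: S = X_1 + ... + X_N with Poisson
    claim frequency N and lognormal claim severity X."""
    totals = []
    for _ in range(n_periods):
        n = poisson(lam)
        totals.append(sum(random.lognormvariate(mu, sigma)
                          for _ in range(n)))
    return totals

lam, mu, sigma = 2.0, 0.0, 0.5
totals = simulate_aggregate_claims(lam, mu, sigma, 100_000)
# Pure premium from Wald's identity: E[S] = E[N] * E[X].
analytic = lam * math.exp(mu + sigma ** 2 / 2)
print(round(statistics.mean(totals), 3), round(analytic, 3))
```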

  6. Random matrix approach to plasmon resonances in the random impedance network model of disordered nanocomposites

    Science.gov (United States)

    Olekhno, N. A.; Beltukov, Y. M.

    2018-05-01

    Random impedance networks are widely used as a model to describe plasmon resonances in disordered metal-dielectric and other two-component nanocomposites. In the present work, the spectral properties of resonances in random networks are studied within the framework of random matrix theory. We have shown that the appropriate ensemble of random matrices for the considered problem is the Jacobi ensemble (the MANOVA ensemble). The obtained analytical expressions for the density of states in such resonant networks show good agreement with the results of numerical simulations in a wide range of metal filling fractions 0

  7. Language Recognition Using Latent Dynamic Conditional Random Field Model with Phonological Features

    Directory of Open Access Journals (Sweden)

    Sirinoot Boonsuk

    2014-01-01

    Spoken language recognition (SLR) has been of increasing interest in multilingual speech recognition for identifying the languages of speech utterances. Most existing SLR approaches apply statistical modeling techniques with acoustic and phonotactic features. Among the popular approaches, the acoustic approach has become of greater interest than others because it does not require any prior language-specific knowledge. Previous research on the acoustic approach has shown less interest in applying linguistic knowledge, which was only used as supplementary features, while the current state-of-the-art system assumes independence among features. This paper proposes an SLR system based on the latent-dynamic conditional random field (LDCRF) model using phonological features (PFs). We use PFs to represent acoustic characteristics and linguistic knowledge. The LDCRF model was employed to capture the dynamics of the PF sequences for language classification. Baseline systems were conducted to evaluate the features and methods, including Gaussian mixture model (GMM) based systems using PFs, GMM using cepstral features, and the CRF model using PFs. Evaluated on the NIST LRE 2007 corpus, the proposed method showed an improvement over the baseline systems. Additionally, it showed comparable results to an i-vector-based acoustic system. This research demonstrates that utilizing PFs can enhance the performance.

  8. Utility of the sore throat pain model in a multiple-dose assessment of the acute analgesic flurbiprofen: a randomized controlled study.

    Science.gov (United States)

    Schachtel, Bernard; Aspley, Sue; Shephard, Adrian; Shea, Timothy; Smith, Gary; Schachtel, Emily

    2014-07-03

    The sore throat pain model has been conducted by different clinical investigators to demonstrate the efficacy of acute analgesic drugs in single-dose randomized clinical trials. The model used here was designed to study the multiple-dose safety and efficacy of lozenges containing flurbiprofen at 8.75 mg. Adults (n=198) with moderate or severe acute sore throat and findings of pharyngitis on a Tonsillo-Pharyngitis Assessment (TPA) were randomly assigned to use either flurbiprofen 8.75 mg lozenges (n=101) or matching placebo lozenges (n=97) under double-blind conditions. Patients sucked one lozenge every three to six hours as needed, up to five lozenges per day, and rated symptoms on 100-mm scales: the Sore Throat Pain Intensity Scale (STPIS), the Difficulty Swallowing Scale (DSS), and the Swollen Throat Scale (SwoTS). Reductions in pain (lasting for three hours) and in difficulty swallowing and throat swelling (for four hours) were observed after a single dose of the flurbiprofen 8.75 mg lozenge. Flurbiprofen-treated patients experienced a 59% greater reduction in throat pain, 45% less difficulty swallowing, and 44% less throat swelling than placebo-treated patients. Flurbiprofen 8.75 mg lozenges were shown to be an effective, well-tolerated treatment for sore throat pain. Other pharmacologic actions (reduced difficulty swallowing and reduced throat swelling) and overall patient satisfaction with the flurbiprofen lozenges were also demonstrated in this multiple-dose implementation of the sore throat pain model. This trial was registered with ClinicalTrials.gov, registration number: NCT01048866, registration date: January 13, 2010.

  9. A catastrophe model for the prospect-utility theory question.

    Science.gov (United States)

    Oliva, Terence A; McDade, Sean R

    2008-07-01

    Anomalies have played a big part in the analysis of decision making under risk. Both expected utility and prospect theories were born out of anomalies exhibited by actual decision making behavior. Since the same individual can use both expected utility and prospect approaches at different times, it seems there should be a means of uniting the two. This paper turns to nonlinear dynamical systems (NDS), specifically a catastrophe model, to suggest an 'out of the box' line of solution toward integration. We use a cusp model to create a value surface whose control dimensions are involvement and gains versus losses. Including 'involvement' as a variable captures the importance of the individual's psychological state, and it provides a rationale for how decision makers' shifts from expected utility to prospect behavior might occur. Additionally, it provides a possible explanation for the apparently even more irrational decisions that individuals make when highly emotionally involved. We estimate the catastrophe model using a sample of 997 gamblers who attended a casino and compare it to the linear model using regression. Hence, we have actual data from individuals making real bets under real conditions.
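
    The cusp geometry behind this argument can be made concrete with the canonical cusp potential V(x) = x^4/4 - b*x^2/2 - a*x: for low values of the splitting factor b there is a single equilibrium, while for high values there are three, so small changes along the other control axis a can trigger sudden jumps between branches. The mapping of a and b onto the paper's control dimensions (gains versus losses, involvement) is only suggestive:

```python
def cusp_equilibria(a, b, grid=20001, lo=-5.0, hi=5.0):
    """Equilibria of the cusp potential V(x) = x**4/4 - b*x**2/2 - a*x,
    located as sign changes of V'(x) = x**3 - b*x - a on a uniform grid.
    Here b plays the role of the splitting factor (involvement) and a
    the normal factor (gains versus losses)."""
    f = lambda x: x ** 3 - b * x - a
    roots = []
    step = (hi - lo) / (grid - 1)
    prev_x, prev_f = lo, f(lo)
    for i in range(1, grid):
        x = lo + i * step
        fx = f(x)
        if prev_f == 0.0 or prev_f * fx < 0.0:
            roots.append(0.5 * (prev_x + x))  # midpoint of the bracket
        prev_x, prev_f = x, fx
    return roots

# Low splitting factor: one equilibrium, a smooth graded response.
print(len(cusp_equilibria(a=0.1, b=-1.0)))  # -> 1
# High splitting factor: three equilibria (two stable, one unstable),
# so small changes in a can produce sudden jumps between branches.
print(len(cusp_equilibria(a=0.1, b=3.0)))   # -> 3
```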

  10. Nonlinear optical spectroscopy and microscopy of model random and biological media

    Science.gov (United States)

    Guo, Yici

    Nonlinear optical (NLO) spectroscopy and microscopy applied to biomedical science are emerging as new and rapidly growing areas which offer important insight into basic phenomena. Ultrafast NLO processes provide temporal, spectral and spatial sensitivities complementary or superior to those achieved through conventional linear optical approaches. The goal of this thesis is to explore the potential of two fundamental NLO processes to produce noninvasive histological maps of biological tissues. Within the goal of the thesis, steady state intensity, polarization and angular measurements of second- and third-harmonic generations (SHG, THG) have been performed on model random scattering and animal tissue samples. The nonlinear optical effects have been evaluated using models. Conversion efficiencies of SHG and THG from animal tissue interfaces have been determined, ranging from 10-7 to 10-10. The changes in the multiharmonic signals were found to depend on both local and overall histological structures of biological samples. The spectral signatures of two photon excitation induced fluorescence from intrinsic fluorophores have been acquired and used to characterize the physical state and types of tissues. Two dimensional scanning SHG and TPF tomographic images have been obtained from in vitro animal tissues, normal and diseased human breast tissues, and resolved subsurface layers and histo-chemical distributions. By combining consecutive 2D maps, a 3D image can be produced. The structure and morphology dependence of the SH signal has been utilized to image and evaluate subsurface tumor progression depth. Second harmonic microscopy in model random and biological cells has been studied using a CCD camera to obtain direct images from subcellular structures. 
Finally, near infrared (NIR) NLO spectroscopy and microscopy based on SHG and TPF have demonstrated high spatial resolution, deeper penetration depth, low photo-damage and enhanced morphological sensitivity for

  11. Recent developments in exponential random graph (p*) models for social networks

    NARCIS (Netherlands)

    Robins, Garry; Snijders, Tom; Wang, Peng; Handcock, Mark; Pattison, Philippa

    This article reviews new specifications for exponential random graph models proposed by Snijders et al. [Snijders, T.A.B., Pattison, P., Robins, G.L., Handcock, M., 2006. New specifications for exponential random graph models. Sociological Methodology] and demonstrates their improvement over

  12. Subjective Expected Utility: A Model of Decision-Making.

    Science.gov (United States)

    Fischoff, Baruch; And Others

    1981-01-01

    Outlines a model of decision making known to researchers in the field of behavioral decision theory (BDT) as subjective expected utility (SEU). The descriptive and predictive validity of the SEU model, probability and values assessment using SEU, and decision contexts are examined, and a 54-item reference list is provided. (JL)
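
    The SEU rule the abstract refers to can be stated in a few lines: score each option by the probability-weighted sum of the utilities of its outcomes, then choose the highest-scoring option. The options, probabilities and utility functions below are invented for illustration.

```python
# Minimal illustration of the subjective expected utility (SEU) rule:
# SEU(option) = sum_i p_i * u(x_i); choose the option with the highest SEU.
# All numbers here are made-up examples, not data from the reviewed literature.

def seu(option, utility):
    """option: list of (subjective probability, outcome) pairs."""
    return sum(p * utility(x) for p, x in option)

sure_thing = [(1.0, 100)]
gamble = [(0.5, 250), (0.5, 0)]

u_linear = lambda x: x            # risk-neutral valuation
u_concave = lambda x: x ** 0.5    # risk-averse valuation

print(seu(gamble, u_linear) > seu(sure_thing, u_linear))    # True: 125 > 100
print(seu(gamble, u_concave) > seu(sure_thing, u_concave))  # False: ~7.9 < 10
```

    The same pair of options flips preference under the two utility functions, which is the kind of values-assessment question BDT research examines.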

  13. Method and apparatus for determining random coincidence count rate in a scintillation counter utilizing the coincidence technique

    International Nuclear Information System (INIS)

    Horrocks, D.L.

    1980-01-01

    A method and apparatus for the reliable determination of a random coincidence count attributable to chance coincidences of single-photon events which are each detected in only a single detector of a scintillation counter utilizing two detectors in a coincidence counting technique are described. A first delay device is employed to delay output pulses from one detector, and then the delayed signal is compared with the undelayed signal from the other detector in a coincidence circuit, to obtain an approximate random coincidence count. The output of the coincidence circuit is applied to an anti-coincidence circuit, where it is corrected by elimination of pulses coincident with, and attributable to, conventionally detected real coincidences, and by elimination of pulses coincident with, and attributable to, real coincidences that have been delayed by a second delay device having the same time parameter as the first. 8 claims
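
    The delayed-coincidence idea can be sketched with a small Monte Carlo: two independent single-photon channels are compared within a resolving window tau, once directly and once with one channel delayed by D much greater than tau. Delayed coincidences can only be accidental, so their count estimates the random-coincidence contribution (expected rate near 2·tau·r1·r2). The rates, window and delay below are illustrative choices, not parameters of Horrocks' apparatus.

```python
import bisect
import random

# Monte Carlo sketch of delayed-coincidence estimation of accidental counts.
random.seed(42)
T, r1, r2 = 100.0, 20.0, 20.0        # observation time (s), channel rates (Hz)
tau, D = 1e-3, 0.5                   # resolving window and delay (s), D >> tau

ch1 = sorted(random.uniform(0, T) for _ in range(int(r1 * T)))
ch2 = sorted(random.uniform(0, T) for _ in range(int(r2 * T)))

def coincidences(a, b, window, delay=0.0):
    """Count events of `a` with at least one event of `b` within +/-window,
    after shifting channel `b` by `delay`."""
    shifted = [t + delay for t in b]
    n = 0
    for t in a:
        lo = bisect.bisect_left(shifted, t - window)
        hi = bisect.bisect_right(shifted, t + window)
        if hi > lo:
            n += 1
    return n

direct = coincidences(ch1, ch2, tau)       # purely accidental: channels are independent
delayed = coincidences(ch1, ch2, tau, D)   # delayed-channel estimate of the same quantity
expected = 2 * tau * r1 * r2 * T           # analytic expectation (~80 here)
print(direct, delayed, expected)
```

    Both the direct and the delayed counts should scatter around the analytic value, which is why the delayed channel can stand in for the accidental-coincidence background.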

  14. Evolution of the concentration PDF in random environments modeled by global random walk

    Science.gov (United States)

    Suciu, Nicolae; Vamos, Calin; Attinger, Sabine; Knabner, Peter

    2013-04-01

    The evolution of the probability density function (PDF) of concentrations of chemical species transported in random environments is often modeled by ensembles of notional particles. The particles move in physical space along stochastic-Lagrangian trajectories governed by Ito equations, with drift coefficients given by the local values of the resolved velocity field and diffusion coefficients obtained by stochastic or space-filtering upscaling procedures. A general model for the sub-grid mixing also can be formulated as a system of Ito equations solving for trajectories in the composition space. The PDF is finally estimated by the number of particles in space-concentration control volumes. In spite of their efficiency, Lagrangian approaches suffer from two severe limitations. Since the particle trajectories are constructed sequentially, the demanded computing resources increase linearly with the number of particles. Moreover, the need to gather particles at the center of computational cells to perform the mixing step and to estimate statistical parameters, as well as the interpolation of various terms to particle positions, inevitably produce numerical diffusion in either particle-mesh or grid-free particle methods. To overcome these limitations, we introduce a global random walk method to solve the system of Ito equations in physical and composition spaces, which models the evolution of the random concentration's PDF. The algorithm consists of a superposition on a regular lattice of many weak Euler schemes for the set of Ito equations. Since all particles starting from a site of the space-concentration lattice are spread in a single numerical procedure, one obtains PDF estimates at the lattice sites at computational costs comparable with those for solving the system of Ito equations associated to a single particle. The new method avoids the limitations concerning the number of particles in Lagrangian approaches, completely removes the numerical diffusion, and
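
    The core computational trick, spreading all particles at a lattice site in a single operation instead of tracking one trajectory at a time, can be shown on a 1-D toy lattice. This is a deliberately minimal sketch (unbiased diffusion only, no composition space, invented sizes), far simpler than the physical-plus-concentration-space algorithm described above.

```python
import random

# Minimal global random walk (GRW) sketch on a 1-D periodic lattice:
# the cost per time step scales with the number of occupied sites, not
# with the number of particles tracked.
random.seed(1)

def binomial(n, p):
    """Stdlib-only binomial draw (adequate for the small n used here)."""
    return sum(random.random() < p for _ in range(n))

L, steps = 81, 30
sites = [0] * L
sites[L // 2] = 10_000                 # all particles start at the centre site

for _ in range(steps):
    new = [0] * L
    for i, n in enumerate(sites):
        if n == 0:
            continue
        left = binomial(n, 0.5)        # unbiased weak Euler step: split left/right
        new[(i - 1) % L] += left
        new[(i + 1) % L] += n - left
    sites = new

print(sum(sites))                      # -> 10000: particle number conserved exactly
```

    Because each site's occupants are moved by one binomial draw, particle number is conserved exactly and no numerical diffusion is introduced by interpolation to off-lattice positions.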

  15. A utility-theoretic model for QALYs and willingness to pay.

    Science.gov (United States)

    Klose, Thomas

    2003-01-01

    Despite the widespread use of quality-adjusted life years (QALY) in economic evaluation studies, their utility-theoretic foundation remains unclear. A model for preferences over health, money, and time is presented in this paper. Under the usual assumptions of the original QALY-model, an additively separable representation of the utilities in different periods exists. In contrast to the usual assumption that QALY-weights depend solely on aspects of health-related quality of life, wealth-standardized QALY-weights might vary with the wealth level in the presented extension of the original QALY-model, resulting in an inconsistent measurement of QALYs. Further assumptions are presented to make the measurement of QALYs consistent with lifetime preferences over health and money. Even under these strict assumptions, QALYs and WTP (which can also be defined in this utility-theoretic model) are not equivalent preference-based measures of the effects of health technologies on an individual level. The results suggest that the individual WTP per QALY can depend on the magnitude of the QALY-gain as well as on the disease burden, when health influences the marginal utility of wealth. Further research seems indicated on this structural aspect of preferences over health and wealth and on quantifying its impact. Copyright 2002 John Wiley & Sons, Ltd.

  16. Disorder Identification in Hysteresis Data: Recognition Analysis of the Random-Bond-Random-Field Ising Model

    International Nuclear Information System (INIS)

    Ovchinnikov, O. S.; Jesse, S.; Kalinin, S. V.; Bintacchit, P.; Trolier-McKinstry, S.

    2009-01-01

    An approach for the direct identification of disorder type and strength in physical systems based on recognition analysis of hysteresis loop shape is developed. A large number of theoretical examples uniformly distributed in the parameter space of the system is generated and is decorrelated using principal component analysis (PCA). The PCA components are used to train a feed-forward neural network using the model parameters as targets. The trained network is used to analyze hysteresis loops for the investigated system. The approach is demonstrated using a 2D random-bond-random-field Ising model, and polarization switching in polycrystalline ferroelectric capacitors.

  17. A random walk model for evaluating clinical trials involving serial observations.

    Science.gov (United States)

    Hopper, J L; Young, G P

    1988-05-01

    For clinical trials where the variable of interest is ordered and categorical (for example, disease severity, symptom scale), and where measurements are taken at intervals, it might be possible to achieve a greater discrimination between the efficacy of treatments by modelling each patient's progress as a stochastic process. The random walk is a simple, easily interpreted model that can be fitted by maximum likelihood using a maximization routine with inference based on standard likelihood theory. In general the model can allow for randomly censored data, incorporates measured prognostic factors, and inference is conditional on the (possibly non-random) allocation of patients. Tests of fit and of model assumptions are proposed, and applications to two therapeutic trials of gastroenterological disorders are presented. The model gave measures of the rate of, and variability in, improvement for patients under different treatments. A small simulation study suggested that the model is more powerful than considering the difference between initial and final scores, even when applied to data generated by a mechanism other than the random walk model assumed in the analysis. It thus provides a useful additional statistical method for evaluating clinical trials.

  18. Random-growth urban model with geographical fitness

    Science.gov (United States)

    Kii, Masanobu; Akimoto, Keigo; Doi, Kenji

    2012-12-01

    This paper formulates a random-growth urban model with a notion of geographical fitness. Using techniques of complex-network theory, we study our system as a type of preferential-attachment model with fitness, and we analyze its macro behavior to clarify the properties of the city-size distributions it predicts. First, restricting the geographical fitness to take positive values and using a continuum approach, we show that the city-size distributions predicted by our model asymptotically approach Pareto distributions with coefficients greater than unity. Then, allowing the geographical fitness to take negative values, we perform local coefficient analysis to show that the predicted city-size distributions can deviate from Pareto distributions, as is often observed in actual city-size distributions. As a result, the model we propose can generate a generic class of city-size distributions, including but not limited to Pareto distributions. For applications to city-population projections, our simple model requires randomness only when new cities are created, not during their subsequent growth. This property leads to smooth trajectories of city population growth, in contrast to other models using Gibrat’s law. In addition, a discrete form of our dynamical equations can be used to estimate past city populations based on present-day data; this fact allows quantitative assessment of the performance of our model. Further study is needed to determine appropriate formulas for the geographical fitness.
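
    The attachment mechanism can be illustrated with a short simulation in the same spirit: each new resident either founds a new city (the only creation event) or joins an existing city i with probability proportional to fitness_i × size_i. Note this toy version uses stochastic attachment throughout, unlike the paper's formulation in which randomness enters only at city creation; all parameters are invented.

```python
import random

# Toy preferential-attachment-with-fitness sketch of random urban growth.
random.seed(7)
p_new = 0.01                       # chance a new resident founds a new city
cities = [1]                       # city sizes; start with one city of one person
fitness = [random.uniform(0.5, 1.5)]   # positive geographical fitness values

for _ in range(20_000):
    if random.random() < p_new:
        cities.append(1)           # create a new city
        fitness.append(random.uniform(0.5, 1.5))
    else:
        weights = [f * s for f, s in zip(fitness, cities)]
        i = random.choices(range(len(cities)), weights=weights)[0]
        cities[i] += 1             # attach resident to city i with prob ~ f_i * s_i

print(sum(cities), len(cities))    # total population, number of cities
```

    With positive fitness values, long runs of this mechanism produce heavy-tailed size distributions of the Pareto type discussed in the abstract; allowing negative fitness is what lets the model deviate from pure Pareto behaviour.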

  19. Smooth random change point models.

    Science.gov (United States)

    van den Hout, Ardo; Muniz-Terrera, Graciela; Matthews, Fiona E

    2011-03-15

    Change point models are used to describe processes over time that show a change in direction. An example of such a process is cognitive ability, where a decline a few years before death is sometimes observed. A broken-stick model consists of two linear parts and a breakpoint where the two lines intersect. Alternatively, models can be formulated that imply a smooth change between the two linear parts. Change point models can be extended by adding random effects to account for variability between subjects. A new smooth change point model is introduced and examples are presented that show how change point models can be estimated using functions in R for mixed-effects models. The Bayesian inference using WinBUGS is also discussed. The methods are illustrated using data from a population-based longitudinal study of ageing, the Cambridge City over 75 Cohort Study. The aim is to identify how many years before death individuals experience a change in the rate of decline of their cognitive ability. Copyright © 2010 John Wiley & Sons, Ltd.
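
    The broken-stick versus smooth-change distinction can be written down directly. Below, the smooth version replaces the hinge max(t − τ, 0) with a softplus approximation; softplus is one common smoothing choice used here for illustration, not necessarily the authors' exact formulation, and the coefficients are invented.

```python
import math

# Broken-stick change point model and a smooth approximation of it.
# beta2 is the post-change slope increment, tau the change point,
# eps controls how gradual the smooth transition is.

def broken_stick(t, beta0, beta1, beta2, tau):
    return beta0 + beta1 * t + beta2 * max(t - tau, 0.0)

def smooth_change(t, beta0, beta1, beta2, tau, eps=0.1):
    # eps * log(1 + exp((t - tau)/eps)) -> max(t - tau, 0) as eps -> 0
    return beta0 + beta1 * t + beta2 * eps * math.log1p(math.exp((t - tau) / eps))

# Hypothetical trajectory: cognitive score declining faster after tau = 5 years.
b = (30.0, -0.2, -1.5, 5.0)
print(abs(broken_stick(2.0, *b) - smooth_change(2.0, *b)))   # tiny, far before tau
print(abs(broken_stick(8.0, *b) - smooth_change(8.0, *b)))   # tiny, far after tau
```

    The two curves agree away from τ and differ only in a neighbourhood of width roughly eps around it, which is what makes the smooth model estimable with standard mixed-effects machinery.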

  20. A random forest algorithm for nowcasting of intense precipitation events

    Science.gov (United States)

    Das, Saurabh; Chakraborty, Rohit; Maitra, Animesh

    2017-09-01

    Automatic nowcasting of convective initiation and thunderstorms has potential applications in several sectors including aviation planning and disaster management. In this paper, a random forest based machine learning algorithm is tested for nowcasting of convective rain with a ground based radiometer. Brightness temperatures measured at 14 frequencies (7 frequencies in the 22-31 GHz band and 7 frequencies in the 51-58 GHz band) are utilized as the inputs of the model. The lower frequency band is associated with water vapor absorption whereas the upper frequency band relates to oxygen absorption; together they provide information on the temperature and humidity of the atmosphere. Synthetic minority over-sampling technique is used to balance the data set and 10-fold cross validation is used to assess the performance of the model. Results indicate that the random forest algorithm with a fixed alarm generation time of 30 min and 60 min performs quite well (probability of detection of all types of weather conditions ∼90%) with low false alarms. It is, however, also observed that reducing the alarm generation time improves the threat score significantly and also decreases false alarms. The proposed model is found to be very sensitive to boundary layer instability as indicated by the variable importance measure. The study shows the suitability of a random forest algorithm for nowcasting applications utilizing a large number of input parameters from diverse sources and can be utilized in other forecasting problems.

  1. CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS.

    Science.gov (United States)

    Shalizi, Cosma Rohilla; Rinaldo, Alessandro

    2013-04-01

    The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consists only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling, or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGM's expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses.

  2. Simulating intrafraction prostate motion with a random walk model.

    Science.gov (United States)

    Pommer, Tobias; Oh, Jung Hun; Munck Af Rosenschöld, Per; Deasy, Joseph O

    2017-01-01

    Prostate motion during radiation therapy (ie, intrafraction motion) can cause unwanted loss of radiation dose to the prostate and increased dose to the surrounding organs at risk. A compact but general statistical description of this motion could be useful for simulation of radiation therapy delivery or margin calculations. We investigated whether prostate motion could be modeled with a random walk model. Prostate motion recorded during 548 radiation therapy fractions in 17 patients was analyzed and used for input in a random walk prostate motion model. The recorded motion was categorized on the basis of whether any transient excursions (ie, rapid prostate motion in the anterior and superior direction followed by a return) occurred in the trace, and transient motion was separately modeled as a large step in the anterior/superior direction followed by a returning large step. Random walk simulations were conducted with and without added artificial transient motion using either motion data from all observed traces or only traces without transient excursions as model input, respectively. A general estimate of motion was derived with reasonable agreement between simulated and observed traces, especially during the first 5 minutes of the excursion-free simulations. Simulated and observed diffusion coefficients agreed within 0.03, 0.2 and 0.3 mm²/min in the left/right, superior/inferior, and anterior/posterior directions, respectively. A rapid increase in variance at the start of observed traces was difficult to reproduce and seemed to represent the patient's need to adjust before treatment. This could be estimated somewhat using artificial transient motion. Random walk modeling is feasible and recreated the characteristics of the observed prostate motion. Introducing artificial transient motion did not improve the overall agreement, although the first 30 seconds of the traces were better reproduced. The model provides a simple estimate of prostate motion during
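
    The diffusion-coefficient comparison above rests on a standard estimator: for a 1-D random walk sampled at interval dt, D ≈ ⟨dx²⟩ / (2·dt), where dx are successive position increments. The sketch below generates a synthetic trace (all numbers invented, not patient data) and recovers D from it.

```python
import random

# Estimate a random-walk diffusion coefficient from a simulated 1-D trace.
random.seed(3)
dt = 1.0 / 60.0                        # sampling interval: 1 s, expressed in minutes
D_true = 0.2                           # mm^2/min, hypothetical value for one axis
sigma = (2 * D_true * dt) ** 0.5       # per-step standard deviation in mm

# Simulate a 10-minute trace (600 increments) and re-estimate D.
steps = [random.gauss(0.0, sigma) for _ in range(600)]
D_hat = sum(dx * dx for dx in steps) / (2 * dt * len(steps))
print(round(D_hat, 2))                 # close to 0.2
```

    Comparing D_hat computed from simulated traces against the same statistic from observed traces is the kind of agreement check the abstract reports per anatomical direction.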

  3. Money creation process in a random redistribution model

    Science.gov (United States)

    Chen, Siyan; Wang, Yougui; Li, Keqiang; Wu, Jinshan

    2014-01-01

    In this paper, the dynamical process of money creation in a random exchange model with debt is investigated. The money creation kinetics are analyzed by both the money-transfer matrix method and the diffusion method. From both approaches, we attain the same conclusion: the source of money creation in the case of random exchange is the agents with neither money nor debt. These analytical results are demonstrated by computer simulations.
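
    A minimal version of such a random exchange model with debt can be simulated directly; the agent count, debt limit and transfer rule below are one simple variant chosen for illustration, not necessarily the paper's exact specification. Agents start with neither money nor debt, matching the regime the abstract identifies as the source of money creation.

```python
import random

# Minimal random-exchange model with debt: each step a random payer hands one
# unit to a random payee, and may go into debt down to -debt_limit. Positive
# money is "created" as debt grows, but net money (money minus debt) is conserved.
random.seed(11)
n, debt_limit, m0 = 100, 5, 0          # agents start with neither money nor debt
wealth = [m0] * n

for _ in range(50_000):
    payer, payee = random.sample(range(n), 2)
    if wealth[payer] - 1 >= -debt_limit:   # payer may borrow only up to the limit
        wealth[payer] -= 1
        wealth[payee] += 1

money = sum(w for w in wealth if w > 0)    # total positive money in circulation
debt = -sum(w for w in wealth if w < 0)    # total outstanding debt
print(money - debt == n * m0)              # -> True: net money is conserved
```

    The simulation makes the bookkeeping identity visible: every unit of created money is mirrored by a unit of debt, so only the gross money stock grows.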

  4. Expected utility without utility

    OpenAIRE

    Castagnoli, E.; Licalzi, M.

    1996-01-01

    This paper advances an interpretation of Von Neumann–Morgenstern’s expected utility model for preferences over lotteries which does not require the notion of a cardinal utility over prizes and can be phrased entirely in the language of probability. According to it, the expected utility of a lottery can be read as the probability that this lottery outperforms another given independent lottery. The implications of this interpretation for some topics and models in decision theory are considered....
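
    For discrete lotteries the interpretation above reduces to a direct computation: read the "expected utility" of lottery X as P(X > Y), the probability that X outperforms an independent benchmark lottery Y. The example lotteries below are invented.

```python
from itertools import product

# P(X > Y) = sum over (x, y) of p_X(x) * p_Y(y) * 1[x > y]
# for independent discrete lotteries X and Y.

def prob_outperforms(X, Y):
    """X, Y: lists of (probability, prize) pairs of independent lotteries."""
    return sum(px * py for (px, x), (py, y) in product(X, Y) if x > y)

X = [(0.5, 10), (0.5, 0)]      # fair coin: 10 or nothing
Y = [(1.0, 5)]                 # benchmark: 5 for sure

print(prob_outperforms(X, Y))  # -> 0.5
```

    No cardinal utility over prizes appears anywhere: the ranking of lotteries is phrased entirely in the language of probability, as the abstract describes.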

  5. Stochastic equilibria of an asset pricing model with heterogeneous beliefs and random dividends

    NARCIS (Netherlands)

    Zhu, M.; Wang, D.; Guo, M.

    2011-01-01

    We investigate dynamical properties of a heterogeneous agent model with random dividends and further study the relationship between dynamical properties of the random model and those of the corresponding deterministic skeleton, which is obtained by setting the random dividends as their constant mean

  6. Utilizing random Forest QSAR models with optimized parameters for target identification and its application to target-fishing server.

    Science.gov (United States)

    Lee, Kyoungyeul; Lee, Minho; Kim, Dongsup

    2017-12-28

    The identification of target molecules is important for understanding the mechanism of "target deconvolution" in phenotypic screening and "polypharmacology" of drugs. Because conventional methods of identifying targets require time and cost, in-silico target identification has been considered an alternative solution. One of the well-known in-silico methods of identifying targets involves structure activity relationships (SARs). SARs have advantages such as low computational cost and high feasibility; however, the data dependency in the SAR approach causes imbalance of active data and ambiguity of inactive data throughout targets. We developed a ligand-based virtual screening model comprising 1121 target SAR models built using a random forest algorithm. The performance of each target model was tested by employing the ROC curve and the mean score using an internal five-fold cross validation. Moreover, recall rates for top-k targets were calculated to assess the performance of target ranking. A benchmark model using an optimized sampling method and parameters was examined via external validation set. The result shows recall rates of 67.6% and 73.9% for top-11 (1% of the total targets) and top-33, respectively. We provide a website for users to search the top-k targets for query ligands available publicly at http://rfqsar.kaist.ac.kr . The target models that we built can be used for both predicting the activity of ligands toward each target and ranking candidate targets for a query ligand using a unified scoring scheme. The scores are additionally fitted to the probability so that users can estimate how likely a ligand-target interaction is active. The user interface of our web site is user friendly and intuitive, offering useful information and cross references.
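
    The recall@k metric used above for target ranking is simple to state: a query ligand counts as recalled if its true target appears among the top-k targets by model score. The toy rankings and target names below are invented, not RFQSAR output.

```python
# Recall@k for target ranking: fraction of queries whose true target
# appears in the top-k of that query's ranked target list.

def recall_at_k(ranked_targets, true_targets, k):
    hits = sum(1 for ranking, truth in zip(ranked_targets, true_targets)
               if truth in ranking[:k])
    return hits / len(true_targets)

# Three toy queries, each with targets ranked by a hypothetical RF score.
rankings = [["T1", "T7", "T3"], ["T2", "T5", "T9"], ["T4", "T8", "T6"]]
truth = ["T7", "T2", "T6"]

print(recall_at_k(rankings, truth, k=1))  # ~0.33: only the second query hits
print(recall_at_k(rankings, truth, k=3))  # -> 1.0
```

    The paper's reported figures (67.6% at top-11, 73.9% at top-33) are exactly this statistic computed over its external validation set.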

  7. Marginal and Random Intercepts Models for Longitudinal Binary Data with Examples from Criminology

    Science.gov (United States)

    Long, Jeffrey D.; Loeber, Rolf; Farrington, David P.

    2009-01-01

    Two models for the analysis of longitudinal binary data are discussed: the marginal model and the random intercepts model. In contrast to the linear mixed model (LMM), the two models for binary data are not subsumed under a single hierarchical model. The marginal model provides group-level information whereas the random intercepts model provides…

  8. Quantum random oracle model for quantum digital signature

    Science.gov (United States)

    Shang, Tao; Lei, Qi; Liu, Jianwei

    2016-10-01

    The goal of this work is to provide a general security analysis tool, namely, the quantum random oracle (QRO), for facilitating the security analysis of quantum cryptographic protocols, especially protocols based on quantum one-way function. QRO is used to model quantum one-way function and different queries to QRO are used to model quantum attacks. A typical application of quantum one-way function is the quantum digital signature, whose progress has been hampered by the slow pace of the experimental realization. Alternatively, we use the QRO model to analyze the provable security of a quantum digital signature scheme and elaborate the analysis procedure. The QRO model differs from the prior quantum-accessible random oracle in that it can output quantum states as public keys and give responses to different queries. This tool can be a test bed for the cryptanalysis of more quantum cryptographic protocols based on the quantum one-way function.

  9. Random-Effects Models for Meta-Analytic Structural Equation Modeling: Review, Issues, and Illustrations

    Science.gov (United States)

    Cheung, Mike W.-L.; Cheung, Shu Fai

    2016-01-01

    Meta-analytic structural equation modeling (MASEM) combines the techniques of meta-analysis and structural equation modeling for the purpose of synthesizing correlation or covariance matrices and fitting structural equation models on the pooled correlation or covariance matrix. Both fixed-effects and random-effects models can be defined in MASEM.…

  10. User Guide and Documentation for Five MODFLOW Ground-Water Modeling Utility Programs

    Science.gov (United States)

    Banta, Edward R.; Paschke, Suzanne S.; Litke, David W.

    2008-01-01

    This report documents five utility programs designed for use in conjunction with ground-water flow models developed with the U.S. Geological Survey's MODFLOW ground-water modeling program. One program extracts calculated flow values from one model for use as input to another model. The other four programs extract model input or output arrays from one model and make them available in a form that can be used to generate an ArcGIS raster data set. The resulting raster data sets may be useful for visual display of the data or for further geographic data processing. The utility program GRID2GRIDFLOW reads a MODFLOW binary output file of cell-by-cell flow terms for one (source) model grid and converts the flow values to input flow values for a different (target) model grid. The spatial and temporal discretization of the two models may differ. The four other utilities extract selected 2-dimensional data arrays in MODFLOW input and output files and write them to text files that can be imported into an ArcGIS geographic information system raster format. These four utilities require that the model cells be square and aligned with the projected coordinate system in which the model grid is defined. The four raster-conversion utilities are:

    * CBC2RASTER, which extracts selected stress-package flow data from a MODFLOW binary output file of cell-by-cell flows;
    * DIS2RASTER, which extracts cell-elevation data from a MODFLOW Discretization file;
    * MFBIN2RASTER, which extracts array data from a MODFLOW binary output file of head or drawdown; and
    * MULT2RASTER, which extracts array data from a MODFLOW Multiplier file.

  11. Random isotropic one-dimensional XY-model

    Science.gov (United States)

    Gonçalves, L. L.; Vieira, A. P.

    1998-01-01

    The 1D isotropic s = ½ XY-model (N sites), with random exchange interaction in a transverse random field is considered. The random variables satisfy bimodal quenched distributions. The solution is obtained by using the Jordan-Wigner fermionization and a canonical transformation, reducing the problem to diagonalizing an N × N matrix, corresponding to a system of N noninteracting fermions. The calculations are performed numerically for N = 1000, and the field-induced magnetization at T = 0 is obtained by averaging the results for the different samples. For the dilute case, in the uniform field limit, the magnetization exhibits various discontinuities, which are the consequence of the existence of disconnected finite clusters distributed along the chain. Also in this limit, for finite exchange constants JA and JB, as the probability of JA varies from one to zero, the saturation field is seen to vary from ΓA to ΓB, where ΓA (ΓB) is the value of the saturation field for the pure case with exchange constant equal to JA (JB).

  12. A Framework for Organizing Current and Future Electric Utility Regulatory and Business Models

    Energy Technology Data Exchange (ETDEWEB)

    Satchwell, Andrew [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Cappers, Peter [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Schwartz, Lisa C. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Fadrhonc, Emily Martin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-06-01

    Many regulators, utilities, customer groups, and other stakeholders are reevaluating existing regulatory models and the roles and financial implications for electric utilities in the context of today’s environment of increasing distributed energy resource (DER) penetrations, forecasts of significant T&D investment, and relatively flat or negative utility sales growth. When this is coupled with predictions about fewer grid-connected customers (i.e., customer defection), there is growing concern about the potential for serious negative impacts on the regulated utility business model. Among states engaged in these issues, the range of topics under consideration is broad. Most of these states are considering whether approaches that have been applied historically to mitigate the impacts of previous “disruptions” to the regulated utility business model (e.g., energy efficiency) as well as to align utility financial interests with increased adoption of such “disruptive technologies” (e.g., shareholder incentive mechanisms, lost revenue mechanisms) are appropriate and effective in the present context. A handful of states are presently considering more fundamental changes to regulatory models and the role of regulated utilities in the ownership, management, and operation of electric delivery systems (e.g., New York “Reforming the Energy Vision” proceeding).

  13. Studies in astronomical time series analysis: Modeling random processes in the time domain

    Science.gov (United States)

    Scargle, J. D.

    1979-01-01

    Random process models phrased in the time domain are used to analyze astrophysical time series data produced by random processes. A moving average (MA) model represents the data as a sequence of pulses occurring randomly in time, with random amplitudes. An autoregressive (AR) model represents the correlations in the process in terms of a linear function of past values. The best AR model is determined from sampled data and transformed to an MA for interpretation. The randomness of the pulse amplitudes is maximized by a FORTRAN algorithm which is relatively stable numerically. Results of test cases are given to study the effects of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the optical light curve of the quasar 3C 273 is given.
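
    The AR-to-MA idea can be shown on the simplest case: fit an AR(1) model x_t = a·x_{t−1} + e_t by least squares, then expand it as a moving average x_t = Σ_k aᵏ·e_{t−k}, i.e. a train of exponentially decaying pulses. The synthetic series below stands in for real time-series data, and this toy fit is far simpler than the paper's FORTRAN algorithm.

```python
import random

# Fit an AR(1) model by least squares, then read off its MA (pulse) expansion.
random.seed(5)
a_true = 0.7
x = [0.0]
for _ in range(5000):
    x.append(a_true * x[-1] + random.gauss(0.0, 1.0))

# Least-squares AR(1) coefficient: a_hat = sum(x_t * x_{t-1}) / sum(x_{t-1}^2)
num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
a_hat = num / den

ma_weights = [a_hat ** k for k in range(5)]   # MA form: exponentially decaying pulse
print(round(a_hat, 1))                         # close to 0.7
```

    The MA weights are the model's pulse response: each random shock contributes a decaying tail to the light curve, which is the physical interpretation the paper seeks.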

  14. Sustainable geothermal utilization - Case histories; definitions; research issues and modelling

    International Nuclear Information System (INIS)

    Axelsson, Gudni

    2010-01-01

    Sustainable development by definition meets the needs of the present without compromising the ability of future generations to meet their own needs. The Earth's enormous geothermal resources have the potential to contribute significantly to sustainable energy use worldwide as well as to help mitigate climate change. Experience from the use of numerous geothermal systems worldwide lasting several decades demonstrates that by maintaining production below a certain limit the systems reach a balance between net energy discharge and recharge that may be maintained for a long time (100-300 years). Modelling studies indicate that the effect of heavy utilization is often reversible on a time-scale comparable to the period of utilization. Thus, geothermal resources can be used in a sustainable manner either through (1) constant production below the sustainable limit, (2) step-wise increase in production, (3) intermittent excessive production with breaks, and (4) reduced production after a shorter period of heavy production. The long production histories that are available for low-temperature as well as high-temperature geothermal systems distributed throughout the world, provide the most valuable data available for studying sustainable management of geothermal resources, and reservoir modelling is the most powerful tool available for this purpose. The paper presents sustainability modelling studies for the Hamar and Nesjavellir geothermal systems in Iceland, the Beijing Urban system in China and the Olkaria system in Kenya as examples. Several relevant research issues have also been identified, such as the relevance of system boundary conditions during long-term utilization, how far reaching interference from utilization is, how effectively geothermal systems recover after heavy utilization and the reliability of long-term (more than 100 years) model predictions. (author)

  15. Target-oriented utility theory for modeling the deterrent effects of counterterrorism

    International Nuclear Information System (INIS)

    Bier, Vicki M.; Kosanoglu, Fuat

    2015-01-01

    Optimal resource allocation in security has been a significant challenge for critical infrastructure protection. Numerous studies use game theory as the method of choice, because of the fact that an attacker can often observe the defender’s investment in security and adapt his choice of strategies accordingly. However, most of these models do not explicitly consider deterrence, with the result that they may lead to wasted resources if less investment would be sufficient to deter an attack. In this paper, we assume that the defender is uncertain about the level of defensive investment that would deter an attack, and use target-oriented utility to optimize the level of defensive investment, taking into account the probability of deterrence. - Highlights: • We propose a target-oriented utility model for attacker deterrence. • We model attack deterrence as a function of attacker success probability. • We compare the target-oriented utility model and a conventional game-theoretical model. • Results show that our model yields a better value of the defender’s objective function. • Results indicate that defending series systems is more difficult than defending parallel systems

  16. A generalized model via random walks for information filtering

    Science.gov (United States)

    Ren, Zhuo-Ming; Kong, Yixiu; Shang, Ming-Sheng; Zhang, Yi-Cheng

    2016-08-01

    There could exist a simple general mechanism lurking beneath the collaborative filtering and interdisciplinary physics approaches that have been successfully applied to online E-commerce platforms. Motivated by this idea, we propose a generalized model employing the dynamics of the random walk on bipartite networks. Taking into account the degree information, the proposed generalized model can recover the collaborative filtering and interdisciplinary physics approaches, as well as their numerous extensions. Furthermore, we analyze the generalized model with single and hybrid degree information in the random walk process on bipartite networks, and propose a possible strategy that uses hybrid degree information for objects of different popularity to achieve promising recommendation precision.
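
    As an illustration of the random-walk dynamics such models generalize, the sketch below implements a plain two-step mass-diffusion (ProbS-style) recommender on a toy user-item matrix; the normalizations are the standard ProbS choices, not the generalized model of the paper.

```python
import numpy as np

def probs_recommend(A, user):
    """Two-step random-walk (mass diffusion / ProbS-style) scores on a
    user-item bipartite network. A is a binary users x items matrix;
    resource spreads from the user's items to users, then back to items."""
    k_item = np.maximum(A.sum(axis=0), 1)   # item degrees (avoid /0)
    k_user = np.maximum(A.sum(axis=1), 1)   # user degrees (avoid /0)
    to_users = (A[user] / k_item) @ A.T     # items spread resource to users
    scores = (to_users / k_user) @ A        # users spread it back to items
    scores[A[user] > 0] = 0.0               # skip already-collected items
    return scores

A = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1]], dtype=float)
scores = probs_recommend(A, 0)              # rank new items for user 0
```

Mass is conserved across the two diffusion steps, so the scores of the uncollected items can be read directly as the share of the user's resource that lands on them.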

  17. Genetic Analysis of Daily Maximum Milking Speed by a Random Walk Model in Dairy Cows

    DEFF Research Database (Denmark)

    Karacaören, Burak; Janss, Luc; Kadarmideen, Haja

    Data were obtained for maximum milking speed from dairy cows stationed at the ETH Zurich research farm. The main aims of this paper are (a) to evaluate whether the Wood curve is suitable to model the mean lactation curve and (b) to predict longitudinal breeding values by random regression and random walk models of maximum milking speed. The Wood curve did not provide a good fit to the data set. Quadratic random regressions gave better predictions than the random walk model. However, the random walk model does not need to be evaluated for different orders of regression coefficients. In addition, with Kalman filter applications the random walk model could give online prediction of breeding values; hence, genetic evaluation could be made as daily or monthly data become available, without waiting for whole lactation records.

  18. Lamplighter model of a random copolymer adsorption on a line

    Directory of Open Access Journals (Sweden)

    L.I. Nazarov

    2014-09-01

    Full Text Available We present a model of an AB-diblock random copolymer sequential self-packaging with local quenched interactions on a one-dimensional infinite sticky substrate. It is assumed that the A-A and B-B contacts are favorable, while A-B are not. The position of a newly added monomer is selected in view of the local contact energy minimization. The model demonstrates a self-organization behavior with the nontrivial dependence of the total energy, E (the number of unfavorable contacts, on the number of chain monomers, N: E ~ N^3/4 for quenched random equally probable distribution of A- and B-monomers along the chain. The model is treated by mapping it onto the "lamplighter" random walk and the diffusion-controlled chemical reaction of X+X → 0 type with the subdiffusive motion of reagents.

  19. Random effect selection in generalised linear models

    DEFF Research Database (Denmark)

    Denwood, Matt; Houe, Hans; Forkman, Björn

    We analysed abattoir recordings of meat inspection codes with possible relevance to on-farm animal welfare in cattle. Random effects logistic regression models were used to describe individual-level data obtained from 461,406 cattle slaughtered in Denmark. Our results demonstrate that the largest...

  20. Annealed central limit theorems for the ising model on random graphs

    NARCIS (Netherlands)

    Giardinà, C.; Giberti, C.; van der Hofstad, R.W.; Prioriello, M.L.

    2016-01-01

    The aim of this paper is to prove central limit theorems with respect to the annealed measure for the magnetization rescaled by √N of Ising models on random graphs. More precisely, we consider the general rank-1 inhomogeneous random graph (or generalized random graph), the 2-regular configuration

  1. Single-cluster dynamics for the random-cluster model

    NARCIS (Netherlands)

    Deng, Y.; Qian, X.; Blöte, H.W.J.

    2009-01-01

    We formulate a single-cluster Monte Carlo algorithm for the simulation of the random-cluster model. This algorithm is a generalization of the Wolff single-cluster method for the q-state Potts model to noninteger values q>1. Its results for static quantities are in a satisfactory agreement with those

  2. A dynamic random effects multinomial logit model of household car ownership

    DEFF Research Database (Denmark)

    Bue Bjørner, Thomas; Leth-Petersen, Søren

    2007-01-01

    Using a large household panel we estimate demand for car ownership by means of a dynamic multinomial model with correlated random effects. Results suggest that the persistence in car ownership observed in the data should be attributed both to true state dependence and to unobserved heterogeneity (random effects). It also appears that random effects related to single and multiple car ownership are correlated, suggesting that the IIA assumption employed in simple multinomial models of car ownership is invalid. Relatively small elasticities with respect to income and car costs are estimated.

  3. Structure-based Markov random field model for representing evolutionary constraints on functional sites.

    Science.gov (United States)

    Jeong, Chan-Seok; Kim, Dongsup

    2016-02-24

    Elucidating the cooperative mechanism of interconnected residues is an important component toward understanding the biological function of a protein. Coevolution analysis has been developed to model the coevolutionary information reflecting structural and functional constraints. Recently, several methods have been developed based on a probabilistic graphical model called the Markov random field (MRF), which have led to significant improvements in coevolution analysis; thus far, however, the performance of these models has mainly been assessed with a focus on protein structure. In this study, we built an MRF model whose graphical topology is determined by residue proximity in the protein structure, and derived a novel positional coevolution estimate utilizing the node weights of the MRF model. This structure-based MRF method was evaluated on three data sets, which annotate catalytic site, allosteric site, and comprehensively determined functional site information, respectively. We demonstrate that the structure-based MRF architecture can encode the evolutionary information associated with biological function. Furthermore, we show that the node weight can represent positional coevolution information more accurately than the edge weight. Lastly, we demonstrate that the structure-based MRF model can be reliably built with only a few aligned sequences in linear time. The results show that adoption of a structure-based architecture could be an acceptable approximation for coevolution modeling with efficient computational complexity.

  4. Predicting Health Care Utilization After Behavioral Health Referral Using Natural Language Processing and Machine Learning

    OpenAIRE

    Roysden, Nathaniel; Wright, Adam

    2015-01-01

    Mental health problems are an independent predictor of increased healthcare utilization. We created random forest classifiers for predicting two outcomes following a patient’s first behavioral health encounter: decreased utilization by any amount (AUROC 0.74) and ultra-high absolute utilization (AUROC 0.88). These models may be used for clinical decision support by referring providers, to automatically detect patients who may benefit from referral, for cost management, or for risk/protection ...

  5. Predicting Health Care Utilization After Behavioral Health Referral Using Natural Language Processing and Machine Learning.

    Science.gov (United States)

    Roysden, Nathaniel; Wright, Adam

    2015-01-01

    Mental health problems are an independent predictor of increased healthcare utilization. We created random forest classifiers for predicting two outcomes following a patient's first behavioral health encounter: decreased utilization by any amount (AUROC 0.74) and ultra-high absolute utilization (AUROC 0.88). These models may be used for clinical decision support by referring providers, to automatically detect patients who may benefit from referral, for cost management, or for risk/protection factor analysis.
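
    The AUROC figures quoted above can be computed without any modeling machinery. A sketch of the standard rank-based (Mann-Whitney) computation, shown on a small made-up example:

```python
import numpy as np

def auroc(y_true, scores):
    """AUROC via the Mann-Whitney statistic: the probability that a
    randomly chosen positive case outscores a randomly chosen negative
    case, with ties counting one half."""
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum()   # positive outranks negative
    ties = (pos[:, None] == neg[None, :]).sum()  # tied scores count 1/2
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

a = auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])   # toy labels and scores
```

An AUROC of 0.88, as reported for the ultra-high-utilization model, means a randomly chosen high utilizer outscores a randomly chosen other patient 88% of the time.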

  6. A single-level random-effects cross-lagged panel model for longitudinal mediation analysis.

    Science.gov (United States)

    Wu, Wei; Carroll, Ian A; Chen, Po-Yi

    2017-12-06

    Cross-lagged panel models (CLPMs) are widely used to test mediation with longitudinal panel data. One major limitation of the CLPMs is that the model effects are assumed to be fixed across individuals. This assumption is likely to be violated (i.e., the model effects are random across individuals) in practice. When this happens, the CLPMs can potentially yield biased parameter estimates and misleading statistical inferences. This article proposes a model named a random-effects cross-lagged panel model (RE-CLPM) to account for random effects in CLPMs. Simulation studies show that the RE-CLPM outperforms the CLPM in recovering the mean indirect and direct effects in a longitudinal mediation analysis when random effects exist in the population. The performance of the RE-CLPM is robust to a certain degree, even when the random effects are not normally distributed. In addition, the RE-CLPM does not produce harmful results when the model effects are in fact fixed in the population. Implications of the simulation studies and potential directions for future research are discussed.

  7. The reverse effects of random perturbation on discrete systems for single and multiple population models

    International Nuclear Information System (INIS)

    Kang, Li; Tang, Sanyi

    2016-01-01

    Highlights: • The discrete single species and multiple species models with random perturbation are proposed. • The complex dynamics and interesting bifurcation behavior have been investigated. • The reverse effects of random perturbation on discrete systems have been discussed and revealed. • The main results can be applied to pest control and resources management. - Abstract: Natural species are likely to present several interesting and complex phenomena under random perturbations, which have been confirmed by simple mathematical models. The important questions are: how do random perturbations influence the dynamics of discrete population models with multiple steady states or multiple species interactions? And do random perturbations affect single species and multiple species models differently? To address those questions, we propose a discrete single species model with two stable equilibria and a host-parasitoid model with Holling type functional response functions, and examine how random perturbation affects their dynamics. The main results indicate that the random perturbation does not change the number of blurred orbits of the single species model with two stable steady states compared with results for the classical Ricker model under the same random perturbation, but it can strengthen the stability. However, extensive numerical investigations show that the random perturbation does not influence the complexity of the host-parasitoid models compared with the results for the models without perturbation, although it does double the period of periodic orbits. All of this confirms that random perturbation has a reverse effect on the dynamics of the discrete single and multiple population models, which could be applied in practice, including pest control and resources management.
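
    A minimal sketch of the kind of experiment described: iterating the classical Ricker map with an illustrative multiplicative lognormal perturbation (the specific noise form and all parameters are assumptions, not the authors' exact model).

```python
import numpy as np

def ricker(x0, r, steps, sigma=0.0, seed=0):
    """Iterate the Ricker map x[t+1] = x[t]*exp(r*(1 - x[t])), optionally
    multiplied by a lognormal random factor of strength sigma (an
    illustrative stand-in for the paper's stochastic forcing)."""
    rng = np.random.default_rng(seed)
    x = np.empty(steps + 1)
    x[0] = x0
    for t in range(steps):
        noise = np.exp(sigma * rng.standard_normal()) if sigma > 0 else 1.0
        x[t + 1] = x[t] * np.exp(r * (1.0 - x[t])) * noise
    return x

det = ricker(0.5, 1.5, 500)             # deterministic: settles at x* = 1
sto = ricker(0.5, 1.5, 500, sigma=0.1)  # perturbed: fluctuates around x* = 1
```

For r = 1.5 the deterministic map converges to the stable fixed point x* = 1, while the perturbed trajectory stays positive and wanders around it; comparing such trajectories across parameter ranges is how perturbation effects on attractors are probed numerically.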

  8. Quantification of uncertainties in turbulence modeling: A comparison of physics-based and random matrix theoretic approaches

    International Nuclear Information System (INIS)

    Wang, Jian-Xun; Sun, Rui; Xiao, Heng

    2016-01-01

    Highlights: • Compared physics-based and random matrix methods to quantify RANS model uncertainty. • Demonstrated applications of both methods in channel flow over periodic hills. • Examined the amount of information introduced in the physics-based approach. • Discussed implications for modeling turbulence in both near-wall and separated regions. - Abstract: Numerical models based on Reynolds-Averaged Navier-Stokes (RANS) equations are widely used in engineering turbulence modeling. However, RANS predictions carry large model-form uncertainties for many complex flows, e.g., those with non-parallel shear layers or strong mean-flow curvature. Quantification of these large uncertainties, which originate in the modeled Reynolds stresses, has attracted attention in the turbulence modeling community. Recently, a physics-based Bayesian framework for quantifying model-form uncertainties has been proposed, with successful applications to several flows. Nonetheless, how to specify proper priors without introducing unwarranted, artificial information remains challenging in the current form of the physics-based approach. Another recently proposed method, based on random matrix theory, provides prior distributions with maximum entropy and is an alternative for model-form uncertainty quantification in RANS simulations. This method is more mathematically rigorous and provides the most non-committal prior distributions without introducing artificial constraints. On the other hand, the physics-based approach has the advantage of being more flexible in incorporating available physical insights. In this work, we compare and discuss the advantages and disadvantages of the two approaches for model-form uncertainty quantification. In addition, we utilize the random matrix theoretic approach to assess and possibly improve the specification of the priors used in the physics-based approach. The comparison is conducted through a test case using a canonical flow, the flow past

  9. Modeling of chromosome intermingling by partially overlapping uniform random polygons.

    Science.gov (United States)

    Blackstone, T; Scharein, R; Borgo, B; Varela, R; Diao, Y; Arsuaga, J

    2011-03-01

    During the early phase of the cell cycle the eukaryotic genome is organized into chromosome territories. The geometry of the interface between any two chromosomes remains a matter of debate and may have important functional consequences. The Interchromosomal Network model (introduced by Branco and Pombo) proposes that territories intermingle along their periphery. In order to partially quantify this concept we here investigate the probability that two chromosomes form an unsplittable link. We use the uniform random polygon as a crude model for chromosome territories and we model the interchromosomal network as the common spatial region of two overlapping uniform random polygons. This simple model allows us to derive some rigorous mathematical results as well as to perform computer simulations easily. We find that the probability that a uniform random polygon of length n forms an unsplittable link with a fixed polygon it partially overlaps is bounded below by 1 − O(1/√n). We use numerical simulations to estimate the dependence of the linking probability of two uniform random polygons (of lengths n and m, respectively) on the amount of overlapping. The degree of overlapping is parametrized by a parameter [Formula: see text] such that [Formula: see text] indicates no overlapping and [Formula: see text] indicates total overlapping. We propose that this dependence relation may be modeled as f (ε, m, n) = [Formula: see text]. Numerical evidence shows that this model works well when [Formula: see text] is relatively large (ε ≥ 0.5). We then use these results to model the data published by Branco and Pombo and observe that for the amount of overlapping observed experimentally the URPs have a non-zero probability of forming an unsplittable link.

  10. Kinetic models of cell growth, substrate utilization and bio ...

    African Journals Online (AJOL)

    Bio-decolorization kinetic studies of distillery effluent in a batch culture were conducted using Aspergillus fumigatus. A simple model was proposed using the Logistic Equation for the growth, Leudeking-Piret kinetics for bio-decolorization, and also for substrate utilization. The proposed models appeared to provide a suitable ...

  11. Some Limits Using Random Slope Models to Measure Academic Growth

    Directory of Open Access Journals (Sweden)

    Daniel B. Wright

    2017-11-01

    Full Text Available Academic growth is often estimated using a random slope multilevel model with several years of data. However, if there are few time points, the estimates can be unreliable. While using random slope multilevel models can lower the variance of the estimates, these procedures can produce more highly erroneous estimates—zero and negative correlations with the true underlying growth—than using ordinary least squares estimates calculated for each student or school individually. An example is provided where schools with increasing graduation rates are estimated to have negative growth and vice versa. The estimation is worse when the underlying data are skewed. It is recommended that there are at least six time points for estimating growth if using a random slope model. A combination of methods can be used to avoid some of the aberrant results if it is not possible to have six or more time points.
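
    The reliability problem with few time points can be demonstrated directly. The simulation below (all parameters are illustrative assumptions) estimates each school's growth by per-school OLS and shows how the correlation between estimated and true slopes improves as time points are added; a random slope multilevel fit would add shrinkage on top of this, but the few-time-point information limit is already visible here.

```python
import numpy as np

rng = np.random.default_rng(42)
n_schools, noise_sd, slope_sd = 300, 5.0, 0.5   # assumed, for illustration
true_slopes = rng.normal(1.0, slope_sd, n_schools)

def slope_estimate_quality(n_years):
    """Correlation between true growth rates and per-school OLS slope
    estimates, given n_years annual measurements per school."""
    t = np.arange(n_years)
    estimates = []
    for slope in true_slopes:
        y = 50.0 + slope * t + rng.normal(0.0, noise_sd, n_years)
        estimates.append(np.polyfit(t, y, 1)[0])   # fitted OLS slope
    return np.corrcoef(true_slopes, estimates)[0, 1]

r_3, r_10 = slope_estimate_quality(3), slope_estimate_quality(10)
```

With three time points the estimated slopes are dominated by measurement noise and correlate only weakly with true growth; with ten points the correlation rises substantially, consistent with the article's recommendation of at least six time points.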

  12. Randomness determines practical security of BB84 quantum key distribution

    Science.gov (United States)

    Li, Hong-Wei; Yin, Zhen-Qiang; Wang, Shuang; Qian, Yong-Jun; Chen, Wei; Guo, Guang-Can; Han, Zheng-Fu

    2015-11-01

    Unconditional security of the BB84 quantum key distribution protocol has been proved by exploiting the fundamental laws of quantum mechanics, but a practical quantum key distribution system may still be hacked through imperfect state preparation and measurement. Until now, different attacking schemes have been proposed that exploit imperfect devices, but a general security analysis model covering all practical attacking schemes has not been proposed. Here, we demonstrate that the general practical attacking schemes can be divided into the Trojan horse attack, the strong randomness attack and the weak randomness attack. We prove the security of the BB84 protocol under these randomness attacking models, and the results can be applied to guarantee the security of practical quantum key distribution systems.

  13. Coupling model of energy consumption with changes in environmental utility

    International Nuclear Information System (INIS)

    He Hongming; Jim, C.Y.

    2012-01-01

    This study explores the relationships between metropolis energy consumption and environmental utility changes by a proposed Environmental Utility of Energy Consumption (EUEC) model. Based on the dynamic equilibrium of input–output economics theory, it considers three simulation scenarios: fixed-technology, technological-innovation, and green-building effect. It is applied to analyse Hong Kong in 1980–2007. Continual increase in energy consumption with rapid economic growth degraded environmental utility. First, energy consumption at fixed technology was determined by economic outcome. In 1990, it reached a critical balanced state when energy consumption was 22×10⁹ kWh. Before 1990 (x₁ < 22×10⁹ kWh), a rise in energy consumption improved both economic development and environmental utility. After 1990 (x₁ > 22×10⁹ kWh), expansion of energy consumption facilitated socio-economic development but suppressed environmental benefits. Second, technological innovation strongly influenced energy demand and improved environmental benefits. Under this scenario the balanced state occurred in 1999, when energy consumption reached 32.33×10⁹ kWh. Technological innovation dampened energy consumption by 12.99% relative to the fixed-technology condition. Finally, green buildings reduced energy consumption by an average of 17.5% in 1990–2007. They contributed significantly to energy saving, and buffered temperature fluctuations between the external and internal environment. The case investigations verified the efficiency of the EUEC model, which can effectively evaluate the interplay of energy consumption and environmental quality. - Highlights: ► We explore relationships between metropolis energy consumption and environmental utility. ► An Environmental Utility of Energy Consumption (EUEC) model is proposed. ► Technological innovation mitigates energy consumption impacts on environmental quality. ► Technological innovation decreases demand of energy consumption more than the fixed-technology scenario

  14. Application of random regression models to the genetic evaluation ...

    African Journals Online (AJOL)

    The model included fixed regression on AM (range from 30 to 138 mo) and the effect of herd-measurement date concatenation. Random parts of the model were RRM coefficients for additive and permanent environmental effects, while residual effects were modelled to account for heterogeneity of variance by AY. Estimates ...

  15. Method of model reduction and multifidelity models for solute transport in random layered porous media

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Zhijie; Tartakovsky, Alexandre M.

    2017-09-01

    This work presents a hierarchical model for solute transport in bounded layered porous media with random permeability. The model generalizes the Taylor-Aris dispersion theory to stochastic transport in random layered porous media with a known velocity covariance function. In the hierarchical model, we represent the (random) concentration in terms of its cross-sectional average and a variation function. We derive a one-dimensional stochastic advection-dispersion-type equation for the average concentration and a stochastic Poisson equation for the variation function, as well as expressions for the effective velocity and dispersion coefficient. We observe that velocity fluctuations enhance dispersion in a non-monotonic fashion: the dispersion initially increases with correlation length λ, reaches a maximum, and decreases to zero at infinity. Maximum enhancement is obtained at a correlation length of about 0.25 times the size of the porous medium perpendicular to flow.

  16. Developing a clinical utility framework to evaluate prediction models in radiogenomics

    Science.gov (United States)

    Wu, Yirong; Liu, Jie; Munoz del Rio, Alejandro; Page, David C.; Alagoz, Oguzhan; Peissig, Peggy; Onitilo, Adedayo A.; Burnside, Elizabeth S.

    2015-03-01

    Combining imaging and genetic information to predict disease presence and behavior is being codified into an emerging discipline called "radiogenomics." Optimal evaluation methodologies for radiogenomics techniques have not been established. We aim to develop a clinical decision framework based on utility analysis to assess prediction models for breast cancer. Our data come from a retrospective case-control study, collecting Gail model risk factors, genetic variants (single nucleotide polymorphisms-SNPs), and mammographic features in the Breast Imaging Reporting and Data System (BI-RADS) lexicon. We first constructed three logistic regression models on different sets of predictive features: (1) Gail, (2) Gail+SNP, and (3) Gail+SNP+BI-RADS. Then, we generated ROC curves for the three models. After assigning utility values to each category of findings (true negative, false positive, false negative and true positive), we pursued optimal operating points on the ROC curves to achieve the maximum expected utility (MEU) of breast cancer diagnosis. We used McNemar's test to compare the predictive performance of the three models. We found that SNPs and BI-RADS features augmented the baseline Gail model in terms of the area under the ROC curve (AUC) and MEU. SNPs improved the sensitivity of the Gail model (0.276 vs. 0.147) and reduced specificity (0.855 vs. 0.912). When additional mammographic features were added, sensitivity increased to 0.457 and specificity to 0.872. SNPs and mammographic features played a significant role in breast cancer risk estimation (p-value < 0.001). Our decision framework comprising utility analysis and McNemar's test provides a novel way to evaluate prediction models in the realm of radiogenomics.
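
    The maximum-expected-utility step described above reduces to scanning ROC operating points. A hedged sketch, with made-up sensitivity/specificity pairs and utility values purely for illustration:

```python
import numpy as np

def max_expected_utility(sens, spec, prevalence, u_tp, u_fn, u_tn, u_fp):
    """Scan ROC operating points (paired sensitivity/specificity arrays)
    and return the index and value of the maximum expected utility."""
    sens, spec = np.asarray(sens), np.asarray(spec)
    eu = (prevalence * (sens * u_tp + (1.0 - sens) * u_fn)
          + (1.0 - prevalence) * (spec * u_tn + (1.0 - spec) * u_fp))
    i = int(np.argmax(eu))
    return i, float(eu[i])

# Toy operating points and utilities (illustrative assumptions only).
sens = np.array([0.2, 0.6, 0.9])
spec = np.array([0.99, 0.9, 0.6])
best_i, best_eu = max_expected_utility(sens, spec, prevalence=0.01,
                                       u_tp=0.8, u_fn=-1.0,
                                       u_tn=0.0, u_fp=-0.05)
```

The chosen operating point depends on the assigned utilities and the disease prevalence, which is exactly why the MEU criterion can prefer a different threshold than the one maximizing accuracy or AUC.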

  17. New constraints on modelling the random magnetic field of the MW

    Energy Technology Data Exchange (ETDEWEB)

    Beck, Marcus C.; Nielaba, Peter [Department of Physics, University of Konstanz, Universitätsstr. 10, D-78457 Konstanz (Germany); Beck, Alexander M.; Dolag, Klaus [University Observatory Munich, Scheinerstr. 1, D-81679 Munich (Germany); Beck, Rainer [Max Planck Institute for Radioastronomy, Auf dem Hügel 69, D-53121 Bonn (Germany); Strong, Andrew W., E-mail: marcus.beck@uni-konstanz.de, E-mail: abeck@usm.uni-muenchen.de, E-mail: rbeck@mpifr-bonn.mpg.de, E-mail: dolag@usm.uni-muenchen.de, E-mail: aws@mpe.mpg.de, E-mail: peter.nielaba@uni-konstanz.de [Max Planck Institute for Extraterrestrial Physics, Giessenbachstr. 1, D-85748 Garching (Germany)

    2016-05-01

    We extend the description of the isotropic and anisotropic random components of the small-scale magnetic field within the existing magnetic field model of the Milky Way from Jansson and Farrar, by including random realizations of the small-scale component. Using a magnetic-field power spectrum with Gaussian random fields, the NE2001 model for the thermal electrons and the Galactic cosmic-ray electron distribution from the current GALPROP model, we derive full-sky maps for the total and polarized synchrotron intensity as well as the Faraday rotation-measure distribution. While previous work assumed that small-scale fluctuations average out along the line-of-sight or only computed ensemble averages of random fields, we show that these fluctuations need to be carefully taken into account. Comparing with observational data we obtain not only good agreement with 408 MHz total and WMAP7 22 GHz polarized intensity emission maps, but also an improved agreement with Galactic foreground rotation-measure maps and power spectra, whose amplitude and shape strongly depend on the parameters of the random field. We demonstrate that a correlation length of ≈22 pc (5 pc being a 5σ lower limit) is needed to match the slope of the observed power spectrum of Galactic foreground rotation-measure maps. Using multiple realizations also allows us to infer errors on individual observables. We find that previously-used amplitudes for the random and anisotropic random magnetic field components need to be rescaled by factors of ≈0.3 and 0.6 to account for the new small-scale contributions. Our model predicts a rotation measure of −2.8±7.1 rad/m² and 4.4±11. rad/m² for the north and south Galactic poles respectively, in good agreement with observations. Applying our model to deflections of ultra-high-energy cosmic rays we infer a mean deflection of ≈3.5±1.1 degrees for 60 EeV protons arriving from Cen A.

  18. Expected Power-Utility Maximization Under Incomplete Information and with Cox-Process Observations

    International Nuclear Information System (INIS)

    Fujimoto, Kazufumi; Nagai, Hideo; Runggaldier, Wolfgang J.

    2013-01-01

    We consider the problem of maximization of expected terminal power utility (risk sensitive criterion). The underlying market model is a regime-switching diffusion model where the regime is determined by an unobservable factor process forming a finite state Markov process. The main novelty is due to the fact that prices are observed and the portfolio is rebalanced only at random times corresponding to a Cox process where the intensity is driven by the unobserved Markovian factor process as well. This leads to a more realistic modeling for many practical situations, like in markets with liquidity restrictions; on the other hand it considerably complicates the problem to the point that traditional methodologies cannot be directly applied. The approach presented here is specific to the power-utility. For log-utilities a different approach is presented in Fujimoto et al. (Preprint, 2012).

  19. Expected Power-Utility Maximization Under Incomplete Information and with Cox-Process Observations

    Energy Technology Data Exchange (ETDEWEB)

    Fujimoto, Kazufumi, E-mail: m_fuji@kvj.biglobe.ne.jp [Bank of Tokyo-Mitsubishi UFJ, Ltd., Corporate Risk Management Division (Japan); Nagai, Hideo, E-mail: nagai@sigmath.es.osaka-u.ac.jp [Osaka University, Division of Mathematical Science for Social Systems, Graduate School of Engineering Science (Japan); Runggaldier, Wolfgang J., E-mail: runggal@math.unipd.it [Universita di Padova, Dipartimento di Matematica Pura ed Applicata (Italy)

    2013-02-15

    We consider the problem of maximization of expected terminal power utility (risk sensitive criterion). The underlying market model is a regime-switching diffusion model where the regime is determined by an unobservable factor process forming a finite state Markov process. The main novelty is due to the fact that prices are observed and the portfolio is rebalanced only at random times corresponding to a Cox process where the intensity is driven by the unobserved Markovian factor process as well. This leads to a more realistic modeling for many practical situations, like in markets with liquidity restrictions; on the other hand it considerably complicates the problem to the point that traditional methodologies cannot be directly applied. The approach presented here is specific to the power-utility. For log-utilities a different approach is presented in Fujimoto et al. (Preprint, 2012).

  20. A random walk model to evaluate autism

    Science.gov (United States)

    Moura, T. R. S.; Fulco, U. L.; Albuquerque, E. L.

    2018-02-01

    A common test administered during neurological examination in children is the analysis of their social communication and interaction across multiple contexts, including repetitive patterns of behavior. Poor performance may be associated with neurological conditions characterized by impairments in executive function, such as the so-called pervasive developmental disorders (PDDs), a particular condition of the autism spectrum disorders (ASDs). Inspired by these diagnostic tools, mainly those related to repetitive movements and behaviors, we studied how the diffusion regimes of two discrete-time random walkers, mimicking the lack of social interaction and the restricted interests developed by children with PDDs, are affected. Our model, which is based on the so-called elephant random walk (ERW) approach, considers that one of the random walkers can learn and imitate the microscopic behavior of the other with probability f (1 - f otherwise). The diffusion regimes, measured by the Hurst exponent (H), are then obtained, whose changes may indicate a different degree of autism.
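
    The underlying ERW mechanism is easy to simulate. The sketch below implements the standard single-walker elephant random walk (the paper couples two such walkers through the imitation probability f, which is not reproduced here):

```python
import numpy as np

def elephant_walk(steps, p, rng):
    """Elephant random walk (Schuetz-Trimper): each new step repeats a
    uniformly chosen earlier step with probability p and reverses it
    with probability 1 - p; the first step is +1 or -1 at random."""
    s = np.empty(steps, dtype=int)
    s[0] = rng.choice((-1, 1))
    for t in range(1, steps):
        past = s[rng.integers(t)]       # recall one random memory
        s[t] = past if rng.random() < p else -past
    return np.cumsum(s)

rng = np.random.default_rng(1)
x_full = elephant_walk(200, 1.0, rng)   # p = 1: every step repeats, ballistic
x_half = elephant_walk(200, 0.5, rng)   # p = 1/2: ordinary diffusion
```

At p = 1 the walk repeats its first step forever (|x(t)| = t, the superdiffusive extreme, H = 1), while at p = 1/2 memory is irrelevant and ordinary diffusion (H = 1/2) is recovered; estimating H from trajectory statistics is how the paper's diffusion regimes are characterized.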

  1. Connectivity ranking of heterogeneous random conductivity models

    Science.gov (United States)

    Rizzo, C. B.; de Barros, F.

    2017-12-01

    To overcome the challenges associated with hydrogeological data scarcity, the hydraulic conductivity (K) field is often represented by a spatial random process. The state-of-the-art provides several methods to generate 2D or 3D random K-fields, such as the classic multi-Gaussian fields, non-Gaussian fields, training-image-based fields and object-based fields. We provide a systematic comparison of these models based on their connectivity. We use the minimum hydraulic resistance as a connectivity measure, which has been found to be strictly correlated with the early-time arrival of dissolved contaminants. A computationally efficient graph-based algorithm is employed, allowing a stochastic treatment of the minimum hydraulic resistance through a Monte-Carlo approach and therefore enabling the computation of its uncertainty. The results show the impact of geostatistical parameters on the connectivity for each group of random fields, making it possible to rank the fields according to their minimum hydraulic resistance.
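
    The graph-based computation of minimum hydraulic resistance can be sketched with Dijkstra's algorithm on the pixel grid; the edge-resistance discretization below (average of the two cells' inverse conductivities) is an assumption for illustration, not necessarily the authors' choice.

```python
import heapq

import numpy as np

def min_hydraulic_resistance(K):
    """Minimum hydraulic resistance between the left and right faces of a
    2D conductivity field K, computed with Dijkstra's algorithm on the
    pixel graph; edge resistance is 0.5*(1/K[a] + 1/K[b])."""
    ny, nx = K.shape
    R = 1.0 / K                         # cell resistivities
    dist = np.full((ny, nx), np.inf)
    heap = []
    for i in range(ny):                 # every left-face cell is a source
        dist[i, 0] = 0.5 * R[i, 0]
        heapq.heappush(heap, (dist[i, 0], i, 0))
    while heap:
        d, i, j = heapq.heappop(heap)
        if d > dist[i, j]:              # stale queue entry
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < ny and 0 <= b < nx:
                nd = d + 0.5 * (R[i, j] + R[a, b])
                if nd < dist[a, b]:
                    dist[a, b] = nd
                    heapq.heappush(heap, (nd, a, b))
    return dist[:, -1].min()            # best arrival on the right face

res_uniform = min_hydraulic_resistance(np.ones((3, 5)))
```

Wrapping this in a Monte-Carlo loop over K-field realizations gives the distribution of the minimum resistance, and hence the uncertainty measure used to rank the random-field models.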

  2. [Home health resource utilization measures using a case-mix adjustor model].

    Science.gov (United States)

    You, Sun-Ju; Chang, Hyun-Sook

    2005-08-01

    The purpose of this study was to measure home health resource utilization using a Case-Mix Adjustor Model developed in the U.S. The subjects of this study were 484 patients who had received more than 4 home health care visits during a 60-day episode at 31 home health care institutions. Data on the 484 patients had to be merged onto a 60-day payment segment. Based on the results, the researcher classified home health resource groups (HHRG). The subjects were classified into 34 HHRGs in Korea. Home health resource utilization according to clinical severity was in order of Minimum (C0) service utilization moderate), and the lowest 97,000 won in group C2F3S1, so the former was 5.82 times higher than the latter. Resource utilization in home health care has become an issue of concern due to rising costs for home health care. The results suggest the need for more analytical attention to the utilization of and expenditures for home care using a Case-Mix Adjustor Model.

  3. Accumulator and random-walk models of psychophysical discrimination: a counter-evaluation.

    Science.gov (United States)

    Vickers, D; Smith, P

    1985-01-01

    In a recent assessment of models of psychophysical discrimination, Heath criticises the accumulator model for its reliance on computer simulation and qualitative evidence, and contrasts it unfavourably with a modified random-walk model, which yields exact predictions, is susceptible to critical test, and is provided with simple parameter-estimation techniques. A counter-evaluation is presented, in which the approximations employed in the modified random-walk analysis are demonstrated to be seriously inaccurate, the resulting parameter estimates to be artefactually determined, and the proposed test not critical. It is pointed out that Heath's specific application of the model is not legitimate, his data treatment inappropriate, and his hypothesis concerning confidence inconsistent with experimental results. Evidence from adaptive performance changes is presented which shows that the necessary assumptions for quantitative analysis in terms of the modified random-walk model are not satisfied, and that the model can be reconciled with data at the qualitative level only by making it virtually indistinguishable from an accumulator process. A procedure for deriving exact predictions for an accumulator process is outlined.

  4. A cellular automata model of traffic flow with variable probability of randomization

    International Nuclear Information System (INIS)

    Zheng Wei-Fan; Zhang Ji-Ye

    2015-01-01

    Research on the stochastic behavior of traffic flow is important for understanding the intrinsic evolution rules of a traffic system. By introducing an interactional potential of vehicles into the randomization step, an improved cellular automata traffic flow model with a variable probability of randomization is proposed in this paper. In the proposed model, the driver is affected by the interactional potential of the vehicles before him, and his decision-making process is related to this interactional potential. Compared with the traditional cellular automata model, the proposed model is more suitable for describing the driver’s random decision-making process, which in actual traffic is based on the vehicles and traffic situation in front of him. From the improved model, the fundamental diagram (flow–density relationship) is obtained, and the detailed high-density traffic phenomenon is reproduced through numerical simulation. (paper)
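
An illustrative variable-randomization rule grafted onto the standard Nagel-Schreckenberg update; the specific form p = p0 + alpha/gap is a hypothetical stand-in for the interactional-potential dependence described above, and all parameter values are assumptions:

```python
import random

def nasch_step(pos, vel, L, vmax=5, p0=0.1, alpha=0.3, rng=random):
    """One parallel update of a Nagel-Schreckenberg ring road in which
    the randomization probability grows with an interaction potential
    of the vehicle ahead: p = p0 + alpha / gap (illustrative form)."""
    n = len(pos)
    new_pos, new_vel = pos[:], vel[:]
    for k in range(n):
        ahead = min((pos[j] - pos[k]) % L for j in range(n) if j != k)
        gap = ahead - 1                      # empty cells to the leader
        v = min(vel[k] + 1, vmax, gap)       # accelerate, then brake
        p = min(1.0, p0 + alpha / max(gap, 1))
        if v > 0 and rng.random() < p:
            v -= 1                           # interaction-dependent noise
        new_vel[k] = v
        new_pos[k] = (pos[k] + v) % L
    return new_pos, new_vel
```

Iterating this step over a range of densities and averaging the flow reproduces a fundamental diagram for the modified rule.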

  5. MODELING URBAN DYNAMICS USING RANDOM FOREST: IMPLEMENTING ROC AND TOC FOR MODEL EVALUATION

    Directory of Open Access Journals (Sweden)

    M. Ahmadlou

    2016-06-01

    Full Text Available The importance of the spatial accuracy of land use/cover change maps necessitates the use of high-performance models. To reach this goal, calibrating machine learning (ML) approaches to model land use/cover conversions has received increasing interest among scholars. This interest originates from the strength of these techniques, which powerfully account for the complex relationships underlying urban dynamics. Compared to other ML techniques, random forest has rarely been used for modeling urban growth. This paper, drawing on information from multi-temporal Landsat satellite images of 1985, 2000 and 2015, calibrates a random forest regression (RFR) model to quantify variable importance and simulate the spatial patterns of urban change. The results and performance of the RFR model were evaluated using two complementary tools, relative operating characteristics (ROC) and total operating characteristics (TOC), by overlaying the map of observed change and the modeled suitability map for land use change (error map). The suitability map produced by the RFR model showed an area under the ROC curve of 82.48%, which indicates very good performance and highlights the model’s appropriateness for simulating urban growth.
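
The ROC evaluation step, which overlays a suitability map on observed change, reduces to computing the area under the ROC curve from per-cell scores. A minimal rank-based AUC sketch (function name and inputs are illustrative; the TOC computation is not shown):

```python
def roc_auc(observed, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney)
    identity: the probability that a randomly chosen changed cell
    scores higher than a randomly chosen unchanged cell."""
    pairs = sorted(zip(scores, observed))
    ranks, i = {}, 0
    while i < len(pairs):                 # midranks for tied scores
        j = i
        while j < len(pairs) and pairs[j][0] == pairs[i][0]:
            j += 1
        for k in range(i, j):
            ranks[k] = (i + 1 + j) / 2.0
        i = j
    n1 = sum(observed)                    # changed cells (label 1)
    n0 = len(observed) - n1               # unchanged cells (label 0)
    rank_sum = sum(ranks[k] for k, (_, y) in enumerate(pairs) if y == 1)
    return (rank_sum - n1 * (n1 + 1) / 2.0) / (n1 * n0)
```

An AUC of 0.8248, as reported above, would mean that a changed cell outranks an unchanged one about 82% of the time.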

  6. A Comparison of Three Random Number Generators for Aircraft Dynamic Modeling Applications

    Science.gov (United States)

    Grauer, Jared A.

    2017-01-01

    Three random number generators, which produce Gaussian white noise sequences, were compared to assess their suitability in aircraft dynamic modeling applications. The first generator considered was the MATLAB (registered) implementation of the Mersenne-Twister algorithm. The second generator was a website called Random.org, which processes atmospheric noise measured using radios to create the random numbers. The third generator was based on synthesis of the Fourier series, where the random number sequences are constructed from prescribed amplitude and phase spectra. A total of 200 sequences of 601 random numbers each were collected for each generator and analyzed in terms of the mean, variance, normality, autocorrelation, and power spectral density. These sequences were then applied to two problems in aircraft dynamic modeling, namely estimating stability and control derivatives from simulated onboard sensor data, and simulating flight in atmospheric turbulence. In general, each random number generator had good performance and is well-suited for aircraft dynamic modeling applications. Specific strengths and weaknesses of each generator are discussed. For Monte Carlo simulation, the Fourier synthesis method is recommended because it most accurately and consistently approximated Gaussian white noise and can be implemented with reasonable computational effort.
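
The Fourier-synthesis generator can be sketched as equal-amplitude harmonics with independent uniform random phases: the sum is approximately Gaussian by the central limit theorem, and the flat amplitude spectrum makes the sequence white. The normalization below is an assumption, not the report's prescribed spectra:

```python
import math, random

def fourier_white_noise(n, seed=0):
    """Approximately Gaussian white noise of length n, synthesized as
    a sum of equal-amplitude harmonics with independent uniform random
    phases (unit variance by construction for even n)."""
    rng = random.Random(seed)
    m = n // 2
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(m)]
    amp = 2.0 / math.sqrt(n)                 # normalizes variance to ~1
    return [amp * sum(math.cos(2.0 * math.pi * (k + 1) * t / n + phases[k])
                      for k in range(m))
            for t in range(n)]

x = fourier_white_noise(600)
mean = sum(x) / len(x)
var = sum(v * v for v in x) / len(x) - mean * mean
```

This direct O(n^2) evaluation is fine for short sequences such as the 601-sample records above; an inverse FFT would be the usual choice for long ones.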

  7. Autoregressive Modeling of Drift and Random Error to Characterize a Continuous Intravascular Glucose Monitoring Sensor.

    Science.gov (United States)

    Zhou, Tony; Dickson, Jennifer L; Geoffrey Chase, J

    2018-01-01

    Continuous glucose monitoring (CGM) devices have been effective in managing diabetes and offer potential benefits for use in the intensive care unit (ICU). Use of CGM devices in the ICU has been limited, primarily due to higher point accuracy errors compared with the traditional intermittent blood glucose (BG) measures currently used. General models of CGM errors, including drift and random errors, are lacking, but would enable better design of protocols to utilize these devices. This article presents an autoregressive (AR) based modeling method that separately characterizes the drift and random noise of the GlySure CGM sensor (GlySure Limited, Oxfordshire, UK). Clinical sensor data (n = 33) and reference measurements were used to generate 2 AR models to describe sensor drift and noise. These models were used to generate 100 Monte Carlo simulations based on reference blood glucose measurements. These were then compared to the original CGM clinical data using mean absolute relative difference (MARD) and a Trend Compass. The point accuracy MARD was very similar between simulated and clinical data (9.6% vs 9.9%). A Trend Compass was used to assess trend accuracy, and found simulated and clinical sensor profiles were similar (simulated trend index 11.4° vs clinical trend index 10.9°). The model and method accurately represent cohort sensor behavior across patients, providing a general modeling approach to any such sensor by separately characterizing each type of error that can arise in the data. Overall, it enables better protocol design based on accurate expected CGM sensor behavior, as well as enabling the analysis of what level of each type of sensor error would be necessary to obtain desired glycemic control safety and performance with a given protocol.
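
A sketch of the drift-plus-noise Monte Carlo idea described above, using AR(1) components with illustrative coefficients (not the fitted GlySure values; the paper's AR orders may differ), together with the MARD metric:

```python
import random

def simulate_cgm(reference, drift_phi=0.995, drift_sigma=0.05,
                 noise_phi=0.5, noise_sigma=0.3, seed=0):
    """One Monte Carlo CGM trace: reference BG plus separately modeled
    AR(1) drift (slow, persistent) and AR(1) random noise (fast).
    All coefficients are illustrative assumptions."""
    rng = random.Random(seed)
    drift = noise = 0.0
    trace = []
    for bg in reference:
        drift = drift_phi * drift + rng.gauss(0.0, drift_sigma)
        noise = noise_phi * noise + rng.gauss(0.0, noise_sigma)
        trace.append(bg + drift + noise)
    return trace

def mard(reference, measured):
    """Mean absolute relative difference, in percent."""
    n = len(reference)
    return 100.0 * sum(abs(m - r) / r
                       for r, m in zip(reference, measured)) / n
```

Repeating simulate_cgm with different seeds gives an ensemble of traces whose MARD against the reference can be compared with clinical values such as the 9.6% vs 9.9% reported above.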

  8. Risk Decision Making Model for Reservoir Floodwater resources Utilization

    Science.gov (United States)

    Huang, X.

    2017-12-01

    Floodwater resources utilization (FRU) can alleviate the shortage of water resources, but it carries risks. In order to safely and efficiently utilize floodwater resources, it is necessary to study the risks of reservoir FRU. In this paper, the risk rate of exceeding the design flood water level and the risk rate of exceeding the safety discharge are estimated. Based on the principle of minimum risk and maximum benefit of FRU, a multi-objective risk decision-making model for FRU is constructed. Probability theory and mathematical statistics methods are selected to calculate the risk rate; the C-D production function method and the emergy analysis method are selected to calculate the risk benefit; the risk loss is related to the flood inundation area and the loss per unit area; and the multi-objective decision-making problem of the model is solved by the constraint method. Taking the Shilianghe reservoir in Jiangsu Province as an example, the optimal equilibrium solution of FRU for the Shilianghe reservoir is found by using the risk decision-making model, and the validity and applicability of the model are verified.

  9. Fitting Nonlinear Ordinary Differential Equation Models with Random Effects and Unknown Initial Conditions Using the Stochastic Approximation Expectation-Maximization (SAEM) Algorithm.

    Science.gov (United States)

    Chow, Sy-Miin; Lu, Zhaohua; Sherwood, Andrew; Zhu, Hongtu

    2016-03-01

    The past decade has seen an increased prevalence of irregularly spaced longitudinal data in the social sciences. Clearly lacking, however, are modeling tools that allow researchers to fit dynamic models to irregularly spaced data, particularly data that show nonlinearity and heterogeneity in dynamical structures. We consider the issue of fitting multivariate nonlinear differential equation models with random effects and unknown initial conditions to irregularly spaced data. A stochastic approximation expectation-maximization algorithm is proposed and its performance is evaluated using a benchmark nonlinear dynamical systems model, namely, the Van der Pol oscillator equations. The empirical utility of the proposed technique is illustrated using a set of 24-h ambulatory cardiovascular data from 168 men and women. Pertinent methodological challenges and unresolved issues are discussed.

  10. Joint modeling of ChIP-seq data via a Markov random field model

    NARCIS (Netherlands)

    Bao, Yanchun; Vinciotti, Veronica; Wit, Ernst; 't Hoen, Peter A C

    Chromatin ImmunoPrecipitation-sequencing (ChIP-seq) experiments have now become routine in biology for the detection of protein-binding sites. In this paper, we present a Markov random field model for the joint analysis of multiple ChIP-seq experiments. The proposed model naturally accounts for

  11. Generalized Whittle-Matern random field as a model of correlated fluctuations

    International Nuclear Information System (INIS)

    Lim, S C; Teo, L P

    2009-01-01

    This paper considers a generalization of the Gaussian random field with covariance function of the Whittle-Matern family. Such a random field can be obtained as the solution to the fractional stochastic differential equation with two fractional orders. Asymptotic properties of the covariance functions belonging to this generalized Whittle-Matern family are studied, which are used to deduce the sample path properties of the random field. The Whittle-Matern field has been widely used in modeling geostatistical data such as sea beam data, wind speed, field temperature and soil data. In this paper we show that the generalized Whittle-Matern field provides a more flexible model for wind speed data.
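
For reference, the ordinary one-parameter Whittle-Matern covariance admits closed forms at half-integer smoothness values; a minimal sketch (the paper's generalized two-fractional-order field is not reproduced here):

```python
import math

def matern_cov(r, length=1.0, nu=1.5, sigma2=1.0):
    """Whittle-Matern covariance at lag r >= 0 for the half-integer
    smoothness values nu that admit closed forms."""
    if r == 0:
        return sigma2
    s = math.sqrt(2.0 * nu) * r / length     # scaled lag
    if nu == 0.5:                            # exponential covariance
        poly = 1.0
    elif nu == 1.5:
        poly = 1.0 + s
    elif nu == 2.5:
        poly = 1.0 + s + s * s / 3.0
    else:
        raise ValueError("closed form implemented for nu in {0.5, 1.5, 2.5}")
    return sigma2 * poly * math.exp(-s)
```

Larger nu gives smoother sample paths; the two fractional orders of the generalized family extend this single smoothness parameter.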

  12. An approach for evaluating utility-financed energy conservation programs. The economic welfare model

    Energy Technology Data Exchange (ETDEWEB)

    Costello, K W; Galen, P S

    1985-09-01

    The main objective of this paper is to illustrate how the economic welfare model may be used to measure the economic efficiency effects of utility-financed energy conservation programs. The economic welfare model is the theoretical structure that was used in this paper to develop a cost/benefit test. This test defines the net benefit of a conservation program as the change in the sum of consumer and producer surplus. The authors advocate the operation of the proposed cost/benefit model as a screening tool to eliminate from more detailed review those programs where the expected net benefits are less than zero. The paper presents estimates of the net benefit derived from different specified cost/benefit models for four illustrative pilot programs. These models are representative of those which have been applied or are under review by utilities and public utility commissions. From the numerical results, it is shown that net benefit is greatly affected by the assumptions made about the nature of welfare gains to program participants. The main conclusion that emerges from the numerical results is that the selection of a cost/benefit model is a crucial element in evaluating utility-financed energy conservation programs. The paper also briefly addresses some of the major unresolved issues in utility-financed energy conservation programs. 2 figs., 3 tabs., 10 refs. (A.V.)

  13. A simulation-based goodness-of-fit test for random effects in generalized linear mixed models

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus

    2006-01-01

    The goodness-of-fit of the distribution of random effects in a generalized linear mixed model is assessed using a conditional simulation of the random effects conditional on the observations. Provided that the specified joint model for random effects and observations is correct, the marginal...... distribution of the simulated random effects coincides with the assumed random effects distribution. In practice, the specified model depends on some unknown parameter which is replaced by an estimate. We obtain a correction for this by deriving the asymptotic distribution of the empirical distribution...

  14. A simulation-based goodness-of-fit test for random effects in generalized linear mixed models

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus Plenge

    The goodness-of-fit of the distribution of random effects in a generalized linear mixed model is assessed using a conditional simulation of the random effects conditional on the observations. Provided that the specified joint model for random effects and observations is correct, the marginal...... distribution of the simulated random effects coincides with the assumed random effects distribution. In practice the specified model depends on some unknown parameter which is replaced by an estimate. We obtain a correction for this by deriving the asymptotic distribution of the empirical distribution function...

  15. Pervasive randomness in physics: an introduction to its modelling and spectral characterisation

    Science.gov (United States)

    Howard, Roy

    2017-10-01

    An introduction to the modelling and spectral characterisation of random phenomena is detailed at a level consistent with a first exposure to the subject at an undergraduate level. A signal framework for defining a random process is provided and this underpins an introduction to common random processes including the Poisson point process, the random walk, the random telegraph signal, shot noise, information signalling random processes, jittered pulse trains, birth-death random processes and Markov chains. An introduction to the spectral characterisation of signals and random processes, via either an energy spectral density or a power spectral density, is detailed. The important case of defining a white noise random process concludes the paper.

  16. Estimation of cost-effectiveness of the Finnish electricity distribution utilities

    International Nuclear Information System (INIS)

    Kopsakangas-Savolainen, Maria; Svento, Rauli

    2008-01-01

    This paper examines the cost-effectiveness of Finnish electricity distribution utilities. We estimate several panel data stochastic frontier specifications using both Cobb-Douglas and Translog model specifications. The conventional models are extended in order to model observed heterogeneity explicitly in the cost frontier models. The true fixed effects model has been used as a representative of the models which account for unobserved heterogeneity and extended conventional random effect models have been used in analysing the impact of observed heterogeneity. A comparison between the conventional random effects model and models where heterogeneity component is entered either into the mean or into the variance of the inefficiency term shows that relative efficiency scores diminish when heterogeneity is added to the analysis. The true fixed effects model on the other hand gives clearly smaller inefficiency scores than random effects models. In the paper we also show that the relative inefficiency scores and rankings are not sensitive to the cost function specification. Our analysis points out the importance of the efficient use of the existing distribution network. The economies of scale results suggest that firms could reduce their operating costs by using networks more efficiently. According to our results average size firms which have high load factors are the most efficient ones. All firms have unused capacities so that they can improve cost-effectiveness rather by increasing the average distributed volumes than by mergers

  17. Estimation of cost-effectiveness of the Finnish electricity distribution utilities

    Energy Technology Data Exchange (ETDEWEB)

    Kopsakangas-Savolainen, Maria; Svento, Rauli [Department of Economics, University of Oulu (Finland)

    2008-03-15

    This paper examines the cost-effectiveness of Finnish electricity distribution utilities. We estimate several panel data stochastic frontier specifications using both Cobb-Douglas and Translog model specifications. The conventional models are extended in order to model observed heterogeneity explicitly in the cost frontier models. The true fixed effects model has been used as a representative of the models which account for unobserved heterogeneity and extended conventional random effect models have been used in analysing the impact of observed heterogeneity. A comparison between the conventional random effects model and models where heterogeneity component is entered either into the mean or into the variance of the inefficiency term shows that relative efficiency scores diminish when heterogeneity is added to the analysis. The true fixed effects model on the other hand gives clearly smaller inefficiency scores than random effects models. In the paper we also show that the relative inefficiency scores and rankings are not sensitive to the cost function specification. Our analysis points out the importance of the efficient use of the existing distribution network. The economies of scale results suggest that firms could reduce their operating costs by using networks more efficiently. According to our results average size firms which have high load factors are the most efficient ones. All firms have unused capacities so that they can improve cost-effectiveness rather by increasing the average distributed volumes than by mergers. (author)

  18. DIAMOND: A model of incremental decision making for resource acquisition by electric utilities

    Energy Technology Data Exchange (ETDEWEB)

    Gettings, M.; Hirst, E.; Yourstone, E.

    1991-02-01

    Uncertainty is a major issue facing electric utilities in planning and decision making. Substantial uncertainties exist concerning future load growth; the lifetimes and performances of existing power plants; the construction times, costs, and performances of new resources being brought online; and the regulatory and economic environment in which utilities operate. This report describes a utility planning model that focuses on frequent and incremental decisions. The key features of this model are its explicit treatment of uncertainty, frequent user interaction with the model, and the ability to change prior decisions. The primary strength of this model is its representation of the planning and decision-making environment that utility planners and executives face. Users interact with the model after every year or two of simulation, which provides an opportunity to modify past decisions as well as to make new decisions. For example, construction of a power plant can be started one year, and if circumstances change, the plant can be accelerated, mothballed, canceled, or continued as originally planned. Similarly, the marketing and financial incentives for demand-side management programs can be changed from year to year, reflecting the short lead time and small unit size of these resources. This frequent user interaction with the model, an operational game, should build greater understanding and insights among utility planners about the risks associated with different types of resources. The model is called DIAMOND, Decision Impact Assessment Model. It consists of four submodels: FUTURES, FORECAST, SIMULATION, and DECISION. It runs on any IBM-compatible PC and requires no special software or hardware. 19 refs., 13 figs., 15 tabs.

  19. What’s Needed from Climate Modeling to Advance Actionable Science for Water Utilities?

    Science.gov (United States)

    Barsugli, J. J.; Anderson, C. J.; Smith, J. B.; Vogel, J. M.

    2009-12-01

    “…perfect information on climate change is neither available today nor likely to be available in the future, but … over time, as the threats climate change poses to our systems grow more real, predicting those effects with greater certainty is non-discretionary. We’re not yet at a level at which climate change projections can drive climate change adaptation.” (Testimony of WUCA Staff Chair David Behar to the House Committee on Science and Technology, May 5, 2009) To respond to this challenge, the Water Utility Climate Alliance (WUCA) has sponsored a white paper titled “Options for Improving Climate Modeling to Assist Water Utility Planning for Climate Change.” This report concerns how investments in the science of climate change, and in particular climate modeling and downscaling, can best be directed to help make climate projections more actionable. The meaning of “model improvement” can be very different depending on whether one is talking to a climate model developer or to a water manager trying to incorporate climate projections into planning. We first surveyed the WUCA members on present and potential uses of climate model projections and on climate inputs to their various system models. Based on those surveys and on subsequent discussions, we identified four dimensions along which improvement in modeling would make the science more “actionable”: improved model agreement on change in key parameters; narrowing the range of model projections; providing projections at spatial and temporal scales that match water utilities’ system models; and providing projections that cover water utility planning horizons. With these goals in mind we developed four options for improving global-scale climate modeling and three options for improving downscaling that will be discussed. However, there does not seem to be a single investment (the proverbial “magic bullet”) that will substantially reduce the range of model projections at the scales at which utility

  20. Numerical Simulation of Entropy Growth for a Nonlinear Evolutionary Model of Random Markets

    Directory of Open Access Journals (Sweden)

    Mahdi Keshtkar

    2016-01-01

    Full Text Available In this communication, the generalized continuous economic model for random markets is revisited. In this model, agents trade in pairs and exchange their money in a random and conservative way. They display the exponential wealth distribution as the asymptotic equilibrium, independently of the effectiveness of the transactions and of the limitation of the total wealth. In the current work, the entropy of the mentioned model is defined and some theorems on the entropy growth of this evolutionary problem are given. Furthermore, the entropy increase is verified by simulation on some numerical examples.
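
The exchange rule can be sketched as conservative pairwise trades with a uniform random split, with a binned entropy estimate to illustrate its growth; agent count, trade count, and binning are all illustrative assumptions:

```python
import math, random

def random_market(n_agents=500, n_trades=20000, seed=0):
    """Conservative pairwise exchange: two random agents pool their
    money and split the total uniformly at random."""
    rng = random.Random(seed)
    wealth = [1.0] * n_agents
    for _ in range(n_trades):
        i, j = rng.sample(range(n_agents), 2)
        total = wealth[i] + wealth[j]
        share = rng.random()
        wealth[i], wealth[j] = share * total, (1.0 - share) * total
    return wealth

def binned_entropy(wealth, bins=20, wmax=5.0):
    """Shannon entropy of a histogram of the wealth distribution."""
    counts = [0] * bins
    for w in wealth:
        counts[min(int(w / wmax * bins), bins - 1)] += 1
    n = len(wealth)
    return -sum(c / n * math.log(c / n) for c in counts if c)
```

Starting from equal wealth (entropy 0 in this binning), the histogram spreads toward the exponential distribution and the binned entropy grows, while total wealth is conserved.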

  1. Scaling of coercivity in a 3d random anisotropy model

    Energy Technology Data Exchange (ETDEWEB)

    Proctor, T.C., E-mail: proctortc@gmail.com; Chudnovsky, E.M., E-mail: EUGENE.CHUDNOVSKY@lehman.cuny.edu; Garanin, D.A.

    2015-06-15

    The random-anisotropy Heisenberg model is numerically studied on lattices containing over ten million spins. The study is focused on hysteresis and metastability due to topological defects, and is relevant to magnetic properties of amorphous and sintered magnets. We are interested in the limit when ferromagnetic correlations extend beyond the size of the grain inside which the magnetic anisotropy axes are correlated. In that limit the coercive field computed numerically roughly scales as the fourth power of the random anisotropy strength and as the sixth power of the grain size. Theoretical arguments are presented that provide an explanation of numerical results. Our findings should be helpful for designing amorphous and nanosintered materials with desired magnetic properties. - Highlights: • We study the random-anisotropy model on lattices containing up to ten million spins. • Irreversible behavior due to topological defects (hedgehogs) is elucidated. • Hysteresis loop area scales as the fourth power of the random anisotropy strength. • In nanosintered magnets the coercivity scales as the sixth power of the grain size.

  2. Guidance for the utility of linear models in meta-analysis of genetic association studies of binary phenotypes.

    Science.gov (United States)

    Cook, James P; Mahajan, Anubha; Morris, Andrew P

    2017-02-01

    Linear mixed models are increasingly used for the analysis of genome-wide association studies (GWAS) of binary phenotypes because they can efficiently and robustly account for population stratification and relatedness through inclusion of random effects for a genetic relationship matrix. However, the utility of linear (mixed) models in the context of meta-analysis of GWAS of binary phenotypes has not been previously explored. In this investigation, we present simulations to compare the performance of linear and logistic regression models under alternative weighting schemes in a fixed-effects meta-analysis framework, considering designs that incorporate variable case-control imbalance, confounding factors and population stratification. Our results demonstrate that linear models can be used for meta-analysis of GWAS of binary phenotypes, without loss of power, even in the presence of extreme case-control imbalance, provided that one of the following schemes is used: (i) effective sample size weighting of Z-scores or (ii) inverse-variance weighting of allelic effect sizes after conversion onto the log-odds scale. Our conclusions thus provide essential recommendations for the development of robust protocols for meta-analysis of binary phenotypes with linear models.
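
The two recommended weighting schemes can be sketched directly. The effective-sample-size formula for case-control studies below is the common convention and is stated here as an assumption, not a definition from the paper:

```python
import math

def n_effective(n_cases, n_controls):
    """Commonly used effective sample size for a case-control study
    (assumed convention): 4 / (1/n_cases + 1/n_controls)."""
    return 4.0 / (1.0 / n_cases + 1.0 / n_controls)

def meta_z_effective_n(z_scores, n_eff):
    """Scheme (i): effective-sample-size weighting of study Z-scores."""
    num = sum(z * math.sqrt(n) for z, n in zip(z_scores, n_eff))
    return num / math.sqrt(sum(n_eff))

def meta_inverse_variance(betas, ses):
    """Scheme (ii): inverse-variance fixed-effects combination of
    allelic effects already converted onto the log-odds scale."""
    weights = [1.0 / se ** 2 for se in ses]
    beta = sum(w * b for w, b in zip(weights, betas)) / sum(weights)
    return beta, 1.0 / math.sqrt(sum(weights))
```

Either scheme protects against the bias that naive weighting would introduce under extreme case-control imbalance across studies.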

  3. Particle filters for random set models

    CERN Document Server

    Ristic, Branko

    2013-01-01

    “Particle Filters for Random Set Models” presents coverage of state estimation of stochastic dynamic systems from noisy measurements, specifically sequential Bayesian estimation and nonlinear or stochastic filtering. The class of solutions presented in this book is based on the Monte Carlo statistical method. The resulting algorithms, known as particle filters, in the last decade have become one of the essential tools for stochastic filtering, with applications ranging from navigation and autonomous vehicles to bio-informatics and finance. While particle filters have been around for more than a decade, the recent theoretical developments of sequential Bayesian estimation in the framework of random set theory have provided new opportunities which are not widely known and are covered in this book. These recent developments have dramatically widened the scope of applications, from single to multiple appearing/disappearing objects, from precise to imprecise measurements and measurement models. This book...

  4. The changing utility workforce and the emergence of building information modeling in utilities

    Energy Technology Data Exchange (ETDEWEB)

    Saunders, A. [Autodesk Inc., San Rafael, CA (United States)

    2010-07-01

    Utilities are faced with the extensive replacement of a workforce that is now reaching retirement age. New personnel will have varying skill levels and different expectations in relation to design tools. This paper discussed methods of facilitating knowledge transfer from the retiring workforce to new staff using rules-based design software. It was argued that while nothing can replace the experiential knowledge of long-term engineers, software with built-in validations can accelerate training and building information modelling (BIM) processes. Younger personnel will expect a user interface paradigm that is based on their past gaming and work experiences. Visualization, simulation, and modelling approaches were reviewed. 3 refs.

  5. Asthma Self-Management Model: Randomized Controlled Trial

    Science.gov (United States)

    Olivera, Carolina M. X.; Vianna, Elcio Oliveira; Bonizio, Roni C.; de Menezes, Marcelo B.; Ferraz, Erica; Cetlin, Andrea A.; Valdevite, Laura M.; Almeida, Gustavo A.; Araujo, Ana S.; Simoneti, Christian S.; de Freitas, Amanda; Lizzi, Elisangela A.; Borges, Marcos C.; de Freitas, Osvaldo

    2016-01-01

    Information for patients provided by the pharmacist is reflected in adhesion to treatment, clinical results and patient quality of life. The objective of this study was to assess an asthma self-management model for rational medicine use. This was a randomized controlled trial with 60 asthmatic patients assigned to attend five modules presented by…

  6. Genomic-Enabled Prediction Kernel Models with Random Intercepts for Multi-environment Trials

    Science.gov (United States)

    Cuevas, Jaime; Granato, Italo; Fritsche-Neto, Roberto; Montesinos-Lopez, Osval A.; Burgueño, Juan; Bandeira e Sousa, Massaine; Crossa, José

    2018-01-01

    In this study, we compared the prediction accuracy of the main genotypic effect model (MM) without G×E interactions, the multi-environment single variance G×E deviation model (MDs), and the multi-environment environment-specific variance G×E deviation model (MDe), where the random genetic effects of the lines are modeled with the markers (or pedigree). With the objective of further modeling the genetic residual of the lines, we incorporated the random intercepts of the lines (l) and generated another three models. Each of these 6 models was fitted with a linear kernel method (Genomic Best Linear Unbiased Predictor, GB) and a Gaussian kernel (GK) method. We compared these 12 model-method combinations with another two multi-environment G×E interaction models with unstructured variance-covariances (MUC) using GB and GK kernels (4 model-method combinations). Thus, we compared the genomic-enabled prediction accuracy of a total of 16 model-method combinations on two maize data sets with positive phenotypic correlations among environments, and on two wheat data sets with complex G×E that includes some negative and close-to-zero phenotypic correlations among environments. The two models MDs and MDe with the random intercepts of the lines and the GK method were computationally efficient and gave high prediction accuracy in the two maize data sets. Regarding the more complex G×E wheat data sets, the MDs and MDe combinations including the random intercepts of the lines with the GK method offered important savings in computing time as compared with the G×E interaction multi-environment models with unstructured variance-covariances, but with lower genomic prediction accuracy. PMID:29476023
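
The Gaussian kernel (GK) between lines can be sketched from a marker matrix; the squared-Euclidean distance, median scaling, and bandwidth h below are the usual convention, assumed here rather than taken from the paper:

```python
import math

def gaussian_kernel(X, h=1.0):
    """Gaussian kernel between lines of a marker matrix X:
    K[i][j] = exp(-h * d2[i][j] / median(off-diagonal d2)),
    where d2 is the squared Euclidean marker distance."""
    n = len(X)
    d2 = [[sum((a - b) ** 2 for a, b in zip(X[i], X[j])) for j in range(n)]
          for i in range(n)]
    off = sorted(d2[i][j] for i in range(n) for j in range(n) if i != j)
    med = off[len(off) // 2] or 1.0          # guard against zero median
    return [[math.exp(-h * d2[i][j] / med) for j in range(n)] for i in range(n)]
```

The linear GB kernel would instead use the cross-product of centered markers; the resulting matrix plugs into the same mixed-model machinery as a genetic covariance.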

  7. Predicting Health Care Utilization After the First Behavioral Health Visit Using Natural Language Processing and Machine Learning

    OpenAIRE

    Roysden, Nathaniel

    2016-01-01

    Mental health problems are an independent predictor of increased healthcare utilization. We created random forest classifiers for predicting two outcomes following a patient’s first behavioral health encounter: decreased utilization by any amount (AUROC 0.74) and ultra-high absolute utilization (AUROC 0.88). These models may be used for clinical decision support by referring providers, to automatically detect patients who may benefit from referral, for cost management, or for risk/protection ...
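    The abstract above reports classifier quality as AUROC. As a minimal, self-contained illustration (the random forest itself and any patient features are out of scope here), AUROC can be computed directly from its rank interpretation: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one, with ties counting half.

```python
def auroc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs where the positive
    case is ranked higher (ties count as 0.5)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Perfectly separated scores give AUROC 1.0; identical scores give 0.5.
print(auroc([0.9, 0.8, 0.7], [0.3, 0.2]))  # → 1.0
print(auroc([0.5, 0.5], [0.5]))            # → 0.5
```

    In practice one would use a library routine (e.g. scikit-learn's `roc_auc_score`) over the classifier's predicted probabilities; the pairwise form above is the definition it implements.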

  8. Direct estimates of unemployment rate and capacity utilization in macroeconometric models

    Energy Technology Data Exchange (ETDEWEB)

    Klein, L R [Univ. of Pennsylvania, Philadelphia; Su, V

    1979-10-01

    The problem of measuring resource-capacity utilization as a factor in overall economic efficiency is examined, and a tentative solution is offered. A macro-econometric model is applied to the aggregate production function by linking unemployment rate and capacity utilization rate. Partial- and full-model simulations use Wharton indices as a filter and produce direct estimates of unemployment rates. The simulation paths of durable-goods industries, which are more capital-intensive, are found to be more sensitive to business cycles than the nondurable-goods industries. 11 references.

  9. Restoration of dimensional reduction in the random-field Ising model at five dimensions

    Science.gov (United States)

    Fytas, Nikolaos G.; Martín-Mayor, Víctor; Picco, Marco; Sourlas, Nicolas

    2017-04-01

    The random-field Ising model is one of the few disordered systems where the perturbative renormalization group can be carried out to all orders of perturbation theory. This analysis predicts dimensional reduction, i.e., that the critical properties of the random-field Ising model in D dimensions are identical to those of the pure Ising ferromagnet in D - 2 dimensions. It is well known that dimensional reduction is not true in three dimensions, thus invalidating the perturbative renormalization group prediction. Here, we report high-precision numerical simulations of the 5D random-field Ising model at zero temperature. We illustrate universality by comparing different probability distributions for the random fields. We compute all the relevant critical exponents (including the critical slowing down exponent for the ground-state finding algorithm), as well as several other renormalization-group invariants. The estimated values of the critical exponents of the 5D random-field Ising model are statistically compatible to those of the pure 3D Ising ferromagnet. These results support the restoration of dimensional reduction at D = 5. We thus conclude that the failure of the perturbative renormalization group is a low-dimensional phenomenon. We close our contribution by comparing universal quantities for the random-field problem at dimensions 3 ≤ D ≤ 5 with those of the pure Ising ferromagnet at D - 2 dimensions, finding equality at all studied dimensions.

  10. Simulating intrafraction prostate motion with a random walk model

    Directory of Open Access Journals (Sweden)

    Tobias Pommer, PhD

    2017-07-01

    Conclusions: Random walk modeling is feasible and recreated the characteristics of the observed prostate motion. Introducing artificial transient motion did not improve the overall agreement, although the first 30 seconds of the traces were better reproduced. The model provides a simple estimate of prostate motion during delivery of radiation therapy.
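    A random walk of the kind described can be sketched in a few lines. The step size and time resolution below are illustrative assumptions, not parameters from the study, and motion is shown in one dimension only.

```python
import random

def simulate_prostate_trace(n_steps, dt=1.0, step_sd=0.04, seed=1):
    """Sketch of a 1D random walk for intrafraction prostate motion:
    at each time step the position moves by a zero-mean Gaussian
    increment scaled with sqrt(dt). step_sd is a hypothetical value."""
    rng = random.Random(seed)
    pos, trace = 0.0, [0.0]
    for _ in range(n_steps):
        pos += rng.gauss(0.0, step_sd * dt ** 0.5)
        trace.append(pos)
    return trace

trace = simulate_prostate_trace(n_steps=300)  # a 5-minute trace at 1 s resolution
print(len(trace), max(abs(x) for x in trace))
```

    Transient motion, as tested in the study, would be layered on top of this baseline walk as occasional short-lived displacements.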

  11. Random matrices and the six-vertex model

    CERN Document Server

    Bleher, Pavel

    2013-01-01

    This book provides a detailed description of the Riemann-Hilbert approach (RH approach) to the asymptotic analysis of both continuous and discrete orthogonal polynomials, and applications to random matrix models as well as to the six-vertex model. The RH approach was an important ingredient in the proofs of universality in unitary matrix models. This book gives an introduction to the unitary matrix models and discusses bulk and edge universality. The six-vertex model is an exactly solvable two-dimensional model in statistical physics, and thanks to the Izergin-Korepin formula for the model with domain wall boundary conditions, its partition function matches that of a unitary matrix model with nonpolynomial interaction. The authors introduce in this book the six-vertex model and include a proof of the Izergin-Korepin formula. Using the RH approach, they explicitly calculate the leading and subleading terms in the thermodynamic asymptotic behavior of the partition function of the six-vertex model with domain wall boundary conditions.

  12. Hardware architecture for projective model calculation and false match refining using random sample consensus algorithm

    Science.gov (United States)

    Azimi, Ehsan; Behrad, Alireza; Ghaznavi-Ghoushchi, Mohammad Bagher; Shanbehzadeh, Jamshid

    2016-11-01

    The projective model is an important mapping function for the calculation of global transformation between two images. However, its hardware implementation is challenging because of a large number of coefficients with different required precisions for fixed point representation. A VLSI hardware architecture is proposed for the calculation of a global projective model between input and reference images and refining false matches using random sample consensus (RANSAC) algorithm. To make the hardware implementation feasible, it is proved that the calculation of the projective model can be divided into four submodels comprising two translations, an affine model and a simpler projective mapping. This approach makes the hardware implementation feasible and considerably reduces the required number of bits for fixed point representation of model coefficients and intermediate variables. The proposed hardware architecture for the calculation of a global projective model using the RANSAC algorithm was implemented using Verilog hardware description language and the functionality of the design was validated through several experiments. The proposed architecture was synthesized by using an application-specific integrated circuit digital design flow utilizing 180-nm CMOS technology as well as a Virtex-6 field programmable gate array. Experimental results confirm the efficiency of the proposed hardware architecture in comparison with software implementation.
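    The RANSAC loop at the core of the architecture can be illustrated with the simplest possible model, a 2D line, rather than the paper's four-submodel projective mapping: repeatedly fit a minimal sample, count inliers within a tolerance, and keep the best hypothesis so that false matches (outliers) are refined away.

```python
import random

def ransac_line(points, n_iter=200, tol=0.1, seed=0):
    """RANSAC sketch for a 2D line y = a*x + b: fit a minimal sample
    (2 points), count inliers within tol, keep the best fit."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # vertical sample; skip this hypothesis
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = sum(1 for x, y in points if abs(y - (a * x + b)) < tol)
        if inliers > best_inliers:
            best, best_inliers = (a, b), inliers
    return best, best_inliers

# 8 points on y = 2x + 1 plus two gross outliers ("false matches").
pts = [(x, 2 * x + 1) for x in range(8)] + [(1.0, 9.0), (5.0, -3.0)]
(a, b), n = ransac_line(pts)
print(round(a, 3), round(b, 3), n)  # slope ≈ 2, intercept ≈ 1, 8 inliers
```

    The hardware version replaces the line fit with the decomposed projective submodels, but the sample-score-keep loop is the same.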

  13. Review of utility values for economic modeling in type 2 diabetes.

    Science.gov (United States)

    Beaudet, Amélie; Clegg, John; Thuresson, Per-Olof; Lloyd, Adam; McEwan, Phil

    2014-06-01

    Economic analysis in type 2 diabetes mellitus (T2DM) requires an assessment of the effect of a wide range of complications. The objective of this article was to identify a set of utility values consistent with the National Institute for Health and Care Excellence (NICE) reference case and to critically discuss and illustrate challenges in creating such a utility set. A systematic literature review was conducted to identify studies reporting utility values for relevant complications. The methodology of each study was assessed for consistency with the NICE reference case. A suggested set of utility values applicable to modeling was derived, giving preference to studies reporting multiple complications and correcting for comorbidity. The review considered 21 relevant diabetes complications. A total of 16,574 articles were identified; after screening, 61 articles were assessed for methodological quality. Nineteen articles met NICE criteria, reporting utility values for 20 of 21 relevant complications. For renal transplant, because no articles meeting NICE criteria were identified, two articles using other methodologies were included. Index value estimates for T2DM without complication ranged from 0.711 to 0.940. Utility decrement associated with complications ranged from 0.014 (minor hypoglycemia) to 0.28 (amputation). Limitations associated with the selection of a utility value for use in economic modeling included variability in patient recruitment, heterogeneity in statistical analysis, large variability around some point estimates, and lack of recent data. A reference set of utility values for T2DM and its complications in line with NICE requirements was identified. This research illustrates the challenges associated with systematically selecting utility data for economic evaluations. Copyright © 2014 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
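    As a sketch of how such utility values enter an economic model, the two common ways of combining a baseline index value with complication decrements, additive and multiplicative, can be written as follows. The numbers are illustrative only, drawn from the ranges quoted above (base index 0.711-0.940; decrements 0.014-0.28).

```python
def additive_utility(base, decrements):
    """Apply complication decrements additively (floored at 0)."""
    return max(0.0, base - sum(decrements))

def multiplicative_utility(base, decrements):
    """Multiplicative alternative sometimes used to correct for
    comorbidity: each decrement (read as a proportional loss) scales
    the remaining utility, so stacked complications never go negative."""
    u = base
    for d in decrements:
        u *= (1.0 - d)
    return u

base = 0.785  # hypothetical T2DM-without-complication index value
print(round(additive_utility(base, [0.28, 0.014]), 3))        # → 0.491
print(round(multiplicative_utility(base, [0.28, 0.014]), 3))
```

    Which combination rule to use is itself a modeling choice; the review's point is that the inputs must be sourced consistently with the NICE reference case.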

  14. Utility of the Conners' Adult ADHD Rating Scale validity scales in identifying simulated attention-deficit hyperactivity disorder and random responding.

    Science.gov (United States)

    Walls, Brittany D; Wallace, Elizabeth R; Brothers, Stacey L; Berry, David T R

    2017-12-01

    Recent concern about malingered self-report of symptoms of attention-deficit hyperactivity disorder (ADHD) in college students has resulted in an urgent need for scales that can detect feigning of this disorder. The present study provided further validation data for a recently developed validity scale for the Conners' Adult ADHD Rating Scale (CAARS), the CAARS Infrequency Index (CII), as well as for the Inconsistency Index (INC). The sample included 139 undergraduate students: 21 individuals with diagnoses of ADHD, 29 individuals responding honestly, 54 individuals responding randomly (full or half), and 35 individuals instructed to feign. Overall, the INC showed moderate sensitivity to random responding (.44-.63) and fairly high specificity to ADHD (.86-.91). The CII demonstrated modest sensitivity to feigning (.31-.46) and excellent specificity to ADHD (.91-.95). Sequential application of validity scales had correct classification rates of honest (93.1%), ADHD (81.0%), feigning (57.1%), half random (42.3%), and full random (92.9%). The present study suggests that the CII is modestly sensitive (true positive rate) to feigned ADHD symptoms, and highly specific (true negative rate) to ADHD. Additionally, this study highlights the utility of applying the CAARS validity scales in a sequential manner for identifying feigning. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  15. Modeling and optimizing of the random atomic spin gyroscope drift based on the atomic spin gyroscope

    Energy Technology Data Exchange (ETDEWEB)

    Quan, Wei; Lv, Lin, E-mail: lvlinlch1990@163.com; Liu, Baiqi [School of Instrument Science and Opto-Electronics Engineering, Beihang University, Beijing 100191 (China)

    2014-11-15

    In order to improve the atomic spin gyroscope's operational accuracy and compensate for the random error caused by the nonlinear and weakly stable characteristics of the random atomic spin gyroscope (ASG) drift, a hybrid random drift error model based on autoregressive (AR) modeling and genetic programming (GP) plus a genetic algorithm (GA) is established. The time series of random ASG drift, acquired by analyzing and preprocessing the measured ASG data, is taken as the study object. The linear section of the model is established with the AR technique. The nonlinear section is then built with GP, and GA is used to optimize the coefficients of the mathematical expression produced by GP in order to obtain a more accurate model. Simulation results indicate that this hybrid model effectively reflects the characteristics of the ASG's random drift: the square error of the random drift is reduced by 92.40%, and compared with the AR technique and the GP + GA technique alone, the random drift is reduced by a further 9.34% and 5.06%, respectively. The hybrid modeling method can effectively compensate for the ASG's random drift and improve the stability of the system.
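    The linear (AR) section of such a hybrid model can be sketched as an ordinary least-squares fit; the GP/GA nonlinear section is omitted here, and the synthetic series below stands in for real gyroscope drift data.

```python
import numpy as np

def fit_ar(x, order):
    """Least-squares fit of an AR(p) model x_t = sum_i a_i * x_{t-i} + e_t.
    This sketches only the linear section of the hybrid model."""
    rows = [x[t - order:t][::-1] for t in range(order, len(x))]
    X, y = np.array(rows), np.array(x[order:])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

# Synthetic AR(2) "drift" series with known coefficients.
rng = np.random.default_rng(0)
true = [0.6, -0.2]
x = [0.0, 0.0]
for _ in range(5000):
    x.append(true[0] * x[-1] + true[1] * x[-2] + rng.normal(0, 0.01))

a = fit_ar(x, 2)
print(np.round(a, 2))  # close to [0.6, -0.2]
```

    In the paper, the residual left after this linear fit is what the GP expression (with GA-optimized coefficients) is trained to capture.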

  16. Finite-range Coulomb gas models of banded random matrices and quantum kicked rotors.

    Science.gov (United States)

    Pandey, Akhilesh; Kumar, Avanish; Puri, Sanjay

    2017-11-01

    Dyson demonstrated an equivalence between infinite-range Coulomb gas models and classical random matrix ensembles for the study of eigenvalue statistics. We introduce finite-range Coulomb gas (FRCG) models via a Brownian matrix process, and study them analytically and by Monte Carlo simulations. These models yield new universality classes, and provide a theoretical framework for the study of banded random matrices (BRMs) and quantum kicked rotors (QKRs). We demonstrate that, for a BRM of bandwidth b and a QKR of chaos parameter α, the appropriate FRCG model has the effective range d=b^{2}/N=α^{2}/N, for large N matrix dimensionality. As d increases, there is a transition from Poisson to classical random matrix statistics.

  17. Development of the multi-attribute Adolescent Health Utility Measure (AHUM)

    Directory of Open Access Journals (Sweden)

    Beusterien Kathleen M

    2012-08-01

    Objective: Obtain utilities (preferences) for a generalizable set of health states experienced by older children and adolescents who receive therapy for chronic health conditions. Methods: A health state classification system, the Adolescent Health Utility Measure (AHUM), was developed based on generic health status measures and input from children with Hunter syndrome and their caregivers. The AHUM contains six dimensions with 4–7 severity levels: self-care, pain, mobility, strenuous activities, self-image, and health perceptions. Using the time trade-off (TTO) approach, a UK population sample provided utilities for 62 of the 16,800 AHUM states. A mixed effects model was used to estimate utilities for the AHUM states. The AHUM was applied to trial NCT00069641 of idursulfase for Hunter syndrome and its extension (NCT00630747). Results: Observations (i.e., utilities) totaled 3,744 (12 × 312 participants), with between 43 and 60 for each health state except for the best and worst states, which each had 312 observations. The mean utilities for the best and worst AHUM states were 0.99 and 0.41, respectively. The random effects model was statistically significant. Discussion: The AHUM health state classification system may be used in future research to enable calculation of quality-adjusted life expectancy for applicable health conditions.

  18. Economics of Utility Scale Photovoltaics at Purdue University

    Science.gov (United States)

    Arnett, William

    The research for this case study shows that utility-scale solar photovoltaics have become a competitive energy investment option, even when a campus operates a power plant at low electricity rates. To evaluate this, an economic model called SEEMS (Solar Economic Evaluation Modelling Spreadsheets) was developed to evaluate a number of financing scenarios under real-time pricing for universities. The three main financing structures considered are 1) land leasing, 2) university direct purchase, and 3) third-party purchase. Unlike other commercially available models, SEEMS specifically accounts for real-time pricing, where the local utility provides electricity at an hourly rate that changes with expected demand. In addition, SEEMS includes a random simulation that allows the model to predict the likelihood of success for a given solar installation strategy. The research showed that there are several options for utility-scale solar that are financially attractive. The most practical financing structure is a third-party partnership, because of the opportunity to take advantage of tax incentives; other options could become more attractive if non-financial benefits are considered. The case study for this research, Purdue University, has a unique opportunity to integrate utility-scale solar electricity into its strategic planning. Purdue is currently updating its master plan, which will define how land is developed, and is developing a sustainability plan that will define long-term environmental goals. In addition, the university is developing over 500 acres of land west of campus as part of its Aerospace Innovation District. This research helps make the case for including utility-scale solar electricity as part of the university's strategic planning.

  19. Universality for 1d Random Band Matrices: Sigma-Model Approximation

    Science.gov (United States)

    Shcherbina, Mariya; Shcherbina, Tatyana

    2018-02-01

    The paper continues the development of the rigorous supersymmetric transfer matrix approach to random band matrices started in (J Stat Phys 164:1233-1260, 2016; Commun Math Phys 351:1009-1044, 2017). We consider random Hermitian block band matrices consisting of W × W random Gaussian blocks (parametrized by j, k ∈ Λ = [1, n]^d ∩ Z^d) with a fixed entry variance J_{jk} = δ_{j,k} W^{-1} + β Δ_{j,k} W^{-2}, β > 0, in each block. Taking the limit W → ∞ with fixed n and β, we derive the sigma-model approximation of the second correlation function, similar to Efetov's one. Then, considering the limit β, n → ∞, we prove that in dimension d = 1 the behaviour of the sigma-model approximation in the bulk of the spectrum, for β ≫ n, is determined by the classical Wigner-Dyson statistics.

  20. Clinical Utility of the DSM-5 Alternative Model of Personality Disorders

    DEFF Research Database (Denmark)

    Bach, Bo; Markon, Kristian; Simonsen, Erik

    2015-01-01

    In Section III, Emerging Measures and Models, DSM-5 presents an Alternative Model of Personality Disorders, which is an empirically based model of personality pathology measured with the Level of Personality Functioning Scale (LPFS) and the Personality Inventory for DSM-5 (PID-5). These novel instruments assess level of personality impairment and pathological traits. Objective: A number of studies have supported the psychometric qualities of the LPFS and the PID-5, but the utility of these instruments in clinical assessment and treatment has not been extensively evaluated. The goal of this study was to evaluate the clinical utility of this alternative model of personality disorders. Method: We administered the LPFS and the PID-5 to psychiatric outpatients diagnosed with personality disorders and other nonpsychotic disorders. The personality profiles of six characteristic patients were inspected ...

  1. Exploring the Influence of Neighborhood Characteristics on Burglary Risks: A Bayesian Random Effects Modeling Approach

    Directory of Open Access Journals (Sweden)

    Hongqiang Liu

    2016-06-01

    A Bayesian random effects modeling approach was used to examine the influence of neighborhood characteristics on burglary risks in Jianghan District, Wuhan, China. This random effects model is essentially spatial; a spatially structured random effects term and an unstructured random effects term are added to the traditional non-spatial Poisson regression model. Based on social disorganization and routine activity theories, five covariates extracted from the available data at the neighborhood level were used in the modeling. Three regression models were fitted and compared by the deviance information criterion to identify which model best fit our data. A comparison of the results from the three models indicates that the Bayesian random effects model is superior to the non-spatial models in fitting the data and estimating regression coefficients. Our results also show that neighborhoods with above average bar density and department store density have higher burglary risks. Neighborhood-specific burglary risks and posterior probabilities of neighborhoods having a burglary risk greater than 1.0 were mapped, indicating the neighborhoods that should warrant more attention and be prioritized for crime intervention and reduction. Implications and limitations of the study are discussed in our concluding section.
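    A generative sketch of the model family described (Poisson regression with a neighborhood-level random effect) is shown below. For brevity only the unstructured i.i.d. Gaussian random effect is included, the spatially structured term is omitted, and all parameter values are hypothetical.

```python
import numpy as np

def simulate_burglary_counts(n_neighborhoods, beta0, beta, covariates,
                             sd_unstructured=0.3, seed=42):
    """Generative form of the random effects Poisson model:
    log(mu_i) = beta0 + x_i . beta + u_i,  y_i ~ Poisson(mu_i),
    with u_i ~ N(0, sd_unstructured^2) the unstructured effect."""
    rng = np.random.default_rng(seed)
    u = rng.normal(0.0, sd_unstructured, n_neighborhoods)
    log_mu = beta0 + covariates @ beta + u
    return rng.poisson(np.exp(log_mu))

# Two hypothetical covariates, e.g. bar density and department store density.
X = np.random.default_rng(1).uniform(0, 1, size=(50, 2))
y = simulate_burglary_counts(50, beta0=1.0, beta=np.array([0.8, 0.5]),
                             covariates=X)
print(y[:10])
```

    Fitting this model (and its spatially structured extension) is done with Bayesian MCMC machinery and model comparison by DIC, as in the study; the snippet only shows the data-generating structure being assumed.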

  2. Predictive Utility of Personality Disorder in Depression: Comparison of Outcomes and Taxonomic Approach.

    Science.gov (United States)

    Newton-Howes, Giles; Mulder, Roger; Ellis, Pete M; Boden, Joseph M; Joyce, Peter

    2017-09-19

    There is debate around the best model for diagnosing personality disorder, both in terms of its relationship to the empirical data and clinical utility. Four randomized controlled trials examining various treatments for depression were analyzed at an individual patient level. Three different approaches to the diagnosis of personality disorder were analyzed in these patients. A total of 578 depressed patients were included in the analysis. Personality disorder, however measured, was of little predictive utility in the short term but added significantly to predictive modelling of medium-term outcomes, accounting for more than twice as much of the variance in social functioning outcome as depression psychopathology. Personality disorder assessment is of predictive utility with longer timeframes and when considering social outcomes as opposed to symptom counts. This utility is sufficiently great that there appears to be value in assessing personality; however, no particular approach outperforms any other.

  3. Modeling the Dynamic Interrelations between Mobility, Utility, and Land Asking Price

    Science.gov (United States)

    Hidayat, E.; Rudiarto, I.; Siegert, F.; Vries, W. D.

    2018-02-01

    Limited and insufficient information about the dynamic interrelation among mobility, utility, and land price is the main motivation for this research. Several studies, with several approaches and variables, have so far attempted to model land price; however, most of these models appear to generate primarily static land prices. Thus, research is required to compare, design, and validate different models which calculate and/or compare the interrelational changes of mobility, utility, and land price. The applied method is a combination of literature review, expert interviews, and statistical analysis. The result is a newly improved mathematical model, which has been validated and is suitable for the case study location. This improved model consists of 12 appropriate variables and can be implemented in Salatiga city, the case study location, to support better land use planning and mitigate uncontrolled urban growth.

  4. Quantitative utilization of prior biological knowledge in the Bayesian network modeling of gene expression data

    Directory of Open Access Journals (Sweden)

    Gao Shouguo

    2011-08-01

    Background: Bayesian Networks (BN) are a powerful approach to reconstructing genetic regulatory networks from gene expression data. However, expression data by itself suffers from high noise and lack of power. Incorporating prior biological knowledge can improve performance; as each type of prior knowledge on its own may be incomplete or limited by quality issues, integrating multiple sources of prior knowledge to utilize their consensus is desirable. Results: We introduce a new method to incorporate quantitative information from multiple sources of prior knowledge. It first uses the Naïve Bayesian classifier to assess the likelihood of functional linkage between gene pairs based on prior knowledge; in this study we included co-citation in PubMed and semantic similarity in Gene Ontology annotation. A candidate network edge reservoir is then created in which the copy number of each edge is proportional to the estimated likelihood of linkage between the two corresponding genes. In network simulation the Markov Chain Monte Carlo sampling algorithm is adopted, sampling from this reservoir at each iteration to generate new candidate networks. We evaluated the new algorithm using both simulated and real gene expression data, including data from a yeast cell cycle and a mouse pancreas development/growth study. Incorporating prior knowledge led to a ~2-fold increase in the number of known transcription regulations recovered, without significant change in the false positive rate. In contrast, without the prior knowledge BN modeling is not always better than random selection, demonstrating the necessity in network modeling of supplementing the gene expression data with additional information. Conclusion: Our new development provides a statistical means to utilize the quantitative information in prior biological knowledge in the BN modeling of gene expression data, which significantly improves performance.
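    The edge-reservoir idea is straightforward to sketch: each candidate edge is replicated in proportion to its prior-knowledge likelihood, so that uniform draws from the reservoir propose well-supported edges more often during MCMC. The gene names and scale factor below are hypothetical.

```python
import random
from collections import Counter

def build_reservoir(edge_likelihoods, scale=10):
    """Candidate-edge reservoir: each edge appears with a copy number
    proportional to its prior-knowledge likelihood, so proposals drawn
    uniformly from the reservoir favor well-supported edges."""
    reservoir = []
    for edge, lik in edge_likelihoods.items():
        reservoir.extend([edge] * max(1, round(lik * scale)))
    return reservoir

likelihoods = {("geneA", "geneB"): 0.9,   # hypothetical gene pairs with
               ("geneA", "geneC"): 0.3,   # prior-derived linkage likelihoods
               ("geneB", "geneC"): 0.1}
reservoir = build_reservoir(likelihoods)
rng = random.Random(0)
proposal = rng.choice(reservoir)  # one proposed edge for the next iteration
print(Counter(reservoir), proposal)
```

    In the full method the likelihoods come from a Naïve Bayes combination of PubMed co-citation and GO similarity, and each MCMC iteration samples from this reservoir when proposing a new candidate network.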

  5. Assessment of the biophysical impacts of utility-scale photovoltaics through observations and modelling

    Science.gov (United States)

    Broadbent, A. M.; Georgescu, M.; Krayenhoff, E. S.; Sailor, D.

    2017-12-01

    Utility-scale solar power plants are a rapidly growing component of the solar energy sector. Utility-scale photovoltaic (PV) solar power generation in the United States has increased by 867% since 2012 (EIA, 2016). This expansion is likely to continue as the cost of PV technologies decreases. While most agree that solar power can decrease greenhouse gas emissions, the biophysical effects of PV systems on the surface energy balance (SEB), and the implications for surface climate, are not well understood. To our knowledge, there has never been a detailed observational study of the SEB at a utility-scale solar array. This study presents data from an eddy covariance observational tower temporarily placed above a utility-scale PV array in Southern Arizona. Comparison of the PV SEB with a reference (unmodified) site shows that solar panels can alter the SEB and near-surface climate. The SEB observations are used to develop and validate a new and more complete SEB PV model. In addition, the PV model is compared to simpler PV modelling methods; the simpler models produce differing results from our newly developed model and cannot capture the more complex processes that influence the PV SEB. Finally, hypothetical scenarios of PV expansion across the continental United States (CONUS) were developed using various spatial mapping criteria. CONUS simulations of PV expansion reveal regional variability in its biophysical effects. The study presents the first rigorous and validated simulations of the biophysical effects of utility-scale PV arrays.

  6. A lattice-model representation of continuous-time random walks

    International Nuclear Information System (INIS)

    Campos, Daniel; Mendez, Vicenc

    2008-01-01

    We report some ideas for constructing lattice models (LMs) as a discrete approach to the reaction-dispersal (RD) or reaction-random walks (RRW) models. The analysis of a rather general class of Markovian and non-Markovian processes, from the point of view of their wavefront solutions, lets us show that in some regimes their macroscopic dynamics (front speed) turns out to be different from that predicted by classical reaction-diffusion equations, which are often used as a mean-field approximation to the problem. So, the convenience of a more general framework, such as that given by continuous-time random walks (CTRW), is claimed. Here we use LMs as a numerical approach in order to support that idea, while in previous works our discussion was restricted to analytical models. For the two specific cases studied here, we derive and analyze the mean-field expressions for our LMs. As a result, we are able to provide some links between the numerical and analytical approaches studied.
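    A minimal numerical sketch of a CTRW of the Markovian type discussed above: exponential waiting times between unit jumps on a 1D lattice. The non-Markovian regimes in the paper would replace the exponential waiting-time distribution with a heavier-tailed one.

```python
import random

def ctrw_1d(t_max, rate=1.0, seed=7):
    """Continuous-time random walk on a 1D lattice: exponentially
    distributed waiting times (mean 1/rate) between jumps, each jump
    ±1 with equal probability. Returns the (time, position) path."""
    rng = random.Random(seed)
    t, pos, path = 0.0, 0, [(0.0, 0)]
    while True:
        t += rng.expovariate(rate)
        if t > t_max:
            break
        pos += rng.choice((-1, 1))
        path.append((t, pos))
    return path

path = ctrw_1d(t_max=100.0)
print(len(path), path[-1])
```

    Averaging many such trajectories recovers diffusive spreading in this Markovian case; it is precisely where that mean-field picture fails that the lattice models of the paper become useful.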

  8. Self-dual random-plaquette gauge model and the quantum toric code

    Science.gov (United States)

    Takeda, Koujin; Nishimori, Hidetoshi

    2004-05-01

    We study the four-dimensional Z2 random-plaquette lattice gauge theory as a model of topological quantum memory, the toric code in particular. In this model, the procedure of quantum error correction works properly in the ordered (Higgs) phase, and the phase boundary between the ordered (Higgs) and disordered (confinement) phases gives the accuracy threshold of error correction. Using self-duality of the model in conjunction with the replica method, we show that this model has exactly the same mathematical structure as that of the two-dimensional random-bond Ising model, which has been studied very extensively. This observation enables us to derive a conjecture on the exact location of the multicritical point (accuracy threshold) of the model, pc=0.889972…, and leads to several nontrivial results including bounds on the accuracy threshold in three dimensions.

  10. Improving surgeon utilization in an orthopedic department using simulation modeling

    Directory of Open Access Journals (Sweden)

    Simwita YW

    2016-10-01

    Yusta W Simwita, Berit I Helgheim, Department of Logistics, Molde University College, Molde, Norway. Purpose: Worldwide, more than two billion people lack appropriate access to surgical services due to the mismatch between existing human resources and patient demand. Improving utilization of existing workforce capacity can reduce the gap between surgical demand and available workforce capacity. In this paper, the authors use discrete event simulation to explore the care process at an orthopedic department, with a main focus on improving utilization of surgeons while minimizing patient wait time. Methods: The authors collaborated with orthopedic department personnel to map the current operations of the orthopedic care process in order to identify factors that influence poor surgeon utilization and high patient waiting time. An observational approach was used to collect data. The developed model was validated by comparing the simulation output with actual patient data collected from the studied orthopedic care process. The authors developed a proposal scenario to show how to improve surgeon utilization. Results: The simulation results showed that if ancillary services could be performed before the start of clinic examination services, the orthopedic care process could be greatly improved, that is, surgeon utilization improved and patient waiting time reduced. Simulation results demonstrate that with improved surgeon utilization, up to 55% of increased future demand can be accommodated without patients exceeding the current waiting time at this clinic, thus improving patient access to health care services. Conclusion: This study shows how simulation modeling can be used to improve health care processes. The study was limited to a single care process; however, the findings can be applied to improve other orthopedic care processes with similar operational characteristics. Keywords: waiting time, patient, health care process
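    The paper's central finding, that performing ancillary services before the clinic examination improves surgeon utilization, can be reproduced with a toy deterministic simulation. All service and arrival times below are illustrative assumptions, not values from the study.

```python
def simulate_clinic(arrivals, ancillary=10.0, exam=15.0,
                    pre_exam_ancillary=True):
    """Deterministic sketch of a single-surgeon care process: each
    patient needs an ancillary service (done by support staff) and a
    surgeon exam. If ancillary work is done before the visit, the
    surgeon only spends `exam` minutes per patient; otherwise the
    surgeon's schedule also absorbs the ancillary delay."""
    surgeon_free, busy = 0.0, 0.0
    for t in arrivals:
        ready = t if pre_exam_ancillary else t + ancillary
        start = max(ready, surgeon_free)
        surgeon_free = start + exam
        busy += exam
    return busy / surgeon_free  # surgeon utilization over the session

arrivals = [0, 20, 40, 60]  # patients arriving every 20 minutes
print(simulate_clinic(arrivals, pre_exam_ancillary=True))   # 60/75 = 0.8
print(simulate_clinic(arrivals, pre_exam_ancillary=False))  # 60/85 ≈ 0.71
```

    The real study uses stochastic discrete event simulation validated against observed patient data, but the ordering effect shown here is the same mechanism.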

  11. Model franchise agreements with public utilities. Musterkonzessionsvertraege mit Energieversorgungsunternehmen

    Energy Technology Data Exchange (ETDEWEB)

    Menking, C. (Niedersaechsischer Staedte- und Gemeindebund, Hannover (Germany, F.R.))

    1989-01-01

    In 1987, the Committee of Town and Community Administrations of Lower Saxonia established the task force 'Franchise Agreements'. This is a forum where town and community officials interested in energy issues cooperate. The idea was to improve conditions and participation possibilities for local administrations in contracts with their present utilities, and to draw up, and coordinate with the utilities, a franchise agreement creating possibilities for the communities, inter alia, in the sectors power supply concept, advising on energy conservation, energy generation. A model of a franchise agreement for the electricity sector is presented in its full wording. (orig./HSCH).

  12. The application of financial options theory to electric utility decision making in integrated resource planning and maintenance shutdowns

    International Nuclear Information System (INIS)

    Felder, F.

    1995-01-01

    Increased competition in wholesale power generation will allow electric utilities to use financial models to improve their decision making. This competition will result in the creation of electricity spot, futures, and forward markets, which will provide the information necessary for utility executives to use advanced financial tools, such as random walk models and options theory. These models will allow executives to place a value on risk. Once this value is known, executives can determine how best to manage that risk, whether by entering into financial transactions, adjusting their operational and planning decisions, or both.

  13. Modeling superhydrophobic surfaces comprised of random roughness

    Science.gov (United States)

    Samaha, M. A.; Tafreshi, H. Vahedi; Gad-El-Hak, M.

    2011-11-01

    We model the performance of superhydrophobic surfaces comprised of randomly distributed roughness that resembles natural surfaces, or those produced via random deposition of hydrophobic particles. Such a fabrication method is far less expensive than ordered-microstructured fabrication. The present numerical simulations are aimed at improving our understanding of the drag reduction effect and the stability of the air-water interface in terms of the microstructure parameters. For comparison and validation, we have also simulated the flow over superhydrophobic surfaces made up of aligned or staggered microposts for channel flows as well as streamwise or spanwise ridge configurations for pipe flows. The present results are compared with other theoretical and experimental studies. The numerical simulations indicate that the random distribution of surface roughness has a favorable effect on drag reduction, as long as the gas fraction is kept the same. The stability of the meniscus, however, is strongly influenced by the average spacing between the roughness peaks, which needs to be carefully examined before a surface can be recommended for fabrication. Financial support from DARPA, contract number W91CRB-10-1-0003, is acknowledged.

  14. The transverse spin-1 Ising model with random interactions

    Energy Technology Data Exchange (ETDEWEB)

    Bouziane, Touria [Department of Physics, Faculty of Sciences, University of Moulay Ismail, B.P. 11201 Meknes (Morocco)], E-mail: touria582004@yahoo.fr; Saber, Mohammed [Department of Physics, Faculty of Sciences, University of Moulay Ismail, B.P. 11201 Meknes (Morocco); Dpto. Fisica Aplicada I, EUPDS (EUPDS), Plaza Europa, 1, San Sebastian 20018 (Spain)

    2009-01-15

    The phase diagrams of the transverse spin-1 Ising model with random interactions are investigated using a new technique in the effective field theory that employs a probability distribution within the framework of the single-site cluster theory based on the use of exact Ising spin identities. A model is adopted in which the nearest-neighbor exchange couplings are independent random variables distributed according to the law P(J_ij) = pδ(J_ij − J) + (1 − p)δ(J_ij − αJ). General formulae, applicable to lattices with coordination number N, are given. Numerical results are presented for a simple cubic lattice. The possible reentrant phenomenon displayed by the system due to the competitive effects between exchange interactions occurs for the appropriate range of the parameter α.
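The bimodal coupling law is easy to sample directly. The sketch below just draws bonds from P(J_ij) = pδ(J_ij − J) + (1 − p)δ(J_ij − αJ) and checks the empirical mean; p, J, and α are set to arbitrary illustrative values, not ones from the paper.

```python
import random

random.seed(1)

def sample_couplings(n, J=1.0, p=0.7, alpha=0.5):
    """Draw n nearest-neighbour couplings from the bimodal law
    P(J_ij) = p*delta(J_ij - J) + (1 - p)*delta(J_ij - alpha*J)."""
    return [J if random.random() < p else alpha * J for _ in range(n)]

bonds = sample_couplings(100_000)
mean_J = sum(bonds) / len(bonds)
expected = 0.7 * 1.0 + 0.3 * 0.5   # p*J + (1 - p)*alpha*J
```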

  15. Local lattice relaxations in random metallic alloys: Effective tetrahedron model and supercell approach

    DEFF Research Database (Denmark)

    Ruban, Andrei; Simak, S.I.; Shallcross, S.

    2003-01-01

    We present a simple effective tetrahedron model for local lattice relaxation effects in random metallic alloys on simple primitive lattices. A comparison with direct ab initio calculations for supercells representing random Ni0.50Pt0.50 and Cu0.25Au0.75 alloys as well as the dilute limit of Au-rich CuAu alloys shows that the model yields a quantitatively accurate description of the relaxation energies in these systems. Finally, we discuss the bond length distribution in random alloys.

  16. Role of Statistical Random-Effects Linear Models in Personalized Medicine.

    Science.gov (United States)

    Diaz, Francisco J; Yeh, Hung-Wen; de Leon, Jose

    2012-03-01

    Some empirical studies and recent developments in pharmacokinetic theory suggest that statistical random-effects linear models are valuable tools that allow describing simultaneously patient populations as a whole and patients as individuals. This remarkable characteristic indicates that these models may be useful in the development of personalized medicine, which aims at finding treatment regimes that are appropriate for particular patients, not just appropriate for the average patient. In fact, published developments show that random-effects linear models may provide a solid theoretical framework for drug dosage individualization in chronic diseases. In particular, individualized dosages computed with these models by means of an empirical Bayesian approach may produce better results than dosages computed with some methods routinely used in therapeutic drug monitoring. This is further supported by published empirical and theoretical findings that show that random effects linear models may provide accurate representations of phase III and IV steady-state pharmacokinetic data, and may be useful for dosage computations. These models have applications in the design of clinical algorithms for drug dosage individualization in chronic diseases; in the computation of dose correction factors; computation of the minimum number of blood samples from a patient that are necessary for calculating an optimal individualized drug dosage in therapeutic drug monitoring; measure of the clinical importance of clinical, demographic, environmental or genetic covariates; study of drug-drug interactions in clinical settings; the implementation of computational tools for web-site-based evidence farming; design of pharmacogenomic studies; and in the development of a pharmacological theory of dosage individualization.
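For a random-intercept linear model y_ij = μ + b_i + e_ij, the empirical Bayes (BLUP) prediction that underlies this style of dosage individualization shrinks a patient's own mean toward the population mean. The sketch below assumes the variance components are already known; all numbers are hypothetical.

```python
# Empirical Bayes (BLUP) shrinkage for a random-intercept model
# y_ij = mu + b_i + e_ij.  The individualized prediction pulls the
# patient's own mean toward the population mean; variance components
# are taken as known here, and all numbers are hypothetical.

def blup_mean(patient_obs, mu, var_b, var_e):
    n = len(patient_obs)
    ybar = sum(patient_obs) / n
    shrink = var_b / (var_b + var_e / n)   # in (0, 1); -> 1 as n grows
    return mu + shrink * (ybar - mu)

obs = [12.0, 14.0, 13.0]     # hypothetical steady-state measurements
pred = blup_mean(obs, mu=10.0, var_b=4.0, var_e=2.0)
```

With more observations per patient the shrinkage factor approaches 1 and the prediction approaches the patient's own mean, which is the sense in which the model describes populations and individuals simultaneously.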

  17. Discrete random walk models for space-time fractional diffusion

    International Nuclear Information System (INIS)

    Gorenflo, Rudolf; Mainardi, Francesco; Moretti, Daniele; Pagnini, Gianni; Paradisi, Paolo

    2002-01-01

    A physical-mathematical approach to anomalous diffusion may be based on generalized diffusion equations (containing derivatives of fractional order in space and/or time) and related random walk models. By the space-time fractional diffusion equation we mean an evolution equation obtained from the standard linear diffusion equation by replacing the second-order space derivative with a Riesz-Feller derivative of order α ∈ (0,2] and skewness θ (|θ| ≤ min{α, 2−α}), and the first-order time derivative with a Caputo derivative of order β ∈ (0,1]. Such an evolution equation implies for the flux a fractional Fick's law which accounts for spatial and temporal non-locality. The fundamental solution (for the Cauchy problem) of the fractional diffusion equation can be interpreted as a probability density evolving in time of a peculiar self-similar stochastic process that we view as a generalized diffusion process. By adopting appropriate finite-difference schemes of solution, we generate models of random walk discrete in space and time suitable for simulating random variables whose spatial probability density evolves in time according to this fractional diffusion equation.

  18. Randomly dispersed particle fuel model in the PSG Monte Carlo neutron transport code

    International Nuclear Information System (INIS)

    Leppaenen, J.

    2007-01-01

    High-temperature gas-cooled reactor fuels are composed of thousands of microscopic fuel particles, randomly dispersed in a graphite matrix. The modelling of such geometry is complicated, especially using continuous-energy Monte Carlo codes, which are unable to apply any deterministic corrections in the calculation. This paper presents the geometry routine developed for modelling randomly dispersed particle fuels using the PSG Monte Carlo reactor physics code. The model is based on the delta-tracking method, and it takes into account the spatial self-shielding effects and the random dispersion of the fuel particles. The calculation routine is validated by comparing the results to reference MCNP4C calculations using uranium and plutonium based fuels. (authors)
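Delta-tracking (Woodcock tracking) itself is simple to sketch: sample flight distances from a constant majorant cross-section and reject "virtual" collisions. The 1-D, absorption-only toy below is not the PSG routine; the geometry and cross-sections are invented, and the estimate is checked against the analytic transmission exp(−optical depth).

```python
import math
import random

random.seed(2)

def sigma_t(x):
    """Hypothetical total cross-section (1/cm) of a two-region slab."""
    return 0.5 if x < 1.0 else 1.5

SIGMA_MAJ = 1.5      # majorant: >= sigma_t(x) everywhere in the slab
SLAB = 2.0           # slab thickness

def transmission(n=100_000):
    """Estimate the probability a particle crosses the slab without a
    real collision, using Woodcock delta-tracking: flight distances
    are sampled with the constant majorant, and a tentative collision
    at x is accepted as real with probability sigma_t(x) / SIGMA_MAJ
    (otherwise it is a virtual collision and tracking continues)."""
    escaped = 0
    for _ in range(n):
        x = 0.0
        while True:
            x += random.expovariate(SIGMA_MAJ)
            if x >= SLAB:
                escaped += 1          # crossed the slab: transmitted
                break
            if random.random() < sigma_t(x) / SIGMA_MAJ:
                break                 # real collision: particle removed
    return escaped / n

est = transmission()
exact = math.exp(-(0.5 * 1.0 + 1.5 * 1.0))   # exp(-optical depth)
```

The appeal of the method, as in the abstract, is that no deterministic correction or explicit surface tracking through the material heterogeneity is needed.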

  19. Shape Modelling Using Markov Random Field Restoration of Point Correspondences

    DEFF Research Database (Denmark)

    Paulsen, Rasmus Reinhold; Hilger, Klaus Baggesen

    2003-01-01

    A method for building statistical point distribution models is proposed. The novelty in this paper is the adaptation of Markov random field regularization of the correspondence field over the set of shapes. The new approach leads to a generative model that produces highly homogeneous polygonized shapes...

  20. A spatial error model with continuous random effects and an application to growth convergence

    Science.gov (United States)

    Laurini, Márcio Poletti

    2017-10-01

    We propose a spatial error model with continuous random effects based on Matérn covariance functions and apply this model for the analysis of income convergence processes (β -convergence). The use of a model with continuous random effects permits a clearer visualization and interpretation of the spatial dependency patterns, avoids the problems of defining neighborhoods in spatial econometrics models, and allows projecting the spatial effects for every possible location in the continuous space, circumventing the existing aggregations in discrete lattice representations. We apply this model approach to analyze the economic growth of Brazilian municipalities between 1991 and 2010 using unconditional and conditional formulations and a spatiotemporal model of convergence. The results indicate that the estimated spatial random effects are consistent with the existence of income convergence clubs for Brazilian municipalities in this period.
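The continuous random effects rest on Matérn covariance functions, which have simple closed forms for the half-integer smoothness values used most often in practice. A minimal sketch (parameter values illustrative):

```python
import math

def matern(d, sigma2=1.0, rho=1.0, nu=0.5):
    """Matérn covariance at distance d for the two closed-form
    smoothness values nu = 1/2 (exponential) and nu = 3/2.
    sigma2 is the variance and rho the range parameter."""
    if d == 0.0:
        return sigma2
    if nu == 0.5:                        # exponential covariance
        return sigma2 * math.exp(-d / rho)
    if nu == 1.5:
        a = math.sqrt(3.0) * d / rho
        return sigma2 * (1.0 + a) * math.exp(-a)
    raise NotImplementedError("only nu in {0.5, 1.5} in this sketch")

c0 = matern(0.0)
c1 = matern(1.0)   # equals exp(-1) for nu = 1/2, rho = 1
```

Because the covariance is defined at every distance d, spatial effects can be projected to any location in continuous space, which is the advantage over discrete-lattice neighborhood specifications noted in the abstract.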

  1. Implications of Model Structure and Detail for Utility Planning: Scenario Case Studies Using the Resource Planning Model

    Energy Technology Data Exchange (ETDEWEB)

    Mai, Trieu [National Renewable Energy Lab. (NREL), Golden, CO (United States); Barrows, Clayton [National Renewable Energy Lab. (NREL), Golden, CO (United States); Lopez, Anthony [National Renewable Energy Lab. (NREL), Golden, CO (United States); Hale, Elaine [National Renewable Energy Lab. (NREL), Golden, CO (United States); Dyson, Mark [National Renewable Energy Lab. (NREL), Golden, CO (United States); Eurek, Kelly [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2015-04-01

    In this report, we analyze the impacts of model configuration and detail in capacity expansion models, computational tools used by utility planners looking to find the least cost option for planning the system and by researchers or policy makers attempting to understand the effects of various policy implementations. The present analysis focuses on the importance of model configurations — particularly those related to capacity credit, dispatch modeling, and transmission modeling — to the construction of scenario futures. Our analysis is primarily directed toward advanced tools used for utility planning and is focused on those impacts that are most relevant to decisions with respect to future renewable capacity deployment. To serve this purpose, we develop and employ the NREL Resource Planning Model to conduct a case study analysis that explores 12 separate capacity expansion scenarios of the Western Interconnection through 2030.

  2. Random vibration sensitivity studies of modeling uncertainties in the NIF structures

    International Nuclear Information System (INIS)

    Swensen, E.A.; Farrar, C.R.; Barron, A.A.; Cornwell, P.

    1996-01-01

    The National Ignition Facility is a laser fusion project that will provide an above-ground experimental capability for nuclear weapons effects simulation. This facility will achieve fusion ignition utilizing solid-state lasers as the energy driver. The facility will cover an estimated 33,400 m² at an average height of 5-6 stories. Within this complex, a number of beam transport structures will be housed that will deliver the laser beams to the target area within a 50 μm RMS radius of the target center. The beam transport structures are approximately 23 m long and reach heights of approximately 2-3 stories. Low-level ambient random vibrations are one of the primary concerns currently controlling the design of these structures. Low-level ambient vibrations, 10⁻¹⁰ g²/Hz over a frequency range of 1 to 200 Hz, are assumed to be present during all facility operations. Each structure described in this paper will be required to achieve and maintain 0.6 μrad RMS laser beam pointing stability for a minimum of 2 hours under these vibration levels. To date, finite element (FE) analysis has been performed on a number of the beam transport structures. Certain assumptions have to be made regarding structural uncertainties in the FE models. These uncertainties consist of damping values for concrete and steel, compliance within bolted and welded joints, and assumptions regarding the phase coherence of ground motion components. In this paper, the influence of these structural uncertainties on the predicted pointing stability of the beam transport structures as determined by random vibration analysis will be discussed.

  3. Cost-utility model of rasagiline in the treatment of advanced Parkinson's disease in Finland.

    Science.gov (United States)

    Hudry, Joumana; Rinne, Juha O; Keränen, Tapani; Eckert, Laurent; Cochran, John M

    2006-04-01

    The economic burden of Parkinson's disease (PD) is high, especially in patients experiencing motor fluctuations. Rasagiline has demonstrated efficacy against symptoms of PD in early and advanced stages of the disease. To assess the cost-utility of rasagiline and entacapone as adjunctive therapies to levodopa versus standard levodopa care in PD patients with motor fluctuations in Finland. A 2-year probabilistic Markov model with three health states: "25% or less off-time/day," "greater than 25% off-time/day," and "dead" was used. Off-time represents time awake with poor or absent motor function. Model inputs included transition probabilities from randomized clinical trials, utilities from a preference measurement study, and costs and resources from a Finnish cost-of-illness study. Effectiveness measures were quality-adjusted life years (QALYs) and the number of months spent with 25% or less off-time/day. Uncertainty around parameters was taken into account by Monte Carlo simulations. Over 2 years from a societal perspective, rasagiline or entacapone as adjunctive therapies to levodopa showed greater effectiveness than levodopa alone at no additional cost. Benefits after 2 years were 0.13 (95% CI 0.08 to 0.17) additional QALYs and 5.2 (3.6 to 6.7) additional months for rasagiline and 0.12 (0.08 to 0.17) QALYs and 5.1 (3.5 to 6.6) months for entacapone, both as adjuncts to levodopa compared with levodopa alone. The results of this study support the use of rasagiline and entacapone as adjunctive, cost-effective alternatives to levodopa alone in PD patients with motor fluctuations in Finland. With a different mode of action, rasagiline is a valuable therapeutic alternative to entacapone at no additional charge to society.
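A three-state Markov cohort model of the kind described can be sketched in a few lines. The transition probabilities, utilities, and cycle length below are invented placeholders, not the values estimated in the study:

```python
# Minimal 3-state Markov cohort model with monthly cycles over a
# 2-year horizon.  States: 0 = "<=25% off-time/day",
# 1 = ">25% off-time/day", 2 = "dead".  All numbers are illustrative.

P = [
    [0.85, 0.13, 0.02],   # from <=25% off-time
    [0.10, 0.86, 0.04],   # from >25% off-time
    [0.00, 0.00, 1.00],   # dead is absorbing
]
UTILITY = [0.80, 0.60, 0.0]   # quality weight per state

def run_cohort(start=(1.0, 0.0, 0.0), cycles=24, cycle_years=1 / 12):
    dist = list(start)
    qalys = 0.0
    for _ in range(cycles):
        # propagate the cohort distribution one cycle
        dist = [sum(dist[i] * P[i][j] for i in range(3)) for j in range(3)]
        # accumulate quality-adjusted time for this cycle
        qalys += cycle_years * sum(d * u for d, u in zip(dist, UTILITY))
    return dist, qalys

final_dist, qalys = run_cohort()
```

Comparing `qalys` between arms with treatment-specific transition matrices (and layering probabilistic draws over the inputs) is the essence of the cost-utility comparison in the abstract.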

  4. A Framework for Organizing Current and Future Electric Utility Regulatory and Business Models

    Energy Technology Data Exchange (ETDEWEB)

    Satchwell, Andrew [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Cappers, Peter [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Schwartz, Lisa [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Fadrhonc, Emily Martin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-06-01

    In this report, we will present a descriptive and organizational framework for incremental and fundamental changes to regulatory and utility business models in the context of clean energy public policy goals. We will also discuss the regulated utility's role in providing value-added services that relate to distributed energy resources, identify the "openness" of customer information and utility networks necessary to facilitate change, and discuss the relative risks, and the shifting of risks, for utilities and customers.

  5. Investigating Facebook Groups through a Random Graph Model

    OpenAIRE

    Dinithi Pallegedara; Lei Pan

    2014-01-01

    Facebook disseminates messages for billions of users every day. Though log files are stored on central servers, law enforcement agencies outside of the U.S. cannot easily acquire server log files from Facebook. This work models Facebook user groups using a random graph model. Our aim is to facilitate detectives in quickly estimating the size of a Facebook group with which a suspect is involved. We estimate this group size according to the number of immediate friends and the number of ext...

  6. Least squares estimation in a simple random coefficient autoregressive model

    DEFF Research Database (Denmark)

    Johansen, S; Lange, T

    2013-01-01

    The question we discuss is whether a simple random coefficient autoregressive model with infinite variance can create the long swings, or persistence, which are observed in many macroeconomic variables. The model is defined by yt = st ρ yt−1 + εt, t = 1, …, n, where st is an i.i.d. binary variable with P(st = 1) = p... we prove a curious result on the limit of the least squares estimator. The proof applies the notion of a tail index of sums of positive random variables with infinite variance to find the order of magnitude of the relevant sums and hence the limit of the estimator...
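The model is easy to simulate, which illustrates the "long swings" mechanism: with ρ > 1 the process grows geometrically while st stays at 1 and restarts whenever st = 0. The parameters below are illustrative, and standard-normal errors stand in for the paper's infinite-variance errors:

```python
import random

random.seed(3)

def simulate_rca(n=10_000, rho=1.05, p=0.95):
    """Simulate y_t = s_t * rho * y_{t-1} + eps_t with s_t i.i.d.
    Bernoulli(p) and standard-normal errors (a simplification: the
    paper's errors have infinite variance).  With rho > 1 the process
    swings: it grows while s_t = 1 and restarts at the next s_t = 0."""
    y, path = 0.0, []
    for _ in range(n):
        s = 1 if random.random() < p else 0
        y = s * rho * y + random.gauss(0.0, 1.0)
        path.append(y)
    return path

path = simulate_rca()
```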

  7. Bridging Weighted Rules and Graph Random Walks for Statistical Relational Models

    Directory of Open Access Journals (Sweden)

    Seyed Mehran Kazemi

    2018-02-01

    Full Text Available The aim of statistical relational learning is to learn statistical models from relational or graph-structured data. Three main statistical relational learning paradigms include weighted rule learning, random walks on graphs, and tensor factorization. These paradigms have been mostly developed and studied in isolation for many years, with few works attempting at understanding the relationship among them or combining them. In this article, we study the relationship between the path ranking algorithm (PRA, one of the most well-known relational learning methods in the graph random walk paradigm, and relational logistic regression (RLR, one of the recent developments in weighted rule learning. We provide a simple way to normalize relations and prove that relational logistic regression using normalized relations generalizes the path ranking algorithm. This result provides a better understanding of relational learning, especially for the weighted rule learning and graph random walk paradigms. It opens up the possibility of using the more flexible RLR rules within PRA models and even generalizing both by including normalized and unnormalized relations in the same model.

  8. The Ising model on the dynamical triangulated random surface

    International Nuclear Information System (INIS)

    Aleinov, I.D.; Migelal, A.A.; Zmushkow, U.V.

    1990-01-01

    The critical properties of the Ising model on the dynamically triangulated random surface embedded in D-dimensional Euclidean space are investigated. The strong coupling expansion method is used. The transition to the thermodynamic limit is performed by means of continued fractions.

  9. Modelling of limestone injection for SO2 capture in a coal fired utility boiler

    International Nuclear Information System (INIS)

    Kovacik, G.J.; Reid, K.; McDonald, M.M.; Knill, K.

    1997-01-01

    A computer model was developed for simulating furnace sorbent injection for SO₂ capture in a full-scale utility boiler using TASCFlow™ computational fluid dynamics (CFD) software. The model makes use of a computational grid of the superheater section of a tangentially fired utility boiler. The computer simulations are three-dimensional so that the temperature and residence time distributions in the boiler can be realistically represented. Results of calculations of the simulated sulphur capture performance of limestone injection in typical utility boiler operation are presented.

  10. Social aggregation in pea aphids: experiment and random walk modeling.

    Directory of Open Access Journals (Sweden)

    Christa Nilsen

    Full Text Available From bird flocks to fish schools and ungulate herds to insect swarms, social biological aggregations are found across the natural world. An ongoing challenge in the mathematical modeling of aggregations is to strengthen the connection between models and biological data by quantifying the rules that individuals follow. We model aggregation of the pea aphid, Acyrthosiphon pisum. Specifically, we conduct experiments to track the motion of aphids walking in a featureless circular arena in order to deduce individual-level rules. We observe that each aphid transitions stochastically between a moving and a stationary state. Moving aphids follow a correlated random walk. The probabilities of motion state transitions, as well as the random walk parameters, depend strongly on distance to an aphid's nearest neighbor. For large nearest neighbor distances, when an aphid is essentially isolated, its motion is ballistic with aphids moving faster, turning less, and being less likely to stop. In contrast, for short nearest neighbor distances, aphids move more slowly, turn more, and are more likely to become stationary; this behavior constitutes an aggregation mechanism. From the experimental data, we estimate the state transition probabilities and correlated random walk parameters as a function of nearest neighbor distance. With the individual-level model established, we assess whether it reproduces the macroscopic patterns of movement at the group level. To do so, we consider three distributions, namely distance to nearest neighbor, angle to nearest neighbor, and percentage of population moving at any given time. For each of these three distributions, we compare our experimental data to the output of numerical simulations of our nearest neighbor model, and of a control model in which aphids do not interact socially. Our stochastic, social nearest neighbor model reproduces salient features of the experimental data that are not captured by the control.
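The individual-level rules described above (stochastic stop/go transitions plus a correlated random walk while moving) can be sketched directly. In the study the transition probabilities and walk parameters depend on nearest-neighbour distance; the sketch below fixes them at arbitrary constants for a single isolated walker:

```python
import math
import random

random.seed(4)

def simulate_walker(steps=2_000, p_stop=0.05, p_go=0.10,
                    speed=1.0, turn_sd=0.4):
    """Two-state (moving / stationary) correlated random walk: a
    moving walker keeps its heading plus Gaussian turning noise;
    state transitions are Bernoulli per time step.  In the study
    these rates depend on nearest-neighbour distance; here they are
    fixed placeholders for one isolated walker."""
    x = y = 0.0
    heading = random.uniform(-math.pi, math.pi)
    moving = True
    track = [(x, y)]
    for _ in range(steps):
        if moving and random.random() < p_stop:
            moving = False
        elif not moving and random.random() < p_go:
            moving = True
        if moving:
            heading += random.gauss(0.0, turn_sd)   # correlated turning
            x += speed * math.cos(heading)
            y += speed * math.sin(heading)
        track.append((x, y))
    return track

track = simulate_walker()
```

Making `p_stop`, `p_go`, `speed`, and `turn_sd` functions of nearest-neighbour distance is what turns this single-walker sketch into the aggregation mechanism the paper fits.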

  11. Identification of human operator performance models utilizing time series analysis

    Science.gov (United States)

    Holden, F. M.; Shinners, S. M.

    1973-01-01

    The results of an effort performed by Sperry Systems Management Division for AMRL in applying time series analysis as a tool for modeling the human operator are presented. This technique is utilized for determining the variation of the human transfer function under various levels of stress. The human operator's model is determined based on actual input and output data from a tracking experiment.

  12. Estimating required information size by quantifying diversity in random-effects model meta-analyses

    DEFF Research Database (Denmark)

    Wetterslev, Jørn; Thorlund, Kristian; Brok, Jesper

    2009-01-01

    an intervention effect suggested by trials with low risk of bias. METHODS: Information size calculations need to consider the total model variance in a meta-analysis to control type I and type II errors. Here, we derive an adjusting factor for the required information size under any random-effects model meta-analysis. RESULTS: We devise a measure of diversity (D2) in a meta-analysis, which is the relative variance reduction when the meta-analysis model is changed from a random-effects into a fixed-effect model. D2 is the percentage that the between-trial variability constitutes of the sum of the between-trial variability and a sampling error estimate. D2 is computed and interpreted using several simulations and clinical examples. In addition we show mathematically that diversity is equal to or greater than inconsistency, that is D2 ≥ I2, for all meta-analyses. CONCLUSION: We conclude that D2 seems a better alternative than I2 to consider model variation in any random-effects model meta-analysis.
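The diversity measure is straightforward to compute alongside I2 from trial-level estimates and variances. The toy data below are invented; D2 is obtained as the relative change in the pooled-estimate variance when moving from a fixed-effect to a DerSimonian-Laird random-effects model:

```python
# Toy computation of diversity (D2) and inconsistency (I2) for a
# meta-analysis of k trials with effect estimates y and variances v.
# The data are invented for illustration.

y = [0.10, 0.30, 0.55, 0.80]
v = [0.04, 0.05, 0.03, 0.06]
k = len(y)

w = [1.0 / vi for vi in v]                       # fixed-effect weights
mu_f = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
Q = sum(wi * (yi - mu_f) ** 2 for wi, yi in zip(w, y))
I2 = max(0.0, (Q - (k - 1)) / Q)                 # inconsistency

# DerSimonian-Laird between-trial variance tau^2
C = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - (k - 1)) / C)

w_r = [1.0 / (vi + tau2) for vi in v]            # random-effects weights
VF = 1.0 / sum(w)                                # pooled variance, fixed
VR = 1.0 / sum(w_r)                              # pooled variance, random
D2 = (VR - VF) / VR                              # diversity
```

On these numbers D2 comes out slightly above I2, consistent with the inequality D2 ≥ I2 stated in the abstract.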

  13. Generalized linear models with random effects unified analysis via H-likelihood

    CERN Document Server

    Lee, Youngjo; Pawitan, Yudi

    2006-01-01

    Since their introduction in 1972, generalized linear models (GLMs) have proven useful in the generalization of classical normal models. Presenting methods for fitting GLMs with random effects to data, Generalized Linear Models with Random Effects: Unified Analysis via H-likelihood explores a wide range of applications, including combining information over trials (meta-analysis), analysis of frailty models for survival data, genetic epidemiology, and analysis of spatial and temporal models with correlated errors.Written by pioneering authorities in the field, this reference provides an introduction to various theories and examines likelihood inference and GLMs. The authors show how to extend the class of GLMs while retaining as much simplicity as possible. By maximizing and deriving other quantities from h-likelihood, they also demonstrate how to use a single algorithm for all members of the class, resulting in a faster algorithm as compared to existing alternatives. Complementing theory with examples, many of...

  14. A new neural network model for solving random interval linear programming problems.

    Science.gov (United States)

    Arjmandzadeh, Ziba; Safi, Mohammadreza; Nazemi, Alireza

    2017-05-01

    This paper presents a neural network model for solving random interval linear programming problems. The original problem involving random interval variable coefficients is first transformed into an equivalent convex second order cone programming problem. A neural network model is then constructed for solving the obtained convex second order cone problem. Employing Lyapunov function approach, it is also shown that the proposed neural network model is stable in the sense of Lyapunov and it is globally convergent to an exact satisfactory solution of the original problem. Several illustrative examples are solved in support of this technique. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. A Fay-Herriot Model with Different Random Effect Variances

    Czech Academy of Sciences Publication Activity Database

    Hobza, Tomáš; Morales, D.; Herrador, M.; Esteban, M.D.

    2011-01-01

    Roč. 40, č. 5 (2011), s. 785-797 ISSN 0361-0926 R&D Projects: GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : small area estimation * Fay-Herriot model * Linear mixed model * Labor Force Survey Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.274, year: 2011 http://library.utia.cas.cz/separaty/2011/SI/hobza-a%20fay-herriot%20model%20with%20different%20random%20effect%20variances.pdf

  16. A note on identification in discrete choice models with partial observability

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Ranjan, Abhishek

    2017-01-01

    This note establishes a new identification result for additive random utility discrete choice models. A decision-maker associates a random utility Uj + mj to each alternative in a finite set j ∈ {1, …, J}, where U = {U1, …, UJ} is unobserved by the researcher and random with an unknown joint distribution... The result is useful for applications where choices are observed aggregated into groups while prices and attributes vary at the level of individual alternatives.
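For intuition on the additive random utility setup: in the special case where the Uj are i.i.d. standard Gumbel, maximizing Uj + mj yields the familiar logit choice probabilities P(j) = exp(mj)/Σk exp(mk). The Monte-Carlo check below uses arbitrary m values; this closed-form case is only an illustration, not the note's (distribution-free) setting.

```python
import math
import random

random.seed(5)

m = [0.0, 0.5, 1.0]          # illustrative systematic utilities m_j

def gumbel():
    """Standard Gumbel draw via inverse transform."""
    return -math.log(-math.log(random.random()))

def simulate_choices(n=100_000):
    """Pick argmax_j (m_j + U_j) with i.i.d. standard Gumbel U_j."""
    counts = [0] * len(m)
    for _ in range(n):
        utils = [mj + gumbel() for mj in m]
        counts[utils.index(max(utils))] += 1
    return [c / n for c in counts]

sim = simulate_choices()
den = sum(math.exp(mj) for mj in m)
logit = [math.exp(mj) / den for mj in m]   # closed-form logit shares
```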

  17. Effects of a randomized controlled intervention trial on return to work and health care utilization after long-term sickness absence

    DEFF Research Database (Denmark)

    Momsen, Anne-Mette H.; Stapelfeldt, Christina Malmose; Nielsen, Claus Vinther

    2016-01-01

    Background: The aim of the RCT study was to investigate if the effect of a multidisciplinary intervention on return to work (RTW) and health care utilization differed by participants' self-reported health status at baseline, defined by a) level of somatic symptoms, b) health anxiety and c) self-reported general health. Methods: A total of 443 individuals were randomized to the intervention (n = 301) or the control group (n = 142) and responded to a questionnaire measuring health status at baseline. Participants were followed in registries measuring RTW and health care utilization. Relative risk (RR) and odds ratio (OR) were used as measures of associations. Results were adjusted for gender, age, educational level, work ability and previous sick leave. Results: Among all responders we found no effect of the intervention on RTW. Among participants with low health anxiety, the one-year probability of RTW...

  18. On a Stochastic Failure Model under Random Shocks

    Science.gov (United States)

    Cha, Ji Hwan

    2013-02-01

    In most conventional settings, the events caused by an external shock are initiated at the moment of its occurrence. In this paper, we study a new class of shock models, where each shock from a nonhomogeneous Poisson process can trigger a failure of a system not immediately, as in classical extreme shock models, but with a delay of some random time. We derive the corresponding survival and failure rate functions. Furthermore, we study the limiting behaviour of the failure rate function, where applicable.
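The delayed-shock mechanism can be sketched by Monte Carlo: shocks arrive from a nonhomogeneous Poisson process (simulated by thinning) and each one triggers failure only after a random delay, here exponential. The intensity, horizon, and delay distribution are illustrative assumptions, not the paper's:

```python
import random

random.seed(6)

def lam(t):
    """Hypothetical increasing shock intensity."""
    return 0.5 + 0.1 * t

LAM_MAX = 1.5        # majorant of lam(t) on [0, HORIZON]
HORIZON = 10.0
MEAN_DELAY = 2.0     # mean of the exponential failure delay

def failure_time():
    """First system failure time: shocks arrive by thinning a
    homogeneous process of rate LAM_MAX; an accepted shock at time t
    triggers failure at t plus a random exponential delay."""
    t, first = 0.0, float("inf")
    while True:
        t += random.expovariate(LAM_MAX)
        if t > HORIZON:
            break
        if random.random() < lam(t) / LAM_MAX:   # accepted (real) shock
            first = min(first, t + random.expovariate(1.0 / MEAN_DELAY))
    return first

def survival(t, n=20_000):
    """Monte-Carlo estimate of P(system survives past t)."""
    return sum(failure_time() > t for _ in range(n)) / n

s5 = survival(5.0)
```

The empirical survival curve estimated this way is what the paper's closed-form survival function describes analytically.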

  19. Random Walk Model for Cell-To-Cell Misalignments in Accelerator Structures

    International Nuclear Information System (INIS)

    Stupakov, Gennady

    2000-01-01

    Due to manufacturing and construction errors, cells in accelerator structures can be misaligned relative to each other. As a consequence, the beam generates a transverse wakefield even when it passes through the structure on axis. The most important effect is the long-range transverse wakefield that deflects the bunches and causes growth of the bunch train projected emittance. In this paper, the effect of the cell-to-cell misalignments is evaluated using a random walk model that assumes that each cell is shifted by a random step relative to the previous one. The model is compared with measurements of a few accelerator structures
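The random walk model implies that the RMS offset of cell n grows like √n times the per-cell step size, which a quick simulation confirms. The cell count and step size below are arbitrary illustration values:

```python
import math
import random

random.seed(7)

def offsets(n_cells=50, step_sd=1.0):
    """Cell offsets under the random walk model: each cell is shifted
    by an independent Gaussian step relative to the previous cell."""
    x, out = 0.0, []
    for _ in range(n_cells):
        x += random.gauss(0.0, step_sd)
        out.append(x)
    return out

def rms_final_offset(n_structures=5_000, n_cells=50):
    """RMS misalignment of the last cell over many simulated structures."""
    s = sum(offsets(n_cells)[-1] ** 2 for _ in range(n_structures))
    return math.sqrt(s / n_structures)

rms = rms_final_offset()
predicted = math.sqrt(50.0)   # sqrt(n_cells) * step_sd for a random walk
```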

  20. Toward combining thematic information with hierarchical multiscale segmentations using tree Markov random field model

    Science.gov (United States)

    Zhang, Xueliang; Xiao, Pengfeng; Feng, Xuezhi

    2017-09-01

    It has been a common idea to produce multiscale segmentations to represent the various geographic objects in high-spatial-resolution remote sensing (HR) images. However, it remains a great challenge to select the proper segmentation scale(s) automatically, using only the image information. In this study, we propose a novel way of information fusion at the object level by combining hierarchical multiscale segmentations with existing thematic information produced by classification or recognition. The tree Markov random field (T-MRF) model is designed for the multiscale combination framework, through which the object type is determined so as to agree as closely as possible with the existing thematic information. At the same time, the object boundary is jointly determined by the thematic labels and the multiscale segments through minimization of the energy function. The benefits of the proposed T-MRF combination model include: (1) reducing the dependence on segmentation scale selection when utilizing multiscale segmentations; (2) exploiting the hierarchical context naturally embedded in the multiscale segmentations. HR images of both urban and rural areas are used in the experiments to show the effectiveness of the proposed combination framework on these two aspects.

  1. Microwave single-scattering properties of randomly oriented soft-ice hydrometeors

    Directory of Open Access Journals (Sweden)

    D. Casella

    2008-11-01

    Large ice hydrometeors are usually present in intense convective clouds and may significantly affect the upwelling radiances that are measured by satellite-borne microwave radiometers – especially at millimeter-wavelength frequencies. Thus, interpretation of these measurements (e.g., for precipitation retrieval) requires knowledge of the single-scattering properties of ice particles. On the other hand, the shape and internal structure of these particles (especially the larger ones) are very complex and variable, and it is therefore necessary to resort to simplifying assumptions in order to compute their single-scattering parameters.

    In this study, we use the discrete dipole approximation (DDA) to compute the absorption and scattering efficiencies and the asymmetry factor of two kinds of quasi-spherical and non-homogeneous soft-ice particles in the frequency range 50–183 GHz. Particles of the first kind are modeled as quasi-spherical ice particles having randomly distributed spherical air inclusions. Particles of the second kind are modeled as random aggregates of ice spheres having random radii. In both cases, particle densities and dimensions are consistent with the snow hydrometeor category that is utilized by the University of Wisconsin – Non-hydrostatic Modeling System (UW-NMS) cloud-mesoscale model. We then compare our single-scattering results for randomly oriented soft-ice hydrometeors with corresponding ones that make use of: (a) effective-medium equivalent spheres, (b) solid-ice equivalent spheres, and (c) randomly oriented aggregates of ice cylinders. Finally, we extend to our particles the scattering formulas that have been developed by other authors for randomly oriented aggregates of ice cylinders.

  2. Nonparametric Estimation of Distributions in Random Effects Models

    KAUST Repository

    Hart, Jeffrey D.

    2011-01-01

    We propose using minimum distance to obtain nonparametric estimates of the distributions of components in random effects models. A main setting considered is equivalent to having a large number of small datasets whose locations, and perhaps scales, vary randomly, but which otherwise have a common distribution. Interest focuses on estimating the distribution that is common to all datasets, knowledge of which is crucial in multiple testing problems where a location/scale invariant test is applied to every small dataset. A detailed algorithm for computing minimum distance estimates is proposed, and the usefulness of our methodology is illustrated by a simulation study and an analysis of microarray data. Supplemental materials for the article, including R-code and a dataset, are available online. © 2011 American Statistical Association.

  3. Maximizing the model for Discounted Stream of Utility from ...

    African Journals Online (AJOL)

    Osagiede et al. (2009) considered an analytic model for maximizing a discounted stream of utility from consumption when the rate of production is linear. A solution was provided up to the level where methods of solving ordinary differential equations could be applied, but they left off there as a result of the mathematical complexity ...

  4. Economic Valuation on Change of Tourism Quality in Rawapening, Indonesia: An Application of Random Utility Method

    Science.gov (United States)

    Subanti, S.; Irawan, B. R. M. B.; Sasongko, G.; Hakim, A. R.

    2017-04-01

    This study aims to determine the profit (or loss) earned by economic actors in tourism activities if the condition or quality of tourism in Rawapening is improved (or deteriorates). Changes in condition or quality are characterized by travel expenses, the natural environment, Japanese cultural performances, and traditional markets. The method used to measure the change in economic benefit or loss is a random utility approach. The study found that travel cost, the natural environment, Japanese cultural performances, and traditional markets are significant factors in respondents' preferences regarding changes in tourism conditions. The value of compensation received by visitors as a result of improved conditions is 2,932 billion, while for worsened conditions it is 2,628 billion. The recommendation of this study is that the local government should consider environmental factors in the formulation of tourism development in Rawapening.

  5. Extension of the behavioral model of healthcare utilization with ethnically diverse, low-income women.

    Science.gov (United States)

    Keenan, Lisa A; Marshall, Linda L; Eve, Susan

    2002-01-01

    Psychosocial vulnerabilities were added to a model of healthcare utilization. This extension was tested among low-income women with ethnicity addressed as a moderator. Structured interviews were conducted at 2 points in time, approximately 1 year apart. The constructs of psychosocial vulnerability, demographic predisposing, barriers, and illness were measured by multiple indicators to allow use of Structural Equation Modeling to analyze results. The models were tested separately for each ethnic group. Community office. African-American (N = 266), Euro-American (N = 200), and Mexican-American (N = 210) women were recruited from the Dallas Metropolitan area to participate in Project Health Outcomes of Women, a multi-year, multi-wave study. Face-to-face interviews were conducted with this sample. Participants had been in heterosexual relationships for at least 1 year, were between 20 and 49 years of age, and had incomes less than 200% of the national poverty level. Healthcare utilization, defined as physician visits and general healthcare visits. Illness mediated the effect of psychosocial vulnerability on healthcare utilization for African Americans and Euro-Americans. The model for Mexican Americans was the most complex. Psychosocial vulnerability on illness was partially mediated by barriers, which also directly affected utilization. Psychosocial vulnerabilities were significant utilization predictors for healthcare use for all low-income women in this study. The final models for the 2 minority groups, African Americans and Mexican Americans, were quite different. Hence, women of color should not be considered a homogeneous group in comparison to Euro-Americans.

  6. The Dependent Poisson Race Model and Modeling Dependence in Conjoint Choice Experiments

    Science.gov (United States)

    Ruan, Shiling; MacEachern, Steven N.; Otter, Thomas; Dean, Angela M.

    2008-01-01

    Conjoint choice experiments are used widely in marketing to study consumer preferences amongst alternative products. We develop a class of choice models, belonging to the class of Poisson race models, that describe a "random utility" which lends itself to a process-based description of choice. The models incorporate a dependence structure which…
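
The basic mechanism behind such race models can be sketched as follows (a simplified, independent race, not the dependent version the paper develops; rates and threshold are illustrative): each alternative accumulates Poisson events at a rate tied to its utility, and the first accumulator to reach a fixed count determines the choice.

```python
import random

def poisson_race(rate_a, rate_b, threshold, rng):
    """Race two Poisson counters: the alternative whose counter first
    accumulates `threshold` events is chosen. Inter-event times are
    exponential, so each finishing time is a sum of exponential gaps."""
    t_a = sum(rng.expovariate(rate_a) for _ in range(threshold))
    t_b = sum(rng.expovariate(rate_b) for _ in range(threshold))
    return 'A' if t_a < t_b else 'B'

rng = random.Random(2)
n = 10000
share_a = sum(poisson_race(2.0, 1.0, 5, rng) == 'A' for _ in range(n)) / n
# with rates 2:1 and threshold 5, P(choose A) is about 0.855 analytically
```

The threshold controls the speed-accuracy trade-off: a higher threshold sharpens the choice probabilities toward the higher-rate alternative.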

  7. Application of the resonating Hartree-Fock random phase approximation to the Lipkin model

    International Nuclear Information System (INIS)

    Nishiyama, S.; Ishida, K.; Ido, M.

    1996-01-01

    We have applied the resonating Hartree-Fock (Res-HF) approximation to the exactly solvable Lipkin model by utilizing a newly developed orbital-optimization algorithm. The Res-HF wave function was superposed by two Slater determinants (S-dets) which give two corresponding local energy minima of monopole ''deformations''. The self-consistent Res-HF calculation gives an excellent ground-state correlation energy. There exist excitations due to small vibrational fluctuations of the orbitals and mixing coefficients around their stationary values. They are described by a new approximation called the resonating Hartree-Fock random phase approximation (Res-HF RPA). Matrices of the second-order variation of the Res-HF energy have the same structures as those of the Res-HF RPA's matrices. The quadratic steepest descent of the Res-HF energy in the orbital optimization is considered to include certainly both effects of RPA-type fluctuations up to higher orders and their mode-mode couplings. It is a very important and interesting task to apply the Res-HF RPA to the Lipkin model with the use of the stationary values and to prove the above argument. It turns out that the Res-HF RPA works far better than the usual HF RPA and the renormalized one. We also show some important features of the Res-HF RPA. (orig.)

  8. BOX-COX transformation and random regression models for fecal egg count data

    Directory of Open Access Journals (Sweden)

    Marcos Vinicius Silva

    2012-01-01

    Accurate genetic evaluation of livestock is based on appropriate modeling of phenotypic measurements. In ruminants, fecal egg count (FEC) is commonly used to measure resistance to nematodes. FEC values are not normally distributed, and logarithmic transformations have been used to achieve normality before analysis. However, the transformed data are often still not normally distributed, especially when data are extremely skewed. A series of repeated FEC measurements may provide information about the population dynamics of a group or individual. A total of 6,375 FEC measures were obtained for 410 animals between 1992 and 2003 from the Beltsville Agricultural Research Center Angus herd. Original data were transformed using an extension of the Box-Cox transformation to approach normality and to estimate (co)variance components. We also proposed using random regression models (RRM) for genetic and non-genetic studies of FEC. Phenotypes were analyzed using RRM and restricted maximum likelihood. Within the different orders of Legendre polynomials used, those with more parameters (order 4) adjusted FEC data best. Results indicated that the transformation of FEC data utilizing the Box-Cox transformation family was effective in reducing the skewness and kurtosis and dramatically increased estimates of heritability, and that measurements of FEC obtained in the period between 12 and 26 weeks of a 26-week experimental challenge period are genetically correlated.

  9. Box-Cox Transformation and Random Regression Models for Fecal egg Count Data.

    Science.gov (United States)

    da Silva, Marcos Vinícius Gualberto Barbosa; Van Tassell, Curtis P; Sonstegard, Tad S; Cobuci, Jaime Araujo; Gasbarre, Louis C

    2011-01-01

    Accurate genetic evaluation of livestock is based on appropriate modeling of phenotypic measurements. In ruminants, fecal egg count (FEC) is commonly used to measure resistance to nematodes. FEC values are not normally distributed and logarithmic transformations have been used in an effort to achieve normality before analysis. However, the transformed data are often still not normally distributed, especially when data are extremely skewed. A series of repeated FEC measurements may provide information about the population dynamics of a group or individual. A total of 6375 FEC measures were obtained for 410 animals between 1992 and 2003 from the Beltsville Agricultural Research Center Angus herd. Original data were transformed using an extension of the Box-Cox transformation to approach normality and to estimate (co)variance components. We also proposed using random regression models (RRM) for genetic and non-genetic studies of FEC. Phenotypes were analyzed using RRM and restricted maximum likelihood. Within the different orders of Legendre polynomials used, those with more parameters (order 4) adjusted FEC data best. Results indicated that the transformation of FEC data utilizing the Box-Cox transformation family was effective in reducing the skewness and kurtosis, and dramatically increased estimates of heritability, and measurements of FEC obtained in the period between 12 and 26 weeks in a 26-week experimental challenge period are genetically correlated.
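
The Box-Cox step used in both records above can be sketched in a few lines: choose the transformation parameter λ by maximizing the profile log-likelihood of the normal model, then check that skewness drops. This is a generic sketch on synthetic right-skewed data (not the FEC data; the lognormal generator and grid are illustrative assumptions):

```python
import random, math

def boxcox(y, lam):
    """Box-Cox transform for strictly positive data."""
    if abs(lam) < 1e-12:
        return [math.log(v) for v in y]
    return [(v ** lam - 1.0) / lam for v in y]

def skewness(y):
    n = len(y)
    m = sum(y) / n
    s = (sum((v - m) ** 2 for v in y) / n) ** 0.5
    return sum(((v - m) / s) ** 3 for v in y) / n

def boxcox_loglik(y, lam):
    """Profile log-likelihood of the normal model for the transformed data
    (Jacobian term (lam - 1) * sum(log y) included)."""
    z = boxcox(y, lam)
    n = len(z)
    mz = sum(z) / n
    var = sum((v - mz) ** 2 for v in z) / n
    return -0.5 * n * math.log(var) + (lam - 1.0) * sum(math.log(v) for v in y)

rng = random.Random(3)
# heavily right-skewed positive values, loosely mimicking count-like data
y = [math.exp(rng.gauss(2.0, 1.0)) + 1.0 for _ in range(2000)]
lams = [i / 20.0 for i in range(-20, 21)]          # grid on [-1, 1]
best = max(lams, key=lambda lam: boxcox_loglik(y, lam))
skew_raw, skew_tr = skewness(y), skewness(boxcox(y, best))
```

On data like these the selected λ lands near 0 (close to the log transform), and the transformed sample is far less skewed than the original.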

  10. Random cyclic constitutive models of 0Cr18Ni10Ti pipe steel

    International Nuclear Information System (INIS)

    Zhao Yongxiang; Yang Bing

    2004-01-01

    An experimental study was performed on the random cyclic constitutive relations of a new pipe stainless steel, 0Cr18Ni10Ti, by an incremental strain-controlled fatigue test. The test verifies that random cyclic constitutive relations, like the widely recognized random cyclic strain-life relations, are an intrinsic fatigue phenomenon of engineering materials. Extending the previous work by Zhao et al, probability-based constitutive models are constructed, respectively, on the bases of the Ramberg-Osgood equation and its modified form. The scattering regularity and the amount of the test data are taken into account. The models consist of the survival probability-strain-life curves, the confidence strain-life curves, and the survival probability-confidence-strain-life curves. The availability and feasibility of the models are indicated by analysis of the present test data

  11. Research on the Prediction Model of CPU Utilization Based on ARIMA-BP Neural Network

    Directory of Open Access Journals (Sweden)

    Wang Jina

    2016-01-01

    The dynamic deployment of virtual machines is one of the current research focuses in cloud computing. Traditional methods mainly act after the degradation of service performance and therefore lag behind. To solve this problem, a new prediction model based on CPU utilization is constructed in this paper. The new CPU utilization prediction model provides a reference for the VM dynamic deployment process, allowing deployment to be completed before service performance degrades. This method not only ensures the quality of services but also improves server performance and resource utilization. The new prediction method for CPU utilization based on the ARIMA-BP neural network mainly includes four parts: preprocessing the collected data, building the ARIMA-BP neural network prediction model, correcting the nonlinear residuals of the time series with the BP prediction algorithm, and obtaining the prediction results by analyzing the above data comprehensively.

  12. Prevention of low back pain: effect, cost-effectiveness, and cost-utility of maintenance care - study protocol for a randomized clinical trial

    DEFF Research Database (Denmark)

    Eklund, Andreas; Axén, Iben; Kongsted, Alice

    2014-01-01

    The primary outcome measure is the number of days with bothersome pain over 12 months. Secondary measures are self-rated health (EQ-5D), function (the Roland Morris Disability Questionnaire), psychological profile (the Multidimensional Pain Inventory), pain intensity (the Numeric Rating Scale), and work absence. The primary utility measure of the study is quality-adjusted life years and will be calculated using the EQ-5D questionnaire. Direct medical costs as well as indirect costs will be considered. Subjects are randomly allocated into two treatment arms: 1) symptom-guided treatment (patient controlled), receiving care when patients feel a need... Strict inclusion criteria should ensure a suitable target group, and the use of frequent data collection should provide an accurate outcome measurement. The study utilizes normal clinical procedures, which should aid the transferability of the results. Trial registration: ClinicalTrials.gov NCT01539863

  13. A random matrix model of relaxation

    International Nuclear Information System (INIS)

    Lebowitz, J L; Pastur, L

    2004-01-01

    We consider a two-level system, S_2, coupled to a general n-level system, S_n, via a random matrix. We derive an integral representation for the mean reduced density matrix ρ(t) of S_2 in the limit n → ∞, and we identify a model of S_n which possesses some of the properties expected of macroscopic thermal reservoirs. In particular, it yields the Gibbs form for ρ(∞). We also consider an analog of the van Hove limit and obtain a master equation (Markov dynamics) for the evolution of ρ(t) on an appropriate time scale.

  14. Factorisations for partition functions of random Hermitian matrix models

    International Nuclear Information System (INIS)

    Jackson, D.M.; Visentin, T.I.

    1996-01-01

    The partition function Z_N for Hermitian-complex matrix models can be expressed as an explicit integral over R^N, where N is a positive integer. Such an integral also occurs in connection with random surfaces and models of two-dimensional quantum gravity. We show that Z_N can be expressed as the product of two partition functions, evaluated at translated arguments, for another model, giving an explicit connection between the two models. We also give an alternative computation of the partition function for the φ^4-model. The approach is an algebraic one and holds for the functions regarded as formal power series in the appropriate ring. (orig.)

  15. Randomizing growing networks with a time-respecting null model

    Science.gov (United States)

    Ren, Zhuo-Ming; Mariani, Manuel Sebastian; Zhang, Yi-Cheng; Medo, Matúš

    2018-05-01

    Complex networks are often used to represent systems that are not static but grow with time: People make new friendships, new papers are published and refer to the existing ones, and so forth. To assess the statistical significance of measurements made on such networks, we propose a randomization methodology—a time-respecting null model—that preserves both the network's degree sequence and the time evolution of individual nodes' degree values. By preserving the temporal linking patterns of the analyzed system, the proposed model is able to factor out the effect of the system's temporal patterns on its structure. We apply the model to the citation network of Physical Review scholarly papers and the citation network of US movies. The model reveals that the two data sets are strikingly different with respect to their degree-degree correlations, and we discuss the important implications of this finding on the information provided by paradigmatic node centrality metrics such as indegree and Google's PageRank. The randomization methodology proposed here can be used to assess the significance of any structural property in growing networks, which could bring new insights into the problems where null models play a critical role, such as the detection of communities and network motifs.
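
One simple variant of a time-respecting randomization can be sketched as follows: given a time-ordered edge list, permute edge targets only among edges created at the same time step, so every node's in-degree trajectory over time and every source's out-degree are preserved while the wiring is randomized. This toy sketch (hypothetical edge list; the published null model is more general) illustrates the invariants:

```python
import random
from collections import defaultdict

def time_respecting_shuffle(edges, rng):
    """edges: list of (t, source, target), time-ordered. Permute targets only
    within each time step; the multiset of (t, target) pairs, and hence each
    node's in-degree time evolution, is unchanged."""
    by_time = defaultdict(list)
    for i, (t, _, _) in enumerate(edges):
        by_time[t].append(i)
    out = list(edges)
    for idxs in by_time.values():
        targets = [edges[i][2] for i in idxs]
        rng.shuffle(targets)
        for i, tgt in zip(idxs, targets):
            t, src, _ = edges[i]
            out[i] = (t, src, tgt)
    return out

def indeg_by_time(es):
    d = defaultdict(int)
    for t, _, tgt in es:
        d[(t, tgt)] += 1
    return dict(d)

rng = random.Random(4)
edges = [(0, 'a', 'x'), (0, 'b', 'y'), (0, 'c', 'x'),
         (1, 'd', 'x'), (1, 'e', 'z')]
null = time_respecting_shuffle(edges, rng)
assert indeg_by_time(null) == indeg_by_time(edges)  # trajectories preserved
```

Any structural statistic computed on the real network can then be compared against its distribution over many such shuffles.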

  16. Random regression models for daily feed intake in Danish Duroc pigs

    DEFF Research Database (Denmark)

    Strathe, Anders Bjerring; Mark, Thomas; Jensen, Just

    The objective of this study was to develop random regression models and estimate covariance functions for daily feed intake (DFI) in Danish Duroc pigs. A total of 476,201 DFI records were available on 6,542 Duroc boars between 70 and 160 days of age. The data originated from the National test station... herd-year-season, permanent, and animal genetic effects. The functional form was based on Legendre polynomials. A total of 64 models for random regressions were initially ranked by BIC to identify the approximate order for the Legendre polynomials using AI-REML. The parsimonious model included Legendre polynomials of 2nd order for genetic and permanent environmental curves and a heterogeneous residual variance, allowing the daily residual variance to change along the age trajectory due to scale effects. The parameters of the model were estimated in a Bayesian framework, using the RJMC module of the DMU package...
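
The Legendre basis behind such random regression curves is straightforward to construct: standardize age to [-1, 1] over the test period, then evaluate the polynomials of order 0..2. A minimal sketch (unnormalized polynomials; the age range is taken from the abstract, everything else is illustrative):

```python
def legendre_basis(age, a_min=70.0, a_max=160.0):
    """Standardize age to [-1, 1] and evaluate unnormalized Legendre
    polynomials of order 0..2 -- one row of the covariate matrix for
    the genetic and permanent-environment regression curves."""
    x = 2.0 * (age - a_min) / (a_max - a_min) - 1.0
    p0 = 1.0
    p1 = x
    p2 = 0.5 * (3.0 * x * x - 1.0)
    return [p0, p1, p2]

row = legendre_basis(115.0)   # midpoint of the test period maps to x = 0
```

Each animal's curve is then a random linear combination of these basis values, with the combination coefficients carrying the (co)variance structure.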

  17. The utilization of cranial models created using rapid prototyping techniques in the development of models for navigation training.

    Science.gov (United States)

    Waran, V; Pancharatnam, Devaraj; Thambinayagam, Hari Chandran; Raman, Rajagopal; Rathinam, Alwin Kumar; Balakrishnan, Yuwaraj Kumar; Tung, Tan Su; Rahman, Z A

    2014-01-01

    Navigation in neurosurgery has expanded rapidly; however, suitable models to train end users on the myriad software and hardware that come with these systems are lacking. Utilizing three-dimensional (3D) industrial rapid prototyping processes, we have been able to create models using actual computed tomography (CT) data from patients with pathology and use these models to simulate a variety of commonly performed neurosurgical procedures with navigation systems. To assess the possibility of utilizing models created from CT scan datasets obtained from patients with cranial pathology to simulate common neurosurgical procedures using navigation systems. Three patients with pathology were selected (hydrocephalus, right frontal cortical lesion, and midline clival meningioma). CT scan data in DICOM format, acquired following an image-guidance surgery protocol, and a rapid prototyping machine were used to create the necessary printed models with the corresponding pathology embedded. The registration, planning, and navigation capabilities of two navigation systems, using the variety of software and hardware provided by these platforms, were assessed. We were able to register all models accurately using both navigation systems and perform the necessary simulations as planned. Models with pathology created using 3D rapid prototyping techniques accurately reflect data of actual patients and can be used in the simulation of neurosurgical operations using navigation systems. Georg Thieme Verlag KG Stuttgart · New York.

  18. An integrated utility-based model of conflict evaluation and resolution in the Stroop task.

    Science.gov (United States)

    Chuderski, Adam; Smolen, Tomasz

    2016-04-01

    Cognitive control allows humans to direct and coordinate their thoughts and actions in a flexible way, in order to reach internal goals regardless of interference and distraction. The hallmark test used to examine cognitive control is the Stroop task, which elicits both the weakly learned but goal-relevant and the strongly learned but goal-irrelevant response tendencies, and requires people to follow the former while ignoring the latter. After reviewing the existing computational models of cognitive control in the Stroop task, a novel, integrated utility-based model is proposed. The model uses 3 crucial control mechanisms: response utility reinforcement learning, utility-based conflict evaluation using the Festinger formula for assessing the conflict level, and top-down adaptation of response utility in the service of conflict resolution. Their complex, dynamic interaction led to the replication of 18 experimental effects, the largest data set explained to date by a single Stroop model. The simulations cover the basic congruency effects (including the response latency distributions), performance dynamics and adaptation (including EEG indices of conflict), as well as the effects resulting from manipulations applied to stimulation and responding, as reported in the extant Stroop literature. (c) 2016 APA, all rights reserved.

  19. Random Modeling of Daily Rainfall and Runoff Using a Seasonal Model and Wavelet Denoising

    Directory of Open Access Journals (Sweden)

    Chien-ming Chou

    2014-01-01

    Full Text Available Instead of Fourier smoothing, this study applied wavelet denoising to acquire the smooth seasonal mean and corresponding perturbation term from daily rainfall and runoff data in traditional seasonal models, which use seasonal means for hydrological time series forecasting. The denoised rainfall and runoff time series data were regarded as the smooth seasonal mean. The probability distribution of the percentage coefficients can be obtained from calibrated daily rainfall and runoff data. For validated daily rainfall and runoff data, percentage coefficients were randomly generated according to the probability distribution and the law of linear proportion. Multiplying the generated percentage coefficient by the smooth seasonal mean resulted in the corresponding perturbation term. Random modeling of daily rainfall and runoff can be obtained by adding the perturbation term to the smooth seasonal mean. To verify the accuracy of the proposed method, daily rainfall and runoff data for the Wu-Tu watershed were analyzed. The analytical results demonstrate that wavelet denoising enhances the precision of daily rainfall and runoff modeling of the seasonal model. In addition, the wavelet denoising technique proposed in this study can obtain the smooth seasonal mean of rainfall and runoff processes and is suitable for modeling actual daily rainfall and runoff processes.
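
A minimal stand-in for the wavelet denoising step is a multi-level Haar transform with soft thresholding of the detail coefficients, applied here to a synthetic seasonal signal (the wavelet, threshold, and signal are illustrative assumptions, not the study's choices):

```python
import math, random

def haar_denoise(signal, threshold, levels=3):
    """Multi-level Haar wavelet transform, soft-threshold the detail
    coefficients, then invert -- a minimal wavelet-denoising sketch."""
    assert len(signal) % (2 ** levels) == 0
    approx, details = list(signal), []
    for _ in range(levels):
        a = [(approx[2*i] + approx[2*i+1]) / math.sqrt(2) for i in range(len(approx)//2)]
        d = [(approx[2*i] - approx[2*i+1]) / math.sqrt(2) for i in range(len(approx)//2)]
        details.append(d)
        approx = a
    soft = lambda c: math.copysign(max(abs(c) - threshold, 0.0), c)
    details = [[soft(c) for c in d] for d in details]
    for d in reversed(details):                      # inverse transform
        nxt = []
        for a, c in zip(approx, d):
            nxt += [(a + c) / math.sqrt(2), (a - c) / math.sqrt(2)]
        approx = nxt
    return approx

rng = random.Random(5)
n = 256
clean = [10 + 5 * math.sin(2 * math.pi * t / 64) for t in range(n)]  # seasonal mean
noisy = [c + rng.gauss(0, 1.0) for c in clean]
den = haar_denoise(noisy, threshold=2.0)
mse = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
```

The denoised series recovers the smooth seasonal mean with a smaller error than the raw noisy series.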

  20. The Effect of Utilizing Organizational Culture Improvement Model of Patient Education on Coronary Artery Bypass Graft Patients' Anxiety and Satisfaction: Theory Testing.

    Science.gov (United States)

    Farahani, Mansoureh Ashghali; Ghaffari, Fatemeh; Norouzinezhad, Faezeh; Orak, Roohangiz Jamshidi

    2016-11-01

    Due to the increasing prevalence of arteriosclerosis and the mortality caused by this disease, coronary artery bypass graft (CABG) surgery has become one of the most common surgical procedures. Patient education is recognized as an effective means of increasing patient survival and improving treatment outcomes. However, failure to consider the different aspects of patient education has made this goal unattainable. The objective of this research was to determine the effect of utilizing the organizational culture improvement model of patient education on CABG patients' anxiety and satisfaction. The present study is a randomized controlled trial conducted on eighty CABG patients. The patients were selected from the CCU and post-CCU wards of a hospital affiliated with Iran University of Medical Sciences in Tehran, Iran, during 2015. Spielberger's Anxiety Inventory and a patient satisfaction questionnaire were used to collect the required information. Levels of anxiety and satisfaction were measured before the intervention and at the time of discharge. The intervention took place after preparing a programmed package based on the organizational culture improvement model for the following dimensions: effective communication, participatory decision-making, goal setting, planning, implementation and recording, supervision and control, and improvement of motivation. The data were analyzed with chi-square, independent t-, and Mann-Whitney U tests. The significance level of the tests was set at 0.05. SPSS version 18 was utilized for data analysis. The results revealed that the mean scores of situational and personality anxiety decreased in both the control and experiment groups following the intervention, but the decrease was greater in the experiment group (p≤0.0001). In addition, the increase in the mean scores of patients' satisfaction with education was greater in the experiment group.

  1. A note on moving average models for Gaussian random fields

    DEFF Research Database (Denmark)

    Hansen, Linda Vadgård; Thorarinsdottir, Thordis L.

    The class of moving average models offers a flexible modeling framework for Gaussian random fields, with many well-known models, such as the Matérn covariance family and the Gaussian covariance, falling under this framework. Moving average models may also be viewed as a kernel smoothing of a Lévy basis, a general modeling framework which includes several types of non-Gaussian models. We propose a new one-parameter spatial correlation model which arises from a power kernel and show that the associated Hausdorff dimension of the sample paths can take any value between 2 and 3. As a result...
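
The moving-average (process convolution) construction itself is easy to sketch in one dimension: convolve Gaussian white noise with a kernel, and the result is a stationary Gaussian field whose covariance is the kernel's self-convolution. This sketch uses a Gaussian kernel rather than the paper's power kernel, and all parameters are illustrative:

```python
import math, random

def moving_average_field(n, bandwidth, rng):
    """1-D moving-average model: convolve Gaussian white noise with a
    Gaussian kernel, normalized so the field has unit marginal variance."""
    half = int(4 * bandwidth)
    kernel = [math.exp(-0.5 * (j / bandwidth) ** 2) for j in range(-half, half + 1)]
    norm = math.sqrt(sum(k * k for k in kernel))
    kernel = [k / norm for k in kernel]
    noise = [rng.gauss(0, 1) for _ in range(n + 2 * half)]
    return [sum(k * noise[i + j] for j, k in enumerate(kernel)) for i in range(n)]

rng = random.Random(6)
z = moving_average_field(4000, bandwidth=5.0, rng=rng)
var = sum(v * v for v in z) / len(z)                       # ≈ 1 by construction
cov1 = sum(z[i] * z[i + 1] for i in range(len(z) - 1)) / (len(z) - 1)
# lag-1 covariance equals the kernel's self-convolution at lag 1 (near 1 here)
```

Changing the kernel changes the covariance family; the paper's power kernel is chosen precisely to control the roughness (Hausdorff dimension) of the sample paths.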

  2. Random unitary evolution model of quantum Darwinism with pure decoherence

    Science.gov (United States)

    Balanesković, Nenad

    2015-10-01

    We study the behavior of Quantum Darwinism [W.H. Zurek, Nat. Phys. 5, 181 (2009)] within the iterative, random unitary operations qubit-model of pure decoherence [J. Novotný, G. Alber, I. Jex, New J. Phys. 13, 053052 (2011)]. We conclude that Quantum Darwinism, which describes the quantum mechanical evolution of an open system S from the point of view of its environment E, is not a generic phenomenon, but depends on the specific form of the input states and on the type of S-E interactions. Furthermore, we show that within the random unitary model the concept of Quantum Darwinism enables one to explicitly construct and specify artificial input states of the environment E that allow information about an open system S of interest to be stored with maximal efficiency.

  3. Social group utility maximization

    CERN Document Server

    Gong, Xiaowen; Yang, Lei; Zhang, Junshan

    2014-01-01

    This SpringerBrief explains how to leverage mobile users' social relationships to improve the interactions of mobile devices in mobile networks. It develops a social group utility maximization (SGUM) framework that captures diverse social ties of mobile users and diverse physical coupling of mobile devices. Key topics include random access control, power control, spectrum access, and location privacy.This brief also investigates SGUM-based power control game and random access control game, for which it establishes the socially-aware Nash equilibrium (SNE). It then examines the critical SGUM-b

  4. A randomized controlled trial of support group intervention after breast cancer treatment: results on sick leave, health care utilization and health economy.

    Science.gov (United States)

    Björneklett, Helena Granstam; Rosenblad, Andreas; Lindemalm, Christina; Ojutkangas, Marja-Leena; Letocha, Henry; Strang, Peter; Bergkvist, Leif

    2013-01-01

    More than 50% of breast cancer patients are diagnosed before the age of 65. Returning to work after treatment is, therefore, of interest for both the individual and society. The aim was to study the effect of support group intervention on sick leave and health care utilization in economic terms. Of 382 patients with newly diagnosed breast cancer, 191 + 191 patients were randomized to an intervention group or to a routine control group, respectively. The intervention group received support intervention on a residential basis for one week, followed by four days of follow-up two months later. The support intervention included informative-educational sections, relaxation training, mental visualization and non-verbal communication. Patients answered a questionnaire at baseline, two, six and 12 months about sick leave and health care utilization. There was a trend towards longer sick leave and more health care utilization in the intervention group. The difference in total costs was statistically significantly higher in the intervention group after 12 months (p = 0.0036). Costs to society were not reduced with intervention in its present form.

  5. (Non-) Gibbsianness and Phase Transitions in Random Lattice Spin Models

    NARCIS (Netherlands)

    Külske, C.

    1999-01-01

    We consider disordered lattice spin models with finite-volume Gibbs measures µΛ[η](dσ). Here σ denotes a lattice spin variable and η a lattice random variable with product distribution P describing the quenched disorder of the model. We ask: when will the joint measures limΛ↑Zd P(dη)µΛ[η](dσ) be

  6. An Analysis/Synthesis System of Audio Signal with Utilization of an SN Model

    Directory of Open Access Journals (Sweden)

    G. Rozinaj

    2004-12-01

    Full Text Available An SN (sinusoids plus noise) model is a spectral model in which the periodic components of the sound are represented by sinusoids with time-varying frequencies, amplitudes and phases. The remaining non-periodic components are represented by filtered noise. The sinusoidal model utilizes physical properties of musical instruments and the noise model utilizes the human inability to perceive the exact spectral shape or the phase of stochastic signals. SN modeling can be applied in compression, transformation, separation of sounds, etc. The designed system is based on methods used in SN modeling. We have proposed a model that achieves good results in audio perception. Although many systems do not save the phases of the sinusoids, they are important for better modelling of transients, for the computation of the residual and, last but not least, for stereo signals, too. One of the fundamental properties of the proposed system is the ability to reconstruct the signal not only from the amplitude but also from the phase point of view.
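
    The additive sinusoids-plus-noise idea described in this record can be sketched as follows; the partial frequencies, amplitudes, phases, and noise level below are illustrative choices, not values from the paper:

    ```python
    import math
    import random

    def synthesize_sn(sinusoids, noise_level, n_samples, sample_rate=8000.0, seed=0):
        """Additive sinusoids-plus-noise synthesis: each partial keeps its own
        frequency (Hz), amplitude and phase; the non-periodic residual is
        modeled as white Gaussian noise scaled by noise_level."""
        rng = random.Random(seed)
        out = []
        for i in range(n_samples):
            t = i / sample_rate
            periodic = sum(a * math.sin(2.0 * math.pi * f * t + ph)
                           for f, a, ph in sinusoids)
            out.append(periodic + noise_level * rng.gauss(0.0, 1.0))
        return out

    # A 440 Hz fundamental plus one overtone, with a small noise floor.
    signal = synthesize_sn([(440.0, 1.0, 0.0), (880.0, 0.5, math.pi / 4)], 0.01, 1024)
    ```

    A real SN analysis/synthesis system would estimate the partial tracks and the residual from an input signal; this sketch covers only the synthesis half.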

  7. Model of Random Polygon Particles for Concrete and Mesh Automatic Subdivision

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    In order to study the constitutive behavior of concrete at the mesoscopic level, a new method is proposed in this paper. The method uses random polygon particles to simulate the fully graded broken aggregates of concrete. Based on computational geometry, we carry out the automatic generation of a triangular finite element mesh for the random polygon particle model of concrete. The finite element mesh generated in this paper is also applicable to many other numerical methods.
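
    As a minimal sketch of how one random polygon particle might be generated (the vertices-at-sorted-angles construction here is a common simple approach, not necessarily the paper's algorithm):

    ```python
    import math
    import random

    def random_polygon_particle(cx, cy, r_min, r_max, n_vertices, rng):
        """One random polygon 'aggregate': vertices placed at sorted random
        angles around the center (cx, cy), each at a random radius in
        [r_min, r_max]. Sorting the angles guarantees a simple
        (non-self-intersecting) polygon."""
        angles = sorted(rng.uniform(0.0, 2.0 * math.pi) for _ in range(n_vertices))
        return [(cx + rng.uniform(r_min, r_max) * math.cos(a),
                 cy + rng.uniform(r_min, r_max) * math.sin(a))
                for a in angles]

    rng = random.Random(42)
    particle = random_polygon_particle(0.0, 0.0, 1.0, 2.0, 6, rng)
    ```

    A full mesoscale model would also pack many such particles without overlap to match a grading curve before meshing.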

  8. Covariance of random stock prices in the Stochastic Dividend Discount Model

    OpenAIRE

    Agosto, Arianna; Mainini, Alessandra; Moretto, Enrico

    2016-01-01

    Dividend discount models have been developed in a deterministic setting. Some authors (Hurley and Johnson, 1994 and 1998; Yao, 1997) have introduced randomness in terms of stochastic growth rates, delivering closed-form expressions for the expected value of stock prices. This paper extends such previous results by determining a formula for the covariance between random stock prices when the dividends' rates of growth are correlated. The formula is eventually applied to real market data.
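
    How correlated dividend growth induces a covariance between stock prices can be illustrated with a Monte Carlo sketch; the two-state growth process, the correlation mechanism, and the finite horizon below are simplifying assumptions, not the paper's closed-form result:

    ```python
    import random

    def simulate_prices(d0, k, growth_states, rho, horizon, n_sims, seed=1):
        """Monte Carlo sketch of two stocks whose dividends grow at one of two
        rates each period; correlated growth-state draws induce a covariance
        between the (truncated) present values of the dividend streams."""
        rng = random.Random(seed)
        lo, hi = growth_states
        prices_a, prices_b = [], []
        for _ in range(n_sims):
            pa = pb = 0.0
            da = db = d0
            for t in range(1, horizon + 1):
                ga = hi if rng.random() < 0.5 else lo
                # with probability rho, stock B copies A's growth state
                gb = ga if rng.random() < rho else (hi if rng.random() < 0.5 else lo)
                da *= 1.0 + ga
                db *= 1.0 + gb
                pa += da / (1.0 + k) ** t
                pb += db / (1.0 + k) ** t
            prices_a.append(pa)
            prices_b.append(pb)
        ma = sum(prices_a) / n_sims
        mb = sum(prices_b) / n_sims
        cov = sum((a - ma) * (b - mb) for a, b in zip(prices_a, prices_b)) / n_sims
        return ma, mb, cov
    ```

    With rho = 1 the two dividend streams coincide and the covariance equals the price variance; with rho = 0 it is near zero.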

  9. Assessing robustness of designs for random effects parameters for nonlinear mixed-effects models.

    Science.gov (United States)

    Duffull, Stephen B; Hooker, Andrew C

    2017-12-01

    Optimal designs for nonlinear models are dependent on the choice of parameter values. Various methods have been proposed to provide designs that are robust to uncertainty in the prior choice of parameter values. These methods are generally based on estimating the expectation of the determinant (or a transformation of the determinant) of the information matrix over the prior distribution of the parameter values. For high dimensional models this can be computationally challenging. For nonlinear mixed-effects models the question arises as to the importance of accounting for uncertainty in the prior value of the variances of the random effects parameters. In this work we explore the influence of the variance of the random effects parameters on the optimal design. We find that the method for approximating the expectation and variance of the likelihood is of potential importance for considering the influence of random effects. The most common approximation to the likelihood, based on a first-order Taylor series approximation, yields designs that are relatively insensitive to the prior value of the variance of the random effects parameters and under these conditions it appears to be sufficient to consider uncertainty on the fixed-effects parameters only.

  10. Applicability of special quasi-random structure models in thermodynamic calculations using semi-empirical Debye–Grüneisen theory

    International Nuclear Information System (INIS)

    Kim, Jiwoong

    2015-01-01

    In theoretical calculations, expressing the random distribution of atoms in a certain crystal structure is still challenging. The special quasi-random structure (SQS) model is effective for depicting such random distributions. The SQS model had not previously been applied to semi-empirical thermodynamic calculations; here, Debye–Grüneisen theory (DGT), a semi-empirical method, was used for that purpose. Model reliability was assessed by comparing supercell models of various sizes. The results for chemical bonds, pair correlation, and elastic properties demonstrated the reliability of the SQS models. Thermodynamic calculations using density functional perturbation theory (DFPT) and DGT assessed the applicability of the SQS models. DGT and DFPT led to similar variations of the mixing and formation energies. This study provides guidelines for theoretical assessments to obtain reliable SQS models and to calculate the thermodynamic properties of numerous materials with a random atomic distribution. - Highlights: • Various material properties are used to examine the reliability of special quasi-random structures. • SQS models are applied to thermodynamic calculations by semi-empirical methods. • Basic calculation guidelines for materials with random atomic distribution are given.

  11. Mapping of the DLQI scores to EQ-5D utility values using ordinal logistic regression.

    Science.gov (United States)

    Ali, Faraz Mahmood; Kay, Richard; Finlay, Andrew Y; Piguet, Vincent; Kupfer, Joerg; Dalgard, Florence; Salek, M Sam

    2017-11-01

    The Dermatology Life Quality Index (DLQI) and the European Quality of Life-5 Dimension (EQ-5D) are separate measures that may be used to gather health-related quality of life (HRQoL) information from patients. The EQ-5D is a generic measure from which health utility estimates can be derived, whereas the DLQI is a specialty-specific measure to assess HRQoL. To reduce the burden of multiple measures being administered and to enable a more disease-specific calculation of health utility estimates, we explored an established mathematical technique known as ordinal logistic regression (OLR) to develop an appropriate model to map DLQI data to EQ-5D-based health utility estimates. Retrospective data from 4010 patients were randomly divided five times into two groups for the derivation and testing of the mapping model. Split-half cross-validation was utilized, resulting in a total of ten ordinal logistic regression models for each of the five EQ-5D dimensions against age, sex, and all ten items of the DLQI. Using Monte Carlo simulation, predicted health utility estimates were derived and compared against those observed. This method was repeated for both OLR and a previously tested mapping methodology based on linear regression. The model was shown to be highly predictive, and repeated fitting demonstrated stability using OLR as well as linear regression. The mean differences between OLR-predicted health utility estimates and observed health utility estimates ranged from 0.0024 to 0.0239 across the ten modeling exercises, with an average overall difference of 0.0120 (a 1.6% underestimate, not of clinical importance). The modeling framework developed in this study will enable researchers to calculate EQ-5D health utility estimates from a specialty-specific study population, reducing patient and economic burden.

  12. Influence of organizational characteristics and context on research utilization.

    Science.gov (United States)

    Cummings, Greta G; Estabrooks, Carole A; Midodzi, William K; Wallin, Lars; Hayduk, Leslie

    2007-01-01

    Despite three decades of empirical investigation into research utilization and a renewed emphasis on evidence-based medicine and evidence-based practice in the past decade, understanding of factors influencing research uptake in nursing remains limited. There is, however, increased awareness that organizational influences are important. To develop and test a theoretical model of organizational influences that predict research utilization by nurses and to assess the influence of varying degrees of context, based on the Promoting Action on Research Implementation in Health Services (PARIHS) framework, on research utilization and other variables. The study sample was drawn from a census of registered nurses working in acute care hospitals in Alberta, Canada, accessed through their professional licensing body (n = 6,526 nurses; 52.8% response rate). Three variables that measured PARIHS dimensions of context (culture, leadership, and evaluation) were used to sort cases into one of four mutually exclusive data sets that reflected less positive to more positive context. Then, a theoretical model of hospital- and unit-level influences on research utilization was developed and tested, using structural equation modeling, with 300 cases randomly selected from each of the four data sets. Model fit statistics for the four context data sets, from least to most positive, were chi-square = 124.5, 144.2, 157.3, and 146.0 (df = 80 in each case). Nurses working in contexts with more positive culture, leadership, and evaluation also reported significantly more research utilization, staff development, and lower rates of patient and staff adverse events than did nurses working in less positive contexts (i.e., those that lacked positive culture, leadership, or evaluation). The findings highlight the combined importance of culture, leadership, and evaluation to increase research utilization and improve patient safety. The findings may serve to strengthen the PARIHS framework and to suggest that, although

  13. P2 : A random effects model with covariates for directed graphs

    NARCIS (Netherlands)

    van Duijn, M.A.J.; Snijders, T.A.B.; Zijlstra, B.J.H.

    A random effects model is proposed for the analysis of binary dyadic data that represent a social network or directed graph, using nodal and/or dyadic attributes as covariates. The network structure is reflected by modeling the dependence between the relations to and from the same actor or node.

  14. A binomial random sum of present value models in investment analysis

    OpenAIRE

    Βουδούρη, Αγγελική; Ντζιαχρήστος, Ευάγγελος

    1997-01-01

    Stochastic present value models have been widely adopted in financial theory and practice and play a very important role in capital budgeting and profit planning. The purpose of this paper is to introduce a binomial random sum of stochastic present value models and offer an application in investment analysis.
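
    The idea of a binomial random sum of present values can be sketched by simulation; the per-period arrival mechanism and all parameters below are illustrative assumptions, not the paper's model:

    ```python
    import random

    def binomial_random_sum_pv(n_periods, p, cashflow, rate, n_sims=5000, seed=0):
        """Monte Carlo mean of a binomial random sum of present values: in each
        of n_periods a cash flow arrives with probability p (so the number of
        payments is Binomial(n_periods, p)), and each payment is discounted
        back to time zero."""
        rng = random.Random(seed)
        total = 0.0
        for _ in range(n_sims):
            pv = 0.0
            for t in range(1, n_periods + 1):
                if rng.random() < p:
                    pv += cashflow / (1.0 + rate) ** t
            total += pv
        return total / n_sims
    ```

    By linearity of expectation, the exact mean is p times the present value of the full annuity, which the simulated mean should approximate.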

  15. Bayesian analysis for exponential random graph models using the adaptive exchange sampler

    KAUST Repository

    Jin, Ick Hoon; Liang, Faming; Yuan, Ying

    2013-01-01

    Exponential random graph models have been widely used in social network analysis. However, these models are extremely difficult to handle from a statistical viewpoint, because of the existence of intractable normalizing constants. In this paper, we

  16. Random queues and risk averse users

    DEFF Research Database (Denmark)

    de Palma, André; Fosgerau, Mogens

    2013-01-01

    We analyze Nash equilibrium in time of use of a congested facility. Users are risk averse with general concave utility. Queues are subject to varying degrees of random sorting, ranging from strict queue priority to a completely random queue. We define the key “no residual queue” property, which...

  17. Modeling pollutant transport using a meshless-Lagrangian particle model

    International Nuclear Information System (INIS)

    Carrington, D.B.; Pepper, D.W.

    2002-01-01

    A combined meshless-Lagrangian particle transport model is used to predict pollutant transport over irregular terrain. The numerical model for initializing the velocity field is based on a meshless approach utilizing multiquadrics established by Kansa. The Lagrangian particle transport technique uses a random walk procedure to depict the advection and dispersion of pollutants over any type of surface, including street and city canyons

  18. A workflow learning model to improve geovisual analytics utility.

    Science.gov (United States)

    Roth, Robert E; Maceachren, Alan M; McCabe, Craig A

    2009-01-01

    INTRODUCTION: This paper describes the design and implementation of the G-EX Portal Learn Module, a web-based, geocollaborative application for organizing and distributing digital learning artifacts. G-EX falls into the broader context of geovisual analytics, a new research area with the goal of supporting visually-mediated reasoning about large, multivariate, spatiotemporal information. Because this information is unprecedented in amount and complexity, GIScientists are tasked with the development of new tools and techniques to make sense of it. Our research addresses the challenge of implementing these geovisual analytics tools and techniques in a useful manner. OBJECTIVES: The objective of this paper is to develop and implement a method for improving the utility of geovisual analytics software. The success of software is measured by its usability (i.e., how easy the software is to use) and utility (i.e., how useful the software is). The usability and utility of software can be improved by refining the software, increasing user knowledge about the software, or both. It is difficult to achieve transparent usability (i.e., software that is immediately usable without training) of geovisual analytics software because of the inherent complexity of the included tools and techniques. In these situations, improving user knowledge about the software through the provision of learning artifacts is as important as, if not more important than, iterative refinement of the software itself. Therefore, our approach to improving utility is focused on educating the user. METHODOLOGY: The research reported here was completed in two steps. First, we developed a model for learning about geovisual analytics software. Many existing digital learning models assist only with use of the software to complete a specific task and provide limited assistance with its actual application. To move beyond task-oriented learning about software use, we propose a process-oriented approach to learning based on

  19. First-principles modeling of electromagnetic scattering by discrete and discretely heterogeneous random media

    Science.gov (United States)

    Mishchenko, Michael I.; Dlugach, Janna M.; Yurkin, Maxim A.; Bi, Lei; Cairns, Brian; Liu, Li; Panetta, R. Lee; Travis, Larry D.; Yang, Ping; Zakharova, Nadezhda T.

    2018-01-01

    A discrete random medium is an object in the form of a finite volume of a vacuum or a homogeneous material medium filled with quasi-randomly and quasi-uniformly distributed discrete macroscopic impurities called small particles. Such objects are ubiquitous in natural and artificial environments. They are often characterized by analyzing theoretically the results of laboratory, in situ, or remote-sensing measurements of the scattering of light and other electromagnetic radiation. Electromagnetic scattering and absorption by particles can also affect the energy budget of a discrete random medium and hence various ambient physical and chemical processes. In either case electromagnetic scattering must be modeled in terms of appropriate optical observables, i.e., quadratic or bilinear forms in the field that quantify the reading of a relevant optical instrument or the electromagnetic energy budget. It is generally believed that time-harmonic Maxwell’s equations can accurately describe elastic electromagnetic scattering by macroscopic particulate media that change in time much more slowly than the incident electromagnetic field. However, direct solutions of these equations for discrete random media had been impracticable until quite recently. This has led to a widespread use of various phenomenological approaches in situations when their very applicability can be questioned. Recently, however, a new branch of physical optics has emerged wherein electromagnetic scattering by discrete and discretely heterogeneous random media is modeled directly by using analytical or numerically exact computer solutions of the Maxwell equations. Therefore, the main objective of this Report is to formulate the general theoretical framework of electromagnetic scattering by discrete random media rooted in the Maxwell–Lorentz electromagnetics and discuss its immediate analytical and numerical consequences. Starting from the microscopic Maxwell–Lorentz equations, we trace the development

  20. First-principles modeling of electromagnetic scattering by discrete and discretely heterogeneous random media

    International Nuclear Information System (INIS)

    Mishchenko, Michael I.; Dlugach, Janna M.; Yurkin, Maxim A.; Bi, Lei; Cairns, Brian; Liu, Li; Panetta, R. Lee; Travis, Larry D.; Yang, Ping; Zakharova, Nadezhda T.

    2016-01-01

    A discrete random medium is an object in the form of a finite volume of a vacuum or a homogeneous material medium filled with quasi-randomly and quasi-uniformly distributed discrete macroscopic impurities called small particles. Such objects are ubiquitous in natural and artificial environments. They are often characterized by analyzing theoretically the results of laboratory, in situ, or remote-sensing measurements of the scattering of light and other electromagnetic radiation. Electromagnetic scattering and absorption by particles can also affect the energy budget of a discrete random medium and hence various ambient physical and chemical processes. In either case electromagnetic scattering must be modeled in terms of appropriate optical observables, i.e., quadratic or bilinear forms in the field that quantify the reading of a relevant optical instrument or the electromagnetic energy budget. It is generally believed that time-harmonic Maxwell’s equations can accurately describe elastic electromagnetic scattering by macroscopic particulate media that change in time much more slowly than the incident electromagnetic field. However, direct solutions of these equations for discrete random media had been impracticable until quite recently. This has led to a widespread use of various phenomenological approaches in situations when their very applicability can be questioned. Recently, however, a new branch of physical optics has emerged wherein electromagnetic scattering by discrete and discretely heterogeneous random media is modeled directly by using analytical or numerically exact computer solutions of the Maxwell equations. Therefore, the main objective of this Report is to formulate the general theoretical framework of electromagnetic scattering by discrete random media rooted in the Maxwell–Lorentz electromagnetics and discuss its immediate analytical and numerical consequences. Starting from the microscopic Maxwell–Lorentz equations, we trace the development

  1. First-principles modeling of electromagnetic scattering by discrete and discretely heterogeneous random media

    Energy Technology Data Exchange (ETDEWEB)

    Mishchenko, Michael I., E-mail: michael.i.mishchenko@nasa.gov [NASA Goddard Institute for Space Studies, 2880 Broadway, New York, NY 10025 (United States); Dlugach, Janna M. [Main Astronomical Observatory of the National Academy of Sciences of Ukraine, 27 Zabolotny Str., 03680, Kyiv (Ukraine); Yurkin, Maxim A. [Voevodsky Institute of Chemical Kinetics and Combustion, SB RAS, Institutskaya str. 3, 630090 Novosibirsk (Russian Federation); Novosibirsk State University, Pirogova 2, 630090 Novosibirsk (Russian Federation); Bi, Lei [Department of Atmospheric Sciences, Texas A& M University, College Station, TX 77843 (United States); Cairns, Brian [NASA Goddard Institute for Space Studies, 2880 Broadway, New York, NY 10025 (United States); Liu, Li [NASA Goddard Institute for Space Studies, 2880 Broadway, New York, NY 10025 (United States); Columbia University, 2880 Broadway, New York, NY 10025 (United States); Panetta, R. Lee [Department of Atmospheric Sciences, Texas A& M University, College Station, TX 77843 (United States); Travis, Larry D. [NASA Goddard Institute for Space Studies, 2880 Broadway, New York, NY 10025 (United States); Yang, Ping [Department of Atmospheric Sciences, Texas A& M University, College Station, TX 77843 (United States); Zakharova, Nadezhda T. [Trinnovim LLC, 2880 Broadway, New York, NY 10025 (United States)

    2016-05-16

    A discrete random medium is an object in the form of a finite volume of a vacuum or a homogeneous material medium filled with quasi-randomly and quasi-uniformly distributed discrete macroscopic impurities called small particles. Such objects are ubiquitous in natural and artificial environments. They are often characterized by analyzing theoretically the results of laboratory, in situ, or remote-sensing measurements of the scattering of light and other electromagnetic radiation. Electromagnetic scattering and absorption by particles can also affect the energy budget of a discrete random medium and hence various ambient physical and chemical processes. In either case electromagnetic scattering must be modeled in terms of appropriate optical observables, i.e., quadratic or bilinear forms in the field that quantify the reading of a relevant optical instrument or the electromagnetic energy budget. It is generally believed that time-harmonic Maxwell’s equations can accurately describe elastic electromagnetic scattering by macroscopic particulate media that change in time much more slowly than the incident electromagnetic field. However, direct solutions of these equations for discrete random media had been impracticable until quite recently. This has led to a widespread use of various phenomenological approaches in situations when their very applicability can be questioned. Recently, however, a new branch of physical optics has emerged wherein electromagnetic scattering by discrete and discretely heterogeneous random media is modeled directly by using analytical or numerically exact computer solutions of the Maxwell equations. Therefore, the main objective of this Report is to formulate the general theoretical framework of electromagnetic scattering by discrete random media rooted in the Maxwell–Lorentz electromagnetics and discuss its immediate analytical and numerical consequences. Starting from the microscopic Maxwell–Lorentz equations, we trace the development

  2. First-Principles Modeling Of Electromagnetic Scattering By Discrete and Discretely Heterogeneous Random Media

    Science.gov (United States)

    Mishchenko, Michael I.; Dlugach, Janna M.; Yurkin, Maxim A.; Bi, Lei; Cairns, Brian; Liu, Li; Panetta, R. Lee; Travis, Larry D.; Yang, Ping; Zakharova, Nadezhda T.

    2016-01-01

    A discrete random medium is an object in the form of a finite volume of a vacuum or a homogeneous material medium filled with quasi-randomly and quasi-uniformly distributed discrete macroscopic impurities called small particles. Such objects are ubiquitous in natural and artificial environments. They are often characterized by analyzing theoretically the results of laboratory, in situ, or remote-sensing measurements of the scattering of light and other electromagnetic radiation. Electromagnetic scattering and absorption by particles can also affect the energy budget of a discrete random medium and hence various ambient physical and chemical processes. In either case electromagnetic scattering must be modeled in terms of appropriate optical observables, i.e., quadratic or bilinear forms in the field that quantify the reading of a relevant optical instrument or the electromagnetic energy budget. It is generally believed that time-harmonic Maxwell's equations can accurately describe elastic electromagnetic scattering by macroscopic particulate media that change in time much more slowly than the incident electromagnetic field. However, direct solutions of these equations for discrete random media had been impracticable until quite recently. This has led to a widespread use of various phenomenological approaches in situations when their very applicability can be questioned. Recently, however, a new branch of physical optics has emerged wherein electromagnetic scattering by discrete and discretely heterogeneous random media is modeled directly by using analytical or numerically exact computer solutions of the Maxwell equations. Therefore, the main objective of this Report is to formulate the general theoretical framework of electromagnetic scattering by discrete random media rooted in the Maxwell- Lorentz electromagnetics and discuss its immediate analytical and numerical consequences. 
Starting from the microscopic Maxwell-Lorentz equations, we trace the development of

  3. Utility of Small Animal Models of Developmental Programming.

    Science.gov (United States)

    Reynolds, Clare M; Vickers, Mark H

    2018-01-01

    Any effective strategy to tackle the global obesity and rising noncommunicable disease epidemic requires an in-depth understanding of the mechanisms that underlie these conditions that manifest as a consequence of complex gene-environment interactions. In this context, it is now well established that alterations in the early life environment, including suboptimal nutrition, can result in an increased risk for a range of metabolic, cardiovascular, and behavioral disorders in later life, a process preferentially termed developmental programming. To date, most of the mechanistic knowledge around the processes underpinning developmental programming has been derived from preclinical research performed mostly, but not exclusively, in laboratory mouse and rat strains. This review will cover the utility of small animal models in developmental programming, the limitations of such models, and potential future directions that are required to fully maximize information derived from preclinical models in order to effectively translate to clinical use.

  4. Calculating radiotherapy margins based on Bayesian modelling of patient specific random errors

    International Nuclear Information System (INIS)

    Herschtal, A; Te Marvelde, L; Mengersen, K; Foroudi, F; Ball, D; Devereux, T; Pham, D; Greer, P B; Pichler, P; Eade, T; Kneebone, A; Bell, L; Caine, H; Hindson, B; Kron, T; Hosseinifard, Z

    2015-01-01

    Collected real-life clinical target volume (CTV) displacement data show that some patients undergoing external beam radiotherapy (EBRT) demonstrate significantly more fraction-to-fraction variability in their displacement (‘random error’) than others. This contrasts with the common assumption made by historical recipes for margin estimation for EBRT, that the random error is constant across patients. In this work we present statistical models of CTV displacements in which random errors are characterised by an inverse gamma (IG) distribution in order to assess the impact of random error variability on CTV-to-PTV margin widths, for eight real world patient cohorts from four institutions, and for different sites of malignancy. We considered a variety of clinical treatment requirements and penumbral widths. The eight cohorts consisted of a total of 874 patients and 27 391 treatment sessions. Compared to a traditional margin recipe that assumes constant random errors across patients, for a typical 4 mm penumbral width, the IG based margin model mandates that in order to satisfy the common clinical requirement that 90% of patients receive at least 95% of prescribed RT dose to the entire CTV, margins be increased by a median of 10% (range over the eight cohorts −19% to +35%). This substantially reduces the proportion of patients for whom margins are too small to satisfy clinical requirements. (paper)
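
    The effect of per-patient variability in the random error on margin width can be illustrated with a simplified 1-D sketch; the inverse gamma parameters, the z = 1.96 coverage criterion, and the quantile-based margin rule below are simplifying assumptions, not the paper's full dose-based model:

    ```python
    import math
    import random

    def ctv_ptv_margins(alpha, beta, n_patients=10000, z=1.96, pct=0.90, seed=0):
        """Compare a margin allowing for patient-to-patient variation in the
        random-error SD (sigma^2 ~ inverse gamma(alpha, beta)) with the classic
        recipe that treats sigma as constant across patients. For 1-D Gaussian
        displacements, a patient's displacements stay within the margin at the
        95% level when the margin is at least z * sigma, so the population
        margin is z times the pct-quantile of sigma, whereas the constant-sigma
        recipe uses z times the mean sigma."""
        rng = random.Random(seed)
        # inverse gamma draw: if X ~ Gamma(alpha, scale 1/beta), then 1/X ~ IG(alpha, beta)
        sigmas = sorted(math.sqrt(1.0 / rng.gammavariate(alpha, 1.0 / beta))
                        for _ in range(n_patients))
        margin_ig = z * sigmas[int(pct * n_patients)]
        margin_const = z * sum(sigmas) / n_patients
        return margin_ig, margin_const
    ```

    Because the inverse gamma is right-skewed, the quantile-based margin exceeds the constant-sigma margin, mirroring the paper's finding that margins must widen once random-error variability is acknowledged.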

  5. Stochastic geometry, spatial statistics and random fields models and algorithms

    CERN Document Server

    2015-01-01

    Providing a graduate level introduction to various aspects of stochastic geometry, spatial statistics and random fields, this volume places a special emphasis on fundamental classes of models and algorithms as well as on their applications, for example in materials science, biology and genetics. This book has a strong focus on simulations and includes extensive codes in Matlab and R, which are widely used in the mathematical community. It can be regarded as a continuation of the recent volume 2068 of Lecture Notes in Mathematics, where other issues of stochastic geometry, spatial statistics and random fields were considered, with a focus on asymptotic methods.

  6. A framework for estimating health state utility values within a discrete choice experiment: modeling risky choices.

    Science.gov (United States)

    Robinson, Angela; Spencer, Anne; Moffatt, Peter

    2015-04-01

    There has been recent interest in using the discrete choice experiment (DCE) method to derive health state utilities for use in quality-adjusted life year (QALY) calculations, but challenges remain. We set out to develop a risk-based DCE approach to derive utility values for health states that allowed 1) utility values to be anchored directly to normal health and death and 2) worse than dead health states to be assessed in the same manner as better than dead states. Furthermore, we set out to estimate alternative models of risky choice within a DCE model. A survey was designed that incorporated a risk-based DCE and a "modified" standard gamble (SG). Health state utility values were elicited for 3 EQ-5D health states assuming "standard" expected utility (EU) preferences. The DCE model was then generalized to allow for rank-dependent expected utility (RDU) preferences, thereby allowing for probability weighting. A convenience sample of 60 students was recruited and data collected in small groups. Under the assumption of "standard" EU preferences, the utility values derived within the DCE corresponded fairly closely to the mean results from the modified SG. Under the assumption of RDU preferences, the utility values estimated are somewhat lower than under the assumption of standard EU, suggesting that the latter may be biased upward. Applying the correct model of risky choice is important whether a modified SG or a risk-based DCE is deployed. It is, however, possible to estimate a probability weighting function within a DCE and estimate "unbiased" utility values directly, which is not possible within a modified SG. We conclude by setting out the relative strengths and weaknesses of the 2 approaches in this context. © The Author(s) 2014.
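
    One common RDU specification that could be estimated in such a DCE is the Tversky-Kahneman probability weighting function; the functional form is standard, but the gamma value below is illustrative and not an estimate from this study:

    ```python
    def tk_weight(p, gamma=0.61):
        """Tversky-Kahneman probability weighting function, a common RDU
        specification; gamma = 1 recovers standard expected utility."""
        return p ** gamma / (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)

    def rdu_binary(u_best, u_worst, p_best, gamma=0.61):
        """Rank-dependent utility of a binary gamble: the better outcome gets
        decision weight w(p_best); the worse outcome gets the remainder."""
        w = tk_weight(p_best, gamma)
        return w * u_best + (1.0 - w) * u_worst
    ```

    With gamma < 1, large probabilities are underweighted, which is consistent with the paper's observation that utilities estimated under standard EU may be biased upward relative to RDU.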

  7. High-temperature series expansions for random Potts models

    Directory of Open Access Journals (Sweden)

    M.Hellmund

    2005-01-01

    Full Text Available We discuss recently generated high-temperature series expansions for the free energy and the susceptibility of random-bond q-state Potts models on hypercubic lattices. Using the star-graph expansion technique, quenched disorder averages can be calculated exactly for arbitrary uncorrelated coupling distributions while keeping the disorder strength p as well as the dimension d as symbolic parameters. We present analyses of the new series for the susceptibility of the Ising (q=2) and the 4-state Potts model in three dimensions up to order 19 and 18, respectively, and compare our findings with results from field-theoretical renormalization group studies and Monte Carlo simulations.

  8. Comparison of the Predictive Performance and Interpretability of Random Forest and Linear Models on Benchmark Data Sets.

    Science.gov (United States)

    Marchese Robinson, Richard L; Palczewska, Anna; Palczewski, Jan; Kidley, Nathan

    2017-08-28

    The ability to interpret the predictions made by quantitative structure-activity relationships (QSARs) offers a number of advantages. While QSARs built using nonlinear modeling approaches, such as the popular Random Forest algorithm, might sometimes be more predictive than those built using linear modeling approaches, their predictions have been perceived as difficult to interpret. However, a growing number of approaches have been proposed for interpreting nonlinear QSAR models in general and Random Forest in particular. In the current work, we compare the performance of Random Forest to those of two widely used linear modeling approaches: linear Support Vector Machines (SVMs) (or Support Vector Regression (SVR)) and partial least-squares (PLS). We compare their performance in terms of their predictivity as well as the chemical interpretability of the predictions using novel scoring schemes for assessing heat map images of substructural contributions. We critically assess different approaches for interpreting Random Forest models as well as for obtaining predictions from the forest. We assess the models on a large number of widely employed public-domain benchmark data sets corresponding to regression and binary classification problems of relevance to hit identification and toxicology. We conclude that Random Forest typically yields comparable or possibly better predictive performance than the linear modeling approaches and that its predictions may also be interpreted in a chemically and biologically meaningful way. In contrast to earlier work looking at interpretation of nonlinear QSAR models, we directly compare two methodologically distinct approaches for interpreting Random Forest models. The approaches for interpreting Random Forest assessed in our article were implemented using open-source programs that we have made available to the community. These programs are the rfFC package ( https://r-forge.r-project.org/R/?group_id=1725 ) for the R statistical

  9. Local Stability Conditions for Two Types of Monetary Models with Recursive Utility

    OpenAIRE

    Miyazaki, Kenji; Utsunomiya, Hitoshi

    2009-01-01

    This paper explores local stability conditions for money-in-utility-function (MIUF) and transaction-costs (TC) models with recursive utility. A monetary variant of the Brock-Gale condition provides a theoretical justification of the comparative statics analysis. One of the sufficient conditions for local stability is increasing marginal impatience (IMI) in consumption and money. However, this does not deny the possibility of decreasing marginal impatience (DMI). The local stability with DMI is mor...

  10. Possibility/Necessity-Based Probabilistic Expectation Models for Linear Programming Problems with Discrete Fuzzy Random Variables

    Directory of Open Access Journals (Sweden)

    Hideki Katagiri

    2017-10-01

    Full Text Available This paper considers linear programming problems (LPPs) where the objective functions involve discrete fuzzy random variables (fuzzy set-valued discrete random variables). New decision making models, which are useful in fuzzy stochastic environments, are proposed based on both possibility theory and probability theory. In multi-objective cases, Pareto optimal solutions of the proposed models are newly defined. Computational algorithms for obtaining the Pareto optimal solutions of the proposed models are provided. It is shown that problems involving discrete fuzzy random variables can be transformed into deterministic nonlinear mathematical programming problems which can be solved through a conventional mathematical programming solver under practically reasonable assumptions. A numerical example of agriculture production problems is given to demonstrate the applicability of the proposed models to real-world problems in fuzzy stochastic environments.

  11. A machine learning calibration model using random forests to improve sensor performance for lower-cost air quality monitoring

    Science.gov (United States)

    Zimmerman, Naomi; Presto, Albert A.; Kumar, Sriniwasa P. N.; Gu, Jason; Hauryliuk, Aliaksei; Robinson, Ellis S.; Robinson, Allen L.; Subramanian, R.

    2018-01-01

    Low-cost sensing strategies hold the promise of denser air quality monitoring networks, which could significantly improve our understanding of personal air pollution exposure. Additionally, low-cost air quality sensors could be deployed to areas where limited monitoring exists. However, low-cost sensors are frequently sensitive to environmental conditions and pollutant cross-sensitivities, which have historically been poorly addressed by laboratory calibrations, limiting their utility for monitoring. In this study, we investigated different calibration models for the Real-time Affordable Multi-Pollutant (RAMP) sensor package, which measures CO, NO2, O3, and CO2. We explored three methods: (1) laboratory univariate linear regression, (2) empirical multiple linear regression, and (3) machine-learning-based calibration models using random forests (RF). Calibration models were developed for 16-19 RAMP monitors (varied by pollutant) using training and testing windows spanning August 2016 through February 2017 in Pittsburgh, PA, US. The random forest models matched (CO) or significantly outperformed (NO2, CO2, O3) the other calibration models, and their accuracy and precision were robust over time for testing windows of up to 16 weeks. Following calibration, average mean absolute error on the testing data set from the random forest models was 38 ppb for CO (14 % relative error), 10 ppm for CO2 (2 % relative error), 3.5 ppb for NO2 (29 % relative error), and 3.4 ppb for O3 (15 % relative error), and Pearson r versus the reference monitors exceeded 0.8 for most units. Model performance is explored in detail, including a quantification of model variable importance, accuracy across different concentration ranges, and performance in a range of monitoring contexts including the National Ambient Air Quality Standards (NAAQS) and the US EPA Air Sensors Guidebook recommendations of minimum data quality for personal exposure measurement. A key strength of the RF approach is that

  12. Reactive Power Pricing Model Considering the Randomness of Wind Power Output

    Science.gov (United States)

    Dai, Zhong; Wu, Zhou

    2018-01-01

    With the increase of wind power capacity integrated into the grid, the influence of the randomness of wind power output on the reactive power distribution of the grid is gradually highlighted. Meanwhile, power market reform puts forward higher requirements for reasonable pricing of reactive power service. On this basis, the article combines an optimal power flow model that accounts for wind power randomness with an integrated cost allocation method to price reactive power. Considering the advantages and disadvantages of present cost allocation methods and marginal cost pricing, an integrated cost allocation method based on optimal power flow tracing is proposed. The model realizes the optimal power flow distribution of reactive power with minimal integrated cost under wind power integration, on the premise of guaranteeing the balance of reactive power pricing. Finally, through multi-scenario calculation examples and stochastic simulation of wind power outputs, the article compares the results of the model's pricing with marginal cost pricing, showing that the model is accurate and effective.

  13. Genetic evaluation of European quails by random regression models

    Directory of Open Access Journals (Sweden)

    Flaviana Miranda Gonçalves

    2012-09-01

    Full Text Available The objective of this study was to compare different random regression models, defined from different classes of heterogeneity of variance combined with different Legendre polynomial orders, for the estimation of (co)variances of quails. The data came from 28,076 observations of 4,507 female meat quails of the LF1 lineage. Quail body weights were determined at birth and 1, 14, 21, 28, 35 and 42 days of age. Six different classes of residual variance were fitted to Legendre polynomial functions (orders ranging from 2 to 6) to determine which model had the best fit to describe the (co)variance structures as a function of time. According to the evaluated criteria (AIC, BIC and LRT), the model with six classes of residual variances and a sixth-order Legendre polynomial was the best fit. The estimated additive genetic variance increased from birth to 28 days of age, and dropped slightly from 35 to 42 days. The heritability estimates decreased along the growth curve and changed from 0.51 (1 day) to 0.16 (42 days). Animal genetic and permanent environmental correlation estimates between weights and age classes were always high and positive, except for birth weight. The sixth-order Legendre polynomial, along with the residual variance divided into six classes, provided the best fit for the growth rate curve of meat quails; therefore, it should be considered for breeding evaluation processes by random regression models.
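    The Legendre covariates that such random regression models fit can be sketched as follows: ages are standardized onto [-1, 1] over the trajectory, and the polynomial values then serve as regression covariates for the genetic and permanent environmental effects. The age range mirrors the quail data (birth to 42 days), but the snippet is purely illustrative.

```python
# Sketch: Legendre polynomial covariates for a random regression model.
# Ages are standardized to [-1, 1]; P_0..P_k evaluated at the standardized
# age form the covariate vector for that record.

def standardize(age, age_min=0.0, age_max=42.0):
    """Map an age in [age_min, age_max] onto [-1, 1]."""
    return -1.0 + 2.0 * (age - age_min) / (age_max - age_min)

def legendre(x, order):
    """Values P_0(x)..P_order(x) via the Bonnet recurrence."""
    p = [1.0, x]
    for n in range(1, order):
        p.append(((2 * n + 1) * x * p[n] - n * p[n - 1]) / (n + 1))
    return p[: order + 1]
```

    For a 21-day weight the standardized age is 0, and the covariate vector is the polynomial values at that point.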

  14. Modeling and understanding of effects of randomness in arrays of resonant meta-atoms

    DEFF Research Database (Denmark)

    Tretyakov, Sergei A.; Albooyeh, Mohammad; Alitalo, Pekka

    2013-01-01

    In this review presentation we will discuss approaches to modeling and understanding electromagnetic properties of 2D and 3D lattices of small resonant particles (meta-atoms) in transition from regular (periodic) to random (amorphous) states. Nanostructured metasurfaces (2D) and metamaterials (3D) are arrangements of optically small but resonant particles (meta-atoms). We will present our results on analytical modeling of metasurfaces with periodical and random arrangements of electrically and magnetically resonant meta-atoms with identical or random sizes, both for normal and oblique-angle excitations. We show how the electromagnetic response of metasurfaces is related to the statistical parameters of the structure. Furthermore, we will discuss the phenomenon of anti-resonance in extracted effective parameters of metamaterials and clarify its relation to the periodicity (or amorphous nature...

  15. Analytical connection between thresholds and immunization strategies of SIS model in random networks

    Science.gov (United States)

    Zhou, Ming-Yang; Xiong, Wen-Man; Liao, Hao; Wang, Tong; Wei, Zong-Wen; Fu, Zhong-Qian

    2018-05-01

    Devising effective strategies for hindering the propagation of viruses and protecting the population against epidemics is critical for public security and health. Despite a number of studies on this topic based on the susceptible-infected-susceptible (SIS) model, we still lack a general framework to compare different immunization strategies in completely random networks. Here, we address this problem by suggesting a novel method based on heterogeneous mean-field theory for the SIS model. Our method builds the relationship between the thresholds and different immunization strategies in completely random networks. In addition, we provide an analytical argument that the targeted large-degree strategy achieves the best performance in random networks with arbitrary degree distribution. Moreover, experimental results demonstrate the effectiveness of the proposed method in both artificial and real-world networks.
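    The heterogeneous mean-field threshold underlying such arguments, lambda_c = <k>/<k^2>, can be sketched directly, together with the effect of random versus targeted (large-degree) immunization. The degree sequence is a toy example, and treating immunization as plain removal of nodes from the degree sequence is a deliberate simplification.

```python
# Sketch: heterogeneous mean-field SIS threshold and its change under
# degree-blind vs. targeted (hub-removal) immunization. Toy values only.

def sis_threshold(degrees):
    """Heterogeneous mean-field SIS threshold: <k> / <k^2>."""
    k1 = sum(degrees) / len(degrees)
    k2 = sum(d * d for d in degrees) / len(degrees)
    return k1 / k2

# Toy heterogeneous network: many low-degree nodes, a few hubs.
degrees = [1] * 60 + [2] * 30 + [10] * 10

# Degree-blind immunization removes nodes proportionally across degrees;
# targeted immunization removes the hubs.
random_kept = [d for i, d in enumerate(degrees) if i % 10 != 0]
targeted_kept = [d for d in degrees if d < 10]
```

    Proportional, degree-blind removal leaves <k>/<k^2> unchanged, while removing the hubs raises the threshold several-fold, which is the targeted-strategy advantage in miniature.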

  16. Random detailed model for probabilistic neutronic calculation in pebble bed Very High Temperature Reactors

    International Nuclear Information System (INIS)

    Perez Curbelo, J.; Rosales, J.; Garcia, L.; Garcia, C.; Brayner, C.

    2013-01-01

    The pebble bed nuclear reactor is one of the main candidates for the next generation of nuclear power plants. In pebble bed type HTRs, the fuel is contained within graphite pebbles in the form of TRISO particles, which form a randomly packed bed inside a graphite-walled cylindrical cavity. Pebble bed reactors (PBR) offer the opportunity to meet sustainability requirements, such as nuclear safety, economic competitiveness, proliferation resistance and a minimal production of radioactive waste. In order to simulate PBRs correctly, the double heterogeneity of the system must be considered: randomly located pebbles in the core and randomly located TRISO particles in the fuel pebbles. These features are often neglected due to the difficulty of modeling them with the MCNP code, the main reason being the limited number of cells and surfaces that can be defined. In this study, a computational tool was developed that produces a new geometrical model of fuel pebbles for neutronic calculations with the MCNPX code. The heterogeneity of the system is considered, including the randomly located TRISO particles inside the pebble. Four proposed fuel pebble models were compared regarding their effective multiplication factor and energy liberation profiles: Homogeneous Pebble, Five Zone Homogeneous Pebble, Detailed Geometry, and Randomly Detailed Geometry. (Author)
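    The random placement of TRISO particles can be caricatured as rejection sampling of non-overlapping particle centres inside a spherical fuel zone. The radii, particle count, and the use of plain rejection sampling are illustrative assumptions, not the actual algorithm or pebble specification from the study.

```python
# Sketch: rejection sampling of non-overlapping particle centres inside a
# spherical fuel zone, the kind of randomly detailed geometry generated
# for Monte Carlo neutronics input. Dimensions are toy values.

import random

def pack_triso(n, fuel_radius=2.5, particle_radius=0.05, seed=1,
               max_tries=100000):
    rng = random.Random(seed)
    centres = []
    tries = 0
    r_max = fuel_radius - particle_radius  # keep particles fully inside
    while len(centres) < n and tries < max_tries:
        tries += 1
        x, y, z = (rng.uniform(-r_max, r_max) for _ in range(3))
        if x * x + y * y + z * z > r_max * r_max:
            continue  # outside the sphere: reject
        # Accept only if at least one particle diameter from all others.
        if all((x - a) ** 2 + (y - b) ** 2 + (z - c) ** 2
               >= (2 * particle_radius) ** 2 for a, b, c in centres):
            centres.append((x, y, z))
    return centres
```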

  17. The dilute random field Ising model by finite cluster approximation

    International Nuclear Information System (INIS)

    Benyoussef, A.; Saber, M.

    1987-09-01

    Using the finite cluster approximation, phase diagrams of bond and site diluted three-dimensional simple cubic Ising models with a random field have been determined. The resulting phase diagrams have the same general features for both bond and site dilution. (author). 7 refs, 4 figs

  18. Animal models of myasthenia gravis: utility and limitations

    Science.gov (United States)

    Mantegazza, Renato; Cordiglieri, Chiara; Consonni, Alessandra; Baggi, Fulvio

    2016-01-01

    Myasthenia gravis (MG) is a chronic autoimmune disease caused by the immune attack of the neuromuscular junction. Antibodies directed against the acetylcholine receptor (AChR) induce receptor degradation, complement cascade activation, and postsynaptic membrane destruction, resulting in functional reduction in AChR availability. Besides anti-AChR antibodies, other autoantibodies are known to play pathogenic roles in MG. The experimental autoimmune MG (EAMG) models have been of great help over the years in understanding the pathophysiological role of specific autoantibodies and T helper lymphocytes and in suggesting new therapies for prevention and modulation of the ongoing disease. EAMG can be induced in mice and rats of susceptible strains that show clinical symptoms mimicking the human disease. EAMG models are helpful for studying both the muscle and the immune compartments to evaluate new treatment perspectives. In this review, we concentrate on recent findings on EAMG models, focusing on their utility and limitations. PMID:27019601

  19. Development of a Deterministic Optimization Model for Design of an Integrated Utility and Hydrogen Supply Network

    International Nuclear Information System (INIS)

    Hwangbo, Soonho; Lee, In-Beum; Han, Jeehoon

    2014-01-01

    Many networks are constructed in a large-scale industrial complex. Each network meets its demands through production or transportation of the materials needed by the companies in the network. A network either directly produces materials to satisfy a company's demands or purchases them from outside, owing to demand uncertainty, financial factors, and so on. Utility networks and hydrogen networks are typical major networks in a large-scale industrial complex. Many studies have focused mainly on minimizing the total cost or optimizing the network structure, but little research attempts an integrated network model connecting the utility network and the hydrogen network. In this study, a deterministic mixed integer linear programming model is developed for integrating the utility network and the hydrogen network. A steam methane reforming process is needed to combine the two networks: hydrogen produced by steam methane reforming, whose raw material is steam vented from the utility network, enters the hydrogen network and meets its needs. The proposed model can suggest an optimized configuration of the integrated network, an optimized blueprint, and the optimal total cost. The capability of the proposed model is tested by applying it to the Yeosu industrial complex in Korea, which contains one of the biggest petrochemical complexes and whose data underlie various papers. In a case study, the integrated network model yields more optimal conclusions than previous results obtained by studying the utility network and the hydrogen network individually.

  20. Application of the load flow and random flow models for the analysis of power transmission networks

    International Nuclear Information System (INIS)

    Zio, Enrico; Piccinelli, Roberta; Delfanti, Maurizio; Olivieri, Valeria; Pozzi, Mauro

    2012-01-01

    In this paper, the classical load flow (LF) model and the random flow (RF) model are considered for analyzing the performance of power transmission networks. The analysis concerns both the system performance and the importance of the different system elements; the latter is computed by power flow and random walk betweenness centrality measures. A network system from the literature is analyzed, representing a simple electrical power transmission network. The results obtained highlight the differences between the LF “global approach” to flow dispatch and the RF local approach of randomized node-to-node load transfer. Furthermore, the LF model is computationally less demanding than the RF model, but problems of convergence may arise in the LF calculation.
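    The local, randomized node-to-node transfer of the RF approach can be caricatured by counting random-walk visits between a source and a target, a crude stand-in for random walk betweenness centrality. The graph and walk count are toy values.

```python
# Sketch: random-walk visit counts between a source and a target node,
# a crude stand-in for random walk betweenness centrality. Toy graph.

import random

def random_walk_visits(adj, source, target, walks=200, seed=3):
    """Count intermediate-node visits over repeated source->target walks."""
    rng = random.Random(seed)
    visits = {node: 0 for node in adj}
    for _ in range(walks):
        node = source
        while node != target:
            node = rng.choice(adj[node])
            if node != target:
                visits[node] += 1
    return visits

# Toy 4-node line graph: 0 - 1 - 2 - 3.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
visits = random_walk_visits(adj, 0, 3)
```

    Nodes on the only path between source and target accumulate visits, mirroring how betweenness-style measures rank element importance.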

  1. Modeling Substrate Utilization, Metabolite Production, and Uranium Immobilization in Shewanella oneidensis Biofilms

    Directory of Open Access Journals (Sweden)

    Ryan S. Renslow

    2017-06-01

    Full Text Available In this study, we developed a two-dimensional mathematical model to predict substrate utilization and metabolite production rates in Shewanella oneidensis MR-1 biofilm in the presence and absence of uranium (U). In our model, lactate and fumarate are used as the electron donor and the electron acceptor, respectively. The model includes the production of extracellular polymeric substances (EPS). The EPS bound to the cell surface and distributed in the biofilm were considered bound EPS (bEPS) and loosely associated EPS (laEPS), respectively. COMSOL® Multiphysics finite element analysis software was used to solve the model numerically (model file provided in the Supplementary Material). The input variables of the model were the lactate, fumarate, cell, and EPS concentrations, half saturation constant for fumarate, and diffusion coefficients of the substrates and metabolites. To estimate unknown parameters and calibrate the model, we used a custom designed biofilm reactor placed inside a nuclear magnetic resonance (NMR) microimaging and spectroscopy system and measured substrate utilization and metabolite production rates. From these data we estimated the yield coefficients, maximum substrate utilization rate, half saturation constant for lactate, stoichiometric ratio of fumarate and acetate to lactate and stoichiometric ratio of succinate to fumarate. These parameters are critical to predicting the activity of biofilms and are not available in the literature. Lastly, the model was used to predict uranium immobilization in S. oneidensis MR-1 biofilms by considering reduction and adsorption processes in the cells and in the EPS. We found that the majority of immobilization was due to cells, and that EPS was less efficient at immobilizing U. Furthermore, most of the immobilization occurred within the top 10 μm of the biofilm. To the best of our knowledge, this research is one of the first biofilm immobilization mathematical models based on experimental
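    The substrate-utilization side of such a model can be sketched with Monod kinetics and forward-Euler time stepping. The parameter values and the single lactate-to-acetate pathway below are placeholders for illustration, not the yields and rates estimated in the study.

```python
# Sketch: forward-Euler integration of Monod-type lactate utilization and
# acetate production. Parameters (q_max, K_s, yield) are illustrative.

def simulate(lactate0=10.0, biomass=1.0, q_max=0.5, K_s=1.0,
             y_acetate=0.8, dt=0.01, t_end=10.0):
    lactate, acetate = lactate0, 0.0
    for _ in range(int(t_end / dt)):
        rate = q_max * biomass * lactate / (K_s + lactate)  # Monod uptake
        lactate -= rate * dt
        acetate += y_acetate * rate * dt                    # yield-coupled
    return lactate, acetate
```

    Acetate accumulates in fixed stoichiometric proportion to the lactate consumed, which is how the yield coefficients enter the calibration.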

  2. Topics in random walks in random environment

    International Nuclear Information System (INIS)

    Sznitman, A.-S.

    2004-01-01

    Over the last twenty-five years, random motions in random media have been intensively investigated and some new general methods and paradigms have by now emerged. Random walks in random environment constitute one of the canonical models of the field. However, in dimension bigger than one they are still poorly understood and many of the basic issues remain to this day unresolved. The present series of lectures attempts to give an account of the progress which has been made over the last few years, especially in the study of multi-dimensional random walks in random environment with ballistic behavior. (author)

  3. Does quasi-long-range order in the two-dimensional XY model really survive weak random phase fluctuations?

    International Nuclear Information System (INIS)

    Mudry, Christopher; Wen Xiaogang

    1999-01-01

    Effective theories for random critical points are usually non-unitary, and thus may contain relevant operators with negative scaling dimensions. To study the consequences of the existence of negative-dimensional operators, we consider the random-bond XY model. It has been argued that the XY model on a square lattice, when weakly perturbed by random phases, has a quasi-long-range ordered phase (the random spin wave phase) at sufficiently low temperatures. We show that infinitely many relevant perturbations to the proposed critical action for the random spin wave phase were omitted in all previous treatments. The physical origin of these perturbations is intimately related to the existence of broadly distributed correlation functions. We find that those relevant perturbations do enter the Renormalization Group equations, and affect critical behavior. This raises the possibility that the random XY model has no quasi-long-range ordered phase and no Kosterlitz-Thouless (KT) phase transition

  4. Inference of a random potential from random walk realizations: Formalism and application to the one-dimensional Sinai model with a drift

    International Nuclear Information System (INIS)

    Cocco, S; Monasson, R

    2009-01-01

    We consider the Sinai model, in which a random walker moves in a random quenched potential V, and ask the following questions: 1. how can the quenched potential V be inferred from the observations of one or more realizations of the random motion? 2. how many observations (walks) are required to make a reliable inference, that is, to be able to distinguish between two similar but distinct potentials, V1 and V2? We show how question 1 can be easily solved within the Bayesian framework. In addition, we show that the answer to question 2 is, in general, intimately connected to the calculation of the survival probability of a fictitious walker in a potential W defined from V1 and V2, with partial absorption at sites where V1 and V2 do not coincide. For the one-dimensional Sinai model, this survival probability can be analytically calculated, in excellent agreement with numerical simulations.
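    Question 1 can be caricatured in a much-simplified setting: assume the walker at site i steps right with probability p_i = 1/(1 + exp(dV_i)), where dV_i = V[i+1] - V[i] in units of k_B T (a logistic bias; the actual Sinai-model dynamics are richer). With a flat Beta(1, 1) prior, the posterior mean of p_i follows from step counts, and inverting the bias recovers the local potential differences.

```python
# Sketch: Bayesian inference of local potential differences from observed
# walk realizations, under the simplifying assumption of a logistic
# right-step bias p_i = 1 / (1 + exp(dV_i)).

import math

def infer_dV(right_counts, left_counts):
    """Posterior-mean estimate of dV_i at each site from step counts."""
    dV = []
    for r, l in zip(right_counts, left_counts):
        p = (r + 1) / (r + l + 2)        # Beta(1, 1) posterior mean
        dV.append(math.log((1 - p) / p))  # invert the logistic bias
    return dV
```

    More observed steps per site tighten the posterior on p_i, which is the intuition behind question 2: distinguishing two similar potentials needs enough walks to resolve the sites where they differ.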

  5. Criticality of the random-site Ising model: Metropolis, Swendsen-Wang and Wolff Monte Carlo algorithms

    Directory of Open Access Journals (Sweden)

    D.Ivaneyko

    2005-01-01

    Full Text Available We apply numerical simulations to study the criticality of the 3D Ising model with random site quenched dilution. The emphasis is given to issues not discussed in detail before. In particular, we attempt a comparison of different Monte Carlo techniques, discussing regions of their applicability and advantages/disadvantages depending on the aim of a particular simulation set. Moreover, besides evaluation of the critical indices we estimate the universal ratio Γ+/Γ- for the magnetic susceptibility critical amplitudes. Our estimate Γ+/Γ- = 1.67 ± 0.15 is in good agreement with the recent MC analysis of the random-bond Ising model, giving further support that both random-site and random-bond dilutions lead to the same universality class.
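    Of the three algorithms compared, Metropolis is the simplest to sketch for the site-diluted case: vacant sites carry spin 0 and drop out of the local field. Lattice size, dilution, and inverse temperature below are toy values, not the simulation parameters of the study.

```python
# Sketch: single-spin Metropolis updates for a site-diluted 3D Ising
# model with J = 1 and periodic boundaries. Toy parameters.

import math
import random

def init_lattice(L, p_occupied, rng):
    """Site-diluted lattice: vacant sites hold 0, occupied sites +/-1."""
    return [[[(1 if rng.random() < 0.5 else -1)
              if rng.random() < p_occupied else 0
              for _ in range(L)] for _ in range(L)] for _ in range(L)]

def metropolis_sweep(spins, L, beta, rng):
    """One sweep: L^3 attempted single-spin flips."""
    neighbours = ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                  (0, -1, 0), (0, 0, 1), (0, 0, -1))
    for _ in range(L ** 3):
        x, y, z = rng.randrange(L), rng.randrange(L), rng.randrange(L)
        s = spins[x][y][z]
        if s == 0:
            continue  # quenched vacancy: never updated
        h = sum(spins[(x + dx) % L][(y + dy) % L][(z + dz) % L]
                for dx, dy, dz in neighbours)
        dE = 2.0 * s * h  # energy cost of flipping s -> -s
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[x][y][z] = -s
```

    Cluster algorithms such as Swendsen-Wang and Wolff update many spins at once and decorrelate faster near criticality, which is one axis of the comparison the abstract describes.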

  6. Classifier utility modeling and analysis of hypersonic inlet start/unstart considering training data costs

    Science.gov (United States)

    Chang, Juntao; Hu, Qinghua; Yu, Daren; Bao, Wen

    2011-11-01

    Start/unstart detection is one of the most important issues of hypersonic inlets and is also the foundation of protection control of scramjets. Inlet start/unstart detection can be cast as a standard pattern classification problem, and training sample costs have to be considered in classifier modeling because the CFD numerical simulations and wind tunnel experiments of hypersonic inlets both cost time and money. To solve this problem, a CFD simulation of the inlet is studied as a first step, and the simulation results provide the training data for pattern classification of hypersonic inlet start/unstart. Then classifier modeling technology and maximum classifier utility theories are introduced to analyze the effect of training data cost on classifier utility. In conclusion, it is useful to introduce support vector machine algorithms to acquire the classifier model of hypersonic inlet start/unstart, and the minimum total cost of the hypersonic inlet start/unstart classifier can be obtained by the maximum classifier utility theories.

  7. Modeling of ultrasonic processes utilizing a generic software framework

    Science.gov (United States)

    Bruns, P.; Twiefel, J.; Wallaschek, J.

    2017-06-01

    Modeling of ultrasonic processes is typically characterized by a high degree of complexity. Different domains and size scales must be regarded, so that it is rather difficult to build up a single detailed overall model. Developing partial models is a common approach to overcome this difficulty. In this paper a generic but simple software framework is presented which allows arbitrary partial models to be coupled by slave modules with well-defined interfaces and a master module for coordination. Two examples are given to present the developed framework. The first one is the parameterization of a load model for ultrasonically-induced cavitation. The piezoelectric oscillator, its mounting, and the process load are described individually by partial models. These partial models are then coupled using the framework. The load model is composed of spring-damper elements which are parameterized by experimental results. In the second example, the ideal mounting position for an oscillator utilized in ultrasonic assisted machining of stone is determined. Partial models for the ultrasonic oscillator, its mounting, the simplified contact process, and the workpiece’s material characteristics are presented. For both applications input and output variables are defined to meet the requirements of the framework’s interface.
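    The master/slave coupling idea can be sketched in miniature: slave modules expose a uniform step(inputs) -> outputs interface, and a master module iterates over them, passing named quantities between them. The module names and toy physics below are invented for illustration, not taken from the paper.

```python
# Sketch: a minimal master/slave coupling framework. Each partial model
# (slave) reads named quantities from a shared state and writes its own
# outputs back; the master coordinates the exchange.

class Slave:
    """Well-defined interface every partial model implements."""
    def step(self, inputs):
        raise NotImplementedError

class Oscillator(Slave):
    def step(self, inputs):
        # Toy partial model: tip velocity from drive voltage.
        return {"velocity": 0.1 * inputs.get("voltage", 0.0)}

class ProcessLoad(Slave):
    def step(self, inputs):
        # Toy load model: damping force proportional to velocity.
        return {"force": -5.0 * inputs.get("velocity", 0.0)}

class Master:
    def __init__(self, slaves):
        self.slaves = slaves
    def run(self, state, steps):
        for _ in range(steps):
            for slave in self.slaves:
                state.update(slave.step(state))
        return state
```

    Because each slave only touches the shared state through named quantities, a partial model can be swapped out (for example, a different load model) without changing the master or the other slaves.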

  8. Entropy-optimal weight constraint elicitation with additive multi-attribute utility models

    NARCIS (Netherlands)

    Valkenhoef , van Gert; Tervonen, Tommi

    2016-01-01

    We consider the elicitation of incomplete preference information for the additive utility model in terms of linear constraints on the weights. Eliciting incomplete preferences using holistic pair-wise judgments is convenient for the decision maker, but selecting the best pair-wise comparison is

  9. On the Path to SunShot - Utility Regulatory Business Model Reforms forAddressing the Financial Impacts of Distributed Solar on Utilities

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2016-05-01

    Net-energy metering (NEM) with volumetric retail electricity pricing has enabled rapid proliferation of distributed photovoltaics (DPV) in the United States. However, this transformation is raising concerns about the potential for higher electricity rates and cost-shifting to non-solar customers, reduced utility shareholder profitability, reduced utility earnings opportunities, and inefficient resource allocation. Although DPV deployment in most utility territories remains too low to produce significant impacts, these concerns have motivated real and proposed reforms to utility regulatory and business models, with profound implications for future DPV deployment. This report explores the challenges and opportunities associated with such reforms in the context of the U.S. Department of Energy’s SunShot Initiative. As such, the report focuses on a subset of a broader range of reforms underway in the electric utility sector. Drawing on original analysis and existing literature, we analyze the significance of DPV’s financial impacts on utilities and non-solar ratepayers under current NEM rules and rate designs, the projected effects of proposed NEM and rate reforms on DPV deployment, and alternative reforms that could address utility and ratepayer concerns while supporting continued DPV growth. We categorize reforms into one or more of four conceptual strategies. Understanding how specific reforms map onto these general strategies can help decision makers identify and prioritize options for addressing specific DPV concerns that balance stakeholder interests.

  10. GENERATION OF MULTI-LOD 3D CITY MODELS IN CITYGML WITH THE PROCEDURAL MODELLING ENGINE RANDOM3DCITY

    Directory of Open Access Journals (Sweden)

    F. Biljecki

    2016-09-01

    Full Text Available The production and dissemination of semantic 3D city models is rapidly increasing, benefiting a growing number of use cases. However, their availability in multiple LODs and in the CityGML format is still problematic in practice. This hinders applications and experiments where multi-LOD datasets are required as input, for instance, to determine the performance of different LODs in a spatial analysis. An alternative approach to obtain 3D city models is to generate them with procedural modelling, which is – as we discuss in this paper – well suited as a method to source multi-LOD datasets useful for a number of applications. However, procedural modelling has not yet been employed for this purpose. Therefore, we have developed RANDOM3DCITY, an experimental procedural modelling engine for generating synthetic datasets of buildings and other urban features. The engine is designed to produce models in CityGML and does so in multiple LODs. Besides the generation of multiple geometric LODs, we implement the realisation of multiple levels of spatiosemantic coherence, geometric reference variants, and indoor representations. As a result of their permutations, each building can be generated in 392 different CityGML representations, an unprecedented number of modelling variants of the same feature. The datasets produced by RANDOM3DCITY are suited for several applications, as we show in this paper with documented uses. The developed engine is available under an open-source licence on GitHub at http://github.com/tudelft3d/Random3Dcity.

  11. Random effects model for the reliability management of modules of a fighter aircraft

    Energy Technology Data Exchange (ETDEWEB)

    Sohn, So Young [Department of Computer Science and Industrial Systems Engineering, Yonsei University, Shinchondong 134, Seoul 120-749 (Korea, Republic of)]. E-mail: sohns@yonsei.ac.kr; Yoon, Kyung Bok [Department of Computer Science and Industrial Systems Engineering, Yonsei University, Shinchondong 134, Seoul 120-749 (Korea, Republic of)]. E-mail: ykb@yonsei.ac.kr; Chang, In Sang [Department of Computer Science and Industrial Systems Engineering, Yonsei University, Shinchondong 134, Seoul 120-749 (Korea, Republic of)]. E-mail: isjang@yonsei.ac.kr

    2006-04-15

    The operational availability of fighter aircraft plays an important role in national defense. Low operational availability of fighter aircraft can cause many problems, and ROKA (Republic of Korea Air Force) needs proper strategies to improve the current practice of reliability management by accurately forecasting both MTBF (mean time between failures) and MTTR (mean time to repair). In this paper, we develop a random effects model to forecast both the MTBF and MTTR of installed modules of fighter aircraft based on their characteristics and operational conditions. The advantage of using such a random effects model is its ability to accommodate not only the individual characteristics of each module and its operational conditions but also the uncertainty caused by random error that cannot be explained by them. Our study is expected to contribute to ROKA in improving the operational availability of fighter aircraft and establishing effective logistics management.
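    The shrinkage behaviour that makes a random effects model attractive for per-module MTBF forecasting can be sketched with a simple empirical-Bayes caricature: module-level means are pulled toward the fleet mean, more strongly when a module has few observations. The variance components below are assumed values, and the paper's module covariates are omitted.

```python
# Sketch: empirical-Bayes shrinkage of per-module MTBF estimates toward
# the fleet mean. Variance components are illustrative assumptions.

def shrunken_mtbf(module_times, var_between=100.0, var_within=400.0):
    all_times = [t for times in module_times.values() for t in times]
    grand_mean = sum(all_times) / len(all_times)
    estimates = {}
    for module, times in module_times.items():
        n = len(times)
        mean = sum(times) / n
        # Weight on the module's own data grows with its sample size.
        w = var_between / (var_between + var_within / n)
        estimates[module] = w * mean + (1 - w) * grand_mean
    return estimates
```

    A module with many failure records keeps an estimate close to its own mean, while a sparsely observed module borrows strength from the fleet, which is exactly the random-error accommodation the abstract highlights.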

  12. Logging utilization in Idaho: Current and past trends

    Science.gov (United States)

    Eric A. Simmons; Todd A. Morgan; Erik C. Berg; Stanley J. Zarnoch; Steven W. Hayes; Mike T. Thompson

    2014-01-01

    A study of commercial timber-harvesting activities in Idaho was conducted during 2008 and 2011 to characterize current tree utilization, logging operations, and changes from previous Idaho logging utilization studies. A two-stage simple random sampling design was used to select sites and felled trees for measurement within active logging sites. Thirty-three logging...

  13. Simulating Urban Growth Using a Random Forest-Cellular Automata (RF-CA Model

    Directory of Open Access Journals (Sweden)

    Courage Kamusoko

    2015-04-01

    Sustainable urban planning and management require reliable land change models, which can be used to improve decision making. The objective of this study was to test a random forest-cellular automata (RF-CA) model, which combines random forest (RF) and cellular automata (CA) models. The Kappa simulation (KSimulation), figure of merit, and components of agreement and disagreement statistics were used to validate the RF-CA model. Furthermore, the RF-CA model was compared with support vector machine cellular automata (SVM-CA) and logistic regression cellular automata (LR-CA) models. Results show that the RF-CA model outperformed the SVM-CA and LR-CA models. The RF-CA model had a KSimulation accuracy of 0.51 (with a figure of merit statistic of 47%), while the SVM-CA and LR-CA models had KSimulation accuracies of 0.39 and −0.22 (with figure of merit statistics of 39% and 6%), respectively. Generally, the RF-CA model was relatively accurate at allocating “non-built-up to built-up” changes, as reflected by the correct “non-built-up to built-up” component of agreement of 15%. The performance of the RF-CA model was attributed to its relatively accurate RF transition potential maps. This study therefore highlights the potential of the RF-CA model for simulating urban growth.
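    The cellular-automata half of such a coupled model can be sketched in a few lines. Here a random surface stands in for the random-forest transition potentials, and the conversion rule is a deliberately simple illustration, not the study's calibrated rule:

```python
import numpy as np

rng = np.random.default_rng(1)

# One CA update step. In an RF-CA model the transition-potential surface comes
# from a trained random forest; a random surface stands in for it here.
n = 50
built = rng.random((n, n)) < 0.1       # initial built-up cells
potential = rng.random((n, n))         # stand-in for RF transition potentials

# Count built-up neighbours in the 3x3 Moore neighbourhood via a padded sum.
padded = np.pad(built.astype(int), 1)
neigh = sum(padded[i:i + n, j:j + n] for i in range(3) for j in range(3)) - built

# A cell converts when its potential is high and at least one neighbour is built.
new_built = built | ((potential > 0.8) & (neigh >= 1))
```

Iterating this step, with potentials refreshed from the RF at each epoch, is the basic simulation loop; validation statistics such as KSimulation then compare `new_built` against an observed map.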

  14. High-utilizing Crohn's disease patients under psychosomatic therapy*

    Directory of Open Access Journals (Sweden)

    Jantschek Günther

    2008-10-01

    Abstract Objective Few studies have been published on health care utilization in Crohn's disease and the influence of psychological treatment on high utilizers. Methods The present substudy of a prospective multicentre investigation, conducted in 87 of 488 consecutive Crohn's disease (CD) patients, was designed to investigate the influence of the course of Crohn's disease on health care utilization (hospital days (HD) and sick leave days (SLD), collected by German insurance companies) and to examine the conditions of high-utilizing patients. Predictors of health care utilization were to be selected. On the basis of a standardized somatic treatment, high health care utilizing patients of the psychotherapy and control groups were compared before and after a one-year treatment. Results Multivariate regression analysis identified disease activity at randomization as an important predictor of the clinical course (r2 = 0.28). The patients' level of anxiety, depression and lack of control at randomization predicted their health-related quality of life at the end of the study (r2 = 0.51). Among high utilizers, a significantly greater drop in HD was observed in the psychotherapy group. Conclusion The course of Crohn's disease is influenced by psychological as well as somatic factors; depression in particular seems important here. A significant drop in health care utilization demonstrates the benefit of psychological treatment in the subgroup of high-utilizing CD patients. Further studies are needed to replicate the findings of the clinical outcome in this CD subgroup.

  15. Statistical shape model with random walks for inner ear segmentation

    DEFF Research Database (Denmark)

    Pujadas, Esmeralda Ruiz; Kjer, Hans Martin; Piella, Gemma

    2016-01-01

    is required. We propose a new framework for segmentation of micro-CT cochlear images using random walks combined with a statistical shape model (SSM). The SSM allows us to constrain the less contrasted areas and ensures valid inner ear shape outputs. Additionally, a topology preservation method is proposed...

  16. Conditional Random Fields versus Hidden Markov Models for activity recognition in temporal sensor data

    NARCIS (Netherlands)

    van Kasteren, T.L.M.; Noulas, A.K.; Kröse, B.J.A.; Smit, G.J.M.; Epema, D.H.J.; Lew, M.S.

    2008-01-01

    Conditional Random Fields are a discriminative probabilistic model which recently gained popularity in applications that require modeling nonindependent observation sequences. In this work, we present the basic advantages of this model over generative models and argue about its suitability in the

  17. Fluoxetine and imipramine: are there differences in cost-utility for depression in primary care?

    Science.gov (United States)

    Serrano-Blanco, Antoni; Suárez, David; Pinto-Meza, Alejandra; Peñarrubia, Maria T; Haro, Josep Maria

    2009-02-01

    Depressive disorders generate severe personal burden and high economic costs. Cost-utility analyses of the different therapeutical options are crucial to policy-makers and clinicians. Previous cost-utility studies, comparing selective serotonin reuptake inhibitors and tricyclic antidepressants, have used modelling techniques or have not included indirect costs in the economic analyses. To determine the cost-utility of fluoxetine compared with imipramine for treating depressive disorders in primary care. A 6-month randomized prospective naturalistic study comparing fluoxetine with imipramine was conducted in three primary care centres in Spain. One hundred and three patients requiring antidepressant treatment for a DSM-IV depressive disorder were included in the study. Patients were randomized either to fluoxetine (53 patients) or to imipramine (50 patients) treatment. Patients were treated with antidepressants according to their general practitioner's usual clinical practice. Outcome measures were the quality of life tariff of the European Quality of Life Questionnaire: EuroQoL-5D (five domains), direct costs, indirect costs and total costs. Subjects were evaluated at the beginning of treatment and after 1, 3 and 6 months. Incremental cost-utility ratios (ICUR) were obtained. To address uncertainty in the ICUR's sampling distribution, non-parametric bootstrapping was carried out. Taking into account adjusted total costs and incremental quality of life gained, imipramine dominated fluoxetine with 81.5% of the bootstrap replications in the dominance quadrant. Imipramine seems to be a better cost-utility antidepressant option for treating depressive disorders in primary care.
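    The non-parametric bootstrap step used to handle the ICUR's sampling uncertainty can be illustrated with synthetic data. All costs, QALY values, and arm sizes below are invented, not the study's:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic per-patient costs and QALY gains for two hypothetical arms.
cost_a = rng.normal(900, 200, 53)    # arm A (e.g. the pricier drug)
cost_b = rng.normal(700, 200, 50)    # arm B
qaly_a = rng.normal(0.30, 0.08, 53)
qaly_b = rng.normal(0.36, 0.08, 50)

# Non-parametric bootstrap of the incremental cost-utility ratio (ICUR):
# resample each arm with replacement and recompute (delta cost, delta QALY).
boots = []
for _ in range(2000):
    ca = rng.choice(cost_a, cost_a.size); cb = rng.choice(cost_b, cost_b.size)
    qa = rng.choice(qaly_a, qaly_a.size); qb = rng.choice(qaly_b, qaly_b.size)
    boots.append((ca.mean() - cb.mean(), qa.mean() - qb.mean()))
boots = np.array(boots)

# Fraction of replications where arm B dominates (A costs more, gains fewer QALYs).
dominance = np.mean((boots[:, 0] > 0) & (boots[:, 1] < 0))
```

Plotting the `boots` cloud on the cost-effectiveness plane and reporting the share of replications in the dominance quadrant is exactly the kind of summary the abstract quotes (81.5% in the study's case).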

  18. An explicit semantic relatedness measure based on random walk

    Directory of Open Access Journals (Sweden)

    HU Sihui

    2016-10-01

    The semantic relatedness calculation of an open-domain knowledge network is a significant issue. In this paper, a pheromone strategy is drawn from the thought of the ant colony algorithm and integrated into the random walk, which is taken as the basic framework for calculating the semantic relatedness degree. The pheromone distribution is taken as a criterion for determining the tightness of semantic relatedness. A method of calculating the semantic relatedness degree based on random walk is proposed, and the exploration process of calculating the semantic relatedness degree is presented in an explicit way. The method mainly contains a Path Select Model (PSM) and a Semantic Relatedness Computing Model (SRCM). The PSM is used to simulate the path selection of ants and pheromone release. The SRCM is used to calculate the semantic relatedness by utilizing the information returned by the ants. The results indicate that the method can complete the semantic relatedness calculation in linear complexity and extends the feasible strategies of semantic relatedness calculation.
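    A toy sketch of a pheromone-biased random walk on a small concept graph. The graph, the reinforcement rule, and the relatedness formula are illustrative assumptions in the spirit of the PSM/SRCM split, not the paper's actual models:

```python
import random

random.seed(3)

# Tiny undirected concept graph (hypothetical).
graph = {"cat": ["pet", "animal"], "pet": ["cat", "dog"], "dog": ["pet", "animal"],
         "animal": ["cat", "dog", "plant"], "plant": ["animal"]}
pher = {(u, v): 1.0 for u in graph for v in graph[u]}   # initial pheromone

def walk(src, dst, max_steps=30):
    """One ant: walk from src choosing edges by pheromone; reinforce on success."""
    node, path = src, []
    for _ in range(max_steps):
        nbrs = graph[node]
        nxt = random.choices(nbrs, [pher[(node, v)] for v in nbrs])[0]
        path.append((node, nxt))
        node = nxt
        if node == dst:
            for e in path:                 # pheromone release along the used path
                pher[e] += 1.0 / len(path)
            return len(path)
    return None

# Relatedness probe: shorter pheromone-guided paths imply tighter relatedness.
lengths = [l for l in (walk("cat", "dog") for _ in range(500)) if l is not None]
relatedness = 1.0 / (sum(lengths) / len(lengths))
```

Early ants explore; later ants increasingly follow reinforced edges, so the mean path length (and hence the relatedness score) stabilises as the pheromone distribution converges.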

  19. Rényi Entropies from Random Quenches in Atomic Hubbard and Spin Models

    Science.gov (United States)

    Elben, A.; Vermersch, B.; Dalmonte, M.; Cirac, J. I.; Zoller, P.

    2018-02-01

    We present a scheme for measuring Rényi entropies in generic atomic Hubbard and spin models using single copies of a quantum state and for partitions in arbitrary spatial dimensions. Our approach is based on the generation of random unitaries from random quenches, implemented using engineered time-dependent disorder potentials, and standard projective measurements, as realized by quantum gas microscopes. By analyzing the properties of the generated unitaries and the role of statistical errors, with respect to the size of the partition, we show that the protocol can be realized in existing quantum simulators and used to measure, for instance, area law scaling of entanglement in two-dimensional spin models or the entanglement growth in many-body localized systems.

  20. Utilizing Data Mining for Predictive Modeling of Colorectal Cancer using Electronic Medical Records

    NARCIS (Netherlands)

    Hoogendoorn, M.; Moons, L.G.; Numans, M.E.; Sips, R.J.

    2014-01-01

    Colorectal cancer (CRC) is a relatively common cause of death around the globe. Predictive models for the development of CRC could be highly valuable and could facilitate an early diagnosis and increased survival rates. Currently available predictive models are improving, but do not fully utilize

  1. Potts Model with Invisible Colors : Random-Cluster Representation and Pirogov–Sinai Analysis

    NARCIS (Netherlands)

    Enter, Aernout C.D. van; Iacobelli, Giulio; Taati, Siamak

    We study a recently introduced variant of the ferromagnetic Potts model consisting of a ferromagnetic interaction among q “visible” colors along with the presence of r non-interacting “invisible” colors. We introduce a random-cluster representation for the model, for which we prove the existence of

  2. Randomized random walk on a random walk

    International Nuclear Information System (INIS)

    Lee, P.A.

    1983-06-01

    This paper discusses generalizations of the model introduced by Kehr and Kunter of the random walk of a particle on a one-dimensional chain which in turn has been constructed by a random walk procedure. The superimposed random walk is randomised in time according to the occurrences of a stochastic point process. The probability of finding the particle in a particular position at a certain instant is obtained explicitly in the transform domain. It is found that the asymptotic behaviour for large time of the mean-square displacement of the particle depends critically on the assumed structure of the basic random walk, giving a diffusion-like term for an asymmetric walk or a square root law if the walk is symmetric. Many results are obtained in closed form for the Poisson process case, and these agree with those given previously by Kehr and Kunter. (author)
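    The construction is easy to simulate. The sketch below takes a fixed number of superimposed steps rather than Poisson-distributed event times (the randomisation in time does not change the asymptotics) and exhibits the square-root law for a symmetric base walk, where the mean-square displacement after t steps behaves like sqrt(2t/pi):

```python
import numpy as np

rng = np.random.default_rng(4)

# Base chain: site k sits at position X[k], itself generated by a symmetric
# random walk -- the "random walk on a random walk" construction.
n_sites = 4001
X = np.concatenate(([0], np.cumsum(rng.choice([-1, 1], n_sites - 1))))

# Superimposed walk: each particle takes n_steps symmetric steps on the chain
# index, starting from the middle site; displacement is measured in X-space.
n_walkers, n_steps = 2000, 400
start = n_sites // 2
disp = rng.choice([-1, 1], (n_walkers, n_steps)).sum(axis=1)
msd = np.mean((X[start + disp] - X[start]) ** 2)   # ~ sqrt(2*n_steps/pi) ~ 16
```

An asymmetric base chain would instead give a linear, diffusion-like growth of `msd`, matching the dichotomy described in the abstract.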

  3. Assessing the predictive capability of randomized tree-based ensembles in streamflow modelling

    Science.gov (United States)

    Galelli, S.; Castelletti, A.

    2013-07-01

    Combining randomization methods with ensemble prediction is emerging as an effective option to balance accuracy and computational efficiency in data-driven modelling. In this paper, we investigate the prediction capability of extremely randomized trees (Extra-Trees), in terms of accuracy, explanation ability and computational efficiency, in a streamflow modelling exercise. Extra-Trees are a totally randomized tree-based ensemble method that (i) alleviates the poor generalisation property and tendency to overfit of traditional standalone decision trees (e.g. CART); (ii) is computationally efficient; and (iii) allows one to infer the relative importance of the input variables, which might help in the ex-post physical interpretation of the model. The Extra-Trees potential is analysed on two real-world case studies, the Marina catchment (Singapore) and the Canning River (Western Australia), representing two different morphoclimatic contexts. The evaluation is performed against other tree-based methods (CART and M5) and parametric data-driven approaches (ANNs and multiple linear regression). Results show that Extra-Trees perform comparably to the best of the benchmarks (i.e. M5) in both watersheds, while outperforming the other approaches in terms of computational requirements when adopted on large datasets. In addition, the ranking of the input variables provided can be given a physically meaningful interpretation.
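    A minimal sketch of the approach using scikit-learn's `ExtraTreesRegressor` on synthetic "streamflow" data (all inputs invented, not the study's catchment data). The feature importances illustrate the ex-post ranking of input variables mentioned above:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(5)

# Synthetic regression task: flow driven mainly by rainfall plus an
# antecedent-flow term, with two irrelevant inputs added as distractors.
n = 1500
rain = rng.gamma(2.0, 2.0, n)
antecedent = rng.gamma(2.0, 1.0, n)
noise_a, noise_b = rng.normal(size=n), rng.normal(size=n)
flow = 3.0 * rain + 1.5 * antecedent + rng.normal(0, 0.5, n)

X = np.column_stack([rain, antecedent, noise_a, noise_b])
model = ExtraTreesRegressor(n_estimators=200, random_state=0)
model.fit(X[:1000], flow[:1000])

r2 = model.score(X[1000:], flow[1000:])      # out-of-sample accuracy
importances = model.feature_importances_     # ex-post ranking of the inputs
```

On data like this the importance of `rain` dominates, and the two distractor inputs rank near zero, which is the kind of physically interpretable ranking the abstract refers to.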

  4. Kinetics of transformations nucleated on random parallel planes: analytical modelling and computer simulation

    International Nuclear Information System (INIS)

    Rios, Paulo R; Assis, Weslley L S; Ribeiro, Tatiana C S; Villa, Elena

    2012-01-01

    In a classical paper, Cahn derived expressions for the kinetics of transformations nucleated on random planes and lines. He used those as a model for nucleation on the boundaries, edges and vertices of a polycrystal consisting of equiaxed grains. In this paper it is demonstrated that Cahn's expression for random planes may be used in situations beyond the scope envisaged in Cahn's original paper. For instance, we derived an expression for the kinetics of transformations nucleated on random parallel planes that is identical to that formerly obtained by Cahn considering random planes. Computer simulation of transformations nucleated on random parallel planes is carried out. It is shown that there is excellent agreement between simulated results and analytical solutions. Such an agreement is to be expected if both the simulation and the analytical solution are correct. (paper)

  5. Utilizing Gaze Behavior for Inferring Task Transitions Using Abstract Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Daniel Fernando Tello Gamarra

    2016-12-01

    We demonstrate an improved method for utilizing observed gaze behavior and show that it is useful in inferring hand movement intent during goal-directed tasks. The task dynamics and the relationship between hand and gaze behavior are learned using an Abstract Hidden Markov Model (AHMM). We show that the predicted hand movement transitions occur consistently earlier in AHMM models with gaze than in those models that do not include gaze observations.
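    The filtering step underlying such models can be illustrated with a plain HMM forward recursion. The states and the transition and emission probabilities below are made up, and the abstraction layers that distinguish an AHMM from a flat HMM are not modelled:

```python
import numpy as np

# Two hidden task states, three discrete gaze regions (all numbers invented).
A = np.array([[0.9, 0.1],        # task-state transition matrix
              [0.2, 0.8]])
B = np.array([[0.7, 0.2, 0.1],   # P(gaze region | state 0)
              [0.1, 0.3, 0.6]])  # P(gaze region | state 1)
pi = np.array([0.5, 0.5])        # initial state distribution

def forward_filter(obs):
    """Return the filtered state probabilities after each observation."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    out = [alpha]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # predict, then correct with emission
        alpha /= alpha.sum()
        out.append(alpha)
    return np.array(out)

beliefs = forward_filter([0, 0, 2, 2, 2])   # gaze shifts from region 0 to 2
```

The belief mass moving toward state 1 as the gaze stream changes is the mechanism by which gaze observations let the model flag an upcoming task transition earlier.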

  6. 93-106, 2015 93 Multilevel random effect and marginal models

    African Journals Online (AJOL)

    injected by the candidate vaccine have a lower or higher risk for the occurrence of ... outcome relationship and test whether subjects inject- ... contains an agent that resembles a disease-causing ... to have different random effect variability at each cat- ... In the marginal models settings, the responses are ... Behavior as usual.

  7. Gravitational lensing by eigenvalue distributions of random matrix models

    Science.gov (United States)

    Martínez Alonso, Luis; Medina, Elena

    2018-05-01

    We propose to use eigenvalue densities of unitary random matrix ensembles as mass distributions in gravitational lensing. The corresponding lens equations reduce to algebraic equations in the complex plane which can be treated analytically. We prove that these models can be applied to describe lensing by systems of edge-on galaxies. We illustrate our analysis with the Gaussian and the quartic unitary matrix ensembles.
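    A quick numerical illustration of one such eigenvalue density: the spectrum of a Gaussian unitary ensemble (GUE) matrix, which for large N concentrates on Wigner's semicircle law on [−2, 2]. This is only a sketch of the mass profile, not of the paper's lens equations:

```python
import numpy as np

rng = np.random.default_rng(7)

# Draw a GUE matrix: complex Gaussian entries, Hermitized.
N = 400
H = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
H = (H + H.conj().T) / 2

# Scale eigenvalues by sqrt(N) so the limiting support is [-2, 2].
evals = np.linalg.eigvalsh(H) / np.sqrt(N)

# Empirical fraction of eigenvalues inside the semicircle support.
inside = np.mean(np.abs(evals) <= 2.0)
```

A histogram of `evals` against the density (1/2π)·sqrt(4 − x²) makes the semicircle visible already at N = 400; it is densities of this analytic form that make the corresponding lens equations tractable.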

  8. Random resistor network model of minimal conductivity in graphene.

    Science.gov (United States)

    Cheianov, Vadim V; Fal'ko, Vladimir I; Altshuler, Boris L; Aleiner, Igor L

    2007-10-26

    Transport in undoped graphene is related to percolating current patterns in the networks of n- and p-type regions reflecting the strong bipolar charge density fluctuations. Finite transparency of the p-n junctions is vital in establishing the macroscopic conductivity. We propose a random resistor network model to analyze scaling dependencies of the conductance on the doping and disorder, the quantum magnetoresistance and the corresponding dephasing rate.

  9. A signal theoretic introduction to random processes

    CERN Document Server

    Howard, Roy M

    2015-01-01

    A fresh introduction to random processes utilizing signal theory. By incorporating a signal theory basis, A Signal Theoretic Introduction to Random Processes presents a unique introduction to random processes with an emphasis on the important random phenomena encountered in the electronic and communications engineering field. The strong mathematical and signal theory basis provides clarity and precision in the statement of results. The book also features: a coherent account of the mathematical fundamentals and signal theory that underpin the presented material; unique, in-depth coverage of

  10. Derrida's Generalized Random Energy models 4: Continuous state branching and coalescents

    CERN Document Server

    Bovier, A

    2003-01-01

    In this paper we conclude our analysis of Derrida's Generalized Random Energy Models (GREM) by identifying the thermodynamic limit with a one-parameter family of probability measures related to a continuous state branching process introduced by Neveu. Using a construction introduced by Bertoin and Le Gall in terms of a coherent family of subordinators related to Neveu's branching process, we show how the Gibbs geometry of the limiting Gibbs measure is given in terms of the genealogy of this process via a deterministic time-change. This construction is fully universal in that all different models (characterized by the covariance of the underlying Gaussian process) differ only through that time change, which in turn is expressed in terms of Parisi's overlap distribution. The proof uses strongly the Ghirlanda-Guerra identities that impose the structure of Neveu's process as the only possible asymptotic random mechanism.

  11. Utility Computing: Reality and Beyond

    Science.gov (United States)

    Ivanov, Ivan I.

    Utility Computing is not a new concept. It involves organizing and providing a wide range of computing-related services as public utilities. Much like water, gas, electricity and telecommunications, the concept of computing as a public utility was announced in 1955. Utility Computing remained a concept for nearly 50 years. Now some models and forms of Utility Computing are emerging, such as storage and server virtualization, grid computing, and automated provisioning. Recent trends in Utility Computing as a complex technology involve business procedures that could profoundly transform the nature of companies' IT services, organizational IT strategies and technology infrastructure, and business models. In the ultimate Utility Computing models, organizations will be able to acquire as many IT services as they need, whenever and wherever they need them. Based on networked businesses and new secure online applications, Utility Computing would facilitate "agility-integration" of IT resources and services within and between virtual companies. With the application of Utility Computing there could be concealment of the complexity of IT, reduction of operational expenses, and conversion of IT costs to variable `on-demand' services. How far should technology, business and society go to adopt Utility Computing forms, modes and models?

  12. Random broadcast on random geometric graphs

    Energy Technology Data Exchange (ETDEWEB)

    Bradonjic, Milan [Los Alamos National Laboratory; Elsasser, Robert [UNIV OF PADERBORN; Friedrich, Tobias [ICSI/BERKELEY; Sauerwald, Tomas [ICSI/BERKELEY

    2009-01-01

    In this work, we consider the random broadcast time on random geometric graphs (RGGs). The classic random broadcast model, also known as the push algorithm, is defined as follows: starting with one informed node, in each succeeding round every informed node chooses one of its neighbors uniformly at random and informs it. We consider the random broadcast time on RGGs when, with high probability: (i) the RGG is connected, or (ii) there exists a giant component in the RGG. We show that the random broadcast time is bounded by O(√n + diam(component)), where diam(component) is the diameter of the entire graph, or of the giant component, for regimes (i) and (ii), respectively. In other words, for both regimes, we derive the broadcast time to be Θ(diam(G)), which is asymptotically optimal.
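    The push model is straightforward to simulate on an RGG. A sketch with assumed n and r (r chosen comfortably above the connectivity threshold, which scales like sqrt(log n / (π n))):

```python
import numpy as np

rng = np.random.default_rng(8)

# Random geometric graph: n points in the unit square, an edge whenever two
# points lie within distance r.
n, r = 300, 0.25
pts = rng.random((n, 2))
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
adj = [np.flatnonzero((dist[i] <= r) & (np.arange(n) != i)) for i in range(n)]

# Push protocol: in each round, every informed node informs one uniformly
# random neighbour; nodes informed this round start pushing next round.
informed = np.zeros(n, dtype=bool)
informed[0] = True
rounds = 0
while not informed.all() and rounds < 1000:
    rounds += 1
    for i in np.flatnonzero(informed):          # round-start snapshot of senders
        nbrs = adj[i]
        if nbrs.size:
            informed[nbrs[rng.integers(nbrs.size)]] = True
```

Because the frontier advances at most one hop per round, `rounds` is at least the eccentricity of the source, consistent with the Θ(diam(G)) behaviour stated above.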

  13. Using observation-level random effects to model overdispersion in count data in ecology and evolution

    Directory of Open Access Journals (Sweden)

    Xavier A. Harrison

    2014-10-01

    Overdispersion is common in models of count data in ecology and evolutionary biology, and can occur due to missing covariates, non-independent (aggregated) data, or an excess frequency of zeroes (zero-inflation). Accounting for overdispersion in such models is vital, as failing to do so can lead to biased parameter estimates and false conclusions regarding hypotheses of interest. Observation-level random effects (OLRE), where each data point receives a unique level of a random effect that models the extra-Poisson variation present in the data, are commonly employed to cope with overdispersion in count data. However, studies investigating the efficacy of observation-level random effects as a means to deal with overdispersion are scarce. Here I use simulations to show that in cases where overdispersion is caused by random extra-Poisson noise, or by aggregation in the count data, observation-level random effects yield more accurate parameter estimates than when overdispersion is simply ignored. Conversely, OLRE fail to reduce bias in zero-inflated data, and in some cases increase bias at high levels of overdispersion. There was a positive relationship between the magnitude of overdispersion and the degree of bias in parameter estimates. Critically, the simulations reveal that failing to account for overdispersion in mixed models can erroneously inflate measures of explained variance (r2), which may lead researchers to overestimate the predictive power of variables of interest. This work suggests that observation-level random effects provide a simple and robust means to account for overdispersion in count data, but also that their ability to minimise bias is not uniform across all types of overdispersion, so they must be applied judiciously.
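    The mechanism can be shown directly with simulation. Here a log-normal multiplier plays the role an OLRE plays on the link scale, and a Pearson dispersion statistic flags the resulting overdispersion; no GLMM is actually fitted in this sketch:

```python
import numpy as np

rng = np.random.default_rng(9)

# Counts with extra-Poisson noise: a log-normal observation-level multiplier is
# exactly what an OLRE term represents on the log link scale.
n, mu, sigma = 2000, 5.0, 0.7
olre = rng.normal(0.0, sigma, n)                       # one effect per data point
y_over = rng.poisson(mu * np.exp(olre - sigma**2 / 2)) # overdispersed counts
y_pois = rng.poisson(mu, n)                            # plain Poisson comparison

def dispersion(y):
    """Pearson dispersion statistic: ~1 for Poisson, >1 when overdispersed."""
    m = y.mean()
    return np.sum((y - m) ** 2 / m) / (y.size - 1)

d_over, d_pois = dispersion(y_over), dispersion(y_pois)
```

With these settings the overdispersed series has variance mu + mu²(exp(sigma²) − 1), roughly four times its mean, while the plain Poisson series sits near a dispersion of 1; an OLRE term in a mixed model absorbs exactly this extra variance.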

  14. Animal Models Utilized in HTLV-1 Research

    Directory of Open Access Journals (Sweden)

    Amanda R. Panfil

    2013-01-01

    Since the isolation and discovery of human T-cell leukemia virus type 1 (HTLV-1) over 30 years ago, researchers have utilized animal models to study HTLV-1 transmission, viral persistence, virus-elicited immune responses, and HTLV-1-associated disease development (ATL, HAM/TSP). Non-human primates, rabbits, rats, and mice have all been used to help understand HTLV-1 biology and disease progression. Non-human primates offer a model system that is phylogenetically similar to humans for examining viral persistence. Viral transmission, persistence, and immune responses have been widely studied using New Zealand White rabbits. The advent of molecular clones of HTLV-1 has offered the opportunity to assess the importance of various viral genes in rabbits, non-human primates, and mice. Additionally, over-expression of viral genes using transgenic mice has helped uncover the importance of Tax and Hbz in the induction of lymphoma and other lymphocyte-mediated diseases. HTLV-1 inoculation of certain strains of rats results in histopathological features and clinical symptoms similar to those of humans with HAM/TSP. Transplantation of certain types of ATL cell lines in immunocompromised mice results in lymphoma. Recently, “humanized” mice have been used to model ATL development for the first time. Not all HTLV-1 animal models develop disease, and those that do vary in consistency depending on the type of monkey, strain of rat, or even type of ATL cell line used. However, the progress made using animal models cannot be overstated, as it has led to insights into the mechanisms regulating viral replication, viral persistence, disease development, and, most importantly, model systems to test disease treatments.

  15. On the design of henon and logistic map-based random number generator

    Science.gov (United States)

    Magfirawaty; Suryadi, M. T.; Ramli, Kalamullah

    2017-10-01

    The key sequence is one of the main elements of a cryptosystem. The True Random Number Generator (TRNG) method is one of the approaches to generating the key sequence. The randomness sources of TRNGs are divided into three main groups: electrical-noise based, jitter based, and chaos based. The chaos-based approach utilizes a non-linear dynamic system (continuous or discrete time) as an entropy source. In this study, a new design of TRNG based on a discrete-time chaotic system is proposed, which is then simulated in LabVIEW. The principle of the design consists of combining 2D and 1D chaotic systems. A mathematical model is implemented for numerical simulations. We used a comparator process as a harvester method to obtain the series of random bits. Without any post-processing, the proposed design generated a random bit sequence with high entropy value that passed all NIST SP 800-22 statistical tests.
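    A minimal sketch of the combining idea: a Henon (2D) and a logistic (1D) orbit, each passed through a comparator and XOR-combined. The maps use textbook parameters, and this harvester is a simplification of the paper's design, not a faithful reproduction (nor, being fully deterministic, a true TRNG):

```python
# Combine a 2D (Henon) and a 1D (logistic) chaotic map into a bit harvester.
def bits(n, x0=0.1, y0=0.3, z0=0.37):
    x, y, z = x0, y0, z0
    out = []
    for _ in range(n):
        x, y = 1 - 1.4 * x * x + y, 0.3 * x   # Henon map (a=1.4, b=0.3)
        z = 4.0 * z * (1 - z)                 # logistic map (r=4)
        b1 = 1 if z > 0.5 else 0              # comparator on the logistic orbit
        b2 = 1 if x > 0.0 else 0              # comparator on the Henon orbit
        out.append(b1 ^ b2)                   # XOR-combine the two streams
    return out

stream = bits(5000)
balance = sum(stream) / len(stream)   # fraction of ones, ideally near 0.5
```

The logistic comparator is close to unbiased for r = 4, and XOR-ing with the second stream whitens residual bias; a real design would replace the deterministic seeds with a physical entropy source and validate the output against the NIST SP 800-22 suite.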

  16. Challenges in modelling the random structure correctly in growth mixture models and the impact this has on model mixtures.

    Science.gov (United States)

    Gilthorpe, M S; Dahly, D L; Tu, Y K; Kubzansky, L D; Goodman, E

    2014-06-01

    Lifecourse trajectories of clinical or anthropological attributes are useful for identifying how our early-life experiences influence later-life morbidity and mortality. Researchers often use growth mixture models (GMMs) to estimate such phenomena. It is common to place constraints on the random part of the GMM to improve parsimony or to aid convergence, but this can lead to an autoregressive structure that distorts the nature of the mixtures and subsequent model interpretation. This is especially true if changes in the outcome within individuals are gradual compared with the magnitude of differences between individuals. This is not widely appreciated, nor is its impact well understood. Using repeat measures of body mass index (BMI) for 1528 US adolescents, we estimated GMMs that required variance-covariance constraints to attain convergence. We contrasted constrained models with and without an autocorrelation structure to assess the impact this had on the ideal number of latent classes, their size and composition. We also contrasted model options using simulations. When the GMM variance-covariance structure was constrained, a within-class autocorrelation structure emerged. When not modelled explicitly, this led to poorer model fit and models that differed substantially in the ideal number of latent classes, as well as class size and composition. Failure to carefully consider the random structure of data within a GMM framework may lead to erroneous model inferences, especially for outcomes with greater within-person than between-person homogeneity, such as BMI. It is crucial to reflect on the underlying data generation processes when building such models.

  17. Phase structure of the O(n) model on a random lattice for n > 2

    DEFF Research Database (Denmark)

    Durhuus, B.; Kristjansen, C.

    1997-01-01

    We show that coarse graining arguments invented for the analysis of multi-spin systems on a randomly triangulated surface apply also to the O(n) model on a random lattice. These arguments imply that if the model has a critical point with diverging string susceptibility, then either γ = +1/2 or there exists a dual critical point with negative string susceptibility exponent, γ̃, related to γ by γ = γ̃/(γ̃ − 1). Exploiting the exact solution of the O(n) model on a random lattice, we show that both situations are realized for n > 2 and that the possible dual pairs of string susceptibility exponents are given by (γ̃, γ) = (−1/m, 1/(m+1)), m = 2, 3, . . . We also show that at the critical points with positive string susceptibility exponent the average number of loops on the surface diverges, while the average length of a single loop stays finite.
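    The stated dual pairs can be checked directly against the duality relation:

```latex
\gamma = \frac{\tilde\gamma}{\tilde\gamma - 1}
\quad\Longrightarrow\quad
\gamma\Big|_{\tilde\gamma = -1/m}
= \frac{-1/m}{-1/m - 1}
= \frac{-1/m}{-(m+1)/m}
= \frac{1}{m+1},
\qquad m = 2, 3, \dots
```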

  18. Animal models of GM2 gangliosidosis: utility and limitations

    Science.gov (United States)

    Lawson, Cheryl A; Martin, Douglas R

    2016-01-01

    GM2 gangliosidosis, a subset of lysosomal storage disorders, is caused by a deficiency of the glycohydrolase, β-N-acetylhexosaminidase, and includes the closely related Tay–Sachs and Sandhoff diseases. The enzyme deficiency prevents the normal, stepwise degradation of ganglioside, which accumulates unchecked within the cellular lysosome, particularly in neurons. As a result, individuals with GM2 gangliosidosis experience progressive neurological diseases including motor deficits, progressive weakness and hypotonia, decreased responsiveness, vision deterioration, and seizures. Mice and cats are well-established animal models for Sandhoff disease, whereas Jacob sheep are the only known laboratory animal model of Tay–Sachs disease to exhibit clinical symptoms. Since the human diseases are relatively rare, animal models are indispensable tools for further study of pathogenesis and for development of potential treatments. Though no effective treatments for gangliosidoses currently exist, animal models have been used to test promising experimental therapies. Herein, the utility and limitations of gangliosidosis animal models and how they have contributed to the development of potential new treatments are described. PMID:27499644

  19. Modeling the development and utilization of bioenergy and exploring the environmental economic benefits

    International Nuclear Information System (INIS)

    Song, Junnian; Yang, Wei; Higano, Yoshiro; Wang, Xian’en

    2015-01-01

    Highlights: • A complete bioenergy flow is schemed to industrialize bioenergy utilization. • An input–output optimization simulation model is developed. • Energy supply and demand and bioenergy industries’ development are optimized. • Carbon tax and subsidies are endogenously derived by the model. • Environmental economic benefits of bioenergy utilization are explored dynamically. - Abstract: This paper outlines a complete bioenergy flow incorporating bioresource procurement, feedstock supply, conversion technologies and energy consumption to industrialize the development and utilization of bioenergy. An input–output optimization simulation model is developed to introduce bioenergy industries into the regional socioeconomy and energy production and consumption system and dynamically explore the economic, energy and environmental benefits. 16-term simulation from 2010 to 2025 is performed in scenarios preset based on bioenergy industries, carbon tax-subsidization policy and distinct levels of greenhouse gas emission constraints. An empirical study is conducted to validate and apply the model. In the optimal scenario, both industrial development and energy supply and demand are optimized contributing to a 8.41% average gross regional product growth rate and a 39.9% reduction in accumulative greenhouse gas emission compared with the base scenario. By 2025 the consumption ratio of bioenergy in total primary energy could be increased from 0.5% to 8.2%. Energy self-sufficiency rate could be increased from 57.7% to 77.9%. A dynamic carbon tax rate and the extent to which bioenergy industrial development could be promoted are also elaborated. Regional economic development and greenhouse gas mitigation can be potentially promoted simultaneously by bioenergy utilization and a proper greenhouse gas emission constraint. The methodology presented is capable of introducing new industries or policies related to energy planning and detecting the best tradeoffs of

  20. A Multidisciplinary Intervention Utilizing Virtual Communication Tools to Reduce Health Disparities: A Pilot Randomized Controlled Trial

    Directory of Open Access Journals (Sweden)

    John F. Emerson

    2015-12-01

    , participants had a median of nine total documented contacts with PCMH providers compared to four in the control group. Three intervention and two control participants had controlled diabetes (hemoglobin A1C <9%). Multidisciplinary care that utilizes health coach-facilitated virtual visits is an intervention that could increase access to intensive primary care services in a vulnerable population. The methods tested are feasible and should be tested in a pragmatic randomized controlled trial to evaluate the impact on patient-relevant outcomes across multiple chronic diseases.

  1. Optimal urban water conservation strategies considering embedded energy: coupling end-use and utility water-energy models.

    Science.gov (United States)

    Escriva-Bou, A.; Lund, J. R.; Pulido-Velazquez, M.; Spang, E. S.; Loge, F. J.

    2014-12-01

    Although most freshwater resources are used in agriculture, a greater amount of energy is consumed per unit of water supplied to urban areas. Therefore, efforts to reduce the carbon footprint of water in cities, including the energy embedded within household uses, can be an order of magnitude larger than for other water uses. This characteristic of urban water systems creates a promising opportunity to reduce global greenhouse gas emissions, particularly given rapidly growing urbanization worldwide. Building on a previous water-energy-CO2 emissions model for household water end uses, this research introduces a probabilistic two-stage optimization model with technical and behavioral decision variables to obtain the most economical strategies for minimizing household water and water-related energy bills given both water and energy price shocks. Results show that adoption rates of less energy-intensive appliances increase significantly, resulting in an overall 20% growth in indoor water conservation if household dwellers account for the energy cost of their water use. To analyze the consequences at the utility scale, we develop an hourly water-energy model based on data from the East Bay Municipal Utility District (EBMUD) in California, including residential consumption, finding that water end uses account for roughly 90% of total water-related energy, but the 10% that is managed by the utility is worth over $12 million annually. Once the entire end-use + utility model was completed, several demand-side management conservation strategies were simulated for the city of San Ramon. In this smaller water district, roughly 5% of total EBMUD water use, we found that the optimal household strategies can reduce total GHG emissions by 4% and the utility's energy cost by over $70,000/yr. Especially interesting from the utility perspective could be the "smoothing" of water-use peaks by avoiding daytime irrigation, which among other benefits might reduce utility energy costs by 0.5% according to our

  2. The Joint Venture Model of Knowledge Utilization: a guide for change in nursing.

    Science.gov (United States)

    Edgar, Linda; Herbert, Rosemary; Lambert, Sylvie; MacDonald, Jo-Ann; Dubois, Sylvie; Latimer, Margot

    2006-05-01

    Knowledge utilization (KU) is an essential component of today's nursing practice and healthcare system. Despite advances in knowledge generation, the gap in knowledge transfer from research to practice continues. KU models have moved beyond factors affecting the individual nurse to a broader perspective that includes the practice environment and the socio-political context. This paper proposes one such theoretical model, the Joint Venture Model of Knowledge Utilization (JVMKU). Key components of the JVMKU that emerged from an extensive multidisciplinary review of the literature include leadership, emotional intelligence, person, message, empowered workplace and the socio-political environment. The model has a broad and practical application and is not specific to one type of KU or one population. This paper provides a description of the JVMKU, its development and suggested uses at both local and organizational levels. Nurses in both leadership and point-of-care positions will recognize the concepts identified and will be able to apply this model for KU in their own workplace for assessment of areas requiring strengthening and support.

  3. Random defect lines in conformal minimal models

    International Nuclear Information System (INIS)

    Jeng, M.; Ludwig, A.W.W.

    2001-01-01

    We analyze the effect of adding quenched disorder along a defect line in the 2D conformal minimal models using replicas. The disorder is realized by a random applied magnetic field in the Ising model, by fluctuations in the ferromagnetic bond coupling in the tricritical Ising model and tricritical three-state Potts model (the φ_{12} operator), etc. We find that for the Ising model the defect renormalizes to two decoupled half-planes without disorder, but that for all other models the defect renormalizes to a disorder-dominated fixed point. Its critical properties are studied with an expansion in ε ∝ 1/m for the m-th Virasoro minimal model. The decay exponents X_N = (N/2)[1 − 9(3N−4)/(4(m+1)²)] + O((3/(m+1))³) of the N-th moment of the two-point function of φ_{12} along the defect are obtained to 2-loop order, exhibiting multifractal behavior. This leads to a typical decay exponent X_typ = (1/2)[1 + 9/(m+1)²] + O((3/(m+1))³). One-point functions are seen to have a non-self-averaging amplitude. The boundary entropy is larger than that of the pure system by order 1/m³. As a byproduct of our calculations, we also obtain to 2-loop order the exponent X̃_N = N[1 − (2/(9π²))(3N−4)(q−2)²] + O((q−2)³) of the N-th moment of the energy operator in the q-state Potts model with bulk bond disorder

  4. Phase transitions in the random field Ising model in the presence of a transverse field

    Energy Technology Data Exchange (ETDEWEB)

    Dutta, A.; Chakrabarti, B.K. [Saha Institute of Nuclear Physics, Bidhannagar, Calcutta (India); Stinchcombe, R.B. [Saha Institute of Nuclear Physics, Bidhannagar, Calcutta (India); Department of Physics, Oxford (United Kingdom)

    1996-09-07

    We have studied the phase transition behaviour of the random field Ising model in the presence of a transverse (or tunnelling) field. The mean-field phase diagram has been studied in detail, in particular the nature of the transition induced by the tunnelling (transverse) field at zero temperature. A modified hyper-scaling relation for the zero-temperature transition has been derived using the Suzuki-Trotter formalism and a modified 'Harris criterion'. A mapping of the model to a randomly diluted antiferromagnetic Ising model in uniform longitudinal and transverse fields is also given. (author)

  5. Communication Outreach Strategies Utilized By Agricultural ...

    African Journals Online (AJOL)

    The study focused on communication outreach strategies utilized by agricultural extension agents in the Imo State Agricultural Development Programme, Nigeria. Data were obtained from 60 randomly selected agricultural extension agents in the study area. Data were analysed using frequency, percentage and mean ...

  6. Healthcare utilization in adults with opioid dependence receiving extended release naltrexone compared to treatment as usual.

    Science.gov (United States)

    Soares, William E; Wilson, Donna; Rathlev, Niels; Lee, Joshua D; Gordon, Michael; Nunes, Edward V; O'Brien, Charles P; Friedmann, Peter D

    2018-02-01

    Opioid use disorders have reached epidemic proportions, with overdose now the leading cause of accidental death in the United States. Extended release naltrexone (XR-NTX) has emerged as a medication treatment that reduces opioid use and craving. However, the effect of XR-NTX therapy on acute healthcare utilization, including emergency department visits and inpatient hospitalizations, remains uncertain. The objective of the current study is to evaluate hospital-based healthcare resource utilization in adults involved in the criminal justice system with a history of opioid use disorder randomized to XR-NTX therapy compared with treatment as usual (TAU) during a 6-month treatment phase and 12-month post-treatment follow-up. This retrospective exploratory analysis uses data collected in a published randomized trial. Comparisons of the number of emergency department visits and hospital admissions (for drug detox, psychiatric care and other medical reasons) were performed using chi-square tests for any admission and negative binomial models for number of admissions. Of the 308 participants randomized, 96% had utilization data (76% completed the 6-month phase, 67% completed follow-up). No significant differences were seen in overall healthcare utilization (IRR=0.88, 95%CI 0.63-1.23, p=0.45) or substance use-related drug detox hospitalizations (IRR=0.83, 95%CI 0.32-2.16, p=0.71). Despite having more participants report chronic medical problems at baseline (43% vs. 32%, p=0.05), those receiving XR-NTX generally experienced equivalent or lower rates of healthcare utilization compared to TAU. The XR-NTX group had significantly lower medical/surgical-related hospital admissions (IRR=0.55, 95%CI 0.30-1.00, p=0.05) during the course of the entire study. XR-NTX did not significantly increase rates of healthcare utilization compared to TAU. Provider concerns regarding healthcare utilization should not preclude the consideration of XR-NTX as therapy for opioid use disorders.

  7. Biomimetic peptide-based models of [FeFe]-hydrogenases: utilization of phosphine-containing peptides

    Energy Technology Data Exchange (ETDEWEB)

    Roy, Souvik [Department of Chemistry and Biochemistry; Arizona State University; Tempe, USA; Nguyen, Thuy-Ai D. [Department of Chemistry and Biochemistry; Arizona State University; Tempe, USA; Gan, Lu [Department of Chemistry and Biochemistry; Arizona State University; Tempe, USA; Jones, Anne K. [Department of Chemistry and Biochemistry; Arizona State University; Tempe, USA

    2015-01-01

    Peptide-based models for [FeFe]-hydrogenase were synthesized utilizing unnatural phosphine-amino acids, and their electrocatalytic properties were investigated in mixed aqueous-organic solvents.

  8. Application of random number generators in genetic algorithms to improve rainfall-runoff modelling

    Science.gov (United States)

    Chlumecký, Martin; Buchtele, Josef; Richta, Karel

    2017-10-01

    The efficient calibration of rainfall-runoff models is a difficult issue, even for experienced hydrologists. Therefore, fast and high-quality model calibration is a valuable improvement. This paper describes a novel methodology and software for the optimisation of rainfall-runoff models using a genetic algorithm (GA) with a newly designed hydrological random number generator (HRNG), which is the core of the optimisation. The GA estimates model parameters using evolutionary principles, which requires a high-quality random number generator. The new HRNG generates random numbers based on hydrological information, and it provides better random numbers than pure software generators. The GA enhances the model calibration very well, and the goal is to optimise the calibration of the model with a minimum of user interaction. This article focuses on improving the internal structure of the GA, which is shielded from the user. The results that we obtained indicate that the HRNG provides a stable trend in the output quality of the model, despite various configurations of the GA. In contrast to previous research, the HRNG speeds up the calibration of the model and offers an improvement of rainfall-runoff modelling.
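    The GA workflow described above can be sketched in a few lines: a minimal real-coded GA (tournament selection, blend crossover, Gaussian mutation) calibrating a hypothetical two-parameter linear-reservoir runoff model. A seeded standard PRNG stands in for the paper's HRNG; all names and parameter ranges here are illustrative assumptions, not the authors' software.

```python
import random

def runoff(params, rain):
    """Toy linear-reservoir rainfall-runoff model (hypothetical):
    a fraction c of rainfall enters storage s, outflow is k*s."""
    k, c = params
    s, out = 0.0, []
    for r in rain:
        s += c * r
        q = k * s
        s -= q
        out.append(q)
    return out

def rmse(a, b):
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

def calibrate(rain, observed, rng, pop_size=40, gens=60):
    """Minimal real-coded GA; `rng` is any Random-like source, so a
    hydrological RNG could be swapped in for the seeded PRNG used here."""
    pop = [[rng.uniform(0.01, 1.0), rng.uniform(0.01, 1.0)] for _ in range(pop_size)]
    def fit(ind):
        return rmse(runoff(ind, rain), observed)
    for _ in range(gens):
        new = []
        while len(new) < pop_size:
            # tournament selection of two parents, blend crossover
            a, b = (min(rng.sample(pop, 3), key=fit) for _ in range(2))
            w = rng.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]
            if rng.random() < 0.2:  # Gaussian mutation, clamped to bounds
                i = rng.randrange(2)
                child[i] = min(1.0, max(0.01, child[i] + rng.gauss(0, 0.05)))
            new.append(child)
        pop = new
    return min(pop, key=fit)
```

With synthetic "observed" flows generated at known parameters, the calibrated pair lands close to the truth after a few dozen generations.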

  9. The Utility of Cognitive Plausibility in Language Acquisition Modeling: Evidence From Word Segmentation.

    Science.gov (United States)

    Phillips, Lawrence; Pearl, Lisa

    2015-11-01

    The informativity of a computational model of language acquisition is directly related to how closely it approximates the actual acquisition task, sometimes referred to as the model's cognitive plausibility. We suggest that though every computational model necessarily idealizes the modeled task, an informative language acquisition model can aim to be cognitively plausible in multiple ways. We discuss these cognitive plausibility checkpoints generally and then apply them to a case study in word segmentation, investigating a promising Bayesian segmentation strategy. We incorporate cognitive plausibility by using an age-appropriate unit of perceptual representation, evaluating the model output in terms of its utility, and incorporating cognitive constraints into the inference process. Our more cognitively plausible model shows a beneficial effect of cognitive constraints on segmentation performance. One interpretation of this effect is as a synergy between the naive theories of language structure that infants may have and the cognitive constraints that limit the fidelity of their inference processes, where less accurate inference approximations are better when the underlying assumptions about how words are generated are less accurate. More generally, these results highlight the utility of incorporating cognitive plausibility more fully into computational models of language acquisition. Copyright © 2015 Cognitive Science Society, Inc.

  10. A Random-Walk-Model for heavy metal particles in natural waters; Ein Random-Walk-Modell fuer Schwermetallpartikel in natuerlichen Gewaessern

    Energy Technology Data Exchange (ETDEWEB)

    Wollschlaeger, A.

    1996-12-31

    The presented particle-tracking model is for the numerical calculation of heavy metal transport in natural waters. The Navier-Stokes equations are solved with the finite element method. The advective movement of the particles is interpolated from the velocities on the discrete mesh. The influence of turbulence is simulated with a random-walk model in which particles are distributed according to a given probability function. Both parts are added and lead to the new particle position. The characteristics of the heavy metals are assigned to the particles as their attributes. Dissolved heavy metals are transported only by the flow. Heavy metals which are bound to particulate matter have an additional settling velocity. The sorption and remobilization processes are approximated through a probability law which maintains the proportionality ratio between dissolved heavy metals and those bound to particulate matter. At the bed, heavy metals bound to particulate matter are subjected to deposition and erosion processes. The model treats these processes by considering the absorption intensity of the heavy metals to the bottom sediments. Calculations of the Weser estuary show that the particle-tracking model allows the simulation of heavy metal behaviour even under complex flow conditions. (orig.)
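    The two displacement components described in the abstract, an advective step from the (interpolated) flow velocity plus a Gaussian random-walk step for turbulence, with an extra settling velocity for particle-bound metals, can be sketched as follows. This is an illustrative sketch, not the Weser model: the constant velocity field stands in for the finite-element interpolation, and σ = sqrt(2·D·dt) is the standard random-walk discretisation of a diffusivity D.

```python
import math
import random

def step(pos, vel, D, dt, settling=0.0, rng=random):
    """One particle-tracking step in a vertical (x, z) plane:
    advection by the local flow velocity plus an isotropic Gaussian
    random-walk displacement of std sqrt(2*D*dt); sorbed particles
    sink with an additional settling velocity."""
    sigma = math.sqrt(2.0 * D * dt)
    x, z = pos
    u, w = vel
    x_new = x + u * dt + rng.gauss(0.0, sigma)
    z_new = z + (w - settling) * dt + rng.gauss(0.0, sigma)
    return (x_new, z_new)
```

Over many particles, the ensemble mean displacement recovers the advective drift u·t and the ensemble variance recovers 2·D·t, which is the usual consistency check for such schemes; dissolved metals use settling=0, particulate-bound metals settling>0.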

  11. Modeling of reactive chemical transport of leachates from a utility fly-ash disposal site

    International Nuclear Information System (INIS)

    Apps, J.A.; Zhu, M.; Kitanidis, P.K.; Freyberg, D.L.; Ronan, A.D.; Itakagi, S.

    1991-04-01

    Fly ash from fossil-fuel power plants is commonly slurried and pumped to disposal sites. The utility industry is interested in finding out whether any hazardous constituents might leach from the accumulated fly ash and contaminate ground and surface waters. To evaluate the significance of this problem, a representative site was selected for modeling. FASTCHEM, a computer code developed for the Electric Power Research Institute, was utilized for the simulation of the transport and fate of the fly-ash leachate. The chemical evolution of the leachate was modeled as it migrated along streamtubes defined by the flow model. The modeling predicts that most of the leachate seeps through the dam confining the ash pond. With the exception of ferrous, manganous, sulfate and small amounts of nickel ions, all other dissolved constituents are predicted to discharge at environmentally acceptable concentrations

  12. A random point process model for the score in sport matches

    Czech Academy of Sciences Publication Activity Database

    Volf, Petr

    2009-01-01

    Roč. 20, č. 2 (2009), s. 121-131 ISSN 1471-678X R&D Projects: GA AV ČR(CZ) IAA101120604 Institutional research plan: CEZ:AV0Z10750506 Keywords : sport statistics * scoring intensity * Cox’s regression model Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2009/SI/volf-a random point process model for the score in sport matches.pdf

  13. Analysis of time to event outcomes in randomized controlled trials by generalized additive models.

    Directory of Open Access Journals (Sweden)

    Christos Argyropoulos

    Randomized controlled trials almost invariably utilize the hazard ratio (HR) calculated with a Cox proportional hazards model as a treatment efficacy measure. Despite the widespread adoption of HRs, these provide a limited understanding of the treatment effect and may even provide a biased estimate when the assumption of proportional hazards in the Cox model is not verified by the trial data. Additional treatment effect measures on the survival probability or the time scale may be used to supplement HRs, but a framework for the simultaneous generation of these measures is lacking. By splitting follow-up time at the nodes of a Gauss-Lobatto numerical quadrature rule, techniques for Poisson Generalized Additive Models (PGAMs) can be adopted for flexible hazard modeling. Straightforward simulation post-estimation transforms PGAM estimates for the log hazard into estimates of the survival function. These in turn were used to calculate relative and absolute risks or even differences in restricted mean survival time between treatment arms. We illustrate our approach with extensive simulations and in two trials: IPASS (in which the proportionality of hazards was violated) and HEMO (a long-duration study conducted under evolving standards of care on a heterogeneous patient population). PGAMs can generate estimates of the survival function and the hazard ratio that are essentially identical to those obtained by Kaplan-Meier curve analysis and the Cox model. PGAMs can simultaneously provide multiple measures of treatment efficacy after a single data pass. Furthermore, they support not only unadjusted (overall) treatment effect estimates but also subgroup and adjusted analyses, while incorporating multiple time scales and accounting for non-proportional hazards in survival data. By augmenting the HR conventionally reported, PGAMs have the potential to support the inferential goals of multiple stakeholders involved in the evaluation and appraisal of clinical trial results under proportional and
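    The data expansion underlying this "Poisson trick" can be sketched directly: each subject's follow-up is split into intervals, and a Poisson model with the death indicator as the count and log(exposure) as an offset reproduces the piecewise-exponential survival likelihood. In this sketch, fixed cut points stand in for the Gauss-Lobatto quadrature nodes used in the paper.

```python
def split_survival(time, event, cuts):
    """Expand one subject's (time, event) record into Poisson pseudo-
    observations: one row per interval at risk, with the event indicator
    as the Poisson 'count' and the time spent in the interval as the
    exposure. A Poisson GLM/GAM on these rows (with log-exposure offset)
    approximates the flexible-hazard survival fit."""
    rows = []
    start = 0.0
    for end in cuts:
        if time <= start:  # subject already failed or was censored
            break
        exposure = min(time, end) - start
        died = 1 if (event and time <= end) else 0
        rows.append({"interval": (start, end), "count": died, "exposure": exposure})
        start = end
    return rows
```

For example, a subject who dies at t = 2.5 with cuts at 1, 2, 3 contributes three rows with exposures 1, 1, 0.5 and counts 0, 0, 1; a subject censored at t = 1.0 contributes a single event-free row.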

  14. Key data elements for use in cost-utility modeling of biological treatments for rheumatoid arthritis.

    Science.gov (United States)

    Ganz, Michael L; Hansen, Brian Bekker; Valencia, Xavier; Strandberg-Larsen, Martin

    2015-05-01

    Economic evaluation is becoming more common and important as new biologic therapies for rheumatoid arthritis (RA) are developed. While much has been published about how to design cost-utility models for RA to conduct these evaluations, less has been written about the sources of data populating those models. The goal is to review the literature and to provide recommendations for future data collection efforts. This study reviewed RA cost-utility models published between January 2006 and February 2014 focusing on five key sources of data (health-related quality-of-life and utility, clinical outcomes, disease progression, course of treatment, and healthcare resource use and costs). It provided recommendations for collecting the appropriate data during clinical and other studies to support modeling of biologic treatments for RA. Twenty-four publications met the selection criteria. Almost all used two steps to convert clinical outcomes data to utilities rather than more direct methods; most did not use clinical outcomes measures that captured absolute levels of disease activity and physical functioning; one-third of them, in contrast with clinical reality, assumed zero disease progression for biologic-treated patients; little more than half evaluated courses of treatment reflecting guideline-based or actual clinical care; and healthcare resource use and cost data were often incomplete. Based on these findings, it is recommended that future studies collect clinical outcomes and health-related quality-of-life data using appropriate instruments that can convert directly to utilities; collect data on actual disease progression; be designed to capture real-world courses of treatment; and collect detailed data on a wide range of healthcare resources and costs.

  15. Learning of couplings for random asymmetric kinetic Ising models revisited: random correlation matrices and learning curves

    International Nuclear Information System (INIS)

    Bachschmid-Romano, Ludovica; Opper, Manfred

    2015-01-01

    We study analytically the performance of a recently proposed algorithm for learning the couplings of a random asymmetric kinetic Ising model from finite-length trajectories of the spin dynamics. Our analysis shows the importance of the nontrivial equal-time correlations between spins induced by the dynamics for the speed of learning. These correlations become more important as the spin's stochasticity is decreased. We also analyse the deviation of the estimation error (paper)

  16. A Study of How the Watts-Strogatz Model Relates to an Economic System’s Utility

    Directory of Open Access Journals (Sweden)

    Lunhan Luo

    2014-01-01

    The Watts-Strogatz model is a main mechanism for constructing small-world networks. It is widely used in simulations of small-world-featured systems, including economic systems. Formally, the model has a parameter set of three variables: group size, number of neighbors, and rewiring probability. This paper discusses how this parameter set relates to the economic system's performance, measured as its utility growth rate. In conclusion, it is found that, regardless of the group size and rewiring probability, 2 to 18 neighbors can help the economic system reach the highest utility growth rate. Furthermore, given the range of neighbors and the group size of a Watts-Strogatz-based system, the range of its edge count can be calculated too. By examining the containment relationship between that range and the edge number of an actual equal-size economic system, we can determine whether the system structure has redundant edges or can achieve the highest utility growth rate.
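    For reference, the construction behind the model's three-variable parameter set (group size n, neighbor count k, rewiring probability p) can be sketched directly. This is a generic Watts-Strogatz implementation, not the authors' code; note that rewiring preserves the edge count n·k/2, which is the property the paper's containment argument relies on.

```python
import random

def watts_strogatz(n, k, p, rng=random):
    """Watts-Strogatz small-world graph: n nodes on a ring, each joined
    to its k nearest neighbours (k even), then each ring edge is rewired
    with probability p to a new endpoint, avoiding self-loops and
    duplicate edges. Returns a set of undirected edges (u, v), u < v."""
    edges = set()
    for i in range(n):
        for j in range(1, k // 2 + 1):
            edges.add(tuple(sorted((i, (i + j) % n))))
    result = set(edges)
    for (u, v) in sorted(edges):          # iterate over the original ring edges
        if rng.random() < p:
            result.discard((u, v))
            candidates = [w for w in range(n)
                          if w != u and tuple(sorted((u, w))) not in result]
            if candidates:
                result.add(tuple(sorted((u, rng.choice(candidates)))))
            else:                          # node saturated: keep the old edge
                result.add((u, v))
    return result
```

With p = 0 the ring lattice is returned unchanged; small p gives the small-world regime the paper simulates.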

  17. Activated aging dynamics and effective trap model description in the random energy model

    Science.gov (United States)

    Baity-Jesi, M.; Biroli, G.; Cammarota, C.

    2018-01-01

    We study the out-of-equilibrium aging dynamics of the random energy model (REM) ruled by a single spin-flip Metropolis dynamics. We focus on the dynamical evolution taking place on time-scales diverging with the system size. Our aim is to show to what extent the activated dynamics displayed by the REM can be described in terms of an effective trap model. We identify two time regimes: the first one corresponds to the process of escaping from a basin in the energy landscape and to the subsequent exploration of high-energy configurations, whereas the second one corresponds to the evolution from one deep basin to another. By combining numerical simulations with analytical arguments we show why the trap model description does not hold in the first regime but becomes exact in the second.
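    A minimal version of the dynamics studied here, single spin-flip Metropolis on i.i.d. Gaussian REM energies, can be sketched as follows. The variance n/2 normalisation is one common REM convention, and the system size and temperature below are illustrative choices, not the paper's.

```python
import math
import random

def rem_metropolis(n, beta, steps, rng):
    """Single spin-flip Metropolis dynamics on the Random Energy Model:
    the 2^n configuration energies are i.i.d. Gaussian with variance n/2,
    and each move proposes flipping one randomly chosen spin (i.e. one bit
    of the configuration index). Returns the energy trace."""
    energy = [rng.gauss(0.0, math.sqrt(n / 2.0)) for _ in range(2 ** n)]
    state = rng.randrange(2 ** n)
    trace = [energy[state]]
    for _ in range(steps):
        proposal = state ^ (1 << rng.randrange(n))  # flip one spin
        dE = energy[proposal] - energy[state]
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            state = proposal
        trace.append(energy[state])
    return trace
```

Running this at low temperature shows the phenomenology discussed in the abstract: the walker descends into progressively deeper basins, with long trapping episodes separated by excursions through high-energy configurations.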

  18. Some results of the spectra of random Schroedinger operators and their application to random point interaction models in one and three dimensions

    International Nuclear Information System (INIS)

    Kirsch, W.; Martinelli, F.

    1981-01-01

    After deriving weak conditions under which the potential of the Schroedinger operator is well defined, the authors state an ergodicity assumption on this potential which ensures that the spectrum of the operator is a fixed, non-random set. Random point interaction Hamiltonians are then considered in this framework. Finally, the authors consider a model where, for sufficiently small fluctuations around the equilibrium positions, a finite number of gaps appears. (HSI)

  19. Bayesian analysis for exponential random graph models using the adaptive exchange sampler

    KAUST Repository

    Jin, Ick Hoon

    2013-01-01

    Exponential random graph models have been widely used in social network analysis. However, these models are extremely difficult to handle from a statistical viewpoint because of the existence of intractable normalizing constants. In this paper, we consider a fully Bayesian analysis for exponential random graph models using the adaptive exchange sampler, which solves the issue of intractable normalizing constants encountered in Markov chain Monte Carlo (MCMC) simulations. The adaptive exchange sampler can be viewed as an MCMC extension of the exchange algorithm, and it generates auxiliary networks via an importance sampling procedure from an auxiliary Markov chain running in parallel. The convergence of this algorithm is established under mild conditions. The adaptive exchange sampler is illustrated using a few social networks, including the Florentine business network, molecule synthetic network, and dolphins network. The results indicate that the adaptive exchange algorithm can produce more accurate estimates than approximate exchange algorithms, while maintaining the same computational efficiency.
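    The normalising-constant cancellation that the exchange algorithm exploits can be illustrated on a toy one-parameter model where the auxiliary draw is exact. Here the sufficient statistic is the edge count of a graph with n_pairs possible edges, so sampling an auxiliary network reduces to independent Bernoulli edges; this is a deliberately simple stand-in (with an implicit flat prior) for the ERGMs in the paper, where the auxiliary draw is the hard part that the adaptive sampler addresses.

```python
import math
import random

def exchange_sampler(x_edges, n_pairs, iters, rng, sigma=0.5):
    """Exchange algorithm for the toy ERGM q(x|t) = exp(t * s(x)), where
    s(x) is the edge count of an observed graph with n_pairs possible
    edges. An auxiliary graph y is drawn exactly at the proposed t', and
    the intractable normaliser cancels in the acceptance ratio
    q(x|t') q(y|t) / (q(x|t) q(y|t'))."""
    def draw_edges(t):
        # exact draw: each potential edge is Bernoulli(sigmoid(t))
        p = 1.0 / (1.0 + math.exp(-t))
        return sum(1 for _ in range(n_pairs) if rng.random() < p)

    t, samples = 0.0, []
    for _ in range(iters):
        t_prop = t + rng.gauss(0.0, sigma)        # symmetric random-walk proposal
        y_edges = draw_edges(t_prop)              # auxiliary network at t'
        log_a = (t_prop - t) * x_edges + (t - t_prop) * y_edges
        if math.log(rng.random()) < log_a:
            t = t_prop
        samples.append(t)
    return samples
```

With 3 edges observed out of 6 possible (a 4-node graph), the posterior for t is centred at 0, and the chain's long-run average reflects that.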

  20. Random Walk Model for the Growth of Monolayer in Dip Pen Nanolithography

    International Nuclear Information System (INIS)

    Kim, H; Ha, S; Jang, J

    2013-01-01

    By using a simple random-walk model, we simulate the growth of a self-assembled monolayer (SAM) pattern generated in dip pen nanolithography (DPN). In this model, the SAM pattern grows mainly via the serial pushing of molecules deposited from the tip. We examine various SAM patterns, such as lines, crosses, and letters by changing the tip scan speed.
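    One way to render the "serial pushing" mechanism as a toy simulation: each deposited molecule lands at the fixed tip position and random-walks across already-occupied lattice sites until it reaches an empty one, where it attaches. This is a loose interpretation of the model (static tip, square lattice), not the authors' code; scanning the tip would simply move the deposition site over time to draw lines or letters.

```python
import random

def grow_pattern(n_deposits, rng):
    """Serial-pushing monolayer growth on a square lattice: molecules are
    deposited at the tip position (the origin); if that site is occupied,
    the molecule walks over occupied sites until it finds an empty one.
    Returns the set of occupied sites."""
    occupied = set()
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(n_deposits):
        site = (0, 0)
        while site in occupied:
            dx, dy = rng.choice(moves)
            site = (site[0] + dx, site[1] + dy)
        occupied.add(site)
    return occupied
```

Because each walk terminates on a site adjacent to the existing cluster, the pattern grows as a single connected patch around the tip, and a slower scan speed (more deposits per tip position) gives a wider patch.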

  1. Transverse spin correlations of the random transverse-field Ising model

    Science.gov (United States)

    Iglói, Ferenc; Kovács, István A.

    2018-03-01

    The critical behavior of the random transverse-field Ising model in finite-dimensional lattices is governed by infinite-disorder fixed points, several properties of which have already been calculated by the use of the strong-disorder renormalization-group (SDRG) method. Here we extend these studies and calculate the connected transverse-spin correlation function by a numerical implementation of the SDRG method in d = 1, 2, and 3 dimensions. At the critical point an algebraic decay of the form ∼ r^(−η_t) is found, with a decay exponent of approximately η_t ≈ 2 + 2d. In d = 1 the results are related to dimer-dimer correlations in the random antiferromagnetic XX chain and have been tested by numerical calculations using free-fermionic techniques.

  2. Integrals of random fields treated by the model correction factor method

    DEFF Research Database (Denmark)

    Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der

    2002-01-01

    The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals ...

  3. Examples of mixed-effects modeling with crossed random effects and with binomial data

    NARCIS (Netherlands)

    Quené, H.; van den Bergh, H.

    2008-01-01

    Psycholinguistic data are often analyzed with repeated-measures analyses of variance (ANOVA), but this paper argues that mixed-effects (multilevel) models provide a better alternative method. First, models are discussed in which the two random factors of participants and items are crossed, and not

  4. Depletion benchmarks calculation of random media using explicit modeling approach of RMC

    International Nuclear Information System (INIS)

    Liu, Shichang; She, Ding; Liang, Jin-gang; Wang, Kan

    2016-01-01

    Highlights: • Explicit modeling of RMC is applied to depletion benchmark for HTGR fuel element. • Explicit modeling can provide detailed burnup distribution and burnup heterogeneity. • The results would serve as a supplement for the HTGR fuel depletion benchmark. • The method of adjacent burnup regions combination is proposed for full-core problems. • The combination method can reduce memory footprint, keeping the computing accuracy. - Abstract: Monte Carlo method plays an important role in accurate simulation of random media, owing to its advantages of the flexible geometry modeling and the use of continuous-energy nuclear cross sections. Three stochastic geometry modeling methods including Random Lattice Method, Chord Length Sampling and explicit modeling approach with mesh acceleration technique, have been implemented in RMC to simulate the particle transport in the dispersed fuels, in which the explicit modeling method is regarded as the best choice. In this paper, the explicit modeling method is applied to the depletion benchmark for HTGR fuel element, and the method of combination of adjacent burnup regions has been proposed and investigated. The results show that the explicit modeling can provide detailed burnup distribution of individual TRISO particles, and this work would serve as a supplement for the HTGR fuel depletion benchmark calculations. The combination of adjacent burnup regions can effectively reduce the memory footprint while keeping the computational accuracy.

  5. Deriving Genomic Breeding Values for Residual Feed Intake from Covariance Functions of Random Regression Models

    DEFF Research Database (Denmark)

    Strathe, Anders B; Mark, Thomas; Nielsen, Bjarne

    2014-01-01

    Random regression models were used to estimate covariance functions between cumulated feed intake (CFI) and body weight (BW) in 8424 Danish Duroc pigs. Random regressions on second order Legendre polynomials of age were used to describe genetic and permanent environmental curves in BW and CFI...

  6. Large Deviations for the Annealed Ising Model on Inhomogeneous Random Graphs: Spins and Degrees

    Science.gov (United States)

    Dommers, Sander; Giardinà, Cristian; Giberti, Claudio; Hofstad, Remco van der

    2018-04-01

    We prove a large deviations principle for the total spin and the number of edges under the annealed Ising measure on generalized random graphs. We also give detailed results on how the annealing over the Ising model changes the degrees of the vertices in the graph and show how it gives rise to interesting correlated random graphs.

  7. Zero temperature landscape of the random sine-Gordon model

    International Nuclear Information System (INIS)

    Sanchez, A.; Bishop, A.R.; Cai, D.

    1997-01-01

    We present a preliminary summary of the zero-temperature properties of the two-dimensional random sine-Gordon model of surface growth on disordered substrates. We found that the properties of this model can be accurately computed using lattices of moderate size, as the behavior of the model turns out to be independent of size above a certain length (∼ 128 x 128 lattices). Subsequently, we show that the behavior of the height-difference correlation function is of (log r)² type up to a certain correlation length (ξ ∼ 20), which rules out predictions of log r behavior for all temperatures obtained by replica-variational techniques. Our results open the way to a better understanding of the complex landscape presented by this system, which has been the subject of many (contradictory) analyses.

  8. The Little-Hopfield model on a sparse random graph

    International Nuclear Information System (INIS)

    Castillo, I Perez; Skantzos, N S

    2004-01-01

    We study the Hopfield model on a random graph in scaling regimes where the average number of connections per neuron is a finite number and the spin dynamics is governed by a synchronous execution of the microscopic update rule (Little-Hopfield model). We solve this model within replica symmetry, and by using bifurcation analysis we prove that the spin-glass/paramagnetic and the retrieval/paramagnetic transition lines of our phase diagram are identical to those of sequential dynamics. The first-order retrieval/spin-glass transition line follows by direct evaluation of our observables using population dynamics. Within the accuracy of numerical precision and for sufficiently small values of the connectivity parameter we find that this line coincides with the corresponding sequential one. Comparison with simulation experiments shows excellent agreement.

  9. Comparison of ARIMA and Random Forest time series models for prediction of avian influenza H5N1 outbreaks.

    Science.gov (United States)

    Kane, Michael J; Price, Natalie; Scotch, Matthew; Rabinowitz, Peter

    2014-08-13

    Time series models can play an important role in disease prediction. Incidence data can be used to predict the future occurrence of disease events. Developments in modeling approaches provide an opportunity to compare different time series models for predictive power. We applied ARIMA and Random Forest time series models to incidence data of outbreaks of highly pathogenic avian influenza (H5N1) in Egypt, available through the online EMPRES-I system. We found that the Random Forest model outperformed the ARIMA model in predictive ability and is effective for predicting outbreaks of H5N1 in Egypt. Random Forest time series modeling provides enhanced predictive ability over existing time series models for the prediction of infectious disease outbreaks. This result, along with those showing the concordance between bird and human outbreaks (Rabinowitz et al. 2012), provides a new approach to predicting these dangerous outbreaks in bird populations based on existing, freely available data. Our analysis uncovers the time-series structure of outbreak severity for highly pathogenic avian influenza (H5N1) in Egypt.
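
    The autoregressive component underlying ARIMA forecasts of an incidence series can be sketched with plain least squares. This is an illustrative toy on synthetic data, not the paper's EMPRES-I pipeline or its Random Forest comparator:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # synthetic weekly outbreak-intensity series with AR(2) structure (illustrative only)
    n, a1, a2 = 400, 0.6, -0.2
    y = np.zeros(n)
    for t in range(2, n):
        y[t] = a1 * y[t - 1] + a2 * y[t - 2] + rng.normal()

    def fit_ar(y, p):
        """Least-squares AR(p): regress y_t on its p lags (the AR part of ARIMA, d=0, q=0)."""
        X = np.column_stack([y[p - k - 1 : len(y) - k - 1] for k in range(p)])
        coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
        return coef

    coef = fit_ar(y, 2)             # recovers roughly [0.6, -0.2]
    one_step = coef @ y[-1:-3:-1]   # forecast for the next period
    ```

    A Random Forest forecaster would replace the linear regression on the same lagged design matrix with an ensemble of trees, which is what gives it the flexibility the abstract credits.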

  10. IAPCS: A COMPUTER MODEL THAT EVALUATES POLLUTION CONTROL SYSTEMS FOR UTILITY BOILERS

    Science.gov (United States)

    The IAPCS model, developed by U.S. EPA's Air and Energy Engineering Research Laboratory and made available to the public through the National Technical Information Service, can be used by utility companies, architectural and engineering companies, and regulatory agencies at all l...

  11. A dynamic Brownian bridge movement model to estimate utilization distributions for heterogeneous animal movement.

    Science.gov (United States)

    Kranstauber, Bart; Kays, Roland; Lapoint, Scott D; Wikelski, Martin; Safi, Kamran

    2012-07-01

    1. The recently developed Brownian bridge movement model (BBMM) has advantages over traditional methods because it quantifies the utilization distribution of an animal based on its movement path rather than individual points and accounts for temporal autocorrelation and high data volumes. However, the BBMM assumes unrealistic homogeneous movement behaviour across all data. 2. Accurate quantification of the utilization distribution is important for identifying the way animals use the landscape. 3. We improve the BBMM by allowing for changes in behaviour, using likelihood statistics to determine change points along the animal's movement path. 4. This novel extension outperforms the current BBMM, as indicated by simulations and examples of a territorial mammal and a migratory bird. The unique ability of our model to work with tracks that are not sampled regularly is especially important for GPS tags that have frequent failed fixes or dynamic sampling schedules. Moreover, our model extension provides a useful one-dimensional measure of behavioural change along animal tracks. 5. This new method provides a more accurate utilization distribution that better describes the space use of realistic, behaviourally heterogeneous tracks. © 2012 The Authors. Journal of Animal Ecology © 2012 British Ecological Society.
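
    The Brownian bridge at the core of the BBMM has variance σ²t(T−t)/T between two location fixes, which a short simulation can verify. This sketch checks only the mid-track variance; it does not implement the dynamic change-point extension described above:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    T, n_steps, n_sims, sigma = 1.0, 100, 20000, 1.0

    # simulate Brownian bridges pinned at 0 at both endpoints: B(t) - (t/T) * B(T)
    dt = T / n_steps
    W = np.cumsum(rng.normal(0.0, sigma * np.sqrt(dt), (n_sims, n_steps)), axis=1)
    t = np.arange(1, n_steps + 1) * dt
    bridge = W - (t / T) * W[:, -1:]

    # theoretical bridge variance sigma^2 * t * (T - t) / T peaks at T/4 mid-track,
    # which is why uncertainty about an animal's position is largest between fixes
    mid_var = bridge[:, n_steps // 2 - 1].var()   # should be close to 0.25
    ```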

  12. Anderson localization through Polyakov loops: Lattice evidence and random matrix model

    International Nuclear Information System (INIS)

    Bruckmann, Falk; Schierenberg, Sebastian; Kovacs, Tamas G.

    2011-01-01

    We investigate low-lying fermion modes in SU(2) gauge theory at temperatures above the phase transition. Both staggered and overlap spectra reveal transitions from chaotic (random matrix) to integrable (Poissonian) behavior accompanied by an increasing localization of the eigenmodes. We show that the latter are trapped by local Polyakov loop fluctuations. Islands of such "wrong" Polyakov loops can therefore be viewed as defects leading to Anderson localization in gauge theories. We find strong similarities in the spatial profile of these localized staggered and overlap eigenmodes. We discuss possible interpretations of this finding and present a sparse random matrix model that reproduces these features.

  13. Gaussian random bridges and a geometric model for information equilibrium

    Science.gov (United States)

    Mengütürk, Levent Ali

    2018-03-01

    The paper introduces a class of conditioned stochastic processes that we call Gaussian random bridges (GRBs) and proves some of their properties. Due to the anticipative representation of any GRB as the sum of a random variable and a Gaussian (T, 0)-bridge, GRBs can model noisy information processes in partially observed systems. In this spirit, we propose an asset pricing model with respect to what we call information equilibrium in a market with multiple sources of information. The idea is to work on a topological manifold endowed with a metric that enables us to systematically determine an equilibrium point of a stochastic system that can be represented by multiple points on that manifold at each fixed time. In doing so, we formulate GRB-based information diversity over a Riemannian manifold and show that it is pinned to zero over the boundary determined by Dirac measures. We then define an influence factor that controls the dominance of an information source in determining the best estimate of a signal in the L2-sense. When there are two sources, this allows us to construct information equilibrium as a functional of a geodesic-valued stochastic process, which is driven by an equilibrium convergence rate representing the signal-to-noise ratio. This leads us to derive price dynamics under what can be considered as an equilibrium probability measure. We also provide a semimartingale representation of Markovian GRBs associated with Gaussian martingales and a non-anticipative representation of fractional Brownian random bridges that can incorporate degrees of information coupling in a given system via the Hurst exponent.

  14. A combined model to assess technical and economic consequences of changing conditions and management options for wastewater utilities.

    Science.gov (United States)

    Giessler, Mathias; Tränckner, Jens

    2018-02-01

    The paper presents a simplified model that quantifies economic and technical consequences of changing conditions in wastewater systems at the utility level. It has been developed based on data from stakeholders and ministries, collected by a survey that determined resulting effects and adapted measures. The model comprises all substantial cost-relevant assets and activities of a typical German wastewater utility. It consists of three modules: i) Sewer, for describing the state development of sewer systems; ii) WWTP, for process parameter consideration of wastewater treatment plants (WWTP); and iii) Cost Accounting, for calculation of expenses in the cost categories and resulting charges. The validity and accuracy of this model were verified using historical data from an exemplary wastewater utility. The calculated process and economic parameters show high accuracy compared to measured parameters and actual expenses. Thus, the model is proposed to support strategic, process-oriented decision making at the utility level. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. A distribution-free newsvendor model with balking penalty and random yield

    Directory of Open Access Journals (Sweden)

    Chongfeng Lan

    2015-05-01

    Purpose: The purpose of this paper is to extend the analysis of the distribution-free newsvendor problem in an environment of customer balking, which occurs when customers are reluctant to buy a product if its available inventory falls below a threshold level. Design/methodology/approach: We provide a new tradeoff tool, replacing the traditional one, to weigh the holding cost against the goodwill costs: in addition to the shortage penalty, we also introduce a balking penalty. Furthermore, we extend our model to the case of random yield. Findings: A model is presented for determining both an optimal order quantity and a lower bound on the profit under the worst possible distribution of the demand. We also study the effects of the shortage penalty and the balking penalty on the optimal order quantity, which have been largely bypassed in existing distribution-free single-period models with balking. Numerical examples are presented to illustrate the result. Originality/value: The incorporation of a balking penalty and random yield represents an important improvement in inventory policy performance for the distribution-free newsvendor problem when customer balking occurs and the distributional form of demand is unknown.
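
    For context, the classical distribution-free order rule on which such extensions build (Scarf's rule) uses only the mean and standard deviation of demand. A minimal sketch without the balking penalty or random yield introduced in the paper; the numbers are hypothetical:

    ```python
    import math

    def scarf_order_quantity(mu, sigma, price, cost):
        """Scarf's distribution-free order quantity: worst-case optimal given only
        the demand mean and standard deviation (no salvage, no balking)."""
        r = (price - cost) / cost          # profit-to-cost ratio
        return mu + 0.5 * sigma * (math.sqrt(r) - 1.0 / math.sqrt(r))

    # hypothetical item: mean demand 100, std 20, sells at 15, costs 5
    q = scarf_order_quantity(mu=100.0, sigma=20.0, price=15.0, cost=5.0)
    # r = 2, so Q = 100 + 10 * (sqrt(2) - 1/sqrt(2)) ≈ 107.07
    ```

    Note that the order quantity exceeds mean demand only when the profit-to-cost ratio exceeds one; the paper's balking penalty and random yield modify this tradeoff.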

  16. Random maintenance policies

    CERN Document Server

    Nakagawa, Toshio

    2014-01-01

    Exploring random maintenance models, this book provides an introduction to the implementation of random maintenance, and it is one of the first books to be written on this subject.  It aims to help readers learn new techniques for applying random policies to actual reliability models, and it provides new theoretical analyses of various models including classical replacement, preventive maintenance and inspection policies. These policies are applied to scheduling problems, backup policies of database systems, maintenance policies of cumulative damage models, and reliability of random redundant systems. Reliability theory is a major concern for engineers and managers, and in light of Japan’s recent earthquake, the reliability of large-scale systems has increased in importance. This also highlights the need for a new notion of maintenance and reliability theory, and how this can practically be applied to systems. Providing an essential guide for engineers and managers specializing in reliability maintenance a...

  17. Statistical properties of several models of fractional random point processes

    Science.gov (United States)

    Bendjaballah, C.

    2011-08-01

    Statistical properties of several models of fractional random point processes have been analyzed from the counting and time interval statistics points of view. Based on the criterion of the reduced variance, it is seen that such processes exhibit nonclassical properties. The conditions for these processes to be treated as conditional Poisson processes are examined. Numerical simulations illustrate part of the theoretical calculations.

  18. Intimate partner violence in older women in Spain: prevalence, health consequences, and service utilization.

    Science.gov (United States)

    Montero, Isabel; Martín-Baena, David; Escribà-Agüir, Vicenta; Ruiz-Pérez, Isabel; Vives-Cases, Carmen; Talavera, Marta

    2013-01-01

    The purpose of this study is to estimate the prevalence of lifetime intimate partner violence (IPV) in older women and to analyze its effect on women's health and healthcare services utilization. Women aged 55 years and over (n = 1,676), randomly sampled from Primary Healthcare Services around Spain, were included. Lifetime IPV prevalence, types, and duration were calculated. Descriptive and multivariate procedures using logistic and multiple linear regression models were used. Of the women studied, 29.4% experienced IPV, with an average duration of 21 years. Regardless of the type of IPV experienced, abused women showed significantly poorer health and higher healthcare services utilization compared to women who had never been abused. The high prevalence detected, the long-standing duration of abuse, its negative health impact, and the high healthcare services utilization call attention to a need for increased efforts aimed at addressing IPV in older women.

  19. Twice random, once mixed: applying mixed models to simultaneously analyze random effects of language and participants.

    Science.gov (United States)

    Janssen, Dirk P

    2012-03-01

    Psychologists, psycholinguists, and other researchers using language stimuli have been struggling for more than 30 years with the problem of how to analyze experimental data that contain two crossed random effects (items and participants). The classical analysis of variance does not apply; alternatives have been proposed but have failed to catch on, and a statistically unsatisfactory procedure of using two approximations (known as F(1) and F(2)) has become the standard. A simple and elegant solution using mixed-model analysis has been available for 15 years, and recent improvements in statistical software have made mixed-model analysis widely available. The aim of this article is to increase the use of mixed models by giving a concise practical introduction and by giving clear directions for undertaking the analysis in the most popular statistical packages. The article also introduces the DJMIXED add-on package for SPSS, which makes entering the models and reporting their results as straightforward as possible.

  20. Modeling regulated water utility investment incentives

    Science.gov (United States)

    Padula, S.; Harou, J. J.

    2014-12-01

    This work attempts to model the infrastructure investment choices of privatized water utilities subject to rate-of-return and price-cap regulation. The goal is to understand how regulation influences water companies' investment decisions, such as their desire to engage in transfers with neighbouring companies. We formulate a profit-maximization capacity expansion model that finds the schedule of new supply, demand management and transfer schemes that maintain the annual supply-demand balance and maximize a company's profit under the 2010-15 price control process in England. Regulatory incentives for cost savings are also represented in the model. These include the CIS scheme for capital expenditure (capex) and incentive allowance schemes for operating expenditure (opex). The profit-maximizing investment program (what to build, when, and at what size) is compared with the least-cost program (the social optimum). We apply this formulation to several water companies in South East England to model performance and sensitivity to water network particulars. Results show that if companies are able to outperform the regulatory assumption on the cost of capital, a capital bias can be generated, because capital expenditure, unlike opex, can be remunerated through the companies' regulatory capital value (RCV). The occurrence of the 'capital bias', and its extent, depends on the extent to which a company can finance its investments at a rate below the allowed cost of capital. The bias can be reduced by regulatory penalties for underperformance on capital expenditure (the CIS scheme); sensitivity analysis can be applied by varying the CIS penalty to see how, and to what extent, this impacts the capital bias effect. We show how regulatory changes could potentially be devised to partially remove the 'capital bias' effect. Solutions potentially include allowing for incentives on total expenditure rather than separately for capex and opex and allowing

  1. Levy Random Bridges and the Modelling of Financial Information

    OpenAIRE

    Hoyle, Edward; Hughston, Lane P.; Macrina, Andrea

    2009-01-01

    The information-based asset-pricing framework of Brody, Hughston and Macrina (BHM) is extended to include a wider class of models for market information. In the BHM framework, each asset is associated with a collection of random cash flows. The price of the asset is the sum of the discounted conditional expectations of the cash flows. The conditional expectations are taken with respect to a filtration generated by a set of "information processes". The information processes carry imperfect inf...

  2. Cost-effectiveness and cost-utility of cognitive therapy, rational emotive behavioral therapy, and fluoxetine (Prozac) in treating depression: a randomized clinical trial.

    Science.gov (United States)

    Sava, Florin A; Yates, Brian T; Lupu, Viorel; Szentagotai, Aurora; David, Daniel

    2009-01-01

    Cost-effectiveness and cost-utility of cognitive therapy (CT), rational emotive behavioral therapy (REBT), and fluoxetine (Prozac) for major depressive disorder (MDD) were compared in a randomized clinical trial with a Romanian sample of 170 clients. Each intervention was offered for 14 weeks, plus three booster sessions. Beck Depression Inventory (BDI) scores were obtained prior to intervention, 7 and 14 weeks following the start of intervention, and 6 months following completion of intervention. CT, REBT, and fluoxetine did not differ significantly in changes in the BDI, depression-free days (DFDs), or Quality-Adjusted Life Years (QALYs). Average BDI scores decreased from 31.1 before treatment to 9.7 six months following completion of treatment. Due to lower costs, both psychotherapies were more cost-effective, and had better cost-utility, than pharmacotherapy: median $26.44/DFD gained/month for CT and $23.77/DFD gained/month for REBT versus $34.93/DFD gained/month for pharmacotherapy, median $/QALYs=$1,638, $1,734, and $2,287 for CT, REBT, and fluoxetine (Prozac), respectively. (c) 2008 Wiley Periodicals, Inc.

  3. A random forest based risk model for reliable and accurate prediction of receipt of transfusion in patients undergoing percutaneous coronary intervention.

    Directory of Open Access Journals (Sweden)

    Hitinder S Gurm

    BACKGROUND: Transfusion is a common complication of percutaneous coronary intervention (PCI) and is associated with adverse short- and long-term outcomes. There is no risk model for identifying patients most likely to receive transfusion after PCI. The objective of our study was to develop and validate a tool for predicting receipt of blood transfusion in patients undergoing contemporary PCI. METHODS: Random forest models were developed utilizing 45 pre-procedural clinical and laboratory variables to estimate the receipt of transfusion in patients undergoing PCI. The most influential variables were selected for inclusion in an abbreviated model. Model performance in estimating transfusion was evaluated in an independent validation dataset using area under the ROC curve (AUC), with net reclassification improvement (NRI) used to compare full and reduced model predictions after grouping into low, intermediate, and high risk categories. The impact of procedural anticoagulation on observed versus predicted transfusion rates was assessed for the different risk categories. RESULTS: Our study cohort comprised 103,294 PCI procedures performed at 46 hospitals from July 2009 through December 2012 in Michigan, of which 72,328 (70%) were randomly selected for training the models and 30,966 (30%) for validation. The models demonstrated excellent calibration and discrimination (full model AUC = 0.888 (95% CI 0.877-0.899), reduced model AUC = 0.880 (95% CI 0.868-0.892), p for difference = 0.003; NRI = 2.77%, p = 0.007). Procedural anticoagulation and radial access significantly influenced transfusion rates in the intermediate and high risk patients, but no clinically relevant impact was noted in low risk patients, who made up 70% of the total cohort. CONCLUSIONS: The risk of transfusion among patients undergoing PCI can be reliably calculated using a novel, easy-to-use computational tool (https://bmc2.org/calculators/transfusion). This risk prediction
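
    The net reclassification improvement used to compare the full and reduced models can be computed directly from category changes. A minimal sketch with hypothetical risk scores and cutoffs (the trial's actual categories and data are not reproduced here):

    ```python
    import numpy as np

    def categorical_nri(risk_old, risk_new, event, cutoffs=(0.05, 0.20)):
        """Net reclassification improvement over ordered risk categories:
        upward moves should be events, downward moves should be non-events."""
        cat_old = np.digitize(risk_old, cutoffs)
        cat_new = np.digitize(risk_new, cutoffs)
        up, down = cat_new > cat_old, cat_new < cat_old
        ev, ne = event == 1, event == 0
        nri_events = ((up & ev).sum() - (down & ev).sum()) / ev.sum()
        nri_nonev = ((down & ne).sum() - (up & ne).sum()) / ne.sum()
        return nri_events + nri_nonev

    # tiny worked example: one event reclassified upward, one non-event downward
    risk_old = np.array([0.10, 0.10, 0.10, 0.10])
    risk_new = np.array([0.30, 0.10, 0.02, 0.10])
    event = np.array([1, 1, 0, 0])
    nri = categorical_nri(risk_old, risk_new, event)   # 0.5 + 0.5 = 1.0
    ```

    A small positive NRI, as reported here, means the full model moves slightly more patients into the correct category than the reduced model.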

  4. Auto Regressive Moving Average (ARMA) Modeling Method for Gyro Random Noise Using a Robust Kalman Filter

    Science.gov (United States)

    Huang, Lei

    2015-01-01

    To solve the problem that conventional ARMA modeling methods for gyro random noise require a large number of samples and converge slowly, an ARMA modeling method using robust Kalman filtering is developed. The ARMA model parameters are employed as state arguments, and time-varying estimators are used to track the unknown mean and variance of the observation noise. Using robust Kalman filtering, the ARMA model parameters are estimated accurately. The developed ARMA modeling method has the advantages of rapid convergence and high accuracy, so the required sample size is reduced. It can be applied in modeling applications for gyro random noise where a fast and accurate ARMA modeling method is required. PMID:26437409
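
    The idea of treating model parameters as Kalman state can be sketched for the pure AR part, where a Kalman filter with a static state reduces to recursive least squares. This toy assumes a known, fixed observation-noise variance, unlike the robust time-varying noise estimator described in the abstract:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    a_true = np.array([0.5, -0.3])     # hypothetical AR(2) "gyro noise" parameters
    n = 3000
    y = np.zeros(n)
    for t in range(2, n):
        y[t] = a_true @ np.array([y[t - 1], y[t - 2]]) + 0.1 * rng.normal()

    # Kalman filter with the AR parameters as a static state vector:
    #   state eq.  theta_t = theta_{t-1};   obs eq.  y_t = h_t^T theta_t + e_t
    theta = np.zeros(2)
    P = np.eye(2) * 10.0               # initial parameter covariance
    R = 0.1 ** 2                       # observation-noise variance (known here)
    for t in range(2, n):
        h = np.array([y[t - 1], y[t - 2]])
        S = h @ P @ h + R              # innovation variance
        K = P @ h / S                  # Kalman gain
        theta = theta + K * (y[t] - h @ theta)
        P = P - np.outer(K, h) @ P
    # theta converges toward a_true = [0.5, -0.3]
    ```

    The robust method in the paper replaces the fixed `R` with online estimates of the observation-noise mean and variance, which is what allows it to cope with fewer samples.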

  5. Curvature of random walks and random polygons in confinement

    International Nuclear Information System (INIS)

    Diao, Y; Ernst, C; Montemayor, A; Ziegler, U

    2013-01-01

    The purpose of this paper is to study the curvature of equilateral random walks and polygons that are confined in a sphere. Curvature is one of several basic geometric properties that can be used to describe random walks and polygons. We show that confinement affects curvature quite strongly, and in the limit case where the confinement diameter equals the edge length, the unconfined expected curvature value doubles from π/2 to π. To study curvature, a simple model of an equilateral random walk in spherical confinement in dimensions 2 and 3 is introduced. For this simple model we derive explicit integral expressions for the expected value of the total curvature in both dimensions. These expressions are functions that depend only on the radius R of the confinement sphere. We then show that the values obtained by numerical integration of these expressions agree with numerical average curvature estimates obtained from simulations of random walks. Finally, we compare the confinement effect on curvature of random walks with that of random polygons. (paper)
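
    The unconfined baseline E[θ] = π/2 per joint, and the qualitative effect of confinement, can be checked by simulation. Note the confined walk below uses crude step-wise rejection, which is not the exact uniform measure over confined walks studied in the paper, so it only illustrates the direction of the effect:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def random_direction(rng):
        """One uniform random unit vector in 3D."""
        v = rng.normal(size=3)
        return v / np.linalg.norm(v)

    def mean_turning_angle(n_steps, R=None, rng=rng):
        """Mean turning angle of an equilateral walk; if R is given, a crude
        step-wise rejection keeps every vertex inside a sphere of radius R."""
        pos, dirs = np.zeros(3), []
        for _ in range(n_steps):
            while True:
                d = random_direction(rng)
                if R is None or np.linalg.norm(pos + d) <= R:
                    break
            dirs.append(d)
            pos = pos + d
        dirs = np.array(dirs)
        cosang = np.clip((dirs[:-1] * dirs[1:]).sum(axis=1), -1.0, 1.0)
        return np.arccos(cosang).mean()

    free = mean_turning_angle(4000)          # expected ~ pi/2 unconfined
    tight = mean_turning_angle(4000, R=1.0)  # confinement pushes curvature up toward pi
    ```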

  6. Universality of correlation functions in random matrix models of QCD

    International Nuclear Information System (INIS)

    Jackson, A.D.; Sener, M.K.; Verbaarschot, J.J.M.

    1997-01-01

    We demonstrate the universality of the spectral correlation functions of a QCD inspired random matrix model that consists of a random part having the chiral structure of the QCD Dirac operator and a deterministic part which describes a schematic temperature dependence. We calculate the correlation functions analytically using the technique of Itzykson-Zuber integrals for arbitrary complex supermatrices. An alternative exact calculation for arbitrary matrix size is given for the special case of zero temperature, and we reproduce the well-known Laguerre kernel. At finite temperature, the microscopic limit of the correlation functions are calculated in the saddle-point approximation. The main result of this paper is that the microscopic universality of correlation functions is maintained even though unitary invariance is broken by the addition of a deterministic matrix to the ensemble. (orig.)

  7. A randomized trial of maternal influenza immunization decision-making: A test of persuasive messaging models.

    Science.gov (United States)

    Frew, Paula M; Kriss, Jennifer L; Chamberlain, Allison T; Malik, Fauzia; Chung, Yunmi; Cortés, Marielysse; Omer, Saad B

    2016-08-02

    We sought to examine the effectiveness of persuasive communication interventions on influenza vaccination uptake among black/African American pregnant women in Atlanta, Georgia. We recruited black/African American pregnant women ages 18 to 50 y from Atlanta, GA to participate in a prospective, randomized controlled trial of influenza immunization messaging conducted from January to April 2013. Eligible participants were randomized to 3 study arms. We conducted follow-up questionnaires on influenza immunization at 30 days post-partum with all groups. Chi-square and t tests evaluated group differences, and intention-to-treat outcome assessments used log-binomial regression models. Of the 106 enrolled, 95 women completed the study (90% retention), of which 31 were randomly assigned to the affective messaging intervention ("Pregnant Pause" video), 30 to the cognitive messaging intervention ("Vaccines for a Healthy Pregnancy" video), and 34 to a comparison condition (receipt of the Influenza Vaccine Information Statement). The three groups were balanced on baseline demographic characteristics and reported health behaviors. At baseline, most women (63%, n = 60) reported no receipt of seasonal influenza immunization during the previous 5 y. They expressed a low likelihood (2.1 ± 2.8 on a 0-10 scale) of obtaining influenza immunization during their current pregnancy. At 30-days postpartum follow-up, influenza immunization was low among all participants (7-13%), demonstrating no effect after a single exposure to either the affective messaging (RR = 1.10; 95% CI: 0.30-4.01) or the cognitive messaging intervention (RR = 0.57; 95% CI: 0.11-2.88). Women cited various reasons for not obtaining maternal influenza immunizations. These included concern about vaccine harm (47%, n = 40), low perceived influenza infection risk (31%, n = 26), and a history of immunization nonreceipt (24%, n = 20). The findings reflect the limitations associated with a single exposure to varying maternal influenza

  8. Effect of disorder on condensation in the lattice gas model on a random graph.

    Science.gov (United States)

    Handford, Thomas P; Dear, Alexander; Pérez-Reche, Francisco J; Taraskin, Sergei N

    2014-07-01

    The lattice gas model of condensation in a heterogeneous pore system, represented by a random graph of cells, is studied using an exact analytical solution. A binary mixture of pore cells with different coordination numbers is shown to exhibit two phase transitions as a function of chemical potential in a certain temperature range. Heterogeneity in interaction strengths is demonstrated to reduce the critical temperature and, for large-enough degrees of disorder, divides the cells into ones which are either on average occupied or unoccupied. Despite treating the pore space loops in a simplified manner, the random-graph model provides a good description of condensation in porous structures containing loops. This is illustrated by considering capillary condensation in a structural model of mesoporous silica SBA-15.

  9. A cost-utility analysis of risk model-guided versus physician's choice antiemetic prophylaxis in patients receiving chemotherapy for early-stage breast cancer: a net benefit regression approach.

    Science.gov (United States)

    Thavorn, Kednapa; Coyle, Doug; Hoch, Jeffrey S; Vandermeer, Lisa; Mazzarello, Sasha; Wang, Zhou; Dranitsaris, George; Fergusson, Dean; Clemons, Mark

    2017-08-01

    We assessed the cost-effectiveness of a risk model-guided (RMG) antiemetic prophylaxis strategy compared with the physician's choice (PC) strategy in patients receiving chemotherapy for early-stage breast cancer. We conducted a cost-utility analysis based on a published randomized controlled trial of 324 patients with early-stage breast cancer undergoing chemotherapy at two Canadian cancer centers. Patients were randomized to receive their antiemetic treatments according to either predefined risk scores or the treating physician's preference. Effectiveness was measured as quality-adjusted life years (QALYs) gained. Cost and utility data were obtained from the Canadian published literature. We used generalized estimating equations to estimate the incremental cost-effectiveness ratios (ICERs) and 95% confidence intervals (CIs) over a range of willingness-to-pay values. The lower and upper bounds of the 95% CIs were used to characterize the statistical uncertainty of the cost-effectiveness estimates and construct cost-effectiveness acceptability curves. From the health care system's perspective, the RMG strategy was associated with greater QALYs gained (0.0016, 95% CI 0.0009, 0.0022) and higher cost ($49.19, 95% CI $24.87, $73.08) than the PC strategy, resulting in an ICER of $30,864.28 (95% CI $14,718.98, $62,789.04). At the commonly used threshold of $50,000/QALY, the probability that RMG prophylaxis is cost-effective was >94%; this probability increased with greater willingness-to-pay values. Risk-guided antiemetic prophylaxis is an economically attractive option for patients receiving chemotherapy for early-stage breast cancer. This information supports the implementation of risk prediction models to guide chemotherapy-induced nausea and vomiting prophylaxis in clinical practice.
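
    The abstract's headline numbers can be checked with the basic cost-utility arithmetic. The naive ratio below differs slightly from the reported $30,864.28/QALY because the trial estimated the ICER with generalized estimating equations rather than a simple division:

    ```python
    delta_qaly = 0.0016   # incremental effectiveness (QALYs), from the abstract
    delta_cost = 49.19    # incremental cost, from the abstract

    icer = delta_cost / delta_qaly          # naive point-estimate ICER, ~ $30,744/QALY

    wtp = 50_000.0                          # willingness-to-pay threshold per QALY
    inmb = wtp * delta_qaly - delta_cost    # incremental net monetary benefit, ~ $30.81
    cost_effective = inmb > 0               # positive at this threshold: RMG preferred
    ```

    The acceptability curve in the trial is essentially the probability that `inmb > 0` as `wtp` varies, taken over the sampling uncertainty of the cost and QALY differences.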

  10. Quantum randomness and unpredictability

    Energy Technology Data Exchange (ETDEWEB)

    Jaeger, Gregg [Quantum Communication and Measurement Laboratory, Department of Electrical and Computer Engineering and Division of Natural Science and Mathematics, Boston University, Boston, MA (United States)

    2017-06-15

    Quantum mechanics is a physical theory supplying probabilities corresponding to expectation values for measurement outcomes. Indeed, its formalism can be constructed with measurement as a fundamental process, as was done by Schwinger, provided that individual measurement outcomes occur in a random way. The randomness appearing in quantum mechanics, as with other forms of randomness, has often been considered equivalent to a form of indeterminism. Here, it is argued that quantum randomness should instead be understood as a form of unpredictability because, amongst other things, indeterminism is not a necessary condition for randomness. For concreteness, an explication of the randomness of quantum mechanics as the unpredictability of quantum measurement outcomes is provided. Finally, it is shown how this view can be combined with the recently introduced view that the very appearance of individual quantum measurement outcomes can be grounded in the Plenitude principle of Leibniz, a principle variants of which have been utilized in physics by Dirac and Gell-Mann in relation to the fundamental processes. This move provides further support to Schwinger's "symbolic" derivation of quantum mechanics from measurement. (copyright 2016 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  11. Pedestrian Walking Behavior Revealed through a Random Walk Model

    Directory of Open Access Journals (Sweden)

    Hui Xiong

    2012-01-01

    Full Text Available This paper applies the method of continuous-time random walks to pedestrian flow simulation. In the model, pedestrians can walk forward or backward and turn left or right if there is no blockage. The velocities of the pedestrian flow moving forward or diffusing are governed by model coefficients, and the waiting time preceding each jump is assumed to follow an exponential distribution. To solve the model, which takes the form of a second-order two-dimensional partial differential equation, a high-order compact scheme with the alternating direction implicit method is employed. In the numerical experiments, the first walking domain is two-dimensional with two entrances and one exit, and the second is two-dimensional with one entrance and one exit. The flows in both scenarios are one-way. Numerical results show that the model can be used for pedestrian flow simulation.
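    The model's jump-and-wait assumption can be illustrated with a minimal Monte Carlo sketch. The paper itself solves a PDE rather than simulating walkers; the lattice steps, jump rate, and unbounded domain below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def ctrw_2d(n_jumps, rate=1.0):
    """Continuous-time random walk on a 2-D lattice: each jump moves one
    unit forward, backward, left, or right; waiting times between jumps
    are exponential with the given rate, as the model assumes."""
    times = np.cumsum(rng.exponential(1.0 / rate, n_jumps))
    moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])
    steps = moves[rng.integers(0, 4, n_jumps)]
    positions = np.cumsum(steps, axis=0)
    return times, positions

times, pos = ctrw_2d(5_000)
# For an unbiased walk the mean squared displacement grows ~linearly in time.
print(times[-1], (pos[-1] ** 2).sum())
```

    Biasing the move probabilities toward "forward" would mimic the directed flow component of the pedestrian model.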

  12. Statistical Shape Modelling and Markov Random Field Restoration (invited tutorial and exercise)

    DEFF Research Database (Denmark)

    Hilger, Klaus Baggesen

    This tutorial focuses on statistical shape analysis using point distribution models (PDM), which are widely used in modelling biological shape variability over a set of annotated training data. Furthermore, Active Shape Models (ASM) and Active Appearance Models (AAM) are based on PDMs and have proven...... deformation field between shapes. The tutorial demonstrates both generative active shape and appearance models, and MRF restoration on 3D polygonized surfaces. "Exercise: Spectral-Spatial classification of multivariate images" From annotated training data this exercise applies spatial image restoration...... using Markov random field relaxation of a spectral classifier. Keywords: the Ising model, the Potts model, stochastic sampling, discriminant analysis, expectation maximization....

  13. Aggregate modeling of fast-acting demand response and control under real-time pricing

    International Nuclear Information System (INIS)

    Chassin, David P.; Rondeau, Daniel

    2016-01-01

    Highlights: • Demand elasticity for fast-acting demand response load under real-time pricing. • Validated first-principles logistic demand curve matches the random utility model. • Logistic demand curve suitable for diversified aggregate loads in market-based transactive control systems. - Abstract: This paper develops and assesses the performance of a short-term demand response (DR) model for utility load control, with applications to resource planning and control design. Long-term response models tend to underestimate short-term demand response when it is induced by prices. This has two important consequences. First, planning studies tend to undervalue DR and often overlook its benefits in utility demand management program development. Second, when DR is not overlooked, the open-loop DR control gain estimate may be too low. This can result in overuse of load resources, control instability, and excessive price volatility. Our objective is therefore to develop a more accurate and better-performing short-term demand response model. We construct the model from first principles about the nature of thermostatic load control and show that the resulting formulation corresponds exactly to the Random Utility Model employed in economics to study consumer choice. The model is tested against empirical data collected from field demonstration projects and is shown to perform better than alternative models commonly used to forecast demand in normal operating conditions. The results suggest that (1) existing utility tariffs appear to be inadequate to incentivize demand response, particularly in the presence of high renewables, and (2) existing load control systems run the risk of becoming unstable if utilities close the loop on real-time prices.
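    The correspondence between a binary-logit random utility choice and a logistic demand curve can be sketched directly. The functional form is the standard logit result; the parameter names and values below are illustrative, not taken from the paper:

```python
import numpy as np

def logit_demand(price, q_max=100.0, p_ref=50.0, eta=0.1):
    """Aggregate demand from a binary-logit random utility choice: each load
    consumes iff its utility of consuming at `price` beats curtailing; with
    i.i.d. Gumbel noise the aggregate response is this logistic curve.
    Parameter names and values are illustrative."""
    return q_max / (1.0 + np.exp(eta * (price - p_ref)))

prices = np.linspace(0.0, 100.0, 5)
print(logit_demand(prices).round(1))
```

    Demand falls monotonically in price and equals half the diversified load at the reference price, which is the shape the validated first-principles curve takes.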

  14. Modeling and optimization of a utility system containing multiple extractions steam turbines

    International Nuclear Information System (INIS)

    Luo, Xianglong; Zhang, Bingjian; Chen, Ying; Mo, Songping

    2011-01-01

    Complex turbines with multiple controlled and/or uncontrolled extractions are widely used in the processing industry and in cogeneration plants to provide steam at different pressure levels, electric power, and driving power. To characterize their thermodynamic behavior under varying conditions, nonlinear mathematical models are developed based on energy balances, thermodynamic principles, and semi-empirical equations. First, the complex turbine is decomposed at the controlled extraction stages into several simple turbines modeled in series. The turbine hardware model (THM) concept is applied to predict the isentropic efficiency of the decomposed simple turbines, and Stodola's formulation is used to simulate the uncontrolled extraction steam parameters. The thermodynamic properties of steam and water are regressed through linearization or piece-wise linearization. Second, the simulated results of the proposed model are compared with the data in the working-condition diagram provided by the manufacturer over a wide range of operations. The simulation results show small deviations from the diagram data, with a maximum modeling error of 0.87% among the seven compared operating conditions. Last, an optimization model of a utility system containing multiple extraction turbines is established and a detailed case is analyzed. Compared with the conventional operation strategy, a maximum of 5.47% of the total operating cost is saved using the proposed optimization model. -- Highlights: → We develop a complete simulation model for steam turbines with multiple extractions. → We test the simulation model using the performance data of commercial turbines. → The simulation error of electric power generation is no more than 0.87%. → We establish a utility system operational optimization model. → The optimal operation scheme features a 5.47% cost saving.
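    Stodola's formulation mentioned above can be sketched as the classic ellipse law relating section mass flow to inlet and outlet pressures. The reference conditions below are invented for illustration, and the inlet-temperature correction is omitted:

```python
import math

def stodola_flow(p_in, p_out, m_ref, p_in_ref, p_out_ref):
    """Stodola ellipse law: section mass flow scales as
    m = m_ref * sqrt((p_in**2 - p_out**2) / (p_in_ref**2 - p_out_ref**2)).
    The inlet-temperature correction is omitted; values are illustrative."""
    return m_ref * math.sqrt((p_in ** 2 - p_out ** 2)
                             / (p_in_ref ** 2 - p_out_ref ** 2))

# Off-design flow for a section rated at 100 kg/s between 8 MPa and 1 MPa,
# when the inlet pressure drops to 7 MPa:
print(round(stodola_flow(7.0, 1.0, 100.0, 8.0, 1.0), 1))
```

    Relations of this kind let the uncontrolled extraction pressures be back-calculated from the flow splits when the turbine runs off-design.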

  15. Utility Function and Optimum Consumption in the models with Habit Formation and Catching up with the Joneses

    OpenAIRE

    Naryshkin, Roman; Davison, Matt

    2009-01-01

    This paper analyzes popular time-nonseparable utility functions that describe "habit formation" consumer preferences, comparing current consumption with the time-averaged past consumption of the same individual, and "catching up with the Joneses" (CuJ) models, comparing individual consumption with a cross-sectional average consumption level. Few of these models give reasonable optimum consumption time series. We introduce theoretically justified utility specifications leading to a plausible cons...
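    A minimal numeric sketch of a habit formation felicity function is below. The CRRA-over-surplus form and all parameter values are illustrative assumptions, not the specifications proposed in the paper:

```python
import numpy as np

def habit_utility(c, habit, gamma=0.7, rho=2.0):
    """CRRA felicity over 'surplus' consumption above a fraction of the
    habit level; functional form and parameters are illustrative."""
    surplus = c - gamma * habit
    return surplus ** (1 - rho) / (1 - rho) if surplus > 0 else -np.inf

# Habit as the running average of past consumption along a growing path.
path = [1.0, 1.1, 1.2, 1.3]
habits = np.concatenate(([path[0]], np.cumsum(path)[:-1] / np.arange(1, 4)))
u = [habit_utility(c, h) for c, h in zip(path, habits)]
print(np.round(u, 3))
```

    A CuJ variant would replace the individual's own running average with a cross-sectional average as the habit argument.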

  16. Superdiffusion in a non-Markovian random walk model with a Gaussian memory profile

    Science.gov (United States)

    Borges, G. M.; Ferreira, A. S.; da Silva, M. A. A.; Cressoni, J. C.; Viswanathan, G. M.; Mariz, A. M.

    2012-09-01

    Most superdiffusive non-Markovian random walk models assume that correlations are maintained at all time scales, e.g., fractional Brownian motion, Lévy walks, and the Elephant walk and Alzheimer walk models. In the latter two models the random walker can always "remember" the initial times near t = 0. Assuming jump size distributions with finite variance, the question naturally arises: is superdiffusion possible if the walker is unable to recall the initial times? We give a conclusive answer to this general question by studying a non-Markovian model in which the walker's memory of the past is weighted by a Gaussian centered at time t/2, at which time the walker had one half its present age, and with a standard deviation σt which grows linearly as the walker ages. For large widths we find that the model behaves similarly to the Elephant model, but for small widths this Gaussian memory profile model behaves like the Alzheimer walk model. We also report that the phenomenon of amnestically induced persistence, known to occur in the Alzheimer walk model, arises in the Gaussian memory profile model. We conclude that memory of the initial times is not a necessary condition for generating (log-periodic) superdiffusion. We show that the phenomenon of amnestically induced persistence extends to the case of a Gaussian memory profile.
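    The Gaussian memory profile can be sketched as an Elephant-walk variant in which the recalled step is drawn with Gaussian weights centered at t/2. The parameters below are illustrative and the implementation is a naive O(n²) sketch, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_memory_walk(n, p=0.75, width_frac=0.1):
    """Elephant-walk variant: at each time t the walker recalls one past
    step, drawn with Gaussian weights centered at t/2 with standard
    deviation width_frac * t, and repeats it with probability p (else
    reverses it). Naive O(n^2) sketch; parameters are illustrative."""
    steps = np.empty(n, dtype=int)
    steps[0] = 1
    for t in range(1, n):
        idx = np.arange(t)
        w = np.exp(-0.5 * ((idx - t / 2) / (width_frac * t)) ** 2)
        recalled = steps[rng.choice(t, p=w / w.sum())]
        steps[t] = recalled if rng.random() < p else -recalled
    return np.cumsum(steps)

x = gaussian_memory_walk(2_000)
print(x[-1])
```

    Sweeping `p` below 1/2 is how amnestically induced persistence would be probed in this kind of model.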

  17. Random-field Potts model for the polar domains of lead magnesium niobate and lead scandium tantalate

    Energy Technology Data Exchange (ETDEWEB)

    Qian, H.; Bursill, L.A.

    1997-06-01

    A random-field Potts model is used to establish the spatial relationship between the nanoscale distribution of charged chemical defects and nanoscale polar domains for the perovskite-based relaxor materials lead magnesium niobate (PMN) and lead scandium tantalate (PST). The random fields are not set stochastically but are determined initially by the distribution of B-site cations (Mg, Nb) or (Sc, Ta) generated by Monte Carlo NNNI-model simulations for the chemical defects. An appropriate random-field Potts model is derived and algorithms are developed for a 2D lattice. It is shown that the local fields are strongly correlated with the chemical domain walls, and the development of polar domains with decreasing temperature is simulated for the two cases of PMN and PST. The dynamics of the polar clusters is also discussed. 33 refs., 9 figs.
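    A generic random-field Potts sampler conveys the basic ingredients. Note the paper derives its fields from simulated B-site cation distributions, whereas this sketch (with illustrative parameters) simply quenches a random preferred state at each site:

```python
import numpy as np

rng = np.random.default_rng(2)

def local_energy(spins, fields, i, j, val, J, h, L):
    nn = (spins[(i + 1) % L, j], spins[(i - 1) % L, j],
          spins[i, (j + 1) % L], spins[i, (j - 1) % L])
    # Ferromagnetic Potts coupling plus a field favoring the local preferred state.
    return -J * sum(int(val == s) for s in nn) - h * int(val == fields[i, j])

def metropolis_rf_potts(L=16, q=3, J=1.0, h=1.0, T=0.8, sweeps=100):
    spins = rng.integers(0, q, (L, L))
    fields = rng.integers(0, q, (L, L))   # quenched preferred state per site
    for _ in range(sweeps * L * L):
        i, j = rng.integers(0, L, 2)
        new = rng.integers(0, q)
        dE = (local_energy(spins, fields, i, j, new, J, h, L)
              - local_energy(spins, fields, i, j, spins[i, j], J, h, L))
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] = new
    return spins, fields

spins, fields = metropolis_rf_potts()
print((spins == fields).mean())   # fraction of sites aligned with their field
```

    Lowering T grows ferromagnetic domains whose walls tend to pin where the quenched fields conflict, which is the qualitative effect the paper studies.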

  18. Why environmental and resource economists should care about non-expected utility models

    Energy Technology Data Exchange (ETDEWEB)

    Shaw, W. Douglass; Woodward, Richard T. [Department of Agricultural Economics, Texas A&M University (United States)

    2008-01-15

    Experimental and theoretical analysis has shown that the conventional expected utility (EU) and subjective expected utility (SEU) models, which are linear in probabilities, have serious limitations in certain situations. We argue here that these limitations are often highly relevant to the work that environmental and natural resource economists do. We discuss some of the experimental evidence and alternatives to the SEU. We consider the theory used, the problems studied, and the methods employed by resource economists. Finally, we highlight some recent work that has begun to use some of the alternatives to the EU and SEU frameworks and discuss areas where much future work is needed. (author)

  19. Random and non-random mating populations: Evolutionary dynamics in meiotic drive.

    Science.gov (United States)

    Sarkar, Bijan

    2016-01-01

    Game theoretic tools are utilized to analyze a one-locus continuous selection model of sex-specific meiotic drive by considering the nonequivalence of the viabilities of reciprocal heterozygotes that might be noticed at an imprinted locus. The model draws attention to the role of viability selections of different types in examining the stable nature of polymorphic equilibrium. A bridge between population genetics and evolutionary game theory is built by applying the concept of the Fundamental Theorem of Natural Selection. In addition to pointing out the influences of male and female segregation ratios on selection, the configuration structure reveals some notable results, e.g., Hardy-Weinberg frequencies hold in replicator dynamics, faster evolution occurs at maximized variance fitness, a mixed Evolutionarily Stable Strategy (ESS) exists in asymmetric games, and evolution tends to follow not only a 1:1 sex ratio but also a 1:1 ratio of different alleles at a particular gene locus. Through the construction of replicator dynamics in the group selection framework, our selection model redefines the basis of game theory to incorporate non-random mating, where a mating parameter associated with population structure depends on the social structure. The model also exposes the fact that the number of polymorphic equilibria depends on the algebraic expression of the population structure. Copyright © 2015 Elsevier Inc. All rights reserved.
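    The replicator dynamics invoked above can be sketched for a generic symmetric 2x2 game. The Hawk-Dove payoffs below are a textbook illustration of a mixed ESS, not the paper's meiotic drive model:

```python
import numpy as np

def replicator(x0, A, steps=2_000, dt=0.01):
    """Euler integration of replicator dynamics x_i' = x_i((Ax)_i - x.Ax)."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        f = A @ x
        x += dt * x * (f - x @ f)
        x = np.clip(x, 0.0, None)
        x /= x.sum()
    return x

# Hawk-Dove payoffs with V = 2, C = 4: the mixed ESS is x_Hawk = V/C = 0.5.
A = np.array([[(2.0 - 4.0) / 2.0, 2.0],
              [0.0, 1.0]])
x_star = replicator([0.2, 0.8], A)
print(x_star.round(3))
```

    The dynamics converge to the interior mixed ESS from any interior starting point, mirroring the stable polymorphic equilibria discussed in the abstract.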

  20. The utility of comparative models and the local model quality for protein crystal structure determination by Molecular Replacement

    Directory of Open Access Journals (Sweden)

    Pawlowski Marcin

    2012-11-01

    Full Text Available Abstract Background Computational models of protein structures have proved useful as search models in Molecular Replacement (MR), a common method to solve the phase problem faced by macromolecular crystallography. The success of MR depends on the accuracy of a search model. Unfortunately, this parameter remains unknown until the final structure of the target protein is determined. During the last few years, several Model Quality Assessment Programs (MQAPs) that predict the local accuracy of theoretical models have been developed. In this article, we analyze whether the application of MQAPs improves the utility of theoretical models in MR. Results For our dataset of 615 search models, using the real local accuracy of a model increases the MR success ratio by 101% compared to corresponding polyalanine templates. On the contrary, when local model quality is not utilized in MR, the computational models solved only 4.5% more MR searches than polyalanine templates. For the same dataset of 615 models, a workflow combining MR with the predicted local accuracy of a model found 45% more correct solutions than polyalanine templates. To predict such accuracy, MetaMQAPclust, a "clustering MQAP", was used. Conclusions Using comparative models only marginally increases the MR success ratio in comparison to polyalanine structures of templates. However, the situation changes dramatically once comparative models are used together with their predicted local accuracy. A new functionality was added to the GeneSilico Fold Prediction Metaserver in order to build models that are more useful for MR searches. Additionally, we have developed a simple method, AmIgoMR (Am I good for MR?), to predict if an MR search with a template-based model for a given template is likely to find the correct solution.

  1. On adjustment for auxiliary covariates in additive hazard models for the analysis of randomized experiments

    DEFF Research Database (Denmark)

    Vansteelandt, S.; Martinussen, Torben; Tchetgen, E. J Tchetgen

    2014-01-01

    We consider additive hazard models (Aalen, 1989) for the effect of a randomized treatment on a survival outcome, adjusting for auxiliary baseline covariates. We demonstrate that the Aalen least-squares estimator of the treatment effect parameter is asymptotically unbiased, even when the hazard...... that, in view of its robustness against model misspecification, Aalen least-squares estimation is attractive for evaluating treatment effects on a survival outcome in randomized experiments, and the primary reasons to consider baseline covariate adjustment in such settings could be interest in subgroup......'s dependence on time or on the auxiliary covariates is misspecified, and even away from the null hypothesis of no treatment effect. We furthermore show that adjustment for auxiliary baseline covariates does not change the asymptotic variance of the estimator of the effect of a randomized treatment. We conclude...

  2. Statistical Downscaling of Temperature with the Random Forest Model

    Directory of Open Access Journals (Sweden)

    Bo Pang

    2017-01-01

    Full Text Available The issues with downscaling the outputs of a global climate model (GCM) to a regional scale appropriate for hydrological impact studies are investigated using the random forest (RF) model, which has been shown to be superior for large dataset analysis and variable importance evaluation. The RF is proposed for downscaling daily mean temperature in the Pearl River basin in southern China. Four downscaling models were developed and validated by using the observed temperature series from 61 national stations and large-scale predictor variables derived from the National Center for Environmental Prediction–National Center for Atmospheric Research reanalysis dataset. The proposed RF downscaling model was compared to multiple linear regression, artificial neural network, and support vector machine models. Principal component analysis (PCA) and partial correlation analysis (PAR) were used in the predictor selection for the other models for a comprehensive study. It was shown that the model efficiency of the RF model was higher than that of the other models according to five selected criteria. By evaluating predictor importance, the RF could choose the best predictor combination without using PCA and PAR. The results indicate that the RF is a feasible tool for the statistical downscaling of temperature.
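    The core RF workflow (fit, predict, read off predictor importances) can be sketched on synthetic data. The scikit-learn API is assumed, and the predictors and station series below are invented stand-ins for the reanalysis and observation data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)

# Invented stand-ins: 6 large-scale predictors (e.g. geopotential height,
# humidity, pressure) and a station temperature driven mainly by the first two.
X = rng.normal(size=(500, 6))
y = 15.0 + 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(0.0, 0.5, 500)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:400], y[:400])
pred = rf.predict(X[400:])
rmse = float(np.sqrt(np.mean((pred - y[400:]) ** 2)))

# Built-in importances let the RF rank predictors without a separate PCA step.
print(rf.feature_importances_.round(2), f"RMSE = {rmse:.2f}")
```

    The importance ranking recovering the truly informative predictors is exactly the property that lets the RF skip the PCA/PAR selection step used by the comparison models.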

  3. ASPEN+ and economic modeling of equine waste utilization for localized hot water heating via fast pyrolysis

    Science.gov (United States)

    ASPEN Plus based simulation models have been developed to design a pyrolysis process for the on-site production and utilization of pyrolysis oil from equine waste at the Equine Rehabilitation Center at Morrisville State College (MSC). The results indicate that utilization of all available Equine Reh...

  4. A Bayesian Analysis of a Random Effects Small Business Loan Credit Scoring Model

    Directory of Open Access Journals (Sweden)

    Patrick J. Farrell

    2011-09-01

    Full Text Available One of the most important aspects of credit scoring is constructing a model that has low misclassification rates and is also flexible enough to allow for random variation. It is also well known that, when there are a large number of highly correlated variables, as is typical in studies involving questionnaire data, a method must be found to reduce the number of variables to those that have high predictive power. Here we propose a Bayesian multivariate logistic regression model with both fixed and random effects for small business loan credit scoring, and a variable reduction method using Bayes factors. The method is illustrated on an interesting data set based on questionnaires sent to loan officers in Canadian banks and venture capital companies.
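    A minimal Bayesian logistic regression can be sketched with random-walk Metropolis. This fixed-effects, single-predictor toy on synthetic data, with an assumed normal prior, only hints at the paper's mixed-effects, Bayes-factor machinery:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic loan data: one applicant-level predictor, true coefficients (-1, 2).
n = 400
x = rng.normal(size=n)
y = rng.random(n) < 1.0 / (1.0 + np.exp(-(-1.0 + 2.0 * x)))

def log_post(beta):
    """Bernoulli log likelihood plus an assumed N(0, 10^2) prior per coefficient."""
    eta = beta[0] + beta[1] * x
    return np.sum(y * eta - np.log1p(np.exp(eta))) - beta @ beta / 200.0

beta, lp = np.zeros(2), log_post(np.zeros(2))
chain = []
for _ in range(5_000):
    prop = beta + rng.normal(0.0, 0.15, 2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept step
        beta, lp = prop, lp_prop
    chain.append(beta.copy())

est = np.mean(chain[2_500:], axis=0)   # posterior mean after burn-in
print(est.round(2))
```

    Adding a random intercept per bank and comparing marginal likelihoods of nested predictor sets would move this toy toward the model the paper proposes.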

  5. Theoretical model of the density of states of random binary alloys

    International Nuclear Information System (INIS)

    Zekri, N.; Brezini, A.

    1991-09-01

    A theoretical formulation of the density of states for random binary alloys is examined based on a mean field treatment. The present model includes both diagonal and off-diagonal disorder and also short-range order. Extensive results are reported for various concentrations and compared to other calculations. (author). 22 refs, 6 figs

  6. Utility Estimation for Pediatric Vesicoureteral Reflux: Methodological Considerations Using an Online Survey Platform.

    Science.gov (United States)

    Tejwani, Rohit; Wang, Hsin-Hsiao S; Lloyd, Jessica C; Kokorowski, Paul J; Nelson, Caleb P; Routh, Jonathan C

    2017-03-01

    The advent of online task distribution has opened a new avenue for efficiently gathering the community perspectives needed for utility estimation. Methodological consensus for estimating pediatric utilities is lacking, with disagreement over whom to sample, what perspective to use (patient vs parent), and whether instrument-induced anchoring bias is significant. We evaluated which methodological factors potentially impact utility estimates for vesicoureteral reflux. Cross-sectional surveys using a time trade-off instrument were conducted via the Amazon Mechanical Turk® (https://www.mturk.com) online interface. Respondents were randomized to answer questions from child, parent or dyad perspectives on the utility of a vesicoureteral reflux health state and 1 of 3 "warm-up" scenarios (paralysis, common cold, none) before a vesicoureteral reflux scenario. Utility estimates and potential predictors were fitted to a generalized linear model to determine which factors most impacted utilities. A total of 1,627 responses were obtained. Mean respondent age was 34.9 years. Of the respondents 48% were female, 38% were married and 44% had children. Utility values were uninfluenced by child/personal vesicoureteral reflux/urinary tract infection history, income or race. Utilities were affected by perspective and were higher in the child group (34% lower in parent vs child, p …) … pediatric conditions. Copyright © 2017 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
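    The time trade-off scoring underlying such surveys is simple to state. The sketch below uses the standard TTO formula; the example numbers are invented:

```python
def tto_utility(years_traded, time_horizon):
    """Standard time trade-off scoring: a respondent indifferent between
    `time_horizon` years in the health state and `time_horizon - years_traded`
    years in full health implies utility (horizon - traded) / horizon."""
    return (time_horizon - years_traded) / time_horizon

# Trading 1 of 10 years to avoid the health state implies utility 0.9.
print(tto_utility(1, 10))
```

    A perspective effect of the kind the study reports would appear as systematically larger `years_traded` when respondents answer as parents rather than as the child.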

  7. Probabilistic model of random uncertainties in structural dynamics for mis-tuned bladed disks; Modele probabiliste des incertitudes en dynamique des structures pour le desaccordage des roues aubagees

    Energy Technology Data Exchange (ETDEWEB)

    Capiez-Lernout, E.; Soize, Ch. [Universite de Marne la Vallee, Lab. de Mecanique, 77 (France)

    2003-10-01

    The mis-tuning of blades is frequently the cause of spatial localization of the dynamic forced response in the turbomachinery industry. The random character of mis-tuning requires the construction of probabilistic models of random uncertainties. A usual parametric probabilistic description considers the mis-tuning through the Young's modulus of each blade. This model consists of mis-tuning the blade eigenfrequencies while assuming the blade modal shapes unchanged. Recently, a new approach known as the non-parametric model of random uncertainties has been introduced for modelling random uncertainties in elasto-dynamics. This paper proposes the construction of a non-parametric model which is coherent with all the uncertainties that characterize mis-tuning. As mis-tuning is a phenomenon which is independent from one blade to another, the structure is considered as an assemblage of substructures. The mean reduced matrix model required by the non-parametric approach is thus constructed by dynamic sub-structuring. A comparison with a usual parametric model adapted to mis-tuning is also carried out to study the influence of the non-parametric approach. A numerical example is presented. (authors)

  8. Multi-state time-varying reliability evaluation of smart grid with flexible demand resources utilizing Lz transform

    Science.gov (United States)

    Jia, Heping; Jin, Wende; Ding, Yi; Song, Yonghua; Yu, Dezhao

    2017-01-01

    With the expanding proportion of renewable energy generation and development of smart grid technologies, flexible demand resources (FDRs) have been utilized as an approach to accommodating renewable energies. However, multiple uncertainties of FDRs may influence reliable and secure operation of smart grid. Multi-state reliability models for a single FDR and aggregating FDRs have been proposed in this paper with regard to responsive abilities for FDRs and random failures for both FDR devices and information system. The proposed reliability evaluation technique is based on Lz transform method which can formulate time-varying reliability indices. A modified IEEE-RTS has been utilized as an illustration of the proposed technique.

  9. Random number generation

    International Nuclear Information System (INIS)

    Coveyou, R.R.

    1974-01-01

    The subject of random number generation is currently controversial. Differing opinions on this subject seem to stem from implicit or explicit differences in philosophy; in particular, from differing ideas concerning the role of probability in the real world of physical processes, electronic computers, and Monte Carlo calculations. An attempt is made here to reconcile these views. The role of stochastic ideas in mathematical models is discussed. In illustration of these ideas, a mathematical model of the use of random number generators in Monte Carlo calculations is constructed. This model is used to set up criteria for the comparison and evaluation of random number generators. (U.S.)

  10. Modeling and design of light powered biomimicry micropump utilizing transporter proteins

    Science.gov (United States)

    Liu, Jin; Sze, Tsun-Kay Jackie; Dutta, Prashanta

    2014-11-01

    The creation of compact micropumps that provide steady flow has been an ongoing challenge in the field of microfluidics. We present a mathematical model for a micropump utilizing bacteriorhodopsin and sugar transporter proteins. This micropump utilizes transporter proteins as a method to drive fluid flow by converting light energy into chemical potential. The fluid flow through a microchannel is simulated using the Nernst-Planck, Navier-Stokes, and continuity equations. Numerical results show that the micropump is capable of generating usable pressure. Design parameters influencing the performance of the micropump are investigated, including membrane fraction, lipid proton permeability, illumination, and channel height. The results show that there is a substantial membrane-fraction region at which fluid flow is maximized. The use of lipids with low membrane proton permeability allows illumination to be used as a method to turn the pump on and off. This capability allows the micropump to be activated and shut off remotely without bulky support equipment. This modeling work provides new insights into mechanisms potentially useful for fluidic pumping in self-sustained biomimetic microfluidic pumps. This work is supported in part by the National Science Foundation Grant CBET-1250107.

  11. One Model Fits All: Explaining Many Aspects of Number Comparison within a Single Coherent Model-A Random Walk Account

    Science.gov (United States)

    Reike, Dennis; Schwarz, Wolf

    2016-01-01

    The time required to determine the larger of 2 digits decreases with their numerical distance, and, for a given distance, increases with their magnitude (Moyer & Landauer, 1967). One detailed quantitative framework to account for these effects is provided by random walk models. These chronometric models describe how number-related noisy…
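    The random walk account can be sketched as noisy evidence accumulation toward one of two decision boundaries, with drift set by log-compressed magnitudes so that both the distance and size effects emerge. All parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

def compare_rt(a, b, drift_scale=0.3, threshold=30.0):
    """Noisy evidence accumulation toward +/- threshold; drift proportional
    to the difference of log magnitudes, so decisions are faster for larger
    numerical distance and slower for larger magnitudes at a fixed distance.
    Parameters are illustrative."""
    drift = drift_scale * (np.log(a) - np.log(b))
    x, t = 0.0, 0
    while abs(x) < threshold:
        x += drift + rng.normal()
        t += 1
    return t, "a" if x > 0 else "b"

# Distance effect: 8 vs 2 is decided faster on average than 5 vs 4.
far = np.mean([compare_rt(8, 2)[0] for _ in range(200)])
near = np.mean([compare_rt(5, 4)[0] for _ in range(200)])
print(far, near)
```

    The same mechanism reproduces the magnitude effect, since log(9) − log(8) is smaller than log(3) − log(2) despite equal numerical distance.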

  12. Perceptions of randomized security schedules.

    Science.gov (United States)

    Scurich, Nicholas; John, Richard S

    2014-04-01

    Security of infrastructure is a major concern. Traditional security schedules are unable to provide omnipresent coverage; consequently, adversaries can exploit predictable vulnerabilities to their advantage. Randomized security schedules, which randomly deploy security measures, overcome these limitations, but public perceptions of such schedules have not been examined. In this experiment, participants were asked to choose between attending a venue that employed a traditional (i.e., search everyone) or a random (i.e., a probability of being searched) security schedule. The absolute probability of detecting contraband was manipulated (i.e., 1/10, 1/4, 1/2) but kept equivalent between the two schedule types. In general, participants were indifferent between the security schedules, regardless of the probability of detection. The randomized schedule was deemed more convenient, but the traditional schedule was considered fairer and safer. There were no differences between the traditional and random schedules in terms of perceived effectiveness or deterrence. Policy implications for the implementation and utilization of randomized schedules are discussed. © 2013 Society for Risk Analysis.

  13. SHER: A Colored Petri Net Based Random Mobility Model for Wireless Communications

    Science.gov (United States)

    Khan, Naeem Akhtar; Ahmad, Farooq; Khan, Sher Afzal

    2015-01-01

    In wireless network research, simulation is the most important technique for investigating a network's behavior and for validation. Wireless networks typically consist of mobile hosts; therefore, the degree of validation is influenced by the underlying mobility model, and synthetic models are implemented in simulators because real-life traces are not widely available. In wireless communications, mobility is an integral part of the system, and the key role of a mobility model is to mimic real-life traveling patterns. The performance of routing protocols and mobility management strategies, e.g., paging, registration, and handoff, is highly dependent on the selected mobility model. In this paper, we devise and evaluate Show Home and Exclusive Regions (SHER), a novel two-dimensional (2-D) Colored Petri net (CPN) based formal random mobility model, which exhibits the sociological behavior of a user. The model captures hotspots where a user frequently visits and spends time. Our solution eliminates six key issues of random mobility models, i.e., sudden stops, memoryless movements, the border effect, temporal dependency of velocity, pause time dependency, and speed decay, in a single model. The proposed model is able to predict the future location of a mobile user and ultimately improves the performance of wireless communication networks. The model follows a uniform nodal distribution and is a mini simulator, which exhibits interesting mobility patterns. The model is also helpful to those who are not familiar with formal modeling, and users can extract meaningful information with a single mouse-click. It is noteworthy that capturing dynamic mobility patterns through CPN is the most challenging part of the presented research. Statistical and reachability analysis techniques are presented to elucidate and validate the performance of our proposed mobility model. The state space methods allow us to algorithmically derive the system behavior and rectify the…

  14. A special covariance structure for random coefficient models with both between and within covariates

    International Nuclear Information System (INIS)

    Riedel, K.S.

    1990-07-01

    We review random coefficient (RC) models in linear regression and propose a bias correction to the maximum likelihood (ML) estimator. Asymptotic expansions of the ML equations are given when the between-individual variance is much larger or smaller than the variance from within-individual fluctuations. The standard model assumes all but one covariate varies within each individual (we denote the within covariates by the vector χ_1). We consider random coefficient models where some of the covariates do not vary within any single individual (we denote the between covariates by the vector χ_0). The regression coefficients β_k can only be estimated in the subspace X_k of X. Thus the number of individuals necessary to estimate β and the covariance matrix Δ of β increases significantly in the presence of more than one between covariate. When the number of individuals is sufficient to estimate β but not the entire matrix Δ, additional assumptions must be imposed on the structure of Δ. A simple reduced model is that the between component of β is fixed and only the within component varies randomly. This model fails because it is not invariant under linear coordinate transformations and can significantly overestimate the variance of new observations. We propose a covariance structure for Δ without these difficulties by first projecting the within covariates onto the space perpendicular to the between covariates. (orig.)

  15. A new crack growth model for life prediction under random loading

    International Nuclear Information System (INIS)

    Lee, Ouk Sub; Chen, Zhi Wei

    1999-01-01

    The load interaction effect in variable-amplitude fatigue tests is a very important issue for correctly predicting fatigue life. Some prediction methods for retardation are reviewed and their problems discussed. The so-called 'under-load' effect is also important for a prediction model to work properly under a random load spectrum. A new model that is simple in form, but combines overload plastic zone and residual stress considerations together with Elber's closure concept, is proposed to fully account for the load-interaction effects, including both over-load and under-load effects. Applying this new model to complex load sequences is explored here. Simulations of tests show the improvement of the new model over other models. The best prediction (most closely resembling the test curve) is given by the newly proposed Chen-Lee model.

  16. The covariance matrix of the Potts model: A random cluster analysis

    International Nuclear Information System (INIS)

    Borgs, C.; Chayes, J.T.

    1996-01-01

We consider the covariance matrix, G_mn = q²⟨δ(σ_x, m); δ(σ_y, n)⟩, of the d-dimensional q-state Potts model, rewriting it in the random cluster representation of Fortuin and Kasteleyn. In many of the q ordered phases, we identify the eigenvalues of this matrix both in terms of representations of the unbroken symmetry group of the model and in terms of random cluster connectivities and covariances, thereby attributing algebraic significance to these stochastic geometric quantities. We also show that the correlation length corresponding to the decay rate of one of the eigenvalues is the same as the inverse decay rate of the diameter of finite clusters. In dimension d=2, we show that this correlation length and the correlation length of the two-point function with free boundary conditions at the corresponding dual temperature are equal up to a factor of two. For systems with first-order transitions, this relation helps to resolve certain inconsistencies between recent exact and numerical work on correlation lengths at the self-dual point β_o. For systems with second-order transitions, this relation implies the equality of the correlation length exponents from above and below threshold, as well as an amplitude ratio of two. In the course of proving the above results, we establish several properties of independent interest, including left continuity of the inverse correlation length with free boundary conditions and upper semicontinuity of the decay rate for finite clusters in all dimensions, and left continuity of the two-dimensional free boundary condition percolation probability at β_o. We also introduce DLR equations for the random cluster model and use them to establish ergodicity of the free measure. In order to prove these results, we introduce a new class of events which we call decoupling events and two inequalities for these events.

  17. Exponential random graph models for networks with community structure.

    Science.gov (United States)

    Fronczak, Piotr; Fronczak, Agata; Bujok, Maksymilian

    2013-09-01

Although community structure organization is an important characteristic of real-world networks, most traditional network models fail to reproduce this feature. The models are therefore useless as benchmark graphs for testing community detection algorithms. They are also inadequate for predicting various properties of real networks. With this paper we intend to fill this gap. We develop an exponential random graph approach to networks with community structure. To this end we build mainly upon the idea of blockmodels. We consider both the classical blockmodel and its degree-corrected counterpart and study many of their properties analytically. We show that in the degree-corrected blockmodel, node degrees display an interesting scaling property, which is reminiscent of what is observed in real-world fractal networks. A short description of Monte Carlo simulations of the models is also given in the hope of being useful to others working in the field.
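A minimal sketch of sampling from the classical (non-degree-corrected) blockmodel the abstract builds on; the block sizes and edge probabilities below are assumed values chosen for illustration:

```python
import random

random.seed(42)

def sample_blockmodel(block_sizes, p_in, p_out):
    """Sample an undirected graph from the classical blockmodel:
    edge probability p_in inside a block, p_out across blocks."""
    labels = []
    for b, size in enumerate(block_sizes):
        labels += [b] * size
    n = len(labels)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            p = p_in if labels[i] == labels[j] else p_out
            if random.random() < p:
                edges.add((i, j))
    return labels, edges

labels, edges = sample_blockmodel([30, 30], p_in=0.3, p_out=0.02)
within = sum(1 for i, j in edges if labels[i] == labels[j])
between = len(edges) - within
print(within, between)  # within-block edges dominate: community structure
```

Graphs generated this way can serve as the kind of benchmark with planted communities that the abstract argues traditional models cannot provide.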

  18. Automatic structure classification of small proteins using random forest

    Directory of Open Access Journals (Sweden)

    Hirst Jonathan D

    2010-07-01

Abstract Background Random forest, an ensemble-based supervised machine learning algorithm, is used to predict the SCOP structural classification for a target structure, based on the similarity of its structural descriptors to those of a template structure with an equal number of secondary structure elements (SSEs). An initial assessment of random forest is carried out for domains consisting of three SSEs. The usability of random forest in classifying larger domains is demonstrated by applying it to domains consisting of four, five and six SSEs. Results Random forest, trained on SCOP version 1.69, achieves a predictive accuracy of up to 94% on an independent and non-overlapping test set derived from SCOP version 1.73. For classification to the SCOP Class, Fold, Super-family or Family levels, the predictive quality of the model in terms of the Matthews correlation coefficient (MCC) ranged from 0.61 to 0.83. As the number of constituent SSEs increases, the MCC for classification to different structural levels decreases. Conclusions The utility of random forest in classifying domains from the place-holder classes of SCOP to the true Class, Fold, Super-family or Family levels is demonstrated. Issues such as the introduction of a new structural level in SCOP and the merger of singleton levels can also be addressed using random forest. A real-world scenario is mimicked by predicting the classification for those protein structures from the PDB which are yet to be assigned to the SCOP classification hierarchy.
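As a rough illustration of the bagging-and-voting principle behind random forest (not the authors' SCOP pipeline), here is a tiny forest of bootstrapped decision stumps on fabricated "descriptor" data:

```python
import random
from collections import Counter

random.seed(1)

def best_stump(X, y):
    """Exhaustively pick the one-feature threshold split with the fewest
    misclassifications (a depth-1 decision tree)."""
    best = None
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            left = [lab for row, lab in zip(X, y) if row[f] <= t]
            right = [lab for row, lab in zip(X, y) if row[f] > t]
            if not left or not right:
                continue
            lmaj = Counter(left).most_common(1)[0][0]
            rmaj = Counter(right).most_common(1)[0][0]
            err = sum(v != lmaj for v in left) + sum(v != rmaj for v in right)
            if best is None or err < best[0]:
                best = (err, f, t, lmaj, rmaj)
    if best is None:  # degenerate bootstrap (all rows identical)
        return (0, X[0][0], y[0], y[0])
    return best[1:]

def fit_forest(X, y, n_trees=25):
    """Bagging: each stump is trained on a bootstrap resample of the data."""
    n = len(X)
    forest = []
    for _ in range(n_trees):
        boot = [random.randrange(n) for _ in range(n)]
        forest.append(best_stump([X[i] for i in boot], [y[i] for i in boot]))
    return forest

def predict(forest, row):
    votes = [lm if row[f] <= t else rm for f, t, lm, rm in forest]
    return Counter(votes).most_common(1)[0][0]

# Toy "structural descriptors": class 0 has a small first feature.
X = [[0.1, 5], [0.2, 3], [0.3, 4], [0.9, 5], [1.1, 3], [1.2, 4]]
y = [0, 0, 0, 1, 1, 1]
forest = fit_forest(X, y)
print([predict(forest, row) for row in X])
```

A production random forest uses full decision trees and per-split feature subsampling; stumps keep the sketch short while preserving the bootstrap-plus-majority-vote idea.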

  19. Random regret minimization : Exploration of a new choice model for environmental and resource economics

    NARCIS (Netherlands)

    Thiene, M.; Boeri, M.; Chorus, C.G.

    2011-01-01

    This paper introduces the discrete choice model-paradigm of Random Regret Minimization (RRM) to the field of environmental and resource economics. The RRM-approach has been very recently developed in the context of travel demand modelling and presents a tractable, regret-based alternative to the

  20. The limiting behavior of the estimated parameters in a misspecified random field regression model

    DEFF Research Database (Denmark)

    Dahl, Christian Møller; Qin, Yu

    This paper examines the limiting properties of the estimated parameters in the random field regression model recently proposed by Hamilton (Econometrica, 2001). Though the model is parametric, it enjoys the flexibility of the nonparametric approach since it can approximate a large collection of n...

  1. Linearization effect in multifractal analysis: Insights from the Random Energy Model

    Science.gov (United States)

    Angeletti, Florian; Mézard, Marc; Bertin, Eric; Abry, Patrice

    2011-08-01

The analysis of the linearization effect in multifractal analysis, and hence of the estimation of moments for multifractal processes, is revisited borrowing concepts from the statistical physics of disordered systems, notably from the analysis of the so-called Random Energy Model. Considering a standard multifractal process (compound Poisson motion), chosen as a simple representative example, we show the following: (i) the existence of a critical order q∗ beyond which moments, though finite, cannot be estimated through empirical averages, irrespective of the sample size of the observation; (ii) multifractal exponents necessarily behave linearly in q, for q>q∗. Tailoring the analysis conducted for the Random Energy Model to that of compound Poisson motion, we provide explanatory and quantitative predictions for the values of q∗ and for the slope controlling the linear behavior of the multifractal exponents. These quantities are shown to be related only to the definition of the multifractal process and not to depend on the sample size of the observation. Monte Carlo simulations, conducted over a large number of large sample size realizations of compound Poisson motion, support and extend these analyses.
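The mechanism behind the critical order q∗ — empirical averages of high moments being dominated by the largest observed value — can be illustrated with a heavy-tailed toy sample (a lognormal stand-in chosen for illustration, not the compound Poisson motion studied in the paper):

```python
import random

random.seed(7)

# Heavy-tailed toy sample standing in for the weights whose moments are
# estimated in multifractal analysis; distribution and parameters are
# illustrative assumptions.
n = 100_000
sample = [random.lognormvariate(0.0, 1.5) for _ in range(n)]

shares = []
for q in (1, 2, 5, 10):
    total = sum(x ** q for x in sample)
    shares.append(max(sample) ** q / total)
    print(f"q={q:2d}  share of the empirical q-th moment carried "
          f"by the single largest sample value: {shares[-1]:.3f}")
```

As q grows, the empirical moment is increasingly an estimate of the sample maximum rather than of the population moment, which is why the estimated exponents become linear in q beyond q∗.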

  2. Human X-chromosome inactivation pattern distributions fit a model of genetically influenced choice better than models of completely random choice

    Science.gov (United States)

    Renault, Nisa K E; Pritchett, Sonja M; Howell, Robin E; Greer, Wenda L; Sapienza, Carmen; Ørstavik, Karen Helene; Hamilton, David C

    2013-01-01

    In eutherian mammals, one X-chromosome in every XX somatic cell is transcriptionally silenced through the process of X-chromosome inactivation (XCI). Females are thus functional mosaics, where some cells express genes from the paternal X, and the others from the maternal X. The relative abundance of the two cell populations (X-inactivation pattern, XIP) can have significant medical implications for some females. In mice, the ‘choice' of which X to inactivate, maternal or paternal, in each cell of the early embryo is genetically influenced. In humans, the timing of XCI choice and whether choice occurs completely randomly or under a genetic influence is debated. Here, we explore these questions by analysing the distribution of XIPs in large populations of normal females. Models were generated to predict XIP distributions resulting from completely random or genetically influenced choice. Each model describes the discrete primary distribution at the onset of XCI, and the continuous secondary distribution accounting for changes to the XIP as a result of development and ageing. Statistical methods are used to compare models with empirical data from Danish and Utah populations. A rigorous data treatment strategy maximises information content and allows for unbiased use of unphased XIP data. The Anderson–Darling goodness-of-fit statistics and likelihood ratio tests indicate that a model of genetically influenced XCI choice better fits the empirical data than models of completely random choice. PMID:23652377

  3. Estimation of utility values from visual analog scale measures of health in patients undergoing cardiac surgery

    Directory of Open Access Journals (Sweden)

    Oddershede L

    2014-01-01

Lars Oddershede,1,2 Jan Jesper Andreasen,1 Lars Ehlers2 1Department of Cardiothoracic Surgery, Center for Cardiovascular Research, Aalborg University Hospital, Aalborg, Denmark; 2Danish Center for Healthcare Improvements, Faculty of Social Sciences and Faculty of Health Sciences, Aalborg University, Aalborg East, Denmark Introduction: In health economic evaluations, mapping can be used to estimate utility values from other health outcomes in order to calculate quality-adjusted life-years. Currently, no methods exist to map visual analog scale (VAS) scores to utility values. This study aimed to develop and propose a statistical algorithm for mapping five dimensions of health, measured on VASs, to utility scores in patients suffering from cardiovascular disease. Methods: Patients undergoing coronary artery bypass grafting at Aalborg University Hospital in Denmark were asked to score their health using the five VAS items (mobility, self-care, ability to perform usual activities, pain, and presence of anxiety or depression) and the EuroQol 5 Dimensions questionnaire. Regression analysis was used to estimate four mapping models from patients' age, sex, and the self-reported VAS scores. Prediction errors were compared between mapping models and on subsets of the observed utility scores. Agreement between predicted and observed values was assessed using Bland–Altman plots. Results: Random effects generalized least squares (GLS) regression yielded the best results when quadratic terms of VAS scores were included. Mapping models fitted using the Tobit model and censored least absolute deviation regression did not appear superior to GLS regression. The mapping models were able to explain approximately 63%–65% of the variation in the observed utility scores. The mean absolute error of predictions increased as the observed utility values decreased. Conclusion: We concluded that it was possible to predict utility scores from VAS scores of the five
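A stripped-down sketch of the mapping idea — regressing utility on a VAS score with a quadratic term — using entirely fabricated (VAS, utility) pairs and plain least squares rather than the study's random-effects GLS:

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

# Fabricated (VAS, utility) pairs with mild curvature, for illustration only.
data = [(10, 0.30), (30, 0.55), (50, 0.72), (70, 0.84), (90, 0.92)]

# Normal equations for utility = a + b*v + c*v^2.
X = [(1.0, v, v * v) for v, _ in data]
y = [u for _, u in data]
XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
Xty = [sum(r[i] * u for r, u in zip(X, y)) for i in range(3)]
a, b, c = solve3(XtX, Xty)

pred = lambda v: a + b * v + c * v * v
print(round(pred(50), 2))
```

The quadratic term lets the mapping flatten near the top of the VAS range, mirroring the ceiling behavior the study reports for utility scores.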

  4. Circumferential fusion is dominant over posterolateral fusion in a long-term perspective: cost-utility evaluation of a randomized controlled trial in severe, chronic low back pain

    DEFF Research Database (Denmark)

    Soegaard, Rikke; Bünger, Cody E; Christiansen, Terkel

    2007-01-01

    STUDY DESIGN: Cost-utility evaluation of a randomized, controlled trial with a 4- to 8-year follow-up. OBJECTIVE: To investigate the incremental cost per quality-adjusted-life-year (QALY) when comparing circumferential fusion to posterolateral fusion in a long-term, societal perspective. SUMMARY...... OF BACKGROUND DATA: The cost-effectiveness of circumferential fusion in a long-term perspective is uncertain but nonetheless highly relevant as the ISSLS prize winner 2006 in clinical studies reported the effect of circumferential fusion superior to the effect of posterolateral fusion. A recent trial found...... no significant difference between posterolateral and circumferential fusion reporting cost-effectiveness from a 2-year viewpoint. METHODS: A total of 146 patients were randomized to posterolateral or circumferential fusion and followed 4 to 8 years after surgery. The mean age of the cohort was 46 years (range...

  5. Random crystal field effects on the integer and half-integer mixed-spin system

    Science.gov (United States)

    Yigit, Ali; Albayrak, Erhan

    2018-05-01

In this work, we have focused on the effects of a random crystal field on the phase diagrams of the mixed spin-1 and spin-5/2 Ising system, obtained by utilizing the exact recursion relations (ERR) on the Bethe lattice (BL). The distribution function P(D_i) = p δ[D_i - D(1 + α)] + (1 - p) δ[D_i - D(1 - α)] is used to randomize the crystal field. The phase diagrams are found to exhibit second- and first-order phase transitions depending on the values of α, D and p. It is also observed that the model displays a tricritical point, an isolated point, a critical end point and three compensation temperatures for suitable values of the system parameters.
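Sampling crystal fields from the stated bimodal distribution is straightforward; the parameter values below are arbitrary illustrations, not values from the study:

```python
import random

random.seed(3)

def sample_crystal_field(D, alpha, p, n):
    """Draw n crystal fields from the distribution in the abstract:
    D_i = D*(1 + alpha) with probability p, D*(1 - alpha) otherwise."""
    return [D * (1 + alpha) if random.random() < p else D * (1 - alpha)
            for _ in range(n)]

fields = sample_crystal_field(D=1.0, alpha=0.5, p=0.3, n=10_000)
mean = sum(fields) / len(fields)
# Theoretical mean: D*(1 + alpha*(2p - 1)) = 1.0 * (1 + 0.5*(-0.4)) = 0.8
print(round(mean, 3))
```

In a lattice calculation, each site's D_i would be drawn this way before iterating the recursion relations.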

  6. Modeling individual differences in randomized experiments using growth models: Recommendations for design, statistical analysis and reporting of results of internet interventions

    Directory of Open Access Journals (Sweden)

    Hugo Hesser

    2015-05-01

Growth models (also known as linear mixed effects models, multilevel models, and random coefficients models) have the capability of studying change at the group as well as the individual level. In addition, these methods have documented advantages over traditional data-analytic approaches in the analysis of repeated-measures data. These advantages include, but are not limited to, the ability to incorporate time-varying predictors, to handle dependence among repeated observations in a very flexible manner, and to provide accurate estimates with missing data under fairly unrestrictive missing-data assumptions. The flexibility of the growth curve modeling approach to the analysis of change makes it the preferred choice in the evaluation of direct, indirect and moderated intervention effects. Although offering many benefits, growth models present challenges in terms of design, analysis and reporting of results. This paper provides a nontechnical overview of growth models in the analysis of change in randomized experiments and advocates for their use in the field of internet interventions. Practical recommendations for design, analysis and reporting of results from growth models are provided.

  7. Encrypted data stream identification using randomness sparse representation and fuzzy Gaussian mixture model

    Science.gov (United States)

    Zhang, Hong; Hou, Rui; Yi, Lei; Meng, Juan; Pan, Zhisong; Zhou, Yuhuan

    2016-07-01

    The accurate identification of encrypted data stream helps to regulate illegal data, detect network attacks and protect users' information. In this paper, a novel encrypted data stream identification algorithm is introduced. The proposed method is based on randomness characteristics of encrypted data stream. We use a l1-norm regularized logistic regression to improve sparse representation of randomness features and Fuzzy Gaussian Mixture Model (FGMM) to improve identification accuracy. Experimental results demonstrate that the method can be adopted as an effective technique for encrypted data stream identification.

  8. Utilization of Model Predictive Control to Balance Power Absorption Against Load Accumulation

    Energy Technology Data Exchange (ETDEWEB)

    Abbas, Nikhar [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Tom, Nathan M [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-06-03

    Wave energy converter (WEC) control strategies have been primarily focused on maximizing power absorption. The use of model predictive control strategies allows for a finite-horizon, multiterm objective function to be solved. This work utilizes a multiterm objective function to maximize power absorption while minimizing the structural loads on the WEC system. Furthermore, a Kalman filter and autoregressive model were used to estimate and forecast the wave exciting force and predict the future dynamics of the WEC. The WEC's power-take-off time-averaged power and structural loads under a perfect forecast assumption in irregular waves were compared against results obtained from the Kalman filter and autoregressive model to evaluate model predictive control performance.

  9. Insights from random vibration analyses using multiple earthquake components

    International Nuclear Information System (INIS)

    DebChaudhury, A.; Gasparini, D.A.

    1981-01-01

The behavior of multi-degree-of-freedom systems subjected to multiple earthquake components is studied by the use of random vibration dynamic analyses. A linear system which has been decoupled into modes and has both translational and rotational degrees of freedom is analyzed. The seismic excitation is modelled as a correlated or uncorrelated, vector-valued, non-stationary random process having a Kanai-Tajimi type of frequency content. Non-stationarity is achieved by using a piecewise linear strength function. Therefore, almost any type of evolution and decay of an earthquake may be modelled. Also, in general, the components of the excitation have different frequency contents and strength functions, i.e. intensities and durations, and the correlations between components can vary with time. A state-space, modal, random vibration approach is used. Exact analytical expressions for both the state transition matrix and the evolutionary modal covariance matrix are utilized to compute time histories of modal RMS responses. Desired responses are then computed by modal superposition. Specifically, relative displacement, relative velocity and absolute acceleration responses are studied. An important advantage of such analyses is that RMS responses vary smoothly in time; therefore large time intervals may be used to generate response time histories. The modal superposition is exact; that is, all cross-correlation terms between modal responses are included. (orig./RW)

  10. An Artificial Bee Colony Algorithm for the Job Shop Scheduling Problem with Random Processing Times

    Directory of Open Access Journals (Sweden)

    Rui Zhang

    2011-09-01

Due to the influence of unpredictable random events, the processing time of each operation should be treated as a random variable if we aim at a robust production schedule. However, compared with the extensive research on the deterministic model, the stochastic job shop scheduling problem (SJSSP) has not received sufficient attention. In this paper, we propose an artificial bee colony (ABC) algorithm for the SJSSP with the objective of minimizing the maximum lateness (an index of service quality). First, we propose a performance estimate for preliminary screening of the candidate solutions. Then, the K-armed bandit model is utilized to reduce the computational burden of the exact evaluation (through Monte Carlo simulation) process. Finally, the computational results on test problems of different scales validate the effectiveness and efficiency of the proposed approach.
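The bandit-based screening idea — spending the Monte Carlo budget unevenly across candidate solutions — can be sketched with a generic UCB1 allocation over hypothetical schedules (this is a textbook bandit sketch under assumed lateness values, not the paper's algorithm):

```python
import math
import random

random.seed(5)

# Hypothetical candidate schedules, each with an unknown expected maximum
# lateness; evaluate() stands in for one noisy Monte Carlo simulation.
true_lateness = [12.0, 9.5, 10.4, 11.1]

def evaluate(k):
    return true_lateness[k] + random.gauss(0.0, 2.0)

# UCB1 on negated lateness: concentrate the simulation budget on the most
# promising candidate instead of evaluating every candidate equally often.
n_arms = len(true_lateness)
counts = [0] * n_arms
sums = [0.0] * n_arms
for t in range(1, 401):
    if t <= n_arms:
        k = t - 1                                  # play each arm once first
    else:
        k = max(range(n_arms),
                key=lambda a: -sums[a] / counts[a]
                              + math.sqrt(2.0 * math.log(t) / counts[a]))
    r = evaluate(k)
    counts[k] += 1
    sums[k] += r

best = min(range(n_arms), key=lambda a: sums[a] / counts[a])
print(best, counts)
```

Most of the 400 evaluations end up on the candidate with the smallest mean lateness, which is the computational saving the abstract attributes to the K-armed bandit model.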

  11. Geometric Models for Isotropic Random Porous Media: A Review

    Directory of Open Access Journals (Sweden)

    Helmut Hermann

    2014-01-01

Models for random porous media are considered. The models are isotropic both from the local and the macroscopic point of view; that is, the pores have spherical shape or their surface shows piecewise spherical curvature, and there is no macroscopic gradient of any geometrical feature. Both closed-pore and open-pore systems are discussed. The Poisson grain model, the model of hard spheres packing, and the penetrable sphere model are used; variable size distribution of the pores is included. A parameter is introduced which controls the degree of open-porosity. Besides systems built up by a single solid phase, models for porous media with the internal surface coated by a second phase are treated. Volume fraction, surface area, and correlation functions are given explicitly where applicable; otherwise numerical methods for determination are described. Effective medium theory is applied to calculate physical properties for the models such as isotropic elastic moduli, thermal and electrical conductivity, and static dielectric constant. The methods presented are exemplified by applications: small-angle scattering of systems showing fractal-like behavior in limited ranges of linear dimension, optimization of nanoporous insulating materials, and improvement of properties of open-pore systems by atomic layer deposition of a second phase on the internal surface.

  12. Evaluation model of wind energy resources and utilization efficiency of wind farm

    Science.gov (United States)

    Ma, Jie

    2018-04-01

Due to the large amount of abandoned wind power in wind farms, the establishment of a wind farm evaluation model is particularly important for the future development of wind farms. In this paper, considering the wind farm's wind energy situation, a Wind Energy Resource Model (WERM) and a Wind Energy Utilization Efficiency Model (WEUEM) are established to conduct a comprehensive assessment of the wind farm. The Wind Energy Resource Model (WERM) contains average wind speed, average wind power density and turbulence intensity, which together assess the wind energy resources. Combining our model with the actual measurement data of a wind farm, we calculate the indicators, and the results are in line with the actual situation. The future development of the wind farm can be planned based on this result. Thus, the proposed wind farm assessment model has practical application value.

  13. Performance, Nutrient Utilization and Intestinal Environment of ...

    African Journals Online (AJOL)

    The performance, nutrient utilization and intestinal environment of weaned rabbits fed diets supplemented with organic acids (acetic acid, citric acid and formic acid) were investigated with 24 (6-week old) rabbits in a completely randomized design. The control diet was not supplemented while others were supplemented ...

  14. Utilization of cocoyam production technologies among women ...

    African Journals Online (AJOL)

    The study analysed utilization of improved cocoyam production technologies among women in Abia State, Nigeria. A multistage random sampling technique was used to select sixty (60) women. Data for the study were collected using a structured questionnaire and analysed with descriptive statistics and inferential statistics ...

  15. A random matrix model for elliptic curve L-functions of finite conductor

    International Nuclear Information System (INIS)

    Dueñez, E; Huynh, D K; Keating, J P; Snaith, N C; Miller, S J

    2012-01-01

    We propose a random-matrix model for families of elliptic curve L-functions of finite conductor. A repulsion of the critical zeros of these L-functions away from the centre of the critical strip was observed numerically by Miller (2006 Exp. Math. 15 257–79); such behaviour deviates qualitatively from the conjectural limiting distribution of the zeros (for large conductors this distribution is expected to approach the one-level density of eigenvalues of orthogonal matrices after appropriate rescaling). Our purpose here is to provide a random-matrix model for Miller’s surprising discovery. We consider the family of even quadratic twists of a given elliptic curve. The main ingredient in our model is a calculation of the eigenvalue distribution of random orthogonal matrices whose characteristic polynomials are larger than some given value at the symmetry point in the spectra. We call this sub-ensemble of SO(2N) the excised orthogonal ensemble. The sieving-off of matrices with small values of the characteristic polynomial is akin to the discretization of the central values of L-functions implied by the formulae of Waldspurger and Kohnen–Zagier. The cut-off scale appropriate to modelling elliptic curve L-functions is exponentially small relative to the matrix size N. The one-level density of the excised ensemble can be expressed in terms of that of the well-known Jacobi ensemble, enabling the former to be explicitly calculated. It exhibits an exponentially small (on the scale of the mean spacing) hard gap determined by the cut-off value, followed by soft repulsion on a much larger scale. Neither of these features is present in the one-level density of SO(2N). When N → ∞ we recover the limiting orthogonal behaviour. Our results agree qualitatively with Miller’s discrepancy. Choosing the cut-off appropriately gives a model in good quantitative agreement with the number-theoretical data. (paper)

  16. Percolation for a model of statistically inhomogeneous random media

    International Nuclear Information System (INIS)

    Quintanilla, J.; Torquato, S.

    1999-01-01

    We study clustering and percolation phenomena for a model of statistically inhomogeneous two-phase random media, including functionally graded materials. This model consists of inhomogeneous fully penetrable (Poisson distributed) disks and can be constructed for any specified variation of volume fraction. We quantify the transition zone in the model, defined by the frontier of the cluster of disks which are connected to the disk-covered portion of the model, by defining the coastline function and correlation functions for the coastline. We find that the behavior of these functions becomes largely independent of the specific choice of grade in volume fraction as the separation of length scales becomes large. We also show that the correlation function behaves in a manner similar to that of fractal Brownian motion. Finally, we study fractal characteristics of the frontier itself and compare to similar properties for two-dimensional percolation on a lattice. In particular, we show that the average location of the frontier appears to be related to the percolation threshold for homogeneous fully penetrable disks. copyright 1999 American Institute of Physics
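A minimal simulation in the spirit of the model (homogeneous fully penetrable disks rather than the graded version studied here; all parameter values are illustrative) shows spanning emerging as coverage grows:

```python
import random

def percolates(n_disks, radius, seed):
    """Drop fully penetrable (Poisson-distributed) disks in the unit square
    and test, via union-find, whether overlapping disks form a cluster
    spanning from the left wall to the right wall."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n_disks)]
    parent = list(range(n_disks + 2))          # two virtual nodes: the walls
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]      # path halving
            i = parent[i]
        return i
    def union(i, j):
        parent[find(i)] = find(j)
    LEFT, RIGHT = n_disks, n_disks + 1
    for i, (x, y) in enumerate(pts):
        if x < radius:
            union(i, LEFT)
        if x > 1 - radius:
            union(i, RIGHT)
        for j in range(i):
            dx, dy = x - pts[j][0], y - pts[j][1]
            if dx * dx + dy * dy < (2 * radius) ** 2:   # disks overlap
                union(i, j)
    return find(LEFT) == find(RIGHT)

# Spanning becomes likely once the covered area fraction, 1 - exp(-n*pi*r^2),
# passes the continuum percolation threshold for penetrable disks (~0.68).
fracs = []
for n in (50, 200, 800):
    fracs.append(sum(percolates(n, 0.05, s) for s in range(20)) / 20)
    print(n, fracs[-1])
```

In the inhomogeneous model of the abstract, the disk intensity would vary with position; the union-find connectivity test stays the same.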

  17. Universality in random-walk models with birth and death

    International Nuclear Information System (INIS)

    Bender, C.M.; Boettcher, S.; Meisinger, P.N.

    1995-01-01

    Models of random walks are considered in which walkers are born at one site and die at all other sites. Steady-state distributions of walkers exhibit dimensionally dependent critical behavior as a function of the birth rate. Exact analytical results for a hyperspherical lattice yield a second-order phase transition with a nontrivial critical exponent for all positive dimensions D≠2, 4. Numerical studies of hypercubic and fractal lattices indicate that these exact results are universal. This work elucidates the adsorption transition of polymers at curved interfaces. copyright 1995 The American Physical Society

  18. Generalized random walk algorithm for the numerical modeling of complex diffusion processes

    CERN Document Server

    Vamos, C; Vereecken, H

    2003-01-01

    A generalized form of the random walk algorithm to simulate diffusion processes is introduced. Unlike the usual approach, at a given time all the particles from a grid node are simultaneously scattered using the Bernoulli repartition. This procedure saves memory and computing time and no restrictions are imposed for the maximum number of particles to be used in simulations. We prove that for simple diffusion the method generalizes the finite difference scheme and gives the same precision for large enough number of particles. As an example, simulations of diffusion in random velocity field are performed and the main features of the stochastic mathematical model are numerically tested.
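The node-wise Bernoulli scattering described above can be sketched for simple 1D diffusion as follows (a minimal illustration, not the authors' code):

```python
import random

def binomial(n, p, rng):
    """Trial-by-trial binomial draw (stdlib only); a production GRW code
    would use a fast O(1) binomial sampler here."""
    return sum(rng.random() < p for _ in range(n))

def grw_step(grid, rng):
    """One Global Random Walk step for unbiased 1D diffusion on a ring:
    all n particles at a node are scattered simultaneously with a single
    Binomial(n, 1/2) split between the left and right neighbours, rather
    than moving particles one trajectory at a time."""
    new = [0] * len(grid)
    for i, n in enumerate(grid):
        if n == 0:
            continue
        left = binomial(n, 0.5, rng)
        new[(i - 1) % len(grid)] += left
        new[(i + 1) % len(grid)] += n - left
    return new

rng = random.Random(11)
grid = [0] * 101
grid[50] = 10_000                 # all particles start at the central node
for _ in range(200):
    grid = grw_step(grid, rng)

print(sum(grid))                  # exactly 10000: particles are conserved
```

Because whole node populations are scattered at once, memory and time scale with the number of grid nodes rather than the number of particles, which is the saving the abstract describes.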

  19. Generalized random walk algorithm for the numerical modeling of complex diffusion processes

    International Nuclear Information System (INIS)

    Vamos, Calin; Suciu, Nicolae; Vereecken, Harry

    2003-01-01

    A generalized form of the random walk algorithm to simulate diffusion processes is introduced. Unlike the usual approach, at a given time all the particles from a grid node are simultaneously scattered using the Bernoulli repartition. This procedure saves memory and computing time and no restrictions are imposed for the maximum number of particles to be used in simulations. We prove that for simple diffusion the method generalizes the finite difference scheme and gives the same precision for large enough number of particles. As an example, simulations of diffusion in random velocity field are performed and the main features of the stochastic mathematical model are numerically tested

  20. Animal models of GM2 gangliosidosis: utility and limitations

    Directory of Open Access Journals (Sweden)

    Lawson CA

    2016-07-01

Cheryl A Lawson,1,2 Douglas R Martin2,3 1Department of Pathobiology, 2Scott-Ritchey Research Center, 3Department of Anatomy, Physiology and Pharmacology, Auburn University College of Veterinary Medicine, Auburn, AL, USA Abstract: GM2 gangliosidosis, a subset of lysosomal storage disorders, is caused by a deficiency of the glycohydrolase, β-N-acetylhexosaminidase, and includes the closely related Tay–Sachs and Sandhoff diseases. The enzyme deficiency prevents the normal, stepwise degradation of ganglioside, which accumulates unchecked within the cellular lysosome, particularly in neurons. As a result, individuals with GM2 gangliosidosis experience progressive neurological diseases including motor deficits, progressive weakness and hypotonia, decreased responsiveness, vision deterioration, and seizures. Mice and cats are well-established animal models for Sandhoff disease, whereas Jacob sheep are the only known laboratory animal model of Tay–Sachs disease to exhibit clinical symptoms. Since the human diseases are relatively rare, animal models are indispensable tools for further study of pathogenesis and for development of potential treatments. Though no effective treatments for gangliosidoses currently exist, animal models have been used to test promising experimental therapies. Herein, the utility and limitations of gangliosidosis animal models and how they have contributed to the development of potential new treatments are described. Keywords: GM2 gangliosidosis, Tay–Sachs disease, Sandhoff disease, lysosomal storage disorder, sphingolipidosis, brain disease

  1. GLOBAL RANDOM WALK SIMULATIONS FOR SENSITIVITY AND UNCERTAINTY ANALYSIS OF PASSIVE TRANSPORT MODELS

    Directory of Open Access Journals (Sweden)

    Nicolae Suciu

    2011-07-01

The Global Random Walk algorithm (GRW) performs simultaneous tracking, on a fixed grid, of huge numbers of particles at costs comparable to those of a single-trajectory simulation by the traditional Particle Tracking (PT) approach. Statistical ensembles of GRW simulations of a typical advection-dispersion process in groundwater systems with randomly distributed spatial parameters are used to obtain reliable estimations of the input parameters for the upscaled transport model and of their correlations, input-output correlations, as well as full probability distributions of the input and output parameters.

  2. ESTIMATION OF GENETIC PARAMETERS IN TROPICARNE CATTLE WITH RANDOM REGRESSION MODELS USING B-SPLINES

    Directory of Open Access Journals (Sweden)

    Joel Domínguez Viveros

    2015-04-01

    Full Text Available The objectives were to estimate variance components and direct (h2) and maternal (m2) heritability for growth in Tropicarne cattle, based on a random regression model using B-splines to model the random effects. Information from 12 890 monthly weighings of 1787 calves, from birth to 24 months of age, was analyzed. The pedigree included 2504 animals. The random effects model included genetic and permanent environmental effects (direct and maternal) of cubic order, and residuals. The fixed effects included contemporary groups (year–season of weighing), sex and the covariate age of the cow (linear and quadratic). The B-splines were defined on four knots across the growth period analyzed. Analyses were performed with the software Wombat. The phenotypic and residual variances presented similar behavior: from 7 to 12 months of age they showed a negative trend, from birth to 6 months and from 13 to 18 months a positive trend, and after 19 months they remained constant. The m2 estimates were low and near zero, with an average of 0.06 in an interval of 0.04 to 0.11; the h2 estimates were also close to zero, with an average of 0.10 in an interval of 0.03 to 0.23.

  3. Individual and contextual factors influencing dental health care utilization by preschool children: a multilevel analysis

    Science.gov (United States)

    Piovesan, Chaiana; Ardenghi, Thiago Machado; Mendes, Fausto Medeiros; Agostini, Bernardo Antonio; Michel-Crosato, Edgard

    2017-03-30

    The effect of contextual factors on dental care utilization was evaluated after adjustment for individual characteristics of Brazilian preschool children. This cross-sectional study assessed 639 preschool children aged 1 to 5 years from Santa Maria, a town in Rio Grande do Sul State, located in southern Brazil. Participants were randomly selected from children attending the National Children's Vaccination Day and 15 health centers were selected for this research. Visual examinations followed the ICDAS criteria. Parents answered a questionnaire about demographic and socioeconomic characteristics. Contextual influences on children's dental care utilization were obtained from two community-related variables: presence of dentists and presence of workers' associations in the neighborhood. Unadjusted and adjusted multilevel logistic regression models were used to describe the association between outcome and predictor variables. A prevalence of 21.6% was found for regular use of dental services. The unadjusted assessment of the associations of dental health care utilization with individual and contextual factors included children's ages, family income, parents' schooling, mothers' participation in their children's school activities, dental caries, and presence of workers' associations in the neighborhood as the main outcome covariates. Individual variables remained associated with the outcome after adding contextual variables in the model. In conclusion, individual and contextual variables were associated with dental health care utilization by preschool children.

  4. On the utility of land surface models for agricultural drought monitoring

    Directory of Open Access Journals (Sweden)

    W. T. Crow

    2012-09-01

    Full Text Available The lagged rank cross-correlation between model-derived root-zone soil moisture estimates and remotely sensed vegetation indices (VI is examined between January 2000 and December 2010 to quantify the skill of various soil moisture models for agricultural drought monitoring. Examined modeling strategies range from a simple antecedent precipitation index to the application of modern land surface models (LSMs based on complex water and energy balance formulations. A quasi-global evaluation of lagged VI/soil moisture cross-correlation suggests, when globally averaged across the entire annual cycle, soil moisture estimates obtained from complex LSMs provide little added skill (< 5% in relative terms in anticipating variations in vegetation condition relative to a simplified water accounting procedure based solely on observed precipitation. However, larger amounts of added skill (5–15% in relative terms can be identified when focusing exclusively on the extra-tropical growing season and/or utilizing soil moisture values acquired by averaging across a multi-model ensemble.
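The skill metric described above is a lagged rank (Spearman) cross-correlation between a soil moisture series and a later vegetation index series. A minimal stdlib sketch of that computation (the variable names and the toy lagged series are illustrative, not the paper's data):

```python
def ranks(xs):
    # average ranks (ties share the mean rank), as used by Spearman correlation
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        for k in range(i, j + 1):
            r[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return r

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def lagged_rank_xcorr(soil, vi, lag):
    # correlate soil moisture at time t with the vegetation index at t + lag
    return pearson(ranks(soil[:len(soil) - lag]), ranks(vi[lag:]))

soil = list(range(20))              # toy monotone soil moisture series
vi = [0, 0, 0] + soil[:17]          # VI responds with a 3-step delay
r = lagged_rank_xcorr(soil, vi, 3)  # -> 1.0 for this idealized pair
```

In practice one would scan over candidate lags and report the maximum rank correlation as the anticipation skill.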

  5. Models of disordered media: some new results, including some new connections between composite-media, fluid-state, and random-flight theories

    International Nuclear Information System (INIS)

    Stell, G.

    1983-01-01

    Some new theoretical results on the microstructure of models of two-phase disordered media are given, as well as new quantitative bounds on the thermal conductivity that follow for one such model (randomly centered spherical inclusions). A second set of results is then given for random flights, including random flights with hit expectancy prescribed in a unit ball around the flight origin. Finally, some interesting correspondences are demonstrated, via the Ornstein-Zernike equation, between random-flight results, liquid-state results and percolation-theory results. 27 references, 6 figures, 4 tables

  6. Utilizing the PREPaRE Model When Multiple Classrooms Witness a Traumatic Event

    Science.gov (United States)

    Bernard, Lisa J.; Rittle, Carrie; Roberts, Kathy

    2011-01-01

    This article presents an account of how the Charleston County School District responded to an event by utilizing the PREPaRE model (Brock, et al., 2009). The acronym, PREPaRE, refers to a range of crisis response activities: P (prevent and prepare for psychological trauma), R (reaffirm physical health and perceptions of security and safety), E…

  7. Utility of Social Modeling for Proliferation Assessment - Enhancing a Facility-Level Model for Proliferation Resistance Assessment of a Nuclear Energy System

    Energy Technology Data Exchange (ETDEWEB)

    Coles, Garill A.; Brothers, Alan J.; Gastelum, Zoe N.; Olson, Jarrod; Thompson, Sandra E.

    2009-10-26

    The Utility of Social Modeling for Proliferation Assessment project (PL09-UtilSocial) investigates the use of social and cultural information to improve nuclear proliferation assessments, including nonproliferation assessments, Proliferation Resistance (PR) assessments, safeguards assessments, and other related studies. These assessments often use and create technical information about a host State and its posture towards proliferation, the vulnerability of a nuclear energy system (NES) to an undesired event, and the effectiveness of safeguards. The objective of this project is to find and integrate social and technical information by explicitly considering the role of cultural, social, and behavioral factors relevant to proliferation, and to describe and demonstrate if and how social science modeling has utility in proliferation assessment. This report describes a modeling approach and how it might be used to support a location-specific assessment of the PR of a particular NES. The report demonstrates the use of social modeling to enhance an existing assessment process that relies primarily on technical factors. This effort builds on a literature review and preliminary assessment performed as the first stage of the project and compiled in PNNL-18438. This report describes an effort to answer questions about whether it is possible to incorporate social modeling into a PR assessment in such a way that we can determine the effects of social factors on a primarily technical assessment. This report provides: 1. background information about relevant social factors literature; 2. background information about a particular PR assessment approach relevant to this particular demonstration; 3. a discussion of social modeling undertaken to find and characterize social factors that are relevant to the PR assessment of a nuclear facility in a specific location; 4. description of an enhancement concept that integrates social factors into an existing, technically

  8. Managing Information Uncertainty in Wave Height Modeling for the Offshore Structural Analysis through Random Set

    Directory of Open Access Journals (Sweden)

    Keqin Yan

    2017-01-01

    Full Text Available This chapter presents a reliability study for an offshore jacket structure with emphasis on the features of nonconventional modeling. Firstly, a random set model is formulated for modeling the random waves in an ocean site. Then, a jacket structure is investigated in a pushover analysis to identify the critical wave direction and key structural elements. This is based on the ultimate base shear strength. The selected probabilistic models are adopted for the important structural members and the wave direction is specified in the weakest direction of the structure for a conservative safety analysis. The wave height model is processed in a P-box format when it is used in the numerical analysis. The models are applied to find the bounds of the failure probabilities for the jacket structure. The propagation of this wave model to the uncertainty in results is investigated in both an interval analysis and Monte Carlo simulation. The results are compared in context of information content and numerical accuracy. Further, the failure probability bounds are compared with the conventional probabilistic approach.
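One way to picture the P-box treatment of wave height is as a pair of bounding CDFs that together bound the failure probability. The sketch below assumes, purely for illustration, a normal wave-height model whose mean is only known to lie in an interval; the chapter's actual random-set formulation is richer:

```python
from math import erf, sqrt

def norm_cdf(x, mu, sigma):
    # standard normal CDF evaluated via the error function
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def failure_prob_bounds(h_crit, mu_lo, mu_hi, sigma):
    # p-box: with the mean only known to lie in [mu_lo, mu_hi], every
    # admissible CDF of the wave height H lies between these two curves
    cdf_upper = norm_cdf(h_crit, mu_lo, sigma)  # smallest mean -> largest CDF
    cdf_lower = norm_cdf(h_crit, mu_hi, sigma)
    return 1.0 - cdf_upper, 1.0 - cdf_lower     # bounds on P(H > h_crit)

# failure taken as wave height exceeding a critical value; all numbers
# here are illustrative, not from the jacket analysis
p_lo, p_hi = failure_prob_bounds(12.0, 6.0, 7.5, 2.0)
```

The width of the interval [p_lo, p_hi] reflects the imprecision in the wave model, which a single precise distribution would hide.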

  9. Cost-utility of laparoscopic Nissen fundoplication versus proton pump inhibitors for chronic and controlled gastroesophageal reflux disease: a 3-year prospective randomized controlled trial and economic evaluation.

    Science.gov (United States)

    Goeree, Ron; Hopkins, Rob; Marshall, John K; Armstrong, David; Ungar, Wendy J; Goldsmith, Charles; Allen, Christopher; Anvari, Mehran

    2011-01-01

    Very few randomized controlled trials (RCTs) have compared laparoscopic Nissen fundoplication (LNF) to proton pump inhibitor (PPI) medical management for patients with chronic gastroesophageal reflux disease (GERD). Larger RCTs have been relatively short in duration, and have reported mixed results regarding symptom control and effect on quality of life (QOL). Economic evaluations have reported conflicting results. To determine the incremental cost-utility of LNF versus PPI for treating patients with chronic and controlled GERD over 3 years from the societal perspective. Economic evaluation was conducted alongside a RCT that enrolled 104 patients from October 2000 to September 2004. Primary study outcome was GERD symptoms (secondary outcomes included QOL and cost-utility). Resource utilization and QOL data collected at regular follow-up intervals determined incremental cost/QALY gained. Stochastic uncertainty was assessed using bootstrapping and methodologic assumptions were assessed using sensitivity analysis. There were no statistically significant differences in GERD symptom scores, but LNF did result in fewer heartburn days and improved QOL. Costs were higher for LNF patients by $3205/patient over 3 years, but QOL was also higher as measured by either QOL instrument. Based on total costs, the incremental cost-utility of LNF was $29,404/QALY gained using the Health Utility Index 3. Cost-utility results were sensitive to the utility instrument used ($29,404/QALY for the Health Utility Index 3, $31,117/QALY for the Short Form 6D, and $76,310/QALY for the EuroQol 5D) and to whether current lower prices for PPIs were used in the analysis. Results varied depending on the resource use/costs included in the analysis, the QOL instrument used, and the cost of PPIs; however, LNF was generally found to be a cost-effective treatment for patients with symptomatic controlled GERD requiring long-term management. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR).

  10. Large-scale inverse model analyses employing fast randomized data reduction

    Science.gov (United States)

    Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan

    2017-08-01

    When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
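The "sketching" step at the heart of RGA can be illustrated with a dense Gaussian sketching matrix: multiplying the length-n observation vector by a k x n random matrix (k much smaller than n) compresses it while approximately preserving Euclidean norms. A stdlib-only sketch (the sizes and scaling are illustrative; practical implementations use structured sketches for speed):

```python
import random

def sketch(y, k, rng):
    # apply a k x n Gaussian sketching matrix to the data vector y; the
    # 1/sqrt(k) scaling makes ||S y|| approximate ||y|| (Johnson-Lindenstrauss)
    scale = 1.0 / k ** 0.5
    return [scale * sum(rng.gauss(0.0, 1.0) * yi for yi in y) for _ in range(k)]

rng = random.Random(0)
y = [rng.random() for _ in range(2000)]   # stand-in for the observations
z = sketch(y, 400, rng)                   # reduced to 400 numbers
norm_y = sum(v * v for v in y) ** 0.5
norm_z = sum(v * v for v in z) ** 0.5
# norm_z / norm_y is close to 1: the sketch preserved the information content
```

The inverse analysis then works with z instead of y, so downstream costs scale with k rather than n.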

  11. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology

    Science.gov (United States)

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...

  12. Pólya number and first return of bursty random walk: Rigorous solutions

    Science.gov (United States)

    Wan, J.; Xu, X. P.

    2012-03-01

    The recurrence properties of random walks can be characterized by the Pólya number, i.e., the probability that the walker has returned to the origin at least once. In this paper, we investigate the Pólya number and first return for a bursty random walk on a line, in which the walk has different step sizes and moving probabilities. Using the concept of the Catalan number, we obtain exact results for the first return probability, the average first return time and the Pólya number for the first time. We show that the Pólya number displays two different functional behaviors when the walk deviates from the recurrent point. By utilizing the Lagrange inversion formula, we interpret our findings by transferring the Pólya number to the closed-form solutions of an inverse function. We also calculate the Pólya number using another approach, which corroborates our results and conclusions. Finally, we consider the recurrence properties and the Pólya number of two variations of the bursty random walk model.
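For the ordinary symmetric walk on a line, a special case of the bursty walk with equal step sizes, the Catalan-number route to first-return probabilities is fully explicit: f_n = C(2n, n) / ((2n - 1) 4^n), and the Pólya number is the sum of the f_n. A short sketch using the equivalent recurrence f_{n+1} = f_n (2n - 1)/(2n + 2), which avoids huge binomial coefficients:

```python
def first_return_probs(n_max):
    # out[n-1] = P(first return to the origin at step 2n) for the symmetric
    # unit-step walk; f_1 = 1/2 and f_{n+1} = f_n * (2n - 1) / (2n + 2),
    # equivalent to the Catalan formula C(2n, n) / ((2n - 1) * 4**n)
    f, out = 0.5, []
    for n in range(1, n_max + 1):
        out.append(f)
        f *= (2 * n - 1) / (2 * n + 2)
    return out

probs = first_return_probs(1000)
polya_partial = sum(probs)  # partial sums approach 1: the walk is recurrent
```

The slow approach of the partial sums to 1 (the tail decays like n^(-3/2)) is also why the mean first-return time diverges even though return is certain.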

  13. Utilizing the non-bridge oxygen model to predict the glass viscosity

    International Nuclear Information System (INIS)

    Choi, Kwansik; Sheng, Jiawei; Maeng, Sung Jun; Song, Myung Jae

    1998-01-01

    Viscosity is the most important process property of waste glass. Viscosity measurement is difficult and costly. The Non-Bridging Oxygen (NBO) model, which relates glass composition to viscosity, had been developed for high-level waste at the Savannah River Site (SRS). This research utilized the NBO model to predict the viscosity of KEPRI's 55 glasses. It was found that there was a linear relationship between the measured viscosity and the predicted viscosity. The NBO model could be used to predict glass viscosity in glass formulation development. However, the precision of the predicted viscosity is unsatisfactory because the composition ranges are very different between the SRS and KEPRI glasses. Modification of the NBO calculation, which included modification of alkaline earth elements and TiO2, could not strikingly improve the precision of the predicted values.

  14. Many multicenter trials had few events per center, requiring analysis via random-effects models or GEEs.

    Science.gov (United States)

    Kahan, Brennan C; Harhay, Michael O

    2015-12-01

    Adjustment for center in multicenter trials is recommended when there are between-center differences or when randomization has been stratified by center. However, common methods of analysis (such as fixed-effects, Mantel-Haenszel, or stratified Cox models) often require a large number of patients or events per center to perform well. We reviewed 206 multicenter randomized trials published in four general medical journals to assess the average number of patients and events per center and determine whether appropriate methods of analysis were used in trials with few patients or events per center. The median number of events per center/treatment arm combination for trials using a binary or survival outcome was 3 (interquartile range, 1-10). Sixteen percent of trials had less than 1 event per center/treatment combination, 50% fewer than 3, and 63% fewer than 5. Of the trials which adjusted for center using a method of analysis which requires a large number of events per center, 6% had less than 1 event per center-treatment combination, 25% fewer than 3, and 50% fewer than 5. Methods of analysis that allow for few events per center, such as random-effects models or generalized estimating equations (GEEs), were rarely used. Many multicenter trials contain few events per center. Adjustment for center using random-effects models or GEE with model-based (non-robust) standard errors may be beneficial in these scenarios. Copyright © 2015 Elsevier Inc. All rights reserved.

  15. Range walk error correction and modeling on Pseudo-random photon counting system

    Science.gov (United States)

    Shen, Shanshan; Chen, Qian; He, Weiji

    2017-08-01

    Signal-to-noise ratio and depth accuracy are modeled for the pseudo-random ranging system with two random processes. The theoretical results developed herein, which capture the effects of code length and signal energy fluctuation, are shown to agree with Monte Carlo simulation measurements. First, the SNR is developed as a function of the code length. Using Geiger-mode avalanche photodiodes (GMAPDs), longer code length is proven to reduce the noise effect and improve SNR. Second, the Cramer-Rao lower bound (CRLB) on range accuracy is derived to justify that longer code length can bring better range accuracy. Combined with the SNR model and the CRLB model, it is manifested that the range accuracy can be improved by increasing the code length to reduce the noise-induced error. Third, the Cramer-Rao lower bound on range accuracy is shown to converge to the previously published theories, and the Gauss range walk model is introduced to range accuracy. Experimental tests also converge to the boundary model presented in this paper. It has been proven that the depth error caused by the fluctuation of the number of detected photon counts in the laser echo pulse leads to the depth drift of the Time Point Spread Function (TPSF). Finally, a numerical fitting function is used to determine the relationship between the depth error and the photon counting ratio. Depth error due to different echo energies is calibrated so that the corrected depth accuracy is improved to 1 cm.

  16. On the Path to SunShot. Utility Regulatory and Business Model Reforms for Addressing the Financial Impacts of Distributed Solar on Utilities

    Energy Technology Data Exchange (ETDEWEB)

    Barbose, Galen [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Miller, John [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sigrin, Ben [National Renewable Energy Lab. (NREL), Golden, CO (United States); Reiter, Emerson [National Renewable Energy Lab. (NREL), Golden, CO (United States); Cory, Karlynn [National Renewable Energy Lab. (NREL), Golden, CO (United States); McLaren, Joyce [National Renewable Energy Lab. (NREL), Golden, CO (United States); Seel, Joachim [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Mills, Andrew [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Darghouth, Naim [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Satchwell, Andrew [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-05-01

    Net-energy metering (NEM) has helped drive the rapid growth of distributed PV (DPV) but has raised concerns about electricity cost shifts, utility financial losses, and inefficient resource allocation. These concerns have motivated real and proposed reforms to utility regulatory and business models. This report explores the challenges and opportunities associated with such reforms in the context of the U.S. Department of Energy's SunShot Initiative. Most of the reforms to date address NEM concerns by reducing the benefits provided to DPV customers and thus constraining DPV deployment. Eliminating NEM nationwide, by compensating exports of PV electricity at wholesale rather than retail rates, could cut cumulative DPV deployment by 20% in 2050 compared with a continuation of current policies. This would slow the PV cost reductions that arise from larger scale and market certainty. It could also thwart achievement of the SunShot deployment goals even if the initiative's cost targets are achieved. This undesirable prospect is stimulating the development of alternative reform strategies that address concerns about distributed PV compensation without inordinately harming PV economics and growth. These alternatives fall into the categories of facilitating higher-value DPV deployment, broadening customer access to solar, and aligning utility profits and earnings with DPV. Specific strategies include utility ownership and financing of DPV, community solar, distribution network operators, services-driven utilities, performance-based incentives, enhanced utility system planning, pricing structures that incentivize high-value DPV configurations, and decoupling and other ratemaking reforms that reduce regulatory lag. These approaches represent near- and long-term solutions for preserving the legacy of the SunShot Initiative.

  17. Analysis, Simulation and Prediction of Multivariate Random Fields with Package RandomFields

    Directory of Open Access Journals (Sweden)

    Martin Schlather

    2015-02-01

    Full Text Available Modeling of and inference on multivariate data that have been measured in space, such as temperature and pressure, are challenging tasks in environmental sciences, physics and materials science. We give an overview of and some background on modeling with cross-covariance models. The R package RandomFields supports the simulation, the parameter estimation and the prediction in particular for the linear model of coregionalization, the multivariate Matérn models, the delay model, and a spectrum of physically motivated vector-valued models. An example on weather data is considered, illustrating the use of RandomFields for parameter estimation and prediction.

  18. Prediction of Geological Subsurfaces Based on Gaussian Random Field Models

    Energy Technology Data Exchange (ETDEWEB)

    Abrahamsen, Petter

    1997-12-31

    During the sixties, random functions became practical tools for predicting ore reserves with associated precision measures in the mining industry. This was the start of the geostatistical methods called kriging. These methods are used, for example, in petroleum exploration. This thesis reviews the possibilities for using Gaussian random functions in modelling of geological subsurfaces. It develops methods for including many sources of information and observations for precise prediction of the depth of geological subsurfaces. The simple properties of Gaussian distributions make it possible to calculate optimal predictors in the mean square sense. This is done in a discussion of kriging predictors. These predictors are then extended to deal with several subsurfaces simultaneously. It is shown how additional velocity observations can be used to improve predictions. The use of gradient data and even higher order derivatives are also considered and gradient data are used in an example. 130 refs., 44 figs., 12 tabs.
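A kriging predictor of the kind reviewed here reduces, in its simplest (known-mean) form, to solving one linear system per prediction point. The sketch below implements 1-D simple kriging with a Gaussian covariance model; the covariance choice and parameter values are illustrative assumptions, not the thesis's setup:

```python
from math import exp

def gauss_cov(h, sill=1.0, corr_range=50.0):
    # Gaussian covariance model (an illustrative choice of model/parameters)
    return sill * exp(-(h / corr_range) ** 2)

def solve(a, b):
    # naive Gaussian elimination with partial pivoting for small systems
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(m[r][c]))
        m[c], m[p] = m[p], m[c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n + 1):
                m[r][k] -= f * m[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][k] * x[k] for k in range(r + 1, n))) / m[r][r]
    return x

def simple_kriging(xs, zs, x0, mean):
    # predict the surface depth at x0 from observations (xs, zs),
    # assuming the mean is known (simple kriging)
    cov_matrix = [[gauss_cov(abs(a - b)) for b in xs] for a in xs]
    cov_to_x0 = [gauss_cov(abs(a - x0)) for a in xs]
    w = solve(cov_matrix, cov_to_x0)
    return mean + sum(wi * (zi - mean) for wi, zi in zip(w, zs))

# at an observed location the predictor reproduces the observation exactly
depth = simple_kriging([0.0, 10.0, 30.0], [100.0, 102.0, 108.0], 10.0, 105.0)
```

Extending this to several correlated subsurfaces amounts to replacing the scalar covariance with a cross-covariance between surfaces, as the thesis discusses.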

  19. Modeling random telegraph signal noise in CMOS image sensor under low light based on binomial distribution

    International Nuclear Information System (INIS)

    Zhang Yu; Wang Guangyi; Lu Xinmiao; Hu Yongcai; Xu Jiangtao

    2016-01-01

    The random telegraph signal noise in the pixel source follower MOSFET is the principal component of the noise in the CMOS image sensor under low light. In this paper, the physical and statistical model of the random telegraph signal noise in the pixel source follower based on the binomial distribution is set up. The number of electrons captured or released by the oxide traps in the unit time is described as the random variables which obey the binomial distribution. As a result, the output states and the corresponding probabilities of the first and the second samples of the correlated double sampling circuit are acquired. The standard deviation of the output states after the correlated double sampling circuit can be obtained accordingly. In the simulation section, one hundred thousand samples of the source follower MOSFET have been simulated, and the simulation results show that the proposed model has similar statistical characteristics with the existing models under the effect of the channel length and the density of the oxide trap. Moreover, the noise histogram of the proposed model has been evaluated at different environmental temperatures. (paper)
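The binomial picture described above can be sketched directly: in each sampling interval every oxide trap independently captures or releases an electron with some probability, and the correlated double sampling (CDS) output reflects the net change between the two samples. The trap count and probabilities below are arbitrary illustration values, not fitted device parameters:

```python
import random

def binomial(n, p, rng):
    # binomial draw via Bernoulli trials (stdlib-only, fine for small n)
    return sum(1 for _ in range(n) if rng.random() < p)

def cds_output(n_traps, p_capture, p_release, rng):
    # net trapped-charge change between the two CDS samples: in the sampling
    # interval each trap may capture or release an electron, binomially
    return binomial(n_traps, p_capture, rng) - binomial(n_traps, p_release, rng)

rng = random.Random(7)
samples = [cds_output(20, 0.1, 0.1, rng) for _ in range(5000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# symmetric capture/release: mean near 0, variance near 2 * n * p * (1 - p)
```

A histogram of `samples` would be the (simplified) analogue of the noise histograms evaluated in the paper at different temperatures.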

  20. Collocation methods for uncertainty quantification in PDE models with random data

    KAUST Repository

    Nobile, Fabio

    2014-01-06

    In this talk we consider Partial Differential Equations (PDEs) whose input data are modeled as random fields to account for their intrinsic variability or our lack of knowledge. After parametrizing the input random fields by finitely many independent random variables, we exploit the high regularity of the solution of the PDE as a function of the input random variables and consider sparse polynomial approximations in probability (Polynomial Chaos expansion) by collocation methods. We first address interpolatory approximations where the PDE is solved on a sparse grid of Gauss points in the probability space and the solutions thus obtained interpolated by multivariate polynomials. We present recent results on optimized sparse grids in which the selection of points is based on a knapsack approach and relies on sharp estimates of the decay of the coefficients of the polynomial chaos expansion of the solution. Secondly, we consider regression approaches where the PDE is evaluated on randomly chosen points in the probability space and a polynomial approximation constructed by the least square method. We present recent theoretical results on the stability and optimality of the approximation under suitable conditions between the number of sampling points and the dimension of the polynomial space. In particular, we show that for uniform random variables, the number of sampling points has to scale quadratically with the dimension of the polynomial space to maintain the stability and optimality of the approximation. Numerical results show that such a condition is sharp in the monovariate case but seems to be over-constraining in higher dimensions. The regression technique seems therefore to be attractive in higher dimensions.
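The regression variant can be sketched in one dimension: evaluate the model at random points in probability space, then fit a polynomial by least squares, taking a number of samples that scales quadratically with the dimension of the polynomial space, as the stability condition for uniform variables suggests. The target function and degree below are illustrative stand-ins for a PDE solution:

```python
import random

def fit_poly_least_squares(f, degree, n_samples, rng):
    # evaluate f at random uniform points in [-1, 1], then fit a degree-d
    # polynomial by least squares via the normal equations (naive solve)
    xs = [rng.uniform(-1.0, 1.0) for _ in range(n_samples)]
    ys = [f(x) for x in xs]
    m = degree + 1
    a = [[sum(x ** (i + j) for x in xs) for j in range(m)] for i in range(m)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(m)]
    # Gaussian elimination with partial pivoting on the augmented system
    aug = [row[:] + [b[i]] for i, row in enumerate(a)]
    for c in range(m):
        p = max(range(c, m), key=lambda r: abs(aug[r][c]))
        aug[c], aug[p] = aug[p], aug[c]
        for r in range(c + 1, m):
            fac = aug[r][c] / aug[c][c]
            for k in range(c, m + 1):
                aug[r][k] -= fac * aug[c][k]
    coef = [0.0] * m
    for r in range(m - 1, -1, -1):
        coef[r] = (aug[r][m] - sum(aug[r][k] * coef[k]
                                   for k in range(r + 1, m))) / aug[r][r]
    return coef

rng = random.Random(3)
degree = 4
n_samples = (degree + 1) ** 2   # sample count ~ quadratic in the basis size
coef = fit_poly_least_squares(lambda x: x ** 3 - 0.5 * x, degree, n_samples, rng)
# the target lies inside the polynomial space, so the fit recovers it
pred = sum(c * 0.7 ** i for i, c in enumerate(coef))
```

With fewer samples than basis functions the normal equations become unstable, which is the practical face of the stability condition discussed above.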

  1. Freezing and extreme-value statistics in a random energy model with logarithmically correlated potential

    International Nuclear Information System (INIS)

    Fyodorov, Yan V; Bouchaud, Jean-Philippe

    2008-01-01

    We investigate some implications of the freezing scenario proposed by Carpentier and Le Doussal (CLD) for a random energy model (REM) with logarithmically correlated random potential. We introduce a particular (circular) variant of the model, and show that the integer moments of the partition function in the high-temperature phase are given by the well-known Dyson Coulomb gas integrals. The CLD freezing scenario allows one to use those moments for extracting the distribution of the free energy in both high- and low-temperature phases. In particular, it yields the full distribution of the minimal value in the potential sequence. This provides an explicit new class of extreme-value statistics for strongly correlated variables, manifestly different from the standard Gumbel class. (fast track communication)

  2. Freezing and extreme-value statistics in a random energy model with logarithmically correlated potential

    Energy Technology Data Exchange (ETDEWEB)

    Fyodorov, Yan V [School of Mathematical Sciences, University of Nottingham, Nottingham NG72RD (United Kingdom); Bouchaud, Jean-Philippe [Science and Finance, Capital Fund Management 6-8 Bd Haussmann, 75009 Paris (France)

    2008-09-19

    We investigate some implications of the freezing scenario proposed by Carpentier and Le Doussal (CLD) for a random energy model (REM) with logarithmically correlated random potential. We introduce a particular (circular) variant of the model, and show that the integer moments of the partition function in the high-temperature phase are given by the well-known Dyson Coulomb gas integrals. The CLD freezing scenario allows one to use those moments for extracting the distribution of the free energy in both high- and low-temperature phases. In particular, it yields the full distribution of the minimal value in the potential sequence. This provides an explicit new class of extreme-value statistics for strongly correlated variables, manifestly different from the standard Gumbel class. (fast track communication)

  3. Comparative study of random and uniform models for the distribution of TRISO particles in HTR-10 fuel elements

    International Nuclear Information System (INIS)

    Rosales, J.; Perez, J.; Garcia, C.; Munnoz, A.; Lira, C. A. B. O.

    2015-01-01

    TRISO particles are the specific feature of HTR-10 and, generally, HTGR reactors. Their heterogeneity and random arrangement in the graphite matrix of these reactors create a significant modeling challenge. In the simulation of spherical fuel elements using MCNPX, repetitive structures using uniform distribution models are usually created. The use of these repetitive structures introduces two major approximations: the non-randomness of the TRISO particles inside the pebbles and the intersection of the pebble surface with the TRISO particles. These approximations could significantly affect the multiplicative properties of the core. In order to study their influence on the multiplicative properties, the K-inf value was estimated in one pebble with white boundary conditions using 4 different configurations for the distribution of the TRISO particles inside the pebble: a uniform hexagonal model, a uniform cubic model, a uniform cubic model without the cutting effect, and a random distribution model. The impact of these models at core scale was studied by solving problem B1 from the benchmark problems presented in a Coordinated Research Program of the IAEA. (Author)
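A random distribution model of the kind compared above can be generated by random sequential addition: propose uniformly distributed TRISO centers and reject any that overlap an already-placed particle or cut the pebble surface. The radii and particle count below are round illustration values, not the HTR-10 specification:

```python
import random

def pack_triso(n_target, r_pebble=2.5, r_triso=0.05, max_tries=200000, seed=5):
    # random sequential addition: draw candidate centers uniformly in the
    # pebble, reject any that would cut the pebble surface or overlap a
    # previously placed particle
    rng = random.Random(seed)
    placed = []
    tries = 0
    while len(placed) < n_target and tries < max_tries:
        tries += 1
        x, y, z = (rng.uniform(-r_pebble, r_pebble) for _ in range(3))
        if (x * x + y * y + z * z) ** 0.5 > r_pebble - r_triso:
            continue                       # particle would cut the surface
        if all((x - a) ** 2 + (y - b) ** 2 + (z - c) ** 2 >= (2 * r_triso) ** 2
               for a, b, c in placed):
            placed.append((x, y, z))
    return placed

particles = pack_triso(2000)   # non-overlapping, fully interior particles
```

The resulting center list could be written out as explicit sphere positions for a transport code, avoiding both the lattice regularity and the surface-cutting approximation at the cost of a much larger input model.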

  4. Managing the Newsvendor Modeled Product System with Random Capacity and Capacity-Dependent Price

    Directory of Open Access Journals (Sweden)

    Qingying Li

    2015-01-01

    Full Text Available We consider a newsvendor modeled product system, where the firm provides products to the market. The supply capacity of the product is random, so the firm receives either the order quantity or the realized capacity, whichever is smaller. The market price is capacity dependent. We consider two types of production cost structures: the procurement case and the in-house production case. The firm pays for the received quantity in the former case and for the ordered quantity in the latter case. We obtain the optimal order quantities for both cases. Comparing with the traditional newsvendor model, we find that the optimal order quantities in both the procurement case and the in-house production case are no greater than that in the traditional newsvendor model with a fixed selling price. We also find that the optimal order quantity for the procurement case is greater than that for the in-house production case. A numerical study is conducted to investigate the sensitivity of the optimal solution to the distribution of the random capacity/demand.
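The ordering between the two cost structures can be checked with a small Monte Carlo sketch: simulate (demand, capacity) pairs, evaluate expected profit on a grid of order quantities, and compare the maximizers. For simplicity this sketch fixes the selling price rather than making it capacity dependent, and all distributions and prices are illustrative, not the paper's model:

```python
import random

def expected_profit(q, pay_for_received, scenarios, price=10.0, unit_cost=4.0):
    # received = min(order, realized capacity); sales = min(received, demand)
    total = 0.0
    for demand, capacity in scenarios:
        received = min(q, capacity)
        paid_quantity = received if pay_for_received else q
        total += price * min(received, demand) - unit_cost * paid_quantity
    return total / len(scenarios)

rng = random.Random(42)
scenarios = [(rng.uniform(50.0, 150.0), rng.uniform(0.0, 200.0))
             for _ in range(20000)]
grid = range(0, 201, 5)
q_procurement = max(grid, key=lambda q: expected_profit(q, True, scenarios))
q_inhouse = max(grid, key=lambda q: expected_profit(q, False, scenarios))
# paying for the ordered (in-house) rather than the received quantity
# penalizes over-ordering, so q_inhouse <= q_procurement
```

With these illustrative distributions the procurement optimum sits near the classical newsvendor critical-fractile quantity, while the in-house optimum is pulled below it.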

  5. Comparing Fuzzy Sets and Random Sets to Model the Uncertainty of Fuzzy Shorelines

    NARCIS (Netherlands)

    Dewi, Ratna Sari; Bijker, Wietske; Stein, Alfred

    2017-01-01

    This paper addresses uncertainty modelling of shorelines by comparing fuzzy sets and random sets. Both methods quantify extensional uncertainty of shorelines extracted from remote sensing images. Two datasets were tested: pan-sharpened Pleiades with four bands (Pleiades) and pan-sharpened Pleiades

  6. Impact of Predicting Health Care Utilization Via Web Search Behavior: A Data-Driven Analysis.

    Science.gov (United States)

    Agarwal, Vibhu; Zhang, Liangliang; Zhu, Josh; Fang, Shiyuan; Cheng, Tim; Hong, Chloe; Shah, Nigam H

    2016-09-21

    utilization score, served as a surrogate measure of the model's utility. We obtained the highest area under the curve (0.796) in medical visit prediction with our random forests model and daywise features. Ablating feature categories one at a time showed that the model performance worsened the most when location features were dropped. An online evaluation in which advertisements were served to users who had a high predicted probability of a future medical visit showed a 3.96% increase in the show conversion rate. Results from our experiments done in a research setting suggest that it is possible to accurately predict future patient visits from geotagged mobile search logs. Results from the offline and online experiments on the utility of health utilization predictions suggest that such prediction can have utility for health care providers.

  7. Quantifying spatial distribution of snow depth errors from LiDAR using Random Forests

    Science.gov (United States)

    Tinkham, W.; Smith, A. M.; Marshall, H.; Link, T. E.; Falkowski, M. J.; Winstral, A. H.

    2013-12-01

    There is increasing need to characterize the distribution of snow in complex terrain using remote sensing approaches, especially in isolated mountainous regions that are often water-limited, the principal source of terrestrial freshwater, and sensitive to climatic shifts and variations. We apply intensive topographic surveys, multi-temporal LiDAR, and Random Forest modeling to quantify snow volume and characterize associated errors across seven land cover types in a semi-arid mountainous catchment at 1 and 4 m spatial resolutions. The LiDAR-based estimates of both snow-off surface topology and snow depths were validated against ground-based measurements across the catchment. Comparison of LiDAR-derived snow depths to manual snow depth surveys revealed that LiDAR-based estimates were more accurate in areas of low-lying vegetation such as shrubs (RMSE = 0.14 m) than in areas of tree cover (RMSE = 0.20-0.35 m). The highest errors were found along the edge of conifer forests (RMSE = 0.35 m); however, a second conifer transect outside the catchment had much lower errors (RMSE = 0.21 m). This difference is attributed to the wind exposure of the first site, which led to highly variable snow depths over short spatial distances. The Random Forest modeled errors deviated from the field-measured errors with an RMSE of 0.09-0.34 m across the different cover types. Results show that snow drifts, which are important for maintaining spring and summer stream flows and for establishing and sustaining water-limited plant species, contained 30 ± 5-6% of the snow volume while only occupying 10% of the catchment area, similar to findings by prior physically-based modeling approaches. This study demonstrates the potential utility of combining multi-temporal LiDAR with Random Forest modeling to quantify the distribution of snow depth with a reasonable degree of accuracy. 
Future work could explore the utility of Terrestrial LiDAR Scanners to produce validation of snow-on surface
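
    The abstract does not specify the Random Forest configuration; as a hedged illustration of the bagging-plus-random-feature-subset idea it relies on, here is a tiny forest of one-split regression trees in pure Python. A real analysis would use a library such as scikit-learn, and the features and error values below are invented.

```python
import random

def fit_stump(X, y, feat_idx):
    """Best single-feature threshold split minimizing squared error."""
    best = None
    for j in feat_idx:
        for t in sorted({x[j] for x in X})[:-1]:
            left = [yi for x, yi in zip(X, y) if x[j] <= t]
            right = [yi for x, yi in zip(X, y) if x[j] > t]
            ml, mr = sum(left) / len(left), sum(right) / len(right)
            sse = (sum((v - ml) ** 2 for v in left)
                   + sum((v - mr) ** 2 for v in right))
            if best is None or sse < best[0]:
                best = (sse, j, t, ml, mr)
    if best is None:  # no valid split: predict the overall mean
        m = sum(y) / len(y)
        return (feat_idx[0], float("inf"), m, m)
    return best[1:]

def fit_forest(X, y, n_trees=40, seed=0):
    """Bagging plus random feature subsets over one-split trees."""
    rng = random.Random(seed)
    n, p = len(X), len(X[0])
    forest = []
    for _ in range(n_trees):
        rows = [rng.randrange(n) for _ in range(n)]      # bootstrap sample
        feats = rng.sample(range(p), max(1, p // 2))     # random feature subset
        forest.append(fit_stump([X[i] for i in rows], [y[i] for i in rows], feats))
    return forest

def predict(forest, x):
    return sum(ml if x[j] <= t else mr for j, t, ml, mr in forest) / len(forest)

# illustrative: 'error' depends on one feature (say, canopy height), not the other
rng = random.Random(1)
X = [[i % 10, rng.random()] for i in range(100)]
y = [0.35 if row[0] > 5 else 0.14 for row in X]  # RMSE-like magnitudes
forest = fit_forest(X, y)
err_tall = predict(forest, [8, 0.5])
err_short = predict(forest, [2, 0.5])
```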

  8. Utility of Social Modeling in Assessment of a State's Propensity for Nuclear Proliferation

    International Nuclear Information System (INIS)

    Coles, Garill A.; Brothers, Alan J.; Whitney, Paul D.; Dalton, Angela C.; Olson, Jarrod; White, Amanda M.; Cooley, Scott K.; Youchak, Paul M.; Stafford, Samuel V.

    2011-01-01

    This report is the third and final in a set of three reports documenting research for the U.S. Department of Energy (DOE) National Nuclear Security Administration (NNSA) Office of Nonproliferation Research and Development (NA-22) Simulations, Algorithms, and Modeling program, which investigates how social modeling can be used to improve proliferation assessment for informing nuclear security, policy, safeguards, design of nuclear systems, and research decisions. Social modeling has not been used to any significant extent in proliferation studies. This report focuses on the utility of social modeling as applied to the assessment of a State's propensity to develop a nuclear weapons program.

  9. Regulatory environment and its impact on the market value of investor-owned electric utilities

    Science.gov (United States)

    Vishwanathan, Raman

    While other regulated industries have one by one been exposed to competitive reform, electric power, for over eighty years, has remained a great monopoly. For all those years, the vertically integrated suppliers of electricity in the United States have been assigned exclusive territorial (consumer) franchises and have been closely regulated. This environment is in the process of change because the electric power industry is currently undergoing some dramatic adjustments. Since 1992, a number of states have initiated regulatory reform and are moving to allow retail customers to choose their energy supplier. There has also been a considerable federal government role in encouraging competition in the generation and transmission of electricity. The objective of this research is to investigate the reaction of investors to the prevailing regulatory environment in the electric utility industry by analyzing the market-to-book value for investor-owned electric utilities in the United States as a gauge of investor concern or support for change. In this study, the variable of interest is the market valuation of utilities, as it captures investors' reactions to changes in the regulatory environment. Initially, a classic regression model is analyzed on the full sample (the 96 investor-owned utilities for the years 1992 through 1996), providing a total of 480 (96 firms over 5 years) observations. Later, fixed- and random-effects models are analyzed for the same full-sample model specified in the previous analysis. The analysis is also carried forward to examine the impact on market values of the size of the utility and its degree of reliance on nuclear power generation. In the period of this study, 1992-1996, the financial security markets downgraded utilities that were still operating in a regulated environment or had a substantial percentage of their power generation from nuclear power plants. It was also found that the financial market was sensitive to the size of
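
    The fixed-effects specification used in such panel analyses can be sketched with the within (demeaning) estimator. The simulated 96-firm, 5-year panel below is illustrative, with an invented slope of 0.8; it is not the study's data.

```python
import random

def within_estimator(y, x, firm):
    """Fixed-effects (within) slope: demean y and x by firm, then OLS."""
    firms = set(firm)
    ybar = {f: 0.0 for f in firms}
    xbar = {f: 0.0 for f in firms}
    n = {f: 0 for f in firms}
    for yi, xi, fi in zip(y, x, firm):
        ybar[fi] += yi
        xbar[fi] += xi
        n[fi] += 1
    for f in firms:
        ybar[f] /= n[f]
        xbar[f] /= n[f]
    num = sum((yi - ybar[fi]) * (xi - xbar[fi])
              for yi, xi, fi in zip(y, x, firm))
    den = sum((xi - xbar[fi]) ** 2 for xi, fi in zip(x, firm))
    return num / den

# simulate 96 firms over 5 years: y = alpha_firm + 0.8 * x + noise,
# with the regressor correlated with the firm effect (the case where
# fixed effects matter and pooled OLS would be biased)
rng = random.Random(42)
y, x, firm = [], [], []
for f in range(96):
    alpha = rng.gauss(0, 2)
    for t in range(5):
        xi = rng.gauss(0, 1) + 0.5 * alpha
        yi = alpha + 0.8 * xi + rng.gauss(0, 0.1)
        firm.append(f)
        x.append(xi)
        y.append(yi)
beta_fe = within_estimator(y, x, firm)
```

Demeaning sweeps out each firm's intercept, so the slope is recovered even though the regressor is correlated with the firm effect.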

  10. Multiscale model of short cracks in a random polycrystalline aggregate

    International Nuclear Information System (INIS)

    Simonovski, I.; Cizelj, L.; Petric, Z.

    2006-01-01

    A plane-strain finite element crystal plasticity model of a microstructurally small stationary crack emanating at a surface grain in 316L stainless steel is proposed. The model, consisting of 212 randomly shaped, sized and oriented grains, is loaded monotonically in uniaxial tension to a maximum load of 1.12 Rp0.2 (280 MPa). The influence that a random grain structure imposes on a Stage I crack is assessed by calculating the crack tip opening displacement (CTOD) and crack tip sliding displacement (CTSD) for single-crystal as well as polycrystal models, considering also different crystallographic orientations. In the single-crystal case the CTOD and CTSD may differ by more than one order of magnitude. Near the crack tip, slip is activated on all the slip planes, whereas only two are active in the rest of the model. The maximum CTOD is directly related to the maximal Schmid factors. For the more complex polycrystal cases it is shown that certain crystallographic orientations result in a cluster of soft grains around the crack-containing grain. In these cases the crack tip can become part of the localized strain, resulting in a large CTOD value. This effect, resulting from the overall grain orientations and sizes, can have a greater impact on the CTOD than the local grain orientation. On the other hand, when a localized soft response forms away from the crack, the localized strain does not affect the crack tip directly, resulting in a small CTOD value. The resulting difference in CTOD can be up to a factor of 4, depending upon the crystallographic set. Grains as far away as 6 times the crack length significantly influence the crack tip parameters. It was also found that a larger crack-containing grain tends to increase the CTOD. Finally, a smaller than expected drop in the CTOD (12.7%) was obtained as the crack approached the grain boundary. This could be due to the assumption of unchanged crack direction, monotonic-only loading and simplified grain boundary modelling. 
(author)

  11. Mesoscale model response to random, surface-based perturbations — A sea-breeze experiment

    Science.gov (United States)

    Garratt, J. R.; Pielke, R. A.; Miller, W. F.; Lee, T. J.

    1990-09-01

    The introduction into a mesoscale model of random (in space) variations in roughness length, or random (in space and time) surface perturbations of temperature and friction velocity, produces a measurable, but barely significant, response in the simulated flow dynamics of the lower atmosphere. The perturbations are an attempt to include the effects of sub-grid variability into the ensemble-mean parameterization schemes used in many numerical models. Their magnitude is set in our experiments by appeal to real-world observations of the spatial variations in roughness length and daytime surface temperature over the land on horizontal scales of one to several tens of kilometers. With sea-breeze simulations, comparisons of a number of realizations forced by roughness-length and surface-temperature perturbations with the standard simulation reveal no significant change in ensemble mean statistics, and only small changes in the sea-breeze vertical velocity. Changes in the updraft velocity for individual runs, of up to several cm s-1 (compared to a mean of 14 cm s-1), are directly the result of prefrontal temperature changes of 0.1 to 0.2 K, produced by the random surface forcing. The correlation and magnitude of the changes are entirely consistent with a gravity-current interpretation of the sea breeze.

  12. Bayesian Hierarchical Random Effects Models in Forensic Science

    Directory of Open Access Journals (Sweden)

    Colin G. G. Aitken

    2018-04-01

    Full Text Available Statistical modeling of the evaluation of evidence with the use of the likelihood ratio has a long history. It dates from the Dreyfus case at the end of the nineteenth century through the work at Bletchley Park in the Second World War to the present day. The development received a significant boost in 1977 with a seminal work by Dennis Lindley which introduced a Bayesian hierarchical random effects model for the evaluation of evidence with an example of refractive index measurements on fragments of glass. Many models have been developed since then. The methods have now been sufficiently well-developed and have become so widespread that it is timely to try and provide a software package to assist in their implementation. With that in mind, a project (SAILR: Software for the Analysis and Implementation of Likelihood Ratios) was funded by the European Network of Forensic Science Institutes through their Monopoly programme to develop a software package for use by forensic scientists world-wide that would assist in the statistical analysis and implementation of the approach based on likelihood ratios. It is the purpose of this document to provide a short review of a small part of this history. The review also provides a background, or landscape, for the development of some of the models within the SAILR package, and references to SAILR are made as appropriate.

  13. Bayesian Hierarchical Random Effects Models in Forensic Science.

    Science.gov (United States)

    Aitken, Colin G G

    2018-01-01

    Statistical modeling of the evaluation of evidence with the use of the likelihood ratio has a long history. It dates from the Dreyfus case at the end of the nineteenth century through the work at Bletchley Park in the Second World War to the present day. The development received a significant boost in 1977 with a seminal work by Dennis Lindley which introduced a Bayesian hierarchical random effects model for the evaluation of evidence with an example of refractive index measurements on fragments of glass. Many models have been developed since then. The methods have now been sufficiently well-developed and have become so widespread that it is timely to try and provide a software package to assist in their implementation. With that in mind, a project (SAILR: Software for the Analysis and Implementation of Likelihood Ratios) was funded by the European Network of Forensic Science Institutes through their Monopoly programme to develop a software package for use by forensic scientists world-wide that would assist in the statistical analysis and implementation of the approach based on likelihood ratios. It is the purpose of this document to provide a short review of a small part of this history. The review also provides a background, or landscape, for the development of some of the models within the SAILR package, and references to SAILR are made as appropriate.
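
    A minimal sketch of the kind of likelihood-ratio computation these models formalize, using a two-level normal model with known variances (a simplification in the spirit of Lindley's refractive-index setup, not the SAILR implementation): under the prosecution hypothesis the two measurements share a common source mean, under the defence hypothesis they come from independent sources.

```python
import math

def normal_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def likelihood_ratio(x, y, mu, tau2, sigma2):
    """LR for two measurements under a two-level normal model:
    source means ~ N(mu, tau2), measurements ~ N(theta, sigma2).
    Hp: same source; Hd: different sources (all variances known)."""
    # Hd: x and y independent, each marginally N(mu, tau2 + sigma2)
    denom = (normal_pdf(x, mu, tau2 + sigma2)
             * normal_pdf(y, mu, tau2 + sigma2))
    # Hp: common theta integrated out -> (x, y) bivariate normal with
    # Var = tau2 + sigma2 on the diagonal and Cov(x, y) = tau2
    v = tau2 + sigma2
    rho = tau2 / v
    z = ((x - mu) ** 2 - 2 * rho * (x - mu) * (y - mu)
         + (y - mu) ** 2) / (v * (1 - rho ** 2))
    numer = math.exp(-z / 2) / (2 * math.pi * v * math.sqrt(1 - rho ** 2))
    return numer / denom
```

With illustrative variances, close measurements support the same-source hypothesis (LR > 1) and distant ones support different sources (LR < 1).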

  14. First steps towards a state classification in the random-field Ising model

    International Nuclear Information System (INIS)

    Basso, Vittorio; Magni, Alessandro; Bertotti, Giorgio

    2006-01-01

    The properties of locally stable states of the random-field Ising model are studied. A map is defined for the dynamics driven by the field starting from a locally stable state. The fixed points of the map are connected with the limit hysteresis loops that appear in the classification of the states

  15. The Dirichlet-Multinomial Model for Multivariate Randomized Response Data and Small Samples

    Science.gov (United States)

    Avetisyan, Marianna; Fox, Jean-Paul

    2012-01-01

    In survey sampling the randomized response (RR) technique can be used to obtain truthful answers to sensitive questions. Although the individual answers are masked due to the RR technique, individual (sensitive) response rates can be estimated when observing multivariate response data. The beta-binomial model for binary RR data will be generalized…

  16. The random cluster model and a new integration identity

    International Nuclear Information System (INIS)

    Chen, L C; Wu, F Y

    2005-01-01

    We evaluate the free energy of the random cluster model at its critical point for 0 < q < 4, when (1/π) cos⁻¹(√q/2) is a rational number. As a by-product, our consideration leads to a closed-form evaluation of the integral (1/4π²) ∫_0^{2π} dΘ ∫_0^{2π} dΦ ln[A + B + C − A cosΘ − B cosΦ − C cos(Θ+Φ)] = −ln(2S) + (2/π)[Ti₂(AS) + Ti₂(BS) + Ti₂(CS)], which arises in lattice statistics, where A, B, C ≥ 0, S = 1/√(AB + BC + CA), and Ti₂ is the inverse tangent integral.

  17. Clustering Single-Cell Expression Data Using Random Forest Graphs.

    Science.gov (United States)

    Pouyan, Maziyar Baran; Nourani, Mehrdad

    2017-07-01

    Complex tissues such as brain and bone marrow are made up of multiple cell types. As the study of biological tissue structure progresses, the role of cell-type-specific research becomes increasingly important. Novel sequencing technology such as single-cell cytometry provides researchers access to valuable biological data. Applying machine-learning techniques to these high-throughput datasets provides deep insights into the cellular landscape of the tissue of which those cells are a part. In this paper, we propose the use of random-forest-based single-cell profiling, a new machine-learning-based technique, to profile different cell types of intricate tissues using single-cell cytometry data. Our technique utilizes random forests to capture cell marker dependences and model the cellular populations using the cell network concept. This cellular network helps us discover what cell types are in the tissue. Our experimental results on public-domain datasets indicate promising performance and accuracy of our technique in extracting cell populations of complex tissues.

  18. Development and estimation of a semi-compensatory model with flexible error structure

    DEFF Research Database (Denmark)

    Kaplan, Sigal; Shiftan, Yoram; Bekhor, Shlomo

    2009-01-01

    , and utility-based choice accommodating alternatively nested substitution patterns across the alternatives and random taste variation across the population. The proposed model is applied to off-campus rental apartment choice of students. Results show (i) the estimated model for a universal realm of 200...

  19. Random walks and diffusion on networks

    Science.gov (United States)

    Masuda, Naoki; Porter, Mason A.; Lambiotte, Renaud

    2017-11-01

    Random walks are ubiquitous in the sciences, and they are interesting from both theoretical and practical perspectives. They are one of the most fundamental types of stochastic processes; can be used to model numerous phenomena, including diffusion, interactions, and opinions among humans and animals; and can be used to extract information about important entities or dense groups of entities in a network. Random walks have been studied for many decades on both regular lattices and (especially in the last couple of decades) on networks with a variety of structures. In the present article, we survey the theory and applications of random walks on networks, restricting ourselves to simple cases of single and non-adaptive random walkers. We distinguish three main types of random walks: discrete-time random walks, node-centric continuous-time random walks, and edge-centric continuous-time random walks. We first briefly survey random walks on a line, and then we consider random walks on various types of networks. We extensively discuss applications of random walks, including ranking of nodes (e.g., PageRank), community detection, respondent-driven sampling, and opinion models such as voter models.
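
    As a minimal illustration of the node-ranking application mentioned above, here is a sketch of PageRank by power iteration on a toy directed graph (the damping value is the conventional 0.85; the graph is invented).

```python
def pagerank(adj, damping=0.85, tol=1e-10, max_iter=200):
    """Power iteration for PageRank on an adjacency list {node: [out-neighbours]}."""
    nodes = list(adj)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(max_iter):
        new = {u: (1.0 - damping) / n for u in nodes}  # teleportation mass
        for u in nodes:
            out = adj[u]
            if out:
                share = damping * rank[u] / len(out)
                for v in out:
                    new[v] += share
            else:  # dangling node: spread its rank uniformly
                for v in nodes:
                    new[v] += damping * rank[u] / n
        if sum(abs(new[u] - rank[u]) for u in nodes) < tol:
            rank = new
            break
        rank = new
    return rank

ranks = pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]})
```

This is the discrete-time random walk of the survey with teleportation: the stationary distribution of that walk is the PageRank vector.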

  20. Utility Function for modeling Group Multicriteria Decision Making problems as games

    OpenAIRE

    Alexandre Bevilacqua Leoneti

    2016-01-01

    To assist in the decision making process, several multicriteria methods have been proposed. However, the existing methods assume a single decision-maker and do not consider decision under risk, which is better addressed by Game Theory. Hence, the aim of this research is to propose a Utility Function that makes it possible to model Group Multicriteria Decision Making problems as games. The advantage of using Game Theory for solving Group Multicriteria Decision Making problems is to evaluate th...

  1. Lévy Random Bridges and the Modelling of Financial Information

    OpenAIRE

    Edward Hoyle; Lane P. Hughston; Andrea Macrina

    2010-01-01

    The information-based asset-pricing framework of Brody, Hughston and Macrina (BHM) is extended to include a wider class of models for market information. In the BHM framework, each asset is associated with a collection of random cash flows. The price of the asset is the sum of the discounted conditional expectations of the cash flows. The conditional expectations are taken with respect to a filtration generated by a set of 'information processes'. The information processes carry imperfec...

  2. Ferrimagnetic Properties of Bond Dilution Mixed Blume-Capel Model with Random Single-Ion Anisotropy

    International Nuclear Information System (INIS)

    Liu Lei; Yan Shilei

    2005-01-01

    We study the ferrimagnetic properties of spin-1/2 and spin-1 systems by means of the effective field theory. The system is considered in the framework of the bond dilution mixed Blume-Capel model (BCM) with random single-ion anisotropy. The investigation of phase diagrams and magnetization curves indicates the existence of induced magnetic ordering and single or multiple compensation points. Special emphasis is placed on the influence of bond dilution and random single-ion anisotropy on normal or induced magnetic ordering states and on single or multiple compensation points. Normal magnetic ordering states take on new phase diagrams with increasing randomness (bond and anisotropy), while anisotropy-induced magnetic ordering states always occur regardless of whether the concentration of anisotropy is large or small. The existence and disappearance of compensation points rely strongly on bond dilution and random single-ion anisotropy. Some results have not been revealed in previous papers or predicted by the Neel theory of ferrimagnetism.

  3. Energy utilization associated with regular activity breaks and continuous physical activity: A randomized crossover trial.

    Science.gov (United States)

    Fenemor, S P; Homer, A R; Perry, T L; Skeaff, C M; Peddie, M C; Rehrer, N J

    2018-06-01

    To quantify and compare energy utilization associated with prolonged sitting alone, or interrupted with regular activity breaks and/or an additional bout of continuous physical activity. Thirty-six adults (11 males, BMI 24.1 ± 4.6) completed four interventions: (1) prolonged sitting (SIT), (2) sitting with 2 min of walking every 30 min (RAB), (3) prolonged sitting with 30 min of continuous walking at the end of the day (SIT + PA), (4) a combination of the activities in (2) and (3) above (RAB + PA). All walking was at a speed and incline corresponding to 60% V̇O2max. Energy utilization over 7 h for each intervention was estimated using indirect calorimetry. Compared to SIT, SIT + PA increased total energy utilization by 709 kJ (95% CI 485-933 kJ), RAB by 863 kJ (95% CI 638-1088 kJ), and RAB + PA by 1752 kJ (95% CI 1527-1927 kJ) (all p < 0.05). Total energy utilization did not differ significantly between SIT + PA and RAB; however, post-physical activity energy utilization in RAB was 632 kJ greater than in SIT + PA (95% CI 561-704 kJ; p < 0.05). Regular activity breaks thus produce greater post-activity energy utilization compared to a single bout of continuous activity, although total energy utilization is similar. Combining activity breaks with a longer continuous bout of activity will further enhance energy utilization and, in the longer term, may positively affect weight management to a greater degree than either activity pattern performed alone. ANZCTR12614000624684. Copyright © 2018 The Italian Society of Diabetology, the Italian Society for the Study of Atherosclerosis, the Italian Society of Human Nutrition, and the Department of Clinical Medicine and Surgery, Federico II University. Published by Elsevier B.V. All rights reserved.

  4. Coupled continuous time-random walks in quenched random environment

    Science.gov (United States)

    Magdziarz, M.; Szczotka, W.

    2018-02-01

    We introduce a coupled continuous-time random walk with coupling which is characteristic for Lévy walks. Additionally we assume that the walker moves in a quenched random environment, i.e. the site disorder at each lattice point is fixed in time. We analyze the scaling limit of such a random walk. We show that for large times the behaviour of the analyzed process is exactly the same as in the case of uncoupled quenched trap model for Lévy flights.
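
    As a concrete toy version of the coupling characteristic of Lévy walks (the quenched site disorder of the paper is omitted here for brevity): a walker whose jump length equals the waiting time spent on it, with heavy-tailed Pareto-distributed durations.

```python
import random

def levy_walk(n_steps, alpha=1.5, seed=0):
    """Coupled continuous-time random walk in the Lévy-walk sense:
    each jump's length equals the waiting time spent on it (constant
    unit speed), with Pareto(alpha) distributed durations."""
    rng = random.Random(seed)
    t, x = 0.0, 0.0
    times, positions = [0.0], [0.0]
    for _ in range(n_steps):
        tau = rng.paretovariate(alpha)   # heavy-tailed waiting time
        direction = rng.choice((-1, 1))
        t += tau
        x += direction * tau             # coupling: |jump| == waiting time
        times.append(t)
        positions.append(x)
    return times, positions

times, positions = levy_walk(500)
```

For alpha < 2 the durations have infinite variance, which is what produces the anomalous scaling limits analyzed in the paper.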

  5. High utility-itemset mining and privacy-preserving utility mining

    Directory of Open Access Journals (Sweden)

    Jerry Chun-Wei Lin

    2016-03-01

    Full Text Available In recent decades, high-utility itemset mining (HUIM) has emerged as a critical research topic, since both quantity and profit factors are considered in mining the high-utility itemsets (HUIs). Generally, data mining is commonly used to discover interesting and useful knowledge from massive data. It may, however, lead to privacy threats if private or secure information (e.g., HUIs) is published in a public place or misused. In this paper, we focus on the issues of HUIM and privacy-preserving utility mining (PPUM), and present two evolutionary algorithms to respectively mine HUIs and hide the sensitive high-utility itemsets in PPUM. Extensive experiments showed that the two proposed models for the applications of HUIM and PPUM can not only generate high-quality profitable itemsets according to the user-specified minimum utility threshold, but also enable the capability of privacy preserving for private or secure information (e.g., HUIs) in real-world applications.
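
    A toy sketch of what "high-utility itemset" means (naive exhaustive enumeration, not the evolutionary algorithms proposed in the paper; unit profits and transactions are invented): an itemset's utility sums quantity × unit profit over the transactions containing all of its items, and HUIs are those meeting a minimum utility threshold.

```python
from itertools import combinations

profit = {"a": 5, "b": 2, "c": 1}  # unit profits (external utility)
db = [  # each transaction maps item -> purchased quantity (internal utility)
    {"a": 1, "b": 4},
    {"a": 2, "c": 6},
    {"b": 2, "c": 2},
    {"a": 1, "b": 1, "c": 1},
]

def itemset_utility(db, itemset):
    """Sum of quantity * unit profit over transactions containing the itemset."""
    total = 0
    for trans in db:
        if all(i in trans for i in itemset):
            total += sum(trans[i] * profit[i] for i in itemset)
    return total

def mine_huis(db, items, min_util):
    """Naive HUIM by exhaustive enumeration (fine only for tiny databases)."""
    huis = {}
    for r in range(1, len(items) + 1):
        for combo in combinations(items, r):
            u = itemset_utility(db, combo)
            if u >= min_util:
                huis[combo] = u
    return huis

huis = mine_huis(db, sorted(profit), min_util=15)
```

Note why utility is not anti-monotone like support: {a} qualifies here while {a, b, c} does not, yet a superset of a low-utility itemset can still qualify, which is what makes efficient HUIM algorithms (and the paper's evolutionary search) necessary.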

  6. Autonomous Byte Stream Randomizer

    Science.gov (United States)

    Paloulian, George K.; Woo, Simon S.; Chow, Edward T.

    2013-01-01

    Net-centric networking environments are often faced with limited resources and must utilize bandwidth as efficiently as possible. In networking environments that span wide areas, the data transmission has to be efficient without any redundant or exuberant metadata. The Autonomous Byte Stream Randomizer software provides an extra level of security on top of existing data encryption methods. Randomizing the data's byte stream adds an extra layer to existing data protection methods, thus making it harder for an attacker to decrypt protected data. Based on a generated cryptographically secure random seed, a random sequence of numbers is used to intelligently and efficiently swap the organization of bytes in data using the unbiased and memory-efficient in-place Fisher-Yates shuffle method. Swapping bytes and reorganizing the crucial structure of the byte data renders the data file unreadable and leaves the data in a deconstructed state. This deconstruction adds an extra level of security requiring the byte stream to be reconstructed with the random seed in order to be readable. Once the data byte stream has been randomized, the software enables the data to be distributed to N nodes in an environment. Each piece of the data in randomized and distributed form is a separate entity unreadable in its own right, but when combined with all N pieces, is able to be reconstructed back to one. Reconstruction requires possession of the key used for randomizing the bytes, leading to the generation of the same cryptographically secure random sequence of numbers used to randomize the data. This software provides a cornerstone capability: the ability to generate the same cryptographically secure sequence on different machines and at different times, thus allowing this software to be used more heavily in net-centric environments where data transfer bandwidth is limited.
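
    The seeded shuffle/reconstruct cycle described above can be sketched as follows. This is illustrative only: `random.Random` stands in for the cryptographically secure PRNG the software would actually use.

```python
import random

def shuffle_bytes(data: bytes, seed: int) -> bytes:
    """In-place Fisher-Yates shuffle of a byte string, driven by a seeded PRNG.
    (Illustrative: a real system would use a cryptographically secure seed/PRNG.)"""
    buf = bytearray(data)
    rng = random.Random(seed)
    for i in range(len(buf) - 1, 0, -1):
        j = rng.randrange(i + 1)
        buf[i], buf[j] = buf[j], buf[i]
    return bytes(buf)

def unshuffle_bytes(data: bytes, seed: int) -> bytes:
    """Regenerate the same swap sequence from the seed and apply it in reverse."""
    rng = random.Random(seed)
    swaps = [(i, rng.randrange(i + 1)) for i in range(len(data) - 1, 0, -1)]
    buf = bytearray(data)
    for i, j in reversed(swaps):
        buf[i], buf[j] = buf[j], buf[i]
    return bytes(buf)

msg = b"The Autonomous Byte Stream Randomizer"
scrambled = shuffle_bytes(msg, seed=2013)
restored = unshuffle_bytes(scrambled, seed=2013)
```

Because the swap sequence is fully determined by the seed, any machine holding the seed can regenerate it and invert the shuffle, which is the reconstruction property the abstract describes.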

  7. A theory of solving TAP equations for Ising models with general invariant random matrices

    DEFF Research Database (Denmark)

    Opper, Manfred; Çakmak, Burak; Winther, Ole

    2016-01-01

    We consider the problem of solving TAP mean field equations by iteration for Ising models with coupling matrices that are drawn at random from general invariant ensembles. We develop an analysis of iterative algorithms using a dynamical functional approach that in the thermodynamic limit yields...... the iteration dependent on a Gaussian distributed field only. The TAP magnetizations are stable fixed points if a de Almeida–Thouless stability criterion is fulfilled. We illustrate our method explicitly for coupling matrices drawn from the random orthogonal ensemble....

  8. Brain in flames – animal models of psychosis: utility and limitations

    Directory of Open Access Journals (Sweden)

    Mattei D

    2015-05-01

    Full Text Available Daniele Mattei,1 Regina Schweibold,1,2 Susanne A Wolf1 1Department of Cellular Neuroscience, Max-Delbrueck-Center for Molecular Medicine, Berlin, Germany; 2Department of Neurosurgery, Helios Clinics, Berlin, Germany Abstract: The neurodevelopmental hypothesis of schizophrenia posits that schizophrenia is a psychopathological condition resulting from aberrations in neurodevelopmental processes caused by a combination of environmental and genetic factors which proceed long before the onset of clinical symptoms. Many studies discuss an immunological component in the onset and progression of schizophrenia. We here review studies utilizing animal models of schizophrenia with manipulations of genetic, pharmacologic, and immunological origin. We focus on the immunological component to bridge the studies in terms of evaluation and treatment options of negative, positive, and cognitive symptoms. Throughout the review we link certain aspects of each model to the situation in human schizophrenic patients. In conclusion we suggest a combination of existing models to better represent the human situation. Moreover, we emphasize that animal models represent defined single or multiple symptoms or hallmarks of a given disease. Keywords: inflammation, schizophrenia, microglia, animal models 

  9. Towards modeling of random lasing in dye doped bio-organic based systems: ray-tracing and cellular automaton analysis

    Science.gov (United States)

    Mitus, A. C.; Stopa, P.; Zaklukiewicz, W.; Pawlik, G.; Mysliwiec, J.; Kajzar, F.; Rau, I.

    2015-08-01

    One of many photonic applications of biopolymers as functional materials is random lasing, which results from the incorporation of highly luminescent dyes into a biopolymeric matrix, leading to random but coherent light scattering in an amplifying medium. In spite of numerous theoretical and experimental studies, the origin of the coherence is still not clear and various scenarios are discussed. In particular, inhomogeneity of biopolymeric layers can hypothetically promote the feedback in the scattering of the emitted light, resulting in coherent and incoherent random lasing. In this paper we analyze light scattering in a model system of scattering centers of circular shape and various dimensions using ray-tracing techniques. In the second part, which is mostly tutorial in character, we present the approach to the study of random lasing using the cellular automaton model of Wiersma et al.

  10. Studies in astronomical time series analysis. IV - Modeling chaotic and random processes with linear filters

    Science.gov (United States)

    Scargle, Jeffrey D.

    1990-01-01

    While chaos arises only in nonlinear systems, standard linear time series models are nevertheless useful for analyzing data from chaotic processes. This paper introduces such a model, the chaotic moving average. This time-domain model is based on the theorem that any chaotic process can be represented as the convolution of a linear filter with an uncorrelated process called the chaotic innovation. A technique, minimum phase-volume deconvolution, is introduced to estimate the filter and innovation. The algorithm measures the quality of a model using the volume covered by the phase-portrait of the innovation process. Experiments on synthetic data demonstrate that the algorithm accurately recovers the parameters of simple chaotic processes. Though tailored for chaos, the algorithm can detect both chaos and randomness, distinguish them from each other, and separate them if both are present. It can also recover nonminimum-delay pulse shapes in non-Gaussian processes, both random and chaotic.

  11. Methodology and results of the impacts of modeling electric utilities: a comparative evaluation of MEMM and REM

    International Nuclear Information System (INIS)

    1981-09-01

    This study compares two models of the US electric utility industry: the EIA's electric utility submodel in the Midterm Energy Market Model (MEMM) and the Baughman-Joskow Regionalized Electricity Model (REM). The method of comparison emphasizes reconciliation of differences in data common to both models, and the performance of simulation experiments to evaluate the empirical significance of certain structural differences in the models. The major research goal was to contrast and compare the effects of alternative modeling structures and data assumptions on model results, and particularly to consider each model's approach to the impacts of generation technology and fuel use choices on electric utilities. The methodology used was to run the REM model first without, and then with, a representation of the Power Plant and Industrial Fuel Use Act of 1978, assuming medium supply and demand curves and varying fuel prices. The models and data structures of the two models are described. The original 1978 data used in MEMM and REM are analyzed and compared. The computations and effects of different assumptions on fuel use decisions are discussed. The adjusted REM data required for the experiments are presented. Simulation results of the two models are compared. These results represent projections for 1985, 1990, and 1995 of: US power generation by plant type; amounts of each type of fuel used for power generation; average electricity prices; and the effects of additional or fewer nuclear and coal-fired plants. A significant result is that the REM model exhibits about 7 times as much gas and oil consumption in 1995 as the MEMM model. Continuing simulation experiments on MEMM are recommended to determine whether the input data to MEMM are reasonable and properly adjusted

  12. Generalized Dynamic Panel Data Models with Random Effects for Cross-Section and Time

    NARCIS (Netherlands)

    Mesters, G.; Koopman, S.J.

    2014-01-01

    An exact maximum likelihood method is developed for the estimation of parameters in a nonlinear non-Gaussian dynamic panel data model with unobserved random individual-specific and time-varying effects. We propose an estimation procedure based on the importance sampling technique. In particular, a

  13. Measurement model choice influenced randomized controlled trial results.

    Science.gov (United States)

    Gorter, Rosalie; Fox, Jean-Paul; Apeldoorn, Adri; Twisk, Jos

    2016-11-01

    In randomized controlled trials (RCTs), outcome variables are often patient-reported outcomes measured with questionnaires. Ideally, all available item information is used for score construction, which requires an item response theory (IRT) measurement model. In practice, however, the classical test theory measurement model (sum scores) is mostly used, and differences between response patterns leading to the same sum score are ignored. The enhanced differentiation between scores with IRT enables more precise estimation of individual trajectories over time and of group effects. The objective of this study was to show the advantages of using IRT scores instead of sum scores when analyzing RCTs. Two studies are presented: a real-life RCT and a simulation study. Both IRT and sum scores are used to measure the construct and are subsequently used as outcomes for effect calculation. The bias in RCT results is conditional on the measurement model used to construct the scores. A bias in estimated trend of around one standard deviation was found when sum scores were used, whereas IRT showed negligible bias. Accurate statistical inferences are made from an RCT study when using IRT to estimate construct measurements; the use of sum scores leads to incorrect RCT results.
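
    The point about response patterns can be illustrated with a toy two-parameter logistic (2PL) setup: under a 2PL model, the discrimination-weighted sum of responses is the sufficient statistic for the latent trait, so it separates patients whose plain sum scores tie. The item discriminations and response patterns below are invented for illustration.

```python
# Under a 2PL IRT model, the sufficient statistic for the latent trait is the
# discrimination-weighted sum of item responses, not the plain sum score.
discrim = [0.5, 1.0, 1.5, 2.0]   # item discriminations (illustrative)
pattern_a = [1, 1, 0, 0]         # endorsed only weakly discriminating items
pattern_b = [0, 0, 1, 1]         # endorsed only strongly discriminating items

sum_a, sum_b = sum(pattern_a), sum(pattern_b)
irt_a = sum(a * x for a, x in zip(discrim, pattern_a))
irt_b = sum(a * x for a, x in zip(discrim, pattern_b))
print(sum_a == sum_b, irt_a < irt_b)  # tied sum scores, distinct IRT evidence
```

    Sum scoring collapses these two patterns to the same value, which is exactly the information loss the study attributes the bias to.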

  14. Random matrix model of adiabatic quantum computing

    International Nuclear Information System (INIS)

    Mitchell, David R.; Adami, Christoph; Lue, Waynn; Williams, Colin P.

    2005-01-01

    We present an analysis of the quantum adiabatic algorithm for solving hard instances of 3-SAT (an NP-complete problem) in terms of random matrix theory (RMT). We determine the global regularity of the spectral fluctuations of the instantaneous Hamiltonians encountered during the interpolation between the starting Hamiltonians and the ones whose ground states encode the solutions to the computational problems of interest. At each interpolation point, we quantify the degree of regularity of the average spectral distribution via its Brody parameter, a measure that distinguishes regular (i.e., Poissonian) from chaotic (i.e., Wigner-type) distributions of normalized nearest-neighbor spacings. We find that for hard problem instances - i.e., those having a critical ratio of clauses to variables - the spectral fluctuations typically become irregular across a contiguous region of the interpolation parameter, while the spectrum is regular for easy instances. Within the hard region, RMT may be applied to obtain a mathematical model of the probability of avoided level crossings and concomitant failure rate of the adiabatic algorithm due to nonadiabatic Landau-Zener-type transitions. Our model predicts that if the interpolation is performed at a uniform rate, the average failure rate of the quantum adiabatic algorithm, when averaged over hard problem instances, scales exponentially with increasing problem size.
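
    The Brody parameter interpolates between the two reference spacing distributions. A small sketch of the Brody density (with the standard unit-mean-spacing normalization) confirms that beta = 0 reproduces the Poisson form and beta = 1 the Wigner surmise:

```python
import math

def brody_pdf(s, beta):
    """Brody nearest-neighbor spacing density for unit mean spacing:
    P(s) = (beta+1) * b * s^beta * exp(-b * s^(beta+1)),
    with b = Gamma((beta+2)/(beta+1))^(beta+1)."""
    b = math.gamma((beta + 2.0) / (beta + 1.0)) ** (beta + 1.0)
    return (beta + 1.0) * b * s ** beta * math.exp(-b * s ** (beta + 1.0))

def poisson_pdf(s):
    """Regular (Poissonian) spectra: P(s) = exp(-s)."""
    return math.exp(-s)

def wigner_pdf(s):
    """Chaotic (Wigner-type) spectra: P(s) = (pi/2) s exp(-pi s^2 / 4)."""
    return (math.pi / 2.0) * s * math.exp(-math.pi * s * s / 4.0)

s = 0.7
print(abs(brody_pdf(s, 0.0) - poisson_pdf(s)) < 1e-12)  # beta = 0: regular
print(abs(brody_pdf(s, 1.0) - wigner_pdf(s)) < 1e-12)   # beta = 1: chaotic
```

    Fitting beta to the normalized spacings of an instantaneous Hamiltonian then places that interpolation point on the regular-to-chaotic scale.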

  15. Cancerous tissue mapping from random lasing emission spectra

    International Nuclear Information System (INIS)

    Polson, R C; Vardeny, Z V

    2010-01-01

    Random lasing emission spectra have been collected from both healthy and cancerous tissues. The two types of tissue with optical gain have different light scattering properties as obtained from an average power Fourier transform of their random lasing emission spectra. The difference in the power Fourier transform leads to a contrast between cancerous and benign tissues, which is utilized for tissue mapping of healthy and cancerous regions of patients.
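
    The mapping contrast rests on the power Fourier transform of the emission spectrum: periodic mode structure shows up as a peak whose position encodes the effective cavity path length, and that position differs between tissue types. Below is a stdlib-only sketch on a synthetic spectrum; the spectrum shape and mode spacing are illustrative assumptions.

```python
import math

def power_fourier(spectrum):
    """|DFT|^2 of an emission spectrum (DC component removed); for random
    lasing, the dominant peak position reflects the cavity path length."""
    n = len(spectrum)
    mean = sum(spectrum) / n
    xs = [v - mean for v in spectrum]
    power = []
    for k in range(n // 2):
        re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(xs))
        im = sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(xs))
        power.append(re * re + im * im)
    return power

# Synthetic spectrum: flat background plus mode ripple repeating every 8 samples.
n = 256
spec = [1.0 + 0.5 * math.cos(2 * math.pi * i / 8) for i in range(n)]
p = power_fourier(spec)
peak = max(range(1, len(p)), key=p.__getitem__)
print(peak == n // 8)  # the 8-sample mode spacing appears as one Fourier peak
```

    Averaging such power transforms over many emission spectra, as the paper does, stabilizes the peak against shot-to-shot mode fluctuations.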

  16. FOG Random Drift Signal Denoising Based on the Improved AR Model and Modified Sage-Husa Adaptive Kalman Filter.

    Science.gov (United States)

    Sun, Jin; Xu, Xiaosu; Liu, Yiting; Zhang, Tao; Li, Yao

    2016-07-12

    In order to reduce the influence of fiber optic gyroscope (FOG) random drift error on inertial navigation systems, an improved autoregressive (AR) model is put forward in this paper. First, based on real-time observations at each restart of the gyroscope, the model of FOG random drift can be established online. In the improved AR model, the FOG measured signal is employed instead of a zero-mean signal. Then, the modified Sage-Husa adaptive Kalman filter (SHAKF) is introduced, which can carry out real-time filtering directly on the FOG signals. Finally, static and dynamic experiments are performed to verify the method's effectiveness, and the filtering results are analyzed with Allan variance. The analysis shows that the improved AR model has high fitting accuracy and strong adaptability, with a minimum single-noise fitting accuracy of 93.2%. Based on the improved AR(3) model, SHAKF denoising is more effective than traditional methods, improving performance by more than 30%. The random drift error of the FOG is reduced effectively, and the precision of the FOG is improved.
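
    A compressed sketch of this pipeline: fit an AR model with an intercept to the raw (non-zero-mean) signal, then run a Kalman filter whose measurement-noise variance is re-estimated online from the innovations, in the spirit of Sage-Husa. An AR(1) stands in for the paper's AR(3), and all noise levels, the simulated drift, and the forgetting factor are assumptions.

```python
import random

random.seed(1)

# Simulated FOG output: a slowly varying AR(1) drift plus white readout noise.
n, phi_true, c_true = 2000, 0.95, 0.05
drift = [1.0]
for _ in range(n - 1):
    drift.append(c_true + phi_true * drift[-1] + random.gauss(0, 0.1))
z = [d + random.gauss(0, 0.5) for d in drift]

# Improved-AR idea: fit the raw signal directly (intercept included) rather
# than assuming a zero-mean series; closed-form OLS for an AR(1).
xs, ys = z[:-1], z[1:]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
phi = (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
       / sum((a - mx) ** 2 for a in xs))
c = my - phi * mx

# Kalman filter driven by the fitted AR model; R is re-estimated from the
# innovation sequence (a simplified Sage-Husa-style adaptation).
Q, R, P, xhat, est = 0.01, 1.0, 1.0, z[0], []
for k, meas in enumerate(z):
    x_pred = c + phi * xhat                  # time update
    P_pred = phi * P * phi + Q
    innov = meas - x_pred
    if k > 0:                                # adapt measurement-noise variance
        R = 0.99 * R + 0.01 * max(innov * innov - P_pred, 1e-6)
    K = P_pred / (P_pred + R)                # measurement update
    xhat = x_pred + K * innov
    P = (1 - K) * P_pred
    est.append(xhat)

mse_raw = sum((a - b) ** 2 for a, b in zip(z, drift)) / n
mse_kf = sum((a - b) ** 2 for a, b in zip(est, drift)) / n
print(mse_kf < mse_raw)  # the adaptive filter tracks the drift more closely
```

    The full SHAKF also adapts the process-noise statistics; only the measurement-noise adaptation is sketched here.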

  17. Factors Affecting Utilization of Cooking Banana among Households ...

    African Journals Online (AJOL)

    The study investigated factors affecting utilization of cooking banana among households in Oguta area of Imo State, Nigeria. Data were collected from 84 randomly selected respondents from six communities in the study area who were administered with structured questionnaire. Data analysis was by use of descriptive ...

  18. Gender Factor in Computer Anxiety, Knowledge and Utilization ...

    African Journals Online (AJOL)

    The study investigated the influence of computer anxiety and knowledge on computer utilization among senior secondary school students in Ogun state, Nigeria. A sample of four hundred students randomly selected from twenty secondary schools participated in the study. Ex-post facto research design method was adopted ...

  19. Expected Utility and Entropy-Based Decision-Making Model for Large Consumers in the Smart Grid

    Directory of Open Access Journals (Sweden)

    Bingtuan Gao

    2015-09-01

    In the smart grid, large consumers can procure electricity from various power sources to meet their load demands. To maximize its profit, each large consumer needs to decide on an energy procurement strategy under risks such as price fluctuations in the spot market and power quality issues. In this paper, an electric energy procurement decision-making model is studied for large consumers who can obtain their electric energy from the spot market, from generation companies under bilateral contracts, from the options market, and from self-production facilities in the smart grid. Considering the effect of unqualified electric energy, the profit model of large consumers is formulated. To measure the risks from price fluctuations and power quality, expected utility and entropy are employed. Consequently, an expected utility and entropy decision-making model is presented, which helps large consumers maximize their expected profit from electricity procurement while properly limiting its volatility. Finally, a case study verifies the feasibility and effectiveness of the proposed model.
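
    The decision rule can be caricatured in a few lines: rank candidate procurement strategies by expected (CARA) utility after screening out those whose profit distribution is too dispersed, using Shannon entropy as the volatility budget. The strategies, probabilities, risk-aversion coefficient, and entropy cap below are all invented for illustration.

```python
import math

def expected_utility(outcomes, risk_aversion=0.01):
    """Expected CARA utility u(x) = 1 - exp(-a*x) over (profit, prob) pairs."""
    return sum(p * (1.0 - math.exp(-risk_aversion * x)) for x, p in outcomes)

def entropy(outcomes):
    """Shannon entropy of the profit distribution (volatility proxy)."""
    return -sum(p * math.log(p) for _, p in outcomes if p > 0)

strategies = {
    "spot_only":     [(120, 0.3), (60, 0.4), (-20, 0.3)],  # volatile
    "bilateral_mix": [(80, 0.7), (60, 0.3)],               # steady
    "options_hedge": [(90, 0.5), (70, 0.5)],
}

cap = 0.9  # maximum tolerated entropy (risk budget)
feasible = {k: v for k, v in strategies.items() if entropy(v) <= cap}
best = max(feasible, key=lambda k: expected_utility(feasible[k]))
print(best)  # the volatile spot-only strategy is screened out first
```

    The paper's model co-optimizes the allocation across sources rather than picking from a fixed menu, but the utility-plus-entropy trade-off is the same.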

  20. Simple, efficient estimators of treatment effects in randomized trials using generalized linear models to leverage baseline variables.

    Science.gov (United States)

    Rosenblum, Michael; van der Laan, Mark J

    2010-04-01

    Models such as logistic regression and Poisson regression are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization in order to obtain more precise estimates of treatment effects. However, there is a danger that model misspecification will lead to bias. We show that certain easy-to-compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove that the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation.
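
    The Poisson special case can be checked by simulation: generate a randomized trial whose true outcome model contains a treatment-by-covariate interaction, fit the misspecified main-terms Poisson working model by Newton-Raphson, and compare the treatment coefficient with the empirical marginal log rate ratio. The data-generating parameters below are arbitrary assumptions.

```python
import math
import random

random.seed(7)

def random_poisson(lam):
    """Knuth's Poisson sampler (adequate for the moderate rates used here)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def fit_poisson(X, y, iters=25):
    """Main-terms Poisson regression by Newton-Raphson (Gauss-Jordan solve)."""
    m = len(X[0])
    beta = [0.0] * m
    for _ in range(iters):
        g = [0.0] * m
        H = [[0.0] * m for _ in range(m)]
        for xi, yi in zip(X, y):
            mu = math.exp(sum(b * v for b, v in zip(beta, xi)))
            for j in range(m):
                g[j] += (yi - mu) * xi[j]          # score vector
                for q in range(m):
                    H[j][q] += mu * xi[j] * xi[q]  # information matrix
        M = [H[j][:] + [g[j]] for j in range(m)]   # solve H d = g
        for col in range(m):
            piv = max(range(col, m), key=lambda r: abs(M[r][col]))
            M[col], M[piv] = M[piv], M[col]
            for r in range(m):
                if r != col:
                    f = M[r][col] / M[col][col]
                    M[r] = [a - f * b for a, b in zip(M[r], M[col])]
        beta = [b + M[j][m] / M[j][j] for j, b in enumerate(beta)]
    return beta

n = 4000
W = [random.gauss(0, 1) for _ in range(n)]
A = [i % 2 for i in range(n)]  # randomized treatment (alternating assignment)
# True model includes an A*W interaction that the working model omits.
Y = [random_poisson(math.exp(0.2 + 0.5 * a + 0.3 * w + 0.4 * a * w))
     for a, w in zip(A, W)]

beta = fit_poisson([[1.0, a, w] for a, w in zip(A, W)], Y)

y1 = sum(y for y, a in zip(Y, A) if a) / sum(A)
y0 = sum(y for y, a in zip(Y, A) if not a) / (n - sum(A))
print(abs(beta[1] - math.log(y1 / y0)) < 0.15)  # coefficient tracks marginal log RR
```

    Despite the omitted interaction, the fitted treatment coefficient lands close to the marginal log rate ratio, as the asymptotic result predicts.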