Hedonic travel cost and random utility models of recreation
Energy Technology Data Exchange (ETDEWEB)
Pendleton, L. [Univ. of Southern California, Los Angeles, CA (United States); Mendelsohn, R.; Davis, E.W. [Yale Univ., New Haven, CT (United States). School of Forestry and Environmental Studies
1998-07-09
Micro-economic theory began as an attempt to describe, predict and value the demand and supply of consumption goods. Quality was largely ignored at first, but economists have started to address quality within the theory of demand and specifically the question of site quality, which is an important component of land management. This paper demonstrates that hedonic and random utility models emanate from the same utility theoretical foundation, although they make different estimation assumptions. Using a theoretically consistent comparison, both approaches are applied to examine the quality of wilderness areas in the Southeastern US. Data were collected on 4778 visits to 46 trails in 20 different forest areas near the Smoky Mountains. Visitor data came from permits and an independent survey. The authors limited the data set to visitors from within 300 miles of the North Carolina and Tennessee border in order to focus the analysis on single purpose trips. When consistently applied, both models lead to results with similar signs but different magnitudes. Because the two models are equally valid, recreation studies should continue to use both models to value site quality. Further, practitioners should be careful not to make simplifying a priori assumptions which limit the effectiveness of both techniques.
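The random utility approach discussed above can be sketched as a conditional logit: each trail's utility falls with travel cost and rises with a site-quality index, and choice probabilities follow the logit formula. The coefficients and site data below are invented for illustration and are not estimates from the study.

```python
import math

def choice_probabilities(sites, beta_cost=-0.02, beta_quality=0.5):
    """Conditional-logit probabilities P(i) = exp(V_i) / sum_j exp(V_j)."""
    # Systematic utility: falls with travel cost, rises with quality
    v = [beta_cost * s["travel_cost"] + beta_quality * s["quality"] for s in sites]
    m = max(v)  # subtract the max for numerical stability
    expv = [math.exp(x - m) for x in v]
    total = sum(expv)
    return [e / total for e in expv]

# Hypothetical trails (names, costs and quality scores are made up)
sites = [
    {"name": "trail_A", "travel_cost": 40.0, "quality": 3.0},
    {"name": "trail_B", "travel_cost": 40.0, "quality": 4.0},  # same cost, higher quality
    {"name": "trail_C", "travel_cost": 90.0, "quality": 4.0},  # same quality, higher cost
]
probs = choice_probabilities(sites)
```

With these assumed coefficients, the higher-quality equal-cost trail receives the largest choice probability, which is the qualitative pattern both the hedonic and RUM approaches aim to value.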
Rigorously testing multialternative decision field theory against random utility models.
Berkowitsch, Nicolas A J; Scheibehenne, Benjamin; Rieskamp, Jörg
2014-06-01
Cognitive models of decision making aim to explain the process underlying observed choices. Here, we test a sequential sampling model of decision making, multialternative decision field theory (MDFT; Roe, Busemeyer, & Townsend, 2001), on empirical grounds and compare it against 2 established random utility models of choice: the probit and the logit model. Using a within-subject experimental design, participants in 2 studies repeatedly choose among sets of options (consumer products) described on several attributes. The results of Study 1 showed that all models predicted participants' choices equally well. In Study 2, in which the choice sets were explicitly designed to distinguish the models, MDFT had an advantage in predicting the observed choices. Study 2 further revealed the occurrence of multiple context effects within single participants, indicating an interdependent evaluation of choice options and correlations between different context effects. In sum, the results indicate that sequential sampling models can provide relevant insights into the cognitive process underlying preferential choices and thus can lead to better choice predictions. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Utility based maintenance analysis using a Random Sign censoring model
International Nuclear Information System (INIS)
Andres Christen, J.; Ruggeri, Fabrizio; Villa, Enrique
2011-01-01
Industrial systems subject to failures are usually inspected when there are evident signs of an imminent failure. Maintenance is therefore performed at a random time, somehow dependent on the failure mechanism. A competing risk model, namely a Random Sign model, is considered to relate failure and maintenance times. We propose a novel Bayesian analysis of the model and apply it to actual data from a water pump in an oil refinery. The design of an optimal maintenance policy is then discussed under a formal decision theoretic approach, analyzing the goodness of the current maintenance policy and making decisions about the optimal maintenance time.
Dai, Junyi; Gunn, Rachel L; Gerst, Kyle R; Busemeyer, Jerome R; Finn, Peter R
2016-10-01
Previous studies have demonstrated that working memory capacity plays a central role in delay discounting in people with externalizing psychopathology. These studies used a hyperbolic discounting model, and its single parameter-a measure of delay discounting-was estimated using the standard method of searching for indifference points between intertemporal options. However, there are several problems with this approach. First, the deterministic perspective on delay discounting underlying the indifference point method might be inappropriate. Second, the estimation procedure using the R2 measure often leads to poor model fit. Third, when parameters are estimated using indifference points only, much of the information collected in a delay discounting decision task is wasted. To overcome these problems, this article proposes a random utility model of delay discounting. The proposed model has 2 parameters, 1 for delay discounting and 1 for choice variability. It was fit to choice data obtained from a recently published data set using both maximum-likelihood and Bayesian parameter estimation. As in previous studies, the delay discounting parameter was significantly associated with both externalizing problems and working memory capacity. Furthermore, choice variability was also found to be significantly associated with both variables. This finding suggests that randomness in decisions may be a mechanism by which externalizing problems and low working memory capacity are associated with poor decision making. The random utility model thus has the advantage of disclosing the role of choice variability, which had been masked by the traditional deterministic model. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
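A minimal sketch of the 2-parameter structure described above, assuming hyperbolic discounting for the delayed reward and a logistic choice rule whose scale parameter plays the role of choice variability; the parameter values are illustrative assumptions, not estimates from the data set.

```python
import math

def discounted_value(amount, delay, k):
    """Hyperbolic discounting: V = A / (1 + k * D)."""
    return amount / (1.0 + k * delay)

def p_choose_delayed(immediate, delayed, delay, k=0.05, s=1.0):
    """Probability of choosing the delayed reward under a logistic choice rule.

    k is the delay-discounting parameter; s is the choice-variability
    (temperature) parameter: larger s means noisier, more random choices.
    """
    dv = discounted_value(delayed, delay, k) - immediate
    return 1.0 / (1.0 + math.exp(-dv / s))
```

Fitting both parameters to every observed choice (by maximum likelihood or Bayesian estimation) uses all the data, rather than discarding everything except the indifference points.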
Estimating safety effects of pavement management factors utilizing Bayesian random effect models.
Jiang, Ximiao; Huang, Baoshan; Zaretzki, Russell L; Richards, Stephen; Yan, Xuedong
2013-01-01
Previous studies of pavement management factors that relate to the occurrence of traffic-related crashes are rare. Traditional research has mostly employed summary statistics of bidirectional pavement quality measurements in extended longitudinal road segments over a long time period, which may cause a loss of important information and result in biased parameter estimates. The research presented in this article focuses on crash risk of roadways with overall fair to good pavement quality. Real-time and location-specific data were employed to estimate the effects of pavement management factors on the occurrence of crashes. This research is based on the crash data and corresponding pavement quality data for the Tennessee state route highways from 2004 to 2009. The potential temporal and spatial correlations among observations caused by unobserved factors were considered. Overall 6 models were built accounting for no correlation, temporal correlation only, and both the temporal and spatial correlations. These models included Poisson, negative binomial (NB), one random effect Poisson and negative binomial (OREP, ORENB), and two random effect Poisson and negative binomial (TREP, TRENB) models. The Bayesian method was employed to construct these models. The inference is based on the posterior distribution from the Markov chain Monte Carlo (MCMC) simulation. These models were compared using the deviance information criterion. Analysis of the posterior distribution of parameter coefficients indicates that the pavement management factors indexed by Present Serviceability Index (PSI) and Pavement Distress Index (PDI) had significant impacts on the occurrence of crashes, whereas the variable rutting depth was not significant. Among other factors, lane width, median width, type of terrain, and posted speed limit were significant in affecting crash frequency. The findings of this study indicate that a reduction in pavement roughness would reduce the likelihood of traffic
Directory of Open Access Journals (Sweden)
Chong Wei
2015-01-01
Logistic regression models have been widely used in previous studies to analyze public transport utilization. These studies have shown travel time to be an indispensable variable for such analysis and usually consider it to be a deterministic variable. This formulation does not allow us to capture travelers’ perception error regarding travel time, and recent studies have indicated that this error can have a significant effect on modal choice behavior. In this study, we propose a logistic regression model with a hierarchical random error term. The proposed model adds a new random error term for the travel time variable. This term structure enables us to investigate travelers’ perception error regarding travel time from a given choice behavior dataset. We also propose an extended model that allows constraining the sign of this error in the model. We develop two Gibbs samplers to estimate the basic hierarchical model and the extended model. The performance of the proposed models is examined using a well-known dataset.
Lee, Kyoungyeul; Lee, Minho; Kim, Dongsup
2017-12-28
The identification of target molecules is important for understanding the mechanism of "target deconvolution" in phenotypic screening and "polypharmacology" of drugs. Because conventional methods of identifying targets require time and cost, in-silico target identification has been considered an alternative solution. One of the well-known in-silico methods of identifying targets involves structure activity relationships (SARs). SARs have advantages such as low computational cost and high feasibility; however, the data dependency in the SAR approach causes imbalance of active data and ambiguity of inactive data throughout targets. We developed a ligand-based virtual screening model comprising 1121 target SAR models built using a random forest algorithm. The performance of each target model was tested by employing the ROC curve and the mean score using an internal five-fold cross validation. Moreover, recall rates for top-k targets were calculated to assess the performance of target ranking. A benchmark model using an optimized sampling method and parameters was examined via external validation set. The result shows recall rates of 67.6% and 73.9% for top-11 (1% of the total targets) and top-33, respectively. We provide a website for users to search the top-k targets for query ligands available publicly at http://rfqsar.kaist.ac.kr . The target models that we built can be used for both predicting the activity of ligands toward each target and ranking candidate targets for a query ligand using a unified scoring scheme. The scores are additionally fitted to the probability so that users can estimate how likely a ligand-target interaction is active. The user interface of our web site is user friendly and intuitive, offering useful information and cross references.
Random regret and random utility in the household purchase of a motor vehicle
Beck, M.; Chorus, C.G.; Rose, J.M.; Hensher, D.A.
2013-01-01
Random utility maximisation is the preeminent behavioural theory used to model choices. An alternative paradigm, however, is random regret minimisation. While the majority of the literature examines the choices of individuals, this paper compares the choices of groups, as well as individuals, in both
Directory of Open Access Journals (Sweden)
Bahri Karlı
2006-10-01
Farmers’ decisions and perceptions about becoming members of agricultural cooperatives in the South Eastern Anatolian Region were investigated. Factors affecting the probability of joining the agricultural cooperatives were determined using a binary logit model. The model revealed that most variables, such as education, high communication, log of gross income, farm size, and medium and high technology variables, play important roles in determining the probability of entrance. Small farmers are more likely to join the agricultural cooperatives than wealthier farmers are. Small farmers may wish to benefit from cash at hand, input subsidies, and services provided by the agricultural cooperatives, since the risks associated with intensive high-returning crops are high. Important factors in farmers' abstention from agricultural cooperatives are gross income and some social status variables. In addition, conservative or orthodox farmers are less likely to join agricultural cooperatives than moderate farmers are. We also found that direct government farm credit programs should mainly aim at providing farmers better access to capital markets and creating the opportunity to allocate capital inputs via modern technology.
The utility target market model
International Nuclear Information System (INIS)
Leng, G.J.; Martin, J.
1994-01-01
A new model (the Utility Target Market Model) is used to evaluate the economic benefits of photovoltaic (PV) power systems located at the electrical utility customer site. These distributed PV demand-side generation systems can be evaluated in a similar manner to other demand-side management technologies. The energy and capacity values of an actual PV system located in the service area of the New England Electrical System (NEES) are the two utility benefits evaluated. The annual stream of energy and capacity benefits calculated for the utility are converted to the installed cost per watt that the utility should be willing to invest to receive this benefit stream. Different discount rates are used to show the sensitivity of the allowable installed cost of the PV systems to a utility's average cost of capital. Capturing both the energy and capacity benefits of these relatively environmentally friendly distributed generators, NEES should be willing to invest in this technology when the installed cost per watt declines to ca $2.40 using NEES' rated cost of capital (8.78%). If a social discount rate of 3% is used, installation should be considered when installed cost approaches $4.70/W. Since recent installations in the Sacramento Municipal Utility District have cost between $7-8/W, cost-effective utility applications of PV are close. 22 refs., 1 fig., 2 tabs
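The conversion described above, from an annual benefit stream to an allowable installed cost per watt, is ordinary present-value arithmetic. The sketch below assumes a level annual benefit per watt and a 30-year horizon purely for illustration; the benefit figure is invented, not the NEES value.

```python
def allowable_cost_per_watt(annual_benefit, rate, years):
    """PV of a level annual benefit stream: B * (1 - (1 + r)^-n) / r."""
    return annual_benefit * (1.0 - (1.0 + rate) ** -years) / rate

# Hypothetical $/W-year benefit, discounted at the utility's cost of
# capital (8.78%) versus a 3% social discount rate, as in the study
utility_view = allowable_cost_per_watt(0.25, 0.0878, 30)
social_view = allowable_cost_per_watt(0.25, 0.03, 30)
```

The lower discount rate capitalizes the same benefit stream into a substantially higher allowable installed cost, which is why the social-rate threshold in the abstract is roughly double the utility-rate threshold.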
Entropy Characterization of Random Network Models
Directory of Open Access Journals (Sweden)
Pedro J. Zufiria
2017-06-01
This paper elaborates on the Random Network Model (RNM) as a mathematical framework for modelling and analyzing the generation of complex networks. Such a framework allows the analysis of the relationship between several network characterizing features (link density, clustering coefficient, degree distribution, connectivity, etc.) and entropy-based complexity measures, providing new insight into the generation and characterization of random networks. Some theoretical and computational results illustrate the utility of the proposed framework.
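As a concrete instance of relating a network characterizing feature to an entropy measure, the sketch below computes the Shannon entropy of the binomial degree distribution of an Erdős–Rényi graph G(n, p). The choice of n and p is arbitrary, and this is only one simple member of the family of models the framework covers.

```python
import math

def binom_pmf(k, n, p):
    """Binomial probability mass function."""
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def degree_entropy(n, p):
    """Shannon entropy H = -sum_k P(k) log2 P(k) of the degree distribution.

    In G(n, p) each node's degree is Binomial(n - 1, p).
    """
    h = 0.0
    for k in range(n):
        pk = binom_pmf(k, n - 1, p)
        if pk > 0:
            h -= pk * math.log2(pk)
    return h
```

A denser graph (p near 0.5) spreads degrees over more values and so has a higher-entropy degree distribution than a very sparse one, illustrating how link density and this complexity measure move together.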
Schachtel, Bernard; Aspley, Sue; Shephard, Adrian; Shea, Timothy; Smith, Gary; Schachtel, Emily
2014-07-03
The sore throat pain model has been conducted by different clinical investigators to demonstrate the efficacy of acute analgesic drugs in single-dose randomized clinical trials. The model used here was designed to study the multiple-dose safety and efficacy of lozenges containing flurbiprofen at 8.75 mg. Adults (n=198) with moderate or severe acute sore throat and findings of pharyngitis on a Tonsillo-Pharyngitis Assessment (TPA) were randomly assigned to use either flurbiprofen 8.75 mg lozenges (n=101) or matching placebo lozenges (n=97) under double-blind conditions. Patients sucked one lozenge every three to six hours as needed, up to five lozenges per day, and rated symptoms on 100-mm scales: the Sore Throat Pain Intensity Scale (STPIS), the Difficulty Swallowing Scale (DSS), and the Swollen Throat Scale (SwoTS). Reductions in pain (lasting for three hours) and in difficulty swallowing and throat swelling (for four hours) were observed after a single dose of the flurbiprofen 8.75 mg lozenge. Over the multiple-dose regimen, flurbiprofen-treated patients experienced a 59% greater reduction in throat pain, 45% less difficulty swallowing, and 44% less throat swelling than placebo-treated patients. Flurbiprofen 8.75 mg lozenges were shown to be an effective, well-tolerated treatment for sore throat pain. Other pharmacologic actions (reduced difficulty swallowing and reduced throat swelling) and overall patient satisfaction from the flurbiprofen lozenges were also demonstrated in this multiple-dose implementation of the sore throat pain model. This trial was registered with ClinicalTrials.gov, registration number NCT01048866, registration date January 13, 2010.
New Energy Utility Business Models
International Nuclear Information System (INIS)
Potocnik, V.
2016-01-01
Recently, many big changes have happened in the power sector: energy efficiency and renewable energy sources are progressing quickly, distributed or decentralised generation of electricity is expanding, climate change requires reduction of greenhouse gas emissions, and price volatility and uncertainty of fossil fuel supply are common. These changes have made obsolete the vertically integrated business models that dominated energy utility organisations for a hundred years, and new business models are being introduced. These models take into account current changes in the power sector and enable a wider application of energy efficiency and renewable energy sources, especially for consumers, with the decentralisation of electricity generation and compliance with the requirements of climate and environment preservation. New business models also address the question of financial compensation for utilities facing reduced centralised energy generation, while contributing to local development and employment. (author)
A random regret minimization model of travel choice
Chorus, C.G.; Arentze, T.A.; Timmermans, H.J.P.
2008-01-01
This paper presents an alternative to Random Utility-Maximization models of travel choice. Our Random Regret-Minimization model is rooted in Regret Theory and provides several useful features for travel demand analysis. Firstly, it allows for the possibility that choices between travel
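The regret-minimization idea can be sketched with the commonly used RRM regret function, in which the regret of alternative i sums ln(1 + exp(β_m·(x_jm − x_im))) over competing alternatives j and attributes m, and the chosen alternative minimizes total regret. The attribute values and weights below are invented for illustration and are not from the paper.

```python
import math

def regret(i, alts, betas):
    """Total regret of alternative i against all competitors."""
    r = 0.0
    for j, alt in enumerate(alts):
        if j == i:
            continue
        for m, beta in enumerate(betas):
            # Regret grows when a competitor beats alternative i on attribute m
            r += math.log(1.0 + math.exp(beta * (alt[m] - alts[i][m])))
    return r

# Hypothetical alternatives: (comfort score, negated travel time in minutes),
# so both attributes are "more is better"
alts = [(3.0, -30.0), (5.0, -45.0), (4.0, -40.0)]
betas = (0.8, 0.05)
regrets = [regret(i, alts, betas) for i in range(len(alts))]
best = min(range(len(alts)), key=lambda i: regrets[i])
```

Unlike a utility function, regret is compromise-favoring: an alternative that is rarely beaten badly on any attribute accrues little regret even if it maximizes nothing.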
Alternative model of random surfaces
International Nuclear Information System (INIS)
Ambartzumian, R.V.; Sukiasian, G.S.; Savvidy, G.K.; Savvidy, K.G.
1992-01-01
We analyse models of triangulated random surfaces and demand that geometrically nearby configurations of these surfaces must have close actions. The inclusion of this principle leads us to suggest a new action, which is a modified Steiner functional. General arguments, based on the Minkowski inequality, show that the maximal contribution to the partition function comes from surfaces close to the sphere. (orig.)
Randomized Item Response Theory Models
Fox, Gerardus J.A.
2005-01-01
The randomized response (RR) technique is often used to obtain answers to sensitive questions. A new method is developed to measure latent variables using the RR technique, because direct questioning leads to biased results. Within the RR technique, the probability of the true response is modeled by
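Warner's classic randomized response design illustrates the technique: each respondent answers the sensitive question truthfully with probability p and answers its negation otherwise, so the observed "yes" rate is λ = p·π + (1 − p)·(1 − π) and can be inverted for the sensitive proportion π. The numbers below are illustrative.

```python
def estimate_pi(yes_fraction, p):
    """Invert lambda = p*pi + (1 - p)*(1 - pi) for pi (requires p != 0.5)."""
    return (yes_fraction - (1.0 - p)) / (2.0 * p - 1.0)

# Forward model: true pi = 0.2 with design probability p = 0.8
# gives an observed yes-rate of 0.8*0.2 + 0.2*0.8 = 0.32
lam = 0.8 * 0.2 + 0.2 * 0.8
pi_hat = estimate_pi(lam, 0.8)  # recovers 0.2
```

Because no individual answer reveals the respondent's true status, the design protects privacy while still identifying the population proportion, which is the property the latent-variable models above build on.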
Random Intercept and Random Slope 2-Level Multilevel Models
Directory of Open Access Journals (Sweden)
Rehan Ahmad Khan
2012-11-01
Random intercept models and random intercept & random slope models carrying two levels of hierarchy in the population are presented and compared with the traditional regression approach. The impact of students’ satisfaction on their grade point average (GPA) was explored with and without controlling for teacher influence. The variation at level 1 can be controlled by introducing higher levels of hierarchy into the model. The fanning movement of the fitted lines reflects the variation of student grades across teachers.
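The two-level random intercept setup described above can be illustrated by simulation: each teacher contributes a random intercept, students vary around it, and the between-teacher variance of group means reflects the level-2 component. All variance components below are invented for the sketch.

```python
import random

def simulate(n_teachers=200, n_students=30,
             sigma_teacher=0.4, sigma_student=0.3, seed=3):
    """Simulate GPA = 3.0 + u_teacher + e_student for a balanced design."""
    rng = random.Random(seed)
    rows = []
    for t in range(n_teachers):
        u_t = rng.gauss(0.0, sigma_teacher)  # level-2 random intercept
        for _ in range(n_students):
            gpa = 3.0 + u_t + rng.gauss(0.0, sigma_student)  # level-1 residual
            rows.append((t, gpa))
    return rows

rows = simulate()

# Variance of teacher-level means: approx sigma_teacher^2 + sigma_student^2 / n
means = {}
for t, g in rows:
    means.setdefault(t, []).append(g)
group_means = [sum(v) / len(v) for v in means.values()]
grand = sum(group_means) / len(group_means)
between_var = sum((m - grand) ** 2 for m in group_means) / len(group_means)
```

Ignoring the teacher level would fold this between-group variance into the residual, which is exactly the misattribution the multilevel model avoids.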
Unified Model for Generating Complex Networks with Utility Preferential Attachment
International Nuclear Information System (INIS)
Wu Jianjun; Gao Ziyou; Sun Huijun
2006-01-01
In this paper, based on the utility preferential attachment, we propose a new unified model to generate different network topologies such as scale-free, small-world and random networks. Moreover, a new network structure named super scale network is found, which has monopoly characteristic in our simulation experiments. Finally, the characteristics of this new network are given.
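A toy generator in the spirit of the unified model described above: each new node attaches to an existing node with probability proportional to a mixture of degree (preferential attachment) and a uniform term, with the mixing weight playing the role of the tunable parameter. The mixing rule is an illustrative assumption, not the authors' exact utility function.

```python
import random

def grow_network(n, alpha=0.7, seed=1):
    """Grow an n-node network one attachment at a time.

    alpha = 1 gives pure preferential attachment (scale-free-like);
    alpha = 0 gives uniform attachment (random-graph-like).
    """
    rng = random.Random(seed)
    degree = {0: 1, 1: 1}  # start from a single edge 0-1
    edges = [(0, 1)]
    for new in range(2, n):
        nodes = list(degree)
        total_deg = sum(degree.values())
        # Mix degree-proportional and uniform attachment probabilities
        weights = [alpha * degree[v] / total_deg + (1 - alpha) / len(nodes)
                   for v in nodes]
        target = rng.choices(nodes, weights=weights)[0]
        edges.append((target, new))
        degree[target] += 1
        degree[new] = 1
    return degree, edges

degree, edges = grow_network(200)
```

Sweeping the mixing parameter interpolates between topology families, which is the unification the abstract describes.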
Utilities for high performance dispersion model PHYSIC
International Nuclear Information System (INIS)
Yamazawa, Hiromi
1992-09-01
The description and usage of the utilities for the dispersion calculation model PHYSIC were summarized. The model was developed in the study of developing high performance SPEEDI with the purpose of introducing meteorological forecast function into the environmental emergency response system. The procedure of PHYSIC calculation consists of three steps; preparation of relevant files, creation and submission of JCL, and graphic output of results. A user can carry out the above procedure with the help of the Geographical Data Processing Utility, the Model Control Utility, and the Graphic Output Utility. (author)
Smooth random change point models.
van den Hout, Ardo; Muniz-Terrera, Graciela; Matthews, Fiona E
2011-03-15
Change point models are used to describe processes over time that show a change in direction. An example of such a process is cognitive ability, where a decline a few years before death is sometimes observed. A broken-stick model consists of two linear parts and a breakpoint where the two lines intersect. Alternatively, models can be formulated that imply a smooth change between the two linear parts. Change point models can be extended by adding random effects to account for variability between subjects. A new smooth change point model is introduced and examples are presented that show how change point models can be estimated using functions in R for mixed-effects models. The Bayesian inference using WinBUGS is also discussed. The methods are illustrated using data from a population-based longitudinal study of ageing, the Cambridge City over 75 Cohort Study. The aim is to identify how many years before death individuals experience a change in the rate of decline of their cognitive ability. Copyright © 2010 John Wiley & Sons, Ltd.
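A common smooth change point formulation (sometimes called a bent cable) blends the two linear parts with a hyperbolic transition of half-width gamma around the breakpoint tau; as gamma approaches zero it recovers the broken stick. The parameter values in the sketch below are invented, not estimates from the cohort study.

```python
import math

def bent_cable(t, beta0, beta1, beta2, tau, gamma):
    """Smooth change point mean: slope beta1 before tau, beta1 + beta2 after.

    sqrt((t - tau)^2 + gamma^2) tends to |t - tau| as gamma -> 0,
    so the model nests the broken-stick (sharp change point) case.
    """
    d = t - tau
    return beta0 + beta1 * d + 0.5 * beta2 * (d + math.sqrt(d * d + gamma * gamma))

# Far from the breakpoint the two linear regimes are recovered
slope_before = bent_cable(-49, 30, -0.1, -0.9, 0, 0.5) - bent_cable(-50, 30, -0.1, -0.9, 0, 0.5)
slope_after = bent_cable(50, 30, -0.1, -0.9, 0, 0.5) - bent_cable(49, 30, -0.1, -0.9, 0, 0.5)
```

In the application above, t would be time before death, tau the onset of terminal decline, and subject-specific random effects would be placed on the intercept and slopes.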
Mathematical models for estimating radio channels utilization
African Journals Online (AJOL)
2017-08-08
Mathematical models for radio channels utilization assessment by real-time flows transfer in ... data transmission networks application having dynamic topology ... Journal of Applied Mathematics and Statistics, 56(2): 85–90.
Generalization of Random Intercept Multilevel Models
Directory of Open Access Journals (Sweden)
Rehan Ahmad Khan
2013-10-01
The concept of random intercept models in a multilevel model developed by Goldstein (1986) has been extended to k levels. The random variation in intercepts at the individual level is marginally split into components by incorporating higher levels of hierarchy in the single-level model. Thus, one can control the random variation in intercepts by incorporating the higher levels in the model.
Insider Models with Finite Utility in Markets with Jumps
International Nuclear Information System (INIS)
Kohatsu-Higa, Arturo; Yamazato, Makoto
2011-01-01
In this article we consider, under a Lévy process model for the stock price, the utility optimization problem for an insider agent whose additional information is the final price of the stock blurred with an additional independent noise which vanishes as the final time approaches. Our main interest is establishing conditions under which the utility of the insider is finite. Mathematically, the problem entails the study of a “progressive” enlargement of filtration with respect to random measures. We study the jump structure of the process which leads to the conclusion that in most cases the utility of the insider is finite and his optimal portfolio is bounded. This can be explained financially by the high risks involved in models with jumps.
Infinite Random Graphs as Statistical Mechanical Models
DEFF Research Database (Denmark)
Durhuus, Bergfinnur Jøgvan; Napolitano, George Maria
2011-01-01
We discuss two examples of infinite random graphs obtained as limits of finite statistical mechanical systems: a model of two-dimensional discretized quantum gravity defined in terms of causal triangulated surfaces, and the Ising model on generic random trees. For the former model we describe a ...
Estimating the demand for drop-off recycling sites: a random utility travel cost approach.
Sidique, Shaufique F; Lupi, Frank; Joshi, Satish V
2013-09-30
Drop-off recycling is one of the most widely adopted recycling programs in the United States. Despite its wide implementation, relatively little literature addresses the demand for drop-off recycling. This study examines the demand for drop-off recycling sites as a function of travel costs and various site characteristics using the random utility model (RUM). The findings of this study indicate that increased travel costs significantly reduce the frequency of visits to drop-off sites implying that the usage pattern of a site is influenced by its location relative to where people live. This study also demonstrates that site specific characteristics such as hours of operation, the number of recyclables accepted, acceptance of commingled recyclables, and acceptance of yard-waste affect the frequency of visits to drop-off sites. Copyright © 2013 Elsevier Ltd. All rights reserved.
Random matrix model for disordered conductors
Indian Academy of Sciences (India)
In the interpretation of transport properties of mesoscopic systems, the multichannel ... One defines the random matrix model with N eigenvalues ... With heuristic arguments, using the ideas pertaining to the Dyson Coulomb gas analogy, ...
The random walk model of intrafraction movement
International Nuclear Information System (INIS)
Ballhausen, H; Reiner, M; Kantz, S; Belka, C; Söhn, M
2013-01-01
The purpose of this paper is to understand intrafraction movement as a stochastic process driven by random external forces. The hypothetically proposed three-dimensional random walk model has significant impact on optimal PTV margins and offers a quantitatively correct explanation of experimental findings. Properties of the random walk are calculated from first principles, in particular fraction-average population density distributions for displacements along the principal axes. When substituted into the established optimal margin recipes these fraction-average distributions yield safety margins about 30% smaller as compared to the suggested values from end-of-fraction Gaussian fits. Stylized facts of a random walk are identified in clinical data, such as the increase of the standard deviation of displacements with the square root of time. Least squares errors in the comparison to experimental results are reduced by about 50% when accounting for non-Gaussian corrections from the random walk model. (paper)
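The square-root-of-time growth of displacement spread, cited above as a stylized fact of the random walk, can be checked with a minimal simulation: quadrupling the number of steps should roughly double the standard deviation of displacements. Step size and walker count below are arbitrary.

```python
import math
import random

def displacement_std(n_steps, n_walkers=4000, seed=7):
    """Standard deviation of final displacement over many 1-D random walks."""
    rng = random.Random(seed)
    disps = []
    for _ in range(n_walkers):
        x = 0.0
        for _ in range(n_steps):
            x += rng.gauss(0.0, 1.0)  # one Gaussian step per time unit
        disps.append(x)
    mean = sum(disps) / len(disps)
    return math.sqrt(sum((d - mean) ** 2 for d in disps) / len(disps))

ratio = displacement_std(100) / displacement_std(25)  # expect roughly 2
```

The same scaling holds per principal axis in the three-dimensional case, which is what distinguishes the random-walk population densities from end-of-fraction Gaussian fits.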
Rank dependent expected utility models of tax evasion.
Erling Eide
2001-01-01
In this paper, the rank-dependent expected utility theory is substituted for expected utility theory in models of tax evasion. It is demonstrated that the comparative statics results of the expected utility, portfolio choice model of tax evasion carry over to the more general rank-dependent expected utility model.
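Rank-dependent expected utility can be sketched in a few lines: outcomes are ranked, and decision weights are differences of a probability weighting function applied to decumulative probabilities, so a linear weighting function recovers plain expected utility. The lottery, utility function, and weighting function below are illustrative.

```python
def rdeu(outcomes, probs, utility, w):
    """Rank-dependent expected utility.

    Outcomes are scanned worst-to-best; the weight of outcome x_i is
    w(P(X >= x_i)) - w(P(X > x_i)), using decumulative probabilities.
    """
    pairs = sorted(zip(outcomes, probs))
    total = 0.0
    tail = 1.0  # P(X >= current outcome)
    for x, p in pairs:
        total += (w(tail) - w(tail - p)) * utility(x)
        tail -= p
    return total

u = lambda x: x ** 0.5  # concave (risk-averse) utility

# 50/50 lottery over 0 and 100: linear w gives plain expected utility;
# a convex w overweights the bad tail (pessimism)
ev_linear = rdeu([0, 100], [0.5, 0.5], u, lambda q: q)
ev_pessim = rdeu([0, 100], [0.5, 0.5], u, lambda q: q ** 2)
```

In the tax evasion setting, such probability weighting changes how the audit probability enters the evader's objective, while leaving the comparative statics of the portfolio-choice model intact.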
A Generalized Random Regret Minimization Model
Chorus, C.G.
2013-01-01
This paper presents, discusses and tests a generalized Random Regret Minimization (G-RRM) model. The G-RRM model is created by replacing a fixed constant in the attribute-specific regret functions of the RRM model, by a regret-weight variable. Depending on the value of the regret-weights, the G-RRM
Computer simulations of the random barrier model
DEFF Research Database (Denmark)
Schrøder, Thomas; Dyre, Jeppe
2002-01-01
A brief review of experimental facts regarding ac electronic and ionic conduction in disordered solids is given followed by a discussion of what is perhaps the simplest realistic model, the random barrier model (symmetric hopping model). Results from large scale computer simulations are presented...
RMBNToolbox: random models for biochemical networks
Directory of Open Access Journals (Sweden)
Niemi Jari
2007-05-01
Background: There is an increasing interest in modelling biochemical and cell biological networks, as well as in the computational analysis of these models. The development of analysis methodologies and related software is rapid in the field. However, the number of available models is still relatively small and the model sizes remain limited. The lack of kinetic information is usually the limiting factor for the construction of detailed simulation models. Results: We present a computational toolbox for generating random biochemical network models which mimic real biochemical networks. The toolbox is called Random Models for Biochemical Networks. The toolbox works in the Matlab environment, and it makes it possible to generate various network structures, stoichiometries, kinetic laws for reactions, and parameters therein. The generation can be based on statistical rules and distributions, and more detailed information of real biochemical networks can be used in situations where it is known. The toolbox can be easily extended. The resulting network models can be exported in the format of the Systems Biology Markup Language. Conclusion: While more information is accumulating on biochemical networks, random networks can be used as an intermediate step towards their better understanding. Random networks make it possible to study the effects of various network characteristics on the overall behavior of the network. Moreover, the construction of artificial network models provides the ground truth data needed in the validation of various computational methods in the fields of parameter estimation and data analysis.
Animal Models Utilized in HTLV-1 Research
Directory of Open Access Journals (Sweden)
Amanda R. Panfil
2013-01-01
Since the isolation and discovery of human T-cell leukemia virus type 1 (HTLV-1) over 30 years ago, researchers have utilized animal models to study HTLV-1 transmission, viral persistence, virus-elicited immune responses, and HTLV-1-associated disease development (ATL, HAM/TSP). Non-human primates, rabbits, rats, and mice have all been used to help understand HTLV-1 biology and disease progression. Non-human primates offer a model system that is phylogenetically similar to humans for examining viral persistence. Viral transmission, persistence, and immune responses have been widely studied using New Zealand White rabbits. The advent of molecular clones of HTLV-1 has offered the opportunity to assess the importance of various viral genes in rabbits, non-human primates, and mice. Additionally, over-expression of viral genes using transgenic mice has helped uncover the importance of Tax and Hbz in the induction of lymphoma and other lymphocyte-mediated diseases. HTLV-1 inoculation of certain strains of rats results in histopathological features and clinical symptoms similar to those of humans with HAM/TSP. Transplantation of certain types of ATL cell lines in immunocompromised mice results in lymphoma. Recently, “humanized” mice have been used to model ATL development for the first time. Not all HTLV-1 animal models develop disease, and those that do vary in consistency depending on the type of monkey, strain of rat, or even type of ATL cell line used. However, the progress made using animal models cannot be overstated, as it has led to insights into the mechanisms regulating viral replication, viral persistence, disease development, and, most importantly, model systems to test disease treatments.
A Structural Modeling Approach to a Multilevel Random Coefficients Model.
Rovine, Michael J.; Molenaar, Peter C. M.
2000-01-01
Presents a method for estimating the random coefficients model using covariance structure modeling and allowing one to estimate both fixed and random effects. The method is applied to real and simulated data, including marriage data from J. Belsky and M. Rovine (1990). (SLD)
DEFF Research Database (Denmark)
Kaplan, Sigal; Prato, Carlo Giacomo
2012-01-01
This study explores the plausibility of regret minimization as behavioral paradigm underlying the choice of crash avoidance maneuvers. Alternatively to previous studies that considered utility maximization, this study applies the random regret minimization (RRM) model while assuming that drivers ...
Modeling regulated water utility investment incentives
Padula, S.; Harou, J. J.
2014-12-01
This work attempts to model the infrastructure investment choices of privatized water utilities subject to rate-of-return and price-cap regulation. The goal is to understand how regulation influences water companies' investment decisions, such as their desire to engage in transfers with neighbouring companies. We formulate a profit-maximization capacity expansion model that finds the schedule of new supply, demand management and transfer schemes that maintain the annual supply-demand balance and maximize a company's profit under the 2010-15 price control process in England. Regulatory incentives for cost savings are also represented in the model. These include the CIS scheme for capital expenditure (capex) and incentive allowance schemes for operating expenditure (opex). The profit-maximizing investment program (what to build, when and what size) is compared with the least-cost program (the social optimum). We apply this formulation to several water companies in South East England to model performance and sensitivity to water network particulars. Results show that if companies are able to outperform the regulatory assumption on the cost of capital, a capital bias can be generated, because capital expenditure, unlike opex, can be remunerated through the companies' regulatory capital value (RCV). The occurrence and magnitude of the 'capital bias' depend on the extent to which a company can finance its investments at a rate below the allowed cost of capital. The bias can be reduced by regulatory penalties for underperformance on capital expenditure (the CIS scheme); sensitivity analysis, varying the CIS penalty, shows how and to what extent this affects the capital bias. We show how regulatory changes could potentially be devised to partially remove the 'capital bias' effect. Solutions potentially include allowing for incentives on total expenditure rather than separately for capex and opex and allowing
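A toy, single-period version of the profit-versus-least-cost comparison can make the 'capital bias' concrete. All schemes, costs, and regulatory rates below are hypothetical numbers, and the paper's model is a full multi-year capacity-expansion optimization rather than this brute-force sketch:

```python
from itertools import combinations

# Candidate schemes: (name, capacity in Ml/d, capex, opex); numbers hypothetical.
schemes = [("reservoir", 30, 90.0, 4.0),
           ("desalination", 25, 70.0, 12.0),
           ("transfer", 15, 20.0, 6.0),
           ("leakage_fix", 10, 15.0, 2.0)]
deficit = 40  # supply-demand gap to close

def feasible_portfolios():
    """Portfolios that close the deficit without overshooting the
    capacity the regulator would plausibly allow (deficit + 15)."""
    for r in range(1, len(schemes) + 1):
        for combo in combinations(schemes, r):
            if deficit <= sum(s[1] for s in combo) <= deficit + 15:
                yield combo

def total_cost(combo):
    """Social planner's objective: capex plus opex."""
    return sum(s[2] + s[3] for s in combo)

def profit(combo, allowed=0.06, actual=0.04):
    # Only capex enters the regulatory capital value; earning the
    # allowed return while financing at a lower actual rate rewards
    # capital-intensive plans: the 'capital bias'.
    return sum(s[2] for s in combo) * (allowed - actual)

least_cost = min(feasible_portfolios(), key=total_cost)
profit_max = max(feasible_portfolios(), key=profit)
```

With these numbers the least-cost plan favors the cheap transfer scheme, while the profit-maximizing plan picks the most capital-intensive feasible portfolio.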
Random effect selection in generalised linear models
DEFF Research Database (Denmark)
Denwood, Matt; Houe, Hans; Forkman, Björn
We analysed abattoir recordings of meat inspection codes with possible relevance to on-farm animal welfare in cattle. Random-effects logistic regression models were used to describe individual-level data obtained from 461,406 cattle slaughtered in Denmark. Our results demonstrate that the largest...
A sequential model for the structure of health care utilization.
Herrmann, W.J.; Haarmann, A.; Baerheim, A.
2017-01-01
Traditional measurement models of health care utilization are not able to represent the complex structure of health care utilization. In this qualitative study, we, therefore, developed a new model to represent the health care utilization structure. In Norway and Germany, we conducted episodic
A Utility Model for Teaching Load Decisions in Academic Departments.
Massey, William F.; Zemsky, Robert
1997-01-01
Presents a utility model for academic department decision making and describes the structural specifications for analyzing it. The model confirms the class-size utility asymmetry predicted by the authors' academic ratchet theory, but shows that the marginal utility associated with college teaching loads is always negative. Curricular structure and…
Modelling of biomass utilization for energy purpose
Energy Technology Data Exchange (ETDEWEB)
Grzybek, Anna [ed.
2010-07-01
the overall farms structure, farms land distribution on several separate subfields for one farm, villages' overpopulation and very high employment in agriculture (about 27% of all employees in the national economy work in agriculture). Farmers have a low education level. In towns, 34% of the population has secondary education; in rural areas, only 15-16%. Less than 2% of the inhabitants of rural areas have higher education. The structure of land use is as follows: arable land 11.5%, meadows and pastures 25.4%, forests 30.1%. Poland requires implementation of technical and technological progress for the intensification of agricultural production. Competition for agricultural land arises from maintaining the current consumption level while allocating part of agricultural production to energy purposes. Agricultural land is going to be a key factor for biofuels production. In this publication, research results for the Project PL0073 'Modelling of energetical biomass utilization for energy purposes' are presented. The Project was financed from the Norwegian Financial Mechanism and the European Economic Area Financial Mechanism. The publication aims to bring the reader closer to, and explain, problems connected with the cultivation of energy crops, and to dispel myths concerning them. Replacing fossil fuels with biomass for heat and electricity production could contribute significantly to reducing carbon dioxide emissions. Moreover, biomass crops and biomass utilization for energy purposes play an important role in the diversification of agricultural production in the transformation of rural areas. Broadening agricultural production enables the creation of new jobs. Sustainable development is going to be the fundamental rule for the evolution of Polish agriculture in the long term. Biomass utilization for energy integrates well within this evolution, especially at the local level. There are two facts. The first one is that the increase of interest in energy crops in Poland has been
A random walk model to evaluate autism
Moura, T. R. S.; Fulco, U. L.; Albuquerque, E. L.
2018-02-01
A common test administered during neurological examination in children is the analysis of their social communication and interaction across multiple contexts, including repetitive patterns of behavior. Poor performance may be associated with neurological conditions characterized by impairments in executive function, such as the so-called pervasive developmental disorders (PDDs), a particular condition of the autism spectrum disorders (ASDs). Inspired by these diagnostic tools, mainly those related to repetitive movements and behaviors, we studied here how the diffusion regimes of two discrete-time random walkers, mimicking the lack of social interaction and restricted interests developed by children with PDDs, are affected. Our model, which is based on the so-called elephant random walk (ERW) approach, considers that one of the random walkers can learn and imitate the microscopic behavior of the other with probability f (1 - f otherwise). The diffusion regimes, measured by the Hurst exponent (H), are then obtained, whose changes may indicate a different degree of autism.
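A minimal sketch of the coupled walk is easy to write down. The classic ERW update and the imitation step below follow the abstract's description; the coupling details are a simplified assumption, not the authors' exact specification:

```python
import random

def elephant_step(rng, memory, p):
    """ERW update: recall a uniformly chosen past step and repeat it
    with probability p (reverse it otherwise)."""
    past = rng.choice(memory)
    return past if rng.random() < p else -past

def coupled_erw(n_steps=1000, p=0.7, f=0.3, seed=42):
    """Two discrete-time walkers; walker 2 imitates walker 1's current
    step with probability f, otherwise follows its own ERW memory."""
    rng = random.Random(seed)
    mem1, mem2 = [1], [rng.choice([-1, 1])]
    for _ in range(n_steps):
        s1 = elephant_step(rng, mem1, p)
        s2 = s1 if rng.random() < f else elephant_step(rng, mem2, p)
        mem1.append(s1)
        mem2.append(s2)
    return mem1, mem2

steps1, steps2 = coupled_erw()
```

The Hurst exponent would then be estimated from how the variance of the walkers' positions grows with time over many realizations.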
Random matrix models for phase diagrams
International Nuclear Information System (INIS)
Vanderheyden, B; Jackson, A D
2011-01-01
We describe a random matrix approach that can provide generic and readily soluble mean-field descriptions of the phase diagram for a variety of systems ranging from quantum chromodynamics to high-Tc materials. Instead of working from specific models, phase diagrams are constructed by averaging over the ensemble of theories that possesses the relevant symmetries of the problem. Although approximate in nature, this approach has a number of advantages. First, it can be useful in distinguishing generic features from model-dependent details. Second, it can help in understanding the 'minimal' number of symmetry constraints required to reproduce specific phase structures. Third, the robustness of predictions can be checked with respect to variations in the detailed description of the interactions. Finally, near critical points, random matrix models bear strong similarities to Ginzburg-Landau theories, with the advantage of additional constraints inherited from the symmetries of the underlying interaction. These constraints can be helpful in ruling out certain topologies in the phase diagram. In this Key Issues Review, we illustrate the basic structure of random matrix models, discuss their strengths and weaknesses, and consider the kinds of system to which they can be applied.
Deriving the expected utility of a predictive model when the utilities are uncertain.
Cooper, Gregory F; Visweswaran, Shyam
2005-01-01
Predictive models are often constructed from clinical databases with the goal of eventually helping make better clinical decisions. Evaluating models using decision theory is therefore natural. When constructing a model using statistical and machine learning methods, however, we are often uncertain about precisely how the model will be used. Thus, decision-independent measures of classification performance, such as the area under an ROC curve, are popular. As a complementary method of evaluation, we investigate techniques for deriving the expected utility of a model under uncertainty about the model's utilities. We demonstrate an example of the application of this approach to the evaluation of two models that diagnose coronary artery disease.
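The idea of averaging expected utility over uncertain utilities can be illustrated with a small Monte Carlo sketch. The outcome-utility ranges and predicted disease probabilities below are hypothetical, and this is a stand-in for, not a reproduction of, the authors' derivation:

```python
import random

def expected_utility_under_uncertainty(p_disease_cases, n_draws=5000, seed=0):
    """Monte Carlo estimate of a model's expected utility when the
    utilities of the outcomes are themselves uncertain, here drawn
    from uniform ranges (hypothetical numbers on a 0-1 scale)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_draws):
        # Draw one utility vector; for each case, act (treat or wait)
        # so as to maximize expected utility under that vector.
        u_tp, u_fp = rng.uniform(0.8, 1.0), rng.uniform(0.4, 0.7)
        u_tn, u_fn = rng.uniform(0.9, 1.0), rng.uniform(0.0, 0.3)
        eu_draw = 0.0
        for p in p_disease_cases:  # model-predicted disease probabilities
            treat = p * u_tp + (1 - p) * u_fp
            wait = p * u_fn + (1 - p) * u_tn
            eu_draw += max(treat, wait)
        total += eu_draw / len(p_disease_cases)
    return total / n_draws

eu = expected_utility_under_uncertainty([0.05, 0.2, 0.6, 0.9])
```

Averaging the per-draw expected utility over the utility distribution gives the decision-theoretic counterpart of a decision-independent score such as the ROC area.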
Mathematical models for estimating radio channels utilization when ...
African Journals Online (AJOL)
Definition of the radio channel utilization indicator is given. Mathematical models for assessing radio channel utilization during real-time flow transfer in a wireless self-organized network are presented. Results of estimation experiments on average radio channel utilization productivity, with and without buffering of ...
Particle filters for random set models
Ristic, Branko
2013-01-01
“Particle Filters for Random Set Models” presents coverage of state estimation of stochastic dynamic systems from noisy measurements, specifically sequential Bayesian estimation and nonlinear or stochastic filtering. The class of solutions presented in this book is based on the Monte Carlo statistical method. The resulting algorithms, known as particle filters, in the last decade have become one of the essential tools for stochastic filtering, with applications ranging from navigation and autonomous vehicles to bio-informatics and finance. While particle filters have been around for more than a decade, the recent theoretical developments of sequential Bayesian estimation in the framework of random set theory have provided new opportunities which are not widely known and are covered in this book. These recent developments have dramatically widened the scope of applications, from single to multiple appearing/disappearing objects, from precise to imprecise measurements and measurement models. This book...
Connectivity ranking of heterogeneous random conductivity models
Rizzo, C. B.; de Barros, F.
2017-12-01
To overcome the challenges associated with hydrogeological data scarcity, the hydraulic conductivity (K) field is often represented by a spatial random process. The state of the art provides several methods to generate 2D or 3D random K-fields, such as the classic multi-Gaussian fields or non-Gaussian fields, training-image-based fields and object-based fields. We provide a systematic comparison of these models based on their connectivity. We use the minimum hydraulic resistance as a connectivity measure, which has been found to be strongly correlated with the early arrival time of dissolved contaminants. A computationally efficient graph-based algorithm is employed, allowing a stochastic treatment of the minimum hydraulic resistance through a Monte Carlo approach and therefore enabling the computation of its uncertainty. The results show the impact of geostatistical parameters on the connectivity for each group of random fields, making it possible to rank the fields according to their minimum hydraulic resistance.
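As an illustrative sketch (not the authors' code), the minimum hydraulic resistance of a 2D conductivity field can be computed as a shortest path from the left to the right boundary, with each cell costing 1/K, and wrapped in a Monte Carlo loop over random realizations:

```python
import heapq
import random

def min_hydraulic_resistance(K):
    """Dijkstra shortest path from any left-boundary cell to the right
    boundary of a 2D grid, where entering a cell costs 1/K."""
    rows, cols = len(K), len(K[0])
    dist = [[float("inf")] * cols for _ in range(rows)]
    pq = []
    for r in range(rows):                      # all left cells are sources
        dist[r][0] = 1.0 / K[r][0]
        heapq.heappush(pq, (dist[r][0], r, 0))
    while pq:
        d, r, c = heapq.heappop(pq)
        if d > dist[r][c]:
            continue
        if c == cols - 1:                      # first settled right cell wins
            return d
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + 1.0 / K[nr][nc]
                if nd < dist[nr][nc]:
                    dist[nr][nc] = nd
                    heapq.heappush(pq, (nd, nr, nc))

def monte_carlo_resistance(n_real=50, size=10, seed=1):
    """Distribution of minimum resistance over lognormal K realizations
    (a simple stand-in for the geostatistical fields in the study)."""
    rng = random.Random(seed)
    return [min_hydraulic_resistance(
                [[rng.lognormvariate(0.0, 1.0) for _ in range(size)]
                 for _ in range(size)])
            for _ in range(n_real)]

vals = monte_carlo_resistance()
```

Ranking different field-generation methods then amounts to comparing these Monte Carlo distributions.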
Modeling superhydrophobic surfaces comprised of random roughness
Samaha, M. A.; Tafreshi, H. Vahedi; Gad-El-Hak, M.
2011-11-01
We model the performance of superhydrophobic surfaces comprised of randomly distributed roughness that resembles natural surfaces, or those produced via random deposition of hydrophobic particles. Such a fabrication method is far less expensive than ordered-microstructured fabrication. The present numerical simulations are aimed at improving our understanding of the drag reduction effect and the stability of the air-water interface in terms of the microstructure parameters. For comparison and validation, we have also simulated the flow over superhydrophobic surfaces made up of aligned or staggered microposts for channel flows as well as streamwise or spanwise ridge configurations for pipe flows. The present results are compared with other theoretical and experimental studies. The numerical simulations indicate that the random distribution of surface roughness has a favorable effect on drag reduction, as long as the gas fraction is kept the same. The stability of the meniscus, however, is strongly influenced by the average spacing between the roughness peaks, which needs to be carefully examined before a surface can be recommended for fabrication. Financial support from DARPA, contract number W91CRB-10-1-0003, is acknowledged.
A random matrix model of relaxation
International Nuclear Information System (INIS)
Lebowitz, J L; Pastur, L
2004-01-01
We consider a two-level system, S_2, coupled to a general n-level system, S_n, via a random matrix. We derive an integral representation for the mean reduced density matrix ρ(t) of S_2 in the limit n → ∞, and we identify a model of S_n which possesses some of the properties expected for macroscopic thermal reservoirs. In particular, it yields the Gibbs form for ρ(∞). We also consider an analog of the van Hove limit and obtain a master equation (Markov dynamics) for the evolution of ρ(t) on an appropriate time scale
A randomized trial of treatments for high-utilizing somatizing patients.
Barsky, Arthur J; Ahern, David K; Bauer, Mark R; Nolido, Nyryan; Orav, E John
2013-11-01
Somatization and hypochondriacal health anxiety are common sources of distress, impairment, and costly medical utilization in primary care practice. A range of interventions is needed to improve the care of these patients. To determine the effectiveness of two cognitive behavioral interventions for high-utilizing, somatizing patients, using the resources found in a routine care setting, patients were randomly assigned to a two-step cognitive behavioral treatment program accompanied by a training seminar for their primary care physicians, or to relaxation training. Providers routinely working in these patients' primary care practices delivered the cognitive behavior therapy and relaxation training. Assessments were completed immediately prior to treatment and 6 and 12 months later. Participants were 89 medical outpatients with elevated levels of somatization, hypochondriacal health anxiety, and medical care utilization. Somatization and hypochondriasis, overall psychiatric distress, and role impairment were assessed with well-validated, self-report questionnaires. Outpatient visits and medical care costs before and after the intervention were obtained from the encounter claims database. At 6- and 12-month follow-up, both intervention groups showed significant improvements in somatization, hypochondriacal symptoms, overall psychiatric distress, and role function. They also reduced the ambulatory visits and costs of these high-utilizing outpatients.
Ising model of a randomly triangulated random surface as a definition of fermionic string theory
International Nuclear Information System (INIS)
Bershadsky, M.A.; Migdal, A.A.
1986-01-01
Fermionic degrees of freedom are added to randomly triangulated planar random surfaces. It is shown that the Ising model on a fixed graph is equivalent to a certain Majorana fermion theory on the dual graph. (orig.)
International Nuclear Information System (INIS)
Chorus, Caspar G.; Koetse, Mark J.; Hoen, Anco
2013-01-01
This paper presents a utility-based and a regret-based model of consumer preferences for alternative fuel vehicles, based on a large-scale stated choice-experiment held among company car leasers in The Netherlands. Estimation and application of random utility maximization and random regret minimization discrete choice models shows that while the two models achieve almost identical fit with the data and differ only marginally in terms of predictive ability, they generate rather different choice probability-simulations and policy implications. The most eye-catching difference between the two models is that the random regret minimization model accommodates a compromise-effect, as it assigns relatively high choice probabilities to alternative fuel vehicles that perform reasonably well on each dimension instead of having a strong performance on some dimensions and a poor performance on others. - Highlights: • Utility- and regret-based models of preferences for alternative fuel vehicles. • Estimation based on stated choice-experiment among Dutch company car leasers. • Models generate rather different choice probabilities and policy implications. • Regret-based model accommodates a compromise-effect
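The contrast between the two paradigms, including the compromise effect, can be reproduced with a three-alternative numerical sketch. Attribute values and coefficients are hypothetical; the regret function follows the standard RRM form R_i = sum over j ≠ i and attributes m of ln(1 + exp(β_m (x_jm - x_im))):

```python
import math

# Three alternatives scored on two attributes (hypothetical values,
# scaled so higher is better). B is the "compromise" option that is
# middling on both attributes.
X = {"A": (1.0, 0.0), "B": (0.5, 0.5), "C": (0.0, 1.0)}
beta = (1.0, 1.0)

def rum_probs(X, beta):
    """Random utility maximization: multinomial logit probabilities."""
    v = {i: sum(b * x for b, x in zip(beta, xi)) for i, xi in X.items()}
    z = sum(math.exp(vi) for vi in v.values())
    return {i: math.exp(vi) / z for i, vi in v.items()}

def rrm_probs(X, beta):
    """Random regret minimization: logit on the negative of total regret."""
    R = {}
    for i, xi in X.items():
        R[i] = sum(math.log(1 + math.exp(b * (xj[m] - xi[m])))
                   for j, xj in X.items() if j != i
                   for m, b in enumerate(beta))
    z = sum(math.exp(-r) for r in R.values())
    return {i: math.exp(-R[i]) / z for i in R}

p_u, p_r = rum_probs(X, beta), rrm_probs(X, beta)
```

With these symmetric values the RUM logit is indifferent between the three options, while RRM assigns the compromise alternative B the highest probability, which is exactly the effect the abstract highlights.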
Animal models of contraception: utility and limitations
Directory of Open Access Journals (Sweden)
Liechty ER
2015-04-01
Full Text Available Emma R Liechty,1 Ingrid L Bergin,1 Jason D Bell2 1Unit for Laboratory Animal Medicine, 2Program on Women's Health Care Effectiveness Research, Department of Obstetrics and Gynecology, University of Michigan, Ann Arbor, MI, USA Abstract: Appropriate animal modeling is vital for the successful development of novel contraceptive devices. Advances in reproductive biology have identified novel pathways for contraceptive intervention. Here we review species-specific anatomic and physiologic considerations impacting preclinical contraceptive testing, including efficacy testing, mechanistic studies, device design, and modeling off-target effects. Emphasis is placed on the use of nonhuman primate models in contraceptive device development. Keywords: nonhuman primate, preclinical, in vivo, contraceptive devices
Animal models of asthma: utility and limitations
Directory of Open Access Journals (Sweden)
Aun MV
2017-11-01
Full Text Available Marcelo Vivolo Aun,1,2 Rafael Bonamichi-Santos,1,2 Fernanda Magalhães Arantes-Costa,2 Jorge Kalil,1 Pedro Giavina-Bianchi1 1Clinical Immunology and Allergy Division, Department of Internal Medicine, University of São Paulo School of Medicine, São Paulo, Brazil, 2Laboratory of Experimental Therapeutics (LIM20, Department of Internal Medicine, University of Sao Paulo, Sao Paulo, Brazil Abstract: Clinical studies in asthma are not able to clear up all aspects of disease pathophysiology. Animal models have been developed to better understand these mechanisms and to evaluate both safety and efficacy of therapies before starting clinical trials. Several species of animals have been used in experimental models of asthma, such as Drosophila, rats, guinea pigs, cats, dogs, pigs, primates and equines. However, the most common species studied in the last two decades is mice, particularly BALB/c. Animal models of asthma try to mimic the pathophysiology of human disease. They classically include two phases: sensitization and challenge. Sensitization is traditionally performed by intraperitoneal and subcutaneous routes, but intranasal instillation of allergens has been increasingly used because human asthma is induced by inhalation of allergens. Challenges with allergens are performed through aerosol, intranasal or intratracheal instillation. However, few studies have compared different routes of sensitization and challenge. The causative allergen is another important issue in developing a good animal model. Despite being more traditional and leading to intense inflammation, ovalbumin has been replaced by aeroallergens, such as house dust mites, to use the allergens that cause human disease. Finally, researchers should define outcomes to be evaluated, such as serum-specific antibodies, airway hyperresponsiveness, inflammation and remodeling. The present review analyzes the animal models of asthma, assessing differences between species, allergens and routes
Random defect lines in conformal minimal models
International Nuclear Information System (INIS)
Jeng, M.; Ludwig, A.W.W.
2001-01-01
We analyze the effect of adding quenched disorder along a defect line in the 2D conformal minimal models using replicas. The disorder is realized by a random applied magnetic field in the Ising model, by fluctuations in the ferromagnetic bond coupling in the tricritical Ising model and tricritical three-state Potts model (the φ_{1,2} operator), etc. We find that for the Ising model, the defect renormalizes to two decoupled half-planes without disorder, but that for all other models, the defect renormalizes to a disorder-dominated fixed point. Its critical properties are studied with an expansion in ε ∝ 1/m for the mth Virasoro minimal model. The decay exponents X_N = (N/2)[1 - 9(3N-4)/(4(m+1)^2)] + O(1/(m+1)^3) of the Nth moment of the two-point function of φ_{1,2} along the defect are obtained to 2-loop order, exhibiting multifractal behavior. This leads to a typical decay exponent X_typ = (1/2)[1 + 9/(m+1)^2] + O(1/(m+1)^3). One-point functions are seen to have a non-self-averaging amplitude. The boundary entropy is larger than that of the pure system by order 1/m^3. As a byproduct of our calculations, we also obtain to 2-loop order the exponent X̃_N = N[1 - (2/(9π^2))(3N-4)(q-2)^2] + O((q-2)^3) of the Nth moment of the energy operator in the q-state Potts model with bulk bond disorder
Utilizing Rapid Prototyping for Architectural Modeling
Kirton, E. F.; Lavoie, S. D.
2006-01-01
This paper will discuss our approach to, success with, and future direction in rapid prototyping for architectural modeling. The premise that this emerging technology has broad and exciting applications in the building design and construction industry will be supported by visual and physical evidence. This evidence will be presented in the form of…
Evaluation of Usability Utilizing Markov Models
Penedo, Janaina Rodrigues; Diniz, Morganna; Ferreira, Simone Bacellar Leal; Silveira, Denis S.; Capra, Eliane
2012-01-01
Purpose: The purpose of this paper is to analyze the usability of a remote learning system in its initial development phase, using a quantitative usability evaluation method through Markov models. Design/methodology/approach: The paper opted for an exploratory study. The data of interest of the research correspond to the possible accesses of users…
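A Markov-model usability evaluation typically reduces to absorption statistics of a transition matrix estimated from interaction logs. The states and probabilities below are hypothetical; the sketch computes the expected number of interface transitions before task completion:

```python
# Transition probabilities between interface states (hypothetical
# numbers); "done" is the absorbing task-completion state.
P = {
    "home":   {"home": 0.1, "search": 0.6, "done": 0.3},
    "search": {"home": 0.2, "search": 0.3, "done": 0.5},
}

def expected_steps(P, iters=500):
    """Expected transitions to absorption from each transient state,
    by fixed-point iteration of E[s] = 1 + sum_j P[s][j] * E[j]
    (absorbing states contribute E = 0)."""
    E = {s: 0.0 for s in P}
    for _ in range(iters):
        E = {s: 1.0 + sum(p * E.get(j, 0.0) for j, p in P[s].items())
             for s in P}
    return E

E = expected_steps(P)
```

Fewer expected transitions (and fewer loops back to "home") indicate a more usable task flow, which is the kind of quantitative comparison such an evaluation enables.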
Jang, S.; Rasouli, S.; Timmermans, H.J.P.
2016-01-01
Recently, regret-based choice models have been introduced in the travel behavior research community as an alternative to expected/random utility models. The fundamental proposition underlying regret theory is that individuals minimize the amount of regret they (are expected to) experience when
Directory of Open Access Journals (Sweden)
Cinnamon S. Bloss
2016-01-01
Full Text Available Background. Mobile health and digital medicine technologies are becoming increasingly used by individuals with common, chronic diseases to monitor their health. Numerous devices, sensors, and apps are available to patients and consumers, some of which have been shown to lead to improved health management and health outcomes. However, no randomized controlled trials have been conducted which examine health care costs, and most have failed to provide study participants with a truly comprehensive monitoring system. Methods. We conducted a prospective randomized controlled trial of adults who had submitted a 2012 health insurance claim associated with hypertension, diabetes, and/or cardiac arrhythmia. The intervention involved receipt of one or more mobile devices that corresponded to their condition(s) (hypertension: Withings Blood Pressure Monitor; diabetes: Sanofi iBGStar Blood Glucose Meter; arrhythmia: AliveCor Mobile ECG) and an iPhone with linked tracking applications for a period of 6 months; the control group received a standard disease management program. Moreover, intervention study participants received access to an online health management system which provided participants detailed device tracking information over the course of the study. This was a monitoring system designed by leveraging collaborations with device manufacturers, a connected health leader, a health care provider, and an employee wellness program, making it both unique and inclusive. We hypothesized that health resource utilization with respect to health insurance claims might be influenced by the monitoring intervention. We also examined health self-management. Results & Conclusions. There was little evidence of differences in health care costs or utilization as a result of the intervention. Furthermore, we found evidence that the control and intervention groups were equivalent with respect to most health care utilization outcomes. This result suggests there are not large
Subanti, S.; Irawan, B. R. M. B.; Sasongko, G.; Hakim, A. R.
2017-04-01
This study aims to determine the profit (loss) earned by economic actors in tourism activities if the condition or quality of tourism in Rawapening is improved (deteriorates). The change in condition or quality is described by travel expenses, the natural environment, Japanese cultural performances, and traditional markets. The method used to measure the change in economic benefit or loss is a random utility approach. The study found that travel cost, the natural environment, Japanese cultural performances, and traditional markets are significant factors in respondents' preferences for a change in tourism conditions. The compensation value received by visitors as a result of improved conditions is 2,932 billion, while that for worsened conditions is 2,628 billion. The recommendation of this study is that the local government should consider environmental factors in the formulation of tourism development in Rawapening.
Random matrix model of adiabatic quantum computing
International Nuclear Information System (INIS)
Mitchell, David R.; Adami, Christoph; Lue, Waynn; Williams, Colin P.
2005-01-01
We present an analysis of the quantum adiabatic algorithm for solving hard instances of 3-SAT (an NP-complete problem) in terms of random matrix theory (RMT). We determine the global regularity of the spectral fluctuations of the instantaneous Hamiltonians encountered during the interpolation between the starting Hamiltonians and the ones whose ground states encode the solutions to the computational problems of interest. At each interpolation point, we quantify the degree of regularity of the average spectral distribution via its Brody parameter, a measure that distinguishes regular (i.e., Poissonian) from chaotic (i.e., Wigner-type) distributions of normalized nearest-neighbor spacings. We find that for hard problem instances - i.e., those having a critical ratio of clauses to variables - the spectral fluctuations typically become irregular across a contiguous region of the interpolation parameter, while the spectrum is regular for easy instances. Within the hard region, RMT may be applied to obtain a mathematical model of the probability of avoided level crossings and concomitant failure rate of the adiabatic algorithm due to nonadiabatic Landau-Zener-type transitions. Our model predicts that if the interpolation is performed at a uniform rate, the average failure rate of the quantum adiabatic algorithm, when averaged over hard problem instances, scales exponentially with increasing problem size
Network Bandwidth Utilization Forecast Model on High Bandwidth Network
Energy Technology Data Exchange (ETDEWEB)
Yoo, Wucherl; Sim, Alex
2014-07-07
With the increasing number of geographically distributed scientific collaborations and the scale of data size growth, it has become more challenging for users to achieve the best possible network performance on a shared network. We have developed a forecast model to predict expected bandwidth utilization for high-bandwidth wide area networks. The forecast model can improve the efficiency of resource utilization and the scheduling of data movements on high-bandwidth networks to accommodate ever-increasing data volume for large-scale scientific data applications. A univariate model is developed with STL and ARIMA on SNMP path utilization data. Compared with a traditional approach such as the Box-Jenkins methodology, our forecast model reduces computation time by 83.2%. It also shows resilience against abrupt network usage changes. The accuracy of the forecast model is within the standard deviation of the monitored measurements.
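In spirit, the pipeline is: decompose the utilization series, model the remainder, forecast, and recompose. The following is a deliberately simplified stand-in (per-phase seasonal means plus an AR(1) remainder) for the STL+ARIMA combination named in the abstract, not the authors' implementation:

```python
def decompose_and_forecast(y, period, horizon):
    """Simplified stand-in for an STL+ARIMA pipeline: remove a seasonal
    component estimated by per-phase means, fit an AR(1) to the
    remainder by least squares, and forecast `horizon` steps ahead."""
    n = len(y)
    seasonal = [0.0] * period
    for ph in range(period):
        vals = [y[i] for i in range(ph, n, period)]
        seasonal[ph] = sum(vals) / len(vals)
    resid = [y[i] - seasonal[i % period] for i in range(n)]
    # AR(1) fit: resid[t] ~ phi * resid[t - 1]
    num = sum(resid[t] * resid[t - 1] for t in range(1, n))
    den = sum(r * r for r in resid[:-1]) or 1.0
    phi = num / den
    fc, last = [], resid[-1]
    for h in range(1, horizon + 1):
        last *= phi  # AR(1) forecast decays geometrically
        fc.append(seasonal[(n + h - 1) % period] + last)
    return fc

forecast = decompose_and_forecast([1, 2, 3] * 4, period=3, horizon=3)
```

STL additionally extracts a trend and uses robust loess smoothing, and ARIMA generalizes the AR(1) remainder model; this sketch only shows the decompose-model-recompose structure.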
Network bandwidth utilization forecast model on high bandwidth networks
Energy Technology Data Exchange (ETDEWEB)
Yoo, Wuchert (William) [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sim, Alex [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2015-03-30
With the increasing number of geographically distributed scientific collaborations and the growth in data sizes, it has become more challenging for users to achieve the best possible network performance on a shared network. We have developed a forecast model to predict expected bandwidth utilization for high-bandwidth wide area networks. The forecast model can improve the efficiency of resource utilization and the scheduling of data movements on high-bandwidth networks to accommodate the ever-increasing data volume of large-scale scientific data applications. A univariate model is developed with STL and ARIMA on SNMP path utilization data. Compared with traditional approaches such as the Box-Jenkins methodology, our forecast model reduces computation time by 83.2%. It also shows resilience against abrupt changes in network usage. The accuracy of the forecast model is within the standard deviation of the monitored measurements.
Simulation of a directed random-walk model: the effect of pseudo-random-number correlations
Shchur, L. N.; Heringa, J. R.; Blöte, H. W. J.
1996-01-01
We investigate the mechanism that leads to systematic deviations in cluster Monte Carlo simulations when correlated pseudo-random numbers are used. We present a simple model, which enables an analysis of the effects due to correlations in several types of pseudo-random-number sequences. This model provides qualitative understanding of the bias mechanism in a class of cluster Monte Carlo algorithms.
Dynamics of the Random Field Ising Model
Xu, Jian
The Random Field Ising Model (RFIM) is a general tool to study disordered systems. Crackling noise, spanning a broad range of event sizes, is generated when disordered systems are driven by external forces. Systems with different microscopic structures, such as disordered magnets and the Earth's crust, have been studied under the RFIM. In this thesis, we investigated the domain dynamics and critical behavior in two dipole-coupled Ising ferromagnets, Nd2Fe14B and LiHoxY1-xF4. With Tc well above room temperature, Nd2Fe14B has shown reversible disorder when exposed to an external transverse field and crosses between two universality classes in the strong and weak disorder limits. Besides tunable disorder, LiHoxY1-xF4 has shown quantum tunneling effects arising from quantum fluctuations, providing another mechanism for domain reversal. Universality within and beyond power-law dependence on avalanche size and energy was studied in LiHo0.65Y0.35F4.
Identification of human operator performance models utilizing time series analysis
Holden, F. M.; Shinners, S. M.
1973-01-01
The results of an effort performed by Sperry Systems Management Division for AMRL in applying time series analysis as a tool for modeling the human operator are presented. This technique is utilized for determining the variation of the human transfer function under various levels of stress. The human operator's model is determined based on actual input and output data from a tracking experiment.
Subjective Expected Utility: A Model of Decision-Making.
Fischoff, Baruch; And Others
1981-01-01
Outlines a model of decision making known to researchers in the field of behavioral decision theory (BDT) as subjective expected utility (SEU). The descriptive and predictive validity of the SEU model, probability and values assessment using SEU, and decision contexts are examined, and a 54-item reference list is provided. (JL)
Kinetic models of cell growth, substrate utilization and bio ...
African Journals Online (AJOL)
Bio-decolorization kinetic studies of distillery effluent in a batch culture were conducted using Aspergillus fumigatus. A simple model was proposed using the Logistic Equation for the growth, Leudeking-Piret kinetics for bio-decolorization, and also for substrate utilization. The proposed models appeared to provide a suitable ...
Utility of Social Modeling for Proliferation Assessment - Preliminary Assessment
Energy Technology Data Exchange (ETDEWEB)
Coles, Garill A.; Gastelum, Zoe N.; Brothers, Alan J.; Thompson, Sandra E.
2009-06-01
This Preliminary Assessment draft report will present the results of a literature search and preliminary assessment of the body of research, analysis methods, models and data deemed to be relevant to the Utility of Social Modeling for Proliferation Assessment research. This report will provide: 1) a description of the problem space and the kinds of information pertinent to the problem space, 2) a discussion of key relevant or representative literature, 3) a discussion of models and modeling approaches judged to be potentially useful to the research, and 4) the next steps of this research that will be pursued based on this preliminary assessment. This draft report represents a technical deliverable for the NA-22 Simulations, Algorithms, and Modeling (SAM) program. Specifically this draft report is the Task 1 deliverable for project PL09-UtilSocial-PD06, Utility of Social Modeling for Proliferation Assessment. This project investigates non-traditional use of social and cultural information to improve nuclear proliferation assessment, including nonproliferation assessment, proliferation resistance assessments, safeguards assessments and other related studies. These assessments often use and create technical information about the State’s posture towards proliferation, the vulnerability of a nuclear energy system to an undesired event, and the effectiveness of safeguards. This project will find and fuse social and technical information by explicitly considering the role of cultural, social and behavioral factors relevant to proliferation. The aim of this research is to describe and demonstrate if and how social science modeling has utility in proliferation assessment.
Directory of Open Access Journals (Sweden)
John F. Emerson
2015-12-01
, participants had a median of nine total documented contacts with PCMH providers compared to four in the control group. Three intervention and two control participants had controlled diabetes (hemoglobin A1C <9%). Multidisciplinary care that utilizes health coach-facilitated virtual visits is an intervention that could increase access to intensive primary care services in a vulnerable population. The methods tested are feasible and should be tested in a pragmatic randomized controlled trial to evaluate the impact on patient-relevant outcomes across multiple chronic diseases.
Wang, Wei; Griswold, Michael E
2016-11-30
The random effect Tobit model is a regression model that accommodates left- and/or right-censoring and within-cluster dependence of the outcome variable. Regression coefficients of random effect Tobit models have conditional interpretations on a constructed latent dependent variable and do not provide inference of overall exposure effects on the original outcome scale. The marginalized random effects model (MREM) permits likelihood-based estimation of marginal mean parameters for clustered data. For random effect Tobit models, we extend the MREM to marginalize over both the random effects and the normal-space and boundary components of the censored response to estimate overall exposure effects at the population level. We also extend the 'Average Predicted Value' method to estimate the model-predicted marginal means for each person under different exposure statuses in a designated reference group by integrating over the random effects, and then use the calculated difference to assess the overall exposure effect. Maximum likelihood estimation is carried out with a quasi-Newton optimization algorithm, using Gauss-Hermite quadrature to approximate the integration over the random effects. We use these methods to carefully analyze two real datasets. Copyright © 2016 John Wiley & Sons, Ltd.
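The Gauss-Hermite step can be illustrated for one cluster of a random-intercept Tobit model. This is a simplified sketch of the integration idea only, not the authors' estimator: it assumes left-censoring at zero, known variance components, and a normal random intercept.

```python
import numpy as np
from scipy.stats import norm

def cluster_marginal_loglik(y, X, beta, sigma_e, sigma_u, n_quad=15):
    """Log marginal likelihood of one cluster's left-censored (at 0) outcomes,
    integrating out a normal random intercept by Gauss-Hermite quadrature."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_quad)
    u = np.sqrt(2.0) * sigma_u * nodes   # change of variables u = sqrt(2)*sigma_u*z
    w = weights / np.sqrt(np.pi)
    mu = X @ beta
    cens = y <= 0
    lik = np.empty(n_quad)
    for q in range(n_quad):
        m = mu + u[q]
        # censored observations contribute Phi(-m/sigma_e);
        # uncensored ones contribute the normal density
        contrib = np.where(cens,
                           norm.cdf(-m / sigma_e),
                           norm.pdf((y - m) / sigma_e) / sigma_e)
        lik[q] = contrib.prod()
    return float(np.log(lik @ w))
```

Summing this quantity over clusters gives the marginal log-likelihood that a quasi-Newton optimizer would maximize over the parameters.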
The changing utility workforce and the emergence of building information modeling in utilities
Energy Technology Data Exchange (ETDEWEB)
Saunders, A. [Autodesk Inc., San Rafael, CA (United States)
2010-07-01
Utilities are faced with the extensive replacement of a workforce that is now reaching retirement age. New personnel will have varying skill levels and different expectations in relation to design tools. This paper discussed methods of facilitating knowledge transfer from the retiring workforce to new staff using rules-based design software. It was argued that while nothing can replace the experiential knowledge of long-term engineers, software with built-in validations can accelerate training and building information modelling (BIM) processes. Younger personnel will expect a user interface paradigm that is based on their past gaming and work experiences. Visualization, simulation, and modelling approaches were reviewed. 3 refs.
A Note on the Correlated Random Coefficient Model
DEFF Research Database (Denmark)
Kolodziejczyk, Christophe
In this note we derive the bias of the OLS estimator for a correlated random coefficient model with one random coefficient that is correlated with a binary variable. We provide set-identification for the parameters of interest of the model. We also show how to reduce the bias of the estimator...
A random energy model for size dependence : recurrence vs. transience
Külske, Christof
1998-01-01
We investigate the size dependence of disordered spin models having an infinite number of Gibbs measures in the framework of a simplified 'random energy model for size dependence'. We introduce two versions (involving either independent random walks or branching processes), that can be seen as
Compensatory and non-compensatory multidimensional randomized item response models
Fox, J.P.; Entink, R.K.; Avetisyan, M.
2014-01-01
Randomized response (RR) models are often used for analysing univariate randomized response data and for measuring the population prevalence of sensitive behaviours. There is much empirical support for the belief that RR methods improve the cooperation of respondents. Recently, RR models have been
Olekhno, N. A.; Beltukov, Y. M.
2018-05-01
Random impedance networks are widely used as a model to describe plasmon resonances in disordered metal-dielectric and other two-component nanocomposites. In the present work, the spectral properties of resonances in random networks are studied within the framework of the random matrix theory. We have shown that the appropriate ensemble of random matrices for the considered problem is the Jacobi ensemble (the MANOVA ensemble). The obtained analytical expressions for the density of states in such resonant networks show a good agreement with the results of numerical simulations in a wide range of metal filling fractions 0
Maximizing the model for Discounted Stream of Utility from ...
African Journals Online (AJOL)
Osagiede et al. (2009) considered an analytic model for maximizing a discounted stream of utility from consumption when the rate of production is linear. A solution was provided up to the level where methods of solving ordinary differential equations would be applied, but they left off there because of the mathematical complexity ...
Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong
2017-06-01
Effective application of carbon capture, utilization and storage (CCUS) systems could help to alleviate the influence of climate change by reducing carbon dioxide (CO2) emissions. The research objective of this study is to develop an equilibrium chance-constrained programming model with bi-random variables (ECCP model) for supporting the CCUS management system under random circumstances. The major advantage of the ECCP model is that it tackles random variables as bi-random variables with a normal distribution, where the mean values themselves follow a normal distribution. This could avoid irrational assumptions and oversimplifications in the process of parameter design and enrich the theory of stochastic optimization. The ECCP model is solved by an equilibrium chance-constrained programming algorithm, which makes it convenient for decision makers to rank the solution set using the natural order of real numbers. The ECCP model is applied to a CCUS management problem, and the solutions could be useful in helping managers to design and generate rational CO2-allocation patterns under complexities and uncertainties.
Some random models in traffic science
Energy Technology Data Exchange (ETDEWEB)
Hjorth, U.
1996-06-01
We give an overview of stochastic models for the following traffic phenomena: models for traffic flow, including gaps and capacities for lanes, crossings, and roundabouts; models for desired and achieved speed distributions; and mode-selection models, including dispersed equilibrium models and traffic accident models. Some statistical questions are also discussed. 60 refs, 1 tab
A Model for Random Student Drug Testing
Nelson, Judith A.; Rose, Nancy L.; Lutz, Danielle
2011-01-01
The purpose of this case study was to examine random student drug testing in one school district relevant to: (a) the perceptions of students participating in competitive extracurricular activities regarding drug use and abuse; (b) the attitudes and perceptions of parents, school staff, and community members regarding student drug involvement; (c)…
Coupling model of energy consumption with changes in environmental utility
International Nuclear Information System (INIS)
He Hongming; Jim, C.Y.
2012-01-01
This study explores the relationships between metropolis energy consumption and environmental utility changes by a proposed Environmental Utility of Energy Consumption (EUEC) model. Based on the dynamic equilibrium of input–output economics theory, it considers three simulation scenarios: fixed technology, technological innovation, and the green-building effect. It is applied to analyse Hong Kong in 1980–2007. The continual increase in energy consumption with rapid economic growth degraded environmental utility. First, energy consumption under fixed technology was determined by economic outcome. In 1990, it reached a critical balanced state when energy consumption was 22×10⁹ kWh. Before 1990 (x₁ < 22×10⁹ kWh), the rise in energy consumption improved both economic development and environmental utility. After 1990 (x₁ > 22×10⁹ kWh), the expansion of energy consumption facilitated socio-economic development but suppressed environmental benefits. Second, technological innovation strongly influenced energy demand and improved environmental benefits. The balanced state held in 1999 when energy consumption reached 32.33×10⁹ kWh. Technological innovation dampened energy consumption by 12.99% relative to the fixed-technology condition. Finally, green buildings reduced energy consumption by an average of 17.5% in 1990–2007. They contributed significantly to energy saving and buffered temperature fluctuations between the external and internal environment. The case investigations verified the efficiency of the EUEC model, which can effectively evaluate the interplay of energy consumption and environmental quality. - Highlights: ► We explore relationships between metropolis energy consumption and environmental utility. ► An Environmental Utility of Energy Consumption (EUEC) model is proposed. ► Technological innovation mitigates energy consumption impacts on environmental quality. ► Technological innovation decreases the demand for energy consumption more than the fixed-technology scenario.
Directory of Open Access Journals (Sweden)
Gabriel Recchia
2015-01-01
Circular convolution and random permutation have each been proposed as neurally plausible binding operators capable of encoding sequential information in semantic memory. We perform several controlled comparisons of circular convolution and random permutation as means of encoding paired associates as well as sequential information. Random permutations outperformed convolution with respect to the number of paired associates that can be reliably stored in a single memory trace. Performance was equal on semantic tasks when using a small corpus, but random permutations were ultimately capable of achieving superior performance due to their higher scalability to large corpora. Finally, "noisy" permutations, in which units are mapped to other units arbitrarily (with no one-to-one mapping), perform nearly as well as true permutations. These findings increase the neurological plausibility of random permutations and highlight their utility in vector space models of semantics.
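The two binding operators compared above can be sketched in a few lines. The vector dimension, Gaussian codebook, and decoding scheme below are illustrative choices, not the study's exact setup.

```python
import numpy as np

def circ_conv(a, b):
    # circular convolution (the holographic reduced representation binding
    # operator), computed efficiently via the FFT
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def involution(a):
    # approximate inverse used to unbind: a*[i] = a[-i mod d]
    return np.concatenate(([a[0]], a[:0:-1]))

def permutation_bind(a, b, perm):
    # order-sensitive superposition: permute b before adding it to a
    return a + b[perm]

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
```

Unbinding a convolution trace with the involution of one operand, or inverting the permutation on a superposition trace, recovers a noisy copy of the bound item whose cosine similarity to the original is well above chance.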
Energy Technology Data Exchange (ETDEWEB)
Menking, C. (Niedersaechsischer Staedte- und Gemeindebund, Hannover (Germany, F.R.))
1989-01-01
In 1987, the Committee of Town and Community Administrations of Lower Saxonia established the task force 'Franchise Agreements'. This is a forum where town and community officials interested in energy issues cooperate. The idea was to improve conditions and participation possibilities for local administrations in contracts with their present utilities, and to draw up, and coordinate with the utilities, a franchise agreement creating possibilities for the communities in, inter alia, the areas of power supply concepts, energy conservation advice, and energy generation. A model of a franchise agreement for the electricity sector is presented in its full wording. (orig./HSCH).
Resource allocation on computational grids using a utility model and the knapsack problem
Van der ster, Daniel C; Parra-Hernandez, Rafael; Sobie, Randall J
2009-01-01
This work introduces a utility model (UM) for resource allocation on computational grids and formulates the allocation problem as a variant of the 0–1 multichoice multidimensional knapsack problem. The notion of task-option utility is introduced, and it is used to effect allocation policies. We present a variety of allocation policies, which are expressed as functions of metrics that are both intrinsic and external to the task and resources. An external user-defined credit-value metric is shown to allow users to intervene in the allocation of urgent or low priority tasks. The strategies are evaluated in simulation against random workloads as well as those drawn from real systems. We measure the sensitivity of the UM-derived schedules to variations in the allocation policies and their corresponding utility functions. The UM allocation strategy is shown to optimally allocate resources congruent with the chosen policies.
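The knapsack core of such an allocation can be sketched with the standard 0-1 dynamic program over integer capacity. This is a generic illustration of the problem family, not the authors' multichoice multidimensional variant.

```python
def knapsack_max_utility(utilities, costs, capacity):
    # classic 0-1 knapsack DP: best[c] = max total utility at cost <= c;
    # iterate capacity downward so each item is taken at most once
    best = [0] * (capacity + 1)
    for u, w in zip(utilities, costs):
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + u)
    return best[capacity]
```

In the multichoice variant, each task would contribute at most one of several options (its task-options), and the capacity would be a vector of resource dimensions rather than a single scalar.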
Sustainable geothermal utilization - Case histories; definitions; research issues and modelling
International Nuclear Information System (INIS)
Axelsson, Gudni
2010-01-01
Sustainable development by definition meets the needs of the present without compromising the ability of future generations to meet their own needs. The Earth's enormous geothermal resources have the potential to contribute significantly to sustainable energy use worldwide as well as to help mitigate climate change. Experience from the use of numerous geothermal systems worldwide lasting several decades demonstrates that by maintaining production below a certain limit the systems reach a balance between net energy discharge and recharge that may be maintained for a long time (100-300 years). Modelling studies indicate that the effect of heavy utilization is often reversible on a time-scale comparable to the period of utilization. Thus, geothermal resources can be used in a sustainable manner either through (1) constant production below the sustainable limit, (2) step-wise increase in production, (3) intermittent excessive production with breaks, and (4) reduced production after a shorter period of heavy production. The long production histories that are available for low-temperature as well as high-temperature geothermal systems distributed throughout the world, provide the most valuable data available for studying sustainable management of geothermal resources, and reservoir modelling is the most powerful tool available for this purpose. The paper presents sustainability modelling studies for the Hamar and Nesjavellir geothermal systems in Iceland, the Beijing Urban system in China and the Olkaria system in Kenya as examples. Several relevant research issues have also been identified, such as the relevance of system boundary conditions during long-term utilization, how far reaching interference from utilization is, how effectively geothermal systems recover after heavy utilization and the reliability of long-term (more than 100 years) model predictions. (author)
Improving surgeon utilization in an orthopedic department using simulation modeling
Directory of Open Access Journals (Sweden)
Simwita YW
2016-10-01
Full Text Available Yusta W Simwita, Berit I Helgheim Department of Logistics, Molde University College, Molde, Norway Purpose: Worldwide more than two billion people lack appropriate access to surgical services due to mismatch between existing human resource and patient demands. Improving utilization of existing workforce capacity can reduce the existing gap between surgical demand and available workforce capacity. In this paper, the authors use discrete event simulation to explore the care process at an orthopedic department. Our main focus is improving utilization of surgeons while minimizing patient wait time.Methods: The authors collaborated with orthopedic department personnel to map the current operations of orthopedic care process in order to identify factors that influence poor surgeons utilization and high patient waiting time. The authors used an observational approach to collect data. The developed model was validated by comparing the simulation output with the actual patient data that were collected from the studied orthopedic care process. The authors developed a proposal scenario to show how to improve surgeon utilization.Results: The simulation results showed that if ancillary services could be performed before the start of clinic examination services, the orthopedic care process could be highly improved. That is, improved surgeon utilization and reduced patient waiting time. Simulation results demonstrate that with improved surgeon utilizations, up to 55% increase of future demand can be accommodated without patients reaching current waiting time at this clinic, thus, improving patient access to health care services.Conclusion: This study shows how simulation modeling can be used to improve health care processes. This study was limited to a single care process; however the findings can be applied to improve other orthopedic care process with similar operational characteristics. Keywords: waiting time, patient, health care process
Analog model for quantum gravity effects: phonons in random fluids.
Krein, G; Menezes, G; Svaiter, N F
2010-09-24
We describe an analog model for quantum gravity effects in condensed matter physics. The situation discussed is that of phonons propagating in a fluid with a random velocity wave equation. We consider that there are random fluctuations in the reciprocal of the bulk modulus of the system and study free phonons in the presence of Gaussian colored noise with zero mean. We show that, in this model, after performing the random averages over the noise function a free conventional scalar quantum field theory describing free phonons becomes a self-interacting model.
A cluster expansion approach to exponential random graph models
International Nuclear Information System (INIS)
Yin, Mei
2012-01-01
The exponential family of random graphs are among the most widely studied network models. We show that any exponential random graph model may alternatively be viewed as a lattice gas model with a finite Banach space norm. The system may then be treated using cluster expansion methods from statistical mechanics. In particular, we derive a convergent power series expansion for the limiting free energy in the case of small parameters. Since the free energy is the generating function for the expectations of other random variables, this characterizes the structure and behavior of the limiting network in this parameter region
A catastrophe model for the prospect-utility theory question.
Oliva, Terence A; McDade, Sean R
2008-07-01
Anomalies have played a big part in the analysis of decision making under risk. Both expected utility and prospect theories were born out of anomalies exhibited by actual decision making behavior. Since the same individual can use both expected utility and prospect approaches at different times, it seems there should be a means of uniting the two. This paper turns to nonlinear dynamical systems (NDS), specifically a catastrophe model, to help suggest an 'out of the box' line of solution toward integration. We use a cusp model to create a value surface whose control dimensions are involvement and gains versus losses. By including 'involvement' as a variable the importance of the individual's psychological state is included, and it provides a rationale for how decision makers' changes from expected utility to prospect might occur. Additionally, it provides a possible explanation for what appears to be even more irrational decisions that individuals make when highly emotionally involved. We estimate the catastrophe model using a sample of 997 gamblers who attended a casino and compare it to the linear model using regression. Hence, we have actual data from individuals making real bets, under real conditions.
Recent advances in modeling nutrient utilization in ruminants.
Kebreab, E; Dijkstra, J; Bannink, A; France, J
2009-04-01
Mathematical modeling techniques have been applied to study various aspects of the ruminant, such as rumen function, postabsorptive metabolism, and product composition. This review focuses on advances made in modeling rumen fermentation and its associated rumen disorders, and energy and nutrient utilization and excretion with respect to environmental issues. Accurate prediction of fermentation stoichiometry has an impact on estimating the type of energy-yielding substrate available to the animal, and the ratio of lipogenic to glucogenic VFA is an important determinant of methanogenesis. Recent advances in modeling VFA stoichiometry offer ways for dietary manipulation to shift the fermentation in favor of glucogenic VFA. Increasing energy to the animal by supplementing with starch can lead to health problems such as subacute rumen acidosis caused by rumen pH depression. Mathematical models have been developed to describe changes in rumen pH and rumen fermentation. Models that relate rumen temperature to rumen pH have also been developed and have the potential to aid in the diagnosis of subacute rumen acidosis. The effect of pH has been studied mechanistically, and in such models, fractional passage rate has a large impact on substrate degradation and microbial efficiency in the rumen and should be an important theme in future studies. The efficiency with which energy is utilized by ruminants has been updated in recent studies. Mechanistic models of N utilization indicate that reducing dietary protein concentration, matching protein degradability to the microbial requirement, and increasing the energy status of the animal will reduce the output of N as waste. Recent mechanistic P models calculate the P requirement by taking into account P recycled through saliva and endogenous losses. Mechanistic P models suggest reducing current P amounts for lactating dairy cattle to at least 0.35% P in the diet, with a potential reduction of up to 1.3 kt/yr. A model that
Premium Pricing of Liability Insurance Using Random Sum Model
Directory of Open Access Journals (Sweden)
Mujiati Dwi Kartikasari
2017-03-01
Premium pricing is one of the important activities in insurance. The nonlife insurance premium is calculated from the expected value of historical claim data. The historical claims form a sum of a random number of independent random variables, which is called a random sum. In premium pricing using a random sum, the claim frequency distribution and the claim severity distribution are combined; the combination of these distributions is called a compound distribution. Using liability insurance claim data, we analyze premium pricing with a random sum model based on a compound distribution.
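A minimal Monte Carlo sketch of premium pricing from a compound (random sum) distribution. The Poisson claim counts, lognormal severities, and proportional loading below are illustrative assumptions, not the distributions fitted in the paper.

```python
import numpy as np

def compound_premium(lam, sev_mean, sev_sd, loading=0.2, n_sim=20_000, seed=0):
    # simulate total claims S = X_1 + ... + X_N with N ~ Poisson(lam)
    rng = np.random.default_rng(seed)
    counts = rng.poisson(lam, n_sim)
    # lognormal severity parameters matched to the desired mean and sd
    sigma2 = np.log(1.0 + (sev_sd / sev_mean) ** 2)
    mu = np.log(sev_mean) - sigma2 / 2.0
    totals = np.array([rng.lognormal(mu, np.sqrt(sigma2), k).sum()
                       for k in counts])
    # expected-value principle: pure premium plus a proportional loading
    return float(totals.mean() * (1.0 + loading))
```

Since E[S] = E[N]·E[X] for a compound Poisson sum, the simulated pure premium should sit near lam × sev_mean before loading.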
Conditional Monte Carlo randomization tests for regression models.
Parhat, Parwen; Rosenberger, William F; Diao, Guoqing
2014-08-15
We discuss the computation of randomization tests for clinical trials of two treatments when the primary outcome is based on a regression model. We begin by revisiting the seminal paper of Gail, Tan, and Piantadosi (1988), and then describe a method based on Monte Carlo generation of randomization sequences. The tests based on this Monte Carlo procedure are design based, in that they incorporate the particular randomization procedure used. We discuss permuted block designs, complete randomization, and biased coin designs. We also use a new technique by Plamadeala and Rosenberger (2012) for simple computation of conditional randomization tests. Like Gail, Tan, and Piantadosi, we focus on residuals from generalized linear models and martingale residuals from survival models. Such techniques do not apply to longitudinal data analysis, and we introduce a method for computation of randomization tests based on the predicted rate of change from a generalized linear mixed model when outcomes are longitudinal. We show, by simulation, that these randomization tests preserve the size and power well under model misspecification. Copyright © 2014 John Wiley & Sons, Ltd.
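The Monte Carlo re-randomization idea can be sketched as follows, using a difference in group means as a stand-in for the model-based statistics (residuals, predicted rates of change) discussed above; complete randomization with fixed group sizes is assumed.

```python
import numpy as np

def randomization_test(y, treat, n_draws=2000, seed=0):
    # design-based p-value: re-draw the treatment assignment many times
    # and compare each re-randomized statistic to the observed one
    rng = np.random.default_rng(seed)
    def stat(t):
        return y[t == 1].mean() - y[t == 0].mean()
    observed = abs(stat(treat))
    extreme = 0
    for _ in range(n_draws):
        if abs(stat(rng.permutation(treat))) >= observed:
            extreme += 1
    # add-one correction keeps the p-value strictly positive
    return (extreme + 1) / (n_draws + 1)
```

For permuted block or biased coin designs, the `rng.permutation` step would be replaced by a draw from the actual randomization procedure used in the trial.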
The Ising model on the dynamically triangulated random surface
International Nuclear Information System (INIS)
Aleinov, I.D.; Migelal, A.A.; Zmushkow, U.V.
1990-01-01
The critical properties of the Ising model on a dynamically triangulated random surface embedded in D-dimensional Euclidean space are investigated. The strong coupling expansion method is used. The transition to the thermodynamic limit is performed by means of continued fractions.
Simulating WTP Values from Random-Coefficient Models
Maurus Rischatsch
2009-01-01
Discrete Choice Experiments (DCEs) designed to estimate willingness-to-pay (WTP) values are very popular in health economics. With increased computational power and advanced simulation techniques, random-coefficient models have gained increasing importance in applied work, as they allow for taste heterogeneity. This paper discusses the parametric derivation of WTP values from estimated random-coefficient models and shows how these values can be simulated in cases where they do not have a kn...
Approximating prediction uncertainty for random forest regression models
John W. Coulston; Christine E. Blinn; Valerie A. Thomas; Randolph H. Wynne
2016-01-01
Machine learning approaches such as random forest are increasingly used for the spatial modeling and mapping of continuous variables. Random forest is a non-parametric ensemble approach, and unlike traditional regression approaches there is no direct quantification of prediction error. Understanding prediction uncertainty is important when using model-based continuous maps as...
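One common way to approximate random forest prediction uncertainty is the spread of per-tree predictions. The scikit-learn sketch below illustrates that idea under stated assumptions; it is not necessarily the approximation method developed in the work cited.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def rf_prediction_interval(X_train, y_train, X_new, n_trees=100, seed=0):
    rf = RandomForestRegressor(n_estimators=n_trees, random_state=seed)
    rf.fit(X_train, y_train)
    # collect every tree's prediction; their spread is a rough proxy
    # for the ensemble's prediction uncertainty at each new point
    per_tree = np.stack([tree.predict(X_new) for tree in rf.estimators_])
    mean = per_tree.mean(axis=0)
    lo, hi = np.percentile(per_tree, [2.5, 97.5], axis=0)
    return mean, lo, hi
```

Note that per-tree spread captures model variability only, not irreducible noise, so these bands are narrower than true prediction intervals.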
A random spatial network model based on elementary postulates
Karlinger, Michael R.; Troutman, Brent M.
1989-01-01
A model for generating random spatial networks that is based on elementary postulates comparable to those of the random topology model is proposed. In contrast to the random topology model, this model ascribes a unique spatial specification to generated drainage networks, a distinguishing property of some network growth models. The simplicity of the postulates creates an opportunity for potential analytic investigations of the probabilistic structure of the drainage networks, while the spatial specification enables analyses of spatially dependent network properties. In the random topology model all drainage networks, conditioned on magnitude (number of first-order streams), are equally likely, whereas in this model all spanning trees of a grid, conditioned on area and drainage density, are equally likely. As a result, link lengths in the generated networks are not independent, as usually assumed in the random topology model. For a preliminary model evaluation, scale-dependent network characteristics, such as geometric diameter and link length properties, and topologic characteristics, such as bifurcation ratio, are computed for sets of drainage networks generated on square and rectangular grids. Statistics of the bifurcation and length ratios fall within the range of values reported for natural drainage networks, but geometric diameters tend to be relatively longer than those for natural networks.
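Sampling spanning trees of a grid uniformly at random, as the model's conditioning requires, can be done with Wilson's loop-erased random walk algorithm. The sketch below is a generic uniform-spanning-tree generator, not the authors' network model, and ignores the area and drainage-density conditioning.

```python
import random

def wilson_ust(n, m, seed=0):
    # uniform spanning tree of an n x m grid via Wilson's algorithm:
    # loop-erased random walks from each vertex to the growing tree
    rng = random.Random(seed)
    nodes = [(i, j) for i in range(n) for j in range(m)]
    def neighbors(v):
        i, j = v
        return [(a, b) for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                if 0 <= a < n and 0 <= b < m]
    in_tree = {nodes[0]}   # root the tree at the first node
    parent = {}
    for start in nodes:
        if start in in_tree:
            continue
        # random walk until hitting the tree, remembering the last exit taken
        v, path = start, {}
        while v not in in_tree:
            path[v] = rng.choice(neighbors(v))
            v = path[v]
        # retrace from start along the recorded exits: loops are erased
        v = start
        while v not in in_tree:
            parent[v] = path[v]
            in_tree.add(v)
            v = path[v]
    return parent  # the edges v -> parent[v] form the spanning tree
```

Every spanning tree of the grid is returned with equal probability, matching the "all spanning trees equally likely" property described above; link lengths in such trees are not independent, as the abstract notes.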
Zbikowski, Susan M; Jack, Lisa M; McClure, Jennifer B; Deprey, Mona; Javitz, Harold S; McAfee, Timothy A; Catz, Sheryl L; Richards, Julie; Bush, Terry; Swan, Gary E
2011-05-01
Phone counseling has become standard for behavioral smoking cessation treatment. Newer options include Web and integrated phone-Web treatment. No prior research, to our knowledge, has systematically compared the effectiveness of these three treatment modalities in a randomized trial. Understanding how utilization varies by mode, the impact of utilization on outcomes, and predictors of utilization across each mode could lead to improved treatments. One thousand two hundred and two participants were randomized to phone, Web, or combined phone-Web cessation treatment. Services varied by modality and were tracked using automated systems. All participants received 12 weeks of varenicline, printed guides, an orientation call, and access to a phone supportline. Self-report data were collected at baseline and 6-month follow-up. Overall, participants utilized phone services more often than the Web-based services. Among treatment groups with Web access, a significant proportion logged in only once (37% phone-Web, 41% Web), and those in the phone-Web group logged in less often than those in the Web group (mean = 2.4 vs. 3.7, p = .0001). Use of the phone also was correlated with increased use of the Web. In multivariate analyses, greater use of the phone- or Web-based services was associated with higher cessation rates. Finally, older age and the belief that certain treatments could improve success were consistent predictors of greater utilization across groups. Other predictors varied by treatment group. Opportunities for enhancing treatment utilization exist, particularly for Web-based programs. Increasing utilization more broadly could result in better overall treatment effectiveness for all intervention modalities.
Application of random regression models to the genetic evaluation ...
African Journals Online (AJOL)
The model included fixed regression on AM (range from 30 to 138 mo) and the effect of herd-measurement date concatenation. Random parts of the model were RRM coefficients for additive and permanent environmental effects, while residual effects were modelled to account for heterogeneity of variance by AY. Estimates ...
Random regression models for detection of gene by environment interaction
Directory of Open Access Journals (Sweden)
Meuwissen Theo HE
2007-02-01
Two random regression models, where the effect of a putative QTL was regressed on an environmental gradient, are described. The first model estimates the correlation between intercept and slope of the random regression, while the other model restricts this correlation to 1 or -1, which is expected under a bi-allelic QTL model. The random regression models were compared to a model assuming no gene by environment interactions. The comparison was done with regard to the models' ability to detect QTL, to position them accurately and to detect possible QTL by environment interactions. A simulation study based on a granddaughter design was conducted, and QTL were assumed, either by assigning an effect independent of the environment or as a linear function of a simulated environmental gradient. It was concluded that the random regression models were suitable for detection of QTL effects, in the presence and absence of interactions with environmental gradients. Fixing the correlation between intercept and slope of the random regression had a positive effect on power when the QTL effects re-ranked between environments.
Modeling of ultrasonic processes utilizing a generic software framework
Bruns, P.; Twiefel, J.; Wallaschek, J.
2017-06-01
Modeling of ultrasonic processes is typically characterized by a high degree of complexity. Different domains and size scales must be considered, so that it is rather difficult to build up a single detailed overall model. Developing partial models is a common approach to overcome this difficulty. In this paper a generic but simple software framework is presented which allows coupling of arbitrary partial models via slave modules with well-defined interfaces and a master module for coordination. Two examples are given to demonstrate the developed framework. The first one is the parameterization of a load model for ultrasonically-induced cavitation. The piezoelectric oscillator, its mounting, and the process load are described individually by partial models. These partial models are then coupled using the framework. The load model is composed of spring-damper elements which are parameterized by experimental results. In the second example, the ideal mounting position for an oscillator utilized in ultrasonic assisted machining of stone is determined. Partial models for the ultrasonic oscillator, its mounting, the simplified contact process, and the workpiece's material characteristics are presented. For both applications, input and output variables are defined to meet the requirements of the framework's interface.
Risk Decision Making Model for Reservoir Floodwater resources Utilization
Huang, X.
2017-12-01
Floodwater resources utilization (FRU) can alleviate the shortage of water resources, but it carries risks. In order to utilize floodwater resources safely and efficiently, it is necessary to study the risk of reservoir FRU. In this paper, the risk rate of exceeding the design flood water level and the risk rate of exceeding the safety discharge are estimated. Based on the principle of minimum risk and maximum benefit of FRU, a multi-objective risk decision making model for FRU is constructed. Probability theory and mathematical statistics methods are used to calculate the risk rate; the C-D production function method and the emergy analysis method are used to calculate the risk benefit; the risk loss is related to the flood inundation area and the loss per unit area; the multi-objective decision making problem of the model is solved by the constraint method. Taking the Shilianghe reservoir in Jiangsu Province as an example, the optimal equilibrium solution of FRU for the Shilianghe reservoir is found using the risk decision making model, and the validity and applicability of the model are verified.
International Nuclear Information System (INIS)
Ovchinnikov, O. S.; Jesse, S.; Kalinin, S. V.; Bintacchit, P.; Trolier-McKinstry, S.
2009-01-01
An approach for the direct identification of disorder type and strength in physical systems, based on recognition analysis of hysteresis loop shape, is developed. A large number of theoretical examples uniformly distributed in the parameter space of the system is generated and decorrelated using principal component analysis (PCA). The PCA components are used to train a feed-forward neural network using the model parameters as targets. The trained network is then used to analyze hysteresis loops for the investigated system. The approach is demonstrated using a 2D random-bond random-field Ising model, and polarization switching in polycrystalline ferroelectric capacitors.
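The pipeline this abstract describes (simulate loops across the parameter space, decorrelate with PCA, learn a map back to the parameters) can be sketched in a few lines of NumPy. Everything below is invented for illustration: a toy tanh loop stands in for the physical model, and ordinary least squares stands in for the feed-forward network.

```python
import numpy as np

rng = np.random.default_rng(0)
E = np.linspace(-1, 1, 64)                      # field axis

def loop(coercive, sat):
    """Toy hysteresis loop: two tanh branches stacked into one vector."""
    up = sat * np.tanh(4 * (E - coercive))
    dn = sat * np.tanh(4 * (E + coercive))
    return np.concatenate([up, dn])

# 1) generate examples uniformly over the parameter space
params = rng.uniform([0.1, 0.5], [0.6, 1.5], size=(500, 2))
X = np.array([loop(c, s) for c, s in params])

# 2) decorrelate with PCA (SVD of the centered data)
mu = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
scores = (X - mu) @ Vt[:5].T                    # first 5 components

# 3) map components -> parameters (least squares stands in for
#    the feed-forward network of the original approach)
A = np.hstack([scores, np.ones((len(scores), 1))])
W, *_ = np.linalg.lstsq(A, params, rcond=None)

# recover parameters of an unseen loop
test_p = np.array([0.3, 1.0])
z = (loop(*test_p) - mu) @ Vt[:5].T
est = np.append(z, 1.0) @ W
print(np.round(est, 2))
```

A real implementation would replace the linear map with a trained network and the toy loop with simulated loops of the 2D random-bond random-field Ising model.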
A generalized model via random walks for information filtering
International Nuclear Information System (INIS)
Ren, Zhuo-Ming; Kong, Yixiu; Shang, Ming-Sheng; Zhang, Yi-Cheng
2016-01-01
There could exist a simple general mechanism lurking beneath collaborative filtering and the interdisciplinary physics approaches that have been successfully applied to online E-commerce platforms. Motivated by this idea, we propose a generalized model employing the dynamics of the random walk in bipartite networks. By taking the degree information into account, the proposed generalized model can recover collaborative filtering, the interdisciplinary physics approaches, and even their many extensions. Furthermore, we analyze the generalized model with single and hybrid degree information in the random walk process on bipartite networks, and propose a possible strategy that uses hybrid degree information for objects of different popularity to achieve promising precision of the recommendation. - Highlights: • We propose a generalized recommendation model employing the random walk dynamics. • The proposed model with single and hybrid degree information is analyzed. • A strategy with the hybrid degree information improves precision of recommendation.
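A concrete special case of the degree-weighted random walk described here is the mass-diffusion (ProbS) recommender: resource placed on a user's collected objects spreads to users and back to objects, split equally by degree at each step. The toy adjacency matrix below is invented for illustration; the generalized model of the paper would replace the equal splits with degree-dependent weights.

```python
import numpy as np

# toy user-object bipartite adjacency: A[u, o] = 1 if user u collected object o
A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]], dtype=float)

def probs_scores(A):
    """Mass-diffusion (ProbS) scores: resource spreads object -> user ->
    object, divided equally by the degree at each step."""
    ku = A.sum(axis=1, keepdims=True)   # user degrees, shape (U, 1)
    ko = A.sum(axis=0, keepdims=True)   # object degrees, shape (1, O)
    W = (A / ku).T @ (A / ko)           # W[a, b]: share of b's resource reaching a
    return A @ W.T                      # row u: scores of all objects for user u

scores = probs_scores(A)
rec = np.where(A > 0, -np.inf, scores)  # only recommend uncollected objects
print(rec.argmax(axis=1))               # top recommendation per user
```

A useful sanity check is conservation: mass diffusion redistributes but does not create resource, so each user's scores sum back to that user's degree.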
A generalized model via random walks for information filtering
Energy Technology Data Exchange (ETDEWEB)
Ren, Zhuo-Ming, E-mail: zhuomingren@gmail.com [Department of Physics, University of Fribourg, Chemin du Musée 3, CH-1700, Fribourg (Switzerland); Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, ChongQing, 400714 (China); Kong, Yixiu [Department of Physics, University of Fribourg, Chemin du Musée 3, CH-1700, Fribourg (Switzerland); Shang, Ming-Sheng, E-mail: msshang@cigit.ac.cn [Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, ChongQing, 400714 (China); Zhang, Yi-Cheng [Department of Physics, University of Fribourg, Chemin du Musée 3, CH-1700, Fribourg (Switzerland)
2016-08-06
There could exist a simple general mechanism lurking beneath collaborative filtering and the interdisciplinary physics approaches that have been successfully applied to online E-commerce platforms. Motivated by this idea, we propose a generalized model employing the dynamics of the random walk in bipartite networks. By taking the degree information into account, the proposed generalized model can recover collaborative filtering, the interdisciplinary physics approaches, and even their many extensions. Furthermore, we analyze the generalized model with single and hybrid degree information in the random walk process on bipartite networks, and propose a possible strategy that uses hybrid degree information for objects of different popularity to achieve promising precision of the recommendation. - Highlights: • We propose a generalized recommendation model employing the random walk dynamics. • The proposed model with single and hybrid degree information is analyzed. • A strategy with the hybrid degree information improves precision of recommendation.
Money Creation in a Random Matching Model
Alexei Deviatov
2006-01-01
I study money creation in versions of the Trejos-Wright (1995) and Shi (1995) models with indivisible money and individual holdings bounded at two units. I work with the same class of policies as in Deviatov and Wallace (2001), who study money creation in that model. However, I consider an alternative notion of implementability–the ex ante pairwise core. I compute a set of numerical examples to determine whether money creation is beneficial. I find beneficial effects of money creation if indiv...
Utility of Small Animal Models of Developmental Programming.
Reynolds, Clare M; Vickers, Mark H
2018-01-01
Any effective strategy to tackle the global obesity and rising noncommunicable disease epidemic requires an in-depth understanding of the mechanisms that underlie these conditions that manifest as a consequence of complex gene-environment interactions. In this context, it is now well established that alterations in the early life environment, including suboptimal nutrition, can result in an increased risk for a range of metabolic, cardiovascular, and behavioral disorders in later life, a process preferentially termed developmental programming. To date, most of the mechanistic knowledge around the processes underpinning developmental programming has been derived from preclinical research performed mostly, but not exclusively, in laboratory mouse and rat strains. This review will cover the utility of small animal models in developmental programming, the limitations of such models, and potential future directions that are required to fully maximize information derived from preclinical models in order to effectively translate to clinical use.
Animal models of myasthenia gravis: utility and limitations
Mantegazza, Renato; Cordiglieri, Chiara; Consonni, Alessandra; Baggi, Fulvio
2016-01-01
Myasthenia gravis (MG) is a chronic autoimmune disease caused by the immune attack of the neuromuscular junction. Antibodies directed against the acetylcholine receptor (AChR) induce receptor degradation, complement cascade activation, and postsynaptic membrane destruction, resulting in functional reduction in AChR availability. Besides anti-AChR antibodies, other autoantibodies are known to play pathogenic roles in MG. The experimental autoimmune MG (EAMG) models have been of great help over the years in understanding the pathophysiological role of specific autoantibodies and T helper lymphocytes and in suggesting new therapies for prevention and modulation of the ongoing disease. EAMG can be induced in mice and rats of susceptible strains that show clinical symptoms mimicking the human disease. EAMG models are helpful for studying both the muscle and the immune compartments to evaluate new treatment perspectives. In this review, we concentrate on recent findings on EAMG models, focusing on their utility and limitations. PMID:27019601
Random effects models in clinical research
Cleophas, T. J.; Zwinderman, A. H.
2008-01-01
BACKGROUND: In clinical trials a fixed effects research model assumes that the patients selected for a specific treatment have the same true quantitative effect and that the differences observed are residual error. If, however, we have reasons to believe that certain patients respond differently
Fenemor, S P; Homer, A R; Perry, T L; Skeaff, C M; Peddie, M C; Rehrer, N J
2018-06-01
To quantify and compare energy utilization associated with prolonged sitting alone, or interrupted with regular activity breaks and/or an additional bout of continuous physical activity. Thirty-six adults (11 males, BMI 24.1 ± 4.6) completed four interventions: (1) prolonged sitting (SIT), (2) sitting with 2 min of walking every 30 min (RAB), (3) prolonged sitting with 30 min of continuous walking at the end of the day (SIT + PA), (4) a combination of the activities in (2) and (3) above (RAB + PA). All walking was at a speed and incline corresponding to 60% V̇O2max. Energy utilization over 7 h for each intervention was estimated using indirect calorimetry. Compared to SIT, SIT + PA increased total energy utilization by 709 kJ (95% CI 485-933 kJ), RAB by 863 kJ (95% CI 638-1088 kJ), and RAB + PA by 1752 kJ (95% CI 1527-1927 kJ) (all p < 0.05). There was no difference in total energy utilization between SIT + PA and RAB; however, post-physical activity energy utilization in RAB was 632 kJ greater than in SIT + PA (95% CI 561-704 kJ; p < 0.05). Regular activity breaks increase post-activity energy utilization compared to a single bout of continuous activity; however, the total energy utilization is similar. Combining activity breaks with a longer continuous bout of activity will further enhance energy utilization and, in the longer term, may positively affect weight management to a greater extent than either activity pattern performed alone. ANZCTR12614000624684. Copyright © 2018 The Italian Society of Diabetology, the Italian Society for the Study of Atherosclerosis, the Italian Society of Human Nutrition, and the Department of Clinical Medicine and Surgery, Federico II University. Published by Elsevier B.V. All rights reserved.
Money creation process in a random redistribution model
Chen, Siyan; Wang, Yougui; Li, Keqiang; Wu, Jinshan
2014-01-01
In this paper, the dynamical process of money creation in a random exchange model with debt is investigated. The money creation kinetics are analyzed by both the money-transfer matrix method and the diffusion method. From both approaches, we attain the same conclusion: the source of money creation in the case of random exchange is the agents with neither money nor debt. These analytical results are demonstrated by computer simulations.
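The mechanism can be reproduced with a minimal random transfer simulation (all parameters invented for illustration): whenever a zero-balance agent pays by going into debt, aggregate money and aggregate debt rise together, so the two totals remain equal when everyone starts from zero.

```python
import random

def simulate(n=500, steps=20000, debt_limit=3, seed=1):
    """Random transfer model with debt: each step, a random payer gives
    one unit to a random payee, provided the payer stays within the
    debt limit. Returns the final balances."""
    rng = random.Random(seed)
    bal = [0] * n                       # everyone starts with neither money nor debt
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)  # payer i, payee j
        if bal[i] > -debt_limit:        # payer may borrow up to the limit
            bal[i] -= 1
            bal[j] += 1
    return bal

bal = simulate()
money = sum(b for b in bal if b > 0)    # aggregate (positive) money
debt = -sum(b for b in bal if b < 0)    # aggregate debt
print(money, debt)
```

Net worth is conserved at zero in every trade, so the printed totals coincide: money creation is exactly mirrored by debt creation, consistent with the abstract's conclusion about its source.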
(Non-) Gibbsianness and Phase Transitions in Random Lattice Spin Models
Külske, C.
1999-01-01
We consider disordered lattice spin models with finite-volume Gibbs measures µΛ[η](dσ). Here σ denotes a lattice spin variable and η a lattice random variable with product distribution P describing the quenched disorder of the model. We ask: when will the joint measures limΛ↑Zd P(dη)µΛ[η](dσ) be
Shape Modelling Using Markov Random Field Restoration of Point Correspondences
DEFF Research Database (Denmark)
Paulsen, Rasmus Reinhold; Hilger, Klaus Baggesen
2003-01-01
A method for building statistical point distribution models is proposed. The novelty in this paper is the adaptation of Markov random field regularization of the correspondence field over the set of shapes. The new approach leads to a generative model that produces highly homogeneous polygonized sh...
Simulating intrafraction prostate motion with a random walk model
Directory of Open Access Journals (Sweden)
Tobias Pommer, PhD
2017-07-01
Conclusions: Random walk modeling is feasible and recreated the characteristics of the observed prostate motion. Introducing artificial transient motion did not improve the overall agreement, although the first 30 seconds of the traces were better reproduced. The model provides a simple estimate of prostate motion during delivery of radiation therapy.
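A random walk of the kind this conclusion refers to can be sketched with a few lines of NumPy. The model below is a generic 2D random walk with a weak pull back toward the starting position; all parameter values are invented for illustration and are not fitted to patient data.

```python
import numpy as np

def simulate_motion(n_traces=200, duration=120.0, dt=0.5, step_sd=0.03,
                    pull=0.01, seed=42):
    """Toy intrafraction-motion model: a 2D random walk whose position
    decays slightly toward the origin each step (an AR(1) process)."""
    rng = np.random.default_rng(seed)
    n = int(duration / dt)
    pos = np.zeros((n_traces, n + 1, 2))
    for t in range(n):
        step = rng.normal(0.0, step_sd, size=(n_traces, 2))
        pos[:, t + 1] = pos[:, t] * (1 - pull) + step
    return pos

pos = simulate_motion()
disp = np.linalg.norm(pos[:, -1], axis=1)   # displacement at end (arbitrary units)
print(round(float(disp.mean()), 3))
```

The pull term keeps the walk from drifting without bound, mimicking the bounded excursions typical of observed prostate motion; the transient-motion variant discussed in the abstract would add occasional large jumps on top of this process.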
Single-cluster dynamics for the random-cluster model
Deng, Y.; Qian, X.; Blöte, H.W.J.
2009-01-01
We formulate a single-cluster Monte Carlo algorithm for the simulation of the random-cluster model. This algorithm is a generalization of the Wolff single-cluster method for the q-state Potts model to noninteger values q>1. Its results for static quantities are in a satisfactory agreement with those
Application of Poisson random effect models for highway network screening.
Jiang, Ximiao; Abdel-Aty, Mohamed; Alamili, Samer
2014-02-01
In recent years, Bayesian random effect models that account for the temporal and spatial correlations of crash data have become popular in traffic safety research. This study employs random effect Poisson Log-Normal models for crash risk hotspot identification. Both the temporal and spatial correlations of crash data were considered. Potential for Safety Improvement (PSI) was adopted as a measure of the crash risk. Using the fatal and injury crashes that occurred on urban 4-lane divided arterials from 2006 to 2009 in the Central Florida area, the random effect approaches were compared to the traditional Empirical Bayesian (EB) method and the conventional Bayesian Poisson Log-Normal model. A series of method examination tests were conducted to evaluate the performance of the different approaches. These tests include the previously developed site consistency test, method consistency test, total rank difference test, and modified total score test, as well as the newly proposed total safety performance measure difference test. Results show that the Bayesian Poisson model accounting for both temporal and spatial random effects (PTSRE) outperforms the model with only a temporal random effect, and both are superior to the conventional Poisson Log-Normal model (PLN) and the EB model in fitting the crash data. Additionally, the method evaluation tests indicate that the PTSRE model is significantly superior to the PLN model and the EB model in consistently identifying hotspots during successive time periods. The results suggest that the PTSRE model is a superior alternative for road site crash risk hotspot identification. Copyright © 2013 Elsevier Ltd. All rights reserved.
A note on moving average models for Gaussian random fields
DEFF Research Database (Denmark)
Hansen, Linda Vadgård; Thorarinsdottir, Thordis L.
The class of moving average models offers a flexible modeling framework for Gaussian random fields, with many well known models such as the Matérn covariance family and the Gaussian covariance falling under this framework. Moving average models may also be viewed as a kernel smoothing of a Lévy basis, a general modeling framework which includes several types of non-Gaussian models. We propose a new one-parameter spatial correlation model which arises from a power kernel and show that the associated Hausdorff dimension of the sample paths can take any value between 2 and 3. As a result...
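The moving-average construction is easy to demonstrate numerically: convolve white noise with a kernel and the result is a Gaussian random field whose covariance is the kernel's autocorrelation. The power kernel below is a toy stand-in with an invented truncation and scale, not the paper's exact one-parameter model.

```python
import numpy as np

def power_kernel_field(n=128, alpha=1.0, scale=10.0, seed=3):
    """Gaussian random field on an n x n grid built as a moving average:
    white noise convolved (periodically, via FFT) with a truncated
    power kernel k(h) ~ (1 + |h|/scale)^(-alpha)."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(size=(n, n))
    x = np.arange(n) - n // 2
    hx, hy = np.meshgrid(x, x)
    h = np.hypot(hx, hy)                      # distance from kernel center
    k = (1.0 + h / scale) ** (-alpha)
    k[h > 4 * scale] = 0.0                    # truncate the kernel support
    # periodic convolution via FFT; ifftshift moves the kernel center to (0, 0)
    field = np.real(np.fft.ifft2(np.fft.fft2(noise) *
                                 np.fft.fft2(np.fft.ifftshift(k))))
    return field / field.std()                # normalize to unit variance

f = power_kernel_field()
print(f.shape)
```

Slowly decaying kernels give long-range correlation and smoother-looking fields; the kernel exponent is what controls the roughness (and hence the Hausdorff dimension) discussed in the abstract.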
Directory of Open Access Journals (Sweden)
Steven T Piantadosi
2015-04-01
Economists often model choices as if decision-makers assign each option a scalar value variable, known as utility, and then select the option with the highest utility. It remains unclear whether as-if utility models describe real mental and neural steps in choice. Although choices alone cannot prove the existence of a utility stage in choice, utility transformations are often taken to provide the most parsimonious or psychologically plausible explanation for choice data. Here, we show that it is possible to mathematically transform a large set of common utility-stage two-option choice models (specifically, ones in which dimensions are linearly separable) into a psychologically plausible heuristic model (specifically, a dimensional prioritization heuristic) that has no utility computation stage. We then show that under a range of plausible assumptions, both classes of model predict similar neural responses. These results highlight the difficulties in using neuroeconomic data to infer the existence of a value stage in choice.
Piantadosi, Steven T.; Hayden, Benjamin Y.
2015-01-01
Economists often model choices as if decision-makers assign each option a scalar value variable, known as utility, and then select the option with the highest utility. It remains unclear whether as-if utility models describe real mental and neural steps in choice. Although choices alone cannot prove the existence of a utility stage, utility transformations are often taken to provide the most parsimonious or psychologically plausible explanation for choice data. Here, we show that it is possible to mathematically transform a large set of common utility-stage two-option choice models (specifically, ones in which dimensions can be decomposed into additive functions) into a heuristic model (specifically, a dimensional prioritization heuristic) that has no utility computation stage. We then show that under a range of plausible assumptions, both classes of model predict similar neural responses. These results highlight the difficulties in using neuroeconomic data to infer the existence of a value stage in choice. PMID:25914613
Animal models of GM2 gangliosidosis: utility and limitations
Directory of Open Access Journals (Sweden)
Lawson CA
2016-07-01
Cheryl A Lawson,1,2 Douglas R Martin2,3 (1Department of Pathobiology, 2Scott-Ritchey Research Center, 3Department of Anatomy, Physiology and Pharmacology, Auburn University College of Veterinary Medicine, Auburn, AL, USA) Abstract: GM2 gangliosidosis, a subset of lysosomal storage disorders, is caused by a deficiency of the glycohydrolase, β-N-acetylhexosaminidase, and includes the closely related Tay–Sachs and Sandhoff diseases. The enzyme deficiency prevents the normal, stepwise degradation of ganglioside, which accumulates unchecked within the cellular lysosome, particularly in neurons. As a result, individuals with GM2 gangliosidosis experience progressive neurological diseases including motor deficits, progressive weakness and hypotonia, decreased responsiveness, vision deterioration, and seizures. Mice and cats are well-established animal models for Sandhoff disease, whereas Jacob sheep are the only known laboratory animal model of Tay–Sachs disease to exhibit clinical symptoms. Since the human diseases are relatively rare, animal models are indispensable tools for further study of pathogenesis and for development of potential treatments. Though no effective treatments for gangliosidoses currently exist, animal models have been used to test promising experimental therapies. Herein, the utility and limitations of gangliosidosis animal models and how they have contributed to the development of potential new treatments are described. Keywords: GM2 gangliosidosis, Tay–Sachs disease, Sandhoff disease, lysosomal storage disorder, sphingolipidosis, brain disease
Animal models of GM2 gangliosidosis: utility and limitations
Lawson, Cheryl A; Martin, Douglas R
2016-01-01
GM2 gangliosidosis, a subset of lysosomal storage disorders, is caused by a deficiency of the glycohydrolase, β-N-acetylhexosaminidase, and includes the closely related Tay–Sachs and Sandhoff diseases. The enzyme deficiency prevents the normal, stepwise degradation of ganglioside, which accumulates unchecked within the cellular lysosome, particularly in neurons. As a result, individuals with GM2 gangliosidosis experience progressive neurological diseases including motor deficits, progressive weakness and hypotonia, decreased responsiveness, vision deterioration, and seizures. Mice and cats are well-established animal models for Sandhoff disease, whereas Jacob sheep are the only known laboratory animal model of Tay–Sachs disease to exhibit clinical symptoms. Since the human diseases are relatively rare, animal models are indispensable tools for further study of pathogenesis and for development of potential treatments. Though no effective treatments for gangliosidoses currently exist, animal models have been used to test promising experimental therapies. Herein, the utility and limitations of gangliosidosis animal models and how they have contributed to the development of potential new treatments are described. PMID:27499644
The hard-core model on random graphs revisited
International Nuclear Information System (INIS)
Barbier, Jean; Krzakala, Florent; Zhang, Pan; Zdeborová, Lenka
2013-01-01
We revisit the classical hard-core model, also known as independent set and dual to the vertex cover problem, where one puts particles with a first-neighbor hard-core repulsion on the vertices of a random graph. Although the cases of random graphs with small and with very large average degrees are quite well understood, they yield qualitatively different results and our aim here is to reconcile these two cases. We revisit results that can be obtained using the (heuristic) cavity method and show that it provides a closed-form conjecture for the exact density of the densest packing on random regular graphs with degree K ≥ 20, and that for K > 16 the nature of the phase transition is the same as for large K. This also shows that the hard-core model is the simplest mean-field lattice model for structural glasses and jamming.
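The hard-core measure itself is simple to sample with Glauber (heat-bath) dynamics: resample one vertex at a time, occupying it with probability λ/(1+λ) only if no neighbor is occupied. The sketch below uses an Erdős–Rényi graph as a stand-in for the random regular graphs of the paper; the size, degree, and fugacity are arbitrary choices.

```python
import random

def hardcore_gibbs(n=200, k=4, lam=2.0, sweeps=200, seed=7):
    """Glauber dynamics for the hard-core model: occupied vertices never
    share an edge; an unblocked vertex is occupied with prob lam/(1+lam).
    The graph is Erdos-Renyi with average degree ~k."""
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]
    p = k / (n - 1)
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    occ = [False] * n                       # empty configuration is valid
    for _ in range(sweeps):
        for v in range(n):
            if any(occ[u] for u in adj[v]):
                occ[v] = False              # blocked by an occupied neighbor
            else:
                occ[v] = rng.random() < lam / (1 + lam)
    return adj, occ

adj, occ = hardcore_gibbs()
density = sum(occ) / len(occ)
# hard-core constraint: no edge joins two occupied vertices
ok = all(not (occ[v] and occ[u]) for v in range(len(adj)) for u in adj[v])
print(round(density, 2), ok)
```

At large fugacity this local dynamics slows down dramatically on high-degree graphs, which is one face of the glassy behavior the cavity-method analysis addresses.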
Lamplighter model of a random copolymer adsorption on a line
Directory of Open Access Journals (Sweden)
L.I. Nazarov
2014-09-01
We present a model of an AB-diblock random copolymer sequential self-packaging with local quenched interactions on a one-dimensional infinite sticky substrate. It is assumed that the A-A and B-B contacts are favorable, while A-B are not. The position of a newly added monomer is selected in view of the local contact energy minimization. The model demonstrates a self-organization behavior with the nontrivial dependence of the total energy, E (the number of unfavorable contacts), on the number of chain monomers, N: E ~ N^3/4 for a quenched random equally probable distribution of A- and B-monomers along the chain. The model is treated by mapping it onto the "lamplighter" random walk and the diffusion-controlled chemical reaction of X+X → 0 type with the subdiffusive motion of reagents.
Some Limits Using Random Slope Models to Measure Academic Growth
Directory of Open Access Journals (Sweden)
Daniel B. Wright
2017-11-01
Academic growth is often estimated using a random slope multilevel model with several years of data. However, if there are few time points, the estimates can be unreliable. While using random slope multilevel models can lower the variance of the estimates, these procedures can produce more highly erroneous estimates—zero and negative correlations with the true underlying growth—than using ordinary least squares estimates calculated for each student or school individually. An example is provided where schools with increasing graduation rates are estimated to have negative growth and vice versa. The estimation is worse when the underlying data are skewed. It is recommended that there be at least six time points for estimating growth if using a random slope model. A combination of methods can be used to avoid some of the aberrant results if it is not possible to have six or more time points.
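The core difficulty (few time points make per-unit slope estimates mostly noise) can be demonstrated with a small simulation. The numbers below are invented for illustration, not the article's data; ordinary least squares slopes are computed per unit, without the multilevel shrinkage the article critiques.

```python
import numpy as np

def slope_recovery(n_units=300, n_times=3, true_sd=0.1, noise_sd=1.0, seed=0):
    """Correlation between true growth slopes and per-unit OLS slopes.
    With centered time, the OLS slope is sum(t*y) / sum(t^2)."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_times) - (n_times - 1) / 2        # centered time axis
    slopes = rng.normal(0.0, true_sd, n_units)        # true growth rates
    y = slopes[:, None] * t + rng.normal(0.0, noise_sd, (n_units, n_times))
    ols = (y * t).sum(axis=1) / (t ** 2).sum()        # per-unit OLS slopes
    return float(np.corrcoef(slopes, ols)[0, 1])

r3 = slope_recovery(n_times=3)   # three waves: estimates dominated by noise
r8 = slope_recovery(n_times=8)   # more waves: much better recovery
print(round(r3, 2), round(r8, 2))
```

With three waves the sampling variance of each OLS slope (noise_sd²/Σt²) dwarfs the true slope variance, so the estimated growth correlates only weakly with the truth; adding time points shrinks that variance and the correlation climbs, which is the intuition behind the six-time-point recommendation.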
The random field Blume-Capel model revisited
Santos, P. V.; da Costa, F. A.; de Araújo, J. M.
2018-04-01
We have revisited the mean-field treatment of the Blume-Capel model in the presence of a discrete random magnetic field, as introduced by Kaufman and Kanner (1990). The magnetic field (H) versus temperature (T) phase diagrams for given values of the crystal field D were recovered in accordance with Kaufman and Kanner's original work. However, our main goal in the present work was to investigate the distinct structures of the crystal field versus temperature phase diagrams as the random magnetic field is varied, because similar models have presented reentrant phenomena due to randomness. Following previous works, we have classified the distinct phase diagrams according to five different topologies. The topological structure of the phase diagrams is maintained for both the H - T and D - T cases. Although the phase diagrams exhibit a richness of multicritical phenomena, we did not find any reentrant effect such as has been seen in similar models.
A workflow learning model to improve geovisual analytics utility.
Roth, Robert E; Maceachren, Alan M; McCabe, Craig A
2009-01-01
INTRODUCTION: This paper describes the design and implementation of the G-EX Portal Learn Module, a web-based, geocollaborative application for organizing and distributing digital learning artifacts. G-EX falls into the broader context of geovisual analytics, a new research area with the goal of supporting visually-mediated reasoning about large, multivariate, spatiotemporal information. Because this information is unprecedented in amount and complexity, GIScientists are tasked with the development of new tools and techniques to make sense of it. Our research addresses the challenge of implementing these geovisual analytics tools and techniques in a useful manner. OBJECTIVES: The objective of this paper is to develop and implement a method for improving the utility of geovisual analytics software. The success of software is measured by its usability (i.e., how easy the software is to use) and its utility (i.e., how useful the software is). The usability and utility of software can be improved by refining the software, increasing user knowledge about the software, or both. It is difficult to achieve transparent usability (i.e., software that is immediately usable without training) of geovisual analytics software because of the inherent complexity of the included tools and techniques. In these situations, improving user knowledge about the software through the provision of learning artifacts is as important, if not more so, than iterative refinement of the software itself. Therefore, our approach to improving utility is focused on educating the user. METHODOLOGY: The research reported here was completed in two steps. First, we developed a model for learning about geovisual analytics software. Many existing digital learning models assist only with use of the software to complete a specific task and provide limited assistance with its actual application. To move beyond task-oriented learning about software use, we propose a process-oriented approach to learning based on
Effects of random noise in a dynamical model of love
Energy Technology Data Exchange (ETDEWEB)
Xu Yong, E-mail: hsux3@nwpu.edu.cn [Department of Applied Mathematics, Northwestern Polytechnical University, Xi'an 710072 (China); Gu Rencai; Zhang Huiqing [Department of Applied Mathematics, Northwestern Polytechnical University, Xi'an 710072 (China)]
2011-07-15
Highlights: > We model the complexity and unpredictability of psychology as Gaussian white noise. > The stochastic system of love is considered, including bifurcation and chaos. > We show that noise can both suppress and induce chaos in dynamical models of love. - Abstract: This paper investigates a stochastic model of love and the effects of random noise. We first revisit the deterministic model of love and present some basic properties, such as symmetry, dissipation, fixed points (equilibria), chaotic behavior and chaotic attractors. We then construct a stochastic love-triangle model with parametric random excitation, reflecting the complexity and unpredictability of the psychological system; the randomness is modeled as standard Gaussian noise. Stochastic dynamics under three different cases of 'Romeo's romantic style' are examined, and two kinds of bifurcations versus the noise intensity parameter are observed, by the criteria of changes in the top Lyapunov exponent and in the shape of the stationary probability density function (PDF), respectively. Phase portraits and time histories are computed to verify the proposed results, and good agreement is found. The dual roles of random noise, namely suppressing and inducing chaos, are also revealed.
Effects of random noise in a dynamical model of love
International Nuclear Information System (INIS)
Xu Yong; Gu Rencai; Zhang Huiqing
2011-01-01
Highlights: → We model the complexity and unpredictability of psychology as Gaussian white noise. → The stochastic system of love is considered, including bifurcation and chaos. → We show that noise can both suppress and induce chaos in dynamical models of love. - Abstract: This paper investigates a stochastic model of love and the effects of random noise. We first revisit the deterministic model of love and present some basic properties, such as symmetry, dissipation, fixed points (equilibria), chaotic behavior and chaotic attractors. We then construct a stochastic love-triangle model with parametric random excitation, reflecting the complexity and unpredictability of the psychological system; the randomness is modeled as standard Gaussian noise. Stochastic dynamics under three different cases of 'Romeo's romantic style' are examined, and two kinds of bifurcations versus the noise intensity parameter are observed, by the criteria of changes in the top Lyapunov exponent and in the shape of the stationary probability density function (PDF), respectively. Phase portraits and time histories are computed to verify the proposed results, and good agreement is found. The dual roles of random noise, namely suppressing and inducing chaos, are also revealed.
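The stochastic dynamics described above can be sketched numerically. Below is a minimal Euler-Maruyama integration of a two-person linear love model with parametric Gaussian noise on one equation; the coefficients, noise intensity and step size are illustrative assumptions, not the paper's actual parameter values or its three-person love-triangle system.

```python
import math
import random

def simulate_love(a=-1.0, b=2.0, c=-2.0, d=-1.0, sigma=0.2,
                  x0=1.0, y0=0.0, dt=0.01, steps=5000, seed=1):
    """Euler-Maruyama integration of a linear two-person model,
        dx = (a*x + b*y) dt + sigma * x dW   (parametric excitation)
        dy = (c*x + d*y) dt
    x and y are the two feelings; all parameter values are illustrative."""
    rng = random.Random(seed)
    x, y = x0, y0
    traj = [(x, y)]
    for _ in range(steps):
        dW = rng.gauss(0.0, math.sqrt(dt))       # Wiener increment
        x, y = (x + (a * x + b * y) * dt + sigma * x * dW,
                y + (c * x + d * y) * dt)
        traj.append((x, y))
    return traj

traj = simulate_love()   # stable spiral: feelings decay toward the fixed point
```

With these (assumed) coefficients the deterministic part is a stable spiral, so the noisy trajectory winds down toward the origin; raising sigma lets one probe the noise-induced transitions the abstract discusses.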
Using Random Forest Models to Predict Organizational Violence
Levine, Burton; Bobashev, Georgly
2012-01-01
We present a methodology to assess the proclivity of an organization to commit violence against nongovernment personnel. We fitted a Random Forest model using the Minority at Risk Organizational Behavior (MAROS) dataset. The MAROS data are longitudinal, so individual observations are not independent. We propose a modification to the standard Random Forest methodology to account for the violation of the independence assumption. We present the results of the model fit, an example of predicting violence for an organization, and, finally, a summary of the forest in a "meta-tree,"
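One standard way to respect the longitudinal dependence the abstract mentions is to bootstrap whole organizations (clusters) rather than individual observations when drawing each tree's training sample. The sketch below shows only that cluster-resampling step; the grouping key and toy data are hypothetical, and the paper's actual modification to Random Forests may differ.

```python
import random

def cluster_bootstrap(rows, group_key, rng):
    """Resample whole organizations (clusters) with replacement so that the
    repeated, dependent observations of one organization stay together.
    A standard remedy for longitudinal data; the paper's exact modification
    to the Random Forest methodology may differ."""
    by_group = {}
    for row in rows:
        by_group.setdefault(row[group_key], []).append(row)
    groups = sorted(by_group)
    sample = []
    for _ in range(len(groups)):           # one cluster draw per cluster
        sample.extend(by_group[rng.choice(groups)])
    return sample

# hypothetical longitudinal data: 3 organizations observed in 2 years each
rows = [{"org": o, "year": y} for o in "ABC" for y in (2000, 2001)]
boot = cluster_bootstrap(rows, "org", random.Random(0))
org_counts = {}
for r in boot:
    org_counts[r["org"]] = org_counts.get(r["org"], 0) + 1
```

Each bootstrap replicate keeps an organization's yearly records together, so within-cluster correlation is preserved inside every tree's sample.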
International Nuclear Information System (INIS)
Horrocks, D.L.
1980-01-01
A method and apparatus are described for the reliable determination of a random coincidence count attributable to chance coincidences of single-photon events, each detected in only a single detector of a scintillation counter that uses two detectors in a coincidence counting technique. A first delay device delays output pulses from one detector, and the delayed signal is then compared with the undelayed signal from the other detector in a coincidence circuit to obtain an approximate random coincidence count. The output of the coincidence circuit is applied to an anti-coincidence circuit, where it is corrected by eliminating pulses coincident with, and attributable to, conventionally detected real coincidences, and pulses coincident with, and attributable to, real coincidences that have been delayed by a second delay device having the same time parameter as the first. 8 claims
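The chance-coincidence count being corrected for here is, for two independent Poisson pulse trains with singles rates R1 and R2 and resolving time τ, approximately 2τ·R1·R2 per unit time. A small Monte Carlo check of that textbook relation (a check of the quantity measured, not a model of the patented circuit) might look like:

```python
import random

def chance_coincidence_rate(r1, r2, tau, t_total, seed=0):
    """Monte Carlo estimate of the chance-coincidence rate between two
    independent Poisson pulse trains with singles rates r1, r2 and a
    coincidence resolving time tau; the textbook expectation is 2*tau*r1*r2."""
    rng = random.Random(seed)

    def poisson_train(rate):
        t, events = 0.0, []
        while True:
            t += rng.expovariate(rate)     # exponential inter-arrival times
            if t > t_total:
                return events
            events.append(t)

    a, b = poisson_train(r1), poisson_train(r2)
    count, j = 0, 0
    for ta in a:                  # count pulses in train a with a partner in b
        while j < len(b) and b[j] < ta - tau:
            j += 1
        if j < len(b) and abs(b[j] - ta) <= tau:
            count += 1
    return count / t_total

rate = chance_coincidence_rate(r1=100.0, r2=100.0, tau=1e-4, t_total=200.0)
expected = 2 * 1e-4 * 100.0 * 100.0    # 2 chance coincidences per unit time
```

The simulated rate sits near the 2τR1R2 prediction; the delayed-coincidence trick in the patent measures exactly this chance rate so it can be subtracted from the raw coincidence count.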
Economic analysis of open space box model utilization in spacecraft
Mohammad, Atif F.; Straub, Jeremy
2015-05-01
The amount of stored data about space grows larger every day, and the utilization of Big Data and related tools to perform ETL (Extract, Transform and Load) applications will soon be pervasive in the space sciences. We have entered a crucial time in which using Big Data can make the difference (for terrestrial applications) between organizations underperforming and outperforming their peers. The same is true for NASA and other space agencies, as well as for individual missions and the highly competitive process of mission data analysis and publication. In most industries, established competitors and new entrants alike will use data-driven approaches to revolutionize and capture the value of Big Data archives. The Open Space Box Model is poised to take the proverbial "giant leap", as it provides autonomic data processing and communications for spacecraft. Economic value generated from such data processing can be found in terrestrial organizations in every sector, such as healthcare and retail; retailers, for example, perform research on Big Data by utilizing sensor-driven embedded data in products within their stores and warehouses to determine how these products are actually used in the real world.
Modeling a Packed Bed Reactor Utilizing the Sabatier Process
Shah, Malay G.; Meier, Anne J.; Hintze, Paul E.
2017-01-01
A numerical model is being developed using Python which characterizes the conversion and temperature profiles of a packed bed reactor (PBR) that utilizes the Sabatier process; the reaction produces methane and water from carbon dioxide and hydrogen. While the specific kinetics of the Sabatier reaction on the Ru/Al2O3 catalyst pellets are unknown, an empirical reaction rate equation is used for the overall reaction. As this reaction is highly exothermic, proper thermal control is of the utmost importance to ensure maximum conversion and to avoid reactor runaway. It is therefore necessary to determine what wall temperature profile will ensure safe and efficient operation of the reactor. This wall temperature will be maintained by active thermal controls on the outer surface of the reactor. Two cylindrical PBRs are currently being tested experimentally and will be used for validation of the Python model. They are similar in design, except that one of them is larger and incorporates a preheat loop by feeding the reactant gas through a pipe along the center of the catalyst bed. The added complexity of a preheat pipe in the model, to mimic the larger reactor, is yet to be implemented and validated; preliminary validation is done using the smaller PBR with no reactant preheating. When mapping experimental values of the wall temperature from the smaller PBR into the Python model, a good approximation of the total conversion and temperature profile has been achieved. A separate CFD model incorporates more complex three-dimensional effects by including the solid catalyst pellets within the domain. The goal is to improve the Python model to the point where the results for other reactor geometries can be predicted reasonably quickly, compared to the much more computationally expensive CFD approach. Once a reactor size is narrowed down using the Python approach, CFD will be used to generate a more thorough prediction of the reactor's performance.
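A one-dimensional plug-flow march of conversion along the bed illustrates the kind of calculation such a Python model performs. The rate law below is a generic first-order Arrhenius placeholder, not the paper's empirical Sabatier rate equation, and every number is an illustrative assumption:

```python
import math

def pbr_conversion(wall_temp, n_steps=1000, length=0.5,
                   k0=5.0e3, ea=7.0e4, velocity=0.1):
    """March CO2 conversion X along a 1-D plug-flow packed bed,
    dX/dz = k(T) * (1 - X) / u, with an Arrhenius rate constant evaluated
    at a wall-imposed temperature profile.  The rate law and all numbers
    are generic placeholders, not the paper's empirical Sabatier kinetics."""
    gas_const = 8.314                                 # J/(mol K)
    dz, x = length / n_steps, 0.0
    for i in range(n_steps):
        temp = wall_temp(i * dz / length)             # K, at fractional position
        k = k0 * math.exp(-ea / (gas_const * temp))   # 1/s
        x += dz * k * (1.0 - x) / velocity
    return min(x, 1.0)

X_cold = pbr_conversion(lambda s: 500.0)   # uniform 500 K wall
X_hot = pbr_conversion(lambda s: 650.0)    # uniform 650 K wall
```

Swapping the constant lambda for a measured wall-temperature profile mimics the paper's step of mapping experimental wall temperatures into the model; a coupled energy balance (omitted here) is what makes the real problem, with its runaway risk, interesting.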
Utility of Social Modeling for Proliferation Assessment - Preliminary Findings
International Nuclear Information System (INIS)
Coles, Garill A.; Gastelum, Zoe N.; Brothers, Alan J.; Thompson, Sandra E.
2009-01-01
Methodologies for assessing proliferation risk are often focused on the inherent vulnerability of nuclear energy systems and associated safeguards. For example, an accepted approach involves measuring the intrinsic and extrinsic barriers to potential proliferation. This paper describes a preliminary investigation into the non-traditional use of social and cultural information to improve proliferation assessment and advance the approach to assessing nuclear material diversion. Proliferation resistance assessments, safeguards assessments and related studies typically create technical information about the vulnerability of a nuclear energy system to diversion of nuclear material. The purpose of this research project is to find ways to integrate social information with technical information by explicitly considering the role of culture, groups and/or individuals in factors that affect the possibility of proliferation. When final, this work is expected to describe and demonstrate the utility of social science modeling in proliferation and proliferation risk assessments.
Factorisations for partition functions of random Hermitian matrix models
International Nuclear Information System (INIS)
Jackson, D.M.; Visentin, T.I.
1996-01-01
The partition function Z_N for Hermitian-complex matrix models can be expressed as an explicit integral over R^N, where N is a positive integer. Such an integral also occurs in connection with random surfaces and models of two-dimensional quantum gravity. We show that Z_N can be expressed as the product of two partition functions, evaluated at translated arguments, for another model, giving an explicit connection between the two models. We also give an alternative computation of the partition function for the φ^4-model. The approach is an algebraic one and holds for the functions regarded as formal power series in the appropriate ring. (orig.)
A mangrove creek restoration plan utilizing hydraulic modeling.
Marois, Darryl E; Mitsch, William J
2017-11-01
Despite the valuable ecosystem services provided by mangrove ecosystems, they remain threatened around the globe. Urban development has been a primary cause of mangrove destruction and deterioration in south Florida, USA, for the last several decades. As a result, the restoration of mangrove forests has become an important topic of research. Using field sampling and remote sensing, we assessed the past and present hydrologic conditions of a mangrove creek and its connected mangrove forest and brackish marsh systems located on the coast of Naples Bay in southwest Florida. We concluded that the hydrology of these connected systems had been significantly altered from its natural state by urban development. We propose here a mangrove creek restoration plan that would extend the existing creek channel 1.1 km inland through the adjacent mangrove forest and up to an adjacent brackish marsh. We then tested the hydrologic implications using a hydraulic model of the mangrove creek calibrated with tidal data from Naples Bay and water levels measured within the creek. The calibrated model was then used to simulate the resulting hydrology of our proposed restoration plan. Simulation results showed that the proposed creek extension would restore a twice-daily flooding regime to a majority of the adjacent mangrove forest and that there would still be minimal tidal influence on the brackish marsh area, keeping its salinity at an acceptable level. This study demonstrates the utility of combining field data and hydraulic modeling to aid in the design of mangrove restoration plans.
Utilization of Large Scale Surface Models for Detailed Visibility Analyses
Caha, J.; Kačmařík, M.
2017-11-01
This article demonstrates the utilization of large-scale surface models with small spatial resolution and high accuracy, acquired by Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large-scale data for visibility analyses on the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on classic Boolean visibility, which is usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. A case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). The multiple Boolean viewshed analysis and the global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. Besides that, an extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is shown. The case study proved that large-scale models are an appropriate data source for visibility analyses at the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
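The Boolean viewshed mentioned above reduces, per target cell, to a line-of-sight test against the surface model. A minimal grid-based sketch of that kernel (not the extended-viewshed measures the paper adds) might look like:

```python
def visible(dem, observer, target, obs_height=1.7):
    """Boolean line-of-sight on a square-grid surface model: the target cell
    is visible when no sampled cell along the sightline rises above the
    straight line joining the observer's eye and the target.  A sketch of
    the classic viewshed kernel only; cell size and heights are in metres."""
    (r0, c0), (r1, c1) = observer, target
    z0 = dem[r0][c0] + obs_height            # eye height above the surface
    z1 = dem[r1][c1]
    n = max(abs(r1 - r0), abs(c1 - c0))      # number of samples along the ray
    for i in range(1, n):
        t = i / n
        r, c = round(r0 + t * (r1 - r0)), round(c0 + t * (c1 - c0))
        if dem[r][c] > z0 + t * (z1 - z0):   # terrain above the sightline
            return False
    return True

dem = [[0, 0, 0, 0],
       [0, 0, 5, 0],      # a 5 m barrier in the second row
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
```

Running this test from one observer cell against every other cell yields the Boolean viewshed; the extended viewsheds in the article additionally record quantities such as the angle above the local horizon instead of a plain yes/no.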
Statistical properties of several models of fractional random point processes
Bendjaballah, C.
2011-08-01
Statistical properties of several models of fractional random point processes have been analyzed from the counting and time interval statistics points of view. Based on the criterion of the reduced variance, it is seen that such processes exhibit nonclassical properties. The conditions for these processes to be treated as conditional Poisson processes are examined. Numerical simulations illustrate part of the theoretical calculations.
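The reduced-variance criterion mentioned above is the Fano factor F = Var(N)/E(N): F = 1 for Poisson counting, while F < 1 signals the sub-Poissonian, nonclassical regime. A quick numerical check on simulated Poisson counts (a baseline only, not the fractional processes of the paper):

```python
import math
import random

def fano_factor(counts):
    """Reduced variance (Fano factor) F = Var(N)/E(N); F = 1 for a Poisson
    process, F < 1 marks sub-Poissonian, nonclassical counting statistics."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / n
    return var / mean

rng = random.Random(42)

def poisson_sample(lam):
    # Knuth's multiplication algorithm for a Poisson variate
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

poisson_counts = [poisson_sample(10.0) for _ in range(5000)]
F = fano_factor(poisson_counts)    # should sit near 1 for Poisson counting
```

Feeding counting records from a fractional point process through the same estimator is how one would observe the departures from F = 1 that the abstract calls nonclassical.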
Statistical shape model with random walks for inner ear segmentation
DEFF Research Database (Denmark)
Pujadas, Esmeralda Ruiz; Kjer, Hans Martin; Piella, Gemma
2016-01-01
is required. We propose a new framework for segmentation of micro-CT cochlear images using random walks combined with a statistical shape model (SSM). The SSM allows us to constrain the less contrasted areas and ensures valid inner ear shape outputs. Additionally, a topology preservation method is proposed...
Asthma Self-Management Model: Randomized Controlled Trial
Olivera, Carolina M. X.; Vianna, Elcio Oliveira; Bonizio, Roni C.; de Menezes, Marcelo B.; Ferraz, Erica; Cetlin, Andrea A.; Valdevite, Laura M.; Almeida, Gustavo A.; Araujo, Ana S.; Simoneti, Christian S.; de Freitas, Amanda; Lizzi, Elisangela A.; Borges, Marcos C.; de Freitas, Osvaldo
2016-01-01
Information for patients provided by the pharmacist is reflected in adherence to treatment, clinical results and patient quality of life. The objective of this study was to assess an asthma self-management model for rational medicine use. This was a randomized controlled trial with 60 asthmatic patients assigned to attend five modules presented by…
The dilute random field Ising model by finite cluster approximation
International Nuclear Information System (INIS)
Benyoussef, A.; Saber, M.
1987-09-01
Using the finite cluster approximation, phase diagrams of bond and site diluted three-dimensional simple cubic Ising models with a random field have been determined. The resulting phase diagrams have the same general features for both bond and site dilution. (author). 7 refs, 4 figs
International Nuclear Information System (INIS)
Bachschmid-Romano, Ludovica; Opper, Manfred
2015-01-01
We study analytically the performance of a recently proposed algorithm for learning the couplings of a random asymmetric kinetic Ising model from finite-length trajectories of the spin dynamics. Our analysis shows the importance of the nontrivial equal-time correlations between spins, induced by the dynamics, for the speed of learning. These correlations become more important as the spin's stochasticity is decreased. We also analyse the deviation of the estimation error (paper)
Evolution of the concentration PDF in random environments modeled by global random walk
Suciu, Nicolae; Vamos, Calin; Attinger, Sabine; Knabner, Peter
2013-04-01
The evolution of the probability density function (PDF) of concentrations of chemical species transported in random environments is often modeled by ensembles of notional particles. The particles move in physical space along stochastic-Lagrangian trajectories governed by Ito equations, with drift coefficients given by the local values of the resolved velocity field and diffusion coefficients obtained by stochastic or space-filtering upscaling procedures. A general model for the sub-grid mixing can also be formulated as a system of Ito equations solving for trajectories in the composition space. The PDF is finally estimated from the number of particles in space-concentration control volumes. In spite of their efficiency, Lagrangian approaches suffer from two severe limitations. Since the particle trajectories are constructed sequentially, the computing resources demanded increase linearly with the number of particles. Moreover, the need to gather particles at the center of computational cells to perform the mixing step and to estimate statistical parameters, as well as the interpolation of various terms to particle positions, inevitably produces numerical diffusion in either particle-mesh or grid-free particle methods. To overcome these limitations, we introduce a global random walk method to solve the system of Ito equations in physical and composition spaces, which models the evolution of the random concentration's PDF. The algorithm consists of a superposition, on a regular lattice, of many weak Euler schemes for the set of Ito equations. Since all particles starting from a site of the space-concentration lattice are spread in a single numerical procedure, one obtains PDF estimates at the lattice sites at computational costs comparable with those for solving the system of Ito equations associated with a single particle. The new method avoids the limitations concerning the number of particles in Lagrangian approaches, completely removes the numerical diffusion, and
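The core lattice idea, scattering all particles that occupy a site in a single numerical operation rather than tracking trajectories one by one, can be sketched in one dimension as below. The full method adds Ito drift/diffusion coefficients and a concentration dimension, which this toy omits.

```python
import random

def global_random_walk(n_particles=10000, n_steps=100, seed=7):
    """Global random walk on a 1-D lattice: the n particles occupying a site
    are split to the two neighbours in one binomial draw per site, instead of
    being moved one trajectory at a time.  (The binomial is sampled here with
    n Bernoulli draws for stdlib simplicity; a direct binomial sampler would
    make the split effectively O(1) per occupied site.)"""
    rng = random.Random(seed)
    occupancy = {0: n_particles}               # everything starts at the origin
    for _ in range(n_steps):
        nxt = {}
        for site, n in occupancy.items():
            right = sum(1 for _ in range(n) if rng.random() < 0.5)
            if right:
                nxt[site + 1] = nxt.get(site + 1, 0) + right
            if n - right:
                nxt[site - 1] = nxt.get(site - 1, 0) + (n - right)
        occupancy = nxt
    return occupancy

occ = global_random_walk()
total = sum(occ.values())
mean = sum(s * c for s, c in occ.items()) / total
var = sum(s * s * c for s, c in occ.items()) / total - mean ** 2
```

Particle number is conserved exactly and the occupancy histogram is itself the PDF estimate, which is why the approach sidesteps both the per-particle cost and the gather/interpolate steps that cause numerical diffusion in sequential Lagrangian schemes.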
Quantum random oracle model for quantum digital signature
Shang, Tao; Lei, Qi; Liu, Jianwei
2016-10-01
The goal of this work is to provide a general security analysis tool, namely, the quantum random oracle (QRO), for facilitating the security analysis of quantum cryptographic protocols, especially protocols based on quantum one-way function. QRO is used to model quantum one-way function and different queries to QRO are used to model quantum attacks. A typical application of quantum one-way function is the quantum digital signature, whose progress has been hampered by the slow pace of the experimental realization. Alternatively, we use the QRO model to analyze the provable security of a quantum digital signature scheme and elaborate the analysis procedure. The QRO model differs from the prior quantum-accessible random oracle in that it can output quantum states as public keys and give responses to different queries. This tool can be a test bed for the cryptanalysis of more quantum cryptographic protocols based on the quantum one-way function.
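A classical random oracle is implemented by lazy sampling: each fresh query is answered with an independent uniform string, and repeated queries are answered consistently. The sketch below shows only this classical baseline; the QRO of the paper additionally admits superposed quantum queries and quantum-state outputs, which plain classical code cannot capture.

```python
import random

class RandomOracle:
    """Classical random oracle via lazy sampling: a fresh query draws an
    independent uniform n-bit value; repeated queries answer consistently.
    The paper's QRO extends this picture to quantum (superposed) queries
    and quantum-state outputs, which this classical sketch does not model."""

    def __init__(self, n_bits=128, seed=None):
        self.n_bits = n_bits
        self.table = {}                    # memoized query -> answer map
        self.rng = random.Random(seed)

    def query(self, msg):
        if msg not in self.table:
            self.table[msg] = self.rng.getrandbits(self.n_bits)
        return self.table[msg]

H = RandomOracle(seed=3)
```

In a security proof the reduction controls `H` and can inspect or reprogram `table`; the quantum difficulty the paper addresses is that a superposed query touches all inputs at once, so this record-keeping no longer works directly.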
Investigating Facebook Groups through a Random Graph Model
Dinithi Pallegedara; Lei Pan
2014-01-01
Facebook disseminates messages for billions of users every day. Though there are log files stored on central servers, law enforcement agencies outside of the U.S. cannot easily acquire server log files from Facebook. This work models Facebook user groups by using a random graph model. Our aim is to help detectives quickly estimate the size of a Facebook group with which a suspect is involved. We estimate this group size according to the number of immediate friends and the number of ext...
Sfontouris, Ioannis A; Kolibianakis, Efstratios M; Lainas, George T; Venetis, Christos A; Petsas, George K; Tarlatzis, Basil C; Lainas, Tryfon G
2017-10-01
The aim of this study is to determine whether blastocyst utilization rates differ after continuous culture in two different commercial single-step media. This is a paired randomized controlled trial with sibling oocytes, conducted in infertility patients aged ≤40 years with ≥10 oocytes retrieved, assigned to blastocyst culture and transfer. Retrieved oocytes were randomly allocated to continuous culture in either Sage one-step medium (Origio) or Continuous Single Culture (CSC) medium (Irvine Scientific) without medium renewal up to day 5 post oocyte retrieval. The main outcome measure was the proportion of embryos suitable for clinical use (utilization rate). A total of 502 oocytes from 33 women were randomly allocated to continuous culture in either Sage one-step medium (n = 250) or CSC medium (n = 252). Fertilization was performed by either in vitro fertilization or intracytoplasmic sperm injection, and embryo transfers were performed on day 5. Two patients had all blastocysts frozen due to the occurrence of severe ovarian hyperstimulation syndrome. Fertilization and cleavage rates, as well as embryo quality on day 3, were similar in the two media. Blastocyst utilization rates (%, 95% CI) [55.4% (46.4-64.1) vs 54.7% (44.9-64.6), p = 0.717], blastocyst formation rates [53.6% (44.6-62.5) vs 51.9% (42.2-61.6), p = 0.755], and the proportion of good quality blastocysts [36.8% (28.1-45.4) vs 36.1% (27.2-45.0), p = 0.850] were similar in Sage one-step and CSC media, respectively. Continuous culture of embryos in Sage one-step and CSC media is associated with similar blastocyst development and utilization rates. Both single-step media appear to provide adequate support during in vitro preimplantation embryo development. Whether these observations are also valid for other continuous single medium protocols remains to be determined. NCT02302638.
Stochastic geometry, spatial statistics and random fields models and algorithms
2015-01-01
Providing a graduate level introduction to various aspects of stochastic geometry, spatial statistics and random fields, this volume places a special emphasis on fundamental classes of models and algorithms as well as on their applications, for example in materials science, biology and genetics. This book has a strong focus on simulations and includes extensive codes in Matlab and R, which are widely used in the mathematical community. It can be regarded as a continuation of the recent volume 2068 of Lecture Notes in Mathematics, where other issues of stochastic geometry, spatial statistics and random fields were considered, with a focus on asymptotic methods.
Simulating intrafraction prostate motion with a random walk model.
Pommer, Tobias; Oh, Jung Hun; Munck Af Rosenschöld, Per; Deasy, Joseph O
2017-01-01
Prostate motion during radiation therapy (i.e., intrafraction motion) can cause unwanted loss of radiation dose to the prostate and increased dose to the surrounding organs at risk. A compact but general statistical description of this motion could be useful for simulation of radiation therapy delivery or margin calculations. We investigated whether prostate motion could be modeled with a random walk model. Prostate motion recorded during 548 radiation therapy fractions in 17 patients was analyzed and used as input for a random walk prostate motion model. The recorded motion was categorized on the basis of whether any transient excursions (i.e., rapid prostate motion in the anterior and superior direction followed by a return) occurred in the trace, and transient motion was modeled separately as a large step in the anterior/superior direction followed by a returning large step. Random walk simulations were conducted with and without added artificial transient motion, using motion data from all observed traces or only traces without transient excursions as model input, respectively. A general estimate of motion was derived, with reasonable agreement between simulated and observed traces, especially during the first 5 minutes of the excursion-free simulations. Simulated and observed diffusion coefficients agreed within 0.03, 0.2 and 0.3 mm²/min in the left/right, superior/inferior, and anterior/posterior directions, respectively. A rapid increase in variance at the start of observed traces was difficult to reproduce and seemed to represent the patient's need to adjust before treatment. This could be estimated somewhat using artificial transient motion. Random walk modeling is feasible and recreated the characteristics of the observed prostate motion. Introducing artificial transient motion did not improve the overall agreement, although the first 30 seconds of the traces were better reproduced. The model provides a simple estimate of prostate motion during
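A random walk trace of the kind described can be generated from independent Gaussian steps per axis with variance 2·D·Δt, and the diffusion coefficient can be recovered from the mean squared step. The per-axis D values below are merely of the order the abstract reports (LR, SI, AP); transient excursions are omitted from this sketch.

```python
import random

def simulate_trace(D=(0.03, 0.2, 0.3), dt=1 / 60, minutes=5, seed=11):
    """Random-walk trace of 3-D target motion: independent Gaussian steps per
    axis with variance 2*D*dt (D in mm^2/min, dt in minutes).  The D values
    are illustrative, of the order reported per axis (LR, SI, AP); transient
    excursions are not modelled."""
    rng = random.Random(seed)
    pos = [0.0, 0.0, 0.0]
    trace = [tuple(pos)]
    for _ in range(round(minutes / dt)):
        for ax in range(3):
            pos[ax] += rng.gauss(0.0, (2 * D[ax] * dt) ** 0.5)
        trace.append(tuple(pos))
    return trace

def estimate_D(trace, dt=1 / 60):
    # recover D per axis from the mean squared step: D = <dx^2> / (2*dt)
    ests = []
    for ax in range(3):
        sq = [(b[ax] - a[ax]) ** 2 for a, b in zip(trace, trace[1:])]
        ests.append(sum(sq) / len(sq) / (2 * dt))
    return ests

trace = simulate_trace()
D_hat = estimate_D(trace)
```

Fitting `estimate_D` to recorded traces is the simplest form of the comparison the paper makes between simulated and observed diffusion coefficients; the artificial transient excursions would be added as an occasional large anterior/superior step and a matching return step.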
Modeling of chromosome intermingling by partially overlapping uniform random polygons.
Blackstone, T; Scharein, R; Borgo, B; Varela, R; Diao, Y; Arsuaga, J
2011-03-01
During the early phase of the cell cycle the eukaryotic genome is organized into chromosome territories. The geometry of the interface between any two chromosomes remains a matter of debate and may have important functional consequences. The Interchromosomal Network model (introduced by Branco and Pombo) proposes that territories intermingle along their periphery. In order to partially quantify this concept, we here investigate the probability that two chromosomes form an unsplittable link. We use the uniform random polygon as a crude model for chromosome territories and model the interchromosomal network as the common spatial region of two overlapping uniform random polygons. This simple model allows us to derive some rigorous mathematical results as well as to perform computer simulations easily. We find that the probability that one uniform random polygon of length n partially overlapping a fixed polygon forms a link with it is bounded below by 1 − O(1/√n). We use numerical simulations to estimate the dependence of the linking probability of two uniform random polygons (of lengths n and m, respectively) on the amount of overlapping. The degree of overlapping is parametrized by a parameter [Formula: see text] such that [Formula: see text] indicates no overlapping and [Formula: see text] indicates total overlapping. We propose that this dependence relation may be modeled as f(ε, m, n) = [Formula: see text]. Numerical evidence shows that this model works well when [Formula: see text] is relatively large (ε ≥ 0.5). We then use these results to model the data published by Branco and Pombo and observe that, for the amount of overlapping observed experimentally, the URPs have a non-zero probability of forming an unsplittable link.
A generalized model via random walks for information filtering
Ren, Zhuo-Ming; Kong, Yixiu; Shang, Ming-Sheng; Zhang, Yi-Cheng
2016-08-01
There may be a simple general mechanism lurking beneath the collaborative filtering and interdisciplinary physics approaches that have been successfully applied to online e-commerce platforms. Motivated by this idea, we propose a generalized model employing the dynamics of a random walk on bipartite networks. By taking degree information into account, the proposed generalized model can recover collaborative filtering and the interdisciplinary physics approaches, and even broad extensions of them. Furthermore, we analyze the generalized model with single and hybrid degree information in the random walk process on bipartite networks, and propose a possible strategy of using hybrid degree information for objects of different popularity to improve the precision of the recommendation.
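One concrete instance of the degree-based bipartite random walks such a generalized model unifies is the two-step mass-diffusion (ProbS) score. A minimal sketch on a hypothetical toy user-object network:

```python
def probs_scores(collected, target_user):
    """Two-step mass-diffusion (ProbS) random walk on a user-object bipartite
    network: unit resource on the target user's objects spreads to users
    (divided by object degree), then back to objects (divided by user
    degree).  One toy instance of the degree-based walks the model unifies."""
    obj_degree = {}
    for objs in collected.values():
        for o in objs:
            obj_degree[o] = obj_degree.get(o, 0) + 1
    # step 1: target's objects -> users (share 1/k(object) per link)
    user_resource = {
        u: sum(1.0 / obj_degree[o] for o in objs & collected[target_user])
        for u, objs in collected.items()
    }
    # step 2: users -> objects (share 1/k(user) per link)
    scores = {}
    for u, objs in collected.items():
        if objs:
            share = user_resource[u] / len(objs)
            for o in objs:
                scores[o] = scores.get(o, 0.0) + share
    return scores

collected = {"u1": {"a", "b"}, "u2": {"b", "c"}, "u3": {"c"}}
scores = probs_scores(collected, "u1")
total = sum(scores.values())      # diffusion conserves the initial resource
```

Replacing the 1/degree shares with other powers of the user and object degrees is exactly the kind of single or hybrid degree weighting the generalized model varies; ranking the target's uncollected objects by score yields the recommendation list.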
Creating, generating and comparing random network models with NetworkRandomizer.
Tosadori, Gabriele; Bestvina, Ivan; Spoto, Fausto; Laudanna, Carlo; Scardoni, Giovanni
2016-01-01
Biological networks are becoming a fundamental tool for the investigation of high-throughput data in several fields of biology and biotechnology. With the increasing amount of information, network-based models are gaining more and more interest, and new techniques are required to mine the information and to validate the results. To fill the validation gap, we present an app for the Cytoscape platform that creates randomized networks and randomizes existing, real networks. Since there is a lack of tools for performing such operations, our app enables researchers to exploit different, well-known random network models that can be used as benchmarks for validating real, biological datasets. We also propose a novel methodology for creating random weighted networks, i.e. the multiplication algorithm, starting from real, quantitative data. Finally, the app provides a statistical tool that compares real versus randomly computed attributes, in order to validate the numerical findings. In summary, our app aims at creating a standardized methodology for the validation of results in the context of the Cytoscape platform.
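A common null model such an app can offer is degree-preserving randomization by double edge swaps. A minimal sketch for undirected, unweighted graphs follows; the app's actual algorithms, including the weighted multiplication method, are not reproduced here.

```python
import random

def degree_sequence(edges):
    deg = {}
    for a, b in edges:
        deg[a] = deg.get(a, 0) + 1
        deg[b] = deg.get(b, 0) + 1
    return deg

def randomize_edges(edges, n_swaps, seed=5):
    """Degree-preserving randomization by double edge swaps: replace edges
    (a,b),(c,d) with (a,d),(c,b) whenever this creates no self-loop and no
    duplicate edge.  Node degrees are invariant under every accepted swap."""
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    edge_set = {frozenset(e) for e in edges}
    for _ in range(n_swaps):
        i, j = rng.randrange(len(edges)), rng.randrange(len(edges))
        (a, b), (c, d) = edges[i], edges[j]
        if len({a, b, c, d}) < 4:
            continue                              # shared endpoint: skip
        new1, new2 = frozenset((a, d)), frozenset((c, b))
        if new1 in edge_set or new2 in edge_set:
            continue                              # would duplicate an edge
        edge_set -= {frozenset((a, b)), frozenset((c, d))}
        edge_set |= {new1, new2}
        edges[i], edges[j] = (a, d), (c, b)
    return edges

orig = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
rand = randomize_edges(orig, 100)
```

Comparing a topological attribute of the real network against its distribution over many such randomizations is the validation step the app's statistical tool automates.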
Janssen, Dirk P
2012-03-01
Psychologists, psycholinguists, and other researchers using language stimuli have been struggling for more than 30 years with the problem of how to analyze experimental data that contain two crossed random effects (items and participants). The classical analysis of variance does not apply; alternatives have been proposed but have failed to catch on, and a statistically unsatisfactory procedure of using two approximations (known as F(1) and F(2)) has become the standard. A simple and elegant solution using mixed-model analysis has been available for 15 years, and recent improvements in statistical software have made mixed-model analysis widely available. The aim of this article is to increase the use of mixed models by giving a concise practical introduction and by giving clear directions for undertaking the analysis in the most popular statistical packages. The article also introduces the DJMIXED add-on package for SPSS, which makes entering the models and reporting their results as straightforward as possible.
Scaling of coercivity in a 3d random anisotropy model
Energy Technology Data Exchange (ETDEWEB)
Proctor, T.C., E-mail: proctortc@gmail.com; Chudnovsky, E.M., E-mail: EUGENE.CHUDNOVSKY@lehman.cuny.edu; Garanin, D.A.
2015-06-15
The random-anisotropy Heisenberg model is numerically studied on lattices containing over ten million spins. The study is focused on hysteresis and metastability due to topological defects, and is relevant to magnetic properties of amorphous and sintered magnets. We are interested in the limit when ferromagnetic correlations extend beyond the size of the grain inside which the magnetic anisotropy axes are correlated. In that limit the coercive field computed numerically roughly scales as the fourth power of the random anisotropy strength and as the sixth power of the grain size. Theoretical arguments are presented that provide an explanation of the numerical results. Our findings should be helpful for designing amorphous and nanosintered materials with desired magnetic properties. - Highlights: • We study the random-anisotropy model on lattices containing up to ten million spins. • Irreversible behavior due to topological defects (hedgehogs) is elucidated. • Hysteresis loop area scales as the fourth power of the random anisotropy strength. • In nanosintered magnets the coercivity scales as the sixth power of the grain size.
Modeling random combustion of lycopodium particles and gas
Directory of Open Access Journals (Sweden)
M Bidabadi
2016-06-01
The random modeling of lycopodium particle combustion has been studied by many authors. In this paper, we extend this model and develop a different method by analyzing the effect of randomly distributed sources of combustible mixture. The flame structure is assumed to consist of a preheat-vaporization zone, a reaction zone and, finally, a post-flame zone. We divide the preheat zone into different sections and assume that the distribution of particles across these sections differs and is random. Meanwhile, it is presumed that the fuel particles vaporize first to yield gaseous fuel; in other words, most of the fuel particles are vaporized by the end of the preheat zone. The Zel'dovich number is assumed to be large; therefore, the reaction term in the preheat zone is negligible. In this work, the effect of the random distribution of particles in the preheat zone on combustion characteristics, such as burning velocity and flame temperature for different particle radii, is obtained.
Emergent randomness in the Jaynes-Cummings model
International Nuclear Information System (INIS)
Garraway, B M; Stenholm, S
2008-01-01
We consider the well-known Jaynes-Cummings model and ask if it can display randomness. As a solvable Hamiltonian system, it does not display chaotic behaviour in the ordinary sense. Here, however, we look at the distribution of values taken up during the total time evolution. This evolution is determined by the eigenvalues, distributed as the square roots of integers, and leads to seemingly erratic behaviour. That this may display a random Gaussian value distribution is suggested by an exactly provable result by Kac. In order to reach our conclusion we use the Kac model to develop tests for the emergence of a Gaussian. Even if the consequent double limits are difficult to evaluate numerically, we find definite indications that the Jaynes-Cummings case also produces randomness in its value distributions. Numerical methods do not establish such a result beyond doubt, but our conclusions are definite enough to strongly suggest an unexpected randomness emerging in a dynamic time evolution.
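The Kac-type test mentioned above can be illustrated numerically: sample the value distribution of a sum of cosines whose frequencies are square roots of integers and check that it looks Gaussian. A minimal stdlib-Python sketch (the number of terms, sampling window, and sample size are our own illustrative choices, not the paper's):

```python
import math
import random
import statistics

def signal_value(t, n_terms=50):
    # Sum of cosines with incommensurate frequencies sqrt(1), sqrt(2), ...
    return sum(math.cos(math.sqrt(n) * t) for n in range(1, n_terms + 1))

random.seed(0)
samples = [signal_value(random.uniform(0.0, 10_000.0)) for _ in range(20_000)]
mean = statistics.fmean(samples)
std = statistics.pstdev(samples)

# For n_terms incommensurate cosines the long-time mean is 0 and the
# variance is n_terms / 2, so std should be close to 5 here. If the value
# distribution is roughly Gaussian, about 68% of samples fall within one
# standard deviation of the mean.
frac_within_1sd = sum(abs(x - mean) < std for x in samples) / len(samples)
```

A histogram of `samples` against a normal density with these moments makes the comparison visual; the double limits discussed in the paper correspond to letting both the number of terms and the observation time grow.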
A Fay-Herriot Model with Different Random Effect Variances
Czech Academy of Sciences Publication Activity Database
Hobza, Tomáš; Morales, D.; Herrador, M.; Esteban, M.D.
2011-01-01
Roč. 40, č. 5 (2011), s. 785-797 ISSN 0361-0926 R&D Projects: GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : small area estimation * Fay-Herriot model * Linear mixed model * Labor Force Survey Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.274, year: 2011 http://library.utia.cas.cz/separaty/2011/SI/hobza-a%20fay-herriot%20model%20with%20different%20random%20effect%20variances.pdf
Directory of Open Access Journals (Sweden)
Daniel König
2016-06-01
Full Text Available (1) Objective: To compare the effects of isomaltulose (Palatinose™, PSE) vs. maltodextrin (MDX) ingestion on substrate utilization during endurance exercise and subsequent time trial performance; (2) Methods: 20 male athletes performed two experimental trials with ingestion of either 75 g PSE or MDX 45 min before the start of exercise. The exercise protocol consisted of 90 min cycling (60% VO2max) followed by a time trial; (3) Results: Time trial finishing time (−2.7%, 90% CI: ±3.0%, 89% likely beneficial; p = 0.147) and power output during the final 5 min (+4.6%, 90% CI: ±4.0%, 93% likely beneficial; p = 0.053) were improved with PSE compared with MDX. The blood glucose profile differed between trials (p = 0.013), with PSE resulting in lower glycemia during rest (95%–99% likelihood) and higher blood glucose concentrations during exercise (63%–86% likelihood). In comparison to MDX, fat oxidation was higher (88%–99% likelihood; p = 0.005) and carbohydrate oxidation was lower following PSE intake (85%–96% likelihood; p = 0.002). (4) Conclusion: PSE maintained a more stable blood glucose profile and higher fat oxidation during exercise, which resulted in improved cycling performance compared with MDX. These results could be explained by the slower availability and the low-glycemic properties of Palatinose™, allowing a greater reliance on fat oxidation and sparing of glycogen during the initial endurance exercise.
Random-growth urban model with geographical fitness
Kii, Masanobu; Akimoto, Keigo; Doi, Kenji
2012-12-01
This paper formulates a random-growth urban model with a notion of geographical fitness. Using techniques of complex-network theory, we study our system as a type of preferential-attachment model with fitness, and we analyze its macro behavior to clarify the properties of the city-size distributions it predicts. First, restricting the geographical fitness to take positive values and using a continuum approach, we show that the city-size distributions predicted by our model asymptotically approach Pareto distributions with coefficients greater than unity. Then, allowing the geographical fitness to take negative values, we perform local coefficient analysis to show that the predicted city-size distributions can deviate from Pareto distributions, as is often observed in actual city-size distributions. As a result, the model we propose can generate a generic class of city-size distributions, including but not limited to Pareto distributions. For applications to city-population projections, our simple model requires randomness only when new cities are created, not during their subsequent growth. This property leads to smooth trajectories of city population growth, in contrast to other models using Gibrat’s law. In addition, a discrete form of our dynamical equations can be used to estimate past city populations based on present-day data; this fact allows quantitative assessment of the performance of our model. Further study is needed to determine appropriate formulas for the geographical fitness.
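The preferential-attachment-with-fitness mechanism can be sketched in a few lines of stdlib Python. This is an illustrative toy, not the authors' exact dynamics (in particular, their model is random only at city creation, whereas this sketch keeps randomness in growth as in standard fitness models, and the fitness distribution is our own choice):

```python
import random
import statistics

def grow_cities(steps, new_city_prob=0.05, seed=1):
    """Toy fitness-weighted preferential growth of city populations."""
    rng = random.Random(seed)
    sizes = [1.0]
    fitness = [rng.uniform(0.5, 1.5)]  # a positive "geographical fitness"
    for _ in range(steps):
        if rng.random() < new_city_prob:
            # a new city is created with its own fitness
            sizes.append(1.0)
            fitness.append(rng.uniform(0.5, 1.5))
        else:
            # an existing city grows with probability ∝ size × fitness
            weights = [s * f for s, f in zip(sizes, fitness)]
            i = rng.choices(range(len(sizes)), weights=weights)[0]
            sizes[i] += 1.0
    return sizes

sizes = grow_cities(10_000)
# Rich-get-richer growth produces a heavy upper tail: the largest city is
# far bigger than the typical one.
ratio = max(sizes) / statistics.median(sizes)
```

Plotting the sorted `sizes` on log-log axes gives the rank-size view in which a Pareto distribution appears as a straight line; local deviations of the kind the paper analyzes show up as curvature.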
Least squares estimation in a simple random coefficient autoregressive model
DEFF Research Database (Denmark)
Johansen, S; Lange, T
2013-01-01
The question we discuss is whether a simple random coefficient autoregressive model with infinite variance can create the long swings, or persistence, which are observed in many macroeconomic variables. The model is defined by y_t = s_t ρ y_{t−1} + ε_t, t = 1,…,n, where s_t is an i.i.d. binary variable with p...... we prove a curious result about the limiting behaviour of the least squares estimator. The proof applies the notion of a tail index of sums of positive random variables with infinite variance to find the order of magnitude of the relevant sums and hence their limit...
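The model itself is easy to simulate. A stdlib-Python sketch with standard Cauchy innovations as one concrete infinite-variance choice (ρ, p, and the innovation law here are our own illustrative choices, not the paper's):

```python
import math
import random

def simulate_rca(n, rho=1.0, p=0.95, seed=3):
    """Simulate y_t = s_t * rho * y_{t-1} + eps_t with s_t i.i.d.
    Bernoulli(p) and i.i.d. standard Cauchy (infinite-variance) eps_t."""
    rng = random.Random(seed)
    y, path = 0.0, []
    for _ in range(n):
        s = 1 if rng.random() < p else 0
        eps = math.tan(math.pi * (rng.random() - 0.5))  # standard Cauchy draw
        y = s * rho * y + eps
        path.append(y)
    return path

path = simulate_rca(10_000)
# Occasional huge innovations are held in place while s_t = 1, producing
# long swings; the occasional s_t = 0 resets the level.
abs_path = sorted(abs(y) for y in path)
median_abs = abs_path[len(abs_path) // 2]
max_abs = abs_path[-1]
```

The extreme excursions dwarfing the typical level (`max_abs` versus `median_abs`) are the heavy-tail signature that motivates the tail-index argument in the abstract.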
The transverse spin-1 Ising model with random interactions
Energy Technology Data Exchange (ETDEWEB)
Bouziane, Touria [Department of Physics, Faculty of Sciences, University of Moulay Ismail, B.P. 11201 Meknes (Morocco)], E-mail: touria582004@yahoo.fr; Saber, Mohammed [Department of Physics, Faculty of Sciences, University of Moulay Ismail, B.P. 11201 Meknes (Morocco); Dpto. Fisica Aplicada I, EUPDS (EUPDS), Plaza Europa, 1, San Sebastian 20018 (Spain)
2009-01-15
The phase diagrams of the transverse spin-1 Ising model with random interactions are investigated using a new technique in the effective field theory that employs a probability distribution within the framework of the single-site cluster theory based on the use of exact Ising spin identities. A model is adopted in which the nearest-neighbor exchange couplings are independent random variables distributed according to the law P(J_ij) = pδ(J_ij − J) + (1 − p)δ(J_ij − αJ). General formulae, applicable to lattices with coordination number N, are given. Numerical results are presented for a simple cubic lattice. The possible reentrant phenomenon displayed by the system due to the competitive effects between exchange interactions occurs for the appropriate range of the parameter α.
Random unitary evolution model of quantum Darwinism with pure decoherence
Balanesković, Nenad
2015-10-01
We study the behavior of Quantum Darwinism [W.H. Zurek, Nat. Phys. 5, 181 (2009)] within the iterative, random unitary operations qubit-model of pure decoherence [J. Novotný, G. Alber, I. Jex, New J. Phys. 13, 053052 (2011)]. We conclude that Quantum Darwinism, which describes the quantum mechanical evolution of an open system S from the point of view of its environment E, is not a generic phenomenon, but depends on the specific form of input states and on the type of S-E-interactions. Furthermore, we show that within the random unitary model the concept of Quantum Darwinism enables one to explicitly construct and specify artificial input states of environment E that allow information about an open system S of interest to be stored with maximal efficiency.
Gravitational lensing by eigenvalue distributions of random matrix models
Martínez Alonso, Luis; Medina, Elena
2018-05-01
We propose to use eigenvalue densities of unitary random matrix ensembles as mass distributions in gravitational lensing. The corresponding lens equations reduce to algebraic equations in the complex plane which can be treated analytically. We prove that these models can be applied to describe lensing by systems of edge-on galaxies. We illustrate our analysis with the Gaussian and the quartic unitary matrix ensembles.
Random resistor network model of minimal conductivity in graphene.
Cheianov, Vadim V; Fal'ko, Vladimir I; Altshuler, Boris L; Aleiner, Igor L
2007-10-26
Transport in undoped graphene is related to percolating current patterns in the networks of n- and p-type regions reflecting the strong bipolar charge density fluctuations. Finite transparency of the p-n junctions is vital in establishing the macroscopic conductivity. We propose a random resistor network model to analyze scaling dependencies of the conductance on the doping and disorder, the quantum magnetoresistance and the corresponding dephasing rate.
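A random resistor network of this kind can be solved directly for a small sample by relaxing Kirchhoff's current law. A stdlib-Python sketch (the grid size and the two bond conductances standing in for weakly and fully transparent p-n junctions are our own illustrative choices, not values from the paper):

```python
import random

def effective_conductance(n=8, seed=5, sweeps=4000):
    """Effective conductance of an n x n grid of random resistors between a
    left electrode held at V = 1 and a right electrode held at V = 0."""
    rng = random.Random(seed)
    # Horizontal/vertical bond conductances: a transparent junction (1.0)
    # or a weakly conducting one (0.1), chosen at random per bond.
    gh = [[rng.choice((0.1, 1.0)) for _ in range(n - 1)] for _ in range(n)]
    gv = [[rng.choice((0.1, 1.0)) for _ in range(n)] for _ in range(n - 1)]
    # Initial guess: linear potential drop from left to right.
    v = [[1.0 - c / (n - 1) for c in range(n)] for _ in range(n)]
    for _ in range(sweeps):  # Gauss-Seidel relaxation of Kirchhoff's law
        for r in range(n):
            for c in range(1, n - 1):  # electrode columns stay fixed
                gsum = flow = 0.0
                for g, vn in ((gh[r][c - 1], v[r][c - 1]),
                              (gh[r][c], v[r][c + 1])):
                    gsum += g
                    flow += g * vn
                if r > 0:
                    gsum += gv[r - 1][c]
                    flow += gv[r - 1][c] * v[r - 1][c]
                if r < n - 1:
                    gsum += gv[r][c]
                    flow += gv[r][c] * v[r + 1][c]
                v[r][c] = flow / gsum  # zero net current into node (r, c)
    # Total current leaving the left electrode = G_eff * (V_left - V_right).
    return sum(gh[r][0] * (v[r][0] - v[r][1]) for r in range(n))

g_eff = effective_conductance()
```

Scaling dependencies like those analyzed in the paper would come from repeating this over many disorder realizations, junction transparencies, and system sizes.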
Levy Random Bridges and the Modelling of Financial Information
Hoyle, Edward; Hughston, Lane P.; Macrina, Andrea
2009-01-01
The information-based asset-pricing framework of Brody, Hughston and Macrina (BHM) is extended to include a wider class of models for market information. In the BHM framework, each asset is associated with a collection of random cash flows. The price of the asset is the sum of the discounted conditional expectations of the cash flows. The conditional expectations are taken with respect to a filtration generated by a set of "information processes". The information processes carry imperfect inf...
Social aggregation in pea aphids: experiment and random walk modeling.
Directory of Open Access Journals (Sweden)
Christa Nilsen
Full Text Available From bird flocks to fish schools and ungulate herds to insect swarms, social biological aggregations are found across the natural world. An ongoing challenge in the mathematical modeling of aggregations is to strengthen the connection between models and biological data by quantifying the rules that individuals follow. We model aggregation of the pea aphid, Acyrthosiphon pisum. Specifically, we conduct experiments to track the motion of aphids walking in a featureless circular arena in order to deduce individual-level rules. We observe that each aphid transitions stochastically between a moving and a stationary state. Moving aphids follow a correlated random walk. The probabilities of motion state transitions, as well as the random walk parameters, depend strongly on distance to an aphid's nearest neighbor. For large nearest neighbor distances, when an aphid is essentially isolated, its motion is ballistic with aphids moving faster, turning less, and being less likely to stop. In contrast, for short nearest neighbor distances, aphids move more slowly, turn more, and are more likely to become stationary; this behavior constitutes an aggregation mechanism. From the experimental data, we estimate the state transition probabilities and correlated random walk parameters as a function of nearest neighbor distance. With the individual-level model established, we assess whether it reproduces the macroscopic patterns of movement at the group level. To do so, we consider three distributions, namely distance to nearest neighbor, angle to nearest neighbor, and percentage of population moving at any given time. For each of these three distributions, we compare our experimental data to the output of numerical simulations of our nearest neighbor model, and of a control model in which aphids do not interact socially. Our stochastic, social nearest neighbor model reproduces salient features of the experimental data that are not captured by the control.
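The individual-level rules described above can be sketched as a two-state correlated random walk. This stdlib-Python toy uses a single stationary neighbor and made-up transition probabilities, turning spreads, and speeds; the paper's actual values are fitted from tracking data:

```python
import math
import random

def simulate_aphid(steps=2000, seed=7):
    """Two-state (moving/stationary) correlated random walk in which the
    stop/go probabilities, turning spread, and speed all depend on the
    distance to the nearest neighbor. All numbers are illustrative."""
    rng = random.Random(seed)
    x = y = 0.0
    heading, moving = 0.0, True
    neighbor = (5.0, 0.0)  # one stationary neighbor, for illustration
    positions = []
    for _ in range(steps):
        d = math.hypot(x - neighbor[0], y - neighbor[1])
        p_stop = 0.4 if d < 2.0 else 0.05   # stop more often when close
        p_go = 0.1 if d < 2.0 else 0.5      # resume more often when isolated
        if moving and rng.random() < p_stop:
            moving = False
        elif not moving and rng.random() < p_go:
            moving = True
        if moving:
            turn_sd = 1.0 if d < 2.0 else 0.2  # turn more when close
            heading += rng.gauss(0.0, turn_sd)
            speed = 0.05 if d < 2.0 else 0.15  # move slower when close
            x += speed * math.cos(heading)
            y += speed * math.sin(heading)
        positions.append((x, y))
    return positions

positions = simulate_aphid()
```

Far from the neighbor the walk is nearly ballistic; near it the walker slows, turns, and stops, which is the aggregation mechanism. Group-level distributions (nearest-neighbor distance, fraction moving) then come from simulating many such walkers together.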
A Predictive Analysis of the Department of Defense Distribution System Utilizing Random Forests
2016-06-01
resources capable of meeting both customer and individual resource constraints and goals while also maximizing the global benefit to the supply...and probability rules to determine the optimal red wine distribution network for an Italian-based wine producer. The decision support model for...combinations of factors that will result in delivery of the highest quality wines.
Analysis of Business Connections Utilizing Theory of Topology of Random Graphs
Trelewicz, Jennifer Q.; Volovich, Igor V.
2006-03-01
A business ecosystem is a system that describes interactions between organizations. In this paper, we build a theoretical framework that defines a model which can be used to analyze the business ecosystem. The basic concepts within the framework are organizations, business connections, and market, that are all defined in the paper. Many researchers analyze the performance and structure of business using the workflow of the business. Our work in business connections answers a different set of questions, concerning the monetary value in the business ecosystem, rather than the task-interaction view that is provided by workflow analysis. We apply methods for analysis of the topology of complex networks, characterized by the concepts of small path length, clustering, and scale-free degree distributions. To model the dynamics of the business ecosystem we analyze the notion of the state of an organization at a given instant of time. We point out that the notion of state in this case is fundamentally different from the concept of state of the system which is used in classical or quantum physics. To describe the state of the organization at a given time one has to know the probability of payments to contracts which in fact depend on the future behavior of the agents on the market. Therefore methods of p-adic analysis are appropriate to explore such a behavior. Microeconomic and macroeconomic factors are indivisible and moreover the actual state of the organization depends on the future. In this framework some simple models are analyzed in detail. Company strategy can be influenced by analysis of models, which can provide a probabilistic understanding of the market, giving degrees of predictability.
CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS.
Shalizi, Cosma Rohilla; Rinaldo, Alessandro
2013-04-01
The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consists only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling, or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGMs' expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses.
High-temperature series expansions for random Potts models
Directory of Open Access Journals (Sweden)
M.Hellmund
2005-01-01
Full Text Available We discuss recently generated high-temperature series expansions for the free energy and the susceptibility of random-bond q-state Potts models on hypercubic lattices. Using the star-graph expansion technique, quenched disorder averages can be calculated exactly for arbitrary uncorrelated coupling distributions while keeping the disorder strength p as well as the dimension d as symbolic parameters. We present analyses of the new series for the susceptibility of the Ising (q=2) and 4-state Potts model in three dimensions up to order 19 and 18, respectively, and compare our findings with results from field-theoretical renormalization group studies and Monte Carlo simulations.
On a Stochastic Failure Model under Random Shocks
Cha, Ji Hwan
2013-02-01
In most conventional settings, the events caused by an external shock are initiated at the moment of its occurrence. In this paper, we study a new class of shock models, where each shock from a nonhomogeneous Poisson process can trigger a failure of a system not immediately, as in classical extreme shock models, but with a delay of some random time. We derive the corresponding survival and failure rate functions. Furthermore, we study the limiting behaviour of the failure rate function where it is applicable.
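The delayed-shock mechanism is straightforward to check by simulation against a closed form. This stdlib-Python sketch specializes to a homogeneous Poisson process and exponential delays so the survival function has a known expression; the rates are our own illustrative choices:

```python
import math
import random

def survival_mc(t, lam=1.0, mu=2.0, trials=20_000, seed=11):
    """Monte Carlo P(T > t) for a shock model where each shock of a
    homogeneous Poisson process (rate lam) causes failure after an
    independent exponential delay (rate mu)."""
    rng = random.Random(seed)
    alive = 0
    for _ in range(trials):
        s, failed = 0.0, False
        while True:
            s += rng.expovariate(lam)           # next shock time
            if s > t:
                break
            if s + rng.expovariate(mu) <= t:    # delayed failure before t
                failed = True
                break
        alive += not failed
    return alive / trials

def survival_exact(t, lam=1.0, mu=2.0):
    # A shock at time s causes failure by t with prob 1 - exp(-mu (t - s)),
    # so failures by t are Poisson with mean lam (t - (1 - exp(-mu t)) / mu).
    return math.exp(-lam * (t - (1.0 - math.exp(-mu * t)) / mu))
```

For a nonhomogeneous process or a general delay law, only the shock-time sampling and the delay draw change; the thinning argument behind `survival_exact` generalizes in the same way.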
Cheung, Mike W.-L.; Cheung, Shu Fai
2016-01-01
Meta-analytic structural equation modeling (MASEM) combines the techniques of meta-analysis and structural equation modeling for the purpose of synthesizing correlation or covariance matrices and fitting structural equation models on the pooled correlation or covariance matrix. Both fixed-effects and random-effects models can be defined in MASEM.…
Discrete random walk models for space-time fractional diffusion
International Nuclear Information System (INIS)
Gorenflo, Rudolf; Mainardi, Francesco; Moretti, Daniele; Pagnini, Gianni; Paradisi, Paolo
2002-01-01
A physical-mathematical approach to anomalous diffusion may be based on generalized diffusion equations (containing derivatives of fractional order in space or/and time) and related random walk models. By space-time fractional diffusion equation we mean an evolution equation obtained from the standard linear diffusion equation by replacing the second-order space derivative with a Riesz-Feller derivative of order α ∈ (0,2] and skewness θ (|θ| ≤ min{α, 2−α}), and the first-order time derivative with a Caputo derivative of order β ∈ (0,1]. Such an evolution equation implies for the flux a fractional Fick's law which accounts for spatial and temporal non-locality. The fundamental solution (for the Cauchy problem) of the fractional diffusion equation can be interpreted as a probability density evolving in time of a peculiar self-similar stochastic process that we view as a generalized diffusion process. By adopting appropriate finite-difference schemes of solution, we generate models of random walk discrete in space and time suitable for simulating random variables whose spatial probability density evolves in time according to this fractional diffusion equation.
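In the classical corner of the parameter range (α = 2, β = 1) the scheme reduces to the ordinary symmetric random walk, whose density evolves by the standard diffusion equation. A stdlib-Python sketch of that limiting case (the fractional schemes in the paper generalize the jump probabilities; step counts here are our own choices):

```python
import random
import statistics

def final_positions(steps=400, walkers=5000, seed=13):
    """Simple symmetric random walk: the alpha = 2, beta = 1 special case.
    Each step is +1 or -1 with probability 1/2."""
    rng = random.Random(seed)
    return [sum(rng.choice((-1, 1)) for _ in range(steps))
            for _ in range(walkers)]

finals = final_positions()
# For ordinary diffusion the variance of the walk grows linearly with the
# number of steps (Var = steps); fractional space/time orders change both
# this scaling and the shape of the limiting density.
var = statistics.pvariance(finals)
```

Replacing the ±1 jumps by heavy-tailed jumps (space-fractional case) or the fixed time step by heavy-tailed waiting times (time-fractional case) gives walks converging to the non-Gaussian fundamental solutions described in the abstract.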
Random regret-based discrete-choice modelling: an application to healthcare.
de Bekker-Grob, Esther W; Chorus, Caspar G
2013-07-01
A new modelling approach for analysing data from discrete-choice experiments (DCEs) has been recently developed in transport economics based on the notion of regret minimization-driven choice behaviour. This so-called Random Regret Minimization (RRM) approach forms an alternative to the dominant Random Utility Maximization (RUM) approach. The RRM approach is able to model semi-compensatory choice behaviour and compromise effects, while being as parsimonious and formally tractable as the RUM approach. Our objectives were to introduce the RRM modelling approach to healthcare-related decisions, and to investigate its usefulness in this domain. Using data from DCEs aimed at determining valuations of attributes of osteoporosis drug treatments and human papillomavirus (HPV) vaccinations, we empirically compared RRM models, RUM models and Hybrid RUM-RRM models in terms of goodness of fit, parameter ratios and predicted choice probabilities. In terms of model fit, the RRM model did not outperform the RUM model significantly in the case of the osteoporosis DCE data (p = 0.21), whereas in the case of the HPV DCE data, the Hybrid RUM-RRM model outperformed the RUM model (p < …), and the predicted choice probabilities implied by the two models can vary substantially. Differences in model fit between RUM, RRM and Hybrid RUM-RRM were found to be small. Although our study did not show significant differences in parameter ratios, the RRM and Hybrid RUM-RRM models did feature considerable differences in terms of the trade-offs implied by these ratios. In combination, our results suggest that the RRM and Hybrid RUM-RRM modelling approaches hold the potential of offering new and policy-relevant insights for health researchers and policy makers.
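The contrast between the two choice rules can be made concrete in a few lines. This stdlib-Python sketch uses the logit form of RUM and the standard RRM regret function; the attribute values and taste parameters are our own illustrative numbers, not estimates from the paper:

```python
import math

def rum_probs(utilities):
    """Multinomial logit (RUM): P(i) ∝ exp(V_i)."""
    e = [math.exp(v) for v in utilities]
    s = sum(e)
    return [x / s for x in e]

def rrm_probs(alternatives, betas):
    """Random Regret Minimization: the regret of alternative i sums
    ln(1 + exp(beta_m * (x_jm - x_im))) over competitors j and
    attributes m; choice probabilities are P(i) ∝ exp(-R_i)."""
    regrets = []
    for i, xi in enumerate(alternatives):
        r = 0.0
        for j, xj in enumerate(alternatives):
            if i == j:
                continue
            r += sum(math.log(1.0 + math.exp(b * (xj[m] - xi[m])))
                     for m, b in enumerate(betas))
        regrets.append(r)
    e = [math.exp(-r) for r in regrets]
    s = sum(e)
    return [x / s for x in e]

# Three hypothetical treatments described by (effectiveness, -cost).
alts = [(0.8, -0.5), (0.5, -0.2), (0.2, -0.1)]
p_rum = rum_probs([1.0, 2.0, 3.0])
p_rrm = rrm_probs(alts, betas=[1.0, 1.0])
```

With these numbers the middle alternative earns the lowest regret and hence the highest RRM probability — a compromise effect of the kind RRM is designed to capture.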
Random matrices and the six-vertex model
Bleher, Pavel
2013-01-01
This book provides a detailed description of the Riemann-Hilbert approach (RH approach) to the asymptotic analysis of both continuous and discrete orthogonal polynomials, and applications to random matrix models as well as to the six-vertex model. The RH approach was an important ingredient in the proofs of universality in unitary matrix models. This book gives an introduction to the unitary matrix models and discusses bulk and edge universality. The six-vertex model is an exactly solvable two-dimensional model in statistical physics, and thanks to the Izergin-Korepin formula for the model with domain wall boundary conditions, its partition function matches that of a unitary matrix model with nonpolynomial interaction. The authors introduce in this book the six-vertex model and include a proof of the Izergin-Korepin formula. Using the RH approach, they explicitly calculate the leading and subleading terms in the thermodynamic asymptotic behavior of the partition function of the six-vertex model with domain wa...
Marginal and Random Intercepts Models for Longitudinal Binary Data with Examples from Criminology
Long, Jeffrey D.; Loeber, Rolf; Farrington, David P.
2009-01-01
Two models for the analysis of longitudinal binary data are discussed: the marginal model and the random intercepts model. In contrast to the linear mixed model (LMM), the two models for binary data are not subsumed under a single hierarchical model. The marginal model provides group-level information whereas the random intercepts model provides…
Universality of correlation functions in random matrix models of QCD
International Nuclear Information System (INIS)
Jackson, A.D.; Sener, M.K.; Verbaarschot, J.J.M.
1997-01-01
We demonstrate the universality of the spectral correlation functions of a QCD inspired random matrix model that consists of a random part having the chiral structure of the QCD Dirac operator and a deterministic part which describes a schematic temperature dependence. We calculate the correlation functions analytically using the technique of Itzykson-Zuber integrals for arbitrary complex supermatrices. An alternative exact calculation for arbitrary matrix size is given for the special case of zero temperature, and we reproduce the well-known Laguerre kernel. At finite temperature, the microscopic limit of the correlation functions is calculated in the saddle-point approximation. The main result of this paper is that the microscopic universality of correlation functions is maintained even though unitary invariance is broken by the addition of a deterministic matrix to the ensemble. (orig.)
Nonparametric Estimation of Distributions in Random Effects Models
Hart, Jeffrey D.
2011-01-01
We propose using minimum distance to obtain nonparametric estimates of the distributions of components in random effects models. A main setting considered is equivalent to having a large number of small datasets whose locations, and perhaps scales, vary randomly, but which otherwise have a common distribution. Interest focuses on estimating the distribution that is common to all datasets, knowledge of which is crucial in multiple testing problems where a location/scale invariant test is applied to every small dataset. A detailed algorithm for computing minimum distance estimates is proposed, and the usefulness of our methodology is illustrated by a simulation study and an analysis of microarray data. Supplemental materials for the article, including R-code and a dataset, are available online. © 2011 American Statistical Association.
Prediction of Geological Subsurfaces Based on Gaussian Random Field Models
Energy Technology Data Exchange (ETDEWEB)
Abrahamsen, Petter
1997-12-31
During the sixties, random functions became practical tools for predicting ore reserves with associated precision measures in the mining industry. This was the start of the geostatistical methods called kriging. These methods are used, for example, in petroleum exploration. This thesis reviews the possibilities for using Gaussian random functions in modelling of geological subsurfaces. It develops methods for including many sources of information and observations for precise prediction of the depth of geological subsurfaces. The simple properties of Gaussian distributions make it possible to calculate optimal predictors in the mean square sense. This is done in a discussion of kriging predictors. These predictors are then extended to deal with several subsurfaces simultaneously. It is shown how additional velocity observations can be used to improve predictions. The use of gradient data and even higher order derivatives are also considered and gradient data are used in an example. 130 refs., 44 figs., 12 tabs.
Pervasive randomness in physics: an introduction to its modelling and spectral characterisation
Howard, Roy
2017-10-01
An introduction to the modelling and spectral characterisation of random phenomena is detailed at a level consistent with a first exposure to the subject at an undergraduate level. A signal framework for defining a random process is provided and this underpins an introduction to common random processes including the Poisson point process, the random walk, the random telegraph signal, shot noise, information signalling random processes, jittered pulse trains, birth-death random processes and Markov chains. An introduction to the spectral characterisation of signals and random processes, via either an energy spectral density or a power spectral density, is detailed. The important case of defining a white noise random process concludes the paper.
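One of the processes listed, the random telegraph signal, makes a compact worked example of spectral characterisation. A stdlib-Python sketch in discrete time (the switching probability and record length are our own illustrative choices):

```python
import random

def random_telegraph(n_steps=20_000, switch_prob=0.01, seed=17):
    """Discrete-time random telegraph signal: the value flips between +1
    and -1 with probability switch_prob per step. Switching at Poisson
    times gives the continuous-time process in the continuum limit."""
    rng = random.Random(seed)
    value, out = 1, []
    for _ in range(n_steps):
        if rng.random() < switch_prob:
            value = -value
        out.append(value)
    return out

def autocorr(sig, lag):
    n = len(sig) - lag
    return sum(sig[i] * sig[i + lag] for i in range(n)) / n

sig = random_telegraph()
# The autocorrelation decays as (1 - 2*switch_prob)**lag, i.e.
# exponentially, which corresponds to a Lorentzian power spectral density.
r50 = autocorr(sig, 50)
```

Taking the Fourier transform of the estimated autocorrelation (or a periodogram of `sig`) exhibits the Lorentzian shape directly, linking the time-domain model to its power spectral density.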
Statistical Downscaling of Temperature with the Random Forest Model
Directory of Open Access Journals (Sweden)
Bo Pang
2017-01-01
Full Text Available The issues with downscaling the outputs of a global climate model (GCM) to a regional scale that are appropriate to hydrological impact studies are investigated using the random forest (RF) model, which has been shown to be superior for large dataset analysis and variable importance evaluation. The RF is proposed for downscaling daily mean temperature in the Pearl River basin in southern China. Four downscaling models were developed and validated by using the observed temperature series from 61 national stations and large-scale predictor variables derived from the National Center for Environmental Prediction–National Center for Atmospheric Research reanalysis dataset. The proposed RF downscaling model was compared to multiple linear regression, artificial neural network, and support vector machine models. Principal component analysis (PCA) and partial correlation analysis (PAR) were used in the predictor selection for the other models for a comprehensive study. It was shown that the model efficiency of the RF model was higher than that of the other models according to five selected criteria. By evaluating the predictor importance, the RF could choose the best predictor combination without using PCA and PAR. The results indicate that the RF is a feasible tool for the statistical downscaling of temperature.
Randomizing growing networks with a time-respecting null model
Ren, Zhuo-Ming; Mariani, Manuel Sebastian; Zhang, Yi-Cheng; Medo, Matúš
2018-05-01
Complex networks are often used to represent systems that are not static but grow with time: People make new friendships, new papers are published and refer to the existing ones, and so forth. To assess the statistical significance of measurements made on such networks, we propose a randomization methodology—a time-respecting null model—that preserves both the network's degree sequence and the time evolution of individual nodes' degree values. By preserving the temporal linking patterns of the analyzed system, the proposed model is able to factor out the effect of the system's temporal patterns on its structure. We apply the model to the citation network of Physical Review scholarly papers and the citation network of US movies. The model reveals that the two data sets are strikingly different with respect to their degree-degree correlations, and we discuss the important implications of this finding on the information provided by paradigmatic node centrality metrics such as indegree and Google's PageRank. The randomization methodology proposed here can be used to assess the significance of any structural property in growing networks, which could bring new insights into the problems where null models play a critical role, such as the detection of communities and network motifs.
Genetic evaluation of European quails by random regression models
Directory of Open Access Journals (Sweden)
Flaviana Miranda Gonçalves
2012-09-01
Full Text Available The objective of this study was to compare different random regression models, defined from different classes of heterogeneity of variance combined with different Legendre polynomial orders for the estimate of (co)variances of quails. The data came from 28,076 observations of 4,507 female meat quails of the LF1 lineage. Quail body weights were determined at birth and 1, 14, 21, 28, 35 and 42 days of age. Six different classes of residual variance were fitted to Legendre polynomial functions (orders ranging from 2 to 6) to determine which model had the best fit to describe the (co)variance structures as a function of time. According to the evaluated criteria (AIC, BIC and LRT), the model with six classes of residual variances and of sixth-order Legendre polynomial was the best fit. The estimated additive genetic variance increased from birth to 28 days of age, and dropped slightly from 35 to 42 days. The heritability estimates decreased along the growth curve and changed from 0.51 (1 day) to 0.16 (42 days). Animal genetic and permanent environmental correlation estimates between weights and age classes were always high and positive, except for birth weight. The sixth-order Legendre polynomial, along with the residual variance divided into six classes, was the best fit for the growth rate curve of meat quails; therefore, they should be considered for breeding evaluation processes by random regression models.
User Utility Oriented Queuing Model for Resource Allocation in Cloud Environment
Directory of Open Access Journals (Sweden)
Zhe Zhang
2015-01-01
Full Text Available Resource allocation is one of the most important research topics in servers. In the cloud environment, there are massive hardware resources of different kinds, and many kinds of services are usually run on virtual machines of the cloud server. In addition, cloud environment is commercialized, and economical factor should also be considered. In order to deal with commercialization and virtualization of cloud environment, we proposed a user utility oriented queuing model for task scheduling. Firstly, we modeled task scheduling in cloud environment as an M/M/1 queuing system. Secondly, we classified the utility into time utility and cost utility and built a linear programming model to maximize total utility for both of them. Finally, we proposed a utility oriented algorithm to maximize the total utility. Massive experiments validate the effectiveness of our proposed model.
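The M/M/1 building block of the model above has closed-form steady-state metrics, and a utility can be layered on top of them. A stdlib-Python sketch (the utility weights, price term, and functional forms are our own illustrative choices, not the paper's linear program):

```python
def mm1_metrics(lam, mu):
    """Steady-state M/M/1 metrics: utilization rho, mean number in system L,
    and mean sojourn time W. Little's law gives L = lam * W. Requires
    lam < mu for stability."""
    assert lam < mu, "queue is unstable"
    rho = lam / mu
    L = rho / (1.0 - rho)
    W = 1.0 / (mu - lam)
    return rho, L, W

def total_utility(lam, mu, time_weight=0.6, cost_weight=0.4, price=1.0):
    """Toy combination of time utility (shorter sojourn is better) and cost
    utility (faster service is assumed to cost more)."""
    _, _, W = mm1_metrics(lam, mu)
    time_utility = 1.0 / (1.0 + W)
    cost_utility = 1.0 / (1.0 + price * mu)
    return time_weight * time_utility + cost_weight * cost_utility
```

Maximizing such a total utility over the service rates allocated to each task class is the optimization the paper formulates as a linear program.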
The Sustainable Energy Utility (SEU) Model for Energy Service Delivery
Houck, Jason; Rickerson, Wilson
2009-01-01
Climate change, energy price spikes, and concerns about energy security have reignited interest in state and local efforts to promote end-use energy efficiency, customer-sited renewable energy, and energy conservation. Government agencies and utilities have historically designed and administered such demand-side measures, but innovative…
Business model innovation for sustainable energy: German utilities and renewable energy
International Nuclear Information System (INIS)
Richter, Mario
2013-01-01
The electric power sector stands at the beginning of a fundamental transformation process towards a more sustainable production based on renewable energies. Consequently, electric utilities as incumbent actors face a massive challenge to find new ways of creating, delivering, and capturing value from renewable energy technologies. This study investigates utilities' business models for renewable energies by analyzing two generic business models based on a series of in-depth interviews with German utility managers. It is found that utilities have developed viable business models for large-scale utility-side renewable energy generation. At the same time, utilities lack adequate business models to commercialize small-scale customer-side renewable energy technologies. By combining the business model concept with innovation and organization theory, practical recommendations for utility managers and policy makers are derived. - Highlights: • The energy transition creates a fundamental business model challenge for utilities. • German utilities succeed in large-scale and fail in small-scale renewable generation. • Experiences from other industries are available to inform utility managers. • Business model innovation capabilities will be crucial to master the energy transition
The utility of Earth system Models of Intermediate Complexity
Weber, S.L.
2010-01-01
Intermediate-complexity models are models which describe the dynamics of the atmosphere and/or ocean in less detail than conventional General Circulation Models (GCMs). At the same time, they go beyond the approach taken by atmospheric Energy Balance Models (EBMs) or ocean box models by
Zero temperature landscape of the random sine-Gordon model
International Nuclear Information System (INIS)
Sanchez, A.; Bishop, A.R.; Cai, D.
1997-01-01
We present a preliminary summary of the zero-temperature properties of the two-dimensional random sine-Gordon model of surface growth on disordered substrates. We find that the properties of this model can be accurately computed using lattices of moderate size, as the behavior of the model turns out to be independent of size above a certain length (∼ 128 × 128 lattices). Subsequently, we show that the behavior of the height-difference correlation function is of (log r)² type up to a certain correlation length (ξ ∼ 20), which rules out predictions of log r behavior for all temperatures obtained by replica-variational techniques. Our results open the way to a better understanding of the complex landscape presented by this system, which has been the subject of many (contradictory) analyses
Exponential random graph models for networks with community structure.
Fronczak, Piotr; Fronczak, Agata; Bujok, Maksymilian
2013-09-01
Although the community structure organization is an important characteristic of real-world networks, most of the traditional network models fail to reproduce the feature. Therefore, the models are useless as benchmark graphs for testing community detection algorithms. They are also inadequate to predict various properties of real networks. With this paper we intend to fill the gap. We develop an exponential random graph approach to networks with community structure. To this end we mainly built upon the idea of blockmodels. We consider both the classical blockmodel and its degree-corrected counterpart and study many of their properties analytically. We show that in the degree-corrected blockmodel, node degrees display an interesting scaling property, which is reminiscent of what is observed in real-world fractal networks. A short description of Monte Carlo simulations of the models is also given in the hope of being useful to others working in the field.
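The classical (non-degree-corrected) blockmodel the paper builds on can be sampled with a few lines: edge probability depends only on the blocks of the two endpoints. The block sizes and probabilities below are illustrative.

```python
import random

# Minimal sampler for the classical stochastic blockmodel: nodes in the same
# block connect with probability p_in, nodes in different blocks with p_out.
def sample_blockmodel(block_sizes, p_in, p_out, seed=0):
    rng = random.Random(seed)
    block = []
    for b, size in enumerate(block_sizes):
        block.extend([b] * size)          # assign consecutive nodes to block b
    n = len(block)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            p = p_in if block[i] == block[j] else p_out
            if rng.random() < p:
                edges.add((i, j))
    return block, edges
```

With p_in well above p_out, the sampled graphs exhibit the community structure that makes them useful benchmarks for community detection algorithms.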
The Little-Hopfield model on a sparse random graph
International Nuclear Information System (INIS)
Castillo, I Perez; Skantzos, N S
2004-01-01
We study the Hopfield model on a random graph in scaling regimes where the average number of connections per neuron is a finite number and the spin dynamics is governed by a synchronous execution of the microscopic update rule (Little-Hopfield model). We solve this model within replica symmetry, and by using bifurcation analysis we prove that the spin-glass/paramagnetic and the retrieval/paramagnetic transition lines of our phase diagram are identical to those of sequential dynamics. The first-order retrieval/spin-glass transition line follows by direct evaluation of our observables using population dynamics. Within the accuracy of numerical precision and for sufficiently small values of the connectivity parameter we find that this line coincides with the corresponding sequential one. Comparison with simulation experiments shows excellent agreement
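The synchronous update rule that distinguishes Little-Hopfield dynamics from sequential dynamics can be sketched directly: all spins are updated simultaneously from the local fields of the previous step. The sparse coupling representation below is an illustrative choice.

```python
# Sketch of one zero-temperature synchronous (Little-Hopfield) update step.
# couplings: dict {(i, j): J_ij} over the edges of a (possibly sparse) graph.
def little_hopfield_step(spins, couplings):
    """spins: list of +/-1. Returns the state after one parallel update."""
    n = len(spins)
    fields = [0.0] * n
    for (i, j), J in couplings.items():   # accumulate local fields h_i
        fields[i] += J * spins[j]
        fields[j] += J * spins[i]
    # parallel sign update; a tie (h = 0) keeps the old spin
    return [1 if h > 0 else -1 if h < 0 else s
            for h, s in zip(fields, spins)]
```

Storing a single pattern with Hebbian couplings J_ij = ξ_i ξ_j / N makes that pattern a fixed point of the parallel dynamics, which is the retrieval property studied in the paper.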
Pedestrian Walking Behavior Revealed through a Random Walk Model
Directory of Open Access Journals (Sweden)
Hui Xiong
2012-01-01
Full Text Available This paper applies the method of continuous-time random walks to pedestrian flow simulation. In the model, pedestrians can walk forward or backward and turn left or right if there is no block. The velocities of pedestrian flow moving forward or diffusing are governed by coefficients. The waiting time preceding each jump is assumed to follow an exponential distribution. To solve the model, which reduces to a second-order two-dimensional partial differential equation, a high-order compact scheme with the alternating direction implicit method is employed. In the numerical experiments, the first walking domain is two-dimensional with two entrances and one exit, and the second is two-dimensional with one entrance and one exit. The flows in both scenarios are one-way. Numerical results show that the model can be used for pedestrian flow simulation.
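The microscopic picture behind the model, jumps on a grid separated by exponentially distributed waiting times, can be sketched with a toy walker. The domain size, jump rate, and blocking rule are illustrative, not the paper's calibrated values.

```python
import random

# Toy continuous-time random walk of a single pedestrian on a 2-D grid:
# exponential waiting times between jumps; a step is blocked at the walls.
def ctrw(steps=1000, rate=1.0, width=50, height=20, seed=1):
    rng = random.Random(seed)
    x, y, t = 0, height // 2, 0.0
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]   # forward/backward/left/right
    for _ in range(steps):
        t += rng.expovariate(rate)               # exponential waiting time
        dx, dy = rng.choice(moves)
        nx, ny = x + dx, y + dy
        if 0 <= nx <= width and 0 <= ny <= height:   # walls block the move
            x, y = nx, ny
    return x, y, t
```

Averaging many such walkers recovers the diffusion-advection behavior that the paper solves at the continuum level with the compact ADI scheme.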
Random isotropic one-dimensional XY-model
Gonçalves, L. L.; Vieira, A. P.
1998-01-01
The 1D isotropic s = ½ XY model (N sites), with random exchange interaction in a transverse random field, is considered. The random variables satisfy bimodal quenched distributions. The solution is obtained by using the Jordan-Wigner fermionization and a canonical transformation, reducing the problem to diagonalizing an N × N matrix, corresponding to a system of N noninteracting fermions. The calculations are performed numerically for N = 1000, and the field-induced magnetization at T = 0 is obtained by averaging the results for the different samples. For the dilute case, in the uniform field limit, the magnetization exhibits various discontinuities, which are the consequence of the existence of disconnected finite clusters distributed along the chain. Also in this limit, for finite exchange constants J_A and J_B, as the probability of J_A varies from one to zero, the saturation field is seen to vary from Γ_A to Γ_B, where Γ_A (Γ_B) is the value of the saturation field for the pure case with exchange constant equal to J_A (J_B).
The random cluster model and a new integration identity
International Nuclear Information System (INIS)
Chen, L C; Wu, F Y
2005-01-01
We evaluate the free energy of the random cluster model at its critical point for 0 < q < 4, for values of q such that π/cos⁻¹(√q/2) is a rational number. As a by-product, our consideration leads to a closed-form evaluation of the integral

1/(4π²) ∫₀^{2π} dΘ ∫₀^{2π} dΦ ln[A + B + C − A cos Θ − B cos Φ − C cos(Θ + Φ)] = −ln(2S) + (2/π)[Ti₂(AS) + Ti₂(BS) + Ti₂(CS)],

which arises in lattice statistics, where A, B, C ≥ 0, S = 1/√(AB + BC + CA), and Ti₂ is the inverse tangent integral.
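The integration identity can be checked numerically with elementary quadrature: a midpoint rule for the torus integral (the integrand has only an integrable logarithmic singularity at the origin) and Simpson's rule for the inverse tangent integral Ti₂(x) = ∫₀^x arctan(t)/t dt. This is a verification sketch, not the paper's derivation.

```python
import math

def ti2(x, n=2000):
    """Inverse tangent integral Ti2(x) by Simpson's rule (n even)."""
    h = x / n
    s = 1.0 + math.atan(x) / x        # endpoints; integrand -> 1 as t -> 0
    for k in range(1, n):
        t = k * h
        s += (4 if k % 2 else 2) * math.atan(t) / t
    return s * h / 3

def lhs(A, B, C, n=400):
    """Midpoint-rule value of the double integral, including the 1/(4*pi^2)."""
    total = 0.0
    for i in range(n):
        th = 2 * math.pi * (i + 0.5) / n
        for j in range(n):
            ph = 2 * math.pi * (j + 0.5) / n
            total += math.log(A + B + C - A * math.cos(th)
                              - B * math.cos(ph) - C * math.cos(th + ph))
    return total / n ** 2

def rhs(A, B, C):
    """Closed form from the identity."""
    S = 1 / math.sqrt(A * B + B * C + C * A)
    return -math.log(2 * S) + (2 / math.pi) * (ti2(A * S) + ti2(B * S) + ti2(C * S))
```

For A = B = C = 1 both sides agree to a few parts in a thousand; this case is also consistent with the known spanning-tree entropy of the triangular lattice, ln 2 + lhs(1, 1, 1) ≈ 1.615.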
Universality in random-walk models with birth and death
International Nuclear Information System (INIS)
Bender, C.M.; Boettcher, S.; Meisinger, P.N.
1995-01-01
Models of random walks are considered in which walkers are born at one site and die at all other sites. Steady-state distributions of walkers exhibit dimensionally dependent critical behavior as a function of the birth rate. Exact analytical results for a hyperspherical lattice yield a second-order phase transition with a nontrivial critical exponent for all positive dimensions D≠2, 4. Numerical studies of hypercubic and fractal lattices indicate that these exact results are universal. This work elucidates the adsorption transition of polymers at curved interfaces. copyright 1995 The American Physical Society
Permeability of model porous medium formed by random discs
Gubaidullin, A. A.; Gubkin, A. S.; Igoshin, D. E.; Ignatev, P. A.
2018-03-01
A two-dimensional model of a porous medium whose skeleton is formed by randomly located overlapping discs is proposed. The geometry and computational grid are built in the open package Salome. The flow of a Newtonian liquid in the longitudinal and transverse directions is calculated and its flow rate defined. The numerical solution of the Navier-Stokes equations for a given pressure drop at the boundaries of the domain is realized in the open package OpenFOAM. The calculated flow rate is used to define the permeability coefficient on the basis of Darcy's law. To evaluate the representativeness of the computational domain, the permeability coefficients in the longitudinal and transverse directions are compared.
Interpreting parameters in the logistic regression model with random effects
DEFF Research Database (Denmark)
Larsen, Klaus; Petersen, Jørgen Holm; Budtz-Jørgensen, Esben
2000-01-01
interpretation, interval odds ratio, logistic regression, median odds ratio, normally distributed random effects
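The median odds ratio named in the keywords, the paper's proposed heterogeneity measure for a logistic model with normally distributed random intercepts, has a short closed form: MOR = exp(√(2σ²) · Φ⁻¹(0.75)). A minimal sketch:

```python
import math
from statistics import NormalDist

# Median odds ratio for a random-intercept logistic regression with
# intercept variance sigma2: the median of the odds ratio between two
# randomly drawn clusters, always >= 1.
def median_odds_ratio(sigma2):
    z75 = NormalDist().inv_cdf(0.75)    # 75th percentile of N(0, 1), ~0.6745
    return math.exp(math.sqrt(2.0 * sigma2) * z75)
```

A variance of zero gives MOR = 1 (no cluster heterogeneity), and the measure grows monotonically with the random-effect variance, which is what makes it directly comparable to fixed-effect odds ratios.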
A Framework for Organizing Current and Future Electric Utility Regulatory and Business Models
Energy Technology Data Exchange (ETDEWEB)
Satchwell, Andrew [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Cappers, Peter [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Schwartz, Lisa [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Fadrhonc, Emily Martin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2015-06-01
In this report, we will present a descriptive and organizational framework for incremental and fundamental changes to regulatory and utility business models in the context of clean energy public policy goals. We will also discuss the regulated utility's role in providing value-added services that relate to distributed energy resources, identify the "openness" of customer information and utility networks necessary to facilitate change, and discuss the relative risks, and the shifting of risks, for utilities and customers.
Geometric Models for Isotropic Random Porous Media: A Review
Directory of Open Access Journals (Sweden)
Helmut Hermann
2014-01-01
Full Text Available Models for random porous media are considered. The models are isotropic both from the local and the macroscopic point of view; that is, the pores have spherical shape or their surface shows piecewise spherical curvature, and there is no macroscopic gradient of any geometrical feature. Both closed-pore and open-pore systems are discussed. The Poisson grain model, the model of hard spheres packing, and the penetrable sphere model are used; variable size distribution of the pores is included. A parameter is introduced which controls the degree of open-porosity. Besides systems built up by a single solid phase, models for porous media with the internal surface coated by a second phase are treated. Volume fraction, surface area, and correlation functions are given explicitly where applicable; otherwise numerical methods for determination are described. Effective medium theory is applied to calculate physical properties for the models such as isotropic elastic moduli, thermal and electrical conductivity, and static dielectric constant. The methods presented are exemplified by applications: small-angle scattering of systems showing fractal-like behavior in limited ranges of linear dimension, optimization of nanoporous insulating materials, and improvement of properties of open-pore systems by atomic layer deposition of a second phase on the internal surface.
Directory of Open Access Journals (Sweden)
Gao Shouguo
2011-08-01
Full Text Available Abstract Background Bayesian Networks (BNs) are a powerful approach to reconstructing genetic regulatory networks from gene expression data. However, expression data by itself suffers from high noise and lack of power. Incorporating prior biological knowledge can improve the performance. As each type of prior knowledge on its own may be incomplete or limited by quality issues, integrating multiple sources of prior knowledge to utilize their consensus is desirable. Results We introduce a new method to incorporate quantitative information from multiple sources of prior knowledge. It first uses a Naïve Bayesian classifier to assess the likelihood of functional linkage between gene pairs based on prior knowledge; in this study we included co-citation in PubMed and semantic similarity in Gene Ontology annotation. A candidate network edge reservoir is then created in which the copy number of each edge is proportional to the estimated likelihood of linkage between the two corresponding genes. In network simulation the Markov Chain Monte Carlo sampling algorithm is adopted, which samples from this reservoir at each iteration to generate new candidate networks. We evaluated the new algorithm using both simulated and real gene expression data, including data from a yeast cell cycle and a mouse pancreas development/growth study. Incorporating prior knowledge led to a ~2-fold increase in the number of known transcription regulations recovered, without significant change in the false positive rate. In contrast, without the prior knowledge BN modeling is not always better than random selection, demonstrating the necessity in network modeling of supplementing the gene expression data with additional information. Conclusion Our new development provides a statistical means to utilize the quantitative information in prior biological knowledge in the BN modeling of gene expression data, which significantly improves performance.
Degnan, Andrew J; Hemingway, Jennifer; Hughes, Danny R
2017-08-01
Vertebral fractures have a substantial impact on the health and quality of life of elderly individuals as one of the most common complications of osteoporosis. Vertebral augmentation procedures including vertebroplasty and kyphoplasty have been supported as means of reducing pain and mitigating disability associated with these fractures. However, use of vertebroplasty is debated, with negative randomized controlled trials published in 2009 and divergent clinical guidelines. The effect of changing evidence and guidelines on different practitioners' utilization of both kyphoplasty and vertebroplasty in the years after these developments and publication of data supporting their use is poorly understood. Using national aggregate Medicare claims data from 2002 through 2014, vertebroplasty and kyphoplasty procedures were identified by provider type. Changes in utilization by procedure type and provider were studied. Total vertebroplasty billing increased 101.6% from 2001 (18,911) through 2008 (38,123). Total kyphoplasty billing frequency increased 17.2% from 2006 (54,329) through 2008 (63,684). Vertebroplasty billing decreased 60.9% from 2008 through 2014 to its lowest value (14,898). Kyphoplasty billing decreased 8.4% from 2008 (63,684) through 2010 (58,346), but then increased 7.6% from 2010 to 2013 (62,804). Vertebroplasty billing decreased substantially beginning in 2009 and continued to decrease through 2014 despite publication of more favorable studies in 2010 to 2012, suggesting studies published in 2009 and AAOS guidelines in 2010 may have had a persistent negative effect. Kyphoplasty did not decrease as substantially and increased in more recent years, suggesting a clinical practice response to favorable studies published during this period. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Modeling energy flexibility of low energy buildings utilizing thermal mass
DEFF Research Database (Denmark)
Foteinaki, Kyriaki; Heller, Alfred; Rode, Carsten
2016-01-01
In the future energy system a considerable increase in the penetration of renewable energy is expected, challenging the stability of the system, as both production and consumption will have fluctuating patterns. Hence, the concept of energy flexibility will be necessary in order for the consumption … to match the production patterns, shifting demand from on-peak hours to off-peak hours. Buildings could act as flexibility suppliers to the energy system, through load shifting potential, provided that the large thermal mass of the building stock could be utilized for energy storage. In the present study … the load shifting potential of an apartment of a low energy building in Copenhagen is assessed, utilizing the heat storage capacity of the thermal mass when the heating system is switched off for relieving the energy system. It is shown that when using a 4-hour preheating period before switching off …
Biswas, Rakesh; Maniam, Jayanthy; Lee, Edwin Wen Huo; Gopal, Premalatha; Umakanth, Shashikiran; Dahiya, Sumit; Ahmed, Sayeed
2008-10-01
The hypothesis in the conceptual model was that a user-driven innovation in presently available information and communication technology infrastructure would be able to meet patient and health professional users' information needs and help them attain better health outcomes. An operational model was created to plan a trial on a sample diabetic population utilizing a randomized controlled trial design, assigning one randomly selected group of diabetics to receive an electronic information intervention and analysing whether it would improve their health outcomes in comparison with a matched diabetic population receiving only regular medical intervention. Diabetes was chosen for this particular trial as it is a major chronic illness in Malaysia, as elsewhere in the world. This is in essence a position paper on how the study concept should be organized, intended to stimulate wider discussion prior to beginning the study.
Gaussian random bridges and a geometric model for information equilibrium
Mengütürk, Levent Ali
2018-03-01
The paper introduces a class of conditioned stochastic processes that we call Gaussian random bridges (GRBs) and proves some of their properties. Due to the anticipative representation of any GRB as the sum of a random variable and a Gaussian (T, 0)-bridge, GRBs can model noisy information processes in partially observed systems. In this spirit, we propose an asset pricing model with respect to what we call information equilibrium in a market with multiple sources of information. The idea is to work on a topological manifold endowed with a metric that enables us to systematically determine an equilibrium point of a stochastic system that can be represented by multiple points on that manifold at each fixed time. In doing so, we formulate GRB-based information diversity over a Riemannian manifold and show that it is pinned to zero over the boundary determined by Dirac measures. We then define an influence factor that controls the dominance of an information source in determining the best estimate of a signal in the L²-sense. When there are two sources, this allows us to construct information equilibrium as a functional of a geodesic-valued stochastic process, which is driven by an equilibrium convergence rate representing the signal-to-noise ratio. This leads us to derive price dynamics under what can be considered as an equilibrium probability measure. We also provide a semimartingale representation of Markovian GRBs associated with Gaussian martingales and a non-anticipative representation of fractional Brownian random bridges that can incorporate degrees of information coupling in a given system via the Hurst exponent.
Kinetic models of cell growth, substrate utilization and bio ...
African Journals Online (AJOL)
STORAGESEVER
2008-05-02
Aspergillus fumigatus. A simple model was proposed using the Logistic Equation for the growth, ... costs and also involved in less sophisticated fermentation ... apply and they are accurately proved that the model can express ...
Cost-utility model of rasagiline in the treatment of advanced Parkinson's disease in Finland.
Hudry, Joumana; Rinne, Juha O; Keränen, Tapani; Eckert, Laurent; Cochran, John M
2006-04-01
The economic burden of Parkinson's disease (PD) is high, especially in patients experiencing motor fluctuations. Rasagiline has demonstrated efficacy against symptoms of PD in early and advanced stages of the disease. To assess the cost-utility of rasagiline and entacapone as adjunctive therapies to levodopa versus standard levodopa care in PD patients with motor fluctuations in Finland. A 2 year probabilistic Markov model with 3 health states: "25% or less off-time/day," "greater than 25% off-time/day," and "dead" was used. Off-time represents time awake with poor or absent motor function. Model inputs included transition probabilities from randomized clinical trials, utilities from a preference measurement study, and costs and resources from a Finnish cost-of-illness study. Effectiveness measures were quality-adjusted life years (QALYs) and number of months spent with 25% or less off-time/day. Uncertainty around parameters was taken into account by Monte Carlo simulations. Over 2 years from a societal perspective, rasagiline or entacapone as adjunctive therapies to levodopa showed greater effectiveness than levodopa alone at no additional costs. Benefits after 2 years were 0.13 (95% CI 0.08 to 0.17) additional QALYs and 5.2 (3.6 to 6.7) additional months for rasagiline and 0.12 (0.08 to 0.17) QALYs and 5.1 (3.5 to 6.6) months for entacapone, both in adjunct to levodopa compared with levodopa alone. The results of this study support the use of rasagiline and entacapone as adjunctive cost-effective alternatives to levodopa alone in PD patients with motor fluctuations in Finland. With a different mode of action, rasagiline is a valuable therapeutic alternative to entacapone at no additional charge to society.
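The 3-state structure described ("25% or less off-time/day", "greater than 25% off-time/day", "dead") can be sketched as a deterministic Markov cohort model. The transition probabilities, cycle length, and utility weights below are illustrative placeholders, not the Finnish model's inputs.

```python
# Minimal Markov cohort sketch of a 3-state cost-utility model. All numbers
# are invented for illustration; a probabilistic version would draw them
# from distributions (Monte Carlo), as in the paper.
STATES = ("low_off", "high_off", "dead")
P = {  # monthly transition probabilities (each row sums to 1)
    "low_off":  {"low_off": 0.90, "high_off": 0.08, "dead": 0.02},
    "high_off": {"low_off": 0.05, "high_off": 0.92, "dead": 0.03},
    "dead":     {"low_off": 0.00, "high_off": 0.00, "dead": 1.00},
}
UTILITY = {"low_off": 0.80, "high_off": 0.60, "dead": 0.0}  # per-year weights

def run_cohort(cycles=24):
    """QALYs accrued over `cycles` monthly cycles; cohort starts in low_off."""
    dist = {"low_off": 1.0, "high_off": 0.0, "dead": 0.0}
    qalys = 0.0
    for _ in range(cycles):
        qalys += sum(dist[s] * UTILITY[s] for s in STATES) / 12  # month = 1/12 year
        dist = {t: sum(dist[s] * P[s][t] for s in STATES) for t in STATES}
    return qalys
```

Running the cohort under two treatment-specific transition matrices and differencing the QALY totals gives the incremental effectiveness that the cost-utility ratio is built on.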
Hoerning, Sebastian; Bardossy, Andras; du Plessis, Jaco
2017-04-01
Most geostatistical inverse groundwater flow and transport modelling approaches utilize a numerical solver to minimize the discrepancy between observed and simulated hydraulic heads and/or hydraulic concentration values. The optimization procedure often requires many model runs, which for complex models lead to long run times. Random Mixing is a promising new geostatistical technique for inverse modelling. The method is an extension of the gradual deformation approach. It works by finding a field which preserves the covariance structure and maintains observed hydraulic conductivities. This field is perturbed by mixing it with new fields that fulfill the homogeneous conditions. This mixing is expressed as an optimization problem which aims to minimize the difference between the observed and simulated hydraulic heads and/or concentration values. To preserve the spatial structure, the mixing weights must lie on the unit hyper-sphere. We present a modification to the Random Mixing algorithm which significantly reduces the number of model runs required. The approach involves taking n equally spaced points on the unit circle as weights for mixing conditional random fields. Each of these mixtures provides a solution to the forward model at the conditioning locations. For each of the locations the solutions are then interpolated around the circle to provide solutions for additional mixing weights at very low computational cost. The interpolated solutions are used to search for a mixture which maximally reduces the objective function. This is in contrast to other approaches which evaluate the objective function for the n mixtures and then interpolate the obtained values. Keeping the mixture on the unit circle makes it easy to generate equidistant sampling points in the space; however, this means that only two fields are mixed at a time. Once the optimal mixture for two fields has been found, they are combined to form the input to the next iteration of the algorithm. This
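The core mechanism of Random Mixing, combining two fields with weights on the unit circle so that the covariance structure is preserved, can be illustrated with a toy example. White noise stands in for a conditional random field here; this is an assumption for brevity, not the method's full machinery.

```python
import math
import random

# Mixing two zero-mean fields with weights (cos a, sin a) on the unit circle:
# since cos^2 a + sin^2 a = 1, the mixture keeps the variance/covariance
# structure of the inputs, so every mixture remains a valid candidate field.
def mix(field1, field2, angle):
    c, s = math.cos(angle), math.sin(angle)
    return [c * x + s * y for x, y in zip(field1, field2)]

def sample_variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
```

Sweeping the angle then traces a one-parameter family of fields, and the interpolation trick in the abstract exploits the smooth dependence of the forward-model response on that single angle.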
Maintenance overtime policies in reliability theory models with random working cycles
Nakagawa, Toshio
2015-01-01
This book introduces a new concept of replacement in maintenance and reliability theory. Replacement overtime, where replacement occurs at the first completion of a working cycle over a planned time, is a new research topic in maintenance theory and also serves to provide a fresh optimization technique in reliability engineering. In comparing replacement overtime with standard and random replacement techniques theoretically and numerically, 'Maintenance Overtime Policies in Reliability Theory' highlights the key benefits to be gained by adopting this new approach and shows how they can be applied to inspection policies, parallel systems and cumulative damage models. Utilizing the latest research in replacement overtime by internationally recognized experts, readers are introduced to new topics and methods, and learn how to practically apply this knowledge to actual reliability models. This book will serve as an essential guide to a new subject of study for graduate students and researchers and also provides a...
Joint modeling of ChIP-seq data via a Markov random field model
Bao, Yanchun; Vinciotti, Veronica; Wit, Ernst; 't Hoen, Peter A C
Chromatin ImmunoPrecipitation-sequencing (ChIP-seq) experiments have now become routine in biology for the detection of protein-binding sites. In this paper, we present a Markov random field model for the joint analysis of multiple ChIP-seq experiments. The proposed model naturally accounts for
Bayesian Hierarchical Random Effects Models in Forensic Science
Directory of Open Access Journals (Sweden)
Colin G. G. Aitken
2018-04-01
Full Text Available Statistical modeling of the evaluation of evidence with the use of the likelihood ratio has a long history. It dates from the Dreyfus case at the end of the nineteenth century through the work at Bletchley Park in the Second World War to the present day. The development received a significant boost in 1977 with a seminal work by Dennis Lindley which introduced a Bayesian hierarchical random effects model for the evaluation of evidence with an example of refractive index measurements on fragments of glass. Many models have been developed since then. The methods have now been sufficiently well-developed and have become so widespread that it is timely to try and provide a software package to assist in their implementation. With that in mind, a project (SAILR: Software for the Analysis and Implementation of Likelihood Ratios) was funded by the European Network of Forensic Science Institutes through their Monopoly programme to develop a software package for use by forensic scientists world-wide that would assist in the statistical analysis and implementation of the approach based on likelihood ratios. It is the purpose of this document to provide a short review of a small part of this history. The review also provides a background, or landscape, for the development of some of the models within the SAILR package, and references to SAILR are made as appropriate.
Bayesian Hierarchical Random Effects Models in Forensic Science.
Aitken, Colin G G
2018-01-01
Statistical modeling of the evaluation of evidence with the use of the likelihood ratio has a long history. It dates from the Dreyfus case at the end of the nineteenth century through the work at Bletchley Park in the Second World War to the present day. The development received a significant boost in 1977 with a seminal work by Dennis Lindley which introduced a Bayesian hierarchical random effects model for the evaluation of evidence with an example of refractive index measurements on fragments of glass. Many models have been developed since then. The methods have now been sufficiently well-developed and have become so widespread that it is timely to try and provide a software package to assist in their implementation. With that in mind, a project (SAILR: Software for the Analysis and Implementation of Likelihood Ratios) was funded by the European Network of Forensic Science Institutes through their Monopoly programme to develop a software package for use by forensic scientists world-wide that would assist in the statistical analysis and implementation of the approach based on likelihood ratios. It is the purpose of this document to provide a short review of a small part of this history. The review also provides a background, or landscape, for the development of some of the models within the SAILR package, and references to SAILR are made as appropriate.
Percolation for a model of statistically inhomogeneous random media
International Nuclear Information System (INIS)
Quintanilla, J.; Torquato, S.
1999-01-01
We study clustering and percolation phenomena for a model of statistically inhomogeneous two-phase random media, including functionally graded materials. This model consists of inhomogeneous fully penetrable (Poisson distributed) disks and can be constructed for any specified variation of volume fraction. We quantify the transition zone in the model, defined by the frontier of the cluster of disks which are connected to the disk-covered portion of the model, by defining the coastline function and correlation functions for the coastline. We find that the behavior of these functions becomes largely independent of the specific choice of grade in volume fraction as the separation of length scales becomes large. We also show that the correlation function behaves in a manner similar to that of fractal Brownian motion. Finally, we study fractal characteristics of the frontier itself and compare to similar properties for two-dimensional percolation on a lattice. In particular, we show that the average location of the frontier appears to be related to the percolation threshold for homogeneous fully penetrable disks. copyright 1999 American Institute of Physics
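The homogeneous building block of this model, clustering of fully penetrable (Poisson distributed) disks, can be sketched with a union-find pass over pairwise overlaps. The intensity, box size, and radius below are illustrative.

```python
import random

# Sketch: drop n fully penetrable disks uniformly in a square box and group
# overlapping disks into clusters with union-find (path compression).
def cluster_disks(n=200, box=10.0, radius=0.5, seed=3):
    rng = random.Random(seed)
    pts = [(rng.uniform(0, box), rng.uniform(0, box)) for _ in range(n)]
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            dx = pts[i][0] - pts[j][0]
            dy = pts[i][1] - pts[j][1]
            if dx * dx + dy * dy <= (2 * radius) ** 2:   # disks overlap
                parent[find(i)] = find(j)
    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())
```

In the inhomogeneous model the disk intensity varies with position, so the frontier of the cluster connected to the disk-covered side is what defines the coastline studied in the paper.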
Expected utility without utility
Castagnoli, E.; Licalzi, M.
1996-01-01
This paper advances an interpretation of Von Neumann–Morgenstern’s expected utility model for preferences over lotteries which does not require the notion of a cardinal utility over prizes and can be phrased entirely in the language of probability. According to it, the expected utility of a lottery can be read as the probability that this lottery outperforms another given independent lottery. The implications of this interpretation for some topics and models in decision theory are considered....
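The interpretation, expected utility of a lottery read as the probability that it outperforms an independent benchmark lottery, can be made concrete for finite lotteries with exact rational arithmetic. The tie-free formulation below is an illustrative simplification.

```python
from fractions import Fraction
from itertools import product

# P(X > Y) for two independent finite lotteries, each given as a list of
# (prize, probability) pairs with Fraction probabilities. Reading the
# benchmark's CDF as the "utility" of a prize, this is the expected utility
# of X in the paper's probabilistic sense.
def prob_outperforms(X, Y):
    return sum(px * py
               for (x, px), (y, py) in product(X, Y)
               if x > y)
```

For example, a 50/50 lottery over prizes 10 and 0 outperforms a sure prize of 5 with probability 1/2, which is exactly its "expected utility" against that benchmark.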
Recent advances in modeling nutrient utilization in ruminants1
Kebreab, E.; Dijkstra, J.; Bannink, A.; France, J.
2009-01-01
Mathematical modeling techniques have been applied to study various aspects of the ruminant, such as rumen function, post-absorptive metabolism and product composition. This review focuses on advances made in modeling rumen fermentation and its associated rumen disorders, and energy and nutrient
Electric power bidding model for practical utility system
Directory of Open Access Journals (Sweden)
M. Prabavathi
2018-03-01
Full Text Available A competitive open market environment has been created by the restructuring of the electricity market. In the new competitive market, a centrally operated pool with a power exchange has mostly been introduced to match the offers of competing suppliers with the bids of customers. In such an open access environment, the formulation of a bidding strategy is one of the most challenging and important tasks for electricity participants seeking to maximize their profit. To build bidding strategies for power suppliers and consumers in the restructured electricity market, a new mathematical framework is proposed in this paper. It is assumed that each participant submits several blocks of real power quantities along with their bidding prices. The effectiveness of the proposed method is tested on the Indian Utility 62-bus system and the IEEE 118-bus system. Keywords: Bidding strategy, Day ahead electricity market, Market clearing price, Market clearing volume, Block bid, Intermediate value theorem
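The pool mechanism the framework assumes, clearing block bids by intersecting the aggregate supply and demand curves, can be sketched directly. Pricing at the marginal accepted supply block is one common convention, adopted here as an assumption.

```python
# Sketch of pool market clearing from block bids. supply and demand are
# lists of (quantity_MW, price) blocks; the clearing volume is where the
# ascending supply curve meets the descending demand curve.
def clear_market(supply, demand):
    supply = sorted(supply, key=lambda b: b[1])     # cheapest supply first
    demand = sorted(demand, key=lambda b: -b[1])    # highest-value demand first
    volume, price = 0.0, None
    si = di = 0
    s_left, d_left = supply[0][0], demand[0][0]
    while si < len(supply) and di < len(demand) and demand[di][1] >= supply[si][1]:
        traded = min(s_left, d_left)
        volume += traded
        price = supply[si][1]          # marginal accepted supply price
        s_left -= traded
        d_left -= traded
        if s_left == 0:
            si += 1
            if si < len(supply):
                s_left = supply[si][0]
        if d_left == 0:
            di += 1
            if di < len(demand):
                d_left = demand[di][0]
    return volume, price
```

With supply blocks (100 MW at 10, 100 at 20, 100 at 30) and demand blocks (150 MW at 25, 100 at 15), 150 MW clears and the last accepted supply block sets the price at 20.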
International Nuclear Information System (INIS)
Soederberg, Magnus
2008-01-01
The Swedish Electricity Act states that electricity distribution must comply with both price and quality requirements. In order to maintain efficient regulation it is necessary to firstly, define quality attributes and secondly, determine a customer's priorities concerning price and quality attributes. If distribution utilities gain an understanding of customer preferences and incentives for reporting them, the regulator can save a lot of time by surveying them rather than their customers. This study applies a choice modelling methodology where utilities and industrial customers are asked to evaluate the same twelve choice situations in which price and four specific quality attributes are varied. The preferences expressed by the utilities, and estimated by a random parameter logit, correspond quite well with the preferences expressed by the largest industrial customers. The preferences expressed by the utilities are reasonably homogenous in relation to forms of association (private limited, public and trading partnership). If the regulator acts according to the preferences expressed by the utilities, smaller industrial customers will have to pay for quality they have not asked for. (author)
Droplet localization in the random XXZ model and its manifestations
Elgart, A.; Klein, A.; Stolz, G.
2018-01-01
We examine many-body localization properties for the eigenstates that lie in the droplet sector of the random-field spin-1/2 XXZ chain. These states satisfy a basic single cluster localization property (SCLP), derived in Elgart et al (2018 J. Funct. Anal. (in press)). This leads to many consequences, including dynamical exponential clustering, non-spreading of information under the time evolution, and a zero velocity Lieb-Robinson bound. Since SCLP is only applicable to the droplet sector, our definitions and proofs do not rely on knowledge of the spectral and dynamical characteristics of the model outside this regime. Rather, to allow for a possible mobility transition, we adapt the notion of restricting the Hamiltonian to an energy window from the single particle setting to the many body context.
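For orientation, the random-field spin-1/2 XXZ chain in its Ising phase is commonly written in a form like the following (conventions and normalizations vary between papers; this is a standard textbook form, not necessarily the authors' exact one):

```latex
H = \sum_{i=1}^{L-1}\left[\frac14 - S^{z}_{i} S^{z}_{i+1}
      - \frac{1}{2\Delta}\left(S^{x}_{i} S^{x}_{i+1} + S^{y}_{i} S^{y}_{i+1}\right)\right]
    + \lambda \sum_{i=1}^{L} \omega_{i}\left(\frac12 - S^{z}_{i}\right),
    \qquad \Delta > 1,
```

where the $\omega_i$ are i.i.d. random couplings. The droplet sector referred to above consists of the low-energy states in which the down-spins form a single cluster over the all-up ground state.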
[Criticism of the additive model of randomized controlled trials].
Boussageon, Rémy; Gueyffier, François; Bejan-Angoulvant, Theodora; Felden-Dominiak, Géraldine
2008-01-01
Randomized, double-blind, placebo-controlled clinical trials are currently the best way to demonstrate the clinical effectiveness of drugs. Their methodology relies on the method of difference (John Stuart Mill), through which the observed difference between two groups (drug vs placebo) can be attributed to the pharmacological effect of the drug being tested. However, this additive model can be questioned in the event of statistical interactions between the pharmacological and the placebo effects. Evidence in different domains has shown that the placebo effect can influence the effect of the active principle. This article evaluates the methodological, clinical and epistemological consequences of this phenomenon. Topics treated include extrapolating results, accounting for heterogeneous results, demonstrating the existence of several factors in the placebo effect, the necessity to take these factors into account for given symptoms or pathologies, as well as the problem of the "specific" effect.
Stochastic equilibria of an asset pricing model with heterogeneous beliefs and random dividends
Zhu, M.; Wang, D.; Guo, M.
2011-01-01
We investigate dynamical properties of a heterogeneous agent model with random dividends and further study the relationship between dynamical properties of the random model and those of the corresponding deterministic skeleton, which is obtained by setting the random dividends as their constant mean
Multiscale model of short cracks in a random polycrystalline aggregate
International Nuclear Information System (INIS)
Simonovski, I.; Cizelj, L.; Petric, Z.
2006-01-01
A plane-strain finite element crystal plasticity model of a microstructurally small stationary crack emanating at a surface grain in 316L stainless steel is proposed. The model, consisting of 212 randomly shaped, sized and oriented grains, is loaded monotonically in uniaxial tension to a maximum load of 1.12 Rp0.2 (280 MPa). The influence that a random grain structure imposes on a Stage I crack is assessed by calculating the crack tip opening displacement (CTOD) and crack tip sliding displacement (CTSD) for single crystal as well as polycrystal models, considering different crystallographic orientations. In the single crystal case the CTOD and CTSD may differ by more than one order of magnitude. Near the crack tip, slip is activated on all the slip planes, whereas only two are active in the rest of the model. The maximum CTOD is directly related to the maximum Schmid factors. For the more complex polycrystal cases it is shown that certain crystallographic orientations result in a cluster of soft grains around the crack-containing grain. In these cases the crack tip can become part of the localized strain, resulting in a large CTOD value. This effect, resulting from the overall grain orientations and sizes, can have a greater impact on the CTOD than the local grain orientation. On the other hand, when a localized soft response forms away from the crack, the localized strain does not affect the crack tip directly, resulting in a small CTOD value. The resulting difference in CTOD can be up to a factor of 4, depending on the crystallographic set. Grains as far away as 6 times the crack length significantly influence the crack tip parameters. It was also found that a larger crack-containing grain tends to increase the CTOD. Finally, a smaller than expected drop in the CTOD (12.7%) was obtained as the crack approached the grain boundary. This could be due to the assumptions of unchanged crack direction, monotonic-only loading and simplified grain boundary modelling. (author)
Directory of Open Access Journals (Sweden)
Wenzhi Wang
2016-07-01
Modeling the random fiber distribution of a fiber-reinforced composite is of great importance for studying the progressive failure behavior of the material on the micro scale. In this paper, we develop a new algorithm for generating random representative volume elements (RVEs) with a statistically equivalent fiber distribution to the actual material microstructure. Realistic statistical data are utilized as inputs to the new method, which is achieved through implementation of the probability equations. Extensive statistical analysis is conducted to examine the capability of the proposed method and to compare it with existing methods. It is found that the proposed method presents a good match with experimental results in all aspects, including the nearest neighbor distance, nearest neighbor orientation, Ripley's K function, and the radial distribution function. Finite element analysis is presented to predict the effective elastic properties of a carbon/epoxy composite, to validate the generated random representative volume elements, and to provide insights into the effect of fiber distribution on the elastic properties. The present algorithm is shown to be highly accurate and can be used to generate statistically equivalent RVEs not only for fiber-reinforced composites but also for other materials such as foam materials and particle-reinforced composites.
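The paper's algorithm matches target statistics such as nearest-neighbor distance; as a simpler baseline for comparison, a hard-core random sequential adsorption (RSA) sketch that places non-overlapping circular fibers in a square RVE looks like this (all names and parameters are illustrative, not the authors' method):

```python
import math
import random

random.seed(42)

def generate_rve(n_fibers=30, rve_size=1.0, radius=0.05, max_tries=100000):
    """Place non-overlapping circular fiber centers in a square RVE (RSA)."""
    centers = []
    tries = 0
    while len(centers) < n_fibers and tries < max_tries:
        tries += 1
        x = random.uniform(radius, rve_size - radius)
        y = random.uniform(radius, rve_size - radius)
        # accept only if the new fiber overlaps no existing fiber
        if all(math.hypot(x - cx, y - cy) >= 2 * radius for cx, cy in centers):
            centers.append((x, y))
    return centers

centers = generate_rve()

# Nearest-neighbor distance, one of the descriptors compared in the paper.
nnd = [min(math.hypot(x - cx, y - cy) for cx, cy in centers if (cx, cy) != (x, y))
       for x, y in centers]
print(len(centers), round(sum(nnd) / len(nnd), 3))
```

Plain RSA is known to reproduce actual microstructures poorly at high fiber volume fractions, which is precisely the gap that statistically equivalent generators such as the one proposed here aim to close.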
Marginal Utility of Conditional Sensitivity Analyses for Dynamic Models
Background/Question/Methods: Dynamic ecological processes may be influenced by many factors. Simulation models that mimic these processes often have complex implementations with many parameters. Sensitivity analyses are subsequently used to identify critical parameters whose uncertai...
Energy Technology Data Exchange (ETDEWEB)
Xu, Zhijie; Tartakovsky, Alexandre M.
2017-09-01
This work presents a hierarchical model for solute transport in bounded layered porous media with random permeability. The model generalizes the Taylor-Aris dispersion theory to stochastic transport in random layered porous media with a known velocity covariance function. In the hierarchical model, we represent (random) concentration in terms of its cross-sectional average and a variation function. We derive a one-dimensional stochastic advection-dispersion-type equation for the average concentration and a stochastic Poisson equation for the variation function, as well as expressions for the effective velocity and dispersion coefficient. We observe that velocity fluctuations enhance dispersion in a non-monotonic fashion: the dispersion initially increases with correlation length λ, reaches a maximum, and decreases to zero at infinity. Maximum enhancement is obtained at a correlation length of about 0.25 times the size of the porous medium perpendicular to the flow.
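In symbols, the hierarchical decomposition described above splits the concentration into a cross-sectional average and a variation, with the average obeying a one-dimensional equation of advection-dispersion type (schematic form in my own notation, not necessarily the paper's):

```latex
c(x, y, t) = \bar{c}(x, t) + c'(x, y, t),
\qquad
\frac{\partial \bar{c}}{\partial t}
  + v_{\mathrm{eff}}\,\frac{\partial \bar{c}}{\partial x}
  = D_{\mathrm{eff}}\,\frac{\partial^{2} \bar{c}}{\partial x^{2}},
```

where $v_{\mathrm{eff}}$ and $D_{\mathrm{eff}}$ are the effective velocity and dispersion coefficient computed from the velocity covariance; the non-monotonic dependence of $D_{\mathrm{eff}}$ on the correlation length $\lambda$ is the key finding quoted above.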
Measurement model choice influenced randomized controlled trial results.
Gorter, Rosalie; Fox, Jean-Paul; Apeldoorn, Adri; Twisk, Jos
2016-11-01
In randomized controlled trials (RCTs), outcome variables are often patient-reported outcomes measured with questionnaires. Ideally, all available item information is used for score construction, which requires an item response theory (IRT) measurement model. However, in practice, the classical test theory measurement model (sum scores) is mostly used, and differences between response patterns leading to the same sum score are ignored. The enhanced differentiation between scores with IRT enables more precise estimation of individual trajectories over time and group effects. The objective of this study was to show the advantages of using IRT scores instead of sum scores when analyzing RCTs. Two studies are presented, a real-life RCT, and a simulation study. Both IRT and sum scores are used to measure the construct and are subsequently used as outcomes for effect calculation. The bias in RCT results is conditional on the measurement model that was used to construct the scores. A bias in estimated trend of around one standard deviation was found when sum scores were used, where IRT showed negligible bias. Accurate statistical inferences are made from an RCT study when using IRT to estimate construct measurements. The use of sum scores leads to incorrect RCT results. Copyright © 2016 Elsevier Inc. All rights reserved.
Evaluating the performance and utility of regional climate models
DEFF Research Database (Denmark)
Christensen, Jens H.; Carter, Timothy R.; Rummukainen, Markku
2007-01-01
This special issue of Climatic Change contains a series of research articles documenting co-ordinated work carried out within a 3-year European Union project 'Prediction of Regional scenarios and Uncertainties for Defining European Climate change risks and Effects' (PRUDENCE). The main objective of the PRUDENCE project was to provide high resolution climate change scenarios for Europe at the end of the twenty-first century by means of dynamical downscaling (regional climate modelling) of global climate simulations. The first part of the issue comprises seven overarching PRUDENCE papers on: (1) the design of the model simulations and analyses of climate model performance, (2 and 3) evaluation and intercomparison of simulated climate changes, (4 and 5) specialised analyses of impacts on water resources and on other sectors including agriculture, ecosystems, energy, and transport, (6) investigation of extreme...
On the Utility of Island Models in Dynamic Optimization
DEFF Research Database (Denmark)
Lissovoi, Andrei; Witt, Carsten
2015-01-01
A simple island model with λ islands and migration occurring after every τ iterations is studied on the dynamic fitness function Maze. This model is equivalent to a (1+λ) EA if τ = 1, i.e., migration occurs during every iteration. It is proved that even for an increased offspring population size up to λ = O(n^(1-ε)), the (1+λ) EA is still not able to track the optimum of Maze. If the migration interval is increased, the algorithm is able to track the optimum even for logarithmic λ. Finally, the relationship of τ, λ, and the ability of the island model to track the optimum is investigated more closely.
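The (1+λ) EA referenced above keeps a single parent and produces λ offspring per generation by standard bit mutation, keeping the best individual. A minimal sketch on the static OneMax benchmark (Maze itself is a dynamic function and is not reproduced here; names and parameters are illustrative):

```python
import random

random.seed(0)

def one_max(bits):
    """OneMax fitness: number of 1-bits."""
    return sum(bits)

def one_plus_lambda_ea(n=40, lam=8, generations=2000):
    """(1+λ) EA: keep the best of the parent and λ mutants each generation."""
    parent = [random.randint(0, 1) for _ in range(n)]
    for _ in range(generations):
        best = parent
        for _ in range(lam):
            # standard bit mutation: flip each bit independently with prob 1/n
            child = [b ^ (random.random() < 1.0 / n) for b in parent]
            if one_max(child) >= one_max(best):
                best = child
        parent = best
        if one_max(parent) == n:
            break
    return parent

result = one_plus_lambda_ea()
print(one_max(result))
```

An island model with τ > 1 would instead run λ such populations independently and only exchange individuals every τ iterations; the paper's result is that this delayed migration, not the offspring population size, is what enables tracking the Maze optimum.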
Jeong, Chan-Seok; Kim, Dongsup
2016-02-24
Elucidating the cooperative mechanism of interconnected residues is an important component toward understanding the biological function of a protein. Coevolution analysis has been developed to model the coevolutionary information reflecting structural and functional constraints. Recently, several methods have been developed based on a probabilistic graphical model called the Markov random field (MRF), which have led to significant improvements for coevolution analysis; however, thus far, the performance of these models has mainly been assessed by focusing on the aspect of protein structure. In this study, we built an MRF model whose graphical topology is determined by the residue proximity in the protein structure, and derived a novel positional coevolution estimate utilizing the node weight of the MRF model. This structure-based MRF method was evaluated for three data sets, each of which annotates catalytic site, allosteric site, and comprehensively determined functional site information. We demonstrate that the structure-based MRF architecture can encode the evolutionary information associated with biological function. Furthermore, we show that the node weight can more accurately represent positional coevolution information compared to the edge weight. Lastly, we demonstrate that the structure-based MRF model can be reliably built with only a few aligned sequences in linear time. The results show that adoption of a structure-based architecture could be an acceptable approximation for coevolution modeling with efficient computation complexity.
Recursive inter-generational utility in global climate risk modeling
Energy Technology Data Exchange (ETDEWEB)
Minh, Ha-Duong [Centre International de Recherche sur l' Environnement et le Developpement (CIRED-CNRS), 75 - Paris (France); Treich, N. [Institut National de Recherches Agronomiques (INRA-LEERNA), 31 - Toulouse (France)
2003-07-01
This paper distinguishes relative risk aversion and resistance to inter-temporal substitution in climate risk modeling. Stochastic recursive preferences are introduced in a stylized numeric climate-economy model using preliminary IPCC 1998 scenarios. It shows that higher risk aversion increases the optimal carbon tax. Higher resistance to inter-temporal substitution alone has the same effect as increasing the discount rate, provided that the risk is not too large. We discuss implications of these findings for the debate upon discounting and sustainability under uncertainty. (author)
Utilization-Based Modeling and Optimization for Cognitive Radio Networks
Liu, Yanbing; Huang, Jun; Liu, Zhangxiong
The cognitive radio technique promises to manage and allocate the scarce radio spectrum in highly varying and disparate modern environments. This paper considers a cognitive radio scenario composed of two queues for the primary (licensed) users and cognitive (unlicensed) users. Based on the Markov process, the system state equations are derived and an optimization model for the system is proposed. Next, the system performance is evaluated through calculations which confirm the soundness of the system model. Furthermore, discussions of different parameters for the system are presented based on the experimental results.
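To make the two-queue Markov setup concrete, here is a hedged sketch (all rates, buffer sizes, and the priority rule are illustrative assumptions, not the paper's model): the joint state is the pair of queue lengths, a generator matrix encodes arrivals and services, and the stationary distribution solves the state equations.

```python
import itertools
import numpy as np

# Illustrative parameters: arrival/service rates and small finite buffers
# for primary (licensed) and cognitive (unlicensed) users.
LAM_P, MU_P = 0.6, 1.0   # primary arrival / service rate
LAM_C, MU_C = 0.8, 1.2   # cognitive arrival / service rate
CAP_P, CAP_C = 3, 3      # buffer capacities

states = list(itertools.product(range(CAP_P + 1), range(CAP_C + 1)))
index = {s: k for k, s in enumerate(states)}
n = len(states)
Q = np.zeros((n, n))     # CTMC generator matrix

for (p, c), k in index.items():
    if p < CAP_P:                    # primary arrival
        Q[k, index[(p + 1, c)]] += LAM_P
    if c < CAP_C:                    # cognitive arrival
        Q[k, index[(p, c + 1)]] += LAM_C
    if p > 0:                        # primary service (licensed users have priority)
        Q[k, index[(p - 1, c)]] += MU_P
    if p == 0 and c > 0:             # cognitive users served only when band is free
        Q[k, index[(p, c - 1)]] += MU_C
    Q[k, k] = -Q[k].sum()            # diagonal makes each row sum to zero

# Stationary distribution: solve pi @ Q = 0 subject to sum(pi) = 1.
A = np.vstack([Q.T, np.ones(n)])
b = np.zeros(n + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

utilization = 1.0 - pi[index[(0, 0)]]   # probability the system is non-empty
print(round(utilization, 3))
```

Performance measures such as blocking probabilities or mean queue lengths follow from the same stationary vector, which is the role the state equations play in the paper's optimization model.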
User-owned utility models for rural electrification
Energy Technology Data Exchange (ETDEWEB)
Waddle, D.
1997-12-01
The author discusses the history of rural electric cooperatives (REC) in the United States, and the broader question of whether such organizations can serve as a model for rural electrification in other countries. The author points out the features of such cooperatives which have given them stability and strength, and emphasizes that for such programs to succeed, many of these same features must be present. He argues that cooperative models are not outdated, but that they need strong local support and a governmental structure that is supportive, or at a minimum not obstructive.
Modeling Resource Utilization of a Large Data Acquisition System
The ATLAS collaboration; Garcia Garcia, Pedro Javier; Vandelli, Wainer; Froening, Holger
2017-01-01
The ATLAS 'Phase-II' upgrade, scheduled to start in 2024, will significantly change the requirements under which the data-acquisition system operates. The input data rate, currently fixed around 150 GB/s, is anticipated to reach 5 TB/s. In order to deal with the challenging conditions, and exploit the capabilities of newer technologies, a number of architectural changes are under consideration. Of particular interest is a new component, known as the Storage Handler, which will provide a large buffer area decoupling real-time data taking from event filtering. Dynamic operational models of the upgraded system can be used to identify the required resources and to select optimal techniques. In order to achieve a robust and dependable model, the current data-acquisition architecture has been used as a test case. This makes it possible to verify and calibrate the model against real operation data. Such a model can then be evolved toward the future ATLAS Phase-II architecture. In this paper we introduce the current ...
Modelling Resource Utilization of a Large Data Acquisition System
Santos, Alejandro; The ATLAS collaboration
2017-01-01
The ATLAS 'Phase-II' upgrade, scheduled to start in 2024, will significantly change the requirements under which the data-acquisition system operates. The input data rate, currently fixed around 150 GB/s, is anticipated to reach 5 TB/s. In order to deal with the challenging conditions, and exploit the capabilities of newer technologies, a number of architectural changes are under consideration. Of particular interest is a new component, known as the Storage Handler, which will provide a large buffer area decoupling real-time data taking from event filtering. Dynamic operational models of the upgraded system can be used to identify the required resources and to select optimal techniques. In order to achieve a robust and dependable model, the current data-acquisition architecture has been used as a test case. This makes it possible to verify and calibrate the model against real operation data. Such a model can then be evolved toward the future ATLAS Phase-II architecture. In this paper we introduce the current ...
Household time allocation model based on a group utility function
Zhang, J.; Borgers, A.W.J.; Timmermans, H.J.P.
2002-01-01
Existing activity-based models typically assume an individual decision-making process. In household decision-making, however, interaction exists among household members and their activities during the allocation of the members' limited time. This paper, therefore, attempts to develop a new household
Discriminative Random Field Models for Subsurface Contamination Uncertainty Quantification
Arshadi, M.; Abriola, L. M.; Miller, E. L.; De Paolis Kaluza, C.
2017-12-01
Application of flow and transport simulators for prediction of the release, entrapment, and persistence of dense non-aqueous phase liquids (DNAPLs) and associated contaminant plumes is a computationally intensive process that requires specification of a large number of material properties and hydrologic/chemical parameters. Given its computational burden, this direct simulation approach is particularly ill-suited for quantifying both the expected performance and uncertainty associated with candidate remediation strategies under real field conditions. Prediction uncertainties primarily arise from limited information about contaminant mass distributions, as well as the spatial distribution of subsurface hydrologic properties. Application of direct simulation to quantify uncertainty would, thus, typically require simulating multiphase flow and transport for a large number of permeability and release scenarios to collect statistics associated with remedial effectiveness, a computationally prohibitive process. The primary objective of this work is to develop and demonstrate a methodology that employs measured field data to produce equi-probable stochastic representations of a subsurface source zone that capture the spatial distribution and uncertainty associated with key features that control remediation performance (i.e., permeability and contamination mass). Here we employ probabilistic models known as discriminative random fields (DRFs) to synthesize stochastic realizations of initial mass distributions consistent with known, and typically limited, site characterization data. Using a limited number of full scale simulations as training data, a statistical model is developed for predicting the distribution of contaminant mass (e.g., DNAPL saturation and aqueous concentration) across a heterogeneous domain. Monte-Carlo sampling methods are then employed, in conjunction with the trained statistical model, to generate realizations conditioned on measured borehole data
Expected utility and catastrophic risk in a stochastic economy-climate model
Energy Technology Data Exchange (ETDEWEB)
Ikefuji, M. [Institute of Social and Economic Research, Osaka University, Osaka (Japan); Laeven, R.J.A.; Magnus, J.R. [Department of Econometrics and Operations Research, Tilburg University, Tilburg (Netherlands); Muris, C. [CentER, Tilburg University, Tilburg (Netherlands)
2010-11-15
In the context of extreme climate change, we ask how to conduct expected utility analysis in the presence of catastrophic risks. Economists typically model decision making under risk and uncertainty by expected utility with constant relative risk aversion (power utility); statisticians typically model economic catastrophes by probability distributions with heavy tails. Unfortunately, the expected utility framework is fragile with respect to heavy-tailed distributional assumptions. We specify a stochastic economy-climate model with power utility and explicitly demonstrate this fragility. We derive necessary and sufficient compatibility conditions on the utility function to avoid fragility and solve our stochastic economy-climate model for two examples of such compatible utility functions. We further develop and implement a procedure to learn the input parameters of our model and show that the model thus specified produces quite robust optimal policies. The numerical results indicate that higher levels of uncertainty (heavier tails) lead to less abatement and consumption, and to more investment, but this effect is not unlimited.
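The fragility can be stated compactly. With constant relative risk aversion $\gamma > 1$, power utility is unbounded below near zero consumption, so heavy-tailed catastrophes can make expected utility diverge (schematic statement in my own notation, not the paper's exact conditions):

```latex
u(c) = \frac{c^{1-\gamma}}{1-\gamma}, \quad \gamma > 1,
\qquad u(c) \to -\infty \ \text{as}\ c \to 0^{+}.
```

If catastrophic damages push consumption toward zero with a density behaving like $f(c) \propto c^{k-1}$ as $c \to 0$, then

```latex
\mathbb{E}\,[u(C)] > -\infty
\iff \int_{0} c^{1-\gamma}\, c^{k-1}\, dc < \infty
\iff k > \gamma - 1,
```

so whenever the lower tail is heavy relative to the curvature of utility, expected utility is $-\infty$ and cost-benefit comparisons break down; this is the incompatibility the paper's conditions on the utility function are designed to rule out.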
Models for randomly distributed nanoscopic domains on spherical vesicles
Anghel, Vinicius N. P.; Bolmatov, Dima; Katsaras, John
2018-06-01
The existence of lipid domains in the plasma membrane of biological systems has proven controversial, primarily due to their nanoscopic size—a length scale difficult to interrogate with most commonly used experimental techniques. Scattering techniques have recently proven capable of studying nanoscopic lipid domains populating spherical vesicles. However, the development of analytical methods capable of predicting and analyzing domain pair correlations from such experiments has not kept pace. Here, we developed models for the random distribution of monodisperse, circular nanoscopic domains averaged on the surface of a spherical vesicle. Specifically, the models take into account (i) intradomain correlations corresponding to form factors and interdomain correlations corresponding to pair distribution functions, and (ii) the analytical computation of interdomain correlations for cases of two and three domains on a spherical vesicle. In the case of more than three domains, these correlations are treated either by Monte Carlo simulations or by spherical analogs of the Ornstein-Zernike and Percus-Yevick (PY) equations. Importantly, the spherical analog of the PY equation works best in the case of nanoscopic-size domains, a length scale that is mostly inaccessible by experimental approaches such as fluorescence techniques and optical microscopies. The analytical form factors and structure factors of nanoscopic domains populating a spherical vesicle provide a new and important framework for the quantitative analysis of experimental data from commonly studied phase-separated vesicles used in a wide range of biophysical studies.
Asset transformation and the challenges to servitize a utility business model
International Nuclear Information System (INIS)
Helms, Thorsten
2016-01-01
The traditional energy utility business model is under pressure, and energy services are expected to play an important role for the energy transition. Experts and scholars argue that utilities need to innovate their business models, and transform from commodity suppliers to service providers. The transition from a product-oriented, capital-intensive business model based on tangible assets, towards a service-oriented, expense-intensive business model based on intangible assets may present great managerial and organizational challenges. Little research exists about such transitions for capital-intensive commodity providers, and particularly energy utilities, where the challenges to servitize are expected to be greatest. This qualitative paper explores the barriers to servitization within selected Swiss and German utility companies through a series of interviews with utility managers. One of them is ‘asset transformation’, the shift from tangible to intangible assets as the major input factor for the value proposition, which is proposed as a driver for the complexity of business model transitions. Managers need to carefully manage those challenges, and find ways to operate new service and established utility business models side by side. Policy makers can support the transition of utilities through more favorable regulatory frameworks for energy services, and by supporting the exchange of knowledge in the industry. - Highlights: •The paper analyses the expected transformation of utilities into service-providers. •Service and utility business models possess very different attributes. •The former is based on intangible, the latter on tangible assets. •The transformation into a service-provider is associated with great challenges. •Asset transformation is proposed as a barrier for business model innovation.
ASPEN Plus based simulation models have been developed to design a pyrolysis process for the on-site production and utilization of pyrolysis oil from equine waste at the Equine Rehabilitation Center at Morrisville State College (MSC). The results indicate that utilization of all available Equine Reh...
Decision modelling tools for utilities in the deregulated energy market
Energy Technology Data Exchange (ETDEWEB)
Makkonen, S. [Process Vision Oy, Helsinki (Finland)
2005-07-01
This thesis examines the impact of the deregulation of the energy market on decision making and optimisation in utilities and demonstrates how decision support applications can solve specific tasks encountered in this context. The themes of the thesis are presented in different frameworks in order to clarify the complex decision making and optimisation environment where new sources of uncertainty arise due to the convergence of energy markets, globalisation of energy business and increasing competition. This thesis reflects the changes in the decision making and planning environment of European energy companies during the period from 1995 to 2004. It also follows the development of computational performance and the evolution of energy information systems during the same period. Specifically, this thesis consists of studies at several levels of the decision making hierarchy, ranging from top-level strategic decision problems to specific optimisation algorithms. On the other hand, the studies also follow the progress of the liberalised energy market from the monopolistic era to the fully competitive market with new trading instruments and issues like emissions trading. This thesis suggests that there is an increasing need for optimisation and multiple criteria decision making methods, and that new approaches based on the use of operations research are welcome as the deregulation proceeds and uncertainties increase. Technically, the optimisation applications presented are based on Lagrangian relaxation techniques and the dedicated Power Simplex algorithm supplemented with stochastic scenario analysis for decision support, a heuristic method to allocate common benefits and potential losses of coalitions of power companies, and an advanced Branch-and-Bound algorithm to efficiently solve nonconvex optimisation problems. The optimisation problems are part of the operational and tactical decision making process that has become very complex in recent years. Similarly
Azimi, Ehsan; Behrad, Alireza; Ghaznavi-Ghoushchi, Mohammad Bagher; Shanbehzadeh, Jamshid
2016-11-01
The projective model is an important mapping function for calculating the global transformation between two images. However, its hardware implementation is challenging because of a large number of coefficients with different required precisions for fixed point representation. A VLSI hardware architecture is proposed for the calculation of a global projective model between input and reference images and for removing false matches using the random sample consensus (RANSAC) algorithm. To make the hardware implementation feasible, it is proved that the calculation of the projective model can be divided into four submodels comprising two translations, an affine model and a simpler projective mapping. This approach makes the hardware implementation feasible and considerably reduces the required number of bits for fixed point representation of model coefficients and intermediate variables. The proposed hardware architecture for the calculation of a global projective model using the RANSAC algorithm was implemented using the Verilog hardware description language, and the functionality of the design was validated through several experiments. The proposed architecture was synthesized using an application-specific integrated circuit digital design flow in 180-nm CMOS technology as well as on a Virtex-6 field programmable gate array. Experimental results confirm the efficiency of the proposed hardware architecture in comparison with a software implementation.
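The RANSAC loop that the hardware accelerates can be sketched in software. The version below fits a 2-D affine model (one of the four submodels in the paper's decomposition) rather than the full projective map; all thresholds, iteration counts, and synthetic data are illustrative assumptions.

```python
import random
import numpy as np

random.seed(3)
np.random.seed(3)

def fit_affine(src, dst):
    """Least-squares affine model dst ≈ A @ src + t from point pairs."""
    n = len(src)
    M = np.zeros((2 * n, 6))
    b = np.zeros(2 * n)
    for i, ((x, y), (u, v)) in enumerate(zip(src, dst)):
        M[2 * i] = [x, y, 1, 0, 0, 0]; b[2 * i] = u
        M[2 * i + 1] = [0, 0, 0, x, y, 1]; b[2 * i + 1] = v
    p, *_ = np.linalg.lstsq(M, b, rcond=None)
    return p.reshape(2, 3)

def apply_affine(P, pts):
    pts = np.asarray(pts, dtype=float)
    return pts @ P[:, :2].T + P[:, 2]

def ransac_affine(src, dst, iters=200, tol=1.0):
    """Hypothesize-and-verify loop: minimal samples, inlier counting, refit."""
    best_inliers, best_model = [], None
    for _ in range(iters):
        idx = random.sample(range(len(src)), 3)          # minimal sample
        model = fit_affine([src[i] for i in idx], [dst[i] for i in idx])
        err = np.linalg.norm(apply_affine(model, src) - np.asarray(dst), axis=1)
        inliers = np.flatnonzero(err < tol)
        if len(inliers) > len(best_inliers):
            best_inliers, best_model = inliers, model
    # refit on all inliers of the best hypothesis
    return fit_affine([src[i] for i in best_inliers],
                      [dst[i] for i in best_inliers]), best_inliers

# Synthetic matches: a known affine map plus gross outliers in every third match.
true_P = np.array([[1.1, 0.1, 5.0], [-0.2, 0.9, -3.0]])
src = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(60)]
dst = apply_affine(true_P, src)
for i in range(0, 60, 3):
    dst[i] += np.random.uniform(20, 50, size=2)          # corrupt the match
model, inliers = ransac_affine(src, [tuple(d) for d in dst])
print(len(inliers))
```

The hardware challenge discussed in the paper is precisely that each of these steps (minimal-sample solve, residual computation, inlier counting) must run in fixed point with carefully chosen bit widths.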
Zhou, Tony; Dickson, Jennifer L; Geoffrey Chase, J
2018-01-01
Continuous glucose monitoring (CGM) devices have been effective in managing diabetes and offer potential benefits for use in the intensive care unit (ICU). Use of CGM devices in the ICU has been limited, primarily due to their higher point accuracy errors compared with the traditional intermittent blood glucose (BG) measures currently used. General models of CGM errors, including drift and random errors, are lacking, but would enable better design of protocols to utilize these devices. This article presents an autoregressive (AR) based modeling method that separately characterizes the drift and random noise of the GlySure CGM sensor (GlySure Limited, Oxfordshire, UK). Clinical sensor data (n = 33) and reference measurements were used to generate 2 AR models to describe sensor drift and noise. These models were used to generate 100 Monte Carlo simulations based on reference blood glucose measurements. These were then compared to the original CGM clinical data using mean absolute relative difference (MARD) and a Trend Compass. The point accuracy MARD was very similar between simulated and clinical data (9.6% vs 9.9%). A Trend Compass was used to assess trend accuracy, and found that simulated and clinical sensor profiles were similar (simulated trend index 11.4° vs clinical trend index 10.9°). The model and method accurately represent cohort sensor behavior across patients, providing a general modeling approach for any such sensor by separately characterizing each type of error that can arise in the data. Overall, it enables better protocol design based on accurate expected CGM sensor behavior, as well as enabling analysis of what level of each type of sensor error would be necessary to obtain the desired glycemic control safety and performance with a given protocol.
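The idea of overlaying separately characterized drift and noise on a reference trace, then scoring with MARD, can be sketched as follows. This is a generic illustration: the AR orders, coefficients, and the multiplicative error form are my assumptions, not the fitted GlySure models.

```python
import random

random.seed(7)

def simulate_cgm(reference, phi_drift=0.99, phi_noise=0.5, sigma=0.08):
    """Overlay a slow AR(1) drift and a fast AR(1) noise on reference BG."""
    drift, noise, out = 0.0, 0.0, []
    for bg in reference:
        drift = phi_drift * drift + random.gauss(0.0, 0.2 * sigma)  # slow drift
        noise = phi_noise * noise + random.gauss(0.0, sigma)        # fast noise
        out.append(bg * (1.0 + drift + noise))   # multiplicative error model
    return out

def mard(cgm, reference):
    """Mean absolute relative difference (%), the point-accuracy metric above."""
    return 100.0 * sum(abs(c - r) / r for c, r in zip(cgm, reference)) / len(cgm)

# Hypothetical reference blood glucose trace (mmol/L), 5-minute samples.
reference = [6.0 + 2.0 * (i % 36) / 36.0 for i in range(288)]
cgm = simulate_cgm(reference)
print(round(mard(cgm, reference), 1))
```

Running many such Monte Carlo traces against the same reference, as the paper does with 100 simulations, yields the distribution of expected sensor behavior for protocol design.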
Premium Pricing of Liability Insurance Using Random Sum Model
Kartikasari, Mujiati Dwi
2017-01-01
Premium pricing is one of the most important activities in insurance. The nonlife insurance premium is calculated from the expected value of historical claims data. The historical claims accumulate as a sum of a random number of independent random variables, which is called a random sum. In premium pricing using a random sum, the claim frequency distribution and the claim severity distribution are combined; the combination of these distributions is called a compound distribution. By using liability claim insurance data, we ...
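The random-sum idea — aggregate claims S = X1 + ... + XN with a random claim count N and i.i.d. severities Xi — can be priced by Monte Carlo under the expected-value premium principle; the Poisson frequency, lognormal severity, and loading below are illustrative assumptions, not the paper's fitted distributions:

```python
import math
import random

def poisson_sample(rng, lam):
    """Sample N ~ Poisson(lam) via Knuth's multiplication method."""
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def compound_premium(lam, mu, sigma, loading=0.2, n_sim=20000, seed=7):
    """Premium = (1 + loading) * E[S], with S a compound Poisson random sum
    of lognormal claim severities, E[S] estimated by simulation."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sim):
        n = poisson_sample(rng, lam)                      # claim frequency
        total += sum(rng.lognormvariate(mu, sigma) for _ in range(n))
    return (1.0 + loading) * total / n_sim

premium = compound_premium(lam=2.0, mu=0.0, sigma=0.5)
```

Here E[S] = lam * E[X], so the simulated premium should land near 1.2 * 2 * exp(0.125).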
Utilization of FEM model for steel microstructure determination
Kešner, A.; Chotěborský, R.; Linda, M.; Hromasová, M.
2018-02-01
Agricultural tools used in soil processing are worn by an abrasive wear mechanism caused by hard mineral particles in the soil. The wear rate is influenced by the mechanical characteristics of the tool material and by the mineral particle content of the soil. The mechanical properties of steel can be modified by heat treatment technology, which leads to different microstructures. Exploring heat treatments experimentally is very expensive, and thanks to numerical methods such as FEM the microstructure can be predicted at low cost, although every numerical model must be verified. The aim of this work is to show a procedure for predicting the microstructure of steel for agricultural tools. Material characterizations of 51CrV4 grade steel, such as the TTT diagram, heat capacity, heat conduction, and other physical properties, were used for the numerical simulation. The relationship between the microstructure predicted by FEM and the real microstructure after heat treatment shows a good correlation.
Viable business models for public utilities; Zukunftsfaehige Geschaeftsmodelle fuer Stadtwerke
Energy Technology Data Exchange (ETDEWEB)
Gebhardt, Andreas; Weiss, Claudia [Buelow und Consorten GmbH, Hamburg (Germany)
2013-04-15
Small suppliers are faced with mounting pressures from an increasingly complex regulatory regime and a market that rewards size. Many have been able to adapt to the new framework conditions by successively optimizing existing activities. However, when change takes hold of all stages of the value chain it is no longer enough to merely modify one's previous strategies. It rather becomes necessary to review one's business model for its sustainability, take stock of the company's competencies and set priorities along the value chain. This is where a network-oriented focussing strategy can assist in ensuring efficient delivery of services in core areas while enabling the company to present itself on the market with a full range of services.
A customer satisfaction model for a utility service industry
Jamil, Jastini Mohd; Nawawi, Mohd Kamal Mohd; Ramli, Razamin
2016-08-01
This paper explores the effect of Image, Customer Expectation, Perceived Quality and Perceived Value on Customer Satisfaction, and investigates the effect of Image and Customer Satisfaction on Customer Loyalty for a mobile phone provider in Malaysia. The results of this research are based on data gathered online from international students at one of the public universities in Malaysia. Partial Least Squares Structural Equation Modeling (PLS-SEM) was used to analyze the data collected on the international students' perceptions. The results show that Image and Perceived Quality have a significant impact on Customer Satisfaction. Image and Customer Satisfaction were also found to be significantly related to Customer Loyalty. However, no significant relationship was found between Customer Expectation and Customer Satisfaction, Perceived Value and Customer Satisfaction, or Customer Expectation and Perceived Value. We hope that the findings may assist the mobile phone provider in the production and promotion of their services.
BWR Fuel Assemblies Physics Analysis Utilizing 3D MCNP Modeling
International Nuclear Information System (INIS)
Chiang, Ren-Tai; Williams, John B.; Folk, Ken S.
2008-01-01
MCNP is used to model a partially controlled BWR fresh-fuel four-assembly (2x2) system for better understanding BWR fuel behavior and for benchmarking production codes. The impact of the GE14 plenum regions on the axial power distribution is observed by comparison against the GE13 axial power distribution: the GE14 relative power is lower than the GE13 relative power at the 15th and 16th nodes due to the presence of the plenum regions in the GE14 fuel at these two nodes. The segmented rod power distribution study indicates that the azimuthally dependent power distribution is very significant for the fuel rods next to the water gap in the uncontrolled portion. (authors)
BWR Fuel Assemblies Physics Analysis Utilizing 3D MCNP Modeling
Energy Technology Data Exchange (ETDEWEB)
Chiang, Ren-Tai [University of Florida, Gainesville, Florida 32611 (United States); Williams, John B.; Folk, Ken S. [Southern Nuclear Company, Birmingham, Alabama 35242 (United States)
2008-07-01
MCNP is used to model a partially controlled BWR fresh-fuel four-assembly (2x2) system for better understanding BWR fuel behavior and for benchmarking production codes. The impact of the GE14 plenum regions on the axial power distribution is observed by comparison against the GE13 axial power distribution: the GE14 relative power is lower than the GE13 relative power at the 15th and 16th nodes due to the presence of the plenum regions in the GE14 fuel at these two nodes. The segmented rod power distribution study indicates that the azimuthally dependent power distribution is very significant for the fuel rods next to the water gap in the uncontrolled portion. (authors)
Computer model for estimating electric utility environmental noise
International Nuclear Information System (INIS)
Teplitzky, A.M.; Hahn, K.J.
1991-01-01
This paper reports on an algorithm-based computer code (Model) for estimating environmental noise emissions from the operation and construction of electric power plants. The code predicts octave band sound power levels for power plant operation and construction activities on the basis of the equipment operating characteristics, and calculates off-site sound levels for each noise source and for the entire plant. Estimated noise levels are presented either as A-weighted sound level contours around the power plant or as octave band levels at user-defined receptor locations. Calculated sound levels can be compared with user-designated noise criteria, and the program can assist the user in analyzing alternative noise control strategies.
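The octave-band-to-A-weighted conversion such a code performs is an energy summation over bands; a sketch, using the standard IEC A-weighting corrections for octave-band centre frequencies:

```python
import math

# Standard A-weighting corrections (dB) for octave band centre frequencies
A_WEIGHT = {63: -26.2, 125: -16.1, 250: -8.6, 500: -3.2,
            1000: 0.0, 2000: 1.2, 4000: 1.0, 8000: -1.1}

def a_weighted_level(band_levels):
    """Combine octave-band sound levels (dB) into one A-weighted level
    by summing the A-corrected band energies."""
    total = sum(10.0 ** ((lvl + A_WEIGHT[f]) / 10.0)
                for f, lvl in band_levels.items())
    return 10.0 * math.log10(total)
```

For example, a source emitting 80 dB in both the 500 Hz and 1 kHz bands combines to roughly 81.7 dB(A).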
Utilizing Chamber Data for Developing and Validating Climate Change Models
Monje, Oscar
2012-01-01
Controlled environment chambers (e.g. growth chambers, SPAR chambers, or open-top chambers) are useful for measuring plant ecosystem responses to climatic variables and CO2 that affect plant water relations. However, data from chambers were found to overestimate the responses of C fluxes to CO2 enrichment. Chamber data may be confounded by numerous artifacts (e.g. sidelighting, edge effects, increased temperature and VPD), which limits what can be measured accurately. Chambers can be used to measure canopy-level energy balance under controlled conditions, and plant transpiration responses to CO2 concentration can be elucidated. However, these measurements cannot be used directly in model development or validation. The response of stomatal conductance to CO2 will be the same as in the field, but the measured response must be recalculated to account for differences in aerodynamic conductance, temperature and VPD between the chamber and the field.
Biomimetic peptide-based models of [FeFe]-hydrogenases: utilization of phosphine-containing peptides
Energy Technology Data Exchange (ETDEWEB)
Roy, Souvik [Department of Chemistry and Biochemistry; Arizona State University; Tempe, USA; Nguyen, Thuy-Ai D. [Department of Chemistry and Biochemistry; Arizona State University; Tempe, USA; Gan, Lu [Department of Chemistry and Biochemistry; Arizona State University; Tempe, USA; Jones, Anne K. [Department of Chemistry and Biochemistry; Arizona State University; Tempe, USA
2015-01-01
Peptide based models for [FeFe]-hydrogenase were synthesized utilizing unnatural phosphine-amino acids and their electrocatalytic properties were investigated in mixed aqueous-organic solvents.
Directory of Open Access Journals (Sweden)
Xavier A. Harrison
2015-07-01
Overdispersion is a common feature of models of biological data, but researchers often fail to model the excess variation driving the overdispersion, resulting in biased parameter estimates and standard errors. Quantifying and modeling overdispersion when it is present is therefore critical for robust biological inference. One means to account for overdispersion is to add an observation-level random effect (OLRE) to a model, where each data point receives a unique level of a random effect that can absorb the extra-parametric variation in the data. Although some studies have investigated the utility of OLRE to model overdispersion in Poisson count data, studies doing so for Binomial proportion data are scarce. Here I use a simulation approach to investigate the ability of both OLRE models and Beta-Binomial models to recover unbiased parameter estimates in mixed effects models of Binomial data under various degrees of overdispersion. In addition, as ecologists often fit random intercept terms to models when the random effect sample size is low (<5 levels), I investigate the performance of both model types under a range of random effect sample sizes when overdispersion is present. Simulation results revealed that the efficacy of OLRE depends on the process that generated the overdispersion; OLRE failed to cope with overdispersion generated from a Beta-Binomial mixture model, leading to biased slope and intercept estimates, but performed well for overdispersion generated by adding random noise to the linear predictor. Comparison of parameter estimates from an OLRE model with those from its corresponding Beta-Binomial model readily identified when OLRE were performing poorly due to disagreement between effect sizes, and this strategy should be employed whenever OLRE are used for Binomial data to assess their reliability. Beta-Binomial models performed well across all contexts, but showed a tendency to underestimate effect sizes when modelling non-Beta-Binomial data.
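The two generating processes the simulation contrasts — a Beta-Binomial mixture versus random noise added to the linear predictor (the OLRE-type mechanism) — can be sketched as data generators; the sample sizes and parameters are illustrative:

```python
import math
import random

def rbetabinom(rng, n, a, b):
    """Binomial draw whose success probability is Beta(a, b) distributed."""
    p = rng.betavariate(a, b)
    return sum(rng.random() < p for _ in range(n))

def rolre_binom(rng, n, eta, sd):
    """Binomial draw with observation-level Gaussian noise on the logit scale."""
    p = 1.0 / (1.0 + math.exp(-(eta + rng.gauss(0.0, sd))))
    return sum(rng.random() < p for _ in range(n))

rng = random.Random(42)
n, draws = 20, 4000
bb = [rbetabinom(rng, n, 2.0, 2.0) for _ in range(draws)]    # mean p = 0.5
ol = [rolre_binom(rng, n, 0.0, 1.0) for _ in range(draws)]   # mean p = 0.5

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

binomial_var = n * 0.5 * 0.5   # variance if there were no overdispersion
```

Both mechanisms inflate the variance well above the plain binomial value, which is exactly the excess variation the OLRE and Beta-Binomial models compete to absorb.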
Critical Behavior of the Annealed Ising Model on Random Regular Graphs
Can, Van Hao
2017-11-01
In Giardinà et al. (ALEA Lat Am J Probab Math Stat 13(1):121-161, 2016), the authors have defined an annealed Ising model on random graphs and proved limit theorems for the magnetization of this model on some random graphs including random 2-regular graphs. Then in Can (Annealed limit theorems for the Ising model on random regular graphs, arXiv:1701.08639, 2017), we generalized their results to the class of all random regular graphs. In this paper, we study the critical behavior of this model. In particular, we determine the critical exponents and prove a non-standard limit theorem stating that the magnetization scaled by n^{3/4} converges to a specific random variable, with n the number of vertices of random regular graphs.
Utilization of MatPIV program to different geotechnical models
Aklik, P.; Idinger, G.
2009-04-01
The Particle Imaging Velocimetry (PIV) technique is used to measure soil displacements. PIV has been used for many years in fluid mechanics, but for physical modeling in geotechnical engineering the technique is still relatively new. PIV has seen worldwide growth in soil mechanics over the last decade owing to developments in digital cameras and laser technologies. The use of PIV is feasible provided the surface contains sufficient texture. A Cambridge group has shown that natural sand contains enough texture for applying PIV. In a texture-based approach, the only requirement is for any patch, big or small, to be sufficiently unique so that statistical tracking of this patch is possible. In this paper, some soil mechanics models were investigated, such as retaining walls, slope failures, and foundations. The photographs were taken with a high-resolution digital camera, the displacements of the soils were evaluated with the free software MatPIV, and the displacement graphics between the two images were obtained. The Nikon D60 digital camera has a 10.2 MP sensor and special properties which make it suitable for PIV applications: an Airflow Control System and Image Sensor Cleaning for protection against dust, Active D-Lighting for highlighted or shadowy areas while shooting, and an advanced three-point AF system for fast, efficient and precise autofocus. Its fast continuous shooting mode enables up to 100 JPEG images at three frames per second. Norm sand (DIN 1164) was used for all the models in a rectangular glass box. For every experiment, MatPIV was used to calculate the velocities from the two images. MatPIV was used in two ways, an easy way and a more involved one: in the easy way, interrogation windows of 64*64 pixels with 50% or 75% overlap were considered, the calculation was performed with a single iteration through the images, and the result consisted of four
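The texture-tracking core of PIV — locating each interrogation patch in the second image by maximizing normalized cross-correlation over a search window — can be sketched with a synthetic texture (integer-pixel search only; MatPIV adds sub-pixel refinement):

```python
import numpy as np

def track_patch(img0, img1, y, x, half=8, search=6):
    """Find the integer-pixel displacement of the patch centred at (y, x)
    in img0 by maximising normalised cross-correlation in img1."""
    ref = img0[y - half:y + half, x - half:x + half].astype(float)
    ref = (ref - ref.mean()) / (ref.std() + 1e-12)
    best, best_dv = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = img1[y + dy - half:y + dy + half,
                        x + dx - half:x + dx + half].astype(float)
            cand = (cand - cand.mean()) / (cand.std() + 1e-12)
            score = (ref * cand).mean()          # correlation coefficient
            if score > best:
                best, best_dv = score, (dy, dx)
    return best_dv

# synthetic "sand texture": a random image shifted by (2, -3) pixels
rng = np.random.default_rng(0)
img0 = rng.random((64, 64))
img1 = np.roll(img0, (2, -3), axis=(0, 1))
dy, dx = track_patch(img0, img1, 32, 32)
```

The recovered (dy, dx) is the local soil displacement vector; repeating over a grid of patches yields the displacement field.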
Chan, Randolph C. H.; Mak, Winnie W. S.; Pang, Ingrid H. Y.; Wong, Samuel Y. S.; Tang, Wai Kwong; Lau, Joseph T. F.; Woo, Jean; Lee, Diana T. F.; Cheung, Fanny M.
2018-01-01
The present study examined whether, when, and how motivational messaging can boost the response rate of postal surveys of physicians, based on Higgins's regulatory focus theory, accounting for its cost-effectiveness. A three-arm, blinded, randomized controlled design was used. A total of 3,270 doctors were randomly selected from the registration…
Annealed central limit theorems for the ising model on random graphs
Giardinà, C.; Giberti, C.; van der Hofstad, R.W.; Prioriello, M.L.
2016-01-01
The aim of this paper is to prove central limit theorems with respect to the annealed measure for the magnetization rescaled by √N of Ising models on random graphs. More precisely, we consider the general rank-1 inhomogeneous random graph (or generalized random graph), the 2-regular configuration
Language Recognition Using Latent Dynamic Conditional Random Field Model with Phonological Features
Directory of Open Access Journals (Sweden)
Sirinoot Boonsuk
2014-01-01
Spoken language recognition (SLR) has been of increasing interest in multilingual speech recognition for identifying the languages of speech utterances. Most existing SLR approaches apply statistical modeling techniques with acoustic and phonotactic features. Among the popular approaches, the acoustic approach has become of greater interest than others because it does not require any prior language-specific knowledge. Previous research on the acoustic approach has shown less interest in applying linguistic knowledge; it was only used as supplementary features, while the current state-of-the-art system assumes independence among features. This paper proposes an SLR system based on the latent-dynamic conditional random field (LDCRF) model using phonological features (PFs). We use PFs to represent acoustic characteristics and linguistic knowledge. The LDCRF model was employed to capture the dynamics of the PF sequences for language classification. Baseline systems were built to evaluate the features and methods, including Gaussian mixture model (GMM) based systems using PFs, GMM using cepstral features, and the CRF model using PFs. Evaluated on the NIST LRE 2007 corpus, the proposed method showed an improvement over the baseline systems. Additionally, it showed comparable results with an acoustic system based on i-vectors. This research demonstrates that utilizing PFs can enhance the performance.
Energy Technology Data Exchange (ETDEWEB)
Lee, Jae Yong; Kim, Song Hyun; Shin, Chang Ho; Kim, Jong Kyung [Hanyang Univ., Seoul (Korea, Republic of)
2014-05-15
In this study, as a preliminary step toward developing an implicit method of high accuracy, the distribution characteristics of spherical particles were evaluated by using explicit modeling techniques at various volume packing fractions. This study was performed to evaluate the implicitly simulated distribution of randomly packed spheres in a medium. First, an explicit modeling method to simulate randomly packed spheres in a hexahedral medium was proposed. The distribution characteristics of l{sub p} and r{sub p}, which are used in the particle position sampling, were estimated. The analysis shows that use of the direct exponential distribution, which is generally used in implicit modeling, can cause a distribution bias of the spheres. It is expected that the findings of this study can be utilized to improve the accuracy of the implicit method. Spherical particles randomly distributed in a medium are utilized in radiation shields, fusion reactor blankets, and the fuels of VHTR reactors. Due to the difficulty of simulating the stochastic distribution, the Monte Carlo (MC) method has mainly been considered as the tool for the analysis of particle transport. For MC modeling of spherical particles, three methods are known: repeated structure, explicit modeling, and implicit modeling. The implicit method (known as the track length sampling method) is a modeling technique that samples each spherical geometry (or the track length through the sphere) during the MC simulation. The implicit modeling method has advantages in computational efficiency and user convenience. However, it is noted that the implicit method has lower modeling accuracy in various finite mediums.
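The explicit modeling step described — sampling non-overlapping sphere positions in a hexahedron until a target volume packing fraction is reached — can be sketched as random sequential addition; the box size, radius, and target fraction are illustrative:

```python
import math
import random

def pack_spheres(box, r, target_pf, max_tries=40000, seed=3):
    """Random sequential addition: sample candidate centres uniformly and
    accept only those that do not overlap already-placed spheres, until
    the target volume packing fraction is reached."""
    rng = random.Random(seed)
    v_sphere = 4.0 / 3.0 * math.pi * r ** 3
    v_box = box[0] * box[1] * box[2]
    centers, tries = [], 0
    while len(centers) * v_sphere / v_box < target_pf and tries < max_tries:
        tries += 1
        c = tuple(rng.uniform(r, side - r) for side in box)  # stay inside box
        if all(sum((a - b) ** 2 for a, b in zip(c, o)) >= (2 * r) ** 2
               for o in centers):
            centers.append(c)
    return centers

centers = pack_spheres((8.0, 8.0, 8.0), 0.5, 0.08)
```

Tabulating nearest-surface distances over such explicit packings is one way to check whether an implicit sampling distribution (e.g. the direct exponential) is biased.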
Force Limited Random Vibration Test of TESS Camera Mass Model
Karlicek, Alexandra; Hwang, James Ho-Jin; Rey, Justin J.
2015-01-01
The Transiting Exoplanet Survey Satellite (TESS) is a spaceborne instrument consisting of four wide-field-of-view CCD cameras dedicated to the discovery of exoplanets around the brightest stars. As part of the environmental testing campaign, force limiting was used to simulate a realistic random vibration launch environment. While the force limit vibration test method is a standard approach used at multiple institutions including Jet Propulsion Laboratory (JPL), NASA Goddard Space Flight Center (GSFC), European Space Research and Technology Center (ESTEC), and Japan Aerospace Exploration Agency (JAXA), it is still difficult to find an actual implementation process in the literature. This paper describes the step-by-step process of how the force limit method was developed and applied on the TESS camera mass model. The process description includes the design of special fixtures to mount the test article for properly installing force transducers, development of the force spectral density using the semi-empirical method, estimation of the fuzzy factor (C2) based on the mass ratio between the supporting structure and the test article, subsequent validation of the C2 factor during the vibration test, and calculation of the C.G. accelerations using the Root Mean Square (RMS) reaction force in the spectral domain and the peak reaction force in the time domain.
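The semi-empirical force limit and the RMS-from-PSD calculation can be sketched as follows; the C2 value, interface mass, break frequency, and acceleration spec are illustrative numbers, not the TESS test values:

```python
import math

def force_limit_psd(f, s_aa, c2, m0, f0, roll=2):
    """Semi-empirical force spectral density: S_FF = C2 * m0^2 * S_AA below
    the coupled natural frequency f0, rolled off as (f0/f)^roll above it."""
    s_ff = c2 * m0 ** 2 * s_aa
    return s_ff if f <= f0 else s_ff * (f0 / f) ** roll

def rms_from_psd(freqs, psd):
    """RMS value from a one-sided PSD by trapezoidal integration."""
    area = sum((psd[i] + psd[i + 1]) / 2.0 * (freqs[i + 1] - freqs[i])
               for i in range(len(freqs) - 1))
    return math.sqrt(area)

freqs = list(range(20, 2001, 10))
s_aa = 0.04                                  # flat acceleration spec, g^2/Hz
m0 = 50.0                                    # test article mass, kg
force_psd = [force_limit_psd(f, s_aa, c2=2.0, m0=m0, f0=100.0) for f in freqs]
f_rms = rms_from_psd(freqs, force_psd)
cg_accel_rms = f_rms / m0                    # C.G. acceleration from RMS force
```

During the test, the measured interface force PSD is notched down wherever it would exceed `force_psd`, which is what prevents over-testing at the fixture resonances.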
Modelling of limestone injection for SO2 capture in a coal fired utility boiler
International Nuclear Information System (INIS)
Kovacik, G.J.; Reid, K.; McDonald, M.M.; Knill, K.
1997-01-01
A computer model was developed for simulating furnace sorbent injection for SO 2 capture in a full scale utility boiler using TASCFlow TM computational fluid dynamics (CFD) software. The model makes use of a computational grid of the superheater section of a tangentially fired utility boiler. The computer simulations are three dimensional so that the temperature and residence time distribution in the boiler could be realistically represented. Results of calculations of simulated sulphur capture performance of limestone injection in a typical utility boiler operation were presented
A Framework for Organizing Current and Future Electric Utility Regulatory and Business Models
Energy Technology Data Exchange (ETDEWEB)
Satchwell, Andrew [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Cappers, Peter [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Schwartz, Lisa C. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Fadrhonc, Emily Martin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2015-06-01
Many regulators, utilities, customer groups, and other stakeholders are reevaluating existing regulatory models and the roles and financial implications for electric utilities in the context of today’s environment of increasing distributed energy resource (DER) penetrations, forecasts of significant T&D investment, and relatively flat or negative utility sales growth. When this is coupled with predictions about fewer grid-connected customers (i.e., customer defection), there is growing concern about the potential for serious negative impacts on the regulated utility business model. Among states engaged in these issues, the range of topics under consideration is broad. Most of these states are considering whether approaches that have been applied historically to mitigate the impacts of previous “disruptions” to the regulated utility business model (e.g., energy efficiency) as well as to align utility financial interests with increased adoption of such “disruptive technologies” (e.g., shareholder incentive mechanisms, lost revenue mechanisms) are appropriate and effective in the present context. A handful of states are presently considering more fundamental changes to regulatory models and the role of regulated utilities in the ownership, management, and operation of electric delivery systems (e.g., New York “Reforming the Energy Vision” proceeding).
Utility of Monte Carlo Modelling for Holdup Measurements.
Energy Technology Data Exchange (ETDEWEB)
Belian, Anthony P.; Russo, P. A. (Phyllis A.); Weier, Dennis R. (Dennis Ray),
2005-01-01
Non-destructive assay (NDA) measurements performed to locate and quantify holdup in the Oak Ridge K-25 enrichment cascade used neutron totals counting and low-resolution gamma-ray spectroscopy. This facility housed the gaseous diffusion process for enrichment of uranium, in the form of UF{sub 6} gas, from {approx} 20% to 93%. The {sup 235}U inventory in K-25 is all holdup. These buildings have been slated for decontamination and decommissioning. The NDA measurements establish the inventory quantities and will be used to assure criticality safety and meet criteria for waste analysis and transportation. The tendency to err on the side of conservatism for the sake of criticality safety in specifying total NDA uncertainty argues, in the interests of safety and costs, for obtaining the best possible value of uncertainty at the conservative confidence level for each item of process equipment. Variable deposit distribution is a complex systematic effect (i.e., determined by multiple independent variables) on the portable NDA results for very large and bulk converters that contributes greatly to total uncertainty for holdup in converters measured by gamma or neutron NDA methods. Because the magnitudes of complex systematic effects are difficult to estimate, computational tools are important for evaluating those that are large. Motivated by very large discrepancies between gamma and neutron measurements of high-mass converters, with gamma results tending to dominate, the Monte Carlo code MCNP has been used to determine the systematic effects of deposit distribution on gamma and neutron results for {sup 235}U holdup mass in converters. This paper details the numerical methodology used to evaluate large systematic effects unique to each measurement type, validates the methodology by comparison with measurements, and discusses how modeling tools can supplement the calibration of instruments used for holdup measurements by providing realistic values at well
93-106, 2015 93 Multilevel random effect and marginal models
African Journals Online (AJOL)
Multilevel random effect and marginal models for longitudinal data ... and random effect models that take the correlation among measurements of the same subject ... comparing the level of redness, pain and irritability ... clinical trial evaluating the safety profile of a new .... likelihood-based methods to compare models and.
International Nuclear Information System (INIS)
Yucemen, S.
1991-02-01
The general theory of stationary random functions is utilized to assess the seismic hazard associated with a linearly extending seismic source. The past earthquake occurrence data associated with a portion of the North Anatolian fault are used to demonstrate the implementation of the proposed model. 18 refs, figs and tabs
Energy Technology Data Exchange (ETDEWEB)
Yucemen, S [Middle East Technical Univ., Ankara (Turkey). Dept. of Statistics
1991-02-01
The general theory of stationary random functions is utilized to assess the seismic hazard associated with a linearly extending seismic source. The past earthquake occurrence data associated with a portion of the North Anatolian fault are used to demonstrate the implementation of the proposed model. 18 refs, figs and tabs.
Robinson, Angela; Spencer, Anne; Moffatt, Peter
2015-04-01
There has been recent interest in using the discrete choice experiment (DCE) method to derive health state utilities for use in quality-adjusted life year (QALY) calculations, but challenges remain. We set out to develop a risk-based DCE approach to derive utility values for health states that allowed 1) utility values to be anchored directly to normal health and death and 2) worse than dead health states to be assessed in the same manner as better than dead states. Furthermore, we set out to estimate alternative models of risky choice within a DCE model. A survey was designed that incorporated a risk-based DCE and a "modified" standard gamble (SG). Health state utility values were elicited for 3 EQ-5D health states assuming "standard" expected utility (EU) preferences. The DCE model was then generalized to allow for rank-dependent expected utility (RDU) preferences, thereby allowing for probability weighting. A convenience sample of 60 students was recruited and data collected in small groups. Under the assumption of "standard" EU preferences, the utility values derived within the DCE corresponded fairly closely to the mean results from the modified SG. Under the assumption of RDU preferences, the utility values estimated are somewhat lower than under the assumption of standard EU, suggesting that the latter may be biased upward. Applying the correct model of risky choice is important whether a modified SG or a risk-based DCE is deployed. It is, however, possible to estimate a probability weighting function within a DCE and estimate "unbiased" utility values directly, which is not possible within a modified SG. We conclude by setting out the relative strengths and weaknesses of the 2 approaches in this context. © The Author(s) 2014.
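The difference between the two risky-choice models can be sketched for a standard-gamble indifference probability: under EU the utility equals the indifference probability itself, while under RDU it is that probability transformed by a weighting function. The Tversky-Kahneman weighting form and parameter below are illustrative; the paper estimates its own weighting within the DCE:

```python
def tk_weight(p, gamma=0.61):
    """Tversky-Kahneman probability weighting function (illustrative gamma)."""
    return p ** gamma / (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)

def sg_utility(p_indiff, model="EU", gamma=0.61):
    """Utility of a health state from a standard-gamble indifference
    probability p (full health with prob p, death with prob 1-p)."""
    return p_indiff if model == "EU" else tk_weight(p_indiff, gamma)
```

For an indifference probability of 0.9, the EU utility is 0.9, while the RDU utility is lower, mirroring the abstract's finding that EU-based values may be biased upward.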
MODELING URBAN DYNAMICS USING RANDOM FOREST: IMPLEMENTING ROC AND TOC FOR MODEL EVALUATION
Directory of Open Access Journals (Sweden)
M. Ahmadlou
2016-06-01
The importance of the spatial accuracy of land use/cover change maps necessitates the use of high performance models. To reach this goal, calibrating machine learning (ML) approaches to model land use/cover conversions has received increasing interest among scholars. This originates from the strength of these techniques, as they powerfully account for the complex relationships underlying urban dynamics. Compared to other ML techniques, random forest has rarely been used for modeling urban growth. This paper, drawing on information from multi-temporal Landsat satellite images of 1985, 2000 and 2015, calibrates a random forest regression (RFR) model to quantify variable importance and simulate the spatial patterns of urban change. The results and performance of the RFR model were evaluated using two complementary tools, relative operating characteristics (ROC) and total operating characteristics (TOC), by overlaying the map of observed change and the modeled suitability map for land use change (error map). The suitability map produced by the RFR model showed an area under the curve of 82.48% for the ROC, which indicates a very good performance and highlights its appropriateness for simulating urban growth.
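The ROC summary used here — the area under the curve obtained by overlaying the suitability map on the observed-change map — equals the probability that a randomly chosen changed cell scores higher than a randomly chosen unchanged one, which gives a direct rank-based sketch:

```python
def auc(scores, labels):
    """Area under the ROC curve as the Mann-Whitney rank statistic:
    P(suitability of a changed cell > suitability of an unchanged cell)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# toy suitability scores for 6 cells and the observed-change map (1 = changed)
suitability = [0.91, 0.75, 0.62, 0.48, 0.33, 0.10]
observed = [1, 1, 0, 1, 0, 0]
auc_value = auc(suitability, observed)
```

A value of 0.5 means the suitability map is no better than chance; values near 1 (such as the paper's 0.8248) indicate that changed cells are consistently ranked above unchanged ones.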
Harrison, Xavier A
2015-01-01
Overdispersion is a common feature of models of biological data, but researchers often fail to model the excess variation driving the overdispersion, resulting in biased parameter estimates and standard errors. Quantifying and modeling overdispersion when it is present is therefore critical for robust biological inference. One means to account for overdispersion is to add an observation-level random effect (OLRE) to a model, where each data point receives a unique level of a random effect that can absorb the extra-parametric variation in the data. Although some studies have investigated the utility of OLRE to model overdispersion in Poisson count data, studies doing so for Binomial proportion data are scarce. Here I use a simulation approach to investigate the ability of both OLRE models and Beta-Binomial models to recover unbiased parameter estimates in mixed effects models of Binomial data under various degrees of overdispersion. In addition, as ecologists often fit random intercept terms to models when the random effect sample size is low (<5 levels), I investigate the performance of both model types under a range of random effect sample sizes when overdispersion is present. Simulation results revealed that the efficacy of OLRE depends on the process that generated the overdispersion; OLRE failed to cope with overdispersion generated from a Beta-Binomial mixture model, leading to biased slope and intercept estimates, but performed well for overdispersion generated by adding random noise to the linear predictor. Comparison of parameter estimates from an OLRE model with those from its corresponding Beta-Binomial model readily identified when OLRE were performing poorly due to disagreement between effect sizes, and this strategy should be employed whenever OLRE are used for Binomial data to assess their reliability. Beta-Binomial models performed well across all contexts, but showed a tendency to underestimate effect sizes when modelling non-Beta-Binomial data. Finally, both OLRE and Beta-Binomial models performed
An Analysis/Synthesis System of Audio Signal with Utilization of an SN Model
Directory of Open Access Journals (Sweden)
G. Rozinaj
2004-12-01
An SN (sinusoids plus noise) model is a spectral model in which the periodic components of the sound are represented by sinusoids with time-varying frequencies, amplitudes and phases. The remaining non-periodic components are represented by a filtered noise. The sinusoidal model utilizes physical properties of musical instruments and the noise model utilizes the human inability to perceive the exact spectral shape or the phase of stochastic signals. SN modeling can be applied in compression, transformation, separation of sounds, etc. The designed system is based on methods used in SN modeling. We have proposed a model that achieves good results in audio perception. Although many systems do not save the phases of the sinusoids, they are important for better modelling of transients, for the computation of the residual and, last but not least, for stereo signals, too. One of the fundamental properties of the proposed system is the ability of signal reconstruction not only from the amplitude but from the phase point of view as well.
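The resynthesis side of an SN model — summing sinusoids with given frequencies, amplitudes and phases, then adding the stochastic residual — can be sketched per frame; the partials and noise level below are illustrative, and the residual is plain white noise rather than a properly filtered one:

```python
import math
import random

def synthesize_sn(partials, noise_std, n, sr=8000, seed=5):
    """Resynthesise one frame from an SN model: a sum of sinusoids given as
    (frequency_hz, amplitude, phase) triples, plus a noise residual."""
    rng = random.Random(seed)
    out = []
    for t in range(n):
        s = sum(a * math.sin(2.0 * math.pi * f * t / sr + ph)
                for f, a, ph in partials)          # deterministic part
        out.append(s + rng.gauss(0.0, noise_std))  # stochastic residual
    return out

frame = synthesize_sn([(440.0, 0.8, 0.0), (880.0, 0.3, 1.0)], 0.02, 256)
```

Keeping the phases `ph`, as the abstract argues, is what lets consecutive frames join without transient smearing.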
International Nuclear Information System (INIS)
Hwangbo, Soonho; Lee, In-Beum; Han, Jeehoon
2014-01-01
Many networks are constructed in a large-scale industrial complex. Each network meets its demands through production or transportation of the materials needed by companies in the network. A network either produces materials directly to satisfy a company's demands or purchases them from outside, due to demand uncertainty, financial factors, and so on. The utility network and the hydrogen network in particular are typical and major networks in a large-scale industrial complex. Many studies have been done, mainly focusing on minimizing the total cost or optimizing the network structure, but little research has tried to build an integrated model connecting the utility network and the hydrogen network. In this study, a deterministic mixed integer linear programming model is developed for integrating the utility network and the hydrogen network. A steam methane reforming process is necessary for combining the two networks. Hydrogen produced by the steam methane reforming process, whose raw material is steam vented from the utility network, enters the hydrogen network to fulfill its demands. The proposed model can suggest the optimal configuration of the integrated network, an optimized blueprint, and the optimal total cost. The capability of the proposed model is tested by applying it to the Yeosu industrial complex in Korea, which contains one of the biggest petrochemical complexes and whose data underlie various papers. The case study shows that the integrated network model yields better solutions than previous results obtained by studying the utility network and the hydrogen network individually
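The integration decision can be caricatured as a tiny mixed-integer problem: build a steam-methane-reforming (SMR) link that converts vented steam to hydrogen, or purchase hydrogen externally. This is a hypothetical toy instance (all costs invented, solved by enumeration rather than a MILP solver), not the Yeosu model:

```python
# Hypothetical data: hydrogen demand (ton/day) and cost parameters are
# illustrative only, not from the Yeosu case study.
h2_demand = 50.0
steam_vent = 120.0          # steam available from the utility network
smr_yield = 0.5             # ton H2 per ton of steam fed to the SMR
smr_capex = 400.0           # fixed daily cost if the SMR link is built
smr_opex = 8.0              # cost per ton of H2 produced on site
h2_price = 25.0             # cost per ton of H2 purchased externally

best = None
for build_smr in (0, 1):    # the single binary decision variable
    max_onsite = build_smr * min(h2_demand, steam_vent * smr_yield)
    onsite = max_onsite     # produce as much as allowed (cheaper per ton here)
    purchased = h2_demand - onsite
    cost = build_smr * smr_capex + onsite * smr_opex + purchased * h2_price
    if best is None or cost < best[0]:
        best = (cost, build_smr, onsite, purchased)

print(best)  # (800.0, 1, 50.0, 0.0): building the SMR link wins here
```

A real MILP would carry many such binaries plus flow-balance constraints per network node, but the structure (fixed link cost versus per-unit purchase cost) is the same.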
An approach for evaluating utility-financed energy conservation programs. The economic welfare model
Energy Technology Data Exchange (ETDEWEB)
Costello, K W; Galen, P S
1985-09-01
The main objective of this paper is to illustrate how the economic welfare model may be used to measure the economic efficiency effects of utility-financed energy conservation programs. The economic welfare model is the theoretical structure used in this paper to develop a cost/benefit test. This test defines the net benefit of a conservation program as the change in the sum of consumer and producer surplus. The authors advocate using the proposed cost/benefit model as a screening tool to eliminate from more detailed review those programs whose expected net benefits are less than zero. The paper presents estimates of the net benefit derived from different specified cost/benefit models for four illustrative pilot programs. These models are representative of those which have been applied or are under review by utilities and public utility commissions. The numerical results show that net benefit is greatly affected by the assumptions made about the nature of welfare gains to program participants. The main conclusion that emerges from the numerical results is that the selection of a cost/benefit model is a crucial element in evaluating utility-financed energy conservation programs. The paper also briefly addresses some of the major unresolved issues in utility-financed energy conservation programs. 2 figs., 3 tabs., 10 refs. (A.V.)
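The surplus-based screening test can be illustrated with a linear demand curve and a constant marginal cost of service; all numbers below are purely hypothetical:

```python
# Consumer surplus under a linear inverse demand curve P = a - b*Q,
# with supply at constant marginal cost c (so producer surplus is zero).
def surplus(a, b, c):
    q = (a - c) / b               # quantity where demand meets marginal cost
    consumer = 0.5 * (a - c) * q  # triangle under demand, above price
    return q, consumer

# Baseline market vs. the same market after a conservation program that
# lowers the effective marginal cost of service from 10 to 8 (e.g. by
# avoiding expensive peak generation).
q0, cs0 = surplus(a=50.0, b=1.0, c=10.0)
q1, cs1 = surplus(a=50.0, b=1.0, c=8.0)

program_cost = 50.0               # hypothetical program outlay
net_benefit = (cs1 - cs0) - program_cost
print(round(net_benefit, 1))      # 32.0 > 0, so the program passes the screen
```

The screening rule in the paper amounts to rejecting programs for which this net benefit is negative before any detailed review.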
Random effects coefficient of determination for mixed and meta-analysis models.
Demidenko, Eugene; Sargent, James; Onega, Tracy
2012-01-01
The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, [Formula: see text], that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If [Formula: see text] is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. A value of [Formula: see text] well above 0 indicates evidence of the variance reduction in support of the mixed model. If the random effects coefficient of determination is close to 1, the variance of the random effects is very large and the random effects turn into free fixed effects; the model can then be estimated using the dummy variable approach. We derive explicit formulas for [Formula: see text] in three special cases: the random intercept model, the growth curve model, and the meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol-related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model for the combination of 13 studies on tuberculosis vaccine.
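The idea behind the coefficient, the share of conditional variance attributable to random effects, can be checked on simulated random-intercept data. The variance-components estimator below is a simple method-of-moments stand-in, not the authors' formula:

```python
import numpy as np

rng = np.random.default_rng(0)
n_groups, n_per = 200, 50
sigma_u, sigma_e = 2.0, 1.0        # random-intercept SD and residual SD

u = rng.normal(0, sigma_u, n_groups)                   # random intercepts
y = u[:, None] + rng.normal(0, sigma_e, (n_groups, n_per))

within = y.var(axis=1, ddof=1).mean()                  # estimates sigma_e**2
between = y.mean(axis=1).var(ddof=1) - within / n_per  # estimates sigma_u**2
r2_re = between / (between + within)
print(round(r2_re, 2))   # near the true value 4 / (4 + 1) = 0.8
```

A value near 1, as here, is the regime the abstract describes where the random effects behave almost like free fixed effects.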
Goeree, Ron; Hopkins, Rob; Marshall, John K; Armstrong, David; Ungar, Wendy J; Goldsmith, Charles; Allen, Christopher; Anvari, Mehran
2011-01-01
Very few randomized controlled trials (RCTs) have compared laparoscopic Nissen fundoplication (LNF) to proton pump inhibitor (PPI) medical management for patients with chronic gastroesophageal reflux disease (GERD). Larger RCTs have been relatively short in duration, and have reported mixed results regarding symptom control and effect on quality of life (QOL). Economic evaluations have reported conflicting results. The objective was to determine the incremental cost-utility of LNF versus PPI for treating patients with chronic and controlled GERD over 3 years from the societal perspective. An economic evaluation was conducted alongside an RCT that enrolled 104 patients from October 2000 to September 2004. The primary study outcome was GERD symptoms (secondary outcomes included QOL and cost-utility). Resource utilization and QOL data collected at regular follow-up intervals determined incremental cost/QALY gained. Stochastic uncertainty was assessed using bootstrapping and methodologic assumptions were assessed using sensitivity analysis. There were no statistically significant differences in GERD symptom scores, but LNF did result in fewer heartburn days and improved QOL. Costs were higher for LNF patients by $3205/patient over 3 years, but QOL was also higher as measured by either QOL instrument. Based on total costs, the incremental cost-utility of LNF was $29,404/QALY gained using the Health Utility Index 3. Cost-utility results were sensitive to the utility instrument used ($29,404/QALY for the Health Utility Index 3, $31,117/QALY for the Short Form 6D, and $76,310/QALY for the EuroQol 5D) and to whether current lower prices for PPIs were used in the analysis. Results varied depending on the resource use/costs included in the analysis, the QOL instrument used, and the cost of PPIs; however, LNF was generally found to be a cost-effective treatment for patients with symptomatic controlled GERD requiring long-term management. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR).
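The sensitivity of the cost-utility result to the QOL instrument follows directly from the ICER definition (incremental cost divided by incremental QALYs gained); backing the implied QALY gains out of the reported figures makes this concrete:

```python
# ICER = incremental cost / incremental QALYs gained, so the implied QALY
# gain under each instrument is delta_cost / reported ICER. Illustrative
# back-calculation from the figures quoted in the abstract.
delta_cost = 3205.0   # reported incremental cost of LNF over 3 years
reported_icer = {"HUI3": 29404.0, "SF-6D": 31117.0, "EQ-5D": 76310.0}

for instrument, icer in reported_icer.items():
    print(instrument, round(delta_cost / icer, 3))
# HUI3 0.109, SF-6D 0.103, EQ-5D 0.042: same cost, but the EQ-5D
# attributes less than half the QALY gain of the other two instruments.
```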
What’s Needed from Climate Modeling to Advance Actionable Science for Water Utilities?
Barsugli, J. J.; Anderson, C. J.; Smith, J. B.; Vogel, J. M.
2009-12-01
“…perfect information on climate change is neither available today nor likely to be available in the future, but … over time, as the threats climate change poses to our systems grow more real, predicting those effects with greater certainty is non-discretionary. We’re not yet at a level at which climate change projections can drive climate change adaptation.” (Testimony of WUCA Staff Chair David Behar to the House Committee on Science and Technology, May 5, 2009) To respond to this challenge, the Water Utility Climate Alliance (WUCA) has sponsored a white paper titled “Options for Improving Climate Modeling to Assist Water Utility Planning for Climate Change.” This report concerns how investments in the science of climate change, and in particular climate modeling and downscaling, can best be directed to help make climate projections more actionable. The meaning of “model improvement” can be very different depending on whether one is talking to a climate model developer or to a water manager trying to incorporate climate projections into planning. We first surveyed the WUCA members on present and potential uses of climate model projections and on climate inputs to their various system models. Based on those surveys and on subsequent discussions, we identified four dimensions along which improvement in modeling would make the science more “actionable”: improved model agreement on changes in key parameters; narrowing the range of model projections; providing projections at spatial and temporal scales that match water utilities’ system models; and providing projections that match water utility planning horizons. With these goals in mind we developed four options for improving global-scale climate modeling and three options for improving downscaling that will be discussed. However, there does not seem to be a single investment - the proverbial “magic bullet” - which will substantially reduce the range of model projections at the scales at which utility
Nonlinear optical spectroscopy and microscopy of model random and biological media
Guo, Yici
Nonlinear optical (NLO) spectroscopy and microscopy applied to biomedical science are emerging as new and rapidly growing areas which offer important insight into basic phenomena. Ultrafast NLO processes provide temporal, spectral and spatial sensitivities complementary or superior to those achieved through conventional linear optical approaches. The goal of this thesis is to explore the potential of two fundamental NLO processes to produce noninvasive histological maps of biological tissues. Within the goal of the thesis, steady state intensity, polarization and angular measurements of second- and third-harmonic generations (SHG, THG) have been performed on model random scattering and animal tissue samples. The nonlinear optical effects have been evaluated using models. Conversion efficiencies of SHG and THG from animal tissue interfaces have been determined, ranging from 10-7 to 10-10. The changes in the multiharmonic signals were found to depend on both local and overall histological structures of biological samples. The spectral signatures of two photon excitation induced fluorescence from intrinsic fluorophores have been acquired and used to characterize the physical state and types of tissues. Two dimensional scanning SHG and TPF tomographic images have been obtained from in vitro animal tissues, normal and diseased human breast tissues, and resolved subsurface layers and histo-chemical distributions. By combining consecutive 2D maps, a 3D image can be produced. The structure and morphology dependence of the SH signal has been utilized to image and evaluate subsurface tumor progression depth. Second harmonic microscopy in model random and biological cells has been studied using a CCD camera to obtain direct images from subcellular structures. Finally, near infrared (NIR) NLO spectroscopy and microscopy based on SHG and TPF have demonstrated high spatial resolution, deeper penetration depth, low level photo-damaging and enhanced morphological sensitivity for
Utilizing Data Mining for Predictive Modeling of Colorectal Cancer using Electronic Medical Records
Hoogendoorn, M.; Moons, L.G.; Numans, M.E.; Sips, R.J.
2014-01-01
Colorectal cancer (CRC) is a relatively common cause of death around the globe. Predictive models for the development of CRC could be highly valuable and could facilitate an early diagnosis and increased survival rates. Currently available predictive models are improving, but do not fully utilize
Stock Selection for Portfolios Using Expected Utility-Entropy Decision Model
Directory of Open Access Journals (Sweden)
Jiping Yang
2017-09-01
Full Text Available Yang and Qiu proposed, and recently improved, an expected utility-entropy (EU-E) measure of risk and decision model. When segregation holds, Luce et al. derived an expected utility term plus a constant multiplied by the Shannon entropy as the representation of risky choices, further demonstrating the reasonableness of the EU-E decision model. In this paper, we apply the EU-E decision model to selecting the set of stocks to be included in portfolios. We first select 7 and 10 stocks from the 30 component stocks of the Dow Jones Industrial Average index, and then derive and compare the efficient portfolios in the mean-variance framework. The conclusions imply that efficient portfolios composed of 7 (or 10) stocks selected using the EU-E model with intermediate intervals of the tradeoff coefficient are more efficient than those composed of the sets of stocks selected using the expected utility model. Furthermore, the efficient portfolios of 7 (or 10) stocks selected by the EU-E decision model have almost the same efficient frontier as that of the sample of all stocks. This suggests the necessity of incorporating both expected utility and Shannon entropy when making risky decisions, further demonstrating the importance of Shannon entropy as a measure of uncertainty, as well as the applicability of the EU-E model as a decision-making model.
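A schematic version of the EU-E idea: score each risky alternative by expected utility penalized by Shannon entropy through a tradeoff coefficient. The normalization and sign convention below are illustrative only; this is not the exact Yang-Qiu measure:

```python
import math

def eu_e_score(outcomes, probs, utility, lam):
    """Schematic expected utility-entropy (EU-E) score: higher is better.

    lam in [0, 1] trades off expected utility against Shannon entropy
    (a penalty for uncertainty). The normalization is illustrative,
    not the exact form used by Yang and Qiu.
    """
    eu = sum(p * utility(x) for p, x in zip(probs, outcomes))
    h = -sum(p * math.log(p) for p in probs if p > 0)
    return (1 - lam) * eu - lam * h

u = lambda x: 1 - math.exp(-0.5 * x)        # a risk-averse utility (assumed)
safe = eu_e_score([1.0, 1.2], [0.5, 0.5], u, lam=0.5)
risky = eu_e_score([-2.0, 4.0], [0.5, 0.5], u, lam=0.5)
print(safe > risky)   # True: equal-entropy gambles are ranked by expected utility
```

With equal entropies the ranking reduces to expected utility; the entropy term only bites when alternatives differ in how dispersed their probabilities are.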
Local Stability Conditions for Two Types of Monetary Models with Recursive Utility
Miyazaki, Kenji; Utsunomiya, Hitoshi
2009-01-01
This paper explores local stability conditions for money-in-utility-function (MIUF) and transaction-costs (TC) models with recursive utility. A monetary variant of the Brock-Gale condition provides a theoretical justification of the comparative statics analysis. One sufficient condition for local stability is increasing marginal impatience (IMI) in consumption and money. However, this does not deny the possibility of decreasing marginal impatience (DMI). The local stability with DMI is mor...
Modeling and optimizing of the random atomic spin gyroscope drift based on the atomic spin gyroscope
Energy Technology Data Exchange (ETDEWEB)
Quan, Wei; Lv, Lin, E-mail: lvlinlch1990@163.com; Liu, Baiqi [School of Instrument Science and Opto-Electronics Engineering, Beihang University, Beijing 100191 (China)
2014-11-15
In order to improve the atomic spin gyroscope's operational accuracy and compensate for the random error caused by the nonlinear and weakly stable characteristics of the random atomic spin gyroscope (ASG) drift, a hybrid random drift error model based on autoregressive (AR) and genetic programming (GP) + genetic algorithm (GA) techniques is established. The time series of random ASG drift is taken as the study object; it is acquired by analyzing and preprocessing the measured data of the ASG. The linear section of the model is established based on the AR technique. After that, the nonlinear section is built based on the GP technique, and GA is used to optimize the coefficients of the mathematical expression acquired by GP in order to obtain a more accurate model. The simulation results indicate that this hybrid model can effectively reflect the characteristics of the ASG's random drift. The square error of the ASG's random drift is reduced by 92.40%. Compared with the AR technique and the GP + GA technique alone, the random drift is reduced by 9.34% and 5.06%, respectively. The hybrid modeling method can effectively compensate for the ASG's random drift and improve the stability of the system.
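The AR (linear) section of such a hybrid model can be fit with the Yule-Walker equations; this sketch recovers known AR(2) coefficients from a synthetic drift series (coefficients made up for illustration, not measured ASG data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "drift" series from a known stationary AR(2) process
true_phi = np.array([0.6, 0.3])
n = 20000
x = np.zeros(n)
for t in range(2, n):
    x[t] = true_phi[0] * x[t-1] + true_phi[1] * x[t-2] + rng.normal()

def yule_walker(x, order):
    """Estimate AR coefficients from sample autocovariances."""
    x = x - x.mean()
    acov = np.array([(x[:len(x) - k] * x[k:]).mean() for k in range(order + 1)])
    R = np.array([[acov[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, acov[1:order + 1])

phi_hat = yule_walker(x, order=2)
print(np.round(phi_hat, 2))   # close to the true [0.6, 0.3]
```

In the paper's pipeline the residual left over after this linear fit is what the GP + GA stage then models nonlinearly.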
User Guide and Documentation for Five MODFLOW Ground-Water Modeling Utility Programs
Banta, Edward R.; Paschke, Suzanne S.; Litke, David W.
2008-01-01
This report documents five utility programs designed for use in conjunction with ground-water flow models developed with the U.S. Geological Survey's MODFLOW ground-water modeling program. One program extracts calculated flow values from one model for use as input to another model. The other four programs extract model input or output arrays from one model and make them available in a form that can be used to generate an ArcGIS raster data set. The resulting raster data sets may be useful for visual display of the data or for further geographic data processing. The utility program GRID2GRIDFLOW reads a MODFLOW binary output file of cell-by-cell flow terms for one (source) model grid and converts the flow values to input flow values for a different (target) model grid. The spatial and temporal discretization of the two models may differ. The four other utilities extract selected 2-dimensional data arrays in MODFLOW input and output files and write them to text files that can be imported into an ArcGIS geographic information system raster format. These four utilities require that the model cells be square and aligned with the projected coordinate system in which the model grid is defined. The four raster-conversion utilities are * CBC2RASTER, which extracts selected stress-package flow data from a MODFLOW binary output file of cell-by-cell flows; * DIS2RASTER, which extracts cell-elevation data from a MODFLOW Discretization file; * MFBIN2RASTER, which extracts array data from a MODFLOW binary output file of head or drawdown; and * MULT2RASTER, which extracts array data from a MODFLOW Multiplier file.
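The raster-conversion step can be pictured as serializing a 2-D model array into the ESRI ASCII grid format that a GIS can import; this minimal writer is a sketch of the idea, not the utilities' actual code:

```python
import numpy as np

def array_to_ascii_raster(arr, xll, yll, cellsize, nodata=-9999.0):
    """Serialize a 2-D model array (rows ordered north to south, matching
    MODFLOW layer arrays) as an ESRI ASCII raster string. Assumes square,
    axis-aligned cells, as the report requires of the model grid."""
    nrows, ncols = arr.shape
    header = (f"ncols {ncols}\nnrows {nrows}\n"
              f"xllcorner {xll}\nyllcorner {yll}\n"
              f"cellsize {cellsize}\nNODATA_value {nodata}\n")
    body = "\n".join(" ".join(str(v) for v in row) for row in arr)
    return header + body + "\n"

heads = np.array([[10.5, 10.2], [10.1, 9.8]])   # hypothetical head array
print(array_to_ascii_raster(heads, xll=0.0, yll=0.0, cellsize=100.0))
```

The real utilities additionally read MODFLOW binary output and Discretization/Multiplier files; only the final array-to-raster step is shown here.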
Recent developments in exponential random graph (p*) models for social networks
Robins, Garry; Snijders, Tom; Wang, Peng; Handcock, Mark; Pattison, Philippa
This article reviews new specifications for exponential random graph models proposed by Snijders et al. [Snijders, T.A.B., Pattison, P., Robins, G.L., Handcock, M., 2006. New specifications for exponential random graph models. Sociological Methodology] and demonstrates their improvement over
Keenan, Lisa A; Marshall, Linda L; Eve, Susan
2002-01-01
Psychosocial vulnerabilities were added to a model of healthcare utilization. This extension was tested among low-income women with ethnicity addressed as a moderator. Structured interviews were conducted at 2 points in time, approximately 1 year apart. The constructs of psychosocial vulnerability, demographic predisposing, barriers, and illness were measured by multiple indicators to allow use of Structural Equation Modeling to analyze results. The models were tested separately for each ethnic group. Community office. African-American (N = 266), Euro-American (N = 200), and Mexican-American (N = 210) women were recruited from the Dallas Metropolitan area to participate in Project Health Outcomes of Women, a multi-year, multi-wave study. Face-to-face interviews were conducted with this sample. Participants had been in heterosexual relationships for at least 1 year, were between 20 and 49 years of age, and had incomes less than 200% of the national poverty level. Healthcare utilization, defined as physician visits and general healthcare visits. Illness mediated the effect of psychosocial vulnerability on healthcare utilization for African Americans and Euro-Americans. The model for Mexican Americans was the most complex. Psychosocial vulnerability on illness was partially mediated by barriers, which also directly affected utilization. Psychosocial vulnerabilities were significant utilization predictors for healthcare use for all low-income women in this study. The final models for the 2 minority groups, African Americans and Mexican Americans, were quite different. Hence, women of color should not be considered a homogeneous group in comparison to Euro-Americans.
Broadbent, A. M.; Georgescu, M.; Krayenhoff, E. S.; Sailor, D.
2017-12-01
Utility-scale solar power plants are a rapidly growing component of the solar energy sector. Utility-scale photovoltaic (PV) solar power generation in the United States has increased by 867% since 2012 (EIA, 2016). This expansion is likely to continue as the cost of PV technologies decreases. While most agree that solar power can decrease greenhouse gas emissions, the biophysical effects of PV systems on the surface energy balance (SEB), and the implications for surface climate, are not well understood. To our knowledge, there has never been a detailed observational study of the SEB at a utility-scale solar array. This study presents data from an eddy covariance observational tower, temporarily placed above a utility-scale PV array in Southern Arizona. Comparison of the PV SEB with a reference (unmodified) site shows that solar panels can alter the SEB and near-surface climate. The SEB observations are used to develop and validate a new and more complete SEB PV model. In addition, the PV model is compared to simpler PV modelling methods. The simpler PV models produce results that differ from our newly developed model and cannot capture the more complex processes that influence the PV SEB. Finally, hypothetical scenarios of PV expansion across the continental United States (CONUS) were developed using various spatial mapping criteria. CONUS simulations of PV expansion reveal regional variability in the biophysical effects of PV expansion. The study presents the first rigorous and validated simulations of the biophysical effects of utility-scale PV arrays.
International Nuclear Information System (INIS)
Kang, Li; Tang, Sanyi
2016-01-01
Highlights: • The discrete single species and multiple species models with random perturbation are proposed. • The complex dynamics and interesting bifurcation behavior have been investigated. • The reverse effects of random perturbation on discrete systems have been discussed and revealed. • The main results can be applied to pest control and resources management. - Abstract: Natural species are likely to present several interesting and complex phenomena under random perturbations, which have been confirmed by simple mathematical models. The important questions are: how do random perturbations influence the dynamics of discrete population models with multiple steady states or multiple species interactions? And do random perturbations affect single species and multiple species models differently? To address those questions, we have proposed a discrete single species model with two stable equilibria and a host-parasitoid model with Holling type functional response functions, and examined how the random perturbation affects their dynamics. The main results indicate that the random perturbation does not change the number of blurred orbits of the single species model with two stable steady states compared with results for the classical Ricker model under the same random perturbation, but it can strengthen the stability. However, extensive numerical investigations show that the random perturbation does not influence the complexities of the host-parasitoid models compared with the results for the models without perturbation, while it does double the period of periodic orbits. All this confirms that the random perturbation has opposite effects on the dynamics of the discrete single and multiple population models, which could be applied in practice, including pest control and resources management.
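The single species setting can be reproduced with a perturbed Ricker map; the perturbation scheme here (additive Gaussian noise) is a generic stand-in for the paper's scheme, which the abstract does not fully specify:

```python
import numpy as np

rng = np.random.default_rng(3)

def ricker(r, k, n0, steps, noise_sd=0.0):
    """Ricker map N_{t+1} = N_t * exp(r * (1 - N_t / k)), optionally with
    an additive random perturbation; populations are floored at a tiny
    positive value to keep the state biologically meaningful."""
    n = np.empty(steps)
    n[0] = n0
    for t in range(steps - 1):
        step = n[t] * np.exp(r * (1 - n[t] / k)) + rng.normal(0, noise_sd)
        n[t + 1] = max(step, 1e-9)
    return n

det = ricker(r=0.5, k=100.0, n0=10.0, steps=200)              # deterministic
sto = ricker(r=0.5, k=100.0, n0=10.0, steps=200, noise_sd=2.0)  # perturbed
print(round(det[-1], 1))   # 100.0: settles at the carrying capacity
print(round(sto[-1], 1))   # fluctuates ("blurs") around it instead
```

For r below the period-doubling threshold, as here, the perturbed orbit blurs around the stable equilibrium rather than changing the qualitative dynamics.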
Bayesian analysis for exponential random graph models using the adaptive exchange sampler
Jin, Ick Hoon; Liang, Faming; Yuan, Ying
2013-01-01
Exponential random graph models have been widely used in social network analysis. However, these models are extremely difficult to handle from a statistical viewpoint, because of the existence of intractable normalizing constants. In this paper, we
Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...
Review of utility values for economic modeling in type 2 diabetes.
Beaudet, Amélie; Clegg, John; Thuresson, Per-Olof; Lloyd, Adam; McEwan, Phil
2014-06-01
Economic analysis in type 2 diabetes mellitus (T2DM) requires an assessment of the effect of a wide range of complications. The objective of this article was to identify a set of utility values consistent with the National Institute for Health and Care Excellence (NICE) reference case and to critically discuss and illustrate challenges in creating such a utility set. A systematic literature review was conducted to identify studies reporting utility values for relevant complications. The methodology of each study was assessed for consistency with the NICE reference case. A suggested set of utility values applicable to modeling was derived, giving preference to studies reporting multiple complications and correcting for comorbidity. The review considered 21 relevant diabetes complications. A total of 16,574 articles were identified; after screening, 61 articles were assessed for methodological quality. Nineteen articles met NICE criteria, reporting utility values for 20 of 21 relevant complications. For renal transplant, because no articles meeting NICE criteria were identified, two articles using other methodologies were included. Index value estimates for T2DM without complication ranged from 0.711 to 0.940. Utility decrement associated with complications ranged from 0.014 (minor hypoglycemia) to 0.28 (amputation). Limitations associated with the selection of a utility value for use in economic modeling included variability in patient recruitment, heterogeneity in statistical analysis, large variability around some point estimates, and lack of recent data. A reference set of utility values for T2DM and its complications in line with NICE requirements was identified. This research illustrates the challenges associated with systematically selecting utility data for economic evaluations. Copyright © 2014 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
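Combining a baseline utility with complication decrements is where modeling choices bite: two common conventions (additive and multiplicative) give different index values. The numbers below are illustrative, chosen within the ranges reported in the review:

```python
baseline = 0.785   # hypothetical T2DM utility without complications
decrements = {"minor_hypoglycemia": 0.014, "amputation": 0.280}

# Additive convention: subtract each decrement from the baseline
additive = baseline - sum(decrements.values())

# Multiplicative convention: apply each complication's relative utility
# (baseline minus decrement, as a fraction of baseline) as a multiplier
multiplicative = baseline
for d in decrements.values():
    multiplicative *= (baseline - d) / baseline

print(round(additive, 3), round(multiplicative, 3))  # 0.491 0.496
```

The gap widens as complications accumulate, which is one reason the review prefers source studies that correct for comorbidity rather than stacking single-complication decrements.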
Berkhof, Farida F; van den Berg, Jan W K; Uil, Steven M; Kerstjens, Huib A M
2015-02-01
Telemedicine, care provided by electronic communication, may serve as an alternative or extension to traditional outpatient visits. This pilot study determined the effects of telemedicine on health-care utilization and health status of chronic obstructive pulmonary disease (COPD) patients. One hundred and one patients were randomized: 52 patients received telemedicine care and 49 had traditional outpatient visits. The primary outcome was COPD-specific health status, measured with the Clinical COPD Questionnaire (CCQ). Secondary outcomes included the St. George's Respiratory Questionnaire (SGRQ), the Short Form-36 (SF-36), and resource use in primary and secondary care. The mean age of the participants was 68 ± 9 years and the mean per cent of predicted forced expiratory volume in 1 s was 40.4 ± 12.5. The CCQ total score deteriorated by 0.14 ± 0.13 in the telemedicine group and improved by -0.03 ± 0.14 in the control group (difference 0.17 ± 0.19, 95% confidence interval (CI): -0.21-0.55, P = 0.38). The CCQ symptom domain showed a significant and clinically relevant difference in favour of the control group, 0.52 ± 0.24 (95% CI: 0.04-0.10, P = 0.03). Similar results were found for the SGRQ, whereas results for the SF-36 were inconsistent. Patients in the control group had significantly fewer visits to the pulmonologist than patients in the telemedicine group (P = 0.05). The same trend, although not significant, was found for exacerbations after 6 months. This telemedicine model of health-care-provider-initiated phone calls had a negative effect on health status and resource use in primary and secondary care in comparison with usual care, and therefore cannot be recommended for COPD patients in its current form. © 2014 Asian Pacific Society of Respirology.
Square-lattice random Potts model: criticality and pitchfork bifurcation
International Nuclear Information System (INIS)
Costa, U.M.S.; Tsallis, C.
1983-01-01
Within a real space renormalization group framework based on self-dual clusters, the criticality of the quenched bond-mixed q-state Potts ferromagnet on the square lattice is discussed. On qualitative grounds it is exhibited that the crossover from the pure fixed point to the random one occurs, while q increases, through a pitchfork bifurcation; the relationship with the Harris criterion is analyzed. On quantitative grounds, high precision numerical values are presented for the critical temperatures corresponding to various concentrations of the coupling constants J1 and J2, and various ratios J1/J2. The pure, random and crossover critical exponents are discussed as well. (Author)
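The self-duality underlying the cluster construction fixes the pure-model critical point exactly: for the square-lattice q-state Potts ferromagnet, the standard self-dual condition is e^{K_c} - 1 = sqrt(q), with K_c the critical coupling J/kT:

```python
import math

def potts_Kc(q):
    """Self-dual critical coupling of the pure q-state Potts ferromagnet
    on the square lattice: e^{K_c} - 1 = sqrt(q)."""
    return math.log(1 + math.sqrt(q))

for q in (2, 3, 4):
    print(q, round(potts_Kc(q), 4))
# q = 2 gives ln(1 + sqrt(2)) ~ 0.8814 (the Ising point in Potts units)
```

The paper's quenched bond-mixed case perturbs around this pure fixed point, so the self-dual value is the natural reference for its numerical critical temperatures.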
Genetic Analysis of Daily Maximum Milking Speed by a Random Walk Model in Dairy Cows
DEFF Research Database (Denmark)
Karacaören, Burak; Janss, Luc; Kadarmideen, Haja
Data were obtained from dairy cows stationed at the research farm of ETH Zurich for maximum milking speed. The main aims of this paper are (a) to evaluate if the Wood curve is suitable to model the mean lactation curve and (b) to predict longitudinal breeding values by random regression and random walk models of maximum milking speed. The Wood curve did not provide a good fit to the data set. Quadratic random regressions gave better predictions compared with the random walk model. However, the random walk model does not need to be evaluated for different orders of regression coefficients. In addition, with Kalman filter applications, the random walk model could give online prediction of breeding values. Hence, without waiting for whole lactation records, genetic evaluation could be made when daily or monthly data are available.
Modeling the Dynamic Interrelations between Mobility, Utility, and Land Asking Price
Hidayat, E.; Rudiarto, I.; Siegert, F.; Vries, W. D.
2018-02-01
Limited and insufficient information about the dynamic interrelation among mobility, utility, and land price is the main motivation for this research. Several studies, with several approaches and several variables, have been conducted so far to model land price. However, most of these models appear to generate primarily static land prices. Thus, research is required to compare, design, and validate different models which calculate and/or compare the interrelational changes of mobility, utility, and land price. The applied method is a combination of literature review, expert interviews, and statistical analysis. The result is a newly improved mathematical model which has been validated and is suitable for the case study location. This improved model consists of 12 appropriate variables. The model can be implemented in the city of Salatiga, the case study location, to support better land use planning and mitigate uncontrolled urban growth.
Random Modeling of Daily Rainfall and Runoff Using a Seasonal Model and Wavelet Denoising
Directory of Open Access Journals (Sweden)
Chien-ming Chou
2014-01-01
Full Text Available Instead of Fourier smoothing, this study applied wavelet denoising to acquire the smooth seasonal mean and corresponding perturbation term from daily rainfall and runoff data in traditional seasonal models, which use seasonal means for hydrological time series forecasting. The denoised rainfall and runoff time series data were regarded as the smooth seasonal mean. The probability distribution of the percentage coefficients can be obtained from calibrated daily rainfall and runoff data. For validated daily rainfall and runoff data, percentage coefficients were randomly generated according to the probability distribution and the law of linear proportion. Multiplying the generated percentage coefficient by the smooth seasonal mean resulted in the corresponding perturbation term. Random modeling of daily rainfall and runoff can be obtained by adding the perturbation term to the smooth seasonal mean. To verify the accuracy of the proposed method, daily rainfall and runoff data for the Wu-Tu watershed were analyzed. The analytical results demonstrate that wavelet denoising enhances the precision of daily rainfall and runoff modeling of the seasonal model. In addition, the wavelet denoising technique proposed in this study can obtain the smooth seasonal mean of rainfall and runoff processes and is suitable for modeling actual daily rainfall and runoff processes.
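A one-level Haar transform with soft thresholding is the simplest instance of the wavelet denoising step; it is a sketch of the idea, not the study's multi-level procedure:

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar wavelet denoising with a soft threshold on the
    detail coefficients. A minimal stand-in for the multi-level wavelet
    denoising used to extract a smooth seasonal mean; len(x) must be even."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft threshold
    out = np.empty_like(x)
    out[0::2] = (a + d) / np.sqrt(2)       # inverse Haar transform
    out[1::2] = (a - d) / np.sqrt(2)
    return out

rng = np.random.default_rng(7)
t = np.arange(364)                          # even length required
seasonal = 10 + 5 * np.sin(2 * np.pi * t / 365)   # smooth "seasonal mean"
noisy = seasonal + rng.normal(0, 1.0, t.size)     # daily perturbations
smooth = haar_denoise(noisy, thresh=1.5)

err_noisy = np.abs(noisy - seasonal).mean()
err_smooth = np.abs(smooth - seasonal).mean()
print(err_smooth < err_noisy)   # True: closer to the seasonal mean
```

The residual noisy minus smooth then plays the role of the perturbation term whose percentage coefficients are resampled in the seasonal model.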
A queueing model with randomized depletion of inventory
Albrecher, H.-J.; Boxma, O.J.; Essifi, R.; Kuijstermans, A.C.M.
2015-01-01
In this paper we study an M/M/1 queue, where the server continues to work during idle periods and builds up inventory. This inventory is used for new arriving service requirements, but it is completely emptied at random epochs of a Poisson process, whose rate depends on the current level of the
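The abstract is truncated, but the mechanism it describes (an M/M/1 queue whose idle server keeps working and builds inventory, with the stock serving later arrivals and being emptied at Poisson depletion epochs) can be sketched as a discrete-event simulation. All rates, the one-unit-per-arrival inventory usage, and the state-independent depletion rate are assumptions for illustration, not the paper's exact model:

```python
import random

def simulate(lam, mu, clear_rate, horizon, rng):
    """Event-driven sketch: arrivals (lam), service/production completions (mu),
    and Poisson depletion epochs (clear_rate) race via exponential clocks."""
    t, queue, inventory = 0.0, 0, 0
    samples = []
    while t < horizon:
        total = lam + mu + clear_rate
        t += rng.expovariate(total)       # time to the next event
        u = rng.random() * total          # which exponential clock fired
        if u < lam:                       # arrival of a service requirement
            if inventory > 0:
                inventory -= 1            # served immediately from stock
            else:
                queue += 1
        elif u < lam + mu:                # server completes a unit of work
            if queue > 0:
                queue -= 1
            else:
                inventory += 1            # idle server builds up inventory
        else:
            inventory = 0                 # depletion epoch empties the stock
        samples.append((queue, inventory))
    return samples

samples = simulate(lam=0.5, mu=1.0, clear_rate=0.1, horizon=1000.0,
                   rng=random.Random(5))
```

Because all three clocks are exponential, the memoryless race decomposition used here is valid; only the depletion rule would change if, as the truncated sentence suggests, the rate depends on the current inventory level.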
Positive random fields for modeling material stiffness and compliance
DEFF Research Database (Denmark)
Hasofer, Abraham Michael; Ditlevsen, Ove Dalager; Tarp-Johansen, Niels Jacob
1998-01-01
Positive random fields with known marginal properties and known correlation function are not numerous in the literature. The most prominent example is the lognormal field, for which the complete distribution is known and for which the reciprocal field is also lognormal. It is of interest to supp...
Utilizing Gaze Behavior for Inferring Task Transitions Using Abstract Hidden Markov Models
Directory of Open Access Journals (Sweden)
Daniel Fernando Tello Gamarra
2016-12-01
Full Text Available We demonstrate an improved method for utilizing observed gaze behavior and show that it is useful in inferring hand movement intent during goal-directed tasks. The task dynamics and the relationship between hand and gaze behavior are learned using an Abstract Hidden Markov Model (AHMM). We show that the predicted hand movement transitions occur consistently earlier in AHMM models that include gaze observations than in those that do not.
Direct estimates of unemployment rate and capacity utilization in macroeconometric models
Energy Technology Data Exchange (ETDEWEB)
Klein, L R [Univ. of Pennsylvania, Philadelphia; Su, V
1979-10-01
The problem of measuring resource-capacity utilization as a factor in overall economic efficiency is examined, and a tentative solution is offered. A macro-econometric model is applied to the aggregate production function by linking unemployment rate and capacity utilization rate. Partial- and full-model simulations use Wharton indices as a filter and produce direct estimates of unemployment rates. The simulation paths of durable-goods industries, which are more capital-intensive, are found to be more sensitive to business cycles than the nondurable-goods industries. 11 references.
DIAMOND: A model of incremental decision making for resource acquisition by electric utilities
Energy Technology Data Exchange (ETDEWEB)
Gettings, M.; Hirst, E.; Yourstone, E.
1991-02-01
Uncertainty is a major issue facing electric utilities in planning and decision making. Substantial uncertainties exist concerning future load growth; the lifetimes and performances of existing power plants; the construction times, costs, and performances of new resources being brought online; and the regulatory and economic environment in which utilities operate. This report describes a utility planning model that focuses on frequent and incremental decisions. The key features of this model are its explicit treatment of uncertainty, frequent user interaction with the model, and the ability to change prior decisions. The primary strength of this model is its representation of the planning and decision-making environment that utility planners and executives face. Users interact with the model after every year or two of simulation, which provides an opportunity to modify past decisions as well as to make new decisions. For example, construction of a power plant can be started one year, and if circumstances change, the plant can be accelerated, mothballed, canceled, or continued as originally planned. Similarly, the marketing and financial incentives for demand-side management programs can be changed from year to year, reflecting the short lead time and small unit size of these resources. This frequent user interaction with the model, an operational game, should build greater understanding and insights among utility planners about the risks associated with different types of resources. The model is called DIAMOND, the Decision Impact Assessment Model. It consists of four submodels: FUTURES, FORECAST, SIMULATION, and DECISION. It runs on any IBM-compatible PC and requires no special software or hardware. 19 refs., 13 figs., 15 tabs.
Studies in astronomical time series analysis: Modeling random processes in the time domain
Scargle, J. D.
1979-01-01
Random process models phrased in the time domain are used to analyze astrophysical time series data produced by random processes. A moving average (MA) model represents the data as a sequence of pulses occurring randomly in time, with random amplitudes. An autoregressive (AR) model represents the correlations in the process in terms of a linear function of past values. The best AR model is determined from sampled data and transformed to an MA for interpretation. The randomness of the pulse amplitudes is maximized by a FORTRAN algorithm which is relatively stable numerically. Results of test cases are given to study the effects of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the optical light curve of the quasar 3C 273 is given.
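As a minimal illustration of the AR-to-MA pipeline described above, the sketch below fits an AR(1) model to simulated data via the lag-1 autocorrelation (a Yule-Walker estimate) and expands it into its MA (impulse-response) representation. The paper's actual FORTRAN algorithm, model orders, and randomness-maximization step are not reproduced here:

```python
import random

def fit_ar1(x):
    # Yule-Walker estimate of the AR(1) coefficient from the lag-1 autocovariance
    n = len(x)
    mean = sum(x) / n
    c0 = sum((v - mean) ** 2 for v in x) / n
    c1 = sum((x[t] - mean) * (x[t + 1] - mean) for t in range(n - 1)) / n
    return c1 / c0

def ar_to_ma(phi, n_terms=20):
    # impulse response of x_t = phi * x_{t-1} + e_t gives MA weights phi**k
    return [phi ** k for k in range(n_terms)]

random.seed(7)
phi_true = 0.6
x, prev = [], 0.0
for _ in range(20000):                 # simulate an AR(1) process
    prev = phi_true * prev + random.gauss(0.0, 1.0)
    x.append(prev)

phi_hat = fit_ar1(x)                   # determine the AR model from sampled data
ma_weights = ar_to_ma(phi_hat)         # transform AR to MA for interpretation
```

The MA weights here are the geometrically decaying pulse responses of the fitted AR(1); for higher-order AR models the same expansion is obtained by running the recursion on a unit impulse.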
DEFF Research Database (Denmark)
Soegaard, Rikke; Bünger, Cody E; Christiansen, Terkel
2007-01-01
STUDY DESIGN: Cost-utility evaluation of a randomized, controlled trial with a 4- to 8-year follow-up. OBJECTIVE: To investigate the incremental cost per quality-adjusted-life-year (QALY) when comparing circumferential fusion to posterolateral fusion in a long-term, societal perspective. SUMMARY OF BACKGROUND DATA: The cost-effectiveness of circumferential fusion in a long-term perspective is uncertain but nonetheless highly relevant, as the ISSLS prize winner 2006 in clinical studies reported the effect of circumferential fusion superior to the effect of posterolateral fusion. A recent trial found no significant difference between posterolateral and circumferential fusion, reporting cost-effectiveness from a 2-year viewpoint. METHODS: A total of 146 patients were randomized to posterolateral or circumferential fusion and followed 4 to 8 years after surgery. The mean age of the cohort was 46 years (range...
Walls, Brittany D; Wallace, Elizabeth R; Brothers, Stacey L; Berry, David T R
2017-12-01
Recent concern about malingered self-report of symptoms of attention-deficit hyperactivity disorder (ADHD) in college students has resulted in an urgent need for scales that can detect feigning of this disorder. The present study provided further validation data for a recently developed validity scale for the Conners' Adult ADHD Rating Scale (CAARS), the CAARS Infrequency Index (CII), as well as for the Inconsistency Index (INC). The sample included 139 undergraduate students: 21 individuals with diagnoses of ADHD, 29 individuals responding honestly, 54 individuals responding randomly (full or half), and 35 individuals instructed to feign. Overall, the INC showed moderate sensitivity to random responding (.44-.63) and fairly high specificity to ADHD (.86-.91). The CII demonstrated modest sensitivity to feigning (.31-.46) and excellent specificity to ADHD (.91-.95). Sequential application of validity scales had correct classification rates of honest (93.1%), ADHD (81.0%), feigning (57.1%), half random (42.3%), and full random (92.9%). The present study suggests that the CII is modestly sensitive (true positive rate) to feigned ADHD symptoms, and highly specific (true negative rate) to ADHD. Additionally, this study highlights the utility of applying the CAARS validity scales in a sequential manner for identifying feigning. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Application of the resonating Hartree-Fock random phase approximation to the Lipkin model
International Nuclear Information System (INIS)
Nishiyama, S.; Ishida, K.; Ido, M.
1996-01-01
We have applied the resonating Hartree-Fock (Res-HF) approximation to the exactly solvable Lipkin model by utilizing a newly developed orbital-optimization algorithm. The Res-HF wave function was superposed by two Slater determinants (S-dets) which give two corresponding local energy minima of monopole "deformations". The self-consistent Res-HF calculation gives an excellent ground-state correlation energy. There exist excitations due to small vibrational fluctuations of the orbitals and mixing coefficients around their stationary values. They are described by a new approximation called the resonating Hartree-Fock random phase approximation (Res-HF RPA). Matrices of the second-order variation of the Res-HF energy have the same structures as those of the Res-HF RPA's matrices. The quadratic steepest descent of the Res-HF energy in the orbital optimization is considered to certainly include both effects of RPA-type fluctuations up to higher orders and their mode-mode couplings. It is a very important and interesting task to apply the Res-HF RPA to the Lipkin model with the use of the stationary values and to prove the above argument. It turns out that the Res-HF RPA works far better than the usual HF RPA and the renormalized one. We also show some important features of the Res-HF RPA. (orig.)
BOX-COX transformation and random regression models for fecal egg count data
Directory of Open Access Journals (Sweden)
Marcos Vinicius Silva
2012-01-01
Full Text Available Accurate genetic evaluation of livestock is based on appropriate modeling of phenotypic measurements. In ruminants, fecal egg count (FEC) is commonly used to measure resistance to nematodes. FEC values are not normally distributed, and logarithmic transformations have been used to achieve normality before analysis. However, the transformed data are often not normally distributed, especially when data are extremely skewed. A series of repeated FEC measurements may provide information about the population dynamics of a group or individual. A total of 6,375 FEC measures were obtained for 410 animals between 1992 and 2003 from the Beltsville Agricultural Research Center Angus herd. Original data were transformed using an extension of the Box-Cox transformation to approach normality and to estimate (co)variance components. We also proposed using random regression models (RRM) for genetic and non-genetic studies of FEC. Phenotypes were analyzed using RRM and restricted maximum likelihood. Within the different orders of Legendre polynomials used, those with more parameters (order 4) adjusted FEC data best. Results indicated that the transformation of FEC data utilizing the Box-Cox transformation family was effective in reducing skewness and kurtosis and dramatically increased estimates of heritability; measurements of FEC obtained between weeks 12 and 26 of a 26-week experimental challenge period were genetically correlated.
Zhang, Xueliang; Xiao, Pengfeng; Feng, Xuezhi
2017-09-01
It has been a common idea to produce multiscale segmentations to represent the various geographic objects in high-spatial-resolution remote sensing (HR) images. However, it remains a great challenge to automatically select the proper segmentation scale(s) according to the image information alone. In this study, we propose a novel way of information fusion at the object level by combining hierarchical multiscale segmentations with existing thematic information produced by classification or recognition. The tree Markov random field (T-MRF) model is designed for the multiscale combination framework, through which the object type is determined to agree as closely as possible with the existing thematic information. At the same time, the object boundary is jointly determined by the thematic labels and the multiscale segments through the minimization of the energy function. The benefits of the proposed T-MRF combination model include: (1) reducing the dependence on segmentation scale selection when utilizing multiscale segmentations; (2) exploring the hierarchical context naturally embedded in the multiscale segmentations. The HR images in both urban and rural areas are used in the experiments to show the effectiveness of the proposed combination framework on these two aspects.
Box-Cox Transformation and Random Regression Models for Fecal egg Count Data.
da Silva, Marcos Vinícius Gualberto Barbosa; Van Tassell, Curtis P; Sonstegard, Tad S; Cobuci, Jaime Araujo; Gasbarre, Louis C
2011-01-01
Accurate genetic evaluation of livestock is based on appropriate modeling of phenotypic measurements. In ruminants, fecal egg count (FEC) is commonly used to measure resistance to nematodes. FEC values are not normally distributed, and logarithmic transformations have been used in an effort to achieve normality before analysis. However, the transformed data are often still not normally distributed, especially when data are extremely skewed. A series of repeated FEC measurements may provide information about the population dynamics of a group or individual. A total of 6375 FEC measures were obtained for 410 animals between 1992 and 2003 from the Beltsville Agricultural Research Center Angus herd. Original data were transformed using an extension of the Box-Cox transformation to approach normality and to estimate (co)variance components. We also proposed using random regression models (RRM) for genetic and non-genetic studies of FEC. Phenotypes were analyzed using RRM and restricted maximum likelihood. Within the different orders of Legendre polynomials used, those with more parameters (order 4) adjusted FEC data best. Results indicated that the transformation of FEC data utilizing the Box-Cox transformation family was effective in reducing skewness and kurtosis and dramatically increased estimates of heritability; measurements of FEC obtained between weeks 12 and 26 of a 26-week experimental challenge period were genetically correlated.
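A minimal sketch of the Box-Cox step described in both records above: transforming heavily right-skewed, strictly positive data (here synthetic lognormal values standing in for FEC) and checking that sample skewness falls. The fixed lambda and the synthetic data are illustrative assumptions; the studies estimated the transformation parameters and (co)variance components jointly:

```python
import math
import random
import statistics

def boxcox(x, lam):
    # Box-Cox power transformation for strictly positive data
    if lam == 0:
        return [math.log(v) for v in x]
    return [(v ** lam - 1.0) / lam for v in x]

def skewness(x):
    # sample skewness: third standardized moment
    m = statistics.mean(x)
    s = statistics.pstdev(x)
    return sum(((v - m) / s) ** 3 for v in x) / len(x)

random.seed(3)
# synthetic lognormal values mimic the heavy right skew of fecal egg counts
fec = [math.exp(random.gauss(2.0, 1.0)) for _ in range(5000)]
transformed = boxcox(fec, 0.0)  # lambda = 0 reduces to the log transform
```

In practice lambda would be chosen by profile likelihood (or, as in the abstracts, an extension of the family estimated together with the variance components) rather than fixed at the log limit.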
DEFF Research Database (Denmark)
Eklund, Andreas; Axén, Iben; Kongsted, Alice
2014-01-01
is the number of days with bothersome pain over 12 months. Secondary measures are self-rated health (EQ-5D), function (the Roland Morris Disability Questionnaire), psychological profile (the Multidimensional Pain Inventory), pain intensity (the Numeric Rating Scale), and work absence. The primary utility measure of the study is quality-adjusted life years and will be calculated using the EQ-5D questionnaire. Direct medical costs as well as indirect costs will be considered. Subjects are randomly allocated into two treatment arms: 1) Symptom-guided treatment (patient controlled), receiving care when patients feel a need... Strict inclusion criteria should ensure a suitable target group and the use of frequent data collection should provide an accurate outcome measurement. The study utilizes normal clinical procedures, which should aid the transferability of the results. Trial registration: ClinicalTrials.gov; NCT01539863...
DEFF Research Database (Denmark)
Momsen, Anne-Mette H.; Stapelfeldt, Christina Malmose; Nielsen, Claus Vinther
2016-01-01
Background: The aim of the RCT study was to investigate if the effect of a multidisciplinary intervention on return to work (RTW) and health care utilization differed by participants' self-reported health status at baseline, defined by a) level of somatic symptoms, b) health anxiety and c) self-reported general health. Methods: A total of 443 individuals were randomized to the intervention (n = 301) or the control group (n = 142) and responded to a questionnaire measuring health status at baseline. Participants were followed in registries measuring RTW and health care utilization. Relative risk (RR) and odds ratio (OR) were used as measures of associations. Results were adjusted for gender, age, educational level, work ability and previous sick leave. Results: Among all responders we found no effect of the intervention on RTW. Among participants with low health anxiety, the one-year probability of RTW...
Analysis of time to event outcomes in randomized controlled trials by generalized additive models.
Directory of Open Access Journals (Sweden)
Christos Argyropoulos
Full Text Available Randomized controlled trials almost invariably utilize the hazard ratio (HR) calculated with a Cox proportional hazards model as a treatment efficacy measure. Despite the widespread adoption of HRs, these provide a limited understanding of the treatment effect and may even provide a biased estimate when the assumption of proportional hazards in the Cox model is not verified by the trial data. Additional treatment effect measures on the survival probability or the time scale may be used to supplement HRs, but a framework for the simultaneous generation of these measures is lacking. By splitting follow-up time at the nodes of a Gauss-Lobatto numerical quadrature rule, techniques for Poisson Generalized Additive Models (PGAM) can be adopted for flexible hazard modeling. Straightforward simulation post-estimation transforms PGAM estimates for the log hazard into estimates of the survival function. These in turn were used to calculate relative and absolute risks or even differences in restricted mean survival time between treatment arms. We illustrate our approach with extensive simulations and in two trials: IPASS (in which the proportionality of hazards was violated) and HEMO (a long-duration study conducted under evolving standards of care on a heterogeneous patient population). PGAM can generate estimates of the survival function and the hazard ratio that are essentially identical to those obtained by Kaplan-Meier curve analysis and the Cox model. PGAMs can simultaneously provide multiple measures of treatment efficacy after a single data pass. Furthermore, PGAMs supported not only unadjusted (overall) treatment effect analyses but also subgroup and adjusted analyses, while incorporating multiple time scales and accounting for non-proportional hazards in survival data. By augmenting the HR conventionally reported, PGAMs have the potential to support the inferential goals of multiple stakeholders involved in the evaluation and appraisal of clinical trial results under proportional and
DEFF Research Database (Denmark)
Adamina, Michel; Kehlet, Henrik; Tomlinson, George A
2011-01-01
in costs that threatens the stability of health care systems. Enhanced recovery pathways (ERP) have been proposed as a means to reduce morbidity and improve effectiveness of care. We have reviewed the evidence supporting the implementation of ERP in clinical practice. Methods Medline, Embase...... of health care processes. Thus, while accelerating recovery and safely reducing hospital stay, ERPs optimize utilization of health care resources. ERPs can and should be routinely used in care after colorectal and other major gastrointestinal procedures....
Cook, James P; Mahajan, Anubha; Morris, Andrew P
2017-02-01
Linear mixed models are increasingly used for the analysis of genome-wide association studies (GWAS) of binary phenotypes because they can efficiently and robustly account for population stratification and relatedness through inclusion of random effects for a genetic relationship matrix. However, the utility of linear (mixed) models in the context of meta-analysis of GWAS of binary phenotypes has not been previously explored. In this investigation, we present simulations to compare the performance of linear and logistic regression models under alternative weighting schemes in a fixed-effects meta-analysis framework, considering designs that incorporate variable case-control imbalance, confounding factors and population stratification. Our results demonstrate that linear models can be used for meta-analysis of GWAS of binary phenotypes, without loss of power, even in the presence of extreme case-control imbalance, provided that one of the following schemes is used: (i) effective sample size weighting of Z-scores or (ii) inverse-variance weighting of allelic effect sizes after conversion onto the log-odds scale. Our conclusions thus provide essential recommendations for the development of robust protocols for meta-analysis of binary phenotypes with linear models.
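The two weighting schemes recommended above can be sketched directly. The effective sample size for a case-control study is taken here as 4 / (1/Ncases + 1/Ncontrols), the convention used by common GWAS meta-analysis tools; the function names and inputs are illustrative:

```python
import math

def meta_z_effective_n(z_scores, n_cases, n_controls):
    # scheme (i): effective-sample-size weighting of per-study Z-scores
    weights = []
    for ca, co in zip(n_cases, n_controls):
        # effective N of a case-control study: 4 / (1/Ncases + 1/Ncontrols)
        weights.append(math.sqrt(4.0 / (1.0 / ca + 1.0 / co)))
    num = sum(w * z for w, z in zip(weights, z_scores))
    return num / math.sqrt(sum(w * w for w in weights))

def meta_inverse_variance(betas, ses):
    # scheme (ii): inverse-variance weighting of allelic effect sizes,
    # assumed already converted onto the log-odds scale
    w = [1.0 / (s * s) for s in ses]
    beta = sum(wi * b for wi, b in zip(w, betas)) / sum(w)
    se = math.sqrt(1.0 / sum(w))
    return beta, se
```

Using the effective N rather than the total N is what protects scheme (i) against the case-control imbalance the abstract emphasizes.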
Energy Technology Data Exchange (ETDEWEB)
Mai, Trieu [National Renewable Energy Lab. (NREL), Golden, CO (United States); Barrows, Clayton [National Renewable Energy Lab. (NREL), Golden, CO (United States); Lopez, Anthony [National Renewable Energy Lab. (NREL), Golden, CO (United States); Hale, Elaine [National Renewable Energy Lab. (NREL), Golden, CO (United States); Dyson, Mark [National Renewable Energy Lab. (NREL), Golden, CO (United States); Eurek, Kelly [National Renewable Energy Lab. (NREL), Golden, CO (United States)
2015-04-01
In this report, we analyze the impacts of model configuration and detail in capacity expansion models, computational tools used by utility planners looking to find the least cost option for planning the system and by researchers or policy makers attempting to understand the effects of various policy implementations. The present analysis focuses on the importance of model configurations — particularly those related to capacity credit, dispatch modeling, and transmission modeling — to the construction of scenario futures. Our analysis is primarily directed toward advanced tools used for utility planning and is focused on those impacts that are most relevant to decisions with respect to future renewable capacity deployment. To serve this purpose, we develop and employ the NREL Resource Planning Model to conduct a case study analysis that explores 12 separate capacity expansion scenarios of the Western Interconnection through 2030.
Activated aging dynamics and effective trap model description in the random energy model
Baity-Jesi, M.; Biroli, G.; Cammarota, C.
2018-01-01
We study the out-of-equilibrium aging dynamics of the random energy model (REM) ruled by single spin-flip Metropolis dynamics. We focus on the dynamical evolution taking place on time scales diverging with the system size. Our aim is to show to what extent the activated dynamics displayed by the REM can be described in terms of an effective trap model. We identify two time regimes: the first corresponds to the process of escaping from a basin in the energy landscape and to the subsequent exploration of high-energy configurations, whereas the second corresponds to the evolution from one deep basin to another. By combining numerical simulations with analytical arguments we show why the trap-model description does not hold in the first regime but becomes exact in the second.
The Utility of the UTAUT Model in Explaining Mobile Learning Adoption in Higher Education in Guyana
Thomas, Troy Devon; Singh, Lenandlar; Gaffar, Kemuel
2013-01-01
In this paper, we compare the utility of modified versions of the unified theory of acceptance and use of technology (UTAUT) model in explaining mobile learning adoption in higher education in a developing country and evaluate the size and direction of the impacts of the UTAUT factors on behavioural intention to adopt mobile learning in higher…
IAPCS: A COMPUTER MODEL THAT EVALUATES POLLUTION CONTROL SYSTEMS FOR UTILITY BOILERS
The IAPCS model, developed by U.S. EPA's Air and Energy Engineering Research Laboratory and made available to the public through the National Technical Information Service, can be used by utility companies, architectural and engineering companies, and regulatory agencies at all l...
An integrated utility-based model of conflict evaluation and resolution in the Stroop task.
Chuderski, Adam; Smolen, Tomasz
2016-04-01
Cognitive control allows humans to direct and coordinate their thoughts and actions in a flexible way, in order to reach internal goals regardless of interference and distraction. The hallmark test used to examine cognitive control is the Stroop task, which elicits both the weakly learned but goal-relevant and the strongly learned but goal-irrelevant response tendencies, and requires people to follow the former while ignoring the latter. After reviewing the existing computational models of cognitive control in the Stroop task, a novel, integrated utility-based model is proposed. The model uses 3 crucial control mechanisms: response utility reinforcement learning, utility-based conflict evaluation using the Festinger formula for assessing the conflict level, and top-down adaptation of response utility in service of conflict resolution. Their complex, dynamic interaction led to replication of 18 experimental effects, the largest data set explained to date by 1 Stroop model. The simulations cover the basic congruency effects (including the response latency distributions), performance dynamics and adaptation (including EEG indices of conflict), as well as the effects resulting from manipulations applied to stimulation and responding, which are yielded by the extant Stroop literature. (c) 2016 APA, all rights reserved.
A utility-theoretic model for QALYs and willingness to pay.
Klose, Thomas
2003-01-01
Despite the widespread use of quality-adjusted life years (QALYs) in economic evaluation studies, their utility-theoretic foundation remains unclear. A model for preferences over health, money, and time is presented in this paper. Under the usual assumptions of the original QALY model, an additively separable representation of the utilities in different periods exists. In contrast to the usual assumption that QALY weights depend solely on aspects of health-related quality of life, wealth-standardized QALY weights might vary with the wealth level in the presented extension of the original QALY model, resulting in an inconsistent measurement of QALYs. Further assumptions are presented to make the measurement of QALYs consistent with lifetime preferences over health and money. Even under these strict assumptions, QALYs and WTP (which can also be defined in this utility-theoretic model) are not equivalent preference-based measures of the effects of health technologies on an individual level. The results suggest that the individual WTP per QALY can depend on the magnitude of the QALY gain as well as on the disease burden when health influences the marginal utility of wealth. Further research seems indicated on this structural aspect of preferences over health and wealth and on quantifying its impact. Copyright 2002 John Wiley & Sons, Ltd.
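As a minimal illustration of the additively separable structure discussed above, the sketch below computes QALYs as a discounted sum of per-period utility weights. The discrete annual discounting and the example health profile are assumptions for illustration; the paper's extension additionally lets the weights vary with the wealth level:

```python
def qalys(health_states, discount_rate=0.03):
    """QALYs as an additively separable, discounted sum of utility weights.

    health_states: list of (duration_in_whole_years, utility_weight in [0, 1]).
    """
    total, year = 0.0, 0
    for duration, weight in health_states:
        for _ in range(int(duration)):
            total += weight / (1 + discount_rate) ** year  # annual discounting
            year += 1
    return total

# hypothetical profile: 2 years at full health, then 3 years at utility 0.7
undiscounted = qalys([(2, 1.0), (3, 0.7)], discount_rate=0.0)
```

With a zero discount rate this reduces to the familiar duration-times-weight sum (here 2 x 1.0 + 3 x 0.7 = 4.1 QALYs); a positive rate shrinks the contribution of later years.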
Utilizing the PREPaRE Model When Multiple Classrooms Witness a Traumatic Event
Bernard, Lisa J.; Rittle, Carrie; Roberts, Kathy
2011-01-01
This article presents an account of how the Charleston County School District responded to an event by utilizing the PREPaRE model (Brock et al., 2009). The acronym, PREPaRE, refers to a range of crisis response activities: P (prevent and prepare for psychological trauma), R (reaffirm physical health and perceptions of security and safety), E…
Entropy-optimal weight constraint elicitation with additive multi-attribute utility models
Valkenhoef , van Gert; Tervonen, Tommi
2016-01-01
We consider the elicitation of incomplete preference information for the additive utility model in terms of linear constraints on the weights. Eliciting incomplete preferences using holistic pair-wise judgments is convenient for the decision maker, but selecting the best pair-wise comparison is
Numerical Simulation of Entropy Growth for a Nonlinear Evolutionary Model of Random Markets
Directory of Open Access Journals (Sweden)
Mahdi Keshtkar
2016-01-01
Full Text Available In this communication, the generalized continuous economic model for random markets is revisited. In this model for random markets, agents trade in pairs and exchange their money in a random and conservative way. They display the exponential wealth distribution as the asymptotic equilibrium, independently of the effectiveness of the transactions and of the limitation of the total wealth. In the current work, the entropy of the mentioned model is defined, and some theorems on the entropy growth of this evolutionary problem are given. Furthermore, the entropy increase is verified by simulation on some numerical examples.
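The pairwise conservative exchange process and its entropy growth can be sketched as follows. The specific exchange rule (a uniform random re-split of the pair's joint wealth), the binned entropy estimate, and all parameters are illustrative assumptions, not the paper's exact formulation:

```python
import math
import random

def entropy(wealth, bins=20):
    # crude Shannon entropy (in nats) of the binned wealth distribution
    top = max(wealth)
    counts = [0] * bins
    for w in wealth:
        counts[min(int(bins * w / (top + 1e-12)), bins - 1)] += 1
    n = len(wealth)
    return -sum(c / n * math.log(c / n) for c in counts if c)

def trade(wealth, steps, rng):
    # conservative pairwise exchange: a random pair re-splits its joint wealth
    n = len(wealth)
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if i == j:
            continue
        total = wealth[i] + wealth[j]
        r = rng.random()
        wealth[i], wealth[j] = r * total, (1 - r) * total
    return wealth

rng = random.Random(11)
n = 2000
w0 = [1.0] * n          # everyone starts with equal wealth
h0 = entropy(w0)        # entropy of the initial (degenerate) distribution
wealth = trade(list(w0), 200000, rng)
h1 = entropy(wealth)    # entropy after many trades: should have grown
# at the exponential equilibrium the standard deviation equals the mean
mean = sum(wealth) / n
var = sum((w - mean) ** 2 for w in wealth) / n
```

Total wealth is conserved trade by trade, while the distribution relaxes toward the exponential form (mean equal to standard deviation) and the binned entropy rises from its initial value, matching the qualitative claims of the abstract.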
Modeling of reactive chemical transport of leachates from a utility fly-ash disposal site
International Nuclear Information System (INIS)
Apps, J.A.; Zhu, M.; Kitanidis, P.K.; Freyberg, D.L.; Ronan, A.D.; Itakagi, S.
1991-04-01
Fly ash from fossil-fuel power plants is commonly slurried and pumped to disposal sites. The utility industry is interested in finding out whether any hazardous constituents might leach from the accumulated fly ash and contaminate ground and surface waters. To evaluate the significance of this problem, a representative site was selected for modeling. FASTCHEM, a computer code developed for the Electric Power Research Institute, was utilized for the simulation of the transport and fate of the fly-ash leachate. The chemical evolution of the leachate was modeled as it migrated along streamtubes defined by the flow model. The modeling predicts that most of the leachate seeps through the dam confining the ash pond. With the exception of ferrous, manganous, sulfate, and small amounts of nickel ions, all other dissolved constituents are predicted to discharge at environmentally acceptable concentrations.
Clinical Utility of the DSM-5 Alternative Model of Personality Disorders
DEFF Research Database (Denmark)
Bach, Bo; Markon, Kristian; Simonsen, Erik
2015-01-01
In Section III, Emerging Measures and Models, DSM-5 presents an Alternative Model of Personality Disorders, which is an empirically based model of personality pathology measured with the Level of Personality Functioning Scale (LPFS) and the Personality Inventory for DSM-5 (PID-5). These novel instruments assess level of personality impairment and pathological traits. Objective. A number of studies have supported the psychometric qualities of the LPFS and the PID-5, but the utility of these instruments in clinical assessment and treatment has not been extensively evaluated. The goal of this study was to evaluate the clinical utility of this alternative model of personality disorders. Method. We administered the LPFS and the PID-5 to psychiatric outpatients diagnosed with personality disorders and other nonpsychotic disorders. The personality profiles of six characteristic patients were inspected...
van Kasteren, T.L.M.; Noulas, A.K.; Kröse, B.J.A.; Smit, G.J.M.; Epema, D.H.J.; Lew, M.S.
2008-01-01
Conditional Random Fields are a discriminative probabilistic model which recently gained popularity in applications that require modeling nonindependent observation sequences. In this work, we present the basic advantages of this model over generative models and argue about its suitability in the
Large Deviations for the Annealed Ising Model on Inhomogeneous Random Graphs: Spins and Degrees
Dommers, Sander; Giardinà, Cristian; Giberti, Claudio; Hofstad, Remco van der
2018-04-01
We prove a large deviations principle for the total spin and the number of edges under the annealed Ising measure on generalized random graphs. We also give detailed results on how the annealing over the Ising model changes the degrees of the vertices in the graph and show how it gives rise to interesting correlated random graphs.
DEFF Research Database (Denmark)
Strathe, Anders B; Mark, Thomas; Nielsen, Bjarne
2014-01-01
Random regression models were used to estimate covariance functions between cumulated feed intake (CFI) and body weight (BW) in 8424 Danish Duroc pigs. Random regressions on second order Legendre polynomials of age were used to describe genetic and permanent environmental curves in BW and CFI...
[Home health resource utilization measures using a case-mix adjustor model].
You, Sun-Ju; Chang, Hyun-Sook
2005-08-01
The purpose of this study was to measure home health resource utilization using a Case-Mix Adjustor Model developed in the U.S. The subjects of this study were 484 patients who had received home health care for more than 4 visits during a 60-day episode at 31 home health care institutions. Data on the 484 patients were merged onto a 60-day payment segment. Based on the results, the researcher classified home health resource groups (HHRG). The subjects were classified into 34 HHRGs in Korea. Home health resource utilization according to clinical severity was highest in the Minimum (C0) severity group and lowest (97,000 won) in group C2F3S1; the former was 5.82 times higher than the latter. Resource utilization in home health care has become an issue of concern due to rising costs for home health care. The results suggest the need for more analytical attention on the utilization of and expenditures for home care using a Case-Mix Adjustor Model.
Developing a clinical utility framework to evaluate prediction models in radiogenomics
Wu, Yirong; Liu, Jie; Munoz del Rio, Alejandro; Page, David C.; Alagoz, Oguzhan; Peissig, Peggy; Onitilo, Adedayo A.; Burnside, Elizabeth S.
2015-03-01
Combining imaging and genetic information to predict disease presence and behavior is being codified into an emerging discipline called "radiogenomics." Optimal evaluation methodologies for radiogenomics techniques have not been established. We aim to develop a clinical decision framework based on utility analysis to assess prediction models for breast cancer. Our data come from a retrospective case-control study, collecting Gail model risk factors, genetic variants (single nucleotide polymorphisms, SNPs), and mammographic features in the Breast Imaging Reporting and Data System (BI-RADS) lexicon. We first constructed three logistic regression models built on different sets of predictive features: (1) Gail, (2) Gail+SNP, and (3) Gail+SNP+BI-RADS. Then, we generated ROC curves for the three models. After assigning utility values for each category of findings (true negative, false positive, false negative, and true positive), we pursued optimal operating points on the ROC curves to achieve the maximum expected utility (MEU) of breast cancer diagnosis. We used McNemar's test to compare the predictive performance of the three models. We found that SNPs and BI-RADS features augmented the baseline Gail model in terms of the area under the ROC curve (AUC) and MEU. SNPs improved sensitivity of the Gail model (0.276 vs. 0.147) and reduced specificity (0.855 vs. 0.912). When additional mammographic features were added, sensitivity increased to 0.457 and specificity to 0.872. SNPs and mammographic features played a significant role in breast cancer risk estimation (p-value < 0.001). Our decision framework comprising utility analysis and McNemar's test provides a novel framework to evaluate prediction models in the realm of radiogenomics.
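The maximum-expected-utility step described above can be sketched directly: given utilities for the four outcome categories and the disease prevalence, score every ROC operating point and keep the best one. This is a minimal sketch of the general technique; the function name, argument order, and the utility/prevalence values in the test are illustrative assumptions, not the paper's.

```python
import numpy as np

def max_expected_utility(sens, spec, prevalence, u_tp, u_fp, u_fn, u_tn):
    """Expected utility at every ROC operating point; returns (MEU, best index).
    sens/spec are arrays of sensitivity/specificity along the ROC curve."""
    sens = np.asarray(sens, dtype=float)
    spec = np.asarray(spec, dtype=float)
    # Weight diseased outcomes by prevalence, non-diseased by its complement.
    eu = (prevalence * (sens * u_tp + (1.0 - sens) * u_fn)
          + (1.0 - prevalence) * (spec * u_tn + (1.0 - spec) * u_fp))
    i = int(np.argmax(eu))
    return float(eu[i]), i
```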
Random vibration sensitivity studies of modeling uncertainties in the NIF structures
International Nuclear Information System (INIS)
Swensen, E.A.; Farrar, C.R.; Barron, A.A.; Cornwell, P.
1996-01-01
The National Ignition Facility is a laser fusion project that will provide an above-ground experimental capability for nuclear weapons effects simulation. This facility will achieve fusion ignition utilizing solid-state lasers as the energy driver. The facility will cover an estimated 33,400 m² at an average height of 5-6 stories. Within this complex, a number of beam transport structures will be housed that will deliver the laser beams to the target area within a 50 μm RMS radius of the target center. The beam transport structures are approximately 23 m long and reach heights of approximately 2-3 stories. Low-level ambient random vibrations are one of the primary concerns currently controlling the design of these structures. Low-level ambient vibrations, 10⁻¹⁰ g²/Hz over a frequency range of 1 to 200 Hz, are assumed to be present during all facility operations. Each structure described in this paper will be required to achieve and maintain 0.6 μrad RMS laser beam pointing stability for a minimum of 2 hours under these vibration levels. To date, finite element (FE) analysis has been performed on a number of the beam transport structures. Certain assumptions have to be made regarding structural uncertainties in the FE models. These uncertainties consist of damping values for concrete and steel, compliance within bolted and welded joints, and assumptions regarding the phase coherence of ground motion components. In this paper, the influence of these structural uncertainties on the predicted pointing stability of the beam line transport structures, as determined by random vibration analysis, will be discussed.
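In random vibration analysis of this kind, an RMS level follows from integrating the power spectral density over frequency (the integral of the PSD is the variance). A minimal sketch, using the paper's flat ambient spectrum of 10⁻¹⁰ g²/Hz over 1-200 Hz; the function name and the number of frequency points are illustrative choices.

```python
import numpy as np

def rms_from_psd(freqs, psd):
    """RMS level implied by a one-sided PSD, via trapezoidal integration
    of the PSD over frequency (variance = integral of PSD)."""
    freqs = np.asarray(freqs, dtype=float)
    psd = np.asarray(psd, dtype=float)
    var = float(np.sum(0.5 * (psd[1:] + psd[:-1]) * np.diff(freqs)))
    return var ** 0.5

# Flat ambient spectrum from the paper: 1e-10 g^2/Hz over 1-200 Hz
f = np.linspace(1.0, 200.0, 400)
g_rms = rms_from_psd(f, np.full_like(f, 1e-10))   # ~1.4e-4 g RMS
```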
Utility of Social Modeling in Assessment of a State's Propensity for Nuclear Proliferation
International Nuclear Information System (INIS)
Coles, Garill A.; Brothers, Alan J.; Whitney, Paul D.; Dalton, Angela C.; Olson, Jarrod; White, Amanda M.; Cooley, Scott K.; Youchak, Paul M.; Stafford, Samuel V.
2011-01-01
This report is the third and final report out of a set of three reports documenting research for the U.S. Department of Energy (DOE) National Nuclear Security Administration (NNSA) Office of Nonproliferation Research and Development (NA-22) Simulations, Algorithms, and Modeling program, which investigates how social modeling can be used to improve proliferation assessment for informing nuclear security, policy, safeguards, design of nuclear systems, and research decisions. Social modeling has not been used to any significant extent in proliferation studies. This report focuses on the utility of social modeling as applied to the assessment of a State's propensity to develop a nuclear weapons program.
Target-oriented utility theory for modeling the deterrent effects of counterterrorism
International Nuclear Information System (INIS)
Bier, Vicki M.; Kosanoglu, Fuat
2015-01-01
Optimal resource allocation in security has been a significant challenge for critical infrastructure protection. Numerous studies use game theory as the method of choice, because an attacker can often observe the defender's investment in security and adapt his choice of strategies accordingly. However, most of these models do not explicitly consider deterrence, with the result that they may lead to wasted resources if less investment would be sufficient to deter an attack. In this paper, we assume that the defender is uncertain about the level of defensive investment that would deter an attack, and use target-oriented utility theory to optimize the level of defensive investment, taking into account the probability of deterrence. - Highlights: • We propose a target-oriented utility model for attacker deterrence. • We model attack deterrence as a function of attacker success probability. • We compare the target-oriented utility model and a conventional game-theoretical model. • Results show that our model yields a better value of the defender's objective function. • Results support that defending series systems is more difficult than defending parallel systems.
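The core trade-off can be sketched numerically: the defender pays the investment plus the expected attack loss, where the attack is deterred with a probability that grows with investment. This is a sketch of the general idea only; the exponential forms for the deterrence threshold and the attacker success probability, and all numeric values, are illustrative assumptions, not the paper's model.

```python
import numpy as np

def expected_cost(x, loss, deter_cdf, success_prob):
    """Defender's expected cost: investment x plus expected attack loss.
    The attack is deterred with probability deter_cdf(x); if not deterred,
    it succeeds with probability success_prob(x)."""
    return x + (1.0 - deter_cdf(x)) * success_prob(x) * loss

# Illustrative (hypothetical) functional forms:
deter_cdf = lambda x: 1.0 - np.exp(-x / 5.0)   # deterrence threshold ~ Exp(mean 5)
success = lambda x: np.exp(-x / 10.0)          # success prob. decays with defense
xs = np.linspace(0.0, 50.0, 2001)
costs = expected_cost(xs, 100.0, deter_cdf, success)
x_opt = float(xs[np.argmin(costs)])            # optimal defensive investment
```

Under these assumed forms, some positive investment strictly beats investing nothing, because a small investment buys a large drop in the probability of an undeterred, successful attack.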
Why environmental and resource economists should care about non-expected utility models
Energy Technology Data Exchange (ETDEWEB)
Shaw, W. Douglass; Woodward, Richard T. [Department of Agricultural Economics, Texas A and M University (United States)
2008-01-15
Experimental and theoretical analysis has shown that the conventional expected utility (EU) and subjective expected utility (SEU) models, which are linear in probabilities, have serious limitations in certain situations. We argue here that these limitations are often highly relevant to the work that environmental and natural resource economists do. We discuss some of the experimental evidence and alternatives to the SEU. We consider the theory used, the problems studied, and the methods employed by resource economists. Finally, we highlight some recent work that has begun to use some of the alternatives to the EU and SEU frameworks and discuss areas where much future work is needed. (author)
The Random Walk Model Based on Bipartite Network
Directory of Open Access Journals (Sweden)
Zhang Man-Dun
2016-01-01
Full Text Available With the continuing development of electronic commerce and the growth of network information, users are increasingly overwhelmed by information. Although traditional information retrieval technology can relieve this overload to some extent, it cannot offer a targeted, personalized service based on a user's interests and activities. In this context, recommendation algorithms arose. In this paper, building on conventional recommendation methods, we study random walk schemes based on bipartite networks and their applications. We put forward a similarity measurement based on implicit feedback, into which a non-uniform weight vector (the weight of each item in the system) is introduced. We also put forward an improved random walk scheme that makes use of partial or incomplete neighbor information to generate recommendations. Finally, experiments on a real data set show that recommendation accuracy and practicality are improved, confirming the validity of the results.
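The basic bipartite random walk recommender can be sketched as a two-step mass diffusion: resource placed on a user's collected items spreads to the users of those items and back to items. This is a sketch of the standard scheme, not the paper's improved variant; the function name and matrix layout are assumptions.

```python
import numpy as np

def bipartite_walk_scores(A, u):
    """Two-step mass-diffusion (random walk) scores on a user-item bipartite
    graph. A is a binary user-by-item matrix; returns item scores for user u."""
    A = np.asarray(A, dtype=float)
    k_items = np.maximum(A.sum(axis=0), 1.0)   # item degrees
    k_users = np.maximum(A.sum(axis=1), 1.0)   # user degrees
    f = A[u].copy()                            # unit resource on collected items
    user_res = A @ (f / k_items)               # items -> users, split by item degree
    scores = A.T @ (user_res / k_users)        # users -> items, split by user degree
    scores[A[u] > 0] = 0.0                     # don't re-recommend collected items
    return scores
```

Items are then ranked by score; the paper's contribution replaces the uniform splitting with non-uniform item weights and partial neighbor information.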
Numerical modelling of random walk one-dimensional diffusion
International Nuclear Information System (INIS)
Vamos, C.; Suciu, N.; Peculea, M.
1996-01-01
The evolution of a particle which moves on a discrete one-dimensional lattice according to a random walk law approximates the diffusion process better as the steps of the spatial lattice and of time become smaller. For a sufficiently large ensemble of particles, one can assume that their relative frequency at the lattice knots approximates the distribution function of the diffusion process. This assumption has been tested by simulating on a computer two analytical solutions of the diffusion equation: the Brownian motion and the steady-state linear distribution. To evaluate quantitatively the similarity between the numerical and analytical solutions we have used a norm given by the absolute value of the difference of the two solutions. Also, a diffusion coefficient at every lattice knot and moment of time has been calculated, using the numerical solution both from the diffusion equation and from the particle flux given by Fick's law. The difference between the diffusion coefficient of the analytical solution and the spatial-lattice mean coefficient of the numerical solution constitutes another quantitative indication of the similarity of the two solutions. The results obtained show that the approximation depends first on the number of particles at each knot of the spatial lattice. In conclusion, the random walk is a microscopic process of the molecular-dynamics type which permits the simulation of diffusion processes with a given precision. The numerical method presented in this work may be useful both in the analysis of real experiments and for theoretical studies.
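The correspondence between the lattice random walk and diffusion can be checked in a few lines: for a symmetric walk with step dx and unit time step, the ensemble variance grows as 2Dt with D = dx²/(2Δt). A minimal sketch under those standard assumptions (particle counts and step sizes here are illustrative, not the paper's).

```python
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_steps, dx = 20000, 200, 1.0

# Each particle takes n_steps symmetric +-dx jumps on the lattice (dt = 1).
steps = rng.choice([-dx, dx], size=(n_particles, n_steps))
x = steps.sum(axis=1)

# Theory: var(x) = 2*D*t with D = dx**2 / (2*dt), i.e. var(x) = n_steps * dx**2.
var_emp = float(x.var())
```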
Research on the Prediction Model of CPU Utilization Based on ARIMA-BP Neural Network
Directory of Open Access Journals (Sweden)
Wang Jina
2016-01-01
Full Text Available The dynamic deployment of virtual machines is one of the current research focuses in cloud computing. Traditional methods mainly act after service performance has already degraded, and therefore lag behind. To solve this problem, a new prediction model based on CPU utilization is constructed in this paper. The new CPU utilization prediction model provides a reference for the VM dynamic deployment process, allowing deployment to finish before service performance degrades. This method not only ensures the quality of service but also improves server performance and resource utilization. The new CPU utilization prediction method based on the ARIMA-BP neural network mainly includes four parts: preprocessing the collected data, building the ARIMA-BP neural network predictive model, correcting the nonlinear residuals of the time series with the BP prediction algorithm, and obtaining the prediction results by analyzing the above data comprehensively.
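The hybrid idea, fit a linear time-series model, then correct its residuals with a small neural network, can be sketched in plain numpy. This is a sketch only: the paper's ARIMA order, network architecture, and training details are not given, so the AR(p) least-squares fit, the tiny tanh network, and all hyperparameters below are illustrative assumptions.

```python
import numpy as np

def fit_ar(y, p):
    """Least-squares AR(p) fit; returns [intercept, phi_1, ..., phi_p]."""
    X = np.column_stack([np.ones(len(y) - p)] +
                        [y[p - k - 1:len(y) - k - 1] for k in range(p)])
    beta, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return beta

def ar_residuals(y, beta, p):
    """One-step-ahead residuals of the fitted AR(p) model."""
    X = np.column_stack([np.ones(len(y) - p)] +
                        [y[p - k - 1:len(y) - k - 1] for k in range(p)])
    return y[p:] - X @ beta

def fit_mlp(X, t, hidden=4, lr=0.05, epochs=2000, seed=0):
    """Tiny one-hidden-layer network trained by batch gradient descent,
    standing in for the 'BP neural network' residual corrector."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, hidden); b2 = 0.0
    n = len(t)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)
        e = h @ W2 + b2 - t                    # prediction error
        gh = np.outer(e, W2) * (1.0 - h ** 2)  # backprop through tanh
        W2 -= lr * (h.T @ e) / n; b2 -= lr * e.mean()
        W1 -= lr * (X.T @ gh) / n; b1 -= lr * gh.mean(axis=0)
    return lambda Z: np.tanh(Z @ W1 + b1) @ W2 + b2
```

The combined forecast is the AR prediction plus the network's prediction of the next residual from lagged residuals.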
Ganz, Michael L; Hansen, Brian Bekker; Valencia, Xavier; Strandberg-Larsen, Martin
2015-05-01
Economic evaluation is becoming more common and important as new biologic therapies for rheumatoid arthritis (RA) are developed. While much has been published about how to design cost-utility models for RA to conduct these evaluations, less has been written about the sources of data populating those models. The goal is to review the literature and to provide recommendations for future data collection efforts. This study reviewed RA cost-utility models published between January 2006 and February 2014 focusing on five key sources of data (health-related quality-of-life and utility, clinical outcomes, disease progression, course of treatment, and healthcare resource use and costs). It provided recommendations for collecting the appropriate data during clinical and other studies to support modeling of biologic treatments for RA. Twenty-four publications met the selection criteria. Almost all used two steps to convert clinical outcomes data to utilities rather than more direct methods; most did not use clinical outcomes measures that captured absolute levels of disease activity and physical functioning; one-third of them, in contrast with clinical reality, assumed zero disease progression for biologic-treated patients; little more than half evaluated courses of treatment reflecting guideline-based or actual clinical care; and healthcare resource use and cost data were often incomplete. Based on these findings, it is recommended that future studies collect clinical outcomes and health-related quality-of-life data using appropriate instruments that can convert directly to utilities; collect data on actual disease progression; be designed to capture real-world courses of treatment; and collect detailed data on a wide range of healthcare resources and costs.
Evaluation model of wind energy resources and utilization efficiency of wind farm
Ma, Jie
2018-04-01
Due to the large amount of curtailed (abandoned) wind power in wind farms, establishing a wind farm evaluation model is particularly important for the future development of wind farms. In this paper, considering a wind farm's wind energy conditions, a Wind Energy Resource Model (WERM) and a Wind Energy Utilization Efficiency Model (WEUEM) are established to conduct a comprehensive assessment of the wind farm. The Wind Energy Resource Model (WERM) contains average wind speed, average wind power density, and turbulence intensity, which together assess wind energy resources. Based on our model, combined with actual measurement data from a wind farm, we calculate the indicators; the results are in line with the actual situation. The future development of the wind farm can be planned based on this result. Thus, the proposed approach to establishing a wind farm assessment model has application value.
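The three WERM indicators named above have standard definitions and can be computed directly from a wind-speed time series. A minimal sketch using those standard formulas (the air density default and function name are assumptions; the paper's exact aggregation is not given):

```python
import numpy as np

def wind_indicators(v, rho=1.225):
    """Compute the three WERM indicators from a wind-speed series v (m/s):
    average wind speed, mean wind power density (W/m^2, P = 0.5*rho*mean(v^3)),
    and turbulence intensity (std/mean)."""
    v = np.asarray(v, dtype=float)
    v_mean = float(v.mean())
    wpd = 0.5 * rho * float(np.mean(v ** 3))
    ti = float(v.std()) / v_mean
    return v_mean, wpd, ti
```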
Parametric level correlations in random-matrix models
International Nuclear Information System (INIS)
Weidenmueller, Hans A
2005-01-01
We show that parametric level correlations in random-matrix theories are closely related to a breaking of the symmetry between the advanced and the retarded Green functions. The form of the parametric level correlation function is the same as for the disordered case considered earlier by Simons and Altshuler and is given by the graded trace of the commutator of the saddle-point solution with the particular matrix that describes the symmetry breaking in the actual case of interest. The strength factor differs from the case of disorder. It is determined solely by the Goldstone mode. It is essentially given by the number of levels that are strongly mixed as the external parameter changes. The factor can easily be estimated in applications.
Waran, V; Pancharatnam, Devaraj; Thambinayagam, Hari Chandran; Raman, Rajagopal; Rathinam, Alwin Kumar; Balakrishnan, Yuwaraj Kumar; Tung, Tan Su; Rahman, Z A
2014-01-01
Navigation in neurosurgery has expanded rapidly; however, suitable models to train end users on the myriad software and hardware that come with these systems are lacking. Utilizing three-dimensional (3D) industrial rapid prototyping processes, we have been able to create models using actual computed tomography (CT) data from patients with pathology and to use these models to simulate a variety of commonly performed neurosurgical procedures with navigation systems. To assess the possibility of utilizing models created from CT scan datasets obtained from patients with cranial pathology to simulate common neurosurgical procedures using navigation systems. Three patients with pathology were selected (hydrocephalus, right frontal cortical lesion, and midline clival meningioma). CT scan data following an image-guidance surgery protocol, in DICOM format, and a rapid prototyping machine were used to create the necessary printed model with the corresponding pathology embedded. The registration, planning, and navigation capabilities of two navigation systems, using a variety of software and hardware provided by these platforms, were assessed. We were able to register all models accurately using both navigation systems and perform the necessary simulations as planned. Models with pathology created utilizing 3D rapid prototyping techniques accurately reflect data of actual patients and can be used in the simulation of neurosurgical operations using navigation systems. Georg Thieme Verlag KG Stuttgart · New York.
Directory of Open Access Journals (Sweden)
Ryan S. Renslow
2017-06-01
Full Text Available In this study, we developed a two-dimensional mathematical model to predict substrate utilization and metabolite production rates in Shewanella oneidensis MR-1 biofilm in the presence and absence of uranium (U. In our model, lactate and fumarate are used as the electron donor and the electron acceptor, respectively. The model includes the production of extracellular polymeric substances (EPS. The EPS bound to the cell surface and distributed in the biofilm were considered bound EPS (bEPS and loosely associated EPS (laEPS, respectively. COMSOL® Multiphysics finite element analysis software was used to solve the model numerically (model file provided in the Supplementary Material. The input variables of the model were the lactate, fumarate, cell, and EPS concentrations, half saturation constant for fumarate, and diffusion coefficients of the substrates and metabolites. To estimate unknown parameters and calibrate the model, we used a custom designed biofilm reactor placed inside a nuclear magnetic resonance (NMR microimaging and spectroscopy system and measured substrate utilization and metabolite production rates. From these data we estimated the yield coefficients, maximum substrate utilization rate, half saturation constant for lactate, stoichiometric ratio of fumarate and acetate to lactate and stoichiometric ratio of succinate to fumarate. These parameters are critical to predicting the activity of biofilms and are not available in the literature. Lastly, the model was used to predict uranium immobilization in S. oneidensis MR-1 biofilms by considering reduction and adsorption processes in the cells and in the EPS. We found that the majority of immobilization was due to cells, and that EPS was less efficient at immobilizing U. Furthermore, most of the immobilization occurred within the top 10 μm of the biofilm. To the best of our knowledge, this research is one of the first biofilm immobilization mathematical models based on experimental
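The substrate-utilization kinetics at the heart of the model above are Monod-type: the consumption rate saturates in the substrate concentration with half-saturation constant Ks. A minimal one-compartment sketch under simplifying assumptions (fixed biomass, no diffusion, forward-Euler integration); parameter values and the function name are illustrative, not the paper's calibrated values.

```python
import numpy as np

def simulate_substrate(S0, X, mu_max, Ks, Y, t_end, dt=0.01):
    """Forward-Euler integration of Monod-type substrate (e.g. lactate)
    utilization by a fixed biomass concentration X:
        dS/dt = -(mu_max / Y) * X * S / (Ks + S)."""
    ts = np.arange(0.0, t_end + dt, dt)
    S = np.empty_like(ts)
    S[0] = S0
    for i in range(1, len(ts)):
        rate = (mu_max / Y) * X * S[i - 1] / (Ks + S[i - 1])
        S[i] = max(S[i - 1] - rate * dt, 0.0)  # concentration cannot go negative
    return ts, S
```

The full 2-D model couples such kinetics to diffusion and EPS partitioning; this sketch only illustrates the local rate law.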
Directory of Open Access Journals (Sweden)
Hongqiang Liu
2016-06-01
Full Text Available A Bayesian random effects modeling approach was used to examine the influence of neighborhood characteristics on burglary risks in Jianghan District, Wuhan, China. This random effects model is essentially spatial; a spatially structured random effects term and an unstructured random effects term are added to the traditional non-spatial Poisson regression model. Based on social disorganization and routine activity theories, five covariates extracted from the available data at the neighborhood level were used in the modeling. Three regression models were fitted and compared by the deviance information criterion to identify which model best fit our data. A comparison of the results from the three models indicates that the Bayesian random effects model is superior to the non-spatial models in fitting the data and estimating regression coefficients. Our results also show that neighborhoods with above average bar density and department store density have higher burglary risks. Neighborhood-specific burglary risks and posterior probabilities of neighborhoods having a burglary risk greater than 1.0 were mapped, indicating the neighborhoods that should warrant more attention and be prioritized for crime intervention and reduction. Implications and limitations of the study are discussed in our concluding section.
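The non-spatial Poisson regression baseline mentioned above can be sketched compactly: a log-linear model fit by iteratively reweighted least squares (the Bayesian spatially structured random effects would be layered on top of this). The function name and simulation values in the test are illustrative assumptions.

```python
import numpy as np

def poisson_irls(X, y, n_iter=50):
    """Fit a Poisson log-linear regression (the non-spatial baseline model)
    by iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        z = X @ beta + (y - mu) / mu      # working response
        WX = X * mu[:, None]              # Poisson weights: variance = mean
        beta = np.linalg.solve(X.T @ WX, WX.T @ z)
    return beta
```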
DEFF Research Database (Denmark)
Ruban, Andrei; Simak, S.I.; Shallcross, S.
2003-01-01
We present a simple effective tetrahedron model for local lattice relaxation effects in random metallic alloys on simple primitive lattices. A comparison with direct ab initio calculations for supercells representing random Ni0.50Pt0.50 and Cu0.25Au0.75 alloys, as well as the dilute limit of Au-rich CuAu alloys, shows that the model yields a quantitatively accurate description of the relaxation energies in these systems. Finally, we discuss the bond length distribution in random alloys.
A dynamic random effects multinomial logit model of household car ownership
DEFF Research Database (Denmark)
Bue Bjørner, Thomas; Leth-Petersen, Søren
2007-01-01
Using a large household panel we estimate demand for car ownership by means of a dynamic multinomial model with correlated random effects. Results suggest that the persistence in car ownership observed in the data should be attributed both to true state dependence and to unobserved heterogeneity (random effects). It also appears that random effects related to single and multiple car ownership are correlated, suggesting that the IIA assumption employed in simple multinomial models of car ownership is invalid. Relatively small elasticities with respect to income and car costs are estimated.
Björneklett, Helena Granstam; Rosenblad, Andreas; Lindemalm, Christina; Ojutkangas, Marja-Leena; Letocha, Henry; Strang, Peter; Bergkvist, Leif
2013-01-01
More than 50% of breast cancer patients are diagnosed before the age of 65. Returning to work after treatment is, therefore, of interest for both the individual and society. The aim was to study the effect of support group intervention on sick leave and health care utilization in economic terms. Of 382 patients with newly diagnosed breast cancer, 191 + 191 patients were randomized to an intervention group or to a routine control group, respectively. The intervention group received support intervention on a residential basis for one week, followed by four days of follow-up two months later. The support intervention included informative-educational sections, relaxation training, mental visualization and non-verbal communication. Patients answered a questionnaire at baseline, two, six and 12 months about sick leave and health care utilization. There was a trend towards longer sick leave and more health care utilization in the intervention group. The difference in total costs was statistically significantly higher in the intervention group after 12 months (p = 0.0036). Costs to society were not reduced with intervention in its present form.
Phillips, Lawrence; Pearl, Lisa
2015-11-01
The informativity of a computational model of language acquisition is directly related to how closely it approximates the actual acquisition task, sometimes referred to as the model's cognitive plausibility. We suggest that though every computational model necessarily idealizes the modeled task, an informative language acquisition model can aim to be cognitively plausible in multiple ways. We discuss these cognitive plausibility checkpoints generally and then apply them to a case study in word segmentation, investigating a promising Bayesian segmentation strategy. We incorporate cognitive plausibility by using an age-appropriate unit of perceptual representation, evaluating the model output in terms of its utility, and incorporating cognitive constraints into the inference process. Our more cognitively plausible model shows a beneficial effect of cognitive constraints on segmentation performance. One interpretation of this effect is as a synergy between the naive theories of language structure that infants may have and the cognitive constraints that limit the fidelity of their inference processes, where less accurate inference approximations are better when the underlying assumptions about how words are generated are less accurate. More generally, these results highlight the utility of incorporating cognitive plausibility more fully into computational models of language acquisition. Copyright © 2015 Cognitive Science Society, Inc.
Frew, Paula M; Kriss, Jennifer L; Chamberlain, Allison T; Malik, Fauzia; Chung, Yunmi; Cortés, Marielysse; Omer, Saad B
2016-08-02
We sought to examine the effectiveness of persuasive communication interventions on influenza vaccination uptake among black/African American pregnant women in Atlanta, Georgia. We recruited black/African American pregnant women ages 18 to 50 y from Atlanta, GA to participate in a prospective, randomized controlled trial of influenza immunization messaging conducted from January to April 2013. Eligible participants were randomized to 3 study arms. We conducted follow-up questionnaires on influenza immunization at 30-days post-partum with all groups. Chi-square and t tests evaluated group differences, and outcome intention-to-treat assessment utilized log-binomial regression models. Of the 106 enrolled, 95 women completed the study (90% retention), of which 31 were randomly assigned to affective messaging intervention ("Pregnant Pause" video), 30 to cognitive messaging intervention ("Vaccines for a Healthy Pregnancy" video), and 34 to a comparison condition (receipt of the Influenza Vaccine Information Statement). The three groups were balanced on baseline demographic characteristics and reported health behaviors. At baseline, most women (63%, n = 60) reported no receipt of seasonal influenza immunization during the previous 5 y. They expressed a low likelihood (2.1 ± 2.8 on 0-10 scale) of obtaining influenza immunization during their current pregnancy. At 30-days postpartum follow-up, influenza immunization was low among all participants (7-13%) demonstrating no effect after a single exposure to either affective messaging (RR = 1.10; 95% CI: 0.30-4.01) or cognitive messaging interventions (RR = 0.57; 95% CI: 0.11-2.88). Women cited various reasons for not obtaining maternal influenza immunizations. These included concern about vaccine harm (47%, n = 40), low perceived influenza infection risk (31%, n = 26), and a history of immunization nonreceipt (24%, n = 20). The findings reflect the limitations associated with a single exposure to varying maternal influenza
Bhandari, Gajananda P; Subedi, Narayan; Thapa, Janak; Choulagai, Bishnu; Maskey, Mahesh K; Onta, Sharad R
2014-03-19
Nepal is on track to achieve MDG 5, but there is a huge sub-national disparity, with persistently high maternal mortality in the western and hilly regions. The national priority is to reduce this disparity to achieve the goal at the sub-national level. Evidence from developing countries shows that increasing the utilization of skilled attendants at birth is an important indicator for reducing maternal death. Further, there is very low utilization of skilled attendance during childbirth in the western and hilly regions of Nepal, which clearly depicts the barriers to utilization of skilled birth attendants. There is thus a need to overcome the identified barriers to increase utilization and thereby decrease maternal mortality. The hypothesis of this study is that, through a package of interventions, the utilization of skilled birth attendants will be increased and maternal health in Nepal improved. This study involves a cluster randomized controlled trial involving approximately 5000 pregnant women in 36 clusters. The 18 intervention clusters will receive the following interventions: i) mobilization of family support for pregnant women to reach the health facility, ii) availability of emergency funds for institutional childbirth, iii) availability of transport options to reach a health facility for childbirth, iv) training for health workers on communication skills, v) security provisions for SBAs to reach services 24/24 through community mobilization; 18 control clusters will not receive the intervention package. The final evaluation of the intervention is planned to be completed by October 2014. The primary study output is utilization of SBA services. Secondary study outputs measure the uptake of antenatal care, postnatal checkups for mother and baby, availability of transportation for childbirth, operation of the emergency fund, improved reception of women at health services, and improved physical security of SBAs. The intervention package is designed to increase the utilization of skilled
Energy Technology Data Exchange (ETDEWEB)
None, None
2016-05-01
Net-energy metering (NEM) with volumetric retail electricity pricing has enabled rapid proliferation of distributed photovoltaics (DPV) in the United States. However, this transformation is raising concerns about the potential for higher electricity rates and cost-shifting to non-solar customers, reduced utility shareholder profitability, reduced utility earnings opportunities, and inefficient resource allocation. Although DPV deployment in most utility territories remains too low to produce significant impacts, these concerns have motivated real and proposed reforms to utility regulatory and business models, with profound implications for future DPV deployment. This report explores the challenges and opportunities associated with such reforms in the context of the U.S. Department of Energy’s SunShot Initiative. As such, the report focuses on a subset of a broader range of reforms underway in the electric utility sector. Drawing on original analysis and existing literature, we analyze the significance of DPV’s financial impacts on utilities and non-solar ratepayers under current NEM rules and rate designs, the projected effects of proposed NEM and rate reforms on DPV deployment, and alternative reforms that could address utility and ratepayer concerns while supporting continued DPV growth. We categorize reforms into one or more of four conceptual strategies. Understanding how specific reforms map onto these general strategies can help decision makers identify and prioritize options for addressing specific DPV concerns that balance stakeholder interests.
A spatial error model with continuous random effects and an application to growth convergence
Laurini, Márcio Poletti
2017-10-01
We propose a spatial error model with continuous random effects based on Matérn covariance functions and apply this model for the analysis of income convergence processes (β -convergence). The use of a model with continuous random effects permits a clearer visualization and interpretation of the spatial dependency patterns, avoids the problems of defining neighborhoods in spatial econometrics models, and allows projecting the spatial effects for every possible location in the continuous space, circumventing the existing aggregations in discrete lattice representations. We apply this model approach to analyze the economic growth of Brazilian municipalities between 1991 and 2010 using unconditional and conditional formulations and a spatiotemporal model of convergence. The results indicate that the estimated spatial random effects are consistent with the existence of income convergence clubs for Brazilian municipalities in this period.
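The continuous random effects in the model above are built on Matérn covariance functions, which have simple closed forms for half-integer smoothness. A minimal sketch of the ν = 3/2 case (the paper does not specify its smoothness parameter, so this choice, the function name, and the defaults are illustrative assumptions):

```python
import numpy as np

def matern32(d, sigma2=1.0, rho=1.0):
    """Matern covariance with smoothness nu = 3/2 (a closed-form case):
    C(d) = sigma2 * (1 + sqrt(3)*d/rho) * exp(-sqrt(3)*d/rho),
    where d is the distance between locations and rho the range parameter."""
    a = np.sqrt(3.0) * np.asarray(d, dtype=float) / rho
    return sigma2 * (1.0 + a) * np.exp(-a)
```

Evaluating this kernel at arbitrary coordinate pairs is what lets the spatial effect be projected to any location in continuous space, rather than only at discrete lattice sites.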
Soneson, Joshua E
2017-04-01
Wide-angle parabolic models are commonly used in geophysics and underwater acoustics but have seen little application in medical ultrasound. Here, a wide-angle model for continuous-wave high-intensity ultrasound beams is derived, which approximates the diffraction process more accurately than the commonly used Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation without increasing implementation complexity or computing time. A method for preventing the high spatial frequencies often present in source boundary conditions from corrupting the solution is presented. Simulations of shallowly focused axisymmetric beams using both the wide-angle and standard parabolic models are compared to assess the accuracy with which they model diffraction effects. The wide-angle model proposed here offers improved focusing accuracy and less error throughout the computational domain than the standard parabolic model, offering a facile method for extending the utility of existing KZK codes.
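The gain of a wide-angle model can be illustrated numerically: parabolic models approximate the square-root propagation operator √(1 − s), and a Padé-type wide-angle approximant stays accurate at steeper angles than the first-order Taylor expansion behind KZK-type models (the (1,1) Padé form below is a common choice, not necessarily the paper's exact operator):

```python
import math

def exact(s):
    """One-way propagator symbol sqrt(1 - s), with s = (k_perp / k)**2."""
    return math.sqrt(1.0 - s)

def standard_parabolic(s):
    """First-order Taylor expansion, as in standard parabolic models."""
    return 1.0 - 0.5 * s

def wide_angle_pade(s):
    """(1,1) Pade approximant of sqrt(1 - s), a common wide-angle choice."""
    return (1.0 - 0.75 * s) / (1.0 - 0.25 * s)

s = 0.5                        # a fairly steep propagation angle
err_std = abs(standard_parabolic(s) - exact(s))
err_wa = abs(wide_angle_pade(s) - exact(s))
```

At this angle the Padé form is several times closer to the exact square root, which is why wide-angle codes hold focusing accuracy without extra computational cost.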
Energy Technology Data Exchange (ETDEWEB)
Mendelsohn, M.; Kreycik, C.
2012-04-01
Utility-scale solar projects have grown rapidly in number and size over the last few years, driven in part by strong renewable portfolio standards (RPS) and federal incentives designed to stimulate investment in renewable energy technologies. This report provides an overview of such policies, as well as the project financial structures they enable, based on industry literature, publicly available data, and questionnaires conducted by the National Renewable Energy Laboratory (NREL).
Directory of Open Access Journals (Sweden)
Pawlowski Marcin
2012-11-01
Background: Computational models of protein structures have proved useful as search models in molecular replacement (MR), a common method for solving the phase problem faced by macromolecular crystallography. The success of MR depends on the accuracy of the search model. Unfortunately, this parameter remains unknown until the final structure of the target protein is determined. During the last few years, several model quality assessment programs (MQAPs) that predict the local accuracy of theoretical models have been developed. In this article, we analyze whether the application of MQAPs improves the utility of theoretical models in MR. Results: For our dataset of 615 search models, using the real local accuracy of a model increased the MR success ratio by 101% compared with the corresponding polyalanine templates. In contrast, when local model quality was not utilized in MR, the computational models solved only 4.5% more MR searches than polyalanine templates. For the same dataset of 615 models, a workflow combining MR with the predicted local accuracy of a model found 45% more correct solutions than polyalanine templates. To predict this accuracy, MetaMQAPclust, a "clustering MQAP", was used. Conclusions: Using comparative models only marginally increases the MR success ratio in comparison to polyalanine structures of templates. However, the situation changes dramatically once comparative models are used together with their predicted local accuracy. A new functionality was added to the GeneSilico Fold Prediction Metaserver in order to build models that are more useful for MR searches. Additionally, we have developed a simple method, AmIgoMR (Am I good for MR?), to predict whether an MR search with a template-based model for a given template is likely to find the correct solution.
Rosenbaum, D L; Piers, A D; Schumacher, L M; Kase, C A; Butryn, M L
2017-07-01
Many racial and ethnic minority groups (minorities) are disproportionately affected by overweight and obesity; however, minorities are often under-represented in clinical trials of behavioural weight loss (BWL) treatment, potentially limiting the generalizability of these trials' conclusions. Interventions involving technology may be particularly well suited to overcoming the barriers to minority enrollment in BWL trials, such as demanding or unpredictable work schedules, caregiving responsibilities and travel burdens. Thus, this systematic review aimed to describe minority enrollment in trials utilizing technology in interventions, as well as to identify which form(s) of technology yield the highest minority enrollment. Results indicated relatively low enrollment of minorities. Trials integrating smartphone use exhibited significantly greater racial minority enrollment than trials that did not; trials with both smartphone and in-person components exhibited the highest racial minority enrollment. This review is the first to explore how the inclusion of technology in BWL trials relates to minority enrollment and can help address the need to improve minority enrollment in weight loss research. © 2017 World Obesity Federation.
Zero-inflated count models for longitudinal measurements with heterogeneous random effects.
Zhu, Huirong; Luo, Sheng; DeSantis, Stacia M
2017-08-01
Longitudinal zero-inflated count data arise frequently in substance use research when assessing the effects of behavioral and pharmacological interventions. Zero-inflated count models (e.g. zero-inflated Poisson or zero-inflated negative binomial) with random effects have been developed to analyze this type of data. In random effects zero-inflated count models, the random effects covariance matrix is typically assumed to be homogeneous (constant across subjects). However, in many situations this matrix may be heterogeneous (differing by measured covariates). In this paper, we extend zero-inflated count models to account for random effects heterogeneity by modeling their variance as a function of covariates. We show via simulation that ignoring intervention- and covariate-specific heterogeneity can produce biased covariate and random effect estimates. Moreover, those biased estimates can be rectified by correctly modeling the random effects covariance structure. The methodological development is motivated by and applied to the Combined Pharmacotherapies and Behavioral Interventions for Alcohol Dependence (COMBINE) study, the largest clinical trial of alcohol dependence performed in the United States, with 1383 individuals.
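The heterogeneity at issue can be made concrete with a small simulation (a hedged sketch: the covariate, link functions, and all parameter values below are illustrative, not those of the COMBINE analysis):

```python
import math
import random
import statistics

random.seed(1)

def simulate_subject(x, n_obs=10, pi_zero=0.3, beta0=0.5, beta1=0.4,
                     tau0=-1.0, tau1=0.8):
    """Simulate one subject's zero-inflated Poisson counts.

    The random-intercept standard deviation is a log-linear function
    of the covariate x (heterogeneous random effects):
        sd_b = exp(tau0 + tau1 * x)
    All parameter values here are purely illustrative.
    """
    sd_b = math.exp(tau0 + tau1 * x)
    b = random.gauss(0.0, sd_b)            # subject-specific random effect
    counts = []
    for _ in range(n_obs):
        if random.random() < pi_zero:      # structural (excess) zero
            counts.append(0)
        else:
            lam = math.exp(beta0 + beta1 * x + b)
            # Knuth's inversion method for a Poisson draw (fine for small lam)
            L, k, p = math.exp(-lam), 0, 1.0
            while True:
                p *= random.random()
                if p <= L:
                    break
                k += 1
            counts.append(k)
    return counts

group0 = [c for _ in range(200) for c in simulate_subject(x=0.0)]
group1 = [c for _ in range(200) for c in simulate_subject(x=1.0)]
var0, var1 = statistics.pvariance(group0), statistics.pvariance(group1)
```

Subjects with x = 1 receive a larger random-effect variance, so their counts are markedly more overdispersed; a homogeneous-variance model would misattribute this extra spread.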
A simple model of global cascades on random networks
Watts, Duncan J.
2002-04-01
The origin of large but rare cascades that are triggered by small initial shocks is a phenomenon that manifests itself as diversely as cultural fads, collective action, the diffusion of norms and innovations, and cascading failures in infrastructure and organizational networks. This paper presents a possible explanation of this phenomenon in terms of a sparse, random network of interacting agents whose decisions are determined by the actions of their neighbors according to a simple threshold rule. Two regimes are identified in which the network is susceptible to very large cascades, herein called global cascades, that occur very rarely. When cascade propagation is limited by the connectivity of the network, a power law distribution of cascade sizes is observed, analogous to the cluster size distribution in standard percolation theory and avalanches in self-organized criticality. But when the network is highly connected, cascade propagation is limited instead by the local stability of the nodes themselves, and the size distribution of cascades is bimodal, implying a more extreme kind of instability that is correspondingly harder to anticipate. In the first regime, where the distribution of network neighbors is highly skewed, it is found that the most connected nodes are far more likely than average nodes to trigger cascades, but not in the second regime. Finally, it is shown that heterogeneity plays an ambiguous role in determining a system's stability: increasingly heterogeneous thresholds make the system more vulnerable to global cascades; but an increasingly heterogeneous degree distribution makes it less vulnerable.
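The threshold dynamics can be reproduced in a few lines (a sketch: the network here is Erdős–Rényi with mean degree z, and the parameter values are illustrative rather than tuned to the paper's phase diagram):

```python
import random

random.seed(42)

def watts_cascade(n=1000, z=4, threshold=0.18, seed_node=0):
    """Seed one node on an Erdos-Renyi graph and let the cascade run.

    A node activates once the active fraction of its neighbours
    reaches `threshold` (the simple threshold rule of the model).
    Returns the final active fraction, i.e. the cascade size.
    """
    p = z / (n - 1)                          # edge probability for mean degree z
    nbrs = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                nbrs[i].append(j)
                nbrs[j].append(i)
    active = [False] * n
    active[seed_node] = True
    changed = True
    while changed:                           # iterate to a fixed point
        changed = False
        for i in range(n):
            if not active[i] and nbrs[i]:
                frac = sum(active[j] for j in nbrs[i]) / len(nbrs[i])
                if frac >= threshold:
                    active[i] = True
                    changed = True
    return sum(active) / n

size = watts_cascade()
```

Repeating this over many random graphs and seed nodes reproduces the paper's observation that cascade sizes are power-law distributed near the lower connectivity boundary and bimodal near the upper one.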
On the use of prior information in modelling metabolic utilization of energy in growing pigs
DEFF Research Database (Denmark)
Strathe, Anders Bjerring; Jørgensen, Henry; Fernández, José Adalberto
2011-01-01
Construction of models that provide a realistic representation of metabolic utilization of energy in growing animals tends to be over-parameterized because data generated from individual metabolic studies are often sparse. In the Bayesian framework prior information can enter the data analysis......, PD and LD) made on a given pig at a given time followed a multivariate normal distribution. Two different equation systems were adopted from Strathe et al. (2010), generating the expected values in the multivariate normal distribution. Non-informative prior distributions were assigned for all model......, kp and kf, respectively. Utilizing both sets of priors showed that the maintenance component was sensitive to the statement of prior belief and, hence, that the estimate of 0.91 MJ kg^-0.60 d^-1 (95% CI: 0.78; 1.09) should be interpreted with caution. It was shown that boars were superior in depositing
International Nuclear Information System (INIS)
Song, Junnian; Yang, Wei; Higano, Yoshiro; Wang, Xian’en
2015-01-01
Highlights: • A complete bioenergy flow is schemed to industrialize bioenergy utilization. • An input–output optimization simulation model is developed. • Energy supply and demand and bioenergy industries’ development are optimized. • Carbon tax and subsidies are endogenously derived by the model. • Environmental economic benefits of bioenergy utilization are explored dynamically. - Abstract: This paper outlines a complete bioenergy flow incorporating bioresource procurement, feedstock supply, conversion technologies and energy consumption to industrialize the development and utilization of bioenergy. An input–output optimization simulation model is developed to introduce bioenergy industries into the regional socioeconomy and the energy production and consumption system and to dynamically explore the economic, energy and environmental benefits. A 16-term simulation from 2010 to 2025 is performed in scenarios preset based on bioenergy industries, a carbon tax-subsidization policy and distinct levels of greenhouse gas emission constraints. An empirical study is conducted to validate and apply the model. In the optimal scenario, both industrial development and energy supply and demand are optimized, contributing to an 8.41% average gross regional product growth rate and a 39.9% reduction in accumulative greenhouse gas emissions compared with the base scenario. By 2025 the consumption ratio of bioenergy in total primary energy could be increased from 0.5% to 8.2%. The energy self-sufficiency rate could be increased from 57.7% to 77.9%. A dynamic carbon tax rate and the extent to which bioenergy industrial development could be promoted are also elaborated. Regional economic development and greenhouse gas mitigation can potentially be promoted simultaneously by bioenergy utilization and a proper greenhouse gas emission constraint. The methodology presented is capable of introducing new industries or policies related to energy planning and detecting the best tradeoffs of
Changes in fibrinogen availability and utilization in an animal model of traumatic coagulopathy
DEFF Research Database (Denmark)
Hagemo, Jostein S; Jørgensen, Jørgen; Ostrowski, Sisse R
2013-01-01
Impaired haemostasis following shock and tissue trauma is frequently detected in the trauma setting. These changes occur early, and are associated with increased mortality. The mechanism behind trauma-induced coagulopathy (TIC) is not clear. Several studies highlight the crucial role of fibrinogen...... in posttraumatic haemorrhage. This study explores the coagulation changes in a swine model of early TIC, with emphasis on fibrinogen levels and utilization of fibrinogen....
Utility Function for modeling Group Multicriteria Decision Making problems as games
Alexandre Bevilacqua Leoneti
2016-01-01
To assist in the decision making process, several multicriteria methods have been proposed. However, the existing methods assume a single decision-maker and do not consider decision under risk, which is better addressed by Game Theory. Hence, the aim of this research is to propose a Utility Function that makes it possible to model Group Multicriteria Decision Making problems as games. The advantage of using Game Theory for solving Group Multicriteria Decision Making problems is to evaluate th...
Papadopoulos, Christos A; Vouros, Ioannis; Menexes, Georgios; Konstantinidis, Antonis
2015-11-01
A comparison of different treatment modalities for peri-implantitis can lead to the development and application of more effective and efficient methods of therapy in clinical practice. This study compares the effectiveness of open flap debridement used alone with an approach employing the additional use of a diode laser for the treatment of peri-implantitis. Nineteen patients were divided into two groups and treated for peri-implantitis. In the control group (C group), the therapy utilized access flaps, plastic curettes, and sterilized gauzes soaked in saline. The test group (L group) was treated similarly but with additional irradiation using a diode laser. The parameters studied were pocket depth (PD) as the primary outcome variable, and clinical attachment level (CAL), bleeding on probing (BOP), and plaque index (PI) as secondary variables. Measurements were performed at three time points: baseline (BSL), 3 months, and 6 months after treatment. Three months after treatment, a mean PD reduction of 1.19 mm for the control group and 1.38 mm for the laser group was recorded. The corresponding BOP changes were 72.9 and 66.7%, respectively. These changes were significant and remained at the same levels at the 6-month examination. Surgical treatment of peri-implantitis by access flaps leads to improvement of all clinical parameters studied, while the additional use of a diode laser does not seem to have an extra beneficial effect. The additional use of a diode laser in the surgical treatment of peri-implantitis offers a limited clinical benefit.
A cellular automata model of traffic flow with variable probability of randomization
International Nuclear Information System (INIS)
Zheng Wei-Fan; Zhang Ji-Ye
2015-01-01
Research on the stochastic behavior of traffic flow is important for understanding the intrinsic evolution rules of a traffic system. By introducing an interactional potential of vehicles into the randomization step, an improved cellular automata traffic flow model with a variable probability of randomization is proposed in this paper. In the proposed model, the driver is affected by the interactional potential of the vehicles ahead, and his decision-making process is related to that potential. Compared with the traditional cellular automata model, this better captures the driver's random decision-making based on the vehicle and traffic situation ahead in actual traffic. From the improved model, the fundamental diagram (flow–density relationship) is obtained, and the detailed high-density traffic phenomenon is reproduced through numerical simulation. (paper)
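A NaSch-type cellular automaton with a gap-dependent randomization probability can be sketched as follows (the exponential form of p(gap) is an illustrative stand-in for the paper's interactional potential, not its exact expression):

```python
import math
import random

random.seed(0)

def nasch_step(pos, vel, L, vmax=5, p_min=0.1, p_max=0.6, g0=3.0):
    """One parallel update of a NaSch-type CA on a ring of length L.

    The randomization probability grows as the gap to the car ahead
    shrinks, standing in for an interactional potential of the leading
    vehicles (the exponential form here is illustrative)."""
    n = len(pos)
    order = sorted(range(n), key=lambda i: pos[i])   # ring order
    new_vel = [0] * n
    for idx in range(n):
        i = order[idx]
        j = order[(idx + 1) % n]                     # car ahead
        gap = (pos[j] - pos[i] - 1) % L
        v = min(vel[i] + 1, vmax, gap)               # accelerate, stay safe
        p = p_min + (p_max - p_min) * math.exp(-gap / g0)
        if v > 0 and random.random() < p:            # gap-dependent braking
            v -= 1
        new_vel[i] = v
    return [(pos[i] + new_vel[i]) % L for i in range(n)], new_vel

L, n = 100, 20
pos, vel = random.sample(range(L), n), [0] * n
for _ in range(200):
    pos, vel = nasch_step(pos, vel, L)
flow = sum(vel) / L            # flow at density n / L = 0.2
```

Sweeping the density n / L and averaging the flow reproduces a fundamental diagram; because p rises as gaps close, braking is amplified in dense traffic.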
Kranstauber, Bart; Kays, Roland; Lapoint, Scott D; Wikelski, Martin; Safi, Kamran
2012-07-01
1. The recently developed Brownian bridge movement model (BBMM) has advantages over traditional methods because it quantifies the utilization distribution of an animal based on its movement path rather than individual points, and accounts for temporal autocorrelation and high data volumes. However, the BBMM assumes unrealistically homogeneous movement behaviour across all data. 2. Accurate quantification of the utilization distribution is important for identifying the way animals use the landscape. 3. We improve the BBMM by allowing for changes in behaviour, using likelihood statistics to determine change points along the animal's movement path. 4. This novel extension outperforms the current BBMM, as indicated by simulations and examples of a territorial mammal and a migratory bird. The unique ability of our model to work with tracks that are not sampled regularly is especially important for GPS tags that have frequent failed fixes or dynamic sampling schedules. Moreover, our model extension provides a useful one-dimensional measure of behavioural change along animal tracks. 5. This new method provides a more accurate utilization distribution that better describes the space use of realistic, behaviourally heterogeneous tracks. © 2012 The Authors. Journal of Animal Ecology © 2012 British Ecological Society.
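The building block of the BBMM, the Brownian bridge density between two consecutive fixes, can be sketched as follows (location error is folded into an optional `err2` term for brevity; the paper's dynamic extension additionally lets the diffusion parameter `sigma2` change at behavioural change points):

```python
import math

def bb_density(x, y, a, b, t, T, sigma2, err2=0.0):
    """Density of the animal's position (x, y) at time t, given fixes
    a (at time 0) and b (at time T), under a Brownian bridge with
    diffusion parameter sigma2; err2 adds GPS location error variance."""
    alpha = t / T
    mx = a[0] + alpha * (b[0] - a[0])     # bridge mean interpolates the fixes
    my = a[1] + alpha * (b[1] - a[1])
    var = T * alpha * (1.0 - alpha) * sigma2 + err2
    d2 = (x - mx) ** 2 + (y - my) ** 2
    return math.exp(-d2 / (2.0 * var)) / (2.0 * math.pi * var)

# The utilization distribution integrates this density over t and over
# all consecutive pairs of fixes along the track.
mid = bb_density(0.5, 0.5, (0, 0), (1, 1), t=0.5, T=1.0, sigma2=1.0)
off = bb_density(2.0, 2.0, (0, 0), (1, 1), t=0.5, T=1.0, sigma2=1.0)
```

Midway between the fixes the density peaks at the interpolated point and falls off rapidly away from the path, which is what ties the utilization distribution to movement rather than to isolated locations.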
Chang, Juntao; Hu, Qinghua; Yu, Daren; Bao, Wen
2011-11-01
Start/unstart detection is one of the most important issues for hypersonic inlets and is also the foundation of scramjet protection control. Inlet start/unstart detection can be cast as a standard pattern classification problem, and training sample costs have to be considered in classifier modeling because both CFD numerical simulations and wind tunnel experiments of hypersonic inlets cost time and money. To solve this problem, CFD simulation of the inlet is studied as a first step, and the simulation results provide the training data for pattern classification of hypersonic inlet start/unstart. Then classifier modeling technology and maximum classifier utility theory are introduced to analyze the effect of training data cost on classifier utility. In conclusion, it is useful to introduce support vector machine algorithms to acquire the classifier model of hypersonic inlet start/unstart, and the minimum total cost of the hypersonic inlet start/unstart classifier can be obtained from maximum classifier utility theory.
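A minimal flavour of the classification step, on synthetic stand-in features, is sketched below (hedged: the two features and all training constants are invented for illustration; the paper trains on CFD-derived data with a full SVM solver rather than this bare hinge-loss SGD):

```python
import random

random.seed(3)

# Toy stand-in data: two features (think normalized wall-pressure ratios
# along the inlet) with label +1 = started, -1 = unstarted.  Real training
# samples would come from CFD runs or wind tunnel tests, as in the paper.
def make_point(label):
    c = 1.0 if label > 0 else -1.0
    return [c + random.gauss(0, 0.3), c + random.gauss(0, 0.3)], label

data = [make_point(1) for _ in range(50)] + [make_point(-1) for _ in range(50)]

# Linear SVM via stochastic sub-gradient descent on the hinge loss.
w, b, lam = [0.0, 0.0], 0.0, 0.01
for epoch in range(1, 201):
    random.shuffle(data)
    eta = 1.0 / (lam * epoch)              # decaying learning rate
    for x, y in data:
        if y * (w[0] * x[0] + w[1] * x[1] + b) < 1:   # margin violated
            w = [wi - eta * (lam * wi - y * xi) for wi, xi in zip(w, x)]
            b += eta * y
        else:
            w = [wi * (1 - eta * lam) for wi in w]

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1

acc = sum(predict(x) == y for x, y in data) / len(data)
```

Each CFD sample that enters training has a cost, so the classifier-utility question of the paper is how many such points buy an acceptable error rate.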
International Nuclear Information System (INIS)
Rios, Paulo R; Assis, Weslley L S; Ribeiro, Tatiana C S; Villa, Elena
2012-01-01
In a classical paper, Cahn derived expressions for the kinetics of transformations nucleated on random planes and lines. He used those as a model for nucleation on the boundaries, edges and vertices of a polycrystal consisting of equiaxed grains. In this paper it is demonstrated that Cahn's expression for random planes may be used in situations beyond the scope envisaged in Cahn's original paper. For instance, we derived an expression for the kinetics of transformations nucleated on random parallel planes that is identical to that formerly obtained by Cahn considering random planes. Computer simulation of transformations nucleated on random parallel planes is carried out. It is shown that there is excellent agreement between simulated results and analytical solutions. Such an agreement is to be expected if both the simulation and the analytical solution are correct. (paper)
Directory of Open Access Journals (Sweden)
Mansoor Ahmed Siddiqui
2017-06-01
This research work aims to optimize the availability of a framework comprising two units linked together in series configuration, utilizing Markov model and Monte Carlo (MC) simulation techniques. In this article, effort has been made to develop a maintenance model that incorporates three distinct states for each unit, taking into account their different levels of deterioration. Calculations are carried out using the proposed model for two distinct cases of corrective repair, namely perfect and imperfect repairs, with as well as without opportunistic maintenance. Initially, results are obtained using an analytical technique, the Markov model. The results are then validated with the help of MC simulation. In addition, MC simulation-based codes also work well for frameworks that follow non-exponential failure and repair rates, thus overcoming the limitations of the Markov model.
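The flavor of the approach, for the simplest case of two-state units in series with perfect repair, can be sketched as follows (hedged: the three-state deterioration model and opportunistic maintenance of the paper are omitted, and the rates are illustrative):

```python
import random

random.seed(7)

lam, mu = 0.01, 0.1            # illustrative failure and repair rates (per hour)

# Analytical steady-state availability from the Markov model:
# each unit is up a fraction mu / (lam + mu) of the time, and the
# series system is up only when both independent units are up.
a_unit = mu / (lam + mu)
a_series = a_unit ** 2

def simulate(horizon=1_000_000.0):
    """Monte Carlo check: simulate both units as independent two-state
    Markov processes and measure the fraction of time both are up."""
    t, up_time = 0.0, 0.0
    state = [True, True]
    while t < horizon:
        rates = [lam if s else mu for s in state]
        total = sum(rates)
        dt = min(random.expovariate(total), horizon - t)
        if all(state):
            up_time += dt
        t += dt
        if t >= horizon:
            break
        u = 0 if random.random() < rates[0] / total else 1  # which unit flips
        state[u] = not state[u]
    return up_time / horizon

a_mc = simulate()
```

The simulated availability converges to the Markov result; the value of the MC code, as the abstract notes, is that it keeps working when failure and repair times are no longer exponential.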
Business Model Innovation for Local Energy Management: A Perspective from Swiss Utilities
Energy Technology Data Exchange (ETDEWEB)
Facchinetti, Emanuele, E-mail: emanuele.facchinetti@hslu.ch [Lucerne Competence Center for Energy Research, Lucerne University of Applied Science and Arts, Horw (Switzerland); Eid, Cherrelle [Faculty of Technology, Policy and Management, Delft University of Technology, Delft (Netherlands); Bollinger, Andrew [Urban Energy Systems Laboratory, EMPA, Dübendorf (Switzerland); Sulzer, Sabine [Lucerne Competence Center for Energy Research, Lucerne University of Applied Science and Arts, Horw (Switzerland)
2016-08-04
The successful deployment of the energy transition relies on a deep reorganization of the energy market. Business model innovation is recognized as a key driver of this process. This work contributes to this topic by providing to potential local energy management (LEM) stakeholders and policy makers a conceptual framework guiding the LEM business model innovation. The main determinants characterizing LEM concepts and impacting its business model innovation are identified through literature reviews on distributed generation typologies and customer/investor preferences related to new business opportunities emerging with the energy transition. Afterwards, the relation between the identified determinants and the LEM business model solution space is analyzed based on semi-structured interviews with managers of Swiss utilities companies. The collected managers’ preferences serve as explorative indicators supporting the business model innovation process and provide insights to policy makers on challenges and opportunities related to LEM.
Business model innovation for Local Energy Management: a perspective from Swiss utilities
Directory of Open Access Journals (Sweden)
Emanuele Facchinetti
2016-08-01
The successful deployment of the energy transition relies on a deep reorganization of the energy market. Business model innovation is recognized as a key driver of this process. This work contributes to this topic by providing to potential Local Energy Management stakeholders and policy makers a conceptual framework guiding the Local Energy Management business model innovation. The main determinants characterizing Local Energy Management concepts and impacting its business model innovation are identified through literature reviews on distributed generation typologies and customer/investor preferences related to new business opportunities emerging with the energy transition. Afterwards, the relation between the identified determinants and the Local Energy Management business model solution space is analyzed based on semi-structured interviews with managers of Swiss utilities companies. The collected managers’ preferences serve as explorative indicators supporting the business model innovation process and provide insights to policy makers on challenges and opportunities related to Local Energy Management.
Lim, Byung Gun; Lee, Il Ok; Kim, Young Sung; Won, Young Ju; Kim, Heezoo; Kong, Myoung Hoon
2017-01-01
This study was designed to determine whether a deep hypnotic state with a bispectral index (BIS) value less than 40 could alleviate withdrawal movement (WM) upon rocuronium injection during anesthesia induction in children. In total, 135 healthy children (3-12 years) scheduled for minor elective surgery were studied. Without premedication, anesthesia was induced with thiopental sodium 5 mg/kg. Patients were randomized into 2 groups (control vs experimental), and patients in the experimental group were then allocated into 2 groups according to rocuronium injection time, as follows: in the control group (group C; n = 45), rocuronium 0.6 mg/kg was administered at the loss of eyelash reflex; in the 1st experimental group, rocuronium 0.6 mg/kg was administered when BIS fell to less than 40 (group T; n = 45); however, if BIS did not fall below 40 after thiopental sodium administration, manual ventilation was provided with oxygen 6 L/minute using sevoflurane 8%, and rocuronium was then administered when BIS fell below 40 (the 2nd experimental group, group S; n = 45). Rocuronium-induced WM was evaluated using a 4-point scale (no movement; movement/withdrawal involving the arm only; generalized response, with movement/withdrawal of more than 1 extremity, but no requirement for restraint of the body; and generalized response which required restraint of the body and caused coughing or breath-holding). No significant differences were found among the groups for patient characteristics including age, sex, height, and location of venous cannula. However, body weight, height, and body mass index in group S were all smaller than those in group T. The incidence of WM caused by rocuronium was 100% in group C, 95.6% in group T, and 80% in group S, and was significantly lower in group S than in group C. The grade of WM was 3.7 ± 0.6 in group C, 3.2 ± 0.9 in group T, and 2.6 ± 1.0 in group S. It was significantly lower in group T than in group C and
Gilthorpe, M S; Dahly, D L; Tu, Y K; Kubzansky, L D; Goodman, E
2014-06-01
Lifecourse trajectories of clinical or anthropological attributes are useful for identifying how our early-life experiences influence later-life morbidity and mortality. Researchers often use growth mixture models (GMMs) to estimate such phenomena. It is common to place constraints on the random part of the GMM to improve parsimony or to aid convergence, but this can lead to an autoregressive structure that distorts the nature of the mixtures and subsequent model interpretation. This is especially true if changes in the outcome within individuals are gradual compared with the magnitude of differences between individuals. This is not widely appreciated, nor is its impact well understood. Using repeat measures of body mass index (BMI) for 1528 US adolescents, we estimated GMMs that required variance-covariance constraints to attain convergence. We contrasted constrained models with and without an autocorrelation structure to assess the impact this had on the ideal number of latent classes, their size and composition. We also contrasted model options using simulations. When the GMM variance-covariance structure was constrained, a within-class autocorrelation structure emerged. When not modelled explicitly, this led to poorer model fit and models that differed substantially in the ideal number of latent classes, as well as class size and composition. Failure to carefully consider the random structure of data within a GMM framework may lead to erroneous model inferences, especially for outcomes with greater within-person than between-person homogeneity, such as BMI. It is crucial to reflect on the underlying data generation processes when building such models.
Random fluid limit of an overloaded polling model
M. Frolkova (Masha); S.G. Foss (Sergey); A.P. Zwart (Bert)
2014-01-01
In the present paper, we study the evolution of an overloaded cyclic polling model that starts empty. Exploiting a connection with multitype branching processes, we derive fluid asymptotics for the joint queue length process. Under passage to the fluid dynamics, the server switches
Multilevel random effect and marginal models for longitudinal data ...
African Journals Online (AJOL)
The models were applied to data obtained from a phase-III clinical trial on a new meningococcal vaccine. The goal is to investigate whether children injected by the candidate vaccine have a lower or higher risk for the occurrence of specific adverse events than children injected with licensed vaccine, and if so, to quantify the ...
Susceptibility and magnetization of a random Ising model
Energy Technology Data Exchange (ETDEWEB)
Kumar, D; Srivastava, V [Roorkee Univ. (India). Dept. of Physics
1977-08-01
The susceptibility of a bond disordered Ising model is calculated by configurationally averaging an Ornstein-Zernike type of equation for the two spin correlation function. The equation for the correlation function is derived using a diagrammatic method due to Englert. The averaging is performed using bond CPA. The magnetization is also calculated by averaging in a similar manner a linearised molecular field equation.
Spectra of Anderson type models with decaying randomness
Indian Academy of Sciences (India)
Our models include potentials decaying in all directions in which case ..... the free operators with some uniform bounds of low moments of the measure µ weighted ..... We have the following inequality coming out of Cauchy–Schwarz and Fubini, ... The required statement on the limit follows if we now show that the quantity in ...
Modeling and optimization of a utility system containing multiple extractions steam turbines
International Nuclear Information System (INIS)
Luo, Xianglong; Zhang, Bingjian; Chen, Ying; Mo, Songping
2011-01-01
Complex turbines with multiple controlled and/or uncontrolled extractions are widely used in the processing industry and in cogeneration plants to provide steam at different levels, electric power, and driving power. To characterize their thermodynamic behavior under varying conditions, nonlinear mathematical models are developed based on energy balances, thermodynamic principles, and semi-empirical equations. First, the complex turbine is decomposed at the controlled extraction stages into several simple turbines modeled in series. The THM (turbine hardware model) concept is applied to predict the isentropic efficiency of the decomposed simple turbines, and Stodola's formulation is used to simulate the uncontrolled extraction steam parameters. The thermodynamic properties of steam and water are regressed through linearization or piecewise linearization. Second, the simulated results of the proposed model are compared with the data in the working-condition diagram provided by the manufacturer over a wide range of operations. The simulation results deviate little from the diagram data: the maximum modeling error is 0.87% among the seven compared operating conditions. Last, an optimization model of a utility system containing multiple extraction turbines is established and a detailed case is analyzed. Compared with the conventional operation strategy, a maximum of 5.47% of the total operation cost is saved using the proposed optimization model. -- Highlights: → We develop a complete simulation model for steam turbines with multiple extractions. → We test the simulation model using the performance data of commercial turbines. → The simulation error of electric power generation is no more than 0.87%. → We establish a utility system operational optimization model. → The optimal operation scheme achieves a 5.47% cost saving.
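Stodola's formulation mentioned above can be illustrated with its classic ellipse law for the off-design mass flow of a stage group (a sketch: the design-point values are invented, and real turbine models add temperature and efficiency corrections):

```python
import math

def stodola_flow(p_in, p_out, m_d, p_in_d, p_out_d):
    """Off-design mass flow of a turbine stage group by Stodola's
    ellipse law, scaled from a design point (suffix _d); constant
    inlet temperature is assumed for brevity."""
    ratio = (p_in ** 2 - p_out ** 2) / (p_in_d ** 2 - p_out_d ** 2)
    return m_d * math.sqrt(ratio)

# Hypothetical section: 100 t/h at design, 4.0 MPa inlet, 1.0 MPa exhaust.
# Throttling the inlet to 3.5 MPa reduces the swallowing capacity:
m = stodola_flow(p_in=3.5, p_out=1.0, m_d=100.0, p_in_d=4.0, p_out_d=1.0)
```

Relations of this kind link the extraction pressures to the stage-group flows, which is what lets the decomposed simple-turbine models reproduce the manufacturer's working-condition diagram.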
Restoration of dimensional reduction in the random-field Ising model at five dimensions
Fytas, Nikolaos G.; Martín-Mayor, Víctor; Picco, Marco; Sourlas, Nicolas
2017-04-01
The random-field Ising model is one of the few disordered systems where the perturbative renormalization group can be carried out to all orders of perturbation theory. This analysis predicts dimensional reduction, i.e., that the critical properties of the random-field Ising model in D dimensions are identical to those of the pure Ising ferromagnet in D - 2 dimensions. It is well known that dimensional reduction is not true in three dimensions, thus invalidating the perturbative renormalization group prediction. Here, we report high-precision numerical simulations of the 5D random-field Ising model at zero temperature. We illustrate universality by comparing different probability distributions for the random fields. We compute all the relevant critical exponents (including the critical slowing down exponent for the ground-state finding algorithm), as well as several other renormalization-group invariants. The estimated values of the critical exponents of the 5D random-field Ising model are statistically compatible with those of the pure 3D Ising ferromagnet. These results support the restoration of dimensional reduction at D = 5. We thus conclude that the failure of the perturbative renormalization group is a low-dimensional phenomenon. We close our contribution by comparing universal quantities for the random-field problem at dimensions 3 ≤ D < 6, finding equality at all studied dimensions.
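The zero-temperature simulations described above rely on sophisticated ground-state-finding (graph-cut) algorithms. For intuition only, the ground state of a tiny one-dimensional random-field Ising chain can be found by brute-force enumeration; this is an illustrative sketch, not the paper's 5D method:

```python
import itertools, random

random.seed(1)

def rfim_ground_state(h, J=1.0):
    """Exact ground state of an open 1D random-field Ising chain,
    E = -J * sum_i s_i s_{i+1} - sum_i h_i s_i, by brute-force
    enumeration (feasible only for small N)."""
    best_E, best_s = float("inf"), None
    for s in itertools.product((-1, 1), repeat=len(h)):
        E = (-J * sum(s[i] * s[i + 1] for i in range(len(h) - 1))
             - sum(hi * si for hi, si in zip(h, s)))
        if E < best_E:
            best_E, best_s = E, s
    return best_E, best_s

h = [random.gauss(0.0, 0.5) for _ in range(10)]   # weak Gaussian fields
E0, s0 = rfim_ground_state(h)
```

With zero fields the ground state is fully aligned, giving energy -J(N-1); weak random fields perturb this configuration locally.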
A single-level random-effects cross-lagged panel model for longitudinal mediation analysis.
Wu, Wei; Carroll, Ian A; Chen, Po-Yi
2017-12-06
Cross-lagged panel models (CLPMs) are widely used to test mediation with longitudinal panel data. One major limitation of the CLPMs is that the model effects are assumed to be fixed across individuals. This assumption is likely to be violated (i.e., the model effects are random across individuals) in practice. When this happens, the CLPMs can potentially yield biased parameter estimates and misleading statistical inferences. This article proposes a model named a random-effects cross-lagged panel model (RE-CLPM) to account for random effects in CLPMs. Simulation studies show that the RE-CLPM outperforms the CLPM in recovering the mean indirect and direct effects in a longitudinal mediation analysis when random effects exist in the population. The performance of the RE-CLPM is robust to a certain degree, even when the random effects are not normally distributed. In addition, the RE-CLPM does not produce harmful results when the model effects are in fact fixed in the population. Implications of the simulation studies and potential directions for future research are discussed.
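The situation the RE-CLPM targets (a lagged effect that varies randomly across individuals) can be illustrated by simulating panel data with a person-specific autoregressive coefficient. This is a hypothetical sketch, not the authors' simulation design:

```python
import random

random.seed(7)

def simulate_re_clpm(n_people=200, n_waves=6, a_mean=0.4, a_sd=0.15, noise=1.0):
    """Simulate a first-order autoregressive panel whose lagged effect
    a_i varies randomly across individuals, the situation the RE-CLPM
    is designed to handle."""
    panel = []
    for _ in range(n_people):
        a_i = random.gauss(a_mean, a_sd)      # person-specific lagged effect
        series = [random.gauss(0.0, 1.0)]
        for _ in range(n_waves - 1):
            series.append(a_i * series[-1] + random.gauss(0.0, noise))
        panel.append((a_i, series))
    return panel

panel = simulate_re_clpm()
mean_a = sum(a for a, _ in panel) / len(panel)
```

Fitting an ordinary CLPM to such data estimates only a single fixed coefficient near `a_mean` and ignores the between-person spread, which is the bias the RE-CLPM addresses.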
Entropy, complexity, and Markov diagrams for random walk cancer models.
Newton, Paul K; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter
2014-12-19
The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data aggregated over all cancer types. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix for each cancer defines a directed graph model whose nodes are anatomical locations where a metastatic tumor could develop and whose edge weights are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high-entropy cancers; stomach, uterine, pancreatic, and ovarian being mid-level entropy cancers; and colorectal, cervical, bladder, and prostate cancers being prototypical low-entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential.
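The two central quantities here, the entropy of a metastatic-site distribution and the steady state of a Markov transition matrix, can be sketched as follows; the 3-site transition matrix is invented for illustration and is not from the autopsy data:

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a probability distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0.0)

def steady_state(P, iters=2000):
    """Stationary distribution of a row-stochastic matrix by power iteration."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Toy 3-site progression model (illustrative numbers only).
P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.1, 0.3, 0.6]]
pi = steady_state(P)
H = entropy(pi)
```

In the paper's setting, `pi` would be matched to the autopsy-derived site distribution and `H` would be the cancer's entropy score.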
The Joint Venture Model of Knowledge Utilization: a guide for change in nursing.
Edgar, Linda; Herbert, Rosemary; Lambert, Sylvie; MacDonald, Jo-Ann; Dubois, Sylvie; Latimer, Margot
2006-05-01
Knowledge utilization (KU) is an essential component of today's nursing practice and healthcare system. Despite advances in knowledge generation, the gap in knowledge transfer from research to practice continues. KU models have moved beyond factors affecting the individual nurse to a broader perspective that includes the practice environment and the socio-political context. This paper proposes one such theoretical model: the Joint Venture Model of Knowledge Utilization (JVMKU). Key components of the JVMKU, which emerged from an extensive multidisciplinary review of the literature, include leadership, emotional intelligence, person, message, empowered workplace, and the socio-political environment. The model has a broad and practical application and is not specific to one type of KU or one population. This paper provides a description of the JVMKU, its development, and suggested uses at both local and organizational levels. Nurses in both leadership and point-of-care positions will recognize the concepts identified and will be able to apply this model for KU in their own workplace for assessment of areas requiring strengthening and support.
Janssen, Hans-Karl; Stenull, Olaf
2004-02-01
We investigate corrections to scaling induced by irrelevant operators in randomly diluted systems near the percolation threshold. The specific systems that we consider are the random resistor network and a class of continuous spin systems, such as the x-y model. We focus on a family of least irrelevant operators and determine the corrections to scaling that originate from this family. Our field-theoretic analysis carefully takes into account that irrelevant operators mix under renormalization. It turns out that long-standing results on corrections to scaling are, respectively, incorrect (random resistor networks) or incomplete (continuous spin systems).
Application of the load flow and random flow models for the analysis of power transmission networks
International Nuclear Information System (INIS)
Zio, Enrico; Piccinelli, Roberta; Delfanti, Maurizio; Olivieri, Valeria; Pozzi, Mauro
2012-01-01
In this paper, the classical load flow (LF) model and the random flow (RF) model are considered for analyzing the performance of power transmission networks. The analysis concerns both the system performance and the importance of the different system elements; the latter is computed by power flow and random walk betweenness centrality measures. A network system from the literature is analyzed, representing a simple electrical power transmission network. The results obtained highlight the differences between the LF “global approach” to flow dispatch and the RF local approach of randomized node-to-node load transfer. Furthermore, the LF model is computationally less demanding than the RF model, but convergence problems may arise in the LF calculation.
Finite-range Coulomb gas models of banded random matrices and quantum kicked rotors.
Pandey, Akhilesh; Kumar, Avanish; Puri, Sanjay
2017-11-01
Dyson demonstrated an equivalence between infinite-range Coulomb gas models and classical random matrix ensembles for the study of eigenvalue statistics. We introduce finite-range Coulomb gas (FRCG) models via a Brownian matrix process, and study them analytically and by Monte Carlo simulations. These models yield new universality classes, and provide a theoretical framework for the study of banded random matrices (BRMs) and quantum kicked rotors (QKRs). We demonstrate that, for a BRM of bandwidth b and a QKR of chaos parameter α, the appropriate FRCG model has the effective range d=b^{2}/N=α^{2}/N, for large matrix dimensionality N. As d increases, there is a transition from Poisson to classical random matrix statistics.
Directory of Open Access Journals (Sweden)
Hideki Katagiri
2017-10-01
This paper considers linear programming problems (LPPs) where the objective functions involve discrete fuzzy random variables (fuzzy set-valued discrete random variables). New decision making models, which are useful in fuzzy stochastic environments, are proposed based on both possibility theory and probability theory. In multi-objective cases, Pareto optimal solutions of the proposed models are newly defined. Computational algorithms for obtaining the Pareto optimal solutions of the proposed models are provided. It is shown that problems involving discrete fuzzy random variables can be transformed into deterministic nonlinear mathematical programming problems which can be solved through a conventional mathematical programming solver under practically reasonable assumptions. A numerical example of agriculture production problems is given to demonstrate the applicability of the proposed models to real-world problems in fuzzy stochastic environments.
Eggert, G M; Zimmer, J G; Hall, W J; Friedman, B
1991-01-01
This randomized controlled study compared two types of case management for skilled nursing level patients living at home: the centralized individual model and the neighborhood team model. The team model differed from the individual model in that team case managers performed client assessments, care planning, some direct services, and reassessments; they also had much smaller caseloads and were assigned a specific catchment area. While patients in both groups incurred very high estimated health...
Model of Random Polygon Particles for Concrete and Mesh Automatic Subdivision
Institute of Scientific and Technical Information of China (English)
Anonymous
2001-01-01
In order to study the constitutive behavior of concrete at the mesoscopic level, a new method is proposed in this paper. This method uses random polygon particles to simulate the fully graded crushed aggregates of concrete. Based on computational geometry, we carry out the automatic generation of the triangular finite element mesh for the random polygon particle model of concrete. The finite element mesh generated in this paper is also applicable to many other numerical methods.
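A minimal version of the random polygon particle generator can be written by sorting random angles around a particle centre and jittering the vertex radii. This is a hypothetical sketch; the paper's full-grading algorithm also controls particle size distribution and non-overlap, which are omitted here:

```python
import math, random

random.seed(3)

def random_polygon(cx, cy, r_mean, n_min=5, n_max=8, jitter=0.3):
    """A random polygon 'aggregate particle': sort random angles around a
    centre (cx, cy) and perturb each vertex radius about r_mean."""
    n = random.randint(n_min, n_max)
    angles = sorted(random.uniform(0.0, 2.0 * math.pi) for _ in range(n))
    verts = []
    for t in angles:
        r = r_mean * (1.0 + random.uniform(-jitter, jitter))
        verts.append((cx + r * math.cos(t), cy + r * math.sin(t)))
    return verts

poly = random_polygon(0.0, 0.0, 10.0)
```

Sorting the angles guarantees the vertices are in counter-clockwise order, so the polygon is simple (non-self-intersecting) and ready for triangulation.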
Covariance of random stock prices in the Stochastic Dividend Discount Model
Agosto, Arianna; Mainini, Alessandra; Moretto, Enrico
2016-01-01
Dividend discount models have been developed in a deterministic setting. Some authors (Hurley and Johnson, 1994 and 1998; Yao, 1997) have introduced randomness in terms of stochastic growth rates, delivering closed-form expressions for the expected value of stock prices. This paper extends such previous results by determining a formula for the covariance between random stock prices when the dividends' rates of growth are correlated. The formula is eventually applied to real market data.
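The covariance the authors derive in closed form can be checked in spirit by Monte Carlo: simulate two dividend streams whose growth rates share a common shock, discount them, and estimate the sample covariance of the resulting prices. All parameter values below are invented for illustration:

```python
import random

random.seed(11)

def simulate_pair(d0=1.0, k=0.08, T=40):
    """One joint draw of two stock prices, each the present value of a
    dividend stream; the per-period growth rates share a common shock,
    which makes the two prices correlated."""
    d1 = d2 = d0
    pv1 = pv2 = 0.0
    for t in range(1, T + 1):
        common = random.choice((-0.02, 0.02))          # shared growth shock
        d1 *= 1 + 0.03 + common + random.gauss(0, 0.01)
        d2 *= 1 + 0.03 + common + random.gauss(0, 0.01)
        pv1 += d1 / (1 + k) ** t
        pv2 += d2 / (1 + k) ** t
    return pv1, pv2

def sample_cov(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

pairs = [simulate_pair() for _ in range(2000)]
xs, ys = zip(*pairs)
cov = sample_cov(xs, ys)
```

Because the growth shocks are positively correlated across the two stocks, the estimated price covariance comes out positive, which is the qualitative content of the paper's formula.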
A new neural network model for solving random interval linear programming problems.
Arjmandzadeh, Ziba; Safi, Mohammadreza; Nazemi, Alireza
2017-05-01
This paper presents a neural network model for solving random interval linear programming problems. The original problem involving random interval variable coefficients is first transformed into an equivalent convex second-order cone programming problem. A neural network model is then constructed for solving the obtained convex second-order cone problem. Employing the Lyapunov function approach, it is also shown that the proposed neural network model is stable in the sense of Lyapunov and globally convergent to an exact satisfactory solution of the original problem. Several illustrative examples are solved in support of this technique. Copyright © 2017 Elsevier Ltd. All rights reserved.
Phase transitions in the random field Ising model in the presence of a transverse field
Energy Technology Data Exchange (ETDEWEB)
Dutta, A.; Chakrabarti, B.K. [Saha Institute of Nuclear Physics, Bidhannagar, Calcutta (India); Stinchcombe, R.B. [Saha Institute of Nuclear Physics, Bidhannagar, Calcutta (India); Department of Physics, Oxford (United Kingdom)
1996-09-07
We have studied the phase transition behaviour of the random field Ising model in the presence of a transverse (or tunnelling) field. The mean field phase diagram has been studied in detail, in particular the nature of the transition induced by the tunnelling (transverse) field at zero temperature. A modified hyperscaling relation for the zero-temperature transition has been derived using the Suzuki-Trotter formalism and a modified 'Harris criterion'. A mapping of the model to a randomly diluted antiferromagnetic Ising model in uniform longitudinal and transverse fields is also given. (author)
Equilibrium in a random viewer model of television broadcasting
DEFF Research Database (Denmark)
Hansen, Bodil Olai; Keiding, Hans
2014-01-01
The authors considered a model of a commercial television market with advertising and probabilistic viewer choice of channel, where private broadcasters may coexist with a public television broadcaster. The broadcasters influence the probability of getting viewer attention through the amount...... number of channels. The authors derive properties of equilibrium in an oligopolistic market with private broadcasters and show that the number of firms has a negative effect on overall advertising and viewer satisfaction. If there is a public channel that also sells advertisements but does not maximize...... profits, this will have a positive effect on advertiser and viewer satisfaction....
Silkworm cocoons inspire models for random fiber and particulate composites
Energy Technology Data Exchange (ETDEWEB)
Fujia, Chen; Porter, David; Vollrath, Fritz [Department of Zoology, University of Oxford, Oxford OX1 3PS (United Kingdom)
2010-10-15
The bioengineering design principles evolved in silkworm cocoons make them ideal natural prototypes and models for structural composites. Cocoons depend for their stiffness and strength on the connectivity of bonding between their constituent materials of silk fibers and sericin binder. Strain-activated mechanisms for loss of bonding connectivity in cocoons can be translated directly into a surprisingly simple yet universal set of physically realistic as well as predictive quantitative structure-property relations for a wide range of technologically important fiber and particulate composite materials.
Utilization of Model Predictive Control to Balance Power Absorption Against Load Accumulation
Energy Technology Data Exchange (ETDEWEB)
Abbas, Nikhar [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Tom, Nathan M [National Renewable Energy Laboratory (NREL), Golden, CO (United States)
2017-06-03
Wave energy converter (WEC) control strategies have been primarily focused on maximizing power absorption. The use of model predictive control strategies allows for a finite-horizon, multiterm objective function to be solved. This work utilizes a multiterm objective function to maximize power absorption while minimizing the structural loads on the WEC system. Furthermore, a Kalman filter and autoregressive model were used to estimate and forecast the wave exciting force and predict the future dynamics of the WEC. The WEC's power-take-off time-averaged power and structural loads under a perfect forecast assumption in irregular waves were compared against results obtained from the Kalman filter and autoregressive model to evaluate model predictive control performance.
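The autoregressive forecasting step can be sketched with a plain least-squares AR(2) fit to a synthetic narrow-band "wave force" signal. This is a simplified stand-in; the paper combines the forecast with a Kalman filter and a full WEC model:

```python
import math, random

random.seed(5)

# Synthetic 'wave excitation force': a noisy narrow-band oscillation.
f = [math.sin(0.3 * t) + random.gauss(0, 0.05) for t in range(400)]

def fit_ar2(x):
    """Least-squares AR(2) fit x[t] ~ a*x[t-1] + b*x[t-2] (no intercept),
    solving the 2x2 normal equations directly."""
    s11 = sum(x[t-1] * x[t-1] for t in range(2, len(x)))
    s12 = sum(x[t-1] * x[t-2] for t in range(2, len(x)))
    s22 = sum(x[t-2] * x[t-2] for t in range(2, len(x)))
    r1 = sum(x[t] * x[t-1] for t in range(2, len(x)))
    r2 = sum(x[t] * x[t-2] for t in range(2, len(x)))
    det = s11 * s22 - s12 * s12
    return (r1 * s22 - r2 * s12) / det, (s11 * r2 - s12 * r1) / det

a, b = fit_ar2(f)

def forecast(x, a, b, steps):
    """Iterate the fitted recursion forward to predict future samples."""
    x = list(x)
    for _ in range(steps):
        x.append(a * x[-1] + b * x[-2])
    return x[-steps:]

pred = forecast(f, a, b, 10)
```

For a pure sinusoid of angular frequency 0.3 the exact coefficients are a = 2cos(0.3) ≈ 1.91 and b = -1, so the fit should land close to those values despite the added noise.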
A random effects meta-analysis model with Box-Cox transformation.
Yamaguchi, Yusuke; Maruo, Kazushi; Partlett, Christopher; Riley, Richard D
2017-07-19
In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption and the misspecification of the random effects distribution may result in a misleading estimate of the overall mean for the treatment effect, an inappropriate quantification of heterogeneity across studies and a wrongly symmetric prediction interval. We focus on problems caused by an inappropriate normality assumption of the random effects distribution, and propose a novel random effects meta-analysis model where a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise the overall distribution of observed treatment effect estimates, which is the sum of the within-study sampling distributions and the random effects distribution. When sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied for any kind of variable once the treatment effect estimate is defined from the variable. A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and conventional I² from the normal random effects model could be inappropriate summaries, and the proposed model helped reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences in summary results, heterogeneity measures and prediction intervals from the normal random effects model.
A random effects meta-analysis model with Box-Cox transformation
Directory of Open Access Journals (Sweden)
Yusuke Yamaguchi
2017-07-01
Abstract Background In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption and the misspecification of the random effects distribution may result in a misleading estimate of overall mean for the treatment effect, an inappropriate quantification of heterogeneity across studies and a wrongly symmetric prediction interval. Methods We focus on problems caused by an inappropriate normality assumption of the random effects distribution, and propose a novel random effects meta-analysis model where a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise the overall distribution of observed treatment effect estimates, which is the sum of the within-study sampling distributions and the random effects distribution. When sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied for any kind of variable once the treatment effect estimate is defined from the variable. Results A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and conventional I² from the normal random effects model could be inappropriate summaries, and the proposed model helped reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences in summary results, heterogeneity measures and prediction intervals from the normal random effects model.
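The Box-Cox transformation at the heart of the proposed model is simple to state: (y^λ - 1)/λ for λ ≠ 0 and log(y) for λ = 0. A sketch applying it to a skewed set of synthetic "treatment effect estimates" (log-normal draws, purely illustrative):

```python
import math, random

random.seed(2)

def box_cox(y, lam):
    """Box-Cox transform of a positive value y with parameter lambda."""
    if abs(lam) < 1e-12:
        return math.log(y)
    return (y ** lam - 1.0) / lam

def skewness(xs):
    """Sample skewness (biased moment estimator)."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((v - m) ** 2 for v in xs) / n
    return sum((v - m) ** 3 for v in xs) / n / s2 ** 1.5

# Skewed synthetic 'treatment effect estimates' (log-normal, illustrative).
effects = [math.exp(random.gauss(0.0, 0.6)) for _ in range(500)]
transformed = [box_cox(y, 0.0) for y in effects]   # lambda = 0 is the log
```

For these log-normal draws the log (λ = 0) is the exact normalising choice; in the paper λ is instead estimated within the Bayesian model according to the observed departure from normality.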
Sava, Florin A; Yates, Brian T; Lupu, Viorel; Szentagotai, Aurora; David, Daniel
2009-01-01
Cost-effectiveness and cost-utility of cognitive therapy (CT), rational emotive behavioral therapy (REBT), and fluoxetine (Prozac) for major depressive disorder (MDD) were compared in a randomized clinical trial with a Romanian sample of 170 clients. Each intervention was offered for 14 weeks, plus three booster sessions. Beck Depression Inventory (BDI) scores were obtained prior to intervention, 7 and 14 weeks following the start of intervention, and 6 months following completion of intervention. CT, REBT, and fluoxetine did not differ significantly in changes in the BDI, depression-free days (DFDs), or Quality-Adjusted Life Years (QALYs). Average BDI scores decreased from 31.1 before treatment to 9.7 six months following completion of treatment. Due to lower costs, both psychotherapies were more cost-effective, and had better cost-utility, than pharmacotherapy: median $26.44/DFD gained/month for CT and $23.77/DFD gained/month for REBT versus $34.93/DFD gained/month for pharmacotherapy; median cost per QALY of $1,638, $1,734, and $2,287 for CT, REBT, and fluoxetine (Prozac), respectively. (c) 2008 Wiley Periodicals, Inc.
Potts Model with Invisible Colors: Random-Cluster Representation and Pirogov–Sinai Analysis
Enter, Aernout C.D. van; Iacobelli, Giulio; Taati, Siamak
We study a recently introduced variant of the ferromagnetic Potts model consisting of a ferromagnetic interaction among q “visible” colors along with the presence of r non-interacting “invisible” colors. We introduce a random-cluster representation for the model, for which we prove the existence of
P2: A random effects model with covariates for directed graphs
van Duijn, M.A.J.; Snijders, T.A.B.; Zijlstra, B.J.H.
A random effects model is proposed for the analysis of binary dyadic data that represent a social network or directed graph, using nodal and/or dyadic attributes as covariates. The network structure is reflected by modeling the dependence between the relations to and from the same actor or node.
Random Walk Model for the Growth of Monolayer in Dip Pen Nanolithography
International Nuclear Information System (INIS)
Kim, H; Ha, S; Jang, J
2013-01-01
By using a simple random-walk model, we simulate the growth of a self-assembled monolayer (SAM) pattern generated in dip pen nanolithography (DPN). In this model, the SAM pattern grows mainly via the serial pushing of molecules deposited from the tip. We examine various SAM patterns, such as lines, crosses, and letters by changing the tip scan speed.
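The serial-pushing growth mechanism can be caricatured as an internal random walk on a lattice: each molecule is deposited at the tip position and walks until it finds an unoccupied site. This is a crude sketch of the model with hypothetical parameters, not the authors' simulation code:

```python
import random

random.seed(0)

def grow_monolayer(n_molecules):
    """Grow a monolayer spot on a square lattice: each molecule lands at
    the tip position (the origin) and random-walks until it reaches an
    unoccupied site, a crude caricature of serial pushing."""
    occupied = set()
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))
    for _ in range(n_molecules):
        x, y = 0, 0
        while (x, y) in occupied:
            dx, dy = random.choice(steps)
            x, y = x + dx, y + dy
        occupied.add((x, y))
    return occupied

spot = grow_monolayer(300)
radius = max(abs(x) + abs(y) for x, y in spot)
```

The resulting pattern grows roughly disk-shaped around the deposition point, consistent with the compact dot patterns characteristic of DPN.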
Examples of mixed-effects modeling with crossed random effects and with binomial data
Quené, H.; van den Bergh, H.
2008-01-01
Psycholinguistic data are often analyzed with repeated-measures analyses of variance (ANOVA), but this paper argues that mixed-effects (multilevel) models provide a better alternative method. First, models are discussed in which the two random factors of participants and items are crossed, and not nested.
Thiene, M.; Boeri, M.; Chorus, C.G.
2011-01-01
This paper introduces the discrete choice model-paradigm of Random Regret Minimization (RRM) to the field of environmental and resource economics. The RRM-approach has been very recently developed in the context of travel demand modelling and presents a tractable, regret-based alternative to the
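In the RRM literature, the regret of an alternative is commonly written as a sum over competing alternatives and attributes of ln(1 + exp(β_m (x_jm - x_im))), with choice probabilities given by a logit on negative regret. A sketch in which the attribute values and β's are invented for illustration:

```python
import math

def rrm_regret(attrs, betas):
    """Regret of each alternative: sum over competitors j and attributes m
    of ln(1 + exp(beta_m * (x[j][m] - x[i][m])))."""
    n = len(attrs)
    regrets = []
    for i in range(n):
        r = 0.0
        for j in range(n):
            if j != i:
                r += sum(math.log(1.0 + math.exp(b * (attrs[j][m] - attrs[i][m])))
                         for m, b in enumerate(betas))
        regrets.append(r)
    return regrets

def choice_probs(regrets):
    """Logit-type choice probabilities on negative regret."""
    w = [math.exp(-r) for r in regrets]
    s = sum(w)
    return [v / s for v in w]

# Three recreation sites described by (quality, -cost); betas are invented.
attrs = [(3.0, -2.0), (2.0, -1.0), (1.0, -3.0)]
probs = choice_probs(rrm_regret(attrs, betas=(0.8, 0.5)))
```

An alternative that is dominated on every attribute accumulates more regret than its dominator and therefore receives a lower choice probability.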
A binomial random sum of present value models in investment analysis
Βουδούρη, Αγγελική; Ντζιαχρήστος, Ευάγγελος
1997-01-01
Stochastic present value models have been widely adopted in financial theory and practice and play a very important role in capital budgeting and profit planning. The purpose of this paper is to introduce a binomial random sum of stochastic present value models and offer an application in investment analysis.
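A binomial random sum of present values can be estimated directly by Monte Carlo: draw the number of payments N from a binomial distribution, then discount the N payments. The parameter values below are illustrative, not from the paper:

```python
import random

random.seed(4)

def binomial_random_sum_pv(n, p, payment, rate, trials=20000):
    """Monte Carlo estimate of E[ sum_{t=1..N} payment / (1 + rate)^t ]
    where the number of payments N is Binomial(n, p)."""
    total = 0.0
    for _ in range(trials):
        N = sum(random.random() < p for _ in range(n))
        total += sum(payment / (1.0 + rate) ** t for t in range(1, N + 1))
    return total / trials

est = binomial_random_sum_pv(n=10, p=0.5, payment=100.0, rate=0.05)
```

With these numbers the estimate sits near, but slightly below, the value of a 5-payment annuity (about 433), because the annuity value is concave in N.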
The limiting behavior of the estimated parameters in a misspecified random field regression model
DEFF Research Database (Denmark)
Dahl, Christian Møller; Qin, Yu
This paper examines the limiting properties of the estimated parameters in the random field regression model recently proposed by Hamilton (Econometrica, 2001). Though the model is parametric, it enjoys the flexibility of the nonparametric approach since it can approximate a large collection of n...
Directory of Open Access Journals (Sweden)
Chaudhari Monica
2012-07-01
Abstract Background About one-third of adults with diabetes have severe oral complications. However, limited previous research has investigated dental care utilization associated with diabetes. This project had two purposes: to develop a methodology to estimate dental care utilization using claims data, and to use this methodology to compare utilization of dental care between adults with and without diabetes. Methods Data included secondary enrollment and demographic data from Washington Dental Service (WDS) and Group Health Cooperative (GH), clinical data from GH, and dental-utilization data from WDS claims during 2002–2006. Dental and medical records from WDS and GH were linked for enrollees continuously and dually insured during the study. We employed hurdle models in a quasi-experimental setting to assess differences between adults with and without diabetes in 5-year cumulative utilization of dental services. Propensity score matching adjusted for differences in baseline covariates between the two groups. Results We found that adults with diabetes had lower odds of visiting a dentist (OR = 0.74, p < 0.001). Among those with a dental visit, diabetes patients had lower odds of receiving prophylaxes (OR = 0.77), fillings (OR = 0.80) and crowns (OR = 0.84) (p < 0.005 for all) and higher odds of receiving periodontal maintenance (OR = 1.24), non-surgical periodontal procedures (OR = 1.30), extractions (OR = 1.38) and removable prosthetics (OR = 1.36). Conclusions Patients with diabetes are less likely to use dental services. Those who do are less likely to use preventive care and more likely to receive periodontal care and tooth extractions. Future research should address the possible effectiveness of additional prevention in reducing subsequent severe oral disease in patients with diabetes.
A Comparison of Three Random Number Generators for Aircraft Dynamic Modeling Applications
Grauer, Jared A.
2017-01-01
Three random number generators, which produce Gaussian white noise sequences, were compared to assess their suitability in aircraft dynamic modeling applications. The first generator considered was the MATLAB® implementation of the Mersenne-Twister algorithm. The second generator was a website called Random.org, which processes atmospheric noise measured using radios to create the random numbers. The third generator was based on synthesis of the Fourier series, where the random number sequences are constructed from prescribed amplitude and phase spectra. A total of 200 sequences, each having 601 random numbers, were collected for each generator and analyzed in terms of the mean, variance, normality, autocorrelation, and power spectral density. These sequences were then applied to two problems in aircraft dynamic modeling, namely estimating stability and control derivatives from simulated onboard sensor data, and simulating flight in atmospheric turbulence. In general, each random number generator had good performance and is well-suited for aircraft dynamic modeling applications. Specific strengths and weaknesses of each generator are discussed. For Monte Carlo simulation, the Fourier synthesis method is recommended because it most accurately and consistently approximated Gaussian white noise and can be implemented with reasonable computational effort.
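The Fourier synthesis method can be sketched by summing cosines with a prescribed flat amplitude spectrum and uniformly random phases; by the central limit theorem the result is approximately Gaussian white noise. This is a simplified O(N²) construction, not the implementation evaluated in the paper:

```python
import math, random

random.seed(9)

def fourier_white_noise(N, sigma=1.0):
    """Synthesize an (approximately Gaussian) white-noise sequence of
    length N from a flat amplitude spectrum and uniform random phases."""
    K = N // 2
    amp = sigma * math.sqrt(2.0 / K)           # flat amplitude per harmonic
    phases = [random.uniform(0.0, 2.0 * math.pi) for _ in range(K)]
    return [sum(amp * math.cos(2.0 * math.pi * k * n / N + phases[k - 1])
                for k in range(1, K + 1))
            for n in range(N)]

x = fourier_white_noise(256)
mean = sum(x) / len(x)
var = sum(v * v for v in x) / len(x)
```

By orthogonality of the harmonics over a full period, the sample mean is essentially zero and the sample variance is pinned near sigma² by construction, which is why the method approximates the target spectrum so consistently.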
Assessing the utility of frequency dependent nudging for reducing biases in biogeochemical models
Lagman, Karl B.; Fennel, Katja; Thompson, Keith R.; Bianucci, Laura
2014-09-01
Bias errors, resulting from inaccurate boundary and forcing conditions, incorrect model parameterization, etc., are a common problem in environmental models, including biogeochemical ocean models. While it is important to correct bias errors wherever possible, it is unlikely that any environmental model will ever be entirely free of such errors; hence, methods for bias reduction are necessary. A widely used technique for online bias reduction is nudging, where simulated fields are continuously forced toward observations or a climatology. Nudging is robust and easy to implement, but suppresses high-frequency variability and introduces artificial phase shifts. As a solution to this problem, Thompson et al. (2006) introduced frequency dependent nudging, where nudging occurs only in prescribed frequency bands, typically centered on the mean and the annual cycle. They showed this method to be effective for eddy-resolving ocean circulation models. Here we add a stability term to the previous form of frequency dependent nudging, which makes the method more robust for non-linear biological models. We then assess the utility of frequency dependent nudging for biological models by first applying the method to a simple predator-prey model and then to a 1D ocean biogeochemical model. In both cases we nudge only in two frequency bands centered on the mean and the annual cycle, and then assess how well the variability in higher frequency bands is recovered. We evaluate the effectiveness of frequency dependent nudging in comparison to conventional nudging and find significant improvements with the former.
Mathematical models utilized in the retrieval of displacement information encoded in fringe patterns
Sciammarella, Cesar A.; Lamberti, Luciano
2016-02-01
All techniques that measure displacements, whether in the range of visible optics or any other form of field method, require the presence of a carrier signal. A carrier signal is a waveform modulated (modified) by an input, in this case the deformation of the medium. A carrier is tagged to the medium under analysis and deforms with the medium. The waveform must be known in both the unmodulated and the modulated conditions. There are two basic mathematical models that can be utilized to decode the information contained in the carrier: phase modulation or frequency modulation; the two are closely connected. Basic problems connected to the detection and recovery of displacement information that are common to all optical techniques are analyzed in this paper, focusing on the general theory common to all the methods independently of the type of signal utilized. The aspects discussed are those that have practical impact on the process of data gathering and data processing.
Energy Utilization Evaluation of Carbon Performance in Public Projects by FAHP and Cloud Model
Directory of Open Access Journals (Sweden)
Lin Li
2016-07-01
With the low-carbon economy advocated all over the world, how to use energy reasonably and efficiently in public projects has become a major issue. It raises several open questions: which method is more reasonable for evaluating the energy utilization of carbon performance in public projects when the evaluation information is fuzzy; whether an indicator system can be constructed; and which indicators have more impact on carbon performance. This article aims to solve these problems. We propose a new carbon performance evaluation system for energy utilization based on project processes (design, construction, and operation). The Fuzzy Analytic Hierarchy Process (FAHP) is used to aggregate the indicator weights, and the cloud model is incorporated when the indicator values are fuzzy. Finally, we apply our indicator system to a case study of the Xiangjiang River project in China, which demonstrates the applicability and efficiency of our method.
Statistical Shape Modelling and Markov Random Field Restoration (invited tutorial and exercise)
DEFF Research Database (Denmark)
Hilger, Klaus Baggesen
This tutorial focuses on statistical shape analysis using point distribution models (PDM), which are widely used in modelling biological shape variability over a set of annotated training data. Furthermore, Active Shape Models (ASM) and Active Appearance Models (AAM) are based on PDMs and have proven […] deformation field between shapes. The tutorial demonstrates both generative active shape and appearance models, and MRF restoration on 3D polygonized surfaces. "Exercise: Spectral-Spatial classification of multivariate images": from annotated training data this exercise applies spatial image restoration using Markov random field relaxation of a spectral classifier. Keywords: the Ising model, the Potts model, stochastic sampling, discriminant analysis, expectation maximization.
Huang, Lei
2015-01-01
To solve the problem that conventional ARMA modeling methods for gyro random noise require a large number of samples and converge slowly, an ARMA modeling method using robust Kalman filtering is developed. The ARMA model parameters are employed as the state arguments, and time-varying estimators are used to obtain the mean and variance of the unknown observation noise. Using the robust Kalman filter, the ARMA model parameters are estimated accurately. The developed ARMA modeling method has the advantages of rapid convergence and high accuracy, so the required sample size is reduced. It can be applied to modeling applications for gyro random noise in which a fast and accurate ARMA modeling method is required. PMID:26437409
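The core idea of treating model parameters as the Kalman state can be sketched for the AR-only special case (the full ARMA and robust noise-estimation machinery of the paper is omitted): past outputs form the observation row, and each new sample refines the parameter estimate recursively. Coefficients and noise levels are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate AR(2) "gyro noise" with invented coefficients.
phi_true = np.array([0.6, -0.3])
n = 5000
y = np.zeros(n)
for k in range(2, n):
    y[k] = phi_true @ y[k - 2:k][::-1] + rng.normal(0, 0.5)

# Kalman filter with the AR parameters as the (static) state:
# observation model y_k = [y_{k-1}, y_{k-2}] . theta + e_k.
theta = np.zeros(2)            # parameter estimate
P = np.eye(2) * 100.0          # parameter covariance (vague prior)
R = 0.25                       # observation-noise variance (assumed known here)
for k in range(2, n):
    H = y[k - 2:k][::-1]       # observation row [y_{k-1}, y_{k-2}]
    S = H @ P @ H + R
    K = P @ H / S              # Kalman gain
    theta = theta + K * (y[k] - H @ theta)
    P = P - np.outer(K, H) @ P
```

The paper's contribution is to make this recursion robust when `R` is unknown and time-varying; here it is fixed for brevity.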
Levin, Bruce; Leu, Cheng-Shiun
2013-01-01
We demonstrate the algebraic equivalence of two unbiased variance estimators for the sample grand mean in a random sample of subjects from an infinite population where subjects provide repeated observations following a homoscedastic random effects model.
Genomic-Enabled Prediction Kernel Models with Random Intercepts for Multi-environment Trials
Cuevas, Jaime; Granato, Italo; Fritsche-Neto, Roberto; Montesinos-Lopez, Osval A.; Burgueño, Juan; Bandeira e Sousa, Massaine; Crossa, José
2018-01-01
In this study, we compared the prediction accuracy of the main genotypic effect model (MM) without G×E interactions, the multi-environment single variance G×E deviation model (MDs), and the multi-environment environment-specific variance G×E deviation model (MDe), where the random genetic effects of the lines are modeled with the markers (or pedigree). With the objective of further modeling the genetic residual of the lines, we incorporated the random intercepts of the lines (l) and generated another three models. Each of these 6 models was fitted with a linear kernel method (Genomic Best Linear Unbiased Predictor, GB) and a Gaussian kernel (GK) method. We compared these 12 model-method combinations with another two multi-environment G×E interaction models with unstructured variance-covariances (MUC) using GB and GK kernels (4 model-method combinations). Thus, we compared the genomic-enabled prediction accuracy of a total of 16 model-method combinations on two maize data sets with positive phenotypic correlations among environments, and on two wheat data sets with complex G×E that includes some negative and close-to-zero phenotypic correlations among environments. The two models (MDs and MDe) with the random intercept of the lines and the GK method were computationally efficient and gave high prediction accuracy in the two maize data sets. Regarding the more complex G×E wheat data sets, the G×E model-method combinations (MDs and MDe) including the random intercepts of the lines with the GK method offered important savings in computing time compared with the G×E multi-environment models with unstructured variance-covariances, though with lower genomic prediction accuracy. PMID:29476023
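The Gaussian kernel (GK) referenced above is commonly built from squared Euclidean distances between marker profiles, scaled by a bandwidth and the median distance; the exact scaling in the paper may differ, so treat this as a generic sketch with toy data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy marker matrix: 6 lines x 50 biallelic markers coded 0/1/2.
X = rng.integers(0, 3, size=(6, 50)).astype(float)

# Squared Euclidean distances between all pairs of lines.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)

# Gaussian kernel with bandwidth h, normalized by the median distance
# (a common convention; the bandwidth value here is an assumption).
h = 1.0
K = np.exp(-h * d2 / np.median(d2[d2 > 0]))
```

`K` then plays the role of a genomic relationship matrix in the mixed-model (GBLUP-style) fitting; the GB alternative uses the linear kernel `X @ X.T` (suitably scaled) instead.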
A random walk model for evaluating clinical trials involving serial observations.
Hopper, J L; Young, G P
1988-05-01
For clinical trials where the variable of interest is ordered and categorical (for example, disease severity or a symptom scale), and where measurements are taken at intervals, it might be possible to achieve greater discrimination between the efficacy of treatments by modelling each patient's progress as a stochastic process. The random walk is a simple, easily interpreted model that can be fitted by maximum likelihood using a maximization routine, with inference based on standard likelihood theory. In general the model can allow for randomly censored data, incorporates measured prognostic factors, and inference is conditional on the (possibly non-random) allocation of patients. Tests of fit and of model assumptions are proposed, and applications to two therapeutic trials of gastroenterological disorders are presented. The model gave measures of the rate of, and variability in, improvement for patients under different treatments. A small simulation study suggested that the model is more powerful than considering the difference between initial and final scores, even when applied to data generated by a mechanism other than the random walk model assumed in the analysis. It thus provides a useful additional statistical method for evaluating clinical trials.
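A minimal simulation of the model class described, with invented step probabilities, shows how treatment-dependent random walks on a bounded ordinal scale generate trial data (the paper fits such walks by maximum likelihood; only the data-generating side is sketched here):

```python
import numpy as np

rng = np.random.default_rng(2)

# Each patient's severity score (0 = well ... 4 = severe) takes one step
# per visit: down with prob p_down, up with prob p_up, else stays.
# All probabilities below are invented for illustration.
def simulate_arm(p_down, p_up, n_patients=500, n_visits=8, start=3):
    scores = np.full(n_patients, start)
    for _ in range(n_visits):
        u = rng.random(n_patients)
        scores = np.where(u < p_down, scores - 1,
                 np.where(u < p_down + p_up, scores + 1, scores))
        scores = np.clip(scores, 0, 4)     # bounded ordinal scale
    return scores

control = simulate_arm(p_down=0.30, p_up=0.25)   # weak downward drift
treated = simulate_arm(p_down=0.45, p_up=0.15)   # stronger improvement
```

The likelihood-based fit in the paper essentially inverts this process, estimating the step probabilities (and covariate effects on them) from each patient's observed trajectory rather than only from endpoint differences.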
Reike, Dennis; Schwarz, Wolf
2016-01-01
The time required to determine the larger of 2 digits decreases with their numerical distance, and, for a given distance, increases with their magnitude (Moyer & Landauer, 1967). One detailed quantitative framework to account for these effects is provided by random walk models. These chronometric models describe how number-related noisy…
A laboratory-calibrated model of coho salmon growth with utility for ecological analyses
Manhard, Christopher V.; Som, Nicholas A.; Perry, Russell W.; Plumb, John M.
2018-01-01
We conducted a meta-analysis of laboratory- and hatchery-based growth data to estimate broadly applicable parameters of mass- and temperature-dependent growth of juvenile coho salmon (Oncorhynchus kisutch). Following studies of other salmonid species, we incorporated the Ratkowsky growth model into an allometric model and fit this model to growth observations from eight studies spanning ten different populations. To account for changes in growth patterns with food availability, we reparameterized the Ratkowsky model to scale several of its parameters relative to ration. The resulting model was robust across a wide range of ration allocations and experimental conditions, accounting for 99% of the variation in final body mass. We fit this model to growth data from coho salmon inhabiting tributaries and constructed ponds in the Klamath Basin by estimating habitat-specific indices of food availability. The model produced evidence that constructed ponds provided higher food availability than natural tributaries. Because of their simplicity (only mass and temperature are required as inputs) and robustness, ration-varying Ratkowsky models have utility as an ecological tool for capturing growth in freshwater fish populations.
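A Ratkowsky-type temperature response embedded in an allometric mass model, as described above, can be sketched as follows. The functional form is the standard Ratkowsky square-root model; all parameter values are invented for illustration and are not the fitted coho salmon estimates:

```python
import numpy as np

# Ratkowsky square-root thermal response, squared to give a specific
# growth rate; parameters (thermal limits, slopes) are illustrative.
def ratkowsky_rate(T, T_lower=1.8, T_upper=24.0, d=0.6, g=0.32):
    r = d * (T - T_lower) * (1.0 - np.exp(g * (T - T_upper)))
    return np.clip(r, 0.0, None) ** 2 / 100.0   # per-day rate, zero outside window

# Allometric growth dM/dt = r(T) * M^(1+b), Euler-stepped daily.
def grow(mass, T, days, b=-0.35):
    for _ in range(days):
        mass = mass + ratkowsky_rate(T) * mass ** (1.0 + b)
    return mass

m_cold = grow(5.0, 3.0, 30)    # near the lower thermal limit
m_mid  = grow(5.0, 14.0, 30)   # mid-window temperature
m_warm = grow(5.0, 23.0, 30)   # near the upper thermal limit
```

In the paper, several of these parameters are additionally scaled by ration, so food availability shifts the whole response; that reparameterization is omitted here.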
Phase structure of the O(n) model on a random lattice for n > 2
DEFF Research Database (Denmark)
Durhuus, B.; Kristjansen, C.
1997-01-01
We show that coarse graining arguments invented for the analysis of multi-spin systems on a randomly triangulated surface apply also to the O(n) model on a random lattice. These arguments imply that if the model has a critical point with diverging string susceptibility, then either γ = +1/2 or there exists a dual critical point with negative string susceptibility exponent, γ̃, related to γ by γ = γ̃/(γ̃ − 1). Exploiting the exact solution of the O(n) model on a random lattice, we show that both situations are realized for n > 2 and that the possible dual pairs of string susceptibility exponents are given by (γ̃, γ) = (−1/m, 1/(m+1)), m = 2, 3, …. We also show that at the critical points with positive string susceptibility exponent the average number of loops on the surface diverges while the average length of a single loop stays finite.
Randomly dispersed particle fuel model in the PSG Monte Carlo neutron transport code
International Nuclear Information System (INIS)
Leppaenen, J.
2007-01-01
High-temperature gas-cooled reactor fuels are composed of thousands of microscopic fuel particles, randomly dispersed in a graphite matrix. The modelling of such geometry is complicated, especially using continuous-energy Monte Carlo codes, which are unable to apply any deterministic corrections in the calculation. This paper presents the geometry routine developed for modelling randomly dispersed particle fuels using the PSG Monte Carlo reactor physics code. The model is based on the delta-tracking method, and it takes into account the spatial self-shielding effects and the random dispersion of the fuel particles. The calculation routine is validated by comparing the results to reference MCNP4C calculations using uranium and plutonium based fuels. (authors)
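The delta-tracking method the routine is based on can be sketched in one dimension: path lengths are sampled from a constant majorant cross-section, and a sampled collision is accepted as real with probability σ(x)/σ_maj, so no surface-crossing logic is needed even when σ varies (e.g., through randomly dispersed particles). The slab problem, cross-sections, and the all-absorbing collision physics below are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def sigma(x):                     # spatially varying total cross-section (1/cm)
    return 0.5 + 1.0 * x          # on a 1 cm slab

SIGMA_MAJ = 1.5                   # majorant: max of sigma on [0, 1]

def transmitted(n_hist):
    """Fraction of histories crossing the slab without a real collision."""
    count = 0
    for _ in range(n_hist):
        x = 0.0
        while True:
            x += -np.log(rng.random()) / SIGMA_MAJ    # majorant flight
            if x >= 1.0:                              # escaped the slab
                count += 1
                break
            if rng.random() < sigma(x) / SIGMA_MAJ:   # real collision
                break                                 # toy physics: absorbed
    return count / n_hist

estimate = transmitted(100_000)
analytic = np.exp(-(0.5 + 1.0 / 2))    # exp(-integral of sigma over the slab)
```

The Monte Carlo estimate reproduces the analytic attenuation exp(-1) without ever tracking material boundaries, which is exactly what makes the method attractive for stochastic particle-fuel geometries.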
Random cyclic constitutive models of 0Cr18Ni10Ti pipe steel
International Nuclear Information System (INIS)
Zhao Yongxiang; Yang Bing
2004-01-01
An experimental study is performed on the random cyclic constitutive relations of a new pipe stainless steel, 0Cr18Ni10Ti, by an incremental strain-controlled fatigue test. The test verifies that the random cyclic constitutive relations, like the widely recognized random cyclic strain-life relations, are an intrinsic fatigue phenomenon of engineering materials. Extending the previous work by Zhao et al., probability-based constitutive models are constructed on the bases of the Ramberg-Osgood equation and its modified form, respectively. The scatter regularity and the amount of test data are taken into account. The models consist of the survival probability-strain-life curves, the confidence strain-life curves, and the survival probability-confidence-strain-life curves. The availability and feasibility of the models are indicated by analysis of the present test data.
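For reference, the cyclic Ramberg-Osgood form that such probability-based constitutive curves typically generalize relates the strain amplitude to the stress amplitude (here E is the elastic modulus, K′ the cyclic strength coefficient, and n′ the cyclic strain-hardening exponent; the probabilistic models of the paper in effect attach survival-probability and confidence levels to these material constants):

```latex
\frac{\Delta\varepsilon}{2} \;=\; \frac{\Delta\sigma}{2E} \;+\; \left(\frac{\Delta\sigma}{2K'}\right)^{1/n'}
```

The first term is the elastic strain amplitude and the second the plastic strain amplitude; the "modified form" mentioned in the abstract alters the plastic term.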
Utilization of Short-Simulations for Tuning High-Resolution Climate Model
Lin, W.; Xie, S.; Ma, P. L.; Rasch, P. J.; Qian, Y.; Wan, H.; Ma, H. Y.; Klein, S. A.
2016-12-01
Many physical parameterizations in atmospheric models are sensitive to resolution. Tuning models that involve a multitude of parameters at high resolution is computationally expensive, particularly when relying primarily on multi-year simulations. This work describes a complementary set of strategies for tuning high-resolution atmospheric models that uses ensembles of short simulations to reduce the computational cost and elapsed time. Specifically, we utilize the hindcast approach developed through the DOE Cloud Associated Parameterization Testbed (CAPT) project for high-resolution model tuning. Short hindcast tests have been found effective in numerous previous studies in identifying model biases due to parameterized fast physics, and we demonstrate that they are also useful for tuning. After the most egregious errors are addressed through an initial "rough" tuning phase, longer simulations are performed to "hone in" on model features that evolve over longer timescales. We explore these strategies to tune the DOE ACME (Accelerated Climate Modeling for Energy) model. For the ACME model at 0.25° resolution, it is confirmed that, given the same parameters, major biases in global mean statistics and many spatial features are consistent between Atmospheric Model Intercomparison Project (AMIP)-type simulations and CAPT-type hindcasts, with just a small number of short-term simulations for the latter over the corresponding season. The use of CAPT hindcasts to find parameter choices that reduce large model biases dramatically improves the turnaround time for tuning at high resolution. Improvement seen in CAPT hindcasts generally translates to improved AMIP-type simulations. An iterative CAPT-AMIP tuning approach is therefore adopted during each major tuning cycle, with the former used to survey the likely responses and narrow the parameter space, and the latter to verify the results in a climate context along with assessment in
Utilizing the non-bridge oxygen model to predict the glass viscosity
International Nuclear Information System (INIS)
Choi, Kwansik; Sheng, Jiawei; Maeng, Sung Jun; Song, Myung Jae
1998-01-01
Viscosity is the most important process property of waste glass, and its measurement is difficult and costly. The Non-bridging Oxygen (NBO) model, which relates glass composition to viscosity, had been developed for high-level waste at the Savannah River Site (SRS). This research utilized the NBO model to predict the viscosity of KEPRI's 55 glasses. A linear relationship was found between the measured and the predicted viscosity, so the NBO model could be used to predict glass viscosity in glass formulation development. However, the precision of the predicted viscosity is unsatisfactory because the composition ranges of the SRS and KEPRI glasses are very different. Modifications of the NBO calculation, including modified treatment of the alkaline earth elements and TiO2, did not markedly improve the precision of the predicted values.
International Nuclear Information System (INIS)
Wang, Jian-Xun; Sun, Rui; Xiao, Heng
2016-01-01
Highlights: • Compared physics-based and random matrix methods to quantify RANS model uncertainty. • Demonstrated applications of both methods in channel flow over periodic hills. • Examined the amount of information introduced in the physics-based approach. • Discussed implications to modeling turbulence in both near-wall and separated regions. - Abstract: Numerical models based on Reynolds-Averaged Navier-Stokes (RANS) equations are widely used in engineering turbulence modeling. However, the RANS predictions have large model-form uncertainties for many complex flows, e.g., those with non-parallel shear layers or strong mean flow curvature. Quantification of these large uncertainties originating from the modeled Reynolds stresses has attracted attention in the turbulence modeling community. Recently, a physics-based Bayesian framework for quantifying model-form uncertainties has been proposed with successful applications to several flows. Nonetheless, how to specify proper priors without introducing unwarranted, artificial information remains challenging to the current form of the physics-based approach. Another recently proposed method based on random matrix theory provides the prior distributions with maximum entropy, which is an alternative for model-form uncertainty quantification in RANS simulations. This method is more mathematically rigorous and provides the most non-committal prior distributions without introducing artificial constraints. On the other hand, the physics-based approach has the advantage of being more flexible in incorporating available physical insights. In this work, we compare and discuss the advantages and disadvantages of the two approaches for model-form uncertainty quantification. In addition, we utilize the random matrix theoretic approach to assess and possibly improve the specification of priors used in the physics-based approach. The comparison is conducted through a test case using a canonical flow, the flow past
Self-dual random-plaquette gauge model and the quantum toric code
Takeda, Koujin; Nishimori, Hidetoshi
2004-05-01
We study the four-dimensional Z2 random-plaquette lattice gauge theory as a model of topological quantum memory, the toric code in particular. In this model, the procedure of quantum error correction works properly in the ordered (Higgs) phase, and phase boundary between the ordered (Higgs) and disordered (confinement) phases gives the accuracy threshold of error correction. Using self-duality of the model in conjunction with the replica method, we show that this model has exactly the same mathematical structure as that of the two-dimensional random-bond Ising model, which has been studied very extensively. This observation enables us to derive a conjecture on the exact location of the multicritical point (accuracy threshold) of the model, pc=0.889972…, and leads to several nontrivial results including bounds on the accuracy threshold in three dimensions.
Ferrimagnetic Properties of Bond Dilution Mixed Blume-Capel Model with Random Single-Ion Anisotropy
International Nuclear Information System (INIS)
Liu Lei; Yan Shilei
2005-01-01
We study the ferrimagnetic properties of spin-1/2 and spin-1 systems by means of the effective field theory. The system is considered in the framework of the bond dilution mixed Blume-Capel model (BCM) with random single-ion anisotropy. The investigation of phase diagrams and magnetization curves indicates the existence of induced magnetic ordering and of single or multiple compensation points. Special emphasis is placed on the influence of bond dilution and random single-ion anisotropy on normal or induced magnetic ordering states and on the compensation points. Normal magnetic ordering states take on new phase diagrams with increasing randomness (bond and anisotropy), while anisotropy-induced magnetic ordering states always occur regardless of whether the anisotropy concentration is large or small. The existence and disappearance of compensation points depend strongly on bond dilution and random single-ion anisotropy. Some of these results have not been reported in previous papers nor predicted by the Néel theory of ferrimagnetism.
International Nuclear Information System (INIS)
Perez, J.F.; Pontin, L.F.; Segundo, J.A.B.
1985-01-01
Using a method proposed by van Hemmen, the free energy of the Curie-Weiss version of the site-dilute antiferromagnetic Ising model is computed in the presence of a uniform magnetic field. The solution displays an exact correspondence between this model and the Curie-Weiss version of the Ising model in the presence of a random magnetic field. The phase diagrams are discussed and a tricritical point is shown to exist. (Author)
Behr, Joshua G.; Diaz, Rafael
Non-urgent Emergency Department utilization has been associated with increasing congestion in the flow and treatment of patients and, by extension, affects the quality of care and the profitability of the Emergency Department. Interventions designed to divert populations to more appropriate care may be cautiously received by operations managers due to uncertainty about the impact an adopted intervention may have on the two values of congestion and profitability. System Dynamics (SD) modeling and simulation may be used to measure the sensitivity of these two, often competing, values and thus provide an additional layer of information designed to inform strategic decision making.
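The congestion-versus-profitability trade-off described can be illustrated with a drastically simplified stock-and-flow sketch in the SD spirit: diverting non-urgent arrivals shrinks the waiting-queue stock but also reduces billable visits. Every rate and cost below is an invented placeholder, not data from the study:

```python
# Toy stock-and-flow model: one queue stock, Euler-stepped daily.
def simulate(diversion, days=365, dt=1.0):
    queue, profit = 20.0, 0.0
    for _ in range(int(days / dt)):
        arrivals = 100.0 * (1.0 - diversion)     # patients/day reaching the ED
        capacity = 95.0                          # treatable patients/day
        treated = min(capacity, queue / dt + arrivals)
        queue = max(0.0, queue + (arrivals - treated) * dt)
        profit += treated * 150.0 - 12_000.0     # revenue minus fixed daily cost
    return queue, profit

q_base, p_base = simulate(diversion=0.0)    # no intervention
q_div, p_div = simulate(diversion=0.10)     # divert 10% of non-urgent visits
```

Even this caricature reproduces the tension the abstract describes: the diversion scenario ends with a far smaller queue but lower cumulative profit, which is exactly the sensitivity an SD model lets managers quantify before adopting an intervention.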
Local cerebral glucose utilization in the beagle puppy model of intraventricular hemorrhage
International Nuclear Information System (INIS)
Ment, L.R.; Stewart, W.B.; Duncan, C.C.
1982-01-01
Local cerebral glucose utilization has been measured by means of carbon-14 (14C) autoradiography with 2-deoxyglucose in the newborn beagle puppy model of intraventricular hemorrhage. Our studies demonstrate gray matter/white matter differentiation of 14C-2-deoxyglucose uptake in the control pups, as would be expected from adult animal studies. However, there is a marked homogeneity of 14C-2-deoxyglucose uptake in all brain regions in the puppies with intraventricular hemorrhage, possibly indicating a loss of the known coupling between cerebral blood flow and metabolism in this neuropathological condition.
Daniels, Marcus G.; Farmer, J. Doyne; Gillemot, László; Iori, Giulia; Smith, Eric
2003-03-01
We model trading and price formation in a market under the assumption that order arrival and cancellations are Poisson random processes. This model makes testable predictions for the most basic properties of markets, such as the diffusion rate of prices (which is the standard measure of financial risk) and the spread and price impact functions (which are the main determinants of transaction cost). Guided by dimensional analysis, simulation, and mean-field theory, we find scaling relations in terms of order flow rates. We show that even under completely random order flow the need to store supply and demand to facilitate trading induces anomalous diffusion and temporal structure in prices.
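A toy version of the model described can be simulated directly: unit-size limit orders, market orders, and cancellations arrive as independent random streams on a price grid (a discretized stand-in for the continuous-time Poisson processes of the paper). Grid size, rates, and placement windows are illustrative choices, not the paper's calibration:

```python
import random

random.seed(4)

TICKS = list(range(1, 201))
bids = {p: 0 for p in TICKS}
asks = {p: 0 for p in TICKS}
for p in range(1, 100):
    bids[p] = 1                   # seed the book with one share per tick
for p in range(101, 201):
    asks[p] = 1

def best_bid(): return max(p for p in TICKS if bids[p] > 0)
def best_ask(): return min(p for p in TICKS if asks[p] > 0)

spreads = []
for _ in range(5_000):
    u, bb, ba = random.random(), best_bid(), best_ask()
    if u < 0.6:                                       # limit order arrival
        if random.random() < 0.5:                     # buy, within 20 ticks of ask
            bids[random.randint(max(1, ba - 20), ba - 1)] += 1
        else:                                         # sell, within 20 ticks of bid
            asks[random.randint(bb + 1, min(200, bb + 20))] += 1
    elif u < 0.8:                                     # market order hits best quote
        if random.random() < 0.5:
            asks[ba] -= 1
        else:
            bids[bb] -= 1
    else:                                             # cancel a random resting order
        book = bids if random.random() < 0.5 else asks
        occupied = [p for p in TICKS if book[p] > 0]
        book[random.choice(occupied)] -= 1
    spreads.append(best_ask() - best_bid())

mean_spread = sum(spreads) / len(spreads)
```

Even with completely random ("zero-intelligence") order flow, the stored supply and demand keep the spread strictly positive and generate the temporal structure in prices that the paper analyzes via dimensional analysis and mean-field theory.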
Random Walk Model for Cell-To-Cell Misalignments in Accelerator Structures
International Nuclear Information System (INIS)
Stupakov, Gennady
2000-01-01
Due to manufacturing and construction errors, cells in accelerator structures can be misaligned relative to each other. As a consequence, the beam generates a transverse wakefield even when it passes through the structure on axis. The most important effect is the long-range transverse wakefield that deflects the bunches and causes growth of the bunch train projected emittance. In this paper, the effect of the cell-to-cell misalignments is evaluated using a random walk model that assumes that each cell is shifted by a random step relative to the previous one. The model is compared with measurements of a few accelerator structures
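The model's key implication, that the RMS offset of the final cell grows like the square root of the number of cells, can be checked with a short simulation (cell count, step size, and ensemble size are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)

# Each cell is shifted by an independent random step relative to the
# previous one, so cell offsets are a cumulative sum (a random walk).
n_cells, n_structures, step_rms = 100, 20_000, 1.0   # step in micrometres

steps = rng.normal(0.0, step_rms, size=(n_structures, n_cells))
offsets = steps.cumsum(axis=1)             # offset of each cell, per structure

rms_last = np.sqrt((offsets[:, -1] ** 2).mean())
expected = step_rms * np.sqrt(n_cells)     # random-walk scaling: sqrt(N) * sigma
```

It is this slow sqrt(N) accumulation of misalignment, rather than any single cell error, that drives the long-range transverse wakefield estimated in the paper.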
A simulation-based goodness-of-fit test for random effects in generalized linear mixed models
DEFF Research Database (Denmark)
Waagepetersen, Rasmus
2006-01-01
The goodness-of-fit of the distribution of random effects in a generalized linear mixed model is assessed using a conditional simulation of the random effects given the observations. Provided that the specified joint model for random effects and observations is correct, the marginal distribution of the simulated random effects coincides with the assumed random effects distribution. In practice, the specified model depends on some unknown parameter which is replaced by an estimate. We obtain a correction for this by deriving the asymptotic distribution of the empirical distribution…
Energy Technology Data Exchange (ETDEWEB)
Barbose, Galen [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Miller, John [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sigrin, Ben [National Renewable Energy Lab. (NREL), Golden, CO (United States); Reiter, Emerson [National Renewable Energy Lab. (NREL), Golden, CO (United States); Cory, Karlynn [National Renewable Energy Lab. (NREL), Golden, CO (United States); McLaren, Joyce [National Renewable Energy Lab. (NREL), Golden, CO (United States); Seel, Joachim [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Mills, Andrew [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Darghouth, Naim [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Satchwell, Andrew [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2016-05-01
Net-energy metering (NEM) has helped drive the rapid growth of distributed PV (DPV) but has raised concerns about electricity cost shifts, utility financial losses, and inefficient resource allocation. These concerns have motivated real and proposed reforms to utility regulatory and business models. This report explores the challenges and opportunities associated with such reforms in the context of the U.S. Department of Energy's SunShot Initiative. Most of the reforms to date address NEM concerns by reducing the benefits provided to DPV customers and thus constraining DPV deployment. Eliminating NEM nationwide, by compensating exports of PV electricity at wholesale rather than retail rates, could cut cumulative DPV deployment by 20% in 2050 compared with a continuation of current policies. This would slow the PV cost reductions that arise from larger scale and market certainty. It could also thwart achievement of the SunShot deployment goals even if the initiative's cost targets are achieved. This undesirable prospect is stimulating the development of alternative reform strategies that address concerns about distributed PV compensation without inordinately harming PV economics and growth. These alternatives fall into the categories of facilitating higher-value DPV deployment, broadening customer access to solar, and aligning utility profits and earnings with DPV. Specific strategies include utility ownership and financing of DPV, community solar, distribution network operators, services-driven utilities, performance-based incentives, enhanced utility system planning, pricing structures that incentivize high-value DPV configurations, and decoupling and other ratemaking reforms that reduce regulatory lag. These approaches represent near- and long-term solutions for preserving the legacy of the SunShot Initiative.
Michaelis, A.; Nemani, R. R.; Wang, W.; Votava, P.; Hashimoto, H.
2010-12-01
Given the increasing complexity of climate modeling and analysis tools, it is often difficult and expensive to build or recreate an exact replica of the software compute environment used in past experiments. With the recent development of new technologies for hardware virtualization, an opportunity exists to create full modeling, analysis, and compute environments that are "archiveable", transferable, and may be easily shared amongst a scientific community or presented to a bureaucratic body if the need arises. By encapsulating an entire modeling and analysis environment in a virtual machine image, others may quickly gain access to the fully built system used in past experiments, potentially easing the task and reducing the costs of reproducing and verifying past results produced by other researchers. Moreover, these virtual machine images may be used as a pedagogical tool for others who are interested in performing an academic exercise but don't yet possess the broad expertise required. We built two virtual machine images, one with the Community Earth System Model (CESM) and one with the Weather Research and Forecasting (WRF) model, then ran several small experiments to assess the feasibility, performance overhead, reusability, and transferability. We present a list of the pros and cons as well as lessons learned from utilizing virtualization technology in the climate and earth systems modeling domain.
International Nuclear Information System (INIS)
Marmer, G.J.; Policastro, A.J.
1977-01-01
This paper evaluates the preoperational hydrothermal modeling and operational monitoring carried out by utilities at three nuclear-power-plant sites using once-through cooling. Our work was part of a larger study to assess the environmental impact of operating plants for the Nuclear Regulatory Commission (NRC) and the suitability of the NRC Environmental Technical Specifications (Tech Specs) as set up for these plants. The study revealed that the plume mappings at the Kewaunee, Zion, and Quad Cities sites were generally satisfactory in terms of delineating plume size and other characteristics. Unfortunately, monitoring was not carried out during the most critical periods, when the largest plume size would be expected. At Kewaunee and Zion, preoperational predictions using analytical models were found to be rather poor. At Kewaunee (surface discharge), the Pritchard Model underestimated plume size in the near field but grossly overestimated the plume's far-field extent. Moreover, lake-level variations affected plume dispersion, yet were not considered in preoperational predictions. At Zion (submerged discharge), the Pritchard Model was successful only in special, simple cases (single-unit operation, no stratification, no reversing currents, no recirculation). Due to neglect of the above-mentioned phenomena, the model underpredicted plume size. At Quad Cities (submerged discharge), the undistorted laboratory model predicted plume dispersion for low river flows. These low-flow predictions appear to be reasonable extrapolations of the field data acquired at higher flows.
Generalized linear models with random effects unified analysis via H-likelihood
Lee, Youngjo; Pawitan, Yudi
2006-01-01
Since their introduction in 1972, generalized linear models (GLMs) have proven useful in the generalization of classical normal models. Presenting methods for fitting GLMs with random effects to data, Generalized Linear Models with Random Effects: Unified Analysis via H-likelihood explores a wide range of applications, including combining information over trials (meta-analysis), analysis of frailty models for survival data, genetic epidemiology, and analysis of spatial and temporal models with correlated errors.Written by pioneering authorities in the field, this reference provides an introduction to various theories and examines likelihood inference and GLMs. The authors show how to extend the class of GLMs while retaining as much simplicity as possible. By maximizing and deriving other quantities from h-likelihood, they also demonstrate how to use a single algorithm for all members of the class, resulting in a faster algorithm as compared to existing alternatives. Complementing theory with examples, many of...
Gelano, Tilayie Feto; Assefa, Nega; Bacha, Yadeta Dessie; Mahamed, Afendi Abdi; Roba, Kedir Teji; Hambisa, Mitiku Teshome
2018-02-12
Globally, the rapid development of mobile technology has created new ways of addressing public health challenges and shifted the paradigm of health care access and delivery. The primary aim of this study is to examine the effectiveness of mobile health on maternal health care service utilization in Eastern Ethiopia. Through a cluster-randomized controlled trial, 640 participants will be selected based on their districts and respective health centers as the unit of randomization. All pregnant mothers who fulfill the inclusion criteria will be allocated to a mobile-phone-based intervention plus the existing standard of care, or to a control arm, with a 1:1 allocation ratio. The intervention consists of a series of 24 voice messages which will be sent every 2 weeks from the date of enrollment until the close-out time. The control group will receive the existing standard of care without voice messages. Data related to the outcome variables will be assessed at three phases of the data collection periods. The primary outcome measures will be the proportion of antenatal care visits and institutional delivery, whereas the secondary outcome measures will consist of the proportion of postnatal care visits and pregnancy outcomes. Risk ratios will be used to measure the effect of the intervention on the outcomes, estimated with 95% confidence intervals, and all analyses will be done with consideration of the clustering effect. This study should generate evidence on the effectiveness of mobile-phone-based voice messages for the early initiation of maternal health care service use and its uptake. It has been carefully designed with the assumption of obtaining higher levels of maternal health care service use among the treatment group as compared to the control. Pan African Clinical Trial Registry, www.panctr.org, ID: PACTR201704002216259. Registered on 28 April 2017.
Simulating Urban Growth Using a Random Forest-Cellular Automata (RF-CA) Model
Directory of Open Access Journals (Sweden)
Courage Kamusoko
2015-04-01
Sustainable urban planning and management require reliable land change models, which can be used to improve decision making. The objective of this study was to test a random forest-cellular automata (RF-CA) model, which combines random forest (RF) and cellular automata (CA) models. The Kappa simulation (KSimulation), figure of merit, and components of agreement and disagreement statistics were used to validate the RF-CA model. Furthermore, the RF-CA model was compared with support vector machine cellular automata (SVM-CA) and logistic regression cellular automata (LR-CA) models. Results show that the RF-CA model outperformed the SVM-CA and LR-CA models. The RF-CA model had a KSimulation accuracy of 0.51 (with a figure of merit statistic of 47%), while the SVM-CA and LR-CA models had KSimulation accuracies of 0.39 and −0.22 (with figure of merit statistics of 39% and 6%, respectively). Generally, the RF-CA model was relatively accurate at allocating “non-built-up to built-up” changes, as reflected by the correct “non-built-up to built-up” component of agreement of 15%. The performance of the RF-CA model was attributed to the relatively accurate RF transition potential maps. Therefore, this study highlights the potential of the RF-CA model for simulating urban growth.
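The RF-CA coupling described above (transition potentials gating a neighbourhood-based CA growth rule) can be sketched in a few lines. In this sketch the transition-potential map is a hand-made stand-in for the trained random forest's output, and the specific growth rule (potential threshold plus at least one built-up Moore neighbour) is an assumed simplification, not the study's exact allocation procedure:

```python
import numpy as np

def ca_step(state, potential, threshold=0.5):
    """One CA step: a non-built-up cell (0) becomes built-up (1) if its
    transition potential is high enough AND it has at least one
    built-up neighbour (Moore neighbourhood)."""
    padded = np.pad(state, 1, mode="constant")
    # count built-up neighbours of every cell via shifted sums
    neigh = sum(padded[i:i + state.shape[0], j:j + state.shape[1]]
                for i in range(3) for j in range(3)) - state
    grow = (state == 0) & (potential >= threshold) & (neigh > 0)
    return np.where(grow, 1, state)

# toy 4x4 landscape: one built-up seed cell
state = np.zeros((4, 4), dtype=int)
state[1, 1] = 1
# stand-in transition-potential map; the paper derives this from a
# trained random forest on covariates such as slope or distance to roads
potential = np.full((4, 4), 0.6)
new_state = ca_step(state, potential)
```

Iterating `ca_step` with an RF-derived potential map is the essence of the hybrid: the RF supplies *where* change is likely, the CA supplies the spatial neighbourhood constraint.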
DEFF Research Database (Denmark)
Vansteelandt, S.; Martinussen, Torben; Tchetgen, E. J Tchetgen
2014-01-01
We consider additive hazard models (Aalen, 1989) for the effect of a randomized treatment on a survival outcome, adjusting for auxiliary baseline covariates. We demonstrate that the Aalen least-squares estimator of the treatment effect parameter is asymptotically unbiased, even when the hazard's dependence on time or on the auxiliary covariates is misspecified, and even away from the null hypothesis of no treatment effect. We furthermore show that adjustment for auxiliary baseline covariates does not change the asymptotic variance of the estimator of the effect of a randomized treatment. We conclude that, in view of its robustness against model misspecification, Aalen least-squares estimation is attractive for evaluating treatment effects on a survival outcome in randomized experiments, and that the primary reasons to consider baseline covariate adjustment in such settings could be interest in subgroup effects.
Generalized Whittle-Matern random field as a model of correlated fluctuations
International Nuclear Information System (INIS)
Lim, S C; Teo, L P
2009-01-01
This paper considers a generalization of the Gaussian random field with covariance function of the Whittle-Matern family. Such a random field can be obtained as the solution to the fractional stochastic differential equation with two fractional orders. Asymptotic properties of the covariance functions belonging to this generalized Whittle-Matern family are studied, which are used to deduce the sample path properties of the random field. The Whittle-Matern field has been widely used in modeling geostatistical data such as sea beam data, wind speed, field temperature and soil data. In this paper we show that the generalized Whittle-Matern field provides a more flexible model for wind speed data.
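The Whittle-Matern covariance family discussed above has simple closed forms at half-integer smoothness values, which is often how it is evaluated in practice. A minimal sketch (the parameter names `sigma2`, `ell`, `nu` are illustrative, and only two smoothness values are implemented; general `nu` needs the modified Bessel function K_nu):

```python
import numpy as np

def matern_cov(r, sigma2=1.0, ell=1.0, nu=0.5):
    """Matern covariance at lag r in the two common closed-form cases.
    nu=0.5 gives the exponential covariance; nu=1.5 adds one degree
    of mean-square differentiability."""
    r = np.asarray(r, dtype=float)
    if nu == 0.5:
        return sigma2 * np.exp(-r / ell)
    if nu == 1.5:
        a = np.sqrt(3.0) * r / ell
        return sigma2 * (1.0 + a) * np.exp(-a)
    raise NotImplementedError("only nu = 0.5 or 1.5 in this sketch")

c0 = matern_cov(0.0)           # variance at lag 0
c1 = matern_cov(1.0, nu=1.5)   # smoother member of the family
```

Raising `nu` makes sample paths smoother, which is exactly the kind of extra flexibility the generalized field is claimed to offer for wind speed data.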
Universality for 1d Random Band Matrices: Sigma-Model Approximation
Shcherbina, Mariya; Shcherbina, Tatyana
2018-02-01
The paper continues the development of the rigorous supersymmetric transfer matrix approach to random band matrices started in (J Stat Phys 164:1233-1260, 2016; Commun Math Phys 351:1009-1044, 2017). We consider random Hermitian block band matrices consisting of W × W random Gaussian blocks (parametrized by j, k ∈ Λ = [1, n]^d ∩ Z^d) with a fixed entry variance J_{jk} = δ_{j,k} W^{-1} + β Δ_{j,k} W^{-2}, β > 0, in each block. Taking the limit W → ∞ with fixed n and β, we derive the sigma-model approximation of the second correlation function, similar to Efetov's. Then, considering the limit β, n → ∞, we prove that in dimension d = 1 the behaviour of the sigma-model approximation in the bulk of the spectrum, for β ≫ n, is determined by the classical Wigner-Dyson statistics.
Random effects model for the reliability management of modules of a fighter aircraft
Energy Technology Data Exchange (ETDEWEB)
Sohn, So Young [Department of Computer Science and Industrial Systems Engineering, Yonsei University, Shinchondong 134, Seoul 120-749 (Korea, Republic of)]. E-mail: sohns@yonsei.ac.kr; Yoon, Kyung Bok [Department of Computer Science and Industrial Systems Engineering, Yonsei University, Shinchondong 134, Seoul 120-749 (Korea, Republic of)]. E-mail: ykb@yonsei.ac.kr; Chang, In Sang [Department of Computer Science and Industrial Systems Engineering, Yonsei University, Shinchondong 134, Seoul 120-749 (Korea, Republic of)]. E-mail: isjang@yonsei.ac.kr
2006-04-15
The operational availability of fighter aircraft plays an important role in national defense. Low operational availability of fighter aircraft can cause many problems, and the ROKAF (Republic of Korea Air Force) needs proper strategies to improve the current practice of reliability management by accurately forecasting both MTBF (mean time between failures) and MTTR (mean time to repair). In this paper, we develop a random effects model to forecast both the MTBF and MTTR of installed modules of fighter aircraft based on their characteristics and operational conditions. The advantage of such a random effects model is its ability to accommodate not only the individual characteristics of each module and its operational conditions but also the uncertainty caused by random error that cannot be explained by them. Our study is expected to help the ROKAF improve the operational availability of fighter aircraft and establish effective logistics management.
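The core idea of a random effects model, a module-specific effect plus residual noise, can be illustrated with simulated log-MTBF data and the classical one-way ANOVA (method-of-moments) variance-component estimators. All numbers below are invented for illustration, not taken from the ROKAF data:

```python
import numpy as np

rng = np.random.default_rng(5)

# simulate log-MTBF observations for m modules, r records each:
# y_ij = overall mean + module random effect_i + residual noise_ij
m, r = 40, 25
sigma_mod, sigma_eps = 0.8, 0.5          # assumed "true" std devs
module_effect = rng.normal(0.0, sigma_mod, size=m)
y = 3.0 + module_effect[:, None] + rng.normal(0.0, sigma_eps, size=(m, r))

# one-way ANOVA variance-component estimates
group_means = y.mean(axis=1)
mse = ((y - group_means[:, None]) ** 2).sum() / (m * (r - 1))  # within-module
msb = r * ((group_means - y.mean()) ** 2).sum() / (m - 1)      # between-module
var_eps_hat = mse                  # residual variance estimate
var_mod_hat = (msb - mse) / r      # module-effect variance estimate
```

The split into `var_mod_hat` and `var_eps_hat` is what lets the model separate module-level characteristics from unexplained random error.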
Modeling and understanding of effects of randomness in arrays of resonant meta-atoms
DEFF Research Database (Denmark)
Tretyakov, Sergei A.; Albooyeh, Mohammad; Alitalo, Pekka
2013-01-01
In this review presentation we will discuss approaches to modeling and understanding electromagnetic properties of 2D and 3D lattices of small resonant particles (meta-atoms) in transition from regular (periodic) to random (amorphous) states. Nanostructured metasurfaces (2D) and metamaterials (3D) are arrangements of optically small but resonant particles (meta-atoms). We will present our results on analytical modeling of metasurfaces with periodical and random arrangements of electrically and magnetically resonant meta-atoms with identical or random sizes, both for normal and oblique-angle excitations. We show how the electromagnetic response of metasurfaces is related to the statistical parameters of the structure. Furthermore, we will discuss the phenomenon of anti-resonance in extracted effective parameters of metamaterials and clarify its relation to the periodicity (or amorphous nature) of the structure.
Analytical connection between thresholds and immunization strategies of SIS model in random networks
Zhou, Ming-Yang; Xiong, Wen-Man; Liao, Hao; Wang, Tong; Wei, Zong-Wen; Fu, Zhong-Qian
2018-05-01
Devising effective strategies for hindering the propagation of viruses and protecting the population against epidemics is critical for public security and health. Despite a number of studies based on the susceptible-infected-susceptible (SIS) model devoted to this topic, we still lack a general framework to compare different immunization strategies in completely random networks. Here, we address this problem by suggesting a novel method based on heterogeneous mean-field theory for the SIS model. Our method builds the relationship between the epidemic thresholds and different immunization strategies in completely random networks. In addition, we provide an analytical argument that the targeted large-degree strategy achieves the best performance in random networks with arbitrary degree distribution. Moreover, the experimental results demonstrate the effectiveness of the proposed method in both artificial and real-world networks.
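The heterogeneous mean-field threshold used in such analyses can be computed directly from a degree sequence via lambda_c = ⟨k⟩/⟨k²⟩. A sketch comparing the baseline threshold with a targeted (large-degree) immunization on a synthetic heavy-tailed network; the Zipf degree model, the cap at 200, and the 5% removal fraction are invented choices, not the paper's setup:

```python
import numpy as np

def sis_threshold(degrees):
    """Heterogeneous mean-field SIS epidemic threshold: <k> / <k^2>."""
    k = np.asarray(degrees, dtype=float)
    return k.mean() / (k ** 2).mean()

rng = np.random.default_rng(0)
# heavy-tailed degree sequence, mimicking a scale-free network
deg = rng.zipf(2.5, size=5000).clip(max=200)

lam_base = sis_threshold(deg)
# targeted immunization: remove the top 5% largest-degree nodes
cut = np.quantile(deg, 0.95)
lam_targeted = sis_threshold(deg[deg <= cut])
```

Removing hubs shrinks ⟨k²⟩ much faster than ⟨k⟩, so `lam_targeted` exceeds `lam_base`: the epidemic needs a higher transmission rate to persist, which is why the targeted large-degree strategy performs best.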
文, 勇起; BUN, Yuki
2013-01-01
In recent years, much flood damage and drought attributed to urbanization has occurred. At present, infiltration facilities are suggested as a solution to these problems. Against this background, the purpose of this study is to quantify the flood control and water utilization effects of rainfall infiltration facilities using a water balance analysis model. Key words: flood control, water utilization, rainfall infiltration facility
Studies in astronomical time series analysis. I - Modeling random processes in the time domain
Scargle, J. D.
1981-01-01
Several random process models in the time domain are defined and discussed. Attention is given to the moving average model, the autoregressive model, and relationships between and combinations of these models. Consideration is then given to methods for investigating pulse structure, procedures of model construction, computational methods, and numerical experiments. A FORTRAN algorithm for time series analysis has been developed which is relatively stable numerically. Results of test cases are given to study the effect of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the light curve of the quasar 3C 273 is considered as an example.
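The autoregressive model discussed above can be illustrated with an AR(1) simulation and its Yule-Walker estimate, which for first order reduces to the lag-1 autocorrelation. The coefficient 0.7 and the sample size are arbitrary choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(42)

# simulate an AR(1) process: x_t = phi * x_{t-1} + eps_t
phi_true, n = 0.7, 20000
eps = rng.standard_normal(n)
x = np.empty(n)
x[0] = eps[0]
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + eps[t]

# Yule-Walker estimate for AR(1): phi_hat = r(1) / r(0),
# i.e. the sample lag-1 autocorrelation of the demeaned series
x = x - x.mean()
phi_hat = (x[1:] @ x[:-1]) / (x @ x)
```

The moving average model would instead express `x_t` as a finite combination of past `eps` terms; the two families can be combined into ARMA models, as the abstract notes.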
Modeling and design of light powered biomimicry micropump utilizing transporter proteins
Liu, Jin; Sze, Tsun-Kay Jackie; Dutta, Prashanta
2014-11-01
The creation of compact micropumps to provide steady flow has been an ongoing challenge in the field of microfluidics. We present a mathematical model for a micropump utilizing bacteriorhodopsin and sugar transporter proteins. This micropump utilizes transporter proteins as a method to drive fluid flow by converting light energy into chemical potential. The fluid flow through a microchannel is simulated using the Nernst-Planck, Navier-Stokes, and continuity equations. Numerical results show that the micropump is capable of generating usable pressure. Design parameters influencing the performance of the micropump are investigated, including membrane fraction, lipid proton permeability, illumination, and channel height. The results show that there is a substantial membrane fraction region at which fluid flow is maximized. The use of lipids with low membrane proton permeability allows illumination to be used as a method to turn the pump on and off. This capability allows the micropump to be activated and shut off remotely without bulky support equipment. This modeling work provides new insights on mechanisms potentially useful for fluidic pumping in self-sustained biomimetic microfluidic pumps. This work is supported in part by the National Science Foundation Grant CBET-1250107.
Sangchan, Apichat; Chaiyakunapruk, Nathorn; Supakankunti, Siripen; Pugkhem, Ake; Mairiang, Pisaln
2014-01-01
Endoscopic biliary drainage using metal and plastic stents in unresectable hilar cholangiocarcinoma (HCA) is widely used, but little is known about their cost-effectiveness. This study evaluated the cost-utility of endoscopic metal and plastic stent drainage in patients with unresectable complex (Bismuth type II-IV) HCA. A decision-analytic model, a Markov model, was used to evaluate the cost and quality-adjusted life years (QALYs) of endoscopic biliary drainage in unresectable HCA. Costs of treatment and utilities of each Markov state were retrieved from hospital charges and from unresectable HCA patients at a tertiary care hospital in Thailand, respectively. Transition probabilities were derived from the international literature. Base-case analyses and sensitivity analyses were performed. Under the base-case analysis, the metal stent is more effective but more expensive than the plastic stent. The incremental cost per additional QALY gained is 192,650 baht (US$ 6,318). From probabilistic sensitivity analysis, at willingness-to-pay thresholds of one and three times GDP per capita, 158,000 baht (US$ 5,182) and 474,000 baht (US$ 15,546), the probability of the metal stent being cost-effective is 26.4% and 99.8%, respectively. Based on the WHO recommendation regarding cost-effectiveness threshold criteria, endoscopic metal stent drainage is cost-effective compared to plastic stent drainage in unresectable complex HCA.
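The Markov cost-utility logic, cycling a cohort through health states while accumulating costs and QALYs and then forming an ICER, can be sketched as follows. The three states, transition probabilities, costs, utilities and the device-cost premium are made-up stand-ins for illustration, not values from the study:

```python
import numpy as np

def markov_qaly_cost(P, utilities, costs, start, cycles):
    """Cohort Markov model: accumulate expected cost and QALYs over
    `cycles` transitions (1-year cycles, no discounting, for brevity)."""
    dist = np.asarray(start, dtype=float)
    total_cost = dist @ costs
    total_qaly = dist @ utilities
    for _ in range(cycles):
        dist = dist @ P                  # one cycle of transitions
        total_cost += dist @ costs
        total_qaly += dist @ utilities
    return total_cost, total_qaly

# illustrative 3-state model: stent patent / stent occluded / dead
P_metal   = np.array([[0.80, 0.10, 0.10],
                      [0.00, 0.60, 0.40],
                      [0.00, 0.00, 1.00]])
P_plastic = np.array([[0.60, 0.25, 0.15],
                      [0.00, 0.60, 0.40],
                      [0.00, 0.00, 1.00]])
utilities = np.array([0.8, 0.5, 0.0])       # per-state utility weights
costs     = np.array([1000.0, 3000.0, 0.0]) # per-cycle state costs
start     = np.array([1.0, 0.0, 0.0])

c_m, q_m = markov_qaly_cost(P_metal, utilities, costs, start, cycles=5)
c_p, q_p = markov_qaly_cost(P_plastic, utilities, costs, start, cycles=5)
# assumed up-front device-cost premium for the metal stent
icer = (c_m + 4000.0 - c_p) / (q_m - q_p)
```

The ICER is then compared with the willingness-to-pay threshold, as the study does with one and three times GDP per capita.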
Jacklin, Paul; Duckett, Jonathan; Renganathan, Arasee
2010-08-01
The purpose of this study was to assess the cost-utility of duloxetine versus tension-free vaginal tape (TVT) as a second-line treatment for urinary stress incontinence. A Markov model was used to compare the cost-utility over a 2-year follow-up period. Quality-adjusted life year (QALY) estimation was performed by assuming a disutility rate of 0.05. Under base-case assumptions, although duloxetine was the cheaper option, TVT gave a considerably higher QALY gain. When a longer follow-up period was considered, TVT had an incremental cost-effectiveness ratio (ICER) of £7,710 ($12,651) at 10 years. If the QALY gain from cure was 0.09, then the ICERs for duloxetine and TVT would both fall within the indicative National Institute for Health and Clinical Excellence willingness-to-pay threshold at 2 years, but TVT would be the cost-effective option, having extended dominance over duloxetine. This model suggests that TVT is a cost-effective treatment for stress incontinence.
Utilization of building information modeling in infrastructure’s design and construction
Zak, Josef; Macadam, Helen
2017-09-01
Building Information Modeling (BIM) is a concept that has gained its place in the design, construction and maintenance of buildings in the Czech Republic during recent years. This paper describes the usage, applications, and potential benefits and disadvantages connected with implementing BIM principles in the preparation and construction of infrastructure projects. Part of the paper describes the status of BIM implementation in the Czech Republic, and there is a review of several virtual design and construction practices there. Examples of best practice are presented from current infrastructure projects. The paper further summarizes experiences with new technologies gained from the application of BIM-related workflows. The focus is on utilizing the BIM model for machine control systems on site, quality assurance, quality management and construction management.
Development of Nonlinear Flight Mechanical Model of High Aspect Ratio Light Utility Aircraft
Bahri, S.; Sasongko, R. A.
2018-04-01
The implementation of a Flight Control Law (FCL) in an aircraft Electronic Flight Control System (EFCS) aims to reduce pilot workload while also enhancing control performance during missions that require long-endurance flight and high-accuracy maneuvers. In the development of the FCL, a quantitative representation of the aircraft dynamics is needed to describe the aircraft's dynamic characteristics and to serve as the basis of the FCL design. Hence, a 6-degree-of-freedom nonlinear model of light utility aircraft dynamics, also called the nonlinear Flight Mechanical Model (FMM), is constructed. This paper shows the construction of the FMM from mathematical formulation, the architecture design of the FMM, the trimming process and simulations. The FMM is verified by analyzing aircraft behaviour in selected trimmed conditions.
Directory of Open Access Journals (Sweden)
Xavier A. Harrison
2014-10-01
Overdispersion is common in models of count data in ecology and evolutionary biology, and can occur due to missing covariates, non-independent (aggregated) data, or an excess frequency of zeroes (zero-inflation). Accounting for overdispersion in such models is vital, as failing to do so can lead to biased parameter estimates and false conclusions regarding hypotheses of interest. Observation-level random effects (OLRE), where each data point receives a unique level of a random effect that models the extra-Poisson variation present in the data, are commonly employed to cope with overdispersion in count data. However, studies investigating the efficacy of observation-level random effects as a means to deal with overdispersion are scarce. Here I use simulations to show that in cases where overdispersion is caused by random extra-Poisson noise, or aggregation in the count data, observation-level random effects yield more accurate parameter estimates compared to when overdispersion is simply ignored. Conversely, OLRE fail to reduce bias in zero-inflated data, and in some cases increase bias at high levels of overdispersion. There was a positive relationship between the magnitude of overdispersion and the degree of bias in parameter estimates. Critically, the simulations reveal that failing to account for overdispersion in mixed models can erroneously inflate measures of explained variance (R²), which may lead to researchers overestimating the predictive power of variables of interest. This work suggests that use of observation-level random effects provides a simple and robust means to account for overdispersion in count data, but also that their ability to minimise bias is not uniform across all types of overdispersion and must be applied judiciously.
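The kind of extra-Poisson variation that OLRE are meant to absorb is easy to simulate: give each observation its own multiplicative noise term on the Poisson rate. A minimal sketch contrasting a pure Poisson sample with a lognormal-Poisson mixture (the rate 5.0 and noise scale 0.7 are arbitrary illustration values):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50000

# Poisson data: variance equals the mean (dispersion ratio ~ 1)
pois = rng.poisson(lam=5.0, size=n)

# lognormal-Poisson mixture: each observation gets its own noise term
# on the log-rate, which is exactly what an observation-level random
# effect models in a Poisson GLMM with a log link
olre_noise = rng.normal(0.0, 0.7, size=n)
overdisp = rng.poisson(np.exp(np.log(5.0) + olre_noise))

ratio_pois = pois.var() / pois.mean()         # close to 1
ratio_over = overdisp.var() / overdisp.mean() # well above 1: overdispersed
```

Fitting a mixed model with a per-row random intercept to the `overdisp` data would recover the extra variance; ignoring it would bias standard errors, which is the failure mode the paper quantifies.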
Directory of Open Access Journals (Sweden)
Niels Rathlev, MD
2016-01-01
Introduction: There is a paucity of literature supporting the use of electronic alerts for patients with high-frequency emergency department (ED) use. We sought to measure changes in opioid prescribing and administration practices, total charges and other resource utilization using electronic alerts to notify providers of an opioid-use care plan for high-frequency ED patients. Methods: This was a randomized, non-blinded, two-group parallel design study of patients who had (1) opioid use disorder and (2) high-frequency ED use. Three affiliated hospitals with identical electronic health records participated. Patients were randomized into “Care Plan” versus “Usual Care” groups. Between the years before and after randomization, we compared the following primary outcomes: (1) opioids (morphine mg equivalents) prescribed to patients upon discharge and administered to ED and inpatients; (2) total medical charges; and the numbers of (3) ED visits, (4) ED visits with advanced radiologic imaging (computed tomography [CT] or magnetic resonance imaging [MRI]) studies, and (5) inpatient admissions. Results: A total of 40 patients were enrolled. For ED and inpatients in the “Usual Care” group, the proportion of morphine mg equivalents received in the post-period compared with the pre-period was 15.7%, while in the “Care Plan” group the proportion was 4.5% (ratio=0.29, 95% CI [0.07-1.12]; p=0.07). For discharged patients in the “Usual Care” group, the proportion of morphine mg equivalents prescribed in the post-period compared with the pre-period was 25.7%, while in the “Care Plan” group the proportion was 2.9%. The “Care Plan” group showed an 89% greater proportional change over the periods compared with the “Usual Care” group (ratio=0.11, 95% CI [0.01-0.92]; p=0.04). Care plans did not change the total charges or the numbers
Energy Technology Data Exchange (ETDEWEB)
Reiter, E.R.; Johnson, G.R.; Somervell, W.L. Jr.; Sparling, E.W.; Dreiseitly, E.; Macdonald, B.C.; McGuirk, J.P.; Starr, A.M.
1976-11-01
Research conducted between 1 July 1975 and 31 October 1976 is reported. A “physical-adaptive” model of the space-conditioning demand for energy and its response to changes in weather regimes was developed. This model includes parameters pertaining to engineering factors of building construction, to weather-related factors, and to socio-economic factors. Preliminary testing of several components of the model on the city of Greeley, Colorado, yielded most encouraging results. Other components, especially those pertaining to socio-economic factors, are still under development. Expansion of model applications to different types of structures and larger regions is presently underway. A CRT-display model for energy demand within the conterminous United States also has passed preliminary tests. A major effort was expended to obtain disaggregated data on energy use from utility companies throughout the United States. The study of atmospheric variability revealed that the 22- to 26-day vacillation in the potential and kinetic energy modes of the Northern Hemisphere is related to the behavior of the planetary long-waves, and that the midwinter dip in zonal available potential energy is reflected in the development of blocking highs. Attempts to classify weather patterns over the eastern and central United States have proceeded satisfactorily to the point where testing of our method for longer time periods appears desirable.
On the utility of land surface models for agricultural drought monitoring
Directory of Open Access Journals (Sweden)
W. T. Crow
2012-09-01
The lagged rank cross-correlation between model-derived root-zone soil moisture estimates and remotely sensed vegetation indices (VI) is examined between January 2000 and December 2010 to quantify the skill of various soil moisture models for agricultural drought monitoring. Examined modeling strategies range from a simple antecedent precipitation index to the application of modern land surface models (LSMs) based on complex water and energy balance formulations. A quasi-global evaluation of lagged VI/soil moisture cross-correlation suggests that, when globally averaged across the entire annual cycle, soil moisture estimates obtained from complex LSMs provide little added skill (<5% in relative terms) in anticipating variations in vegetation condition relative to a simplified water accounting procedure based solely on observed precipitation. However, larger amounts of added skill (5–15% in relative terms) can be identified when focusing exclusively on the extra-tropical growing season and/or utilizing soil moisture values acquired by averaging across a multi-model ensemble.
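The skill metric used here, a lagged rank cross-correlation, pairs soil moisture at time t with the vegetation index at time t + lag and computes a Spearman correlation. A sketch on synthetic data (the two-step lag, series length and noise level are invented; a hand-rolled rank transform is used to keep the example self-contained):

```python
import numpy as np

def rankdata(a):
    """Simple ranking (ties broken by order), adequate for continuous data."""
    order = np.argsort(a)
    ranks = np.empty(len(a))
    ranks[order] = np.arange(len(a))
    return ranks

def lagged_rank_xcorr(soil_moisture, vi, lag):
    """Spearman correlation between soil moisture at time t and a
    vegetation index at time t + lag (VI responding to prior moisture)."""
    x = rankdata(soil_moisture[:len(soil_moisture) - lag])
    y = rankdata(vi[lag:])
    x = x - x.mean()
    y = y - y.mean()
    return (x @ y) / np.sqrt((x @ x) * (y @ y))

rng = np.random.default_rng(3)
sm = rng.standard_normal(500)
# synthetic VI that responds to soil moisture two steps earlier
vi = np.concatenate([rng.standard_normal(2), sm[:-2]]) + 0.1 * rng.standard_normal(500)

r_lag2 = lagged_rank_xcorr(sm, vi, lag=2)   # strong at the true lag
r_lag0 = lagged_rank_xcorr(sm, vi, lag=0)   # weak with no lag
```

Comparing such correlations across soil moisture products at the lag of peak response is how the relative skill of the LSMs is scored.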
Surplus thermal energy model of greenhouses and coefficient analysis for effective utilization
Energy Technology Data Exchange (ETDEWEB)
Yang, S.H.; Son, J.E.; Lee, S.D.; Cho, S.I.; Ashtiani-Araghi, A.; Rhee, J.Y.
2016-11-01
If a greenhouse in the temperate and subtropical regions is maintained in a closed condition, the indoor temperature commonly exceeds that required for optimal plant growth, even in the cold season. This study considered this excess energy as surplus thermal energy (STE), which can be recovered, stored and used when heating is necessary. To use the STE economically and effectively, the amount of STE must be estimated before designing a utilization system. Therefore, this study proposed an STE model using energy balance equations for the three steps of the STE generation process. The coefficients in the model were determined by the results of previous research and experiments using the test greenhouse. The proposed STE model produced monthly errors of 17.9%, 10.4% and 7.4% for December, January and February, respectively. Furthermore, the effects of the coefficients on the model accuracy were revealed by the estimation error assessment and linear regression analysis through fixing dynamic coefficients. A sensitivity analysis of the model coefficients indicated that the coefficients have to be determined carefully. This study also provides effective ways to increase the amount of STE. (Author)
Ueckert, Sebastian; Plan, Elodie L; Ito, Kaori; Karlsson, Mats O; Corrigan, Brian; Hooker, Andrew C
2014-08-01
This work investigates improved utilization of ADAS-cog data (the primary outcome in Alzheimer's disease (AD) trials of mild and moderate AD) by combining pharmacometric modeling and item response theory (IRT). A baseline IRT model characterizing the ADAS-cog was built based on data from 2,744 individuals. Pharmacometric methods were used to extend the baseline IRT model to describe longitudinal ADAS-cog scores from an 18-month clinical study with 322 patients. Sensitivity of the ADAS-cog items in different patient populations, as well as the power to detect a drug effect relative to total-score-based methods, were assessed with the IRT-based model. The IRT analysis was able to describe both total and item-level baseline ADAS-cog data. Longitudinal data were also well described. Differences in the information content of the item-level components could be quantitatively characterized and ranked for mild cognitive impairment and mild AD populations. Based on clinical trial simulations with a theoretical drug effect, the IRT method demonstrated a significantly higher power to detect the drug effect compared to the traditional method of analysis. A combined framework of IRT and pharmacometric modeling permits a more effective and precise analysis than total-score-based methods and therefore increases the value of ADAS-cog data.
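The IRT machinery referred to above rests on item response functions; the item-level "information content" that the study ranks is Fisher information. A sketch of the 2-parameter logistic model and its item information curve (the item parameters below are hypothetical, not fitted ADAS-cog values):

```python
import numpy as np

def p_correct(theta, a, b):
    """2-parameter logistic IRT model: probability of endorsing an item
    with discrimination a and difficulty b at latent severity theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information the item contributes at theta; an item is
    most sensitive where its information peaks (theta near b)."""
    p = p_correct(theta, a, b)
    return a ** 2 * p * (1.0 - p)

theta = np.linspace(-3, 3, 121)
# hypothetical "easy" vs "hard" cognitive items
info_easy = item_information(theta, a=1.5, b=-1.0)
info_hard = item_information(theta, a=1.5, b=1.5)
```

Because the easy item peaks at low severity and the hard item at high severity, their relative usefulness differs between mild cognitive impairment and mild AD populations, which is the ranking the paper reports.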
Brain in flames – animal models of psychosis: utility and limitations
Directory of Open Access Journals (Sweden)
Mattei D
2015-05-01
Daniele Mattei,1 Regina Schweibold,1,2 Susanne A Wolf1 — 1Department of Cellular Neuroscience, Max-Delbrueck-Center for Molecular Medicine, Berlin, Germany; 2Department of Neurosurgery, Helios Clinics, Berlin, Germany. Abstract: The neurodevelopmental hypothesis of schizophrenia posits that schizophrenia is a psychopathological condition resulting from aberrations in neurodevelopmental processes caused by a combination of environmental and genetic factors which proceed long before the onset of clinical symptoms. Many studies discuss an immunological component in the onset and progression of schizophrenia. Here we review studies utilizing animal models of schizophrenia with manipulations of genetic, pharmacologic, and immunological origin. We focus on the immunological component to bridge the studies in terms of evaluation and treatment options of negative, positive, and cognitive symptoms. Throughout the review we link certain aspects of each model to the situation in human schizophrenic patients. In conclusion we suggest a combination of existing models to better represent the human situation. Moreover, we emphasize that animal models represent defined single or multiple symptoms or hallmarks of a given disease. Keywords: inflammation, schizophrenia, microglia, animal models
Multilevel random effect and marginal models
African Journals Online (AJOL)
injected by the candidate vaccine have a lower or higher risk for the occurrence of ... outcome relationship and test whether subjects inject- ... contains an agent that resembles a disease-causing ... to have different random effect variability at each cat- ... In the marginal models settings, the responses are ... Behavior as usual.
Kottonau, Johannes
2011-01-01
Effectively teaching the concepts of osmosis to college-level students is a major obstacle in biological education. Therefore, a novel computer model is presented that allows students to observe the random nature of particle motion simultaneously with the seemingly directed net flow of water across a semipermeable membrane during osmotic…
First steps towards a state classification in the random-field Ising model
International Nuclear Information System (INIS)
Basso, Vittorio; Magni, Alessandro; Bertotti, Giorgio
2006-01-01
The properties of locally stable states of the random-field Ising model are studied. A map is defined for the dynamics driven by the field starting from a locally stable state. The fixed points of the map are connected with the limit hysteresis loops that appear in the classification of the states.
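The field-driven map between locally stable states described above can be sketched for a zero-temperature random-field Ising chain: a state is locally stable when every spin is aligned with its local field, and raising the external field H drives the system to a new stable state. The chain length, field strengths and Gaussian disorder scale below are arbitrary choices:

```python
import numpy as np

def relax(s, h, H, J=1.0):
    """Relax spins to a locally stable state of the zero-temperature
    random-field Ising chain (periodic boundary): repeatedly flip any
    spin anti-aligned with its local field until none remains."""
    s = s.copy()
    changed = True
    while changed:
        changed = False
        for i in range(len(s)):
            local = J * (s[i - 1] + s[(i + 1) % len(s)]) + h[i] + H
            if s[i] * local < 0:
                s[i] = -s[i]
                changed = True
    return s

rng = np.random.default_rng(7)
n = 64
h = rng.normal(0.0, 0.5, size=n)   # quenched random fields
s = -np.ones(n, dtype=int)         # start fully "down"

s_low = relax(s, h, H=0.0)         # a locally stable state at H = 0
# drive the field up: at large H every spin is forced "up"
s_high = relax(s_low, h, H=5.0)
```

Sweeping H up and then back down and recording the magnetization of the successive fixed points traces exactly the hysteresis loops the abstract refers to.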
The Dirichlet-Multinomial Model for Multivariate Randomized Response Data and Small Samples
Avetisyan, Marianna; Fox, Jean-Paul
2012-01-01
In survey sampling the randomized response (RR) technique can be used to obtain truthful answers to sensitive questions. Although the individual answers are masked due to the RR technique, individual (sensitive) response rates can be estimated when observing multivariate response data. The beta-binomial model for binary RR data will be generalized…
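Warner's classic design is the simplest concrete instance of the RR technique described above: each respondent answers the sensitive question truthfully with known probability p, and answers its complement otherwise, so individual answers are masked but the aggregate rate is recoverable. A sketch of the estimator (the true rate 0.30 and spinner probability 0.75 are invented illustration values):

```python
import numpy as np

def warner_estimate(yes_frac, p):
    """Warner randomized-response estimator: invert
    P(yes) = p*pi + (1-p)*(1-pi)  =>  pi = (P(yes) - (1-p)) / (2p - 1)."""
    return (yes_frac - (1.0 - p)) / (2.0 * p - 1.0)

rng = np.random.default_rng(11)
pi_true, p, n = 0.30, 0.75, 200000
sensitive = rng.random(n) < pi_true   # true (hidden) status per respondent
truthful = rng.random(n) < p          # randomizing-device outcome
# masked answers: truthful respondents report their status, others its complement
answers = np.where(truthful, sensitive, ~sensitive)

pi_hat = warner_estimate(answers.mean(), p)
```

The beta-binomial and Dirichlet-multinomial models in the paper put a prior on such rates, which stabilizes estimation with multivariate RR data and small samples.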
Theoretical model of the density of states of random binary alloys
International Nuclear Information System (INIS)
Zekri, N.; Brezini, A.
1991-09-01
A theoretical formulation of the density of states for random binary alloys is examined based on a mean field treatment. The present model includes both diagonal and off-diagonal disorder and also short-range order. Extensive results are reported for various concentrations and compared to other calculations. (author). 22 refs, 6 figs
A random regression model in analysis of litter size in pigs | Lukovi& ...
African Journals Online (AJOL)
Dispersion parameters for number of piglets born alive (NBA) were estimated using a random regression model (RRM). Two data sets of litter records from the Nemščak farm in Slovenia were used for analyses. The first dataset (DS1) included records from the first to the sixth parity. The second dataset (DS2) was extended ...
International Nuclear Information System (INIS)
Kaplan, T.; Gray, L.J.
1984-01-01
The self-consistent approximation of Kaplan, Leath, Gray, and Diehl is applied to models for substitutional random alloys with muffin-tin potentials. The particular advantage of this approximation is that, in addition to including cluster scattering, the muffin-tin potentials in the alloy can depend on the occupation of the surrounding sites (i.e., environmental disorder is included).
International Nuclear Information System (INIS)
Perez Curbelo, J.; Rosales, J.; Garcia, L.; Garcia, C.; Brayner, C.
2013-01-01
The pebble bed nuclear reactor is one of the main candidates for the next generation of nuclear power plants. In pebble bed type HTRs, the fuel is contained within graphite pebbles in the form of TRISO particles, which form a randomly packed bed inside a graphite-walled cylindrical cavity. Pebble bed reactors (PBRs) offer the opportunity to meet sustainability requirements, such as nuclear safety, economic competitiveness, proliferation resistance and minimal production of radioactive waste. In order to simulate PBRs correctly, the double heterogeneity of the system must be considered: randomly located pebbles in the core and TRISO particles inside the fuel pebbles. These features are often neglected because they are difficult to model with the MCNP code, mainly because there is a limited number of cells and surfaces that can be defined. In this study, a computational tool was developed which allows building a new geometrical model of fuel pebbles for neutronic calculations with the MCNPX code. The heterogeneity of the system is considered, including the randomly located TRISO particles inside the pebble. Four proposed fuel pebble models were compared with respect to their effective multiplication factor and energy release profiles: Homogeneous Pebble, Five Zone Homogeneous Pebble, Detailed Geometry, and Randomly Detailed Geometry. (Author)
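The "randomly located TRISO particles" feature can be illustrated with naive random sequential addition: propose uniform particle centres inside the spherical fuel zone and reject any that overlap a previously placed particle. The radii and particle count below are toy values, not real pebble dimensions (an actual pebble holds on the order of 10^4 particles):

```python
import numpy as np

def sample_triso_centers(n, pebble_r, particle_r, seed=0, max_tries=200000):
    """Rejection-sample n non-overlapping particle centres inside a
    spherical fuel zone (random sequential addition, no periodicity)."""
    rng = np.random.default_rng(seed)
    centers = []
    tries = 0
    while len(centers) < n and tries < max_tries:
        tries += 1
        c = rng.uniform(-pebble_r, pebble_r, size=3)
        if np.linalg.norm(c) > pebble_r - particle_r:
            continue   # particle would poke out of the fuel zone
        if any(np.linalg.norm(c - o) < 2 * particle_r for o in centers):
            continue   # overlaps an already placed particle
        centers.append(c)
    return np.array(centers)

# toy numbers: 200 particles of radius 0.5 in a fuel zone of radius 25
centers = sample_triso_centers(n=200, pebble_r=25.0, particle_r=0.5)
```

A position list like this is what a geometry generator would translate into the repeated cell/surface definitions needed by an MCNPX input deck.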
Integrals of random fields treated by the model correction factor method
DEFF Research Database (Denmark)
Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der
2002-01-01
The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals ...
Generalized Dynamic Panel Data Models with Random Effects for Cross-Section and Time
Mesters, G.; Koopman, S.J.
2014-01-01
An exact maximum likelihood method is developed for the estimation of parameters in a nonlinear non-Gaussian dynamic panel data model with unobserved random individual-specific and time-varying effects. We propose an estimation procedure based on the importance sampling technique. In particular, a
Reduction of the number of parameters needed for a polynomial random regression test-day model
Pool, M.H.; Meuwissen, T.H.E.
2000-01-01
Legendre polynomials were used to describe the (co)variance matrix within a random regression test day model. The goodness of fit depended on the polynomial order of fit, i.e., number of parameters to be estimated per animal but is limited by computing capacity. Two aspects: incomplete lactation
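The Legendre-polynomial basis underlying such random regression test-day models can be sketched as follows. The polynomial order, the lactation range of 5 to 305 days, and the (co)variance matrix `G` below are illustrative assumptions, not values estimated in the study:

```python
import numpy as np
from numpy.polynomial import legendre as npleg

def legendre_basis(days, d_min=5, d_max=305, order=2):
    """Evaluate Legendre polynomials P_0..P_order at days in milk
    standardized to [-1, 1], the usual basis of a random regression
    test-day model."""
    x = 2.0 * (np.asarray(days, dtype=float) - d_min) / (d_max - d_min) - 1.0
    return np.column_stack(
        [npleg.legval(x, np.eye(order + 1)[j]) for j in range(order + 1)]
    )

# Hypothetical (co)variance matrix G of the random regression coefficients;
# the animal-specific variance along the trajectory is diag(phi G phi').
G = np.array([[4.0, 0.5, 0.1],
              [0.5, 1.0, 0.0],
              [0.1, 0.0, 0.2]])

phi = legendre_basis([5, 155, 305])        # start, mid and end of lactation
var_trajectory = np.einsum('ij,jk,ik->i', phi, G, phi)
print(var_trajectory)
```

Increasing the polynomial order adds parameters per animal, which is exactly the trade-off between goodness of fit and computing capacity the abstract describes.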
DEFF Research Database (Denmark)
Petersen, Jørgen Holm
2016-01-01
This paper describes a new approach to the estimation in a logistic regression model with two crossed random effects where special interest is in estimating the variance of one of the effects while not making distributional assumptions about the other effect. A composite likelihood is studied...
Comparing Fuzzy Sets and Random Sets to Model the Uncertainty of Fuzzy Shorelines
Dewi, Ratna Sari; Bijker, Wietske; Stein, Alfred
2017-01-01
This paper addresses uncertainty modelling of shorelines by comparing fuzzy sets and random sets. Both methods quantify extensional uncertainty of shorelines extracted from remote sensing images. Two datasets were tested: pan-sharpened Pleiades with four bands (Pleiades) and pan-sharpened Pleiades
Stenzel, S.; Baumann-Stanzer, K.
2009-04-01
Dispersion modeling of accidental releases of toxic gases - comparison of the models and their utility for the fire brigades. Sirma Stenzel, Kathrin Baumann-Stanzer. In the case of an accidental release of hazardous gases into the atmosphere, emergency responders need a reliable and fast tool to assess the possible consequences and apply the optimal countermeasures. A number of air dispersion models are available for hazard prediction and simulation of the hazard zones. Most model packages (commercial or free of charge) include a chemical database, an intuitive graphical user interface (GUI) and automated graphical output for displaying the results; they are easy to use and can operate quickly and effectively during stress situations. The models are designed especially for analyzing different accidental toxic release scenarios ("worst-case scenarios"), preparing emergency response plans and optimal countermeasures, as well as for real-time risk assessment and management. Models can also be coupled directly to automatic meteorological stations in order to avoid uncertainties in the model output due to insufficient or incorrect meteorological data. Another key problem in coping with accidental toxic releases is the relatively wide spectrum of regulations and threshold values, such as IDLH, ERPG, AEGL and MAK, and the different criteria for their application. Since the particular emergency responders and organizations require different regulations and values for their purposes, it is quite difficult to predict the individual hazard areas. Quite a number of research studies and investigations address the problem; in any case, the final decision is up to the authorities. The research project RETOMOD (reference scenarios calculations for toxic gas releases - model systems and their utility for the fire brigade) was conducted by the Central Institute for Meteorology and Geodynamics (ZAMG) in cooperation with the Vienna fire brigade, OMV Refining & Marketing GmbH and
Computational model of precision grip in Parkinson’s disease: A Utility based approach
Directory of Open Access Journals (Sweden)
Ankur eGupta
2013-12-01
We propose a computational model of Precision Grip (PG) performance in normal subjects and Parkinson's Disease (PD) patients. Prior studies on grip force generation in PD patients show an increase in grip force during ON medication and an increase in the variability of the grip force during OFF medication (Fellows et al., 1998; Ingvarsson et al., 1997). Changes in grip force generation in dopamine-deficient PD conditions strongly suggest a contribution of the Basal Ganglia, a deep brain system with a crucial role in translating dopamine signals into decision making. The present approach is to treat the problem of modeling grip force generation as a problem of action selection, which is one of the key functions of the Basal Ganglia. The model consists of two components: (1) the sensory-motor loop component, and (2) the Basal Ganglia component. The sensory-motor loop component converts a reference position and a reference grip force into lift force and grip force profiles, respectively. These two forces cooperate in grip-lifting a load. The sensory-motor loop component also includes a plant model that represents the interaction between the two fingers involved in PG and the object to be lifted. The Basal Ganglia component is modeled using Reinforcement Learning, with the significant difference that action selection is performed using a utility distribution instead of a purely value-based distribution, thereby incorporating risk-based decision making. The proposed model accounts accurately for the precision grip results from normal subjects and PD patients (Fellows et al., 1998; Ingvarsson et al., 1997). To our knowledge this is the first model of precision grip in PD conditions.
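Utility-based (as opposed to purely value-based) action selection of the kind described above can be illustrated with a generic risk-sensitive rule: utility equals expected value minus a penalty on outcome variability, followed by a softmax. The numbers and the `risk_weight`/`beta` parameters are hypothetical, and this is a generic sketch, not the paper's exact Basal Ganglia formulation:

```python
import numpy as np

rng = np.random.default_rng(5)

def select_action(mean_value, value_var, risk_weight=0.5, beta=5.0):
    """Risk-sensitive selection: utility = value - risk_weight * sd,
    then a softmax over utilities gives the choice probabilities."""
    utility = mean_value - risk_weight * np.sqrt(value_var)
    p = np.exp(beta * (utility - utility.max()))   # stable softmax
    p /= p.sum()
    return rng.choice(len(p), p=p), p

mean_value = np.array([1.0, 1.2])   # action 1 has higher expected value...
value_var = np.array([0.01, 1.0])   # ...but much higher outcome variance
action, probs = select_action(mean_value, value_var)
print(action, probs)
```

With these illustrative numbers the risk penalty reverses the preference: the lower-variance action 0 is chosen more often despite its lower expected value.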
Calculating radiotherapy margins based on Bayesian modelling of patient specific random errors
International Nuclear Information System (INIS)
Herschtal, A; Te Marvelde, L; Mengersen, K; Foroudi, F; Ball, D; Devereux, T; Pham, D; Greer, P B; Pichler, P; Eade, T; Kneebone, A; Bell, L; Caine, H; Hindson, B; Kron, T; Hosseinifard, Z
2015-01-01
Collected real-life clinical target volume (CTV) displacement data show that some patients undergoing external beam radiotherapy (EBRT) demonstrate significantly more fraction-to-fraction variability in their displacement (‘random error’) than others. This contrasts with the common assumption made by historical recipes for margin estimation for EBRT, that the random error is constant across patients. In this work we present statistical models of CTV displacements in which random errors are characterised by an inverse gamma (IG) distribution in order to assess the impact of random error variability on CTV-to-PTV margin widths, for eight real world patient cohorts from four institutions, and for different sites of malignancy. We considered a variety of clinical treatment requirements and penumbral widths. The eight cohorts consisted of a total of 874 patients and 27 391 treatment sessions. Compared to a traditional margin recipe that assumes constant random errors across patients, for a typical 4 mm penumbral width, the IG based margin model mandates that in order to satisfy the common clinical requirement that 90% of patients receive at least 95% of prescribed RT dose to the entire CTV, margins be increased by a median of 10% (range over the eight cohorts −19% to +35%). This substantially reduces the proportion of patients for whom margins are too small to satisfy clinical requirements. (paper)
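The effect of letting the random-error variance differ across patients via an inverse gamma distribution can be sketched with a toy 1-D simulation. The hyperparameters, the "within margin in at least 95% of fractions" criterion used as a stand-in for dose coverage, and all numbers below are illustrative assumptions, not the paper's margin recipe:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hyperparameters: each patient's random-error variance
# sigma_i^2 is drawn from an inverse gamma IG(alpha, beta) rather than
# being a single value shared by all patients.
alpha, beta = 4.0, 12.0                  # mean variance beta/(alpha-1) = 4 mm^2
n_patients, n_fractions = 10000, 30

sigma = np.sqrt(beta / rng.gamma(alpha, size=n_patients))   # IG(alpha, beta) draws

def coverage_fraction(margin_mm):
    """Fraction of patients whose 1-D displacement stays within the margin
    in at least 95% of fractions (a toy stand-in for dose coverage)."""
    disp = rng.normal(0.0, sigma[:, None], size=(n_patients, n_fractions))
    ok = (np.abs(disp) <= margin_mm).mean(axis=1)
    return float((ok >= 0.95).mean())

results = {m: coverage_fraction(m) for m in (4.0, 5.0, 6.0)}
print(results)
```

Because the inverse gamma has a heavy right tail, a minority of high-variability patients dominates the failure rate, which is why margins sized for a constant random error can fall short.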
International Nuclear Information System (INIS)
Mishchenko, Michael I.; Dlugach, Janna M.; Yurkin, Maxim A.; Bi, Lei; Cairns, Brian; Liu, Li; Panetta, R. Lee; Travis, Larry D.; Yang, Ping; Zakharova, Nadezhda T.
2016-01-01
A discrete random medium is an object in the form of a finite volume of a vacuum or a homogeneous material medium filled with quasi-randomly and quasi-uniformly distributed discrete macroscopic impurities called small particles. Such objects are ubiquitous in natural and artificial environments. They are often characterized by analyzing theoretically the results of laboratory, in situ, or remote-sensing measurements of the scattering of light and other electromagnetic radiation. Electromagnetic scattering and absorption by particles can also affect the energy budget of a discrete random medium and hence various ambient physical and chemical processes. In either case electromagnetic scattering must be modeled in terms of appropriate optical observables, i.e., quadratic or bilinear forms in the field that quantify the reading of a relevant optical instrument or the electromagnetic energy budget. It is generally believed that time-harmonic Maxwell’s equations can accurately describe elastic electromagnetic scattering by macroscopic particulate media that change in time much more slowly than the incident electromagnetic field. However, direct solutions of these equations for discrete random media had been impracticable until quite recently. This has led to a widespread use of various phenomenological approaches in situations when their very applicability can be questioned. Recently, however, a new branch of physical optics has emerged wherein electromagnetic scattering by discrete and discretely heterogeneous random media is modeled directly by using analytical or numerically exact computer solutions of the Maxwell equations. Therefore, the main objective of this Report is to formulate the general theoretical framework of electromagnetic scattering by discrete random media rooted in the Maxwell–Lorentz electromagnetics and discuss its immediate analytical and numerical consequences. Starting from the microscopic Maxwell–Lorentz equations, we trace the development
Energy Technology Data Exchange (ETDEWEB)
Mishchenko, Michael I., E-mail: michael.i.mishchenko@nasa.gov [NASA Goddard Institute for Space Studies, 2880 Broadway, New York, NY 10025 (United States); Dlugach, Janna M. [Main Astronomical Observatory of the National Academy of Sciences of Ukraine, 27 Zabolotny Str., 03680, Kyiv (Ukraine); Yurkin, Maxim A. [Voevodsky Institute of Chemical Kinetics and Combustion, SB RAS, Institutskaya str. 3, 630090 Novosibirsk (Russian Federation); Novosibirsk State University, Pirogova 2, 630090 Novosibirsk (Russian Federation); Bi, Lei [Department of Atmospheric Sciences, Texas A& M University, College Station, TX 77843 (United States); Cairns, Brian [NASA Goddard Institute for Space Studies, 2880 Broadway, New York, NY 10025 (United States); Liu, Li [NASA Goddard Institute for Space Studies, 2880 Broadway, New York, NY 10025 (United States); Columbia University, 2880 Broadway, New York, NY 10025 (United States); Panetta, R. Lee [Department of Atmospheric Sciences, Texas A& M University, College Station, TX 77843 (United States); Travis, Larry D. [NASA Goddard Institute for Space Studies, 2880 Broadway, New York, NY 10025 (United States); Yang, Ping [Department of Atmospheric Sciences, Texas A& M University, College Station, TX 77843 (United States); Zakharova, Nadezhda T. [Trinnovim LLC, 2880 Broadway, New York, NY 10025 (United States)
2016-05-16
A discrete random medium is an object in the form of a finite volume of a vacuum or a homogeneous material medium filled with quasi-randomly and quasi-uniformly distributed discrete macroscopic impurities called small particles. Such objects are ubiquitous in natural and artificial environments. They are often characterized by analyzing theoretically the results of laboratory, in situ, or remote-sensing measurements of the scattering of light and other electromagnetic radiation. Electromagnetic scattering and absorption by particles can also affect the energy budget of a discrete random medium and hence various ambient physical and chemical processes. In either case electromagnetic scattering must be modeled in terms of appropriate optical observables, i.e., quadratic or bilinear forms in the field that quantify the reading of a relevant optical instrument or the electromagnetic energy budget. It is generally believed that time-harmonic Maxwell’s equations can accurately describe elastic electromagnetic scattering by macroscopic particulate media that change in time much more slowly than the incident electromagnetic field. However, direct solutions of these equations for discrete random media had been impracticable until quite recently. This has led to a widespread use of various phenomenological approaches in situations when their very applicability can be questioned. Recently, however, a new branch of physical optics has emerged wherein electromagnetic scattering by discrete and discretely heterogeneous random media is modeled directly by using analytical or numerically exact computer solutions of the Maxwell equations. Therefore, the main objective of this Report is to formulate the general theoretical framework of electromagnetic scattering by discrete random media rooted in the Maxwell–Lorentz electromagnetics and discuss its immediate analytical and numerical consequences. Starting from the microscopic Maxwell–Lorentz equations, we trace the development
Random regression models for daily feed intake in Danish Duroc pigs
DEFF Research Database (Denmark)
Strathe, Anders Bjerring; Mark, Thomas; Jensen, Just
The objective of this study was to develop random regression models and estimate covariance functions for daily feed intake (DFI) in Danish Duroc pigs. A total of 476201 DFI records were available on 6542 Duroc boars between 70 and 160 days of age. The data originated from the National test station......-year-season, permanent, and animal genetic effects. The functional form was based on Legendre polynomials. A total of 64 models for random regressions were initially ranked by BIC to identify the approximate order for the Legendre polynomials using AI-REML. The parsimonious model included Legendre polynomials of 2nd...... order for genetic and permanent environmental curves and a heterogeneous residual variance, allowing the daily residual variance to change along the age trajectory due to scale effects. The parameters of the model were estimated in a Bayesian framework, using the RJMC module of the DMU package, where...
Scargle, Jeffrey D.
1990-01-01
While chaos arises only in nonlinear systems, standard linear time series models are nevertheless useful for analyzing data from chaotic processes. This paper introduces such a model, the chaotic moving average. This time-domain model is based on the theorem that any chaotic process can be represented as the convolution of a linear filter with an uncorrelated process called the chaotic innovation. A technique, minimum phase-volume deconvolution, is introduced to estimate the filter and innovation. The algorithm measures the quality of a model using the volume covered by the phase-portrait of the innovation process. Experiments on synthetic data demonstrate that the algorithm accurately recovers the parameters of simple chaotic processes. Though tailored for chaos, the algorithm can detect both chaos and randomness, distinguish them from each other, and separate them if both are present. It can also recover nonminimum-delay pulse shapes in non-Gaussian processes, both random and chaotic.
Giessler, Mathias; Tränckner, Jens
2018-02-01
The paper presents a simplified model that quantifies the economic and technical consequences of changing conditions in wastewater systems at the utility level. It was developed using data from stakeholders and ministries, collected by a survey that determined the resulting effects and the measures adopted. The model comprises all substantial cost-relevant assets and activities of a typical German wastewater utility. It consists of three modules: (i) Sewer, describing the state development of sewer systems; (ii) WWTP, considering the process parameters of wastewater treatment plants (WWTPs); and (iii) Cost Accounting, calculating expenses in the cost categories and the resulting charges. The validity and accuracy of the model were verified using historical data from an exemplary wastewater utility. The calculated process and economic parameters show high accuracy compared with the measured parameters and actual expenses. The model is therefore proposed to support strategic, process-oriented decision making at the utility level. Copyright © 2017 Elsevier Ltd. All rights reserved.
Willis, Erik A; Szabo-Reed, Amanda N; Ptomey, Lauren T; Steger, Felicia L; Honas, Jeffery J; Al-Hihi, Eyad M; Lee, Robert; Vansaghi, Lisa; Washburn, Richard A; Donnelly, Joseph E
2016-03-01
Management of obesity in the context of the primary care physician visit is of limited efficacy in part because of limited ability to engage participants in sustained behavior change between physician visits. Therefore, healthcare systems must find methods to address obesity that reach beyond the walls of clinics and hospitals and address the issues of lifestyle modification in a cost-conscious way. The dramatic increase in technology and online social networks may present healthcare providers with innovative ways to deliver weight management programs that could have an impact on health care at the population level. A randomized study will be conducted on 70 obese adults (BMI 30.0-45.0 kg/m²) to determine if weight loss (6 months) is equivalent between weight management interventions utilizing behavioral strategies by either a conference call or social media approach. The primary outcome, body weight, will be assessed at baseline and 6 months. Secondary outcomes including waist circumference, energy and macronutrient intake, and physical activity will be assessed on the same schedule. In addition, a cost analysis and process evaluation will be completed. Copyright © 2016 Elsevier Inc. All rights reserved.
Roth, Alexis M; Van Der Pol, Barbara; Fortenberry, J Dennis; Dodge, Brian; Reece, Michael; Certo, David; Zimet, Gregory D
2015-01-01
Epidemiologic data demonstrate that women involved with the criminal justice system in the United States are at high risk for sexually transmitted infections, including herpes simplex virus type 2 (HSV-2). Female defendants were recruited from a misdemeanor court to assess whether brief framed messages utilizing prospect theory could encourage testing for HSV-2. Participants were randomly assigned to a message condition (gain, loss, or control), completed an interviewer-administered survey assessing factors associated with antibody test uptake/refusal and were offered free point-of-care HSV-2 serologic testing. Although individuals in the loss-frame group accepted testing at the highest rate, an overall statistical difference in HSV-2 testing behavior by group (p ≤ .43) was not detected. The majority of the sample (74.6%) characterized receiving a serological test for HSV-2 as health affirming. However, this did not moderate the effect of the intervention nor was it significantly associated with test acceptance (p ≤ .82). Although the effects of message framing are subtle, the findings have important theoretical implications given the participants' characterization of HSV-2 screening as health affirming despite being a detection behavior. Implications of study results for health care providers interested in brief, low cost interventions are also explored.
A random point process model for the score in sport matches
Czech Academy of Sciences Publication Activity Database
Volf, Petr
2009-01-01
Vol. 20, No. 2 (2009), pp. 121-131. ISSN 1471-678X. R&D Projects: GA AV ČR(CZ) IAA101120604. Institutional research plan: CEZ:AV0Z10750506. Keywords: sport statistics * scoring intensity * Cox's regression model. Subject RIV: BB - Applied Statistics, Operational Research. http://library.utia.cas.cz/separaty/2009/SI/volf-a random point process model for the score in sport matches.pdf
Naryshkin, Roman; Davison, Matt
2009-01-01
This paper analyzes popular time-nonseparable utility functions that describe "habit formation" consumer preferences comparing current consumption with the time averaged past consumption of the same individual and "catching up with the Joneses" (CuJ) models comparing individual consumption with a cross-sectional average consumption level. Few of these models give reasonable optimum consumption time series. We introduce theoretically justified utility specifications leading to a plausible cons...
Accumulator and random-walk models of psychophysical discrimination: a counter-evaluation.
Vickers, D; Smith, P
1985-01-01
In a recent assessment of models of psychophysical discrimination, Heath criticises the accumulator model for its reliance on computer simulation and qualitative evidence, and contrasts it unfavourably with a modified random-walk model, which yields exact predictions, is susceptible to critical test, and is provided with simple parameter-estimation techniques. A counter-evaluation is presented, in which the approximations employed in the modified random-walk analysis are demonstrated to be seriously inaccurate, the resulting parameter estimates to be artefactually determined, and the proposed test not critical. It is pointed out that Heath's specific application of the model is not legitimate, his data treatment inappropriate, and his hypothesis concerning confidence inconsistent with experimental results. Evidence from adaptive performance changes is presented which shows that the necessary assumptions for quantitative analysis in terms of the modified random-walk model are not satisfied, and that the model can be reconciled with data at the qualitative level only by making it virtually indistinguishable from an accumulator process. A procedure for deriving exact predictions for an accumulator process is outlined.
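A minimal simulation of an accumulator process of the kind debated above can be written directly; it assumes unit evidence increments and a fixed criterion, both simplifications of the full model:

```python
import numpy as np

rng = np.random.default_rng(1)

def accumulator_trial(p=0.6, criterion=5, max_steps=1000):
    """One trial of a simple accumulator model of discrimination: each
    evidence sample favours response A with probability p and adds one
    unit to the corresponding total; the first total to reach the
    criterion determines the response. Returns (response, n_steps)."""
    a = b = 0
    for t in range(1, max_steps + 1):
        if rng.random() < p:
            a += 1
        else:
            b += 1
        if a >= criterion:
            return 'A', t
        if b >= criterion:
            return 'B', t
    return None, max_steps

trials = [accumulator_trial() for _ in range(2000)]
accuracy = np.mean([r == 'A' for r, _ in trials])
mean_rt = np.mean([t for _, t in trials])
print(accuracy, mean_rt)
```

Raising the criterion trades speed for accuracy, which is the qualitative signature such models are fitted against; exact predictions for this race process exist in closed form (negative binomial race probabilities), in line with the article's point that accumulator models need not rely on simulation alone.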
Model of sustainable utilization of organic solids waste in Cundinamarca, Colombia
Directory of Open Access Journals (Sweden)
Solanyi Castañeda Torres
2017-05-01
Introduction: This article proposes a model for the utilization of organic solid waste in the department of Cundinamarca, responding to the need for a tool to support decision-making in the planning and management of organic solid waste. Objective: To develop an approximate conceptual, technical and mathematical optimization model to support decision-making in order to minimize environmental impacts. Materials and methods: A descriptive study was applied, since some fundamental characteristics of the homogeneous phenomenon under study are presented; the design is also considered quasi-experimental. The calculation of the model for the department's plants is based on three axes (environmental, economic and social) that appear in the general optimization equation. Results: A model is obtained for utilizing organic solid waste through the biological treatment techniques of aerobic composting and vermiculture, optimizing the system with respect to the savings in greenhouse gas emissions released into the atmosphere and the reduction of the overall cost of final disposal of organic solid waste in sanitary landfills. Based on the economic principle of utility, which determines the environmental feasibility and sustainability of the department's organic solid waste utilization plants, organic fertilizers such as compost and humus capture carbon and nitrogen, reducing tons of CO2.
Autcha Araveeporn
2013-01-01
This paper compares a Least-Squared Random Coefficient Autoregressive (RCA) model with a Least-Squared RCA model based on Autocorrelated Errors (RCA-AR). We looked at only the first order models, denoted RCA(1) and RCA(1)-AR(1). The efficiency of the Least-Squared method was checked by applying the models to Brownian motion and Wiener process, and the efficiency followed closely the asymptotic properties of a normal distribution. In a simulation study, we compared the performance of RCA(1) an...
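An RCA(1) process and its least-squares coefficient estimate can be sketched as follows; the parameter values are arbitrary illustrations, and the plain OLS estimator below is only the simplest member of the family of least-squares methods the paper compares:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate an RCA(1) process  y_t = (phi + b_t) * y_{t-1} + e_t,
# where b_t is the random perturbation of the autoregressive coefficient.
phi, sigma_b, sigma_e, n = 0.5, 0.1, 1.0, 20000
y = np.zeros(n)
for t in range(1, n):
    y[t] = (phi + rng.normal(0, sigma_b)) * y[t - 1] + rng.normal(0, sigma_e)

# Least-squares estimate of phi: regression of y_t on y_{t-1}; it is
# consistent because b_t has mean zero and is independent of y_{t-1}.
phi_hat = (y[1:] * y[:-1]).sum() / (y[:-1] ** 2).sum()
print(phi_hat)
```

The random coefficient inflates the conditional variance when |y_{t-1}| is large, which is what distinguishes an RCA(1) fit from an ordinary AR(1) fit even though the point estimate of phi is obtained the same way.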
International Nuclear Information System (INIS)
Liu Lianshou; Zhang Yang; Wu Yuanfang
1996-01-01
The anomalous scaling of factorial moments with continuously diminishing scale is studied using a random cascading model. It is shown that the models currently in use have the property of anomalous scaling only for discrete values of the elementary cell size. A revised model is proposed which gives good scaling properties also for continuously varying scale. It turns out that the strip integral has good scaling properties provided the integration regions are chosen correctly, and that this property is insensitive to the concrete way in which phase space is self-similarly subdivided in the models. (orig.)
A theory of solving TAP equations for Ising models with general invariant random matrices
DEFF Research Database (Denmark)
Opper, Manfred; Çakmak, Burak; Winther, Ole
2016-01-01
We consider the problem of solving TAP mean field equations by iteration for Ising models with coupling matrices that are drawn at random from general invariant ensembles. We develop an analysis of iterative algorithms using a dynamical functional approach that in the thermodynamic limit yields...... the iteration dependent on a Gaussian distributed field only. The TAP magnetizations are stable fixed points if a de Almeida–Thouless stability criterion is fulfilled. We illustrate our method explicitly for coupling matrices drawn from the random orthogonal ensemble....
Zhang, Hong; Hou, Rui; Yi, Lei; Meng, Juan; Pan, Zhisong; Zhou, Yuhuan
2016-07-01
The accurate identification of encrypted data stream helps to regulate illegal data, detect network attacks and protect users' information. In this paper, a novel encrypted data stream identification algorithm is introduced. The proposed method is based on randomness characteristics of encrypted data stream. We use a l1-norm regularized logistic regression to improve sparse representation of randomness features and Fuzzy Gaussian Mixture Model (FGMM) to improve identification accuracy. Experimental results demonstrate that the method can be adopted as an effective technique for encrypted data stream identification.
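A basic randomness feature of the kind such identification methods build on is the byte-histogram entropy: encrypted or compressed streams sit near the 8 bits/byte maximum, while plaintext sits well below it. The sketch below computes only this single feature and omits the paper's l1-regularized logistic regression and FGMM stages; the sample data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)

def byte_entropy(data: bytes) -> float:
    """Shannon entropy (bits per byte) of the byte-value histogram,
    a simple randomness feature of a data stream."""
    counts = np.bincount(np.frombuffer(data, dtype=np.uint8), minlength=256)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

encrypted_like = rng.integers(0, 256, 4096, dtype=np.uint8).tobytes()
text_like = b"the quick brown fox jumps over the lazy dog " * 100

print(byte_entropy(encrypted_like))   # near 8 bits/byte
print(byte_entropy(text_like))        # far below 8 bits/byte
```

In practice a classifier combines many such features (entropy over windows, runs tests, chi-square statistics) rather than thresholding a single one.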
Generalized random walk algorithm for the numerical modeling of complex diffusion processes
Vamos, C; Vereecken, H
2003-01-01
A generalized form of the random walk algorithm to simulate diffusion processes is introduced. Unlike the usual approach, at a given time all the particles from a grid node are simultaneously scattered using the Bernoulli repartition. This procedure saves memory and computing time and no restrictions are imposed for the maximum number of particles to be used in simulations. We prove that for simple diffusion the method generalizes the finite difference scheme and gives the same precision for large enough number of particles. As an example, simulations of diffusion in random velocity field are performed and the main features of the stochastic mathematical model are numerically tested.
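The collective scattering idea can be sketched for simple unbiased 1-D diffusion: instead of drawing one random number per particle, each node's whole population is split with a single binomial (Bernoulli-repartition) draw per step. The grid sizes and particle counts below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

def grw_step(counts):
    """One Global Random Walk step on a 1-D grid: the particles at each
    occupied node are scattered collectively, with one binomial draw per
    node deciding how many move right (the rest move left)."""
    new = np.zeros_like(counts)
    occupied = np.nonzero(counts)[0]
    right = rng.binomial(counts[occupied], 0.5)   # Bernoulli repartition
    left = counts[occupied] - right
    new[occupied + 1] += right
    new[occupied - 1] += left
    return new

n_grid, n_particles, n_steps = 2001, 10**6, 200
counts = np.zeros(n_grid, dtype=np.int64)
counts[n_grid // 2] = n_particles                 # all particles at the centre

for _ in range(n_steps):
    counts = grw_step(counts)

x = np.arange(n_grid) - n_grid // 2
var = (counts * x**2).sum() / n_particles         # should approach n_steps
print(var)
```

The per-node binomial split is distributionally equivalent to moving each particle independently, which is why the method reproduces the finite difference scheme's diffusive variance growth while its cost scales with the number of occupied nodes, not the number of particles.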
Global random walk simulations for sensitivity and uncertainty analysis of passive transport models
Directory of Open Access Journals (Sweden)
Nicolae Suciu
2011-07-01
The Global Random Walk algorithm (GRW) performs a simultaneous tracking on a fixed grid of huge numbers of particles at costs comparable to those of a single-trajectory simulation by the traditional Particle Tracking (PT) approach. Statistical ensembles of GRW simulations of a typical advection-dispersion process in groundwater systems with randomly distributed spatial parameters are used to obtain reliable estimations of the input parameters for the upscaled transport model and of their correlations, input-output correlations, as well as full probability distributions of the input and output parameters.
Anderson localization through Polyakov loops: Lattice evidence and random matrix model
International Nuclear Information System (INIS)
Bruckmann, Falk; Schierenberg, Sebastian; Kovacs, Tamas G.
2011-01-01
We investigate low-lying fermion modes in SU(2) gauge theory at temperatures above the phase transition. Both staggered and overlap spectra reveal transitions from chaotic (random matrix) to integrable (Poissonian) behavior accompanied by an increasing localization of the eigenmodes. We show that the latter are trapped by local Polyakov loop fluctuations. Islands of such "wrong" Polyakov loops can therefore be viewed as defects leading to Anderson localization in gauge theories. We find strong similarities in the spatial profile of these localized staggered and overlap eigenmodes. We discuss possible interpretations of this finding and present a sparse random matrix model that reproduces these features.
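The chaotic-versus-Poissonian distinction can be illustrated with the spacing-ratio statistic, which needs no spectral unfolding. This sketch contrasts a dense GOE matrix with an uncorrelated spectrum; it does not reproduce the paper's gauge-theory spectra or its sparse model:

```python
import numpy as np

def mean_r(eigs):
    """Mean ratio of consecutive level spacings: ~0.386 for a Poissonian
    spectrum (localized regime), ~0.531 for GOE statistics (chaotic
    regime)."""
    s = np.diff(np.sort(eigs))
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return r.mean()

rng = np.random.default_rng(2)
N = 400
# GOE-like symmetric matrix: eigenvalues exhibit level repulsion
A = rng.normal(size=(N, N))
goe_levels = np.linalg.eigvalsh((A + A.T) / 2.0)
# Uncorrelated spectrum: independent levels, no repulsion
poisson_levels = rng.uniform(0.0, 1.0, N)
```

A transition of the measured ratio from the GOE value toward the Poisson value, as seen in the lattice spectra, signals the onset of localization.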
Dispersion modeling of accidental releases of toxic gases - utility for the fire brigades.
Stenzel, S.; Baumann-Stanzer, K.
2009-09-01
Several air dispersion models are available for prediction and simulation of the hazard areas associated with accidental releases of toxic gases. Most model packages (commercial or free of charge) include a chemical database, an intuitive graphical user interface (GUI) and automated graphical output for effective presentation of results. The models are designed especially for analyzing different accidental toxic release scenarios ("worst-case scenarios"), preparing emergency response plans and optimal countermeasures, as well as for real-time risk assessment and management. The research project RETOMOD (reference scenarios calculations for toxic gas releases - model systems and their utility for the fire brigade) was conducted by the Central Institute for Meteorology and Geodynamics (ZAMG) in cooperation with the Viennese fire brigade, OMV Refining & Marketing GmbH and Synex Ries & Greßlehner GmbH. RETOMOD was funded by the KIRAS safety research program of the Austrian Ministry of Transport, Innovation and Technology (www.kiras.at). The main tasks of this project were 1. Sensitivity study and optimization of the meteorological input for modeling of the hazard areas (human exposure) during accidental toxic releases. 2. Comparison of several model packages (based on reference scenarios) in order to estimate their utility for the fire brigades. For the purpose of our study the following models were tested and compared: ALOHA (Areal Location of Hazardous Atmosphere, EPA), MEMPLEX (Keudel av-Technik GmbH), Trace (Safer System), Breeze (Trinity Consulting), SAM (Engineering office Lohmeyer). A set of reference scenarios for chlorine, ammonia, butane and petrol was processed with the models above in order to predict and estimate the human exposure during the event. Furthermore, the application of the observation-based analysis and forecasting system INCA, developed at the Central Institute for Meteorology and Geodynamics (ZAMG), in case of toxic release was
International Nuclear Information System (INIS)
Kirsch, W.; Martinelli, F.
1981-01-01
After the derivation of weak conditions under which the potential for the Schroedinger operator is well defined, the authors state an ergodicity assumption on this potential which ensures that the spectrum of this operator is a fixed non-random set. Then random point interaction Hamiltonians are considered in this framework. Finally the authors consider a model where for sufficiently small fluctuations around the equilibrium positions a finite number of gaps appears. (HSI)
Superdiffusion in a non-Markovian random walk model with a Gaussian memory profile
Borges, G. M.; Ferreira, A. S.; da Silva, M. A. A.; Cressoni, J. C.; Viswanathan, G. M.; Mariz, A. M.
2012-09-01
Most superdiffusive non-Markovian random walk models assume that correlations are maintained at all time scales, e.g., fractional Brownian motion, Lévy walks, the Elephant walk and Alzheimer walk models. In the latter two models the random walker can always "remember" the initial times near t = 0. Assuming jump size distributions with finite variance, the question naturally arises: is superdiffusion possible if the walker is unable to recall the initial times? We give a conclusive answer to this general question, by studying a non-Markovian model in which the walker's memory of the past is weighted by a Gaussian centered at time t/2, at which time the walker had one half the present age, and with a standard deviation σt which grows linearly as the walker ages. For large widths we find that the model behaves similarly to the Elephant model, but for small widths this Gaussian memory profile model behaves like the Alzheimer walk model. We also report that the phenomenon of amnestically induced persistence, known to occur in the Alzheimer walk model, arises in the Gaussian memory profile model. We conclude that memory of the initial times is not a necessary condition for generating (log-periodic) superdiffusion. We show that the phenomenon of amnestically induced persistence extends to the case of a Gaussian memory profile.
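The memory mechanism can be sketched as follows: at step t the walker recalls a past step drawn from a Gaussian centered at t/2, then repeats it with some probability p or reverses it otherwise. The parameter values p and the relative width are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def gaussian_memory_walk(T, p=0.75, width=0.1, rng=None):
    """Non-Markovian walk with a Gaussian memory profile: at step t a
    past time is recalled from N(t/2, (width*t)^2); the recalled step
    is repeated with probability p and reversed with probability 1-p."""
    if rng is None:
        rng = np.random.default_rng()
    steps = np.empty(T, dtype=np.int8)
    steps[0] = 1                                   # first step to the right
    for t in range(1, T):
        # recalled time, clipped to the walker's actual history [0, t-1]
        recalled = int(np.clip(rng.normal(t / 2.0, max(width * t, 1.0)),
                               0, t - 1))
        steps[t] = steps[recalled] if rng.random() < p else -steps[recalled]
    return np.cumsum(steps)                        # walker positions

traj = gaussian_memory_walk(2000, rng=np.random.default_rng(3))
```

Because the memory kernel is centered at t/2 rather than at the origin, the walker never relies on the initial times; diagnosing superdiffusion would require averaging the mean squared displacement over many such trajectories.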
Assessing robustness of designs for random effects parameters for nonlinear mixed-effects models.
Duffull, Stephen B; Hooker, Andrew C
2017-12-01
Optimal designs for nonlinear models are dependent on the choice of parameter values. Various methods have been proposed to provide designs that are robust to uncertainty in the prior choice of parameter values. These methods are generally based on estimating the expectation of the determinant (or a transformation of the determinant) of the information matrix over the prior distribution of the parameter values. For high dimensional models this can be computationally challenging. For nonlinear mixed-effects models the question arises as to the importance of accounting for uncertainty in the prior value of the variances of the random effects parameters. In this work we explore the influence of the variance of the random effects parameters on the optimal design. We find that the method for approximating the expectation and variance of the likelihood is of potential importance for considering the influence of random effects. The most common approximation to the likelihood, based on a first-order Taylor series approximation, yields designs that are relatively insensitive to the prior value of the variance of the random effects parameters and under these conditions it appears to be sufficient to consider uncertainty on the fixed-effects parameters only.
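The robust-design criterion described above, the expectation of (a transformation of) the information-matrix determinant over a parameter prior, can be sketched for a deliberately simple scalar case. The exponential model, the lognormal prior, and the candidate designs are illustrative assumptions, not the paper's mixed-effects setting:

```python
import numpy as np

def fim(times, k, sigma=0.1):
    """Fisher information for the parameter k of y(t) = exp(-k t)
    with additive Gaussian error of standard deviation sigma."""
    g = -times * np.exp(-k * times)   # sensitivity dy/dk at each time
    return (g @ g) / sigma**2

def ed_criterion(times, k_samples):
    """ED-optimality: mean log-det of the FIM over prior draws
    (the parameter is scalar here, so det(FIM) = FIM)."""
    return np.mean([np.log(fim(times, k)) for k in k_samples])

rng = np.random.default_rng(4)
prior = np.exp(rng.normal(np.log(0.5), 0.3, 1000))  # lognormal prior on k

good = np.array([1.0, 2.0, 4.0])    # spread sampling times
bad = np.array([0.01, 0.02, 0.03])  # all samples near t=0: little information
crit_good = ed_criterion(good, prior)
crit_bad = ed_criterion(bad, prior)
```

For high-dimensional mixed-effects models the same Monte Carlo expectation is what becomes computationally challenging, motivating the likelihood approximations compared in the abstract.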
Camenzind, Paul A
2012-03-13
Despite a detailed, nation-wide legislative framework, there exist large cantonal disparities in consumed quantities of health care services in Switzerland. In this study, the most important factors of influence causing these regional disparities are determined. The findings can also inform discussions on containing health care consumption in other countries. Based on the literature, relevant factors that cause geographic disparities of quantities and costs in western health care systems are identified. Using a selected set of these factors, individual panel econometric models are calculated to explain the variation of the utilization in each of the six largest health care service groups (general practitioners, specialist doctors, hospital inpatient, hospital outpatient, medication, and nursing homes) in Swiss mandatory health insurance (MHI). The main data source is 'Datenpool santésuisse', a database of Swiss health insurers. For all six health care service groups, significant factors influencing the utilization frequency over time and across cantons are found. A greater supply of service providers tends to have strong interrelations with per capita consumption of MHI services. On the demand side, older populations and higher population densities represent the clearest driving factors. Strategies to contain consumption and costs in health care should include several elements. In the federalist Swiss system, the structure of regional health care supply seems to generate significant effects. However, the extent of driving factors on the demand side (e.g., social deprivation) or financing instruments (e.g., high deductibles) should also be considered.
Kagan, Jonathan M; Rosas, Scott; Trochim, William M K
2010-10-01
New discoveries in basic science are creating extraordinary opportunities to design novel biomedical preventions and therapeutics for human disease. But the clinical evaluation of these new interventions is, in many instances, being hindered by a variety of legal, regulatory, policy and operational factors, few of which enhance research quality, the safety of study participants or research ethics. With the goal of helping increase the efficiency and effectiveness of clinical research, we have examined how the integration of utilization-focused evaluation with elements of business process modeling can reveal opportunities for systematic improvements in clinical research. Using data from the NIH global HIV/AIDS clinical trials networks, we analyzed the absolute and relative times required to traverse defined phases associated with specific activities within the clinical protocol lifecycle. Using simple median duration and Kaplan-Meier survival analysis, we show how such time-based analyses can provide a rationale for the prioritization of research process analysis and re-engineering, as well as a means for statistically assessing the impact of policy modifications, resource utilization, re-engineered processes and best practices. Successfully applied, this approach can help researchers be more efficient in capitalizing on new science to speed the development of improved interventions for human disease.
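The survival-analysis step can be sketched with a product-limit (Kaplan-Meier) estimator over phase durations, where censored records are protocols that had not yet completed the phase at analysis time. The toy durations below are hypothetical, not the NIH network data:

```python
import numpy as np

def kaplan_meier(durations, observed):
    """Kaplan-Meier survival curve for phase durations (e.g., days from
    protocol submission to activation); observed=0 marks censored records.
    Returns event times and the survival probability after each event."""
    order = np.argsort(durations)
    t = np.asarray(durations, dtype=float)[order]
    d = np.asarray(observed)[order]
    event_times, surv = [], []
    s, at_risk = 1.0, len(t)
    for ti, di in zip(t, d):
        if di:                            # event: phase completed at ti
            s *= 1.0 - 1.0 / at_risk      # product-limit update
            event_times.append(ti)
            surv.append(s)
        at_risk -= 1                      # leave the risk set either way
    return np.array(event_times), np.array(surv)

# hypothetical phase durations in days; the 12-day record is censored
times, surv = kaplan_meier([5, 8, 12, 20], [1, 1, 0, 1])
```

The median phase duration is the first time at which the survival curve drops to 0.5 or below, which is how the abstract's "simple median duration" comparison across phases would be read off.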
Context analysis for a new regulatory model for electric utilities in Brazil
International Nuclear Information System (INIS)
El Hage, Fabio S.; Rufín, Carlos
2016-01-01
This article examines what would have to change in the Brazilian regulatory framework in order to make utilities profit from energy efficiency and the integration of resources, instead of doing so from traditional consumption growth, as it happens at present. We argue that the Brazilian integrated electric sector resembles a common-pool resources problem, and as such it should incorporate, in addition to the centralized operation for power dispatch already in place, demand side management, behavioral strategies, and smart grids, attained through a new business and regulatory model for utilities. The paper proposes several measures to attain a more sustainable and productive electricity distribution industry: decoupling revenues from volumetric sales through a fixed maximum load fee, which would completely offset current disincentives for energy efficiency; the creation of a market for negawatts (saved megawatts) using the current Brazilian mechanism of public auctions for the acquisition of wholesale energy; and the integration of technologies, especially through the growth of unregulated products and services. Through these measures, we believe that Brazil could improve both energy security and overall sustainability of its power sector in the long run. - Highlights: • Necessary changes in the Brazilian regulatory framework towards energy efficiency. • How to incorporate demand side management, behavioral strategies, and smart grids. • Proposition of a market for negawatts at public auctions. • Measures to attain a more sustainable electricity distribution industry in Brazil.
New constraints on modelling the random magnetic field of the MW
Energy Technology Data Exchange (ETDEWEB)
Beck, Marcus C.; Nielaba, Peter [Department of Physics, University of Konstanz, Universitätsstr. 10, D-78457 Konstanz (Germany); Beck, Alexander M.; Dolag, Klaus [University Observatory Munich, Scheinerstr. 1, D-81679 Munich (Germany); Beck, Rainer [Max Planck Institute for Radioastronomy, Auf dem Hügel 69, D-53121 Bonn (Germany); Strong, Andrew W., E-mail: marcus.beck@uni-konstanz.de, E-mail: abeck@usm.uni-muenchen.de, E-mail: rbeck@mpifr-bonn.mpg.de, E-mail: dolag@usm.uni-muenchen.de, E-mail: aws@mpe.mpg.de, E-mail: peter.nielaba@uni-konstanz.de [Max Planck Institute for Extraterrestrial Physics, Giessenbachstr. 1, D-85748 Garching (Germany)
2016-05-01
We extend the description of the isotropic and anisotropic random components of the small-scale magnetic field within the existing magnetic field model of the Milky Way from Jansson and Farrar, by including random realizations of the small-scale component. Using a magnetic-field power spectrum with Gaussian random fields, the NE2001 model for the thermal electrons and the Galactic cosmic-ray electron distribution from the current GALPROP model, we derive full-sky maps for the total and polarized synchrotron intensity as well as the Faraday rotation-measure distribution. While previous work assumed that small-scale fluctuations average out along the line of sight or only computed ensemble averages of random fields, we show that these fluctuations need to be carefully taken into account. Comparing with observational data we obtain not only good agreement with the 408 MHz total and WMAP7 22 GHz polarized intensity emission maps, but also an improved agreement with Galactic foreground rotation-measure maps and power spectra, whose amplitude and shape strongly depend on the parameters of the random field. We demonstrate that a correlation length of ≈22 pc (5 pc being a 5σ lower limit) is needed to match the slope of the observed power spectrum of Galactic foreground rotation-measure maps. Using multiple realizations also allows us to infer errors on individual observables. We find that previously used amplitudes for the random and anisotropic random magnetic field components need to be rescaled by factors of ≈0.3 and 0.6 to account for the new small-scale contributions. Our model predicts a rotation measure of −2.8±7.1 rad/m² and 4.4±11 rad/m² for the north and south Galactic poles respectively, in good agreement with observations. Applying our model to deflections of ultra-high-energy cosmic rays we infer a mean deflection of ≈3.5±1.1 degrees for 60 EeV protons arriving from Cen A.
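Generating a random realization with a prescribed power spectrum, the core ingredient of the small-scale component, is typically done by filtering white noise in Fourier space. This 2-D scalar sketch uses an assumed power-law slope and arbitrary normalization; the paper's model is 3-D, vector-valued, and tied to physical units:

```python
import numpy as np

def gaussian_random_field(n, slope=-11.0 / 3.0, seed=0):
    """2-D scalar Gaussian random field with power spectrum P(k) ~ k**slope
    (Kolmogorov-like slope assumed by default), built by multiplying white
    noise by sqrt(P(k)) in Fourier space."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    k = np.hypot(kx, ky)
    k[0, 0] = 1.0                     # placeholder to avoid division by zero
    amp = k ** (slope / 2.0)          # sqrt of the power spectrum
    amp[0, 0] = 0.0                   # remove the k=0 mode: zero-mean field
    noise = np.fft.fft2(rng.normal(size=(n, n)))
    return np.fft.ifft2(noise * amp).real

field = gaussian_random_field(128)
```

Averaging observables over many such realizations, rather than over a single one, is what allows error bars to be attached to individual observables as described above.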
Wanginingastuti Mutmainnah; Masao Furusho
2016-01-01
The 4M Overturned Pyramid (MOP) model is a new model, proposed by the authors, to characterize MTS; it adopts the epidemiological model that determines the causes of accidents, including not only active failures but also latent failures and barriers. This model is still being developed. One use of the MOP model is to characterize accidents in MTS, i.e. collisions in Indonesia and Japan, as described in this paper. The aim of this paper is to show the characteristics of ship collision accidents...
Examining the utility of satellite-based wind sheltering estimates for lake hydrodynamic modeling
Van Den Hoek, Jamon; Read, Jordan S.; Winslow, Luke A.; Montesano, Paul; Markfort, Corey D.
2015-01-01
Satellite-based measurements of vegetation canopy structure have been in common use for the last decade but have never been used to estimate the canopy's impact on wind sheltering of individual lakes. Wind sheltering is caused by slower winds in the wake of topography and shoreline obstacles (e.g. forest canopy) and influences heat loss and the flux of wind-driven mixing energy into lakes, which control lake temperatures and indirectly structure lake ecosystem processes, including carbon cycling and thermal habitat partitioning. Lakeshore wind sheltering has often been parameterized by lake surface area, but such empirical relationships are based only on forested lakeshores and overlook the contributions of local land cover and terrain to wind sheltering. This study is the first to examine the utility of satellite imagery-derived broad-scale estimates of wind sheltering across a diversity of land covers. Using 30 m spatial resolution ASTER GDEM2 elevation data, the mean sheltering height, hs, being the combination of local topographic rise and canopy height above the lake surface, is calculated within 100 m-wide buffers surrounding 76,000 lakes in the U.S. state of Wisconsin. Uncertainty of GDEM2-derived hs was compared to SRTM-, high-resolution G-LiHT lidar-, and ICESat-derived estimates of hs; the respective influences of land cover type and buffer width on hs were examined; and the effect of including satellite-based hs on the accuracy of a statewide lake hydrodynamic model was discussed. Though GDEM2 hs uncertainty was comparable to or better than that of other satellite-based measures of hs, its higher spatial resolution and broader spatial coverage allowed more lakes to be included in modeling efforts. GDEM2 was shown to offer superior utility for estimating hs compared to other satellite-derived data, but was limited by its consistent underestimation of hs, inability to detect within-buffer hs variability, and differing accuracy across land cover types. Nonetheless
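The hs computation described above reduces to averaging the surface height above the lake level over the cells of a shoreline buffer. The toy DEM, lake level, and mask below are hypothetical stand-ins for the GDEM2 data and 100 m buffers:

```python
import numpy as np

def mean_sheltering_height(surface_height, lake_level, buffer_mask):
    """Mean sheltering height hs: average height of terrain + canopy above
    the lake surface within the shoreline buffer. Cells below lake level
    contribute zero (they provide no sheltering)."""
    rise = np.clip(surface_height - lake_level, 0.0, None)
    return rise[buffer_mask].mean()

# toy 5x5 surface model (meters): lake at 300 m, one forested shore
dem = np.full((5, 5), 300.0)
dem[0, :] = 320.0            # 20 m of terrain + canopy on the north shore
mask = np.zeros((5, 5), dtype=bool)
mask[0, :] = True            # buffer covers the north shore...
mask[1, :] = True            # ...and an open row at lake level
hs = mean_sheltering_height(dem, 300.0, mask)
```

In the real workflow the buffer mask would be rasterized from each lake's shoreline polygon, and hs would then feed the hydrodynamic model's wind-sheltering parameterization in place of the surface-area proxy.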